Dataset columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14-day vacation that spans many famous tourist destinations, and also provide results from a user study to assess our results.
Another popular, and perhaps more relevant, area of research is life logging. Egocentric cameras such as SenseCam @cite_31 allow a user to capture continuous time-series images over long periods of time. Keyframe selection based on image quality metrics such as contrast, sharpness and noise @cite_36 allows for quick summarization of such time-lapse imagery. In our scenario, we have a much larger dataset spanning several days, and since we are dealing with vacation videos, we go a step further than image metrics and look at higher-level artistic features such as composition, symmetry and color vibrancy.
{ "cite_N": [ "@cite_36", "@cite_31" ], "mid": [ "1990833761", "2096964711" ], "abstract": [ "The SenseCam is a passive capture wearable camera and when worn continuously it takes an average of 1,900 images per day. It can be used to create a personal lifelog or visual recording of a wearer's life which can be helpful as an aid to human memory. For such a large amount of visual information to be useful, it needs to be structured into \"events\", which can be achieved through automatic segmentation. An important component of this structuring process is the selection of keyframes to represent individual events. This work investigates a variety of techniques for the selection of a single representative keyframe image from each event, in order to provide the user with an instant visual summary of that event. In our experiments we use a large test set of 2,232 lifelog events collected by 5 users over a time period of one month each. We propose a novel keyframe selection technique which seeks to select the image with the highest \"quality\" as the keyframe. The inclusion of \"quality\" approaches in keyframe selection is demonstrated to be useful owing to the high variability in image visual quality within passively captured image collections.", "Passive capture lets people record their experiences without having to operate recording equipment, and without even having to give recording conscious thought. The advantages are increased capture, and improved participation in the event itself. However, passive capture also presents many new challenges. One key challenge is how to deal with the increased volume of media for retrieval, browsing, and organizing. This paper describes the SenseCam device, which combines a camera with a number of sensors in a pendant worn around the neck. Data from SenseCam is uploaded into a MyLifeBits repository, where a number of features, but especially correlation and relationships, are used to manage the data." ] }
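The quality-metric keyframe selection described in the related-work paragraph above can be sketched as follows. This is a minimal illustration, not the cited method: the choice of metrics (RMS contrast, mean gradient magnitude as a sharpness proxy) and the z-score combination are illustrative assumptions.

```python
import numpy as np

def contrast(frame):
    """RMS contrast: standard deviation of grayscale intensities."""
    return float(frame.std())

def sharpness(frame):
    """Mean gradient magnitude as a simple sharpness proxy (Laplacian or
    Sobel variants are common alternatives)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.hypot(gx, gy).mean())

def select_keyframe(frames):
    """Index of the frame with the highest combined quality score.
    Scores are z-normalized per metric so neither metric dominates."""
    scores = np.array([[contrast(f), sharpness(f)] for f in frames])
    z = (scores - scores.mean(axis=0)) / (scores.std(axis=0) + 1e-9)
    return int(z.sum(axis=1).argmax())

# Toy "event": a flat low-quality frame, a smooth ramp, and a high-contrast
# checkerboard that should win on both metrics.
flat = np.full((32, 32), 128.0)
ramp = np.tile(np.linspace(0, 255, 32), (32, 1))
checker = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
print(select_keyframe([flat, ramp, checker]))  # -> 2
```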
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14-day vacation that spans many famous tourist destinations, and also provide results from a user study to assess our results.
The other approach to vacation highlights is to consider image aesthetics. These include high-level semantic features based on photography techniques @cite_4 , finding a good composition for a graphics image of a 3D object @cite_42 , and cropping and retargeting based on an evaluation of the composition of the image, such as the rule of thirds, diagonal dominance and visual balance @cite_14 . We took inspiration from such approaches and developed novel algorithms to detect composition, symmetry and color vibrancy in egocentric videos.
{ "cite_N": [ "@cite_14", "@cite_42", "@cite_4" ], "mid": [ "2103598646", "2131107264", "2104915826" ], "abstract": [ "Aesthetic images evoke an emotional response that transcends mere visual appreciation. In this work we develop a novel computational means for evaluating the composition aesthetics of a given image based on measuring several well-grounded composition guidelines. A compound operator of crop-and-retarget is employed to change the relative position of salient regions in the image and thus to modify the composition aesthetics of the image. We propose an optimization method for automatically producing a maximally-aesthetic version of the input image. We validate the performance of the method and show its effectiveness in a variety of experiments.", "Altering the viewing parameters of a 3D object results in computer graphics images of varying quality. One aspect of image quality is the composition of the image. While the esthetic properties of an image are subjective, some heuristics used by artists to create images can be approximated quantitatively. We present an algorithm based on heuristic compositional rules for finding the format, viewpoint, and layout for an image of a 3D object. Our system computes viewing parameters automatically or allows a user to explicitly manipulate them.", "Traditionally, distinguishing between high quality professional photos and low quality amateurish photos is a human task. To automatically assess the quality of a photo that is consistent with humans perception is a challenging topic in computer vision. Various differences exist between photos taken by professionals and amateurs because of the use of photography techniques. Previous methods mainly use features extracted from the entire image. In this paper, based on professional photography techniques, we first extract the subject region from a photo, and then formulate a number of high-level semantic features based on this subject and background division. 
We test our features on a large and diverse photo database, and compare our method with the state of the art. Our method performs significantly better, with a classification rate of 93% versus 72% for the best existing method. In addition, we conduct the first study on high-level video quality assessment. Our system achieves a precision of over 95% at a reasonable recall rate for both photo and video assessments. We also show excellent application results in web image search re-ranking." ] }
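One of the composition guidelines named above, the rule of thirds, can be quantified with a short sketch. The scoring function below (distance from a saliency centroid to the nearest "power point", normalized by the image diagonal) is an illustrative assumption, not the scoring used in the cited papers.

```python
import numpy as np

def rule_of_thirds_score(saliency):
    """Score in (0, 1]: how close the saliency centroid lies to the nearest
    intersection of the third-lines. 1.0 means exactly on a power point."""
    h, w = saliency.shape
    ys, xs = np.indices((h, w))
    total = saliency.sum()
    cy = (ys * saliency).sum() / total
    cx = (xs * saliency).sum() / total
    # The four intersections of the third-lines ("power points").
    points = [(h * r, w * c) for r in (1/3, 2/3) for c in (1/3, 2/3)]
    d = min(np.hypot(cy - py, cx - px) for py, px in points)
    return 1.0 - d / np.hypot(h, w)  # normalize by the image diagonal

# A subject on a power point scores higher than a dead-centered one.
on_thirds = np.zeros((90, 120))
on_thirds[30, 40] = 1.0       # exactly on the upper-left power point
centered = np.zeros((90, 120))
centered[45, 60] = 1.0        # dead center
print(rule_of_thirds_score(on_thirds) > rule_of_thirds_score(centered))  # -> True
```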
1601.04149
2283602880
In this paper, we design a Deep Dual-Domain ( @math ) based fast restoration model to remove artifacts of JPEG compressed images. It leverages the large learning capacity of deep networks, as well as the problem-specific expertise that was hardly incorporated in the past design of deep architectures. For the latter, we take into consideration both the prior knowledge of the JPEG compression scheme, and the successful practice of the sparsity-based dual-domain approach. We further design the One-Step Sparse Inference (1-SI) module, as an efficient and light-weighted feed-forward approximation of sparse coding. Extensive experiments verify the superiority of the proposed @math model over several state-of-the-art methods. Specifically, our best model is capable of outperforming the latest deep model for around 1 dB in PSNR, and is 30 times faster.
Our work is inspired by the prior wisdom in @cite_14 . Most previous works restored compressed images solely in either the pixel domain @cite_25 or the DCT domain @cite_23 . However, an isolated quantization error in a single DCT coefficient propagates to all pixels of the same block. An aggressively quantized DCT coefficient can further produce structured errors in the pixel domain that correlate with the latent signal. On the other hand, the compression process sets most high-frequency coefficients to zero, making it impossible to recover details from the DCT domain alone. In view of their complementary characteristics, the dual-domain model was proposed in @cite_14 . While the spatial redundancies in the pixel domain were exploited by a learned dictionary @cite_25 , the residual redundancies in the DCT domain were also utilized to directly restore DCT coefficients. In this way, quantization noise was suppressed without propagating errors. The final objective (see Section 3.1) is a combination of DCT- and pixel-domain sparse representations, which cross-validate each other.
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_23" ], "mid": [ "1946766895", "2160547390", "1492380776" ], "abstract": [ "Arguably the most common cause of image degradation is compression. This papers presents a novel approach to restoring JPEG-compressed images. The main innovation is in the approach of exploiting residual redundancies of JPEG code streams and sparsity properties of latent images. The restoration is a sparse coding process carried out jointy in the DCT and. pixel domains. The prowess of the proposed approach is directly restoring DCT coefficients of the latent image to prevent the spreading of quantization errors into the pixel domain, and at the same time using on-line machine-learnt local spatial features to regulate the solution of the underlying inverse problem. Experimental results are encouraging and show the promise of the new approach in significantly improving the quality of DCT-coded images.", "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. 
Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data", "Foreword. Acknowledgments. Trademarks. Introduction. Image Concepts and Vocabulary. Aspects of the Human Visual Systems. The Discrete Cosine Transform (DCT). Image Compression Systems. JPEG Modes of Operation. JPEG Syntax and Data Organization. Entropy Coding Concepts. JPEG Binary Arithmetic Coding. JPEG Coding Models. JPEG Huffman Entropy Coding. Arithmetic Coding Statistical. More on Arithmetic Coding. Probability Estimation. Compression Performance. JPEG Enhancements. JPEG Applications and Vendors. Overview of CCITT, ISO, and IEC. History of JPEG. Other Image Compression Standards. Possible Future JPEG Directions. Appendix A. Appendix B. References. Index." ] }
1601.04293
2294856570
Action recognition in still images has seen major improvement in recent years due to advances in human pose estimation, object recognition and stronger feature representations. However, there are still many cases in which performance remains far from that of humans. In this paper, we approach the problem by explicitly learning, and then integrating, three components of transitive actions: (1) the human body part relevant to the action; (2) the object being acted upon; and (3) the specific form of interaction between the person and the object. The process uses class-specific features and relations not used in the past for action recognition, and inherently uses two cycles in the process, unlike most standard approaches. We focus on face-related actions (FRA), a subset of actions that includes several currently challenging categories. We present an average relative improvement of 52% over the state of the art. We also make a new benchmark publicly available.
Others attempt to find relevant image regions in a semi-supervised manner: @cite_12 find candidate regions for action-objects and optimize a cost function which seeks agreement between the appearance of the action objects for each class as well as their location relative to the person. In @cite_14 the objectness @cite_25 measure is applied to detect many candidate regions in each image, after which multiple-instance-learning is utilized to give more weight to the informative ones. Their method does not explicitly find regions containing action objects, but any region which is informative with respect to the target action. In @cite_2 a random forest is trained by choosing the most discriminative rectangular image region (from a large set of randomly generated candidates) at each split, where the images are aligned so the face is in a known location. This has the advantage of spatially interpretable results.
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "69594547", "2128715914", "2129947832", "1994213117" ], "abstract": [ "We propose a multi-cue based approach for recognizing human actions in still images, where relevant object regions are discovered and utilized in a weakly supervised manner. Our approach does not require any explicitly trained object detector or part attribute annotation. Instead, a multiple instance learning approach is used over sets of object hypotheses in order to represent objects relevant to the actions. We test our method on the extensive Stanford 40 Actions dataset [1] and achieve significant performance gain compared to the state-of-the-art. Our results show that using multiple object hypotheses within multiple instance learning is effective for human action recognition in still images and such an object representation is suitable for using in conjunction with other visual features.", "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. 
In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors.", "We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the human-object interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2], and a new human-object interaction data set.", "In this paper, we study the problem of fine-grained image categorization. The goal of our method is to explore fine image statistics and identify the discriminative image patches for recognition. We achieve this goal by combining two ideas, discriminative feature mining and randomization. Discriminative feature mining allows us to model the detailed information that distinguishes different classes of images, while randomization allows us to handle the huge feature space and prevents over-fitting. We propose a random forest with discriminative decision trees algorithm, where every tree node is a discriminative classifier that is trained by combining the information in this node as well as all upstream nodes. Our method is tested on both subordinate categorization and activity recognition datasets. 
Experimental results show that our method identifies semantically meaningful visual information and outperforms state-of-the-art algorithms on various datasets." ] }
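The multiple-instance idea described above (a bag of candidate regions per image, with weight concentrated on the informative ones rather than on explicitly detected action objects) can be illustrated with soft-max pooling over per-region scores. The scorer and temperature are illustrative assumptions, not the cited method.

```python
import numpy as np

def bag_score(region_scores, temperature=1.0):
    """Soft-max pooled bag score: informative regions (high scores) receive
    most of the weight, a smooth relaxation of the MIL 'at least one
    positive instance' assumption (a hard max is the limiting case)."""
    s = np.asarray(region_scores, dtype=float) / temperature
    w = np.exp(s - s.max())
    w /= w.sum()
    return float((w * np.asarray(region_scores)).sum()), w

# One highly informative candidate region among mostly background ones.
scores = [0.1, 0.2, 3.0, 0.1]
pooled, weights = bag_score(scores)
print(weights.argmax())             # -> 2: the informative region dominates
print(pooled > np.mean(scores))     # -> True: pooling is not washed out
```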
1601.04293
2294856570
Action recognition in still images has seen major improvement in recent years due to advances in human pose estimation, object recognition and stronger feature representations. However, there are still many cases in which performance remains far from that of humans. In this paper, we approach the problem by explicitly learning, and then integrating, three components of transitive actions: (1) the human body part relevant to the action; (2) the object being acted upon; and (3) the specific form of interaction between the person and the object. The process uses class-specific features and relations not used in the past for action recognition, and inherently uses two cycles in the process, unlike most standard approaches. We focus on face-related actions (FRA), a subset of actions that includes several currently challenging categories. We present an average relative improvement of 52% over the state of the art. We also make a new benchmark publicly available.
Some methods seek the action objects more explicitly: @cite_15 apply object detectors from Object Bank @cite_19 and use their output, among other cues, to classify the images. Recently, @cite_27 combined the outputs of stronger object detectors with an upper-body pose estimate in a neural-network setting and showed improved results, with the object detectors being the main source of the performance gain.
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_27" ], "mid": [ "2169177311", "2038765747", "2038052836" ], "abstract": [ "Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. Leveraging on the Object Bank representation, superior performances on high level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns.", "In this work, we propose to use attributes and parts for recognizing human actions in still images. We define action attributes as the verbs that describe the properties of human actions, while the parts of actions are objects and poselets that are closely related to the actions. We jointly model the attributes and parts by learning a set of sparse bases that are shown to carry much semantic meaning. Then, the attributes and parts of an action image can be reconstructed from sparse coefficients with respect to the learned bases. This dual sparsity provides theoretical guarantee of our bases learning and feature reconstruction approach. 
On the PASCAL action dataset and a new “Stanford 40 Actions” dataset, we show that our method extracts meaningful high-order interactions between attributes and parts in human actions while achieving state-of-the-art classification performance.", "This paper aims at one newly raising task in vision and multimedia research: recognizing human actions from still images. Its main challenges lie in the large variations in human poses and appearances, as well as the lack of temporal motion information. Addressing these problems, we propose to develop an expressive deep model to naturally integrate human layout and surrounding contexts for higher level action understanding from still images. In particular, a Deep Belief Net is trained to fuse information from different noisy sources such as body part detection and object detection. To bridge the semantic gap, we used manually labeled data to greatly improve the effectiveness and efficiency of the pre-training and fine-tuning stages of the DBN training. The resulting framework is shown to be robust to sometimes unreliable inputs (e.g., imprecise detections of human parts and objects), and outperforms the state-of-the-art approaches." ] }
1601.04293
2294856570
Action recognition in still images has seen major improvement in recent years due to advances in human pose estimation, object recognition and stronger feature representations. However, there are still many cases in which performance remains far from that of humans. In this paper, we approach the problem by explicitly learning, and then integrating, three components of transitive actions: (1) the human body part relevant to the action; (2) the object being acted upon; and (3) the specific form of interaction between the person and the object. The process uses class-specific features and relations not used in the past for action recognition, and inherently uses two cycles in the process, unlike most standard approaches. We focus on face-related actions (FRA), a subset of actions that includes several currently challenging categories. We present an average relative improvement of 52% over the state of the art. We also make a new benchmark publicly available.
In contrast to @cite_12 , we seek the location of a specific, relevant body part only. For detecting objects, we also use a supervised approach, but unlike @math @cite_27 , we represent the location of the action object using a shape mask, enabling the extraction of rich features between the person and the object. Furthermore, we use the fine pose of the human (i.e., facial landmarks) to predict where each action object can, or cannot, be, in contrast to @cite_23 , who use only relative location features between body parts and objects. We further explore features specific to the region of interaction and its form, which are arguably the most critical ones to consider.
{ "cite_N": [ "@cite_27", "@cite_23", "@cite_12" ], "mid": [ "2038052836", "2158234032", "2129947832" ], "abstract": [ "This paper aims at one newly raising task in vision and multimedia research: recognizing human actions from still images. Its main challenges lie in the large variations in human poses and appearances, as well as the lack of temporal motion information. Addressing these problems, we propose to develop an expressive deep model to naturally integrate human layout and surrounding contexts for higher level action understanding from still images. In particular, a Deep Belief Net is trained to fuse information from different noisy sources such as body part detection and object detection. To bridge the semantic gap, we used manually labeled data to greatly improve the effectiveness and efficiency of the pre-training and fine-tuning stages of the DBN training. The resulting framework is shown to be robust to sometimes unreliable inputs (e.g., imprecise detections of human parts and objects), and outperforms the state-of-the-art approaches.", "We investigate a discriminatively trained model of person-object interactions for recognizing common human actions in still images. We build on the locally order-less spatial pyramid bag-of-features model, which was shown to perform extremely well on a range of object, scene and human action recognition tasks. We introduce three principal contributions. First, we replace the standard quantized local HOG SIFT features with stronger discriminatively trained body part and object detectors. Second, we introduce new person-object interaction features based on spatial co-occurrences of individual body parts and objects. Third, we address the combinatorial problem of a large number of possible interaction pairs and propose a discriminative selection procedure using a linear support vector machine (SVM) with a sparsity inducing regularizer. 
Learning of action-specific body part and object interactions bypasses the difficult problem of estimating the complete human body pose configuration. Benefits of the proposed model are shown on human action recognition in consumer photographs, outperforming the strong bag-of-features baseline.", "We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the human-object interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2], and a new human-object interaction data set." ] }
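Relative-location features between a fine pose (facial landmarks) and an action-object shape mask, of the kind the paragraph above contrasts, might look like the sketch below. The landmark set, the mask-centroid summary, and the inter-ocular normalization are all illustrative assumptions.

```python
import numpy as np

def relative_location_features(landmarks, mask):
    """Offsets from each facial landmark to the action-object mask centroid,
    normalized by the inter-ocular distance so the features are scale-free."""
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()])
    scale = np.linalg.norm(np.subtract(landmarks["right_eye"],
                                       landmarks["left_eye"]))
    return {name: tuple((centroid - np.asarray(pt)) / scale)
            for name, pt in landmarks.items()}

# Hypothetical landmarks (row, col) and a mask for an object near the mouth.
landmarks = {"left_eye": (40.0, 30.0), "right_eye": (40.0, 70.0),
             "mouth": (70.0, 50.0)}
mask = np.zeros((120, 120))
mask[80:100, 45:55] = 1          # e.g. a cup held just below the mouth
feats = relative_location_features(landmarks, mask)
print(feats["mouth"])            # small offset: object sits just below the mouth
```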
1601.03622
2612704529
In this paper we study lower ramification numbers of power series tangent to the identity that are defined over fields of positive characteristics p. Let g be such a series, then g has a fixed poin ...
The relation between lower ramification numbers and arithmetic dynamics over ultrametric fields is one of the motivations for this study. However, ramification numbers have also been considered in other contexts. In the study of the possible sequences of ramification numbers, Keating @cite_18 used the relation between ramification numbers and abelian extensions of @math . Laubie and Saïne @cite_14 @cite_7 later improved these results by applying Wintenberger's theory of fields of norms @cite_5 . In @cite_6 the authors study Lubin's conjecture @cite_0 on the relation between wildly ramified power series and formal groups.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_6", "@cite_0", "@cite_5" ], "mid": [ "2072037658", "2008670261", "", "230449344", "", "1974291093" ], "abstract": [ "Let k be a field of characteristic p and let σ ∈ Autk k((t)) . For m ≥ 0 define im = vt(σpmt − t) − 1. We show that if i0 = 1 and i1 = 1 + bp with 0 < b < p − 1 then im = 1 + bp + bp2 + ctdot; + bpm for all m ≥ 1.", "Abstract Let k be a field of characteristic p and let γ ∈Aut k ( k (( t ))). For m ⩾0 define i m = v t ( γ p m t − t )−1. We show that if p ∤ i 0 is and i 1 p 2 − p +1) i 0 then there exists an integer b such that i m = i 0 + bp + bp 2 +…+ bp m for all m ⩾1.", "", "Let ( O ) k be the ring of integers of a finite extension k of the field ( Q ) p of p-adic numbers. The endomorphisms of a formal group law defined over ( O ) k provide nontrivial examples of commuting formal series with coefficients in ( O ) k . This article deals with the inverse problem formulated by Jonathan Lubin within the context of non-Archimedean dynamical systems. We present a large family of series, with coefficients in ( Z ) p , which satisfy Lubin's conjecture. These series are constructed with the help of Lubin–Tate formal group laws over ( Q ) p . We introduce the notion of minimally ramified series which turn out to be modulo p reductions of some series of this family. The commutant monoids of these minimally ramified series are determined by using the Fontaine–Wintenberger theory of the field of norms which allows an interpretation of them as automorphisms of ( Z ) p -extensions of local fields of characteristic zero. 
A particularly effective example illustrating the paper is given by a family of series generalizing Chebyshev polynomials", "", "We give a proof that every wildly ramified automorphism of a field of formal power series in one variable with coefficients in a perfect field of characteristic p arises from the field-of-norms construction of a totally ramified Zp-extension of a local field of characteristic 0 or p." ] }
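The lower ramification numbers i_m = v_t(g^(p^m)(t) − t) − 1 discussed in this record can be computed directly by truncated power-series composition over F_p. The example series g(t) = t + t^2 over F_2 is an illustrative choice (for it, the computed values follow the pattern i_m = 2^(2^m) − 1 within the truncation used here).

```python
def mul(a, b, N, p):
    """Product of truncated power series (coefficient lists), mod t^(N+1), mod p."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        if ai:
            for j, bj in enumerate(b[:N + 1 - i]):
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

def compose(f, g, N, p):
    """f(g(t)) mod t^(N+1), assuming g(0) = 0 so the truncation is exact."""
    out, power = [0] * (N + 1), [1] + [0] * N    # power holds g^k, starting at g^0
    for c in f:
        if c:
            out = [(o + c * q) % p for o, q in zip(out, power)]
        power = mul(power, g, N, p)
    return out

def ramification_numbers(g, p, N, count):
    """i_m = v_t(g^(p^m)(t) - t) - 1 for m = 0, ..., count-1."""
    nums, h = [], (list(g) + [0] * (N + 1))[:N + 1]   # h = g^(p^0) = g
    for _ in range(count):
        diff = [(c - (d == 1)) % p for d, c in enumerate(h)]   # h(t) - t
        nums.append(next(d for d, c in enumerate(diff) if c) - 1)
        base = h
        for _ in range(p - 1):                        # h <- p-fold iterate of h
            h = compose(h, base, N, p)
    return nums

# g(t) = t + t^2 over F_2, a power series tangent to the identity.
print(ramification_numbers([0, 1, 1], p=2, N=20, count=3))  # -> [1, 3, 15]
```

The truncation order N only needs to exceed the largest valuation probed, here v_t(g^4(t) − t) = 16.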
1601.03712
2233268919
Super-resolution is generally referred to as the task of recovering fine details from coarse information. Motivated by applications such as single-molecule imaging, radar imaging, etc., we consider parameter estimation of complex exponentials from their modulations with unknown waveforms, allowing for non-stationary blind super-resolution. This problem, however, is ill-posed since both the parameters associated with the complex exponentials and the modulating waveforms are unknown. To alleviate this, we assume that the unknown waveforms live in a common low-dimensional subspace. Using a lifting trick, we recast the blind super-resolution problem as a structured low-rank matrix recovery problem. Atomic norm minimization is then used to enforce the structured low-rankness, and is reformulated as a semidefinite program that is solvable in polynomial time. We show that, up to scaling ambiguities, exact recovery of both of the complex exponential parameters and the unknown waveforms is possible when the waveform subspace is random and the number of measurements is proportional to the number of degrees of freedom in the problem. Numerical simulations support our theoretical findings, showing that non-stationary blind super-resolution using atomic norm minimization is possible.
Our work is most closely related to the recent works @cite_22 @cite_17 . In @cite_17 , a biconvex problem for simultaneous sparse recovery and unknown-gain calibration is studied. In their work, a subspace model is employed for the unknown gains to make the problem well-posed. It is worth noting that they use @math minimization as the convex program, which differs from ours. A suboptimal sample complexity bound is then derived for sparse recovery and self-calibration. Inspired by @cite_17 , @cite_22 considers a super-resolution problem with a setup similar to @cite_27 , except that the point spread function is assumed unknown. By employing a subspace model for the point spread function, an atomic norm minimization program is formulated to simultaneously super-resolve the point sources and recover the unknown point spread function. The atomic norm minimization problem therein is recast as an SDP. The sample complexity bound derived there, however, is also suboptimal. As we explain in , our work further generalizes the model in @cite_22 to the non-stationary case, where the point spread functions can vary with the point sources.
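The lifting trick used in this line of work can be sketched numerically: with a known subspace B and unknown coefficients h, the bilinear measurements y_j = ⟨b_j, h⟩ x_j of a spectrally sparse signal x become linear in the rank-one matrix Z = h xᵀ. The dimensions and spike parameters below are made up for illustration, and the actual recovery step (atomic norm minimization via an SDP) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 4                          # number of samples, subspace dimension
freqs = [0.11, 0.38, 0.72]            # unknown spike locations in [0, 1)
amps = [1.0 + 0.5j, -0.7 + 0.2j, 0.3 - 1.1j]   # unknown complex amplitudes

B = rng.standard_normal((n, r))       # known random subspace; row j is b_j
h = rng.standard_normal(r)            # unknown waveform coefficients

t = np.arange(n)
x = sum(c * np.exp(2j * np.pi * f * t) for c, f in zip(amps, freqs))

# Bilinear (ill-posed) model: y_j = <b_j, h> * x_j with both h and x unknown.
y_bilinear = (B @ h) * x

# Lifting: Z = h x^T is rank one, and the same measurements are LINEAR in Z:
# y_j = b_j^T Z e_j, so the problem becomes structured low-rank recovery of Z.
Z = np.outer(h, x)
y_lifted = np.einsum('jr,rj->j', B, Z)

assert np.allclose(y_bilinear, y_lifted)
assert np.linalg.matrix_rank(Z) == 1
```

This only demonstrates that the lifted measurements are linear and that Z is rank one; recovering Z (and hence h and the frequencies, up to scaling) is what the semidefinite program in the paper does.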
{ "cite_N": [ "@cite_27", "@cite_22", "@cite_17" ], "mid": [ "2964325628", "589200591", "67860792" ], "abstract": [ "This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object—the high end of its spectrum—from coarse scale information only—from samples at the low end of the spectrum. Suppose we have many point sources at unknown locations in [0,1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up to a frequency cutoff fc. We show that one can super-resolve these point sources with infinite precision—i.e., recover the exact locations and amplitudes—by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/fc. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.", "Neural recordings, returns from radars and sonars, images in astronomy and single-molecule microscopy can be modeled as a linear superposition of a small number of scaled and delayed copies of a band-limited or diffraction-limited point spread function, which is either determined by nature or designed by the users; in other words, we observe the convolution between a point spread function and a sparse spike signal with unknown amplitudes and delays. 
While it is of great interest to accurately resolve the spike signal from as few samples as possible, the problem is severely ill-posed when the point spread function is not known a priori. This paper proposes a convex optimization framework to simultaneously estimate the point spread function as well as the spike signal, by mildly constraining the point spread function to lie in a known low-dimensional subspace. By applying the lifting trick, we obtain an underdetermined linear system of an ensemble of signals with joint spectral sparsity, to which atomic norm minimization is applied. Under mild randomness assumptions of the low-dimensional subspace as well as a separation condition of the spike signal, we prove the proposed algorithm, dubbed AtomicLift, is guaranteed to recover the spike signal up to a scaling factor as soon as the number of samples is large enough. The extension of AtomicLift to handle noisy measurements is also discussed. Numerical examples are provided to validate the effectiveness of the proposed approaches.", "The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. 
More specifically, we consider a linear system of equations where both and the diagonal matrix (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both and can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis." ] }
1601.03712
2233268919
Super-resolution is generally referred to as the task of recovering fine details from coarse information. Motivated by applications such as single-molecule imaging, radar imaging, etc., we consider parameter estimation of complex exponentials from their modulations with unknown waveforms, allowing for non-stationary blind super-resolution. This problem, however, is ill-posed since both the parameters associated with the complex exponentials and the modulating waveforms are unknown. To alleviate this, we assume that the unknown waveforms live in a common low-dimensional subspace. Using a lifting trick, we recast the blind super-resolution problem as a structured low-rank matrix recovery problem. Atomic norm minimization is then used to enforce the structured low-rankness, and is reformulated as a semidefinite program that is solvable in polynomial time. We show that, up to scaling ambiguities, exact recovery of both of the complex exponential parameters and the unknown waveforms is possible when the waveform subspace is random and the number of measurements is proportional to the number of degrees of freedom in the problem. Numerical simulations support our theoretical findings, showing that non-stationary blind super-resolution using atomic norm minimization is possible.
Lastly, we would like to mention that the signal model in our work has both low-rank and spectrally sparse structure, and is thus simultaneously structured. Consistent with @cite_38 , we achieve the information-theoretic limit on the number of measurements (up to a polylogarithmic factor) not through a combination of convex objectives but through a single convex objective, in this case atomic norm minimization.
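As a toy illustration of a simultaneously structured object: a rank-one matrix with sparse factors is both sparse and low rank, and its number of degrees of freedom (on the order of k for k-sparse factors) is far below the ambient dimension. The sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3                    # ambient size, nonzeros per factor

# A rank-one matrix with k-sparse factors is simultaneously sparse and low rank.
u = np.zeros(n); u[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
v = np.zeros(n); v[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
X = np.outer(u, v)

assert np.linalg.matrix_rank(X) == 1      # low-rank structure
assert np.count_nonzero(X) == k * k       # sparse: only k*k of n*n entries
# Degrees of freedom are O(k) (two k-sparse factors), far below the n*n
# ambient dimension; @cite_38 shows that minimizing a combination of the
# l1 and nuclear norms cannot reach this order, while a single tailored
# convex objective can.
```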
{ "cite_N": [ "@cite_38" ], "mid": [ "2110355775" ], "abstract": [ "Recovering structured models (e.g., sparse or group-sparse vectors, low-rank matrices) given a few linear observations have been well-studied recently. In various applications in signal processing and machine learning, the model of interest is structured in several ways, for example, a matrix that is simultaneously sparse and low rank. Often norms that promote the individual structures are known, and allow for recovery using an orderwise optimal number of measurements (e.g., @math norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, using multiobjective optimization with these norms can do no better, orderwise, than exploiting only one of the structures, thus revealing a fundamental limitation in sample complexity. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation. Further, specializing our results to the case of sparse and low-rank matrices, we show that a nonconvex formulation recovers the model from very few measurements (on the order of the degrees of freedom), whereas the convex problem combining the @math and nuclear norms requires many more measurements, illustrating a gap between the performance of the convex and nonconvex recovery problems. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give sample complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion." ] }
1601.03679
2952625673
In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model based on the assumption that events (e.g. ) can be described by multiple mid-level semantic concepts (e.g. "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept w.r.t. the event of interest and pick out the relevant concept classifiers, which are applied to all test videos to obtain multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each testing video by exploring a set of online available videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we have conducted extensive experiments on the latest TRECVID MEDTest 2014, MEDTest 2013 and CCV datasets. The experimental results confirm the superiority of the proposed approach.
Complex event detection on unconstrained web videos has attracted wide attention in the fields of multimedia and computer vision, and significant progress has been made in the past @cite_22 @cite_17 @cite_40 . A decent video event detection system usually consists of a good feature extraction module and a highly effective classification module (such as large-margin support vector machines and kernel methods). Various low-level features (static, audio, etc.) already achieve good performance under the bag-of-words representation. Further improvements are obtained by aggregating complementary features at the video level, such as feature encoding @cite_35 @cite_21 and pooling @cite_29 . It is observed that with enough labeled training data, superb performance can be obtained. However, when the number of positive training videos falls short, the detection performance drops dramatically. In this work, we focus on the more challenging zero-exemplar setting, where no labeled training videos for the event of interest are provided.
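The coding and pooling steps discussed above can be sketched as follows. The codebook and local descriptors here are random stand-ins (real systems learn the codebook, e.g. with k-means over SIFT-like features); the sketch shows hard vector quantization followed by average and max pooling:

```python
import numpy as np

rng = np.random.default_rng(0)
D, V, N = 16, 8, 200   # descriptor dim, vocabulary size, descriptors per video

codebook = rng.standard_normal((V, D))     # stand-in for k-means visual words
descriptors = rng.standard_normal((N, D))  # stand-in for local features

# Coding: hard vector quantization, i.e. one-hot assignment to nearest word.
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = np.eye(V)[d2.argmin(axis=1)]       # (N, V) one-hot codes

# Pooling: summarize all descriptor codes into one video-level vector.
avg_pool = codes.mean(axis=0)              # bag-of-words histogram
max_pool = codes.max(axis=0)               # word-presence indicator

assert np.isclose(avg_pool.sum(), 1.0)
```

Soft assignment, sparse coding, or Fisher vectors replace the coding step in the cited works, and the pooled vector is what the classifier (e.g. a linear SVM) consumes.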
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_29", "@cite_21", "@cite_40", "@cite_17" ], "mid": [ "2090042335", "2063438554", "195163846", "1606858007", "", "2123654294" ], "abstract": [ "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.", "Video event detection allows intelligent indexing of video content based on events. Traditional approaches extract features from video frames or shots, then quantize and pool the features to form a single vector representation for the entire video. Though simple and efficient, the final pooling step may lead to loss of temporally local information, which is important in indicating which part in a long video signifies presence of the event. 
In this work, we propose a novel instance-based video event detection approach. We represent each video as multiple 'instances', defined as video segments of different temporal intervals. The objective is to learn an instance-level event detection model based on only video-level labels. To solve this problem, we propose a large-margin formulation which treats the instance labels as hidden latent variables, and simultaneously infers the instance labels as well as the instance-level classification model. Our framework infers optimal solutions that assume positive videos have a large number of positive instances while negative videos have the fewest ones. Extensive experiments on large-scale video event datasets demonstrate significant performance gains. The proposed method is also useful in explaining the detection results by localizing the temporal segments in a video which is responsible for the positive detection.", "Real-world videos often contain dynamic backgrounds and evolving people activities, especially for those web videos generated by users in unconstrained scenarios. This paper proposes a new visual representation, namely scene aligned pooling, for the task of event recognition in complex videos. Based on the observation that a video clip is often composed with shots of different scenes, the key idea of scene aligned pooling is to decompose any video features into concurrent scene components, and to construct classification models adaptive to different scenes. The experiments on two large scale real-world datasets including the TRECVID Multimedia Event Detection 2011 and the Human Motion Recognition Databases (HMDB) show that our new visual representation can consistently improve various kinds of visual features such as different low-level color and texture features, or middle-level histogram of local descriptors such as SIFT, or space-time interest points, and high level semantic model features, by a significant margin. 
For example, we improve the state-of-the-art accuracy on the HMDB dataset by 20%.", "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.", "", "The problem of adaptively selecting pooling regions for the classification of complex video events is considered. Complex events are defined as events composed of several characteristic behaviors, whose temporal configuration can change from sequence to sequence. A dynamic pooling operator is defined so as to enable a unified solution to the problems of event specific video segmentation, temporal structure modeling, and event detection. 
Video is decomposed into segments, and the segments most informative for detecting a given event are identified, so as to dynamically determine the pooling operator most suited for each sequence. This dynamic pooling is implemented by treating the locations of characteristic segments as hidden information, which is inferred, on a sequence-by-sequence basis, via a large-margin classification rule with latent variables. Although the feasible set of segment selections is combinatorial, it is shown that a globally optimal solution to the inference problem can be obtained efficiently, through the solution of a series of linear programs. Besides the coarse-level location of segments, a finer model of video structure is implemented by jointly pooling features of segment-tuples. Experimental evaluation demonstrates that the resulting event detector has state-of-the-art performance on challenging video datasets." ] }
1601.03679
2952625673
In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model based on the assumption that events (e.g. ) can be described by multiple mid-level semantic concepts (e.g. "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept w.r.t. the event of interest and pick out the relevant concept classifiers, which are applied to all test videos to obtain multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each testing video by exploring a set of online available videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we have conducted extensive experiments on the latest TRECVID MEDTest 2014, MEDTest 2013 and CCV datasets. The experimental results confirm the superiority of the proposed approach.
Our work is inspired by the general zero-shot learning framework @cite_20 @cite_3 @cite_30 , which arises from practical considerations such as the tremendous cost of acquiring labeled data and the constant need to deal with dynamic and evolving real-world object categories. On the event detection side, recent works have begun to explore intermediate semantic concepts @cite_25 and have achieved limited success in the zero-exemplar setting @cite_5 . @cite_32 @cite_39 also considered selecting more informative concepts. However, none of these works consider discovering the optimal weights of the different concept classifiers for each testing video.
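A minimal sketch of the fixed-weight baseline described above: concept classifier scores are combined with weights derived from the semantic correlation between the event and each concept. All embeddings and scores below are random stand-ins, and the weighting scheme is illustrative; the paper's contribution is instead to learn per-video weights from web videos with text descriptions:

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, d = 5, 4, 16   # number of concepts, test videos, embedding dimension

# Hypothetical semantic embeddings of the event name and the concept names
# (e.g. from a word-embedding model); random stand-ins, not real data.
event_emb = rng.standard_normal(d)
concept_emb = rng.standard_normal((C, d))

# Scores of the C pre-trained concept classifiers on each of T test videos.
concept_scores = rng.random((T, C))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fixed weights from semantic correlation, shared across all test videos.
w = np.abs([cosine(event_emb, c) for c in concept_emb])
w = w / w.sum()

event_scores = concept_scores @ w    # one zero-shot detection score per video
ranking = np.argsort(-event_scores)  # videos ranked by predicted relevance
```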
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_3", "@cite_39", "@cite_5", "@cite_25", "@cite_20" ], "mid": [ "2026973491", "2147502347", "2150295085", "2127944900", "1982795953", "2407145437", "2134270519" ], "abstract": [ "", "Representing videos using vocabularies composed of concept detectors appears promising for event recognition. While many have recently shown the benefits of concept vocabularies for recognition, the important question what concepts to include in the vocabulary is ignored. In this paper, we study how to create an effective vocabulary for arbitrary-event recognition in web video. We consider four research questions related to the number, the type, the specificity and the quality of the detectors in concept vocabularies. A rigorous experimental protocol using a pool of 1,346 concept detectors trained on publicly available annotations, a dataset containing 13,274 web videos from the Multimedia Event Detection benchmark, 25 event groundtruth definitions, and a state-of-the-art event recognition pipeline allow us to analyze the performance of various concept vocabulary definitions. From the analysis we arrive at the recommendation that for effective event recognition the concept vocabulary should i) contain more than 200 concepts, ii) be diverse by covering object, action, scene, people, animal and attribute concepts,iii) include both general and specific concepts, and iv) increase the number of concepts rather than improve the quality of the individual detectors. We consider the recommendations for video event recognition using concept vocabularies the most important contribution of the paper, as they provide guidelines for future work.", "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. 
To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "We consider automated detection of events in video without the use of any visual training examples. A common approach is to represent videos as classification scores obtained from a vocabulary of pre-trained concept classifiers. Where others construct the vocabulary by training individual concept classifiers, we propose to train classifiers for combination of concepts composed by Boolean logic operators. We call these concept combinations composite concepts and contribute an algorithm that automatically discovers them from existing video-level concept annotations. We discover composite concepts by jointly optimizing the accuracy of concept classifiers and their effectiveness for detecting events. We demonstrate that by combining concepts into composite concepts, we can train more accurate classifiers for the concept vocabulary, which leads to improved zero-shot event detection. Moreover, we demonstrate that by using different logic operators, namely \"AND\", \"OR\", we discover different types of composite concepts, which are complementary for zero-shot event detection. We perform a search for 20 events in 41K web videos from two test sets of the challenging TRECVID Multimedia Event Detection 2013 corpus. 
The experiments demonstrate the superior performance of the discovered composite concepts, compared to present-day alternatives, for zero-shot event detection.", "Recent research in video retrieval has been successful at finding videos when the query consists of tens or hundreds of sample relevant videos for training supervised models. Instead, we investigate unsupervised zero-shot retrieval where no training videos are provided: a query consists only of a text statement. For retrieval, we use text extracted from images in the videos, text recognized in the speech of its audio track, as well as automatically detected semantically meaningful visual video concepts identified with widely varying confidence in the videos. In this work we introduce a new method for automatically identifying relevant concepts given a text query using the Markov Random Field (MRF) retrieval framework. We use source expansion to build rich textual representations of semantic video concepts from large external sources such as the web. We find that concept-based retrieval significantly outperforms text based approaches in recall. Using an evaluation derived from the TRECVID MED'11 track, we present early results that an approach using multi-modal fusion can compensate for inadequacies in each modality, resulting in substantial effectiveness gains. With relevance feedback, our approach provides additional improvements of over 50%.", "We focus on detecting complex events in unconstrained Internet videos. While most existing works rely on the abundance of labeled training data, we consider a more difficult zero-shot setting where no training data is supplied. We first pre-train a number of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept w.r.t. the event of interest. 
After further refinement to take prediction inaccuracy and discriminative power into account, we apply the discovered concept classifiers on all test videos and obtain multiple score vectors. These distinct score vectors are converted into pairwise comparison matrices and the nuclear norm rank aggregation framework is adopted to seek consensus. To address the challenging optimization formulation, we propose an efficient, highly scalable algorithm that is an order of magnitude faster than existing alternatives. Experiments on recent TRECVID datasets verify the superiority of the proposed approach.", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. 
Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes." ] }
1601.03533
2951464300
The Austrian eID system constitutes a main pillar within the Austrian e-Government strategy. The eID system ensures unique identification and secure authentication for citizens, protecting access to applications where sensitive and personal data is involved. In particular, the Austrian eID system supports three main use cases: identification and authentication of Austrian citizens, electronic representation, and foreign citizen authentication at Austrian public sector applications. For supporting all these use cases, several components -- either locally deployed in the applications' domain or centrally deployed -- need to communicate with each other. While local deployments have some advantages in terms of scalability, a central deployment of all involved components would still be advantageous, e.g. due to lower maintenance effort. However, a central deployment can easily lead to load bottlenecks because theoretically the whole Austrian population as well as -- for foreign citizens -- the whole EU population could use the provided services. To mitigate this scalability issue, in this paper we propose migrating the main components of the ecosystem into a public cloud. However, moving trusted services into a public cloud brings up new obstacles, particularly with respect to privacy. To address this privacy issue, we propose an approach for moving the complete Austrian eID ecosystem into a public cloud in a privacy-preserving manner by applying selected cryptographic technologies (in particular, proxy re-encryption and redactable signatures). Applying this approach, no sensitive data will be disclosed to a public cloud provider, while all three main eID system use cases remain supported. We finally discuss our approach based on selected criteria.
To bypass this issue, a handful of privacy-preserving cloud identity management approaches have emerged in recent years. For instance, @cite_2 proposed the integration of proxy re-encryption into the OpenID protocol. In follow-up work, they proposed a more generic privacy-preserving cloud identity management model, which they call @cite_52 . This model also applies proxy re-encryption but relies on SAML instead of OpenID as the transport protocol. A somewhat related architectural approach -- but one particularly focusing on eIDs -- has been introduced in @cite_34 . A completely different approach, based on anonymous credentials for privacy preservation, has been proposed in @cite_46 .
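To make the role of proxy re-encryption concrete, here is a toy sketch in the style of the BBS98 ElGamal-based scheme: the proxy transforms a ciphertext encrypted under Alice's key into one decryptable by Bob, without ever seeing the plaintext. The group parameters are deliberately tiny and insecure; this illustrates the mechanism only and is not the scheme deployed in the cited systems:

```python
import secrets

# Toy, insecure parameters: p = 23 with a subgroup of prime order q = 11
# generated by g = 2. Real deployments use large, standardized groups.
p, q, g = 23, 11, 2

def keygen():
    sk = 1 + secrets.randbelow(q - 1)              # sk in [1, q-1]
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    r = 1 + secrets.randbelow(q - 1)
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m * g^r, g^{a r})

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q             # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                    # g^{a r} -> g^{b r}

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)                # recover g^r
    return c1 * pow(gr, -1, p) % p

a, pk_a = keygen()                                 # Alice
b, pk_b = keygen()                                 # Bob
ct = encrypt(pk_a, 9)                              # encrypted under Alice's key
ct_b = reencrypt(rekey(a, b), ct)                  # proxy applies rk only
assert decrypt(b, ct_b) == 9                       # Bob can now decrypt
```

The proxy holds only rk = b/a mod q, which by itself reveals neither secret key; this is the property that lets an untrusted cloud component route encrypted identity data between parties.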
{ "cite_N": [ "@cite_46", "@cite_34", "@cite_52", "@cite_2" ], "mid": [ "1470456220", "1993504803", "2081250972", "2082024869" ], "abstract": [ "Electronic Identity (eID) cards are rapidly emerging in Europe and are gaining user acceptance. As an authentication token, an eID card is a gateway to personal information and as such it is subject to privacy risks. Several European countries have taken extra care to protect their citizens against these risks. A notable example is the German eID card, which we take as a case study in this paper. We first discuss important privacy and security threats that remain in the German eID system and elaborate on the advantages of using privacy attribute-based credentials (Privacy-ABCs) to address these threats. Then we study two approaches for integrating Privacy-ABCs with eID systems. In the first approach, we show that by introducing a new entity in the current German eID system, the citizen can get many of the Privacy-ABC advantages without further modifications. Then we concentrate on putting Privacy-ABCs directly on smart cards, and we present new results on performance, which demonstrate that it is now feasible for smart cards to support the computations these mechanisms require.", "Unique identification and secure authentication of users are essential processes in numerous security-critical areas such as e-Government, e-Banking, or e-Business. Therefore, many countries (particularly in Europe) have implemented national eID solutions within the past years. Such implementations are typically based on smart cards holding some certified collection of citizen attributes and hence follow a client-side and user-centric approach. However, most of the implementations only support all-or-nothing disclosure of citizen attributes and thus do not allow privacy-friendly selective disclosure of attributes. 
Consequently, the complete identity of the citizen (all attributes) is always revealed to identity providers and/or service providers, respectively. In this paper, we propose a novel user-centric identification and authentication model for eIDs, which supports selective attribute disclosure but only requires minimal changes in the existing eID architecture. In addition, our approach allows service providers to keep their infrastructure nearly untouched. The latter is often an inhibitor for the use of privacy-preserving cryptography, like anonymous credentials, in such architectures. Furthermore, our model can easily be deployed in the public cloud, as we do not require full trust in identity providers. This fully supports the Identity-as-a-Service paradigm while at the same time preserving citizens' privacy. We demonstrate the applicability of our model by adapting the Austrian eID system to our approach.", "Identity management is an almost indispensable component of today's organizations and companies, as it plays a key role in authentication and access control; however, at the same time, it is widely recognized as a costly and time-consuming task. The advent of cloud computing technologies, together with the promise of flexible, cheap and efficient provision of services, has provided the opportunity to externalize such a common process, shaping what has been called Identity Management as a Service (IDaaS). Nevertheless, as in the case of other cloud-based services, IDaaS brings with it great concerns regarding security and privacy, such as the loss of control over the outsourced data. In this paper, we analyze these concerns and propose BlindIdM, a model for privacy-preserving IDaaS with a focus on data privacy protection. In particular, we describe how a SAML-based system can be augmented to employ proxy re-encryption techniques for achieving data confidentiality with respect to the cloud provider, while preserving the ability to supply the identity service. 
This is an innovative contribution to both the privacy and identity management landscapes.", "The inclusion of identity management in the cloud computing landscape represents a new business opportunity for providing what has been called Identity Management as a Service (IDaaS). Nevertheless, IDaaS introduces the same kind of problems regarding privacy and data confidentiality as other cloud services; on top of that, the nature of the outsourced information (users' identity) is critical. Traditionally, cloud services (including IDaaS) rely only on SLAs and security policies to protect the data, but these measures have proven insufficient in some cases; recent research has employed advanced cryptographic mechanisms as an additional safeguard. Apart from this, there are several identity management schemes that could be used for realizing IDaaS systems in the cloud; among them, OpenID has gained growing popularity because of its open and decentralized nature, which makes it a prime candidate for this task. In this paper we demonstrate how a privacy-preserving IDaaS system can be implemented using OpenID Attribute Exchange and a proxy re-encryption scheme. Our prototype enables an identity provider to serve attributes to other parties without being able to read their values. This proposal constitutes a novel contribution to both privacy and identity management fields. Finally, we discuss the performance and economic viability of our proposal." ] }
1601.03533
2951464300
The Austrian eID system constitutes a main pillar within the Austrian e-Government strategy. The eID system ensures unique identification and secure authentication for citizens, protecting access to applications where sensitive and personal data is involved. In particular, the Austrian eID system supports three main use cases: identification and authentication of Austrian citizens, electronic representation, and foreign citizen authentication at Austrian public sector applications. For supporting all these use cases, several components -- either locally deployed in the applications' domain or centrally deployed -- need to communicate with each other. While local deployments have some advantages in terms of scalability, a central deployment of all involved components would still be advantageous, e.g. due to lower maintenance effort. However, a central deployment can easily lead to load bottlenecks, because theoretically the whole Austrian population as well as -- for foreign citizens -- the whole EU population could use the provided services. To mitigate this scalability issue, in this paper we propose migrating the main components of the ecosystem into a public cloud. However, moving trusted services into a public cloud brings up new obstacles, particularly with respect to privacy. To address the privacy issue, we propose an approach for moving the complete Austrian eID ecosystem into a public cloud in a privacy-preserving manner by applying selected cryptographic technologies (in particular, proxy re-encryption and redactable signatures). With this approach, no sensitive data is disclosed to the public cloud provider, while all three main eID system use cases remain supported. We finally discuss our approach based on selected criteria.
Prior to this paper, in @cite_12 we illustrated privacy-preserving design strategies for migrating the basic Austrian eID architecture into the public cloud. The three design strategies proposed there are based on proxy re-encryption, anonymous credentials, and fully homomorphic encryption, respectively; we concluded that proxy re-encryption is the most practical approach. However, @cite_12 only investigates the basic use case of the Austrian eID system, namely identification and authentication of Austrian citizens (see Section for details). In this paper we follow a similar approach using proxy re-encryption, but now illustrate the migration of the complete Austrian identity infrastructure into the public cloud. Thereby, we include the two other main use cases (identification and authentication in representation, and foreign citizen authentication), which have in part been discussed previously in @cite_19 @cite_29 . We want to emphasize, however, that this is not a simple combination of these existing results: we aim to demonstrate that privacy-preserving identity management in public clouds using proxy re-encryption is also possible for complex systems such as the complete Austrian eID ecosystem, which has broad applicability.
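The privacy mechanism these eID migration approaches rely on, proxy re-encryption, lets a semi-trusted proxy transform a ciphertext for key A into one for key B without ever seeing the plaintext. A minimal BBS98-style sketch over a toy group (the group parameters, function names, and scheme choice are illustrative assumptions, not the construction of the cited systems):

```python
import random

# Toy safe-prime group: P = 2*Q + 1 with Q prime; G = 4 generates the
# order-Q subgroup. Real deployments use 2048-bit+ groups or elliptic curves.
P, Q, G = 2879, 1439, 4

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)                 # (secret key a, public key g^a)

def encrypt(pk, m):                          # m is an integer in 1..P-1
    r = random.randrange(1, Q)
    return (m * pow(G, r, P)) % P, pow(pk, r, P)   # (m*g^r, g^(a*r))

def rekey(sk_a, sk_b):                       # proxy key: b * a^(-1) mod Q
    return (sk_b * pow(sk_a, -1, Q)) % Q

def reencrypt(rk, ct):                       # (g^(a*r))^(b/a) = g^(b*r)
    c1, c2 = ct
    return c1, pow(c2, rk, P)

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, Q), P)         # strip the key, recover g^r
    return (c1 * pow(g_r, -1, P)) % P

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
ct_for_a = encrypt(a_pk, 1234)
ct_for_b = reencrypt(rekey(a_sk, b_sk), ct_for_a)
assert decrypt(b_sk, ct_for_b) == 1234       # B reads a ciphertext made for A
```

The proxy holding only the re-encryption key learns nothing about the plaintext, which is exactly the property that lets identity data be hosted at an untrusted cloud provider.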
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_12" ], "mid": [ "1562303395", "2044485718", "" ], "abstract": [ "The STORK framework — enabling secure eID federation across European countries — will be the dominant identification and authentication framework across Europe in the future. While still in its start up phase, adoption of the STORK framework is continuously increasing and high loads can be expected, since, theoretically, the entire population of the European Union will be able to run authentications through this framework. This can easily lead to scalability issues, especially for the proxy-based (PEPS) approach in STORK, which relies on a central gateway being responsible for managing and handling citizen authentications. In order to mitigate the associated scalability issues, the PEPS approach could be moved into the public cloud. However, a move of a trusted service into the public cloud brings up new obstacles, especially with respect to citizens' privacy. In this paper we propose an approach how this move could be successfully realized by still preserving citizens' privacy and keeping existing national eID infrastructures untouched. We present the approach in detail and evaluate its capability with respect to citizens' privacy protection as well as its practicability. We conclude, that the proposed approach is a viable way of realizing an efficient and scalable Pan-European citizen identification and authentication framework.", "The current Austrian electronic mandate system, which allows citizens to act as representatives for other citizens or companies in e-Government services, relies on a centralized deployment approach. Thereby, a trusted central service generates and issues electronic mandates on the fly for service providers. The usage of this service is continuously increasing and high loads can be expected in the near future. In order to mitigate the associated scalability issues, this service could be moved into the public cloud. 
However, a move of a trusted service into the public cloud brings up new obstacles, especially with respect to citizens' privacy. In this paper we propose two approaches for how this move could be successfully realized by preserving citizens' privacy while still being compliant with national law. The main objectives we focus on are minimal data disclosure to untrusted entities while still keeping the existing infrastructure nearly untouched. We present both approaches in detail and evaluate their capabilities with respect to citizens' privacy protection as well as their practicability and conclude that both approaches are entirely practical.", "" ] }
1601.03623
2233281717
In this paper we evaluate the performance of Unified Parallel C (which implements the partitioned global address space programming model) using a numerical method that is widely used in fluid dynamics. In order to evaluate the incremental approach to parallelization (which is possible with UPC) and its performance characteristics, we implement different levels of optimization of the UPC code and compare it with an MPI parallelization on four different clusters of the Austrian HPC infrastructure (LEO3, LEO3E, VSC2, VSC3) and on an Intel Xeon Phi. We find that UPC is significantly easier to develop in than MPI and that the performance achieved is comparable to MPI in most situations. The obtained results show worse performance (on VSC2), competitive performance (on LEO3, LEO3E and VSC3), and superior performance (on the Intel Xeon Phi) compared with MPI.
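The abstract does not name the numerical method it parallelizes, so as a hedged stand-in, the sketch below shows the kind of serial structured-grid stencil kernel (here a 2D Jacobi relaxation on a toy grid) that UPC/MPI comparisons of this sort typically distribute across processes:

```python
# Serial 2D Jacobi relaxation on a structured grid: each interior point is
# replaced by the average of its four neighbours. Grid size and boundary
# values are arbitrary; only the access pattern matters for parallelization.
def jacobi_step(u):
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return new

n = 16
u = [[1.0 if i == 0 else 0.0 for _ in range(n)] for i in range(n)]  # hot top edge
for _ in range(200):
    u = jacobi_step(u)

# Heat has diffused into the interior but stays between the boundary extremes.
assert all(0.0 < u[i][j] < 1.0 for i in range(1, n - 1) for j in range(1, n - 1))
```

Under MPI, each rank would own a slab of `u` and exchange halo rows explicitly at every step; under UPC the grid lives in the partitioned global address space and remote rows can be read directly, which is where the incremental-parallelization convenience comes from.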
A significant amount of research has been conducted on the performance and the usability (for the latter see, e.g., @cite_9 ) of PGAS languages. In particular, the NAS benchmark is a popular collection of test problems for exploring the PGAS paradigm. For example, in @cite_7 the NAS benchmark is used to investigate the UPC language, while @cite_15 and @cite_18 employ the NAS benchmark to compare UPC with an MPI as well as an OpenMP parallelization (the latter on vendor-supported Cray XT5 platforms). Another kind of benchmark, measuring fine- and coarse-grained shared memory accesses of UPC, was developed in @cite_16 .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_15", "@cite_16" ], "mid": [ "121416212", "2056786111", "2148590584", "2112261349", "2148207191" ], "abstract": [ "Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper, we examine the effect of the choice of programming model upon performance and overall memory usage on the Cray XT5. We use detailed time breakdowns to measure the contributions to the total runtime from computation, communication, and OpenMP regions of the applications, gaining insights into the reasons behind any performance differences observed. We also examine the performance differences between two different Cray XT5 machines, which have quad-core and hex-core processors.", "We describe a performance study of the UPC PGAS model applied to three application benchmarks from the NAS Parallel Benchmark suite. The work focuses on the performance implications of programming choices made for data affinity and data access. We compare runs of multiple versions of each benchmark encoded in UPC on both shared-memory and cluster-based parallel systems. This study points out the potential of UPC and some issues in achieving high performance when using the language.", "Summary form only given. Parallel programming paradigms, over the past decade, have focused on how to harness the computational power of contemporary parallel machines. Ease of use and code development productivity, has been a secondary goal. Recently, however, there has been a growing interest in understanding the code development productivity issues and their implications for the overall time-to-solution. Unified Parallel C (UPC) is a recently developed language which has been gaining rising attention. UPC holds the promise of leveraging the ease of use of the shared memory model and the performance benefit of locality exploitation. 
The performance potential for UPC has been extensively studied in recent research efforts. The aim of this study, however, is to examine the impact of UPC on programmer productivity. We propose several productivity metrics and consider a wide array of high performance applications. Further, we compare UPC to the most widely used parallel programming paradigm, MPI. The results show that UPC compares favorably with MPI in programmer productivity.", "The current trend to multicore architectures underscores the need for parallelism. While new languages and alternatives for supporting these systems more efficiently are proposed, MPI faces this new challenge. Therefore, up-to-date performance evaluations of current options for programming multicore systems are needed. This paper evaluates MPI performance against Unified Parallel C (UPC) and OpenMP on multicore architectures. From the analysis of the results, it can be concluded that MPI is generally the best choice on multicore systems with both shared and hybrid shared/distributed memory, as it takes the highest advantage of data locality, the key factor for performance in these systems. Regarding UPC, although it exploits efficiently the data layout in memory, it suffers from remote shared memory accesses, whereas OpenMP usually lacks efficient data locality support and is restricted to shared memory systems, which limits its scalability.", "UPC is a parallel programming language based on the concept of partitioned shared memory. There are now several UPC compilers available and several different parallel architectures that support one or more of these compilers. This paper is the first to compare the performance of most of the currently available UPC implementations on several commonly used parallel platforms. These compilers are the GASNet UPC compiler from UC Berkeley, the v1.1 MuPC compiler from Michigan Tech, the Hewlett-Packard v2.2 compiler, and the Intrepid UPC compiler. 
The parallel architectures used in this study are a 16-node x86 Myrinet cluster, a 31-processor AlphaServer SC-40, and a 48-processor Cray T3E. A STREAM-like microbenchmark was developed to measure fine- and coarse-grained shared-memory accesses. Also measured are five NPB kernels using existing UPC implementations. These measurements and associated observations provide a snapshot of the relative performance of current UPC platforms." ] }
1601.03623
2233281717
In this paper we evaluate the performance of Unified Parallel C (which implements the partitioned global address space programming model) using a numerical method that is widely used in fluid dynamics. In order to evaluate the incremental approach to parallelization (which is possible with UPC) and its performance characteristics, we implement different levels of optimization of the UPC code and compare it with an MPI parallelization on four different clusters of the Austrian HPC infrastructure (LEO3, LEO3E, VSC2, VSC3) and on an Intel Xeon Phi. We find that UPC is significantly easier to develop in than MPI and that the performance achieved is comparable to MPI in most situations. The obtained results show worse performance (on VSC2), competitive performance (on LEO3, LEO3E and VSC3), and superior performance (on the Intel Xeon Phi) compared with MPI.
Even though benchmarks give an interesting idea of how PGAS languages behave for simple codes, additional problems can and do occur in more involved programs. This has led to investigations of the PGAS paradigm in different fields. In @cite_2 , the mini-app CloverLeaf @cite_14 , which implements a Lagrangian-Eulerian scheme to solve the two-dimensional Euler equations of gas dynamics, is implemented using two PGAS approaches. That paper extensively optimizes the implementation using OpenSHMEM as well as Coarray Fortran, managing to compete with MPI for up to @math cores on their high-end systems. In contrast, our work considers HPC systems for which no vendor support for UPC or Coarray Fortran is provided (while in the aforementioned paper, a Cray XC30 and an SGI ICE-X system are used). In addition, @cite_2 is not concerned with the usability of UPC for medium-sized parallelism (which is a main focus of our work).
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2127013169", "2168264078" ], "abstract": [ "In this work we directly evaluate five candidate programming models for future exascale applications (MPI, MPI+OpenMP, MPI+OpenACC, MPI+CUDA and CAF) using a recently developed Lagrangian-Eulerian explicit hydrodynamics mini-application. The aim of this work is to better inform the exacsale planning at large HPC centres such as AWE. Such organisations invest significant resources maintaining and updating existing scientific codebases, many of which were not designed to run at the scale required to reach exascale levels of computation on future system architectures. We present our results and experiences of scaling these different approaches to high node counts on existing large-scale Cray systems (Titan and HECToR). We also examine the effect that improving the mapping between process layout and the underlying machine interconnect topology can have on performance and scalability, as well as highlighting several communication-focused optimisations.", "In this work we directly evaluate two PGAS programming models, CAF and OpenSHMEM, as candidate technologies for improving the performance and scalability of scientific applications on future exascale HPC platforms. PGAS approaches are considered by many to represent a promising research direction with the potential to solve some of the existing problems preventing codebases from scaling to exascale levels of performance. The aim of this work is to better inform the exacsale planning at large HPC centres such as AWE. Such organisations invest significant resources maintaining and updating existing scientific codebases, many of which were not designed to run at the scales required to reach exascale levels of computational performance on future system architectures. We document our approach for implementing a recently developed Lagrangian-Eulerian explicit hydrodynamics mini-application in each of these PGAS languages. 
Furthermore, we also present our results and experiences from scaling these different approaches to high node counts on two state-of-the-art, large scale system architectures from Cray (XC30) and SGI (ICE-X), and compare their utility against an equivalent existing MPI implementation." ] }
1601.03623
2233281717
In this paper we evaluate the performance of Unified Parallel C (which implements the partitioned global address space programming model) using a numerical method that is widely used in fluid dynamics. In order to evaluate the incremental approach to parallelization (which is possible with UPC) and its performance characteristics, we implement different levels of optimization of the UPC code and compare it with an MPI parallelization on four different clusters of the Austrian HPC infrastructure (LEO3, LEO3E, VSC2, VSC3) and on an Intel Xeon Phi. We find that UPC is significantly easier to develop in than MPI and that the performance achieved is comparable to MPI in most situations. The obtained results show worse performance (on VSC2), competitive performance (on LEO3, LEO3E and VSC3), and superior performance (on the Intel Xeon Phi) compared with MPI.
Another use of the PGAS paradigm in computational fluid dynamics (CFD) is described in @cite_17 , where the unstructured CFD solver TAU was implemented as a library using the PGAS-API GPI.
{ "cite_N": [ "@cite_17" ], "mid": [ "185800698" ], "abstract": [ "Whereas most applications in the realm of the partitioned global address space make use of PGAS languages we here demonstrate an implementation on top of a PGAS-API. In order to improve the scalability of the unstructured CFD solver TAU we have implemented an asynchronous communication strategy on top of the PGAS-API of GPI. We have replaced the bulk-synchronous two-sided MPI exchange with an asynchronous, RDMA-driven, one-sided communication pattern. We also have developed an asynchronous shared memory strategy for the TAU solver. We demonstrate that the corresponding implementation not only scales one order of magnitude higher than the original MPI implementation, but that it also outperforms the hybrid OpenMP MPI programming model." ] }
1601.03623
2233281717
In this paper we evaluate the performance of Unified Parallel C (which implements the partitioned global address space programming model) using a numerical method that is widely used in fluid dynamics. In order to evaluate the incremental approach to parallelization (which is possible with UPC) and its performance characteristics, we implement different levels of optimization of the UPC code and compare it with an MPI parallelization on four different clusters of the Austrian HPC infrastructure (LEO3, LEO3E, VSC2, VSC3) and on an Intel Xeon Phi. We find that UPC is significantly easier to develop in than MPI and that the performance achieved is comparable to MPI in most situations. The obtained results show worse performance (on VSC2), competitive performance (on LEO3, LEO3E and VSC3), and superior performance (on the Intel Xeon Phi) compared with MPI.
Let us also mention @cite_12 , where the legacy high-latency code FEniCS (which implements a finite element approach on an unstructured grid) is improved by using a hybrid MPI PGAS programming model. Since the algorithm used in the FEniCS code is formulated as a linear algebra program, that paper substitutes the linear algebra backend PETSc with its own UPC-based library. In contrast, our work employs a direct implementation of a specific numerical algorithm (as it is well known that a significant performance penalty is paid if such an algorithm on a structured grid is implemented using a generic linear algebra backend) and focuses on UPC as a tool to simplify parallel programming (while @cite_12 uses UPC to selectively replace two-sided communication with one-sided communication in order to make a legacy application ready for exascale computing).
{ "cite_N": [ "@cite_12" ], "mid": [ "1482281836" ], "abstract": [ "We present our work on developing a hybrid parallel programming model for a general finite element solver. The main focus of our work is to demonstrate that legacy codes with high latency, two-side ..." ] }
1601.03623
2233281717
In this paper we evaluate the performance of Unified Parallel C (which implements the partitioned global address space programming model) using a numerical method that is widely used in fluid dynamics. In order to evaluate the incremental approach to parallelization (which is possible with UPC) and its performance characteristics, we implement different levels of optimization of the UPC code and compare it with an MPI parallelization on four different clusters of the Austrian HPC infrastructure (LEO3, LEO3E, VSC2, VSC3) and on an Intel Xeon Phi. We find that UPC is significantly easier to develop in than MPI and that the performance achieved is comparable to MPI in most situations. The obtained results show worse performance (on VSC2), competitive performance (on LEO3, LEO3E and VSC3), and superior performance (on the Intel Xeon Phi) compared with MPI.
The use of PGAS languages has also been investigated on the Intel Xeon Phi. In @cite_10 , e.g., the performance of OpenSHMEM is explored on Xeon Phi clusters. Furthermore, @cite_6 implements various benchmarks including the NAS benchmark with UPC on the Intel Xeon Phi. However, to the best of our knowledge, no realistic application written in UPC has been investigated on the Intel Xeon Phi.
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2035932528", "2610996097" ], "abstract": [ "Intel Many Integrated Core (MIC) architectures are becoming an integral part of modern supercomputer architectures due to their high compute density and performance per watt. Partitioned Global Address Space (PGAS) programming models, such as OpenSHMEM, provide an attractive approach for developing scientific applications with irregular communication characteristics, by abstracting shared memory address space, along with one-sided communication semantics. However, the current OpenSHMEM standard does not efficiently support heterogeneous memory architectures such as Xeon Phi. Host and Xeon Phi cores have different memory capacities and compute characteristics. But, the global symmetric memory allocation in the current OpenSHMEM standard mandates that same amount of memory be allocated on every process. In this paper, we propose extensions to overcome this restriction and propose high performance runtime-level designs for efficient communication involving Xeon Phi processors. Further, we re-design applications to demonstrate the effectiveness of the proposed designs and extensions. Experimental evaluations indicate 4X to 7X reduction in OpenSHMEM data movement operation latencies, and 6X to 11X improvement in performance for collective operations. Application evaluations in symmetric mode indicate performance improvements of 28 at 1,024 processes. Further, application redesigns using the proposed extensions provide several magnitudes of performance improvement, as compared to the symmetric mode. To the best of our knowledge, this is the first research work that proposes high performance runtime designs for OpenSHMEM on Intel Xeon Phi clusters.", "Intel Many Integrated Core (MIC) architecture is steadily being adopted in clusters owing to its high compute throughput and power efficiency. 
The current generation MIC coprocessor, Xeon Phi, provides a highly multi-threaded environment with support for multiple programming models. While regular programming models such as MPI/OpenMP have started utilizing systems with MIC coprocessors, it is still not clear whether PGAS models can easily adopt and fully utilize such systems. In this paper, we discuss several ways of running UPC applications on the MIC architecture under Native/Symmetric programming mode. These methods include the choice of process-based or thread-based UPC runtime for native mode and different communication channels between MIC and host for symmetric mode. We propose a thread-based UPC runtime with an improved “leader-to-all” connection scheme over InfiniBand and SCIF [3] through multi-endpoint support. For the native mode, we evaluate point-to-point and collective micro-benchmarks, Global Array Random Access, UTS and NAS benchmarks. For the symmetric mode, we evaluate the communication performance between host and MIC within a single node. Through our evaluations, we explore the effects of scaling UPC threads on the MIC and also highlight the bottlenecks (up to 10X degradation) involved in UPC communication routines arising from the per-core processing and memory limitations on the MIC. To the best of our knowledge, this is the first paper that evaluates the UPC programming model on MIC systems." ] }
1601.03778
2294283642
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on its prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
A large number of works focus on factorization-based models. The common thread among factorization methods is that they explain the triples via latent features of entities. @cite_13 presents a tensor-based model that decomposes each entity and predicate in a knowledge graph into a low-dimensional vector. However, such a method fails to consider the symmetry property of the tensor. To address this issue, @cite_19 proposes a relational latent feature model, RESCAL, an efficient approach which uses a tensor factorization model that takes the inherent structure of relational data into account. By leveraging relational domain knowledge about entity type information, @cite_10 proposes a tensor decomposition approach for relation extraction in knowledge bases that is highly efficient in terms of time complexity. In addition, various other latent variable models, such as neural network based methods @cite_4 @cite_1 , have been explored for the link prediction task. However, the major drawback of neural network based models is their complexity and computational cost in model training and parameter tuning. Many of these models require tuning a large number of parameters, so finding the right combination of these parameters is often considered more of an art than a science.
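The factorization idea behind RESCAL-style models can be made concrete in a few lines: each entity gets a latent vector e, each predicate a latent matrix W, and a triple (s, p, o) is scored as e_s^T W_p e_o. The entities, dimensions, and hand-set values below are invented purely for illustration:

```python
# RESCAL-style bilinear score: score(s, p, o) = e_s^T W_p e_o.
def score(e_s, W_p, e_o):
    W_eo = [sum(W_p[i][j] * e_o[j] for j in range(len(e_o)))
            for i in range(len(W_p))]
    return sum(e_s[i] * W_eo[i] for i in range(len(e_s)))

# Toy 2-dimensional latent space: axis 0 ~ "person-like", axis 1 ~ "place-like".
e_alice  = [1.0, 0.0]
e_bob    = [0.8, 0.2]
e_berlin = [0.0, 1.0]
W_livesIn = [[0.0, 0.9],   # person-like subjects pair with place-like objects
             [0.0, 0.0]]

# livesIn(alice, berlin) scores higher than the nonsensical livesIn(alice, bob).
assert score(e_alice, W_livesIn, e_berlin) > score(e_alice, W_livesIn, e_bob)
```

In actual training the entries of e and W are learned by minimizing a reconstruction or ranking loss over observed triples; the asymmetric W_p is what lets such models handle directed relations.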
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_1", "@cite_19", "@cite_13" ], "mid": [ "2016753842", "2102363952", "2127426251", "205829674", "2119741678" ], "abstract": [ "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. 
In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-the-art method by 10 points when used as a subcomponent.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively.
Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "This paper explains the multi-way decomposition method PARAFAC and its use in chemometrics. PARAFAC is a generalization of PCA to higher order arrays, but some of the characteristics of the method are quite different from the ordinary two-way case. There is no rotation problem in PARAFAC, and e.g., pure spectra can be recovered from multi-way spectral data. One cannot as in PCA estimate components successively as this will give a model with poorer fit than if the simultaneous solution is estimated. Finally scaling and centering is not as straightforward in the multi-way case as in the two-way case. An important advantage of using multi-way methods instead of unfolding methods is that the estimated models are very simple in a mathematical sense, and therefore more robust and easier to interpret. All these aspects plus more are explained in this tutorial and an implementation in Matlab code is available, that contains most of the features explained in the text. Three examples show how PARAFAC can be used for specific problems. The applications include subjects such as: Analysis of variance by PARAFAC, a five-way application of PARAFAC, PARAFAC with half the elements missing, PARAFAC constrained to positive solutions and PARAFAC for regression as in principal component regression." ] }
1601.03778
2294283642
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
Recently, graphical models such as Probabilistic Relational Models @cite_22 , Relational Markov Networks @cite_25 , and Markov Logic Networks @cite_11 @cite_6 have also been used for link prediction in knowledge graphs. For instance, @cite_6 proposes a Markov Logic Network (MLN) based approach; an MLN is a template language that defines potential functions over a knowledge graph via logical formulas. Despite their utility for modeling knowledge graphs, issues such as the difficulty of rule learning, intractable inference, and parameter estimation pose implementation challenges for MLNs.
{ "cite_N": [ "@cite_25", "@cite_22", "@cite_6", "@cite_11" ], "mid": [ "2137213923", "2126185296", "1977970897", "2138204945" ], "abstract": [ "In many supervised learning tasks, the entities to be labeled are related to each other in complex ways and their labels are not independent. For example, in hypertext classification, the labels of linked pages are highly correlated. A standard approach is to classify each entity independently, ignoring the correlations between them. Recently, Probabilistic Relational Models, a relational version of Bayesian networks, were used to define a joint probabilistic model for a collection of related entities. In this paper, we present an alternative framework that builds on (conditional) Markov networks and addresses two limitations of the previous approach. First, undirected models do not impose the acyclicity constraint that hinders representation of many important relational dependencies in directed models. Second, undirected models are well suited for discriminative training, where we optimize the conditional likelihood of the labels given the features, which generally improves classification accuracy. We show how to train these models effectively, and how to use approximate probabilistic inference over the learned model for collective classification of multiple related entities. We provide experimental results on a webpage classification task, showing that accuracy can be significantly improved by modeling relational dependencies.", "A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with “flat” data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. 
PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "A number of text mining and information extraction projects such as Text Runner and NELL seek to automatically build knowledge bases from the rapidly growing amount of information on the web. In order to scale to the size of the web, these projects often employ ad hoc heuristics to reason about uncertain and contradictory information rather than reasoning jointly about all candidate facts. 
In this paper, we present a Markov logic-based system for cleaning an extracted knowledge base. This allows a scalable system such as NELL to take advantage of joint probabilistic inference, or, conversely, allows Markov logic to be applied to a web scale problem. Our system uses only the ontological constraints and confidence values of the original system, along with human-labeled data if available. The labeled data can be used to calibrate the confidence scores from the original system or learn the effectiveness of individual extraction patterns. To achieve scalability, we introduce a neighborhood grounding method that only instantiates the part of the network most relevant to the given query. This allows us to partition the knowledge cleaning task into tractable pieces that can be solved individually. In experiments on NELL's knowledge base, we evaluate several variants of our approach and find that they improve both F1 and area under the precision-recall curve." ] }
1601.03778
2294283642
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
It has been demonstrated @cite_16 that no single approach emerges as a clear winner; instead, the merits of factorization models and graph feature models are often complementary. Thus, combining the advantages of different approaches to learning knowledge graphs is a promising direction. For instance, @cite_24 proposes an additive model, a linear combination of RESCAL and PRA; the combination not only decreases training time but also increases accuracy. @cite_7 combines a latent feature model with an additive term to learn from latent and neighborhood-based information on multi-relational data. @cite_4 fuses the outputs of PRA and a neural network model as features for training a binary classifier. Our work strongly aligns with this combination approach: we build on matrix factorization techniques that have proved successful for recommender systems, and plan to incorporate graph-based features in future work.
{ "cite_N": [ "@cite_24", "@cite_16", "@cite_4", "@cite_7" ], "mid": [ "2158781217", "2462160708", "2016753842", "2288526505" ], "abstract": [ "Tensor factorization has become a popular method for learning from multi-relational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. To identify conditions under which factorization is an efficient approach for learning from relational data, we derive upper and lower bounds on the rank required to recover adjacency tensors. Based on our findings, we propose a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and present a scalable algorithm for computing the factorization. We show experimentally both that the proposed additive model does improve the predictive performance over pure latent variable methods and that it also reduces the required rank — and therefore runtime and memory complexity — significantly.", "Recent years have witnessed a proliferation of large-scale knowledge graphs, from purely academic projects such as YAGO to major commercial projects such as Google's Knowledge Graph and Microsoft's Satori. Whereas there is a large body of research on mining homogeneous graphs, this new generation of information networks are highly heterogeneous, with thousands of entity and relation types and billions of instances of those types (graph vertices and edges). In this tutorial, we present the state of the art in constructing, mining, and growing knowledge graphs. The purpose of the tutorial is to equip newcomers to this exciting field with an understanding of the basic concepts, tools and methodologies, open research challenges, as well as pointers to available datasets and relevant literature. Knowledge graphs have become an enabling resource for a plethora of new knowledge-rich applications. 
Consequently, the tutorial will also discuss the role of knowledge bases in empowering a range of web applications, from web search to social networks to digital assistants. A publicly available knowledge base (Freebase) will be used throughout the tutorial to exemplify the different techniques.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "We present a general and novel framework for predicting links in multirelational graphs using a set of matrices describing the various instantiated relations in the knowledge base. We construct matrices that add information further remote in the knowledge graph by join operations and we describe how unstructured information can be integrated in the model. We show that efficient learning can be achieved using an alternating least squares approach exploiting sparse matrix algebra and low-rank approximations. We discuss the relevance of modeling nonlinear interactions and add corresponding model components. 
We also discuss a kernel solution which is of interest when it is easy to define sensible kernels. We discuss the relevance of feature selection for the interaction terms and apply a random search strategy to tune the hyperparameters in the model. We validate our approach using data sets from the Linked Open Data (LOD) cloud and from other sources." ] }
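The paper these rows belong to trains a latent feature embedding per predicate with Bayesian Personalized Ranking. A minimal sketch of that pairwise scheme for one predicate, using a plain dot-product scorer and uniformly sampled negative tails (toy code under assumed names, not the paper's implementation):

```python
import numpy as np

def bpr_train(triples, n_entities, dim=16, lr=0.05, reg=0.01,
              epochs=400, seed=0):
    """BPR for one predicate: learn entity embeddings so that each
    observed (head, tail) link outscores a randomly sampled tail."""
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(n_entities, dim))

    def score(h, t):
        return float(E[h] @ E[t])

    for _ in range(epochs):
        for h, t in triples:
            tn = int(rng.integers(n_entities))   # sampled negative tail
            x = score(h, t) - score(h, tn)       # pairwise margin
            g = 1.0 / (1.0 + np.exp(x))          # grad of ln sigmoid(x)
            eh, et, en = E[h].copy(), E[t].copy(), E[tn].copy()
            # stochastic gradient ascent on the BPR objective, L2-regularized
            E[h] += lr * (g * (et - en) - reg * eh)
            E[t] += lr * (g * eh - reg * et)
            E[tn] += lr * (-g * eh - reg * en)
    return E, score
```

Ranking candidate tails by score(h, ·) then yields the link recommendations whose confidence the paper studies.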
1601.02932
2236044868
Contig assembly is the first stage that most assemblers solve when reconstructing a genome from a set of reads. Its output consists of contigs -- a set of strings that are promised to appear in any genome that could have generated the reads. From the introduction of contigs 20 years ago, assemblers have tried to obtain longer and longer contigs, but the following question was never solved: given a genome graph @math (e.g. a de Bruijn, or a string graph), what are all the strings that can be safely reported from @math as contigs? In this paper we finally answer this question, and also give a polynomial time algorithm to find them. Our experiments show that these strings, which we call omnitigs, are 66% to 82% longer on average than the popular unitigs, and 29% of dbSNP locations have more neighbors in omnitigs than in unitigs.
The number of related assembly papers is vast, and we refer the reader to some surveys @cite_23 @cite_32 . For an empirical evaluation of the correctness of several state-of-the-art assemblers, see @cite_25 . Here, we discuss work on the theoretical underpinnings of assembly.
{ "cite_N": [ "@cite_25", "@cite_32", "@cite_23" ], "mid": [ "2127768708", "", "1972924519" ], "abstract": [ "New sequencing technology has dramatically altered the landscape of whole-genome sequencing, allowing scientists to initiate numerous projects to decode the genomes of previously unsequenced organisms. The lowest-cost technology can generate deep coverage of most species, including mammals, in just a few days. The sequence data generated by one of these projects consist of millions or billions of short DNA sequences (reads) that range from 50 to 150 nt in length. These sequences must then be assembled de novo before most genome analyses can begin. Unfortunately, genome assembly remains a very difficult problem, made more difficult by shorter reads and unreliable long-range linking information. In this study, we evaluated several of the leading de novo assembly algorithms on four different short-read data sets, all generated by Illumina sequencers. Our results describe the relative performance of the different assemblers as well as other significant differences in assembly difficulty that appear to be inherent in the genomes themselves. Three overarching conclusions are apparent: first, that data quality, rather than the assembler itself, has a dramatic effect on the quality of an assembled genome; second, that the degree of contiguity of an assembly varies enormously among different assemblers and different genomes; and third, that the correctness of an assembly also varies widely and is not well correlated with statistics on contiguity. To enable others to replicate our results, all of our data and methods are freely available, as are all assemblers used in this study.", "", "The emergence of next-generation sequencing platforms led to resurgence of research in whole-genome shotgun assembly algorithms and software. 
DNA sequencing data from the Roche 454, Illumina Solexa, and ABI SOLiD platforms typically present shorter read lengths, higher coverage, and different error profiles compared with Sanger sequencing data. Since 2005, several assembly software packages have been created or revised specifically for de novo assembly of next-generation sequencing data. This review summarizes and compares the published descriptions of packages named SSAKE, SHARCGS, VCAKE, Newbler, Celera Assembler, Euler, Velvet, ABySS, AllPaths, and SOAPdenovo. More generally, it compares the two standard methods known as the de Bruijn graph approach and the overlap layout consensus approach to assembly." ] }
1601.02932
2236044868
Contig assembly is the first stage that most assemblers solve when reconstructing a genome from a set of reads. Its output consists of contigs -- a set of strings that are promised to appear in any genome that could have generated the reads. From the introduction of contigs 20 years ago, assemblers have tried to obtain longer and longer contigs, but the following question was never solved: given a genome graph @math (e.g. a de Bruijn, or a string graph), what are all the strings that can be safely reported from @math as contigs? In this paper we finally answer this question, and also give a polynomial time algorithm to find them. Our experiments show that these strings, which we call omnitigs, are 66% to 82% longer on average than the popular unitigs, and 29% of dbSNP locations have more neighbors in omnitigs than in unitigs.
There are a few notable exceptions. In @cite_26 , Boisvert and colleagues also define the assembly problem in terms of finding contigs, rather than a single reconstruction. Nagarajan and Pop @cite_31 observe that Waterman's characterization @cite_13 of the graphs with a unique Eulerian tour leads to a simple algorithm for finding all safe strings when a genomic reconstruction is an Eulerian tour. They also suggest an approach for finding all the safe strings when a genomic reconstruction is a Chinese Postman tour @cite_31 . We note, however, that in the Eulerian model, the exact copy count of each edge must be known in advance, while in the Chinese Postman model (which minimizes the length of the genomic reconstruction), the solution will over-collapse all tandem repeats. Furthermore, these approaches have not been implemented, and hence their effectiveness is unknown.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_13" ], "mid": [ "2078331392", "2033292629", "2041397659" ], "abstract": [ "Abstract In recent years, a flurry of new DNA sequencing technologies have altered the landscape of genomics, providing a vast amount of sequence information at a fraction of the costs that were previously feasible. The task of assembling these sequences into a genome has, however, still remained an algorithmic challenge that is in practice answered by heuristic solutions. In order to design better assembly algorithms and exploit the characteristics of sequence data from new technologies, we need an improved understanding of the parametric complexity of the assembly problem. In this article, we provide a first theoretical study in this direction, exploring the connections between repeat complexity, read lengths, overlap lengths and coverage in determining the “hard” instances of the assembly problem. Our work suggests at least two ways in which existing assemblers can be extended in a rigorous fashion, in addition to delineating directions for future theoretical investigations.", "Abstract An accurate genome sequence of a desired species is now a pre-requisite for genome research. An important step in obtaining a high-quality genome sequence is to correctly assemble short reads into longer sequences accurately representing contiguous genomic regions. Current sequencing technologies continue to offer increases in throughput, and corresponding reductions in cost and time. Unfortunately, the benefit of obtaining a large number of reads is complicated by sequencing errors, with different biases being observed with each platform. Although software are available to assemble reads for each individual system, no procedure has been proposed for high-quality simultaneous assembly based on reads from a mix of different technologies. 
In this paper, we describe a parallel short-read assembler, called Ray, which has been developed to assemble reads obtained from a combination of sequencing platforms. We compared its performance to other assemblers on simulated and real datasets. We used a combin...", "Preface Introduction Molecular Biology Mathematics, Statistics, and Computer Science Some Molecular Biology DNA and Proteins The Central Dogma The Genetic Code Transfer RNA and Protein Sequences Genes Are Not Simple Biological Chemistry Restriction Maps Introduction Graphs Interval Graphs Measuring Fragment Sizes Multiple Maps Double Digest Problem Classifying Multiple Solutions Algorithms for DDP Algorithms and Complexity DDP is N P-Complete Approaches to DDP Simulated Annealing: TSP and DDP Mapping with Real Data Cloning and Clone Libraries A Finite Number of Random Clones Libraries by Complete Digestion Libraries by Partial Digestion Genomes per Microgram Physical Genome Maps: Oceans, Islands, and Anchors Mapping by Fingerprinting Mapping by Anchoring An Overview of Clone Overlap Putting It Together Sequence Assembly Shotgun Sequencing Sequencing by Hybridization Shotgun Sequencing Revisited Databases and Rapid Sequence Analysis DNA and Protein Sequence Databases A Tree Representation of a Sequence Hashing a Sequence Repeats in a Sequence Sequence Comparison by Hashing Sequence Comparison with at most l Mismatches Sequence Comparison by Statistical Content Dynamic Programming Alignment of Two Sequences The Number of Alignments Shortest and Longest Paths in a Network Global Distance Alignment Global Similarity Alignment Fitting One Sequence into Another Local Alignment and Clumps Linear Space Algorithms Tracebacks Inversions Map Alignment Parametric Sequence Comparisons Multiple Sequence Alignment The Cystic Fibrosis Gene Dynamic Programming in r-Dimensions Weighted-Average Sequences Profile Analysis Alignment by Hidden Markov Models Consensus Word Analysis Probability and Statistics for Sequence 
Alignment Global Alignment Local Alignment Extreme Value Distributions The Chein-Stein Method Poisson Approximation and Long Matches Sequence Alignment with Scores Probability and Statistics for Sequence Patterns A Central Limit Theorem Nonoverlapping Pattern Counts Poisson Approximation Site Distributions RNA Secondary Structure Combinatorics Minimum Free-energy Structures Consensus folding Trees and Sequences Trees Distance Parsimony Maximum Likelihood Trees Sources and Perspectives Molecular Biology Physical Maps and Clone Libraries Sequence Assembly Sequence Comparisons Probability and Statistics RNA Secondary Structure Trees and Sequences References Problem Solutions and Hints Mathematical Notation Algorithm Index Author Index Subject Index" ] }
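The omnitigs of this row's paper generalize unitigs, the maximal non-branching paths of the genome graph. A minimal sketch of unitig extraction from a toy de Bruijn graph (hypothetical helper names; real assemblers also handle reverse complements and isolated cycles, which this sketch omits):

```python
from collections import defaultdict

def de_bruijn(kmers):
    """(k-1)-mer overlap graph of the input k-mers: node -> successors."""
    g = defaultdict(list)
    for km in kmers:
        g[km[:-1]].append(km[1:])
    return g

def unitigs(g):
    """Spell the maximal non-branching paths of g -- the classical
    unitigs that omnitigs generalize.  Every internal node of a unitig
    has in-degree and out-degree exactly 1."""
    indeg = defaultdict(int)
    for u, vs in g.items():
        for v in vs:
            indeg[v] += 1
    nodes = set(g) | set(indeg)
    internal = {v for v in nodes
                if indeg[v] == 1 and len(g.get(v, [])) == 1}
    out = []
    for u in nodes:
        if u in internal:
            continue                  # unitigs start at non-internal nodes
        for v in g.get(u, []):
            path = [u, v]
            while v in internal:      # extend through 1-in-1-out nodes
                v = g[v][0]
                path.append(v)
            # spell: first (k-1)-mer plus the last base of each extension
            out.append(path[0] + "".join(p[-1] for p in path[1:]))
    return sorted(out)
```

On a branch-free k-mer set the single unitig spells the whole genome; a branching node splits the output into shorter safe strings, which is exactly the limitation omnitigs relax.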
1601.03095
2237447357
We consider the problem of maximizing a monotone submodular function under noise. There has been a great deal of work on optimization of submodular functions under various constraints, resulting in algorithms that provide desirable approximation guarantees. In many applications, however, we do not have access to the submodular function we aim to optimize, but rather to some erroneous or noisy version of it. This raises the question of whether provable guarantees are obtainable in presence of error and noise. We provide initial answers, by focusing on the question of maximizing a monotone submodular function under a cardinality constraint when given access to a noisy oracle of the function. We show that: - For a cardinality constraint @math , there is an approximation algorithm whose approximation ratio is arbitrarily close to @math ; - For @math there is an algorithm whose approximation ratio is arbitrarily close to @math . No randomized algorithm can obtain an approximation ratio better than @math ; -If the noise is adversarial, no non-trivial approximation guarantee can be obtained.
Submodular functions were studied in game theory almost fifty years ago @cite_29 . In mechanism design, submodular functions are used to model agents' valuations @cite_2 and have been extensively studied in the context of combinatorial auctions (e.g. @cite_82 @cite_86 @cite_25 @cite_59 @cite_78 @cite_62 @cite_21 @cite_55 @cite_22 ). Maximizing submodular functions under cardinality constraints has been studied in the context of combinatorial public projects @cite_65 @cite_7 @cite_69 @cite_19 , where the focus is on the computational hardness associated with not knowing agents' valuations and having to resort to incentive-compatible algorithms. Our adversarial lower bound implies that if agents err in their valuations, optimization may be hard, regardless of incentive constraints.
{ "cite_N": [ "@cite_69", "@cite_62", "@cite_22", "@cite_78", "@cite_7", "@cite_29", "@cite_21", "@cite_55", "@cite_65", "@cite_19", "@cite_59", "@cite_2", "@cite_86", "@cite_25", "@cite_82" ], "mid": [ "2031122640", "2021734699", "2950708966", "", "1564333628", "2085648025", "", "2169319812", "2141502056", "", "2059527251", "2004045061", "2035337032", "2045432319", "2126085282" ], "abstract": [ "The Combinatorial Public Projects Problem (CPPP) is an abstraction of resource allocation problems in which agents have preferences over alternatives, and an outcome that is to be collectively shared by the agents is chosen so as to maximize the social welfare. We explore CPPP from both computational perspective and a mechanism design perspective. We examine CPPP in the hierarchy of complement free (subadditive) valuation classes and present positive and negative results for both unrestricted and truthful algorithms.", "We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a (1.5+e)-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees 5 3 approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the 5 3-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. 
En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.", "One of the fundamental questions of Algorithmic Mechanism Design is whether there exists an inherent clash between truthfulness and computational tractability: in particular, whether polynomial-time truthful mechanisms for combinatorial auctions are provably weaker in terms of approximation ratio than non-truthful ones. This question was very recently answered for universally truthful mechanisms for combinatorial auctions D11 , and even for truthful-in-expectation mechanisms DughmiV11 . However, both of these results are based on information-theoretic arguments for valuations given by a value oracle, and leave open the possibility of polynomial-time truthful mechanisms for succinctly described classes of valuations. This paper is the first to prove computational hardness results for truthful mechanisms for combinatorial auctions with succinctly described valuations. We prove that there is a class of succinctly represented submodular valuations for which no deterministic truthful mechanism provides an @math -approximation for a constant @math , unless @math ( @math denotes the number of items). Furthermore, we prove that even truthful-in-expectation mechanisms cannot approximate combinatorial auctions with certain succinctly described submodular valuations better than within @math , where @math is the number of bidders and @math some absolute constant, unless @math . In addition, we prove computational hardness results for two related problems.", "", "We study the Combinatorial Public Project Problem ( CPPP ) in which n agents are assigned a subset of m resources of size k so as to maximize the social welfare. 
Combinatorial public projects are an abstraction of many resource-assignment problems (Internet-related network design, elections, etc.). It is known that if all agents have submodular valuations then a constant approximation is achievable in polynomial time. However, submodularity is a strong assumption that does not always hold in practice. We show that (unlike similar problems such as combinatorial auctions) even slight relaxations of the submodularity assumption result in non-constant lower bounds for approximation.", "The core of ann-person game is the set of feasible outcomes that cannot be improved upon by any coalition of players. A convex game is defined as one that is based on a convex set function. In this paper it is shown that the core of a convex game is not empty and that it has an especially regular structure. It is further shown that certain other cooperative solution concepts are related in a simple way to the core: The value of a convex game is the center of gravity of the extreme points of the core, and the von Neumann-Morgenstern stable set solution of a convex game is unique and coincides with the core.", "", "We design an expected polynomial time, truthful in expectation, (1-1 e)-approximation mechanism for welfare maximization in a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions and matroid weighted-rank functions. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming P ≠ NP. Ours is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an NP-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations. 
Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize directly over the (random) output of the rounding algorithm, rather than over the input to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a (1-1 e)-approximation of the optimal welfare.", "The central problem in computational mechanism design is the tension between incentive compatibility and computational efficiency. We establish the first significant approximability gap between algorithms that are both truthful and computationally-efficient, and algorithms that only achieve one of these two desiderata. This is shown in the context of a novel mechanism design problem which we call the combinatorial public project problem (cppp). cpppis an abstraction of many common mechanism design situations, ranging from elections of kibbutz committees to network design.Our result is actually made up of two complementary results -- one in the communication-complexity model and one in the computational-complexity model. Both these hardness results heavily rely on a combinatorial characterization of truthful algorithms for our problem. 
Our computational-complexity result is one of the first impossibility results connecting mechanism design to complexity theory; its novel proof technique involves an application of the Sauer-Shelah Lemma and may be of wider applicability, both within and without mechanism design.", "", "We provide tight information-theoretic lower bounds for the welfare maximization problem in combinatorial auctions. In this problem, the goal is to partition m items among k bidders in a way that maximizes the sum of bidders' values for their allocated items. Bidders have complex preferences over items expressed by valuation functions that assign values to all subsets of items. We study the \"black box\" setting in which the auctioneer has oracle access to the valuation functions of the bidders. In particular, we explore the well-known value query model in which the permitted query to a valuation function is in the form of a subset of items, and the reply is the value assigned to that subset of items by the valuation function. We consider different classes of valuation functions: submodular,subadditive, and superadditive. For these classes, it has been shown that one can achieve approximation ratios of 1 -- 1 e, 1 √m, and √ m m, respectively, via a polynomial (in k and m) number of value queries. We prove that these approximation factors are essentially the best possible: For any fixed e > 0, a (1--1 e + e)-approximation for submodular valuations or an 1 m1 2-e-approximation for subadditive valuations would require exponentially many value queries, and a log1+e m m-approximation for superadditive valuations would require a superpolynomial number of value queries.", "In most of microeconomic theory, consumers are assumed to exhibit decreasing marginal utilities. This paper considers combinatorial auctions among such buyers. 
The valuations of such buyers are placed within a hierarchy of valuations that exhibit no complementarities, a hierarchy that also includes OR and XOR combinations of singleton valuations, and valuations satisfying the gross substitutes property. While we show that the allocation problem among valuations with decreasing marginal utilities is NP-hard, we present an efficient greedy 2-approximation algorithm for this case. No such approximation algorithm exists in a setting allowing for complementarities. Some results about strategic aspects of combinatorial auctions among players with decreasing marginal utilities are also presented.", "We explore the allocation problem in combinatorial auctions with submodular bidders. We provide an e/(e-1) approximation algorithm for this problem. Moreover, our algorithm applies to the more general class of XOS bidders. By presenting a matching unconditional lower bound in the communication model, we prove that the upper bound is tight for the XOS class. Our algorithm improves upon the previously known 2-approximation algorithm. In fact, we also exhibit another algorithm which obtains an approximation ratio better than 2 for submodular bidders, even in the value queries model. Throughout the paper we highlight interesting connections between combinatorial auctions with XOS and submodular bidders and various other combinatorial optimization problems. In particular, we discuss coverage problems and online problems.", "We study multi-unit auctions where the bidders have a budget constraint, a situation very common in practice that has received very little attention in the auction theory literature. Our main result is an impossibility: there are no incentive-compatible auctions that always produce a Pareto-optimal allocation. We also obtain some surprising positive results for certain special cases.", "We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement free bidders. 
The running time of these algorithms is polynomial in the number of items @math and in the number of bidders n, even though the \"input size\" is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√ m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. The third algorithm provides an improved 2-approximation for the more restricted case of \"XOS bidders\", a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems." ] }
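The greedy 2-approximation for bidders with decreasing marginal utilities, described in one of the abstracts above, assigns items one at a time to the bidder with the largest marginal value. A minimal Python sketch; the bidder names and valuation functions below are invented for the example (both valuations are submodular):

```python
def greedy_allocation(items, bidders, value):
    # Consider items one at a time; give each to the bidder whose
    # bundle gains the most marginal value. For bidders with
    # decreasing marginal utilities this greedy rule is a
    # 2-approximation to the optimal welfare.
    bundles = {b: set() for b in bidders}
    for item in items:
        best = max(bidders,
                   key=lambda b: value(b, bundles[b] | {item}) - value(b, bundles[b]))
        bundles[best].add(item)
    return bundles

# Hypothetical valuations: bidder A is unit-demand, bidder B is additive.
def value(bidder, bundle):
    return min(len(bundle), 1) if bidder == "A" else len(bundle)

alloc = greedy_allocation([1, 2], ["A", "B"], value)
```

With these toy valuations the greedy rule hands item 1 to A (ties break toward the first bidder) and item 2 to B, which here happens to be welfare-optimal.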
1601.03095
2237447357
We consider the problem of maximizing a monotone submodular function under noise. There has been a great deal of work on optimization of submodular functions under various constraints, resulting in algorithms that provide desirable approximation guarantees. In many applications, however, we do not have access to the submodular function we aim to optimize, but rather to some erroneous or noisy version of it. This raises the question of whether provable guarantees are obtainable in presence of error and noise. We provide initial answers, by focusing on the question of maximizing a monotone submodular function under a cardinality constraint when given access to a noisy oracle of the function. We show that: - For a cardinality constraint @math , there is an approximation algorithm whose approximation ratio is arbitrarily close to @math ; - For @math there is an algorithm whose approximation ratio is arbitrarily close to @math . No randomized algorithm can obtain an approximation ratio better than @math ; -If the noise is adversarial, no non-trivial approximation guarantee can be obtained.
In the past decade, submodular optimization has become a central tool in machine learning and data mining (see surveys @cite_39 @cite_75 @cite_31 ). Problems include identifying influencers in social networks @cite_43 @cite_27 , sensor placement @cite_42 @cite_14 , learning in data streams @cite_28 @cite_34 @cite_89 @cite_61 , information summarization @cite_83 @cite_47 , adaptive learning @cite_37 , vision @cite_50 @cite_51 @cite_80 , and general inference methods @cite_41 @cite_51 @cite_17 . In many cases the submodular function is learned from data, and our work aims to address the case in which there is potential for noise in the model.
{ "cite_N": [ "@cite_41", "@cite_42", "@cite_43", "@cite_75", "@cite_31", "@cite_39", "@cite_80", "@cite_17", "@cite_37", "@cite_28", "@cite_27", "@cite_83", "@cite_50", "@cite_34", "@cite_61", "@cite_14", "@cite_89", "@cite_47", "@cite_51" ], "mid": [ "2021774297", "2141403143", "", "", "", "2073110021", "", "2147716762", "2962795549", "2135414222", "", "2144933361", "2061958803", "2158504911", "1997959284", "2950807979", "2101246692", "2401608360", "1894718474" ], "abstract": [ "When monitoring spatial phenomena, such as the ecological condition of a river, deciding where to make observations is a challenging task. In these settings, a fundamental question is when an active learning, or sequential design, strategy, where locations are selected based on previous measurements, will perform significantly better than sensing at an a priori specified set of locations. For Gaussian Processes (GPs), which often accurately model spatial phenomena, we present an analysis and efficient algorithms that address this question. Central to our analysis is a theoretical bound which quantifies the performance difference between active and a priori design strategies. We consider GPs with unknown kernel parameters and present a nonmyopic approach for trading off exploration, i.e., decreasing uncertainty about the model parameters, and exploitation, i.e., near-optimally selecting observations when the parameters are (approximately) known. We discuss several exploration strategies, and present logarithmic sample complexity bounds for the exploration phase. We then extend our algorithm to handle nonstationary GPs exploiting local structure in the model. We also present extensive empirical evaluation on several real-world problems.", "Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories? 
These seemingly different problems share common structure: Outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. We present a general methodology for near optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of "submodularity". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems, including a model of a water distribution network from the EPA, and real blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions.", "", "", "", "Where should we place sensors to efficiently monitor natural drinking water resources for contamination? Which blogs should we read to learn about the biggest stories on the Web? These problems share a fundamental challenge: How can we obtain the most useful information about the state of the world, at minimum cost? Such information gathering, or active learning, problems are typically NP-hard, and were commonly addressed using heuristics without theoretical guarantees about the solution quality. 
In this article, we describe algorithms which efficiently find provably near-optimal solutions to large, complex information gathering problems. Our algorithms exploit submodularity, an intuitive notion of diminishing returns common to many sensing problems: the more sensors we have already deployed, the less we learn by placing another sensor. In addition to identifying the most informative sensing locations, our algorithms can handle more challenging settings, where sensors need to be able to reliably communicate over lossy links, where mobile robots are used for collecting data, or where solutions need to be robust against adversaries and sensor failures. We also present results applying our algorithms to several real-world sensing tasks, including environmental monitoring using robotic sensors, activity recognition using a built sensing chair, a sensor placement challenge, and deciding which blogs to read on the Web.", "", "Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions based on sub- and supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to optimally solve. We provide theoretical guarantees of the approximation quality with respect to the curvature of the function. 
We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.", "Many problems in artificial intelligence require adaptively making a sequence of decisions with uncertain outcomes under partial observability. Solving such stochastic optimization problems is a fundamental but notoriously difficult challenge. In this paper, we introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. We prove that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy. In addition to providing performance guarantees for both stochastic maximization and coverage, adaptive submodularity can be exploited to drastically speed up the greedy algorithm by using lazy evaluations. We illustrate the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse AI applications including management of sensing resources, viral marketing and active learning. Proving adaptive submodularity for these problems allows us to recover existing results in these applications as special cases, improve approximation guarantees and handle natural generalizations.", "Which ads should we display in sponsored search in order to maximize our revenue? How should we dynamically rank information sources to maximize the value of the ranking? These applications exhibit strong diminishing returns: Redundancy decreases the marginal utility of each ad or information source. We show that these and other problems can be formalized as repeatedly selecting an assignment of items to positions to maximize a sequence of monotone submodular functions that arrive one by one. We present an efficient algorithm for this general problem and analyze it in the no-regret model. 
Our algorithm possesses strong theoretical guarantees, such as a performance ratio that converges to the optimal constant of 1 - 1/e. We empirically evaluate our algorithm on two real-world online optimization problems on the web: ad allocation with submodular utilities, and dynamically ranking blogs to detect information cascades.", "", "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-the-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.", "We propose a new family of non-submodular global energy functions that still use submodularity internally to couple edges in a graph cut. We show it is possible to develop an efficient approximation algorithm that, thanks to the internal submodularity, can use standard graph cuts as a subroutine. We demonstrate the advantages of edge coupling in a natural setting, namely image segmentation. In particular, for fine-structured objects and objects with shading variation, our structured edge coupling leads to significant improvements over standard approaches.", "We consider the problem of extracting informative exemplars from a data stream. Examples of this problem include exemplar-based clustering and nonparametric inference such as Gaussian process regression on massive data sets. 
We show that these problems require maximization of a submodular function that captures the informativeness of a set of exemplars, over a data stream. We develop an efficient algorithm, Stream-Greedy, which is guaranteed to obtain a constant fraction of the value achieved by the optimal solution to this NP-hard optimization problem. We extensively evaluate our algorithm on large real-world data sets.", "How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor (1/2 - ε) approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.", "A key problem in sensor networks is to decide which sensors to query when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. 
Furthermore for large sensor networks solving a centralized optimization problem to select sensors is not feasible, and thus we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, only receiving feedback about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing returns property called submodularity. Our algorithm has extremely low communication requirements, and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection. We empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.", "Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. 
Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in substantially fewer rounds.", "We address the problem of finding a subset of a large speech data corpus that is useful for accurately and rapidly prototyping novel and computationally expensive speech recognition architectures. To solve this problem, we express it as an optimization problem over submodular functions. Quantities such as vocabulary size (or quality) of a set of utterances, or quality of a bundle of word types are submodular functions which make finding the optimal solutions possible. We, moreover, are able to express our approach using graph cuts leading to a very fast implementation even on large initial corpora. We show results on the Switchboard-I corpus, demonstrating improved results over previous techniques for this purpose. We also demonstrate the variety of the resulting corpora that may be produced using our method.", "We analyze a family of probability distributions that are characterized by an embedded combinatorial structure. This family includes models having arbitrary treewidth and arbitrary sized factors. Unlike general models with such freedom, where the "most probable explanation" (MPE) problem is inapproximable, the combinatorial structure within our model, in particular the indirect use of sub-modularity, leads to several MPE algorithms that all have approximation guarantees." ] }
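Several of the applications cited above reduce to maximizing a monotone submodular function under a cardinality constraint, for which the classic Nemhauser-Wolsey-Fisher greedy rule gives a (1 - 1/e) approximation. A minimal sketch on a max-k-cover instance (the input sets are invented for illustration):

```python
def greedy_max_cover(sets, k):
    # Greedy rule for max-k-cover: repeatedly pick the set with the
    # largest marginal coverage. Coverage is monotone submodular, so
    # the result is a (1 - 1/e) approximation to the optimum.
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(sets[s] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

chosen, covered = greedy_max_cover({"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}, k=2)
```

On this toy instance the greedy rule first takes "a" (3 new elements), then "c" (2 new elements versus 1 for "b"), covering all five elements.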
1601.03095
2237447357
We consider the problem of maximizing a monotone submodular function under noise. There has been a great deal of work on optimization of submodular functions under various constraints, resulting in algorithms that provide desirable approximation guarantees. In many applications, however, we do not have access to the submodular function we aim to optimize, but rather to some erroneous or noisy version of it. This raises the question of whether provable guarantees are obtainable in presence of error and noise. We provide initial answers, by focusing on the question of maximizing a monotone submodular function under a cardinality constraint when given access to a noisy oracle of the function. We show that: - For a cardinality constraint @math , there is an approximation algorithm whose approximation ratio is arbitrarily close to @math ; - For @math there is an algorithm whose approximation ratio is arbitrarily close to @math . No randomized algorithm can obtain an approximation ratio better than @math ; -If the noise is adversarial, no non-trivial approximation guarantee can be obtained.
Combinatorial optimization with noisy inputs can be largely studied through consistent and inconsistent oracles (an inconsistent oracle returns independent noisy answers when the same query is made twice). For inconsistent oracles, it usually suffices to repeat every query @math times, and eliminate the noise. To the best of our knowledge, submodular optimization has been studied under noise only in instances where the oracle is inconsistent, or the noise is small enough that it does not affect the optimization @cite_43 @cite_3 . One line of work studies methods for reducing the number of samples required for optimization (see e.g. @cite_1 @cite_5 ), primarily for sorting and finding elements. On the other hand, if two identical queries to the oracle always yield the same result, the noise cannot be averaged out so easily, and one needs to settle for approximate solutions, which has been studied in the context of tournaments and rankings @cite_54 @cite_71 @cite_56 .
{ "cite_N": [ "@cite_54", "@cite_1", "@cite_3", "@cite_56", "@cite_43", "@cite_71", "@cite_5" ], "mid": [ "1986978101", "2038435918", "1566959760", "", "", "2952149393", "2111790732" ], "abstract": [ "We present a polynomial time approximation scheme (PTAS) for the minimum feedback arc set problem on tournaments. A simple weighted generalization gives a PTAS for Kemeny-Young rank aggregation.", "This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability. In the noisy Boolean decision tree model, tight bounds are given on the number of queries to input variables required to compute threshold functions, the parity function and symmetric functions. In the noisy comparison tree model, tight bounds are given on the number of noisy comparisons for searching, sorting, selection and merging. The paper also studies parallel selection and sorting with noisy comparisons, giving tight bounds for several problems.", "Many set functions F in combinatorial optimization satisfy the diminishing returns property F(A ∪ {X}) - F(A) ≥ F(A′ ∪ {X}) - F(A′) for A ⊆ A′", "", "", "In this paper we study noisy sorting without re-sampling. In this problem there is an unknown order @math for all pairs @math , where @math is a constant and @math for all @math and @math . It is assumed that the errors are independent. Given the status of the queries the goal is to find the maximum likelihood order. In other words, the goal is to find a permutation @math that minimizes the number of pairs @math where @math . The problem so defined is the feedback arc set problem on distributions of inputs, each of which is a tournament obtained as a noisy perturbation of a linear order. Note that when @math and @math is large, it is impossible to recover the original order @math . It is known that the weighted feedback arc set problem on tournaments is NP-hard in general. 
Here we present an algorithm of running time @math and sampling complexity @math that with high probability solves the noisy sorting without re-sampling problem. We also show that if @math is an optimal solution of the problem then it is "close" to the original order. More formally, with high probability it holds that @math and @math . Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.", "We use a Bayesian approach to optimally solve problems in noisy binary search. We deal with two variants: 1. Each comparison is erroneous with independent probability 1-p. 2. At each stage k comparisons can be performed in parallel and a noisy answer is returned. We present a (classical) algorithm which solves both variants optimally (with respect to p and k), up to an additive term of O(log log n), and prove matching information-theoretic lower bounds. We use the algorithm to improve the results of , presenting an exact quantum search algorithm in an ordered list of expected complexity less than (log2 n)/3." ] }
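The related-work passage above notes that an inconsistent oracle (independent noise on every query) can be denoised by repeating each query and taking a majority vote; a Chernoff bound shows O(log(1/δ)/(1/2 - p)^2) repetitions suffice for error probability δ. A minimal sketch, assuming a hypothetical boolean oracle that errs independently with probability 0.3 on each call:

```python
import random

def denoised(query, repetitions):
    # Majority vote over independent calls to a noisy boolean oracle;
    # with enough repetitions the error probability vanishes.
    votes = sum(query() for _ in range(repetitions))
    return 2 * votes > repetitions

random.seed(0)
# Hypothetical inconsistent oracle: the true answer is True, but each
# call is flipped independently with probability 0.3.
noisy_oracle = lambda: random.random() > 0.3
answer = denoised(noisy_oracle, 1001)  # recovers True with overwhelming probability
```

This is exactly the step that fails for a consistent oracle: when repeated queries always return the same answer, averaging buys nothing, which is what motivates the approximate-solution results cited above.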
1601.03316
2238542761
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph @math , we are asked to find a partition @math of @math that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity maximization algorithms with provable approximation guarantees has attracted significant attention in the computer science community. In this study, we further investigate the approximability of modularity maximization. More specifically, we propose a polynomial-time @math -additive approximation algorithm for the modularity maximization problem. Note here that @math holds. This improves the current best additive approximation error of @math , which was recently provided by Dinh, Li, and Thai (2015). Interestingly, our analysis also demonstrates that the proposed algorithm obtains a nearly-optimal solution for any instance with a very high modularity value. Moreover, we propose a polynomial-time @math -additive approximation algorithm for the maximum modularity cut problem. It should be noted that this is the first non-trivial approximability result for the problem. Finally, we demonstrate that our approximation algorithm can be extended to some related problems.
The seminal work by Goemans and Williamson @cite_18 has opened the door to the design of approximation algorithms using the SDP relaxation and the hyperplane separation technique. To date, this approach has succeeded in developing approximation algorithms for various NP-hard problems @cite_5 . As mentioned above, Agarwal and Kempe @cite_2 introduced the SDP relaxation for the maximum modularity cut problem. For the original modularity maximization problem, the SDP relaxation was recently used by Dinh, Li, and Thai @cite_15 .
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_15", "@cite_2" ], "mid": [ "1899379166", "1985123706", "2247411266", "2057504236" ], "abstract": [ "Discrete optimization problems are everywhere, from traditional operations research planning problems, such as scheduling, facility location, and network design; to computer science problems in databases; to advertising issues in viral marketing. Yet most such problems are NP-hard. Thus unless P = NP, there are no efficient algorithms to find optimal solutions to such problems. This book shows how to design approximation algorithms: efficient algorithms that find provably near-optimal solutions. The book is organized around central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization. Each chapter in the first part of the book is devoted to a single algorithmic technique, which is then applied to several different problems. The second part revisits the techniques but offers more sophisticated treatments of them. The book also covers methods for proving that optimization problems are hard to approximate. Designed as a textbook for graduate-level algorithms courses, the book will also serve as a reference for researchers interested in the heuristic solution of discrete optimization problems.", "We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 for MAX 2SAT. 
Slight extensions of our analysis lead to a .79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a .758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.", "Many social networks and complex systems are found to be naturally divided into clusters of densely connected nodes, known as community structure (CS). Finding CS is one of the fundamental yet challenging topics in network science. One of the most popular classes of methods for this problem is to maximize Newman's modularity. However, there is little understanding of how well we can approximate the maximum modularity, or of the implications of finding community structure with provable guarantees. In this paper, we definitively settle the approximability of modularity clustering, proving that approximating the problem within any (multiplicative) positive factor is intractable, unless P = NP. Yet we propose the first additive approximation algorithm for modularity clustering with a constant factor. Moreover, we provide a rigorous proof that a CS with modularity arbitrarily close to the maximum modularity QOPT might bear no similarity to the optimal CS of maximum modularity. Thus even when CS with near-optimal modularity are found, other verification methods are needed to confirm the significance of the structure.", "In many networks, it is of great interest to identify communities, unusually densely knit groups of individuals. Such communities often shed light on the function of the networks or underlying properties of the individuals. Recently, Newman suggested modularity as a natural measure of the quality of a network partitioning into communities. 
Since then, various algorithms have been proposed for (approximately) maximizing the modularity of the partitioning determined. In this paper, we introduce the technique of rounding mathematical programs to the problem of modularity maximization, presenting two novel algorithms. More specifically, the algorithms round solutions to linear and vector programs. Importantly, the linear programing algorithm comes with an a posteriori approximation guarantee: by comparing the solution quality to the fractional solution of the linear program, a bound on the available “room for improvement” can be obtained. The vector programming algorithm provides a similar bound for the best partition into two communities. We evaluate both algorithms using experiments on several standard test cases for network partitioning algorithms, and find that they perform comparably or better than past algorithms, while being more efficient than exhaustive techniques." ] }
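The modularity objective maximized in the abstracts above is cheap to evaluate for a given partition: Q = Σ_c (e_c/m - (d_c/2m)^2), where e_c is the number of intra-community edges, d_c the total degree of community c, and m the number of edges. A minimal sketch; the example graph (two triangles joined by a single edge) is invented for illustration:

```python
def modularity(adj, communities):
    # Newman-Girvan modularity of a partition of an undirected graph,
    # given as an adjacency dict {vertex: set of neighbours}.
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    Q = 0.0
    for comm in communities:
        e_c = sum(1 for v in comm for u in adj[v] if u in comm) / 2
        d_c = sum(len(adj[v]) for v in comm)
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
Q = modularity(adj, [{0, 1, 2}, {3, 4, 5}])  # 5/14, about 0.357
```

The natural two-triangle partition gives Q = 2 * (3/7 - (7/14)^2) = 5/14, which is what the maximization algorithms discussed above would try to beat.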
1601.03316
2238542761
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph @math , we are asked to find a partition @math of @math that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity maximization algorithms with provable approximation guarantees has attracted significant attention in the computer science community. In this study, we further investigate the approximability of modularity maximization. More specifically, we propose a polynomial-time @math -additive approximation algorithm for the modularity maximization problem. Note here that @math holds. This improves the current best additive approximation error of @math , which was recently provided by Dinh, Li, and Thai (2015). Interestingly, our analysis also demonstrates that the proposed algorithm obtains a nearly-optimal solution for any instance with a very high modularity value. Moreover, we propose a polynomial-time @math -additive approximation algorithm for the maximum modularity cut problem. It should be noted that this is the first non-trivial approximability result for the problem. Finally, we demonstrate that our approximation algorithm can be extended to some related problems.
DasGupta and Desai @cite_14 designed an @math -approximation algorithm for @math -regular graphs with @math . Moreover, they developed an approximation algorithm for the weighted modularity maximization problem. The approximation ratio is logarithmic in the maximum weighted degree of edge-weighted graphs (where the edge-weights are normalized so that the sum of weights is equal to the number of edges). This algorithm requires that the maximum weighted degree is less than about @math . These algorithms are not derived directly from logarithmic approximation algorithms for quadratic forms (e.g., see @cite_13 or @cite_4 ) because the quadratic form for modularity maximization has negative diagonal entries. To overcome this difficulty, they designed a more specialized algorithm using a graph decomposition technique.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_4" ], "mid": [ "", "2030987833", "2167681887" ], "abstract": [ "", "We introduce a new graph parameter, called the Grothendieck constant of a graph G=(V,E), which is defined as the least constant K such that for every A:E→ℝ, @math The classical Grothendieck inequality corresponds to the case of bipartite graphs, but the case of general graphs is shown to have various algorithmic applications. Indeed, our work is motivated by the algorithmic problem of maximizing the quadratic form ∑_{{u,v}∈E} A(u,v)ϕ(u)ϕ(v) over all ϕ:V→{-1,1}, which arises in the study of correlation clustering and in the investigation of the spin glass model. We give upper and lower estimates for the integrality gap of this program. We show that the integrality gap is @math , where @math is the Lovász Theta Function of the complement of G, which is always smaller than the chromatic number of G. This yields an efficient constant factor approximation algorithm for the above maximization problem for a wide range of graphs G. We also show that the maximum possible integrality gap is always at least Ω(log ω(G)), where ω(G) is the clique number of G. In particular it follows that the maximum possible integrality gap for the complete graph on n vertices with no loops is Θ(logn). More generally, the maximum possible integrality gap for any perfect graph with chromatic number n is Θ(logn). The lower bound for the complete graph improves a result of Kashin and Szarek on Gram matrices of uniformly bounded functions, and settles a problem of Megretski and of Charikar and Wirth.", "This paper considers the following type of quadratic programming problem. Given an arbitrary matrix A, whose diagonal elements are zero, find x ∈ {-1, 1}^n such that x^T Ax is maximized. Our approximation algorithm for this problem uses the canonical semidefinite relaxation and returns a solution whose ratio to the optimum is in Ω(1/log n). This quadratic programming problem can be seen as an extension to that of maximizing x^T Ay (where y's components are also ±1). Grothendieck's inequality states that the ratio of the optimum value of the latter problem to the optimum of its canonical semidefinite relaxation is bounded below by a constant. The study of this type of quadratic program arose from a desire to approximate the maximum correlation in correlation clustering. Nothing substantive was known about this problem; we present an Ω(1/log n) approximation, based on our quadratic programming algorithm. We can also guarantee that our quadratic programming algorithm returns a solution to the MAXCUT problem that has a significant advantage over a random assignment." ] }
1601.03316
2238542761
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph @math , we are asked to find a partition @math of @math that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity maximization algorithms with provable approximation guarantees has attracted significant attention in the computer science community. In this study, we further investigate the approximability of modularity maximization. More specifically, we propose a polynomial-time @math -additive approximation algorithm for the modularity maximization problem. Note here that @math holds. This improves the current best additive approximation error of @math , which was recently provided by Dinh, Li, and Thai (2015). Interestingly, our analysis also demonstrates that the proposed algorithm obtains a nearly-optimal solution for any instance with a very high modularity value. Moreover, we propose a polynomial-time @math -additive approximation algorithm for the maximum modularity cut problem. It should be noted that this is the first non-trivial approximability result for the problem. Finally, we demonstrate that our approximation algorithm can be extended to some related problems.
Dinh and Thai @cite_8 developed multiplicative approximation algorithms for the modularity maximization problem on scale-free graphs with a prescribed degree sequence. In their graphs, the number of vertices with degree @math is fixed to some value proportional to @math , where @math is called the power-law exponent. For such scale-free graphs with @math , they developed a polynomial-time @math -approximation algorithm for an arbitrarily small @math , where @math is the Riemann zeta function. For graphs with @math , they developed a polynomial-time @math -approximation algorithm using the logarithmic approximation algorithm for quadratic forms @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_8" ], "mid": [ "2167681887", "2094934391" ], "abstract": [ "This paper considers the following type of quadratic programming problem. Given an arbitrary matrix A, whose diagonal elements are zero, find x ∈ {-1, 1}^n such that x^T Ax is maximized. Our approximation algorithm for this problem uses the canonical semidefinite relaxation and returns a solution whose ratio to the optimum is in Ω(1/log n). This quadratic programming problem can be seen as an extension to that of maximizing x^T Ay (where y's components are also ±1). Grothendieck's inequality states that the ratio of the optimum value of the latter problem to the optimum of its canonical semidefinite relaxation is bounded below by a constant. The study of this type of quadratic program arose from a desire to approximate the maximum correlation in correlation clustering. Nothing substantive was known about this problem; we present an Ω(1/log n) approximation, based on our quadratic programming algorithm. We can also guarantee that our quadratic programming algorithm returns a solution to the MAXCUT problem that has a significant advantage over a random assignment.", "Many networks, indifferent of their function and scope, converge to a scale-free architecture in which the degree distribution approximately follows a power law. Meanwhile, many of those scale-free networks are found to be naturally divided into communities of densely connected nodes, known as community structure. Finding this community structure is a fundamental but challenging topic in network science. Since Newman's suggestion of using modularity as a measure to qualify the strength of community structure, many efficient methods that find community structure based on maximizing modularity have been proposed. However, there is a lack of approximation algorithms that provide provable quality bounds for the problem. In this paper, we propose polynomial-time approximation algorithms for the modularity maximization problem together with their theoretical justifications in the context of scale-free networks. We prove that the solutions of the proposed algorithms, even in the worst-case, are optimal up to a constant factor for scale-free networks with either bidirectional or unidirectional links. Even though our focus in this work is not on designing another empirically good algorithm to detect community structure, experiments on real-world networks suggest that the proposed algorithm is competitive with the state-of-the-art modularity maximization algorithm." ] }
1601.03128
2239173263
An energy minimization based approach for scene text recognition with seamless integration of multiple cues. Applied also to the challenging open vocabulary setting, where a word-specific lexicon is unavailable. Comprehensive experimental evaluation on several state-of-the-art benchmarks. Recognizing scene text is a challenging problem, even more so than the recognition of scanned documents. This problem has gained significant attention from the computer vision community in recent years, and several methods based on energy minimization frameworks and deep learning approaches have been proposed. In this work, we focus on the energy minimization framework and propose a model that exploits both bottom-up and top-down cues for recognizing cropped words extracted from street images. The bottom-up cues are derived from individual character detections from an image. We build a conditional random field model on these detections to jointly model the strength of the detections and the interactions between them. These interactions are top-down cues obtained from a lexicon-based prior, i.e., language statistics. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We evaluate our proposed algorithm extensively on a number of cropped scene text benchmark datasets, namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word, and show better performance than comparable methods. We perform a rigorous analysis of all the steps in our approach and analyze the results. We also show that state-of-the-art convolutional neural network features can be integrated in our framework to further improve the recognition performance.
A study on human reading psychology shows that our reading improves significantly with prior knowledge of the language @cite_30 . Motivated by such studies, OCR systems have used, often in post-processing steps @cite_40 @cite_16 , statistical language models like @math -grams to improve their performance. Bigrams or trigrams have also been used in the context of scene text recognition as a post-processing step, e.g., @cite_8 . A few other works @cite_80 @cite_17 @cite_43 integrate character recognition and linguistic knowledge to deal with recognition errors. For example, @cite_80 computes @math -gram probabilities from more than 100 million characters and uses a Viterbi algorithm to find the correct word. The method in @cite_43 , developed in the same year as our CVPR 2012 work @cite_50 , builds a graph on potential character locations and uses @math -gram scores to constrain the inference algorithm to predict the word. In contrast, our approach uses a novel location-specific prior.
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_43", "@cite_40", "@cite_50", "@cite_80", "@cite_16", "@cite_17" ], "mid": [ "1913670219", "2077916012", "2106693967", "2141960239", "2049951199", "2170727726", "99399284", "1968304237" ], "abstract": [ "Contents: Preface. Part I: Background Information. Introduction and Preliminary Information. Writing Systems. Word Perception. Part II: Skilled Reading of Text. The Work of the Eyes. Eye-Movement Control During Reading. Inner Speech. Part III: Understanding Text. Words and Sentences. Representation of Discourse. Part IV: Beginning Reading and Reading Disability. Learning to Read. Stages of Reading Development. Dyslexia. Part V: Toward a Model of Reading. Speedreading, Proofreading, and Individual Differences. Models of Reading.", "With the increasing market of cheap cameras, natural scene text has to be handled in an efficient way. Some works deal with text detection in the image while more recent ones point out the challenge of text extraction and recognition. We propose here an OCR correction system to handle traditional issues of recognizer errors but also the ones due to natural scene images, i.e. cut characters, artistic display, incomplete sentences (present in advertisements) and out-of-vocabulary (OOV) words such as acronyms and so on. The main algorithm bases on finite-state machines (FSMs) to deal with learned OCR confusions, capital accented letters and lexicon look-up. Moreover, as OCR is not considered as a black box, several outputs are taken into account to intermingle recognition and correction steps. Based on a public database of natural scene words, detailed results are also presented along with future works.", "Understanding text captured in real-world scenes is a challenging problem in the field of visual pattern recognition and continues to generate a significant interest in the OCR (Optical Character Recognition) community. This paper proposes a novel method to recognize scene texts avoiding the conventional character segmentation step. The idea is to scan the text image with multi-scale windows and apply a robust recognition model, relying on a neural classification approach, to every window in order to recognize valid characters and identify non valid ones. Recognition results are represented as a graph model in order to determine the best sequence of characters. Some linguistic knowledge is also incorporated to remove errors due to recognition confusions. The designed method is evaluated on the ICDAR 2003 database of scene text images and outperforms state-of-the-art approaches.", "A technique is presented that uses visual relationships between word images in a document to improve the recognition of the text it contains. This technique takes advantage of the visual relationships between word images that are usually lost in most conventional optical character recognition (OCR) techniques. The visual relations are defined to be the equivalence that exists between images of the same word or portions of word images. An algorithm is presented that calculates these relationships in a document. The resulting clusters are integrated with the recognition results provided by an OCR system. Inconsistencies in OCR results between equivalent images are identified and used to improve recognition performance. Experimental results are presented in which the input is provided directly from a commercial OCR system.", "Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15%) and ICDAR 2003 (nearly 10%).", "This paper describes a mobile device which tries to give the blind or visually impaired access to text information. Three key technologies are required for this system: text detection, optical character recognition, and speech synthesis. Blind users and the mobile environment imply two strong constraints. First, pictures will be taken without control on camera settings and a priori information on text (font or size) and background. The second issue is to link several techniques together with an optimal compromise between computational constraints and recognition efficiency. We will present the overall description of the system from text detection to OCR error correction.", "", "This work aims at helping multimedia content understanding by deriving benefit from textual clues embedded in digital videos. For this, we developed a complete video Optical Character Recognition system (OCR), specifically adapted to detect and recognize embedded texts in videos. Based on a neural approach, this new method outperforms related work, especially in terms of robustness to style and size variabilities, to background complexity and to low resolution of the image. A language model that drives several steps of the video OCR is also introduced in order to remove ambiguities due to a local letter by letter recognition and to reduce segmentation errors. This approach has been evaluated on a database of French TV news videos and achieves an outstanding character recognition rate of 95%, corresponding to 78% of words correctly recognized, which enables its incorporation into an automatic video indexing and retrieval system." ] }
1601.03128
2239173263
An energy minimization based approach for scene text recognition with seamless integration of multiple cues. Applied also to the challenging open vocabulary setting, where a word-specific lexicon is unavailable. Comprehensive experimental evaluation on several state-of-the-art benchmarks. Recognizing scene text is a challenging problem, even more so than the recognition of scanned documents. This problem has gained significant attention from the computer vision community in recent years, and several methods based on energy minimization frameworks and deep learning approaches have been proposed. In this work, we focus on the energy minimization framework and propose a model that exploits both bottom-up and top-down cues for recognizing cropped words extracted from street images. The bottom-up cues are derived from individual character detections from an image. We build a conditional random field model on these detections to jointly model the strength of the detections and the interactions between them. These interactions are top-down cues obtained from a lexicon-based prior, i.e., language statistics. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We evaluate our proposed algorithm extensively on a number of cropped scene text benchmark datasets, namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word, and show better performance than comparable methods. We perform a rigorous analysis of all the steps in our approach and analyze the results. We also show that state-of-the-art convolutional neural network features can be integrated in our framework to further improve the recognition performance.
Our work belongs to the class of word recognition methods which build on individual character localization, similar to methods such as @cite_53 @cite_71 . In this framework, the potential characters are localized, a graph is constructed from these locations, and the problem of recognizing the word is then formulated as finding an optimal path in this graph @cite_79 or as inference over an ensemble of HMMs @cite_71 . Our approach seamlessly integrates higher-order language priors into the graph (in the form of a CRF model) and uses more effective modern computer vision features, thus clearly distinguishing it from previous works.
{ "cite_N": [ "@cite_79", "@cite_53", "@cite_71" ], "mid": [ "1976382093", "1488125194", "2140525724" ], "abstract": [ "An end-to-end real-time scene text localization and recognition method is presented. The three main novel features are: (i) keeping multiple segmentations of each character until the very last stage of the processing when the context of each character in a text line is known, (ii) an efficient algorithm for selection of character segmentations minimizing a global criterion, and (iii) showing that, despite using theoretically scale-invariant methods, operating on a coarse Gaussian scale space pyramid yields improved results as many typographical artifacts are eliminated. The method runs in real time and achieves state-of-the-art text localization results on the ICDAR 2011 Robust Reading dataset. Results are also reported for end-to-end text recognition on the ICDAR 2011 dataset.", "A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypotheses-verification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72% is achieved, 18% higher than the state-of-the-art. The paper is first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for a number of alphabets and the method is easily adapted to recognition of other scripts, e.g. cyrillics.", "This paper develops word recognition methods for historical handwritten cursive and printed documents. It employs a powerful segmentation-free letter detection method based upon joint boosting with histograms of gradients as features. Efficient inference on an ensemble of hidden Markov models can select the most probable sequence of candidate character detections to recognize complete words in ambiguous handwritten text, drawing on character n-gram and physical separation models. Experiments with two corpora of handwritten historic documents show that this approach recognizes known words more accurately than previous efforts, and can also recognize out-of-vocabulary words." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
All the spin-offs of @cite_42 focus on defining a metric between two sets of trajectories @math and @math for the purpose of evaluating the performance of tracking algorithms. We note, however, that many applications in machine learning and AI beyond tracking benefit from working with a true metric, such as the ones we define in this paper, rather than with a similarity measure that is not a metric. It is crucial to note that all the spin-offs of @cite_42 compare only full trajectories in @math to full trajectories in @math and hence suffer from the same limitations that we describe in Example 4 in the Introduction.
{ "cite_N": [ "@cite_42" ], "mid": [ "2126885789" ], "abstract": [ "The concept of a miss-distance, or error, between a reference quantity and its estimated controlled value, plays a fundamental role in any filtering control problem. Yet there is no satisfactory notion of a miss-distance in the well-established field of multi-object filtering. In this paper, we outline the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation. We then propose a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
The authors in @cite_41 define the OSPA-T metric in two steps. In the first step, they solve an optimization problem that optimally matches full tracks in @math to full tracks in @math while taking into account that tracks have different lengths and might be incomplete. In the second step, they assign labels to each track based on this match, compute the OSPA metric for each time instant using a new metric between pairs of vectors that considers both the vectors' components and their labels, and sum the OSPA values across all time instants. Although the optimization problem of the first step defines a metric, the authors of @cite_34 point out that the full two-step procedure that defines OSPA-T can violate the triangle inequality.
{ "cite_N": [ "@cite_41", "@cite_34" ], "mid": [ "2101295974", "1881351112" ], "abstract": [ "Performance evaluation of multi-target tracking algorithms is of great practical importance in the design, parameter optimization and comparison of tracking systems. The goal of performance evaluation is to measure the distance between two sets of tracks: the ground truth tracks and the set of estimated tracks. This paper proposes a mathematically rigorous metric for this purpose. The basis of the proposed distance measure is the recently formulated consistent metric for performance evaluation of multi-target filters, referred to as the OSPA metric. Multi-target filters sequentially estimate the number of targets and their position in the state space. The OSPA metric is therefore defined on the space of finite sets of vectors. The distinction between filtering and tracking is that tracking algorithms output tracks and a track represents a labeled temporal sequence of state estimates, associated with the same target. The metric proposed in this paper is therefore defined on the space of finite sets of tracks and incorporates the labeling error. Numerical examples demonstrate that the proposed metric behaves in a manner consistent with our expectations.", "This paper proposes a new metric based on the Optimal Sub-pattern Assignment (OSPA) metric for evaluating the performance of multiple target tracking algorithms. It is shown in this paper that by considering the properties of false tracks, missing tracks and miss-detections, the minimization of all distances between tracks enables performance evaluation of multi-target algorithms in a more comprehensive manner than is possible with the OSPA metric for track (OSPAT) introduced by" ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
The authors in @cite_34 define the OSPAMT metric, which is a metric in the mathematical sense and which they argue is more reliable than OSPA-T for evaluating the performance of multi-target tracking algorithms. The OSPAMT metric also computes an optimal match between full trajectories in @math and full trajectories in @math , but unlike OSPA-T it allows matching one full trajectory in @math to multiple full trajectories in @math (and vice versa). The authors make this design choice so as not to penalize a tracker that outputs only one track for two objects that move closely together.
{ "cite_N": [ "@cite_34" ], "mid": [ "1881351112" ], "abstract": [ "This paper proposes a new metric based on the Optimal Sub-pattern Assignment (OSPA) metric for evaluating the performance of multiple target tracking algorithms. It is shown in this paper that by considering the properties of false tracks, missing tracks and miss-detections, the minimization of all distances between tracks enables performance evaluation of multi-target algorithms in a more comprehensive manner than is possible with the OSPA metric for track (OSPAT) introduced by" ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
Some extensions to OSPA incorporate the uncertainty in the measurements. The Q-OSPA metric defined in @cite_15 incorporates uncertainty by weighting the distance between pairs of points by the product of their certainties and by adding a new term that is proportional to the product of the uncertainties. The H-OSPA metric defined in @cite_33 incorporates uncertainty by using OSPA with distributions, rather than vectors, as the elements of @math and @math , and by using the Hellinger distance between distributions instead of the Euclidean distance between vectors. The authors of both works focus only on the simpler case where the sets @math and @math contain points rather than trajectories. However, combining their work with that of @cite_41 or @cite_34 to obtain a metric between sets of trajectories is immediate.
{ "cite_N": [ "@cite_41", "@cite_15", "@cite_34", "@cite_33" ], "mid": [ "2101295974", "2037300854", "1881351112", "2153363506" ], "abstract": [ "Performance evaluation of multi-target tracking algorithms is of great practical importance in the design, parameter optimization and comparison of tracking systems. The goal of performance evaluation is to measure the distance between two sets of tracks: the ground truth tracks and the set of estimated tracks. This paper proposes a mathematically rigorous metric for this purpose. The basis of the proposed distance measure is the recently formulated consistent metric for performance evaluation of multi-target filters, referred to as the OSPA metric. Multi-target filters sequentially estimate the number of targets and their position in the state space. The OSPA metric is therefore defined on the space of finite sets of vectors. The distinction between filtering and tracking is that tracking algorithms output tracks and a track represents a labeled temporal sequence of state estimates, associated with the same target. The metric proposed in this paper is therefore defined on the space of finite sets of tracks and incorporates the labeling error. Numerical examples demonstrate that the proposed metric behaves in a manner consistent with our expectations.", "The track qualities of multitarget state estimates are often available in many tracking algorithms, including the multiple hypothesis tracking and the joint integrated probabilistic data association estimators. However, the recently proposed Optimal Subpattern Assignment (OSPA) metric ignores the quality information and thus its capability to quantify the performance of multi-object estimation algorithms is limited. In this paper, a new metric, called the quality-based OSPA (Q-OSPA), is proposed based on the original OSPA metric. 
The proposed Q-OSPA metric is able to incorporate the quality information and thus provide more accurate quantification of the performance of multi-object estimation algorithms. Also, the mathematical consistency of the original OSPA metric is maintained by the proposed Q-OSPA metric. In addition, if the qualities of estimates are not available, the Q-OSPA metric reduces to the original OSPA metric by assigning equal qualities to the estimates. Besides theoretical derivations, simulations are presented to verify the advantages of the proposed metric.", "This paper proposes a new metric based on the Optimal Sub-pattern Assignment (OSPA) metric for evaluating the performance of multiple target tracking algorithms. It is shown in this paper that by considering the properties of false tracks, missing tracks and miss-detections, the minimization of all distances between tracks enables performance evaluation of multi-target algorithms in a more comprehensive manner than is possible with the OSPA metric for track (OSPAT) introduced by", "This paper proposes the use of the Hellinger distance in evaluating the localisation error in the OSPA metric. The Hellinger distance provides a measure of the difference between two distributions and is used here to measure the difference between the true and estimated targets where the true and estimated single-target states are characterised by Gaussian distributions. The OSPA metric is used to evaluate the performance of several multisensor PHD and CPHD filters. The importance of introducing track covariance into the metric is demonstrated through the application on several multisensor PHD and CPHD filters. In particular, we are able to identify filters that provide a poor estimate of the uncertainty associated with each track." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
The papers above are fairly recent, and the search for similarity measures between sets of trajectories that are metrics in the mathematical sense is not much older. However, researchers in the field of computer vision were interested in defining similarity measures between sets of trajectories to evaluate the performance of tracking algorithms long before these works. It is impossible to review all work done in this area, especially because evaluating the performance of trackers involves many challenges beyond the problem of defining a similarity measure; see @cite_35 @cite_10 for some examples of these other challenges. Nonetheless, we mention a few works and point out ideas in them that relate to our problem. We emphasize that none of the following works defines a metric in the mathematical sense.
{ "cite_N": [ "@cite_35", "@cite_10" ], "mid": [ "1568589399", "2131052232" ], "abstract": [ "Performance evaluation has become an increasingly important feature of video surveillance systems, as researchers attempt to assess the reliability and robustness of their operation. Although many algorithms and systems have been developed to address the problem of detecting and tracking moving objects in the image, few systems have been tested in anything other than fairly ideal conditions. In order to satisfy the requirements of a real video surveillance task, the algorithms will need to be assessed over a wide range of conditions. The aim of this paper is to examine some of the main requirements for effective performance analysis and to examine methods for characterising video datasets.", "Evaluating multi-target tracking based on ground truth data is a surprisingly challenging task. Erroneous or ambiguous ground truth annotations, numerous evaluation protocols, and the lack of standardized benchmarks make a direct quantitative comparison of different tracking approaches rather difficult. The goal of this paper is to raise awareness of common pitfalls related to objective ground truth evaluation. We investigate the influence of different annotations, evaluation software, and training procedures using several publicly available resources, and point out the limitations of current definitions of evaluation metrics. Finally, we argue that the development an extensive standardized benchmark for multi-target tracking is an essential step toward more objective comparison of tracking approaches." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
One of the reasons why the CLEAR MOT metrics are widely used is that they create a simple association between @math and @math , i.e., the association between @math and @math does not change often over time. It appears that @cite_44 is one of the first works to describe how to control the number of association changes when computing a similarity measure. The authors do not associate @math and @math independently at every time instant but rather use a sequential matching procedure that tries to keep the association from the previous time instant whenever possible. This is similar to the procedure used in the much more recent CLEAR MOT measures that we discuss in the Introduction.
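The sequential matching idea described above — keep the previous time instant's association whenever it is still valid, then match the remaining elements — can be sketched as follows. This is an illustrative greedy approximation, not the exact procedure of @cite_44 or of the CLEAR MOT measures; the function name, data layout, and tie-breaking rule are all assumptions.

```python
import math

def sequential_associate(frames_est, frames_gt, tau=1.0):
    """Per-frame association that persists previous matches when
    still within distance tau, matching leftover pairs greedily.

    frames_est, frames_gt: lists of dicts id -> (x, y), one per frame.
    Returns one id -> id association dict per frame.
    """
    prev, out = {}, []
    for est, gt in zip(frames_est, frames_gt):
        # keep yesterday's pairs that are still close enough
        cur = {e: g for e, g in prev.items()
               if e in est and g in gt
               and math.dist(est[e], gt[g]) <= tau}
        # greedily match the remaining elements under the threshold
        free_g = {g for g in gt if g not in cur.values()}
        for e in (e for e in est if e not in cur):
            cand = [(math.dist(est[e], gt[g]), g)
                    for g in free_g if math.dist(est[e], gt[g]) <= tau]
            if cand:
                _, g = min(cand)
                cur[e] = g
                free_g.discard(g)
        prev = cur
        out.append(dict(cur))
    return out
```

Because an existing pair survives as long as it stays within tau, identity switches only occur when a match is actually broken, which is the behavior the sequential procedure is designed to encourage.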
{ "cite_N": [ "@cite_44" ], "mid": [ "1563723533" ], "abstract": [ "This paper presents a new formalized procedure to assess the performance of automatic target tracking systems. The procedure involves comparing data output by a tracking system under test with track data in a truth file. The comparison of the data sources uses a total of 15 tracker performance metrics. These cover the categories of track initiation, track maintenance, track error and false tracks. The performance for each metric is expressed as a probability. An overall performance measure comes from using a weighted combination of these probabilities. The weighting relates to the metric’s importance. This approach is embodied in a Tracker Assessment Tool (TAT) and the processing steps in the TAT are reviewed with an example. A method to simplify the creation of truth data from recorded data is also described." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
The association that @cite_44 use at every point in time is not one-to-one optimal as in @cite_31 or in the CLEAR MOT; rather, the authors use a simple thresholding rule to associate neighboring elements of @math and @math . The idea of using a simple threshold rule to compare @math and @math seems to have survived until relatively recently. For example, in @cite_38 the authors match a full trajectory in @math to a full trajectory in @math if the two stay close in space for a sufficiently long time interval. The authors in @cite_6 use a similar thresholding method to match @math and @math .
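The full-trajectory thresholding rule just described can be illustrated with a small sketch: two trajectories, given as time-indexed position dictionaries, match when they stay within a distance tau of each other for at least a minimum number of common time steps. The function name and default parameters are hypothetical, not taken from the cited works.

```python
import math

def trajectories_match(est, truth, tau=1.0, min_frames=5):
    """Sketch of a thresholding rule for trajectory matching.

    est, truth: dicts mapping time index -> (x, y) position.
    Returns True when the two trajectories are within tau of each
    other for at least min_frames of their common time steps.
    """
    common = est.keys() & truth.keys()
    close = sum(1 for t in common
                if math.dist(est[t], truth[t]) <= tau)
    return close >= min_frames
```

Note that such a rule yields a binary match decision rather than a distance, which is one reason it does not by itself define a metric in the mathematical sense.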
{ "cite_N": [ "@cite_44", "@cite_38", "@cite_31", "@cite_6" ], "mid": [ "1563723533", "1952910563", "2048368220", "2138866680" ], "abstract": [ "This paper presents a new formalized procedure to assess the performance of automatic target tracking systems. The procedure involves comparing data output by a tracking system under test with track data in a truth file. The comparison of the data sources uses a total of 15 tracker performance metrics. These cover the categories of track initiation, track maintenance, track error and false tracks. The performance for each metric is expressed as a probability. An overall performance measure comes from using a weighted combination of these probabilities. The weighting relates to the metric’s importance. This approach is embodied in a Tracker Assessment Tool (TAT) and the processing steps in the TAT are reviewed with an example. A method to simplify the creation of truth data from recorded data is also described.", "This paper deals with the non-trivial problem of performance evaluation of motion tracking. We propose a rich set of metrics to assess different aspects of performance of motion tracking. We use six different video sequences that represent a variety of challenges to illustrate the practical value of the proposed metrics by evaluating and comparing two motion tracking algorithms. The contribution of our framework is that allows the identification of specific weaknesses of motion trackers, such as the performance of specific modules or failures under specific conditions.", "Scoring methods are described for evaluating the performance of a multiple target tracking (MTT) algorithm fairly, without undue bias towards any particular type. The methods were initially developed by individuals and further developed and adapted by the members of the SDI panels on tracking. Ambiguous track-to-target truth association is the fundamental difficulty in MTT performance evaluation. 
The methods use a global nearest neighbor assignment algorithm to uniquely associate tracks to targets. With the track- to-target associations, the methods employ measures of effectiveness for track purity, data association, state estimation accuracy, and credibility of filter calculated covariance.© (1991) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.", "This paper presents a set of metrics and algorithms for performance evaluation of object tracking systems. Our emphasis is on wide-ranging, robust metrics which can be used for evaluation purposes without inducing any bias towards the evaluation results. The goal is to report a set of unbiased metrics and to leave the final evaluation of the evaluation process to the research community analyzing the results, keeping the human in the loop. We propose metrics from statistical detection and estimation theory tailored to object detection and tracking tasks using frame-based as well as object-based evaluation paradigms. Object correspondences between multiple ground truth objects to multiple tracker result objects are established from a correspondence matrix. The correspondence matrix is built using three different methods of distance computation between trajectories. Results on PETS 2001 data set are presented in terms of 1st and 2nd order statistical descriptors of these metrics." ] }
1601.03094
2233865268
Metrics on the space of sets of trajectories are important for scientists in the field of computer vision, machine learning, robotics, and general artificial intelligence. However, existing notions of closeness between sets of trajectories are either mathematically inconsistent or of limited practical use. In this paper, we outline the limitations in the current mathematically-consistent metrics, which are based on OSPA ( 2008); and the inconsistencies in the heuristic notions of closeness used in practice, whose main ideas are common to the CLEAR MOT measures (Keni and Rainer 2008) widely used in computer vision. In two steps, we then propose a new intuitive metric between sets of trajectories and address these limitations. First, we explain a solution that leads to a metric that is hard to compute. Then we modify this formulation to obtain a metric that is easy to compute while keeping the useful properties of the previous metric. Our notion of closeness is the first demonstrating the following three features: the metric 1) can be quickly computed, 2) incorporates confusion of trajectories' identity in an optimal way, and 3) is a metric in the mathematical sense.
It is worth mentioning a few works that differ from the mainstream in this respect. One of them is @cite_1 , where the authors define a similarity measure based on comparing the occurrence of special discrete events in @math and @math ; another is @cite_30 , where the authors propose an information-theoretic measure of similarity between sets of trajectories. Finally, in @cite_7 the authors propose a similarity measure based on hidden Markov models that does not assume that the temporal sampling rates of the trajectories are equal.
{ "cite_N": [ "@cite_30", "@cite_1", "@cite_7" ], "mid": [ "1581210256", "2130827092", "2140017549" ], "abstract": [ "Automated tracking of vehicles and people is essential for the effective utilization of imagery in wide area surveillance applications. In order to determine the best tracking algorithm and parameters for a given application, a comprehensive evaluation procedure is required. However, despite half a century of research in multi-target tracking, there is no consensus on how to score the overall performance of these trackers. Existing evaluation approaches assess tracker performance through measures of correspondence between ground truth tracks and system tracks using metrics such as track detection rate, track completeness, track fragmentation rate, and track ID change rate. However, each of these only provides a partial measure of performance and no good method exists to combine them into a holistic metric. Towards this end, this paper presents a pair of information theoretic metrics with similar behavior to the Receiver Operating Characteristic (ROC) curves of signal detection theory. Overall performance is evaluated with the percentage of truth information that a tracker captured and the total amount of false information that it reported. Information content is quantified through conditional entropy and mutual information computations using numerical estimates of the probability of association between the truth and the system tracks. This paper demonstrates how these information quality metrics provide a comprehensive evaluation of overall tracker performance and how they can be used to perform tracker comparisons and parameter tuning on wide-area surveillance imagery and other applications.1", "A tracking system outputs a separate motion trajectory for each moving object in a scene. 
The paper presents a problem of performance evaluation and performance metrics for real time systems that track people, or moving objects, in video sequences, and it proposes performance measurement methodology for such systems. Two approaches to measuring performance are presented. The first approach compares the computed motion trajectories to the reference trajectories. It enables a complete evaluation of tracking results, but reference trajectories it requires are difficult to get. The second, more practical approach identifies in the computed trajectories specific discrete events, such as line crossings, and compares sequences of these events to sequences of reference events, which are much easier to obtain than reference trajectories. These events can usually be chosen such that they reflect the application goal of a tracking system, e.g. counting people in an area. Precision of evaluation increases with density of events. Short event sequences measure the sensitivity and selectivity of a tracking method, i.e. how well it satisfies the \"one person one trajectory\" objective. Long sequences measure continuity of trajectories: how long a method can keep track of one person. The paper shows performance measurement results for a real time people tracking system developed by the authors.", "In this paper, we introduce a set of novel distance metrics that use model based representations for trajectories. We determine the similarity of trajectories using the conformity of the corresponding HMM models. These metrics enable the comparison of tracjectories without any limitations of the conventional measures. They accurately identify the coordinate, orientation, and speed affinity. The proposed HMM based distance metrics can be used not only for ground truth comparisons but for clustering as well. Our experiments prove that they have superior discriminative properties." ] }
1601.02913
2238647304
In this paper, we present a subclass-representation approach that predicts the probability of a social image belonging to one particular class. We explore the co-occurrence of user-contributed tags to find subclasses with a strong connection to the top level class. We then project each image on to the resulting subclass space to generate a subclass representation for the image. The novelty of the approach is that subclass representations make use of not only the content of the photos themselves, but also information on the co-occurrence of their tags, which determines membership in both subclasses and top-level classes. The novelty is also that the images are classified into smaller classes, which have a chance of being more visually stable and easier to model. These subclasses are used as a latent space and images are represented in this space by their probability of relatedness to all of the subclasses. In contrast to approaches directly modeling each top-level class based on the image content, the proposed method can exploit more information for visually diverse classes. The approach is evaluated on a set of @math million photos with 10 classes, released by the Multimedia 2013 Yahoo! Large-scale Flickr-tag Image Classification Grand Challenge. Experiments show that the proposed system delivers sound performance for visually diverse classes compared with methods that directly model top classes.
Generating sub-categories has been considered an effective way to deal with classification problems where intra-class variation is high. ImageNet @cite_1 organizes its image dataset with labels corresponding to a semantic hierarchy. This method is able to build a comprehensive ontology for a large-scale dataset. However, for a particular dataset, the sub-categories generated by data-driven strategies are expected to be more discriminative. @cite_8 exploits co-watch information to learn latent sub-tags for video tag prediction. @cite_4 proposes to discover the image hierarchy by using both visual and tag information. Our method generates category-specific subclasses by exploring image-tag co-occurrence and trains a classifier for each subclass tag. These subclass-based models are expected to be discriminative in terms of estimating the target tags, which correspond to the top-level classes.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_8" ], "mid": [ "2108598243", "1979936637", "1967664674" ], "abstract": [ "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "A semantically meaningful image hierarchy can ease the human effort in organizing thousands and millions of pictures (e.g., personal albums), and help to improve performance of end tasks such as image annotation and classification. Previous work has focused on using either low-level image features or textual tags to build image hierarchies, resulting in limited success in their general usage. 
In this paper, we propose a method to automatically discover the “semantivisual” image hierarchy by incorporating both image and tag information. This hierarchy encodes a general-to-specific image relationship. We pay particular attention to quantifying the effectiveness of the learned hierarchy, as well as comparing our method with others in the end-task applications. Our experiments show that humans find our semantivisual image hierarchy more effective than those solely based on texts or low-level visual features. And using the constructed image hierarchy as a knowledge ontology, our algorithm can perform challenging image classification and annotation tasks more accurately.", "We consider the problem of content-based automated tag learning. In particular, we address semantic variations (sub-tags) of the tag. Each video in the training set is assumed to be associated with a sub-tag label, and we treat this sub-tag label as latent information. A latent learning framework based on LogitBoost is proposed, which jointly considers both the tag label and the latent sub-tag label. The latent sub-tag information is exploited in our framework to assist the learning of our end goal, i.e., tag prediction. We use the cowatch information to initialize the learning process. In experiments, we show that the proposed method achieves significantly better results over baselines on a large-scale testing video set which contains about 50 million YouTube videos." ] }
1601.02403
2236647290
The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task.
We structure the related work into three sub-categories, namely , , and , as these areas are closest to this article's focus. For a recent overview of general discourse analysis see @cite_69 . Apart from these, research on computer-supported argumentation has also been very active; see, e.g., @cite_64 for a survey of various models and argumentation formalisms from the educational perspective, or @cite_76 , which examines argumentation in the Semantic Web.
{ "cite_N": [ "@cite_69", "@cite_64", "@cite_76" ], "mid": [ "2111535382", "2133391912", "1575316482" ], "abstract": [ "An increasing number of researchers and practitioners in Natural Language Engineering face the prospect of having to work with entire texts, rather than individual sentences. While it is clear that text must have useful structure, its nature may be less clear, making it more difficult to exploit in applications. This survey of work on discourse structure thus provides a primer on the bases of which discourse is structured along with some of their formal properties. It then lays out the current state-of-the-art with respect to algorithms for recognizing these different structures, and how these algorithms are currently being used in Language Technology applications. After identifying resources that should prove useful in improving algorithm performance across a range of languages, we conclude by speculating on future discourse structure-enabled technology.", "Argumentation is an important skill to learn. It is valuable not only in many professional contexts, such as the law, science, politics, and business, but also in everyday life. However, not many people are good arguers. In response to this, researchers and practitioners over the past 15–20 years have developed software tools both to support and teach argumentation. Some of these tools are used in individual fashion, to present students with the “rules” of argumentation in a particular domain and give them an opportunity to practice, while other tools are used in collaborative fashion, to facilitate communication and argumentation between multiple, and perhaps distant, participants. In this paper, we review the extensive literature on argumentation systems, both individual and collaborative, and both supportive and educational, with an eye toward particular aspects of the past work. 
More specifically, we review the types of argument representations that have been used, the various types of interaction design and ontologies that have been employed, and the system architecture issues that have been addressed. In addition, we discuss intelligent and automated features that have been imbued in past systems, such as automatically analyzing the quality of arguments and providing intelligent feedback to support and or tutor argumentation. We also discuss a variety of empirical studies that have been done with argumentation systems, including, among other aspects, studies that have evaluated the effect of argument diagrams (e.g., textual versus graphical), different representations, and adaptive feedback on learning argumentation. Finally, we conclude by summarizing the “lessons learned” from this large and impressive body of work, particularly focusing on lessons for the CSCL research community and its ongoing efforts to develop computer-mediated collaborative argumentation systems.", "Argumentation represents the study of views and opinions that humans express with the goal of reaching a conclusion through logical reasoning. Since the 1950's, several models have been proposed to capture the essence of informal argumentation in different settings. With the emergence of the Web, and then the Semantic Web, this modeling shifted towards ontologies, while from the development perspective, we witnessed an important increase in Web 2.0 human-centered collaborative deliberation tools. Through a review of more than 150 scholarly papers, this article provides a comprehensive and comparative overview of approaches to modeling argumentation for the Social Semantic Web. We start from theoretical foundational models and investigate how they have influenced Social Web tools. We also look into Semantic Web argumentation models. 
Finally we end with Social Web tools for argumentation, including online applications combining Web 2.0 and Semantic Web technologies, following the path to a global World Wide Argument Web." ] }
1601.02433
2232612980
Collaborative vocabulary development in the context of data integration is the process of finding consensus between the experts of the different systems and domains. The complexity of this process is increased with the number of involved people, the variety of the systems to be integrated and the dynamics of their domain. In this paper we advocate that the realization of a powerful version control system is the heart of the problem. Driven by this idea and the success of Git in the context of software development, we investigate the applicability of Git for collaborative vocabulary development. Even though vocabulary development and software development have much more similarities than differences there are still important differences. These need to be considered within the development of a successful versioning and collaboration system for vocabulary development. Therefore, this paper starts by presenting the challenges we were faced with during the creation of vocabularies collaboratively and discusses its distinction to software development. Based on these insights we propose Git4Voc which comprises guidelines how Git can be adopted to vocabulary development. Finally, we demonstrate how Git hooks can be implemented to go beyond the plain functionality of Git by realizing vocabulary-specific features like syntactic validation and semantic diffs.
Collaborative vocabulary development is an active research area in the Semantic Web community @cite_9 . Existing approaches like WebProtégé @cite_6 provide a collaborative web frontend for a subset of the functionality of the Protégé OWL editor. The aim of WebProtégé is to lower the threshold for collaborative ontology development. Neologism @cite_16 is a vocabulary publishing platform with a focus on ease of use and compatibility with Linked Data principles; it focuses more on vocabulary publishing and less on collaboration. VocBench @cite_1 is an open source web application for editing thesauri complying with the SKOS and SKOS-XL standards. VocBench has a focus on collaboration, supported by workflow management for content validation and publication.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_1", "@cite_6" ], "mid": [ "2061725584", "2185348002", "", "1659163511" ], "abstract": [ "This paper describes our methodological and technological approach for collaborative ontology development in inter-organizational settings. It is based on the formalization of the collaborative ontology development process by means of an explicit editorial workflow, which coordinates proposals for changes among ontology editors in a flexible manner. This approach is supported by new models, methods and strategies for ontology change management in distributed environments: we propose a new form of ontology change representation, organized in layers so as to provide as much independence as possible from the underlying ontology languages, together with methods and strategies for their manipulation, version management, capture, storage and maintenance, some of which are based on existing proposals in the state of the art. Moreover, we propose a set of change propagation strategies that allow keeping distributed copies of the same ontology synchronized. Finally, we illustrate and evaluate our approach with a test case in the fishery domain from the United Nations Food and Agriculture Organisation (FAO). The preliminary results obtained from our evaluation suggest positive indication on the practical value and usability of the work here presented.", "Creating, documenting, publishing and maintaining an RDF Schema vocabulary is a complex, time-consuming task. This makes vocabulary maintainers reluctant to evolve their creations quickly in response to user feedback; it prevents use of RDF for casual, ad-hoc data publication about niche topics; it leads to poorly documented vocabularies, and contributes to poor compliance of vocabularies with bestpractice recommendations. Neologism is a web-based vocabulary editor and publishing system that dramatically reduces the time required to create, publish and modify vocabularies. 
By removing a lot of pain from this process, Neologism will contribute to a generally more interesting, relevant and standards-compliant Semantic Web.", "", "In this paper, we present WebProtege---a lightweight ontology editor and knowledge acquisition tool for the Web. With the wide adoption of Web 2.0 platforms and the gradual adoption of ontologies and Semantic Web technologies in the real world, we need ontology-development tools that are better suited for the novel ways of interacting, constructing and consuming knowledge. Users today take Web-based content creation and online collaboration for granted. WebProtege integrates these features as part of the ontology development process itself. We tried to lower the entry barrier to ontology development by providing a tool that is accessible from any Web browser, has extensive support for collaboration, and a highly customizable and pluggable user interface that can be adapted to any level of user expertise. The declarative user interface enabled us to create custom knowledge-acquisition forms tailored for domain experts. We built WebProtege using the existing Protege infrastructure, which supports collaboration on the back end side, and the Google Web Toolkit for the front end. The generic and extensible infrastructure allowed us to easily deploy WebProtege in production settings for several projects. We present the main features of WebProtege and its architecture and describe briefly some of its uses for real-world projects. WebProtege is free and open source. An online demo is available at http: webprotege.stanford.edu." ] }
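The Git4Voc record above mentions implementing Git hooks that realize vocabulary-specific features such as syntactic validation. The sketch below is a hypothetical illustration of such a check, not the actual Git4Voc implementation: the line-based rules and the function name `validate_vocab` are assumptions, and a real pre-commit hook would parse the file with a proper RDF library.

```python
def validate_vocab(text):
    """Toy syntactic check for a Turtle-like vocabulary file.

    Every non-empty, non-comment line must close a statement with
    '.', ';' or ','; @prefix/@base directives must end with '.'.
    A real pre-commit hook would parse the file with an RDF parser.
    """
    errors = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are always fine
        if stripped.startswith(("@prefix", "@base")):
            if not stripped.endswith("."):
                errors.append((lineno, "directive must end with '.'"))
        elif not stripped.endswith((".", ";", ",")):
            errors.append((lineno, "statement must end with '.', ';' or ','"))
    return errors
```

Wired as a `pre-commit` hook, a small driver script would run this over the staged vocabulary files and exit non-zero on any reported error, blocking the commit.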
1601.02465
2397994764
In this article we present the Hayastan Shakarian (HS), a robustness index for complex networks. HS measures the impact of a network disconnection (edge removal) by comparing the sizes of the remaining connected components. Strictly speaking, the Hayastan Shakarian index is defined as the edge removal that produces the maximal inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered using the Hayastan Shakarian cut (disconnecting the edge with maximal HS) and other well-known strategies such as the highest-betweenness-centrality disconnection. All strategies were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the Internet backbone, the attacks were performed on complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting using the Hayastan Shakarian cut are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that the Hayastan Shakarian cut, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
Over the last decade, there has been a huge interest in the analysis of complex networks and their connectivity properties @cite_20 . In recent years, networks, and in particular social networks, have gained significant popularity. An in-depth understanding of the graph structure is key to converting data into information. To this end, complex network tools have emerged @cite_19 to classify networks @cite_0 , detect communities @cite_17 , and determine important features and measure them @cite_15 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2112090702", "2124637492", "", "2065769502", "2131717044" ], "abstract": [ "Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.", "The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. 
Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. 
Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.", "", "Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.", "A large body of work has been devoted to identifying community structure in networks. A community is often thought of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. 
We define the network community profile plot, which characterizes the \"best\" possible community - according to the conductance measure - over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually \"blend in\" with the rest of the network and thus become less \"community-like.\" This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. We have found, however, that a generative model, in which new edges are added via an iterative \"forest fire\" burning process, is able to produce graphs exhibiting a network community structure similar to our observations." ] }
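The HS index defined in the record above can be sketched in pure Python. One plausible reading of the definition is assumed here: for a given edge, HS is the total size of the non-largest connected components divided by the size of the largest one after removing that edge, and the HS cut is the edge maximizing this score. The function names (`hs_score`, `hs_cut`) are illustrative, not from the paper.

```python
from collections import deque

def components(nodes, edges):
    """Connected components of an undirected graph via BFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        queue, comp = deque([start]), set()
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def hs_score(nodes, edges, edge):
    """One reading of the HS index for a single edge: remove it, then
    divide the total size of the non-largest components by the size of
    the largest one (0 if the graph stays connected)."""
    remaining = [e for e in edges if e != edge]
    sizes = sorted((len(c) for c in components(nodes, remaining)), reverse=True)
    return sum(sizes[1:]) / sizes[0]

def hs_cut(nodes, edges):
    """The Hayastan Shakarian cut: the edge with maximal HS score."""
    return max(edges, key=lambda e: hs_score(nodes, edges, e))
```

On a "barbell" of two triangles joined by one bridge edge, only the bridge disconnects the graph, so it is the HS cut with score 1.0 (two equal halves); every triangle edge scores 0.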
1601.02465
2397994764
In this article we present the Hayastan Shakarian (HS), a robustness index for complex networks. HS measures the impact of a network disconnection (edge removal) by comparing the sizes of the remaining connected components. Strictly speaking, the Hayastan Shakarian index is defined as the edge removal that produces the maximal inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered using the Hayastan Shakarian cut (disconnecting the edge with maximal HS) and other well-known strategies such as the highest-betweenness-centrality disconnection. All strategies were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the Internet backbone, the attacks were performed on complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting using the Hayastan Shakarian cut are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that the Hayastan Shakarian cut, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
Concerning robustness metrics, betweenness centrality deserves special attention. Betweenness centrality is a metric that determines the importance of an edge by looking at the shortest paths between all pairs of nodes. Betweenness has been studied as a resilience metric for the routing layer @cite_10 and also as a robustness metric for complex networks @cite_6 and for Internet autonomous-system networks @cite_18 , among others. Betweenness centrality has been widely studied and serves as a standard baseline for robustness metrics; thus, in this study it will be used for performance comparison.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_6" ], "mid": [ "2139905147", "2142843434", "2024982571" ], "abstract": [ "We calculate an extensive set of characteristics for Internet AS topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. Among the widely considered metrics, we find that the joint degree distribution appears to fundamentally characterize Internet AS topologies as well as narrowly define values for other important metrics. We discuss the interplay between the specifics of the three data collection mechanisms and the resulting topology views. In particular, we how how the data collection peculiarities explain differences in the resulting joint degree distributions of the respective topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement hould enable researchers to validate their models against real data and to make more informed election of topology data sources for their specific needs", "The cost of failures within communication networks is significant and will only increase as their reach further extends into the way our society functions. Some aspects of network resilience, such as the application of fault-tolerant systems techniques to optical switching, have been studied and applied to great effect. However, networks - and the Internet in particular - are still vulnerable to malicious attacks, human mistakes such as misconfigurations, and a range of environmental challenges. We argue that this is, in part, due to a lack of a holistic view of the resilience problem, leading to inappropriate and difficult-to-manage solutions. In this article, we present a systematic approach to building resilient networked systems. 
We first study fundamental elements at the framework level such as metrics, policies, and information sensing mechanisms. Their understanding drives the design of a distributed multilevel architecture that lets the network defend itself against, detect, and dynamically respond to challenges. We then use a concrete case study to show how the framework and mechanisms we have developed can be applied to enhance resilience.", "Many complex systems can be described by networks, in which the constituent components are represented by vertices and the connections between the components are represented by edges between the corresponding vertices. A fundamental issue concerning complex networked systems is the robustness of the overall system to the failure of its constituent parts. Since the degree to which a networked system continues to function, as its component parts are degraded, typically depends on the integrity of the underlying network, the question of system robustness can be addressed by analyzing how the network structure changes as vertices are removed. Previous work has considered how the structure of complex networks change as vertices are removed uniformly at random, in decreasing order of their degree, or in decreasing order of their betweenness centrality. Here we extend these studies by investigating the effect on network structure of targeting vertices for removal based on a wider range of non-local measures of potential importance than simply degree or betweenness. We consider the effect of such targeted vertex removal on model networks with different degree distributions, clustering coefficients and assortativity coefficients, and for a variety of empirical networks." ] }
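The related-work passage above relies on betweenness centrality. As a hedged illustration, the following pure-Python sketch computes node betweenness by combining per-source BFS distances and shortest-path counts; production code would use Brandes' algorithm instead of this O(n^3) pairwise combination, and the function names here are illustrative.

```python
from collections import deque
from itertools import combinations

def bfs_dist_sigma(adj, s):
    """BFS from s: shortest-path distance and path count (sigma) per node."""
    dist, sigma = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """Node betweenness: for each unordered pair (s, t), add the fraction
    of shortest s-t paths that pass through each intermediate node v."""
    info = {v: bfs_dist_sigma(adj, v) for v in adj}
    bc = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        dist_s, sigma_s = info[s]
        if t not in dist_s:
            continue  # s and t are disconnected
        for v in adj:
            if v in (s, t):
                continue
            dist_v, sigma_v = info[v]
            # v lies on a shortest s-t path iff the distances add up exactly
            if v in dist_s and t in dist_v and dist_s[v] + dist_v[t] == dist_s[t]:
                bc[v] += sigma_s[v] * sigma_v[t] / sigma_s[t]
    return bc
```

On the path graph 0-1-2-3, the two interior nodes each lie on the shortest paths of exactly two node pairs, so their betweenness is 2.0 while the endpoints score 0.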
1601.02465
2397994764
In this article we present the Hayastan Shakarian (HS), a robustness index for complex networks. HS measures the impact of a network disconnection (edge removal) by comparing the sizes of the remaining connected components. Strictly speaking, the Hayastan Shakarian index is defined as the edge removal that produces the maximal inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered using the Hayastan Shakarian cut (disconnecting the edge with maximal HS) and other well-known strategies such as the highest-betweenness-centrality disconnection. All strategies were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the Internet backbone, the attacks were performed on complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting using the Hayastan Shakarian cut are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that the Hayastan Shakarian cut, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
Other interesting edge-based metrics are: the increase of distances in a network caused by the deletion of vertices and edges @cite_14 ; the mean connectivity @cite_12 , that is, the probability that a network is disconnected by the random deletion of edges; the average connected distance (the average length of the shortest paths between connected pairs of nodes in the network) and the fragmentation (the decay of a network in terms of the sizes of its connected components after random edge disconnections) @cite_20 ; the balance-cut resilience, that is, the capacity of a minimum cut such that the two resulting vertex sets contain approximately the same number of vertices (similar to the HS-index, but aiming to divide the network only into halves @cite_4 , not into equally sized connected components); the effective diameter @cite_9 ; and the Dynamic Network Robustness (DYNER) @cite_1 , where the length of a backup path between nodes after a node disconnection is used as a robustness metric in communication networks.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_12", "@cite_20" ], "mid": [ "2134239708", "2163252320", "1571874655", "", "", "2065769502" ], "abstract": [ "Two new measures of network performance, namely, the incremental distance sequence and the incremental diameter sequence, are introduced for application in network topology design. These sequences can be defined for both vertex deletions and edge deletions. A complete characterization of the vertex-deleted incremental distance sequence is presented. Proof of this characterization is constructive in nature. A condition for the feasibility of an edge-deleted incremental distance sequence and a procedure for realizing such a sequence are given. Interrelationships between the elements of incremental distance sequences and incremental diameter sequences are studied. Using these results, it is shown that a graph that has a specified diameter and a specified maximum increase in diameters for deletions of vertex sets of given cardinalities can be designed. >", "Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions.Contrary to much of the current literature on network topology generators, this paper starts with the assumption that it is more important for network generators to accurately model the large-scale structure of the Internet (such as its hierarchical structure) than to faithfully imitate its local properties (such as the degree distribution). 
The purpose of this paper is to determine, using various topology metrics, which network generators better represent this large-scale structure. We find, much to our surprise, that network generators based on the degree distribution more accurately capture the large-scale structure of measured topologies. We then seek an explanation for this result by examining the nature of hierarchy in the Internet more closely; we find that degree-based generators produce a form of hierarchy that closely resembles the loosely hierarchical nature of the Internet.", "In this paper we apply data mining analysis to study the topology of the Internet, thus creating a new processing framework. To the best of our knowledge this is one of the first studies that focus on the Internet topology at the router level, i.e., each node is a router. The size K nodes and the nature of the graph are such that new analysis methods have to be employed. First we suggest computationally expensive metrics to characterize topological properties. Then we present an efficient approximation algorithm that makes the calculation of these metrics possible. Finally we demonstrate the initial results of our framework. For example we show that we can identify central routers and poorly connected or even isolated nodes. We also find that the Internet is surprisingly resilient to random link and router failures, having only small changes in the connectivity for fewer than failures. Our framework seems a promising step towards understanding and characterizing the Internet topology and possibly other real communication graphs such as web graphs", "", "", "Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. 
Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks." ] }
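Among the metrics listed in the record above, fragmentation (the decay of the largest connected component under edge disconnections) is straightforward to sketch. The code below assumes a simple undirected graph given as node and edge lists; passing a shuffled removal order emulates random disconnections, and the function names are illustrative.

```python
from collections import deque

def largest_component_size(nodes, edges):
    """Size of the largest connected component, found via BFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def fragmentation_profile(nodes, edges, removal_order):
    """Fraction of nodes in the largest component after each edge
    removal; pass a shuffled order to emulate random disconnections."""
    n = len(nodes)
    remaining = list(edges)
    profile = [largest_component_size(nodes, remaining) / n]
    for e in removal_order:
        remaining.remove(e)
        profile.append(largest_component_size(nodes, remaining) / n)
    return profile
```

For example, on the path graph 0-1-2-3, removing the middle edge first halves the largest component, and removing the remaining edges reduces it to isolated nodes.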
1601.02465
2397994764
In this article we present the Hayastan Shakarian (HS), a robustness index for complex networks. HS measures the impact of a network disconnection (edge removal) by comparing the sizes of the remaining connected components. Strictly speaking, the Hayastan Shakarian index is defined as the edge removal that produces the maximal inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered using the Hayastan Shakarian cut (disconnecting the edge with maximal HS) and other well-known strategies such as the highest-betweenness-centrality disconnection. All strategies were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the Internet backbone, the attacks were performed on complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting using the Hayastan Shakarian cut are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that the Hayastan Shakarian cut, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
The idea of planning a network attack using centrality measures has captured the attention of researchers and practitioners. For instance, @cite_11 used betweenness centrality for planning a network attack, calculating its value for all nodes, ordering the nodes from higher to lower betweenness, and then attacking (disconnecting) those nodes in that order. They have shown that by disconnecting only two of the top-ranked nodes, the packet-delivery ratio is drastically reduced. In the study of resilience after edge removal, @cite_13 study backup communication paths for network services, proposing metrics that quantify the resilience of a service-oriented network under node and edge failures.
{ "cite_N": [ "@cite_13", "@cite_11" ], "mid": [ "2115113435", "2014770087" ], "abstract": [ "We develop a graph-theoretic model for service-oriented networks and propose metrics that quantify the resilience of such networks under node and edge failures. These metrics are based on the topological structure of the network and the manner in which services are distributed over the network. We present efficient algorithms to determine the maximum number of node and edge failures that can be tolerated by a given service-oriented network. These algorithms rely on known algorithms for computing minimum cuts in graphs. We also present efficient algorithms for optimally allocating services over a given network so that the resulting service-oriented network can tolerate single node or edge failures. These algorithms are derived through a careful analysis of the decomposition of the underlying network into appropriate types of connected components.", "As the Internet becomes increasingly important to all aspects of society, the consequences of disruption become increasingly severe. Thus it is critical to increase the resilience and survivability of the future network. We define resilience as the ability of the network to provide desired service even when challenged by attacks, large-scale disasters, and other failures. This paper describes a comprehensive methodology to evaluate network resilience using a combination of analytical and simulation techniques with the goal of improving the resilience and survivability of the Future Internet." ] }
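The attack simulations discussed in the record above track the robustness R-index during node removals. The sketch below assumes the common Schneider-style definition (the average fraction of nodes in the largest component after each removal over the whole attack); that exact formula and the function names are assumptions, and in practice the attack order would come from a centrality ranking such as betweenness.

```python
from collections import deque

def largest_cc_fraction(adj, removed):
    """Fraction of all nodes lying in the largest component of the
    subgraph induced by the nodes not yet removed."""
    n = len(adj)
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best / n

def r_index(adj, attack_order):
    """Schneider-style robustness: R = (1/N) * sum over the attack of
    the largest-component fraction after each node removal. Lower R
    means the attack strategy is more damaging."""
    removed = set()
    total = 0.0
    for v in attack_order:
        removed.add(v)
        total += largest_cc_fraction(adj, removed)
    return total / len(adj)
```

On a 4-node star, attacking the hub first yields a lower R than attacking the leaves first, matching the intuition that targeting central nodes is the more dangerous strategy.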
1601.02071
2237326720
This paper proposes the usage of sentiment as a facet for exploratory search. Starting from specific design goals for the depiction of ambivalence in sentiment, two visualization widgets were implemented. Those widgets were evaluated against a text baseline in a small-scale usability study with exploratory tasks using Wikipedia as the dataset. The study results indicate that users spend more time browsing with scatter plots in a positive way. A post-hoc analysis of individual differences in behavior revealed that, when considering two types of users, engagement with scatter plots is positive and significantly greater. We discuss the implications of these findings for sentiment-based exploratory search and personalised user interfaces.
@cite_18 is a search engine where information seekers can answer questions with an explicit sentiment component and obtain a visualisation of search results. The purpose of the visual depiction is artistic, and results can be filtered through facets of meta-data such as gender, age and mood. With regard to facets, @cite_6 depicts time, geo-location and topics. In @cite_10 , treemaps are used to depict a hierarchical facet. It was found that the usage of visualisation had a positive impact on perceived task difficulty, repository understanding and enjoyment. Our work extends @cite_6 , as we present widgets for a specific facet that could be used among other widgets.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_6" ], "mid": [ "", "2024351142", "2158943277" ], "abstract": [ "", "Hierarchical representations are common in digital repositories, yet are not always fully leveraged in their online search interfaces. This work describes ResultMaps, which use hierarchical treemap representations with query string-driven digital library search engines. We describe two lab experiments, which find that ResultsMap users yield significantly better results over a control condition on some subjective measures, and we find evidence that ResultMaps have ancillary benefits via increased understanding of some aspects of repository content. The ResultMap system and experiments contribute an understanding of the benefits-direct and indirect-of the ResultMap approach to repository search visualization.", "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds." ] }
1601.02376
2288597091
Predicting user responses, such as click-through rate and conversion rate, is critical in many web applications including web search, personalised recommendation, and online advertising. Unlike the continuous raw features usually found in the image and audio domains, the input features in web applications are always multi-field and are mostly discrete and categorical, while their dependencies are little known. Major user response prediction models have to either limit themselves to linear models or require manually building up high-order combination features. The former loses the ability to explore feature interactions, while the latter results in heavy computation in the large feature space. To tackle the issue, we propose two novel models using deep neural networks (DNNs) to automatically learn effective patterns from categorical feature interactions and make predictions of users' ad clicks. To make our DNNs work efficiently, we propose to leverage three feature transformation methods, i.e., factorisation machines (FMs), restricted Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper presents the structure of our models and their efficient training algorithms. Large-scale experiments with real-world data demonstrate that our methods outperform major state-of-the-art models.
Click-through rate, defined as the probability that a specific user clicks on a displayed ad, is essential in online advertising @cite_13 . In order to maximise revenue and user satisfaction, online advertising platforms must predict the expected user behaviour for each displayed ad and maximise the probability that users will click. The majority of current models use logistic regression based on a set of sparse binary features converted from the original categorical features via one-hot encoding @cite_10 @cite_7 . Heavy engineering effort is needed to design features such as locations, top unigrams, combination features, etc. @cite_21 .
{ "cite_N": [ "@cite_10", "@cite_21", "@cite_13", "@cite_7" ], "mid": [ "2012905273", "", "2186584675", "2090883204" ], "abstract": [ "In targeted display advertising, the goal is to identify the best opportunities to display a banner ad to an online user who is most likely to take a desired action such as purchasing a product or signing up for a newsletter. Finding the best ad impression, i.e., the opportunity to show an ad to a user, requires the ability to estimate the probability that the user who sees the ad on his or her browser will take an action, i.e., the user will convert. However, conversion probability estimation is a challenging task since there is extreme data sparsity across different data dimensions and the conversion event occurs rarely. In this paper, we present our approach to conversion rate estimation which relies on utilizing past performance observations along user, publisher and advertiser data hierarchies. More specifically, we model the conversion event at different select hierarchical levels with separate binomial distributions and estimate the distribution parameters individually. Then we demonstrate how we can combine these individual estimators using logistic regression to accurately identify conversion events. In our presentation, we also discuss main practical considerations such as data imbalance, missing data, and output probability calibration, which render this estimation problem more difficult but yet need solving for a real-world implementation of the approach. We provide results from real advertising campaigns to demonstrate the effectiveness of our proposed approach.", "", "In online advertising campaigns, to measure purchase propensity, click-through rate (CTR), defined as a ratio of number of clicks to number of impressions, is one of the most informative metrics used in business activities such as performance evaluation and budget planning. 
No matter what channel an ad goes through (display ads, sponsored search or contextual advertising), CTR estimation for rare events is essential but challenging, often incurring huge variance due to the sparsity of the data. In this chapter, to alleviate this sparsity, we develop models and methods to smoothen CTR estimation by taking advantage of the data hierarchy in nature or by clustering and data continuity in time to leverage information from data close to the events of interest. In a contextual advertising system running at Yahoo!, we demonstrate that our methods lead to significantly more accurate estimation of CTRs.", "Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-through rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction." ] }
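The one-hot-encoding-plus-logistic-regression baseline described in this record can be sketched in a few lines. This is an illustrative toy, not the pipeline of any cited system; the feature fields and training data are made up, and the model is trained with plain SGD on the log-loss.

```python
import math

def one_hot_index(field, value, vocab):
    # Map a (field, value) pair to a unique feature index, growing the
    # vocabulary on the fly -- one-hot encoding kept in sparse form.
    key = (field, value)
    if key not in vocab:
        vocab[key] = len(vocab)
    return vocab[key]

def train_ctr_model(rows, clicks, epochs=200, lr=0.1):
    # rows: list of dicts of categorical features; clicks: 0/1 labels.
    vocab = {}
    encoded = [[one_hot_index(f, v, vocab) for f, v in row.items()] for row in rows]
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for idx, y in zip(encoded, clicks):
            z = b + sum(w[i] for i in idx)   # sparse dot product
            p = 1.0 / (1.0 + math.exp(-z))   # predicted click probability
            g = p - y                        # gradient of the log-loss w.r.t. z
            b -= lr * g
            for i in idx:
                w[i] -= lr * g
    return w, b, vocab

def predict_ctr(row, w, b, vocab):
    z = b + sum(w[vocab[key]] for key in row.items() if key in vocab)
    return 1.0 / (1.0 + math.exp(-z))
```

For example, training on a toy log where ads on a sports site are clicked and ads on a news site are not quickly separates the two one-hot features, so the predicted CTRs diverge toward 1 and 0.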
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
The normalizing constant @math in the Mallows distribution has a closed-form formula, which Diaconis and Ram @cite_28 observed to be the Poincaré polynomial. The formula for @math implies a straightforward method for generating a random Mallows-distributed permutation.
{ "cite_N": [ "@cite_28" ], "mid": [ "2036804629" ], "abstract": [ "When faced with a complex task, is it better to be systematic or to proceed by making random adjustments? We study aspects of this problem in the context of generating random elements of a finite group. For example, suppose we want to fill n empty spaces with zeros and ones such that the probability of configuration x = (x1, . . . , xn) is θ^(n−|x|) (1−θ)^|x|, with |x| the number of ones in x. A systematic scan approach works left to right, filling each successive place with a θ coin toss. A random scan approach picks places at random, and a given site may be hit many times before all sites are hit. The systematic approach takes order n steps and the random approach takes order (1/4) n log n steps. Realistic versions of this toy problem arise in image analysis and Ising-like simulations, where one must generate a random array by a Monte Carlo Markov chain. Systematic updating and random updating are competing algorithms that are discussed in detail in Section 2. There are some successful analyses for random scan algorithms, but the intuitively appealing systematic scan algorithms have resisted analysis. Our main results show that the binary problem just described is exceptional; for the examples analyzed in this paper, systematic and random scans converge in about the same number of steps. Let W be a finite Coxeter group generated by simple reflections s1, s2, . . . , sn, where s_i^2 = id. For example, W may be the permutation group S_{n+1} with s_i = (i, i + 1). The length function ℓ(w) is the smallest k such that w = s_{i1} s_{i2} · · · s_{ik}. Fix 0 < θ ≤ 1 and define a probability distribution on W by π(w) = θ^(−ℓ(w)) / P_W(θ^(−1)), where P_W(θ^(−1)) = Σ_{w∈W} θ^(−ℓ(w))." ] }
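The generation method alluded to in this record can be sketched via the Lehmer-code factorisation of the Mallows measure: since the number of inversions of a permutation equals the sum of its Lehmer code, sampling each code entry independently with probabilities proportional to q^j yields an exact Mallows(q) sample. This is the standard construction shown as an illustrative sketch, not necessarily the cited authors' exact procedure.

```python
import random

def mallows_permutation(n, q, rng=random):
    # Sample the Lehmer code: c[i] in {0, ..., n-1-i} with
    # P(c[i] = j) proportional to q**j.  Because inv(pi) equals the sum
    # of the Lehmer code, the decoded permutation satisfies
    # P(pi) proportional to q**inv(pi), i.e. the Mallows(q) measure.
    code = []
    for i in range(n):
        m = n - i
        weights = [q ** j for j in range(m)]
        u = rng.random() * sum(weights)
        j = 0
        while j < m - 1 and u > weights[j]:
            u -= weights[j]
            j += 1
        code.append(j)
    # Decode: pi[i] is the (c[i]+1)-th smallest of the values not yet used,
    # so exactly c[i] later entries are smaller than pi[i].
    remaining = list(range(1, n + 1))
    return [remaining.pop(c) for c in code]

def inversions(perm):
    # Quadratic inversion count, handy for checking small samples.
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])
```

With q = 0 only the all-zero code has positive weight, so the sampler returns the identity permutation; with q = 1 all codes are equally likely and the output is a uniformly random permutation.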
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
As mentioned before, the question of determining the length of the longest increasing subsequence of permutations drawn from a Mallows model for general @math was raised in @cite_36 . When @math , i.e. in the case of uniform random permutations, the asymptotics of @math (known as Ulam's problem) have been extensively studied. Vershik and Kerov @cite_23 and Logan and Shepp @cite_12 showed that @math (see also @cite_18 for a proof using Hammersley's interacting particle system). Mueller and Starr @cite_15 first studied @math under the Mallows measure for @math . In the regime that @math tends to a constant @math , they established a weak law of large numbers showing that @math converges in probability to a constant @math which they determined explicitly. Their arguments rely on a Boltzmann-Gibbs formulation of a continuous version of the Mallows measure and the probabilistic approach of Deuschel and Zeitouni for analyzing the longest increasing subsequence of i.i.d. random points in the plane @cite_8 @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_36", "@cite_23", "@cite_15", "@cite_12" ], "mid": [ "", "1969051672", "2002114058", "2154683049", "", "2094040972", "2107037617" ], "abstract": [ "", "In a famous paper [8] Hammersley investigated the length L_n of the longest increasing subsequence of a random n-permutation. Implicit in that paper is a certain one-dimensional continuous-space interacting particle process. By studying a hydrodynamical limit for Hammersley's process we show by fairly “soft” arguments that lim n^(−1/2) E L_n = 2. This is a known result, but previous proofs [14, 11] relied on hard analysis of combinatorial asymptotics.", "We consider the concentration of measure for n i.i.d., two-dimensional random variables under the conditioning that they form a record. Under mild conditions, we show that all random variables tend to concentrate, as n → ∞, around limiting curves, which are the solutions of an appropriate variational problem. We also show that the same phenomenon occurs, without the records conditioning, for the longest increasing subsequence in the sample.", "Adding a column of numbers produces \"carries\" along the way. We show that random digits produce a pattern of carries with a neat probabilistic description: the carries form a one-dependent determinantal point process. This makes it easy to answer natural questions: How many carries are typical? Where are they located? We show that many further examples, from combinatorics, algebra and group theory, have essentially the same neat formulae, and that any one-dependent point process on the integers is determinantal.
The examples give a gentle introduction to the emerging fields of one-dependent and determinantal point processes.", "", "The Mallows measure on the symmetric group S_n is the probability measure such that each permutation has probability proportional to q raised to the power of the number of inversions, where q is a positive parameter and the number of inversions of π is equal to the number of pairs i < j with π(i) > π(j). We prove a weak law of large numbers for the length of the longest increasing subsequence for Mallows distributed random permutations, in the limit that n→∞ and q→1 in such a way that n(1−q) has a limit in R.", "Abstract Stanley posed the problem of minimizing the functional H(f) = ∫_0^∞ dx ∫_0^{f(x)} dy log(f(x) − y + f^(−1)(y) − x) over nonincreasing nonnegative f on (0, ∞) of integral unity. We show that the minimum is unique and has the value −1/2, as was conjectured by Stanley. The minimizing function f_0, H(f_0) = −1/2, is given parametrically by f_0(x) = (2/π)(sin θ − θ cos θ), x = f_0(x) + 2 cos θ, 0 ≤ θ ≤ π (2) for 0 ≤ x ≤ 2; and f_0(x) = 0 for x ≥ 2. Closely related unpublished results have been obtained by Hammersley. We also find the minimum of H(f) subject to the constraints f(0) ≤ a and f^(−1)(0) = inf(x : f(x) = 0) ≤ b where a and b are given. Proofs of the results for the case of constraints are complicated and will be given elsewhere. Let λ_n be the shape of the random Young tableau with n unit squares obtained from sampling from the Schensted distribution, where P(λ_n) = n!/π^2(λ_n) (3) where π(λ_n) is the product of the n hook lengths of λ_n. Consider the stochastic processes λ_n(t) = (1/n^(1/2)) λ(t n^(1/2)), n ≥ 1, t ≥ 0 (4) where λ_n(t) is the height of the tableau λ_n at a horizontal distance t from the corner. We show λ_n → f_0 in the sense of weak convergence in a certain metric, where f_0 is the deterministic function in (2).
Let l(σ_n) denote the length of the longest increasing subsequence of a random permutation σ_n of 1,2,…,n. Hammersley showed that l(σ_n)/n^(1/2) → c in probability, n→∞ (6). Schensted showed that l(σ_n) has the same distribution as λ_n(0) under the distribution (3) on λ_n. It has long been conjectured (apparently first by Baer and Brock) that c = 2. We show here that c ≥ 2 as a by-product of (5)." ] }
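The statistic studied throughout this record, the length of the longest increasing subsequence, can be computed in O(n log n) time by patience sorting. A minimal sketch (the value 3 for the permutation 5 1 3 2 4 matches the example worked out in the quoted abstract):

```python
from bisect import bisect_left

def lis_length(perm):
    # Patience sorting: tails[k] holds the smallest value that can end an
    # increasing subsequence of length k+1 among the elements seen so far.
    tails = []
    for x in perm:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[i] = x      # x gives a smaller tail for subsequences of length i+1
    return len(tails)

def lds_length(perm):
    # Longest decreasing subsequence: the LIS of the negated sequence.
    return lis_length([-x for x in perm])
```

For example, `lis_length([5, 1, 3, 2, 4])` returns 3, realised by the subsequences 1 3 4 and 1 2 4.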
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
Subsequently, in @cite_22 , Bhatnagar and Peled established the leading order behavior of @math in the regime that @math and @math . They analyzed an insertion process which they called the Mallows process for randomly generating Mallows distributed permutations and showed that @math in @math for @math as @math . They established the order of @math and showed that it grows at different rates for different regimes of @math as a function of @math (in particular they showed @math when @math is fixed) and proved large deviation bounds for @math and @math . They also established a linear upper bound on the variance of @math and left open the questions of determining the precise variance and the distribution of @math for all regimes of @math and @math .
{ "cite_N": [ "@cite_22" ], "mid": [ "1989633256" ], "abstract": [ "We study the length of the longest increasing and longest decreasing subsequences of random permutations drawn from the Mallows measure. Under this measure, the probability of a permutation σ ∈ S_n is proportional to q^Inv(σ), where q is a real parameter and Inv(σ) is the number of inversions in σ. The case q = 1 corresponds to uniformly random permutations. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. We determine the typical order of magnitude of the lengths of the longest increasing and decreasing subsequences, as well as large deviation bounds for them. We also provide a simple bound on the variance of these lengths, and prove a law of large numbers for the length of the longest increasing subsequence. Assuming without loss of generality that q < 1, our results apply when q is a function of n satisfying n(1−q) → ∞. The case that n(1−q) = O(1) was considered previously by Mueller and Starr. In our parameter range, the typical length of the longest increasing subsequence is of order n√(1−q), whereas the typical length of the longest decreasing subsequence has four possible behaviors according to the precise dependence of n and q. We show also that in the graphical representation of a Mallows-distributed permutation, most points are found in a symmetric strip around the diagonal whose width is of order 1/(1−q). This suggests a connection between the longest increasing subsequence in the Mallows model and the model of last passage percolation in a strip." ] }
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
Recently, further progress has been made in the analysis of the empirical measure of points corresponding to the Boltzmann-Gibbs measure of Mueller and Starr. Mukherjee @cite_11 determined the large deviation rate function of the empirical measure of points, and Starr and Walters showed that the large deviation principle has a unique optimizer @cite_20 .
{ "cite_N": [ "@cite_20", "@cite_11" ], "mid": [ "1723089725", "161170238" ], "abstract": [ "For a positive number @math the Mallows measure on the symmetric group is the probability measure on @math such that @math is proportional to @math -to-the-power- @math where @math equals the number of inversions: @math equals the number of pairs @math . One may consider this as a mean-field model from statistical mechanics. The weak large deviation principle may replace the Gibbs variational principle for characterizing equilibrium measures. In this sense, we prove absence of phase transition, i.e., phase uniqueness.", "Using large deviation results for a uniformly random permutation in @math , asymptotics of normalizing constants are computed and limits of permutations obtained for some non uniform distributions on @math . A pseudo-likelihood type estimator is shown to be consistent in a class of one parameter exponential families on permutations. A new proof of the large deviation principle of a uniformly random permutation on @math is given." ] }
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
The case @math is special because it is one of the exactly solvable models that belong to the so-called KPZ universality class. In this case, the longest increasing subsequence problem can also be represented as a directed last passage percolation problem in a Poissonian environment. The results of @cite_23 @cite_12 follow from an asymptotic analysis of exact formulae that can be obtained for @math through a combinatorial bijection between @math and Young tableaux known as the Robinson-Schensted-Knuth (RSK) bijection @cite_4 @cite_33 @cite_6 . The RSK bijection can be further used to obtain the order of fluctuations and scaling limit of @math in this case. In their breakthrough work, Baik, Deift and Johansson showed that for uniformly random permutations @math has fluctuations of the order of @math and the limiting distribution of @math is the GUE Tracy-Widom distribution from random matrix theory @cite_2 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_6", "@cite_23", "@cite_2", "@cite_12" ], "mid": [ "89912732", "2014678176", "", "", "1691157804", "2107037617" ], "abstract": [ "In this chapter we construct all the irreducible representations of the symmetric group. We know that the number of such representations is equal to the number of conjugacy classes (Proposition 1.10.1), which in the case of S_n is the number of partitions of n. It may not be obvious how to associate an irreducible with each partition λ = (λ1, λ2, ..., λl), but it is easy to find a corresponding subgroup S_λ that is an isomorphic copy of S_{λ1} × S_{λ2} × ··· × S_{λl} inside S_n. We can now produce the right number of representations by inducing the trivial representation on each S_λ up to S_n.", "This paper deals with finite sequences of integers. Typical of the problems we shall treat is the determination of the number of sequences of length n, consisting of the integers 1,2, ... , m, which have a longest increasing subsequence of length α. Throughout the first part of the paper we will deal only with sequences in which no numbers are repeated. In the second part we will extend the results to include the possibility of repetition. Our results will be stated in terms of standard Young tableaux.", "", "", "Let S_N be the group of permutations of 1,2,..., N. If π ∈ S_N, we say that π(i_1),..., π(i_k) is an increasing subsequence in π if i_1 < i_2 < ... < i_k and π(i_1) < π(i_2) < ... < π(i_k). Let l_N(π) be the length of the longest increasing subsequence. For example, if N = 5 and π is the permutation 5 1 3 2 4 (in one-line notation: thus π(1) = 5, π(2) = 1, ...), then the longest increasing subsequences are 1 2 4 and 1 3 4, and l_N(π) = 3. Equip S_N with uniform distribution,", "Abstract Stanley posed the problem of minimizing the functional H(f) = ∫_0^∞ dx ∫_0^{f(x)} dy log(f(x) − y + f^(−1)(y) − x) over nonincreasing nonnegative f on (0, ∞) of integral unity.
We show that the minimum is unique and has the value −1/2, as was conjectured by Stanley. The minimizing function f_0, H(f_0) = −1/2, is given parametrically by f_0(x) = (2/π)(sin θ − θ cos θ), x = f_0(x) + 2 cos θ, 0 ≤ θ ≤ π (2) for 0 ≤ x ≤ 2; and f_0(x) = 0 for x ≥ 2. Closely related unpublished results have been obtained by Hammersley. We also find the minimum of H(f) subject to the constraints f(0) ≤ a and f^(−1)(0) = inf(x : f(x) = 0) ≤ b where a and b are given. Proofs of the results for the case of constraints are complicated and will be given elsewhere. Let λ_n be the shape of the random Young tableau with n unit squares obtained from sampling from the Schensted distribution, where P(λ_n) = n!/π^2(λ_n) (3) where π(λ_n) is the product of the n hook lengths of λ_n. Consider the stochastic processes λ_n(t) = (1/n^(1/2)) λ(t n^(1/2)), n ≥ 1, t ≥ 0 (4) where λ_n(t) is the height of the tableau λ_n at a horizontal distance t from the corner. We show λ_n → f_0 in the sense of weak convergence in a certain metric, where f_0 is the deterministic function in (2). Let l(σ_n) denote the length of the longest increasing subsequence of a random permutation σ_n of 1,2,…,n. Hammersley showed that l(σ_n)/n^(1/2) → c in probability, n→∞ (6). Schensted showed that l(σ_n) has the same distribution as λ_n(0) under the distribution (3) on λ_n. It has long been conjectured (apparently first by Baer and Brock) that c = 2. We show here that c ≥ 2 as a by-product of (5)." ] }
1601.02003
2226330206
We study the lengths of monotone subsequences for permutations drawn from the Mallows measure. The Mallows measure was introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation @math is proportional to @math where @math is a positive parameter and @math is the number of inversions in @math . In our main result we show that when @math , the limiting distribution of the longest increasing subsequence (LIS) is Gaussian, answering an open question in [Bhatnagar and Peled, PTRF, 2015]. This is in contrast to the case when @math where the limiting distribution of the LIS when scaled appropriately is the GUE Tracy-Widom distribution. We also obtain a law of large numbers for the length of the longest decreasing subsequence (LDS) and identify the precise constant in the order of the expectation, answering a further open question in [Bhatnagar and Peled, PTRF, 2015].
When @math , the integrable structure is lost, and the powerful combinatorial, algebraic and analytic tools used in @cite_2 are no longer available for finding the limiting distribution of @math . Indeed, as Theorem shows, for @math bounded away from @math , we get a different scaling limit, namely Gaussian with diffusive scaling, in contrast with the Tracy-Widom limit with subdiffusive scaling one gets for @math .
{ "cite_N": [ "@cite_2" ], "mid": [ "1691157804" ], "abstract": [ "Let S_N be the group of permutations of 1,2,..., N. If π ∈ S_N, we say that π(i_1),..., π(i_k) is an increasing subsequence in π if i_1 < i_2 < ... < i_k and π(i_1) < π(i_2) < ... < π(i_k). Let l_N(π) be the length of the longest increasing subsequence. For example, if N = 5 and π is the permutation 5 1 3 2 4 (in one-line notation: thus π(1) = 5, π(2) = 1, ...), then the longest increasing subsequences are 1 2 4 and 1 3 4, and l_N(π) = 3. Equip S_N with uniform distribution," ] }
1601.01786
2226152253
Carrier-grade networks comprise several layers where different protocols coexist. Nowadays, most of these networks have different control planes to manage routing on different layers, leading to a suboptimal use of the network resources and additional operational costs. However, some routers are able to encapsulate, decapsulate and convert protocols and act as a liaison between these layers. A unified control plane would be useful to optimize the use of the network resources and automate the routing configurations. Software-Defined Networking (SDN) based architectures, such as OpenFlow, offer a chance to design such a control plane. One of the most important problems to deal with in this design is the path computation process. Classical path computation algorithms cannot solve this problem, as they do not take protocol encapsulations and conversions into account. In this paper, we propose algorithms to solve this problem and study several cases: path computation without bandwidth constraint, under bandwidth constraint, and under other Quality of Service constraints. We study the complexity and the scalability of our algorithms and evaluate their performance on real topologies. The results show that they outperform the previous ones proposed in the literature.
The initial works dealing with protocol and technology heterogeneity restricted the problem to the optical layer. For instance, Chlamtac @cite_8 described a model and algorithms to compute a path under wavelength continuity constraints. Zhu @cite_14 addressed the same problem in WDM mesh networks, tackling traffic grooming issues. In @cite_26 , Gong and Jabbari provided an algorithm to compute an optimal path under constraints on several layers: wavelength continuity, label continuity, etc.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_8" ], "mid": [ "2160995598", "2158044873", "2109088884" ], "abstract": [ "As the operation of our fiber-optic backbone networks migrates from interconnected SONET rings to arbitrary mesh topology, traffic grooming on wavelength-division multiplexing (WDM) mesh networks becomes an extremely important research problem. To address this problem, we propose a new generic graph model for traffic grooming in heterogeneous WDM mesh networks. The novelty of our model is that, by only manipulating the edges of the auxiliary graph created by our model and the weights of these edges, our model can achieve various objectives using different grooming policies, while taking into account various constraints such as transceivers, wavelengths, wavelength-conversion capabilities, and grooming capabilities. Based on the auxiliary graph, we develop an integrated traffic-grooming algorithm (IGABAG) and an integrated grooming procedure (INGPROC) which jointly solve several traffic-grooming subproblems by simply applying the shortest-path computation method. Different grooming policies can be represented by different weight-assignment functions, and the performance of these grooming policies are compared under both nonblocking scenario and blocking scenario. The IGABAG can be applied to both static and dynamic traffic grooming. In static grooming, the traffic-selection scheme is key to achieving good network performance. We propose several traffic-selection schemes based on this model and we evaluate their performance for different network topologies.", "This paper addresses the problem of end-to-end path computation in a transport network with multiple switching technologies, for which the label switched path (LSP) traffic engineering (TE) in the multi-layer networks is an important application. 
By transforming a network graph to a channel graph, we provide a novel and general solution to find optimal paths in multi-layer networks through vertically searching across layers and horizontally searching on the same layer. The channel graph yields an explicit view of the constraints associated with the nodes and links that otherwise are hidden in the network graph. The approach can be applied to constraints such as the wavelength continuity, encoding type, and switching bandwidth granularity. The proposed solution has been implemented in software and deployed in an experimental optical network.", "We address the problem of efficient circuit switching in wide area optical networks. The solution provided is based on finding optimal routes for lightpaths and the new concept of semilightpaths. A lightpath is a fully optical transmission path, while a semilightpath is a transmission path constructed by chaining together several lightpaths, using wavelength conversion at their junctions. A fast and practical algorithm is presented to optimally route lightpaths and semilightpaths taking into account both the cost of using the wavelengths on links and the cost of wavelength conversion. We prove that the running time of the algorithm is the best possible in the wide class of algorithms allowing linear algebraic operations on weights. This class encompasses all known related practical methods. Additionally, our method works for any physical realization of wavelength conversion, independently whether it is done via optoelectronic conversion or in a fully optical way." ] }
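One common way to make path computation aware of protocol heterogeneity, in the spirit of the channel-graph transformation quoted above, is to search a product graph whose states pair a node with the protocol currently carried. The sketch below only models single-level protocol conversion (not the encapsulation stacks that the paper's algorithms handle); the link and adaptation tables are hypothetical.

```python
from collections import deque

def find_path(links, adapt, src, dst, start_proto, end_proto):
    # State = (node, current protocol).
    # links: {(u, proto): [v, ...]} -- links carrying `proto` from u to v.
    # adapt: {(node, from_proto): [to_proto, ...]} -- conversion capability.
    start = (src, start_proto)
    prev = {start: None}
    queue = deque([start])
    while queue:
        node, proto = state = queue.popleft()
        if node == dst and proto == end_proto:
            # Reconstruct the sequence of (node, protocol) states.
            path = []
            while state is not None:
                path.append(state)
                state = prev[state]
            return path[::-1]
        # Traverse a link on the current protocol, or convert in place.
        nxt = [(v, proto) for v in links.get((node, proto), [])]
        nxt += [(node, p) for p in adapt.get((node, proto), [])]
        for s in nxt:
            if s not in prev:
                prev[s] = state
                queue.append(s)
    return None
```

For instance, with an Ethernet link A–B, a conversion eth→wdm at B, and a WDM link B–C, the search returns the state path A/eth → B/eth → B/wdm → C/wdm; a classical shortest-path search on nodes alone cannot express the conversion step.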
1601.01422
2232054157
Recently, covariance descriptors have received much attention as powerful representations of sets of points. In this research, we present a new metric learning algorithm for covariance descriptors based on the Dykstra algorithm, in which the current solution is projected onto a half-space at each iteration, and which runs in O(n^3) time. We empirically demonstrate that randomizing the order of half-spaces in our Dykstra-based algorithm significantly accelerates the convergence to the optimal solution. Furthermore, we show that our approach yields promising experimental results on pattern recognition tasks.
To the best of our knowledge, (2015) @cite_22 were the first to introduce a supervised metric learning approach for covariance descriptors. They vectorized the matrix logarithms of the covariance descriptors in order to apply existing metric learning methods to the vectorizations of matrices. The dimensionality of the vectorizations is @math when the size of the covariance matrices is @math . Thus, the size of the Mahalanobis matrix is @math , which is computationally prohibitive when @math is large.
{ "cite_N": [ "@cite_22" ], "mid": [ "1928707779" ], "abstract": [ "Over the past few years, symmetric positive definite (SPD) matrices have been receiving considerable attention from computer vision community. Though various distance measures have been proposed in the past for comparing SPD matrices, the two most widely-used measures are affine-invariant distance and log-Euclidean distance. This is because these two measures are true geodesic distances induced by Riemannian geometry. In this work, we focus on the log-Euclidean Riemannian geometry and propose a data-driven approach for learning Riemannian metrics geodesic distances for SPD matrices. We show that the geodesic distance learned using the proposed approach performs better than various existing distance measures when evaluated on face matching and clustering tasks." ] }
1601.01422
2232054157
Recently, covariance descriptors have received much attention as powerful representations of sets of points. In this research, we present a new metric learning algorithm for covariance descriptors based on the Dykstra algorithm, in which the current solution is projected onto a half-space at each iteration, and which runs in O(n^3) time. We empirically demonstrate that randomizing the order of half-spaces in our Dykstra-based algorithm significantly accelerates the convergence to the optimal solution. Furthermore, we show that our approach yields promising experimental results on pattern recognition tasks.
Yger and Sugiyama @cite_23 devised a different formulation of metric learning. They introduced the congruent transform @cite_10 and measure distances between the transformations of covariance descriptors. An objective function based on the kernel target alignment @cite_11 is employed to determine the transformation parameters. Compared to their algorithm, our algorithm has the capability to monitor the upper bound of the objective gap, i.e., the difference between the current objective and the minimum. This implies that the resultant solution is ensured to be @math -suboptimal if the algorithm's convergence criterion is set such that the upper bound of the objective gap is less than a very small number @math . Since Yger and Sugiyama @cite_23 employed a gradient method for learning the congruent transform, there is no way to know the objective gap.
{ "cite_N": [ "@cite_10", "@cite_23", "@cite_11" ], "mid": [ "", "1674203940", "2142387771" ], "abstract": [ "", "Metric learning has been shown to be highly effective to improve the performance of nearest neighbor classification. In this paper, we address the problem of metric learning for Symmetric Positive Definite (SPD) matrices such as covariance matrices, which arise in many real-world applications. Naively using standard Mahalanobis metric learning methods under the Euclidean geometry for SPD matrices is not appropriate, because the difference of SPD matrices can be a non-SPD matrix and thus the obtained solution can be uninterpretable. To cope with this problem, we propose to use a properly parameterized LogEuclidean distance and optimize the metric with respect to kernel-target alignment, which is a supervised criterion for kernel learning. Then the resulting non-trivial optimization problem is solved by utilizing the Riemannian geometry. Finally, we experimentally demonstrate the usefulness of our LogEuclidean metric learning algorithm on real-world classification tasks for EEG signals and texture patches.", "We introduce the notion of kernel-alignment, a measure of similarity between two kernel functions or between a kernel and a target function. This quantity captures the degree of agreement between a kernel and a given learning task, and has very natural interpretations in machine learning, leading also to simple algorithms for model selection and learning. We analyse its theoretical properties, proving that it is sharply concentrated around its expected value, and we discuss its relation with other standard measures of performance. Finally we describe some of the algorithms that can be obtained within this framework, giving experimental results showing that adapting the kernel to improve alignment on the labelled data significantly increases the alignment on the test set, giving improved classification accuracy. 
Hence, the approach provides a principled method of performing transduction." ] }
1601.01770
2227532222
In recent years, the increased need to house and process large volumes of data has prompted the need for distributed storage and querying systems. The growth of machine-readable RDF triples has prompted both industry and academia to develop new database systems, called NoSQL, with characteristics that differ from classical databases. Many of these systems compromise ACID properties for increased horizontal scalability and data availability. This thesis concerns the development and evaluation of a NoSQL triplestore. Triplestores are database management systems central to emerging technologies such as the Semantic Web and linked data. The evaluation spans several benchmarks, including the two most commonly used in triplestore evaluation, the Berlin SPARQL Benchmark, and the DBpedia benchmark, a query workload that operates an RDF representation of Wikipedia. Results reveal that the join algorithm used by the system plays a critical role in dictating query runtimes. Distributed graph databases must carefully optimize queries before generating MapReduce query plans as network traffic for large datasets can become prohibitive if the query is executed naively.
However, MapReduce is not suitable for all workloads. Several studies have concluded that for datasets less than 1 terabyte (TB), parallel database systems either perform on par or outperform MapReduce on a cluster of 100 nodes @cite_0 @cite_40 . Several factors affect the performance of MapReduce that do not apply to parallel databases @cite_6 . Despite these results, several techniques and systems have been created to solve large-scale data warehousing problems.
{ "cite_N": [ "@cite_0", "@cite_40", "@cite_6" ], "mid": [ "2044490410", "", "2010279913" ], "abstract": [ "MapReduce complements DBMSs since databases are not designed for extract-transform-load tasks, a MapReduce specialty.", "", "MapReduce has been widely used for large-scale data analysis in the Cloud. The system is well recognized for its elastic scalability and fine-grained fault tolerance although its performance has been noted to be suboptimal in the database context. According to a recent study [19], Hadoop, an open source implementation of MapReduce, is slower than two state-of-the-art parallel database systems in performing a variety of analytical tasks by a factor of 3.1 to 6.5. MapReduce can achieve better performance with the allocation of more compute nodes from the cloud to speed up computation; however, this approach of \"renting more nodes\" is not cost effective in a pay-as-you-go environment. Users desire an economical elastically scalable data processing system, and therefore, are interested in whether MapReduce can offer both elastic scalability and efficiency. In this paper, we conduct a performance study of MapReduce (Hadoop) on a 100-node cluster of Amazon EC2 with various levels of parallelism. We identify five design factors that affect the performance of Hadoop, and investigate alternative but known methods for each factor. We show that by carefully tuning these factors, the overall performance of Hadoop can be improved by a factor of 2.5 to 3.5 for the same benchmark used in [19], and is thus more comparable to that of parallel database systems. Our results show that it is therefore possible to build a cloud data processing system that is both elastically scalable and efficient." ] }
1601.01770
2227532222
In recent years, the increased need to house and process large volumes of data has prompted the need for distributed storage and querying systems. The growth of machine-readable RDF triples has prompted both industry and academia to develop new database systems, called NoSQL, with characteristics that differ from classical databases. Many of these systems compromise ACID properties for increased horizontal scalability and data availability. This thesis concerns the development and evaluation of a NoSQL triplestore. Triplestores are database management systems central to emerging technologies such as the Semantic Web and linked data. The evaluation spans several benchmarks, including the two most commonly used in triplestore evaluation, the Berlin SPARQL Benchmark, and the DBpedia benchmark, a query workload that operates an RDF representation of Wikipedia. Results reveal that the join algorithm used by the system plays a critical role in dictating query runtimes. Distributed graph databases must carefully optimize queries before generating MapReduce query plans as network traffic for large datasets can become prohibitive if the query is executed naively.
Several cloud triple stores exist today, with vast differences among them. Among these systems are 4store @cite_35 , CumulusRDF (Cassandra) @cite_20 , HBase @cite_31 , and Couchbase http://www.couchbase.com . In a comparative study of all of these triple stores (including our HBase-Hive implementation), systems using MapReduce introduced significant query overhead, while strictly in-memory stores were unable to accommodate large datasets @cite_15 . It was discovered that queries involving complex filters generally performed poorly on NoSQL systems. However, traditional relational database query optimization techniques work well in NoSQL environments.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_20", "@cite_15" ], "mid": [ "163708704", "", "2339861584", "1906114547" ], "abstract": [ "A common approach to providing persistent storage for RDF is to store statements in a three-column table in a relational database system. This is commonly referred to as a triple store. Each table row represents one RDF statement. For RDF graphs with frequent patterns, an alternative storage scheme is a property table. A property table comprises one column containing a statement subject plus one or more columns containing property values for that subject. In this approach, a single table row may store many RDF statements. This paper describes a property table design and implementation for Jena, an RDF Semantic Web toolkit. A design goal is to make Jena property tables look like normal relational database tables. This enables relational database tools such as loaders, report writers and query optimizers to work well with Jena property tables. This paper includes a basic performance analysis comparing a triple store with property tables for dataset load time and query response time. Depending on the application and data characteristics, Jena property tables may provide a performance advantage compared to a triple store for large RDF graphs with frequent patterns. The disadvantage is some loss in flexibility.", "", "Publishers of Linked Data require scalable storage and retrieval infras- tructure due to the size of datasets and potentially high rate of lookups on popular sites. In this paper we investigate the feasibility of using a distributed nested key- value store as an underlying storage component for a Linked Data server which provides functionality for serving Linked Data via HTTP lookups and in addi- tion offers single triple pattern lookups. We devise two storage schemes for our CumulusRDF system implemented on Apache Cassandra, an open-source nested key-value store. 
We compare the schemes on a subset of DBpedia and both syn- thetic workloads and workloads obtained from DBpedia's access logs. Results on a cluster of up to 8 machines indicate that CumulusRDF is competitive to state-of-the-art distributed RDF stores.", "Processing large volumes of RDF data requires sophisticated tools. In recent years, much effort was spent on optimizing native RDF stores and on repurposing relational query engines for large-scale RDF processing. Concurrently, a number of new data management systems–regrouped under the NoSQL (for \"not only SQL\") umbrella–rapidly rose to prominence and represent today a popular alternative to classical databases. Though NoSQL systems are increasingly used to manage RDF data, it is still difficult to grasp their key advantages and drawbacks in this context. This work is, to the best of our knowledge, the first systematic attempt at characterizing and comparing NoSQL stores for RDF processing. In the following, we describe four different NoSQL stores and compare their key characteristics when running standard RDF benchmarks on a popular cloud infrastructure using both single-machine and distributed deployments." ] }
1601.01556
2227439263
In the engineering and manufacturing domain, there is currently an atmosphere of departure to a new era of digitized production. In different regions, initiatives in these directions are known under different names, such as industrie du futur in France, industrial internet in the US or Industrie 4.0 in Germany. While the vision of digitizing production and manufacturing gained much traction lately, it is still relatively unclear how this vision can actually be implemented with concrete standards and technologies. Within the German Industry 4.0 initiative, the concept of an Administrative Shell was devised to respond to these requirements. The Administrative Shell is planned to provide a digital representation of all information being available about and from an object which can be a hardware system or a software platform. In this paper, we present an approach to developing such a digital representation based on semantic knowledge representation formalisms such as RDF, RDF Schema and OWL. We present our concept of a Semantic I4.0 Component which addresses the communication and comprehension challenges in Industry 4.0 scenarios using semantic technologies. Our approach is illustrated with a concrete example showing its benefits in a real-world use case.
Currently, there are some efforts discussing the need to bring more semantics and data-driven approaches to I4.0. @cite_22 presents guidelines aiming to help choose the level of semantic formalization for representing the different types of I4.0 projects. The crucial role of semantic technologies for mass customization is discussed in @cite_29 . This work recognizes semantic technologies as the glue connecting smart products, data, and services. Obitko @cite_21 describes the application of semantics to I4.0 from the Big Data perspective; the features of Big Data, as well as an ontology for sensor data, are presented. tab:RelatedI40ComponentApproaches provides a comparison of our approach with the related I4.0 component description approaches.
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_22" ], "mid": [ "25828490", "2185145144", "2104599944" ], "abstract": [ "We discuss the key role of semantic technologies for the mass customization of smart products, smart data, and smart services. It is shown that new semantic representation languages for the description of services and product memories like USDL and OMM provide the glue for integrating the Internet of Things, the Internet of Data, and the Internet of Services. Semantic service matchmaking in cyber-physical production systems is presented as a key enabler of the disruptive change in the production logic for Industrie 4.0. Finally, we discuss the platform stack for mass customization and show how a customized smart product can serve as a platform for personalized smart services that are based on smart data provided by the connected product.", "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.", "Under the context of Industrie 4.0 (I4.0), future production systems provide balanced operations between manufacturing flexibility and efficiency, realized in an autonomous, horizontal, and decentralized item-level production control framework. Structured interoperability via precise formulations on an appropriate degree is crucial to achieve engineering efficiency in the system life cycle. 
However, selecting the degree of formalization can be challenging, as it crucially depends on the desired common understanding (semantic degree) between multiple parties. In this paper, we categorize different semantic degrees and map a set of technologies in industrial automation to their associated degrees. Furthermore, we created guidelines to assist engineers selecting appropriate semantic degrees in their design. We applied these guidelines on publically available scenarios to examine the validity of the approach, and identified semantic elements over internally developed use cases targeting semantically-enabled plug-and-produce." ] }
1601.01556
2227439263
In the engineering and manufacturing domain, there is currently an atmosphere of departure to a new era of digitized production. In different regions, initiatives in these directions are known under different names, such as industrie du futur in France, industrial internet in the US or Industrie 4.0 in Germany. While the vision of digitizing production and manufacturing gained much traction lately, it is still relatively unclear how this vision can actually be implemented with concrete standards and technologies. Within the German Industry 4.0 initiative, the concept of an Administrative Shell was devised to respond to these requirements. The Administrative Shell is planned to provide a digital representation of all information being available about and from an object which can be a hardware system or a software platform. In this paper, we present an approach to developing such a digital representation based on semantic knowledge representation formalisms such as RDF, RDF Schema and OWL. We present our concept of a Semantic I4.0 Component which addresses the communication and comprehension challenges in Industry 4.0 scenarios using semantic technologies. Our approach is illustrated with a concrete example showing its benefits in a real-world use case.
The Object Memory Model (OMM) is an XML-based format which allows for modeling the information about individual physical elements @cite_17 . In this work, the memory of the elements is partitioned to include different types of data regarding identification, name, etc. The idea of this approach was to bring a semantic layer to the physical components, but it still suffers from the intrinsic limitations of XML. However, it is envisioned that elements in the OMM (so-called blocks) contain RDF and OWL payload data. Extending the concept of OMM, @cite_24 presents a framework for the representation, management, and utilization of digital object memories. The idea of bringing semantic descriptions of physical elements by combining OMM and a server realization has been pursued by @cite_6 . Nevertheless, this work focuses on the identification of the elements and still relies on the above-mentioned limitations of the OMM format.
{ "cite_N": [ "@cite_24", "@cite_6", "@cite_17" ], "mid": [ "2125868239", "38699895", "2902370669" ], "abstract": [ "In this paper we address the research question, how an infrastructure for digital object memories (DOMe) has to be designed. Primary goal is to identify and develop components and processes of an architecture concept particularly suited to represent, manage, and use digital object memories. In order to leverage acceptance and deployment of this novel technology, the envisioned infrastructure has to include tools for integration of new systems, and for migration with existing systems. Special requirements to object memories result from the heterogeneity of data in so-called open-loop scenarios. On the one hand, it has to be flexible enough to handle different data types. On the other hand, a simple and structured data access is required. Depending on the application scenario, the latter one needs to be complemented with concepts for a rights- and role-based access and version control. We present a framework based on a structuring data model and a set of tools to create new and to migrate existing applications to digital object memories.", "The SemProM format was basically designed for on-product RFID-based memories. Furthermore, some use cases demand centralized storage or data backups that cannot be achieved with on-product storage. For these cases (e.g., cheap products with very small labels, very large memories), a server-based solution might be more suitable. We developed the Object Memory Server (OMS) as an index server for product memories, based on the same set of metadata as the block format. The actual payload is outsourced to servers in the web. The URL used for accessing an OMS memory can be stored in simple and cheap RFID labels. Due to the large processing power of a server-based approach, the OMS can handle all SemProM incarnations, ranging from Reference SemProMs to Smart SemProMs. 
The conceptual ideas of the OMS were also transformed to provide a server-based solution for memories based on the W3C XG OMM format.", "This report summarizes the findings of the Object Memory Modeling Incubator Group. An XML-based object memory format is introduced, which allows for modeling of events or other information about individual physical artifacts, and which is designed to support data storage of those logs on so-called \"smart labels\" attached to the physical artifact. The group makes several recommendations concerning the future evolution of the object memory format at the W3C; these address connections to provenance modeling, the embedding of object memories in web pages, and potential benefits of an object memory API." ] }
1601.01556
2227439263
In the engineering and manufacturing domain, there is currently an atmosphere of departure to a new era of digitized production. In different regions, initiatives in these directions are known under different names, such as industrie du futur in France, industrial internet in the US or Industrie 4.0 in Germany. While the vision of digitizing production and manufacturing gained much traction lately, it is still relatively unclear how this vision can actually be implemented with concrete standards and technologies. Within the German Industry 4.0 initiative, the concept of an Administrative Shell was devised to respond to these requirements. The Administrative Shell is planned to provide a digital representation of all information being available about and from an object which can be a hardware system or a software platform. In this paper, we present an approach to developing such a digital representation based on semantic knowledge representation formalisms such as RDF, RDF Schema and OWL. We present our concept of a Semantic I4.0 Component which addresses the communication and comprehension challenges in Industry 4.0 scenarios using semantic technologies. Our approach is illustrated with a concrete example showing its benefits in a real-world use case.
The idea of bringing semantic descriptions of physical elements by combining OMM and a server realization has been pursued by @cite_6 . Nevertheless, this work focuses on the identification of the elements and still relies on the above-mentioned limitations of the OMM format.
{ "cite_N": [ "@cite_6" ], "mid": [ "38699895" ], "abstract": [ "The SemProM format was basically designed for on-product RFID-based memories. Furthermore, some use cases demand centralized storage or data backups that cannot be achieved with on-product storage. For these cases (e.g., cheap products with very small labels, very large memories), a server-based solution might be more suitable. We developed the Object Memory Server (OMS) as an index server for product memories, based on the same set of metadata as the block format. The actual payload is outsourced to servers in the web. The URL used for accessing an OMS memory can be stored in simple and cheap RFID labels. Due to the large processing power of a server-based approach, the OMS can handle all SemProM incarnations, ranging from Reference SemProMs to Smart SemProMs. The conceptual ideas of the OMS were also transformed to provide a server-based solution for memories based on the W3C XG OMM format." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
In a cop-win graph with @math vertices, the cop can win in at most @math moves. This result is implicit in the original papers, but a clear exposition can be found in the book of Bonato and Nowakowski [Section 2.2] Bonato-book . For a fixed number of cops, the number of cop moves needed to capture a robber in a given graph can be computed in polynomial time @cite_27 , but the problem becomes NP-hard in general @cite_8 .
{ "cite_N": [ "@cite_27", "@cite_8" ], "mid": [ "2012220350", "2035417355" ], "abstract": [ "We give an algorithmic characterisation of finite cop-win digraphs. The case of k>1 cops and k>=l>=1 robbers is then reduced to the one cop case. Similar characterisations are also possible in many situations where the movements of the cops and or the robbers are somehow restricted.", "We consider the game of Cops and Robbers played on finite and countably infinite connected graphs. The length of games is considered on cop-win graphs, leading to a new parameter, the capture time of a graph. While the capture time of a cop-win graph on n vertices is bounded above by n-3, half the number of vertices is sufficient for a large class of graphs including chordal graphs. Examples are given of cop-win graphs which have unique corners and have capture time within a small additive constant of the number of vertices. We consider the ratio of the capture time to the number of vertices, and extend this notion of capture time density to infinite graphs. For the infinite random graph, the capture time density can be any real number in [0,1]. We also consider the capture time when more than one cop is required to win. While the capture time can be calculated by a polynomial algorithm if the number k of cops is fixed, it is NP-complete to decide whether k cops can capture the robber in no more than t moves for every fixed t." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
Pursuit-Evasion. In the cops and robbers game, space is discrete. For continuous spaces, a main focus has been on polygonal regions, i.e., a region bounded by a polygon with polygonal holes removed. The seminal 1999 paper by @cite_4 concentrated on ``visibility-based'' pursuit-evasion, where the evader is arbitrarily fast and the pursuers do not know the evader's location and must search the region until they make line-of-sight contact. This models the scenario of agents searching the floor-plan of a building to find a smart, fast intruder that can be zapped from a distance. @cite_4 showed that @math pursuers are needed in a simple polygon, and more generally they bounded the number of pursuers in terms of the number of holes in the region. If the pursuers have the power to make random choices, @cite_23 showed that only one pursuer is needed for a simple polygon. For a survey on pursuit-evasion in polygonal regions, see @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_23" ], "mid": [ "40650588", "2163841363", "1976698879" ], "abstract": [ "This paper surveys recent results in pursuit-evasion and autonomous search relevant to applications in mobile robotics. We provide a taxonomy of search problems that highlights the differences resulting from varying assumptions on the searchers, targets, and the environment. We then list a number of fundamental results in the areas of pursuit-evasion and probabilistic search, and we discuss field implementations on mobile robotic systems. In addition, we highlight current open problems in the area and explore avenues for future work.", "In this survey article, we present open problems and conjectures on visibility graphs of points, segments, and polygons along with necessary backgrounds for understanding them.", "This paper contains two main results. First, we revisit the well-known visibility-based pursuit-evasion problem, and show that in contrast to deterministic strategies, a single pursuer can locate an unpredictable evader in any simply connected polygonal environment, using a randomized strategy. The evader can be arbitrarily faster than the pursuer, and it may know the position of the pursuer at all times, but it does not have prior knowledge of the random decisions made by the pursuer. Second, using the randomized algorithm, together with the solution to a problem called the \"lion and man problem\" as subroutines, we present a strategy for two pursuers (one of which is at least as fast as the evader) to quickly capture an evader in a simply connected polygonal environment. We show how this strategy can be extended to obtain a strategy for a polygonal room with a door, two pursuers who have only line-of-sight communication, and a single pursuer (at the expense of increased capture time)." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
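The dismantlability property behind the cop-win claim above admits a simple algorithmic test: a finite graph is cop-win iff it can be reduced to a single vertex by repeatedly deleting a vertex u "dominated" by a neighbor v, meaning the closed neighborhood of u is contained in that of v. A hedged sketch (the adjacency representation and function name are our illustration, not the paper's code):

```python
def is_dismantlable(adj):
    """adj: dict vertex -> set of neighbors (undirected, no self-loops).
    Returns True iff the graph is dismantlable, i.e., cop-win."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}  # work on a copy
    while len(adj) > 1:
        removed = None
        for u in adj:
            closed_u = adj[u] | {u}                  # closed neighborhood of u
            for v in adj[u]:
                if closed_u <= adj[v] | {v}:         # u is dominated by v
                    removed = u
                    break
            if removed is not None:
                break
        if removed is None:
            return False                             # no dominated vertex: robber wins
        for w in adj.pop(removed):                   # delete u, keep graph consistent
            adj[w].discard(removed)
    return True

# A path is cop-win; a 4-cycle is not.
path = {0: {1}, 1: {0, 2}, 2: {1}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_dismantlable(path), is_dismantlable(c4))   # True False
```

The greedy order in which dominated vertices are removed does not matter; any dismantling order witnesses the cop's winning strategy.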
There is also a vast literature on graph-based pursuit-evasion games, where players move continuously and have no knowledge of other players' positions. The terms ``graph searching'' and ``graph sweeping'' are used, and the concept is related to tree-width. For surveys see @cite_0 @cite_20 .
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "1561296992", "2106518318" ], "abstract": [ "This papers surveys some of the work done on trying to capture an intruder in a graph. If the intruder may be located only at vertices, the term searching is employed. If the intruder may be located at vertices or along edges, the term sweeping is employed. There are a wide variety of applications for searching and sweeping. Old results, new results and active research directions are discussed.", "Graph searching encompasses a wide variety of combinatorial problems related to the problem of capturing a fugitive residing in a graph using the minimum number of searchers. In this annotated bibliography, we give an elementary classification of problems and results related to graph searching and provide a source of bibliographical references on this field." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
Curved Regions. Traditional algorithms in computational geometry deal with points and piecewise linear subspaces (lines, segments, polygons, etc.). The study of algorithms for curved inputs was initiated by Dobkin and Souvaine @cite_14 , who defined the widely-used splinegon model. A splinegon is a simply connected region formed by replacing each edge of a simple polygon by a curve of constant complexity such that the area bounded by the curve and the edge it replaces is convex. The standard assumption is that it takes constant time to perform primitive operations such as finding the intersection of a line with a splinegon edge or computing the common tangents of two splinegon edges. This model is widely used as the standard model for curved planar environments.
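As a rough illustration of the constant-time edge primitives the splinegon model assumes, here is a hedged Python sketch using circular arcs as one convenient constant-complexity curve family (the class, method names, and the choice of arcs are our assumptions, not from the cited work):

```python
import math
from dataclasses import dataclass

@dataclass
class ArcEdge:
    """One splinegon edge: a circular arc of O(1) description complexity."""
    cx: float; cy: float; r: float   # circle center and radius
    a0: float; a1: float             # start/end angles (radians, ccw)

    def endpoints(self):
        return ((self.cx + self.r * math.cos(self.a0),
                 self.cy + self.r * math.sin(self.a0)),
                (self.cx + self.r * math.cos(self.a1),
                 self.cy + self.r * math.sin(self.a1)))

    def intersect_vertical_line(self, x):
        """O(1) primitive: intersections of the arc with the line X = x."""
        dx = x - self.cx
        if abs(dx) > self.r:
            return []
        dy = math.sqrt(self.r * self.r - dx * dx)
        pts = []
        for y in (self.cy + dy, self.cy - dy):
            ang = math.atan2(y - self.cy, x - self.cx) % (2 * math.pi)
            lo, hi = self.a0 % (2 * math.pi), self.a1 % (2 * math.pi)
            inside = lo <= ang <= hi if lo <= hi else (ang >= lo or ang <= hi)
            if inside:
                pts.append((x, y))
        return pts

# A splinegon is then a cyclic list of such edges whose endpoints chain up.
unit_half = ArcEdge(0.0, 0.0, 1.0, 0.0, math.pi)   # upper unit semicircle
print(unit_half.intersect_vertical_line(0.0))      # [(0.0, 1.0)]
```

Any curve family supporting such O(1) intersection and tangent queries fits the model equally well.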
{ "cite_N": [ "@cite_14" ], "mid": [ "2040904149" ], "abstract": [ "We extend the results of straight-edged computational geometry into the curved world by defining a pair of new geometric objects, thesplinegon and thesplinehedron, as curved generalizations of the polygon and polyhedron. We identify three distinct techniques for extending polygon algorithms to splinegons: the carrier polygon approach, the bounding polygon approach, and the direct approach. By these methods, large groups of algorithms for polygons can be extended as a class to encompass these new objects. In general, if the original polygon algorithm has time complexityO(f(n)), the comparable splinegon algorithm has time complexity at worstO(Kf(n)) whereK represents a constant number of calls to members of a set of primitive procedures on individual curved edges. These techniques also apply to splinehedra. In addition to presenting the general methods, we state and prove a series of specific theorems. Problem areas include convex hull computation, diameter computation, intersection detection and computation, kernel computation, monotonicity testing, and monotone decomposition, among others." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
Melissaratos and Souvaine @cite_24 gave a linear-time algorithm to find a shortest path between two points in a splinegon. Their algorithm is similar to shortest-path finding in a simple polygon but uses a trapezoid decomposition in place of polygon triangulation. For finding shortest paths among curved obstacles (the splinegon analogue of a polygonal domain) there is recent work @cite_3 , as well as more efficient algorithms when the curves are more specialized @cite_18 @cite_1 .
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_1", "@cite_3" ], "mid": [ "2063244634", "1983422667", "2130647774", "2022670034" ], "abstract": [ "The goal of this paper is to show that the concept of the shortest path inside a polygonal region contributes to the design of efficient algorithms for certain geometric optimization problems involving simple polygons: computing optimum separators, maximum area or perimeter-inscribed triangles, a minimum area circumscribed concave quadrilateral, or a maximum area contained triangle. The structure for the algorithms presented is as follows: (a) decompose the initial problem into a low-degree polynomial number of optimization problems; (b) solve each individual subproblem in constant time using standard methods of calculus, basic methods of numerical analysis, or linear programming. These same optimization techniques can be applied to splinegons (curved polygons). First a decomposition technique for curved polygons is developed; this technique is substituted for triangulation in creating equally efficient curved versions of the algorithms for the shortest-path tree, ray-shooting, and two-point shortest path...", "Multiple objects in the plane are called pseudodisks if the boundaries of any two of them intersect transversely at most twice. Given a set of @math (possibly intersecting) convex pseudodisks of @math complexity each and two points @math and @math in the plane, we present an efficient algorithm for computing a shortest @math -to- @math path avoiding the pseudodisks. After the union of the pseudodisks is computed, which can be done in @math randomized time or @math deterministic time, our algorithm runs in @math deterministic time, where @math is the size of the extended visibility graph of the union of the pseudodisks. Note that @math in the worst case. 
In over two decades, the previously best algorithms for this problem have not improved on the bound of @math time, even when all the pseudodisks are pairwise disjoint disks. Our technique is also applicable to a motion planning problem of finding a shortest path to translate a convex object in the plane from one location to anot...", "We propose an algorithm for the problem of computing shortest paths among curved obstacles in the plane. If the obstacles have O(n) description complexity, then the algorithm runs in O(n log n) time plus a term dependent on the properties of the boundary arcs. Specifically, if the arcs allow a certain kind of bisector intersection to be computed in constant time, or even in O(log n) time, then the running time of the overall algorithm is O(n log n). If the arcs support only constant-time tangent, intersection, and length queries, as is customarily assumed, then the algorithm computes an approximate shortest path, with relative error e, in time O(n log n + n log 1 e). In fact, the algorithm computes an approximate shortest path map, a data structure with O(n log n) size, that allows it to report the (approximate) length of a shortest path from a fixed source point to any query point in the plane in O(log n) time.", "In this paper, we study the problem of finding Euclidean shortest paths among curved obstacles in the plane. We model curved obstacles as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge, and each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of h pairwise disjoint splinegons with a total of n vertices, we present an algorithm that can compute a shortest path from s to t avoiding the splinegons in O(n+hlogeh+k) time for any e>0, where k is a parameter sensitive to the input splinegons and k=O(h2). 
If all splinegons are convex, a common tangent of two splinegons is \"free\" if it does not intersect the interior of any splingegon; our techniques yield an output sensitive algorithm for computing all free common tangents of the h splinegons in O(n+hlogh+k) time and O(n) working space, where k is the number of all free common tangents." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
Traditional algorithms in computational geometry deal with points and piecewise linear subspaces (lines, segments, polygons, etc.), but real applications must handle environments with curved boundaries, and the complexity of the curved boundary strongly affects the complexity of the algorithms. Several studies address such problems directly on curves: shortest paths among special curves such as disks and pseudodisks are studied in @cite_18 @cite_28 , while some heuristic approaches approximate the curved boundaries by polygons and apply algorithms designed for polygonal domains @cite_1 . The study of algorithms for more general curved inputs was initiated by Dobkin and Souvaine in 1990 @cite_14 . They modeled curved boundaries by a finite set of convex splines, calling the resulting regions ``splinegons''. Each spline, like a line segment, has @math complexity, so primitive operations such as finding the intersection of a line with a spline or the common tangents of two splines can be computed in constant time. This model is widely used as a standard model for curved planar environments.
{ "cite_N": [ "@cite_28", "@cite_18", "@cite_14", "@cite_1" ], "mid": [ "", "1983422667", "2040904149", "2130647774" ], "abstract": [ "", "Multiple objects in the plane are called pseudodisks if the boundaries of any two of them intersect transversely at most twice. Given a set of @math (possibly intersecting) convex pseudodisks of @math complexity each and two points @math and @math in the plane, we present an efficient algorithm for computing a shortest @math -to- @math path avoiding the pseudodisks. After the union of the pseudodisks is computed, which can be done in @math randomized time or @math deterministic time, our algorithm runs in @math deterministic time, where @math is the size of the extended visibility graph of the union of the pseudodisks. Note that @math in the worst case. In over two decades, the previously best algorithms for this problem have not improved on the bound of @math time, even when all the pseudodisks are pairwise disjoint disks. Our technique is also applicable to a motion planning problem of finding a shortest path to translate a convex object in the plane from one location to anot...", "We extend the results of straight-edged computational geometry into the curved world by defining a pair of new geometric objects, thesplinegon and thesplinehedron, as curved generalizations of the polygon and polyhedron. We identify three distinct techniques for extending polygon algorithms to splinegons: the carrier polygon approach, the bounding polygon approach, and the direct approach. By these methods, large groups of algorithms for polygons can be extended as a class to encompass these new objects. In general, if the original polygon algorithm has time complexityO(f(n)), the comparable splinegon algorithm has time complexity at worstO(Kf(n)) whereK represents a constant number of calls to members of a set of primitive procedures on individual curved edges. These techniques also apply to splinehedra. 
In addition to presenting the general methods, we state and prove a series of specific theorems. Problem areas include convex hull computation, diameter computation, intersection detection and computation, kernel computation, monotonicity testing, and monotone decomposition, among others.", "We propose an algorithm for the problem of computing shortest paths among curved obstacles in the plane. If the obstacles have O(n) description complexity, then the algorithm runs in O(n log n) time plus a term dependent on the properties of the boundary arcs. Specifically, if the arcs allow a certain kind of bisector intersection to be computed in constant time, or even in O(log n) time, then the running time of the overall algorithm is O(n log n). If the arcs support only constant-time tangent, intersection, and length queries, as is customarily assumed, then the algorithm computes an approximate shortest path, with relative error e, in time O(n log n + n log 1 e). In fact, the algorithm computes an approximate shortest path map, a data structure with O(n log n) size, that allows it to report the (approximate) length of a shortest path from a fixed source point to any query point in the plane in O(log n) time." ] }
1601.01298
2230440109
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
Melissaratos and Souvaine developed a linear-time algorithm to find a shortest path between two points in a splinegon @cite_24 . They introduced a trapezoid decomposition of the curved region, computable in @math time, as a substitute for polygon triangulation; using this decomposition, a shortest path in a splinegon can be computed in @math time by an algorithm similar to shortest-path finding in a simple polygon. D. Z. Chen and H. Wang @cite_3 developed an @math -time algorithm to find a shortest path in a splinegon domain with @math splines and @math vertices, where @math is the number of all common tangents inside the region and is in @math ; the algorithm works for any arbitrarily small @math . They also proposed an @math -time algorithm to find all common tangents in a splinegon.
{ "cite_N": [ "@cite_24", "@cite_3" ], "mid": [ "2063244634", "2022670034" ], "abstract": [ "The goal of this paper is to show that the concept of the shortest path inside a polygonal region contributes to the design of efficient algorithms for certain geometric optimization problems involving simple polygons: computing optimum separators, maximum area or perimeter-inscribed triangles, a minimum area circumscribed concave quadrilateral, or a maximum area contained triangle. The structure for the algorithms presented is as follows: (a) decompose the initial problem into a low-degree polynomial number of optimization problems; (b) solve each individual subproblem in constant time using standard methods of calculus, basic methods of numerical analysis, or linear programming. These same optimization techniques can be applied to splinegons (curved polygons). First a decomposition technique for curved polygons is developed; this technique is substituted for triangulation in creating equally efficient curved versions of the algorithms for the shortest-path tree, ray-shooting, and two-point shortest path...", "In this paper, we study the problem of finding Euclidean shortest paths among curved obstacles in the plane. We model curved obstacles as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge, and each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of h pairwise disjoint splinegons with a total of n vertices, we present an algorithm that can compute a shortest path from s to t avoiding the splinegons in O(n+hlogeh+k) time for any e>0, where k is a parameter sensitive to the input splinegons and k=O(h2). 
If all splinegons are convex, a common tangent of two splinegons is \"free\" if it does not intersect the interior of any splingegon; our techniques yield an output sensitive algorithm for computing all free common tangents of the h splinegons in O(n+hlogh+k) time and O(n) working space, where k is the number of all free common tangents." ] }
1601.01165
2223263137
Dedispersion, the removal of deleterious smearing of impulsive signals by the interstellar matter, is one of the most intensive processing steps in any radio survey for pulsars and fast transients. We here present a study of the parallelization of this algorithm on many-core accelerators, including GPUs from AMD and NVIDIA, and the Intel Xeon Phi. We find that dedispersion is inherently memory-bound. Even in a perfect scenario, hardware limitations keep the arithmetic intensity low, thus limiting performance. We next exploit auto-tuning to adapt dedispersion to different accelerators, observations, and even telescopes. We demonstrate that the optimal settings differ between observational setups, and that auto-tuning significantly improves performance. This impacts time-domain surveys from Apertif to SKA.
In the literature, auto-tuning is considered a viable technique to achieve performance that is both high and portable. In particular, @cite_8 show that it is possible to use auto-tuning to improve the performance of even highly tuned algorithms, and @cite_10 affirm that application-specific auto-tuning is the most practical way to achieve high performance on multi-core systems. Highly relevant here is the work of @cite_5 , with whom we agree that auto-tuning can be used as a performance portability tool, especially with OpenCL. Another attempt at achieving performance portability on heterogeneous systems has been made by @cite_6 ; while their approach focuses on the compiler, it still relies on auto-tuning to map algorithms to heterogeneous platforms in an optimal way. In recent years, we have been working on parallelizing and implementing radio astronomy kernels on multi- and many-core platforms, and on using auto-tuning to achieve high performance in applications like the LOFAR beam former. What makes our current work different is that we do not only use auto-tuning to achieve high performance, but also measure its impact, and show that the optimal configurations are difficult to guess without thorough tuning.
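The auto-tuning idea itself can be sketched in a few lines: benchmark every configuration in a small parameter space and keep the fastest. The toy kernel and parameter names below are our illustration, not the paper's tuner:

```python
import itertools
import time

def kernel(data, block, unroll):
    # Toy stand-in for a tunable compute kernel: sum `data` in chunks of
    # `block` elements, processing `unroll` chunks per outer iteration.
    total = 0.0
    step = block * unroll
    for i in range(0, len(data) - step + 1, step):
        for u in range(unroll):
            total += sum(data[i + u * block : i + (u + 1) * block])
    return total

def autotune(data, blocks=(8, 32, 128), unrolls=(1, 2, 4), reps=3):
    # Exhaustive search over the configuration space; each configuration is
    # timed `reps` times and the minimum is taken to reduce timing noise.
    best_cfg, best_time = None, float("inf")
    for block, unroll in itertools.product(blocks, unrolls):
        t = float("inf")
        for _ in range(reps):
            t0 = time.perf_counter()
            kernel(data, block, unroll)
            t = min(t, time.perf_counter() - t0)
        if t < best_time:
            best_cfg, best_time = (block, unroll), t
    return best_cfg, best_time

cfg, secs = autotune([1.0] * 4096)
```

Real tuners prune or model the search space instead of exhausting it, but the principle, empirical measurement instead of guessing, is the same.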
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_6", "@cite_8" ], "mid": [ "2107911628", "2154786353", "2115148068", "1863336885" ], "abstract": [ "In this work, we evaluate OpenCL as a programming tool for developing performance-portable applications for GPGPU. While the Khronos group developed OpenCL with programming portability in mind, performance is not necessarily portable. OpenCL has required performance-impacting initializations that do not exist in other languages such as CUDA. Understanding these implications allows us to provide a single library with decent performance on a variety of platforms. We choose triangular solver (TRSM) and matrix multiplication (GEMM) as representative level 3 BLAS routines to implement in OpenCL. We profile TRSM to get the time distribution of the OpenCL runtime system. We then provide tuned GEMM kernels for both the NVIDIA Tesla C2050 and ATI Radeon 5870, the latest GPUs offered by both companies. We explore the benefits of using the texture cache, the performance ramifications of copying data into images, discrepancies in the OpenCL and CUDA compilers' optimizations, and other issues that affect the performance. Experimental results show that nearly 50 of peak performance can be obtained in GEMM on both GPUs in OpenCL. We also show that the performance of these kernels is not highly portable. Finally, we propose the use of auto-tuning to better explore these kernels' parameter space using search harness.", "Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations --- a class of algorithms at the heart of many structured grid codes, including PDF solvers. 
We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural tradeoffs of emerging multicore designs and their implications on scientific algorithm development.", "Trends in both consumer and high performance computing are bringing not only more cores, but also increased heterogeneity among the computational resources within a single machine. In many machines, one of the greatest computational resources is now their graphics coprocessors (GPUs), not just their primary CPUs. But GPU programming and memory models differ dramatically from conventional CPUs, and the relative performance characteristics of the different processors vary widely between machines. Different processors within a system often perform best with different algorithms and memory usage patterns, and achieving the best overall performance may require mapping portions of programs across all types of resources in the machine. To address the problem of efficiently programming machines with increasingly heterogeneous computational resources, we propose a programming model in which the best mapping of programs to processors and memories is determined empirically. Programs define choices in how their individual algorithms may work, and the compiler generates further choices in how they can map to CPU and GPU processors and memory systems. 
These choices are given to an empirical autotuning framework that allows the space of possible implementations to be searched at installation time. The rich choice space allows the autotuner to construct poly-algorithms that combine many different algorithmic techniques, using both the CPU and the GPU, to obtain better performance than any one technique alone. Experimental results show that algorithmic changes, and the varied use of both CPUs and GPUs, are necessary to obtain up to a 16.5x speedup over using a single program configuration for all architectures.", "The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM. However, the current best GEMM performance, e.g. of up to 375 GFlop s in single precision and of up to 75 GFlop s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even backward engineering to understand some undocumented insides about the architecture that have been of key importance in the development. In this paper, we describe some GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, the existing ideas. Auto-tuning, as we show in this paper, is a very practical solution where in addition to getting an easy portability, we can often get substantial speedups even on current GPUs (e.g. up to 27 in certain cases for both single and double precision GEMMs on the GTX 280)." ] }
1601.01165
2223263137
Dedispersion, the removal of deleterious smearing of impulsive signals by the interstellar matter, is one of the most intensive processing steps in any radio survey for pulsars and fast transients. We here present a study of the parallelization of this algorithm on many-core accelerators, including GPUs from AMD and NVIDIA, and the Intel Xeon Phi. We find that dedispersion is inherently memory-bound. Even in a perfect scenario, hardware limitations keep the arithmetic intensity low, thus limiting performance. We next exploit auto-tuning to adapt dedispersion to different accelerators, observations, and even telescopes. We demonstrate that the optimal settings differ between observational setups, and that auto-tuning significantly improves performance. This impacts time-domain surveys from Apertif to SKA.
While there are no previous attempts at auto-tuning dedispersion for many-cores, there are a few previous GPU implementations documented in the literature. First, in @cite_0 dedispersion is listed as a possible candidate for acceleration, together with other astronomy algorithms. While we agree that dedispersion is a potentially good candidate for many-core acceleration because of its inherently parallel structure, we believe their performance analysis to be too optimistic, and their AI estimate to be unrealistically high. In fact, we showed in this paper that dedispersion's AI is low in all realistic scenarios, and that the algorithm is inherently memory-bound. The same authors implemented, in a follow-up paper, dedispersion for NVIDIA GPUs, using CUDA as their implementation framework. However, we do not completely agree with the performance results presented there, for two reasons: first, they do not completely exploit data reuse, and we have shown here how important data reuse is for performance; second, part of their results are not experimental, but derived from performance models.
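To make the memory-bound nature of dedispersion concrete, here is a hedged pure-Python reference of brute-force incoherent dedispersion (our illustration, not the authors' GPU code; the channel ordering and all names are assumptions). Each output sample performs one addition per loaded input sample, which is why the arithmetic intensity is inherently low:

```python
def delay_samples(dm, freq_mhz, ref_mhz, tsamp_s):
    # Cold-plasma dispersion delay relative to the reference (highest)
    # frequency, converted to whole samples; 4.15e3 is the standard constant
    # for MHz/seconds units. E.g. delay_samples(10.0, 1400.0, 1500.0, 1e-3) == 3.
    return round(4.15e3 * dm * (freq_mhz ** -2 - ref_mhz ** -2) / tsamp_s)

def dedisperse(data, freqs_mhz, dm_trials, tsamp_s):
    """data[chan][t] -> series[dm][t]. Assumes data rows are ordered like
    freqs_mhz. Pure-Python reference, O(DM trials x channels x time)."""
    ref = max(freqs_mhz)
    max_delay = delay_samples(max(dm_trials), min(freqs_mhz), ref, tsamp_s)
    n_out = len(data[0]) - max(max_delay, 0)
    out = []
    for dm in dm_trials:
        delays = [delay_samples(dm, f, ref, tsamp_s) for f in freqs_mhz]
        # One add per loaded sample: the kernel streams the whole input once
        # per DM trial, so performance is bounded by memory bandwidth.
        out.append([sum(ch[t + d] for ch, d in zip(data, delays))
                    for t in range(n_out)])
    return out
```

Reusing loaded samples across nearby DM trials, the data reuse discussed above, is what lifts real implementations above this naive streaming bound.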
{ "cite_N": [ "@cite_0" ], "mid": [ "1858911298" ], "abstract": [ "Astronomy depends on ever-increasing computing power. Processor clock rates have plateaued, and increased performance is now appearing in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics processing units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively parallel hardware architectures. We present a generalized approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: H¨ ogbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power." ] }
1601.01134
2229952817
We describe large classes of compact self-adjoint Hankel operators whose eigenvalues have power asymptotics and obtain explicit expressions for the coefficient in front of the leading term. The results are stated both in the discrete and continuous representations for Hankel operators. We also elucidate two key principles underpinning the proof of such asymptotic relations. We call them the localization principle and the symmetry principle . The localization principle says that disjoint components of the singular support of the symbol of a Hankel operator make independent contributions into the asymptotics of eigenvalues. The symmetry principle says that if the singular support of a symbol does not contain the points @math and @math in the discrete case (or the points @math and @math in the continuous case), then the spectrum of the corresponding Hankel operator is asymptotically symmetric with respect to the reflection around zero.
We finally mention the fundamental paper @cite_8 where the spectra of all bounded self-adjoint Hankel operators were characterized in terms of a certain balance of their positive and negative parts. In particular, the spectra of compact Hankel operators were characterized by the two conditions: (i) the multiplicities of the eigenvalues @math and @math do not differ by more than one; (ii) if the point @math is an eigenvalue, then necessarily it has infinite multiplicity. The first of these conditions is similar in spirit to the asymptotic formulas , for the eigenvalues, but of course neither of these two results implies the other one.
{ "cite_N": [ "@cite_8" ], "mid": [ "1976047242" ], "abstract": [ "In §8.5 we have considered a geometric problem in the theory of stationary Gaussian processes and we have reduced this problem to the problem of the description of the bounded linear operators on Hilbert space that are unitarily equivalent to moduli of Hankel operators. In this chapter we are going to solve the latter problem, which in turn will lead to a solution of the above geometric problem in prediction theory." ] }
1601.01008
2229997123
The idea behind universal coating is to have a thin layer of a specific substance covering an object of any shape so that one can measure a certain condition (like temperature or cracks) at any spot on the surface of the object without requiring direct access to that spot. We study the universal coating problem in the context of self-organizing programmable matter consisting of simple computational elements, called particles, that can establish and release bonds and can actively move in a self-organized way. Based on that matter, we present a worst-case work-optimal universal coating algorithm that uniformly coats any object of arbitrary shape and size that allows a uniform coating. Our particles are anonymous, do not have any global information, have constant-size memory, and utilize only local interactions.
Many approaches have already been proposed that can potentially be used for smart coating. One can distinguish between active and passive systems. In passive systems the particles either do not have any intelligence at all (but just move and bond based on their structural properties or due to chemical interactions with the environment), or they have limited computational capabilities but cannot control their movements. Examples of research on passive systems are DNA self-assembly systems (see, e.g., the surveys in @cite_11 @cite_2 @cite_21 ), population protocols @cite_9 , and slime molds @cite_20 @cite_16 . We will not describe these models in detail since we are focusing on active systems. In active systems, computational particles can control the way they act and move in order to solve a specific task. Robotic swarms and modular robotic systems are examples of active programmable matter systems.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_2", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2706788079", "2050994027", "2079425200", "1565089780", "2138415668", "2044709436" ], "abstract": [ "The computational power of networks of small resource-limited mobile agents is explored. Two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size are defined. With a fairness condition on interactions, the concept of stable computation of a function or predicate is defined. Protocols are given that stably compute any predicate in the class definable by formulas of Presburger arithmetic, which includes Boolean combinations of threshold-k, majority, and equivalence modulo m. All stably computable predicates are shown to be in NL. Assuming uniform random sampling of interacting pairs yields the model of conjugating automata. Any counter machine with O (1) counters of capacity O (n) can be simulated with high probability by a conjugating automaton in a population of size n. All predicates computable with high probability in this model are shown to be in P; they can also be computed by a randomized logspace machine in exponential time. Several open problems and promising future directions are discussed.", "This short survey of recent work in tile self-assembly discusses the use of simulation to classify and separate the computational and expressive power of self-assembly models. The journey begins with the result that there is a single universal tile set that, with proper initialization and scaling, simulates any tile assembly system. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system. From there we find that there is no such tile set in the noncooperative, or temperature 1, model, proving it weaker than the full tile assembly model. 
In the two-handed or hierarchical model, where large assemblies can bind together in one step, we encounter an infinite set of infinite hierarchies, each with strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable flippable polygonal tile that can simulate any tile assembly system. It seems this could be the beginning of a much longer journey, so directions for future work are suggested.", "We first give an introduction to the field of tile-based self-assembly, focusing primarily on theoretical models and their algorithmic nature. We start with a description of Winfree’s abstract Tile Assembly Model (aTAM) and survey a series of results in that model, discussing topics such as the shapes which can be built and the computations which can be performed, among many others. Next, we introduce the more experimentally realistic kinetic Tile Assembly Model (kTAM) and provide an overview of kTAM results, focusing especially on the kTAM’s ability to model errors and several results targeted at preventing and correcting errors. We then describe the 2-Handed Assembly Model (2HAM), which allows entire assemblies to combine with each other in pairs (as opposed to the restriction of single-tile addition in the aTAM and kTAM) and doesn’t require a specified seed. We give overviews of a series of 2HAM results, which tend to make use of geometric techniques not applicable in the aTAM. Finally, we discuss and define a wide array of more recently developed models and discuss their various tradeoffs in comparison to the previous models and to each other.", "Many biological systems are composed of unreliable components which self-organize efficiently into systems that can tackle complex problems. One such example is the true slime mold Physarum polycephalum which is an amoeba-like organism that seeks food sources and efficiently distributes nutrients throughout its cell body. 
The distribution of nutrients is accomplished by a self-assembled resource distribution network of small tubes with varying diameter which can evolve with changing environmental conditions without any global control. In this paper, we use a phenomenological model for the tube evolution in slime mold and map it to a path formation protocol for wireless sensor networks. By selecting certain evolution parameters in the protocol, the network may evolve toward single paths connecting data sources to a data sink. In other parameter regimes, the protocol may evolve toward multiple redundant paths. We present detailed analysis of a small model network. A thorough understanding of the simple network leads to design insights into appropriate parameter selection. We also validate the design via simulation of large-scale realistic wireless sensor networks using the QualNet network simulator.", "Physarum Polycephalum is a slime mold that apparently is able to solve shortest path problems. A mathematical model has been proposed by biologists to describe the feedback mechanism used by the slime mold to adapt its tubular channels while foraging two food sources s0 and s1. We prove that, under this model, the mass of the mold will eventually converge to the shortest s0-s1 path of the network that the mold lies on, independently of the structure of the network or of the initial mass distribution. This matches the experimental observations by the biologists and can be seen as an example of a \"natural algorithm\", that is, an algorithm developed by evolution over millions of years.", "Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. 
How could such a process be described as \"algorithmic?\" The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth." ] }
1601.01008
2229997123
The idea behind universal coating is to have a thin layer of a specific substance covering an object of any shape so that one can measure a certain condition (like temperature or cracks) at any spot on the surface of the object without requiring direct access to that spot. We study the universal coating problem in the context of self-organizing programmable matter consisting of simple computational elements, called particles, that can establish and release bonds and can actively move in a self-organized way. Based on that matter, we present a worst-case work-optimal universal coating algorithm that uniformly coats any object of arbitrary shape and size that allows a uniform coating. Our particles are anonymous, do not have any global information, have constant-size memory, and utilize only local interactions.
In a recent paper @cite_18 , Michail and Spirakis propose a model for network construction that is inspired by population protocols @cite_9 . The population protocol model relates to self-organizing particle systems, but is also intrinsically different: agents (which would correspond to our particles) freely move in space and can establish connections to any other agent in the system at any point in time, with interactions scheduled according to the respective probability distribution. In that paper, the authors focus on network construction for specific topologies (e.g., spanning line, spanning star, etc.). However, in principle, it would be possible to adapt their approach to also study coating problems under the population protocol model.
{ "cite_N": [ "@cite_9", "@cite_18" ], "mid": [ "2706788079", "1994387210" ], "abstract": [ "The computational power of networks of small resource-limited mobile agents is explored. Two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size are defined. With a fairness condition on interactions, the concept of stable computation of a function or predicate is defined. Protocols are given that stably compute any predicate in the class definable by formulas of Presburger arithmetic, which includes Boolean combinations of threshold-k, majority, and equivalence modulo m. All stably computable predicates are shown to be in NL. Assuming uniform random sampling of interacting pairs yields the model of conjugating automata. Any counter machine with O (1) counters of capacity O (n) can be simulated with high probability by a conjugating automaton in a population of size n. All predicates computable with high probability in this model are shown to be in P; they can also be computed by a randomized logspace machine in exponential time. Several open problems and promising future directions are discussed.", "In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol. Moreover, we assume pairwise interactions between the processes that are scheduled by a fair adversary. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. 
Initially all connections are inactive and the goal is for the processes, after interacting and activating deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. The expected time to convergence of our protocols is analyzed under a uniform random scheduler. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such @math . This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions." ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
For almost two decades the only stochastic merging process widely studied by probabilists was the (Kingman) coalescent @cite_24 @cite_39 . Starting with Aldous @cite_38 @cite_14 , Pitman @cite_32 , Sagitov @cite_16 , and Donnelly and Kurtz @cite_37 , mainstream probability research on coalescents diversified considerably.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_32", "@cite_39", "@cite_24", "@cite_16" ], "mid": [ "2057073325", "2114411217", "2056513564", "2065794571", "2319462502", "", "1986342834" ], "abstract": [ "Author(s): Aldous, DJ | Abstract: Consider N particles, which merge into clusters according to the following rule: a cluster of size x and a cluster of size y merge at (stochastic) rate K(x, y)/N, where K is a specified rate kernel. This Marcus-Lushnikov model of stochastic coalescence and the underlying deterministic approximation given by the Smoluchowski coagulation equations have an extensive scientific literature. Some mathematical literature (Kingman's coalescent in population genetics; component sizes in random graphs) implicitly studies the special cases K(x, y) = 1 and K(x, y) = xy. We attempt a wide-ranging survey. General kernels are only now starting to be studied rigorously; so many interesting open problems appear. © 1999 ISI BS.", "Models of populations in which a type or location, represented by a point in a metric space E, is associated with each individual in the population are considered. A population process is neutral if the chances of an individual replicating or dying do not depend on its type. Measure-valued processes are obtained as infinite population limits for a large class of neutral population models, and it is shown that these measure-valued processes can be represented in terms of the total mass of the population and the de Finetti measures associated with an E-valued particle model.", "Let (B^t(s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t - s at time s, started with B^t(0) = 0. Consider the random graph G(n, n^{-1} + tn^{-4/3}), whose largest components have size of order n^{2/3}. Normalizing by n^{-2/3}, the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B^t (Corollary 2). 
The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x_i), and clusters with sizes x_i and x_j merge at rate x_i x_j. The multiplicative coalescent is shown to be a Feller process on ℓ^2. The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time -∞; the existence of such a process is not obvious.", "λ_{b,k} = ∫_0^1 x^{k-2}(1-x)^{b-k} Λ(dx). Call this process a Λ-coalescent. Discrete measure-valued processes derived from the Λ-coalescent model a system of masses undergoing coalescent collisions. Kingman's coalescent, which has numerous applications in population genetics, is the δ_0-coalescent for δ_0 a unit mass at 0. The coalescent recently derived by Bolthausen and Sznitman from Ruelle's probability cascades, in the context of the Sherrington-Kirkpatrick spin glass model in mathematical physics, is the U-coalescent for U uniform on [0,1]. For Λ = U, and whenever an infinite number of masses are present, each collision in a Λ-coalescent involves an infinite number of masses almost surely, and the proportion of masses involved exists as a limit almost surely and is distributed proportionally to Λ. The two-parameter Poisson-Dirichlet family of random discrete distributions derived from a stable subordinator, and corresponding exchangeable random partitions of ℕ governed by a generalization of the Ewens sampling formula, are applied to describe transition mechanisms for processes of coalescence and fragmentation, including the U-coalescent and its time reversal. 1. Introduction. Markovian coalescent models for the evolution of a system of masses by a random process of binary collisions were introduced by Marcus (29) and Lushnikov (28). 
See (3) for a recent survey of the scientific literature of these models and their relation to Smoluchowski's mean-field theory of coagulation phenomena. Evans and Pitman (15) gave a general framework for the rigorous construction of partition-valued and discrete measure-valued coalescent Markov processes allowing infinitely many masses and treated the binary coalescent model where each pair of masses x and y is subject to a coalescent collision at rate κ(x, y) for a suitable rate kernel κ. This paper studies a family of partition-valued Markov processes, with state space the compact set of all partitions of ℕ := {1, 2, ...}, such that the restriction of the partition to each finite subset of ℕ is a Markov chain with transition rates of a simple form determined by the moments of a finite measure Λ on the unit interval. The case Λ = δ_0, a unit mass at 0, is Kingman's coalescent in which every", "A new Markov chain is introduced which can be used to describe the family relationships among n individuals drawn from a particular generation of a large haploid population. The properties of this process can be studied, simultaneously for all n, by coupling techniques. Recent results in neutral mutation theory are seen as consequences of the genealogy described by the chain. WRIGHT-FISHER MODEL; NEUTRAL MUTATION; RANDOM EQUIVALENCE RELATIONS; COALESCENT; EWENS SAMPLING FORMULA; COUPLING; ENTRANCE BOUNDARY", "", "Take a sample of individuals in the fixed-size population model with exchangeable family sizes. Follow the ancestral lines for the sampled individuals backwards in time to observe the ancestral process. We describe a class of asymptotic structures for the ancestral process via a convergence criterion. One of the basic conditions of the criterion prevents simultaneous mergers of ancestral lines. Another key condition implies that the marginal distribution of the family size is attracted by an infinitely divisible distribution. 
If the latter is normal the coalescent allows only for pairwise mergers (Kingman's coalescent). Otherwise multiple mergers happen with positive probability." ] }
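The merging dynamics recurring throughout these abstracts, where blocks of mass x_i and x_j coalesce at rate x_i·x_j, can be sketched with a small Gillespie-type simulation. This is illustrative only; the initial masses and time horizon below are arbitrary assumptions, not values from the papers.

```python
import random

def multiplicative_coalescent(masses, t_max, seed=0):
    """Simulate the finite multiplicative coalescent up to time t_max:
    every unordered pair of blocks with masses x_i, x_j merges at rate
    x_i * x_j.  Returns the block masses at time t_max, largest first."""
    rng = random.Random(seed)
    x = list(masses)
    t = 0.0
    while len(x) > 1:
        s = sum(x)
        total_rate = (s * s - sum(m * m for m in x)) / 2.0  # sum_{i<j} x_i x_j
        t += rng.expovariate(total_rate)   # time to the next merger
        if t > t_max:
            break
        # Choose a pair with probability proportional to x_i * x_j: draw two
        # indices independently with probability proportional to mass and
        # reject equal draws -- this gives exactly the desired distribution.
        while True:
            i = rng.choices(range(len(x)), weights=x)[0]
            j = rng.choices(range(len(x)), weights=x)[0]
            if i != j:
                break
        i, j = min(i, j), max(i, j)
        x[i] += x.pop(j)                   # merge block j into block i
    return sorted(x, reverse=True)

print(multiplicative_coalescent([1.0] * 20, t_max=0.1, seed=1))
```

Total mass is conserved at every step, and starting from many small equal blocks the largest block grows rapidly once the product-rate feedback kicks in, a finite shadow of the giant-component emergence discussed in the surrounding abstracts.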
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
The Kingman coalescent and, more generally, the mass-less (exchangeable) coalescents of @cite_32 @cite_16 @cite_37 mostly appear in connection with mathematical population genetics, as universal (robust) scaling limits of genealogical trees (see for example @cite_41 @cite_11 @cite_5 @cite_29 , or the survey @cite_58 ). The standard multiplicative coalescent is the universal scaling limit of numerous stochastic (typically combinatorial or graph-theoretic) homogeneous (or symmetric) merging-like models @cite_14 @cite_8 @cite_36 @cite_9 @cite_60 @cite_6 @cite_56 @cite_42 . The "non-standard" eternal extreme laws from @cite_10 are also scaling limits of inhomogeneous random graphs and related processes under appropriate assumptions @cite_10 @cite_43 @cite_52 .
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_29", "@cite_42", "@cite_43", "@cite_5", "@cite_58", "@cite_10", "@cite_8", "@cite_60", "@cite_52", "@cite_37", "@cite_32", "@cite_6", "@cite_56", "@cite_16", "@cite_14", "@cite_9", "@cite_11" ], "mid": [ "1897267496", "1970474336", "2102793301", "1890866918", "2115976335", "2067409890", "1560472286", "2109430192", "2104099158", "2028633127", "", "2114411217", "2065794571", "2128097224", "2126516965", "1986342834", "2056513564", "2129754261", "2093012110" ], "abstract": [ "We consider a class of haploid population models with nonoverlapping generations and fixed population size N assuming that the family sizes within a generation are exchangeable random variables. A weak convergence criterion is established for a properly scaled ancestral process as N → ∞. It results in a full classification of the coalescent generators in the case of exchangeable reproduction. In general the coalescent process allows for simultaneous multiple mergers of ancestral lines.", "Author(s): Aldous, DJ; Pittel, B | Abstract: A randomly evolving graph, with vertices immigrating at rate n and each possible edge appearing at rate 1/n, is studied. The detailed picture of emergence of giant components with O(n^{2/3}) vertices is shown to be the same as in the Erdős–Rényi graph process with the number of vertices fixed at n at the start. A major difference is that now the transition occurs about a time t = π/2, rather than t = 1. The proof has three ingredients. The size of the largest component in the subcritical phase is bounded by comparison with a certain multitype branching process. With this bound at hand, the growth of the sum-of-squares and sum-of-cubes of component sizes is shown, via martingale methods, to follow closely a solution of the Smoluchowski-type equations. 
The approximation allows us to apply results of Aldous [Brownian excursions, critical random graphs and the multiplicative coalescent, Ann Probab 25 (1997), 812-854] on emergence of giant components in the multiplicative coalescent, i.e., a nonuniform random graph process. © 2000 John Wiley & Sons, Inc. Random Struct. Alg., 17, 79-102, 2000.", "When a beneficial mutation occurs in a population, the new, favored allele may spread to the entire population. This process is known as a selective sweep. Suppose we sample n individuals at the end of a selective sweep. If we focus on a site on the chromosome that is close to the location of the beneficial mutation, then many of the lineages will likely be descended from the individual that had the beneficial mutation, while others will be descended from a different individual because of recombination between the two sites. We introduce two approximations for the effect of a selective sweep. The first one is simple but not very accurate: flip n independent coins with probability p of heads and say that the lineages whose coins come up heads are those that are descended from the individual with the beneficial mutation. A second approximation, which is related to Kingman’s paintbox construction, replaces the coin flips by integer-valued random variables and leads to very accurate results.", "Consider the random graph on n vertices 1,...,n. Each vertex i is assigned a type x(i) with x(1),...,x(n) being independent identically distributed as a nonnegative random variable X. We assume that EX^3 < ∞. Given types of all vertices, an edge exists between vertices i and j independent of anything else and with probability min{1, (x(i)x(j)/n)(1 + a n^{-1/3})}. We study the critical phase, which is known to take place when EX^2 = 1. 
We prove that normalized by n^{-2/3} the asymptotic joint distribution of component sizes of the graph equals the joint distribution of the excursions of a reflecting Brownian motion with diffusion coefficient √(EX EX^3) and drift a - (EX^3/EX)s. In particular, we conclude that the size of the largest connected component is of order n^{2/3}. (c) 2013 Wiley Periodicals, Inc. Random Struct. Alg., 43, 486-539, 2013 (Less)", "We find scaling limits for the sizes of the largest components at criticality for the rank-1 inhomogeneous random graphs with power-law degrees with exponent τ. We investigate the case where τ ∈ (3,4), so that the degrees have finite variance but infinite third moment. The sizes of the largest clusters, rescaled by n^{-(τ-2)/(τ-1)}, converge to hitting times of a 'thinned' Levy process. This process is intimately connected to the general multiplicative coalescents studied in (1) and (2). In particular, we use the results in (2) to show that, when interpreting the location λ inside the critical window as time, the limiting process is a multiplicative process with diffusion constant 0 and the entrance boundary describing the size of relative components in the λ → -∞ regime proportional to i^{-1/(τ-1)} · (-λ)^{-1}. A crucial ingredient is the identification of the scaling of the largest connected components in the barely subcritical regime. Our results should be contrasted to the case where the degree exponent τ satisfies τ > 4, so that the third moment is finite. There, instead, we see that the sizes of the components rescaled by n^{-2/3} converge to the excursion lengths of an inhomogeneous Brownian motion, as proved in (1) for the Erdős–Rényi random graph and extended to the present setting in (3, 4). The limit again is a multiplicative coalescent, the only difference with the limit for τ ∈ (3,4) being the initial state, corresponding to the barely subcritical regime.
In this paper, stochastic calculus techniques are used to describe all versions @math of the multiplicative coalescent. Roughly, an extreme version is specified by translation and scale parameters, and a vector @math of relative sizes of large clusters at time @math . Such a version may be characterized in three ways: via its @math behavior, via a representation of the marginal distribution @math in terms of excursion-lengths of a Levy-type process, or via a weak limit of processes derived from the standard version via a \"coloring\" construction.", "We consider the Erdős–Rényi random graph G(n, p) inside the critical window, that is when p = 1/n + λn^{-4/3}, for some fixed λ ∈ ℝ. We prove that the sequence of connected components of G(n, p), considered as metric spaces using the graph distance rescaled by n^{-1/3}, converges towards a sequence of continuous compact metric spaces. The result relies on a bijection between graphs and certain marked random walks, and the theory of continuum random trees. Our result gives access to the answers to a great many questions about distances in critical random graphs. In particular, we deduce that the diameter of G(n, p) rescaled by n^{-1/3} converges in distribution to an absolutely continuous random variable with finite mean.", "Random graph models with limited choice have been studied extensively with the goal of understanding the mechanism of the emergence of the giant component. One of the standard models are the Achlioptas random graph processes on a fixed set of n vertices. Here at each step, one chooses two edges uniformly at random and then decides which one to add to the existing configuration according to some criterion. An important class of such rules are the bounded-size rules where for a fixed K ≥ 1, all components of size greater than K are treated equally. 
While a great deal of work has gone into analyzing the subcritical and supercritical regimes, the nature of the critical scaling window, the size and complexity (deviation from trees) of the components in the critical regime and nature of the merging dynamics has not been well understood. In this work we study such questions for general bounded-size rules. Our first main contribution is the construction of an extension of Aldous’s standard multiplicative coalescent process which describes the asymptotic evolution of the vector of sizes and surplus of all components. We show that this process, referred to as the standard augmented multiplicative coalescent (AMC), is ‘nearly’ Feller with a suitable topology on the state space. Our second main result proves the convergence of suitably scaled component size and surplus vector, for any bounded-size rule, to the standard AMC. This result is new even for the classical Erdős–Rényi setting. The key ingredients here are a precise analysis of the asymptotic behavior of various susceptibility functions near criticality and certain bounds from (The barely subcritical regime. Arxiv preprint, 2012) on the size of the largest component in the barely subcritical regime.", "", "Models of populations in which a type or location, represented by a point in a metric space E, is associated with each individual in the population are considered. A population process is neutral if the chances of an individual replicating or dying do not depend on its type. Measure-valued processes are obtained as infinite population limits for a large class of neutral population models, and it is shown that these measure-valued processes can be represented in terms of the total mass of the population and the de Finetti measures associated with an E-valued particle model.", "λ_{b,k} = ∫_0^1 x^{k−2}(1 − x)^{b−k} Λ(dx). Call this process a Λ-coalescent. Discrete measure-valued processes derived from the Λ-coalescent model a system of masses undergoing coalescent collisions.
Kingman's coalescent, which has numerous applications in population genetics, is the δ0-coalescent for δ0 a unit mass at 0. The coalescent recently derived by Bolthausen and Sznitman from Ruelle's probability cascades, in the context of the Sherrington-Kirkpatrick spin glass model in mathematical physics, is the U-coalescent for U uniform on [0, 1]. For Λ = U, and whenever an infinite number of masses are present, each collision in a Λ-coalescent involves an infinite number of masses almost surely, and the proportion of masses involved exists as a limit almost surely and is distributed proportionally to Λ. The two-parameter Poisson-Dirichlet family of random discrete distributions derived from a stable subordinator, and corresponding exchangeable random partitions of ℕ governed by a generalization of the Ewens sampling formula, are applied to describe transition mechanisms for processes of coalescence and fragmentation, including the U-coalescent and its time reversal. 1. Introduction. Markovian coalescent models for the evolution of a system of masses by a random process of binary collisions were introduced by Marcus (29) and Lushnikov (28). See (3) for a recent survey of the scientific literature of these models and their relation to Smoluchowski's mean-field theory of coagulation phenomena. Evans and Pitman (15) gave a general framework for the rigorous construction of partition-valued and discrete measure-valued coalescent Markov processes allowing infinitely many masses and treated the binary coalescent model where each pair of masses x and y is subject to a coalescent collision at rate κ(x, y) for a suitable rate kernel κ. This paper studies a family of partition-valued Markov processes, with state space the compact set of all partitions of ℕ := {1, 2, …}, such that the restriction of the partition to each finite subset of ℕ is a Markov chain with transition rates of a simple form determined by the moments of a finite measure Λ on the unit interval.
The case Λ = δ0, a unit mass at 0, is Kingman's coalescent in which every", "We identify the scaling limit for the sizes of the largest components at criticality for inhomogeneous random graphs with weights that have finite third moments. We show that the sizes of the (rescaled) components converge to the excursion lengths of an inhomogeneous Brownian motion, which extends results of Aldous (1997) for the critical behavior of Erdős–Rényi random graphs. We rely heavily on martingale convergence techniques, and concentration properties of (super)martingales. This paper is part of a programme initiated in van der Hofstad (2009) to study the near-critical behavior in inhomogeneous random graphs of so-called rank-1.", "Let G = G(d) be a random graph with a given degree sequence d, such as a random r-regular graph where r ≥ 3 is fixed and n = |G| → ∞. We study the percolation phase transition on such graphs G, i.e., the emergence as p increases of a unique giant component in the random subgraph G[p] obtained by keeping edges independently with probability p. More generally, we study the emergence of a giant component in G(d) itself as d varies. We show that a single method can be used to prove very precise results below, inside and above the 'scaling window' of the phase transition, matching many of the known results for the much simpler model G(n, p). This method is a natural extension of that used by Bollobás and the author to study G(n, p), itself based on work of Aldous and of Nachmias and Peres; the calculations are significantly more involved in the present setting.", "Take a sample of individuals in the fixed-size population model with exchangeable family sizes. Follow the ancestral lines for the sampled individuals backwards in time to observe the ancestral process. We describe a class of asymptotic structures for the ancestral process via a convergence criterion. One of the basic conditions of the criterion prevents simultaneous mergers of ancestral lines.
Another key condition implies that the marginal distribution of the family size is attracted by an infinitely divisible distribution. If the latter is normal the coalescent allows only for pairwise mergers (Kingman's coalescent). Otherwise multiple mergers happen with positive probability.", "Let (B_t(s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t − s at time s, started with B_t(0) = 0. Consider the random graph G(n, n^{−1} + tn^{−4/3}), whose largest components have size of order n^{2/3}. Normalizing by n^{−2/3}, the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B_t (Corollary 2). The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x_i), and clusters with sizes x_i and x_j merge at rate x_i x_j. The multiplicative coalescent is shown to be a Feller process on ℓ^2. The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time −∞; the existence of such a process is not obvious.", "The percolation phase transition and the mechanism of the emergence of the giant component through the critical scaling window for random graph models has been a topic of great interest in many different communities ranging from statistical physics, combinatorics, computer science, social networks and probability theory. The last few years have witnessed an explosion of models which couple random aggregation rules, that specify how one adds edges to existing configurations, and choice, wherein one selects from a ‘limited’ set of edges at random to use in the configuration.
These models exhibit fascinating new phenomena, ranging from delaying or speeding up the emergence of the giant component, to \"explosive percolation\", where the diameter of the scaling window is several orders of magnitude smaller than that for standard random graph models. While an intense study of such models has ensued, understanding the actual emergence of the giant component and merging dynamics in the critical scaling window has remained impenetrable to a rigorous analysis. In this work we take an important step in the analysis of such models by studying one of the standard examples of such processes, namely the Bohman-Frieze model, and provide first results on the asymptotic dynamics, through the critical scaling window, that lead to the emergence of the giant component for such models. We identify the scaling window and show that through this window, the component sizes properly rescaled converge to the standard multiplicative coalescent. Proofs hinge on a careful analysis of certain infinite-type branching processes with types taking values in the space of RCLL paths, and stochastic analytic techniques to estimate susceptibility functions of the components all the way through the scaling window where these functions explode. Previous approaches for analyzing random graphs at criticality have relied largely on classical breadth-first search techniques that exploit asymptotic connections with Brownian excursions. For dynamic random graph models evolving via general Markovian rules, such approaches fail and we develop a quite different set of tools that can potentially be used for the study of critical dynamics for all bounded size rules.", "We study a family of coalescent processes that undergo \"simultaneous multiple collisions,\" meaning that many clusters of particles can merge into a single cluster at one time, and many such mergers can occur simultaneously.
This family of processes, which we obtain from simple assumptions about the rates of different types of mergers, essentially coincides with a family of processes that Möhle and Sagitov obtain as a limit of scaled ancestral processes in a population model with exchangeable family sizes. We characterize the possible merger rates in terms of a single measure, show how these coalescents can be constructed from a Poisson process, and discuss some basic properties of these processes. This work generalizes some work of Pitman, who provides similar analysis for a family of coalescent processes in which many clusters can coalesce into a single cluster, but almost surely no two such mergers occur simultaneously." ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
The two nice graphical constructions for coalescents with masses were discovered early on: by Aldous in @cite_14 for the multiplicative case, and almost simultaneously by Aldous and Pitman @cite_51 for the additive case (here any pair of blocks of mass @math and @math merges at rate @math ). The analogue of @cite_10 in the additive coalescent case is again due to Aldous and Pitman @cite_12 . No nice graphical construction for any other (merging rate) coalescent with masses seems to have been found since. For studies of stochastic coalescents with general kernel see Evans and Pitman @cite_19 and Fournier @cite_55 @cite_15 . Interest in the probabilistic study of the related Smoluchowski equations (with general merging kernels) was also sparked by @cite_38 ; see for example Norris @cite_54 , Jeon @cite_2 , then Fournier and Laurençot @cite_13 @cite_17 and Bertoin @cite_35 for more recent, and Merle and Normand @cite_46 @cite_40 for even more recent developments. All of the above mentioned models are mean-field. See for example @cite_0 @cite_20 @cite_18 for studies of (mass-less) coalescent models in the presence of spatial structure.
{ "cite_N": [ "@cite_13", "@cite_38", "@cite_35", "@cite_14", "@cite_18", "@cite_46", "@cite_55", "@cite_54", "@cite_0", "@cite_19", "@cite_40", "@cite_2", "@cite_51", "@cite_15", "@cite_20", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2073069695", "2057073325", "1990173407", "2056513564", "", "", "2004400176", "2963450463", "2182862043", "1969546119", "", "1984378318", "1986567569", "2019394012", "2140447723", "2109430192", "2022131627", "" ], "abstract": [ "The uniqueness and existence of measure-valued solutions to Smoluchowski's coagulation equation are considered for a class of homogeneous kernels. Denoting by λ ∈ (−∞, 2] ∖ {0} the degree of homogeneity of the coagulation kernel a, measure-valued solutions are shown to be unique under the sole assumption that the moment of order λ of the initial datum is finite. A similar result was already available for the kernels a(x,y)=2, x+y and xy, and is extended here to a much wider class of kernels by a different approach. The uniqueness result presented herein also seems to improve previous results for several explicit kernels. Furthermore, a comparison principle and a contraction property are obtained for the constant kernel.", "Author(s): Aldous, DJ | Abstract: Consider N particles, which merge into clusters according to the following rule: a cluster of size x and a cluster of size y merge at (stochastic) rate K(x, y)/N, where K is a specified rate kernel. This Marcus-Lushnikov model of stochastic coalescence and the underlying deterministic approximation given by the Smoluchowski coagulation equations have an extensive scientific literature. Some mathematical literature (Kingman's coalescent in population genetics; component sizes in random graphs) implicitly studies the special cases K(x, y) = 1 and K(x, y) = xy. We attempt a wide-ranging survey. General kernels are only now starting to be studied rigorously; so many interesting open problems appear.
", "We consider two simple models for the formation of polymers where at the initial time, each monomer has a certain number of potential links (called arms in the text) that are consumed when aggregations occur. Loosely speaking, this imposes restrictions on the number of aggregations. The dynamics of concentrations are governed by modifications of Smoluchowski's coagulation equations. Applying classical techniques based on generating functions, resolution of quasi-linear PDE's, and Lagrange inversion formula, we obtain explicit solutions to these non-linear systems of ODE's. We also discuss the asymptotic behavior of the solutions and point at some connections with certain known solutions to Smoluchowski's coagulation equations with additive or multiplicative kernels.", "Let (B_t(s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t − s at time s, started with B_t(0) = 0. Consider the random graph G(n, n^{−1} + tn^{−4/3}), whose largest components have size of order n^{2/3}. Normalizing by n^{−2/3}, the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B_t (Corollary 2). The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x_i), and clusters with sizes x_i and x_j merge at rate x_i x_j. The multiplicative coalescent is shown to be a Feller process on ℓ^2.
The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time -∞; the existence of such a process is not obvious.", "", "", "We build a Markovian system of particles entirely characterized by their masses, in which each pair of particles with masses @math and @math coalesce at rate @math , for some @math , and such that the system is initially composed of infinitesimally small particles.", "", "", "Partition-valued and measure-valued coalescent Markov processes are constructed whose state describes the decomposition of a finite total mass m into a finite or countably infinite number of masses with sum m, and whose evolution is determined by the following intuitive prescription: each pair of masses of magnitudes x and y runs the risk of a binary collision to form a single mass of magnitude x + y at rate κ(x, y), for some non-negative, symmetric collision rate kernel κ(x, y). Such processes with finitely many masses have been used to model polymerization, coagulation, condensation, and the evolution of galactic clusters by gravitational attraction. With a suitable choice of state space, and under appropriate restrictions on κ and the initial distribution of mass, it is shown that such processes can be constructed as Feller or Feller-like processes. A number of further results are obtained for the additive coalescent with collision kernel κ(x, y) = x + y. This process, which arises from the evolution of tree components in a random graph process, has asymptotic properties related to the stable subordinator of index 1/2.", "", "We study the Smoluchowski coagulation-fragmentation equation, which is an infinite set of non-linear ordinary differential equations describing the evolution of a mono-disperse system of particles in a well-stirred solution.
Approximating the solutions of the Smoluchowski equations by a sequence of finite Markov chains, we investigate the qualitative behavior of the solutions. We determine a device on the finite chains which can detect the gelation phenomena – the density dropping phenomena. It shows how the gelation phenomena are reflected on the sequence of finite Markov chains. Using this device, we determine various types of gelation kernels and get the bounds of gelation times.", "Regard an element of the set Δ := {(x_1, x_2, …): x_1 ≥ x_2 ≥ ⋯ ≥ 0, ∑_i x_i = 1} as a fragmentation of unit mass into clusters of masses x_i. The additive coalescent of Evans and Pitman is the Δ-valued Markov process in which pairs of clusters of masses x_i, x_j merge into a cluster of mass x_i + x_j at rate x_i + x_j. They showed that a version (X_∞(t), −∞ < t < ∞) of this process arises as an n → ∞ weak limit of the process started at time −(1/2) log n with n clusters of mass 1/n. We show this standard additive coalescent may be constructed from the continuum random tree of Aldous by Poisson splitting along the skeleton of the tree. We describe the distribution of X_∞(t) on Δ at a fixed time t. We show that the size of the cluster containing a given atom, as a process in t, has a simple representation in terms of the stable subordinator of index 1/2. As t → −∞, we establish a Gaussian limit for (centered and normalized) cluster sizes and study the size of the largest cluster.", "We consider infinite systems of macroscopic particles characterized by their masses. Each pair of particles with masses x and y coalesce at a given rate K(x, y). We assume that K satisfies a sort of Hölder property with index λ ∈ (0,1], and that the initial condition admits a moment of order λ. We show the existence of such infinite particle systems, as strong Markov processes enjoying a Feller property.
We also show that the obtained processes are the only possible limits when making the number of particles tend to infinity in a sequence of finite particle systems with the same dynamics.", "We investigate a new model for populations evolving in a spatial continuum. This model can be thought of as a spatial version of the Lambda-Fleming-Viot process. It explicitly incorporates both small scale reproduction events and large scale extinction-recolonisation events. The lineages ancestral to a sample from a population evolving according to this model can be described in terms of a spatial version of the Lambda-coalescent. Using a technique of Evans (1997), we prove existence and uniqueness in law for the model. We then investigate the asymptotic behaviour of the genealogy of a finite number of individuals sampled uniformly at random (or more generally 'far enough apart') from a two-dimensional torus of sidelength L as L tends to infinity. Under appropriate conditions (and on a suitable timescale) we can obtain as limiting genealogical processes a Kingman coalescent, a more general Lambda-coalescent or a system of coalescing Brownian motions (with a non-local coalescence mechanism).", "The multiplicative coalescent @math is a @math -valued Markov process representing coalescence of clusters of mass, where each pair of clusters merges at rate proportional to product of masses. From random graph asymptotics it is known (Aldous (1997)) that there exists a standard version of this process starting with infinitesimally small clusters at time @math . In this paper, stochastic calculus techniques are used to describe all versions @math of the multiplicative coalescent. Roughly, an extreme version is specified by translation and scale parameters, and a vector @math of relative sizes of large clusters at time @math .
Such a version may be characterized in three ways: via its @math behavior, via a representation of the marginal distribution @math in terms of excursion-lengths of a Levy-type process, or via a weak limit of processes derived from the standard version via a \"coloring\" construction.", "Regard an element of the set of ranked discrete distributions Δ := {(x_1, x_2, …): x_1 ≥ x_2 ≥ ⋯ ≥ 0, ∑_i x_i = 1} as a fragmentation of unit mass into clusters of masses x_i. The additive coalescent is the Δ-valued Markov process in which pairs of clusters of masses x_i, x_j merge into a cluster of mass x_i + x_j at rate x_i + x_j. Aldous and Pitman (1998) showed that a version of this process starting from time −∞ with infinitesimally small clusters can be constructed from the Brownian continuum random tree of Aldous (1991, 1993) by Poisson splitting along the skeleton of the tree. In this paper it is shown that the general such process may be constructed analogously from a new family of inhomogeneous continuum random trees.", "" ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
As already mentioned, Broutin and Marckert @cite_50 obtain Theorem in the standard case, via a Prim's algorithm construction invented for the purpose of their study, notably different from the approach presented here. Before them, @cite_9 @cite_60 proved f.d.d. convergence for models similar to the Erdős–Rényi random graph. For the standard additive coalescent, analogous results were obtained rather early by Bertoin @cite_31 @cite_22 and Chassaing and Louchard @cite_49 , and are rederived in @cite_50 , again via an appropriate Prim's algorithm representation.
{ "cite_N": [ "@cite_22", "@cite_60", "@cite_9", "@cite_50", "@cite_49", "@cite_31" ], "mid": [ "1573705578", "2028633127", "2129754261", "1747067133", "2145039842", "2047042737" ], "abstract": [ "Aldous and Pitman have studied the asymptotic behavior of the additive coalescent processes using a nested family of random forests derived by logging certain inhomogeneous continuum random trees. Here we propose a different approach based on partitions of the unit interval induced by certain bridges with exchangeable increments. The analysis is made simple by an interpretation in terms of an aggregating server system.", "Random graph models with limited choice have been studied extensively with the goal of understanding the mechanism of the emergence of the giant component. Among the standard models are the Achlioptas random graph processes on a fixed set of n vertices. Here at each step, one chooses two edges uniformly at random and then decides which one to add to the existing configuration according to some criterion. An important class of such rules are the bounded-size rules where for a fixed K ≥ 1, all components of size greater than K are treated equally. While a great deal of work has gone into analyzing the subcritical and supercritical regimes, the nature of the critical scaling window, the size and complexity (deviation from trees) of the components in the critical regime and nature of the merging dynamics has not been well understood. In this work we study such questions for general bounded-size rules. Our first main contribution is the construction of an extension of Aldous’s standard multiplicative coalescent process which describes the asymptotic evolution of the vector of sizes and surplus of all components. We show that this process, referred to as the standard augmented multiplicative coalescent (AMC), is ‘nearly’ Feller with a suitable topology on the state space.
Our second main result proves the convergence of suitably scaled component size and surplus vector, for any bounded-size rule, to the standard AMC. This result is new even for the classical Erdős–Rényi setting. The key ingredients here are a precise analysis of the asymptotic behavior of various susceptibility functions near criticality and certain bounds from (The barely subcritical regime. Arxiv preprint, 2012) on the size of the largest component in the barely subcritical regime.", "The percolation phase transition and the mechanism of the emergence of the giant component through the critical scaling window for random graph models has been a topic of great interest in many different communities ranging from statistical physics, combinatorics, computer science, social networks and probability theory. The last few years have witnessed an explosion of models which couple random aggregation rules, that specify how one adds edges to existing configurations, and choice, wherein one selects from a ‘limited’ set of edges at random to use in the configuration. These models exhibit fascinating new phenomena, ranging from delaying or speeding up the emergence of the giant component, to \"explosive percolation\", where the diameter of the scaling window is several orders of magnitude smaller than that for standard random graph models. While an intense study of such models has ensued, understanding the actual emergence of the giant component and merging dynamics in the critical scaling window has remained impenetrable to a rigorous analysis. In this work we take an important step in the analysis of such models by studying one of the standard examples of such processes, namely the Bohman-Frieze model, and provide first results on the asymptotic dynamics, through the critical scaling window, that lead to the emergence of the giant component for such models.
We identify the scaling window and show that through this window, the component sizes properly rescaled converge to the standard multiplicative coalescent. Proofs hinge on a careful analysis of certain infinite-type branching processes with types taking values in the space of RCLL paths, and stochastic analytic techniques to estimate susceptibility functions of the components all the way through the scaling window where these functions explode. Previous approaches for analyzing random graphs at criticality have relied largely on classical breadth-first search techniques that exploit asymptotic connections with Brownian excursions. For dynamic random graph models evolving via general Markovian rules, such approaches fail and we develop a quite different set of tools that can potentially be used for the study of critical dynamics for all bounded size rules.", "We revisit the discrete additive and multiplicative coalescents, starting with n particles with unit mass. These cases are known to be related to some “combinatorial coalescent processes”: a time reversal of a fragmentation of Cayley trees or a parking scheme in the additive case, and the random graph process (G(n,p))_p in the multiplicative case. Time being fixed, encoding these combinatorial objects in real-valued processes indexed by the line is the key to describing the asymptotic behaviour of the masses as n → +∞. We propose to use the Prim order on the vertices instead of the classical breadth-first (or depth-first) traversal to encode the combinatorial coalescent processes. In the additive case, this yields interesting connections between the different representations of the process.
In the multiplicative case, it allows one to answer a stronger version of an open question of Aldous (Ann Probab 25:812–854, 1997): we prove that not only the sequence of (rescaled) masses, seen as a process indexed by the time λ, converges in distribution to the reordered sequence of lengths of the excursions above the current minimum of a Brownian motion with parabolic drift (B_t + λt − t^2/2, t ≥ 0), but we also construct a version of the standard augmented multiplicative coalescent of (Probab Theory Relat, 2013) using an additional Poisson point process.", "In this paper, we consider hashing with linear probing for a hashing table with m places, n items (n < m), and l = m − n empty places. For a noncomputer science-minded reader, we shall use the metaphor of n cars parking on m places: each car c_i chooses a place p_i at random, and if p_i is occupied, c_i tries successively p_i + 1, p_i + 2, until it finds an empty place. Pittel [42] proves that when m/n goes to some positive limit β > 1, the size B_1^{m,l} of the largest block of consecutive cars satisfies 2(β − 1 − log β)B_1^{m,l} = 3 log log m + Ξ_m, where Ξ_m converges weakly to an extreme-value distribution. In this paper we examine at which level for n a phase transition occurs between B_1^{m,l} = o(m) and m − B_1^{m,l} = o(m). The intermediate case reveals an interesting behavior of sizes of blocks, related to the standard additive coalescent in the same way as the sizes of connected components of the random graph are related to the multiplicative coalescent.", "Let (B_s, s ≥ 0) be a standard Brownian motion and T_1 its first passage time at level 1. For every t ≥ 0, we consider the ladder time set ℒ^{(t)} of the Brownian motion with drift t, B_s^{(t)} = B_s + ts, and the decreasing sequence F(t) = (F_1(t), F_2(t), …) of lengths of the intervals of the random partition of [0, T_1] induced by ℒ^{(t)}.
The main result of this work is that (F(t), t≥ 0) is a fragmentation process, in the sense that for 0 ≤t < t′, F(t′) is obtained from F(t) by breaking randomly into pieces each component of F(t) according to a law that only depends on the length of this component, and independently of the others. We identify the fragmentation law with the one that appears in the construction of the standard additive coalescent by Aldous and Pitman [3]." ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
The arguments presented in the sequel are elementary in part due to direct applications of a non-trivial result from @cite_10 , Section 2.6 (depending on @cite_10 , Section 2.5) in Section (more precisely, Corollary ). In comparison, (a) @cite_50 also rely on the convergence results of @cite_14 in the standard setting, as well as additional estimates proved in @cite_8 , and (b) the analysis done in @cite_33 , Sections 4 and 5 seems to be a formal analogue of that in @cite_10 , Sections 2.5-2.6.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_8", "@cite_50", "@cite_10" ], "mid": [ "2056513564", "", "2104099158", "1747067133", "2109430192" ], "abstract": [ "Let (B t (s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t - s at time s, started with B t (0) = 0. Consider the random graph script G sign(n, n -1 + tn -4 3 ), whose largest components have size of order n 2 3 . Normalizing by n -2 3 , the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B t (Corollary 2). The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x i ), and clusters with sizes x i and x j merge at rate x i x j . The multiplicative coalescent is shown to be a Feller process on l 2 . The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time -∞; the existence of such a process is not obvious.", "", "We consider the Erdős–Renyi random graph G(n, p) inside the critical window, that is when p = 1 n + λn −4 3, for some fixed ( R ) . We prove that the sequence of connected components of G(n, p), considered as metric spaces using the graph distance rescaled by n −1 3, converges towards a sequence of continuous compact metric spaces. The result relies on a bijection between graphs and certain marked random walks, and the theory of continuum random trees. Our result gives access to the answers to a great many questions about distances in critical random graphs. In particular, we deduce that the diameter of G(n, p) rescaled by n −1 3 converges in distribution to an absolutely continuous random variable with finite mean.", "We revisit the discrete additive and multiplicative coalescents, starting with n particles with unit mass. 
These cases are known to be related to some “combinatorial coalescent processes”: a time reversal of a fragmentation of Cayley trees or a parking scheme in the additive case, and the random graph process (G(n,p))_p in the multiplicative case. Time being fixed, encoding these combinatorial objects in real-valued processes indexed by the line is the key to describing the asymptotic behaviour of the masses as n → +∞. We propose to use the Prim order on the vertices instead of the classical breadth-first (or depth-first) traversal to encode the combinatorial coalescent processes. In the additive case, this yields interesting connections between the different representations of the process. In the multiplicative case, it allows one to answer a stronger version of an open question of Aldous (Ann Probab 25:812–854, 1997): we prove that not only the sequence of (rescaled) masses, seen as a process indexed by the time (λ), converges in distribution to the reordered sequence of lengths of the excursions above the current minimum of a Brownian motion with parabolic drift (B_t + λt - t^2/2, t ≥ 0), but we also construct a version of the standard augmented multiplicative coalescent of (Probab Theory Relat, 2013) using an additional Poisson point process.", "The multiplicative coalescent @math is a @math -valued Markov process representing coalescence of clusters of mass, where each pair of clusters merges at rate proportional to the product of their masses. From random graph asymptotics it is known (Aldous (1997)) that there exists a standard version of this process starting with infinitesimally small clusters at time @math . In this paper, stochastic calculus techniques are used to describe all versions @math of the multiplicative coalescent. Roughly, an extreme version is specified by translation and scale parameters, and a vector @math of relative sizes of large clusters at time @math . 
Such a version may be characterized in three ways: via its @math behavior, via a representation of the marginal distribution @math in terms of excursion-lengths of a Levy-type process, or via a weak limit of processes derived from the standard version via a \"coloring\" construction." ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
The present approach to Theorem is of independent interest even in the standard setting (where Section would simplify further, since @math , and already Lemma 8 from @cite_14 would be sufficient for making conclusions in Section ). In addition, it may prove useful for continued analysis of the s, as well as various other processes in the ``domain of attraction''.
{ "cite_N": [ "@cite_14" ], "mid": [ "2056513564" ], "abstract": [ "Let (B t (s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t - s at time s, started with B t (0) = 0. Consider the random graph script G sign(n, n -1 + tn -4 3 ), whose largest components have size of order n 2 3 . Normalizing by n -2 3 , the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B t (Corollary 2). The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x i ), and clusters with sizes x i and x j merge at rate x i x j . The multiplicative coalescent is shown to be a Feller process on l 2 . The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time -∞; the existence of such a process is not obvious." ] }
1601.01325
2226774821
The multiplicative coalescent is a Markov process taking values in ordered @math . It is a mean-field process in which any pair of blocks coalesces at rate proportional to the product of their masses. In Aldous and Limic (1998) each extreme eternal version @math of the multiplicative coalescent was described in three different ways. One of these specifications matches the (marginal) law of @math to that of the ordered excursion lengths above past minima of @math , where @math is a certain Levy-type process which (modulo shift and scaling) has infinitesimal drift @math at time @math . Using a modification of the breadth-first-walk construction from Aldous (1997) and Aldous and Limic (1998), and some new insight from the thesis by Uribe (2007), this work settles an open problem (3) from Aldous (1997), in the more general context of Aldous and Limic (1998). Informally speaking, @math is entirely encoded by @math , and contrary to Aldous' original intuition, the evolution of time for @math does correspond to the linear increase in the constant part of the drift of @math . In the "standard multiplicative coalescent" context of Aldous (1997), this result was first announced by Armendariz in 2001, and obtained in a recent preprint by Broutin and Marckert, who simultaneously account for the process of excess edge counts (or marks). The novel argument presented here is based on a sequence of relatively elementary observations. Some of its components (for example, the new dynamic random graph construction via "simultaneous" breadth-first walks) are of independent interest, and may be useful for obtaining more sophisticated asymptotic results on near critical random graphs and related processes.
The rest of the paper is organized as follows: Section introduces the simultaneous breadth-first walks and explains how they are linked to the (marginal) law of the , and the original breadth-first walks of @cite_14 @cite_10 . Section recalls Uribe's diagrams and includes Proposition , that connects the diagrams to the . In Section the simultaneous BFWs and Uribe's diagrams are linked, and as a result an important conclusion is made in Proposition (the generalized version of the claim which precedes Theorem ). All the processes considered in Sections -- have finite initial states. Section serves to pass to the limit where the initial configuration is in @math . The similarities to and differences from @cite_10 are discussed along the way. Theorem is proved in Section . Several questions are included in Section (the reader is also referred to the list of open problems given at the end of @cite_14 ).
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "2056513564", "2109430192" ], "abstract": [ "Let (B t (s), 0 ≤ s < ∞) be reflecting inhomogeneous Brownian motion with drift t - s at time s, started with B t (0) = 0. Consider the random graph script G sign(n, n -1 + tn -4 3 ), whose largest components have size of order n 2 3 . Normalizing by n -2 3 , the asymptotic joint distribution of component sizes is the same as the joint distribution of excursion lengths of B t (Corollary 2). The dynamics of merging of components as t increases are abstracted to define the multiplicative coalescent process. The states of this process are vectors x of nonnegative real cluster sizes (x i ), and clusters with sizes x i and x j merge at rate x i x j . The multiplicative coalescent is shown to be a Feller process on l 2 . The random graph limit specifies the standard multiplicative coalescent, which starts from infinitesimally small clusters at time -∞; the existence of such a process is not obvious.", "The multiplicative coalescent @math is a @math -valued Markov process representing coalescence of clusters of mass, where each pair of clusters merges at rate proportional to product of masses. From random graph asymptotics it is known (Aldous (1997)) that there exists a standard version of this process starting with infinitesimally small clusters at time @math . In this paper, stochastic calculus techniques are used to describe all versions @math of the multiplicative coalescent. Roughly, an extreme version is specified by translation and scale parameters, and a vector @math of relative sizes of large clusters at time @math . Such a version may be characterized in three ways: via its @math behavior, via a representation of the marginal distribution @math in terms of excursion-lengths of a Levy-type process, or via a weak limit of processes derived from the standard version via a \"coloring\" construction." ] }
1601.01348
2229087796
Some Machine-to-Machine (M2M) communication links, particularly those in an industrial automation plant, have stringent latency requirements. In this paper, we study the delay performance for the M2M uplink from the sensors to a Programmable Logic Controller (PLC) in an industrial automation scenario. The uplink traffic can be broadly classified as either Periodic Update (PU) or Event Driven (ED). The PU arrivals from different sensors are periodic, synchronized by the PLC, and need to be processed by a prespecified firm latency deadline. On the other hand, the ED arrivals are random and have a low arrival rate, but may need to be processed quickly depending upon the criticality of the application. To accommodate these contrasting Quality-of-Service (QoS) requirements, we model the utility of PU and ED packets using step and sigmoidal functions of latency, respectively. Our goal is to maximize the overall system utility while being proportionally fair to both PU and ED data. To this end, we propose a novel online QoS-aware packet scheduler that gives priority to ED data as long as the latency deadline for PU data is met. However, as the network size increases, we drop the PU packets that fail to meet the latency deadline, which reduces congestion and improves overall system utility. Using extensive simulations, we compare the performance of our scheme with various scheduling policies such as First-Come-First-Serve (FCFS), Earliest-Due-Date (EDD), and (preemptive) priority. We show that our scheme outperforms the existing schemes for various simulation scenarios.
Another line of work focuses on QoS-aware packet schedulers for M2M traffic in Long Term Evolution (LTE) networks (see @cite_4 and references therein). Most of these works use some variant of an Access Grant Time Interval scheme for allocating fixed or dynamic access grants over periodic time intervals to M2M devices. Nusrat et al. in @cite_2 designed a packet scheduler for M2M in LTE so as to maximize the percentage of uplink packets that satisfy their individual delay budgets. Ray and Kwang in @cite_13 proposed a distributed congestion control algorithm which allocates rates to M2M flows in proportion to their demands. Unlike our work, all of these works design packet schedulers specific to a wireless standard such as LTE and are thus heavily influenced by the Medium Access Control (MAC) architecture of LTE. They also do not explicitly segregate data arrivals into different QoS classes.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_2" ], "mid": [ "2048601342", "2092852434", "2060186542" ], "abstract": [ "Machine-to-machine (M2M) communications are bringing new challenges to congestion control in the Internet of Things. One key issue is to facilitate the proper functioning of a wide range of M2M applications with drastically different throughput demands. Traditional Internet congestion control algorithms aim at sharing bandwidth among traffic flows equally, limiting their suitability for M2M communications. To maintain comparable levels of QoS of heterogeneous M2M applications when congestion is present, we propose a distributed congestion control algorithm which allocates transmission rates to M2M flows in proportion to their demands, through the use of a simple technique which we call “proportional additive increase”. To ease M2M application development, we make a further attempt to stabilize the throughputs of M2M flows controlled by the algorithm. We present simulation results to illustrate the effectiveness of the algorithm in achieving the desired rate allocation, as well as the challenge we face in stabilizing throughputs.", "Providing LTE connectivity to emerging Machine-to-Machine (M2M) applications imposes several challenges to the operation and optimization of current and future 3GPP mobile broadband standard releases. Scheduling in an efficient way M2M traffic over the existing LTE MAC infrastructure is decisive for the smooth evolution towards an M2M-enabled LTE system. The large number of connecting devices compared to classical LTE terminals, and their vastly diverse quality-of-service requirements, call for the design of new packet scheduling schemes tailored to the M2M paradigm. To this end, we propose low complexity and signaling scheduling policies which periodically grant access to the M2M devices. 
In particular, we first propose an analytical model for predicting the QoS performance of M2M services when the fixed periodic scheduling algorithm is employed. Next we propose a modification to this scheme, which exploits queueing dynamics, and finally we examine QoS-differentiation issues when devices are grouped into clusters. Interesting performance-complexity trade-offs are exposed. The results of our study may aid the system designer in tuning and optimizing M2M traffic scheduling.", "Some M2M applications such as those found in a Smart Grid environment generate event driven and delay sensitive uplink traffic. Wide area cellular systems such as LTE are usually not optimized for such traffic. In this paper, we design an LTE scheduler with the main objective of maximizing the percentage of uplink packets that satisfy their individual delay budgets. In order to do this accurately, we allow devices to notify the eNodeB of the age of the oldest packet in their buffers via a new MAC control element in the uplink MPDU. This information is used by the eNodeB to calculate an absolute deadline for each packet request individually, and the eNodeB scheduler ranks requests according to an urgency metric that depends upon the time remaining to the deadline and other factors such as the volume of pending data in the device buffers. Using an OPNET simulation model of an LTE TDD system, we show that our proposed scheduler can satisfy the uplink delay budget for more than 99% of packets for bursty delay sensitive M2M traffic even when the system is fully loaded with regard to the data channel utilization." ] }
1601.01348
2229087796
Some Machine-to-Machine (M2M) communication links, particularly those in an industrial automation plant, have stringent latency requirements. In this paper, we study the delay performance for the M2M uplink from the sensors to a Programmable Logic Controller (PLC) in an industrial automation scenario. The uplink traffic can be broadly classified as either Periodic Update (PU) or Event Driven (ED). The PU arrivals from different sensors are periodic, synchronized by the PLC, and need to be processed by a prespecified firm latency deadline. On the other hand, the ED arrivals are random and have a low arrival rate, but may need to be processed quickly depending upon the criticality of the application. To accommodate these contrasting Quality-of-Service (QoS) requirements, we model the utility of PU and ED packets using step and sigmoidal functions of latency, respectively. Our goal is to maximize the overall system utility while being proportionally fair to both PU and ED data. To this end, we propose a novel online QoS-aware packet scheduler that gives priority to ED data as long as the latency deadline for PU data is met. However, as the network size increases, we drop the PU packets that fail to meet the latency deadline, which reduces congestion and improves overall system utility. Using extensive simulations, we compare the performance of our scheme with various scheduling policies such as First-Come-First-Serve (FCFS), Earliest-Due-Date (EDD), and (preemptive) priority. We show that our scheme outperforms the existing schemes for various simulation scenarios.
Lastly, a number of scheduling algorithms have been proposed specifically for real-time embedded systems (see @cite_1 and references therein) that are agnostic to the application scenario, wireless technology used, or hardware-software architecture. These schemes assume hybrid task sets comprising periodic and aperiodic requests. They are broadly classified into fixed-priority and dynamic-priority assignments. The goal of all these schemes is to guarantee completion of service (before the deadline) for all periodic requests while simultaneously aiming to reduce the average response times of aperiodic requests. Fixed-priority schemes schedule periodic tasks using the Rate-Monotonic algorithm but differ in their service for aperiodic tasks. Dynamic-priority scheduling algorithms schedule periodic tasks using the EDD scheme and allow better processor utilization and enhanced aperiodic responsiveness compared to the fixed-priority schemes. The drawback of these schemes is that they assume that the random service times for each task are known. Hence these schemes are not directly applicable in their present form and would require considerable modifications.
{ "cite_N": [ "@cite_1" ], "mid": [ "2137784941" ], "abstract": [ "Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications is a basic treatise on real-time computing, with particular emphasis on predictable scheduling algorithms. It introduces the fundamental concepts of real-time computing, illustrates the most significant results in the field, and provides the essential methodologies for designing predictable computing systems which can be used to support critical control applications. This volume serves as a textbook for advanced level courses on the topic. Each chapter provides basic concepts, which are followed by algorithms that are illustrated with concrete examples, figures and tables. Exercises are included with each chapter and solutions are given at the end of the book. The book also provides an excellent reference for those interested in real-time computing for designing and or developing predictable control applications." ] }
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for modeling vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information is lost in vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining comparable performance to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
@cite_28 propose a tensor-product model based representation of neural networks in which a neural network structure for the mutual interaction of variable components is introduced. It conceptually restructures the tensor product model transformation into a generalized form for fuzzy modeling and proposes new features and several new variants of the tensor product model transformation. However, this type of neural network is actually defined for vectorial data rather than tensorial variates. A similar idea can be seen in the more recent paper @cite_2 . The key characteristic of these networks is that the connection weights (the neural network parameters) are in tensor format, rather than the data variables in the networks. Thus, apart from the nonlinearity introduced by the activation function, these neural networks offer the capacity to encode nonlinear interactions among the hidden variable components. The so-called tensor analyzer @cite_12 also serves as such an example.
{ "cite_N": [ "@cite_28", "@cite_12", "@cite_2" ], "mid": [ "2010639055", "", "2108597378" ], "abstract": [ "The approximation methods of mathematics are widely used in theory and practice for several problems. In the framework of the paper a novel tensor-product based approach for representation of neural networks (NNs) is proposed. The NNs in this case stand for local models based on which a more complex parameter varying model can numerically be reconstructed and reduced using the higher order singular value decomposition (HOSVD). The HOSVD as well as the tensor-product based representation of NNs will be discussed in detail.", "", "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks." ] }
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for modeling vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information is lost in vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining comparable performance to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
@cite_11 present another similar work. It uses a structure similar to that proposed in @cite_2 to generalize several previous neural network models and provides a more powerful way to model correlation information than a standard neural network layer.
{ "cite_N": [ "@cite_2", "@cite_11" ], "mid": [ "2108597378", "2127426251" ], "abstract": [ "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. 
This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for modeling vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information is lost in vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining comparable performance to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
There are several works on multi-way Boltzmann machines @cite_24 @cite_4 . Taylor and Hinton @cite_24 propose a factored conditional restricted Boltzmann machine for modeling motion style. To capture the context of a motion style, the model takes both history and current information as input, so connections are added from past visible units to the current visible units and to the hidden units. In this model, the input data consist of two vectors, the output is also in vector form, and the weights between visible and hidden units form a matrix.
{ "cite_N": [ "@cite_24", "@cite_4" ], "mid": [ "2115096495", "141412351" ], "abstract": [ "The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N^3) to O(N^2). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.", "In this paper a novel framework capable of both accurate predictions and classifications of dynamic images is introduced. The proposed technique makes use of a novel combination of sparse coding, a feature extraction algorithm, and three-way weight tensor conditional restricted Boltzmann machines, a form of deep learning. Experiments performed on both the prediction and classification of various images show the efficiency, accuracy, and effectiveness of the proposed technique." ] }
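The parameter reduction in @cite_24 above (factoring the three-way weight tensor so the count drops from O(N^3) to O(N^2)) can be sketched numerically. This is a minimal illustration with made-up sizes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 10, 5  # units per layer and number of factors (illustrative sizes)

# Three factor matrices replacing the full N x N x N interaction tensor.
A = rng.standard_normal((N, F))
B = rng.standard_normal((N, F))
C = rng.standard_normal((N, F))

# Reconstructed three-way tensor: W[i, j, k] = sum_f A[i, f] * B[j, f] * C[k, f].
W = np.einsum('if,jf,kf->ijk', A, B, C)

# Parameter count: N^3 for the dense tensor vs 3*N*F for the factored form.
dense_params = N ** 3
factored_params = 3 * N * F
print(dense_params, factored_params)  # 1000 150
```

With the number of factors F of the same order as N, the factored form carries O(N^2) parameters while retaining the multiplicative three-way interaction.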
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information gets lost in the vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining performance comparable to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
@cite_4 use video sequences to train an RBM for better classification performance. The video sequences are likewise vectorized and used as input to a classic RBM whose visible-hidden connections are defined by a tensor weight.
{ "cite_N": [ "@cite_4" ], "mid": [ "141412351" ], "abstract": [ "In this paper a novel framework capable of both accurate predictions and classifications of dynamic images is introduced. The proposed technique makes use of a novel combination of sparse coding, a feature extraction algorithm, and three-way weight tensor conditional restricted Boltzmann machines, a form of deep learning. Experiments performed on both the prediction and classification of various images show the efficiency, accuracy, and effectiveness of the proposed technique." ] }
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information gets lost in the vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining performance comparable to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
All the above attempts aim to model the interaction between the components of vector variates. Like the classic RBM, these models are not appropriate for matrix inputs. To the best of our knowledge, the first work aiming at modeling matrix-variate inputs is the Tensor-variate Restricted Boltzmann Machine (TvRBM) proposed by @cite_1 . The authors demonstrate the capacity of the model on three real-world applications with convincing performance. In this architecture, the input is designed for tensorial data, including matrix data, while the hidden units are organized as a vector. The connection between the hidden layer and the visible layer is defined by a linear combination over tensorial weights. To reduce the number of weight parameters, the weight tensors are further specified as so-called rank-r tensors @cite_25 . However, our criticism of TvRBM is that the specification of rank-r tensor weights is too restrictive to fully empower the model's capability.
{ "cite_N": [ "@cite_1", "@cite_25" ], "mid": [ "2293139688", "2024165284" ], "abstract": [ "Restricted Boltzmann Machines (RBMs) are an important class of latent variable models for representing vector data. An under-explored area is multimode data, where each data point is a matrix or a tensor. Standard RBMs applying to such data would require vectorizing matrices and tensors, thus resulting in unnecessarily high dimensionality and at the same time, destroying the inherent higher-order interaction structures. This paper introduces Tensor-variate Restricted Boltzmann Machines (TvRBMs) which generalize RBMs to capture the multiplicative interaction between data modes and the latent variables. TvRBMs are highly compact in that the number of free parameters grows only linear with the number of modes. We demonstrate the capacity of TvRBMs on three real-world applications: handwritten digit classification, face recognition and EEG-based alcoholic diagnosis. The learnt features of the model are more discriminative than the rivals, resulting in better classification performance.", "This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or @math -way array. Decompositions of higher-order tensors (i.e., @math -way arrays with @math ) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. 
The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors." ] }
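The rank-r tensor weights discussed above can be made concrete with a small sketch: for a matrix-valued visible configuration and a hidden vector, a CP (rank-R) weight tensor collapses the hidden pre-activation to R bilinear forms. All sizes below are invented for illustration; this is the general idea, not the TvRBM code:

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 8, 6, 4, 3  # visible rows/cols, hidden units, tensor rank (illustrative)

# CP factors of the (I x J x K) weight tensor: W = sum_r a_r (outer) b_r (outer) c_r.
a = rng.standard_normal((R, I))
b = rng.standard_normal((R, J))
c = rng.standard_normal((R, K))

V = rng.standard_normal((I, J))  # one matrix-valued visible configuration

# Hidden pre-activation s_k = sum_{i,j} V[i,j] * W[i,j,k] collapses to
# R bilinear forms a_r^T V b_r, each weighted by the vector c_r.
s = sum((a[r] @ V @ b[r]) * c[r] for r in range(R))

# Verification against the explicitly materialized dense tensor.
W = np.einsum('ri,rj,rk->ijk', a, b, c)
s_dense = np.einsum('ij,ijk->k', V, W)
print(np.allclose(s, s_dense))  # True
```

The factored evaluation never materializes W, which is the source of both the compactness and, per the criticism above, the restricted expressiveness.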
1601.00722
2231504921
Restricted Boltzmann Machine (RBM) is an important generative model for vectorial data. When applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data, and valuable spatial information gets lost in the vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix form and are connected by bilinear transforms. The MVRBM has far fewer model parameters, resulting in a faster training algorithm while retaining performance comparable to the classic RBM. The advantages of the MVRBM have been demonstrated on two real-world applications: image super-resolution and handwritten digit recognition.
Another model related to our proposed MVRBM is the Replicated Softmax RBM (RS-RBM) @cite_7 . Similar to TvRBM in @cite_1 , RS-RBM uses a linear mapping between a matrix input layer and a hidden vector layer. To model document topics in terms of word counts, an implicit condition is imposed on the matrix input: the sum of the binary entries in each row of the input matrix must be 1. Thus the Replicated Softmax model is actually equivalent to an RBM with vector softmax input units and identical weights for each unit.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2293139688", "2100002341" ], "abstract": [ "Restricted Boltzmann Machines (RBMs) are an important class of latent variable models for representing vector data. An under-explored area is multimode data, where each data point is a matrix or a tensor. Standard RBMs applying to such data would require vectorizing matrices and tensors, thus resulting in unnecessarily high dimensionality and at the same time, destroying the inherent higher-order interaction structures. This paper introduces Tensor-variate Restricted Boltzmann Machines (TvRBMs) which generalize RBMs to capture the multiplicative interaction between data modes and the latent variables. TvRBMs are highly compact in that the number of free parameters grows only linear with the number of modes. We demonstrate the capacity of TvRBMs on three real-world applications: handwritten digit classification, face recognition and EEG-based alcoholic diagnosis. The learnt features of the model are more discriminative than the rivals, resulting in better classification performance.", "We introduce a two-layer undirected graphical model, called a \"Replicated Softmax\", that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. We present efficient learning and inference algorithms for this model, and show how a Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data. This allows us to demonstrate that the proposed model is able to generalize much better compared to Latent Dirichlet Allocation in terms of both the log-probability of held-out documents and the retrieval accuracy." ] }
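The equivalence noted above (a matrix of one-hot rows with weights shared across rows behaves exactly like a vector of word counts) is easy to verify numerically. The sketch below uses invented sizes and is only meant to illustrate the tied-weight argument:

```python
import numpy as np

rng = np.random.default_rng(2)
V_SIZE, N_HIDDEN, DOC_LEN = 12, 5, 7  # vocabulary size, hidden units, words per doc

W = rng.standard_normal((V_SIZE, N_HIDDEN))  # weights shared across word positions

# Document as a binary matrix: one softmax (one-hot) row per word position,
# i.e. each row sums to 1 as required by the Replicated Softmax input condition.
words = rng.integers(0, V_SIZE, size=DOC_LEN)
X = np.zeros((DOC_LEN, V_SIZE))
X[np.arange(DOC_LEN), words] = 1.0

# Because every row shares W, the hidden input depends only on word counts:
s_matrix = (X @ W).sum(axis=0)          # matrix-input view
counts = X.sum(axis=0)                  # word-count vector
s_counts = counts @ W                   # equivalent vector-softmax view
print(np.allclose(s_matrix, s_counts))  # True
```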
1601.00605
2222636700
We develop a computational method for extremal Steklov eigenvalue problems and apply it to study the problem of maximizing the @math th Steklov eigenvalue as a function of the domain with a volume constraint. In contrast to the optimal domains for several other extremal Dirichlet- and Neumann-Laplacian eigenvalue problems, computational results suggest that the optimal domains for this problem are very structured. We reach the conjecture that the domain maximizing the @math th Steklov eigenvalue is unique (up to dilations and rigid transformations), has @math -fold symmetry, and has at least one axis of symmetry. The @math th Steklov eigenvalue has multiplicity 2 if @math is even and multiplicity 3 if @math is odd.
There are also a few computational studies of extremal Steklov problems. The most relevant is the recent work of B. Bogosel @cite_24 . That paper is primarily concerned with the development of methods based on fundamental solutions to compute Steklov, Wentzell, and Laplace-Beltrami eigenvalues. The method was used to demonstrate that the ball is the minimizer for a variety of shape optimization problems. The author also studies the problem of maximizing the first five Wentzell eigenvalues subject to a volume constraint, of which the Steklov problem considered here is a special case. Shape optimization problems for Steklov eigenvalues with mixed boundary conditions have also been studied @cite_25 .
{ "cite_N": [ "@cite_24", "@cite_25" ], "mid": [ "2347102175", "2963931288" ], "abstract": [ "We develop methods based on fundamental solutions to compute the Steklov, Wentzell and Laplace-Beltrami eigenvalues in the context of shape optimization. In the class of smooth simply connected two dimensional domains the numerical method is accurate and fast. A theoretical error bound is given along with comparisons with mesh-based methods. We illustrate the use of this method in the study of a wide class of shape optimization problems in two dimensions. We extend the method to the computation of the Laplace-Beltrami eigenvalues on surfaces and we investigate some spectral optimal partitioning problems.", "We numerically study positions of high spots (extrema) of the fundamental sloshing mode of a liquid in an axisymmetric tank. Our approach is based on a linear model and reduces the problem to an appropriate Steklov eigenvalue problem. We propose a numerical scheme for calculating sloshing modes and a novel method for making images of an oscillating fluid." ] }
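For reference, the Steklov eigenvalue problem discussed here places the spectral parameter in the boundary condition: on a bounded domain $\Omega$ with outward normal $n$,

```latex
\begin{aligned}
  \Delta u &= 0 && \text{in } \Omega, \\
  \frac{\partial u}{\partial n} &= \sigma u && \text{on } \partial\Omega,
\end{aligned}
```

with eigenvalues forming a sequence $0 = \sigma_0 \le \sigma_1 \le \sigma_2 \le \cdots$. The Wentzell problem mentioned above adds a boundary Laplace-Beltrami term to the second equation and reduces to the Steklov problem when its boundary parameter vanishes.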
1601.00599
2223161374
A large amount of social media hosted on platforms like Flickr and Instagram is related to social events. The task of social event classification refers to the distinction of event and non-event-related content as well as the classification of event types (e.g. sports events, concerts, etc.). In this paper, we provide an extensive study of textual, visual, as well as multimodal representations for social event classification. We investigate strengths and weaknesses of the modalities and study synergy effects between the modalities. Experimental results obtained with our multimodal representation outperform state-of-the-art methods and provide a new baseline for future research.
With the increasing popularity of social media, event detection and classification from images has become an attractive line of research. In @cite_39 the authors present a purely image-based method to classify images into events like "wedding" and "road trip". The authors extract a Bag-of-Words (BoW) representation from dense SIFT and color features. PageRank is used to select the most important features, and Support Vector Machines (SVM) finally predict the event type. The investigated dataset contains only event-related images; hence, no event relevance detection is performed.
{ "cite_N": [ "@cite_39" ], "mid": [ "2165935688" ], "abstract": [ "We propose a method of mining the most informative features for event recognition from photo collections. Our goal is to classify different event categories based on the visual content of a group of photos that constitute the event. Such photo groups are typical in a personal photo collection of different events. Visual features are extracted from the images, yet the features from individual images are often noisy and not all of them represent the distinguishing characteristics of an event. We employ the PageRank technique to mine the most informative features from the images that belong to the same event. Subsequently, we classify different event categories using the multiple images of the same event because we argue that they are more informative about the content of an event rather than any single image. We compare our proposed approach with the standard bag of features method (BOF) and observe considerable improvements in recognition accuracy." ] }
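The PageRank-based feature selection mentioned above relies on the standard power iteration. A minimal, generic sketch follows; the toy graph and damping factor are illustrative, not the authors' feature graph:

```python
import numpy as np

def pagerank(adj, damping=0.85, n_iter=100):
    """Power iteration; adj[i, j] = 1 means an edge from node j to node i."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # guard against dangling nodes
    M = adj / col_sums                  # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - damping) / n + damping * (M @ r)
    return r / r.sum()

# Tiny toy graph: node 2 receives the strongest incoming weight, so it ranks highest.
adj = np.array([[0, 0, 1],
                [1, 0, 1],
                [1, 1, 0]])
scores = pagerank(adj)
print(scores.argmax())  # 2
```

In the feature-selection setting, nodes would be visual features and edge weights their co-occurrence affinities; the highest-scoring features are kept.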
1601.00599
2223161374
A large amount of social media hosted on platforms like Flickr and Instagram is related to social events. The task of social event classification refers to the distinction of event and non-event-related content as well as the classification of event types (e.g. sports events, concerts, etc.). In this paper, we provide an extensive study of textual, visual, as well as multimodal representations for social event classification. We investigate strengths and weaknesses of the modalities and study synergy effects between the modalities. Experimental results obtained with our multimodal representation outperform state-of-the-art methods and provide a new baseline for future research.
For classification, methods such as k-NN, decision trees, random forests, and support vector machines (SVM) are employed. @cite_20 directly apply similarity measurements to assign images to event categories instead of using a trained classifier.
{ "cite_N": [ "@cite_20" ], "mid": [ "2293450225" ], "abstract": [ "This paper describes our participation in the Social Event Detection Task @ MediaEval 2013, which involves the detection of social events with associated images collaboratively annotated by online users. Two tasks are pursued: (i) cluster all images into events in a way that they belong together; (ii) classify the images based on event type. For this we have developed a framework for semantically structuring the social image collection. For Task 1 and Task 2, we achieved an overall F1 main score of 0.1426 and 0.4409, respectively." ] }
1601.00822
2230380333
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks, both exact and approximate. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
Even though not all ABMs satisfy stock-flow consistency, it is a hypothesis used by several authors @cite_57 @cite_40 @cite_29 @cite_37 @cite_27 . ABMs are praised for their flexibility and their ability to study large populations of heterogeneous, learning agents that interact in possibly non-linear ways. However, ABMs require computationally intensive simulations and face the problem of being over-parametrized. Calibration is thus difficult and unstable, all the more since empirical studies and reliable data are scarce compared to the dimension of the parameter space. Recent works tackle the issue of an efficient exploration of the parameter space @cite_31 . The issue of comparing a model to experimental data is another problem posed to practitioners, which has been addressed by different methods @cite_47 . Under simplifying assumptions such as the aggregation of a subset of agents, theoretical results concerning some macro-economic ABMs were recently obtained in a stock-flow-consistent framework, and phase diagrams established for specific dynamic rules @cite_18 @cite_44 .
{ "cite_N": [ "@cite_37", "@cite_47", "@cite_18", "@cite_29", "@cite_57", "@cite_44", "@cite_27", "@cite_40", "@cite_31" ], "mid": [ "2031482292", "2338557740", "2140213624", "2023013717", "1977823936", "", "2159339766", "", "2017627906" ], "abstract": [ "We model a macroeconomy with stock-flow consistent national accounts built from the local interactions of heterogenous agents (households, firms, bankers, and a government) through product, labor, and money markets in discrete time. We use this model to show that, without any restrictions on the type of interactions agents can make, and with asymmetric information on the part of firms and households in this economy, power-law dynamics with respect to firm size and firm age, income distribution, skill set choice, returns to innovation, and earnings can emerge from multiplicative processes originating in the labor market.", "This paper proposes a new method for empirically validate simulation models that generate artificial time series data comparable with real-world data. The approach is based on comparing structures of vector autoregression models which are estimated from both artificial and real-world data by means of causal search algorithms. This relatively simple procedure is able to tackle both the problem of confronting theoretical simulation models with the data and the problem of comparing different models in terms of their empirical reliability. Moreover the paper provides an application of the validation procedure to the (2015) macro-model.", "The aim of this work is to explore the possible types of phenomena that simple macroeconomic Agent-Based models (ABMs) can reproduce. We propose a methodology, inspired by statistical physics, that characterizes a model through its “phase diagram” in the space of parameters. Our first motivation is to understand the large macro-economic fluctuations observed in the “Mark I” ABM devised by Delli Gatti and collaborators. 
In this regard, our major finding is the generic existence of a phase transition between a “good economy” where unemployment is low, and a “bad economy” where unemployment is high. We then introduce a simpler framework that allows us to show that this transition is robust against many modifications of the model, and is generically induced by an asymmetry between the rate of hiring and the rate of firing of the firms. The unemployment level remains small until a tipping point, beyond which the economy suddenly collapses. If the parameters are such that the system is close to this transition, any small fluctuation is amplified as the system jumps between the two equilibria. We have explored several natural extensions of the model. One is to introduce a bankruptcy threshold, limiting the firms maximum level of debt-to-sales ratio. This leads to a rich phase diagram with, in particular, a region where acute endogenous crises occur, during which the unemployment rate shoots up before the economy can recover. We also introduce simple wage policies. This leads to inflation (in the “good” phase) or deflation (in the “bad” phase), but leaves the overall phase diagram of the model essentially unchanged. We have also explored the effect of simple monetary policies that attempt to contain rising unemployment and defang crises. We end the paper with general comments on the usefulness of ABMs to model macroeconomic phenomena, in particular in view of the time needed to reach a steady state that raises the issue of ergodicity in these models.", "We present a model of a dynamic and complex economy in which the creation and the destruction of money result from interactions between multiple and heterogeneous agents. In the baseline scenario, we observe the stabilization of the income distribution between wages and profits. We then alter the model by increasing the flexibility of wages. This change leads to the formation of a deflationary spiral. 
Aggregate activity decreases and the unemployment increases. The macroeconomic stability of the model is affected and eventually a systemic crisis arises. Finally, we show that the introduction of a minimum wage would have allowed the aggregate demand to be boosted and to avoid this crisis.", "", "", "Given the economy's complex behavior and sudden transitions as evidenced in the 2007–08 crisis, agent-based models are widely considered a promising alternative to current macroeconomic practice dominated by DSGE models. Their failure is commonly interpreted as a failure to incorporate heterogeneous interacting agents. This paper explains that complex behavior and sudden transitions also arise from the economy's financial structure as reflected in its balance sheets, not just from heterogeneous interacting agents. It introduces \"flow-of-funds\" and \"accounting\" models, which were preeminent in successful anticipations of the recent crisis. In illustration, a simple balance-sheet model of the economy is developed to demonstrate that nonlinear behavior and sudden transition may arise from the economy’s balance-sheet structure, even without any microfoundations. The paper concludes by discussing one recent example of combining flow-of-funds and agent-based models. This appears a promising avenue for future research.", "", "Extensive exploration of simulation models comes at a high computational cost, all the more when the model involves a lot of parameters. Economists usually rely on random explorations, such as Monte Carlo simulations, and basic econometric modeling to approximate the properties of computational models. This paper aims to provide guidelines for the use of a much more efficient method that combines a parsimonious sampling of the parameter space using a specific design of experiments (DoE), with a well-suited metamodeling method first developed in geostatistics: kriging. 
We illustrate these guidelines by following them in the analysis of two simple and well known economic models: Nelson and Winter's industrial dynamics model, and Cournot oligopoly with learning firms. In each case, we show that our DoE experiments can catch the main effects of the parameters on the models' dynamics with a much lower number of simulations than the Monte-Carlo sampling (e.g. 85 simulations instead of 2,000 in the first case). In the analysis of the second model, we also introduce supplementary numerical tools that may be combined with this method, for characterizing configurations complying with a specific criterion (social optimal, replication of stylized facts, etc.). Our appendix gives an example of the R-project code that can be used to apply this method on other models, in order to encourage other researchers to quickly test this approach on their models." ] }
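The approximate volume computations mentioned in the abstract above can be illustrated with the simplest such method: hit-or-miss Monte Carlo over a bounding box. The constraint set below (a unit triangle of area 0.5) is a made-up stand-in for the economic polytope; names and sizes are illustrative:

```python
import numpy as np

def polytope_volume_mc(A, b, box_lo, box_hi, n_samples=200_000, seed=0):
    """Hit-or-miss Monte Carlo estimate of vol{x : A @ x <= b} inside a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(box_lo, float), np.asarray(box_hi, float)
    pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    inside = np.all(pts @ A.T <= b, axis=1)   # check every constraint per sample
    box_vol = np.prod(hi - lo)
    return box_vol * inside.mean()

# Illustrative constraints: x >= 0, y >= 0, x + y <= 1 (a triangle of area 0.5).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
vol = polytope_volume_mc(A, b, box_lo=[0, 0], box_hi=[1, 1])
print(round(vol, 2))  # ~0.5
```

Hit-or-miss sampling degrades quickly with dimension, which is why the exact and metabolic-network-inspired methods compared in the paper matter for realistic economic circuits.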
1601.00822
2230380333
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks, both exact and approximate. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
The importance of the organization of interactions, embodied by networks, has been stressed in economics @cite_8 . Theoretical works have studied their generic properties (supply chains @cite_32 , interbank networks @cite_9 , trade credit @cite_0 ). Empirical studies have emphasized the topology of real economic networks, such as goods markets, national inter-firm trading @cite_21 , world trade @cite_19 , global corporate control among transnational corporations @cite_34 , and firm ownership networks. These networks must sometimes be reconstructed from limited information (equity investments in the stock market @cite_52 , the interbank market @cite_60 ). At the level of individuals, detailed topological information about banking, employment, or consumption seems to be lacking.
{ "cite_N": [ "@cite_8", "@cite_60", "@cite_9", "@cite_21", "@cite_32", "@cite_52", "@cite_0", "@cite_19", "@cite_34" ], "mid": [ "1533368239", "2004165317", "2085949089", "1993432489", "1990407767", "1993879598", "2134867358", "2594770789", "2116075173" ], "abstract": [ "Networks of relationships help determine the careers that people choose, the jobs they obtain, the products they buy, and how they vote. The many aspects of our lives that are governed by social networks make it critical to understand how they impact behavior, which network structures are likely to emerge in a society, and why we organize ourselves as we do. In Social and Economic Networks, Matthew Jackson offers a comprehensive introduction to social and economic networks, drawing on the latest findings in economics, sociology, computer science, physics, and mathematics. He provides empirical background on networks and the regularities that they exhibit, and discusses random graph-based models and strategic models of network formation. He helps readers to understand behavior in networked societies, with a detailed analysis of learning and diffusion in networks, decision making by individuals who are influenced by their social neighbors, game theory and markets on networks, and a host of related subjects. Jackson also describes the varied statistical and modeling techniques used to analyze social networks. Each chapter includes exercises to aid students in their analysis of how networks function. This book is an indispensable resource for students and researchers in economics, mathematics, physics, sociology, and business.", "We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed by different banks exchanging on a daily basis loans and debts of liquidity. 
Through topological analysis and by means of a model of network growth we can determine the formation of different group of banks characterized by different business strategy. The model based on Pareto's Law makes no use of growth or preferential attachment and it reproduces correctly all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.", "The Statistical Physics of Complex Networks has recently provided new theoretical tools for policy makers. Here we extend the notion of network controllability to detect the financial institutions, i.e. the drivers, that are most crucial to the functioning of an interbank market. The system we investigate is a paradigmatic case study for complex networks since it undergoes dramatic structural changes over time and links among nodes can be observed at several time scales. We find a scale-free decay of the fraction of drivers with increasing time resolution, implying that policies have to be adjusted to the time scales in order to be effective. Moreover, drivers are often not the most highly connected “hub” institutions, nor the largest lenders, contrary to the results of other studies. Our findings contribute quantitative indicators which can support regulators in developing more effective supervision and intervention policies.", "To investigate the actual phenomena of transport on a complex network, we analysed empirical data for an inter-firm trading network, which consists of about one million Japanese firms and the sales of these firms (a sale corresponds to the total in-flow into a node). First, we analysed the relationships between sales and sales of nearest neighbourhoods from which we obtain a simple linear relationship between sales and the weighted sum of sales of nearest neighbourhoods (i.e., customers). 
In addition, we introduce a simple money transport model that is coherent with this empirical observation. In this model, a firm (i.e., customer) distributes money to its out-edges (suppliers) proportionally to the in-degree of destinations. From intensive numerical simulations, we find that the steady flows derived from these models can approximately reproduce the distribution of sales of actual firms. The sales of individual firms deduced from the money-transport model are shown to be proportional, on an average, to the real sales.", "Although standard economics textbooks are seldom interested in production networks, modern economies are more and more based upon supplier customer interactions. One can consider entire sectors of the economy as generalised supply chains. We will take this view in the present paper and study under which conditions local failures to produce or simply to deliver can result in avalanches of shortage and bankruptcies and in localisation of the economic activity. We will show that a large class of models exhibit scale free distributions of production and wealth among firms and that regions of high production are localised.", "We propose a network description of large market investments, where both stocks and shareholders are represented as vertices connected by weighted links corresponding to shareholdings. In this framework, the in-degree (kin) and the sum of incoming link weights (v) of an investor correspond to the number of assets held (portfolio diversification) and to the invested wealth (portfolio volume), respectively. An empirical analysis of three different real markets reveals that the distributions of both kin and v display power-law tails with exponents γ and α. Moreover, we find that kin scales as a power-law function of v with an exponent β. Remarkably, despite the values of α, β and γ differ across the three markets, they are always governed by the scaling relation β=(1-α) (1-γ). 
We show that these empirical findings can be reproduced by a recent model relating the emergence of scale-free networks to an underlying Paretian distribution of ‘hidden’ vertex properties.", "We present a simple model of a production network in which firms are linked by supplier–customer relationships involving extension of trade credit. Our aim is to identify the minimal set of mechanisms which reproduce qualitatively the main stylized facts of industrial demography, such as firms’ size distribution, and, at the same time, the correlation, over time and across firms, of output, growth and bankruptcies. The behavior of aggregate variables can be traced back to the direct firm–firm interdependence. In this paper, we assume that the number of firms is constant and the network has a periodic static structure. But the framework allows further extensions to investigate which network structures are more robust against domino effects and, if the network is allowed to evolve in time, which structures emerge spontaneously, depending on the individual strategies for orders and delivery.", "Among the proposed network models, the hidden-variable (or good-get-richer) one is particularly interesting, even if an explicit empirical test of its hypotheses has not yet been performed on a real network. Here we provide the first empirical test of this mechanism on the world trade web, the network defined by the trade relationships between world countries. We find that the power-law distributed gross domestic product can be successfully identified with the hidden variable (or fitness) determining the topology of the world trade web: all previously studied properties up to third-order correlation structure (degree distribution, degree correlations and hierarchy) are found to be in excellent agreement with the predictions of the model. 
The choice of the connection probability is such that all realizations of the network with the same degree sequence are equiprobable.", "The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers." ] }
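Several of the abstracts above invoke the same hidden-variable (fitness) mechanism: a Paretian distribution of node fitness generates a scale-free network. A minimal sketch of that mechanism, assuming a common saturating connection probability p_ij = z*x_i*x_j / (1 + z*x_i*x_j); the parameter values and distribution settings are illustrative, not taken from any cited paper:

```python
import random

def pareto_fitness(n, alpha=2.0, xmin=1.0, seed=0):
    """Sample n 'hidden' fitness values from a Pareto distribution."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def fitness_network(fitness, z=0.01, seed=1):
    """Link nodes i, j with probability p_ij = z*x_i*x_j / (1 + z*x_i*x_j),
    so higher-fitness ('better') nodes attract more links."""
    rng = random.Random(seed)
    edges = []
    for i in range(len(fitness)):
        for j in range(i + 1, len(fitness)):
            zxx = z * fitness[i] * fitness[j]
            if rng.random() < zxx / (1.0 + zxx):   # p_ij is always in [0, 1)
                edges.append((i, j))
    return edges

x = pareto_fitness(200)            # the 'hidden' variable (e.g. GDP, wealth)
edges = fitness_network(x)
degree = [0] * len(x)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
```

With a heavier Pareto tail (smaller alpha), the resulting degree distribution broadens, which is the qualitative effect the fitness models above are meant to capture.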
1601.00576
2230951028
In this paper, we compute the cohomology with compact supports of a Picard modular surface as a virtual module over the product of the appropriate Galois group and the appropriate Hecke algebra. We use the method developed by Ihara, Langlands, and Kottwitz: comparison of the Grothendieck-Lefschetz formula and the Arthur-Selberg trace formula. Our implementation of this method takes as its starting point the work of Laumon and Morel.
(1) In 1997, Laumon computed the cohomology with compact supports for the group @math ; see @cite_2 . Our work is adapted from Laumon's.
{ "cite_N": [ "@cite_2" ], "mid": [ "2008185589" ], "abstract": [ "In this paper we compute the cohomology with compact supports of a Siegel threefold as a virtual module over the product of the Galois group of ( \bar Q ) over ( Q ) and the Hecke algebra. We use a method which has been developed by Ihara, Langlands and Kottwitz: comparison of the Grothendieck-Lefschetz formula and the Arthur-Selberg trace formula." ] }
1601.00770
2229639163
We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.
LSTM-RNNs have been widely used for sequential labeling, such as clause identification @cite_25 , phonetic labeling @cite_4 , and NER @cite_18 . Recently, showed that building a conditional random field (CRF) layer on top of bidirectional LSTM-RNNs performs comparably to the state-of-the-art methods in the part-of-speech (POS) tagging, chunking, and NER.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_25" ], "mid": [ "2042188227", "2513222501", "1567199133" ], "abstract": [ "In this approach to named entity recognition, a recurrent neural network, known as Long Short-Term Memory, is applied. The network is trained to perform 2 passes on each sentence, outputting its decisions on the second pass. The first pass is used to acquire information for disambiguation during the second pass. SARDNET, a self-organising map for sequences is used to generate representations for the lexical items presented to the LSTM network, whilst orthogonal representations are used to represent the part of speech and chunk tags.", "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it'.", "" ] }
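The bidirectional LSTM-RNNs used in the works above compute, for each token, a forward and a backward hidden state and concatenate them into a context-sensitive representation. A minimal pure-Python sketch of that recurrence, with toy dimensions and random weights (no biases or peepholes; an illustration, not any cited system's implementation):

```python
import math
import random

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step: input (i), forget (f), output (o) gates, candidate (g)."""
    H = len(h)
    z = [a + b for a, b in zip(matvec(p["Wx"], x), matvec(p["Wh"], h))]
    i = [sigmoid(v) for v in z[0:H]]
    f = [sigmoid(v) for v in z[H:2 * H]]
    o = [sigmoid(v) for v in z[2 * H:3 * H]]
    g = [math.tanh(v) for v in z[3 * H:4 * H]]
    c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
    h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
    return h_new, c_new

def bilstm(seq, p_fwd, p_bwd, H):
    """Run the sequence forwards and backwards; concatenate both states."""
    def run(xs, p):
        h, c, out = [0.0] * H, [0.0] * H, []
        for x in xs:
            h, c = lstm_step(x, h, c, p)
            out.append(h)
        return out
    fwd = run(seq, p_fwd)
    bwd = list(reversed(run(list(reversed(seq)), p_bwd)))
    return [f + b for f, b in zip(fwd, bwd)]

rng = random.Random(0)
D, H = 3, 2    # toy embedding and hidden sizes
def params():
    return {"Wx": [[rng.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(4 * H)],
            "Wh": [[rng.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(4 * H)]}
sent = [[rng.uniform(-1, 1) for _ in range(D)] for _ in range(5)]  # 5 "tokens"
states = bilstm(sent, params(), params(), H)
```

A tagger would feed each concatenated state to a softmax (or, as in the CRF-layer work mentioned above, a CRF) over the label set.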
1601.00770
2229639163
We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.
For relation classification, in addition to traditional feature kernel-based approaches @cite_21 @cite_15 , several neural models have been proposed in the SemEval-2010 Task 8 @cite_26 , including embedding-based models @cite_11 , CNN-based models @cite_1 , and RNN-based models @cite_31 . Recently, and showed that the shortest dependency paths between relation arguments, which were used in feature kernel-based systems @cite_15 , are also useful in NN-based models. also showed that LSTM-RNNs are useful for relation classification, but the performance was worse than CNN-based models. compared separate sequence-based and tree-structured LSTM-RNNs on relation classification, using basic RNN model structures.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_21", "@cite_1", "@cite_15", "@cite_11" ], "mid": [ "1889268436", "2099779943", "1981082061", "2155454737", "2138627627", "1914293925" ], "abstract": [ "Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.", "We present a brief overview of the main challenges in the extraction of semantic relations from English text, and discuss the shortcomings of previous data sets and shared tasks. This leads us to introduce a new task, which will be part of SemEval-2010: multi-way classification of mutually exclusive semantic relations between pairs of common nominals. The task is designed to compare different approaches to the problem and to provide a standard testbed for future research, which can benefit many applications in Natural Language Processing.", "We present an application of kernel methods to extracting relations from unstructured natural language sources. 
We introduce kernels defined over shallow parse representations of text, and design efficient algorithms for computing the kernels. We use the devised kernels in conjunction with Support Vector Machine and Voted Perceptron learning algorithms for the task of extracting person-affiliation and organization-location relations from text. We experimentally evaluate the proposed methods and compare them with feature-based learning algorithms, with promising results.", "Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.", "We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. 
Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.", "We present a novel learning method for word embeddings designed for relation classification. Our word embeddings are trained by predicting words between noun pairs using lexical relation-specific features on a large unlabeled corpus. This allows us to explicitly incorporate relation-specific information into the word embeddings. The learned word embeddings are then used to construct feature vectors for a relation classification model. On a well-established semantic relation classification task, our method significantly outperforms a baseline based on a previously introduced word embedding method, and compares favorably to previous state-of-the-art models that use syntactic information or manually constructed external resources." ] }
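The shortest dependency path exploited by the kernel above, and reused as input in later neural models, can be recovered by walking both entities' head chains up to their lowest common ancestor. A small sketch, assuming a head-index encoding of the parse; the toy sentence and indices are made up for illustration:

```python
def shortest_dep_path(heads, a, b):
    """Shortest path between tokens a and b in a dependency tree.

    heads[i] is the index of token i's head; the root has head -1.
    The path runs a -> ... -> lowest common ancestor -> ... -> b.
    """
    def ancestors(i):                       # i, head(i), ..., root
        chain = [i]
        while heads[chain[-1]] != -1:
            chain.append(heads[chain[-1]])
        return chain
    pa, pb = ancestors(a), ancestors(b)
    seen = set(pa)
    lca = next(n for n in pb if n in seen)  # first shared ancestor
    up = pa[:pa.index(lca) + 1]             # a ... lca
    down = pb[:pb.index(lca)]               # b ... (just below lca)
    return up + list(reversed(down))

# toy parse of "The burst was caused by pressure"
#   0:The -> 1:burst   1:burst -> 3:caused   2:was -> 3:caused
#   3:caused = root    4:by -> 5:pressure    5:pressure -> 3:caused
heads = [1, 3, 3, -1, 5, 3]
path = shortest_dep_path(heads, 1, 5)       # between "burst" and "pressure"
```

Here the path visits only "burst", "caused", and "pressure", dropping function words that carry little relational signal, which is the intuition behind the shortest-path hypothesis.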
1601.00770
2229639163
We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.
Research on tree-structured LSTM-RNNs @cite_34 fixes the direction of information propagation from bottom to top, and also cannot handle an arbitrary number of typed children as in a typed dependency tree. Furthermore, no RNN-based relation classification model simultaneously uses word sequence and dependency tree information. We propose several such novel model structures and training settings, investigating the simultaneous use of bidirectional sequential and bidirectional tree-structured LSTM-RNNs to jointly capture linear and dependency context for end-to-end extraction of relations between entities.
{ "cite_N": [ "@cite_34" ], "mid": [ "2963355447" ], "abstract": [ "A Long Short-Term Memory (LSTM) network is a type of recurrent neural network architecture which has recently obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)." ] }
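The Child-Sum variant of the Tree-LSTM described above composes a node's state from an arbitrary, unordered set of children, with a separate forget gate per child. A minimal pure-Python sketch of one node update (toy dimensions, random weights, zeroed biases, and my own parameter names; an illustration of the update equations, not the cited implementation):

```python
import math
import random

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def affine(W, U, x, h, b):
    return [sum(w * xv for w, xv in zip(wr, x)) +
            sum(u * hv for u, hv in zip(ur, h)) + bv
            for wr, ur, bv in zip(W, U, b)]

def childsum_node(x, children, p):
    """One Child-Sum Tree-LSTM update: gates computed from the summed child
    states, plus one forget gate per child."""
    H = len(p["b_i"])
    hs = [ch[0] for ch in children]
    cs = [ch[1] for ch in children]
    h_tilde = [sum(h[k] for h in hs) for k in range(H)] if hs else [0.0] * H
    i = [sig(v) for v in affine(p["W_i"], p["U_i"], x, h_tilde, p["b_i"])]
    o = [sig(v) for v in affine(p["W_o"], p["U_o"], x, h_tilde, p["b_o"])]
    u = [math.tanh(v) for v in affine(p["W_u"], p["U_u"], x, h_tilde, p["b_u"])]
    c = [iv * uv for iv, uv in zip(i, u)]
    for h_k, c_k in zip(hs, cs):                    # per-child forget gate
        f = [sig(v) for v in affine(p["W_f"], p["U_f"], x, h_k, p["b_f"])]
        c = [cv + fv * ckv for cv, fv, ckv in zip(c, f, c_k)]
    h = [ov * math.tanh(cv) for ov, cv in zip(o, c)]
    return h, c

rng = random.Random(0)
D, H = 3, 2    # toy input and hidden sizes
def mat(r, c):
    return [[rng.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]
p = {}
for g in "ifou":
    p["W_" + g], p["U_" + g], p["b_" + g] = mat(H, D), mat(H, H), [0.0] * H
leaf1 = childsum_node([1.0, 0.0, 0.0], [], p)
leaf2 = childsum_node([0.0, 1.0, 0.0], [], p)
root_h, root_c = childsum_node([0.0, 0.0, 1.0], [leaf1, leaf2], p)
```

Because the child states are summed, the update is invariant to child order and handles variable branching, which is what makes this architecture a natural fit for dependency trees.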
1601.00770
2229639163
We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.
As for end-to-end (joint) extraction of relations between entities, all existing models are feature-based systems (and no NN-based model has been proposed). Such models include structured prediction @cite_29 @cite_13 , integer linear programming @cite_22 @cite_24 , card-pyramid parsing @cite_17 , and global probabilistic graphical models @cite_7 @cite_6 . Among these, structured prediction methods are state-of-the-art on several corpora. We present an improved, NN-based alternative for end-to-end relation extraction.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_29", "@cite_6", "@cite_24", "@cite_13", "@cite_17" ], "mid": [ "2465041517", "2133439966", "2134033474", "2077054525", "2115834228", "2251091211", "1866795174" ], "abstract": [ "", "In this paper, we investigate the problem of entity identification and relation extraction from encyclopedia articles, and we propose a joint discriminative probabilistic model with arbitrary graphical structure to optimize all relevant subtasks simultaneously. This modeling offers a natural formalism for exploiting rich dependencies and interactions between relevant subtasks to capture mutual benefits, as well as a great flexibility to incorporate a large collection of arbitrary, overlapping and non-independent features. We show the parameter estimation algorithm of this model. Moreover, we propose a new inference method, namely collective iterative classification (CIC), to find the most likely assignments for both entities and relations. We evaluate our model on real-world data from Wikipedia for this task, and compare with current state-of-the-art pipeline and joint models, demonstrating the effectiveness and feasibility of our approach.", "We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of semi-Markov chain is adopted to the new framework as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. 
Experiments on Automatic Content Extraction (ACE) corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.", "Although joint inference is an effective approach to avoid cascading of errors when inferring multiple natural language tasks, its application to information extraction has been limited to modeling only two tasks at a time, leading to modest improvements. In this paper, we focus on the three crucial tasks of automated extraction pipelines: entity tagging, relation extraction, and coreference. We propose a single, joint graphical model that represents the various dependencies between the tasks, allowing flow of uncertainty across task boundaries. Since the resulting model has a high tree-width and contains a large number of variables, we present a novel extension to belief propagation that sparsifies the domains of variables during inference. Experimental results show that our joint model consistently improves results on all three tasks as we represent more dependencies. In particular, our joint model obtains 12 error reduction on tagging over the isolated models.", "This paper addresses the task of fine-grained opinion extraction - the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. 
Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.", "This paper proposes a history-based structured learning approach that jointly extracts entities and relations in a sentence. We introduce a novel simple and flexible table representation of entities and relations. We investigate several feature settings, search orders, and learning methods with inexact search on the table. The experimental results demonstrate that a joint learning approach significantly outperforms a pipeline approach by incorporating global features and by selecting appropriate learning methods and search orders.", "Both entity and relation extraction can benefit from being performed jointly, allowing each task to correct the errors of the other. We present a new method for joint entity and relation extraction using a graph we call a \"card-pyramid.\" This graph compactly encodes all possible entities and relations in a sentence, reducing the task of their joint extraction to jointly labeling its nodes. We give an efficient labeling algorithm that is analogous to parsing using dynamic programming. Experimental results show improved results for our joint extraction method compared to a pipelined approach." ] }
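Scheduled sampling, which the main abstract above uses to expose the network to its own entity predictions during training, amounts to a per-step coin flip between the gold label and the model's previous output, with the gold-feeding probability decayed over training. A hedged sketch with a stand-in predictor; the decay schedule, label scheme, and function names are illustrative assumptions, not the paper's exact recipe:

```python
import random

def epsilon_at(epoch, k=5.0):
    """Inverse-sigmoid-style decay of the probability of feeding gold labels."""
    return k / (k + pow(2.718281828, epoch / k))

def scheduled_sampling_inputs(gold, predict, epsilon, rng):
    """Build decoder inputs: the previous gold label with probability epsilon,
    otherwise the model's own previous prediction."""
    inputs, prev = [], "<s>"
    for g in gold:
        inputs.append(prev)
        pred = predict(prev)
        prev = g if rng.random() < epsilon else pred
    return inputs

rng = random.Random(0)
gold = ["B-PER", "L-PER", "O", "O"]
fake_predict = lambda prev: "O"      # stand-in for the entity tagger
early = scheduled_sampling_inputs(gold, fake_predict, epsilon_at(0), rng)
late = scheduled_sampling_inputs(gold, fake_predict, epsilon_at(100), rng)
```

Early in training the inputs are mostly gold labels; late in training they are almost entirely the model's own predictions, matching the test-time condition.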
1601.00781
2227029114
This paper presents a method for analyzing the vote space created by the local feature extraction process in a multi-detection system. The method is an alternative to the classic clustering approach and gives a high level of control over cluster composition for further verification steps. The proposed method comprises a graphical vote space presentation, proposition generation, two-pass iterative vote aggregation, and cascade filters for verification of the propositions. The cascade filters contain all of the minor algorithms needed for effective verification of object detections. The new approach avoids the drawbacks of the classic clustering approaches and gives substantial control over the detection process. The method exhibits an exceptionally high detection rate in conjunction with a low false detection rate in comparison to alternative methods.
The best-known feature points to date are the SIFT points developed by Lowe @cite_8 , which became a reference model for various local feature benchmarks. The closest alternative to SIFT is SURF @cite_7 , which offers lower dimensionality and, as a result, higher computational efficiency. There have also been attempts to incorporate additional enhancements into SIFT and SURF, such as PCA-SIFT @cite_19 or Affine-SIFT @cite_0 . SIFT, SURF, and their derivatives are computationally demanding during the matching process. In recent years there has been substantial development of feature points based on binary test pairs, which can be described and matched very quickly. The flagships of this approach are the BRIEF @cite_20 , ORB @cite_5 , BRISK @cite_18 and FREAK @cite_11 features. Most of the cited algorithms can be used to create a dense and highly discriminative voting space, which holds the substantial object correspondence data needed to accomplish many real-world detection tasks.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_0", "@cite_19", "@cite_5", "@cite_20", "@cite_11" ], "mid": [ "2141584146", "2119605622", "", "2052094314", "2145072179", "2117228865", "1491719799", "1995266040" ], "abstract": [ "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.", "This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. 
The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision.", "", "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. 
While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see the figure illustrating high transition tilts). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.", "Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching.", "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. 
We demonstrate through experiments how ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.", "We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.", "A large number of vision applications rely on matching keypoints across images. The last decade featured an arms-race towards faster and more robust keypoints and association algorithms: Scale Invariant Feature Transform (SIFT)[17], Speed-up Robust Feature (SURF)[4], and more recently Binary Robust Invariant Scalable Keypoints (BRISK)[16] to name a few. These days, the deployment of vision algorithms on smart phones and embedded devices with low memory and computation complexity has even upped the ante: the goal is to make descriptors faster to compute, more compact while remaining robust to scale, rotation and noise. To best address the current requirements, we propose a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. Our experiments show that FREAKs are in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK. 
They are thus competitive alternatives to existing keypoints in particular for embedded applications." ] }
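What makes the binary descriptors surveyed above (BRIEF, ORB, BRISK, FREAK) fast to match is that descriptor similarity reduces to a Hamming distance over bit strings, i.e. an XOR plus a popcount. A toy sketch with 8-bit descriptors; real descriptors are typically 256 bits or more:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as integers."""
    return bin(d1 ^ d2).count("1")

def nearest(query, database):
    """Brute-force nearest neighbour under the Hamming distance."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

# toy 8-bit "descriptors"; real BRIEF/ORB/BRISK/FREAK strings are 256+ bits
db = [0b10110100, 0b01001011, 0b11110000]
q = 0b10110110          # differs from db[0] in a single bit
best = nearest(q, db)
```

In practice, libraries compare packed 256-bit descriptors the same way, often using hardware popcount instructions, which is why Hamming matching is so much cheaper than the L2 comparisons needed for SIFT or SURF.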