aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1608.01302 | 2462851002 | We investigate learning heuristics for domain-specific planning. Prior work framed learning a heuristic as an ordinary regression problem. However, in a greedy best-first search, the ordering of states induced by a heuristic is more indicative of the resulting planner's performance than mean squared error. Thus, we instead frame learning a heuristic as a learning-to-rank problem, which we solve using a RankSVM formulation. Additionally, we introduce new methods for computing features that capture temporal interactions in an approximate plan. Our experiments on recent International Planning Competition problems show that the RankSVM-learned heuristics outperform both the original heuristics and heuristics learned through ordinary regression. | The authors of @cite_10 investigated greedy heuristic search performance in several combinatorial search domains. Their results suggest that heuristics that exhibit strong correlation with the distance-to-go are less likely to produce large local minima, and large local minima are thought to often dominate the runtime of greedy planners @cite_23 @cite_0 . They later use the Kendall rank correlation coefficient ( @math ) to select a pattern database for some of these domains @cite_21 . Their use of @math as a heuristic quality metric differs from ours because they score @math using sampled states near the goal, while we score @math by ranking the states on a plan. | {
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_10",
"@cite_23"
],
"mid": [
"2406763882",
"1949377792",
"2404262858",
"2170377262"
],
"abstract": [
"The ignoring delete lists relaxation is of paramount importance for both satisficing and optimal planning. In earlier work (Hoffmann 2005), it was observed that the optimal relaxation heuristic h+ has amazing qualities in many classical planning benchmarks, in particular pertaining to the complete absence of local minima. The proofs of this are hand-made, raising the question whether such proofs can be led automatically by domain analysis techniques. In contrast to earlier disappointing results (Hoffmann 2005) - the analysis method has exponential runtime and succeeds only in two extremely simple benchmark domains - we herein answer this question in the affirmative. We establish connections between causal graph structure and h+ topology. This results in low-order polynomial time analysis methods, implemented in a tool we call TorchLight. Of the 12 domains where the absence of local minima has been proved, TorchLight gives strong success guarantees in 8 domains. Empirically, its analysis exhibits strong performance in a further 2 of these domains, plus in 4 more domains where local minima may exist but are rare. In this way, TorchLight can distinguish \"easy\" domains from \"hard\" ones. By summarizing structural reasons for analysis failure, TorchLight also provides diagnostic output indicating domain aspects that may cause local minima.",
"Suboptimal heuristic search algorithms such as greedy best-first search allow us to find solutions when constraints of either time, memory, or both prevent the application of optimal algorithms such as A*. Guidelines for building an effective heuristic for A* are well established in the literature, but we show that if those rules are applied for greedy best-first search, performance can actually degrade. Observing what went wrong for greedy best-first search leads us to a quantitative metric appropriate for greedy heuristics, called Goal Distance Rank Correlation (GDRC). We demonstrate that GDRC can be used to build effective heuristics for greedy best-first search automatically.",
"Weighted A* is the most popular satisficing algorithm for heuristic search. Although there is no formal guarantee that increasing the weight on the heuristic cost-to-go estimate will decrease search time, it is commonly assumed that increasing the weight leads to faster searches, and that greedy search will provide the fastest search of all. As we show, however, in some domains, increasing the weight slows down the search. This has an important consequence on the scaling behavior of Weighted A*: increasing the weight ad infinitum will only speed up the search if greedy search is effective. We examine several plausible hypotheses as to why greedy search would sometimes expand more nodes than A* and show that each of the simple explanations has flaws. Our contribution is to show that greedy search is fast if and only if there is a strong correlation between h(n) and d∗(n), the true distance-to-go, or if the heuristic is extremely accurate.",
"Between 1998 and 2004, the planning community has seen vast progress in terms of the sizes of benchmark examples that domain-independent planners can tackle successfully. The key technique behind this progress is the use of heuristic functions based on relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. The unprecedented success of such methods, in many commonly used benchmark examples, calls for an understanding of what classes of domains these methods are well suited for. In the investigation at hand, we derive a formal background to such an understanding. We perform a case study covering a range of 30 commonly used STRIPS and ADL benchmark domains, including all examples used in the first four international planning competitions. We prove connections between domain structure and local search topology – heuristic cost surface properties – under an idealized version of the heuristic functions used in modern planners. The idealized heuristic function is called h+, and differs from the practically used functions in that it returns the length of an optimal relaxed plan, which is NP-hard to compute. We identify several key characteristics of the topology under h+, concerning the existence or non-existence of unrecognized dead ends, as well as the existence or non-existence of constant upper bounds on the difficulty of escaping local minima and benches. These distinctions divide the (set of all) planning domains into a taxonomy of classes of varying h+ topology. As it turns out, many of the 30 investigated domains lie in classes with a relatively easy topology. Most particularly, 12 of the domains lie in classes where FF’s search algorithm, provided with h+, is a polynomial solving mechanism. We also present results relating h+ to its approximation as implemented in FF. The behavior regarding dead ends is provably the same. We summarize the results of an empirical investigation showing that, in many domains, the topological qualities of h+ are largely inherited by the approximation. The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure and search performance. The theoretical investigation also gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques. We outline some preliminary steps we made in that direction."
]
} |
1608.01441 | 2949111601 | Multi-label learning has attracted significant interest in computer vision recently, finding applications in many vision tasks such as multiple object recognition and automatic image annotation. Associating multiple labels to a complex image is very difficult, not only due to the intricacy of describing the image, but also because of the incomplete nature of the observed labels. Existing works on the problem either ignore the label-label and instance-instance correlations or just assume these correlations are linear and unstructured. Considering that semantic correlations between images are actually structured, in this paper we propose to incorporate structured semantic correlations to solve the missing label problem of multi-label learning. Specifically, we project images to the semantic space with an effective semantic descriptor. A semantic graph is then constructed on these images to capture the structured correlations between them. We utilize the semantic graph Laplacian as a smooth term in the multi-label learning formulation to incorporate the structured semantic correlations. Experimental results demonstrate the effectiveness of the proposed semantic descriptor and the usefulness of incorporating the structured semantic correlations. We achieve better results than state-of-the-art multi-label learning methods on four benchmark datasets. | Nearest-neighbor (NN) based methods are also commonly utilized in multi-label applications. For label propagation, @cite_4 proposed the Correlated Label Propagation (CLP) framework that propagates multiple labels jointly based on kNN methods. @cite_3 utilized NN relationships as the label view in a multi-view multi-instance framework for multi-label object recognition. TagProp @cite_22 combines metric learning and kNN to propagate labels.
For tag refinement, @cite_6 proposed to use a low-rank matrix completion formulation with several graph constraints as the objective function to refine noisy or incomplete labels. For tag ranking, several methods @cite_40 @cite_17 @cite_19 have been proposed to learn a ranking function utilizing the correlations between tags. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_40",
"@cite_17"
],
"mid": [
"2133510502",
"2536305071",
"2410641892",
"",
"2141697695",
"",
"2157031556"
],
"abstract": [
"Many computer vision applications, such as scene analysis and medical image interpretation, are ill-suited for traditional classification where each image can only be associated with a single class. This has stimulated recent work in multi-label learning where a given image can be tagged with multiple class labels. A serious problem with existing approaches is that they are unable to exploit correlations between class labels. This paper presents a novel framework for multi-label learning termed Correlated Label Propagation (CLP) that explicitly models interactions between labels in an efficient manner. As in standard label propagation, labels attached to training data points are propagated to test data points; however, unlike standard algorithms that treat each label independently, CLP simultaneously co-propagates multiple labels. Existing work eschews such an approach since naive algorithms for label co-propagation are intractable. We present an algorithm based on properties of submodular functions that efficiently finds an optimal solution. Our experiments demonstrate that CLP leads to significant gains in precision/recall against standard techniques on two real-world computer vision tasks involving several hundred labels.",
"Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art.",
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multi-instance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.",
"",
"Tags of social images play a central role for text-based social image retrieval and browsing tasks. However, the original tags annotated by web users could be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In this paper, we aim to overcome the challenge of social tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike the conventional learning approaches that usually assume some parametric models, our method is completely data-driven and makes no assumption of the underlying models, making the proposed solution practically more effective. We formally formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks, in which encouraging results showed that the proposed method is more effective than the conventional approaches.",
"",
"Folksonomy, considered a core component of the Web 2.0 user-participation architecture, is a classification system made up of users' tags on web resources. Recently, various approaches for image retrieval exploiting folksonomy have been proposed to improve the results of image search. However, the characteristics of the tags such as semantic ambiguity and non-controlledness limit the effectiveness of tags in image retrieval. Especially, tags associated with images in a random order do not provide any information about the relevance between a tag and an image. In this paper, we propose a novel image tag ranking system called i-TagRanker which exploits the semantic relationships between tags for re-ordering the tags according to their relevance to an image. The proposed system consists of two phases: 1) tag propagation phase, 2) tag ranking phase. In the tag propagation phase, we first collect the most relevant tags from similar images, and then propagate them to an untagged image. In the tag ranking phase, tags are ranked according to their semantic relevance to the image. From the experimental results on a Flickr photo collection of over 30,000 images, we show the effectiveness of the proposed system."
]
} |
1608.01632 | 2514449832 | Primary user activity is a major bottleneck for existing routing protocols in cognitive radio networks. Typical routing protocols avoid areas that are highly congested with primary users, leaving only a small fragment of available links for secondary route construction. In addition, wireless links are prone to channel impairments such as multipath fading, which renders the quality of the available links highly fluctuating. In this paper, we investigate using cooperative communication mechanisms to reveal new routing opportunities, enhance route qualities, and enable true coexistence of primary and secondary networks. As a result, we propose Undercover: a cooperative routing protocol that utilizes the available location information to assist in the routing process. Specifically, our protocol revisits a fundamental assumption taken by the state-of-the-art routing protocols designed for cognitive radio networks. Using Undercover, secondary users can transmit in the regions of primary user activity through utilizing cooperative communication techniques to null out transmission at primary receivers via beamforming. In addition, the secondary link qualities are enhanced using cooperative diversity. To account for the excessive levels of interference typically incurred due to cooperative transmissions, we allow our protocol to be interference-aware. Thus, cooperative transmissions are penalized in accordance with the amount of negatively affected secondary flows. We evaluate the performance of our proposed protocol via NS2 simulations, which show that our protocol can enhance the network goodput by a ratio that reaches up to 250 compared to other popular cognitive routing protocols with minimal added overhead. | The significant performance gains introduced by cooperative diversity have motivated some researchers to employ its techniques in the context of cognitive radio networks @cite_2 @cite_18 .
For instance, cooperative diversity is used in @cite_18 to transmit data with higher capacity in order to maximize throughput. In addition, a cooperative routing protocol that enhances the end-to-end throughput was proposed in @cite_28 . Unfortunately, none of these approaches (1) offers new communication opportunities, (2) mitigates the effect of having active primary users on the secondary network, or (3) addresses the inter-path interference problem, which increases as a result of cooperative transmission. | {
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_2"
],
"mid": [
"2155380127",
"2150565827",
"2159179663"
],
"abstract": [
"Cognitive radio (CR) technology enables the opportunistic use of the vacant licensed frequency bands, thereby improving the spectrum utilization. Therefore, considering end-to-end throughput in CR ad-hoc networks is an important research issue because the availability of local spectrum resources may change frequently with the time and locations. In this paper, we propose a cooperative routing protocol in CR ad-hoc networks. An on-demand routing protocol is used to find an end-to-end minimum cost path between a pair of source and destination. The simulation results show that our proposed cooperative routing protocol not only obtains higher end-to-end throughput, but also reduces the end-to-end delay and the amount of control messages compared to previous work.",
"Throughput maximization is one of the main challenges in cognitive radio ad hoc networks, where the availability of local spectrum resources may change from time to time and hop-by-hop. Cooperative transmission exploits spatial diversity without multiple antennas at each node to increase capacity with reliability guarantees. This idea is particularly attractive in wireless environments due to the diverse channel quality and the limited energy and bandwidth resources. With cooperation, source node and relay node cooperatively transmit data to the destination. In such a virtual multiple antenna transmission system, the capacity of the cooperative link is much larger than that of the direct link from source to destination. In this paper, we will study decentralized and localized algorithms for joint dynamic routing, relay assignment, and spectrum allocation under a distributed and dynamic environment.",
"Recent studies demonstrated that dynamic spectrum access can improve spectrum utilization significantly by allowing secondary unlicensed users to dynamically share the spectrum that is not used by the primary licensed users. Cognitive radio was proposed to promote the spectrum utilization by opportunistically exploiting the existence of spectrum \"holes.\" Meanwhile, cooperative relay technology is regarded widely as a key technology for increasing transmission diversity gain in various types of wireless networks, including cognitive radio networks. In this article, we first give a brief overview of the envisioned applications of cooperative relay technology to CRNs: cooperative transmission of primary traffic by secondary users, cooperative transmission between secondary nodes to improve spatial diversity, and cooperative relay between secondary nodes to improve spectrum diversity. As the latter is a new direction, in this article we focus on this scenario and investigate a simple wireless network, where a spectrum-rich node is selected as the relay node to improve the performance between the source and the destination. With the introduction of cooperative relay, many unique problems should be considered, especially the issue of relay selection and spectrum allocation. To demonstrate the feasibility and performance of cooperative relay for cognitive radio, a new MAC protocol was proposed and implemented in a universal software radio peripheral-based testbed. Experimental results show that the throughput of the whole system is greatly increased by exploiting the benefit of cooperative relay."
]
} |
1608.01632 | 2514449832 | Primary user activity is a major bottleneck for existing routing protocols in cognitive radio networks. Typical routing protocols avoid areas that are highly congested with primary users, leaving only a small fragment of available links for secondary route construction. In addition, wireless links are prone to channel impairments such as multipath fading, which renders the quality of the available links highly fluctuating. In this paper, we investigate using cooperative communication mechanisms to reveal new routing opportunities, enhance route qualities, and enable true coexistence of primary and secondary networks. As a result, we propose Undercover: a cooperative routing protocol that utilizes the available location information to assist in the routing process. Specifically, our protocol revisits a fundamental assumption taken by the state-of-the-art routing protocols designed for cognitive radio networks. Using Undercover, secondary users can transmit in the regions of primary user activity through utilizing cooperative communication techniques to null out transmission at primary receivers via beamforming. In addition, the secondary link qualities are enhanced using cooperative diversity. To account for the excessive levels of interference typically incurred due to cooperative transmissions, we allow our protocol to be interference-aware. Thus, cooperative transmissions are penalized in accordance with the amount of negatively affected secondary flows. We evaluate the performance of our proposed protocol via NS2 simulations, which show that our protocol can enhance the network goodput by a ratio that reaches up to 250 compared to other popular cognitive routing protocols with minimal added overhead.
| One of the intensively studied cooperative communications techniques is cooperative beamforming, which relies on sending precoded versions of the same data to reshape the signal beam, producing transmission nulls at certain spatial directions @cite_42 . A direct consequence of employing cooperative beamforming is to allow for spatial multiplexing of concurrent transmissions of multiple nodes @cite_14 . Fortunately, cooperative beamforming provides the means for hiding secondary user communication from primary users and avoiding interfering with primary user communication. This opportunity was considered by a small number of attempts such as @cite_41 , which utilizes beamforming by developing MAC layer protocols that maximize the received signal-to-interference-plus-noise ratio (SINR) among SUs under different power constraints and the QoS requirement of PUs. However, this protocol deploys beamforming with only a single relay node between the source and the destination. | {
"cite_N": [
"@cite_41",
"@cite_14",
"@cite_42"
],
"mid": [
"2013925279",
"",
"2060108923"
],
"abstract": [
"In this paper we investigate cooperative beamforming in a cognitive radio network (CRN) composed of a primary network (PN) and a secondary network (SN). In the SN, the transmitter has to communicate with the receiver with the help of hybrid relays, which use cooperative beamforming to retransmit data. The relays select amplify-and-forward (AF) or decode-and-forward (DF) according to the signal-to-interference-plus-noise ratio (SINR). The aim of this paper is to maximize the received SINR in the SN under different power constraints and the QoS requirement of the primary user (PU). Numerical results compare the performance of the SN between relays with beamforming and relays without beamforming, which indicates that the gap between them is related to the maximum interference power at the primary destination, while it is irrelevant to the maximum total transmitted power of the relays. Besides, there is a tradeoff between the performance of the SN and the number of relays.",
"",
"An overview of beamforming from a signal-processing perspective is provided, with an emphasis on recent research. Data-independent, statistically optimum, adaptive, and partially adaptive beamforming are discussed. Basic notation, terminology, and concepts are included. Several beamformer implementations are briefly described. >"
]
} |
1608.01632 | 2514449832 | Primary user activity is a major bottleneck for existing routing protocols in cognitive radio networks. Typical routing protocols avoid areas that are highly congested with primary users, leaving only a small fragment of available links for secondary route construction. In addition, wireless links are prone to channel impairments such as multipath fading, which renders the quality of the available links highly fluctuating. In this paper, we investigate using cooperative communication mechanisms to reveal new routing opportunities, enhance route qualities, and enable true coexistence of primary and secondary networks. As a result, we propose Undercover: a cooperative routing protocol that utilizes the available location information to assist in the routing process. Specifically, our protocol revisits a fundamental assumption taken by the state-of-the-art routing protocols designed for cognitive radio networks. Using Undercover, secondary users can transmit in the regions of primary user activity through utilizing cooperative communication techniques to null out transmission at primary receivers via beamforming. In addition, the secondary link qualities are enhanced using cooperative diversity. To account for the excessive levels of interference typically incurred due to cooperative transmissions, we allow our protocol to be interference-aware. Thus, cooperative transmissions are penalized in accordance with the amount of negatively affected secondary flows. We evaluate the performance of our proposed protocol via NS2 simulations, which show that our protocol can enhance the network goodput by a ratio that reaches up to 250 compared to other popular cognitive routing protocols with minimal added overhead. | Our previous work @cite_32 considered using beamforming in the routing layer.
However, we only proposed a route maintenance mechanism to avoid route re-establishment upon the detection of a PU, which limits the usefulness of beamforming. In this paper, we consider cooperative beamforming as a foundation for building successful cognitive radio routing protocols: we explore its true potential, mitigate the interference it introduces, and provide practical mechanisms for efficient cooperative group construction. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2064503103"
],
"abstract": [
"We consider a cognitive radio setting in which a relay-assisted secondary link employs cooperative beamforming to enhance its throughput and to provide protection to the primary receiver from interference. We assume the presence of infinite buffers at both the primary and secondary transmitters and characterize the maximum stable throughput region exactly using the dominant system approach. Numerical examples are provided to give insights into the impact of power control on the stability region."
]
} |
1608.01082 | 2951684280 | In this paper, we tackle the problem of RGB-D semantic segmentation of indoor images. We take advantage of deconvolutional networks, which can predict pixel-wise class labels, and develop a new structure for deconvolution of multiple modalities. We propose a novel feature transformation network to bridge the convolutional networks and deconvolutional networks. In the feature transformation network, we correlate the two modalities by discovering common features between them, as well as characterize each modality by discovering modality-specific features. With the common features, we not only closely correlate the two modalities, but also allow them to borrow features from each other to enhance the representation of shared information. With specific features, we capture the visual patterns that are only visible in one modality. The proposed network achieves competitive segmentation accuracy on the NYU Depth datasets V1 and V2. | Thanks to the low-cost RGB-D camera, we can obtain not only RGB but also depth information to tackle semantic segmentation of indoor images. @cite_16 use a graphical model to capture contextual relations of different features. This method is computationally expensive as it relies on the 3D+RGB point clouds. @cite_17 propose to first model appearance (RGB) and shape (depth) similarities using kernel descriptors, then capture the context using a superpixel Markov random field (MRF) and a segmentation tree. @cite_20 extend the multi-scale convolutional neural network @cite_21 to learn multi-modality features for semantic segmentation of indoor scenes. @cite_18 propose an unsupervised learning framework that can jointly learn visual patterns from RGB and depth information. @cite_28 introduce mutex constraints in a conditional random field (CRF) formulation to eliminate the configurations that violate common sense physics laws. | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_21",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2221101993",
"2022508996",
"",
"1485037422",
"2066813062"
],
"abstract": [
"",
"In this paper, we address the problem of semantic scene segmentation of RGB-D images of indoor scenes. We propose a novel image region labeling method which augments CRF formulation with hard mutual exclusion (mutex) constraints. This way our approach can make use of rich and accurate 3D geometric structure coming from Kinect in a principled manner. The final labeling result must satisfy all mutex constraints, which allows us to eliminate configurations that violate common sense physics laws like placing a floor above a night stand. Three classes of mutex constraints are proposed: global object co-occurrence constraint, relative height relationship constraint, and local support relationship constraint. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and also test generalization of our model trained on NYU-Depth V2 dataset directly on a recent SUN3D dataset without any new training. The experimental results show that we significantly outperform the state-of-the-art methods in scene labeling on both datasets.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"",
"This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5 . We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.",
"Scene labeling research has mostly focused on outdoor scenes, leaving the harder case of indoor scenes poorly understood. Microsoft Kinect dramatically changed the landscape, showing great potentials for RGB-D perception (color+depth). Our main objective is to empirically understand the promises and challenges of scene labeling with RGB-D. We use the NYU Depth Dataset as collected and analyzed by Silberman and Fergus [30]. For RGB-D features, we adapt the framework of kernel descriptors that converts local similarities (kernels) to patch descriptors. For contextual modeling, we combine two lines of approaches, one using a superpixel MRF, and the other using a segmentation tree. We find that (1) kernel descriptors are very effective in capturing appearance (RGB) and shape (D) similarities; (2) both superpixel MRF and segmentation tree are useful in modeling context; and (3) the key to labeling accuracy is the ability to efficiently train and test with large-scale data. We improve labeling accuracy on the NYU Dataset from 56.6 to 76.1 . We also apply our approach to image-only scene labeling and improve the accuracy on the Stanford Background Dataset from 79.4 to 82.9 ."
]
} |
1608.00895 | 2493744620 | In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns. | Torch @cite_0 uses the Lua programming language and consists of many flexible and modular components that were developed by the community. In contrast to Theano, Torch does not use symbolic expressions and all calculations are done explicitly. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1548328233"
],
"abstract": [
"Keywords: learning Reference EPFL-REPORT-82802 URL: http://publications.idiap.ch/downloads/reports/2002/rr02-46.pdf Record created on 2006-03-10, modified on 2017-05-10"
]
} |
1608.00525 | 2505639562 | Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region. | The two tasks of localizing an object given a referring expression and generating a referring expression given an object are closely related. Some image caption generation techniques @cite_28 @cite_17 first learn to ground sentence fragments to image regions and then use the learned association to generate sentences. Since the caption datasets (Flickr30k-original @cite_24 , MS-COCO @cite_25 ) do not contain the mapping from phrases to object bounding boxes, the visual grounding is learned in a weakly supervised manner. @cite_4 use multiple-instance learning to learn the probability of a region corresponding to different words. However, the associations are learned for individual words and not in context with other words. @cite_19 learn a common embedding space for image and sentence with an MIL objective such that a sentence fragment has a high similarity with a single image region. Instead of associating each word to its best region, they use an MRF to encourage neighboring words to associate to common regions. | {
"cite_N": [
"@cite_4",
"@cite_28",
"@cite_24",
"@cite_19",
"@cite_25",
"@cite_17"
],
"mid": [
"2949769367",
"2951912364",
"2185175083",
"2951805548",
"1861492603",
"2949844032"
],
"abstract": [
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1 . When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34 of the time.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research."
]
} |
1608.00525 | 2505639562 | Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region. | Attention based models implicitly learn to select or weigh different regions in an image based on the words generated in a caption. @cite_2 propose two types of attention models for caption generation. In their stochastic hard attention model, the attention locations vary for each word and in the deterministic soft attention model, a soft weight is learned for different regions. Neither of these models are well suited for localizing a single region for a referring expression. @cite_6 learn to ground phrases in sentences using a two stage model. In the first stage, an attention model selects an image region and in the second stage, the selected region is trained to predict the original phrase. They evaluate their technique on the Flickr 30k Entities dataset @cite_17 which contains mappings for noun phrases in a sentence to bounding boxes in the corresponding image. The descriptions in this dataset do not always mention a salient object in the image. Many times the descriptions mention groups of objects and the scene at a higher level and hence it becomes challenging to learn object relationships. | {
"cite_N": [
"@cite_17",
"@cite_6",
"@cite_2"
],
"mid": [
"2949844032",
"2247513039",
"2950178297"
],
"abstract": [
"The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.",
"Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
} |
1608.00525 | 2505639562 | Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region. | @cite_29 learn visual grounding for nouns in descriptions of indoor scenes in a supervised manner. They use an MRF which jointly models scene classification, object detection and grounding to 3D cuboids. @cite_7 propose an end-to-end neural network that can localize regions in an image and generate descriptions for those regions. Their model is trained with full supervision with region descriptions present in the Visual Genome dataset @cite_3 . | {
"cite_N": [
"@cite_29",
"@cite_3",
"@cite_7"
],
"mid": [
"2048343491",
"2277195237",
"2963758027"
],
"abstract": [
"In this paper we exploit natural sentential descriptions of RGB-D scenes in order to improve 3D semantic parsing. Importantly, in doing so, we reason about which particular object each noun pronoun is referring to in the image. This allows us to utilize visual information in order to disambiguate the so-called coreference resolution problem that arises in text. Towards this goal, we propose a structure prediction model that exploits potentials computed from text and RGB-D imagery to reason about the class of the 3D objects, the scene type, as well as to align the nouns pronouns with the referred visual objects. We demonstrate the effectiveness of our approach on the challenging NYU-RGBD v2 dataset, which we enrich with natural lingual descriptions. We show that our approach significantly improves 3D detection and scene classification accuracy, and is able to reliably estimate the text-to-image alignment. Furthermore, by using textual and visual information, we are also able to successfully deal with coreference in text, improving upon the state-of-the-art Stanford coreference system [15].",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.",
"We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings."
]
} |
1608.00525 | 2505639562 | Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region. | Most of the works on referring expressions learn to ground a single region by modeling object properties and image level context. Rule based approaches to generating referring expressions @cite_18 @cite_20 are restricted in the types of properties that can be modeled. @cite_15 designed an energy optimization model for generating referring expressions in the form of object attributes. @cite_5 propose an approach with three LSTMs which take in different feature inputs such as region features, image features and word embedding. @cite_16 propose an LSTM based technique that can perform both tasks of referring expression generation and referring expression comprehension. They use a max-margin based training method for the LSTM wherein the probability of a referring expression is high only for the referred region and low for every other region. This type of training significantly improves performance. We extend their max-margin approach to multiple-instance learning based training objectives for the LSTM. Unlike previous work, we model context between objects for comprehending referring expressions. | {
"cite_N": [
"@cite_18",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"2110022948",
"2963735856",
"2251512949",
"2144960104",
""
],
"abstract": [
"This paper offers a solution to a small problem within a much larger problem. We focus on modelling how people use size in reference, words like \"big\" and \"tall\", which is one piece within the much larger problem of how people refer to visible objects. Examining size in isolation allows us to begin untangling a few of the complex and interacting features that affect reference, and we isolate a set of features that may be used in a hand-coded algorithm or a machine learning approach to generate one of six basic size types. The hand-coded algorithm generates a modifier type with a high correspondence to those observed in human data, and achieves 81.3 accuracy in an entirely new domain. This trails oracle accuracy for this task by just 8 . Features used by the hand-coded algorithm are added to a larger set of features in the machine learning approach, and we do not find a statistically significant difference between the precision and recall of the two systems. The input and output of these systems are a novel characterization of the factors that affect referring expression generation, and the methods described here may serve as one building block in future work connecting vision to language.",
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.",
"We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox.",
""
]
} |
1608.00507 | 2951260882 | We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. | There is a rich literature about modeling the top-down influences on selective attention in the human visual system (see @cite_32 for a review). It is hypothesized that top-down factors like knowledge, expectations and behavioral goals can affect the feature and location expectancy in visual processing @cite_35 @cite_11 @cite_34 @cite_9 , and bias the competition among the neurons @cite_16 @cite_3 @cite_9 @cite_18 @cite_21 . Our attention model is related to the Selective Tuning model of @cite_3 , which proposes a biologically inspired attention model using a top-down WTA inference process. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_16",
"@cite_3",
"@cite_34",
"@cite_11"
],
"mid": [
"2093353037",
"2031650972",
"2118615399",
"2145419885",
"2165728411",
"",
"2089597841",
"",
"2149095485"
],
"abstract": [
"An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.",
"According to conventional neurobiological accounts of visual attention, attention serves to enhance extrastriate neuronal responses to a stimulus at one spatial location in the visual field. However, recent results from recordings in extrastriate cortex of monkeys suggest that any enhancing effect of attention is best understood in the context of competitive interactions among neurons representing all of the stimuli present in the visual field. These interactions can be biased in favour of behaviourally relevant stimuli as a result of many different processes, both spatial and non–spatial, and both bottom–up and top–down. The resolution of this competition results in the suppression of the neuronal representations of behaviourally irrelevant stimuli in extrastriate cortex. A main source of top–down influence may derive from neuronal systems underlying working memory.",
"The two basic phenomena that define the problem of visual attention can be illustrated in a simple example. Consider the arrays shown in each panel of Figure 1. In a typical experiment, before the arrays were presented, subjects would be asked to report letters appearing in one color (targets, here black letters), and to disregard letters in the other color (nontargets, here white letters). The array would then be briefly flashed, and the subjects, without any opportunity for eye movements, would give their report. The display mimics our usual cluttered visual environment: It contains one or more objects that are relevant to current behavior, along with others that are irrelevant. The first basic phenomenon is limited capacity for processing information. At any given time, only a small amount of the information available on the retina can be processed and used in the control of behavior. Subjectively, giving attention to any one target leaves less available for others. In Figure 1, the probability of reporting the target letter N is much lower with two accompanying targets (Figure 1a) than with none (Figure 1b). The second basic phenomenon is selectivity: the ability to filter out unwanted information. Subjectively, one is aware of attended stimuli and largely unaware of unattended ones. Correspondingly, accuracy in identifying an attended stimulus may be independent of the number of nontargets in a display (Figure 1a vs 1c) (see Bundesen 1990, Duncan 1980).",
"The biased competition theory of selective attention has been an influential neural theory of attention, motivating numerous animal and human studies of visual attention and visual representation. There is now neural evidence in favor of all three of its most basic principles: that representation in the visual system is competitive; that both top-down and bottom-up biasing mechanisms influence the ongoing competition; and that competition is integrated across brain systems. We review the evidence in favor of these three principles, and in particular, findings related to six more specific neural predictions derived from these original principles.",
"Attention exhibits characteristic neural signatures in brain regions that process sensory signals. An important area of future research is to understand the nature of top-down signals that facilitate attentional guidance towards behaviorally relevant locations and features. In this review, we discuss recent studies that have made progress towards understanding: (i) the brain structures and circuits involved in attentional allocation; (ii) top-down attention pathways, particularly as elucidated by microstimulation and lesion studies; (iii) top-down modulatory influences involving subcortical structures and reward systems; (iv) plausible substrates and embodiments of top-down signals; and (v) information processing and theoretical constraints that might be helpful in guiding future experiments. Understanding top-down attention is crucial for elucidating the mechanisms by which we can filter sensory information to pay attention to the most behaviorally relevant events.",
"",
"A model for aspects of visual attention based on the concept of selective tuning is presented. It provides for a solution to the problems of selection in an image, information routing through the visual processing hierarchy and task-specific attentional bias. The central thesis is that attention acts to optimize the search procedure inherent in a solution to vision. It does so by selectively tuning the visual processing network which is accomplished by a top-down hierarchy of winner-take-all processes embedded within the visual processing pyramid. Comparisons to other major computational models of attention and to the relevant neurobiology are included in detail throughout the paper. The model has been implemented; several examples of its performance are shown. This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system.",
"",
"A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not."
]
} |
1608.00507 | 2951260882 | We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. | Various methods have been proposed for grounding a CNN classifier's prediction @cite_19 @cite_13 @cite_37 @cite_4 @cite_1 @cite_20. In @cite_19 @cite_13 @cite_6, error backpropagation-based methods are used for visualizing relevant regions for a predicted class or the activation of a hidden neuron. Recently, a layer-wise relevance backpropagation method is proposed by @cite_20 to provide a pixel-level explanation of CNNs' classification decisions. Cao et al. @cite_37 propose a feedback CNN architecture for capturing the top-down attention mechanism that can successfully identify task-relevant regions. In @cite_1, it is shown that replacing fully-connected layers with an average pooling layer can help generate coarse class activation maps that highlight task-relevant regions. Unlike these previous methods, our top-down attention model is based on the WTA principle, and has an interpretable probabilistic formulation. 
Our method is also conceptually simpler than @cite_37 @cite_1 as we do not require modifying a network's architecture or additional training. The ultimate goal of our method goes beyond visualization and explanation of a classifier's decision @cite_13 @cite_6 @cite_20, as we aim to maneuver CNNs' top-down attention to generate highly discriminative attention maps for the benefits of localization. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2221625691",
"2295107390",
"1899185266",
"2123045220",
"",
"2952186574",
"1787224781"
],
"abstract": [
"While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to note that the human visual cortex generally contains more feedback than feedforward connections. In this paper, we will briefly introduce the background of feedbacks in the human visual cortex, which motivates us to develop a computational feedback mechanism in deep neural networks. In addition to the feedforward inference in traditional neural networks, a feedback loop is introduced to infer the activation status of hidden layer neurons according to the \"goal\" of the network, e.g., high-level semantic labels. We analogize this mechanism as \"Look and Think Twice.\" The feedback networks help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered background and multiple objects. Experiments on ImageNet dataset demonstrate its effectiveness in solving tasks such as image classification and object localization.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.",
"",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package."
]
} |
1608.00507 | 2951260882 | We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. | Training CNN models for weakly supervised localization has been studied by @cite_22 @cite_27 @cite_38 @cite_33 @cite_29. In @cite_22 @cite_29 @cite_33, a CNN model is transformed into a fully convolutional net to perform efficient sliding window inference, and then Multiple Instance Learning (MIL) is integrated in the training process through various pooling methods over the confidence score map. Due to the large receptive field and stride of the output layer, the resultant score maps only provide very coarse location information. To overcome this issue, a variety of strategies, such as image re-scaling and shifting, have been proposed to increase the granularity of the score maps @cite_22 @cite_33 @cite_36. Image and object priors are also leveraged to improve the object localization accuracy in @cite_27 @cite_38 @cite_33. Compared with weakly supervised localization, the problem setting of our task is essentially different. 
We assume a pre-trained deep CNN model is given, which may not use any dedicated training process or model architecture for the purpose of localization. Our focus, instead, is to model the top-down attention mechanism of CNN models to produce interpretable and useful task-relevant attention maps. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_33",
"@cite_36",
"@cite_29",
"@cite_27"
],
"mid": [
"1529410181",
"1994488211",
"1945608308",
"2951277909",
"2949769367",
"2952004933"
],
"abstract": [
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.",
"Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm."
]
} |
1608.00887 | 2483265793 | Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, multi-frame network, and a LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers. | In order to perceive liquids at the pixel level, we make use of fully-convolutional neural networks (FCN). FCNs have been successfully applied to the task of image segmentation in the past @cite_8 @cite_5 @cite_1 and are a natural fit for pixel-wise classification. In addition to FCNs, we utilize long short-term memory (LSTM) @cite_13 recurrent cells to reason about the temporal evolution of liquids. LSTMs are preferable over more standard recurrent networks for long-term memory as they overcome many of the numerical issues during training such as exploding gradients @cite_6. LSTM-based CNNs have been successfully applied to many temporal memory tasks by previous work @cite_11 @cite_1, and in fact LSTMs have even been combined with FCNs by replacing the standard fully-connected layers of their LSTMs with @math convolution layers @cite_1. We use a similar method in this paper. | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"1903029394",
"",
"1689711448",
"1884191083",
"",
"2118688707"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"",
"Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( @math years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.",
"Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.",
"",
"Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs."
]
} |
1608.00869 | 2510413766 | Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning. | A natural way to evaluate representation quality is by judging the similarity of representations assigned to similar words. The most popular evaluation sets at present consist of word pairs with similarity ratings produced by human annotators. In some existing evaluation sets, pairs are scored for relatedness, which has some overlap with similarity. SimVerb-3500 focuses on similarity as this is a more focused semantic relation that seems to yield a higher agreement between human annotators. For a broader discussion see @cite_28. Nevertheless, we find that all available datasets of this kind are insufficient for judging verb similarity due to their small size or narrow coverage of verbs. | {
"cite_N": [
"@cite_28"
],
"mid": [
"1854884267"
],
"abstract": [
"We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness so that pairs of entities that are associated but not actually similar (Freud, psychology) have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun, and verb pairs, together with an independent rating of concreteness and free association strength for each pair. This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures."
]
} |
1608.00869 | 2510413766 | Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning. | Two datasets that do focus on verb pairs to some extent are the data set of and SimLex-999 @cite_28. These datasets, however, still contain a limited number of verb pairs (134 and 222, respectively), making them unrepresentative of the rich variety of verb semantic phenomena. | {
"cite_N": [
"@cite_28"
],
"mid": [
"1854884267"
],
"abstract": [
"We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness so that pairs of entities that are associated but not actually similar Freud, psychology have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun, and verb pairs, together with an independent rating of concreteness and free association strength for each pair. This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures."
]
} |
1608.00708 | 2483907835 | Money laundering is a major global problem, enabling criminal organisations to hide their ill-gotten gains and to finance further operations. Prevention of money laundering is seen as a high priority by many governments, however detection of money laundering without prior knowledge of predicate crimes remains a significant challenge. Previous detection systems have tended to focus on individuals, considering transaction histories and applying anomaly detection to identify suspicious behaviour. However, money laundering involves groups of collaborating individuals, and evidence of money laundering may only be apparent when the collective behaviour of these groups is considered. In this paper we describe a detection system that is capable of analysing group behaviour, using a combination of network analysis and supervised learning. This system is designed for real-world application and operates on networks consisting of millions of interacting parties. Evaluation of the system using real-world data indicates that suspicious activity is successfully detected. Importantly, the system exhibits a low rate of false positives, and is therefore suitable for use in a live intelligence environment. | One of the earliest systems for detection of money laundering is that described by Senator @cite_11, which applied rule-based evaluation to identify suspicious parties. The rules used by this system were derived from expert knowledge and encoded in an evaluation module that was run each time the target database was updated. Parties matching these rules would then be further investigated by analysts using an interactive query interface and a variety of visualisation tools provided by the system. More recently, describe an alternative rules-based system, where rules are encoded using a decision tree @cite_9. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"2097882134",
"1597650360"
],
"abstract": [
"Money laundering (ML) involves moving illicit funds, which may be linked to drug trafficking or organized crime, through a series of transactions or accounts to disguise origin or ownership. China is facing severe challenge on money laundering with an estimated 200 billion RMB laundered annually. Decision tree method is used in this paper to create the determination rules of the money laundering risk by customer profiles of a commercial bank in China. A sample of twenty-eight customers with four attributes is used to induced and validate a decision tree method. The result indicates the effectiveness of decision tree in generating AML rules from companies' customer profiles. The anti-money laundering system in small and middle commerical bank in China is highly needed.",
"The Financial Crimes Enforcement Network (FIN-CEN) AI system (FAIS) links and evaluates reports of large cash transactions to identify potential money laundering. The objective of FAIS is to discover previously unknown, potentially high-value leads for possible investigation. FAIS integrates intelligent human and software agents in a cooperative discovery task on a very large data space. It is a complex system incorporating several aspects of AI technology, including rule-based reasoning and a blackboard. FAIS consists of an underlying database (that functions as a black-board), a graphic user interface, and several preprocessing and analysis modules. FAIS has been in operation at FINCEN since March 1993; a dedicated group of analysts process approximately 200,000 transactions a week, during which time over 400 investigative support reports corresponding to over $1 billion in potential laundered funds were developed. FAIS's unique analytic power arises primarily from a change in view of the underlying data from a transaction-oriented perspective to a subject-oriented (that is, person or organization) perspective."
]
} |
1608.00708 | 2483907835 | Money laundering is a major global problem, enabling criminal organisations to hide their ill-gotten gains and to finance further operations. Prevention of money laundering is seen as a high priority by many governments, however detection of money laundering without prior knowledge of predicate crimes remains a significant challenge. Previous detection systems have tended to focus on individuals, considering transaction histories and applying anomaly detection to identify suspicious behaviour. However, money laundering involves groups of collaborating individuals, and evidence of money laundering may only be apparent when the collective behaviour of these groups is considered. In this paper we describe a detection system that is capable of analysing group behaviour, using a combination of network analysis and supervised learning. This system is designed for real-world application and operates on networks consisting of millions of interacting parties. Evaluation of the system using real-world data indicates that suspicious activity is successfully detected. Importantly, the system exhibits a low rate of false positives, and is therefore suitable for use in a live intelligence environment. | Taking the structural considerations even further, the systems described in @cite_5 @cite_16 aim to identify subgraphs within a network that closely match known typologies. In these systems, the use of fuzzy matching means that subgraphs may deviate in some way from the given typology, providing greater flexibility than a simple motif search. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"2114755068",
"1482947703"
],
"abstract": [
"In this paper, we present a clique-based method for mining fuzzy graph patterns of money laundering and financing terrorism. The method will contribute to a new generation of intelligent anti-money laundering systems that incorporate comprehensive information from various information sources as well as from human subject matter experts. A fuzzy degree of confidence can therefore be associated to each relation between any two actors in a graph of transactions. We reduce the problem of fuzzy subgraph isomorphism to the problem of finding a fuzzy set of maximal cliques in a fuzzy extension of a mathematical construct that is similar to Vizing's modular product of graphs.",
"Suspicious transaction detection is used to report banking transactions that may be connected with criminal activities. Obviously, perpetrators of criminal acts strive to make the transactions as innocent-looking as possible. Because activities such as money laundering may involve complex organizational schemes, machine learning techniques based on individual transactions analysis may perform poorly when applied to suspicious transaction detection. In this paper, we propose a new machine learning method for mining transaction graphs. The method proposed in this paper builds a model of subgraphs that may contain suspicious transactions. The model used in our method is parametrized using fuzzy numbers which represent parameters of transactions and of the transaction subgraphs to be detected. Because money laundering may involve transferring money through a variable number of accounts the model representing transaction subgraphs is also parametrized with respect to some structural features. In contrast to some other graph mining methods in which graph isomorphisms are used to match data to the model, in our method we perform a fuzzy matching of graph structures."
]
} |
1608.00612 | 2949174300 | Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF based structured learning models with Recurrent Neural Networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities. | As mentioned in the previous sections, both Neural Networks and Conditional Random Fields have been widely used for sequence labeling tasks in NLP. Notably, CRFs @cite_0 have a long history of use for sequence labeling tasks in general and named entity recognition in particular. Some early notable works include those of McCallum et al. and Sha et al. Later, Hammerton et al. and Chiu et al. used Long Short-Term Memory (LSTM) @cite_6 for named entity recognition. | {
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"2147880316",
"2042188227"
],
"abstract": [
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"In this approach to named entity recognition, a recurrent neural network, known as Long Short-Term Memory, is applied. The network is trained to perform 2 passes on each sentence, outputting its decisions on the second pass. The first pass is used to acquire information for disambiguation during the second pass. SARDNET, a self-organising map for sequences is used to generate representations for the lexical items presented to the LSTM network, whilst orthogonal representations are used to represent the part of speech and chunk tags."
]
} |
1608.00247 | 2951769532 | We propose a minimal solution for the similarity registration (rigid pose and scale) between two sets of 3D lines, and also between a set of co-planar points and a set of 3D lines. The first problem is solved up to 8 discrete solutions with a minimum of 2 line-line correspondences, while the second is solved up to 4 discrete solutions using 4 point-line correspondences. We use these algorithms to perform the extrinsic calibration between a pose tracking sensor and a 2D 3D ultrasound (US) curvilinear probe using a tracked needle as calibration target. The needle is tracked as a 3D line, and is scanned by the ultrasound as either a 3D line (3D US) or as a 2D point (2D US). Since the scale factor that converts US scan units to metric coordinates is unknown, the calibration is formulated as a similarity registration problem. We present results with both synthetic and real data and show that the minimum solutions outperform the correspondent non-minimal linear formulations. | Estimating the similarity transformation (rigid pose and scale) between two coordinate frames gained recent attention due to its application in the registration of different Structure-from-Motion (SfM) sequences. If the same scene is recovered in two different monocular SfM runs, the scale of each reconstruction can be arbitrarily different. Therefore, to produce extended and more detailed 3D maps from independent SfM runs, both the rigid pose and the scale must be recovered. If correspondences between SfM sequences are not available, one can use an extension of the ICP algorithm @cite_23 to handle unknown scale @cite_3. If 2D-3D point correspondences are available, this is called the generalised pose and scale problem @cite_24 @cite_0, and is solved by extending the @math formulation @cite_35 @cite_5 @cite_25 to handle the alignment of image rays from multiple viewpoints.
A closely related contribution estimates a similarity transformation from pairwise point correspondences between two generalised cameras @cite_19 . | {
"cite_N": [
"@cite_35",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_25"
],
"mid": [
"2120333476",
"1971517213",
"1981526430",
"81726107",
"1936156518",
"",
"2134237713",
""
],
"abstract": [
"The major direct solutions to the three-point perspective pose estimation problems are reviewed from a unified perspective. The numerical stability of these three-point perspective solutions are discussed. It is shown that even in cases where the solution is not near the geometric unstable region considerable care must be exercised in the calculation. Depending on the order of the substitutions utilized, the relative error can change over a thousand to one. This difference is due entirely to the way the calculations are performed and not to any geometric structural instability of any problem instance. An analytical method is presented which produces a numerically stable calculation. >",
"Point set registration is important for calibration of multiple cameras, 3D reconstruction and recognition, etc. The iterative closest point (ICP) algorithm is accurate and fast for point set registration in a same scale, but it does not handle the case with different scales. This paper instead introduces a novel approach named the scaling iterative closest point (SICP) algorithm which integrates a scale matrix with boundaries into the original ICP algorithm for scaling registration. At each iterative step of this algorithm, we set up correspondence between two m-D point sets, and then use a simple and fast iterative algorithm with the singular value decomposition (SVD) method and the properties of parabola incorporated to compute scale, rotation and translation transformations. The SICP algorithm has been proved to converge monotonically to a local minimum from any given parameters. Hence, to reach desired global minimum, good initial parameters are required which are successfully estimated in this paper by analyzing covariance matrices of point sets. The SICP algorithm is independent of shape representation and feature extraction, and thereby it is general for scaling registration of m-D point sets. Experimental results demonstrate its efficiency and accuracy compared with the standard ICP algorithm.",
"We propose a novel solution to the generalized camera pose problem which includes the internal scale of the generalized camera as an unknown parameter. This further generalization of the well-known absolute camera pose problem has applications in multi-frame loop closure. While a well-calibrated camera rig has a fixed and known scale, camera trajectories produced by monocular motion estimation necessarily lack a scale estimate. Thus, when performing loop closure in monocular visual odometry, or registering separate structure-from-motion reconstructions, we must estimate a seven degree-of-freedom similarity transform from corresponding observations. Existing approaches solve this problem, in specialized configurations, by aligning 3D triangulated points or individual camera pose estimates. Our approach handles general configurations of rays and points and directly estimates the full similarity transformation from the 2D-3D correspondences. Four correspondences are needed in the minimal case, which has eight possible solutions. The minimal solver can be used in a hypothesize-and-test architecture for robust transformation estimation. Our solver also produces a least-squares estimate in the overdetermined case. The approach is evaluated experimentally on synthetic and real datasets, and is shown to produce higher accuracy solutions to multi-frame loop closure than existing approaches.",
"In this work, we present a scalable least-squares solution for computing a seven degree-of-freedom similarity transform. Our method utilizes the generalized camera model to compute relative rotation, translation, and scale from four or more 2D-3D correspondences. In particular, structure and motion estimations from monocular cameras lack scale without specific calibration. As such, our methods have applications in loop closure in visual odometry and registering multiple structure from motion reconstructions where scale must be recovered. We formulate the generalized pose and scale problem as a minimization of a least squares cost function and solve this minimization without iterations or initialization. Additionally, we obtain all minima of the cost function. The order of the polynomial system that we solve is independent of the number of points, allowing our overall approach to scale favorably. We evaluate our method experimentally on synthetic and real datasets and demonstrate that our methods produce higher accuracy similarity transform solutions than existing methods.",
"We propose a novel solution for computing the relative pose between two generalized cameras that includes reconciling the internal scale of the generalized cameras. This approach can be used to compute a similarity transformation between two coordinate systems, making it useful for loop closure in visual odometry and registering multiple structure from motion reconstructions together. In contrast to alternative similarity transformation methods, our approach uses 2D-2D image correspondences thus is not subject to the depth uncertainty that often arises with 3D points. We utilize a known vertical direction (which may be easily obtained from IMU data or vertical vanishing point detection) of the generalized cameras to solve the generalized relative pose and scale problem as an efficient Quadratic Eigenvalue Problem. To our knowledge, this is the first method for computing similarity transformations that does not require any 3D information. Our experiments on synthetic and real data demonstrate that this leads to improved performance compared to methods that use 3D-3D or 2D-3D correspondences, especially as the depth of the scene increases.",
"",
"The determination of camera position and orientation from known correspondences of 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is well-known that from three corresponding points there are at most four algebraic solutions. Less appears to be known about the cases of four and five corresponding points. We propose a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points. We first review the 3-point algebraic method. Then we present our two-step, 4-point and one-step, 5-point linear algorithms. The 5-point method can also be extended to handle more than five points. Finally, we demonstrate our methods on both simulated and real images. We show that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice.",
""
]
} |
1608.00247 | 2951769532 | We propose a minimal solution for the similarity registration (rigid pose and scale) between two sets of 3D lines, and also between a set of co-planar points and a set of 3D lines. The first problem is solved up to 8 discrete solutions with a minimum of 2 line-line correspondences, while the second is solved up to 4 discrete solutions using 4 point-line correspondences. We use these algorithms to perform the extrinsic calibration between a pose tracking sensor and a 2D 3D ultrasound (US) curvilinear probe using a tracked needle as calibration target. The needle is tracked as a 3D line, and is scanned by the ultrasound as either a 3D line (3D US) or as a 2D point (2D US). Since the scale factor that converts US scan units to metric coordinates is unknown, the calibration is formulated as a similarity registration problem. We present results with both synthetic and real data and show that the minimum solutions outperform the correspondent non-minimal linear formulations. | The 2D US calibration problem is the similarity registration between a set of 3D lines and a set of co-planar points. This is a particular case of the pose and scale problem @cite_24 when the 3D points are co-planar, and therefore this method could be adapted to solve this problem. However, the co-planarity of points introduces further simplifications, and as we will show in this paper, this problem can be minimally solved with a much more compact set of equations. | {
"cite_N": [
"@cite_24"
],
"mid": [
"1981526430"
],
"abstract": [
"We propose a novel solution to the generalized camera pose problem which includes the internal scale of the generalized camera as an unknown parameter. This further generalization of the well-known absolute camera pose problem has applications in multi-frame loop closure. While a well-calibrated camera rig has a fixed and known scale, camera trajectories produced by monocular motion estimation necessarily lack a scale estimate. Thus, when performing loop closure in monocular visual odometry, or registering separate structure-from-motion reconstructions, we must estimate a seven degree-of-freedom similarity transform from corresponding observations. Existing approaches solve this problem, in specialized configurations, by aligning 3D triangulated points or individual camera pose estimates. Our approach handles general configurations of rays and points and directly estimates the full similarity transformation from the 2D-3D correspondences. Four correspondences are needed in the minimal case, which has eight possible solutions. The minimal solver can be used in a hypothesize-and-test architecture for robust transformation estimation. Our solver also produces a least-squares estimate in the overdetermined case. The approach is evaluated experimentally on synthetic and real datasets, and is shown to produce higher accuracy solutions to multi-frame loop closure than existing approaches."
]
} |
1608.00247 | 2951769532 | We propose a minimal solution for the similarity registration (rigid pose and scale) between two sets of 3D lines, and also between a set of co-planar points and a set of 3D lines. The first problem is solved up to 8 discrete solutions with a minimum of 2 line-line correspondences, while the second is solved up to 4 discrete solutions using 4 point-line correspondences. We use these algorithms to perform the extrinsic calibration between a pose tracking sensor and a 2D 3D ultrasound (US) curvilinear probe using a tracked needle as calibration target. The needle is tracked as a 3D line, and is scanned by the ultrasound as either a 3D line (3D US) or as a 2D point (2D US). Since the scale factor that converts US scan units to metric coordinates is unknown, the calibration is formulated as a similarity registration problem. We present results with both synthetic and real data and show that the minimum solutions outperform the correspondent non-minimal linear formulations. | Minimal solutions are a well-established topic in the computer vision literature @cite_31 @cite_14 @cite_12 @cite_22 @cite_7. In most cases they require solving a system of polynomial equations, which can be achieved using Grobner basis methods @cite_14 @cite_9 @cite_7. Although these methods provide a general framework for building numeric polynomial solvers, they involve a certain amount of symbolic manipulation that often requires case-by-case analysis. To address this issue, an automatic generator of polynomial solvers has been proposed @cite_22. In this paper we develop minimum solutions using the action matrix method as presented in @cite_9. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_31",
"@cite_12"
],
"mid": [
"1740452793",
"1608655332",
"2171271990",
"2058946587",
"",
""
],
"abstract": [
"A method is presented for building solvers for classes of multivariate polynomial equations. The method is based on solving an analogous template problem over a finite field, and then using the elimination order established for this problem for the original class of problems. A strength of this method is that this permits pivoting in the elimination. Solvers for several minimal problems in computer vision are presented. Relative pose is solved both for a generalised camera, and for a camera with unknown focal length, both in two positions with six visible points. A solver for optimal triangulation in three images is presented. Model-free calibration for pinhole cameras is investigated. It is shown that for a smooth deformation of the image plane, the image plane can be projectively reconstructed from two flow-fields from purely translating cameras. Methods for hand-eye calibration using the multilinear constraints and vehicle-eye for laser-scanner based navigation systems are presented. (Less)",
"Finding solutions to minimal problems for estimating epipolar geometry and camera motion leads to solving systems of algebraic equations. Often, these systems are not trivial and therefore special algorithms have to be designed to achieve numerical robustness and computational efficiency. The state of the art approach for constructing such algorithms is the Grobner basis method for solving systems of polynomial equations. Previously, the Grobner basis solvers were designed ad hoc for concrete problems and they could not be easily applied to new problems. In this paper we propose an automatic procedure for generating Grobner basis solvers which could be used even by non-experts to solve technical problems. The input to our solver generator is a system of polynomial equations with a finite number of solutions. The output of our solver generator is the Matlab or C code which computes solutions to this system for concrete coefficients. Generating solvers automatically opens possibilities to solve more complicated problems which could not be handled manually or solving existing problems in a better and more efficient way. We demonstrate that our automatic generator constructs efficient and numerically stable solvers which are comparable or outperform known manually constructed solvers. The automatic generator is available at http: cmp.felk.cvut.cz minimal",
"We present a method for solving systems of polynomial equations appearing in computer vision. This method is based on polynomial eigenvalue solvers and is more straightforward and easier to implement than the state-of-the-art Grobner basis method since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving these problems are available. We provide a characterization of problems that can be efficiently solved as polynomial eigenvalue problems (PEPs) and present a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. We propose techniques that can be used to reduce the size of the computed polynomial eigenvalue problems. To show the applicability of the proposed polynomial eigenvalue method, we present the polynomial eigenvalue solutions to several important minimal relative pose problems.",
"This paper presents several new results on techniques for solving systems of polynomial equations in computer vision. Grobner basis techniques for equation solving have been applied successfully to several geometric computer vision problems. However, in many cases these methods are plagued by numerical problems. In this paper we derive a generalization of the Grobner basis method for polynomial equation solving, which improves overall numerical stability. We show how the action matrix can be computed in the general setting of an arbitrary linear basis for ?[x] I. In particular, two improvements on the stability of the computations are made by studying how the linear basis for ?[x] I should be selected. The first of these strategies utilizes QR factorization with column pivoting and the second is based on singular value decomposition (SVD). Moreover, it is shown how to improve stability further by an adaptive scheme for truncation of the Grobner basis. These new techniques are studied on some of the latest reported uses of Grobner basis methods in computer vision and we demonstrate dramatically improved numerical stability making it possible to solve a larger class of problems than previously possible.",
"",
""
]
} |
1608.00182 | 2500786414 | Despite the great success of convolutional neural networks (CNN) for the image classification task on datasets like Cifar and ImageNet, CNN's representation power is still somewhat limited in dealing with object images that have large variation in size and clutter, where Fisher Vector (FV) has shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian Mixture Model (GMM). FV however has limited learning capability and its parameters are mostly fixed after constructing the codebook. To combine together the best of the two worlds, we propose in this paper a neural network structure with FV layer being part of an end-to-end trainable system that is differentiable; we name our network FisherNet that is learnable using backpropagation. Our proposed FisherNet combines convolutional neural network training and Fisher Vector encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over plain CNN and standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL VOC object classification task. | Bag of Visual Words (BoVW) based image representation is one of the most popular methods in the computer vision community, especially for image classification @cite_7 @cite_17 @cite_19 @cite_4. BoVW has been widely applied for its robustness, especially to object deformation, translation, and occlusion. Fisher Vector (FV) @cite_4 is one of the most powerful BoVW based representation methods, achieving state-of-the-art performance on many image classification benchmarks. The traditional FV uses hand-crafted descriptors like SIFT @cite_18 as patch features and learns FV parameters with a Gaussian Mixture Model (GMM), so neither the patch features nor the FV parameters are trainable. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_19",
"@cite_17"
],
"mid": [
"2151103935",
"1966385142",
"2097018403",
"2115628259",
""
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"A standard approach to describe an image for classification and retrieval purposes is to extract a set of local patch descriptors, encode them into a high dimensional vector and pool them into an image-level signature. The most common patch encoding strategy consists in quantizing the local descriptors into a finite set of prototypical elements. This leads to the popular Bag-of-Visual words representation. In this work, we propose to use the Fisher Kernel framework as an alternative patch encoding strategy: we describe patches by their deviation from an \"universal\" generative Gaussian mixture model. This representation, which we call Fisher vector has many advantages: it is efficient to compute, it leads to excellent results even with efficient linear classifiers, and it can be compressed with a minimal loss of accuracy using product quantization. We report experimental results on five standard datasets--PASCAL VOC 2007, Caltech 256, SUN 397, ILSVRC 2010 and ImageNet10K--with up to 9M images and 10K classes, showing that the FV framework is a state-of-the-art patch encoding technique.",
"Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n^2)~O(n^3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scale up the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.",
"Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.",
""
]
} |
1608.00182 | 2500786414 | Despite the great success of convolutional neural networks (CNN) for the image classification task on datasets like Cifar and ImageNet, CNN's representation power is still somewhat limited in dealing with object images that have large variation in size and clutter, where Fisher Vector (FV) has shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian Mixture Model (GMM). FV however has limited learning capability and its parameters are mostly fixed after constructing the codebook. To combine together the best of the two worlds, we propose in this paper a neural network structure with FV layer being part of an end-to-end trainable system that is differentiable; we name our network FisherNet that is learnable using backpropagation. Our proposed FisherNet combines convolutional neural network training and Fisher Vector encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over plain CNN and standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL VOC object classification task. | Recently, the "NetVLAD" framework presented by Arandjelović et al. @cite_22 develops a VLAD layer for deep networks. They choose outputs from the last convolutional layer as input descriptors, followed by a VLAD layer, which also learns all parameters of patch features and VLAD end-to-end. But notice that VLAD is just a simplified version of FV @cite_12 @cite_4 . It is more difficult to embed FV into CNN frameworks. Meanwhile, VLAD and NetVLAD are only able to capture first-order statistics of the data, while FV and FisherNet capture both first- and second-order statistics. So in many applications, especially image classification, FV is more suitable @cite_10 @cite_6 . Moreover, as the receptive field sizes of convolutional layers are fixed, the patches from the last convolutional layer come at only a single scale. 
We share the computation of the convolutional layers across different patches, and use a Spatial Pyramid Pooling (SPP) layer @cite_20 to generate patch features, making it possible to extract features from patches at multiple scales. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_6",
"@cite_20",
"@cite_10",
"@cite_12"
],
"mid": [
"1966385142",
"2951019013",
"",
"2109255472",
"",
"1984309565"
],
"abstract": [
"A standard approach to describe an image for classification and retrieval purposes is to extract a set of local patch descriptors, encode them into a high dimensional vector and pool them into an image-level signature. The most common patch encoding strategy consists in quantizing the local descriptors into a finite set of prototypical elements. This leads to the popular Bag-of-Visual words representation. In this work, we propose to use the Fisher Kernel framework as an alternative patch encoding strategy: we describe patches by their deviation from an \"universal\" generative Gaussian mixture model. This representation, which we call Fisher vector has many advantages: it is efficient to compute, it leads to excellent results even with efficient linear classifiers, and it can be compressed with a minimal loss of accuracy using product quantization. We report experimental results on five standard datasets--PASCAL VOC 2007, Caltech 256, SUN 397, ILSVRC 2010 and ImageNet10K--with up to 9M images and 10K classes, showing that the FV framework is a state-of-the-art patch encoding technique.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.",
"",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 @math 224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 @math faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.",
"",
"This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In this section, we overview prior work on the synchronous counting and firing squad problems. By now it has been established that both problems @cite_6 @cite_4 are closely connected to the well-studied (non-self-stabilising) consensus problem @cite_5 @cite_25 . As there exists a vast body of literature on synchronous consensus, we refer the interested reader to, e.g., the survey by Raynal @cite_26 . We note that self-stabilising variants of consensus have been studied @cite_22 @cite_19 @cite_11 @cite_28 , but in different models of computation and/or for different types of failures than what we consider in this work. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_25",
"@cite_11"
],
"mid": [
"2001315747",
"2294230742",
"2162775152",
"",
"",
"2086069023",
"2126924915",
"",
"1838286088"
],
"abstract": [
"Abstract Understanding distributed computing is not an easy task. This is due to the many facets of uncertainty one has to cope with and master in order to produce correct distributed software. A previous book Communication and Agreement Abstraction for Fault-tolerant Asynchronous Distributed Systems (published by Morgan & Claypool, 2010) was devoted to the problems created by crash failures in asynchronous message-passing systems. The present book focuses on the way to cope with the uncertainty created by process failures (crash, omission failures and Byzantine behavior) in synchronous message-passing systems (i.e., systems whose progress is governed by the passage of time). To that end, the book considers fundamental problems that distributed synchronous processes have to solve. These fundamental problems concern agreement among processes (if processes are unable to agree in one way or another in presence of failures, no non-trivial problem can be solved). They are consensus, interactive consistency, k-...",
"",
"In the standard consensus problem there are n processes with possibly different input values and the goal is to eventually reach a point at which all processes commit to exactly one of these values. We are studying a slight variant of the consensus problem called the stabilizing consensus problem [2]. In this problem, we do not require that each process commits to a final value at some point, but that eventually they arrive at a common, stable value without necessarily being aware of that. This should work irrespective of the states in which the processes are starting. Our main result is a simple randomized algorithm called median rule that, with high probability, just needs O(log m log log n + log n) time and work per process to arrive at an almost stable consensus for any set of m legal values as long as an adversary can corrupt the states of at most √n processes at any time. Without adversarial involvement, just O(log n) time and work is needed for a stable consensus, with high probability. As a by-product, we obtain a simple distributed algorithm for approximating the median of n numbers in time O(log m log log n + log n) under adversarial presence.",
"",
"",
"This paper presents a shared-memory self-stabilizing failure detector, asynchronous consensus and replicated state-machine algorithm suite, the components of which can be started in an arbitrary state and converge to act as a virtual state-machine. Self-stabilizing algorithms can cope with transient faults. Transient faults can alter the system state to an arbitrary state and hence, cause a temporary violation of the safety property of the consensus. Started in an arbitrary state, the long lived, memory bounded and self-stabilizing failure detector, asynchronous consensus, and replicated state-machine suite, presented in the paper, recovers to satisfy eventual safety and eventual liveness requirements. Several new techniques and paradigms are introduced. The bounded memory failure detector abstracts away synchronization assumptions using bounded heartbeat counters combined with a balance-unbalance mechanism. The practically infinite paradigm is introduced in the scope of self-stabilization, where an execution of, say, 2^64 sequential steps is regarded as (practically) infinite. Finally, we present the first self-stabilizing wait-free reset mechanism that ensures eventual safety and can be used to implement efficient self-stabilizing timestamps that are of independent interest.",
"The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.",
"",
"Inspired by the characteristics of biologically-motivated systems consisting of autonomous agents, we define the notion of stabilizing consensus in fully decentralized and highly dynamic ad hoc systems. Stabilizing consensus requires non-faulty nodes to eventually agree on one of their inputs, but individual nodes do not necessarily know when agreement is reached. First we show that, similar to the original consensus problem in the synchronous model, there exist deterministic solutions to the stabilizing consensus problem tolerating crash faults. Similarly, stabilizing consensus can also be solved deterministically in presence of Byzantine faults with the assumption that n > 3f where n is the number of nodes and f is the number of faulty nodes. Our main result is a Byzantine consensus protocol in a model in which the input to each node can change finitely many times during execution and eventually stabilizes. Finally we present an impossibility result for stabilizing consensus in systems of identical nodes."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In the past two decades, there has been increased interest in combining self-stabilisation with Byzantine fault-tolerance. One reason is that algorithms in this fault model are very attractive in terms of designing highly-resilient hardware @cite_4 . A substantial amount of work on synchronous counting has been carried out @cite_18 @cite_27 @cite_15 @cite_13 @cite_17 @cite_2 , comprising both positive and negative results. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"1970745521",
"2294230742",
"",
"2952504335",
"1520594164",
"1997521442",
"1911929785"
],
"abstract": [
"We initiate a study of bounded clock synchronization under a more severe fault model than that proposed by Lamport and Melliar-Smith [1985]. Realistic aspects of the problem of synchronizing clocks in the presence of faults are considered. One aspect is that clock synchronization is an on-going task, thus the assumption that some of the processors never fail is too optimistic. To cope with this reality, we suggest self-stabilizing protocols that stabilize in any (long enough) period in which less than a third of the processors are faulty. Another aspect is that the clock value of each processor is bounded. A single transient fault may cause the clock to reach the upper bound. Therefore, we suggest a bounded clock that wraps around when appropriate.We present two randomized self-stabilizing protocols for synchronizing bounded clocks in the presence of Byzantine processor failures. The first protocol assumes that processors have a common pulse, while the second protocol does not. A new type of distributed counter based on the Chinese remainder theorem is used as part of the first protocol.",
"",
"",
"Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.",
"Consider a distributed network of n nodes that is connected to a global source of \"beats\". All nodes receive the \"beats\" simultaneously, and operate in lock-step. A scheme that produces a \"pulse\" every Cycle beats is shown. That is, the nodes agree on \"special beats\", which are spaced Cycle beats apart. Given such a scheme, a clock synchronization algorithm is built. The \"pulsing\" scheme is self-stabilized despite any transient faults and the continuous presence of up to f < n/3 Byzantine nodes. Therefore, the clock synchronization built on top of the \"pulse\" is highly fault tolerant. In addition, a highly fault tolerant general stabilizer algorithm is constructed on top of the \"pulse\" mechanism. Previous clock synchronization solutions, operating in the exact same model as this one, either support f < n/4 and converge in linear time, or support f < n/3 and have exponential convergence time that also depends on the value of max-clock (the clock wrap around value). The proposed scheme combines the best of both worlds: it converges in linear time that is independent of max-clock and is tolerant to up to f < n/3 Byzantine nodes. Moreover, considering problems in a self-stabilizing, Byzantine tolerant environment that require nodes to know the global state (clock synchronization, token circulation, agreement, etc.), the work presented here is the first protocol to operate in a network that is not fully connected.",
"Consider a distributed network in which up to a third of the nodes may be Byzantine, and in which the non-faulty nodes may be subject to transient faults that alter their memory in an arbitrary fashion. Within the context of this model, we are interested in the digital clock synchronization problem, which consists of agreeing on bounded integer counters, and increasing these counters regularly. It has been postulated in the past that synchronization cannot be solved in a Byzantine tolerant and self-stabilizing manner. The first solution to this problem had an expected exponential convergence time. Later, a deterministic solution was published with linear convergence time, which is optimal for deterministic solutions. In the current paper we achieve an expected constant convergence time. We thus obtain the optimal probabilistic solution, both in terms of convergence time and in terms of resilience to Byzantine adversaries.",
"Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case f = 1. To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly. We develop computational techniques to find algorithms for synchronous 2-counting. Automated synthesis yields state-optimal self-stabilising fault-tolerant algorithms. We give a thorough experimental comparison of our two SAT-based synthesis techniques. A direct SAT encoding is more efficient for finding time-optimal algorithms. An iterative CEGAR-based approach finds non-optimal algorithms quickly."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In terms of lower bounds, many impossibility results for consensus @cite_5 @cite_29 @cite_10 @cite_31 also directly apply to synchronous counting, as synchronous counting solves binary consensus @cite_4 @cite_17 . In particular, no algorithm can tolerate more than @math Byzantine faulty nodes @cite_5 (unless cryptographic assumptions are made) and any deterministic algorithm needs at least @math rounds to stabilise @cite_29 . | {
"cite_N": [
"@cite_4",
"@cite_29",
"@cite_5",
"@cite_31",
"@cite_10",
"@cite_17"
],
"mid": [
"2294230742",
"2039164882",
"2126924915",
"2077963568",
"2014772227",
"1911929785"
],
"abstract": [
"",
"Abstract : The problem of 'assuring interactive consistency' is defined in (PSL). It is assumed that there are n isolated processors, of which at most m are faulty. The processors can communicate by means of two-party messages, using a medium which is reliable and of negligible delay. The sender of a message is always identifiable by the receiver. Each processor p has a private value sigma(p). The problem is to devise an algorithm that will allow each processor p to compute a value for each processor r, such that (a) if p and r are nonfaulty then p computes r's private value sigma(r), and (b) all the nonfaulty processors compute the same value for each processor r. It is shown in (PSL) that if n < 3m + 1, then there is no algorithm which assures interactive consistency. On the other hand, if n >= 3m + 1, then an algorithm does exist. The algorithm presented in (PSL) uses m + 1 rounds of communication, and thus can be said to require 'time' m + 1. An obvious question is whether fewer rounds of communication suffice to solve the problem. In this paper, we answer this question in the negative. That is, we show that any algorithm which assures interactive consistency in the presence of m faulty processors requires at least m + 1 rounds of communication. (Author)",
"The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.",
"Byzantine Agreement has become increasingly important in establishing distributed properties when errors may exist in the systems. Recent polynomial algorithms for reaching Byzantine Agreement provide us with feasible solutions for obtaining coordination and synchronization in distributed systems. In this paper the amount of information exchange necessary to ensure Byzantine Agreement is studied. This is measured by the total number of messages the participating processors have to send in the worst case. In algorithms that use a signature scheme, the number of signatures appended to messages are also counted. First it is shown that O( nt ) is a lower bound for the number of signatures for any algorithm using authentication, where n denotes the number of processors and t the upper bound on the number of faults the algorithm is supposed to handle. For algorithms that reach Byzantine Agreement without using authentication this is even a lower bound for the total number of messages. If n is large compared to t , these bounds match the upper bounds from previously known algorithms. For the number of messages in the authenticated case we prove the lower bound O( n + t 2 ). Finally algorithms that achieve this bound are presented.",
"Can unanimity be achieved in an unreliable distributed system? This problem was named \"The Byzantine Generals Problem,\" by Lamport, Pease and Shostak [1980]. The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is: 1) less than one third of the total number of processors; and 2) less than one half of the connectivity of the system''s network. In cases where unanimity is achievable, algorithms to obtain it are given. This result forms a complete characterization of networks in light of the Byzantine Problem.",
"Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case f = 1 . To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly. We develop computational techniques to find algorithms for synchronous 2-counting.Automated synthesis yields state-optimal self-stabilising fault-tolerant algorithms.We give a thorough experimental comparison of our two SAT-based synthesis techniques.A direct SAT encoding is more efficient for finding time-optimal algorithms.An iterative CEGAR-based approach finds non-optimal algorithms quickly."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In a seminal work, Dolev and Welch @cite_18 showed that the task can be solved in a self-stabilising manner in the presence of (the optimal number of) @math Byzantine faults using randomisation; see also [Ch. 6] dolev00self-stabilization . While this algorithm can be implemented using only constant-size messages, the expected stabilisation time is exponential. Later, Ben- @cite_13 showed that it is possible to obtain optimally-resilient solutions that stabilise in expected constant time. However, their algorithm relies on shared coins, which are costly to implement and assume private communication channels. | {
"cite_N": [
"@cite_18",
"@cite_13"
],
"mid": [
"1970745521",
"1997521442"
],
"abstract": [
"We initiate a study of bounded clock synchronization under a more severe fault model than that proposed by Lamport and Melliar-Smith [1985]. Realistic aspects of the problem of synchronizing clocks in the presence of faults are considered. One aspect is that clock synchronization is an on-going task, thus the assumption that some of the processors never fail is too optimistic. To cope with this reality, we suggest self-stabilizing protocols that stabilize in any (long enough) period in which less than a third of the processors are faulty. Another aspect is that the clock value of each processor is bounded. A single transient fault may cause the clock to reach the upper bound. Therefore, we suggest a bounded clock that wraps around when appropriate.We present two randomized self-stabilizing protocols for synchronizing bounded clocks in the presence of Byzantine processor failures. The first protocol assumes that processors have a common pulse, while the second protocol does not. A new type of distributed counter based on the Chinese remainder theorem is used as part of the first protocol.",
"Consider a distributed network in which up to a third of the nodes may be Byzantine, and in which the non-faulty nodes may be subject to transient faults that alter their memory in an arbitrary fashion. Within the context of this model, we are interested in the digital clock synchronization problem; which consists of agreeing on bounded integer counters, and increasing these counters regularly. It has been postulated in the past that synchronization cannot be solved in a Byzantine tolerant and self-stabilizing manner. The first solution to this problem had an expected exponential convergence time. Later, a deterministic solution was published with linear convergence time, which is optimal for deterministic solutions. In the current paper we achieve an expected constant convergence time. We thus obtain the optimal probabilistic solution, both in terms of convergence time and in terms of resilience to Byzantine adversaries."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In addition to the lower bound results, there also exist deterministic algorithms for the synchronous counting problem @cite_27 @cite_15 @cite_17 @cite_2 . Many of these algorithms utilise consensus routines @cite_27 @cite_15 @cite_2 , but obtaining fast and communication-efficient solutions with optimal resilience has been a challenge. For example, Dolev and Hoch @cite_15 apply a pipelining technique, where @math consensus instances are run in parallel. While this approach attains optimal resilience and linear stabilisation time in @math , the large number of parallel consensus instances necessitates large messages. | {
"cite_N": [
"@cite_27",
"@cite_2",
"@cite_17",
"@cite_15"
],
"mid": [
"",
"2952504335",
"1911929785",
"1520594164"
],
"abstract": [
"",
"Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.",
"Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case f = 1 . To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly. We develop computational techniques to find algorithms for synchronous 2-counting.Automated synthesis yields state-optimal self-stabilising fault-tolerant algorithms.We give a thorough experimental comparison of our two SAT-based synthesis techniques.A direct SAT encoding is more efficient for finding time-optimal algorithms.An iterative CEGAR-based approach finds non-optimal algorithms quickly.",
"Consider a distributed network of n nodes that is connected to a global source of \"beats\". All nodes receive the \"beats\" simultaneously, and operate in lock-step. A scheme that produces a \"pulse\" every Cycle beats is shown. That is, the nodes agree on \"special beats\", which are spaced Cycle beats apart. Given such a scheme, a clock synchronization algorithm is built. The \"pulsing\" scheme is self-stabilized despite any transient faults and the continuous presence of up to f < n 3 Byzantine nodes. Therefore, the clock synchronization built on top of the \"pulse\" is highly fault tolerant. In addition, a highly fault tolerant general stabilizer algorithm is constructed on top of the \"pulse\" mechanism. Previous clock synchronization solutions, operating in the exact same model as this one, either support f < n 4 and converge in linear time, or support f < n 3 and have exponential convergence time that also depends on the value of max-clock (the clock wrap around value). The proposed scheme combines the best of both worlds: it converges in linear time that is independent of max-clock and is tolerant to up to f < n 3 Byzantine nodes. Moreover, considering problems in a self-stabilizing, Byzantine tolerant environment that require nodes to know the global state (clock synchronization, token circulation, agreement, etc.), the work presented here is the first protocol to operate in a network that is not fully connected."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In order to achieve better communication and state complexity, the use of computational algorithm design and synthesis techniques has also been investigated @cite_17 @cite_12 . While this line of research has produced novel optimal and computer-verified algorithms, so far the techniques have not scaled beyond @math faulty node due to the inherent combinatorial explosion in the search space of potential algorithms. | {
"cite_N": [
"@cite_12",
"@cite_17"
],
"mid": [
"2479556144",
"1911929785"
],
"abstract": [
"Fault-tolerant distributed algorithms play an increasingly important role in many applications, and their correct and efficient implementation is notoriously difficult. We present an automatic approach to synthesise provably correct fault-tolerant distributed algorithms from formal specifications in linear-time temporal logic. The supported system model covers synchronous reactive systems with finite local state, while the failure model includes strong self-stabilisation as well as Byzantine failures. The synthesis approach for a fixed-size network of processes is complete for realisable specifications, and can optimise the solution for small implementations and short stabilisation time. To solve the bounded synthesis problem with Byzantine failures more efficiently, we design an incremental, CEGIS-like loop. Finally, we define two classes of problems for which our synthesis algorithm obtains solutions that are not only correct in fixed-size networks, but in networks of arbitrary size.",
"Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case f = 1 . To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly. We develop computational techniques to find algorithms for synchronous 2-counting.Automated synthesis yields state-optimal self-stabilising fault-tolerant algorithms.We give a thorough experimental comparison of our two SAT-based synthesis techniques.A direct SAT encoding is more efficient for finding time-optimal algorithms.An iterative CEGAR-based approach finds non-optimal algorithms quickly."
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | Recently, we gave recursive constructions that achieve linear stabilisation time using only polylogarithmic message size and state bits per node @cite_0 @cite_8 ; see also the extended and revised version @cite_2 . However, our previous constructions relied on specific (deterministic) consensus routines and their properties in a relatively ad hoc manner. In contrast, our new framework presented here lends itself to any (possibly randomised) synchronous consensus routine and improves the best known upper bound on the message size to @math bits. Currently, it is unknown whether it is possible to deterministically achieve message size @math . | {
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_8"
],
"mid": [
"2039651269",
"2952504335",
""
],
"abstract": [
"Consider a complete communication network of n nodes, in which the nodes receive a common clock pulse. We study the synchronous c-counting problem: given any starting state and up to f faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes count modulo c in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have linear stabilisation time in f, (3) use a small number of states, and (4) achieve almost-optimal resilience. Prior algorithms either resort to randomisation, use a large number of states, or have poor resilience. In particular, we achieve an exponential improvement in the state complexity of deterministic algorithms, while still achieving linear stabilisation time and almost-linear resilience.",
"Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.",
""
]
} |
1608.00214 | 2949609009 | Consider a fully-connected synchronous distributed system consisting of @math nodes, where up to @math nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous @math -counting problem, all nodes need to eventually agree on a counter that is increased by one modulo @math in each round for given @math . In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external "go" signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a "fire" signal. Moreover, no node should generate a "fire" signal without some correct node having previously received a "go" signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience @math , asymptotically optimal stabilisation and response time @math , and message size @math . As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults. | In the original formulation of the firing squad synchronisation problem, the system consists of an @math -length path consisting of finite state machines (whose number of states is independent of @math ) and the goal is to have all machines switch to the same ``fire'' state simultaneously after one node receives a ``go'' signal. This formulation of the problem has been attributed to John Myhill and Edward Moore and has subsequently been studied in various settings; see e.g. @cite_7 for a survey of early work related to the problem. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2120510885"
],
"abstract": [
"Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed."
]
} |
1608.00346 | 2479814928 | Partly on the basis of heuristic arguments from physics, it has been suggested that the performance of certain types of algorithms on random @math -SAT formulas is linked to phase transitions that affect the geometry of the set of satisfying assignments. But, beyond intuition, there has been scant rigorous evidence that “practical” algorithms are affected by these phase transitions. In this paper we prove that Walksat, a popular randomized satisfiability algorithm, fails on random @math -SAT formulas not very far above clause variable density, where the set of satisfying assignments shatters into tiny, well-separated clusters. Specifically, we prove that Walksat is ineffective with high probability (w.h.p.) if @math , where @math is the number of clauses, @math is the number of variables, and @math is an absolute constant. By comparison, Walksat is known to find satisfying assignments in linear time w.h.p. if @math [A. Coja-Oghlan and A. Frieze, SIAM J. Comput., 43 (2... | While is not a sequential local algorithm, we critically use one idea of the analysis from @cite_32 , called ``overlap structures'' in that paper. Specifically, Gamarnik and Sudan prove that for an appropriate integer @math no @math -tuple of NAE-satisfying assignments exists with pairwise distance about @math if the clause variable density is above @math . However, a coupling argument shows that if a local sequential algorithm were likely to succeed, then there would have to be such an @math -tuple of NAE-satisfying assignments with a non-vanishing probability. Actually the idea of overlap structures originates from the work of Rahman and Virag @cite_1 , who improved the density of an earlier negative result of Gamarnik and Sudan @cite_34 for a more specialised class of algorithms for the independent set problem. The definition of ``mists'' in the present paper is directly inspired by overlap structures. | {
"cite_N": [
"@cite_34",
"@cite_1",
"@cite_32"
],
"mid": [
"2952246906",
"2962770133",
""
],
"abstract": [
"Local algorithms on graphs are algorithms that run in parallel on the nodes of a graph to compute some global structural feature of the graph. Such algorithms use only local information available at nodes to determine local aspects of the global structure, while also potentially using some randomness. Recent research has shown that such algorithms show significant promise in computing structures like large independent sets in graphs locally. Indeed the promise led to a conjecture by Hatami, and Szegedy HatamiLovaszSzegedy that local algorithms may be able to compute maximum independent sets in (sparse) random @math -regular graphs. In this paper we refute this conjecture and show that every independent set produced by local algorithms is multiplicative factor @math smaller than the largest, asymptotically as @math . Our result is based on an important clustering phenomena predicted first in the literature on spin glasses, and recently proved rigorously for a variety of constraint satisfaction problems on random graphs. Such properties suggest that the geometry of the solution space can be quite intricate. The specific clustering property, that we prove and apply in this paper shows that typically every two large independent sets in a random graph either have a significant intersection, or have a nearly empty intersection. As a result, large independent sets are clustered according to the proximity to each other. While the clustering property was postulated earlier as an obstruction for the success of local algorithms, such as for example, the Belief Propagation algorithm, our result is the first one where the clustering property is used to formally prove limits on local algorithms.",
"We show that the largest density of factor of i.i.d. independent sets in the dd-regular tree is asymptotically at most (logd) d(logd) d as d→∞d→∞. This matches the lower bound given by previous constructions. It follows that the largest independent sets given by local algorithms on random dd-regular graphs have the same asymptotic density. In contrast, the density of the largest independent sets in these graphs is asymptotically 2(logd) d2(logd) d. We prove analogous results for Poisson–Galton–Watson trees, which yield bounds for local algorithms on sparse Erdős–Renyi graphs.",
""
]
} |
1608.00346 | 2479814928 | Partly on the basis of heuristic arguments from physics, it has been suggested that the performance of certain types of algorithms on random @math -SAT formulas is linked to phase transitions that affect the geometry of the set of satisfying assignments. But, beyond intuition, there has been scant rigorous evidence that “practical” algorithms are affected by these phase transitions. In this paper we prove that Walksat, a popular randomized satisfiability algorithm, fails on random @math -SAT formulas not very far above clause variable density, where the set of satisfying assignments shatters into tiny, well-separated clusters. Specifically, we prove that Walksat is ineffective with high probability (w.h.p.) if @math , where @math is the number of clauses, @math is the number of variables, and @math is an absolute constant. By comparison, Walksat is known to find satisfying assignments in linear time w.h.p. if @math [A. Coja-Oghlan and A. Frieze, SIAM J. Comput., 43 (2... | The first and the last author obtained negative results for message passing algorithms for random @math -SAT that do not require bounds on the number of iterations @cite_5 @cite_11 . Specifically, @cite_5 shows that a basic version of Belief Propagation Guided Decimation fails to find satisfying assignments for densities @math for a certain constant @math . Moreover, @cite_11 shows that a basic version of the conceptually more powerful Survey Propagation Guided Decimation algorithm fails if @math . | {
"cite_N": [
"@cite_5",
"@cite_11"
],
"mid": [
"1833343854",
"2290712804"
],
"abstract": [
"Let Φ be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-constructive arguments show that Φ is satisfiable for clause variable ratios m n ≤ rk 2k ln 2 with high probability (Achlioptas, Moore: SICOMP 2006; Achlioptas, Peres: J. AMS 2004). Yet no efficient algorithm is know to find a satisfying assignment for densities as low as m n rk · ln(k) k with a non-vanishing probability. In fact, the density m n rk · ln(k) k seems to form a barrier for a broad class of local search algorithms (Achlioptas, Coja-Oghlan: FOCS 2008). On the basis of deep but non-rigorous statistical mechanics considerations, a message passing algorithm called belief propagation guided decimation for solving random k-SAT has been forward (Mezard, Parisi, Zecchina: Science 2002; Braunstein, Mezard, Zecchina: RSA 2005). Experiments suggest that the algorithm might succeed for densities very close to rk for k = 3, 4, 5 (Kroc, Sabharwal, Selman: SAC 2009). Furnishing the first rigorous analysis of belief propagation guided decimation on random k-SAT, the present paper shows that the algorithm fails to find a satisfying assignment already for m n ≥ ρ · rk k, for a constant ρ > 0 independent of k.",
"Let @math be a uniformly distributed random @math -SAT formula with @math variables and @math clauses. For clauses variables ratio @math the formula @math is satisfiable with high probability. However, no efficient algorithm is known to provably find a satisfying assignment beyond @math with a non-vanishing probability. Non-rigorous statistical mechanics work on @math -CNF led to the development of a new efficient \"message passing algorithm\" called [M ', Science 2002]. Experiments conducted for @math suggest that the algorithm finds satisfying assignments close to @math . However, in the present paper we prove that the basic version of Survey Propagation Guided Decimation fails to solve random @math -SAT formulas efficiently already for @math with @math almost a factor @math below @math ."
]
} |
1608.00139 | 2500735746 | In this paper, we propose a fundamentally new approach to Datalog evaluation. Given a linear Datalog program DB written using N constants and binary predicates, we first translate if-and-only-if completions of clauses in DB into a set Eq(DB) of matrix equations with a non-linear operation where relations in M_DB, the least Herbrand model of DB, are encoded as adjacency matrices. We then translate Eq(DB) into another, but purely linear matrix equations tilde_Eq(DB). It is proved that the least solution of tilde_Eq(DB) in the sense of matrix ordering is converted to the least solution of Eq(DB) and the latter gives M_DB as a set of adjacency matrices. Hence computing the least solution of tilde_Eq(DB) is equivalent to computing M_DB specified by DB. For a class of tail recursive programs and for some other types of programs, our approach achieves O(N^3) time complexity irrespective of the number of variables in a clause since only matrix operations costing O(N^3) or less are used. We conducted two experiments that compute the least Herbrand models of linear Datalog programs. The first experiment computes transitive closure of artificial data and real network data taken from the Koblenz Network Collection. The second one compared the proposed approach with the state-of-the-art symbolic systems including two Prolog systems and two ASP systems, in terms of computation time for a transitive closure program and the same generation program. In the experiment, it is observed that our linear algebraic approach runs 10^1 10^4 times faster than the symbolic systems when data is not sparse. To appear in Theory and Practice of Logic Programming (TPLP). | Applying linear algebra to logical computation is not new. For example, the SAT problem is formulated using matrices and vectors in @cite_3 . Concerning Datalog, Ceri @cite_12 describes a bottom-up evaluation method which is essentially identical to the one referred to as Iteration'' in Subsection . 
Our approach is neither bottom-up nor iterative: it abolishes iteration and replaces it with inverse matrix application. There are also a couple of papers concerning KGs that evaluate ground atoms in a vector space. Grefenstette @cite_26 for example successfully embeds Herbrand models in tensor spaces, but the embedding excludes quantified formulas; they need to be treated separately by another framework which does not accept nested quantification. So his formalism is not applicable to our case of embedding Datalog programs into vector spaces, as Datalog programs can have nested existential quantifiers in their clause bodies. | {
"cite_N": [
"@cite_26",
"@cite_12",
"@cite_3"
],
"mid": [
"2952300142",
"2167685423",
""
],
"abstract": [
"The development of compositional distributional models of semantics reconciling the empirical aspects of distributional semantics with the compositional aspects of formal semantics is a popular topic in the contemporary literature. This paper seeks to bring this reconciliation one step further by showing how the mathematical constructs commonly used in compositional distributional models, such as tensors and matrices, can be used to simulate different aspects of predicate logic. This paper discusses how the canonical isomorphism between tensors and multilinear maps can be exploited to simulate a full-blown quantifier-free predicate calculus using tensors. It provides tensor interpretations of the set of logical connectives required to model propositional calculi. It suggests a variant of these tensor calculi capable of modelling quantifiers, using few non-linear operations. It finally discusses the relation between these variants, and how this relation should constitute the subject of future work.",
"Datalog, a database query language based on the logic programming paradigm, is described. The syntax and semantics of Datalog and its use for querying a relational database are presented. Optimization methods for achieving efficient evaluations of Datalog queries are classified, and the most relevant methods are presented. Various improvements of Datalog currently under study are discussed, and what is still needed in order to extend Datalog's applicability to the solution of real-life problems is indicated. >",
""
]
} |
1608.00207 | 2480276594 | In this paper, we propose a novel face alignment method that trains deep convolutional network from coarse to fine. It divides given landmarks into principal subset and elaborate subset. We firstly keep a large weight for principal subset to make our network primarily predict their locations while slightly take elaborate subset into account. Next the weight of principal subset is gradually decreased until two subsets have equivalent weights. This process contributes to learn a good initial model and search the optimal model smoothly to avoid missing fairly good intermediate models in subsequent procedures. On the challenging COFW dataset [1], our method achieves 6.33 mean error with a reduction of 21.37 compared with the best previous result [2]. | @cite_13 trains a deep convolutional network with multitask learning which jointly optimizes landmark detection together with the recognition of some facial attributes. It pre-trains the network with five landmarks and then fine-tunes it to predict the dense landmarks. However, our method doesn't require labeling extra attributes for training samples. Different from pre-training, we also consider predicting the locations of the other elaborate landmarks. Compared to the method consisting of pre-training and fine-tuning, we gradually adjust the weight of the principal subset and the elaborate subset, respectively, to avoid missing good models in subsequent training procedures. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1795776638"
],
"abstract": [
"In this study, we show that landmark detection or face alignment task is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is non-trivial since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep model."
]
} |
1607.08682 | 2491454164 | We study algorithmic and structural aspects of connectivity in hypergraphs. Given a hypergraph @math with @math , @math and @math the best known algorithm to compute a global minimum cut in @math runs in time @math for the uncapacitated case and in @math time for the capacitated case. We show the following new results. 1. Given an uncapacitated hypergraph @math and an integer @math we describe an algorithm that runs in @math time to find a subhypergraph @math with sum of degrees @math that preserves all edge-connectivities up to @math (a @math -sparsifier). This generalizes the corresponding result of Nagamochi and Ibaraki from graphs to hypergraphs. Using this sparsification we obtain an @math time algorithm for computing a global minimum cut of @math where @math is the minimum cut value. 2. We generalize Matula's argument for graphs to hypergraphs and obtain a @math -approximation to the global minimum cut in a capacitated hypergraph in @math time. 3. We show that a hypercactus representation of all the global minimum cuts of a capacitated hypergraph can be computed in @math time and @math space. We utilize vertex ordering based ideas to obtain our results. Unlike graphs we observe that there are several different orderings for hypergraphs which yield different insights. | In a recent work, Kogan and Krauthgamer @cite_30 examined the properties of Karger's random contraction algorithm when applied to hypergraphs. They showed that if the rank of the hypergraph is @math then the number of @math -mincuts for @math is at most @math , which is a substantial improvement over a naive analysis that would give a bound of @math . The exponential dependence on @math is necessary. They also showed cut-sparsification results à la Benczur and Karger's result for graphs @cite_22 .
In particular, given a @math -vertex capacitated hypergraph @math of rank @math they show that there is a capacitated hypergraph @math with @math edges such that every cut capacity in @math is preserved to within a @math factor in @math . @cite_33 considered parametric mincuts in graphs and hypergraphs of fixed rank and obtained polynomial bounds on the number of distinct mincuts. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_33"
],
"mid": [
"1978676170",
"1543491698",
"2132147722"
],
"abstract": [
"Sketching and streaming algorithms are in the forefront of current research directions for cut problems in graphs. In the streaming model, we show that (1--e)-approximation for Max-Cut must use n 1-O(e) space; moreover, beating 4 5-approximation requires polynomial space. For the sketching model, we show that every r-uniform hypergraph admits a (1+ e)-cut-sparsifier (i.e., a weighted subhypergraph that approximately preserves all the cuts) with O(e-2n(r+log n)) edges. We also make first steps towards sketching general CSPs (Constraint Satisfaction Problems).",
"We describe random sampling techniques for approximately solving problems that involve cuts and flows in graphs. We give a near-linear-time randomized combinatorial construction that transforms any graph on @math vertices into an @math -edge graph on the same vertices whose cuts have approximately the same value as the original graph's. In this new graph, for example, we can run the @math -time maximum flow algorithm of Goldberg and Rao to find an @math - @math minimum cut in @math time. This corresponds to a @math -times minimum @math - @math cut in the original graph. A related approach leads to a randomized divide-and-conquer algorithm producing an approximately maximum flow in @math time. Our algorithm can also be used to improve the running time of sparsest cut approximation algorithms from @math to @math and to accelerate several other recent cut and flow algorithms. Our algorithms are based on a general theorem analyzing the concent...",
"We consider multiobjective and parametric versions of the global minimum cut problem in undirected graphs and bounded-rank hypergraphs with multiple edge cost functions. For a fixed number of edge cost functions, we show that the total number of supported non-dominated (SND) cuts is bounded by a polynomial in the numbers of nodes and edges, i.e., is strongly polynomial. This bound also applies to the combinatorial facet complexity of the problem, i.e., the maximum number of facets (linear pieces) of the parametric curve for the parametrized (linear combination) objective, over the set of all parameter vectors such that the parametrized edge costs are nonnegative and the parametrized cut costs are positive. We sharpen this bound in the case of two objectives (the bicriteria problem), for which we also derive a strongly polynomial upper bound on the total number of non-dominated (Pareto optimal) cuts. In particular, the bicriteria global minimum cut problem in an n-node graph admits @math O(n3logn) SND cuts and @math O(n5logn) Pareto optimal cuts. These results significantly improve on earlier graph cut results by Mulmuley (SIAM J Comput 28(4):1460---1509, 1999) and Armon and Zwick (Algorithmica 46(1):15---26, 2006). They also imply that the parametric curve and all SND cuts, and, for the bicriteria problems, all Pareto optimal cuts, can be computed in strongly polynomial time when the number of objectives is fixed."
]
} |
1607.08682 | 2491454164 | We study algorithmic and structural aspects of connectivity in hypergraphs. Given a hypergraph @math with @math , @math and @math the best known algorithm to compute a global minimum cut in @math runs in time @math for the uncapacitated case and in @math time for the capacitated case. We show the following new results. 1. Given an uncapacitated hypergraph @math and an integer @math we describe an algorithm that runs in @math time to find a subhypergraph @math with sum of degrees @math that preserves all edge-connectivities up to @math (a @math -sparsifier). This generalizes the corresponding result of Nagamochi and Ibaraki from graphs to hypergraphs. Using this sparsification we obtain an @math time algorithm for computing a global minimum cut of @math where @math is the minimum cut value. 2. We generalize Matula's argument for graphs to hypergraphs and obtain a @math -approximation to the global minimum cut in a capacitated hypergraph in @math time. 3. We show that a hypercactus representation of all the global minimum cuts of a capacitated hypergraph can be computed in @math time and @math space. We utilize vertex ordering based ideas to obtain our results. Unlike graphs we observe that there are several different orderings for hypergraphs which yield different insights. | Hypergraph cuts have also been studied in the context of @math -way cuts. Here the goal is to partition the vertex set @math into @math non-empty sets so as to minimize the number of hyperedges crossing the partition. For @math a polynomial time algorithm is known @cite_5 , while the complexity is unknown for fixed @math . The problem is NP-Complete when @math is part of the input even for graphs @cite_28 . Fukunaga @cite_36 obtained a polynomial-time algorithm for @math -way cut when @math and the rank @math are fixed; this generalizes the polynomial-time algorithm for graphs @cite_28 @cite_6 .
Karger's contraction algorithm also yields a randomized algorithm when @math and the rank @math are fixed. When @math is part of the input, @math -way cut in graphs admits a @math -approximation @cite_1 . This immediately yields a @math -approximation for hypergraphs. If @math is not fixed and @math is part of the input, it was recently shown @cite_10 that the approximability of the @math -way cut problem is related to that of the @math -densest subgraph problem. | {
"cite_N": [
"@cite_28",
"@cite_36",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_10"
],
"mid": [
"2167637346",
"2067062892",
"2122466336",
"1995027289",
"1981202693",
""
],
"abstract": [
"The k-cut problem is to find a partition of an edge weighted graph into k nonempty components, such that the total edge weight between components is minimum. This problem is NP-complete for an arbitrary k and its version involving fixing a vertex in each component is NP-hard even for k = 3. We present a polynomial algorithm for k fixed, that runs in Onki¾² 2-3k 2+4Tn, m steps, where Tn, m is the running time required to find the minimum s, t-cut on a graph with n vertices and m edges.",
"Abstract The hypergraph k -cut problem is the problem of finding a minimum capacity set of hyperedges whose removal divides a given hypergraph into at least k connected components. We present an algorithm for this problem, that runs in strongly polynomial time if both k and the maximum size of the hyperedges are constants. Our algorithm extends the algorithm proposed by Thorup (2008) for computing minimum k -cuts of graphs from greedy packings of spanning trees.",
"Two simple approximation algorithms for the minimum @math -cut problem are presented. Each algorithm finds a @math cut having weight within a factor of @math of the optimal. One of our algorithms is particularly efficient---it requires a total of only @math maximum flow computations for finding a set of near-optimal @math cuts, one for each value of @math between 2 and @math .",
"We present a simple and fast deterministic algorithm for the minimum k-way cut problem in a capacitated graph, that is, finding a set of edges with minimum total capacity whose removal splits the graph into at least k components. The algorithm packs O(mk3 log n) trees. Each new tree is a minimal spanning tree with respect to the edge utilizations, and the utilization of an edge is the number of times it has been used in previous spanning trees divided by its capacity. We prove that each minimum k-way cut is crossed at most 2k-2 times by one of the trees. We can enumerate all such cuts in O(n2k) time, which is hence the running time of our algorithm producing all minimum k-way cuts. The previous fastest deterministic algorithm of [SICOMP'06] took O(n(4+o(1))k) time, so this is a near-quadratic improvement. Moreover, we essentially match the O(n(2-o(1))k) running time of the Monto Carlo (no correctness guarantee) randomized algorithm of Karger and Stein [JACM'96].",
"The minimum 3-way cut problem in an edge-weighted hypergraph is to find a partition of the vertices into 3 nonempty sets minimizing the total weight of hyperedges that have at least two endpoints in two different sets. In this paper we show that a minimum 3-way cut in hypergraphs can be found by using O(n^3) hypergraph minimum (s,t) cut computations, where n is the number of vertices in the hypergraph. Our simple algorithm is the first polynomial-time algorithm for finding minimum 3-way cuts in hypergraphs.",
""
]
} |
1607.08774 | 2951691122 | Nowadays, both the amount of cyberattacks and their sophistication have considerably increased, and their prevention is of concern of most of organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, looking for a win-win environment is not straightforward and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that presents a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attacks propagation and game theory for information sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments using simulated environments show how the proposed model provides insights on which conditions and scenarios are beneficial for information sharing. | Several works have proposed the use of game theory to analyse the trade-off in terms of incentives and costs of sharing information among entities. Naghizadeh and Liu @cite_21 propose folk theorems and use an analytical method to study how private and public monitoring, through inter-temporal incentives, can support a degree of cooperation. Similar to this work, we also propose a game-theory-based model where the utilities of sharing information are calculated upon gains and costs. However, instead of using historical publicly available actions, we propose the immunization factor and reputation as the main variables for incentivizing sharing. Furthermore, the work in @cite_21 identifies the cost of disclosure as one of the main drawbacks related to information sharing. While we also consider privacy and disclosure costs as two key drawbacks for sharing, we also introduce a third variable that affects costs, i.e. trust.
| {
"cite_N": [
"@cite_21"
],
"mid": [
"2579399277"
],
"abstract": [
"Sharing of security information among organizations has often been proposed as a method for improving the state of cyber-security for all. However, such disclosure entails costs for the reporting entity, including a drop in market value, loss of customer confidence, and bureaucratic burdens. By adopting a game-theoretic approach for understanding firms' incentives, we first show that in one shot interactions, disclosure costs act as disincentives for sharing, leading to no information sharing at equilibrium. We then consider a repeated game formulation that enables the use of inter-temporal incentives (i.e., the conditioning of future decisions on the history of past interactions) to support firms' cooperation on information sharing. We show that the nature of the monitoring available to firms greatly affects the possibility of sustaining nearly efficient outcomes through repeated interactions. Specifically, we first illustrate the limitations arising when firms' have independent and imperfect private monitoring technologies. We then consider the role of a public monitoring assessment system, and show that despite using the same imperfect technology as the individual firms, the monitor can facilitate cooperation among participants. Our results therefore illustrate the impact of a public rating reputation system on the viability of security information sharing agreements."
]
} |
1607.08774 | 2951691122 | Nowadays, both the amount of cyberattacks and their sophistication have considerably increased, and their prevention is of concern of most of organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, looking for a win-win environment is not straightforward and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that presents a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attacks propagation and game theory for information sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments using simulated environments show how the proposed model provides insights on which conditions and scenarios are beneficial for information sharing. | @cite_11 use game theory to help organizations decide whether to share information or not, using the CYBEX framework @cite_7 . The authors use evolutionary game theory in order to attain an evolutionarily stable strategy (ESS) under various conditions. These conditions are extracted through simulation with synthetic data in a non-cooperative scenario with rational and profit-seeking firms. The main incentive for sharing is the information received, and thus the knowledge gained. In our work we also consider this knowledge as an incentive for sharing. @cite_12 present a two-stage Bayesian game between two firms to help decide how much to invest in searching for vulnerabilities and how much of this information to share. The authors determine the Perfect Bayesian Equilibrium to analytically derive strategy conditions encouraging information sharing.
In @cite_12 , a firm benefits from losses in another firm, namely those due to exploited bugs. Moreover, they distinguish three types of costs: direct loss (of the compromised firm), common loss due to market shrinkage, and competitive loss. | {
"cite_N": [
"@cite_12",
"@cite_7",
"@cite_11"
],
"mid": [
"",
"2054480731",
"2608407642"
],
"abstract": [
"",
"The cybersecurity information exchange framework, known as CYBEX, is currently undergoing its first iteration of standardization efforts within ITU-T. The framework describes how cybersecurity information is exchanged between cybersecurity entities on a global scale and how the exchange is assured. The worldwide implementation of the framework will eventually minimize the disparate availability of cybersecurity information. This paper provides a specification overview, use cases, and the current status of CYBEX.",
"Technology Information technology; Terrorism and threats Cyberterrorism; Technology Modeling and simulation; Cyberspace and Cybersecurity"
]
} |
1607.08774 | 2951691122 | Nowadays, both the amount of cyberattacks and their sophistication have considerably increased, and their prevention is of concern of most of organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, looking for a win-win environment is not straightforward and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that presents a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attacks propagation and game theory for information sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments using simulated environments show how the proposed model provides insights on which conditions and scenarios are beneficial for information sharing. | One of the main problems when analysing the costs and benefits of information sharing is experimentation with real data. Whereas most of the proposed works @cite_21 @cite_11 @cite_16 perform their evaluation using analytical methods, @cite_17 present a controlled data sharing approach and perform an empirical evaluation on a dataset of suspicious IP addresses. The authors of @cite_17 use different similarity metrics to analyse the benefits of sharing and compare different sharing strategies: sharing everything or only information about attack entities. They rely on a static scenario and provide useful metrics to mathematically predict the benefits of information sharing. By contrast, using a simulated setting, we empirically analyse how impacts are propagated through the network at runtime, and afterwards analyse how information sharing is able to mitigate such impacts in the future. | {
"cite_N": [
"@cite_21",
"@cite_17",
"@cite_16",
"@cite_11"
],
"mid": [
"2579399277",
"2952479443",
"2002263800",
"2608407642"
],
"abstract": [
"Sharing of security information among organizations has often been proposed as a method for improving the state of cyber-security for all. However, such disclosure entails costs for the reporting entity, including a drop in market value, loss of customer confidence, and bureaucratic burdens. By adopting a game-theoretic approach for understanding firms' incentives, we first show that in one shot interactions, disclosure costs act as disincentives for sharing, leading to no information sharing at equilibrium. We then consider a repeated game formulation that enables the use of inter-temporal incentives (i.e., the conditioning of future decisions on the history of past interactions) to support firms' cooperation on information sharing. We show that the nature of the monitoring available to firms greatly affects the possibility of sustaining nearly efficient outcomes through repeated interactions. Specifically, we first illustrate the limitations arising when firms' have independent and imperfect private monitoring technologies. We then consider the role of a public monitoring assessment system, and show that despite using the same imperfect technology as the individual firms, the monitor can facilitate cooperation among participants. Our results therefore illustrate the impact of a public rating reputation system on the viability of security information sharing agreements.",
"Although sharing data across organizations is often advocated as a promising way to enhance cybersecurity, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaboration and agree on what to share in a privacy-preserving way, without having to disclose their datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources based on one's logs and those contributed by other organizations. We study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to 105 accuracy improvement on average, while also reducing the false positive rate.",
"New regulations mandating firms to share information on security breaches and security practices with authorities are high on the policy agenda around the globe. These initiatives are based on the hope that authorities can effectively advise and warn other firms, thereby strengthening overall defense and response to cyberthreats in an economy. If this mechanism works (as assumed in this paper with varying effectiveness), it has consequences on security investments of rational firms. We devise an economic model that distinguishes between investments in detective and preventive controls, and analyze its Nash equilibria. The model suggests that firms subject to mandatory security information sharing 1) over-invest in security breach detection as well as under-invest in breach prevention, and 2), depending on the enforcement practices, may shift investment priorities from detective to preventive controls. We also identify conditions where the regulation increases welfare.",
"Technology Information technology; Terrorism and threats Cyberterrorism; Technology Modeling and simulation; Cyberspace and Cybersecurity"
]
} |
1607.08754 | 2950174932 | An equitable graph coloring is a proper vertex coloring of a graph G where the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest number k such that G admits such equitable k-coloring. We focus on enumerative algorithms for the computation of the equitable coloring number and propose a general scheme to derive pruning rules for them: We show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. Thus, we obtain pruning rules which can be checked via flow algorithms. Computational experiments show that the search tree of enumerative algorithms can be significantly reduced in size by these rules and, in most instances, such naive approach even yields a faster algorithm. Moreover, the stability, i.e., the number of solved instances within a given time limit, is greatly improved. Since the execution of flow algorithms at each node of a search tree is time consuming, we derive arithmetic pruning rules (generalized Hall-conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement. | We repeat basic definitions and briefly describe the approach from Méndez-Díaz @cite_11 to tackle ECP. For this section and the remainder of the work, we assume a graph @math to be given. We start with some basic observations. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2102068573"
],
"abstract": [
"This paper describes a new exact algorithm for the Equitable Coloring Problem, a coloring problem where the sizes of two arbitrary color classes differ in at most one unit. Based on the well known DSatur algorithm for the classic Coloring Problem, a pruning criterion arising from equity constraints is proposed and analyzed. The good performance of the algorithm is shown through computational experiments over random and benchmark instances."
]
} |
1607.08754 | 2950174932 | An equitable graph coloring is a proper vertex coloring of a graph G where the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest number k such that G admits such equitable k-coloring. We focus on enumerative algorithms for the computation of the equitable coloring number and propose a general scheme to derive pruning rules for them: We show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. Thus, we obtain pruning rules which can be checked via flow algorithms. Computational experiments show that the search tree of enumerative algorithms can be significantly reduced in size by these rules and, in most instances, such naive approach even yields a faster algorithm. Moreover, the stability, i.e., the number of solved instances within a given time limit, is greatly improved. Since the execution of flow algorithms at each node of a search tree is time consuming, we derive arithmetic pruning rules (generalized Hall-conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement. | Since any equitable coloring is a coloring, it is clear that @math . Let @math be fixed. If @math admits an equitable @math -coloring, the sizes of the color classes are fixed. Recall that a clique is a set of pairwise adjacent vertices and that we call a set of vertices stable if it only contains pairwise nonadjacent vertices. In the following, we adopt several notations and definitions from @cite_11 . A partial @math -coloring of @math is given by @math such that: all @math are pairwise disjoint, @math for @math , and the @math are stable for @math . Note that a partial coloring can also be completely empty, i.e., @math for @math | {
"cite_N": [
"@cite_11"
],
"mid": [
"2102068573"
],
"abstract": [
"This paper describes a new exact algorithm for the Equitable Coloring Problem, a coloring problem where the sizes of two arbitrary color classes differ in at most one unit. Based on the well known DSatur algorithm for the classic Coloring Problem, a pruning criterion arising from equity constraints is proposed and analyzed. The good performance of the algorithm is shown through computational experiments over random and benchmark instances."
]
} |
1607.08754 | 2950174932 | An equitable graph coloring is a proper vertex coloring of a graph G where the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest number k such that G admits such an equitable k-coloring. We focus on enumerative algorithms for the computation of the equitable coloring number and propose a general scheme to derive pruning rules for them: We show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. Thus, we obtain pruning rules which can be checked via flow algorithms. Computational experiments show that the search tree of enumerative algorithms can be significantly reduced in size by these rules and, in most instances, such a naive approach even yields a faster algorithm. Moreover, the stability, i.e., the number of solved instances within a given time limit, is greatly improved. Since the execution of flow algorithms at each node of a search tree is time consuming, we derive arithmetic pruning rules (generalized Hall-conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement. | We repeat some further definitions to introduce the results of @cite_11 . Consider @math and a partial coloring @math , we denote by @math the size of the largest color class. Denote by @math the indices of the largest color classes and the cardinality @math | {
"cite_N": [
"@cite_11"
],
"mid": [
"2102068573"
],
"abstract": [
"This paper describes a new exact algorithm for the Equitable Coloring Problem, a coloring problem where the sizes of two arbitrary color classes differ in at most one unit. Based on the well known DSatur algorithm for the classic Coloring Problem, a pruning criterion arising from equity constraints is proposed and analyzed. The good performance of the algorithm is shown through computational experiments over random and benchmark instances."
]
} |
1607.08764 | 2953132777 | Current state of the art object recognition architectures achieve impressive performance but are typically specialized for a single depictive style (e.g. photos only, sketches only). In this paper, we present SwiDeN: our Convolutional Neural Network (CNN) architecture which recognizes objects regardless of how they are visually depicted (line drawing, realistic shaded drawing, photograph etc.). In SwiDeN, we utilize a novel 'deep' depictive style-based switching mechanism which appropriately addresses the depiction-specific and depiction-invariant aspects of the problem. We compare SwiDeN with alternative architectures and prior work on a 50-category Photo-Art dataset containing objects depicted in multiple styles. Experimental results show that SwiDeN outperforms other approaches for the depiction-invariant object recognition problem. | Object class (category) recognition, albeit restricted to photographic depictions, has been studied extensively by researchers @cite_9 @cite_14 @cite_13 . However, little previous work exists for truly general multi-depiction object recognition. @cite_12 construct multi-attribute part-graphs for object categories and use graph matching for classification on the same dataset we use. However, their evaluation procedure, also used by @cite_11 @cite_19 , induces an unreasonable amount of category bias which makes comparison difficult. We present an alternative evaluation procedure which is more principled (See Section ). @cite_17 present a graph-based object modelling approach and evaluate it on @math augmented classes of Caltech-256. @cite_7 utilize a depiction-invariant method for image matching. Domain adaptation approaches have also been tried @cite_19 . However, when the domain-specific identifiers (e.g. 
target domain labels) are available as in our case, a domain-adaptation procedure unnecessarily makes the overall problem harder since the objective in domain-adaptation is typically to "forget" the source domain. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_17",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2145743319",
"2035652042",
"2154422044",
"",
"2270586871",
"2159589665",
"",
"1587734090"
],
"abstract": [
"This paper proposes a novel approach to constructing a hierarchical representation of visual input that aims to enable recognition and detection of a large number of object categories. Inspired by the principles of efficient indexing (bottom-up,), robust matching (top-down,), and ideas of compositionality, our approach learns a hierarchy of spatially flexible compositions, i.e. parts, in an unsupervised, statistics-driven manner. Starting with simple, frequent features, we learn the statistically most significant compositions (parts composed of parts), which consequently define the next layer. Parts are learned sequentially, layer after layer, optimally adjusting to the visual data. Lower layers are learned in a category-independent way to obtain complex, yet sharable visual building blocks, which is a crucial step towards a scalable representation. Higher layers of the hierarchy, on the other hand, are constructed by using specific categories, achieving a category representation with a small number of highly generalizable parts that gained their structural flexibility through composition within the hierarchy. Built in this way, new categories can be efficiently and continuously added to the system by adding a small number of parts only in the higher layers. The approach is demonstrated on a large collection of images and a variety of object categories. Detection results confirm the effectiveness and robustness of the learned parts.",
"The goal of this work is to find visually similar images even if they appear quite different at the raw pixel level. This task is particularly important for matching images across visual domains, such as photos taken over different seasons or lighting conditions, paintings, hand-drawn sketches, etc. We propose a surprisingly simple method that estimates the relative importance of different features in a query image based on the notion of \"data-driven uniqueness\". We employ standard tools from discriminative object detection in a novel way, yielding a generic approach that does not depend on a particular image representation or a specific visual domain. Our approach shows good performance on a number of difficult cross-domain visual tasks e.g., matching paintings or sketches to real photographs. The method also allows us to demonstrate novel applications such as Internet re-photography, and painting2gps. While at present the technique is too computationally intensive to be practical for interactive image retrieval, we hope that some of the ideas will eventually become applicable to that domain as well.",
"We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion and relative scale. An entropy-based feature detector is used to select regions and their scale within the image. In learning the parameters of the scale-invariant object model are estimated. This is done using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used in a Bayesian manner to classify images. The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. faces, cars) and flexible objects (such as animals).",
"",
"The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It introduces great challenge as the variance across photo and art domains is much larger than either alone. We extensively evaluate classification, domain adaptation and detection benchmarks for leading techniques, demonstrating that none perform consistently well given the cross-depiction problem. Finally we refine the DPM model, based on query expansion, enabling it to bridge the gap across depiction boundaries to some extent.",
"Psychophysical studies show that we can recognize objects using fragments of outline contour alone. This paper proposes a new automatic visual recognition system based only on local contour features, capable of localizing objects in space and scale. The system first builds a class-specific codebook of local fragments of contour using a novel formulation of chamfer matching. These local fragments allow recognition that is robust to within-class variation, pose changes, and articulation. Boosting combines these fragments into a cascaded sliding-window classifier, and mean shift is used to select strong responses as a final set of detection. We show how learning can be performed iteratively on both training and test sets to bootstrap an improved classifier. We compare with other methods based on contour and local descriptors in our detailed evaluation over 17 challenging categories and obtain highly competitive results. The results confirm that contour is indeed a powerful cue for multiscale and multiclass visual object recognition.",
"",
"The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It is a potentially significant yet under-researched problem. Emulating the remarkable human ability to recognise objects in an astonishingly wide variety of depictive forms is likely to advance both the foundations and the applications of Computer Vision. In this paper we benchmark classification, domain adaptation, and deep learning methods; demonstrating that none perform consistently well in the cross-depiction problem. Given the current interest in deep learning, the fact such methods exhibit the same behaviour as all but one other method: they show a significant fall in performance over inhomogeneous databases compared to their peak performance, which is always over data comprising photographs only. Rather, we find the methods that have strong models of spatial relations between parts tend to be more robust and therefore conclude that such information is important in modelling object classes regardless of appearance details."
]
} |
1607.08821 | 2492705755 | People usually get involved in multiple social networks to enjoy new services or to fulfill their needs. Many new social networks try to attract users of other existing networks to increase the number of their users. Once a user (called source user) of a social network (called source network) joins a new social network (called target network), a new inter-network link (called anchor link) is formed between the source and target networks. In this paper, we concentrated on predicting the formation of such anchor links between heterogeneous social networks. Unlike conventional link prediction problems in which the formation of a link between two existing users within a single network is predicted, in anchor link prediction, the target user is missing and will be added to the target network once the anchor link is created. To solve this problem, we use meta-paths as a powerful tool for utilizing heterogeneous information in both the source and target networks. To this end, we propose an effective general meta-path-based approach called Connector and Recursive Meta-Paths (CRMP). By using those two different categories of meta-paths, we model different aspects of social factors that may affect a source user to join the target network, resulting in the formation of a new anchor link. Extensive experiments on real-world heterogeneous social networks demonstrate the effectiveness of the proposed method against the recent methods. | The preliminary works have been focused on homogeneous networks containing single type of nodes and links. Recently, the use of heterogeneous information in link prediction problem has gained much attention. @cite_20 used a meta-path-based approach called PathPredict to predict co-authorship links in heterogeneous bibliographic networks. @cite_14 proposed a probabilistic method called MRIP to predict links in heterogeneous networks. 
@cite_16 also suggested a meta-path-based method to predict multiple types of links in heterogeneous information networks. @cite_18 devised an unsupervised method using aggregative statistics for the link prediction problem in heterogeneous networks. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_20",
"@cite_16"
],
"mid": [
"2002163386",
"",
"2109480754",
"2086643053"
],
"abstract": [
"The concern of privacy has become an important issue for online social networks. In services such as Foursquare.com, whether a person likes an article is considered private and therefore not disclosed; only the aggregative statistics of articles (i.e., how many people like this article) is revealed. This paper tries to answer a question: can we predict the opinion holder in a heterogeneous social network without any labeled data? This question can be generalized to a link prediction with aggregative statistics problem. This paper devises a novel unsupervised framework to solve this problem, including two main components: (1) a three-layer factor graph model and three types of potential functions; (2) a ranked-margin learning and inference algorithm. Finally, we evaluate our method on four diverse prediction scenarios using four datasets: preference (Foursquare), repost (Twitter), response (Plurk), and citation (DBLP). We further exploit nine unsupervised models to solve this problem as baselines. Our approach not only wins out in all scenarios, but on the average achieves 9.90 AUC and 12.59 NDCG improvement over the best competitors. The resources are available at http: www.csie.ntu.edu.tw d97944007 aggregative",
"",
"The problem of predicting links or interactions between objects in a network, is an important task in network analysis. Along this line, link prediction between co-authors in a co-author network is a frequently studied problem. In most of these studies, authors are considered in a homogeneous network, .e., only one type of objects(author type) and one type of links (co-authorship) exist in the network. However, in a real bibliographic network, there are multiple types of objects ( .g., venues, topics, papers) and multiple types of links among these objects. In this paper, we study the problem of co-author relationship prediction in the heterogeneous bibliographic network, and a new methodology called, .e., meta path-based relationship prediction model, is proposed to solve this problem. First, meta path-based topological features are systematically extracted from the network. Then, a supervised model is used to learn the best weights associated with different topological features in deciding the co-author relationships. We present experiments on a real bibliographic network, the DBLP network, which show that metapath-based heterogeneous topological features can generate more accurate prediction results as compared to homogeneous topological features. In addition, the level of significance of each topological feature can be learned from the model, which is helpful in understanding the mechanism behind the relationship building.",
"Link prediction has become an important and active research topic in recent years, which is prevalent in many real-world applications. Current research on link prediction focuses on predicting one single type of links, such as friendship links in social networks, or predicting multiple types of links independently. However, many real-world networks involve more than one type of links, and different types of links are not independent, but related with complex dependencies among them. In such networks, the prediction tasks for different types of links are also correlated and the links of different types should be predicted collectively. In this paper, we study the problem of collective prediction of multiple types of links in heterogeneous information networks. To address this problem, we introduce the linkage homophily principle and design a relatedness measure, called RM, between different types of objects to compute the existence probability of a link. We also extend conventional proximity measures to heterogeneous links. Furthermore, we propose an iterative framework for heterogeneous collective link prediction, called HCLP, to predict multiple types of links collectively by exploiting diverse and complex linkage information in heterogeneous information networks. Empirical studies on real-world tasks demonstrate that the proposed collective link prediction approach can effectively boost link prediction performances in heterogeneous information networks."
]
} |
1607.08821 | 2492705755 | People usually get involved in multiple social networks to enjoy new services or to fulfill their needs. Many new social networks try to attract users of other existing networks to increase the number of their users. Once a user (called source user) of a social network (called source network) joins a new social network (called target network), a new inter-network link (called anchor link) is formed between the source and target networks. In this paper, we concentrated on predicting the formation of such anchor links between heterogeneous social networks. Unlike conventional link prediction problems in which the formation of a link between two existing users within a single network is predicted, in anchor link prediction, the target user is missing and will be added to the target network once the anchor link is created. To solve this problem, we use meta-paths as a powerful tool for utilizing heterogeneous information in both the source and target networks. To this end, we propose an effective general meta-path-based approach called Connector and Recursive Meta-Paths (CRMP). By using those two different categories of meta-paths, we model different aspects of social factors that may affect a source user to join the target network, resulting in the formation of a new anchor link. Extensive experiments on real-world heterogeneous social networks demonstrate the effectiveness of the proposed method against the recent methods. | Meanwhile, the problem of anchor link prediction has not gained much attention in the research community, until recently. To the best of our knowledge, the closest proposed method to the anchor link prediction problem, is the CICF method of @cite_19 . The main idea of CICF is to transfer knowledge only through those anchor users who behave consistently across both the source and target networks. 
However, they formulated the problem as a cross-domain learning task, and thus missed some other important factors such as peer influence that affects the formation of anchor links. Furthermore, they did not intend to use heterogeneous information and the task of feature extraction is delegated to the application. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2108467801"
],
"abstract": [
"We study the target node prediction problem: given two social networks, identify those nodes users from one network (called the source network) who are likely to join another (called the target network, with nodes called target nodes). Although this problem can be solved using existing techniques in the field of cross domain classification, we observe that in many realworld situations the cross-domain classifiers perform sub-optimally due to the heterogeneity between source and target networks that prevents the knowledge from being transferred. In this paper, we propose learning the consistent behavior of common users to help the knowledge transfer. We first present the Consistent Incidence Co-Factorization (CICF) for identifying the consistent users, i.e., common users that behave consistently across networks. Then we introduce the Domain-UnBiased (DUB) classifiers that transfer knowledge only through those consistent users. Extensive experiments are conducted and the results show that our proposal copes with heterogeneity and improves prediction accuracy."
]
} |
1607.08821 | 2492705755 | People usually get involved in multiple social networks to enjoy new services or to fulfill their needs. Many new social networks try to attract users of other existing networks to increase the number of their users. Once a user (called source user) of a social network (called source network) joins a new social network (called target network), a new inter-network link (called anchor link) is formed between the source and target networks. In this paper, we concentrated on predicting the formation of such anchor links between heterogeneous social networks. Unlike conventional link prediction problems in which the formation of a link between two existing users within a single network is predicted, in anchor link prediction, the target user is missing and will be added to the target network once the anchor link is created. To solve this problem, we use meta-paths as a powerful tool for utilizing heterogeneous information in both the source and target networks. To this end, we propose an effective general meta-path-based approach called Connector and Recursive Meta-Paths (CRMP). By using those two different categories of meta-paths, we model different aspects of social factors that may affect a source user to join the target network, resulting in the formation of a new anchor link. Extensive experiments on real-world heterogeneous social networks demonstrate the effectiveness of the proposed method against the recent methods. | Table shows a comparison between link prediction, anchor link inference, and anchor link prediction problems. Besides the conventional link prediction studies, there are other works that are related to the problem of anchor link prediction. In @cite_11 , the most important factors that cause users to switch between social networks are investigated, empirically. They used the Push-Pull-Mooring model of migration @cite_23 as a basis, and categorized different factors into push, pull, and mooring. 
However, they did not propose a model for predicting user migration in social networks. In another related work, @cite_10 studied the formation and evolution of communities in social networks. They have shown that, for a user, both the number of her friends and the associated share of her friends' activities in the target community have a great impact on her decision to join that community. We have used the results of these studies as a background theory for the proposed method. | {
"cite_N": [
"@cite_10",
"@cite_23",
"@cite_11"
],
"mid": [
"2432978112",
"",
"2050568170"
],
"abstract": [
"The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities.",
"",
"This paper seeks to offer strategic recommendations for SNS providers.We identify characteristics that combine to distinguish SNSs from conventional ISs.Our PPM-based model extends research on IS continuance and service switching.The findings reveal four significant factors that promote switching.SNS providers can devise user strategies grounded in these factors. Users are the most critical strategic resource of any online social networking service (SNS). This paper offers strategic recommendations for SNS providers based on an empirical study exploring why users switch from a primary SNS to others. We first identify important characteristics that combine to distinguish SNSs from conventional information systems, then develop a \"cyber migration\" research model that includes push, pull and mooring factors which influence user intention to switch from one SNS to another. Findings from a field survey of 180 users reveal four significant factors that promote switching: dissatisfaction with socialization support, dissatisfaction with entertainment value, continuity cost, and peer influence. Strategies grounded in these factors are suggested for SNS providers to better attract and retain users."
]
} |
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Although the problem of community detection has been widely studied, the associated problem of generating standard benchmark datasets to evaluate the quality of clustering algorithms has not attracted much interest. One of the earlier works in this direction was by Girvan and Newman @cite_48 , usually referred to as the GN benchmark. They used a network of 128 nodes divided into four communities of 32 nodes each, with each node having approximately the same node degree.
A parameter was used to control the intra-cluster and inter-cluster edges. A number of drawbacks were identified in this benchmark @cite_2 , most notably that all nodes of the network have the same degree and all the communities have the same number of nodes in them. | {
"cite_N": [
"@cite_48",
"@cite_2"
],
"mid": [
"1971421925",
"2023655578"
],
"abstract": [
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis."
]
} |
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Most networks found in the real world have a non-uniform degree distribution, often following a power-law @cite_54 . Furthermore, the community sizes in these real networks also follow a power-law @cite_52 @cite_26 @cite_7 , which in turn justifies the drawbacks identified in the benchmark proposed by @cite_48 . Thus, @cite_2 proposed a new benchmark called the LFR benchmark which has a number of interesting features.
The degree distribution of the generated networks follows a power-law and the average degree can be adjusted as required. The distribution of community sizes also follows a power-law and can be parametrized between a minimum and a maximum value. Each node has a fraction of intra-cluster and inter-cluster edges, controlled by a mixing parameter. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_48",
"@cite_54",
"@cite_52",
"@cite_2"
],
"mid": [
"2155167324",
"2087897009",
"1971421925",
"2008620264",
"",
"2023655578"
],
"abstract": [
"We propose a procedure for analyzing and characterizing complex networks. We apply this to the social network as constructed from email communications within a medium sized university with about 1700 employees. Email networks provide an accurate and nonintrusive description of the flow of information within human organizations. Our results reveal the self-organization of the network into a state where the distribution of community sizes is self-similar. This suggests that a universal mechanism, responsible for emergence of scaling in other self-organized complex systems, as, for instance, river networks, could also be the underlying driving force in the formation and evolution of social networks.",
"In order to describe the community structure upon the dynamical evolution of complex networks, we propose a new community based evolving network (CBEN) model having increasing communities with preferential mechanisms of both community sizes and node degrees, whose cumulative distribution and raw distribution follow scale-invariant power-law distributions P(S⩾s)∼s−ν and P(k)∼k−γ with exponents of ν⩾1 and γ∈[2, +∞), respectively. Besides, complex networks generated by the CBEN model are hierarchically structured, which cover the range from disassortative networks to assortative networks.",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"",
"Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis."
]
} |
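The LFR generator with exactly these knobs (power-law exponents for degrees and community sizes, average degree, mixing parameter) ships with networkx; a minimal sketch, with illustrative parameter values taken from the networkx documentation:

```python
import networkx as nx

# tau1/tau2: power-law exponents of the degree and community-size
# distributions; mu: mixing parameter, i.e. the fraction of each
# node's edges that leave its community. Values are illustrative.
G = nx.LFR_benchmark_graph(
    250, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)

# The planted partition is stored per node as a frozenset attribute.
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(len(communities), "planted communities")
```

The planted partition retrieved this way is what a benchmarked algorithm's output is compared against.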
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Lancichinetti and Fortunato @cite_27 also proposed methods to generate benchmarks for directed and weighted networks as well as networks with overlapping community structures, but in this study, we limit our analysis to undirected, unweighted graphs with hard and flat community structures. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1991408655"
],
"abstract": [
"Many complex networks display a mesoscopic structure with groups of nodes sharing many links with the other nodes in their group and comparatively few with nodes of different groups. This feature is known as community structure and encodes precious information about the organization and the function of the nodes. Many algorithms have been proposed but it is not yet clear how they should be tested. Recently we have proposed a general class of undirected and unweighted benchmark graphs, with heterogeneous distributions of node degree and community size. An increasing attention has been recently devoted to develop algorithms able to consider the direction and the weight of the links, which require suitable benchmark graphs for testing. In this paper we extend the basic ideas behind our previous benchmark to generate directed and weighted networks with built-in community structure. We also consider the possibility that nodes belong to more communities, a feature occurring in real systems, such as social networks. As a practical application, we show how modularity optimization performs on our benchmark."
]
} |
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Moriano and Finke @cite_16 proposed a model to generate networks with groups of nodes densely connected to each other and sparsely connected with other nodes. The model attempted to explain networks with extended power-law degree distributions and clustering coefficients that do not diminish for very large networks. When connecting, new nodes probabilistically choose nodes of the same type, which forms community structures. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2135959919"
],
"abstract": [
"Based on the formation of triad junctions, the mechanism proposed in this paper generates networks that exhibit extended rather than single power law behavior. Triad formation guarantees strong neighborhood clustering and community-level characteristics as the network size grows to infinity. The asymptotic behavior is of interest in the study of directed networks in which (i) the formation of links cannot be described according to the principle of preferential attachment, (ii) the in-degree distribution fits a power law for nodes with a high degree and an exponential form otherwise, (iii) clustering properties emerge at multiple scales and depend on both the number of links that newly added nodes establish and the probability of forming triads, and (iv) groups of nodes form modules that feature fewer links to the rest of the nodes."
]
} |
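A readily available generator in the same spirit is the Holme–Kim model in networkx, which likewise grows a scale-free network via triad formation so that clustering does not vanish with network size (this is not the cited model, only an accessible stand-in for experimenting with the idea):

```python
import networkx as nx

# Holme-Kim growth: each new node attaches preferentially with m edges,
# and with probability p each attachment step also closes a triangle,
# so the clustering coefficient stays high as the network grows.
G = nx.powerlaw_cluster_graph(n=1000, m=3, p=0.5, seed=42)

print(f"average clustering: {nx.average_clustering(G):.3f}")
```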
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Pasta @cite_18 recently proposed a tunable network generation model with community structures whose flexibility allows it to generate a variety of networks with varying structural properties. The authors focus on three structural features of the community structures generated through this model: the degree distribution within each community follows a power-law, nodes within each community have a high clustering coefficient, and each community can be further divided into sub-communities. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2952731778"
],
"abstract": [
"Recent years have seen a growing interest in the modeling and simulation of social networks to understand several social phenomena. Two important classes of networks, small world and scale free networks have gained a lot of research interest. Another important characteristic of social networks is the presence of community structures. Many social processes such as information diffusion and disease epidemics depend on the presence of community structures making it an important property for network generation models to be incorporated. In this paper, we present a tunable and growing network generation model with small world and scale free properties as well as the presence of community structures. The major contribution of this model is that the communities thus created satisfy three important structural properties: connectivity within each community follows power-law, communities have high clustering coefficient and hierarchical community structures are present in the networks generated using the proposed model. Furthermore, the model is highly robust and capable of producing networks with a number of different topological characteristics varying clustering coefficient and inter-cluster edges. Our simulation results show that the model produces small world and scale free networks along with the presence of communities depicting real world societies and social networks."
]
} |
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Lancichinetti and Fortunato @cite_49 compared the performance of 12 different clustering algorithms using the GN and the LFR benchmarks. The authors focus on the mixing parameter to perform a comparative analysis for different network sizes (1000 and 5000) and different community sizes (between 10 and 50, and between 20 and 100) in an attempt to identify the algorithms that perform well on these benchmarks. 
The authors also performed a comparative analysis for directed-unweighted, undirected-weighted and undirected-unweighted graphs with overlapping community structures. Compared to that study, we also use the LFR benchmark but vary its parameters from a different perspective, especially varying the average degree of the graphs and allowing a large variation in the sizes of the generated clusters. | {
"cite_N": [
"@cite_49"
],
"mid": [
"1995996823"
],
"abstract": [
"Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems."
]
} |
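Agreement between a planted partition and a detected one is typically scored with normalized mutual information in such benchmark comparisons; a small sketch using scikit-learn, with made-up labelings:

```python
from sklearn.metrics import normalized_mutual_info_score

# NMI is invariant to label permutation: the same partition under
# renamed labels still scores 1.0, while unrelated labelings score
# near 0. The two labelings below are illustrative only.
planted  = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
detected = [2, 2, 2, 2, 0, 0, 0, 1, 1, 1]  # same partition, relabeled

nmi = normalized_mutual_info_score(planted, detected)
print(f"NMI = {nmi:.2f}")  # → NMI = 1.00
```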
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | @cite_17 studied various networks and common objective functions optimized to detect communities, with the aim of understanding the structural properties of the clusters identified by different methods and the overall motive of finding the best-suited algorithm for specific applications. They conclude that the performance of community detection algorithms varies for certain classes of networks; for example, communities of larger size tend to be less dense. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2951938759"
],
"abstract": [
"Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior."
]
} |
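One of the common objective functions examined in such studies, conductance (edges leaving the cluster over the cluster's volume), is easy to evaluate for a candidate community; a sketch on the karate-club graph:

```python
import networkx as nx

# Conductance = (edges leaving the set) / (total degree of the set);
# lower values indicate a more clearly separated community.
G = nx.karate_club_graph()
faction = {v for v in G if G.nodes[v]["club"] == "Mr. Hi"}

cond = nx.conductance(G, faction)
print(f"conductance of Mr. Hi's faction: {cond:.3f}")
```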
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | Sousa and Zhao @cite_14 empirically compared different clustering algorithms using synthetic and real networks. The objective of their work was to evaluate the performance of the different algorithms. They found that the performance of some algorithms is affected by the number of nodes in a network. However, the artificial networks they generated had at most 100 nodes, which makes it difficult to generalize the results to large networks with hundreds or thousands of nodes. 
They use Modularity Q @cite_20 in part to evaluate the quality of clustering algorithms, a measure shown to underestimate @cite_37 and overestimate @cite_59 the number of communities when maximized. | {
"cite_N": [
"@cite_59",
"@cite_37",
"@cite_14",
"@cite_20"
],
"mid": [
"2146191525",
"2963681272",
"2078570923",
""
],
"abstract": [
"In this paper we discuss some problematic aspects of Newman and Girvan’s modularity function Q N . Given a graph G, the modularity of G can be written as Q N = Q f − Q 0, where Q f is the intracluster edge fraction of G and Q 0 is the expected intracluster edge fraction of the null model, i.e., a randomly connected graph with same expected degree distribution as G. It follows that the maximization of Q N must accomodate two factors pulling in opposite directions:Q f favors a small number of clusters and Q 0 favors many balanced (i.e., with approximately equal degrees) clusters. In certain cases the Q 0 term can cause overestimation of the true cluster number; this is the opposite of the well-known underestimation effect caused by the “resolution limit” of modularity. We illustrate the overestimation effect by constructing families of graphs with a “natural” community structure which, however, does not maximize modularity. In fact, we show there exist graphs G with a “natural clustering” V of G and another, balanced clustering U of G such that (i) the pair (G, U) has higher modularity than (G, V) and (ii) V and U are arbitrarily different.",
"Detecting community structure is fundamental for uncovering the links between structure and function in complex networks and for practical applications in many disciplines such as biology and sociology. A popular method now widely used relies on the optimization of a quantity called modularity, which is a quality index for a partition of a network into communities. We find that modularity optimization may fail to identify modules smaller than a scale which depends on the total size of the network and on the degree of interconnectedness of the modules, even in cases where modules are unambiguously defined. This finding is confirmed through several examples, both in artificial and in real social, biological, and technological networks, where we show that mod ularity optimization indeed does not resolve a large number of modules. A check of the modules obtained through modularity optimization is thus necessary, and we provide here key elements for the assessment of the reliability of this community detection method.",
"Complex networks became a very important tool in machine learning field, helping researchers to investigate and mine data. They can model real dynamic networks, aiding to unveil information's about the systems they model. Communities are notable groups that may exist in a complex network and the community detection problem is the focus of attention of many researchers. The igraph library implements a good set of community detection algorithms, allowing researchers to easily apply them to data mining tasks. But each algorithm uses a different approach, leading to different performances. In this paper, the community detection algorithms implemented in the igraph library are investigated and ranked according to their performances in a set of different scenarios. Results show walktrap and multi-level got the highest scores while leading eigenvector and spinglass got the lowest ones. These findings are an important contribution for aiding researchers to select or discard algorithms in their own experiments using igraph library.",
""
]
} |
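Modularity Q itself, together with a greedy partition that approximately maximizes it, is available in networkx; a sketch on the karate-club graph illustrating the quantity that the under-/overestimation critiques target:

```python
import networkx as nx
from networkx.algorithms.community import (
    greedy_modularity_communities,
    modularity,
)

# Q compares the intra-community edge fraction against its expectation
# under a degree-preserving null model; greedy (CNM) agglomeration is
# one standard way of approximately maximizing it.
G = nx.karate_club_graph()
partition = greedy_modularity_communities(G)

q = modularity(G, partition)
print(f"{len(partition)} communities, Q = {q:.3f}")
```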
1607.08497 | 2490699242 | One of the most widely studied problems in mining and analysis of complex networks is the detection of community structures. The problem has been extensively studied by researchers due to its high utility and numerous applications in various domains. Many algorithmic solutions have been proposed for the community detection problem but the quest to find the best algorithm is still on. More often than not, researchers focus on developing fast and accurate algorithms that can be generically applied to networks from a variety of domains without taking into consideration the structural and topological variations in these networks. In this paper, we evaluate the performance of different clustering algorithms as a function of varying network topology. Along with the well known LFR model to generate benchmark networks with communities, we also propose a new model named Naive Scale Free Model to study the behavior of community detection algorithms with respect to different topological features. More specifically, we are interested in the size of networks, the size of community structures, the average connectivity of nodes and the ratio of inter-intra cluster edges. Results reveal several limitations of the current popular network clustering algorithms failing to correctly find communities. This suggests the need to revisit the design of current clustering algorithms that fail to incorporate varying topological features of different networks. | A more recent work on evaluating network clustering algorithms was performed by Wang @cite_40 . They proposed a procedure-oriented framework for benchmarking to evaluate different community detection approaches in an attempt to find better algorithms. Although their objectives were clearly different from what we have proposed in this paper, one of their findings also pointed towards the varying performance of algorithms for different networks. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2296746822"
],
"abstract": [
"Revealing the latent community structure, which is crucial to understanding the features of networks, is an important problem in network and graph analysis. During the last decade, many approaches have been proposed to solve this challenging problem in diverse ways, i.e. different measures or data structures. Unfortunately, experimental reports on existing techniques fell short in validity and integrity since many comparisons were not based on a unified code base or merely discussed in theory. We engage in an in-depth benchmarking study of community detection in social networks. We formulate a generalized community detection procedure and propose a procedure-oriented framework for benchmarking. This framework enables us to evaluate and compare various approaches to community detection systematically and thoroughly under identical experimental conditions. Upon that we can analyze and diagnose the inherent defect of existing approaches deeply, and further make effective improvements correspondingly. We have re-implemented ten state-of-the-art representative algorithms upon this framework and make comprehensive evaluations of multiple aspects, including the efficiency evaluation, performance evaluations, sensitivity evaluations, etc. We discuss their merits and faults in depth, and draw a set of take-away interesting conclusions. In addition, we present how we can make diagnoses for these algorithms resulting in significant improvements."
]
} |
1607.08277 | 2476593844 | In recent time, the standards for Vehicular Ad-hoc Networks (VANETs) and Intelligent Transportation Systems (ITSs) matured and scientific and industry interest is high especially as autonomous driving gets a lot of media attention. Autonomous driving and other assistance systems for cars make heavy use of VANETs to exchange information. They may provide more comfort, security and safety for drivers. However, it is of crucial importance for the user's trust in these assistance systems that they could not be influenced by malicious users. VANETs are likely attack vectors for such malicious users, hence application-specific security requirements must be considered during the design of applications using VANETs. In literature, many attacks on vehicular communication have been described but attacks on specific vehicular networking applications are often missing. This paper fills in this gap by describing standardized vehicular networking applications, defining and extending previous attacker models, and using the resulting new models to characterize the possible attackers interested in the specific vehicular networking application. The attacker models presented in this paper hopefully provide great benefit for the scientific community and industry as they allow to compare security evaluations of different works, characterize attackers, their intentions and help to plan application-specific security controls for vehicular networking applications. | A realistic attack scenario is the exploitation of low-level software or hardware vulnerabilities in the network stacks of wireless transceivers. The existence and importance of these vulnerabilities have been discussed in various publications @cite_18 @cite_1 @cite_11 @cite_15 . This scenario marks the lower bound of the attack scenarios that are discussed in this paper. 
While still relevant to wireless communication in general, cellular or ad-hoc, this attack scenario is not specific to any single vehicular networking application, and its root cause, vulnerable software and hardware, proliferates through all layers of current systems. Therefore, it is not the focus of this publication. Instead, the main contribution is the combination and extension of the previous attacker models by @cite_4 @cite_8 @cite_22 and the detailed description of realistic attacker models via the extended model. Most of the previous works @cite_12 @cite_17 @cite_7 @cite_21 @cite_16 are missing realistic attacker models. Some, like @cite_17 @cite_7 @cite_21 , use categories of attacks, such as impersonation, data tampering, Sybil, or DoS attacks, and describe each attacker based on its category. @cite_16 comes close to defining realistic attacker models by defining categories of attackers, such as driver, road side, or infrastructure. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2293493339",
"",
"2121247918",
"2088946378",
"",
"2136490971",
"",
"2312586861",
"",
"2128269064",
"1655377165",
"2405941171"
],
"abstract": [
"Mobile communication is an essential part of our daily lives. Therefore, it needs to be secure and reliable. In this paper, we study the security of feature phones, the most common type of mobile phone in the world. We built a framework to analyze the security of SMS clients of feature phones. The framework is based on a small GSM base station, which is readily available on the market. Through our analysis we discovered vulnerabilities in the feature phone platforms of all major manufacturers. Using these vulnerabilities we designed attacks against end-users as well as mobile operators. The threat is serious since the attacks can be used to prohibit communication on a large scale and can be carried out from anywhere in the world. Through further analysis we determined that such attacks are amplified by certain configurations of the mobile network. We conclude our research by providing a set of countermeasures.",
"",
"Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.",
"In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and sparse VANET which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker‘s model as an extension to the work done in [6]. Then a VANET model has been proposed that supports the application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution have been studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attack and threat to location privacy) for dense VANET and two attack scenarios for sparse VANET. It has been shown that attacks can be launched by distributing malicious applications and injecting malicious codes to On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks have been described. Finally, countermeasures including the concepts of sandbox have also been presented in depth.",
"",
"Last few years, vehicular network has been taken more attention of researchers and automotive industries due to life saving factor. Vehicular Ad hoc Network (VANET) needs security to implement the wireless environment and serves users with safety and non safety applications. Attackers generate different attacks in this life saving vehicular network. In this paper, we propose five different classes of attacks and every class is expected to provide better perspective for the VANET security. The main contribution of this paper is the proposed solution for classification and identification of different attacks in VANET.",
"",
"Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.",
"",
"Communication using VANETs is commonly seen as the next milestone for improving traffic safety. Vehicles will be enabled to exchange any kind of information that helps to detect and mitigate dangerous situations. Security research in the past years has shown that VANETs are endangered by a plethora of severe security risk. Subject of this work is the modeling of attackers that target active safety applications in VANETs. Through a risk analysis, this work identifies assets, threats and potential attacks in inter-vehicle communication. The risk analysis shows that the most serious threat arises from a quasi-stationary (road-side) attacker that distributed forged warning messages. This attacker is discussed more deeply. We show the degrees of freedom that are available for position forging and find thereby two attacks that demand attention: single position forging having low effort compared to sophisticated movement path forging having a potentially high influence on road traffic safety.",
"Autonomous vehicles capable of navigating unpredictable real-world environments with little human feedback are a reality today. Such systems rely heavily on onboard sensors such as cameras, radar LIDAR, and GPS as well as capabilities such as 3G 4G connectivity and V2V V2I communication to make real-time maneuvering decisions. Autonomous vehicle control imposes very strict requirements on the security of the communication channels used by the vehicle to exchange information as well as the control logic that performs complex driving tasks such as adapting vehicle velocity or changing lanes. This study presents a first look at the effects of security attacks on the communication channel as well as sensor tampering of a connected vehicle stream equipped to achieve CACC. Our simulation results show that an insider attack can cause significant instability in the CACC vehicle stream. We also illustrate how different countermeasures, such as downgrading to ACC mode, could potentially be used to improve the security and safety of the connected vehicle streams.",
"Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smart-phones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio."
]
} |
1607.08477 | 2493727926 | Hashing methods have been widely used for efficient similarity retrieval on large scale image database. Traditional hashing methods learn hash functions to generate binary codes from hand-crafted features, which achieve limited accuracy since the hand-crafted features cannot optimally represent the image content and preserve the semantic similarity. Recently, several deep hashing methods have shown better performance because the deep architectures generate more discriminative feature representations. However, these deep hashing methods are mainly designed for supervised scenarios, which only exploit the semantic similarity information, but ignore the underlying data structures. In this paper, we propose the semi-supervised deep hashing approach, to perform more effective hash function learning by simultaneously preserving semantic similarity and underlying data structures. The main contributions are as follows: 1) We propose a semi-supervised loss to jointly minimize the empirical error on labeled data, as well as the embedding error on both labeled and unlabeled data, which can preserve the semantic similarity and capture the meaningful neighbors on the underlying data structures for effective hashing. 2) A semi-supervised deep hashing network is designed to extensively exploit both labeled and unlabeled data, in which we propose an online graph construction method to benefit from the evolving deep features during training to better capture semantic neighbors. To the best of our knowledge, the proposed deep network is the first deep hashing method that can perform hash code learning and feature learning simultaneously in a semi-supervised fashion. Experimental results on five widely-used data sets show that our proposed approach outperforms the state-of-the-art hashing methods. 
| Different from the two-stage framework in CNNH @cite_28 , Network in Network Hashing (NINH) @cite_7 integrates image representation learning and hash code learning into a one-stage framework. NINH builds its deep framework for hash function learning on the Network in Network architecture @cite_23 , with a shared sub-network composed of several stacked convolutional layers to extract image features, followed by a divide-and-encode module that uses a sigmoid activation function and a piece-wise threshold function to output binary hash codes. During the learning process, without generating approximate hash codes in advance, NINH designs a triplet ranking loss function that exploits the relative similarity of the training images to directly guide hash function learning: where @math and @math specify the triplet constraint that image @math is more similar to image @math than to image @math based on image labels, @math denotes the binary hash code, and @math denotes the Hamming distance. To simplify the optimization of this loss, NINH applies two relaxation tricks: relaxing the integer constraint on the binary hash codes, and replacing the Hamming distance with the Euclidean distance. | {
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_7"
],
"mid": [
"2293824885",
"",
"1939575207"
],
"abstract": [
"Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.",
"",
"Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods."
]
} |
1607.08506 | 2501157883 | Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission critical applications, financial, and real time systems. In this work we focused on early detection of performance bugs, our software under study was a real time system used in the mobile advertisement marketing domain. Goal: Find a simple and easy to implement solution, predicting performance bugs. Method: We built several models using four machine learning methods, commonly used for defect prediction: C4.5 Decision Trees, Naive Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, and precision = 0.96). We show that reducing the number of changes delivered on a commit, can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners to eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs. | There are many studies in the area of defect prediction (see @cite_47 for a review of 208 defect prediction models) and quality assurance of software, since this is one of the most significant endeavors during a product's life cycle. A reduced number of bugs assures better quality, thereby improving the product being delivered and allowing for better resource allocation @cite_31 @cite_49 . | {
"cite_N": [
"@cite_47",
"@cite_31",
"@cite_49"
],
"mid": [
"2151666086",
"2101728371",
""
],
"abstract": [
"Background: The accurate prediction of where faults are likely to occur in code can help direct test effort, reduce costs, and improve the quality of software. Objective: We investigate how the context of models, the independent variables used, and the modeling techniques applied influence the performance of fault prediction models. Method: We used a systematic literature review to identify 208 fault prediction studies published from January 2000 to December 2010. We synthesize the quantitative and qualitative results of 36 studies which report sufficient contextual and methodological information according to the criteria we develop and apply. Results: The models that perform well tend to be based on simple modeling techniques such as Naive Bayes or Logistic Regression. Combinations of independent variables have been used by models that perform well. Feature selection has been applied to these combinations when models are performing particularly well. Conclusion: The methodology used to build models seems to be influential to predictive performance. Although there are a set of fault prediction studies in which confidence is possible, more studies are needed that use a reliable methodology and which report their context, methodology, and performance comprehensively.",
"Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the \"quality\" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points the Goldilock's Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of \"software decomposition\" in order to test hypotheses about defect introduction and help construct a better science of software engineering.",
""
]
} |
1607.08506 | 2501157883 | Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission critical applications, financial, and real time systems. In this work we focused on early detection of performance bugs, our software under study was a real time system used in the mobile advertisement marketing domain. Goal: Find a simple and easy to implement solution, predicting performance bugs. Method: We built several models using four machine learning methods, commonly used for defect prediction: C4.5 Decision Trees, Naive Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, and precision = 0.96). We show that reducing the number of changes delivered on a commit, can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners to eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs. | There are several categories of tools and methods used to detect and predict defects, with the aim of delivering better software @cite_51 . However, static bug finders and statistical models @cite_34 have become the two most prominent categories of defect prediction. Defects can be further categorized to create prediction models in order to distribute the resolution resources more efficiently @cite_43 @cite_10 @cite_35 . At the same time, efficiency, density, and especially the technical debt caused by defects play critical roles as well @cite_46 . | {
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_43",
"@cite_46",
"@cite_34",
"@cite_51"
],
"mid": [
"",
"",
"2029277954",
"2139219304",
"2111421634",
"2055447156"
],
"abstract": [
"",
"",
"Defect prediction is a well-established research area in software engineering . Prediction models in the literature do not predict defect-prone modules in different test phases. We investigate the relationships between defects and test phases in order to build defect prediction models for different test phases. We mined the version history of a large-scale enterprise software product to extract churn and static code metrics. We used three testing phases that have been employed by our industry partner, namely function, system and field, to build a learning-based model for each testing phase. We examined the relation of different defect symptoms with the testing phases. We compared the performance of our proposed model with a benchmark model that has been constructed for the entire test phase (benchmark model). Our results show that building a model to predict defect-prone modules for each test phase significantly improves defect prediction performance and shortens defect detection time. The benefit analysis shows that using the proposed model, the defects are detected on the average 7 months earlier than the actual. The outcome of prediction models should lead to an action in a software development organization. Our proposed model gives a more granular outcome in terms of predicting defect-prone modules in each testing phase so that managers may better organize the testing teams and effort.",
"The technical debt (TD) metaphor describes a tradeoff between short-term and long-term goals in software development. Developers, in such situations, accept compromises in one dimension (e.g. maintainability) to meet an urgent demand in another dimension (e.g. delivering a release on time). Since TD produces interests in terms of time spent to correct the code and accomplish quality goals, accumulation of TD in software systems is dangerous because it could lead to more difficult and expensive maintenance. The research presented in this paper is focused on the usage of automatic static analysis to identify Technical Debt at code level with respect to different quality dimensions. The methodological approach is that of Empirical Software Engineering and both past and current achieved results are presented, focusing on functionality, efficiency and maintainability.",
"The all-important goal of delivering better software at lower cost has led to a vital, enduring quest for ways to find and remove defects efficiently and accurately. To this end, two parallel lines of research have emerged over the last years. Static analysis seeks to find defects using algorithms that process well-defined semantic abstractions of code. Statistical defect prediction uses historical data to estimate parameters of statistical formulae modeling the phenomena thought to govern defect occurrence and predict where defects are likely to occur. These two approaches have emerged from distinct intellectual traditions and have largely evolved independently, in “splendid isolation”. In this paper, we evaluate these two (largely) disparate approaches on a similar footing. We use historical defect data to apprise the two approaches, compare them, and seek synergies. We find that under some accounting principles, they provide comparable benefits; we also find that in some settings, the performance of certain static bug-finders can be enhanced using information provided by statistical defect prediction.",
"Despite decades of work by researchers and practitioners on numerous software quality assurance techniques, testing remains one of the most widely practiced and studied approaches for assessing and improving software quality. Our goal, in this paper, is to provide an accounting of some of the most successful research performed in software testing since the year 2000, and to present what appear to be some of the most significant challenges and opportunities in this area. To be more inclusive in this effort, and to go beyond our own personal opinions and biases, we began by contacting over 50 of our colleagues who are active in the testing research area, and asked them what they believed were (1) the most significant contributions to software testing since 2000 and (2) the greatest open challenges and opportunities for future research in this area. While our colleagues’ input (consisting of about 30 responses) helped guide our choice of topics to cover and ultimately the writing of this paper, we by no means claim that our paper represents all the relevant and noteworthy research performed in the area of software testing in the time period considered—a task that would require far more space and time than we have available. Nevertheless, we hope that the approach we followed helps this paper better reflect not only our views, but also those of the software testing community in general."
]
} |
1607.08506 | 2501157883 | Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission critical applications, financial, and real time systems. In this work we focused on early detection of performance bugs, our software under study was a real time system used in the mobile advertisement marketing domain. Goal: Find a simple and easy to implement solution, predicting performance bugs. Method: We built several models using four machine learning methods, commonly used for defect prediction: C4.5 Decision Trees, Naive Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, and precision = 0.96). We show that reducing the number of changes delivered on a commit, can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners to eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs. | Failures after the release of a product also have momentous effects, especially for large-scale commercial distributions @cite_18 . Even though developers dedicate large amounts of effort and time to testing, they can never be sure that the system is absolutely reliable. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2145458045"
],
"abstract": [
"Developers frequently use inefficient code sequences that could be fixed by simple patches. These inefficient code sequences can cause significant performance degradation and resource waste, referred to as performance bugs. Meager increases in single threaded performance in the multi-core era and increasing emphasis on energy efficiency call for more effort in tackling performance bugs. This paper conducts a comprehensive study of 110 real-world performance bugs that are randomly sampled from five representative software suites (Apache, Chrome, GCC, Mozilla, and MySQL). The findings of this study provide guidance for future work to avoid, expose, detect, and fix performance bugs. Guided by our characteristics study, efficiency rules are extracted from 25 patches and are used to detect performance bugs. 332 previously unknown performance problems are found in the latest versions of MySQL, Apache, and Mozilla applications, including 219 performance problems found by applying rules across applications."
]
} |
1607.08506 | 2501157883 | Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission critical applications, financial, and real time systems. In this work we focused on early detection of performance bugs, our software under study was a real time system used in the mobile advertisement marketing domain. Goal: Find a simple and easy to implement solution, predicting performance bugs. Method: We built several models using four machine learning methods, commonly used for defect prediction: C4.5 Decision Trees, Naive Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, and precision = 0.96). We show that reducing the number of changes delivered on a commit, can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners to eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs. | The work that is closest to ours is by @cite_18 , which studied a set of 109 real-world performance bugs. They studied the bugs’ lifetime from inception to fix, their root causes, and their introduction mechanisms, in order to create rule-based detectors. We used a comparable approach to create detectors (referred to as "patterns" in our study) with the aim of identifying similar performance bugs. They studied software written in Java, C, C++, and JavaScript; ours was written in Python. (Programming language does affect the performance of the models, as shown by @cite_15 .)
Moreover, we focused on finding an efficient method to automatically detect performance bugs based on data extracted through the use of patterns (197 real-world performance bugs in our case). In summary, @cite_18 focused on analyzing the characteristics of performance bugs, while in this study our focus is on understanding the contribution of each source code attribute, as extracted from the source code repository, to the predictive power of the several machine learning algorithms that we used. Lastly, our general approach and the methodology for creating the prediction model were meant to be easily reproducible in the future. Therefore, our work is complementary. | {
"cite_N": [
"@cite_18",
"@cite_15"
],
"mid": [
"2145458045",
"2021509116"
],
"abstract": [
"Developers frequently use inefficient code sequences that could be fixed by simple patches. These inefficient code sequences can cause significant performance degradation and resource waste, referred to as performance bugs. Meager increases in single threaded performance in the multi-core era and increasing emphasis on energy efficiency call for more effort in tackling performance bugs. This paper conducts a comprehensive study of 110 real-world performance bugs that are randomly sampled from five representative software suites (Apache, Chrome, GCC, Mozilla, and MySQL). The findings of this study provide guidance for future work to avoid, expose, detect, and fix performance bugs. Guided by our characteristics study, efficiency rules are extracted from 25 patches and are used to detect performance bugs. 332 previously unknown performance problems are found in the latest versions of MySQL, Apache, and Mozilla applications, including 219 performance problems found by applying rules across applications.",
"We compare the effectiveness of four modeling methods--negative binomial regression, recursive partitioning, random forests and Bayesian additive regression trees--for predicting the files likely to contain the most faults for 28 to 35 releases of three large industrial software systems. Predictor variables included lines of code, file age, faults in the previous release, changes in the previous two releases, and programming language. To compare the effectiveness of the different models, we use two metrics--the percent of faults contained in the top 20 of files identified by the model, and a new, more general metric, the fault-percentile-average. The negative binomial regression and random forests models performed significantly better than recursive partitioning and Bayesian additive regression trees, as assessed by either of the metrics. For each of the three systems, the negative binomial and random forests models identified 20 of the files in each release that contained an average of 76 to 94 of the faults."
]
} |
1607.08506 | 2501157883 | Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission critical applications, financial, and real time systems. In this work we focused on early detection of performance bugs; our software under study was a real time system used in the mobile advertisement marketing domain. Goal: Find a simple and easy to implement solution, predicting performance bugs. Method: We built several models using four machine learning methods, commonly used for defect prediction: C4.5 Decision Trees, Naive Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, and precision = 0.96). We show that reducing the number of changes delivered on a commit can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners to eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs. | Static analysis tools can also eliminate performance bugs @cite_17 . However, this approach requires knowledge of pattern templates, while ours does not (as we omit pattern type info in our models), hence the complementarity of our work. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2060015309"
],
"abstract": [
"Performance bugs are programming errors that slow down program execution. While existing techniques can detect various types of performance bugs, a crucial and practical aspect of performance bugs has not received the attention it deserves: how likely are developers to fix a performance bug? In practice, fixing a performance bug can have both benefits and drawbacks, and developers fix a performance bug only when the benefits outweigh the drawbacks. Unfortunately, for many performance bugs, the benefits and drawbacks are difficult to assess accurately. This paper presents C aramel , a novel static technique that detects and fixes performance bugs that have non-intrusive fixes likely to be adopted by developers. Each performance bug detected by C aramel is associated with a loop and a condition. When the condition becomes true during the loop execution, all the remaining computation performed by the loop is wasted. Developers typically fix such performance bugs because these bugs waste computation in loops and have non-intrusive fixes: when some condition becomes true dynamically, just break out of the loop. Given a program, C aramel detects such bugs statically and gives developers a potential source-level fix for each bug. We evaluate C aramel on real-world applications, including 11 popular Java applications (e.g., Groovy, Log4J, Lucene, Struts, Tomcat, etc) and 4 widely used C C++ applications (Chromium, GCC, Mozilla, and MySQL). C aramel finds 61 new performance bugs in the Java applications and 89 new performance bugs in the C C++ applications. Based on our bug reports, developers so far have fixed 51 and 65 performance bugs in the Java and C C++ applications, respectively. Most of the remaining bugs are still under consideration by developers."
]
} |
1607.08434 | 2951194323 | With the spread of wearable devices and head mounted cameras, a wide range of application requiring precise user localization is now possible. In this paper we propose to treat the problem of obtaining the user position with respect to a known environment as a video registration problem. Video registration, i.e. the task of aligning an input video sequence to a pre-built 3D model, relies on a matching process of local keypoints extracted on the query sequence to a 3D point cloud. The overall registration performance is strictly tied to the actual quality of this 2D-3D matching, and can degrade if environmental conditions such as steep changes in lighting like the ones between day and night occur. To effectively register an egocentric video sequence under these conditions, we propose to tackle the source of the problem: the matching process. To overcome the shortcomings of standard matching techniques, we introduce a novel embedding space that allows us to obtain robust matches by jointly taking into account local descriptors, their spatial arrangement and their temporal robustness. The proposal is evaluated using unconstrained egocentric video sequences both in terms of matching quality and resulting registration performance using different 3D models of historical landmarks. The results show that the proposed method can outperform state of the art registration algorithms, in particular when dealing with the challenges of night and day sequences. | The task of registering a video sequence to a 3D model can be performed in two fundamentally different ways. The first, while not being a proper registration technique in the sense that the 3D model is not pre-built but learned online, is Simultaneous Localization and Mapping (SLAM) @cite_47 @cite_38 @cite_8 . SLAM jointly builds the 3D model of the scene and locates the camera in the environment using the 3D model as reference. 
As in video registration, this process is often performed through robust keypoint matching; a major difference, however, is that the camera used to build the model and to acquire the query frames is the same, so the internal camera parameters remain constant throughout the problem and both the model and the query sequence share the same lighting and environmental conditions. Although SLAM is widely popular in the field of robotics, few SLAM approaches have been extended to the task of video registration. | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_8"
],
"mid": [
"",
"1970504153",
"2210972093"
],
"abstract": [
"",
"We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.",
"This paper proposes a direct monocular SLAM algorithm that estimates a dense reconstruction of a scene in real-time on a CPU. Highly textured image areas are mapped using standard direct mapping techniques [1], that minimize the photometric error across different views. We make the assumption that homogeneous-color regions belong to approximately planar areas. Our contribution is a new algorithm for the estimation of such planar areas, based on the information of a superpixel segmentation and the semidense map from highly textured areas. We compare our approach against several alternatives using the public TUM dataset [2] and additional live experiments with a hand-held camera. We demonstrate that our proposal for piecewise planar monocular SLAM is faster, more accurate and more robust than the piecewise planar baseline [3]. In addition, our experimental results show how the depth regularization of monocular maps can damage its accuracy, being the piecewise planar assumption a reasonable option in indoor scenarios."
]
} |
1607.08188 | 2489253441 | Trajectory segmentation is the process of subdividing a trajectory into parts either by grouping points similar with respect to some measure of interest, or by minimizing a global objective function. Here we present a novel online algorithm for segmentation and summary, based on point density along the trajectory, and based on the nature of the naturally occurring structure of intermittent bouts of locomotive and local activity. We show an application to visualization of trajectory datasets, and discuss the use of the summary as an index allowing efficient queries which are otherwise impossible or computationally expensive, over very large datasets. | In the field of Ecology, Trajectory segmentation is widely used in order to segment the path of an animal into functionally homogeneous units. The main approach used is to compute a single point-wise feature along the trajectory and then group similar points, with respect to the feature. For example, a method of change-point analysis @cite_32 has been used in conjunction with Residence Time (a metric of the total amount of time spent in the vicinity of a point @cite_1 ). Another approach is to segment a trajectory with respect to the momentary behavior of an animal along a path @cite_28 . | {
"cite_N": [
"@cite_28",
"@cite_1",
"@cite_32"
],
"mid": [
"2167975231",
"",
"2117227640"
],
"abstract": [
"Background The study of animal movement is experiencing rapid progress in recent years, forcefully driven by technological advancement. Biologgers with Acceleration (ACC) recordings are becoming increasingly popular in the fields of animal behavior and movement ecology, for estimating energy expenditure and identifying behavior, with prospects for other potential uses as well. Supervised learning of behavioral modes from acceleration data has shown promising results in many species, and for a diverse range of behaviors. However, broad implementation of this technique in movement ecology research has been limited due to technical difficulties and complicated analysis, deterring many practitioners from applying this approach. This highlights the need to develop a broadly applicable tool for classifying behavior from acceleration data.",
"",
"A methodology for model selection based on a penalized contrast is developed. This methodology is applied to the change-point problem, for estimating the number of change points and their location. We aim to complete previous asymptotic results by constructing algorithms that can be used in diverse practical situations. First, we propose an adaptive choice of the penalty function for automatically estimating the dimension of the model, i.e., the number of change points. In a Bayesian framework, we define the posterior distribution of the change-point sequence as a function of the penalized contrast. MCMC procedures are available for sampling this posterior distribution. The parameters of this distribution are estimated with a stochastic version of EM algorithm (SAEM). An application to EEG analysis and some Monte-Carlo experiments illustrate these algorithms."
]
} |
1607.08188 | 2489253441 | Trajectory segmentation is the process of subdividing a trajectory into parts either by grouping points similar with respect to some measure of interest, or by minimizing a global objective function. Here we present a novel online algorithm for segmentation and summary, based on point density along the trajectory, and based on the nature of the naturally occurring structure of intermittent bouts of locomotive and local activity. We show an application to visualization of trajectory datasets, and discuss the use of the summary as an index allowing efficient queries which are otherwise impossible or computationally expensive, over very large datasets. | A more general framework was also suggested @cite_23 , based on finding the segmentation with the minimum number of segments, such that a given metric will not differ within any segment by more than a pre-defined factor. For a wide range of metrics (such as speed, velocity, heading, curvature, etc.), this can be achieved in @math . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2101932598"
],
"abstract": [
"In this paper we address the problem of segmenting a trajectory such that each segment is in some sense homogeneous. We formally define different spatio-temporal criteria under which a trajectory can be homogeneous, including location, heading, speed, velocity, curvature, sinuosity, and curviness. We present a framework that allows us to segment any trajectory into a minimum number of segments under any of these criteria, or any combination of these criteria. In this framework, the segmentation problem can generally be solved in O(n log n) time, where n is the number of edges of the trajectory to be segmented."
]
} |
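The minimum-segmentation criterion described in the row above (the fewest segments such that a per-point metric stays within a tolerance inside each segment) can be illustrated with a small sketch. This greedy scan is an illustrative simplification under the assumption of a scalar metric and a max-min homogeneity criterion, not the cited framework's algorithm; the function name is my own:

```python
def min_segments(values, tol):
    """Greedily split a sequence into the minimum number of contiguous
    segments in which the chosen metric (e.g. speed or heading) varies
    by at most `tol`. Greedy extension is optimal for such monotone
    criteria and runs in O(n) for this max-min criterion."""
    segments, start = [], 0
    lo = hi = values[0]
    for i, v in enumerate(values[1:], start=1):
        lo, hi = min(lo, v), max(hi, v)
        if hi - lo > tol:  # segment can no longer stay homogeneous
            segments.append((start, i))
            start, lo, hi = i, v, v
    segments.append((start, len(values)))
    return segments
```

Each returned pair is a half-open index range; the decreasing-criteria case generalizes similarly.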
1607.08188 | 2489253441 | Trajectory segmentation is the process of subdividing a trajectory into parts either by grouping points similar with respect to some measure of interest, or by minimizing a global objective function. Here we present a novel online algorithm for segmentation and summary, based on point density along the trajectory, and based on the nature of the naturally occurring structure of intermittent bouts of locomotive and local activity. We show an application to visualization of trajectory datasets, and discuss the use of the summary as an index allowing efficient queries which are otherwise impossible or computationally expensive, over very large datasets. | Warped k-means @cite_3 adopts a completely different view of the segmentation problem. Since this is essentially a problem of clustering similar points, the method attempts (as in the well-known K-means algorithm @cite_27 ) to find centroids and assignments, in order to minimize the mean square distance of each point to the centroid it is assigned to, under the additional constraint that if two points are in the same cluster, so are all the points between them along the trajectory. | {
"cite_N": [
"@cite_27",
"@cite_3"
],
"mid": [
"2150593711",
"2116013313"
],
"abstract": [
"It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quantization schemes for 2^b quanta, b = 1, 2, ..., 7, are given numerically for Gaussian and for Laplacian distribution of signal amplitudes.",
"Many devices generate large amounts of data that follow some sort of sequentiality, e.g., motion sensors, e-pens, eye trackers, etc. and often these data need to be compressed for classification, storage, and or retrieval tasks. Traditional clustering algorithms can be used for this purpose, but unfortunately they do not cope with the sequential information implicitly embedded in such data. Thus, we revisit the well-known K-means algorithm and provide a general method to properly cluster sequentially-distributed data. We present Warped K-Means (WKM), a multi-purpose partitional clustering procedure that minimizes the sum of squared error criterion, while imposing a hard sequentiality constraint in the classification step. We illustrate the properties of WKM in three applications, one being the segmentation and classification of human activity. WKM outperformed five state-of-the-art clustering techniques to simplify data trajectories, achieving a recognition accuracy of near 97%, which is an improvement of around 66% over their peers. Moreover, such an improvement came with a reduction in the computational cost of more than one order of magnitude."
]
} |
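The sequentiality constraint summarized above (points in one cluster must be consecutive along the trajectory) makes the problem solvable exactly by dynamic programming over contiguous segments. The sketch below is an illustrative alternative to the authors' iterative WKM procedure, with hypothetical names and an O(k·n²) DP rather than their algorithm:

```python
import numpy as np

def contiguous_kmeans(points, k):
    """Partition a sequence of points into k contiguous segments,
    minimizing the sum of squared distances to segment centroids.
    Prefix sums give O(1) cost for any candidate segment."""
    pts = np.asarray(points, dtype=float)
    if pts.ndim == 1:
        pts = pts[:, None]
    n = len(pts)
    # Prefix sums: segment cost = sum ||p||^2 - ||sum p||^2 / len.
    s1 = np.vstack([np.zeros(pts.shape[1]), np.cumsum(pts, axis=0)])
    s2 = np.concatenate([[0.0], np.cumsum((pts ** 2).sum(axis=1))])

    def seg_cost(i, j):  # within-segment squared error of pts[i:j]
        return (s2[j] - s2[i]) - ((s1[j] - s1[i]) ** 2).sum() / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                cand = dp[c - 1, i] + seg_cost(i, j)
                if cand < dp[c, j]:
                    dp[c, j], back[c, j] = cand, i
    # Recover the segment boundaries from the backpointers.
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = back[c, j]
        bounds.append((i, j))
        j = i
    return list(reversed(bounds)), float(dp[k, n])
```

Unlike unconstrained K-means, this DP is globally optimal for the constrained objective, at the price of quadratic time in the trajectory length.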
1607.07956 | 2503134594 | Due to the lack of structured knowledge applied in learning distributed representation of categories, existing work cannot incorporate category hierarchies into entity information. We propose a framework that embeds entities and categories into a semantic space by integrating structured knowledge and taxonomy hierarchy from large knowledge bases. The framework allows to compute meaningful semantic relatedness between entities and categories. Our framework can handle both single-word concepts and multiple-word concepts with superior performance on concept categorization and yield state of the art results on dataless hierarchical classification. | , also known as knowledge embedding, is a family of methods to represent entities as vectors and relations as operations applied to entities such that certain properties are preserved. @cite_5 @cite_16 @cite_15 @cite_13 @cite_30 @cite_8 @cite_25 @cite_1 . For instance, the linear relational embedding @cite_5 @cite_16 applies a relation to an entity based on matrix-vector multiplication while TransE @cite_25 simplifies the operation to vector addition. To derive the embedding representation, they minimize a global loss function considering all (entity, relation, entity) triplets so that the embeddings encode meaningful semantics. Our approach is different from this line since we use probabilistic models instead of translation-based models. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2099752825",
"2101802482",
"2949403529",
"2106346128",
"205829674",
"2164695327",
"2156954687",
"2127795553"
],
"abstract": [
"Vast amounts of structured information have been published in the Semantic Web's Linked Open Data (LOD) cloud and their size is still growing rapidly. Yet, access to this information via reasoning and querying is sometimes difficult, due to LOD's size, partial data inconsistencies and inherent noisiness. Machine Learning offers an alternative approach to exploiting LOD's data with the advantages that Machine Learning algorithms are typically robust to both noise and data inconsistencies and are able to efficiently utilize non-deterministic dependencies in the data. From a Machine Learning point of view, LOD is challenging due to its relational nature and its scale. Here, we present an efficient approach to relational learning on LOD data, based on the factorization of a sparse tensor that scales to data consisting of millions of entities, hundreds of relations and billions of known facts. Furthermore, we show how ontological knowledge can be incorporated in the factorization to improve learning results and how computation can be distributed across multiple nodes. We demonstrate that our approach is able to factorize the YAGO 2 core ontology and globally predict statements for this large knowledge base using a single dual-core desktop computer. Furthermore, we show experimentally that our approach achieves good results in several relational learning tasks that are relevant to Linked Data. Once a factorization has been computed, our model is able to predict efficiently, and without any additional training, the likelihood of any of the 4.3 ⋅ 10^14 possible triples in the YAGO 2 core ontology.",
"Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations.",
"Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has received little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.",
"We introduce linear relational embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between these concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization.",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"We present Linear Relational Embedding (LRE), a new method of learning a distributed representation of concepts from data consisting of instances of relations between given concepts. Its final goal is to be able to generalize, i.e. infer new instances of these relations among the concepts. On a task involving family relationships we show that LRE can generalize better than any previously published method. We then show how LRE can be used effectively to find compact distributed representations for variable-sized recursive data structures, such as trees and lists.",
"",
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples."
]
} |
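The TransE idea mentioned above (a relation acts as a vector addition, so h + r should land near t for a true triple) reduces to a simple distance-based plausibility score. This is an illustrative snippet with toy vectors, not the training procedure from the cited paper:

```python
import numpy as np

def transe_score(head, relation, tail, norm=1):
    """TransE-style plausibility: a triple (h, r, t) is considered
    plausible when h + r is close to t, so a *lower* distance
    ||h + r - t|| means a better triple. Toy vectors, not trained."""
    diff = (np.asarray(head, float) + np.asarray(relation, float)
            - np.asarray(tail, float))
    return float(np.linalg.norm(diff, ord=norm))
```

Training would minimize this score for observed triples against corrupted ones via a margin loss; here only the scoring operation is shown.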
1607.07956 | 2503134594 | Due to the lack of structured knowledge applied in learning distributed representation of categories, existing work cannot incorporate category hierarchies into entity information. We propose a framework that embeds entities and categories into a semantic space by integrating structured knowledge and taxonomy hierarchy from large knowledge bases. The framework allows to compute meaningful semantic relatedness between entities and categories. Our framework can handle both single-word concepts and multiple-word concepts with superior performance on concept categorization and yield state of the art results on dataless hierarchical classification. | Another line of methods is based on the skip-gram model @cite_11 , a recently proposed embedding model that learns to predict each context word given the target word. This model tries to maximize the average log likelihood of the context word so that the embeddings encode meaningful semantics. For instance, Entity hierarchy embedding @cite_4 extends it to predict each context entity given target entity in KBs. @cite_17 proposed a method to jointly embed words and entities through jointly optimizing word-word, entity-word, and entity-entity predicting models. Our models extend this line of research by incorporating hierarchical category information to jointly embed categories and entities in the same semantic space. | {
"cite_N": [
"@cite_4",
"@cite_17",
"@cite_11"
],
"mid": [
"2250333922",
"2230586094",
"2950133940"
],
"abstract": [
"Existing distributed representations are limited in utilizing structured knowledge to improve semantic relatedness modeling. We propose a principled framework of embedding entities that integrates hierarchical information from large-scale knowledge bases. The novel embedding model associates each category node of the hierarchy with a distance metric. To capture structured semantics, the entity similarity of context prediction are measured under the aggregated metrics of relevant categories along all inter-entity paths. We show that both the entity vectors and category distance metrics encode meaningful semantics. Experiments in entity linking and entity search show superiority of the proposed method.",
"Named Entity Disambiguation (NED) refers to the task of resolving multiple named entity mentions in a document to their correct references in a knowledge base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method specifically designed for NED. The proposed method jointly maps words and entities into the same continuous vector space. We extend the skip-gram model by using two models. The KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words. By combining contexts based on the proposed embedding with standard NED features, we achieved state-of-the-art accuracy of 93.1 on the standard CoNLL dataset and 85.2 on the TAC 2010 dataset.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
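The skip-gram objective described above, maximizing the log likelihood of each context word given the target word, amounts per (target, context) pair to a softmax log probability. A minimal sketch under a full softmax follows (the cited work uses hierarchical softmax or negative sampling for efficiency; the function name is my own):

```python
import numpy as np

def skipgram_log_prob(target_vec, context_idx, output_embeddings):
    """log p(context | target) under the skip-gram full softmax:
    u_c . v_t - log(sum_w exp(u_w . v_t)), computed with the usual
    max-shift for numerical stability. Toy sketch, not word2vec."""
    scores = output_embeddings @ target_vec       # one score per vocab word
    m = scores.max()
    log_z = m + np.log(np.exp(scores - m).sum())  # log partition function
    return float(scores[context_idx] - log_z)
```

The entity-level extensions cited above replace word vectors with entity vectors but keep this predict-the-context objective.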
1607.07770 | 2491206165 | This paper formulates and presents a solution to the new problem of budgeted semantic video segmentation. Given a video, the goal is to accurately assign a semantic class label to every pixel in the video within a specified time budget. Typical approaches to such labeling problems, such as Conditional Random Fields (CRFs), focus on maximizing accuracy but do not provide a principled method for satisfying a time budget. For video data, the time required by CRF and related methods is often dominated by the time to compute low-level descriptors of supervoxels across the video. Our key contribution is the new budgeted inference framework for CRF models that intelligently selects the most useful subsets of descriptors to run on subsets of supervoxels within the time budget. The objective is to maintain an accuracy as close as possible to the CRF model with no time bound, while remaining within the time budget. Our second contribution is the algorithm for learning a policy for the sparse selection of supervoxels and their descriptors for budgeted CRF inference. This learning algorithm is derived by casting our problem in the framework of Markov Decision Processes, and then instantiating a state-of-the-art policy learning algorithm known as Classification-Based Approximate Policy Iteration. Our experiments on multiple video datasets show that our learning approach and framework is able to significantly reduce computation time, and maintain competitive accuracy under varying budgets. | Semantic video segmentation is mostly formulated as a graphical-model based labeling of supervoxels in the video @cite_17 @cite_39 @cite_32 @cite_1 @cite_13 @cite_16 @cite_36 @cite_43 . For example, graphical models were used for: i) Propagating manual annotations of supervoxels of the first few frames to other supervoxels in the video @cite_1 @cite_7 @cite_37 , or ii) Supervoxel labeling based on weak supervision in training @cite_31 .
The accuracy of such labeling can be improved by CRF-based reasoning about 3D relations @cite_22 or context @cite_16 among object classes in the scene. Therefore, it seems reasonable to develop our framework for an existing CRF-based labeling of supervoxels. None of these methods explicitly studied their runtime efficiency, except for a few empirical results on sensitivity to the total number of supervoxels used @cite_41 . | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_41",
"@cite_1",
"@cite_32",
"@cite_39",
"@cite_43",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"801273237",
"",
"61823270",
"",
"",
"",
"2115186366",
"",
"2029859592",
"1973071925",
"",
"2139086308"
],
"abstract": [
"",
"We present an approach for joint inference of 3D scene structure and semantic labeling for monocular video. Starting with monocular image stream, our framework produces a 3D volumetric semantic + occupancy map, which is much more useful than a series of 2D semantic label images or a sparse point cloud produced by traditional semantic segmentation and Structure from Motion (SfM) pipelines respectively. We derive a Conditional Random Field (CRF) model defined in the 3D space, that jointly infers the semantic category and occupancy for each voxel. Such a joint inference in the 3D CRF paves the way for more informed priors and constraints, which is otherwise not possible if solved separately in their traditional frameworks. We make use of class specific semantic cues that constrain the 3D structure in areas, where multiview constraints are weak. Our model comprises of higher order factors, which helps when the depth is unobservable. We also make use of class specific semantic cues to reduce either the degree of such higher order factors, or to approximately model them with unaries if possible. We demonstrate improved 3D structure and temporally consistent semantic segmentation for difficult, large scale, forward moving monocular image sequences.",
"",
"We describe an approach to incorporate scene topology and semantics into pixel-level object detection and localization. Our method requires video to determine occlusion regions and thence local depth ordering, and any visual recognition scheme that provides a score at local image regions, for instance object detection probabilities. We set up a cost functional that incorporates occlusion cues induced by object boundaries, label consistency and recognition priors, and solve it using a convex optimization scheme. We show that our method improves localization accuracy of existing recognition approaches, or equivalently provides semantic labels to pixel-level localization and segmentation.",
"",
"",
"",
"We propose a novel directed graphical model for label propagation in lengthy and complex video sequences. Given hand-labelled start and end frames of a video sequence, a variational EM based inference strategy propagates either one of several class labels or assigns an unknown class (void) label to each pixel in the video. These labels are used to train a multi-class classifier. The pixel labels estimated by this classifier are injected back into the Bayesian network for another iteration of label inference. The novel aspect of this iterative scheme, as compared to a recent approach [1], is its ability to handle occlusions. This is attributed to a hybrid of generative propagation and discriminative classification in a pseudo time-symmetric video model. The end result is a conservative labelling of the video; large parts of the static scene are labelled into known classes, and a void label is assigned to moving objects and remaining parts of the static scene. These labels can be used as ground truth data to learn the static parts of a scene from videos of it or more generally for semantic video segmentation. We demonstrate the efficacy of the proposed approach using extensive qualitative and quantitative tests over six challenging sequences. We bring out the advantages and drawbacks of our approach, both to encourage its repeatability and motivate future research directions.",
"",
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"We tackle the problem of semantic segmentation of dynamic scenes in video sequences. We propose to incorporate foreground object information into pixel labeling by jointly reasoning about semantic labels of super-voxels, object instance tracks and geometric relations between objects. We take an exemplar approach to object modeling by using a small set of object annotations and exploring the temporal consistency of object motion. After generating a set of moving object hypotheses, we design a CRF framework that jointly models the super-voxels and object instances. The optimal semantic labeling is inferred by the MAP estimation of the model, which is solved by a single move-making-based optimization procedure. We demonstrate the effectiveness of our method on three public datasets and show that our model can achieve superior or comparable results to the state-of-the-art with less object-level supervision.",
"",
"The effective propagation of pixel labels through the spatial and temporal domains is vital to many computer vision and multimedia problems, yet little attention has been paid to temporal video domain propagation in the past. Previous video label propagation algorithms largely avoided the use of dense optical flow estimation due to their computational costs and inaccuracies, and relied heavily on complex (and slower) appearance models. We show in this paper the limitations of pure motion and appearance based propagation methods alone, especially the fact that their performances vary on different types of videos. We propose a probabilistic framework that estimates the reliability of the sources and automatically adjusts the weights between them. Our experiments show that the “dragging effect” of pure optical-flow-based methods is effectively avoided, while the problems of pure appearance-based methods such as the large intra-class variance are also effectively handled."
]
} |
1607.07967 | 2951925966 | In real-world datasets (e.g., the DBpedia query log), queries built on well-designed patterns containing only AND and OPT operators (for short, WDAO-patterns) account for a large proportion of all SPARQL queries. In this paper, we present a plugin-based framework for all SELECT queries built on WDAO-patterns, named PIWD. The framework is based on a parse tree, called a WDAO-tree, whose leaves are basic graph patterns (BGPs) and whose inner nodes are OPT operators. We prove that for any WDAO-pattern, its parse tree can be equivalently transformed into a WDAO-tree. Based on the proposed framework, we can employ any query engine to evaluate BGPs and thereby conveniently evaluate queries built on WDAO-patterns. Theoretically, we can reduce the query evaluation of WDAO-patterns to subgraph homomorphism, as for BGPs, since BGP query evaluation is equivalent to subgraph homomorphism. Finally, our preliminary experiments on gStore and RDF-3X show that PIWD can answer all queries built on WDAO-patterns effectively and efficiently. | BGP query algorithms have been developed for many years. Existing algorithms mainly focus on finding all embeddings in a single large graph; examples include Ullmann @cite_13 , VF2 @cite_24 , QuickSI @cite_3 , GraphQL @cite_22 , SPath @cite_7 , STW @cite_17 and TurboISO @cite_8 . Several optimization methods have been adopted in these techniques, such as adjusting the matching order and pruning candidate vertices. However, the evaluation of well-designed SPARQL is not equivalent to the BGP query evaluation problem, since it involves inexact matching. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_24",
"@cite_13",
"@cite_17"
],
"mid": [
"2166959472",
"2143363350",
"2035173902",
"",
"2147405597",
"2126359798",
""
],
"abstract": [
"With the prevalence of graph data in a variety of domains, there is an increasing need for a language to query and manipulate graphs with heterogeneous attributes and structures. We present a graph query language (GraphQL) that supports bulk operations on graphs with arbitrary structures and annotated attributes. In this language, graphs are the basic unit of information and each query manipulates one or more collections of graphs at a time. The core of GraphQL is a graph algebra extended from the relational algebra in which the selection operator is generalized to graph pattern matching and a composition operator is introduced for rewriting matched graphs. Then, we investigate access methods of the selection operator. Pattern matching over large graphs is challenging due to the NP-completeness of subgraph isomorphism. We address this by a combination of techniques: use of neighborhood subgraphs and profiles, joint reduction of the search space, and optimization of the search order. Experimental results on real and synthetic large graphs demonstrate that graph-specific optimizations outperform an SQL-based implementation by orders of magnitude.",
"The dramatic proliferation of sophisticated networks has resulted in a growing need for supporting effective querying and mining methods over such large-scale graph-structured data. At the core of many advanced network operations lies a common and critical graph query primitive: how to search graph structures efficiently within a large network? Unfortunately, the graph query is hard due to the NP-complete nature of subgraph isomorphism. It becomes even challenging when the network examined is large and diverse. In this paper, we present a high performance graph indexing mechanism, SPath, to address the graph query problem on large networks. SPath leverages decomposed shortest paths around vertex neighborhood as basic indexing units, which prove to be both effective in graph search space pruning and highly scalable in index construction and deployment. Via SPath, a graph query is processed and optimized beyond the traditional vertex-at-a-time fashion to a more efficient path-at-a-time way: the query is first decomposed to a set of shortest paths, among which a subset of candidates with good selectivity is picked by a query plan optimizer; Candidate paths are further joined together to help recover the query graph to finalize the graph query processing. We evaluate SPath with the state-of-the-art GraphQL on both real and synthetic data sets. Our experimental studies demonstrate the effectiveness and scalability of SPath, which proves to be a more practical and efficient indexing method in addressing graph queries on large networks.",
"Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"",
"We present an algorithm for graph isomorphism and subgraph isomorphism suited for dealing with large graphs. A first version of the algorithm has been presented in a previous paper, where we examined its performance for the isomorphism of small and medium size graphs. The algorithm is improved here to reduce its spatial complexity and to achieve a better performance on large graphs; its features are analyzed in detail with special reference to time and memory requirements. The results of a testing performed on a publicly available database of synthetically generated graphs and on graphs relative to a real application dealing with technical drawings are presented, confirming the effectiveness of the approach, especially when working with large graphs.",
"Subgraph isomorphism can be determined by means of a brute-force tree-search enumeration procedure. In this paper a new algorithm is introduced that attains efficiency by inferentially eliminating successor nodes in the tree search. To assess the time actually taken by the new algorithm, subgraph isomorphism, clique detection, graph isomorphism, and directed graph isomorphism experiments have been carried out with random and with various nonrandom graphs. A parallel asynchronous logic-in-memory implementation of a vital part of the algorithm is also described, although this hardware has not actually been built. The hardware implementation would allow very rapid determination of isomorphism.",
""
]
} |
1607.07967 | 2951925966 | In real-world datasets (e.g., the DBpedia query log), queries built on well-designed patterns containing only AND and OPT operators (for short, WDAO-patterns) account for a large proportion of all SPARQL queries. In this paper, we present a plugin-based framework for all SELECT queries built on WDAO-patterns, named PIWD. The framework is based on a parse tree, called a WDAO-tree, whose leaves are basic graph patterns (BGPs) and whose inner nodes are OPT operators. We prove that for any WDAO-pattern, its parse tree can be equivalently transformed into a WDAO-tree. Based on the proposed framework, we can employ any query engine to evaluate BGPs and thereby conveniently evaluate queries built on WDAO-patterns. Theoretically, we can reduce the query evaluation of WDAO-patterns to subgraph homomorphism, as for BGPs, since BGP query evaluation is equivalent to subgraph homomorphism. Finally, our preliminary experiments on gStore and RDF-3X show that PIWD can answer all queries built on WDAO-patterns effectively and efficiently. | It has been shown that the complexity of the evaluation problem for the well-designed fragment is coNP-complete @cite_11 . Quasi well-designed pattern trees (QWDPTs), which are undirected and ordered, have been proposed @cite_19 ; this work aims at the analysis of containment and equivalence of well-designed patterns. Efficient evaluation and semantic optimization of WDPTs have been proposed in @cite_15 . SPAM is a tool for SPARQL analysis and manipulation @cite_10 . All of the above aim at checking well-designed patterns or at complexity analysis, without evaluating well-designed patterns. Our WDAO-tree differs from QWDPTs in structure, and it emphasizes reconstructing query plans. Optimization of the OPT operation has been proposed in @cite_2 , which differs from our work, since we aim to provide a plugin for any BGP query engine to handle WDAO-patterns in SPARQL queries. | {
"cite_N": [
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2076631482",
"2095664693",
"1967827263",
"2018369355",
"2131785201"
],
"abstract": [
"Static analysis is a fundamental task in query optimization. In this paper we study static analysis and optimization techniques for SPARQL, which is the standard language for querying Semantic Web data. Of particular interest for us is the optionality feature in SPARQL. It is crucial in Semantic Web data management, where data sources are inherently incomplete and the user is usually interested in partial answers to queries. This feature is one of the most complicated constructors in SPARQL and also the one that makes this language depart from classical query languages such as relational conjunctive queries. We focus on the class of well-designed SPARQL queries, which has been proposed in the literature as a fragment of the language with good properties regarding query evaluation. We first propose a tree representation for SPARQL queries, called pattern trees, which captures the class of well-designed SPARQL graph patterns and which can be considered as a query execution plan. Among other results, we propose several transformation rules for pattern trees, a simple normal form, and study equivalence and containment. We also study the enumeration and counting problems for this class of queries.",
"SPARQL basic graph pattern (BGP) (a.k.a. SQL inner-join) query optimization is a well-researched area. However, optimization of OPTIONAL pattern queries (a.k.a. SQL left-outer-joins) poses additional challenges, due to the restrictions on the reordering of left-outer-joins. The occurrence of such queries tends to be as high as 50% of the total queries (e.g., DBPedia query logs). In this paper, we present Left Bit Right (LBR), a technique for well-designed nested BGP and OPTIONAL pattern queries. Through LBR, we propose a novel method to represent such queries using a graph of supernodes, which is used to aggressively prune the RDF triples, with the help of compressed indexes. We also propose novel optimization strategies -- first of a kind, to the best of our knowledge -- that combine together the characteristics of acyclicity of queries, minimality, and nullification and best-match operators. In this paper, we focus on OPTIONAL patterns without UNIONs or FILTERs, but we also show how UNIONs and FILTERs can be handled with our technique using a query rewrite. Our evaluation on RDF graphs of up to and over one billion triples, on a commodity laptop with 8 GB memory, shows that LBR can process well-designed low-selectivity complex queries up to 11 times faster compared to the state-of-the-art RDF column-stores such as Virtuoso and MonetDB, and for highly selective queries, LBR is at par with them.",
"Conjunctive queries (CQs) fail to provide an answer when the pattern described by the query does not exactly match the data. CQs might thus be too restrictive as a querying mechanism when data is semistructured or incomplete. The semantic web therefore provides a formalism - known as well-designed pattern trees (WDPTs) - that tackles this problem: WDPTs allow us to match patterns over the data, if available, but do not fail to give an answer otherwise. Here we abstract away the specifics of semantic web applications and study WDPTs over arbitrary relational schemas. Our language properly subsumes the class of CQs. Hence, WDPT evaluation is intractable. We identify structural properties of WDPTs that lead to tractability of various variants of the evaluation problem. For checking if a WDPT is equivalent to one in our tractable class, we prove 2EXPTIME-membership. As a corollary, we obtain fixed-parameter tractability of (variants of) the evaluation problem. Our techniques also allow us to develop a theory of approximations for WDPTs.",
"SQL developers are used to having elaborate tools which help them in writing queries. In contrast, the creation of tools to assist users in the development of SPARQL queries is still in its infancy. In this system demo, we present the SPARQL Analysis and Manipulation (SPAM) tool, which provides help for the development of SPARQL queries. The main features of the SPAM tool comprise an editor with both text and graphical interface, as well as various functions for the static and dynamic analysis of SPARQL queries.",
"SPARQL is the standard language for querying RDF data. In this article, we address systematically the formal study of the database aspects of SPARQL, concentrating in its graph pattern matching facility. We provide a compositional semantics for the core part of SPARQL, and study the complexity of the evaluation of several fragments of the language. Among other complexity results, we show that the evaluation of general SPARQL patterns is PSPACE-complete. We identify a large class of SPARQL patterns, defined by imposing a simple and natural syntactic restriction, where the query evaluation problem can be solved more efficiently. This restriction gives rise to the class of well-designed patterns. We show that the evaluation problem is coNP-complete for well-designed patterns. Moreover, we provide several rewriting rules for well-designed patterns whose application may have a considerable impact in the cost of evaluating SPARQL queries."
]
} |
1607.07967 | 2951925966 | In real-world datasets (e.g., the DBpedia query log), queries built on well-designed patterns containing only AND and OPT operators (for short, WDAO-patterns) account for a large proportion of all SPARQL queries. In this paper, we present a plugin-based framework for all SELECT queries built on WDAO-patterns, named PIWD. The framework is based on a parse tree, called a WDAO-tree, whose leaves are basic graph patterns (BGPs) and whose inner nodes are OPT operators. We prove that for any WDAO-pattern, its parse tree can be equivalently transformed into a WDAO-tree. Based on the proposed framework, we can employ any query engine to evaluate BGPs and thereby conveniently evaluate queries built on WDAO-patterns. Theoretically, we can reduce the query evaluation of WDAO-patterns to subgraph homomorphism, as for BGPs, since BGP query evaluation is equivalent to subgraph homomorphism. Finally, our preliminary experiments on gStore and RDF-3X show that PIWD can answer all queries built on WDAO-patterns effectively and efficiently. | RDF-3X @cite_14 , TripleBit @cite_9 , SW-Store @cite_23 , Hexastore @cite_0 and gStore @cite_1 @cite_6 achieve high performance on BGPs. RDF-3X creates indexes in the form of B+ trees, and TripleBit in the form of ID-Chunk bit matrices. All of them perform efficiently since they concentrate on the design of indexing and storage. However, they can only support exact SPARQL queries, since they replace all literals (in RDF triples) by IDs using a mapping dictionary; in other words, they cannot support WDAO-patterns well. Virtuoso @cite_4 and MonetDB @cite_29 provide open-source and commercial services. Jena @cite_21 and Sesame @cite_20 are free, open-source Java frameworks for building Semantic Web and Linked Data applications; they focus on SPARQL parsing and do not support large-scale data. Our work is independent of these BGP query frameworks, and any BGP query engine is adaptable to our plugin. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_29",
"@cite_1",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_20"
],
"mid": [
"2000656232",
"1495014783",
"2036380454",
"2124851765",
"1982177147",
"2101491706",
"2121041488",
"2135577024",
"2137152505",
"1812636409"
],
"abstract": [
"RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The \"pay-as-you-go\" nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.",
"This paper discusses RDF related work in the context of OpenLink Virtuoso, a general purpose relational federated database and applications platform. The use cases are dual: 1) large RDF repositories; 2) making arbitrary relational data queryable with SPARQL and RDF by mapping on demand. We discuss adapting a relational engine for native RDF support with dedicated data types, bitmap indexing and SQL optimizer techniques. We discuss adaptations of the query engine for running on shared-nothing clusters, providing virtually unbounded scalability for RDF or relational warehouses. We further discuss mapping existing relational data into RDF for SPARQL access without converting the data into physical triples. We present conclusions and metrics as well as a number of use cases, from DBpedia to bioinformatics and collaborative web applications.",
"The volume of RDF data continues to grow over the past decade and many known RDF datasets have billions of triples. A grand challenge of managing this huge RDF data is how to access this big RDF data efficiently. A popular approach to addressing the problem is to build a full set of permutations of (S, P, O) indexes. Although this approach has been shown to accelerate joins by orders of magnitude, the large space overhead limits the scalability of this approach and makes it heavyweight. In this paper, we present TripleBit, a fast and compact system for storing and accessing RDF data. The design of TripleBit has three salient features. First, the compact design of TripleBit reduces both the size of stored RDF data and the size of its indexes. Second, TripleBit introduces two auxiliary index structures, ID-Chunk bit matrix and ID-Predicate bit matrix, to minimize the cost of index selection during query evaluation. Third, its query processor dynamically generates an optimal execution ordering for join queries, leading to fast query execution and effective reduction on the size of intermediate results. Our experiments show that TripleBit outperforms RDF-3X, MonetDB, BitMat on LUBM, UniProt and BTC 2012 benchmark queries and it offers orders of magnitude performance improvement for some complex join queries.",
"Database systems tend to achieve only low IPC (instructions-per-cycle) efficiency on modern CPUs in compute-intensive application areas like decision support, OLAP and multimedia retrieval. This paper starts with an in-depth investigation into the reasons why this happens, focusing on the TPC-H benchmark. Our analysis of various relational systems and MonetDB leads us to a new set of guidelines for designing a query processor. The second part of the paper describes the architecture of our new X100 query engine for the MonetDB system that follows these guidelines. On the surface, it resembles a classical Volcano-style engine, but the crucial difference of basing all execution on the concept of vector processing makes it highly CPU efficient. We evaluate the power of MonetDB X100 on the 100GB version of TPC-H, showing its raw execution power to be between one and two orders of magnitude higher than previous technology.",
"Due to the increasing use of RDF data, efficient processing of SPARQL queries over RDF datasets has become an important issue. However, existing solutions suffer from two limitations: 1) they cannot answer SPARQL queries with wildcards in a scalable manner; and 2) they cannot handle frequent updates in RDF repositories efficiently. Thus, most of them have to reprocess the dataset from scratch. In this paper, we propose a graph-based approach to store and query RDF data. Rather than mapping RDF triples into a relational database as most existing methods do, we store RDF data as a large graph. A SPARQL query is then converted into a corresponding subgraph matching query. In order to speed up query processing, we develop a novel index, together with some effective pruning rules and efficient search algorithms. Our method can answer exact SPARQL queries and queries with wildcards in a uniform manner. We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solution.",
"The new Semantic Web recommendations for RDF, RDFS and OWL have, at their heart, the RDF graph. Jena2, a second-generation RDF toolkit, is similarly centered on the RDF graph. RDFS and OWL reasoning are seen as graph-to-graph transforms, producing graphs of virtual triples. Rich APIs are provided. The Model API includes support for other aspects of the RDF recommendations, such as containers and reification. The Ontology API includes support for RDFS and OWL, including advanced OWL Full support. Jena includes the de facto reference RDF/XML parser, and provides RDF/XML output using the full range of the rich RDF/XML grammar. N3 I/O is supported. RDF graphs can be stored in-memory or in databases. Jena's query language, RDQL, and the Web API are both offered for the next round of standardization.",
"We address efficient processing of SPARQL queries over RDF datasets. The proposed techniques, incorporated into the gStore system, handle, in a uniform and scalable manner, SPARQL queries with wildcards and aggregate operators over dynamic RDF datasets. Our approach is graph based. We store RDF data as a large graph and also represent a SPARQL query as a query graph. Thus, the query answering problem is converted into a subgraph matching problem. To achieve efficient and scalable query processing, we develop an index, together with effective pruning rules and efficient search algorithms. We propose techniques that use this infrastructure to answer aggregation queries. We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solutions.",
"Despite the intense interest towards realizing the Semantic Web vision, most existing RDF data management schemes are constrained in terms of efficiency and scalability. Still, the growing popularity of the RDF format arguably calls for an effort to offset these drawbacks. Viewed from a relational-database perspective, these constraints are derived from the very nature of the RDF data model, which is based on a triple format. Recent research has attempted to address these constraints using a vertical-partitioning approach, in which separate two-column tables are constructed for each property. However, as we show, this approach suffers from similar scalability drawbacks on queries that are not bound by RDF property value. In this paper, we propose an RDF storage scheme that uses the triple nature of RDF as an asset. This scheme enhances the vertical partitioning idea and takes it to its logical conclusion. RDF data is indexed in six possible ways, one for each possible ordering of the three RDF elements. Each instance of an RDF element is associated with two vectors; each such vector gathers elements of one of the other types, along with lists of the third-type resources attached to each vector element. Hence, a sextuple-indexing scheme emerges. This format allows for quick and scalable general-purpose query processing; it confers significant advantages (up to five orders of magnitude) compared to previous approaches for RDF data management, at the price of a worst-case five-fold increase in index space. We experimentally document the advantages of our approach on real-world and synthetic data sets with practical queries.",
"Efficient management of RDF data is an important prerequisite for realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance of RDF databases and consider a recent suggestion, \"property tables\". We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertically partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds. Encouraged by these results, we describe the architecture of SW-Store, a new DBMS we are actively building that implements these techniques to achieve high performance RDF data management.",
"RDF and RDF Schema are two W3C standards aimed at enriching the Web with machine-processable semantic data.We have developed Sesame, an architecture for efficient storage and expressive querying of large quantities of metadata in RDF and RDF Schema. Sesame's design and implementation are independent from any specific storage device. Thus, Sesame can be deployed on top of a variety of storage devices, such as relational databases, triple stores, or object-oriented databases, without having to change the query engine or other functional modules. Sesame offers support for concurrency control, independent export of RDF and RDFS information and a query engine for RQL, a query language for RDF that offers native support for RDF Schema semantics. We present an overview of Sesame as a generic architecture, as well as its implementation and our first experiences with this implementation."
]
} |
1607.08086 | 2496520886 | As capacity and complexity of on-chip cache memory hierarchy increases, the service cost to the critical loads from Last Level Cache (LLC), which are frequently repeated, has become a major concern. The processor may stall for a considerable interval while waiting to access the data stored in the cache blocks in LLC, if there are no independent instructions to execute. To provide accelerated service to the critical loads requests from LLC, this work concentrates on leveraging the additional capacity offered by replacing SRAM-based L2 with Spin-Transfer Torque Random Access Memory (STT-RAM) to accommodate frequently accessed cache blocks in exclusive read mode in favor of reducing the overall read service time. Our proposed technique partitions L2 cache into two STT-RAM arrangements with different write performance and data retention time. The retention-relaxed STT-RAM arrays are utilized to effectively deal with the regular L2 cache requests while the high retention STT-RAM arrays in L2 are selected for maintaining repeatedly read accessed cache blocks from LLC by incurring negligible energy consumption for data retention. Our experimental results show that the proposed technique can reduce the mean L2 read miss ratio by 51.4 and increase the IPC by 11.7 on average across PARSEC benchmark suite while significantly decreasing the total L2 energy consumption compared to conventional SRAM-based L2 design. | The hybrid LLC design proposed in [3] utilizes intelligent adaptive data block placement by taking each cache line's future access pattern into consideration. The write accesses are categorized into three main classes: prefetch-write, core-write and demand-write. Around 26 On the other hand, data blocks that experience a long interval between consecutive reads are placed into the STT-RAM to benefit from the low leakage power cost of long residency offered by non-volatile devices @cite_27 . 
This design may incur prediction overhead, since extra energy is consumed to track the access pattern of each cache line and to make placement decisions. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2127419525"
],
"abstract": [
"Emerging Non-Volatile Memories (NVM) such as Spin-Torque Transfer RAM (STT-RAM) and Resistive RAM (RRAM) have been explored as potential alternatives for traditional SRAM-based Last-Level-Caches (LLCs) due to the benefits of higher density and lower leakage power. However, NVM technologies have long latency and high energy overhead associated with the write operations. Consequently, a hybrid STT-RAM and SRAM based LLC architecture has been proposed in the hope of exploiting high density and low leakage power of STT-RAM and low write overhead of SRAM. Such a hybrid cache design relies on an intelligent block placement policy that makes good use of the characteristics of both STT-RAM and SRAM technology."
]
} |
1607.08086 | 2496520886 | As capacity and complexity of on-chip cache memory hierarchy increases, the service cost to the critical loads from Last Level Cache (LLC), which are frequently repeated, has become a major concern. The processor may stall for a considerable interval while waiting to access the data stored in the cache blocks in LLC, if there are no independent instructions to execute. To provide accelerated service to the critical loads requests from LLC, this work concentrates on leveraging the additional capacity offered by replacing SRAM-based L2 with Spin-Transfer Torque Random Access Memory (STT-RAM) to accommodate frequently accessed cache blocks in exclusive read mode in favor of reducing the overall read service time. Our proposed technique partitions L2 cache into two STT-RAM arrangements with different write performance and data retention time. The retention-relaxed STT-RAM arrays are utilized to effectively deal with the regular L2 cache requests while the high retention STT-RAM arrays in L2 are selected for maintaining repeatedly read accessed cache blocks from LLC by incurring negligible energy consumption for data retention. Our experimental results show that the proposed technique can reduce the mean L2 read miss ratio by 51.4 and increase the IPC by 11.7 on average across PARSEC benchmark suite while significantly decreasing the total L2 energy consumption compared to conventional SRAM-based L2 design. | In @cite_11 , Low-Retention (LR) and High-Retention (HR) STT-RAM arrays are utilized simultaneously to balance performance against energy consumption. To accomplish this, the retention time of the STT-RAM is relaxed in the LR architecture to improve write speed. On the other hand, the HR cache offers long-term residency of data blocks while incurring very small leakage power. 
Given the features that each cache design offers, the proposed method places write-intensive cache blocks in the LR cache in favor of performance, while read-intensive cache blocks are kept in the HR arrays to meet the required power budget. Furthermore, a migration policy is devised to transfer data blocks to the proper cache design by continuously monitoring the access pattern of each cache line. Since the lifespan of cache lines in the LR arrays is limited to the retention time, which is determined at design time, a refresh mechanism periodically refreshes cache lines to prevent data loss. RRAP utilizes the same refresh approach to maintain the stability of data in the LRSC design. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2108048675"
],
"abstract": [
"Spin-transfer torque random access memory (STT-RAM) has received increasing attention because of its attractive features: good scalability, zero standby power, non-volatility and radiation hardness. The use of STT-RAM technology in the last level on-chip caches has been proposed as it minimizes cache leakage power with technology scaling down. Furthermore, the cell area of STT-RAM is only 1 9 1 3 that of SRAM. This allows for a much larger cache with the same die footprint, improving overall system performance through reducing cache misses. However, deploying STT-RAM technology in L1 caches is challenging because of the long and power-consuming write operations. In this paper, we propose both L1 and lower level cache designs that use STT-RAM. In particular, our designs use STT-RAM cells with various data retention time and write performances, made possible by different magnetic tunneling junction (MTJ) designs. For the fast STT-RAM bits with reduced data retention time, a counter controlled dynamic refresh scheme is proposed to maintain the data validity. Our dynamic scheme saves more than 80 refresh energy compared to the simple refresh scheme proposed in previous works. A L1 cache built with ultra low retention STT-RAM coupled with our proposed dynamic refresh scheme can achieve 9.2 in performance improvement, and saves up to 30 of the total energy when compared to one that uses traditional SRAM. For lower level caches with relative large cache capacity, we propose a data migration scheme that moves data between portions of the cache with different retention characteristics so as to maximize the performance and power benefits. Our experiments show that on the average, our proposed multi retention level STT-RAM cache reduces 30 70 of the total energy compared to previous works, while improving IPC performance for both 2-level and 3-level cache hierarchy."
]
} |
1607.08073 | 2479894021 | A warning system for assisting drivers during overtaking maneuvers is proposed. The system relies on Car-2-Car communication technologies and multi-agent systems. A protocol for safety overtaking is proposed based on ACL communicative acts. The mathematical model for safety overtaking used Kalman filter to minimize localization error. | In @cite_10 , vehicular agents are empowered with domain knowledge and can perform geospatial and temporal reasoning. An ontology for the vehicular network domain has been developed to support agent reasoning in VANET-based cooperative applications @cite_10 . A different line of research @cite_14 is based on mobile agents and norm-aware agents. In this paper, we assume communication through VANET technologies, aiming for consistency with the IEEE 802.11p standard. A trust model in @cite_17 employs a penalty for misleading reports. Experiments with altruistic and selfish agents in VANETs @cite_6 showed that the average speed increased when vehicular agents cooperate. A VANET-based emergency vehicle warning system has been developed in @cite_12 . In line with @cite_16 , which argues for the semantic exchange of mobility data and knowledge among moving objects, we take a step towards formalizing vehicular-related knowledge and the semantic interchange of events. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2091255661",
"2084204665",
"2023996483",
"76489388",
"2018425507"
],
"abstract": [
"",
"Although drivers obtain road information through radio broadcasting or specific in-car equipment, there is still a wide gap between the synchronization of information and the actual conditions on the road. In the absence of adequate information, drivers often react to conditions with inefficient behaviors that do not contribute to their own driving goals, but increase traffic complication. Therefore, this study applies the features of information exchanged between ''Multi-Agents'' and mutual communication and collaboration mechanisms to intelligent transportation systems (ITS). If drivers could achieve distributed communication, share their driving information, and submit their own reasoned driving advice to others, many traffic situations will improve effectively. Additionally, the efficiency of the computing processes could have improved through distributed communication. At the same time, this paper proposes an architecture design, including vehicle components, OBU (On-Board Unit) devices and roadside device components (Roadside Unit) with hybrid architecture, which is intended to establish intelligent diversified road services to provide information support and applications.",
"We review the state-of-the-art in the semantic representation of moving objects.We present the key research challenges for the semantic management of moving objects.We propose a scalable framework for the semantic management of moving objects.The framework supports a distributed storage and analysis of semantic mobility data. This position paper presents our vision for the semantic management of moving objects. We argue that exploiting semantic techniques in mobility data management can bring valuable benefits to many domains characterized by the mobility of users and moving objects in general, such as traffic management, urban dynamics analysis, ambient assisted living, emergency management, m-health, etc. We present the state-of-the-art in the domain of management of semantic locations and trajectories, and outline research challenges that need to be investigated to enable a full-fledged and intelligent semantic management of moving objects and location-based services that support smarter mobility. We propose a distributed framework for the semantic enrichment and management of mobility data and analyze the potential deployment and exploitation of such a framework.",
"This paper deals with the integration of agent technology in the emerging field of vehicular networks. The agents are empowered with domain knowledge and they can perform geospatial and temporal reasoning. For the domain knowledge we developed a vehicular network ontology. The geospatial reasoning is performed with AllegroGraph, while event reasoning in RacerPro. The vehicle overtaking scenario is used to exemplify our solution.",
"With the recent introduction of the eCall system, the cars involved in accidents exchange relevant information directly with the emergency healthcare services. For road safety, Vehicular Ad-hoc Networks can be used to exchange safety information between cars and ambulances, via vehicle-2-x communication. In this paper, we exploit recent advances in vehicle-2-x communication and the advantages of knowledge representation and reasoning in order to deploy cooperative communication for medical emergency services. The developed system continuously matches data retrieved from inter-vehicular communication with structured knowledge from vehicular ontologies and open street maps.",
"In this paper we introduce a multi-faceted trust model of use for the application of ad-hoc vehicular networks (VANETs) – scenarios where agents representing drivers exchange information with other drivers regarding road and traffic conditions. We argue that there is a need to model trust in various dimensions and that combining these elements effectively can assist agents in making transportation decisions. We then introduce two new elements to our proposed model: i) distinguishing direct and indirect reports that are shared ii) employing a penalty for misleading reports, to promote honesty. We demonstrate how these two elements together serve to increase the value of the trust model, through a series of experiments of simulated traffic. In brief, we present a framework to facilitate the effective sharing of information in VANET environments between agents representing the vehicles."
]
} |
1607.07508 | 2493453930 | This paper investigates the offline packet-delay-minimization problem for an energy harvesting transmitter. To overcome the non-convexity of the problem, we propose a C2-diffeomorphic transformation and provide the necessary and sufficient condition for the transformed problem to a standard convex optimization problem. Based on this condition, a simple choice of the transformation is determined which allows an analytically tractable solution of the original non-convex problem to be easily obtained once the transformed convex problem is solved. We further study the structure of the optimal transmission policy in a special case and find it to follow a weighted-directional-water-filling structure. In particular, the optimal policy tends to allocate more power in earlier time slots and less power in later time slots. Our analytical insight is verified by simulation results. | Energy harvesting is recognized as a key enabling technology for self-sustainable communication networks. It also brings in significant and necessary changes in the design of the communication protocol when energy harvesting becomes the main or sole source of energy supply for the communication nodes @cite_1 @cite_13 @cite_14 . By focusing on the design of an energy harvesting transmitter, the existing studies can be categorized into offline and online scenarios. For offline scenarios, all future information (such as the energy arrival process, data arrival process and channel fading process) is assumed to be predictable or completely deterministic. Hence, the transmission policy is designed offline by using the available knowledge about the future. 
For online scenarios, although the exact future behaviors of the energy arrival, data arrival and channel fading processes are not known, the existing literature has considered different levels of statistical knowledge: the statistics of the processes may be exactly known (e.g., @cite_16 ), partly known (e.g., @cite_3 ) or completely unknown (e.g., @cite_11 ). | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2141058243",
"2075142536",
"2065154528",
"2152589112",
"1969461599",
"2206189632"
],
"abstract": [
"Wireless networks composed of energy harvesting devices will introduce several transformative changes in wireless networking as we know it: energy self-sufficient, energy self-sustaining, perpetual operation; reduced use of conventional energy and accompanying carbon footprint; untethered mobility; and an ability to deploy wireless networks in hard-to-reach places such as remote rural areas, within structures, and within the human body. Energy harvesting brings new dimensions to the wireless communication problem in the form of intermittency and randomness of available energy, which necessitates a fresh look at wireless communication protocols at the physical, medium access, and networking layers. Scheduling and optimization aspects of energy harvesting communications in the medium access and networking layers have been relatively wellunderstood and surveyed in the recent paper [1]. This branch of literature takes a physical layer rate-power relationship that is valid in energy harvesting conditions under large-enough batteries and long-enough durations between energy harvests so that information-theoretic asymptotes are achieved, and optimizes the transmit power over time in order to maximize the throughput. Another branch of recent literature aims to understand the fundamental capacity limits, i.e. information-theoretic capacities, of energy harvesting links under smaller scale dynamics, considering energy harvests at the channel use level. This branch necessitates a deeper look at the coding and transmission schemes in the physical layer, and ultimately aims to develop an information theory of energy harvesting communications, akin to Shannon's development of an information theory for average power constrained communications. 
In this introductory article, we survey recent results in this branch and point to open problems that could be of interest to a broad set of researchers in the fields of communication theory, information theory, signal processing, and networking. In particular, we review capacities of energy harvesting links with infinite-sized, finitesized, and no batteries at the transmitter.",
"From being a scientific curiosity only a few years ago, energy harvesting (EH) is well on its way to becoming a game-changing technology in the field of autonomous wireless networked systems. The promise of long-term, uninterrupted and self-sustainable operation in a diverse array of applications has captured the interest of academia and industry alike. Yet the road to the ultimate network of perpetual communicating devices is plagued with potholes: ambient energy is intermittent and scarce, energy storage capacity is limited, and devices are constrained in size and complexity. In dealing with these challenges, this article will cover recent developments in the design of intelligent energy management policies for EH wireless devices and discuss pressing research questions in this rapidly growing field.",
"A point-to-point wireless communication system in which the transmitter is equipped with an energy harvesting device and a rechargeable battery, is studied. Both the energy and the data arrivals at the transmitter are modeled as Markov processes. Delay-limited communication is considered assuming that the underlying channel is block fading with memory, and the instantaneous channel state information is available at both the transmitter and the receiver. The expected total transmitted data during the transmitter's activation time is maximized under three different sets of assumptions regarding the information available at the transmitter about the underlying stochastic processes. A learning theoretic approach is introduced, which does not assume any a priori information on the Markov processes governing the communication system. In addition, online and offline optimization problems are studied for the same setting. Full statistical knowledge and causal information on the realizations of the underlying stochastic processes are assumed in the online optimization problem, while the offline optimization problem assumes non-causal knowledge of the realizations in advance. Comparing the optimal solutions in all three frameworks, the performance loss due to the lack of the transmitter's information regarding the behaviors of the underlying Markov processes is quantified.",
"We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.",
"This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.",
"This paper studies online transmission policies for an energy harvesting transmitter. Unlike the existing online policies which more or less require the knowledge on the future behavior of the energy and data arrival processes, we consider a practical but significantly more challenging scenario where the energy and data arrival processes are assumed to be totally unknown. Our design is formulated as a robust-optimal control problem which aims to optimize the worst-case performance. The transmission policy is designed only based on the current battery energy level and the data queue length directly monitored by the transmitter itself. Specifically, we apply an event-trigger approach in which the transmitter continuously monitors the battery energy and data queue length, and triggers an event when a significant change occurs in either of them. Once an event is triggered, the transmission policy is updated by the solution to the robust-optimal control problem. We consider both the transmission time and throughput as the performance metrics and formulated two optimization problems. The solutions are given in either a simple analytical form or an easy-to-implement algorithm."
]
} |
1607.07508 | 2493453930 | This paper investigates the offline packet-delay-minimization problem for an energy harvesting transmitter. To overcome the non-convexity of the problem, we propose a C2-diffeomorphic transformation and provide the necessary and sufficient condition for the transformed problem to a standard convex optimization problem. Based on this condition, a simple choice of the transformation is determined which allows an analytically tractable solution of the original non-convex problem to be easily obtained once the transformed convex problem is solved. We further study the structure of the optimal transmission policy in a special case and find it to follow a weighted-directional-water-filling structure. In particular, the optimal policy tends to allocate more power in earlier time slots and less power in later time slots. Our analytical insight is verified by simulation results. | The importance of studying the offline design for energy harvesting transmitters is two-fold @cite_5 @cite_12 @cite_8 : (i) The offline design reveals the fundamental performance limit of an energy harvesting communication system. It serves as a benchmark for any online algorithm. Methods such as competitive analysis @cite_5 can be used to derive and analyze online algorithms once the performance of the offline design is known. (ii) The offline solution often helps one gain insights into the design problem and the behavior of the optimal transmission policy, which inspire possible online designs (e.g., Section VI.B in @cite_12 and Section VII in @cite_8 ). As a result, significant effort has been devoted to the study of offline designs in the past few years. | {
"cite_N": [
"@cite_5",
"@cite_12",
"@cite_8"
],
"mid": [
"2067453003",
"2145834571",
"1990808825"
],
"abstract": [
"The design of online algorithms for maximizing the achievable rate in a wireless communication channel between a source and a destination over a fixed number of slots is considered. The source is assumed to be powered by a natural renewable source, and the most general case of arbitrarily varying energy arrivals is considered, where neither the future energy arrival instants or amount nor their distribution is known. The fading coefficients are also assumed to be arbitrarily varying over time, with only causal information available at the source. For a maximization problem, the utility of an online algorithm is tested by finding its competitive ratio or competitiveness that is defined to be the maximum of the ratio of the gain of the optimal offline algorithm and the gain of the online algorithm over all input sequences. We show that the lower bound on the optimal competitive ratio for maximizing the achievable rate is arbitrarily close to the number of slots. Conversely, we propose a simple strategy that invests available energy uniformly over all remaining slots until the next energy arrival, and show that its competitive ratio is equal to the number of slots, to conclude that it is an optimal online algorithm.",
"Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of point-to-point data transmission with an energy harvesting transmitter which has a limited battery capacity, communicating in a wireless fading channel. We consider two objectives: maximizing the throughput by a deadline, and minimizing the transmission completion time of the communication session. We optimize these objectives by controlling the time sequence of transmit powers subject to energy storage capacity and causality constraints. We, first, study optimal offline policies. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions. We show the optimality of an adaptive directional water-filling algorithm for the throughput maximization problem. We solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart. Next, we consider online policies. We use stochastic dynamic programming to solve for the optimal online policy that maximizes the average number of bits delivered by a deadline under stochastic fading and energy arrival processes with causal channel state feedback. We also propose near-optimal policies with reduced complexity, and numerically study their performances along with the performances of the offline and online optimal policies under various different configurations.",
"Communication over a broadband fading channel powered by an energy harvesting transmitter is studied. Assuming non-causal knowledge of energy data arrivals and channel gains, optimal transmission schemes are identified by taking into account the energy cost of the processing circuitry as well as the transmission energy. A constant processing cost for each active sub-channel is assumed. Three different system objectives are considered: i) throughput maximization, in which the total amount of transmitted data by a deadline is maximized for a backlogged transmitter with a finite capacity battery; ii) energy maximization, in which the remaining energy in an infinite capacity battery by a deadline is maximized such that all the arriving data packets are delivered; iii) transmission completion time minimization, in which the delivery time of all the arriving data packets is minimized assuming infinite size battery. For each objective, a convex optimization problem is formulated, the properties of the optimal transmission policies are identified, and an algorithm which computes an optimal transmission policy is proposed. Finally, based on the insights gained from the offline optimizations, low-complexity online algorithms performing close to the optimal dynamic programming solution for the throughput and energy maximization problems are developed under the assumption that the energy data arrivals and channel states are known causally at the transmitter."
]
} |
1607.07508 | 2493453930 | This paper investigates the offline packet-delay-minimization problem for an energy harvesting transmitter. To overcome the non-convexity of the problem, we propose a C2-diffeomorphic transformation and provide the necessary and sufficient condition for the transformed problem to a standard convex optimization problem. Based on this condition, a simple choice of the transformation is determined which allows an analytically tractable solution of the original non-convex problem to be easily obtained once the transformed convex problem is solved. We further study the structure of the optimal transmission policy in a special case and find it to follow a weighted-directional-water-filling structure. In particular, the optimal policy tends to allocate more power in earlier time slots and less power in later time slots. Our analytical insight is verified by simulation results. | The objective of energy harvesting transmission design has been defined from a range of perspectives, including throughput maximization (e.g., @cite_12 @cite_3 @cite_8 @cite_5 @cite_11 ), remaining energy maximization (e.g., @cite_8 ), completion time minimization (e.g., @cite_2 @cite_12 @cite_6 @cite_4 @cite_8 @cite_11 ), and packet delay minimization @cite_16 @cite_0 @cite_7 . In particular, efficient offline designs have been found for the throughput-maximization, remaining-energy-maximization and completion-time-minimization problems @cite_12 @cite_8 @cite_2 . However, effective solutions to the offline packet-delay-minimization problem have yet to be found. A very recent study in @cite_7 considered the packet-delay-minimization problem in a non-fading channel and obtained deep insight into the solution structure through the KKT conditions. However, the optimal solution was derived for the dual problem rather than the original problem. Since the original problem is non-convex, there is a duality gap @cite_9 between the original and dual problems. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2035229442",
"1488261589",
"1990808825",
"",
"2065154528",
"2081751842",
"",
"1979096885",
"2067453003",
"2152589112",
"2145834571",
"2206189632"
],
"abstract": [
"Wireless networks with energy harvesting battery equipped nodes are quickly emerging as a viable option for future wireless networks with extended lifetime. Equally important to their counterpart in the design of energy harvesting radios are the design principles that this new networking paradigm calls for. In particular, unlike wireless networks considered to date, the energy replenishment process and the storage constraints of the rechargeable batteries need to be taken into account in designing efficient transmission strategies. In this work, such transmission policies for rechargeable nodes are considered, and optimum solutions for two related problems are identified. Specifically, the transmission policy that maximizes the short term throughput, i.e., the amount of data transmitted in a finite time horizon is found. In addition, the relation of this optimization problem to another, namely, the minimization of the transmission completion time for a given amount of data is demonstrated, which leads to the solution of the latter as well. The optimum transmission policies are identified under the constraints on energy causality, i.e., energy replenishment process, as well as the energy storage, i.e., battery capacity. For battery replenishment, a model with discrete packets of energy arrivals is considered. The necessary conditions that the throughput-optimal allocation satisfies are derived, and then the algorithm that finds the optimal transmission policy with respect to the short-term throughput and the minimum transmission completion time is given. Numerical results are presented to confirm the analytical findings.",
"We consider an energy harvesting communication system, where both energy and data packets arrive at the transmitter during the course of communication. We determine the optimum packet scheduling scheme that minimizes the average delay experienced by all packets. We show that, different from the existing literature, the optimum transmission power is not constant between the energy harvesting and data arrival events; the transmission power starts high, decreases linearly, and potentially reaches zero between energy harvests and data arrivals. Intuitively, untransmitted bits experience cumulative delay due to the bits to be transmitted ahead of them, and hence the reason for transmission power starting high and decreasing over time between energy harvests and data arrivals.",
"Communication over a broadband fading channel powered by an energy harvesting transmitter is studied. Assuming non-causal knowledge of energy data arrivals and channel gains, optimal transmission schemes are identified by taking into account the energy cost of the processing circuitry as well as the transmission energy. A constant processing cost for each active sub-channel is assumed. Three different system objectives are considered: i) throughput maximization, in which the total amount of transmitted data by a deadline is maximized for a backlogged transmitter with a finite capacity battery; ii) energy maximization, in which the remaining energy in an infinite capacity battery by a deadline is maximized such that all the arriving data packets are delivered; iii) transmission completion time minimization, in which the delivery time of all the arriving data packets is minimized assuming infinite size battery. For each objective, a convex optimization problem is formulated, the properties of the optimal transmission policies are identified, and an algorithm which computes an optimal transmission policy is proposed. Finally, based on the insights gained from the offline optimizations, low-complexity online algorithms performing close to the optimal dynamic programming solution for the throughput and energy maximization problems are developed under the assumption that the energy data arrivals and channel states are known causally at the transmitter.",
"",
"A point-to-point wireless communication system in which the transmitter is equipped with an energy harvesting device and a rechargeable battery, is studied. Both the energy and the data arrivals at the transmitter are modeled as Markov processes. Delay-limited communication is considered assuming that the underlying channel is block fading with memory, and the instantaneous channel state information is available at both the transmitter and the receiver. The expected total transmitted data during the transmitter's activation time is maximized under three different sets of assumptions regarding the information available at the transmitter about the underlying stochastic processes. A learning theoretic approach is introduced, which does not assume any a priori information on the Markov processes governing the communication system. In addition, online and offline optimization problems are studied for the same setting. Full statistical knowledge and causal information on the realizations of the underlying stochastic processes are assumed in the online optimization problem, while the offline optimization problem assumes non-causal knowledge of the realizations in advance. Comparing the optimal solutions in all three frameworks, the performance loss due to the lack of the transmitter's information regarding the behaviors of the underlying Markov processes is quantified.",
"We consider the minimization of the transmission completion time with a battery limited energy harvesting transmitter in an M-user AWGN broadcast channel where the transmitter is able to harvest energy from the nature, using a finite storage capacity rechargeable battery. The harvested energy is modeled to arrive (be harvested) at the transmitter during the course of transmissions at arbitrary time instants. The transmitter has fixed number of packets for each receiver. Due to the finite battery capacity, energy may overflow without being utilized for data transmission. We derive the optimal offline transmission policy that minimizes the time by which all of the data packets are delivered to their respective destinations. We analyze the structural properties of the optimal transmission policy using a dual problem. We find the optimal total transmit power sequence by a directional water-filling algorithm. We prove that there exist M-1 cut-off power levels such that user i is allocated the power between the i-1st and the ith cut-off power levels subject to the availability of the allocated total power level. Based on these properties, we propose an algorithm that gives the globally optimal offline policy. The proposed algorithm uses directional water-filling repetitively. Finally, we illustrate the optimal policy and compare its performance with several suboptimal policies under different settings.",
"",
"We consider the optimal packet scheduling problem in a single-user energy harvesting wireless communication system. In this system, both the data packets and the harvested energy are modeled to arrive at the source node randomly. Our goal is to adaptively change the transmission rate according to the traffic load and available energy, such that the time by which all packets are delivered is minimized. Under a deterministic system setting, we assume that the energy harvesting times and harvested energy amounts are known before the transmission starts. For the data traffic arrivals, we consider two different scenarios. In the first scenario, we assume that all bits have arrived and are ready at the transmitter before the transmission starts. In the second scenario, we consider the case where packets arrive during the transmissions, with known arrival times and sizes. We develop optimal off-line scheduling policies which minimize the time by which all packets are delivered to the destination, under causality constraints on both data and energy arrivals.",
"The design of online algorithms for maximizing the achievable rate in a wireless communication channel between a source and a destination over a fixed number of slots is considered. The source is assumed to be powered by a natural renewable source, and the most general case of arbitrarily varying energy arrivals is considered, where neither the future energy arrival instants or amount nor their distribution is known. The fading coefficients are also assumed to be arbitrarily varying over time, with only causal information available at the source. For a maximization problem, the utility of an online algorithm is tested by finding its competitive ratio or competitiveness that is defined to be the maximum of the ratio of the gain of the optimal offline algorithm and the gain of the online algorithm over all input sequences. We show that the lower bound on the optimal competitive ratio for maximizing the achievable rate is arbitrarily close to the number of slots. Conversely, we propose a simple strategy that invests available energy uniformly over all remaining slots until the next energy arrival, and show that its competitive ratio is equal to the number of slots, to conclude that it is an optimal online algorithm.",
"We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.",
"Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of point-to-point data transmission with an energy harvesting transmitter which has a limited battery capacity, communicating in a wireless fading channel. We consider two objectives: maximizing the throughput by a deadline, and minimizing the transmission completion time of the communication session. We optimize these objectives by controlling the time sequence of transmit powers subject to energy storage capacity and causality constraints. We, first, study optimal offline policies. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions. We show the optimality of an adaptive directional water-filling algorithm for the throughput maximization problem. We solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart. Next, we consider online policies. We use stochastic dynamic programming to solve for the optimal online policy that maximizes the average number of bits delivered by a deadline under stochastic fading and energy arrival processes with causal channel state feedback. We also propose near-optimal policies with reduced complexity, and numerically study their performances along with the performances of the offline and online optimal policies under various different configurations.",
"This paper studies online transmission policies for an energy harvesting transmitter. Unlike the existing online policies which more or less require the knowledge on the future behavior of the energy and data arrival processes, we consider a practical but significantly more challenging scenario where the energy and data arrival processes are assumed to be totally unknown. Our design is formulated as a robust-optimal control problem which aims to optimize the worst-case performance. The transmission policy is designed only based on the current battery energy level and the data queue length directly monitored by the transmitter itself. Specifically, we apply an event-trigger approach in which the transmitter continuously monitors the battery energy and data queue length, and triggers an event when a significant change occurs in either of them. Once an event is triggered, the transmission policy is updated by the solution to the robust-optimal control problem. We consider both the transmission time and throughput as the performance metrics and formulated two optimization problems. The solutions are given in either a simple analytical form or an easy-to-implement algorithm."
]
} |
1607.07508 | 2493453930 | This paper investigates the offline packet-delay-minimization problem for an energy harvesting transmitter. To overcome the non-convexity of the problem, we propose a C2-diffeomorphic transformation and provide the necessary and sufficient condition for the transformed problem to be a standard convex optimization problem. Based on this condition, a simple choice of the transformation is determined which allows an analytically tractable solution of the original non-convex problem to be easily obtained once the transformed convex problem is solved. We further study the structure of the optimal transmission policy in a special case and find it to follow a weighted-directional-water-filling structure. In particular, the optimal policy tends to allocate more power in earlier time slots and less power in later time slots. Our analytical insight is verified by simulation results. | Another interesting issue is the relationship among the packet-delay-minimization, completion-time-minimization, and throughput-maximization problems. The last two problems are linked through the maximum departure curve (see Section V in @cite_12 ). However, it is unclear whether the first problem has any correlation with the last two problems. Intuitively, the first two problems are related: the quicker the transmission is completed, the smaller the packet delay is. A similar observation was also made in the literature (e.g., @cite_15 @cite_8 ). Then, through the maximum departure curve, all three problems appear to be highly related. Nevertheless, there has not been any analytical study of the extent to which the three problems are related and, more importantly, whether the optimal offline transmission designs for the three problems exhibit similar behaviors. | {
"cite_N": [
"@cite_15",
"@cite_12",
"@cite_8"
],
"mid": [
"2153679452",
"2145834571",
"1990808825"
],
"abstract": [
"We consider the transmission completion time minimization problem in a single-user energy harvesting wireless communication system. In this system, both the data packets and the harvested energy are modelled to arrive at the source node randomly. Our goal is to adaptively change the transmission rate according to the traffic load and available energy, such that the transmission completion time is minimized. Under a deterministic system setting, we assume that the energy harvesting times and harvested energy amounts are known before the transmission starts. For the data traffic arrivals, we consider two different scenarios. In the first scenario, we assume that all bits have arrived and are ready at the transmitter before the transmission starts. In the second scenario we consider, packets arrive during the transmissions with known arriving times and sizes. We develop optimal off-line scheduling policies which minimize the overall transmission completion time under causality constraints on both data and energy arrivals.",
"Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of point-to-point data transmission with an energy harvesting transmitter which has a limited battery capacity, communicating in a wireless fading channel. We consider two objectives: maximizing the throughput by a deadline, and minimizing the transmission completion time of the communication session. We optimize these objectives by controlling the time sequence of transmit powers subject to energy storage capacity and causality constraints. We, first, study optimal offline policies. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions. We show the optimality of an adaptive directional water-filling algorithm for the throughput maximization problem. We solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart. Next, we consider online policies. We use stochastic dynamic programming to solve for the optimal online policy that maximizes the average number of bits delivered by a deadline under stochastic fading and energy arrival processes with causal channel state feedback. We also propose near-optimal policies with reduced complexity, and numerically study their performances along with the performances of the offline and online optimal policies under various different configurations.",
"Communication over a broadband fading channel powered by an energy harvesting transmitter is studied. Assuming non-causal knowledge of energy data arrivals and channel gains, optimal transmission schemes are identified by taking into account the energy cost of the processing circuitry as well as the transmission energy. A constant processing cost for each active sub-channel is assumed. Three different system objectives are considered: i) throughput maximization, in which the total amount of transmitted data by a deadline is maximized for a backlogged transmitter with a finite capacity battery; ii) energy maximization, in which the remaining energy in an infinite capacity battery by a deadline is maximized such that all the arriving data packets are delivered; iii) transmission completion time minimization, in which the delivery time of all the arriving data packets is minimized assuming infinite size battery. For each objective, a convex optimization problem is formulated, the properties of the optimal transmission policies are identified, and an algorithm which computes an optimal transmission policy is proposed. Finally, based on the insights gained from the offline optimizations, low-complexity online algorithms performing close to the optimal dynamic programming solution for the throughput and energy maximization problems are developed under the assumption that the energy data arrivals and channel states are known causally at the transmitter."
]
} |
1607.07553 | 2484918492 | Distributed optimization algorithms are frequently faced with solving sub-problems on disjoint connected parts of a network. Unfortunately, the diameter of these parts can be significantly larger than the diameter of the underlying network, leading to slow running times. Recent work by [Ghaffari and Haeupler; SODA'16] showed that this phenomenon can be seen as the broad underlying reason for the pervasive Ω(√n + D) lower bounds that apply to most optimization problems in the CONGEST model. On the positive side, this work also introduced low-congestion shortcuts as an elegant solution to circumvent this problem in certain topologies of interest. Particularly, they showed that there exist good shortcuts for any planar network and more generally any bounded genus network. This directly leads to fast O(D log^{O(1)} n) distributed optimization algorithms on such topologies, e.g., for MST and Min-Cut approximation, given that one can efficiently construct these shortcuts in a distributed manner. Unfortunately, the shortcut construction of [Ghaffari and Haeupler; SODA'16] relies heavily on having access to a bounded genus embedding of the network. Computing such an embedding distributedly, however, is a hard problem - even for planar networks. No distributed embedding algorithm for bounded genus graphs is in sight. In this work, we side-step this problem by defining a slightly restricted and more structured form of shortcuts and giving a novel construction algorithm which efficiently finds a shortcut which is, up to a logarithmic factor, as good as the best shortcut that exists for a given network. This new construction algorithm directly leads to an O(D log^{O(1)} n)-round algorithm for solving optimization problems like MST for any topology for which good restricted shortcuts exist - without the need to compute any embedding. This includes the first efficient algorithm for bounded genus graphs. 
| The complexity theoretic issues in the design of distributed graph algorithms for the CONGEST model have received much attention in the last decade, and extensive progress has been made on many problems: Minimum spanning tree @cite_18 @cite_11 @cite_6 @cite_2 , Maximum flow @cite_7 , Minimum Cut @cite_21 @cite_15 , Shortest paths and Diameter @cite_22 @cite_20 @cite_23 @cite_14 @cite_17 @cite_10 @cite_12 , and so on. Most of those problems have @math -round upper and lower bounds for some sort of approximation guarantee @cite_3 @cite_17 @cite_21 @cite_8 @cite_6 . Guaranteeing exact results sometimes yields a nearly linear-time bound @cite_20 . Note that almost all of the lower bounds above hold for small-diameter graphs. Thus, in any case, the general lower bound is more expensive than the universal lower bound of @math rounds. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_17",
"@cite_6",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"2145492278",
"2107282727",
"2040011014",
"2116055396",
"2172178473",
"",
"2112702616",
"1945440063",
"2083323175",
"2048098617",
"2058191123",
"1508113861",
"2040273581",
"37512724",
"1582638066",
"2005900781"
],
"abstract": [
"This paper considers the question of identifying the parameters governing the behavior of fundamental global network problems. Many papers on distributed network algorithms consider the task of optimizing the running time successful when an O(n) bound is achieved on an n-vertex network. We propose that a more sensitive parameter is the network's diameter Diam. This is demonstrated in the paper by providing a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam (specifically, O(Diam + n^{0.614})). Our result is achieved through the application of graph decomposition and edge elimination techniques that may be of independent interest.",
"Given a simple graph G=(V,E) and a set of sources S ⊆ V, denote for each node v ∈ V by L_v the lexicographically ordered list of distance-source pairs (d(s,v),s), where s ∈ S. For integers d, k ∈ ℕ ∪ {∞}, we consider the source detection, or (S,d,k)-detection task, requiring each node v to learn the first k entries of L_v (if for all of them d(s,v) ≤ d) or all entries (d(s,v),s) ∈ L_v satisfying that d(s,v) ≤ d (otherwise). Solutions to this problem provide natural generalizations of concurrent breadth-first search (BFS) tree constructions. For example, the special case of k=∞ requires each source s ∈ S to build a complete BFS tree rooted at s, whereas the special case of d=∞ and S=V requires constructing a partial BFS tree comprising at least k nodes from every node in V. In this work, we give a simple, near-optimal solution for the source detection task in the CONGEST model, where messages contain at most O(log n) bits, running in d+k rounds. We demonstrate its utility for various routing problems, exact and approximate diameter computation, and spanner construction. For those problems, we obtain algorithms in the CONGEST model that are faster and in some cases much simpler than previous solutions.",
"A distributed network is modeled by a graph having n nodes (processors) and diameter D. We study the time complexity of approximating weighted (undirected) shortest paths on distributed networks with an O(log n) bandwidth restriction on edges (the standard synchronous CONGEST model). The question whether approximation algorithms help speed up the shortest paths and distance computation (more precisely, distance computation) was raised since at least 2004 by Elkin (SIGACT News 2004). The unweighted case of this problem is well-understood while its weighted counterpart is a fundamental problem in the area of distributed approximation algorithms and remains widely open. We present new algorithms for computing both single-source shortest paths (SSSP) and all-pairs shortest paths (APSP) in the weighted case. Our main result is an algorithm for SSSP. Previous results are the classic O(n)-time Bellman-Ford algorithm and an O(n^{1/2+1/(2k)} + D)-time (8k⌈log(k + 1)⌉ − 1)-approximation algorithm, for any integer k ≥ 1, which follows from the result of Lenzen and Patt-Shamir (STOC 2013). (Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use O(·) to hide the O(poly log n) term.) We present an O(n^{1/2}D^{1/4} + D)-time (1 + o(1))-approximation algorithm for SSSP. This algorithm is sublinear-time as long as D is sublinear, thus yielding a sublinear-time algorithm with almost optimal solution. When D is small, our running time matches the lower bound of Ω(n^{1/2} + D) by Das Sarma et al. (SICOMP 2012), which holds even when D=Θ(log n), up to a poly log n factor. As a by-product of our technique, we obtain a simple O(n)-time (1+ o(1))-approximation algorithm for APSP, improving the previous O(n)-time O(1)-approximation algorithm following from the results of Lenzen and Patt-Shamir. We also prove a matching lower bound. 
Our techniques also yield an O(n^{1/2}) time algorithm on fully-connected networks, which guarantees an exact solution for SSSP and a (2+ o(1))-approximate solution for APSP. All our algorithms rely on two new simple tools: a light-weight algorithm for bounded-hop SSSP and shortest-path diameter reduction via shortcuts. These tools might be of independent interest and useful in designing other distributed algorithms.",
"We present a near-optimal distributed algorithm for (1+o(1))-approximation of single-commodity maximum flow in undirected weighted networks that runs in (D+√n)·n^{o(1)} communication rounds in the Congest model. Here, n and D denote the number of nodes and the network diameter, respectively. This is the first improvement over the trivial O(m) time bound, and it nearly matches the Ω(D+√n) round complexity lower bound. The algorithm contains two sub-algorithms of independent interest, both with running time (D+√n)·n^{o(1)}: Distributed construction of a spanning tree of average stretch n^{o(1)}. Distributed construction of an n^{o(1)}-congestion approximator consisting of the cuts induced by O(log n) virtual trees. The distributed representation of the cut approximator allows for evaluation in (D+√n)·n^{o(1)} rounds. All our algorithms make use of randomization and succeed with high probability.",
"The design of distributed approximation protocols is a relatively new rapidly developing area of research. However, so far little progress was done in the study of the hardness of distributed approximation. In this paper we initiate the systematic study of this subject, and show strong unconditional lower bounds on the time-approximation tradeoff of the distributed minimum spanning tree problem, and some of its variants.",
"",
"We describe a distributed randomized algorithm to construct routing tables. Given 0 < ε ≤ 1/2, the algorithm runs in time O(n^{1/2+ε} + HD), where n is the number of nodes and HD denotes the diameter of the network in hops (i.e., as if the network is unweighted). The weighted length of the produced routes is at most O(ε^{-1} log ε^{-1}) times the optimal weighted length. This is the first algorithm to break the Ω(n) complexity barrier for computing weighted shortest paths even for a single source. Moreover, the algorithm nearly meets the Ω(n^{1/2} + HD) lower bound for distributed computation of routing tables and approximate distances (with optimality, up to polylog factors, for ε = 1/log n). The presented techniques have many applications, including improved distributed approximation algorithms for Generalized Steiner Forest, all-pairs distance estimation, and estimation of the weighted diameter.",
"This paper presents a lower bound of Ω(D + √n) on the time required for the distributed construction of a minimum-weight spanning tree (MST) in n-vertex networks of diameter D = Ω(log n), in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is O(D + √n log* n).",
"We study the verification problem in distributed networks, stated as follows. Let @math be a subgraph of a network @math where each vertex of @math knows which edges incident on it are in @math . We would l...",
"We present an algorithm to compute All Pairs Shortest Paths (APSP) of a network in a distributed way. The model of distributed computation we consider is the message passing model: in each synchronous round, every node can transmit a different (but short) message to each of its neighbors. We provide an algorithm that computes APSP in O(n) communication rounds, where n denotes the number of nodes in the network. This implies a linear time algorithm for computing the diameter of a network. Due to a lower bound, these two algorithms are optimal up to a logarithmic factor. Furthermore, we present a new lower bound for approximating the diameter D of a graph: Being allowed to answer D+1 or D can speed up the computation by at most a factor D. On the positive side, we provide an algorithm that achieves such a speedup of D and computes a (1+ε) multiplicative approximation of the diameter. We extend these algorithms to compute or approximate other problems, such as girth, radius, center and peripheral vertices. At the heart of these approximation algorithms is the S-Shortest Paths problem which we solve in O(|S|+D) time.",
"We present a distributed algorithm that constructs an O(log n)-approximate minimum spanning tree (MST) in any arbitrary network. This algorithm runs in time O(D(G) + L(G, w)) where L(G, w) is a parameter called the local shortest path diameter and D(G) is the (unweighted) diameter of the graph. Our algorithm is existentially optimal (up to polylogarithmic factors), i.e., there exist graphs which need Ω(D(G) + L(G, w)) time to compute an H-approximation to the MST for any H ∈ [1, Θ(log n)]. Our result also shows that there can be a significant time gap between exact and approximate MST computation: there exist graphs in which the running time of our approximation algorithm is exponentially faster than the time-optimal distributed algorithm that computes the MST. Finally, we show that our algorithm can be used to find an approximate MST in wireless networks and in random weighted networks in almost optimal O(D(G)) time.",
"We study the problem of computing the minimum cut in a weighted distributed message-passing network (the CONGEST model). Let λ be the minimum cut, n be the number of nodes (processors) in the network, and D be the network diameter. Our algorithm can compute λ exactly in O((√n log* n + D) λ^4 log^2 n) time. To the best of our knowledge, this is the first paper that explicitly studies computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear time algorithms for this problem are known only for unweighted graphs when λ ≤ 3 due to Pritchard and Thurimella's O(D)-time and O(D + n^{1/2} log* n)-time algorithms for computing 2-edge-connected and 3-edge-connected components [ACM Transactions on Algorithms 2011].",
"We study approximate distributed solutions to the weighted all-pairs shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. A deterministic (1+epsilon)-approximation to APSP with running time O(e-2n log n) rounds. The best previously known algorithm was randomized and slower by a Theta(log n) factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and that are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of o(n log n). In the relabeling model, we obtain the following results. A randomized O(k)-approximation to APSP, for any integer k>1, running in O(n1 2+1 k+D) rounds, where D is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from O(k log k) to O(k). Also, the new algorithm uses O(log n)-bit labels, which is asymptotically optimal. A randomized O(k)-approximation to APSP, for any integer k>1, running in time O((nD)1 2 n1 k+D) and producing compact routing tables of size O(n1 k). The node labels consist of O(k log n) bits. This improves on the approximation ratio of Theta(k2) for tables of that size achieved by the best previously known algorithm, which terminates faster, in O(n1 2+1 k+D) rounds. In addition, we improve on the time complexity of the best known deterministic algorithm for distributed approximate Steiner forest.",
"Distributed distance oracles consist of a labeling scheme which assigns a label to each node and a local data structure deployed to each node. When a node v wants to know the distance to a node u, it queries its local data structure with the label of u. The data structure returns an estimated distance to u, which must be larger than the actual distance but can be overestimated. The accuracy of the distance oracle is measured by stretch, which is defined as the maximum ratio between actual distances and estimated distances over all pairs (u, v).",
"We study the problem of computing the diameter of a network in a distributed way. The model of distributed computation we consider is: in each synchronous round, each node can transmit a different (but short) message to each of its neighbors. We provide an Ω(n) lower bound for the number of communication rounds needed, where n denotes the number of nodes in the network. This lower bound is valid even if the diameter of the network is a small constant. We also show that a (3 2 − e)-approximation of the diameter requires Ω (√n + D) rounds. Furthermore we use our new technique to prove an Ω (√n + D) lower bound on approximating the girth of a graph by a factor 2 − e.",
""
]
} |
1607.07553 | 2484918492 | Distributed optimization algorithms are frequently faced with solving sub-problems on disjoint connected parts of a network. Unfortunately, the diameter of these parts can be significantly larger than the diameter of the underlying network, leading to slow running times. Recent work by [Ghaffari and Haeupler; SODA'16] showed that this phenomenon can be seen as the broad underlying reason for the pervasive Ω(√n + D) lower bounds that apply to most optimization problems in the CONGEST model. On the positive side, this work also introduced low-congestion shortcuts as an elegant solution to circumvent this problem in certain topologies of interest. Particularly, they showed that there exist good shortcuts for any planar network and more generally any bounded genus network. This directly leads to fast O(D log^{O(1)} n) distributed optimization algorithms on such topologies, e.g., for MST and Min-Cut approximation, given that one can efficiently construct these shortcuts in a distributed manner. Unfortunately, the shortcut construction of [Ghaffari and Haeupler; SODA'16] relies heavily on having access to a bounded genus embedding of the network. Computing such an embedding distributedly, however, is a hard problem - even for planar networks. No distributed embedding algorithm for bounded genus graphs is in sight. In this work, we side-step this problem by defining a slightly restricted and more structured form of shortcuts and giving a novel construction algorithm which efficiently finds a shortcut which is, up to a logarithmic factor, as good as the best shortcut that exists for a given network. This new construction algorithm directly leads to an O(D log^{O(1)} n)-round algorithm for solving optimization problems like MST for any topology for which good restricted shortcuts exist - without the need to compute any embedding. This includes the first efficient algorithm for bounded genus graphs. 
| On the positive side, distributed algorithms typically use a variety of ideas. In an effort to unify them in an elegant framework, Ghaffari and Haeupler introduced low-congestion shortcuts @cite_13 . Specifically, their ideas can be turned into a very short and clean @math -round MST algorithm for general graphs. Furthermore, low-congestion shortcuts can serve as a simple explanation of the pervasive @math lower bound. However, the main contribution of their techniques is a @math -round algorithm for planar graphs. To the best of our knowledge, this is the first attempt that considers a non-trivial popular graph class. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2294674552"
],
"abstract": [
"This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut, in planar networks. Consider a graph G = (V, E) and a partitioning of V into subsets of nodes S1, . . ., SN, each inducing a connected subgraph G[Si]. We define an α-congestion shortcut with dilation β to be a set of subgraphs H1, . . ., HN ⊆ G, one for each subset Si, such that 1. For each i ∈ [1, N], the diameter of the subgraph G[Si] + Hi is at most β. 2. For each edge e ∈ E, the number of subgraphs G[Si] + Hi containing e is at most α. We prove that any partition of a D-diameter planar graph into individually-connected parts admits an O(D log D)-congestion shortcut with dilation O(D log D), and we also present a distributed construction of it in O(D) rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which, unavoidably, max α, β = Ω(D[EQUATION]). Finally, we use low-congestion shortcuts, and their efficient distributed construction, to derive O(D)-round distributed algorithms for MST and Min-Cut, in planar networks. This complexity nearly matches the trivial lower bound of Ω(D). We remark that this is the first result bypassing the well-known Ω(D + [EQUATION]) existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das [STOC'11]) in a family of graphs of interest."
]
} |
1607.07614 | 2950461596 | In domain generalization, the knowledge learnt from one or multiple source domains is transferred to an unseen target domain. In this work, we propose a novel domain generalization approach for fine-grained scene recognition. We first propose a semantic scene descriptor that jointly captures the subtle differences between fine-grained scenes, while being robust to varying object configurations across domains. We model the occurrence patterns of objects in scenes, capturing the informativeness and discriminability of each object for each scene. We then transform such occurrences into scene probabilities for each scene image. Second, we argue that scene images belong to hidden semantic topics that can be discovered by clustering our semantic descriptors. To evaluate the proposed method, we propose a new fine-grained scene dataset in cross-domain settings. Extensive experiments on the proposed dataset and three benchmark scene datasets show the effectiveness of the proposed approach for fine-grained scene transfer, where we outperform state-of-the-art scene recognition and domain generalization methods. | Recent approaches have been proposed to target domain generalization for vision tasks. They can be roughly grouped into classifier-based @cite_10 @cite_30 approaches and feature-based @cite_26 @cite_1 approaches. In @cite_10 , a support vector machine approach is proposed that learns a set of dataset-specific models and a visual-world model that is common to all datasets. An exemplar-SVM approach is proposed in @cite_30 that exploits the structure of positive samples in the source domain. In feature-based approaches, the goal is to learn invariant features that generalize across domains. In @cite_26 , a kernel-based method is proposed that learns a shared subspace. A feature-learning approach is proposed in @cite_1 that extends denoising autoencoders with naturally-occurring variability in object appearance. 
While the previous approaches yield good results in object recognition, their performance was not investigated for scene transfer. Also, to the best of our knowledge, there is no prior work that exploits a semantic approach to domain generalization. | {
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_26",
"@cite_10"
],
"mid": [
"96659543",
"2953039697",
"2949436635",
"1852255964"
],
"abstract": [
"In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"The problem of domain generalization is to take knowledge acquired from a number of related domains where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. Our algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization.",
"This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.",
"The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets."
]
} |
1607.07614 | 2950461596 | In domain generalization, the knowledge learnt from one or multiple source domains is transferred to an unseen target domain. In this work, we propose a novel domain generalization approach for fine-grained scene recognition. We first propose a semantic scene descriptor that jointly captures the subtle differences between fine-grained scenes, while being robust to varying object configurations across domains. We model the occurrence patterns of objects in scenes, capturing the informativeness and discriminability of each object for each scene. We then transform such occurrences into scene probabilities for each scene image. Second, we argue that scene images belong to hidden semantic topics that can be discovered by clustering our semantic descriptors. To evaluate the proposed method, we propose a new fine-grained scene dataset in cross-domain settings. Extensive experiments on the proposed dataset and three benchmark scene datasets show the effectiveness of the proposed approach for fine-grained scene transfer, where we outperform state-of-the-art scene recognition and domain generalization methods. | Many approaches have been proposed for scene classification. A popular approach is to represent a scene in terms of its semantics @cite_21 @cite_13 , using a pre-defined vocabulary of visual concepts and a bank of detectors for those concepts @cite_7 @cite_2 @cite_24 @cite_4 @cite_11 . A second class of approaches relies on the automatic discovery of mid-level patches in scene images @cite_14 @cite_35 @cite_20 @cite_9 . While all these methods have been shown able to classify scenes, there are no previous studies of their performance for fine-grained classification. Our method is most related to object-based approaches that are more suitable for fine-grained scenes than holistic representation methods, such as the scene gist @cite_15 . 
Our proposed method is more invariant than previous attempts, such as objectBank @cite_7 and the semantic FV @cite_24 . These methods provide an encoding based on raw (CNN-based) detection scores, which vary widely across domains. In contrast, we quantize the detection scores into scene probabilities for each object. Such probabilities are adaptive to the varying detection scores by considering a range of thresholds. The process of quantization imparts invariance to the CNN-based semantics, thus improving the generalization ability. We compare with both representations in Section . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"1590510366",
"2033832873",
"",
"",
"",
"",
"",
"2953360861",
"1566135517",
"99353449",
"2115628259",
"1524680991"
],
"abstract": [
"The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.",
"The automatic discovery of distinctive parts for an object or scene class is challenging since it requires simultaneously to learn the part appearance and also to identify the part occurrences in images. In this paper, we propose a simple, efficient, and effective method to do so. We address this problem by learning parts incrementally, starting from a single part occurrence with an Exemplar SVM. In this manner, additional part instances are discovered and aligned reliably before being considered as training examples. We also propose entropy-rank curves as a means of evaluating the distinctiveness of parts shareable between categories and use them to select useful parts out of a set of candidates. We apply the new representation to the task of scene categorisation on the MIT Scene 67 benchmark. We show that our method can learn parts which are significantly more informative and for a fraction of the cost, compared to previous part-learning methods such as [28]. We also show that a well constructed bag of words or Fisher vector model can substantially outperform the previous state-of-the-art classification performance on this data.",
"",
"",
"",
"",
"",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.",
"A new architecture, denoted spatial pyramid matching on the semantic manifold (SPMSM), is proposed for scene recognition. SPMSM is based on a recent image representation on a semantic probability simplex, which is now augmented with a rough encoding of spatial information. A connection between the semantic simplex and a Riemmanian manifold is established, so as to equip the architecture with a similarity measure that respects the manifold structure of the semantic space. It is then argued that the closed-form geodesic distance between two manifold points is a natural measure of similarity between images. This leads to a conditionally positive definite kernel that can be used with any SVM classifier. An approximation of the geodesic distance reveals connections to the well-known Bhattacharyya kernel, and is explored to derive an explicit feature embedding for this kernel, by simple square-rooting. This enables a low-complexity SVM implementation, using a linear SVM on the embedded features. Several experiments are reported, comparing SPMSM to state-of-the-art recognition methods. SPMSM is shown to achieve the best recognition rates in the literature for two large datasets (MIT Indoor and SUN) and rates equivalent or superior to the state-of-the-art on a number of smaller datasets. In all cases, the resulting SVM also has much smaller dimensionality and requires much fewer support vectors than previous classifiers. This guarantees much smaller complexity and suggests improved generalization beyond the datasets considered.",
"Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.",
"Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets."
]
} |
1607.07614 | 2950461596 | In domain generalization, the knowledge learnt from one or multiple source domains is transferred to an unseen target domain. In this work, we propose a novel domain generalization approach for fine-grained scene recognition. We first propose a semantic scene descriptor that jointly captures the subtle differences between fine-grained scenes, while being robust to varying object configurations across domains. We model the occurrence patterns of objects in scenes, capturing the informativeness and discriminability of each object for each scene. We then transform such occurrences into scene probabilities for each scene image. Second, we argue that scene images belong to hidden semantic topics that can be discovered by clustering our semantic descriptors. To evaluate the proposed method, we propose a new fine-grained scene dataset in cross-domain settings. Extensive experiments on the proposed dataset and three benchmark scene datasets show the effectiveness of the proposed approach for fine-grained scene transfer, where we outperform state-of-the-art scene recognition and domain generalization methods. | A Convolutional Neural Network @cite_0 @cite_33 is another example of a classifier that has the ability to discover "semantic" entities in higher levels of its feature hierarchy @cite_31 @cite_36 . The scene CNN of @cite_33 was shown to detect objects that are discriminative for the scene classes @cite_36 . Our proposed method investigates scene transfer using a network trained on objects only, namely ImageNet @cite_23 . This is achieved without the need to train a network on millions of scene images, which is the goal of transfer. We compare the performance of the two in Section . dataset. The dataset contains @math store categories that are closely related to each other. For each category, @math training images are shown. Some categories are significantly visually similar with very confusing spatial layout and objects. 
Other store classes have widely varying visual features, which are difficult to model. | {
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_0",
"@cite_23",
"@cite_31"
],
"mid": [
"2134670479",
"1899185266",
"",
"2108598243",
"2952186574"
],
"abstract": [
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1607.07472 | 2952070652 | We present a novel approach for collision-free global navigation for continuous-time multi-agent systems with general linear dynamics. Our approach is general and can be used to perform collision-free navigation in 2D and 3D workspaces with narrow passages and crowded regions. As part of pre-computation, we compute multiple bridges in the narrow or tight regions in the workspace using kinodynamic RRT algorithms. Our bridge has certain geometric characteristics that enable us to calculate a collision-free trajectory for each agent using simple interpolation at runtime. Moreover, we combine interpolated bridge trajectories with local multi-agent navigation algorithms to compute global collision-free paths for each agent. The overall approach combines the performance benefits of coupled multi-agent algorithms with the pre-computed trajectories of the bridges to handle challenging scenarios. In practice, our approach can handle tens to hundreds of agents in real-time on a single CPU core in 2D and 3D workspaces. | There has been extensive work on multi-agent motion planning. This work includes many reactive methods such as RVO @cite_0 , HRVO @cite_6 , and their variants. These techniques compute a feasible movement for each agent such that it can avoid other agents and obstacles in a short time horizon. However, they cannot provide mathematical guarantees about whether or not agents can always find collision-free trajectories. In particular, they may not be able to avoid the inevitable collision states (ICS) @cite_23 @cite_5 @cite_13 @cite_17 in the configuration space, due to robots' dynamical constraints or obstacles in the scenario. Some methods @cite_17 @cite_18 @cite_4 provide partial solutions to these problems, but they still cannot guarantee avoidance of all ICS in the long horizon while working in a crowd scenario with narrow passages. 
Even for a scenario without static obstacles, it remains difficult to achieve robust collision-avoidance coordination when there is a large number of agents @cite_11 @cite_16 . | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"1751446056",
"2417113351",
"2144583323",
"192919555",
"2119360047",
"2141698884",
"1485718613",
"2083093559",
""
],
"abstract": [
"",
"We study the problem of path planning for unlabeled (indistinguishable) unit-disc robots in a planar environment cluttered with polygonal obstacles. We introduce an algorithm which minimizes the total path length, i.e., the sum of lengths of the individual paths. Our algorithm is guaranteed to find a solution if one exists, or report that none exists otherwise. It runs in time @math , where @math is the number of robots and @math is the total complexity of the workspace. Moreover, the total length of the returned solution is at most @math , where OPT is the optimal solution cost. To the best of our knowledge this is the first algorithm for the problem that has such guarantees. The algorithm has been implemented in an exact manner and we present experimental results that attest to its efficiency.",
"We present a decentralized algorithm for group-based coherent and reciprocal multi-agent navigation. In addition to generating collision-free trajectories for each agent, our approach is able to simulate macroscopic group movements and proxemic behaviors that result in coherent navigation. Our approach is general, makes no assumptions about the size or shape of the group, and can generate smooth trajectories for the agents. Furthermore, it can dynamically adapt to obstacles or the behavior of other agents. The additional overhead of generating proxemic group behaviors is relatively small and our approach can simulate hundreds of agents in real-time. We highlight its benefits on different benchmarks.",
"We introduce a new concept; meso-scale planning in real-time multi-agent navigation. Whereas many traditional approaches to multi-agent navigation typically consist of two-levels - a macro-scale level providing agents with a global direction of motion around (large) static obstacles, and a micro-scale level in which agents seek to avoid collision with other agents - our approach adds a meso-scale level to give agents realistic behavior in scenarios where groups of other agents (e.g. families or crowds in a virtual world) form coherent entities. Rather than moving straight through such groups, our approach lets agents move around them. Our formulation considers each agent as an individual that may perceive sets of other agents as a group, and plans its motion accordingly. We base our approach on the velocity obstacle concept, and we show using simulation results that our method dramatically improves the quality of the trajectories computed for the agents.",
"In this paper, we present a formal approach to reciprocal n-body collision avoidance, where multiple mobile robots need to avoid collisions with each other while moving in a common workspace. In our formulation, each robot acts fully independently, and does not communicate with other robots. Based on the definition of velocity obstacles [5], we derive sufficient conditions for collision-free motion by reducing the problem to solving a low-dimensional linear program. We test our approach on several dense and complex simulation scenarios involving thousands of robots and compute collision-free actions for all of them in only a few milliseconds. To the best of our knowledge, this method is the first that can guarantee local collision-free motion for a large number of robots in a cluttered workspace.",
"An inevitable collision state for a robotic system can be defined as a state for which, no matter what the future trajectory followed by the system is, a collision with an obstacle eventually occurs. An inevitable collision state takes into account both the dynamics of the system and the obstacles, fixed or moving. The main contribution of this paper is to lay down and explore this novel concept (and the companion concept of inevitable collision obstacle). Formal definitions of the inevitable collision states and obstacles are given. Properties fundamental for their characterisation are established. This concept is very general and can be useful both for navigation and motion planning purposes (for its own safety, a robotic system should never find itself in an inevitable collision state). The interest of this concept is illustrated by a safe motion planning example.",
"Motion safety for robotic systems operating in the real world is critical (especially when their size and dynamics make them potentially harmful for themselves or their environment). Motion safety is a taken-for-granted and ill-defined notion in the Robotics literature and the primary contribution of this paper is to propose three safety criteria that helps in understanding a number of key aspects related to the motion safety issue. A number of navigation schemes used by robotic systems operating in the real-world are then evaluated with respect to these safety criteria. It is established that, in all cases, they violate one or several of them. Accordingly, motion safety, especially in the presence of moving objects, cannot be guaranteed (in the sense that these robotic systems may end up in a situation where a collision inevitably occurs later in the future). Finally, it is shown that the concept of inevitable collision states introduced by Fraichard and Asama (2004) does respect the three above-mentioned safety criteria and therefore offers a theoretical answer to the motion safety issue.",
"In unlabeled multi-robot motion planning several interchangeable robots operate in a common workspace. The goal is to move the robots to a set of target positions such that each position will be occupied by some robot. In this paper, we study this problem for the specific case of unit-square robots moving amidst polygonal obstacles and show that it is PSPACE-hard. We also consider three additional variants of this problem and show that they are all PSPACE-hard as well. To the best of our knowledge, this is the first hardness proof for the unlabeled case. Furthermore, our proofs can be used to show that the labeled variant (where each robot is assigned with a specific target position), again, for unit-square robots, is PSPACE-hard as well, which sets another precedence, as previous hardness results require the robots to be of different shapes.",
"This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements and is NP-hard even if B has no velocity modulus bound, where, in both cases, B has 6 degrees of freedom. To prove these results, we use a unique method of simulation of a Turing machine that uses time to encode configurations (whereas previous lower bound proofs in robotic motion planning used the system position to encode configurations and so required unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems that we call asteroid avoidance problems : B, the object we wish to move, is a convex polyhedron that is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial time algorithms for the 2-D asteroid avoidance problem, where B is a moving polygon and we assume a constant number of obstacles, as well as single exponential time or polynomial space algorithms for the 3-D asteroid avoidance problem, where B is a convex polyhedron and there are arbitrarily many obstacles. Our techniques for solving these asteroid avoidance problems use “normal path” arguments, which are an intereting generalization of techniques previously used to solve static shortest path problems. 
We also give some additional positive results for various other dynamic movers problems, and in particular give polynomial time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.",
""
]
} |
1607.07472 | 2952070652 | We present a novel approach for collision-free global navigation for continuous-time multi-agent systems with general linear dynamics. Our approach is general and can be used to perform collision-free navigation in 2D and 3D workspaces with narrow passages and crowded regions. As part of pre-computation, we compute multiple bridges in the narrow or tight regions in the workspace using kinodynamic RRT algorithms. Our bridge has certain geometric characteristics that enable us to calculate a collision-free trajectory for each agent using simple interpolation at runtime. Moreover, we combine interpolated bridge trajectories with local multi-agent navigation algorithms to compute global collision-free paths for each agent. The overall approach combines the performance benefits of coupled multi-agent algorithms with the pre-computed trajectories of the bridges to handle challenging scenarios. In practice, our approach can handle tens to hundreds of agents in real-time on a single CPU core in 2D and 3D workspaces. | The simplest solution to the difficulty of inevitable collision states is to design suitable protocols for multi-robot coordination interaction @cite_20 @cite_12 . Some other approaches precompute roadmaps or corridors in the entire workspace to achieve high-quality path planning @cite_10 @cite_24 . However, these methods are not complete and may provide sub-optimal trajectories. Our method also leverages pre-computed bridges to deal with the navigation challenges in narrow or crowded regions. However, the bridges used in our approach have special properties beneficial for efficient global navigation in challenging areas in the workspace. | {
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2085512211",
"",
"2114510811",
"2137514195"
],
"abstract": [
"The motion-planning problem, involving the computation of a collision-free path for a moving entity amidst obstacles, is a central problem in fields such robotics and game design. In this paper we study the problem of planning high-quality paths. A high-quality path should have some desirable properties: it should be short, avoiding long detours, and at the same time it should stay at a safe distance from the obstacles, namely it should have clearance. We suggest a quality measure for paths, which balances between the above criteria of minimizing the path length while maximizing its clearance. We analyze the properties of optimal paths according to our measure, and devise an approximation algorithm to compute near-optimal paths amidst polygonal obstacles in the plane. We also apply our quality measure to corridors. Instead of planning a one-dimensional motion path for a moving entity, it is often more convenient to let the entity move in a corridor, where the exact motion path is determined by a local pla...",
"",
"The MARTHA project objectives are the control and the management of a fleet of autonomous mobile robots for transshipment tasks in harbors, airports and marshalling yards. One of the most challenging and key problems of the MARTHA project is multi-robot cooperation. A general concept for the control of a large fleet of autonomous mobile robots has been developed, implemented and validated in the framework of the MARTHA project. This is the first study in the autonomous mobile robot field to add multi-robot cooperation capabilities to such a large fleet of robots.",
"Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers' actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology--traffic lights and stop signs. 
Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach."
]
} |
1607.07472 | 2952070652 | We present a novel approach for collision-free global navigation for continuous-time multi-agent systems with general linear dynamics. Our approach is general and can be used to perform collision-free navigation in 2D and 3D workspaces with narrow passages and crowded regions. As part of pre-computation, we compute multiple bridges in the narrow or tight regions in the workspace using kinodynamic RRT algorithms. Our bridge has certain geometric characteristics that enable us to calculate a collision-free trajectory for each agent using simple interpolation at runtime. Moreover, we combine interpolated bridge trajectories with local multi-agent navigation algorithms to compute global collision-free paths for each agent. The overall approach combines the performance benefits of coupled multi-agent algorithms with the pre-computed trajectories of the bridges to handle challenging scenarios. In practice, our approach can handle tens to hundreds of agents in real-time on a single CPU core in 2D and 3D workspaces. | Centralized multi-agent navigation approaches usually leverage global single-robot planning algorithms (such as PRM or RRT) to compute a roadmap or grids for the high-level coordination @cite_7 @cite_14 . Compared to the decentralized methods, these algorithms compute all agents' trajectories simultaneously and thus can better handle the complex interactions among agents. These methods can also be extended to handle non-holonomic multi-agent systems (e.g., systems composed of differential-drive robots), by using local planners like RRT @cite_22 , RRT @math @cite_25 , or other algorithms that can deal with differential dynamics @cite_19 . In addition to their benefit in terms of finding feasible trajectories, the centralized algorithms can avoid deadlock cases by leveraging high-level scheduling or coordination strategies, either coupled @cite_8 @cite_1 or decoupled @cite_2 @cite_15 @cite_9 @cite_3 . 
In general, these strategies only work in theory, because they have to ignore the robot's dynamics and assume the robots to be operating in a discrete state space. Finally, centralized multi-agent navigation algorithms are computationally expensive and can sometimes be too slow for real-world applications. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_25"
],
"mid": [
"2007193228",
"2000359213",
"1968478252",
"",
"",
"2108643443",
"",
"2054585537",
"1971458750",
"2133067819",
"2105204961"
],
"abstract": [
"In this paper we address the problem of motion planning for multiple robots. We introduce a prioritized method, based on a powerful method for motion planning in dynamic environments, recently developed by the authors. Our approach is generically applicable: there is no limitation on the number of degrees of freedom of each of the robots, and robots of various types - for instance free-flying robots and articulated robots - can be used simultaneously. Results show that high-quality paths can be produced in less than a second of computation time, even in confined environments involving many robots. We examine three issues in particular in this paper: the assignment of priorities to the robots, the performance of prioritized planning versus coordinated planning, and the influence of the extent by which the robot motions are constrained on the performance of the method. Results are reported in terms of both running time and the quality of the paths produced.",
"This paper presents the first randomized approach to kinodynamic planning (also known as trajectory planning or trajectory design). The task is to determine control inputs to drive a robot from an ini ial configuration and velocity to a goal configuration and velocity while obeying physically based dynamical models and avoiding obstacles in the robot’s environment. The authors consider generic systems that express the nonlinear dynamics of a robot in terms of the robot’s high-dimensional configuration space. Kinodynamic planning is treated as a motion-planning problem in a higher dimensional state space that has both first-order differential constraints and obstacle-based global constraints. The state space serves the same role as the configuration space for basic path planning; however, standard randomized path-planning techniques do not directly apply to planning trajectories in the state space. The authors have developed a randomized planning approach that is particularly tailored to trajectory plannin...",
"In this paper, a new method is presented for motion planning in dynamic environments, that is, finding a trajectory for a robot in a scene consisting of both static and dynamic, moving obstacles. We propose a practical algorithm based on a roadmap that is created for the static part of the scene. On this roadmap, an approximately time-optimal trajectory from a start to a goal configuration is computed, such that the robot does not collide with any moving obstacle. The trajectory is found by performing a two-level search for a shortest path. On the local level, trajectories on single edges of the roadmap are found using a depth-first search on an implicit grid in state-time space. On the global level, these local trajectories are coordinated using an A sup * -search to find a global trajectory to the goal configuration. The approach is applicable to any robot type in configuration spaces with any dimension, and the motions of the dynamic obstacles are unconstrained, as long as they are known beforehand. The approach has been implemented for both free-flying and articulated robots in three-dimensional workspaces, and it has been applied to multirobot motion planning, as well. Experiments show that the method achieves interactive performance in complex environments.",
"",
"",
"We present a replanning algorithm for repairing rapidly-exploring random trees when changes are made to the configuration space. Instead of abandoning the current RRT, our algorithm efficiently removes just the newly-invalid parts and maintains the rest. It then grows the resulting tree until a new solution is found. We use this algorithm to create a probabilistic analog to the widely-used D* family of deterministic algorithms, and demonstrate its effectiveness in a multirobot planning domain",
"",
"We present Kinodynamic RRT*, an incremental sampling-based approach for asymptotically optimal motion planning for robots with linear dynamics. Our approach extends RRT*, which was introduced for holonomic robots [10], by using a fixed-final-state-free-final-time controller that optimally connects any pair of states, where the cost function is expressed as a trade-off between the duration of a trajectory and the expended control effort. Our approach generalizes earlier work on RRT* for kinodynamic systems, as it guarantees asymptotic optimality for any system with controllable linear dynamics, in state spaces of any dimension. In addition, we show that for the rich subclass of systems with a nilpotent dynamics matrix, closed-form solutions for optimal trajectories can be derived, which keeps the computational overhead of our algorithm compared to traditional RRT* at a minimum. We demonstrate the potential of our approach by computing asymptotically optimal trajectories in three challenging motion planning scenarios: (i) a planar robot with a 4-D state space and double integrator dynamics, (ii) an aerial vehicle with a 10-D state space and linearized quadrotor dynamics, and (iii) a car-like robot with a 5-D state space and non-linear dynamics.",
"We propose a framework, called Lightning, for planning paths in high-dimensional spaces that is able to learn from experience, with the aim of reducing computation time. This framework is intended for manipulation tasks that arise in applications ranging from domestic assistance to robot-assisted surgery. Our framework consists of two main modules, which run in parallel: a planning-from-scratch module, and a module that retrieves and repairs paths stored in a path library. After a path is generated for a new query, a library manager decides whether to store the path based on computation time and the generated path's similarity to the retrieved path. To retrieve an appropriate path from the library we use two heuristics that exploit two key aspects of the problem: (i) A correlation between the amount a path violates constraints and the amount of time needed to repair that path, and (ii) the implicit division of constraints into those that vary across environments in which the robot operates and those that do not. We evaluated an implementation of the framework on several tasks for the PR2 mobile manipulator and a minimally-invasive surgery robot in simulation. We found that the retrieve-and-repair module produced paths faster than planning-from-scratch in over 90 of test cases for the PR2 and in 58 of test cases for the minimally-invasive surgery robot.",
"Describes experiments with a probabilistic roadmap planner (PRM) on a spot-welding station with 2 to 6 robot manipulators combining 12 to 36 degrees of freedom. When performing centralized planning, the planner has proven to be reliable and fast. When performing decoupled planning, it was not significantly faster, but it was much less reliable, failing to find a solution 30 to 75 of the times in 6-robot examples. This is an important result as it invalidates the assumption that the loss of completeness in performing decoupled planning is not very substantial in practice and indicates that centralized planning is a more desirable approach-at least in applications like spot-welding, which requires rather tight robot coordination.",
""
]
} |
1607.07660 | 2950636009 | Computing the epipolar geometry between cameras with very different viewpoints is often very difficult. The appearance of objects can vary greatly, and it is difficult to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object. These methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes having multiple moving objects by using the "Motion Barcodes", a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods we assume that cameras are relatively stationary and that moving objects have already been extracted using background subtraction. | Sinha and Pollefeys @cite_9 used silhouettes to calibrate a network of cameras, assuming a single moving silhouette in a video. Each RANSAC iteration takes a different frame and samples two pairs of corresponding tangent lines to the convex hull of the silhouette @cite_5 . The intersection of each pair of lines proposes an epipole. | {
"cite_N": [
"@cite_5",
"@cite_9"
],
"mid": [
"2063409146",
"2085571680"
],
"abstract": [
"For smooth curved surfaces the dominant image feature is the apparent contour, or outline. This is the projection of the contour generator, the locus of points on the surface which separate visible and occluded parts. The contour generator is dependent of the local surface geometry and the viewpoint. Each viewpoint will generate a different contour generator. This paper addresses the problem of recovering the three–dimensional shape and motion of curves and surfaces from image sequences of apparent contours. @PARASPLIT For known viewer motion the visible surfaces can then be reconstructed by exploiting a spatio–temporal parametrization of the apparent contours and contour generators under viewer motion. A natural parametrization exploits the contour generators and the epipolar geometry between successive viewpoints. The epipolar parametrization leads to simplified expressions for the recovery of depth and surface curvatures from image velocities and accelerations and known viewer motion. @PARASPLIT The parametrization is, however, degenerate when the apparent contour is singular since the ray is tangent to the contour generator and at frontier points when the epipolar plane is a tangent plane to the surface. At these isolated points the epipolar parametrization can no longer be used to recover the local surface geometry. This paper reviews the epipolar parametrization and shows how the degenerate cases can be used to recover surface geometry and unknown viewer motion from apparent contours of curved surfaces. Practical implementations are outlined.",
"In this paper we present an automatic method for calibrating a network of cameras that works by analyzing only the motion of silhouettes in the multiple video streams. This is particularly useful for automatic reconstruction of a dynamic event using a camera network in a situation where pre-calibration of the cameras is impractical or even impossible. The key contribution of this work is a RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras only from the motion of silhouettes in video. Our approach involves first independently computing the fundamental matrix and synchronization for multiple pairs of cameras in the network. In the next stage the calibration and synchronization for the complete network is recovered from the pairwise information. Finally, a visual-hull algorithm is used to reconstruct the shape of the dynamic object from its silhouettes in video. For unsynchronized video streams with sub-frame temporal offsets, we interpolate silhouettes between successive frames to get more accurate visual hulls. We show the effectiveness of our method by remotely calibrating several different indoor camera networks from archived video streams."
]
} |
1607.07660 | 2950636009 | Computing the epipolar geometry between cameras with very different viewpoints is often very difficult. The appearance of objects can vary greatly, and it is difficult to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object. These methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes having multiple moving objects by using the "Motion Barcodes", a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods we assume that cameras are relatively stationary and that moving objects have already been extracted using background subtraction. | Both methods above @cite_9 @cite_11 fail when there are multiple moving objects in the scene, as they are based on the convex hull of all the moving objects in the image. In the example shown in Fig , objects that appear only in one of the cameras have a destructive effect on the convex hull. Our current paper presents an approach that does not use the convex hull, and can be used with videos having multiple moving objects. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"2085571680",
"2247382355"
],
"abstract": [
"In this paper we present an automatic method for calibrating a network of cameras that works by analyzing only the motion of silhouettes in the multiple video streams. This is particularly useful for automatic reconstruction of a dynamic event using a camera network in a situation where pre-calibration of the cameras is impractical or even impossible. The key contribution of this work is a RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras only from the motion of silhouettes in video. Our approach involves first independently computing the fundamental matrix and synchronization for multiple pairs of cameras in the network. In the next stage the calibration and synchronization for the complete network is recovered from the pairwise information. Finally, a visual-hull algorithm is used to reconstruct the shape of the dynamic object from its silhouettes in video. For unsynchronized video streams with sub-frame temporal offsets, we interpolate silhouettes between successive frames to get more accurate visual hulls. We show the effectiveness of our method by remotely calibrating several different indoor camera networks from archived video streams.",
"Computing the epipolar geometry between cameras with very different viewpoints is often problematic as matching points are hard to find. In these cases, it has been proposed to use information from dynamic objects in the scene for suggesting point and line correspondences. We propose a speed up of about two orders of magnitude, as well as an increase in robustness and accuracy, to methods computing epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: motion barcode for lines. Motion barcode is a binary temporal sequence for lines, indicating for each frame the existence of at least one foreground pixel on that line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be limited only to lines having similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry."
]
} |
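The "motion barcode" temporal signature described in the abstracts above is simple enough to sketch in code. Below is an illustrative Python sketch, assuming per-frame binary foreground masks and a candidate line given as sampled pixel coordinates; the function names and the normalized-correlation similarity are assumptions for illustration, not taken from the papers' implementations.

```python
import math

def motion_barcode(masks, line_pixels):
    """Binary temporal signature of a line: entry t is 1 iff at least one
    foreground pixel lies on the line in frame t.
    masks: list of 2D binary foreground masks (one per frame);
    line_pixels: iterable of (row, col) coordinates sampled along the line."""
    return [1 if any(mask[r][c] for (r, c) in line_pixels) else 0
            for mask in masks]

def barcode_similarity(b1, b2):
    """Normalized correlation of two equal-length binary barcodes."""
    dot = sum(x * y for x, y in zip(b1, b2))
    n1 = math.sqrt(sum(x * x for x in b1))
    n2 = math.sqrt(sum(y * y for y in b2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

In the papers' setting, the search for corresponding epipolar lines across two cameras would then be restricted to line pairs whose barcodes exceed a similarity threshold, which is what makes the method fast.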
1607.07660 | 2950636009 | Computing the epipolar geometry between cameras with very different viewpoints is often very difficult. The appearance of objects can vary greatly, and it is difficult to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object. These methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes having multiple moving objects by using the "Motion Barcodes", a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods, we assume that cameras are relatively stationary and that moving objects have already been extracted using background subtraction. | In other related work, @cite_7 computed essential matrices between each pair of cameras from image trajectories of moving objects. They used the image centroids of the objects as corresponding points. However, since for most objects and most pairs of views the centroids do not correspond to the same 3D point, this computation is error-prone. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2130654917"
],
"abstract": [
"Camera networks are being used in more applications as different types of sensor networks are used to instrument large spaces. Here we show a method for localizing the cameras in a camera network to recover the orientation and position up to scale of each camera, even when cameras are wide-baseline or have different photometric properties. Using moving objects in the scene, we use an intra-camera step and an inter-camera step in order to localize. The intra-camera step compares frames from a single camera to build the tracks of the objects in the image plane of the camera. The inter-camera step uses these object image tracks from each camera as features for correspondence between cameras. We demonstrate this idea on both simulated and real data."
]
} |
1607.07660 | 2950636009 | Computing the epipolar geometry between cameras with very different viewpoints is often very difficult. The appearance of objects can vary greatly, and it is difficult to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object. These methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes having multiple moving objects by using the "Motion Barcodes", a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods, we assume that cameras are relatively stationary and that moving objects have already been extracted using background subtraction. | Other methods assume that the objects are moving on a plane @cite_6, or that the objects are people walking on a plane, for both single-camera calibration @cite_8 @cite_4 and two-camera calibration @cite_12. | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_6",
"@cite_8"
],
"mid": [
"2129597587",
"2142909180",
"2165462906",
""
],
"abstract": [
"A self-calibration method to estimate a camera's intrinsic and extrinsic parameters from vertical line segments of the same height is presented. An algorithm to obtain the needed line segments by detecting the head and feet positions of a walking human in his leg-crossing phases is described. Experimental results show that the method is accurate and robust with respect to various viewing angles and subjects",
"A calibration algorithm of two cameras using observations of a moving person is presented. Similar methods have been proposed for self-calibration with a single camera, but internal parameter estimation is only limited to the focal length. Recently it has been demonstrated that assuming the principal point at the center of the image causes inaccuracy in all estimated parameters. Our method exploits two cameras, using image points of head and foot locations of a moving person, to determine for both cameras the focal length and the principal point. Moreover, with the increasing number of cameras there is a demand for procedures to determine their relative placements. In this paper we also describe a method to find the relative position and orientation of two cameras: the rotation matrix and the translation vector which describe the rigid motion between the coordinate frames fixed in the two cameras. Results in synthetic and real scenes are presented to evaluate the performance of the proposed method.",
"This paper tackles the problem of self-calibration of multiple cameras which are very far apart. Given a set of feature correspondences one can determine the camera geometry. The key problem we address is finding such correspondences. Since the camera geometry (location and orientation) and photometric characteristics vary considerably between images one cannot use brightness and or proximity constraints. Instead we propose a three step approach: first we use moving objects in the scene to determine a rough planar alignment, next we use static features to improve the alignment, finally we compute the epipolar geometry from the the homography matrix of the planar alignment. We do not assume synchronized cameras and we show that enforcing geometric constraints enables us to align the tracking data in time. We present results on challenging outdoor scenes using real time tracking data.",
""
]
} |
1607.06871 | 2513385807 | The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the generalizability of PCA subspaces. This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. The proposed approach is evaluated in various applications to demonstrate its robustness and capabilities, i.e. facial super-resolution reconstruction, facial off-angle reconstruction or face frontalization, facial occlusion removal and age estimation using challenging face databases, i.e. Labeled Face Parts in the Wild (LFPW), Helen and FG-NET. Compared to AAMs and other deep learning based approaches, the proposed DAMs achieve competitive results in those applications, thus showing their advantages in handling occlusions, facial representation, and reconstruction. | @cite_15 proposed a deep network called stacked progressive auto-encoders to learn pose-robust features for face recognition by modeling the complex non-linear transformation from non-frontal face images to frontal ones. 
Similarly, @cite_7 proposed to learn robust face representation features using a deep architecture based on a supervised auto-encoder. The learning aims at transforming faces with variations into the canonical view and extracting similar features for the same subject. @cite_28 proposed a deep learning framework to jointly learn face representation using multimodal information. The model consists of a set of CNNs to extract complementary facial features, and a three-layer stacked auto-encoder to compress the extracted features. Besides learning identity features for face recognition or verification tasks, age-related features can also be extracted. Locating facial key points is also an essential step to represent facial shape. @cite_14 proposed three-level cascaded CNNs for coarse-to-fine facial point detection (detecting only the left eye, right eye, nose, and two mouth corners). | {
"cite_N": [
"@cite_28",
"@cite_15",
"@cite_14",
"@cite_7"
],
"mid": [
"1951319388",
"2054814877",
"1976948919",
"1546200464"
],
"abstract": [
"Face images appearing in multimedia applications, e.g., social networks and digital entertainment, usually exhibit dramatic pose, illumination, and expression variations, resulting in considerable performance degradation for traditional face recognition algorithms. This paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information. The proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks (CNNs) and a three-layer stacked auto-encoder (SAE). The set of CNNs extracts complementary facial features from multimodal data. Then, the extracted features are concatenated to form a high-dimensional feature vector, whose dimension is compressed by SAE. All of the CNNs are trained using a subset of 9,000 subjects from the publicly available CASIA-WebFace database, which ensures the reproducibility of this work. Using the proposed single CNN architecture and limited training data, a 98.43% verification rate is achieved on the LFW database. Benefitting from the complementary information contained in multimodal data, our small ensemble system achieves higher than 99.0% recognition rate on LFW using a publicly available training set.",
"While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.",
"This paper targets learning robust image representation for single training sample per person face recognition. Motivated by the success of deep learning in image representation, we propose a supervised autoencoder, which is a new type of building block for deep architectures. There are two features distinct our supervised autoencoder from standard autoencoder. First, we enforce the faces with variants to be mapped with the canonical face of the person, for example, frontal face with neutral expression and normal illumination; Second, we enforce features corresponding to the same person to be similar. As a result, our supervised autoencoder extracts the features which are robust to variances in illumination, expression, occlusion, and pose, and facilitates the face recognition. We stack such supervised autoencoders to get the deep architecture and use it for extracting features in image representation. Experimental results on the AR, Extended Yale B, CMU-PIE, and Multi-PIE data sets demonstrate that by coupling with the commonly used sparse representation-based classification, our stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network, in spite of much less training data and without any domain information. Moreover, supervised autoencoder can also be used for face verification, which further demonstrates its effectiveness for face representation."
]
} |
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | Existing methods for recommender systems can roughly be categorized into collaborative filtering (CF) based methods, content-based (CB) methods and hybrid methods. CF-based methods @cite_8 are based on users' interactions with items, such as clicks, and do not require domain knowledge. Content-based methods make use of user or product content profiles. In practice, CF methods are more popular because they can discover interesting associations between products without requiring the heavy knowledge collection needed by content-based methods. However, CF methods suffer from the cold-start problem, in which no or few interactions are available for niche or new items in the system. In recent years, more sophisticated methods, namely latent factor models, have been developed to address the data-sparsity problem of CF methods, which we will discuss in Section . To further help overcome the cold-start problem, recent works have focused on developing hybrid methods that combine latent factor models with content information, which we will cover in Section . | {
"cite_N": [
"@cite_8"
],
"mid": [
"1987431925"
],
"abstract": [
"This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase/rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an ℓ1-norm and ℓ2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods."
]
} |
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | Several modifications have been proposed to better align MF methods with the recommendation objective, for instance Bayesian Personalized Ranking @cite_0 and Logistic MF @cite_19. The former learns user and item latent vectors through a pairwise ranking loss to emphasize the relevance-based ranking of items. The latter models the probability that a user would interact with an item by replacing the squared loss in the MF method with the logistic loss @cite_19. | {
"cite_N": [
"@cite_0",
"@cite_19"
],
"mid": [
"2140310134",
"2054141820"
],
"abstract": [
"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels."
]
} |
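The pairwise BPR objective summarized in the row above can be sketched concretely. The following is a minimal, illustrative Python implementation, assuming SGD on the loss -ln σ(x_ui - x_uj) with L2 regularization for a triple (user u, preferred item i, non-preferred item j); the dimensions, rates, and function names are assumptions, not the paper's reference code.

```python
import math
import random

def train_bpr(triples, n_users, n_items, dim=4, lr=0.05, reg=0.01,
              epochs=200, seed=0):
    """Learn user/item latent vectors by maximizing ln sigma(x_ui - x_uj)."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, j in triples:  # user u prefers item i over item j
            x = (sum(a * b for a, b in zip(U[u], V[i]))
                 - sum(a * b for a, b in zip(U[u], V[j])))
            g = 1.0 / (1.0 + math.exp(x))  # sigma(-x), the gradient scale
            for f in range(dim):
                uu, vi, vj = U[u][f], V[i][f], V[j][f]
                U[u][f] += lr * (g * (vi - vj) - reg * uu)
                V[i][f] += lr * (g * uu - reg * vi)
                V[j][f] += lr * (-g * uu - reg * vj)
    return U, V

def score(U, V, u, i):
    """Predicted preference of user u for item i (dot product)."""
    return sum(a * b for a, b in zip(U[u], V[i]))
```

After training, items are ranked for a user by this score; the loss never fits absolute ratings, only the relative order of observed versus unobserved items, which is the point of the method.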
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | One of the first methods to learn user and item latent representations through a neural network was proposed in @cite_23. The authors utilized Restricted Boltzmann Machines to model user-item interactions and perform recommendations. Recently, shallow neural networks have been gaining attention thanks to the success of word embeddings in various NLP tasks, the focus being on the Word2Vec model @cite_20. An application of Word2Vec to the recommendation task, called the Prod2Vec model, was proposed in @cite_26. It generates product embeddings from sequences of purchases and performs recommendation based on the most similar products in the embedding space. Our work is an extension of Prod2Vec and we will present its details in Section . | {
"cite_N": [
"@cite_20",
"@cite_23",
"@cite_26"
],
"mid": [
"1614298861",
"2099866409",
"1963836406"
],
"abstract": [
"",
"Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user/movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system.",
"In recent years online advertising has become increasingly ubiquitous and effective. Advertisements shown to visitors fund sites and apps that publish digital content, manage social networks, and operate e-mail services. Given such large variety of internet resources, determining an appropriate type of advertising for a given platform has become critical to financial success. Native advertisements, namely ads that are similar in look and feel to content, have had great success in news and social feeds. However, to date there has not been a winning formula for ads in e-mail clients. In this paper we describe a system that leverages user purchase history determined from e-mail receipts to deliver highly personalized product ads to Yahoo Mail users. We propose to use a novel neural language-based algorithm specifically tailored for delivering effective product recommendations, which was evaluated against baselines that included showing popular products and products predicted based on co-occurrence. We conducted rigorous offline testing using a large-scale product purchase data set, covering purchases of more than 29 million users from 172 e-commerce websites. Ads in the form of product recommendations were successfully tested on online traffic, where we observed a steady 9% lift in click-through rates over other ad formats in mail, as well as comparable lift in conversion rates. Following successful tests, the system was launched into production during the holiday season of 2014."
]
} |
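The Prod2Vec idea in the row above treats items in a purchase sequence like words in a sentence and trains a skip-gram model with negative sampling over them. Below is an illustrative pure-Python sketch of that training loop; the dimensions, learning rate, negative-sampling scheme, and function names are simplifying assumptions for exposition, not the paper's actual implementation.

```python
import math
import random

def prod2vec(sequences, dim=8, window=1, negatives=2, lr=0.05,
             epochs=200, seed=0):
    """Skip-gram with negative sampling over item sequences.
    Returns a dict mapping each item to its learned input embedding."""
    rng = random.Random(seed)
    vocab = sorted({it for seq in sequences for it in seq})
    idx = {it: k for k, it in enumerate(vocab)}
    W_in = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in vocab]
    W_out = [[0.0] * dim for _ in vocab]
    sig = lambda z: 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
    for _ in range(epochs):
        for seq in sequences:
            for p, center in enumerate(seq):
                c = idx[center]
                lo, hi = max(0, p - window), min(len(seq), p + window + 1)
                for q in range(lo, hi):
                    if q == p:
                        continue
                    # one positive (context) target, plus uniform negatives
                    # (may occasionally resample the positive; ignored here)
                    targets = [(idx[seq[q]], 1.0)]
                    targets += [(rng.randrange(len(vocab)), 0.0)
                                for _ in range(negatives)]
                    for t, label in targets:
                        z = sum(a * b for a, b in zip(W_in[c], W_out[t]))
                        g = lr * (label - sig(z))
                        for f in range(dim):
                            W_out[t][f], W_in[c][f] = (
                                W_out[t][f] + g * W_in[c][f],
                                W_in[c][f] + g * W_out[t][f])
    return {it: W_in[idx[it]] for it in vocab}

def cos(a, b):
    """Cosine similarity used to rank items for recommendation."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0
```

Recommendation then amounts to returning, for a given item, the items with the highest cosine similarity in the embedding space; Meta-Prod2vec's extension is to add metadata tokens as extra prediction targets in the same loop.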
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | Many techniques have been used recently to create unified representations from latent factors and content information. One way to integrate user and item content information is to use it to estimate user and item latent factors through regression @cite_16 . Another approach is to learn latent factors for both CF and content features, known as Factorization Machines @cite_4 . | {
"cite_N": [
"@cite_16",
"@cite_4"
],
"mid": [
"2054553473",
"2002834872"
],
"abstract": [
"We propose a novel latent factor model to accurately predict response for large scale dyadic data in the presence of features. Our approach is based on a model that predicts response as a multiplicative function of row and column latent factors that are estimated through separate regressions on known row and column features. In fact, our model provides a single unified framework to address both cold and warm start scenarios that are commonplace in practical applications like recommender systems, online advertising, web search, etc. We provide scalable and accurate model fitting methods based on Iterated Conditional Mode and Monte Carlo EM algorithms. We show our model induces a stochastic process on the dyadic space with kernel (covariance) given by a polynomial function of features. Methods that generalize our procedure to estimate factors in an online fashion for dynamic applications are also considered. Our method is illustrated on benchmark datasets and a novel content recommendation application that arises in the context of Yahoo! Front Page. We report significant improvements over several commonly used methods on all datasets.",
"The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods. We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime."
]
} |
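The Factorization Machines model mentioned in the row above scores a feature vector x (user id, item id, and content attributes, one-hot or real-valued) as y(x) = w0 + Σ_i w_i x_i + Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j, so CF and content features share one set of latent factors. The pairwise term can be computed in O(k·n) rather than O(k·n²) via the identity Σ_{i<j} ⟨v_i,v_j⟩ x_i x_j = ½ Σ_f [(Σ_i v_if x_i)² − Σ_i v_if² x_i²]. A sketch of the prediction step (names are illustrative; training is omitted):

```python
def fm_predict(x, w0, w, V):
    """Factorization Machine prediction.
    x: feature vector; w0: global bias; w: linear weights;
    V: n x k matrix of latent factor vectors, one row per feature."""
    k = len(V[0])
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    pair = 0.0
    for f in range(k):
        # linear-time evaluation of the pairwise interactions
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pair += s * s - s_sq
    return linear + 0.5 * pair
```

Because every feature gets a factor vector, an interaction weight is defined even for user-item pairs never observed together, which is what lets FMs blend collaborative and content signals in one model.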
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | Tensor factorization has been suggested as a generalization of MF for incorporating additional information @cite_24. In this approach, the user-item-content tensor is factorized in a common latent space. The authors in @cite_25 propose a co-factorization approach in which the latent user and item factors are shared between factorizations of the user-item matrix and the user and item content matrices. Similar to @cite_30, they also assign weights to negative examples based on user-item content-based dissimilarity. | {
"cite_N": [
"@cite_24",
"@cite_30",
"@cite_25"
],
"mid": [
"2102937240",
"1992380306",
"1999031685"
],
"abstract": [
"Context has been recognized as an important factor to consider in personalized Recommender Systems. However, most model-based Collaborative Filtering approaches such as Matrix Factorization do not provide a straightforward way of integrating context information into the model. In this work, we introduce a Collaborative Filtering method based on Tensor Factorization, a generalization of Matrix Factorization that allows for a flexible and generic integration of contextual information by modeling the data as a User-Item-Context N-dimensional tensor instead of the traditional 2D User-Item matrix. In the proposed model, called Multiverse Recommendation, different types of context are considered as additional dimensions in the representation of the data as a tensor. The factorization of this tensor leads to a compact model of the data which can be used to provide context-aware recommendations. We provide an algorithm to address the N-dimensional factorization, and show that the Multiverse Recommendation improves upon non-contextual Matrix Factorization by up to 30% in terms of the Mean Absolute Error (MAE). We also compare to two state-of-the-art context-aware methods and show that Tensor Factorization consistently outperforms them both in semi-synthetic and real-world data - improvements range from 2.5% to more than 12% depending on the data. Noticeably, our approach outperforms other methods by a wider margin whenever more contextual information is available.",
"One-Class Collaborative Filtering (OCCF) is an emerging setup in collaborative filtering in which only positive examples or implicit feedback can be observed. Compared with the traditional collaborative filtering setting where the data has ratings, OCCF is more realistic in many scenarios when no ratings are available. In this paper, we propose to improve OCCF accuracy by exploiting the rich user information that is often naturally available in community-based interactive information systems, including a user's search query history, purchasing and browsing activities. We propose two ways to incorporate such user information into the OCCF models: one is to linearly combine scores from different sources and the other is to embed user information into collaborative filtering. Experimental results on a large-scale retail data set from a major e-commerce company show that the proposed methods are effective and can improve the performance of the One-Class Collaborative Filtering over baseline methods through leveraging rich user information.",
"Most recommender systems focus on the areas of leisure activities. As the Web evolves into omnipresent utility, recommender systems penetrate more serious applications such as those in online scientific communities. In this paper, we investigate the task of recommendation in online scientific communities which exhibit two characteristics: 1) there exists very rich information about users and items; 2) The users in the scientific communities tend not to give explicit ratings to the resources, even though they have clear preference in their minds. To address the above two characteristics, we propose matrix factorization techniques to incorporate rich user and item information into recommendation with implicit feedback. Specifically, the user information matrix is decomposed into a shared subspace with the implicit feedback matrix, and so does the item information matrix. In other words, the subspaces between multiple related matrices are jointly learned by sharing information between the matrices. The experiments on the testbed from an online scientific community (i.e., Nanohub) show that the proposed method can effectively improve the recommendation performance."
]
} |