aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1412.6706 | 1780723536 | Abstract : Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | Network analysts can often still gain insight when provided only the ego networks (the subgraph containing the ego and its direct neighbors) of individuals instead of the entire network. Based on this, the authors of @cite_46 developed a ``1.5-dimensional'' dynamic network visualization capable of showing the ego network of a particular individual of interest over time in a single picture. The authors of @cite_23 adapted the classical parallel coordinates visualization for dynamic networks. In their system, vertices are drawn on parallel axes, the area between adjacent axes represents a time interval, and edges connect vertices in adjacent axes. The inevitable problem of having an overwhelming number of edge crossings for larger datasets is addressed by reducing the opacity of the lines drawn. | {
"cite_N": [
"@cite_46",
"@cite_23"
],
"mid": [
"2171558084",
"2106268337"
],
"abstract": [
"The dynamic network visualization has been a challenging topic due to the complexity introduced by the extra time dimension. Existing solutions to this problem are usually good for the overview and presentation, but not for the interactive analysis. We propose in this paper a new approach which only considers the dynamic network central to a focus node (aka dynamic ego network). The navigation of the entire network is achieved by switching the focus node with user interactions. With this approach, the complexity of the compressed dynamic network is greatly reduced without sacrificing the network and time affinity central to the focus node. As a result, we are able to present each dynamic ego network in a single static view, well supporting user analysis on temporal network patterns. We describe our general framework including the network data pre-processing, 1.5D network and trend visualization design, layout algorithms, as well as several customized interactions. In addition, we show that our approach can also be extended to visualize the event-based and multimodal dynamic networks. Finally, we demonstrate, through two practical case studies, the effectiveness of our solution in support of visual evidence and pattern discovery.",
"We present a novel dynamic graph visualization technique based on node-link diagrams. The graphs are drawn side-byside from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words."
]
} |
1412.6706 | 1780723536 | Abstract : Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | To the author's knowledge, the earliest portrayal of events on a network with a spatial encoding of time is the ``sequence diagram,'' developed in the 1970s and used to understand timing and synchronization in distributed systems @cite_6 . These visualizations were originally hand drawn; nodes were represented as vertical parallel lines with time moving from the bottom to the top of the page. Communication events between processes were represented as wavy lines connecting the processes sending and receiving the message at the local times the message was sent and received. Sequence diagrams are effective for understanding trivially small networks, but become difficult to scale up as more nodes are added. The ordering of nodes in the diagram can be chosen to minimize crossings, but this is the Traveling Salesman Problem, and even if an optimal solution were found, there is no guarantee that the resulting diagram would be interpretable for large or dense networks. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1973501242"
],
"abstract": [
"The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become."
]
} |
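The node-ordering problem in the row above can be made concrete with a small sketch. The following brute-force search is purely illustrative (it is not SVEN's algorithm, and all process names and message data are hypothetical): each process becomes a vertical line at an integer x-position, each message a segment from (x(sender), t_send) to (x(receiver), t_recv), and crossings are counted with a standard segment-intersection test.

```python
from itertools import permutations

def ccw(a, b, c):
    # Twice the signed area of triangle (a, b, c).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    # Proper (interior) intersection test for two line segments.
    d1, d2 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    d3, d4 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossings(order, messages):
    # Count message-line crossings for one left-to-right process order.
    # messages: list of (sender, receiver, t_send, t_recv).
    x = {p: i for i, p in enumerate(order)}
    segs = [((x[s], ts), (x[r], tr)) for s, r, ts, tr in messages]
    return sum(
        segments_cross(*segs[i], *segs[j])
        for i in range(len(segs))
        for j in range(i + 1, len(segs))
    )

def best_order(processes, messages):
    # Exhaustive search: feasible only for trivially small diagrams,
    # which is exactly the scaling limitation the passage describes.
    return min(permutations(processes), key=lambda o: crossings(o, messages))
```

The factorial search space is why, as the row notes, an exact optimum is out of reach for anything but tiny networks.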
1412.6706 | 1780723536 | Abstract : Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | Visibility representations, which have existed since the mid-1980s, are also visually similar to storylines @cite_3 . These are representations of planar graphs in which nodes are drawn as horizontal lines, and edges are drawn as vertical lines connecting their endpoints. Node-edge crossings are not allowed, so non-planar graphs do not have visibility representations. Due to this constraint, it is not apparent how visibility representations can be used as a general-purpose solution to the dynamic graph visualization problem. Nor is it apparent how to overload the horizontal axis in a visibility representation to also encode time. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2039512408"
],
"abstract": [
"We study visibility representations of graphs, which are constructed by mapping vertices to horizontal segments, and edges to vertical segments that intersect only adjacent vertex-segments. Every graph that admits this representation must be planar. We consider three types of visibility representations, and we give complete characterizations of the classes of graphs that admit them. Furthermore, we present linear time algorithms for testing the existence of and constructing visibility representations of planar graphs. Many applications of our results can be found in VLSI layout."
]
} |
1412.6706 | 1780723536 | Abstract : Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | Some researchers have simplified the dynamic graph visualization problem by increasing the level of abstraction and portraying how the network communities change over time @cite_11 @cite_32 . Visualizing this information is inherently simpler than visualizing the underlying dynamic network. Rosvall and Bergstrom apply a significance clustering technique to the dynamic network at fixed time windows to partition the vertices into groups. Then the flow of nodes between clusters at consecutive time windows is visualized using a technique similar to Sankey diagrams @cite_50 or parallel sets @cite_25 , which they call ``alluvial diagrams.'' However, the authors make no attempt to improve the aesthetic quality of their visualizations by re-arranging nodes to reduce clutter (crossings between time windows). Instead, nodes are ordered within each time window according to cluster size, which causes the diagrams to scale poorly as a function of the number of communities and time windows. | {
"cite_N": [
"@cite_25",
"@cite_32",
"@cite_50",
"@cite_11"
],
"mid": [
"1775412516",
"2045317751",
"",
"2155369095"
],
"abstract": [
"The discrete nature of categorical data makes it a particular challenge for visualization. Methods that work very well for continuous data are often hardly usable with categorical dimensions. Only few methods deal properly with such data, mostly because of the discrete nature of categorical data, which does not translate well into the continuous domains of space and color. Parallel sets is a new visualization method that adopts the layout of parallel coordinates, but substitutes the individual data points by a frequency based representation. This abstracted view, combined with a set of carefully designed interactions, supports visual data analysis of large and complex data sets. The technique allows efficient work with meta data, which is particularly important when dealing with categorical datasets. By creating new dimensions from existing ones, for example, the user can filter the data according to his or her current needs. We also present the results from an interactive analysis of CRM data using parallel sets. We demonstrate how the flexible layout eases the process of knowledge crystallization, especially when combined with a sophisticated interaction scheme.",
"Social network analysis is the study of patterns of interaction between social entities. The field is attracting increasing attention from diverse disciplines including sociology, epidemiology, and behavioral ecology. An important sociological phenomenon that draws the attention of analysts is the emergence of communities, which tend to form, evolve, and dissolve gradually over a period of time. Understanding this evolution is crucial to sociologists and domain scientists, and often leads to a better appreciation of the social system under study. Therefore, it is imperative that social network visualization tools support this task. While graph-based representations are well suited for investigating structural properties of networks at a single point in time, they appear to be significantly less useful when used to analyze gradual structural changes over a period of time. In this paper, we present an interactive visualization methodology for dynamic social networks. Our technique focuses on revealing the community structure implied by the evolving interaction patterns between individuals. We apply our visualization to analyze the community structure in the US House of Representatives. We also report on a user study conducted with the participation of behavioral ecologists working with social network datasets that depict interactions between wild animals. Findings from the user study confirm that the visualization was helpful in providing answers to sociological questions as well as eliciting new observations on the social organization of the population under study.",
"",
"Change is a fundamental ingredient of interaction patterns in biology, technology, the economy, and science itself: Interactions within and between organisms change; transportation patterns by air, land, and sea all change; the global financial flow changes; and the frontiers of scientific research change. Networks and clustering methods have become important tools to comprehend instances of these large-scale structures, but without methods to distinguish between real trends and noisy data, these approaches are not useful for studying how networks change. Only if we can assign significance to the partitioning of single networks can we distinguish meaningful structural changes from random fluctuations. Here we show that bootstrap resampling accompanied by significance clustering provides a solution to this problem. To connect changing structures with the changing function of networks, we highlight and summarize the significant structural changes with alluvial diagrams and realize de Solla Price's vision of mapping change in science: studying the citation pattern between about 7000 scientific journals over the past decade, we find that neuroscience has transformed from an interdisciplinary specialty to a mature and stand-alone discipline."
]
} |
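The clutter reduction the row above says is missing (ordering clusters to reduce crossings rather than by size) is commonly approximated with the barycenter heuristic from layered graph drawing. The sketch below is illustrative only, not the method of any cited paper; the cluster names and the `flows` dictionary are hypothetical.

```python
def barycenter_order(prev_order, flows):
    """Order clusters of the next time window by the weighted mean position
    of the clusters they receive nodes from (one sweep of the barycenter
    heuristic from layered graph drawing)."""
    pos = {c: i for i, c in enumerate(prev_order)}
    incoming = {}
    # flows: (source_cluster, target_cluster) -> number of nodes moving
    for (src, dst), weight in flows.items():
        incoming.setdefault(dst, []).append((pos[src], weight))

    def barycenter(dst):
        total = sum(w for _, w in incoming[dst])
        return sum(p * w for p, w in incoming[dst]) / total

    return sorted(incoming, key=barycenter)
```

Sweeping this left-to-right (and back) over consecutive time windows tends to keep heavy flows horizontal, which is precisely the crossing reduction alluvial diagrams forgo when clusters are ordered by size.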
1412.6706 | 1780723536 | Abstract : Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | Another disadvantage of Rosvall and Bergstrom's technique is that significance clustering is performed on time windows independently, which might introduce noise (nodes oscillating between clusters over time), adding clutter and artifacts to the visualization. Berger-Wolf and Saia introduced an optimization-based approach for dynamic community detection that overcomes this pitfall @cite_2 , and this technique was improved in @cite_51 @cite_12 . This framework was utilized to visualize dynamic communities for several datasets. Communities, which span time, are represented as stacked rectangles that span the horizontal space of the visualization. Similar to storyline visualizations, nodes are represented as horizontal lines, placed inside communities, and can only exist in one community at a time. When a node changes communities, a diagonal line is used to represent this change. Edges in the network are not shown, and no attempt is made to optimize the ordering of the communities or the nodes within communities, which would likely reduce clutter and improve the aesthetic properties of the visualization. | {
"cite_N": [
"@cite_51",
"@cite_12",
"@cite_2"
],
"mid": [
"2122278421",
"2134714321",
"2162691596"
],
"abstract": [
"We propose frameworks and algorithms for identifying communities in social networks that change over time. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Instead, we propose an optimization-based approach for modeling dynamic community structure. We prove that finding the most explanatory community structure is NP-hard and APX-hard, and propose algorithms based on dynamic programming, exhaustive search, maximum matching, and greedy heuristics. We demonstrate empirically that the heuristics trace developments of community structure accurately for several synthetic and real-world examples.",
"We propose two approximation algorithms for identifying communities in dynamic social networks. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Recently, we have proposed an optimization-based framework for modeling dynamic community structure. Also, we have proposed an algorithm for finding such structure based on maximum weight bipartite matching. In this paper, we analyze its performance guarantee for a special case where all actors can be observed at all times. In such instances, we show that the algorithm is a small constant factor approximation of the optimum. We use a similar idea to design an approximation algorithm for the general case where some individuals are possibly unobserved at times, and to show that the approximation factor increases twofold but remains a constant regardless of the input size. This is the first algorithm for inferring communities in dynamic networks with a provable approximation guarantee. We demonstrate the general algorithm on real data sets. The results confirm the efficiency and effectiveness of the algorithm in identifying dynamic communities.",
"Finding patterns of social interaction within a population has wide-ranging applications including: disease modeling, cultural and information transmission, and behavioral ecology. Social interactions are often modeled with networks. A key characteristic of social interactions is their continual change. However, most past analyses of social networks are essentially static in that all information about the time that social interactions take place is discarded. In this paper, we propose a new mathematical and computational framework that enables analysis of dynamic social networks and that explicitly makes use of information about when social interactions occur."
]
} |
1412.6765 | 1962656736 | General purpose CPUs used in high performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increasing instruction-level parallelism. Hence, related optimizations are currently critical to improve the performance of applications requiring numerical computation. Moreover, the use of a Java run-time environment such as the HotSpot Java Virtual Machine (JVM) in high performance computing is a promising alternative. It benefits from its programming flexibility and productivity, and the performance is ensured by the Just-In-Time (JIT) compiler. However, the JIT compiler suffers from two main drawbacks. First, the JIT is a black box for developers: we have no control over the generated code nor any feedback from its optimization phases like vectorization. Second, the time constraint narrows down the degree of optimization compared to static compilers like GCC or LLVM. So, it is compelling to use statically compiled code, since it benefits from additional optimization reducing performance bottlenecks. Java enables calling native code from dynamic libraries through the Java Native Interface (JNI). Nevertheless, JNI methods are not inlined and require an additional cost to be invoked compared to Java ones. Therefore, to benefit from better static optimization, this call overhead must be amortized by the amount of computation performed at each JNI invocation. In this paper we tackle this problem and propose to do this analysis for a set of micro-kernels. Our goal is to select the most efficient implementation considering the amount of computation defined by the calling context. We also investigate the impact on performance of several different optimization schemes: vectorization, out-of-order optimization, data alignment, method inlining, and the use of native memory for JNI methods. 
| There is so far no performance comparison between pure Java and JNI that takes into account both the overhead of JNI calls and the potentially deeper optimization provided by static compilation. The authors of @cite_0 presented a split auto-vectorization framework combining dynamic compilation with an off-line compilation stage, aiming to be competitive with static compilation while preserving application portability. @cite_10 designed an API called jSIMD that uses JNI as a bridge to map Java code to SIMD instructions using vectorized data of various types. Regarding JNI performance issues, the authors of @cite_8 implemented the Graal Native Function Interface (GNFI) for the Graal Virtual Machine as an alternative to JNI. GNFI aims to mitigate the disadvantages of JNI concerning both programming flexibility and performance. @cite_2 proposed an approach for the IBM TR JIT compiler to widen the compilation span by inlining native code. Finally, Kurzyniec and Sunderam @cite_11 studied the performance of different JNI implementations across several JVMs. | {
"cite_N": [
"@cite_8",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2121176848",
"2116129553",
"2077678195",
"128593230"
],
"abstract": [
"",
"Just-in-Time (JIT) compiler technology offers portability while facilitating target- and context-specific specialization. Single-Instruction-Multiple-Data (SIMD) hardware is ubiquitous and markedly diverse, but can be difficult for JIT compilers to efficiently target due to resource and budget constraints. We present our design for a synergistic auto-vectorizing compilation scheme. The scheme is composed of an aggressive, generic offline stage coupled with a lightweight, target-specific online stage. Our method leverages the optimized intermediate results provided by the first stage across disparate SIMD architectures from different vendors, having distinct characteristics ranging from different vector sizes, memory alignment and access constraints, to special computational idioms. We demonstrate the effectiveness of our design using a set of kernels that exercise innermost loop, outer loop, as well as straight-line code vectorization, all automatically extracted by the common offline compilation stage. This results in performance comparable to that provided by specialized monolithic offline compilers. Our framework is implemented using open-source tools and standards, thereby promoting interoperability and extendibility.",
"We introduce a strategy for inlining native functions into Java™ applications using a JIT compiler. We perform further optimizations to transform inlined callbacks into semantically equivalent lightweight operations. We show that this strategy can substantially reduce the overhead of performing JNI calls, while preserving the key safety and portability properties of the JNI. Our work leverages the ability to store statically-generated IL alongside native binaries, to facilitate native inlining at Java callsites at JIT compilation time. Preliminary results with our prototype implementation show speedups of up to 93X when inlining and callback transformation are combined.",
"Exposing SIMD units within interpreted languages could simplify programs and unleash floods of untapped processor power.",
"Continuously evolving Java technology provides effective solutions for many industrial and scientific computing challenges. These solutions, however, often require cooperation between Java and native languages. It is possible to achieve such interoperability using the Java Native Interface (JNI); however, this facility introduces an overhead which must be considered while developing interface code. This paper presents JNI performance benchmarks for several popular Java Virtual Machine implementations. These may be useful in avoiding certain JNI pitfalls and provide a better understanding of JNI-related performance issues."
]
} |
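The amortization argument in the row above (JNI pays off only once the per-call overhead is covered by enough work per invocation) can be captured in a toy linear cost model. The constants used below are hypothetical placeholders, not measurements from the paper.

```python
def breakeven_elements(call_overhead_ns, java_ns_per_elem, native_ns_per_elem):
    """Smallest workload (elements per call) for which a native JNI call beats
    the pure-Java version, under a simple linear cost model:
        t_java(n)   = java_ns_per_elem * n
        t_native(n) = call_overhead_ns + native_ns_per_elem * n
    """
    if native_ns_per_elem >= java_ns_per_elem:
        # If native code is not faster per element, the fixed call
        # overhead can never be recovered.
        return float("inf")
    return call_overhead_ns / (java_ns_per_elem - native_ns_per_elem)
```

For example, with a hypothetical 100 ns call overhead and native code twice as fast per element, the native kernel only wins for calls processing at least 100 elements, which mirrors the calling-context analysis the paper proposes for its micro-kernels.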
1412.6181 | 1826232489 | The problem we address is the following: how can a user employ a predictive model that is held by a third party, without compromising private information. For example, a hospital may wish to use a cloud service to predict the readmission risk of a patient. However, due to regulations, the patient's medical files cannot be revealed. The goal is to make an inference using the model, without jeopardizing the accuracy of the prediction or the privacy of the data. To achieve high accuracy, we use neural networks, which have been shown to outperform other learning models for many tasks. To achieve the privacy requirements, we use homomorphic encryption in the following protocol: the data owner encrypts the data and sends the ciphertexts to the third party to obtain a prediction from a trained model. The model operates on these ciphertexts and sends back the encrypted prediction. In this protocol, not only does the data remain private; even the predicted values are available only to the data owner. Using homomorphic encryption and modifications to the activation functions and training algorithms of neural networks, we show that this protocol is possible and may be feasible. This method paves the way to building secure cloud-based neural network prediction services without invading users' privacy. | @cite_1 suggested a scheme for using homomorphic encryption with neural networks. They suggest solving the problem of non-linear activation functions by creating an interactive protocol between the data owner and the model owner. In a nutshell, every non-linear transformation is computed by the data owner: the model owner sends the encrypted input of each non-linear transformation to the data owner, who decrypts the message, applies the transformation, encrypts the result, and sends it back. Unfortunately, this interaction introduces large latencies and increases the complexity on the data owner's side, effectively making it impractical. 
Moreover, it leaks information about the model. Therefore, @cite_1 had to introduce safety mechanisms, such as random order of execution, to mitigate this issue. In comparison, the procedure we introduce does not require complicated communication schemes: the data owner encrypts the data and sends it. The model does its computation and sends back the (encrypted) prediction. Therefore, it allows for asynchronous communication and it does not leak unnecessary information about the model. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1973124816"
],
"abstract": [
"The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols."
]
} |
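A minimal sketch of why the non-interactive protocol can work: if each activation is replaced by a low-degree polynomial such as squaring, the entire forward pass uses only additions and multiplications, the operations that (leveled) homomorphic schemes support, so no round trip to the data owner is needed. The code below runs on plaintext numpy arrays as a stand-in for ciphertexts; the layer shapes and values are hypothetical and this is not the paper's exact network.

```python
import numpy as np

def square_activation(x):
    # One ciphertext-by-ciphertext multiplication: directly supported by
    # homomorphic encryption, unlike sigmoid or ReLU.
    return x * x

def he_friendly_forward(x, layers):
    """Forward pass restricted to additions and multiplications, so every
    step could in principle be evaluated over encrypted inputs."""
    for weights, bias in layers:
        x = square_activation(weights @ x + bias)  # affine layer + poly activation
    return x
```

The multiplicative depth grows with the number of layers, which in practice bounds how deep such a network can be for a given encryption parameter set.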
1412.6505 | 2951053892 | In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition), it is very important to capture both entire scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework based on time series pooling, which is designed to abstract short-term and long-term changes in feature descriptor elements. The idea is to keep track of how descriptor values are changing over time and summarize them to represent motion in the activity video. The framework is general, handling any type of per-frame feature descriptor, including conventional motion descriptors like histogram of optical flows (HOF) as well as appearance descriptors from more recent convolutional neural networks (CNN). We experimentally confirm that our approach clearly outperforms previous feature representations including bag-of-visual-words and improved Fisher vector (IFV) when using identical underlying feature descriptors. We also confirm that our feature representation has superior performance to existing state-of-the-art features like local spatio-temporal features and Improved Trajectory Features (originally developed for 3rd-person videos) when handling first-person videos. Multiple first-person activity datasets were tested under various settings to confirm these findings. | Recognition from first-person videos is a topic receiving increasing attention. There are works focusing on first-person-specific features, including hand locations in first-person videos @cite_15 and human gaze estimation based on first-person videos @cite_16 . There have also been works on object recognition from first-person videos @cite_17 @cite_12 . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2025581566",
"2136668269",
"",
"2106229755"
],
"abstract": [
"Egocentric cameras are becoming more popular, introducing increasing volumes of video in which the biases and framing of traditional photography are replaced with those of natural viewing tendencies. This paradigm enables new applications, including novel studies of social interaction and human development. Recent work has focused on identifying the camera wearer's hands as a first step towards more complex analysis. In this paper, we study how to disambiguate and track not only the observer's hands but also those of social partners. We present a probabilistic framework for modeling paired interactions that incorporates the spatial, temporal, and appearance constraints inherent in egocentric video. We test our approach on a dataset of over 30 minutes of video from six pairs of subjects.",
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"",
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization."
]
} |
1412.6115 | 1724438581 | Deep convolutional neural networks (CNNs) have become the most promising method for object recognition, repeatedly demonstrating record-breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model extremely large. This prohibits the usage of deep CNNs on resource-limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information-theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found that, in terms of compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1% loss of classification accuracy using the state-of-the-art CNN. | Deep convolutional neural networks have achieved great successes in image classification, object detection, and image retrieval. With the great progress in this area, the state-of-the-art image classifier can achieve 94% accuracy. As discussed in the above section, a state-of-the-art CNN usually involves hundreds of millions of parameters, which require huge storage for the model that is difficult to provide. The bottleneck comes from model storage and testing speed. Several works have been published on speeding up CNN prediction speed. @cite_4 explored the properties of CPUs to speed up the execution of CNNs, particularly focusing on memory alignment and SIMD operations to boost matrix operations.
@cite_1 showed that the convolutional operation can be efficiently carried out in the Fourier domain, which leads to a speed-up of 200%. The use of vector quantization methods to compress CNN parameters is mainly inspired by the work of @cite_3 , who demonstrate the redundancies in neural network parameters. They show that the weights within one layer can be accurately predicted by a small ( 5% | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_3"
],
"mid": [
"1922123711",
"587794757",
"2952899695"
],
"abstract": [
"Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.",
"Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.",
"We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy."
]
} |
1412.6149 | 146075782 | In this paper, we propose a design for novel and experimental cloud computing systems. The proposed system aims at enhancing computational, communicational and analytic capabilities of road navigation services by merging several independent technologies, namely vision-based embedded navigation systems, prominent Cloud Computing Systems (CCSs) and Vehicular Ad-hoc NETwork (VANET). This work presents our initial investigations by describing the design of a global generic system. The designed system has been experimented with various scenarios of video-based road services. Moreover, the associated architecture has been implemented on a small-scale simulator of an in-vehicle embedded system. The implemented architecture has been experimented in the case of a simulated road service to aid the police agency. The goal of this service is to recognize and track searched individuals and vehicles in a real-time monitoring system remotely connected to moving cars. The presented work demonstrates the potential of our system for efficiently enhancing and diversifying real-time video services in road environments. | Nowadays, cloud computing developments are revolutionizing the world by providing companies with more and more powerful services. In particular, many companies tend to store their data on external servers or data centers. Indeed, this technology improves the Quality of Service (QoS), notably for data management, data security, and data distribution. In this way, the providers of cloud computing systems allow many companies to develop services specifically focused on their principal activities. More precisely, cloud computing can be defined as a technology providing resources at three levels, namely infrastructures, software platforms and services @cite_3 .
Cloud computing was initially employed over wired networks for the internet, and it has progressively been extended to mobile networks (e.g., cellular networks). Notably, cloud computing technologies facilitate the development of hybrid systems as well as the pooling of computational resources. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2056163839"
],
"abstract": [
"Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions were proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of the solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet for decision making. This paper presents the state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for vehicular cloud in which special attention has been devoted to the extensive applications, cloud formations, key management, inter cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC, itemize the properties required in vehicular cloud that support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing literature, we found that VCC is a technologically feasible and economically viable technological shifting paradigm for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems."
]
} |
1412.6392 | 1818445599 | High intensive computation applications can usually take days to months to finish an execution. During this time, it is common to have variations of the available resources when considering that such hardware is usually shared among a plurality of researchers/departments within an organization. On the other hand, High Performance Clusters can take advantage of Cloud Computing bursting techniques for the execution of applications together with on-premise resources. In order to meet deadlines, high intensive computational applications can use the Cloud to boost their performance when they are data and task parallel. This article presents an ongoing work towards the use of extended resources of an HPC execution platform together with Cloud. We propose a unified view of such heterogeneous environments and a method that monitors, predicts the application execution time, and dynamically shifts part of the domain previously running in local HPC hardware to be computed in the Cloud, meeting then a specific deadline. The method is exemplified along with a seismic application that, at runtime, adapts itself to move part of the processing to the Cloud (in a movement called bursting) and also auto-scales (the moved part) over cloud nodes. Our preliminary results show that there is an expected overhead for performing this movement and for synchronizing results, but the outcomes demonstrate it is an important feature for meeting deadlines in the case an on-premise cluster is overloaded or cannot provide the capacity needed for a particular project. | High Performance Computing applications are being tested on Cloud platforms. Works like @cite_1 , @cite_3 and @cite_2 performed a performance evaluation of a set of benchmarks and complex HPC applications on a range of platforms, from supercomputers to clusters, both in-house and in the cloud.
These studies show that a Cloud can be effective for such applications, mainly when complementing supercomputers using models such as cloud burst and application-aware mapping to achieve significant cost benefits. Although these findings do not propose an automatic and adaptive approach for using both environments, their empirical studies opened the opportunity for proposals of tools, like ours, that promote a hybrid approach based on these environments. | {
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2624365150",
"2088445131",
"2082245129"
],
"abstract": [
"",
"HPC applications are increasingly being used in academia and laboratories for scientific research and in industries for business and analytics. Cloud computing offers the benefits of virtualization, elasticity of resources and elimination of cluster setup cost and time to HPC applications users. However, poor network performance, performance variation and OS noise are some of the challenges for execution of HPC applications on Cloud. In this paper, we propose that Cloud can be a viable platform for some HPC applications depending upon application characteristics such as communication volume and pattern and sensitivity to OS noise and scale. We present an evaluation of the performance and cost tradeoffs of HPC applications on a range of platforms varying from Cloud (with and without virtualization) to HPC-optimized cluster. Our results show that Cloud is a viable platform for some applications, specifically, non-communication-intensive applications such as embarrassingly parallel and tree-structured computations up to high processor count and for communication-intensive applications up to low processor count.",
"We introduce a hybrid High Performance Computing (HPC) infrastructure architecture that provides predictable execution of scientific applications, and scales from a single resource to multiple resources, with different ownership, policy, and geographic locations. We identify three paradigms in the evolution of HPC and high-throughput computing: owner-centric HPC (traditional), Grid computing, and Cloud computing. After analyzing the synergies among HPC, Grid and Cloud computing, we argue for an architecture that combines the benefits of these technologies. We call the building block of this architecture, Elastic Cluster. We describe the concept of Elastic Cluster and show how it can be used to achieve effective and predictable execution of HPC workloads. Then we discuss implementation aspects, and propose a new distributed information system design that combines features of distributed hash tables and relational databases."
]
} |
1412.6392 | 1818445599 | High intensive computation applications can usually take days to months to finish an execution. During this time, it is common to have variations of the available resources when considering that such hardware is usually shared among a plurality of researchers/departments within an organization. On the other hand, High Performance Clusters can take advantage of Cloud Computing bursting techniques for the execution of applications together with on-premise resources. In order to meet deadlines, high intensive computational applications can use the Cloud to boost their performance when they are data and task parallel. This article presents an ongoing work towards the use of extended resources of an HPC execution platform together with Cloud. We propose a unified view of such heterogeneous environments and a method that monitors, predicts the application execution time, and dynamically shifts part of the domain previously running in local HPC hardware to be computed in the Cloud, meeting then a specific deadline. The method is exemplified along with a seismic application that, at runtime, adapts itself to move part of the processing to the Cloud (in a movement called bursting) and also auto-scales (the moved part) over cloud nodes. Our preliminary results show that there is an expected overhead for performing this movement and for synchronizing results, but the outcomes demonstrate it is an important feature for meeting deadlines in the case an on-premise cluster is overloaded or cannot provide the capacity needed for a particular project. | Analyzing Cloud as a stand-alone execution platform for HPC applications, like seismic, the authors of @cite_5 evaluated the Linpack workload on the Amazon EC2 cloud. Their conclusions indicate that the tested cloud environment has potential, but it is not yet mature enough to provide suitable price-performance for HPC applications.
@cite_4 also evaluated EC2 for a number of kernels used by HPC applications, likewise concluding that such cloud services need an order-of-magnitude performance improvement to better serve the scientific community. It is hard to compare one provider against another, but they are evolving towards offerings that are private and have specialized infrastructure, such as GPUs and InfiniBand. In our study, we evaluated the Cloud (virtualized nodes), and preliminary results indicate that the environment is cost-effective in budget and performance, at least when combined with on-premise clusters in a dynamically changing scenario. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2108376207",
"2130062566"
],
"abstract": [
"Computing as a utility has reached the mainstream. Scientists can now rent time on large commercial clusters through several vendors. The cloud computing model provides flexible support for \"pay as you go\" systems. In addition to no upfront investment in large clusters or supercomputers, such systems incur no maintenance costs. Furthermore, they can be expanded and reduced on-demand in real-time. Current cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of web applications--composed primarily of database queries--that have been the focus of cloud computing vendors. In this paper we investigate the use of cloud computing for high-performance numerical applications. In particular, we assume unlimited monetary resources to answer the question, \"How high can a cloud computing service get in the TOP500 list?\" We show results for the Linpack benchmark on different allocations on Amazon EC2.",
"Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks and kernels. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community."
]
} |
1412.6392 | 1818445599 | High intensive computation applications can usually take days to months to finish an execution. During this time, it is common to have variations of the available resources when considering that such hardware is usually shared among a plurality of researchers/departments within an organization. On the other hand, High Performance Clusters can take advantage of Cloud Computing bursting techniques for the execution of applications together with on-premise resources. In order to meet deadlines, high intensive computational applications can use the Cloud to boost their performance when they are data and task parallel. This article presents an ongoing work towards the use of extended resources of an HPC execution platform together with Cloud. We propose a unified view of such heterogeneous environments and a method that monitors, predicts the application execution time, and dynamically shifts part of the domain previously running in local HPC hardware to be computed in the Cloud, meeting then a specific deadline. The method is exemplified along with a seismic application that, at runtime, adapts itself to move part of the processing to the Cloud (in a movement called bursting) and also auto-scales (the moved part) over cloud nodes. Our preliminary results show that there is an expected overhead for performing this movement and for synchronizing results, but the outcomes demonstrate it is an important feature for meeting deadlines in the case an on-premise cluster is overloaded or cannot provide the capacity needed for a particular project. | More recently, @cite_7 evaluated a computational fluid dynamics application over a heterogeneous environment of a cluster and the EC2 cloud. The results indicated that there is a need to adjust the CPU power (configuration) and workload by means of load-balancing.
We are in line with this study and go further in the present work -- a dynamic, self-adaptive method for application load-balancing over a hybrid platform composed of a cluster and the cloud. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1987512728"
],
"abstract": [
"In this paper, we report on the experimental results of running a large, tightly coupled, distributed multiscale computation over a hybrid High Performance Computing (HPC) infrastructures. We connected EC2 based cloud clusters located in USA to university clusters located in Switzerland. We ran a concurrent multiscale MPI based application on this infrastructure and measured the overhead induced by extending our HPC clusters with EC2 resources. Our results indicate that accommodating some parts of the multiscale computation on cloud resources can lead to low performance without a proper adjustment of CPUs power and workload. However, by enforcing a load-balancing strategy one can benefit from the extra Cloud resources. We connect an EC2-cloud cluster to a university cluster located in Switzerland.We run a distributed multiscale CFD computation over this extended infrastructure.We evaluate and compare the distributed execution to a local execution.We describe our experience of running parallel CFD application on hybrid platforms.Multiscale computation on cloud requires an adjustment of CPUs power and workload."
]
} |
1412.6124 | 2952910708 | In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often have similar appearance and highly varying shapes. To tackle these challenges, we build a mixture of compositional models to represent the object boundary and the boundaries of semantic parts. And we incorporate edge, appearance, and semantic part cues into the compositional model. Given part-level segmentation annotation, we develop a novel algorithm to learn a mixture of compositional models under various poses and viewpoints for certain animal classes. Furthermore, a linear complexity algorithm is offered for efficient inference of the compositional model using dynamic programming. We evaluate our method for horse and cow using a newly annotated dataset on Pascal VOC 2010 which has pixelwise part labels. Experimental results demonstrate the effectiveness of our method. | Our work also bears a similarity to @cite_31 in the spirit that a mixture of graphical models is used to capture global variation due to viewpoints and poses. But our compositional model is able to capture spatial relations between child nodes while still achieving linear-complexity inference, and we develop an algorithm to learn the mixtures of compositional models. Besides, our task is part segmentation for animals of various poses and viewpoints, which appears more challenging than landmark localization for faces in @cite_31 . | {
"cite_N": [
"@cite_31"
],
"mid": [
"2047508432"
],
"abstract": [
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com)."
]
} |
1412.6396 | 1831301291 | Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a graph property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected, self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix @math and containment in FO for the prefix @math for monadic @math . The complex argument by Gottlob, Kolaitis, and Schwentick concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix @math : We show that they express special constraint satisfaction problems that lie in L. | Concerning syntactic fragments of logic, the two papers most closely related to the present paper are @cite_12 by Eiter, Gottlob, and Gurevich and @cite_0 by Gottlob, Kolaitis, and Schwentick. In the first paper, a classification similar to that of the present paper is presented, only over strings rather than graphs.
It is shown there that for all prefix patterns @math the class @math is either equal to @math ; is not equal to @math but contains an @math -complete problem; is equal to @math ; or is a subclass of @math . Interestingly, two classes of special interest are @math and @math , both of which are the minimal classes equal to @math (by the results of Büchi @cite_7 ). In comparison, by the results of the present paper @math , while @math , and @math , while @math . | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_12"
],
"mid": [
"2055288944",
"",
"1975410968"
],
"abstract": [
"Fagin's theorem, the first important result of descriptive complexity, asserts that a property of graphs is in NP if and only if it is definable by an existential second-order formula. In this article, we study the complexity of evaluating existential second-order formulas that belong to prefix classses of existential second-order logic, where a prefix class is the collection of all existential second-order formulas in prenex normal form such that the second-order and the first-order quantifiers obey a certain quantifier pattern. We completely characterize the computational complexity of prefix classes of existential second-order logic in three different contexts: (1) over directed graphs, (2) over undirected graphs with self-loops and (3) over undirected graphs without self-loops. Our main result is that in each of these three contexts a dichotomy holds, that is to say, each prefix class of existential second-order logic either contains sentences that can express NP-complete problems, or each of its sentences expresses a polynomial-time solvable problem. Although the boundary of the dichotomy coincides for the first two cases, it changes, as one moves to undirected graphs without self-loops. The key difference is that a certain prefix class, based on the well-known Ackermann class of first-order logic, contains sentences that can express NP-complete problems over graphs of the first two types, but becomes tractable over undirected graphs without self-loops. Moreover, establishing the dichotomy over undirected graphs without self-loops turns out to be a technically challenging problem that requires the use of sophisticated machinery from graph theory and combinatorics, including results about graphs of bounded tree-width and Ramsey's theorem.",
"",
"Existential second-order logic (ESO) and monadic second-order logic (MSO) have attracted much interest in logic and computer science. ESO is a much more expressive logic over successor structures than MSO. However, little was known about the relationship between MSO and syntactic fragments of ESO. We shed light on this issue by completely characterizing this relationship for the prefix classes of ESO over strings (i.e., finite successor structures). Moreover, we determine the complexity of model checking over strings, for all ESO-prefix classes. Let ESO( Q ) denote the prefix class containing all sentences of the shape ∃R Q φ , where R is a list of predicate variables, Q is a first-order quantifier prefix from the prefix set Q , and φ is quantifier-free. We show that ESO( ∃*∀∃* ) and ESO( ∃*∀∀ ) are the maximal standard ESO-prefix classes contained in MSO, thus expressing only regular languages. We further prove the following dichotomy theorem: An ESO prefix-class either expresses only regular languages (and is thus in MSO), or it expresses some NP-complete languages. We also give a precise characterization of those ESO-prefix classes that are equivalent to MSO over strings, and of the ESO-prefix classes which are closed under complementation on strings."
]
} |
1412.6396 | 1831301291 | Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a graph property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolatis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected, self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix @math and containment in FO for the prefix @math for monadic @math . The complex argument by Gottlob, Kolatis, and Schwentick concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix @math : We show that they express special constraint satisfaction problems that lie in L. | The present paper builds on the paper @cite_0 by Gottlob, Kolaitis, and Schwentick, which contains many of the upper and lower bounds from Theorem for the class @math as well as most of the combinatorial and graph-theoretic arguments needed to prove @math and @math . 
The paper misses, however, the finer classification provided in our Theorem, and Remark 5.1 of @cite_0 expresses the unclear status of the exact complexity of @math at the time of writing, which hinges on a problem called @math : ``Note also that for each @math , @math is probably not a @math -complete set. [...] This is due to the check for bounded treewidth, which is in @math (cf. Wanke [1994]) but not known to be in @math .'' The complexity of the check for bounded treewidth was settled only later, namely in a paper by Elberfeld, Jakoby, and the author @cite_13 , and shown to lie in @math . This does not mean, however, that the proof of @cite_0 immediately yields @math , since the application of Courcelle's Theorem is but one of several subprocedures in the proof and since a generalization of treewidth rather than ordinary treewidth is used. | {
"cite_N": [
"@cite_0",
"@cite_13"
],
"mid": [
"2055288944",
"2119468705"
],
"abstract": [
"Fagin's theorem, the first important result of descriptive complexity, asserts that a property of graphs is in NP if and only if it is definable by an existential second-order formula. In this article, we study the complexity of evaluating existential second-order formulas that belong to prefix classes of existential second-order logic, where a prefix class is the collection of all existential second-order formulas in prenex normal form such that the second-order and the first-order quantifiers obey a certain quantifier pattern. We completely characterize the computational complexity of prefix classes of existential second-order logic in three different contexts: (1) over directed graphs, (2) over undirected graphs with self-loops and (3) over undirected graphs without self-loops. Our main result is that in each of these three contexts a dichotomy holds, that is to say, each prefix class of existential second-order logic either contains sentences that can express NP-complete problems, or each of its sentences expresses a polynomial-time solvable problem. Although the boundary of the dichotomy coincides for the first two cases, it changes, as one moves to undirected graphs without self-loops. The key difference is that a certain prefix class, based on the well-known Ackermann class of first-order logic, contains sentences that can express NP-complete problems over graphs of the first two types, but becomes tractable over undirected graphs without self-loops. Moreover, establishing the dichotomy over undirected graphs without self-loops turns out to be a technically challenging problem that requires the use of sophisticated machinery from graph theory and combinatorics, including results about graphs of bounded tree-width and Ramsey's theorem.",
"Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree composition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when \"linear time\" is replaced by \"logarithmic space.\" The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the log space world."
]
} |
1412.5697 | 1867622403 | This paper introduces a new proximity graph, called the @math -Semi-Yao graph ( @math -SYG), on a set @math of points in @math , which is a supergraph of the @math -nearest neighbor graph ( @math -NNG) of @math . We provide a kinetic data structure (KDS) to maintain the @math -SYG on moving points, where the trajectory of each point is a polynomial function whose degree is bounded by some constant. Our technique gives the first KDS for the theta graph ( , @math -SYG) in @math . It generalizes and improves on previous work on maintaining the theta graph in @math . As an application, we use the kinetic @math -SYG to provide the first KDS for maintenance of all the @math -nearest neighbors in @math , for any @math . Previous works considered the @math case only. Our KDS for all the @math -nearest neighbors is deterministic. The best previous KDS for all the @math -nearest neighbors in @math is randomized. Our structure and analysis are simpler and improve on this work for the @math case. We also provide a KDS for all the @math -nearest neighbors, which in fact gives better performance than previous KDS's for maintenance of all the exact @math -nearest neighbors. As another application, we present the first KDS for answering reverse @math -nearest neighbor queries on moving points in @math , for any @math . | For a set of @math moving points in @math , where each trajectory of a point is a polynomial function of degree bounded by constant @math , Basch, Guibas, and Hershberger @cite_3 provided a KDS for maintenance of the closest pair. Their KDS uses linear space and processes @math events, each in time @math . Here, @math is an extremely slow-growing function. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2086474457"
],
"abstract": [
"A kinetic data structure (KDS) maintains an attribute of interest in a system of geometric objects undergoing continuous motion. In this paper we develop a conceptual framework for kinetic data structures, we propose a number of criteria for the quality of such structures, and we describe a number of fundamental techniques for their design. We illustrate these general concepts by presenting kinetic data structures for maintaining the convex hull and the closest pair of moving points in the plane; these structures behave well according to the proposed quality criteria for KDSs."
]
} |
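The event-processing machinery of such a KDS rests on certificates of the form "pair (p,q) is closer than pair (r,s)"; for linearly moving points the time at which such a certificate fails is a root of a quadratic in t. A minimal sketch of that computation (a hypothetical helper for illustration, not the authors' implementation):

```python
import math

def failure_time(p, q, r, s):
    """Earliest t > 0 at which dist(p(t), q(t)) == dist(r(t), s(t)) for
    points moving linearly in the plane, each given as
    ((x, y), (vx, vy)); returns None if the certificate
    'dist(p,q) < dist(r,s)' never fails in the future."""
    def rel(a, b):
        # relative displacement and velocity of a pair
        (ax, ay), (avx, avy) = a
        (bx, by), (bvx, bvy) = b
        return (ax - bx, ay - by), (avx - bvx, avy - bvy)

    (dx1, dy1), (vx1, vy1) = rel(p, q)
    (dx2, dy2), (vx2, vy2) = rel(r, s)
    # |d1(t)|^2 - |d2(t)|^2 = A t^2 + B t + C
    A = vx1**2 + vy1**2 - vx2**2 - vy2**2
    B = 2 * (dx1*vx1 + dy1*vy1 - dx2*vx2 - dy2*vy2)
    C = dx1**2 + dy1**2 - dx2**2 - dy2**2
    roots = []
    if abs(A) < 1e-12:
        if abs(B) > 1e-12:
            roots = [-C / B]
    else:
        disc = B*B - 4*A*C
        if disc >= 0:
            sq = math.sqrt(disc)
            roots = [(-B - sq) / (2*A), (-B + sq) / (2*A)]
    future = [t for t in roots if t > 1e-9]
    return min(future) if future else None
```

A KDS schedules the minimum such failure time over all active certificates in a priority queue and repairs the structure at each event.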
1412.5697 | 1867622403 | This paper introduces a new proximity graph, called the @math -Semi-Yao graph ( @math -SYG), on a set @math of points in @math , which is a supergraph of the @math -nearest neighbor graph ( @math -NNG) of @math . We provide a kinetic data structure (KDS) to maintain the @math -SYG on moving points, where the trajectory of each point is a polynomial function whose degree is bounded by some constant. Our technique gives the first KDS for the theta graph ( , @math -SYG) in @math . It generalizes and improves on previous work on maintaining the theta graph in @math . As an application, we use the kinetic @math -SYG to provide the first KDS for maintenance of all the @math -nearest neighbors in @math , for any @math . Previous works considered the @math case only. Our KDS for all the @math -nearest neighbors is deterministic. The best previous KDS for all the @math -nearest neighbors in @math is randomized. Our structure and analysis are simpler and improve on this work for the @math case. We also provide a KDS for all the @math -nearest neighbors, which in fact gives better performance than previous KDS's for maintenance of all the exact @math -nearest neighbors. As another application, we present the first KDS for answering reverse @math -nearest neighbor queries on moving points in @math , for any @math . | Using multidimensional range trees, Agarwal, Kaplan, and Sharir (TALG'08) @cite_13 gave KDS's both for maintenance of the closest pair and for all the @math -nearest neighbors in @math . The closest pair KDS by Agarwal al uses @math space and processes @math events, each in amortized time @math ; this KDS is efficient, amortized responsive, local, and compact. Agarwal al gave the first efficient KDS to maintain all the @math -nearest neighbors in @math . For the efficiency of their KDS, they implemented range trees by using randomized search trees (treaps). 
Their kinetic approach uses @math space and processes @math events; the expected time to process all events is @math . Their all @math -nearest neighbors KDS is efficient, amortized responsive, compact, but in general is not local. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2115704441"
],
"abstract": [
"We present simple, fully dynamic and kinetic data structures, which are variants of a dynamic two-dimensional range tree, for maintaining the closest pair and all nearest neighbors for a set of n moving points in the plane; insertions and deletions of points are also allowed. If no insertions or deletions take place, the structure for the closest pair uses O(n log n) space, and processes O(n² β_{s+2}(n) log n) critical events, each in O(log² n) time. Here s is the maximum number of times where the distances between any two specific pairs of points can become equal, β_s(q) = λ_s(q)/q, and λ_s(q) is the maximum length of Davenport-Schinzel sequences of order s on q symbols. The dynamic version of the problem incurs a slight degradation in performance: If m ≥ n insertions and deletions are performed, the structure still uses O(n log n) space, and processes O(m n β_{s+2}(n) log³ n) events, each in O(log³ n) time. Our kinetic data structure for all nearest neighbors uses O(n log² n) space, and processes O(n² β²_{s+2}(n) log³ n) critical events. The expected time to process all events is O(n² β²_{s+2}(n) log⁴ n), though processing a single event may take Θ(n) expected time in the worst case. If m ≥ n insertions and deletions are performed, then the expected number of events is O(m n β²_{s+2}(n) log³ n) and processing them all takes O(m n β²_{s+2}(n) log⁴ n). An insertion or deletion takes O(n) expected time."
]
} |
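For reference, the static problem that this KDS maintains under motion can be stated as a brute-force baseline (an illustrative O(n² log n) sketch; the whole point of the range-tree KDS is to avoid recomputing this at every event):

```python
import math

def all_knn(points, k):
    """Brute-force all k-nearest neighbors: for each point, return the
    indices of its k closest other points, nearest first."""
    out = {}
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        out[i] = [j for _, j in dists[:k]]
    return out
```
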
1412.5697 | 1867622403 | This paper introduces a new proximity graph, called the @math -Semi-Yao graph ( @math -SYG), on a set @math of points in @math , which is a supergraph of the @math -nearest neighbor graph ( @math -NNG) of @math . We provide a kinetic data structure (KDS) to maintain the @math -SYG on moving points, where the trajectory of each point is a polynomial function whose degree is bounded by some constant. Our technique gives the first KDS for the theta graph ( , @math -SYG) in @math . It generalizes and improves on previous work on maintaining the theta graph in @math . As an application, we use the kinetic @math -SYG to provide the first KDS for maintenance of all the @math -nearest neighbors in @math , for any @math . Previous works considered the @math case only. Our KDS for all the @math -nearest neighbors is deterministic. The best previous KDS for all the @math -nearest neighbors in @math is randomized. Our structure and analysis are simpler and improve on this work for the @math case. We also provide a KDS for all the @math -nearest neighbors, which in fact gives better performance than previous KDS's for maintenance of all the exact @math -nearest neighbors. As another application, we present the first KDS for answering reverse @math -nearest neighbor queries on moving points in @math , for any @math . | The reverse @math -nearest neighbor queries for a set of continuously moving objects has attracted the attention of the database community (see @cite_30 and references therein). To our knowledge there is no previous solution to the kinetic problem in the literature. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2142410337"
],
"abstract": [
"In this paper, we study the problem of continuous monitoring of reverse k nearest neighbors queries in Euclidean space as well as in spatial networks. Existing techniques are sensitive toward objects and queries movement. For example, the results of a query are to be recomputed whenever the query changes its location. We present a framework for continuous reverse k nearest neighbor (RkNN) queries by assigning each object and query with a safe region such that the expensive recomputation is not required as long as the query and objects remain in their respective safe regions. This significantly improves the computation cost. As a byproduct, our framework also reduces the communication cost in client---server architectures because an object does not report its location to the server unless it leaves its safe region or the server sends a location update request. We also conduct a rigid cost analysis for our Euclidean space RkNN algorithm. We show that our techniques can also be applied to answer bichromatic RkNN queries in Euclidean space as well as in spatial networks. Furthermore, we show that our techniques can be extended for the spatial networks that are represented by directed graphs. The extensive experiments demonstrate that our techniques outperform the existing techniques by an order of magnitude in terms of computation cost and communication cost."
]
} |
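To make the query itself concrete, here is a brute-force definition of a reverse k-NN query (an O(n²) sketch for illustration only, not the safe-region technique of @cite_30): a point p is in the result iff the query is among p's k nearest neighbors.

```python
def reverse_knn(query, points, k):
    """Brute-force reverse k-NN: return the points p for which `query`
    is among p's k nearest neighbors."""
    def d2(a, b):
        return (a[0] - b[0])**2 + (a[1] - b[1])**2

    result = []
    for p in points:
        others = [o for o in points if o != p]
        # p has `query` among its k nearest neighbors iff fewer than k
        # other points are strictly closer to p than the query is
        closer = sum(1 for o in others if d2(p, o) < d2(p, query))
        if closer < k:
            result.append(p)
    return result
```
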
1412.5661 | 2952677200 | In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN girshick2014rich , which was the state-of-the-art, from 31 to 50.3 on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1 . Detailed component-wise analysis is also provided through extensive experimental evaluation, which provide a global view for people to understand the deep learning object detection pipeline. | Since many objects have non-rigid deformation, the ability to handle deformation improves detection performance. Deformable part-based models were used in @cite_73 @cite_38 for handling translational movement of parts. To handle more complex articulations, size change and rotation of parts were modeled in @cite_48 , and mixture of part appearance and articulation types were modeled in @cite_20 @cite_66 . A dictionary of shared deformable patterns was learned in @cite_41 . In these approaches, features are manually designed. | {
"cite_N": [
"@cite_38",
"@cite_41",
"@cite_48",
"@cite_73",
"@cite_66",
"@cite_20"
],
"mid": [
"",
"1986460963",
"2030536784",
"2168356304",
"",
"2535410496"
],
"abstract": [
"",
"Several popular and effective object detectors separately model intra-class variations arising from deformations and appearance changes. This reduces model complexity while enabling the detection of objects across changes in view- point, object pose, etc. The Deformable Part Model (DPM) is perhaps the most successful such model to date. A common assumption is that the exponential number of templates enabled by a DPM is critical to its success. In this paper, we show the counter-intuitive result that it is possible to achieve similar accuracy using a small dictionary of deformations. Each component in our model is represented by a single HOG template and a dictionary of flow fields that determine the deformations the template may undergo. While the number of candidate deformations is dramatically fewer than that for a DPM, the deformed templates tend to be plausible and interpretable. In addition, we discover that the set of deformation bases is actually transferable across object categories and that learning shared bases across similar categories can boost accuracy.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"",
"We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers."
]
} |
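The deformation idea that def-pooling builds on (take the maximum over candidate part placements of the part score minus a geometric penalty around an anchor) can be sketched in a few lines. The single channel and the quadratic penalty here are simplifying assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

def def_pooling(score_map, anchor, penalty_weight):
    """Toy single-channel def-pooling: max over placements (y, x) of
    score_map[y, x] minus a quadratic distance penalty from `anchor`.
    Returns the pooled value and the chosen placement."""
    h, w = score_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    penalty = penalty_weight * ((ys - anchor[0])**2 + (xs - anchor[1])**2)
    deformed = score_map - penalty
    idx = np.unravel_index(np.argmax(deformed), deformed.shape)
    return deformed[idx], idx
```

With a large penalty weight the part is pinned to the anchor; with weight zero this degenerates to global max pooling, so the weight controls how much deformation is tolerated.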
1412.5661 | 2952677200 | In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN girshick2014rich , which was the state-of-the-art, from 31 to 50.3 on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1 . Detailed component-wise analysis is also provided through extensive experimental evaluation, which provide a global view for people to understand the deep learning object detection pipeline. | Context gains attentions in object detection. The context information investigated in literature includes regions surrounding objects @cite_6 @cite_55 @cite_28 , object-scene interaction @cite_30 @cite_44 , and the presence, location, orientation and size relationship among objects @cite_7 @cite_9 @cite_53 @cite_22 @cite_42 @cite_69 @cite_37 @cite_30 @cite_64 @cite_51 @cite_31 @cite_34 @cite_27 @cite_14 @cite_71 . In this paper, we use whole-image classification scores over a large number of classes from a deep model as global contextual information to refine detection scores. | {
"cite_N": [
"@cite_30",
"@cite_64",
"@cite_22",
"@cite_42",
"@cite_44",
"@cite_71",
"@cite_69",
"@cite_37",
"@cite_7",
"@cite_28",
"@cite_55",
"@cite_6",
"@cite_27",
"@cite_34",
"@cite_14",
"@cite_9",
"@cite_53",
"@cite_31",
"@cite_51"
],
"mid": [
"2141364309",
"2046589395",
"2142037471",
"",
"",
"",
"",
"",
"1977470347",
"2077493928",
"",
"2161969291",
"",
"2151454023",
"",
"",
"2150385913",
"2028742349",
"2024665880"
],
"abstract": [
"This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support.",
"Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses.",
"Many state-of-the-art approaches for object recognition reduce the problem to a 0-1 classification task. Such reductions allow one to leverage sophisticated classifiers for learning. These models are typically trained independently for each class using positive and negative examples cropped from images. At test-time, various post-processing heuristics such as non-maxima suppression (NMS) are required to reconcile multiple detections within and between different classes for each image. Though crucial to good performance on benchmarks, this post-processing is usually defined heuristically.",
"",
"",
"",
"",
"",
"To detect multiple objects of interest, the methods based on Hough transform use non-maxima suppression or mode seeking in order to locate and to distinguish peaks in Hough images. Such postprocessing requires tuning of extra parameters and is often fragile, especially when objects of interest tend to be closely located. In the paper, we develop a new probabilistic framework that is in many ways related to Hough transform, sharing its simplicity and wide applicability. At the same time, the framework bypasses the problem of multiple peaks identification in Hough images, and permits detection of multiple objects without invoking non-maximum suppression heuristics. As a result, the experiments demonstrate a significant improvement in detection accuracy both for the classical task of straight line detection and for a more modern category-level (pedestrian) detection problem.",
"Recent work in object localization has shown that the use of contextual cues can greatly improve accuracy over models that use appearance features alone. Although many of these models have successfully explored different types of contextual sources, they only consider one type of contextual interaction (e.g., pixel, region or object level interactions), leaving open questions about the true potential contribution of context. Furthermore, contributions across object classes and over appearance features still remain unknown. In this work, we introduce a novel model for multi-class object localization that incorporates different levels of contextual interactions. We study contextual interactions at pixel, region and object level by using three different sources of context: semantic, boundary support and contextual neighborhoods. Our framework learns a single similarity metric from multiple kernels, combining pixel and region interactions with appearance features, and then uses a conditional random field to incorporate object level interactions. We perform experiments on two challenging image databases: MSRC and PASCAL VOC 2007. Experimental results show that our model outperforms current state-of-the-art contextual frameworks and reveals individual contributions for each contextual interaction level, as well as the importance of each type of feature in object localization.",
"",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"",
"Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. The difficulty is added when several pedestrians overlap in images and occlude each other. We observe, however, that the occlusion visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation - the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the Caltech-Train dataset, the Caltech-Test dataset and the ETH dataset. Including mutual visibility leads to 4 - 8 improvements on multiple benchmark datasets.",
"",
"",
"Recent state-of-the-art algorithms have achieved good performance on normal pedestrian detection tasks. However, pedestrian detection in crowded scenes is still challenging due to the significant appearance variation caused by heavy occlusions and complex spatial interactions. In this paper we propose a unified probabilistic framework to globally describe multiple pedestrians in crowded scenes in terms of appearance and spatial interaction. We utilize a mixture model, where every pedestrian is assumed to belong to a particular subclass and described by the sub-model. Scores of pedestrian parts are used to represent appearance and a quadratic kernel is used to represent relative spatial interaction. For efficient inference, multi-pedestrian detection is modeled as a MAP problem and we utilize a greedy algorithm to get an approximation. For discriminative parameter learning, we formulate it as a learning to rank problem, and propose Latent Rank SVM for learning from weakly labeled data. Experiments on various databases validate the effectiveness of the proposed approach.",
"Proxemics is the study of how people interact. We present a computational formulation of visual proxemics by attempting to label each pair of people in an image with a subset of physically based “touch codes.” A baseline approach would be to first perform pose estimation and then detect the touch codes based on the estimated joint locations. We found that this sequential approach does not perform well because pose estimation step is too unreliable for images of interacting people, due to difficulties with occlusion and limb ambiguities. Instead, we propose a direct approach where we build an articulated model tuned for each touch code. Each such model contains two people, connected in an appropriate manner for the touch code in question. We fit this model to the image and then base classification on the fitting error. Experiments show that this approach significantly outperforms the sequential baseline as well as other related approches.",
"Pedestrian detection from images is an important and yet challenging task. The conventional methods usually identify human figures using image features inside the local regions. In this paper we present that, besides the local features, context cues in the neighborhood provide important constraints that are not yet well utilized. We propose a framework to incorporate the context constraints for detection. First, we combine the local window with neighborhood windows to construct a multi-scale image context descriptor, designed to represent the contextual cues in spatial, scaling, and color spaces. Second, we develop an iterative classification algorithm called contextual boost. At each iteration, the classifier responses from the previous iteration across the neighborhood and multiple image scales, called classification context, are incorporated as additional features to learn a new classifier. The number of iterations is determined in the training process when the error rate converges. Since the classification context incorporates contextual cues from the neighborhood, through iterations it implicitly propagates to greater areas and thus provides more global constraints. We evaluate our method on the Caltech benchmark dataset [11]. The results confirm the advantages of the proposed framework. Compared with state-of-the-art approaches, our method reduces the miss rate from 29 by [30] to 25 at 1 false positive per image (FPPI)."
]
} |
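The context-refinement step described above (using whole-image classification scores to adjust per-box detection scores) can be sketched minimally. The fixed linear blend and the `alpha` parameter below are illustrative assumptions; the paper learns how the global scores refine the detections:

```python
def rescore_with_context(det_scores, image_cls_scores, alpha=0.3):
    """Blend each box's detection score for class c with the
    whole-image classification score for c (global context).
    det_scores maps (box, class) -> score; returns box -> new score
    (one class per box assumed in this toy version)."""
    return {
        box: (1 - alpha) * s + alpha * image_cls_scores[cls]
        for (box, cls), s in det_scores.items()
    }
```

The intuition: a weak "dog" box in an image whose global classifier strongly believes a dog is present gets pulled up, and vice versa.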
1412.5731 | 2164179872 | The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control. | The use of spatial random point processes to model transmitters and receivers in wireless networks has been considered extensively in the literature. It allows tools from stochastic geometry @cite_25 @cite_40 to be used to characterize performance metrics analytically. For example, the random network topology is assumed in characterizing the coverage and rate @cite_23 as well as handover @cite_26 in traditional cellular networks. 
Stochastic geometry based analysis can also be extended to multi-tier HetNets: flexible user association among different tiers is studied in @cite_30 , where the coverage and rate are analyzed; open-access and closed-access user association are discussed in @cite_4 ; and the distribution of the per-user rate is derived in @cite_0 by considering the cell size and user distribution in random networks. However, none of these works characterizes user performance from a network utility perspective, which models the tradeoff between rate and fairness. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_0",
"@cite_40",
"@cite_23",
"@cite_25"
],
"mid": [
"2034420299",
"1965942575",
"2149170915",
"2005108639",
"631335369",
"2150166076",
"2145873277"
],
"abstract": [
"In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed-forms in some plausible special cases. We also derive the average ergodic rate of the typical user, and the minimum average user throughput - the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs or tiers changes the outage probability or average ergodic rate in an interference-limited full-loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics.",
"We consider stochastic cellular networks where base stations locations form a homogeneous Poisson point process and each mobile is attached to the base station that provides the best mean signal power. The mobile is in outage if the SINR falls below some threshold. The handover decision has to be made if the mobile is in outage during several time slots. The outage probability and the handover probability are evaluated in taking into account the effect of path loss, shadowing, Rayleigh fast fading, frequency factor reuse and conventional beamforming. The main assumption is that the Rayleigh fast fading changes each time slot while other network components remain static during the period of study.",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.",
"Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.",
"Covering point process theory, random geometric graphs and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs and accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques with practical examples and exercises, this is a comprehensive guide to the spatial stochastic models essential for modelling and analysis of wireless network performance.",
"Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.",
"Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue."
]
} |
1412.5731 | 2164179872 | The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control. | For the user association problem, one of the prior approaches in the literature involves heuristic greedy search, i.e., adding users that improve a certain metric to the BS in a greedy fashion, as in @cite_5 and @cite_11 for single-tier networks and multi-tier HetNets, respectively. 
Another prior approach involves a utility maximization framework and pricing-based association methods; see @cite_7 for single-tier networks and @cite_37 @cite_8 for HetNets. In @cite_16 @cite_20 , the association problem is considered jointly with resource allocation using a game-theoretic approach. These solutions are dynamic and require real-time computations based on channel and topology realizations. The cell range expansion scheme @cite_18 @cite_13 considered in this paper is semi-static and simple to implement. However, the bias factors are usually determined empirically through system-level performance evaluation @cite_18 . The effect of biased offloading has been investigated for multi-tier HetNets in @cite_30 @cite_0 under random topology, where the optimal bias in terms of SIR and rate coverage is determined through numerical evaluation. In our work, the optimal bias factor of each tier is derived through analytical network utility optimization. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2034420299",
"2010630450",
"2136530738",
"2124130287",
"2084099189",
"2005108639",
"2065092668",
"2167145293",
"1973560162",
"2099285351",
"2129591674"
],
"abstract": [
"In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed-forms in some plausible special cases. We also derive the average ergodic rate of the typical user, and the minimum average user throughput - the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs or tiers changes the outage probability or average ergodic rate in an interference-limited full-loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics.",
"For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rate for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor Aj ≥ 1, which makes it appear more attractive than the heavily-loaded macrocell. This paper bridges the gap between these approaches through several physical relaxations of the network-wide association problem, whose solution is NP hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little, if the bias values Aj are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a maximizing received power association.",
"As the spectral efficiency of a point-to-point link in cellular networks approaches its theoretical limits, with the forecasted explosion of data traffic, there is a need for an increase in the node density to further improve network capacity. However, in already dense deployments in today's networks, cell splitting gains can be severely limited by high inter-cell interference. Moreover, high capital expenditure cost associated with high power macro nodes further limits viability of such an approach. This article discusses the need for an alternative strategy, where low power nodes are overlaid within a macro network, creating what is referred to as a heterogeneous network. We survey current state of the art in heterogeneous deployments and focus on 3GPP LTE air interface to describe future trends. A high-level overview of the 3GPP LTE air interface, network nodes, and spectrum allocation options is provided, along with the enabling mechanisms for heterogeneous deployments. Interference management techniques that are critical for LTE heterogeneous deployments are discussed in greater detail. Cell range expansion, enabled through cell biasing and adaptive resource partitioning, is seen as an effective method to balance the load among the nodes in the network and improve overall trunking efficiency. An interference cancellation receiver plays a crucial role in ensuring acquisition of weak cells and reliability of control and data reception in the presence of legacy signals.",
"In this paper, we jointly consider the resource allocation and base-station assignment problems for the downlink in CDMA networks that could carry heterogeneous data services. We first study a joint power and rate allocation problem that attempts to maximize the expected throughput of the system. This problem is inherently difficult because it is in fact a nonconvex optimization problem. To solve this problem, we develop a distributed algorithm based on dynamic pricing. This algorithm provides a power and rate allocation that is asymptotically optimal in the number of mobiles. We also study the effect of various factors on the development of efficient resource allocation strategies. Finally, using the outcome of the power and rate allocation algorithm, we develop a pricing-based base-station assignment algorithm that results in an overall joint resource allocation and base-station assignment. In this algorithm, a base-station is assigned to each mobile taking into account the congestion level of the base-station as well as the transmission environment of the mobile.",
"This paper considers optimization of the user and base-station (BS) association in a wireless downlink heterogeneous cellular network under the proportional fairness criterion. We first consider the case where each BS has a single antenna and transmits at fixed power and propose a distributed price update strategy for a pricing-based user association scheme, in which the users are assigned to the BS based on the value of a utility function minus a price. The proposed price update algorithm is based on a coordinate descent method for solving the dual of the network utility maximization problem and it has a rigorous performance guarantee. The main advantage of the proposed algorithm as compared to an existing subgradient method for price update is that the proposed algorithm is independent of parameter choices and can be implemented asynchronously. Further, this paper considers the joint user association and BS power control problem and proposes an iterative dual coordinate descent and the power optimization algorithm that significantly outperforms existing approaches. Finally, this paper considers the joint user association and BS beamforming problem for the case where the BSs are equipped with multiple antennas and spatially multiplex multiple users. We incorporate dual coordinate descent with the weighted minimum mean-squared error (WMMSE) algorithm and show that it achieves nearly the same performance as a computationally more complex benchmark algorithm (which applies the WMMSE algorithm on the entire network for BS association) while avoiding excessive BS handover.",
"Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.",
"Next-generation cellular networks will provide higher cell capacity by adopting advanced physical layer techniques and broader bandwidth. Even in such networks, boundary users would suffer from low throughput due to severe intercell interference and unbalanced user distributions among cells, unless additional schemes to mitigate this problem are employed. In this paper, we tackle this problem by jointly optimizing partial frequency reuse and load-balancing schemes in a multicell network. We formulate this problem as a network-wide utility maximization problem and propose optimal offline and practical online algorithms to solve this. Our online algorithm turns out to be a simple mixture of inter- and intra-cell handover mechanisms for existing users and user association control and cell-site selection mechanisms for newly arriving users. A remarkable feature of the proposed algorithm is that it uses a notion of expected throughput as the decision making metric, as opposed to signal strength in conventional systems. Extensive simulations demonstrate that our online algorithm can not only closely approximate network-wide proportional fairness but also provide two types of gain, interference avoidance gain and load balancing gain, which yield 20 100 throughput improvement of boundary users (depending on traffic load distribution), while not penalizing total system throughput.We also demonstrate that this improvement cannot be achieved by conventional systems using universal frequency reuse and signal strength as the decision making metric.",
"We consider a resource management problem in a multi-cell downlink OFDMA network whereby the goal is to find the optimal combination of (i) assignment of users to base stations and (ii) resource allocation strategies at each base station. Efficient resource management protocols must rely on users truthfully reporting privately held information such as downlink channel states. However, individual users can manipulate the resulting resource allocation (by misreporting their private information) if by doing so they can improve their payoff. Therefore, it is of interest to design efficient resource management protocols that are strategy-proof, i.e. it is in the users' best interests to truthfully report their private information. Unfortunately, we show that the implementation of any protocol that is efficient and strategy-proof is NP-hard. Thus, we propose a computationally tractable strategy-proof mechanism that is approximately efficient, i.e. the solution obtained yields at least 1 2 of the optimal throughput. Simulations are provided to illustrate the effectiveness of the proposed mechanism.",
"In order to expand the downlink (DL) coverage areas of picocells in the presence of an umbrella macrocell, the concept of range expansion has been recently proposed, in which a positive range expansion bias (REB) is added to the DL received signal strengths (RSSs) of picocell pilot signals at user equipments (UEs). Although range expansion may increase DL footprints of picocells, it also results in severe DL inter-cell interference in picocell expanded regions (ERs), because ER picocell user equipments (PUEs) are not connected to the cells that provide the strongest DL RSSs. In this paper, we derive closed-form formulas to calculate appropriate REBs for two different range expansion strategies, investigate both DL and uplink (UL) inter-cell interference coordination (ICIC) to enhance picocell performance, and propose a new macrocell-picocell cooperative scheduling scheme to mitigate both DL and UL interference caused by macrocells to ER PUEs. Simulation results provide insights on REB selection approaches at picocells, and demonstrate the benefits of the proposed macrocell-picocell cooperative scheduling scheme over alternative approaches.",
"We consider the distributed uplink resource allocation problem in a multi-carrier wireless network with multiple access points (APs). Each mobile user can optimize its own transmission rate by selecting a suitable AP and by controlling its transmit power. Our objective is to devise suitable algorithms by which mobile users can jointly perform these tasks in a distributed manner. Our approach relies on a game theoretic formulation of the joint power control and AP selection problem. In the proposed game, each user is a player with an associated strategy containing a discrete variable (the AP selection decision) and a continuous vector (the power allocation among multiple channels). We provide characterizations of the Nash Equilibrium of the proposed game. We present a novel algorithm named Joint Access Point Selection and Power Allocation (JASPA) and its various extensions (with different update schedules) that allow the users to efficiently optimize their rates. Finally, we study the properties of the proposed algorithms as well as their performance via extensive simulations.",
"Embedding pico femto base-stations and relay nodes in a macro-cellular network is a promising method for achieving substantial gains in coverage and capacity compared to macro-only networks. These new types of base-stations can operate on the same wireless channel as the macro-cellular network, providing higher spatial reuse via cell splitting. However, these base-stations are deployed in an unplanned manner, can have very different transmit powers, and may not have traffic aggregation among many users. This could potentially result in much higher interference magnitude and variability. Hence, such deployments require the use of innovative cell association and inter-cell interference coordination techniques in order to realize the promised capacity and coverage gains. In this paper, we describe new paradigms for design and operation of such heterogeneous cellular networks. Specifically, we focus on cell splitting, range expansion, semi-static resource negotiation on third-party backhaul connections, and fast dynamic interference management for QoS via over-the-air signaling. Notably, our methodologies and algorithms are simple, lightweight, and incur extremely low overhead. Numerical studies show that they provide large gains over currently used methods for cellular networks."
]
} |
1412.5731 | 2164179872 | The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control. | For the spectrum allocation problem, disjoint spectrum partition between macro and femto tiers has been considered in prior works. The authors in @cite_3 analytically determine the spectrum partition between the two tiers that maximizes the network-wide area spectral efficiency. 
Stochastic geometry is used in @cite_2 to study the optimal spectrum partition by formulating the throughput maximization problem subject to constraints on coverage probabilities. Biased user association and spectrum partition can also be considered jointly. The authors of @cite_22 analyze the rate coverage for a two-tier topology and provide trends with respect to the spectrum partition fraction; however, no optimal partition is derived analytically. For a general multi-tier network, spectrum partition and user association are optimized analytically in the downlink in terms of the user rate in @cite_10 and the rate coverage in @cite_35 . Different from these works, under the orthogonal spectrum allocation assumption, we analytically determine the optimal inter-tier spectrum partition in terms of the mean user utility for both downlink and uplink. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_3",
"@cite_2",
"@cite_10"
],
"mid": [
"2125088789",
"2013366448",
"2120419969",
"2109830484",
"1989423868"
],
"abstract": [
"For a wireless multi-tier heterogeneous network with orthogonal spectrum allocation across tiers, we optimize the association probability and the fraction of spectrum allocated to each tier so as to maximize rate coverage. In practice, the association probability can be controlled using a biased received signal power. The optimization problem is non-convex and we are forced to explore locally optimal solutions. We make two contributions in this paper: first, we show that there exists a relation between the first derivatives of the objective function with respect to each of the optimization variables. This can be used to simplify numerical solutions to the optimization problem. Second, we explore the optimality of the intuitive solution that the fraction of spectrum allocated to each tier should be equal to the tier association probability. We show that, in this case, a closed-form solution exists. Importantly, our numerical results show that there is essentially zero performance loss. The results also illustrate the significant gains possible by jointly optimizing the user association and the resource allocation.",
"In heterogeneous cellular networks (HCNs), it is desirable to offload mobile users to small cells, which are typically significantly less congested than the macrocells. To achieve sufficient load balancing, the offloaded users often have much lower SINR than they would on the macrocell. This SINR degradation can be partially alleviated through interference avoidance, for example time or frequency resource partitioning, whereby the macrocell turns off in some fraction of such resources. Naturally, the optimal offloading strategy is tightly coupled with resource partitioning; the optimal amount of which in turn depends on how many users have been offloaded. In this paper, we propose a general and tractable framework for modeling and analyzing joint resource partitioning and offloading in a two-tier cellular network. With it, we are able to derive the downlink rate distribution over the entire network, and an optimal strategy for joint resource partitioning and offloading. We show that load balancing, by itself, is insufficient, and resource partitioning is required in conjunction with offloading to improve the rate of cell edge users in co-channel heterogeneous networks.",
"Two-tier networks, comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays), offer an economically viable way to improve cellular system capacity. The capacity-limiting factor in such networks is interference. The cross-tier interference between macrocells and femtocells can suffocate the capacity due to the near-far problem, so in practice hotspots should use a different frequency channel than the potentially nearby high-power macrocell users. Centralized or coordinated frequency planning, which is difficult and inefficient even in conventional cellular networks, is all but impossible in a two-tier network. This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA). The proposed allocation is optimal in terms of area spectral efficiency (ASE), and is subjected to a sensible quality of service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate. Results show the dependence of this allocation on the QoS requirement, hotspot density and the co-channel interference from the macrocell and femtocells. Design interpretations are provided.",
"The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities.",
"We study joint spectrum allocation and user association in heterogeneous cellular networks with multiple tiers of base stations. A stochastic geometric approach is applied as the basis to derive the average downlink user data rate in a closed-form expression. Then, the expression is employed as the objective function in jointly optimizing spectrum allocation and user association, which is of non-convex programming in nature. A computationally efficient Structured Spectrum Allocation and User Association (SSAUA) approach is proposed, solving the optimization problem optimally when the density of users is low, and near-optimally with a guaranteed performance bound when the density of users is high. A Surcharge Pricing Scheme (SPS) is also presented, such that the designed association bias values can be achieved in Nash equilibrium. Simulations and numerical studies are conducted to validate the accuracy and efficiency of the proposed SSAUA approach and SPS."
]
} |
1412.5731 | 2164179872 | The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control. | Most of the previous works on HetNets focus on the downlink. A key difference in uplink as compared to downlink is that fractional power control is often used in uplink to fully or partially compensate for the path loss, e.g., as defined in 3GPP-LTE @cite_1 . The influence of fractional power control on system performance is studied in various works, e.g., @cite_21 @cite_17 @cite_14 under regular hexagonal topology. 
For networks with random topology and accounting for fractional power control, @cite_6 analytically derives uplink SIR and rate distribution for single-tier networks; @cite_32 investigates uplink outage capacity for two-tier networks with shared spectrum; @cite_33 extends the analysis to multi-tier uplink networks in terms of outage probability and spectral efficiency. In this paper, the mean user utility of random multi-tier HetNets in uplink with fractional power control is analyzed and optimized. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_17"
],
"mid": [
"2114498940",
"2963793432",
"2171203705",
"",
"2102465185",
"1984714570",
"2125161536"
],
"abstract": [
"Uplink power control in UTRAN Long Term Evolution consists of an open-loop scheme handled by the User Equipment and closed-loop power corrections determined and signaled by the network. In this study the difference in performance between pure open-loop and combined open and closed-loop power control has been analyzed and the different behavior of fractional vs. full path-loss compensation has been evaluated. A comprehensive system level simulation model has been used with a facility to trace a particular test user during its motion from eNodeB towards the cell border and back to its initial position. This study demonstrates the effect of distance path-loss of a test user on several physical layer performance metrics including throughput, resource allocation as well as modulation and coding scheme utilization. Simulation results in a fully loaded network show high throughput for open-loop fractional power control for the user located in the vicinity of the serving eNodeB, however, steep performance degradation has been observed when the user is moving towards the cell edge. The user throughput at the cell border can be increased by the closed-loop component. The benefit of closed-loop power control is the higher homogeneity in terms of throughput across the entire network area and the ability to automatically stabilize the network performance under different conditions like cell load and traffic distribution.",
"Using stochastic geometry, we develop a tractable uplink modeling paradigm for outage probability and spectral efficiency in both single and multi-tier cellular wireless networks. The analysis accounts for per user equipment (UE) power control as well as the maximum power limitations for UEs. More specifically, for interference mitigation and robust uplink communication, each UE is required to control its transmit power such that the average received signal power at its serving base station (BS) is equal to a certain threshold ρ_o. Due to the limited transmit power, the UEs employ a truncated channel inversion power control policy with a cutoff threshold of ρ_o. We show that there exists a transfer point in the uplink system performance that depends on the following tuple: BS intensity λ, maximum transmit power of UEs P_u, and ρ_o. That is, when P_u is a tight operational constraint with respect to (w.r.t.) λ and ρ_o, the uplink outage probability and spectral efficiency highly depend on the values of λ and ρ_o. In this case, there exists an optimal cutoff threshold ρ_o*, which depends on the system parameters, that minimizes the outage probability. On the other hand, when P_u is not a binding operational constraint w.r.t. λ and ρ_o, the uplink outage probability and spectral efficiency become independent of λ and ρ_o. We obtain approximate yet accurate simple expressions for outage probability and spectral efficiency, which reduce to closed forms in some special cases.",
"UTRAN long term evolution is currently being standardized in 3GPP with the aim of more than twice the capacity over high-speed packet access. The chosen multiple access for uplink is single carrier FDMA, which avoids the intra-cell interference typical of CDMA systems, but it is still sensitive to inter-cell interference. As a result, the role of the power control becomes decisive to provide the required SINR, while controlling at the same time the interference caused to neighboring cells. This is the target of the fractional power control (FPC) algorithm lately approved in 3GPP. This paper evaluates in detail the impact of a FPC scheme on the SINR and interference distributions in order to provide a sub-optimal configuration tuned for both interference- and noise-limited scenarios.",
"",
"Two-tier femtocell networks- comprising a conventional cellular network plus embedded femtocell hotspots- offer an economically viable solution to achieving high cellular user capacity and improved coverage. With universal frequency reuse and DS-CDMA transmission however, the ensuing cross-tier interference causes unacceptable outage probability. This paper develops an uplink capacity analysis and interference avoidance strategy in such a two-tier CDMA network. We evaluate a network-wide area spectral efficiency metric called the operating contour (OC) defined as the feasible combinations of the average number of active macrocell users and femtocell base stations (BS) per cell-site that satisfy a target outage constraint. The capacity analysis provides an accurate characterization of the uplink outage probability, accounting for power control, path loss and shadowing effects. Considering worst case interference at a corner femtocell, results reveal that interference avoidance through a time-hopped CDMA physical layer and sectorized antennas allows about a 7x higher femtocell density, relative to a split spectrum two-tier network with omnidirectional femtocell antennas. A femtocell exclusion region and a tier selection based handoff policy offers modest improvements in the OCs. These results provide guidelines for the design of robust shared spectrum two-tier networks.",
"Cellular uplink analysis has typically been undertaken by either a simple approach that lumps all interference into a single deterministic or random parameter in a Wyner-type model, or via complex system level simulations that often do not provide insight into why various trends are observed. This paper proposes a novel middle way using point processes that is both accurate and also results in easy-to-evaluate integral expressions based on the Laplace transform of the interference. We assume mobiles and base stations are randomly placed in the network with each mobile pairing up to its closest base station. Compared to related recent work on downlink analysis, the proposed uplink model differs in two key features. First, dependence is considered between user and base station point processes to make sure each base station serves a single mobile in the given resource block. Second, per-mobile power control is included, which further couples the transmission of mobiles due to location-dependent channel inversion. Nevertheless, we succeed in deriving the coverage (equivalently outage) probability of a typical link in the network. This model can be used to address a wide variety of system design questions in the future. In this paper we focus on the implications for power control and show that partial channel inversion should be used at low signal-to-interference-plus-noise ratio (SINR), while full power transmission is optimal at higher SINR.",
"Uplink power control is a key radio resource management function. It is typically used to maximize the power of the desired received signals while limiting the generated interference. This paper presents the 3GPP long term evolution (LTE) power control mechanism, and compares its performance to two reference mechanisms. The LTE power control mechanism consists of a closed loop component operating around an open loop point of operation. Specifically, the open loop component has a parameterized fractional path loss compensation factor, enabling a trade-off between cell edge bitrate and cell capacity. The closed-loop component can be limited to compensate for long-term variations, enabling fast channel quality variations to be utilized by scheduling and link adaptation. Simulation results indicate that the LTE power control mechanism is advantageous compared to reference mechanisms using full path loss compensation and SINR balancing. The fractional path-loss compensation can improve the cell-edge bitrate and/or the capacity by up to 20% while at the same time battery lifetime is improved. The fast SINR balancing closed loop mechanism performs poorly at high load since it does not utilize the link adaptation and the full link performance capability in LTE."
]
} |
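The open-loop fractional power control discussed in the related-work text of the row above (full vs. fractional path-loss compensation, as in 3GPP LTE) can be sketched as follows. The parameter values here (P0, the compensation factor alpha, and a 23 dBm UE power cap) are illustrative assumptions, not values taken from the cited works.

```python
def uplink_tx_power_dbm(path_loss_db, p0_dbm=-90.0, alpha=0.8, p_max_dbm=23.0):
    """Open-loop fractional power control: the UE compensates a fraction
    alpha of its path loss (alpha = 1 is full compensation, alpha < 1 is
    fractional), capped at the UE's maximum transmit power."""
    return min(p_max_dbm, p0_dbm + alpha * path_loss_db)

# A cell-center user (low path loss) transmits well below the cap,
# while a cell-edge user (high path loss) saturates at p_max.
center_dbm = uplink_tx_power_dbm(100.0)  # -90 + 0.8 * 100 = -10 dBm
edge_dbm = uplink_tx_power_dbm(145.0)    # -90 + 0.8 * 145 = 26 -> capped at 23 dBm
```

With alpha < 1, cell-edge users only partially compensate their path loss, trading their own SIR for reduced interference to neighboring cells — the trade-off studied in the cited power control works.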
1412.5731 | 2164179872 | The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control. | Part of this work has appeared in @cite_38 @cite_29 , which contain the analysis and optimization of the downlink case. | {
"cite_N": [
"@cite_38",
"@cite_29"
],
"mid": [
"2148562694",
"2133705997"
],
"abstract": [
"This paper considers the joint optimization of frequency reuse and base-station (BS) bias for user association in downlink heterogeneous networks for load balancing and intercell interference management. To make the analysis tractable, we assume that BSs are randomly deployed as point processes in multiple tiers, where BSs in each tier have different transmission powers and spatial densities. A utility maximization framework is formulated based on the user coverage rate, which is a function of the different BS biases for user association and different frequency reuse factors across BS tiers. Compared to previous works where the bias levels are heuristically determined and full reuse is adopted, we quantitatively compute the optimal user association bias and obtain the closed-form solution of the optimal frequency reuse. Interestingly, we find that the optimal bias and the optimal reuse factor of each BS tier have an inversely proportional relationship. Further, we also propose an iterative method for optimizing these two factors. In contrast to system-level optimization solutions based on specific channel realization and network topology, our approach is off-line and is useful for deriving deployment insights. Numerical results show that optimizing user association and frequency reuse for multi-tier heterogeneous networks can effectively improve cell-edge user rate performance and utility.",
"The joint spectrum partition and user association problem for multi-tier heterogeneous networks is studied in this paper, where disjoint spectrums are allocated among tiers and users are associated with each tier with a biased received power. The random placement of base-stations (BSs) of different tiers are modeled using stochastic geometry, which accounts for their practical deployment and also makes analysis tractable. We derive an upper bound of the average user proportional fair utility based on the user coverage rate, from which we formulate a network utility maximization problem. The optimization of the proposed utility bound shows that the optimal spectrum allocation for each BS tier matches the average proportion of users associated with that tier. The solution to the optimization problem also provides closed-form expressions for the optimal user associated bias factors. Compared to system-level optimization solutions based on specific network topology and channel realization, our offline analytical approach offers deployment insights. Simulation results demonstrates the effectiveness of the proposed approach."
]
} |
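The biased downlink received-power association described in the abstracts above can be made concrete with a small sketch: a user associates with the tier maximizing bias times average received power, B_k * P_k * r_k^(-alpha). The distances, transmit powers, and bias values below are illustrative assumptions.

```python
def associate(distances, powers, biases, alpha=4.0):
    """Index of the tier maximizing biased average received power
    B_k * P_k * d_k^(-alpha) from that tier's nearest base station."""
    metrics = [b * p * d ** (-alpha) for d, p, b in zip(distances, powers, biases)]
    return max(range(len(metrics)), key=metrics.__getitem__)

dists = [200.0, 60.0]  # distance to nearest macro / pico BS (meters)
pows = [1000.0, 1.0]   # macro transmits 1000x the pico power (linear scale)
tier_unbiased = associate(dists, pows, [1.0, 1.0])  # macro wins on raw power
tier_biased = associate(dists, pows, [1.0, 20.0])   # bias offloads user to pico
```

Raising a tier's bias offloads more users to it, which is exactly why the optimal bias and the optimal spectrum allocation become coupled in the cited analysis.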
1412.5238 | 1641043434 | With the proliferation of network data, researchers are increasingly focusing on questions investigating phenomena occurring on networks. This often includes analysis of peer-effects, i.e., how the connections of an individual affect that individual's behavior. This type of influence is not limited to direct connections of an individual (such as friends), but also to individuals that are connected through longer paths (for example, friends of friends, or friends of friends of friends). In this work, we identify an ambiguity in the definition of what constitutes the extended neighborhood of an individual. This ambiguity gives rise to different semantics and supports different types of underlying phenomena. We present experimental results, both on synthetic and real networks, that quantify differences among the sets of extended neighbors under different semantics. Finally, we provide experimental evidence that demonstrates how the use of different semantics affects model selection. | There is a plethora of work analyzing the properties of social networks. However, the actual construction for sets of peers is relatively understudied. A closely related work is that of @cite_3 , which focuses on analyzing the structure of the Facebook graph. The authors explicitly make a distinction between the number of non-unique friends of friends (the number of paths of length two from the initial node that do not return to the initial node) and unique friends of friends (the set of nodes that can be reached from the initial node with paths of length two). That distinction could be thought of as an additional type of semantics. It is worth pointing out that our work has connections to feature construction for relational learning, as well as to probabilistic relational models, such as PRMs @cite_4 and Markov Logic Networks @cite_5 . 
More specifically, relational models are lifted representations that can be grounded to propositional models, according to some grounding semantics. Those semantics specify how an edge on the relational level "translates" to an edge on the propositional level. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_3"
],
"mid": [
"1977970897",
"959407384",
"1893161742"
],
"abstract": [
"We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.",
"The invention comprises a method and apparatus for learning probabilistic models (PRM's) with attribute uncertainty. A PRM with attribute uncertainty defines a probability distribution over instantiations of a database. A learned PRM is useful for discovering interesting patterns and dependencies in the data. Unlike many existing techniques, the process is data-driven rather than hypothesis driven. This makes the technique particularly well-suited for exploratory data analysis. In addition, the invention comprises a method and apparatus for handling link uncertainty in PRM's. Link uncertainty is uncertainty over which entities are related in our domain. The invention comprises of two mechanisms for modeling link uncertainty: reference uncertainty and existence uncertainty. The invention includes learning algorithms for each form of link uncertainty. The third component of the invention is a technique for performing database selectivity estimation using probabilistic relational models. The invention provides a unified framework for the estimation of query result size for a broad class of queries involving both select and join operations. A single learned model can be used to efficiently estimate query result sizes for a wide collection of potential queries across multiple tables.",
"We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the \"six degrees of separation\" phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which \"your friends have more friends than you\". Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics."
]
} |
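The distinction drawn in the Facebook-graph abstract above — between non-unique friends-of-friends (length-2 paths that do not return to the ego) and the unique set of nodes reachable in two hops — is easy to make concrete. The toy adjacency below is an illustrative assumption, and excluding direct friends from the unique set is one possible convention.

```python
def friends_of_friends(adj, ego):
    """Return (non-unique FoF count, unique FoF set) for one ego.
    Non-unique: length-2 paths from ego that do not return to ego.
    Unique: 2-hop nodes, here excluding the ego and its direct friends."""
    friends = adj[ego]
    non_unique = sum(1 for f in friends for g in adj[f] if g != ego)
    unique = {g for f in friends for g in adj[f]} - friends - {ego}
    return non_unique, unique

# Ego 'a' has friends 'b' and 'c'; both also know 'd' and each other.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c', 'd'}, 'c': {'a', 'b', 'd'}, 'd': {'b', 'c'}}
paths, fof = friends_of_friends(adj, 'a')  # 4 paths, but only {'d'} is new
```

The gap between the two counts (4 paths vs. 1 new node) is exactly the ambiguity in "extended neighborhood" semantics that the row's paper investigates.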
1412.5404 | 2951319811 | Short text has been the prevalent format for information on the Internet in recent decades, especially with the development of online social media, whose millions of users generate a vast number of short messages every day. Although the sophisticated signals delivered by short texts make them a promising source for topic modeling, their extreme sparsity and imbalance bring unprecedented challenges to conventional topic models like LDA and its variants. Aiming at a simple but general solution for topic modeling in short texts, we present a word co-occurrence network based model named WNTM to tackle the sparsity and imbalance simultaneously. Different from previous approaches, WNTM models the distribution over topics for each word instead of learning topics for each document, which enhances the semantic density of the data space without introducing much additional time or space complexity. Meanwhile, the rich contextual information preserved in the word-word space also guarantees its sensitivity in identifying rare topics with convincing quality. Furthermore, employing the same Gibbs sampling as LDA makes WNTM easy to extend to various application scenarios. Extensive validation on both short and normal texts confirms that WNTM outperforms baseline methods. Finally, we also demonstrate its potential for precisely discovering newly emerging topics or unexpected events on Weibo at very early stages. | Sparse short texts have also attracted much research interest in the previous literature, and most early studies mainly focus on increasing data density by utilizing auxiliary information. For example, @cite_38 train topic models on aggregated tweets that share the same word, and find that those models work better than models trained directly on the original tweets. @cite_2 propose a search-snippet-based similarity measure for short texts. 
Topics can also be learned on short texts via transfer learning from auxiliary long-text data @cite_1 . Another way to deal with data sparsity in short texts is to apply specialized topic models. For example, some models assume each tweet covers only a single topic @cite_11 . @cite_33 propose a special form of the mixture of unigrams @cite_7 , called the biterm topic model, to improve topic modeling on short texts. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_11"
],
"mid": [
"2063904635",
"1714665356",
"",
"2139317750",
"2161443453",
"2168332560"
],
"abstract": [
"Social networks such as Facebook, LinkedIn, and Twitter have been a crucial source of information for a wide spectrum of users. In Twitter, popular information that is deemed important by the community propagates through the network. Studying the characteristics of content in the messages becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis and others. While many researchers wish to use standard text mining tools to understand messages on Twitter, the restricted length of those messages prevents them from being employed to their full potential. We address the problem of using standard topic models in micro-blogging environments by studying how the models can be trained on the dataset. We propose several schemes to train a standard topic model and compare their quality and effectiveness through a set of carefully designed experiments from both qualitative and quantitative perspectives. We show that by training a topic model on aggregated messages we can obtain a higher quality of learned model which results in significantly better performance in two real-world classification problems. We also discuss how the state-of-the-art Author-Topic model fails to model hierarchical relationships between entities in Social Media.",
"Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"",
"With the rapid growth of social Web applications such as Twitter and online advertisements, the task of understanding short texts is becoming more and more important. Most traditional text mining techniques are designed to handle long text documents. For short text messages, many of the existing techniques are not effective due to the sparseness of text representations. To understand short messages, we observe that it is often possible to find topically related long texts, which can be utilized as the auxiliary data when mining the target short texts data. In this article, we present a novel approach to cluster short text messages via transfer learning from auxiliary long text data. We show that while some previous work exists that enhance short text clustering with related long texts, most of them ignore the semantic and topical inconsistencies between the target and auxiliary data and hurt the clustering performance. To accommodate the possible inconsistency between source and target data, we propose a novel topic model - Dual Latent Dirichlet Allocation (DLDA) model, which jointly learns two sets of topics on short and long texts and couples the topic parameters to cope with the potential inconsistency between data sets. We demonstrate through large-scale clustering experiments on both advertisements and Twitter data that we can obtain superior performance over several state-of-art techniques for clustering short text documents.",
"Determining the similarity of short text snippets, such as search queries, works poorly with traditional document similarity measures (e.g., cosine), since there are often few, if any, terms in common between two short text snippets. We address this problem by introducing a novel method for measuring the similarity between short text snippets (even those without any overlapping terms) by leveraging web search results to provide greater context for the short texts. In this paper, we define such a similarity kernel function, mathematically analyze some of its properties, and provide examples of its efficacy. We also show the use of this kernel function in a large-scale system for suggesting related queries to search engine users.",
"Twitter as a new form of social media can potentially contain much useful information, but content analysis on Twitter has not been well studied. In particular, it is not clear whether as an information source Twitter can be simply regarded as a faster news feed that covers mostly the same information as traditional news media. In this paper we empirically compare the content of Twitter with a traditional news medium, New York Times, using unsupervised topic modeling. We use a Twitter-LDA model to discover topics from a representative sample of the entire Twitter. We then use text mining techniques to compare these Twitter topics with topics from New York Times, taking into consideration topic categories and types. We also study the relation between the proportions of opinionated tweets and retweets and topic categories and types. Our comparisons show interesting and useful findings for downstream IR or DM applications."
]
} |
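At the data level, the biterm topic model in the abstracts above replaces per-document word counts with unordered word co-occurrence pairs (biterms) pooled over the whole corpus. A minimal sketch of that input representation, with tokenization simplified to whitespace splitting for illustration:

```python
from collections import Counter
from itertools import combinations

def extract_biterms(docs):
    """Pool unordered word pairs (biterms) across a corpus of short texts."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        for w1, w2 in combinations(words, 2):
            counts[tuple(sorted((w1, w2)))] += 1
    return counts

corpus = ["apple banana cherry", "apple banana"]
biterm_counts = extract_biterms(corpus)
# The pair (apple, banana) is observed twice even though each document is
# tiny: pooling pairs corpus-wide is the extra co-occurrence signal that
# lets such models sidestep document-level sparsity.
```

Word-network models like WNTM exploit the same idea, aggregating co-occurrence statistics across documents rather than modeling each sparse document in isolation.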
1412.5278 | 1699452924 | Social media involve many shared items, such as photos, which may concern more than one user. The challenge is that users’ individual privacy preferences for the same item may conflict, so an approach that simply merges in some way the users’ privacy preferences may provide unsatisfactory results. Previous proposals to deal with the problem were either time-consuming or did not consider compromises to solve these conflicts (e.g., by considering unilaterally imposed approaches only). We propose a negotiation mechanism for users to agree on a compromise for these conflicts. The second challenge we address in this article relates to the exponential complexity of such a negotiation mechanism. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual social media infrastructures with near-optimal results. | Few works have actually been proposed to deal with the problem of collaboratively defining privacy policies for shared items between two or more users of a social media site. We shall discuss them and how they relate to our work in the following paragraphs. @cite_8 propose a method to define privacy policies collaboratively. Their approach is based on a collaborative definition of privacy policies in which all of the parties involved can define strong and weak preferences. They define a privacy language to specify users' preferences in the form of strong and weak conditions, and they detect privacy conflicts based on them. 
However, this approach does not involve any automated method to resolve conflicts, only some suggestions that users may want to consider when they try to resolve such conflicts manually. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2166084080"
],
"abstract": [
"Recent years have seen a significant increase in the popularity of social networking services. These online services enable users to construct groups of contacts, referred to as friends, with which they can share digital content and communicate. This sharing is actively encouraged by the social networking services, with users’ privacy often seen as a secondary concern. In this paper we first propose a privacy-aware social networking service and then introduce a collaborative approach to authoring privacy policies for the service. In addressing user privacy, our approach takes into account the needs of all parties affected by the disclosure of information and digital content."
]
} |
1412.5278 | 1699452924 | Social media involve many shared items, such as photos, which may concern more than one user. The challenge is that users’ individual privacy preferences for the same item may conflict, so an approach that simply merges in some way the users’ privacy preferences may provide unsatisfactory results. Previous proposals to deal with the problem were either time-consuming or did not consider compromises to solve these conflicts (e.g., by considering unilaterally imposed approaches only). We propose a negotiation mechanism for users to agree on a compromise for these conflicts. The second challenge we address in this article relates to the exponential complexity of such a negotiation mechanism. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual social media infrastructures with near-optimal results. | The work described in @cite_46 is based on an incentive mechanism where users are rewarded with a quantity of numeraire each time they share information or acknowledge other users (called co-owners) who are affected by the same item. When there are conflicts among co-owners' policies, the use of the Clark Tax mechanism is suggested, where users can spend their numeraire bidding for the policy that is best for them. As stated in @cite_28 , the usability of this approach may be limited, because users could have difficulties in comprehending the mechanism and specify appropriate bid values in auctions. 
Moreover, the auction process adopted in their approach implies that only the winning bid determines who will be able to access the data, instead of accommodating all stakeholders' privacy preferences. | {
"cite_N": [
"@cite_28",
"@cite_46"
],
"mid": [
"2125470616",
"2108710747"
],
"abstract": [
"We have seen tremendous growth in online social networks (OSNs) in recent years. These OSNs not only offer attractive means for virtual social interactions and information sharing, but also raise a number of security and privacy issues. Although OSNs allow a single user to govern access to her his data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users, remaining privacy violations largely unresolved and leading to the potential disclosure of information that at least one user intended to keep private. In this paper, we propose an approach to enable collaborative privacy management of shared data in OSNs. In particular, we provide a systematic mechanism to identify and resolve privacy conflicts for collaborative data sharing. Our conflict resolution indicates a tradeoff between privacy protection and data sharing by quantifying privacy risk and sharing loss. We also discuss a proof-of-concept prototype implementation of our approach as part of an application in Facebook and provide system evaluation and usability study of our methodology.",
"Social Networking is one of the major technological phenomena of the Web 2.0, with hundreds of millions of people participating. Social networks enable a form of self expression for users, and help them to socialize and share content with other users. In spite of the fact that content sharing represents one of the prominent features of existing Social Network sites, Social Networks yet do not support any mechanism for collaborative management of privacy settings for shared content. In this paper, we model the problem of collaborative enforcement of privacy policies on shared data by using game theory. In particular, we propose a solution that offers automated ways to share images based on an extended notion of content ownership. Building upon the Clarke-Tax mechanism, we describe a simple mechanism that promotes truthfulness, and that rewards users who promote co-ownership. We integrate our design with inference techniques that free the users from the burden of manually selecting privacy preferences for each picture. To the best of our knowledge this is the first time such a protection mechanism for Social Networking has been proposed. In the paper, we also show a proof-of-concept application, which we implemented in the context of Facebook, one of today's most popular social networks. We show that supporting these type of solutions is not also feasible, but can be implemented through a minimal increase in overhead to end-users."
]
} |
1412.5278 | 1699452924 | Social media involve many shared items, such as photos, which may concern more than one user. The challenge is that users’ individual privacy preferences for the same item may conflict, so an approach that simply merges in some way the users’ privacy preferences may provide unsatisfactory results. Previous proposals to deal with the problem were either time-consuming or did not consider compromises to solve these conflicts (e.g., by considering unilaterally imposed approaches only). We propose a negotiation mechanism for users to agree on a compromise for these conflicts. The second challenge we address in this article relates to the exponential complexity of such a negotiation mechanism. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual social media infrastructures with near-optimal results. | In @cite_28 , users must manually define their to other users, the sensitivity that each of the items has for them, and their general privacy concern. Then, the authors use these parameters to calculate two main measures, privacy risk and sharing loss. In particular, they calculate the privacy risk and the sharing loss on what they call segments --- in our terminology, a segment equals the set of agents in conflict --- as a whole, i.e. all of the agents in these segments are assigned the action preferred by either one party or the other in the negotiation. 
That is, in our terminology only two action vectors --- @math and @math induced by the privacy policies @math and @math respectively --- are considered, and the action vector chosen is the one that maximises the tradeoff between privacy risk and sharing loss. Clearly, not considering other possible action vectors could lead to outcomes that are far from optimal. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2125470616"
],
"abstract": [
"We have seen tremendous growth in online social networks (OSNs) in recent years. These OSNs not only offer attractive means for virtual social interactions and information sharing, but also raise a number of security and privacy issues. Although OSNs allow a single user to govern access to her his data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users, remaining privacy violations largely unresolved and leading to the potential disclosure of information that at least one user intended to keep private. In this paper, we propose an approach to enable collaborative privacy management of shared data in OSNs. In particular, we provide a systematic mechanism to identify and resolve privacy conflicts for collaborative data sharing. Our conflict resolution indicates a tradeoff between privacy protection and data sharing by quantifying privacy risk and sharing loss. We also discuss a proof-of-concept prototype implementation of our approach as part of an application in Facebook and provide system evaluation and usability study of our methodology."
]
} |
1412.5278 | 1699452924 | Social media involve many shared items, such as photos, which may concern more than one user. The challenge is that users’ individual privacy preferences for the same item may conflict, so an approach that simply merges in some way the users’ privacy preferences may provide unsatisfactory results. Previous proposals to deal with the problem were either time-consuming or did not consider compromises to solve these conflicts (e.g., by considering unilaterally imposed approaches only). We propose a negotiation mechanism for users to agree on a compromise for these conflicts. The second challenge we address in this article relates to the exponential complexity of such a negotiation mechanism. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual social media infrastructures with near-optimal results. | Finally, there are also related approaches based on voting in the literature @cite_45 @cite_16 @cite_5 . In these cases, a third party collects the decision to be taken (granting denying) for a particular friend from each party. Then, the authors propose to aggregate a final decision based on one voting rule (majority, veto, etc.). However, the rule to be applied is either fixed @cite_45 @cite_16 or is chosen by the user that uploads the item @cite_5 . The problem with this is that the solution to the conflicts then becomes a unilateral decision (being taken by a third-party or by the user that uploads the item) and, thus, there is no room for users to actually negotiate and achieve compromise themselves. 
Moreover, in the latter case, it might actually be quite difficult for the user that uploads the item to anticipate which voting rule would produce the best result without knowing the preferences of the other users. | {
"cite_N": [
"@cite_5",
"@cite_45",
"@cite_16"
],
"mid": [
"2104730303",
"2070002283",
"1602027763"
],
"abstract": [
"Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide usability study and system evaluation of our method.",
"Topology-based access control is today a de-facto standard for protecting resources in On-line Social Networks (OSNs) both within the research community and commercial OSNs. According to this paradigm, authorization constraints specify the relationships (and possibly their depth and trust level) that should occur between the requestor and the resource owner to make the first able to access the required resource. In this paper, we show how topology-based access control can be enhanced by exploiting the collaboration among OSN users, which is the essence of any OSN. The need of user collaboration during access control enforcement arises by the fact that, different from traditional settings, in most OSN services users can reference other users in resources (e.g., a user can be tagged to a photo), and therefore it is generally not possible for a user to control the resources published by another user. For this reason, we introduce collaborative security policies, that is, access control policies identifying a set of collaborative users that must be involved during access control enforcement. Moreover, we discuss how user collaboration can also be exploited for policy administration and we present an architecture on support of collaborative policy enforcement.",
"As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user's private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook's privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content."
]
} |
1412.4474 | 1482247908 | The growing demand for high-speed data, quality of service (QoS) assurance, and energy efficiency has triggered the evolution of fourth-generation (4G) Long-Term Evolution-Advanced (LTE-A) networks to fifth generation (5G) and beyond. Interference is still a major performance bottleneck. This paper studies the application of physical-layer network coding (PNC), which is a technique that exploits interference, in heterogeneous cellular networks. In particular, we propose a rate-maximizing relay selection algorithm for a single cell with multiple relays assuming the decode-and-forward (DF) strategy. With nodes transmitting at different powers, the proposed algorithm adapts the resource allocation according to the differing link rates, and we prove theoretically that the optimization problem is log-concave. The proposed technique is shown to perform significantly better than the widely studied selection-cooperation technique. We then undertake an experimental study—on a software radio platform—of the decoding performance of PNC with unbalanced signal-to-noise ratios (SNRs) in the multiple-access transmissions. This problem is inherent in cellular networks, and it is shown that, with channel coding and decoders based on multiuser detection and successive interference cancellation, the performance is better with power imbalance. This paper paves the way for further research on multicell PNC, resource allocation, and the implementation of PNC with higher order modulations and advanced coding techniques. | To date, most PNC studies have focused on the two-way relay channel (TWRC) model where all the nodes transmit at equal powers @cite_22 . Two key issues in PNC, symbol asynchrony and channel coding, were addressed in the time domain in @cite_11 and in the frequency domain in @cite_14 . 
PNC was also successfully implemented on a software radio platform and insights on throughput gains, symbol misalignment, channel coding, effect of carrier frequency offset and real-time issues were gained through these practical prototyping efforts @cite_14 @cite_23 @cite_9 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_9",
"@cite_23",
"@cite_11"
],
"mid": [
"2293741284",
"2111834530",
"2203527323",
"",
"2076284780"
],
"abstract": [
"Abstract This paper presents the first implementation of a two-way relay network based on the principle of physical-layer network coding (PNC). To date, only a simplified version of PNC, called analog network coding (ANC), has been successfully implemented. The advantage of ANC is that it is simple to implement; the disadvantage, on the other hand, is that the relay amplifies the noise along with the signal before forwarding the signal. PNC systems in which the relay performs XOR or other denoising PNC mappings of the received signal have the potential for significantly better performance. However, the implementation of such PNC systems poses many challenges. For example, the relay in a PNC system must be able to deal with symbol and carrier-phase asynchronies of the simultaneous signals received from multiple nodes, and the relay must perform channel estimation before detecting the signals. We investigate a PNC implementation in the frequency domain, referred to as FPNC, to tackle these challenges. FPNC is based on OFDM. In FPNC, XOR mapping is performed on the OFDM samples in each subcarrier rather than on the samples in the time domain. We implement FPNC on the universal soft radio peripheral (USRP) platform. Our implementation requires only moderate modifications of the packet preamble design of 802.11a g OFDM PHY. With the help of the cyclic prefix (CP) in OFDM, symbol asynchrony and the multi-path fading effects can be dealt with simultaneously in a similar fashion. Our experimental results show that symbol-synchronous and symbol-asynchronous FPNC have essentially the same BER performance, for both channel-coded and non-channel-coded FPNC systems.",
"Abstract The concept of physical-layer network coding (PNC) was proposed in 2006 for application in wireless networks. Since then it has developed into a subfield of network coding with wide implications. The basic idea of PNC is to exploit the mixing of signals that occurs naturally when electromagnetic (EM) waves are superimposed on one another. In particular, at a receiver, the simultaneous transmissions by several transmitters result in the reception of a weighted sum of the signals. This weighted sum is a form of network coding operation by itself. Alternatively, the received signal could be transformed and mapped to other forms of network coding. Exploiting these facts turns out to have profound and fundamental ramifications. Subsequent works by various researchers have led to many new results in the domains of (1) wireless communication, (2) information theory, and (3) wireless networking. The purpose of this paper is fourfold. First, we give a brief tutorial on the basic concept of PNC. Second, we survey and discuss recent key results in the three aforementioned areas. Third, we examine a critical issue in PNC: synchronization. It has been a common belief that PNC requires tight synchronization. Recent results suggest, however, that PNC may actually benefit from asynchrony. Fourth, we propose that PNC is not just for wireless networks; it can also be useful in optical networks. We provide an example showing that the throughput of a passive optical network (PON) could potentially be raised by 100 with PNC.",
"In this paper we consider physical-layer network coding (PLNC) in OFDM-based two-way relaying systems. Practically, a key impairment for the application of PLNC is the carrier frequency offset (CFO) mismatch between the sources and the relay, which can not be compensated completely in the multiple-access (MA) phase. As this CFO mismatch results in inter-carrier interference (ICI) for OFDM transmissions, practical CFO compensation and ICI cancelation strategies are investigated to mitigate the impairment for a-posterior probability (APP) based PLNC decoders at the relay. Furthermore, we perform hardware implementation of the two-way relaying network employing an long term evolution (LTE) near parametrization adapted to PLNC. The APP-based decoding schemes with CFO compensation and ICI cancelation are applied on this real-time transmission platform to verify the analytical results.",
"",
"A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC."
]
} |
1412.4474 | 1482247908 | The growing demand for high-speed data, quality of service (QoS) assurance, and energy efficiency has triggered the evolution of fourth-generation (4G) Long-Term Evolution-Advanced (LTE-A) networks to fifth generation (5G) and beyond. Interference is still a major performance bottleneck. This paper studies the application of physical-layer network coding (PNC), which is a technique that exploits interference, in heterogeneous cellular networks. In particular, we propose a rate-maximizing relay selection algorithm for a single cell with multiple relays assuming the decode-and-forward (DF) strategy. With nodes transmitting at different powers, the proposed algorithm adapts the resource allocation according to the differing link rates, and we prove theoretically that the optimization problem is log-concave. The proposed technique is shown to perform significantly better than the widely studied selection-cooperation technique. We then undertake an experimental study—on a software radio platform—of the decoding performance of PNC with unbalanced signal-to-noise ratios (SNRs) in the multiple-access transmissions. This problem is inherent in cellular networks, and it is shown that, with channel coding and decoders based on multiuser detection and successive interference cancellation, the performance is better with power imbalance. This paper paves the way for further research on multicell PNC, resource allocation, and the implementation of PNC with higher order modulations and advanced coding techniques. | The drawback of the approach in @cite_10 is that the relay selected to maximise the minimum mutual information of the two broadcast links may not be the optimum one for the multiple-access phase. This sub-optimum selection could affect the overall rate of the PNC system. We have also seen that the relay selection algorithms in the literature are simplified by assuming equal time allocation for all the links. 
The performance of the system could be further improved by allocating more time for the weaker link. In addition, the problem of power imbalance, which is inherent in a cellular network, has not been studied. Since all the nodes transmit at different powers, the decoding performance at the relay in the multiple-access phase could be impacted. All the above gaps are addressed in this paper. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2151578621"
],
"abstract": [
"We study relay selection with the physical-layer network coding (PNC), which is a major decode-and-forward (DF)-based bidirectional protocol. We consider a network consists of two different end-sources and multiple relays, where each relay adopts XOR-encoding to combine two received symbols from the two end-sources. For this network, we propose a relay selection scheme by modifying the well-known selection cooperation (SC) to use in the PNC protocol, and it is referred to as SC-PNC. The SC-PNC consists of two phases: a multiple access channel (MAC) phase and a broadcast channel (BC) phase. In the MAC phase, a set of relays that correctly decode two received symbols from the two end-sources is determined; in the BC phase, among the relays in the determined set, a single best relay is selected such that the minimum mutual information of the two links from each relay to the two end-sources is maximized. Finally, we derive the exact outage probability of the SC-PNC in closed-form."
]
} |
1412.4679 | 2337878656 | We introduce Bayesian multi-tensor factorization, a model that is the first Bayesian formulation for joint factorization of multiple matrices and tensors. The research problem generalizes the joint matrix---tensor factorization problem to arbitrary sets of tensors of any depth, including matrices, can be interpreted as unsupervised multi-view learning from multiple data tensors, and can be generalized to relax the usual trilinear tensor factorization assumptions. The result is a factorization of the set of tensors into factors shared by any subsets of the tensors, and factors private to individual tensors. We demonstrate the performance against existing baselines in multiple tensor factorization tasks in structural toxicogenomics and functional neuroimaging. | When all the tensors have @math and are paired in the first mode, our framework reduces to the group factor analysis (GFA) problem presented by @cite_14 . GFA has been generalized to allow pairings between arbitrary data modes under the name collective matrix factorization (CMF) , which the formulation in generalizes to tensors. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2950903552"
],
"abstract": [
"We introduce a factor analysis model that summarizes the dependencies between observed variable groups, instead of dependencies between individual variables as standard factor analysis does. A group may correspond to one view of the same set of objects, one of many data sets tied by co-occurrence, or a set of alternative variables collected from statistics tables to measure one property of interest. We show that by assuming group-wise sparse factors, active in a subset of the sets, the variation can be decomposed into factors explaining relationships between the sets and factors explaining away set-specific variation. We formulate the assumptions in a Bayesian model which provides the factors, and apply the model to two data analysis tasks, in neuroimaging and chemical systems biology."
]
} |
1412.4709 | 2949929148 | We give an asymptotic approximation scheme (APTAS) for the problem of packing a set of circles into a minimum number of unit square bins. To obtain rational solutions, we use augmented bins of height @math , for some arbitrarily small number @math . Our algorithm is polynomial on @math , and thus @math is part of the problem input. For the special case that @math is constant, we give a (one dimensional) resource augmentation scheme, that is, we obtain a packing into bins of unit width and height @math using no more than the number of bins in an optimal packing. Additionally, we obtain an APTAS for the circle strip packing problem, whose goal is to pack a set of circles into a strip of unit width and minimum height. These are the first approximation and resource augmentation schemes for these problems. Our algorithm is based on novel ideas of iteratively separating small and large items, and may be extended to a wide range of packing problems that satisfy certain conditions. These extensions comprise problems with different kinds of items, such as regular polygons, or with bins of different shapes, such as circles and spheres. As an example, we obtain APTAS's for the problems of packing d-dimensional spheres into hypercubes under the @math -norm. | The first approximation algorithm for the rectangle strip packing problem was proposed by Baker et al. @cite_25 , who presented the so-called BL (Bottom-Leftmost) algorithm and showed that it has approximation ratio @math . For the special case when all items are squares, the approximation ratio of the BL algorithm is at most @math . Coffman et al. @cite_17 presented three algorithms, denoted by NFDH (Next Fit Decreasing Height), FFDH (First Fit Decreasing Height), and SF (Split Fit), with asymptotic approximation ratios of @math , @math , and @math , respectively.
They also showed that, when the items are squares, FFDH has an asymptotic approximation ratio of @math , and, when all items have width at most @math , the algorithms FFDH and SFFDH have asymptotic approximation ratios of @math and @math , respectively. The best known ratio for the problem was obtained by Kenyon and Rémila @cite_19 , who presented an asymptotic approximation scheme. Considering non-asymptotic approximation algorithms, Sleator @cite_24 presented an algorithm with approximation ratio @math . This result was improved to @math independently by Schiermeyer @cite_27 and Steinberg @cite_7 , then to @math by Harren and van Stee @cite_26 , and finally to @math by Harren @cite_10 . | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"1586863324",
"2059422698",
"2071027833",
"2060963575",
"",
"2098666887",
"2019318803",
""
],
"abstract": [
"We consider the two-dimensional bin packing and strip packing problem, where a list of rectangles has to be packed into a minimal number of rectangular bins or a strip of minimal height, respectively. All packings have to be non-overlapping and orthogonal, i.e., axis-parallel. Our algorithm for strip packing has an absolute approximation ratio of 1.9396 and is the first algorithm to break the approximation ratio of 2 which was established more than a decade ago. Moreover, we present a polynomial time approximation scheme ( @math ) for strip packing where rotations by 90 degrees are permitted and an algorithm for two-dimensional bin packing with an absolute worst-case ratio of 2, which is optimal provided @math .",
"This paper proposes a new approximation algorithm @math for packing rectangles into a strip with unit width and unbounded height so as to minimize the total height of the packing. It is shown that for any list @math of rectangles, @math , where @math is the strip height actually used by the algorithm @math when applied to @math and OPT @math is the minimum possible height within which the rectangles in @math can be packed.",
"",
"We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of fixed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm, based on a new linear-programming relaxation, finds a packing of n rectangles whose total height is within a factor of (1 + e) of optimal (up to an additive term), and has running time polynomial both in n and in 1/e.",
"",
"We study strip packing, which is one of the most classical two-dimensional packing problems: given a collection of rectangles, the problem is to find a feasible orthogonal packing without rotations into a strip of width 1 and minimum height. In this paper we present an approximation algorithm for the strip packing problem with absolute approximation ratio of 5/3 + @e for any @e > 0. This result significantly narrows the gap between the best known upper bound and the lower bound of 3/2; previously, the best upper bound was 1.9396 due to Harren and van Stee.",
"We consider problems of packing an arbitrary collection of rectangular pieces into an open-ended, rectangular bin so as to minimize the height achieved by any piece. This problem has numerous applications in operations research and studies of computer operation. We devise efficient approximation algorithms, study their limitations, and derive worst-case bounds on the performance of the packings they produce.",
""
]
} |
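The shelf-based heuristics cited above (NFDH and its relatives) are simple enough to sketch. Below is an illustrative Python sketch of Next Fit Decreasing Height; the function name and return format are my own, not taken from the cited papers, and it assumes each item width is at most the strip width.

```python
def nfdh_strip_pack(rects, strip_width=1.0):
    """Next Fit Decreasing Height: sort rectangles by decreasing height,
    then fill shelves left to right, opening a new shelf whenever the
    current one overflows.  Returns (total height, placements), where each
    placement is (x, y, w, h) with y the shelf bottom."""
    rects = sorted(rects, key=lambda wh: wh[1], reverse=True)  # by height, descending
    placements = []
    shelf_y = 0.0   # bottom of the current shelf
    shelf_h = 0.0   # height of the current shelf (= its first, tallest rectangle)
    cur_x = 0.0     # next free x position on the current shelf
    for w, h in rects:
        if cur_x + w > strip_width:   # shelf full: open a new one above it
            shelf_y += shelf_h
            shelf_h = 0.0
            cur_x = 0.0
        if shelf_h == 0.0:
            shelf_h = h               # first rectangle fixes the shelf height
        placements.append((cur_x, shelf_y, w, h))
        cur_x += w
    return shelf_y + shelf_h, placements
```

Because items are placed in height order, every rectangle fits vertically inside its shelf; the asymptotic ratio analysis in the cited work bounds the wasted area per shelf.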
1412.4709 | 2949929148 | We give an asymptotic approximation scheme (APTAS) for the problem of packing a set of circles into a minimum number of unit square bins. To obtain rational solutions, we use augmented bins of height @math , for some arbitrarily small number @math . Our algorithm is polynomial on @math , and thus @math is part of the problem input. For the special case that @math is constant, we give a (one dimensional) resource augmentation scheme, that is, we obtain a packing into bins of unit width and height @math using no more than the number of bins in an optimal packing. Additionally, we obtain an APTAS for the circle strip packing problem, whose goal is to pack a set of circles into a strip of unit width and minimum height. These are the first approximation and resource augmentation schemes for these problems. Our algorithm is based on novel ideas of iteratively separating small and large items, and may be extended to a wide range of packing problems that satisfy certain conditions. These extensions comprise problems with different kinds of items, such as regular polygons, or with bins of different shapes, such as circles and spheres. As an example, we obtain APTAS's for the problems of packing d-dimensional spheres into hypercubes under the @math -norm. | For the 3-dimensional strip packing problem, whose items are boxes, Li and Cheng @cite_28 were the first to present an asymptotic @math -approximation algorithm. Their algorithm was shown to have approximation ratio @math @cite_33 , @math @cite_12 and finally @math @cite_30 . Bansal @cite_23 showed that there is no asymptotic approximation scheme for the rectangle bin packing problem, which implies that there is no APTAS for the 3-dimensional strip packing problem. When the items are cubes, the first specialized algorithm was shown to have asymptotic ratio of @math @cite_28 , and the best result is an asymptotic approximation scheme due to Bansal @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_28",
"@cite_23",
"@cite_12"
],
"mid": [
"1966556036",
"1972902146",
"",
"2067823599",
"2013696768"
],
"abstract": [
"In the three-dimensional (3D) strip packing problem, we are given a set of 3D rectangular items and a 3D box @math . The goal is to pack all the items in @math such that the height of the packing is minimized. We consider the most basic version of the problem, where the items must be packed with their edges parallel to the edges of @math and cannot be rotated. Building upon Caprara's work for the two-dimensional (2D) bin packing problem, we obtain an algorithm that, given any @math , achieves an approximation of @math , where @math is the well-known number that occurs naturally in the context of bin packing. Our key idea is to establish a connection between bin packing solutions for an arbitrary instance @math and the strip packing solutions for the corresponding instance obtained from @math by applying the harmonic transformation to certain dimensions. Based on this connection, we also give a simple alternate proof of the @math approximation for 2D ...",
"The three-dimensional packing problem can be stated as follows. Given a list of boxes, each with a given length, width, and height, the problem is to pack these boxes into a rectangular box of fixed-size bottom and unbounded height, so that the height of this packing is minimized. The boxes have to be packed orthogonally and oriented in all three dimensions. We present an approximation algorithm for this problem and show that its asymptotic performance bound is between 2.5 and 2.67. This result answers a question raised by Li and Cheng [5] about the existence of an algorithm for this problem with an asymptotic performance bound less than 2.89.",
"",
"We study the following packing problem: Given a collection of d-dimensional rectangles of specified sizes, pack them into the minimum number of unit cubes. We show that unlike the one-dimensional case, the two-dimensional packing problem cannot have an asymptotic polynomial time approximation scheme (APTAS), unless P = NP. On the positive side, we give an APTAS for the special case of packing d-dimensional cubes into the minimum number of unit cubes. Second, we give a polynomial time algorithm for packing arbitrary two-dimensional rectangles into at most OPT square bins with sides of length 1 , where OPT denotes the minimum number of unit bins required to pack these rectangles. Interestingly, this result has no additive constant term, i.e., is not an asymptotic result. As a corollary, we obtain the first approximation scheme for the problem of placing a collection of rectangles in a minimum-area encasing rectangle.",
"We present an asymptotic (2 + e)-approximation algorithm for the 3D-strip packing problem, for any e > 0. In the 3D-strip packing problem the input is a set L = {b_1, b_2, ..., b_n} of 3-dimensional boxes. Each box b_i has width, length, and height at most 1. The problem is to pack the boxes into a 3-dimensional bin B of width 1, length 1 and minimum height, so that the boxes do not overlap. We consider here only orthogonal packings without rotations; this means that the boxes are packed so that their faces are parallel to the faces of the bin, and rotations are not allowed. This algorithm improves on the previously best algorithm of Miyazawa and Wakabayashi which has asymptotic performance ratio of 2.64. Our algorithm can be easily modified to a (4 + e)-approximation algorithm for the 3D-bin packing problem."
]
} |
1412.3721 | 1501602439 | We study budget constrained network upgradeable problems. We are given an undirected edge weighted graph @math where the weight of an edge @math can be upgraded for a cost @math . Given a budget @math for improvement, the goal is to find a subset of edges to be upgraded so that the resulting network is optimum for @math . The results obtained in this paper include the following. Maximum Weight Constrained Spanning Tree: We present a randomized algorithm for the problem of weight upgradeable budget constrained maximum spanning tree on a general graph. This returns a spanning tree @math which is feasible within the budget @math , such that @math (where @math and @math denote the length and cost of the tree respectively), for any fixed @math , in time polynomial in @math , @math . Our results extend to the minimization version also. Previously Krumke et al. presented a @math bicriteria approximation algorithm for any fixed @math for this problem in general graphs for a more general cost upgrade function. The result in this paper improves on theirs in the 0/1 cost upgrade model. Longest Path in a DAG: We consider the problem of weight improvable longest path in a @math vertex DAG and give a @math algorithm for the problem when there is a bound on the number of improvements allowed. We also give a @math -approximation which runs in @math time for the budget constrained version. Similar results can be achieved also for the problem of shortest paths in a DAG. | More examples and applications of computational problems in the improvable framework can be found in @cite_14 . Goerigk, Sabharwal, Schöbel and Sen @cite_14 considered the weight-reducible knapsack problem, for which they gave a polynomial-time 3-approximation and an FPTAS for the special case of uniform improvement costs. The problem of budget constrained network improvable spanning tree has been proved to be NP-hard, even for series-parallel graphs, by Krumke et al.
@cite_6 , which also cites several practical applications. Frederickson and Solis-Oba @cite_15 considered the problem of increasing the weight of the minimum spanning tree in a graph subject to a budget constraint, where the cost functions increase linearly with the weights. Berman et al. @cite_12 consider the problem of shortening edges in a given tree to minimize its shortest path tree weight. In contrast to most problems in the network upgradation model, this problem was shown to be solvable in strongly polynomial time. Phillips @cite_5 studied the problem of finding an optimal strategy for reducing the capacity of the network so that the residual capacity in the modified network is minimized. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_12"
],
"mid": [
"2098281589",
"1548716989",
"1986538507",
"2076631145",
"2013990445"
],
"abstract": [
"We consider the weight-reducible knapsack problem, where we are given a limited budget that can be used to decrease item weights, and we would like to optimize the knapsack objective value using such weight improvements.",
"We consider the problem of reducing the edge lengths of a given network so that the modified network has a spanning tree of small total length. It is assumed that each edge e of the given network has an associated function C_e that specifies the cost of shortening the edge by a given amount and that there is a budget B on the total reduction cost. The goal is to develop a reduction strategy satisfying the budget constraint so that the total length of a minimum spanning tree in the modified network is the smallest possible over all reduction strategies that obey the budget constraint. We show that in general the problem of computing an optimal reduction strategy for modifying the network as above is NP-hard and present the first polynomial time approximation algorithms for the problem, where the cost functions C_e are allowed to be taken from a broad class of functions. We also present improved approximation algorithms for the class of treewidth-bounded graphs when the cost functions are linear. Our results can be extended to obtain approximation algorithms for more general network design problems such as those considered in [GW, GG+94].",
"",
"Given an undirected connected graph G and a cost function for increasing edge weights, the problem of determining the maximum increase in the weight of the minimum spanning trees of G subject to a budget constraint is investigated. Two versions of the problem are considered. In the first, each edge has a cost function that is linear in the weight increase. An algorithm is presented that solves this problem in strongly polynomial time. In the second version, the edge weights are fixed but an edge can be removed from G at a unit cost. This version is shown to be NP-hard. An Ω(1/log k)-approximation algorithm is presented for it, where k is the number of edges to be removed.",
"In this paper, we consider the problem of how the transportation network can be modified most efficiently in order to improve the known location of the facilities. The performance of the facilities is measured by the “minisum” objective. We examine in the paper two types of network modifications: reductions and additions of links. We analyze various reduction and addition problems for both trees and general networks. For trees, we present exact results and algorithms for the majority of problems studied. For general networks, we discuss mainly heuristics."
]
} |
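To make the 0/1 weight-upgrade model above concrete, here is a brute-force solver for budget-constrained upgradeable maximum spanning tree on tiny instances. This is purely illustrative of the problem definition, not the paper's randomized algorithm; the function names and the edge encoding `(u, v, weight, upgraded_weight, cost)` are my own.

```python
from itertools import combinations

def kruskal_max(n, edges):
    """Maximum spanning tree weight via Kruskal; edges are (u, v, w)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):  # heaviest first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def best_upgrade(n, edges, budget):
    """Try every subset of edges to upgrade (0/1 cost model): upgrading
    edge i raises its weight w -> w_up at cost c; return the best maximum
    spanning tree weight achievable within the budget.  Exponential time,
    so only sensible for toy instances."""
    best = kruskal_max(n, [(u, v, w) for u, v, w, w_up, c in edges])
    m = len(edges)
    for r in range(1, m + 1):
        for S in combinations(range(m), r):
            if sum(edges[i][4] for i in S) > budget:
                continue  # subset too expensive
            mod = [(u, v, (w_up if i in S else w))
                   for i, (u, v, w, w_up, c) in enumerate(edges)]
            best = max(best, kruskal_max(n, mod))
    return best
```

On a triangle where upgrading the cheapest edge makes it the heaviest, the solver shows how a small budget changes the optimal tree.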
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | Recent, highly accurate window classifiers like high-dimensional Bag-of-Words @cite_13 or CNN @cite_48 @cite_58 @cite_63 are too expensive to evaluate in a sliding window fashion. For this reason, recent detectors @cite_57 @cite_44 @cite_13 @cite_39 evaluate only a few thousands windows produced by object proposals generators @cite_18 @cite_30 @cite_13 . The state-of-the-art detector @cite_44 follows this approach, using CNN features @cite_4 with Selective Search proposals @cite_13 . Although proposals already reduce the number of window classifier evaluations, our work brings even further reductions. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_48",
"@cite_39",
"@cite_57",
"@cite_44",
"@cite_63",
"@cite_58",
"@cite_13"
],
"mid": [
"2121660792",
"2066624635",
"",
"2953360861",
"2110226160",
"",
"2102605133",
"",
"",
"2088049833"
],
"abstract": [
"Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's superpixels, with weights modelling the probability that neighbouring superpixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to superpixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.",
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Generic object detection is confronted by dealing with different degrees of variations in distinct object classes with tractable computations, which demands descriptive and flexible object representations that are also efficient to evaluate for many locations. In view of this, we propose to model an object class by a cascaded boosting classifier which integrates various types of features from competing local regions, named regionlets. A regionlet is a base feature extraction region defined proportionally to a detection window at an arbitrary resolution (i.e. size and aspect ratio). These regionlets are organized in small groups with stable relative positions to delineate fine grained spatial layouts inside objects. Their features are aggregated to a one-dimensional feature within one group so as to tolerate deformations. Then we evaluate the object bounding box proposal in selective search from segmentation cues, limiting the evaluation locations to thousands. Our approach significantly outperforms the state-of-the-art on popular multi-class detection benchmark datasets with a single method, without any contexts. It achieves the detection mean average precision of 41.7 on the PASCAL VOC 2007 dataset and 39.7 on the VOC 2010 for 20 object categories. It achieves 14.7 mean average precision on the ImageNet dataset for 200 object categories, outperforming the latest deformable part-based model (DPM) by 4.7 .",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
} |
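To make concrete why proposal generators reduce window classifier evaluations, here is a rough sketch of the dense sliding-window grid that exhaustive detectors would score; a few thousand proposals replace a set like this. The parameter defaults (minimum size, stride, scale step) are illustrative choices, not taken from any cited work.

```python
def sliding_windows(img_w, img_h, min_size=16, stride=8, scale=1.5):
    """Enumerate a dense grid of square sliding windows over an image,
    stepping position by `stride` pixels and size by factor `scale`.
    Returns a list of (x, y, w, h) tuples."""
    wins = []
    size = min_size
    while size <= min(img_w, img_h):
        w = h = int(size)
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                wins.append((x, y, w, h))
        size *= scale  # next coarser scale
    return wins
```

Even a small 64x64 image already yields over a hundred windows at this coarse stride; at full image resolution and finer strides the count reaches hundreds of thousands, which is what makes expensive per-window classifiers impractical without proposals or an active search strategy.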
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | Some works reduce the number of window classifier evaluations. @cite_31 use a branch-and-bound scheme to efficiently find the maximum of the classifier over all windows. However, it is limited to classifiers for which tight bounds on the highest score in a subset of windows can be derived. @cite_10 extend @cite_31 to some more classifiers. @cite_5 avoid exhaustive evaluation for face detection by using a hierarchical model and pruning heuristics. | {
"cite_N": [
"@cite_5",
"@cite_31",
"@cite_10"
],
"mid": [
"2147713217",
"2113201641",
""
],
"abstract": [
"We provide a novel search technique which uses a hierarchical model and a mutual information gain heuristic to efficiently prune the search space when localizing faces in images. We show exponential gains in computation over traditional sliding window approaches, while keeping similar performance levels.",
"Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.",
""
]
} |
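The branch-and-bound scheme of @cite_31 can be sketched for the simplest scoring function it supports: a per-pixel additive score, where a tight upper bound over a *set* of windows is easy to derive (positive contributions summed over the largest rectangle in the set, negative ones over the smallest). This is an illustrative reimplementation; the function name and the interval-set encoding are my own, and the original work handles richer classifiers.

```python
import heapq

def ess_best_window(score):
    """Best-first branch-and-bound (Efficient Subwindow Search style) for the
    rectangle maximizing a per-pixel additive score.  A set of rectangles is
    encoded by interval bounds (t_lo, t_hi, b_lo, b_hi, l_lo, l_hi, r_lo, r_hi)
    for its top/bottom/left/right coordinates."""
    H, W = len(score), len(score[0])

    def integral(f):
        # (H+1) x (W+1) summed-area table of f applied to each pixel
        ii = [[0.0] * (W + 1) for _ in range(H + 1)]
        for y in range(H):
            for x in range(W):
                ii[y + 1][x + 1] = (f(score[y][x]) + ii[y][x + 1]
                                    + ii[y + 1][x] - ii[y][x])
        return ii

    pos = integral(lambda v: max(v, 0.0))   # positive part of the score
    neg = integral(lambda v: min(v, 0.0))   # negative part of the score

    def box_sum(ii, t, b, l, r):            # rows t..b, cols l..r; empty -> 0
        if t > b or l > r:
            return 0.0
        return ii[b + 1][r + 1] - ii[t][r + 1] - ii[b + 1][l] + ii[t][l]

    def bound(t_lo, t_hi, b_lo, b_hi, l_lo, l_hi, r_lo, r_hi):
        # admissible bound: positives over the largest rectangle in the set
        # plus negatives over the smallest (the intersection of all of them)
        return (box_sum(pos, t_lo, b_hi, l_lo, r_hi)
                + box_sum(neg, t_hi, b_lo, l_hi, r_lo))

    root = (0, H - 1, 0, H - 1, 0, W - 1, 0, W - 1)
    heap = [(-bound(*root), root)]
    while heap:
        ub, box = heapq.heappop(heap)
        if box[0] == box[1] and box[2] == box[3] and box[4] == box[5] and box[6] == box[7]:
            # singleton set: bound is exact, so this window is globally optimal
            return -ub, (box[0], box[2], box[4], box[6])  # (score, (top, bottom, left, right))
        widths = [box[2 * k + 1] - box[2 * k] for k in range(4)]
        k = widths.index(max(widths))       # split the widest interval in half
        lo, hi = box[2 * k], box[2 * k + 1]
        mid = (lo + hi) // 2
        for new_lo, new_hi in ((lo, mid), (mid + 1, hi)):
            child = list(box)
            child[2 * k], child[2 * k + 1] = new_lo, new_hi
            if child[0] <= child[3] and child[4] <= child[7]:  # contains a valid rect
                heapq.heappush(heap, (-bound(*child), tuple(child)))
```

Because the bound is exact on singleton sets and never underestimates any rectangle in a set, the first singleton popped from the priority queue is a global maximum, typically after exploring far fewer sets than there are windows.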
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | An alternative approach is to reduce the cost of evaluating the classifier on a window. For example, @cite_28 @cite_20 first run a linear classifier over all windows and then evaluate a complex non-linear kernel only on a few highly scored windows. Several techniques are specific to certain types of window classifiers and achieve a speedup by exploiting their internal structure (e.g. DPM @cite_45 @cite_46 @cite_14 , CNN-based @cite_34 , additive scoring functions @cite_38 , cascaded boosting on Haar features @cite_7 @cite_21 ). Our work instead can be used with any window classifier as it treats it as a black-box. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_45",
"@cite_46",
"@cite_34",
"@cite_20"
],
"mid": [
"2036745441",
"114421296",
"2101217650",
"",
"2164598857",
"2058943444",
"",
"2179352600",
"2538008885"
],
"abstract": [
"Many object detectors, such as AdaBoost, SVM and deformable part-based models (DPM), compute additive scoring functions at a large number of windows scanned over an image pyramid, thus computational efficiency is an important consideration besides accuracy. In this paper, we present a framework for learning a cost-sensitive decision policy which is a sequence of two-sided thresholds to execute early rejection or early acceptance based on the accumulative scores at each step. A decision policy is said to be optimal if it minimizes an empirical global risk function that sums over the loss of false negatives (FN) and false positives (FP), and the cost of computation. While the risk function is very complex due to high-order connections among the two-sided thresholds, we find its upper bound can be optimized by dynamic programming (DP) efficiently and thus say the learned policy is near-optimal. Given the loss of FN and FP and the cost in three numbers, our method can produce a policy on-the-fly for AdaBoost, SVM and DPM. In experiments, we show that our decision policy outperforms state-of-the-art cascade methods significantly in terms of speed with similar accuracy performance.",
"This paper presents an active approach for part-based object detection, which optimizes the order of part filter evaluations and the time at which to stop and make a prediction. Statistics, describing the part responses, are learned from training data and are used to formalize the part scheduling problem as an offline optimization. Dynamic programming is applied to obtain a policy, which balances the number of part evaluations with the classification accuracy. During inference, the policy is used as a look-up table to choose the part order and the stopping time based on the observed filter responses. The method is faster than cascade detection with deformable part models (which does not optimize the part order) with negligible loss in accuracy when evaluated on the PASCAL VOC 2007 and 2010 datasets.",
"The problem of learning classifier cascades is considered. A new cascade boosting algorithm, fast cascade boosting (FCBoost), is proposed. FCBoost is shown to have a number of interesting properties, namely that it 1) minimizes a Lagrangian risk that jointly accounts for classification accuracy and speed, 2) generalizes adaboost, 3) can be made cost-sensitive to support the design of high detection rate cascades, and 4) is compatible with many predictor structures suitable for sequential decision making. It is shown that a rich family of such structures can be derived recursively from cascade predictors of two stages, denoted cascade generators. Generators are then proposed for two new cascade families, last-stage and multiplicative cascades, that generalize the two most popular cascade architectures in the literature. The concept of neutral predictors is finally introduced, enabling FCBoost to automatically determine the cascade configuration, i.e., number of stages and number of weak learners per stage, for the learned cascades. Experiments on face and pedestrian detection show that the resulting cascades outperform current state-of-the-art methods in both detection accuracy and speed.",
"",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.",
"We describe a state-of-the-art system for finding objects in cluttered images. Our system is based on deformable models that represent objects using local part templates and geometric constraints on the locations of parts. We reduce object detection to classification with latent variables. The latent variables introduce invariances that make it possible to detect objects with highly variable appearance. We use a generalization of support vector machines to incorporate latent information during training. This has led to a general framework for discriminative training of classifiers with latent variables. Discriminative training benefits from large training datasets. In practice we use an iterative algorithm that alternates between estimating latent values for positive examples and solving a large convex optimization problem. Practical optimization of this large convex problem can be done using active set techniques for adaptive subsampling of the training data.",
"",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Our objective is to obtain a state-of-the art object category detector by employing a state-of-the-art image classifier to search for the object in all possible image sub-windows. We use multiple kernel learning of Varma and Ray (ICCV 2007) to learn an optimal combination of exponential χ2 kernels, each of which captures a different feature channel. Our features include the distribution of edges, dense and sparse visual words, and feature descriptors at different levels of spatial organization."
]
} |
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | A few works develop techniques that make sequential fixations inspired by human perception for tracking in video @cite_19 , image classification @cite_40 @cite_41 @cite_53 and face detection @cite_54 @cite_15 . However, they only use the score of a (foveated) window classifier, not exploiting the valuable information given by context. Moreover, they experiment on simple datasets, far less challenging than SUN2012 @cite_9 (MNIST digits, faces). | {
"cite_N": [
"@cite_41",
"@cite_54",
"@cite_53",
"@cite_9",
"@cite_19",
"@cite_40",
"@cite_15"
],
"mid": [
"2141399712",
"2102179764",
"2951527505",
"2017814585",
"2183231851",
"2154071538",
"2950181755"
],
"abstract": [
"We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.",
"Recent years have seen the development of fast and accurate algorithms for detecting objects in images. However, as the size of the scene grows, so do the running-times of these algorithms. If a 128×102 pixel image requires 20 ms to process, searching for objects in a 1280×1024 image will take 2 s. This is unsuitable under real-time operating constraints: by the time a frame has been processed, the object may have moved. An analogous problem occurs when controlling robot camera that need to scan scenes in search of target objects. In this paper, we consider a method for improving the run-time of general-purpose object-detection algorithms. Our method is based on a model of visual search in humans, which schedules eye fixations to maximize the long-term information accrued about the location of the target of interest. The approach can be used to drive robot cameras that physically scan scenes or to improve the scanning speed for very large high resolution images. We consider the latter application in this work by simulating a “digital fovea” and sequentially placing it in various regions of an image in a way that maximizes the expected information gain. We evaluate the approach using the OpenCV version of the Viola-Jones face detector. After accounting for all computational overhead introduced by the fixation controller, the approach doubles the speed of the standard Viola-Jones detector at little cost in accuracy.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.",
"We propose a novel attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of the human perceptual system, the model consists of two interacting pathways: ventral and dorsal. The ventral pathway models object appearance and classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of retinal images, with decaying resolution toward the periphery of the gaze. The dorsal pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the dorsal pathway, we encounter an attentional mechanism that learns to control gazes so as to minimize tracking uncertainty. The approach is modular (with each module easily replaceable with more sophisticated algorithms), straightforward to implement, practically efficient, and works well in simple video sequences.",
"We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.",
"Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by the visual attention models in computational neuroscience and the need of object-centric data for generative models, we describe for generative learning framework using attentional mechanisms. Attentional mechanisms can propagate signals from region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model where the 2D Similarity transformation is a part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images where the face locations are not known."
]
} |
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | Many works use context as an additional cue on top of object detectors, complementing the information provided by the window classifier, but without altering the search process. Several works @cite_12 @cite_28 @cite_11 @cite_16 @cite_13 predict the presence of object classes based on global image descriptors, and use it to remove out-of-context false-positive detections. The response of detectors for multiple object classes also provides context, as it enables to reason about co-occurrence @cite_59 and spatial relations between classes @cite_22 @cite_47 @cite_35 @cite_8 . 
Other works incorporate regions outside the object into the window classifier @cite_37 @cite_1 @cite_62 . @cite_13 @cite_60 analyze several context sources and their impact on object detection. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_62",
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_60",
"@cite_1",
"@cite_59",
"@cite_47",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2160254296",
"1989684337",
"2125215748",
"1982522767",
"",
"",
"2141364309",
"2143729633",
"2081293863",
"",
"2166761907",
"2088049833",
"2168356304",
"2098355199"
],
"abstract": [
"In this work we introduce a novel approach to object categorization that incorporates two types of context-co-occurrence and relative location - with local appearance-based features. Our approach, named CoLA (for co-occurrence, location and appearance), uses a conditional random field (CRF) to maximize object label agreement according to both semantic and spatial relevance. We model relative location between objects using simple pairwise features. By vector quantizing this feature space, we learn a small set of prototypical spatial relationships directly from the data. We evaluate our results on two challenging datasets: PASCAL 2007 and MSRC. The results show that combining co-occurrence and spatial context improves accuracy in as many as half of the categories compared to using co-occurrence alone.",
"This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of , at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.",
"In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of exist ing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales.",
"There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. Context models can efficiently rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit from using context models has been limited because most of these methods were tested on datasets with only a few object categories, in which most images contain only one or two object categories. In this paper, we introduce a new dataset with images that contain many instances of different object categories and propose an efficient model that captures the contextual information among more than a hundred of object categories. We show that our context model can be applied to scene understanding tasks that local detectors alone cannot solve.",
"",
"",
"This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support.",
"Existing approaches to contextual reasoning for enhanced object detection typically utilize other labeled categories in the images to provide contextual information. As a consequence, they inadvertently commit to the granularity of information implicit in the labels. Moreover, large portions of the images may not belong to any of the manually-chosen categories, and these unlabeled regions are typically neglected. In this paper, we overcome both these drawbacks and propose a contextual cue that exploits unlabeled regions in images. Our approach adaptively determines the granularity (scene, inter-object, intra-object, etc.) at which contextual information is captured. In order to extract the proposed contextual cue, we consider a scene to be a structured configuration of objects and regions; just as an object is a composition of parts. We thus learn our proposed “contextual meta-objects” using any off-the-shelf object detector, which makes our proposed cue widely accessible to the community. Our results show that incorporating our proposed cue provides a relative improvement of 12 over a state-of-the-art object detector on the challenging PASCAL dataset.",
"In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.",
"",
"There is general consensus that context can be a rich source of information about an object's identity, location and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. Here we introduce a simple framework for modeling the relationship between context and object properties based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context driven focus of attention and automatic scale-selection on real-world scenes.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification."
]
} |
1412.3709 | 2951396225 | Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows. | The most related work to ours is @cite_42 , which proposes a search strategy driven by context. Here we go beyond in several ways: (1) They used context in an inefficient way, involving a nearest-neighbour search over all windows in all training images. This caused a large overhead that compromised the actual wall-clock speedup they made over evaluating all windows in the test image. In contrast, we present a very efficient technique based on Random Forests, which has little overhead (sec. ). (2) While @cite_42 uses only context, we guide the search also by the classifier score, and learn an optimal combination of the two forces (sec. ). 
(3) They perform single-view and single-instance detection, whereas we detect multiple views and multiple instances in the same image. (4) We adopt the state-of-the-art R-CNN @cite_44 as the reference detector and compare to it, as opposed to the weaker DPM detector @cite_12 . (5) While @cite_42 performs experiments only on PASCAL VOC10, we also use SUN2012 @cite_9 , which has more cluttered images with smaller objects. | {
"cite_N": [
"@cite_44",
"@cite_9",
"@cite_42",
"@cite_12"
],
"mid": [
"2102605133",
"2017814585",
"2135440260",
"2168356304"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.",
"The dominant visual search paradigm for object class detection is sliding windows. Although simple and effective, it is also wasteful, unnatural and rigidly hardwired. We propose strategies to search for objects which intelligently explore the space of windows by making sequential observations at locations decided based on previous observations. Our strategies adapt to the class being searched and to the content of a particular test image, exploiting context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. In addition to being more elegant than sliding windows, we demonstrate experimentally on the PASCAL VOC 2010 dataset that our strategies evaluate two orders of magnitude fewer windows while achieving higher object detection performance.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
]
} |
1412.3714 | 2105745022 | This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate the significant improvement over standard neural models. | Distributed representations, calculated based on neural frameworks, are extended beyond token-level, to represent N-grams @cite_40 , phrases @cite_28 , sentences (e.g., @cite_25 @cite_3 ), discourse @cite_15 @cite_37 , paragraphs @cite_12 or documents @cite_30 . Recursive and recurrent @cite_17 @cite_1 models constitute two types of commonly used frameworks for sentence-level embedding acquisition. Different variations of recurrent/recursive models are proposed to cater for different scenarios (e.g., @cite_25 @cite_28 ). Other recently proposed approaches included sentence compositional approach proposed in @cite_16 , or paragraph/sentence vector @cite_12 where representations are optimized through predicting words within the sentence. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_28",
"@cite_1",
"@cite_3",
"@cite_40",
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2110951295",
"2250767751",
"2251939518",
"196214544",
"",
"71795751",
"2251293245",
"2120615054",
"1753823461",
"2949547296",
"2131774270"
],
"abstract": [
"Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval. We introduce a model that is able to represent the meaning of documents by embedding them in a low dimensional vector space, while preserving distinctions of word and sentence order crucial for capturing nuanced semantics. Our model is based on an extended Dynamic Convolution Neural Network, which learns convolution filters at both the sentence and document level, hierarchically learning to capture and compose low level lexical features into high level semantic concepts. We demonstrate the effectiveness of this model on a range of document modelling tasks, achieving strong results with no feature engineering and with a more compact model. Inspired by recent advances in visualising deep convolution networks for computer vision, we present a novel visualisation technique for our document networks which not only provides insight into their learning process, but also can be interpreted to produce a compelling automatic summarisation system for texts.",
"Text-level discourse parsing remains a challenge: most approaches employ features that fail to capture the intentional, semantic, and syntactic aspects that govern discourse coherence. In this paper, we propose a recursive model for discourse parsing that jointly models distributed representations for clauses, sentences, and entire discourses. The learned representations can, to some extent, capture the semantic and intentional import of words and larger discourse units automatically. The proposed framework obtains comparable performance on standard discourse parsing evaluations when compared against current state-of-the-art systems.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag-of-features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.",
"",
"We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features. In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing. By combining the machinery of large-margin transition-based structured prediction with representation learning, our method jointly learns to parse discourse while at the same time learning a discourse-driven projection of surface features. The resulting shift-reduce discourse parser obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank.",
"The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.",
"Recently, deep architectures, such as recurrent and recursive neural networks have been successfully applied to various natural language processing tasks. Inspired by bidirectional recurrent neural networks which use representations that summarize the past and future around an instance, we propose a novel architecture that aims to capture the structural information around an input, and use it to label instances. We apply our method to the task of opinion expression extraction, where we employ the binary parse tree of a sentence as the structure, and word vector representations as the initial representation of a single token. We conduct preliminary experiments to investigate its performance and compare it to the sequential approach.",
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.",
"In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported."
]
} |
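The recursive composition function discussed in the related-work passage above (building parent representations from child representations) can be sketched in a few lines. This is a minimal illustration of the standard recursive-NN composition parent = tanh(W[left; right] + b); the dimensions, random weights, and random word vectors are toy assumptions standing in for learned embeddings, not the WNN/BENN models themselves.

```python
import numpy as np

def compose(left, right, W, b):
    """Standard recursive-NN composition: parent = tanh(W [left; right] + b)."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

rng = np.random.default_rng(0)
d = 4                                   # toy embedding dimensionality
W = rng.normal(scale=0.1, size=(d, 2 * d))
b = np.zeros(d)

# Compose "very" + "good" into a phrase vector, then the phrase with "movie".
very, good, movie = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
phrase = compose(very, good, W, b)
sentence = compose(phrase, movie, W, b)
print(sentence.shape)  # parent vectors keep the same dimensionality as children
```

Because the parent has the same dimensionality as each child, the same `W` can be applied recursively up an arbitrary parse tree.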
1412.3714 | 2105745022 | This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate the significant improvement over standard neural models. | Both of the proposed architectures are in this work inspired by the long short-term memory (LSTM) model, first proposed by Hochreiter and Schmidhuber back in the 1990s @cite_9 @cite_34 to process time-sequence data where there are very long time lags of unknown size between important events (http://en.wikipedia.org/wiki/Long_short_term_memory). LSTM associates each time step with a series of "gates" that determine whether information from earlier in the sequence should be forgotten @cite_34 and when current information should be allowed to flow into or out of the memory. LSTM can partially address the gradient vanishing problem in recurrent neural models and has been widely used in machine translation @cite_35 @cite_6 | {
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_34",
"@cite_6"
],
"mid": [
"2136016850",
"",
"2136848157",
"2950635152"
],
"abstract": [
"This work presents two different translation models using recurrent neural networks. The first one is a word-based approach using word alignments. Second, we present phrase-based translation models that are more consistent with phrase-based decoding. Moreover, we introduce bidirectional recurrent neural models to the problem of machine translation, allowing us to use the full source sentence in our models, which is also of theoretical interest. We demonstrate that our translation models are capable of improving strong baselines already including recurrent neural language models on three tasks: IWSLT 2013 German→English, BOLT Arabic→English and Chinese→English. We obtain gains up to 1.6 BLEU and 1.7 TER by rescoring 1000-best lists.",
"",
"Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
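The gating mechanism described in the related-work passage above can be made concrete with a single LSTM step. The sketch below assumes the common formulation with sigmoid input, forget, and output gates and a tanh candidate cell; the weight shapes, random initialization, and toy sequence are illustrative assumptions, not any specific paper's parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4n, d), U: (4n, n), b: (4n,), where n is the
    hidden size. The pre-activation z is split into input gate i, forget
    gate f, output gate o, and candidate cell g."""
    z = W @ x + U @ h_prev + b
    n = h_prev.shape[0]
    i, f, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # forget old / admit new
    h_new = sigmoid(o) * np.tanh(c)                     # expose gated memory
    return h_new, c

rng = np.random.default_rng(1)
d, n = 3, 5
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
h_t, c_t = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(7, d)):   # run over a toy sequence of 7 inputs
    h_t, c_t = lstm_step(x, h_t, c_t, W, U, b)
print(h_t.shape)
```

The forget gate `f` is exactly the mechanism @cite_34 adds so the cell can reset its own state; when `sigmoid(f)` is near zero, the old cell content `c_prev` is dropped.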
1412.3161 | 50291828 | This paper proposes to go beyond the state-of-the-art deep convolutional neural network (CNN) by incorporating the information from object detection, focusing on dealing with fine-grained image classification. Unfortunately, CNN suffers from over-fitting when it is trained on existing fine-grained image classification benchmarks, which typically only consist of less than a few tens of thousands of training images. Therefore, we first construct a large-scale fine-grained car recognition dataset that consists of 333 car classes with more than 150 thousand training images. With this large-scale dataset, we are able to build a strong baseline for CNN with top-1 classification accuracy of 81.6%. One major challenge in fine-grained image classification is that many classes are very similar to each other while having large within-class variation. One contributing factor to the within-class variation is cluttered image background. However, the existing CNN training takes uniform window sampling over the image, acting blind to the location of the object of interest. In contrast, this paper proposes an object-centric sampling (OCS) scheme that samples image windows based on the object location information. The challenge in using the location information lies in how to design a powerful object detector and how to handle the imperfectness of detection results. To that end, we design a saliency-aware object detection approach specific to the setting of fine-grained image classification, and the uncertainty of detection results is naturally handled in our OCS scheme. Our framework is demonstrated to be very effective, improving top-1 accuracy to 89.3% (from 81.6%) on the large-scale fine-grained car classification dataset. | Fine-grained image classification has been an active research topic in recent years. Compared to base-class image classification, fine-grained image classification needs to distinguish many similar classes with only subtle differences among the classes.
There has been much work @cite_7 @cite_4 @cite_12 aiming at localizing salient parts of fine-grained classes. To ease the challenge, many of them even assume that the ground-truth bounding boxes of the objects of interest are given. This work is different in two aspects. First, rather than using ground-truth bounding boxes, we attempt to train a good object detector by proposing a saliency-aware detection approach based on the Regionlet framework. Second, we build a mechanism to handle imperfect detection results. | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_7"
],
"mid": [
"1977295328",
"2169501191",
"2083367367"
],
"abstract": [
"We investigate the fine grained object categorization problem of determining the breed of animal from an image. To this end we introduce a new annotated dataset of pets covering 37 different breeds of cats and dogs. The visual problem is very challenging as these animals, particularly cats, are very deformable and there can be quite subtle differences between the breeds. We make a number of contributions: first, we introduce a model to classify a pet breed automatically from an image. The model combines shape, captured by a deformable part model detecting the pet face, and appearance, captured by a bag-of-words model that describes the pet fur. Fitting the model involves automatically segmenting the animal in the image. Second, we compare two classification approaches: a hierarchical one, in which a pet is first assigned to the cat or dog family and then to a breed, and a flat one, in which the breed is obtained directly. We also investigate a number of animal and image orientated spatial layouts. These models are very good: they beat all previously published results on the challenging ASIRRA test (cat vs dog discrimination). When applied to the task of discriminating the 37 different breeds of pets, the models obtain an average accuracy of about 59%, a very encouraging result considering the difficulty of the problem.",
"Fine-grained recognition refers to a subordinate level of recognition, such as recognizing different species of animals and plants. It differs from recognition of basic categories, such as humans, tables, and computers, in that there are global similarities in shape and structure shared across different categories, and the differences are in the details of object parts. We suggest that the key to identifying the fine-grained differences lies in finding the right alignment of image regions that contain the same object parts. We propose a template model for the purpose, which captures common shape patterns of object parts, as well as the co-occurrence relation of the shape patterns. Once the image regions are aligned, extracted features are used for classification. Learning of the template model is efficient, and the recognition results we achieve significantly outperform the state-of-the-art algorithms.",
"The ability to normalize pose based on super-category landmarks can significantly improve models of individual categories when training data are limited. Previous methods have considered the use of volumetric or morphable models for faces and for certain classes of articulated objects. We consider methods which impose fewer representational assumptions on categories of interest, and exploit contemporary detection schemes which consider the ensemble of responses of detectors trained for specific posekeypoint configurations. We develop representations for poselet-based pose normalization using both explicit warping and implicit pooling as mechanisms. Our method defines a pose normalized similarity or kernel function that is suitable for nearest-neighbor or kernel-based learning methods."
]
} |
1412.3161 | 50291828 | This paper proposes to go beyond the state-of-the-art deep convolutional neural network (CNN) by incorporating the information from object detection, focusing on dealing with fine-grained image classification. Unfortunately, CNN suffers from over-fitting when it is trained on existing fine-grained image classification benchmarks, which typically only consist of less than a few tens of thousands of training images. Therefore, we first construct a large-scale fine-grained car recognition dataset that consists of 333 car classes with more than 150 thousand training images. With this large-scale dataset, we are able to build a strong baseline for CNN with top-1 classification accuracy of 81.6%. One major challenge in fine-grained image classification is that many classes are very similar to each other while having large within-class variation. One contributing factor to the within-class variation is cluttered image background. However, the existing CNN training takes uniform window sampling over the image, acting blind to the location of the object of interest. In contrast, this paper proposes an object-centric sampling (OCS) scheme that samples image windows based on the object location information. The challenge in using the location information lies in how to design a powerful object detector and how to handle the imperfectness of detection results. To that end, we design a saliency-aware object detection approach specific to the setting of fine-grained image classification, and the uncertainty of detection results is naturally handled in our OCS scheme. Our framework is demonstrated to be very effective, improving top-1 accuracy to 89.3% (from 81.6%) on the large-scale fine-grained car classification dataset. | There is a rich literature in object detection research. The deformable part model (DPM) @cite_19 has been a popular approach for generic object detection in recent years.
Recently, the regions-with-CNN (R-CNN) approach @cite_10 has achieved excellent performance on benchmark datasets. Both approaches require scaling images (so that the object fits into a fixed-size sliding window) or warping candidate bounding boxes (to the same size to be input into the CNN). Such treatments enable scale invariance. However, in the case of object detection for fine-grained image recognition, scale is an important saliency cue that we hope to exploit, as explained in more detail in Section. The Regionlet approach @cite_9 is a good choice because it operates on candidate bounding boxes proposed on the original images, and it has the capability to utilize scale as an important saliency cue. This work also makes some important modifications to the original Regionlet approach, namely saliency-aware object detection, which exploits the special property of the fine-grained image classification setting that the object of interest is always the most salient object in an image (e.g., not occluded, occupying a large portion of the image, etc.). | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10"
],
"mid": [
"2120419212",
"",
"2102605133"
],
"abstract": [
"This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn."
]
} |
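The related-work passage above contrasts detectors that warp or rescale candidate windows with the Regionlet approach, which keeps boxes at their original scale. Two primitives recur when handling candidate bounding boxes: intersection-over-union (IoU) for comparing a detection against a reference box, and, as a simple illustrative stand-in for a scale cue (an assumption for this sketch, not the paper's actual feature), the fraction of the image a box covers.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def scale_cue(box, img_w, img_h):
    """Fraction of the image covered by the box -- one simple 'scale
    saliency' signal (illustrative assumption)."""
    return ((box[2] - box[0]) * (box[3] - box[1])) / (img_w * img_h)

gt, det = (10, 10, 110, 110), (20, 20, 120, 120)
print(round(iou(gt, det), 3))     # 0.681: overlap of the two 100x100 boxes
print(scale_cue(det, 200, 200))   # 0.25: box covers a quarter of the image
```

A detection covering a large image fraction with high IoU against a salient region is exactly the kind of candidate the saliency-aware setting favors.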
1412.2723 | 2951515971 | Signed network analysis has attracted increasing attention in recent years. This is in part because research on signed network analysis suggests that negative links have added value in the analytical process. A major impediment in their effective use is that most social media sites do not enable users to specify them explicitly. In other words, a gap exists between the importance of negative links and their availability in real data sets. Therefore, it is natural to explore whether one can predict negative links automatically from the commonly available social network data. In this paper, we investigate the novel problem of negative link prediction with only positive links and content-centric interactions in social media. We make a number of important observations about negative links, and propose a principled framework NeLP, which can exploit positive links and content-centric interactions to predict negative links. Our experimental results on real-world social networks demonstrate that the proposed NeLP framework can accurately predict negative links with positive links and content-centric interactions. Our detailed experiments also illustrate the relative importance of various factors to the effectiveness of the proposed framework. | Positive link prediction infers new positive links in the near future based on a snapshot of a positive network. Existing methods can be roughly divided into unsupervised methods and supervised methods. Unsupervised methods are usually based on the topological structure of the given positive network. In @cite_12 , several unsupervised link prediction algorithms are proposed, such as Katz, Jaccard's coefficient and Adamic/Adar. In @cite_24 , several unsupervised algorithms based on low-rank matrix factorization are proposed. There are usually two steps for supervised methods. First, they extract features from available sources to represent each pair of users and consider the existence of positive links as labels.
Second, they train a binary classifier based on the representation with the extracted features and labels. In @cite_6 , the authors show several advantages of supervised link prediction algorithms, such as superior performance, adaptation to different domains, and variance reduction. In @cite_0 , the features extracted from human mobility are shown to have very strong predictive power and to significantly improve positive link prediction performance. | {
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_6",
"@cite_12"
],
"mid": [
"34646664",
"2101108259",
"2003707464",
""
],
"abstract": [
"We propose to solve the link prediction problem in graphs using a supervised matrix factorization approach. The model learns latent features from the topological structure of a (possibly directed) graph, and is shown to make better predictions than popular unsupervised scores. We show how these latent features may be combined with optional explicit features for nodes or edges, which yields better performance than using either type of feature exclusively. Finally, we propose a novel approach to address the class imbalance problem which is common in link prediction by directly optimizing for a ranking loss. Our model is optimized with stochastic gradient descent and scales to large graphs. Results on several datasets show the efficacy of our approach.",
"Our understanding of how individual mobility patterns shape and impact the social network is limited, but is essential for a deeper understanding of network dynamics and evolution. This question is largely unexplored, partly due to the difficulty in obtaining large-scale society-wide data that simultaneously capture the dynamical information on individual movements and social interactions. Here we address this challenge for the first time by tracking the trajectories and communication records of 6 Million mobile phone users. We find that the similarity between two individuals' movements strongly correlates with their proximity in the social network. We further investigate how the predictive power hidden in such correlations can be exploited to address a challenging problem: which new links will develop in a social network. We show that mobility measures alone yield surprising predictive power, comparable to traditional network-based measures. Furthermore, the prediction accuracy can be significantly improved by learning a supervised classifier based on combined mobility and network measures. We believe our findings on the interplay of mobility patterns and social ties offer new perspectives on not only link prediction but also network dynamics.",
"This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.",
""
]
} |
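The unsupervised link-prediction scores named in the related-work passage above (common neighbors, Jaccard's coefficient, Adamic/Adar) are straightforward to compute from an adjacency structure. A minimal sketch on a toy undirected positive network follows; as a small implementation choice, degree-1 common neighbors are skipped in the Adamic/Adar sum to avoid dividing by log(1) = 0.

```python
import math

def neighbor_scores(adj, u, v):
    """Unsupervised link-prediction scores for a candidate pair (u, v):
    common-neighbor count, Jaccard's coefficient, and Adamic/Adar."""
    cn = adj[u] & adj[v]
    union = adj[u] | adj[v]
    jaccard = len(cn) / len(union) if union else 0.0
    # Adamic/Adar weights each common neighbor by 1 / log(degree),
    # so rare shared neighbors count more than hubs.
    adamic_adar = sum(1.0 / math.log(len(adj[z])) for z in cn if len(adj[z]) > 1)
    return len(cn), jaccard, adamic_adar

# Toy undirected positive network as an adjacency-set dict.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
cn, jac, aa = neighbor_scores(adj, "b", "d")
print(cn, round(jac, 3), round(aa, 3))  # 2 1.0 1.82
```

In a supervised setting, scores like these become entries in the feature vector for each user pair, with the presence of a positive link as the label.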
1412.2723 | 2951515971 | Signed network analysis has attracted increasing attention in recent years. This is in part because research on signed network analysis suggests that negative links have added value in the analytical process. A major impediment in their effective use is that most social media sites do not enable users to specify them explicitly. In other words, a gap exists between the importance of negative links and their availability in real data sets. Therefore, it is natural to explore whether one can predict negative links automatically from the commonly available social network data. In this paper, we investigate the novel problem of negative link prediction with only positive links and content-centric interactions in social media. We make a number of important observations about negative links, and propose a principled framework NeLP, which can exploit positive links and content-centric interactions to predict negative links. Our experimental results on real-world social networks demonstrate that the proposed NeLP framework can accurately predict negative links with positive links and content-centric interactions. Our detailed experiments also illustrate the relative importance of various factors to the effectiveness of the proposed framework. | Positive and negative link prediction infers new positive and negative links given a snapshot of a signed network; this problem has attracted increasing attention in recent years @cite_9 . In @cite_22 , an algorithm based on trust and distrust propagation is proposed to predict trust and distrust relations. In @cite_17 , local-topology-based features based on balance theory are extracted to improve the performance of a logistic regression classifier in signed relation prediction. Features derived from longer cycles in signed networks can be used to improve positive and negative link prediction performance @cite_21 .
In @cite_3 , a low-rank matrix factorization approach with generalized loss functions is proposed to predict trust and distrust relations. | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_17"
],
"mid": [
"2144780381",
"",
"1964537599",
"2028513945",
"2073415627"
],
"abstract": [
"A (directed) network of people connected by ratings or trust scores, and a model for propagating those trust scores, is a fundamental building block in many of today's most successful e-commerce and recommendation systems. We develop a framework of trust propagation schemes, each of which may be appropriate in certain circumstances, and evaluate the schemes on a large trust network consisting of 800K trust scores expressed among 130K people. We show that a small number of expressed trusts/distrusts per individual allows us to predict trust between any two people in the system with high accuracy. Our work appears to be the first to incorporate distrust in a computational trust propagation setting.",
"",
"We consider the problem of link prediction in signed networks. Such networks arise on the web in a variety of ways when users can implicitly or explicitly tag their relationship with other users as positive or negative. The signed links thus created reflect social attitudes of the users towards each other in terms of friendship or trust. Our first contribution is to show how any quantitative measure of social imbalance in a network can be used to derive a link prediction algorithm. Our framework allows us to reinterpret some existing algorithms as well as derive new ones. Second, we extend the approach of (2010) by presenting a supervised machine learning based link prediction method that uses features derived from longer cycles in the network. The supervised method outperforms all previous approaches on 3 networks drawn from sources such as Epinions, Slashdot and Wikipedia. The supervised approach easily scales to these networks, the largest of which has 132k nodes and 841k edges. Most real-world networks have an overwhelmingly large proportion of positive edges and it is therefore easy to get a high overall accuracy at the cost of a high false positive rate. We see that our supervised method not only achieves good accuracy for sign prediction but is also especially effective in lowering the false positive rate.",
"Trust networks, where people leave trust and distrust feedback, are becoming increasingly common. These networks may be regarded as signed graphs, where a positive edge weight captures the degree of trust while a negative edge weight captures the degree of distrust. Analysis of such signed networks has become an increasingly important research topic. One important analysis task is that of sign inference, i.e., infer unknown (or future) trust or distrust relationships given a partially observed signed network. Most state-of-the-art approaches consider the notion of structural balance in signed networks, building inference algorithms based on information about links, triads, and cycles in the network. In this paper, we first show that the notion of weak structural balance in signed networks naturally leads to a global low-rank model for the network. Under such a model, the sign inference problem can be formulated as a low-rank matrix completion problem. We show that we can perfectly recover missing relationships, under certain conditions, using state-of-the-art matrix completion algorithms. We also propose the use of a low-rank matrix factorization approach with generalized loss functions as a practical method for sign inference - this approach yields high accuracy while being scalable to large signed networks, for instance, we show that this analysis can be performed on a synthetic graph with 1.1 million nodes and 120 million edges in 10 minutes. We further show that the low-rank model can be used for other analysis tasks on signed networks, such as user segmentation through signed graph clustering, with theoretical guarantees. Experiments on synthetic as well as real data show that our low rank model substantially improves accuracy of sign inference as well as clustering. 
As an example, on the largest real dataset available to us (Epinions data with 130K nodes and 840K edges), our matrix factorization approach yields 94.6% accuracy on the sign inference task as compared to 90.8% accuracy using a state-of-the-art cycle-based method - moreover, our method runs in 40 seconds as compared to 10,000 seconds for the cycle-based method.",
"We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network."
]
} |
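The low-rank factorization approach surveyed in this row can be illustrated with a small sketch. The two-group toy network, the squared loss, and the plain gradient-descent solver below are illustrative assumptions, not the actual algorithm of the cited work:

```python
import numpy as np

# Toy signed network with weak structural balance: two groups,
# positive links inside a group, negative links across groups.
groups = [0, 0, 0, 1, 1, 1]
n = len(groups)
S = np.array([[0 if i == j else (1 if groups[i] == groups[j] else -1)
               for j in range(n)] for i in range(n)], dtype=float)

# Hide one symmetric pair of entries and try to recover its sign.
hidden = (0, 4)                        # true sign: -1 (different groups)
mask = np.ones((n, n), dtype=bool)
np.fill_diagonal(mask, False)
mask[hidden] = mask[hidden[1], hidden[0]] = False

# Rank-2 factorization S ~ U @ V.T, fitted by gradient descent on the
# observed entries only (squared loss stands in for the generalized
# losses used in the cited paper).
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((n, 2))
V = 0.1 * rng.standard_normal((n, 2))
lr = 0.05
for _ in range(5000):
    R = (U @ V.T - S) * mask           # residual on observed entries
    U, V = U - lr * R @ V, V - lr * R.T @ U

predicted_sign = int(np.sign((U @ V.T)[hidden]))
print(predicted_sign)                  # the hidden negative link is recovered
```

The completed entry inherits the sign implied by the two-cluster (weakly balanced) structure, which is exactly the low-rank completion argument made in the cited abstract.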
1412.2812 | 1959399437 | We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages. | In recent years, unsupervised approaches to semantic role induction have attracted considerable attention. However, there exist other ways to address the insufficient coverage provided by existing semantically-annotated resources. One natural direction is semi-supervised role labeling, where both annotated and unannotated data is used to construct a model. Previous semi-supervised approaches to SRL can mostly be regarded as extensions to supervised learning, either incorporating word features induced from unannotated texts @cite_9 @cite_28 or creating some form of 'surrogate' supervision @cite_55 @cite_7 @cite_21 . The benefits from using unlabeled data were moderate, and more significant for the harder SRL version, frame-semantic parsing @cite_21 . | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_55",
"@cite_9",
"@cite_21"
],
"mid": [
"2078658491",
"2159746348",
"2273815277",
"2117130368",
"2117391079"
],
"abstract": [
"Unknown lexical items present a major obstacle to the development of broad-coverage semantic role labeling systems. We address this problem with a semi-supervised learning approach which acquires training instances for unseen verbs from an unlabeled corpus. Our method relies on the hypothesis that unknown lexical items will be structurally and semantically similar to known items for which annotations are available. Accordingly, we represent known and unknown sentences as graphs, formalize the search for the most similar verb as a graph alignment problem and solve the optimization using integer linear programming. Experimental results show that role labeling performance for unknown lexical items improves with training data produced automatically by our method.",
"Semantic Role Labeling (SRL) has proved to be a valuable tool for performing automatic analysis of natural language texts. Currently however, most systems rely on a large training set, which is manually annotated, an effort that needs to be repeated whenever different languages or a different set of semantic roles is used in a certain application. A possible solution for this problem is semi-supervised learning, where a small set of training examples is automatically expanded using unlabeled texts. We present the Latent Words Language Model, which is a language model that learns word similarities from unlabeled texts. We use these similarities for different semi-supervised SRL methods as additional features or to automatically expand a small training set. We evaluate the methods on the PropBank dataset and find that for small training sizes our best performing system achieves an error reduction of 33.27 F1-measure compared to a state-of-the-art supervised baseline.",
"",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.",
"We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline."
]
} |
1412.2812 | 1959399437 | We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages. | Another important direction includes cross-lingual approaches @cite_38 @cite_11 , which leverage resources for resource-rich languages, as well as parallel data, to transfer the annotation to resource-poor languages. However, both translation shifts and noise in word alignments harm the performance of cross-lingual methods. Nevertheless, even joint unsupervised induction across languages appears to be beneficial @cite_14 . | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_11"
],
"mid": [
"2115057736",
"2171860752",
"2103006268"
],
"abstract": [
"This article considers the task of automatically inducing role-semantic annotations in the FrameNet paradigm for new languages. We propose a general framework that is based on annotation projection, phrased as a graph optimization problem. It is relatively inexpensive and has the potential to reduce the human effort involved in creating role-semantic resources. Within this framework, we present projection models that exploit lexical and syntactic information. We provide an experimental evaluation on an English-German parallel corpus which demonstrates the feasibility of inducing high-precision German semantic role annotation both for manually and automatically annotated English data.",
"We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a state-of-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences.",
"Broad-coverage semantic annotations for training statistical learners are only available for a handful of languages. Previous approaches to cross-lingual transfer of semantic annotations have addressed this problem with encouraging results on a small scale. In this paper, we scale up previous efforts by using an automatic approach to semantic annotation that does not rely on a semantic ontology for the target language. Moreover, we improve the quality of the transferred semantic annotations by using a joint syntactic-semantic parser that learns the correlations between syntax and semantics of the target language and smooths out the errors from automatic transfer. We reach a labelled F-measure for predicates and arguments of only 4 and 9 points, respectively, lower than the upper bound from manual annotations."
]
} |
1412.2812 | 1959399437 | We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages. | Unsupervised learning has also been one of the central paradigms for the closely-related area of relation extraction (RE), where several techniques have been proposed to cluster semantically similar verbalizations of relations @cite_26 @cite_51 @cite_22 . Similarly to SRL, unsupervised methods for RE mostly rely on generative modeling and agglomerative clustering. | {
"cite_N": [
"@cite_26",
"@cite_51",
"@cite_22"
],
"mid": [
"1965605789",
"2127978399",
"115166160"
],
"abstract": [
"In this paper, we propose an unsupervised method for discovering inference rules from text, such as \"X is author of Y ≈ X wrote Y\", \"X solved Y ≈ X found a solution to Y\", and \"X caused Y ≈ Y is triggered by X\". Inference rules are extremely important in many fields such as natural language processing, information retrieval, and artificial intelligence in general. Our algorithm is based on an extended version of Harris' Distributional Hypothesis, which states that words that occurred in the same contexts tend to be similar. Instead of using this hypothesis on words, we apply it to paths in the dependency trees of a parsed corpus.",
"To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.",
"We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12 error reduction in precision over a state-of-the-art weakly supervised baseline."
]
} |
1412.2954 | 2148466135 | We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models. | The main technique in all these papers can be viewed as efficient tensor decomposition. For a Hermitian matrix @math , one can give an orthogonal decomposition into rank @math components. This decomposition, especially when applied to covariance matrices, is a powerful tool in machine learning and theoretical computer science. The generalization of this to tensors is not straightforward, and many versions of this decomposition lead directly to NP-hard problems. The application of tensor decomposition to ICA was proposed by @cite_6 . Such decompositions were used by @cite_3 and @cite_17 to give provable algorithms for various latent variable models. @cite_7 extended these decompositions to a more general setting where the rank-one factors need not be linearly independent (and thus might be many more than the dimension). | {
"cite_N": [
"@cite_3",
"@cite_7",
"@cite_6",
"@cite_17"
],
"mid": [
"2953337630",
"2950033149",
"1533423434",
"2951439068"
],
"abstract": [
"The problem of topic modeling can be seen as a generalization of the clustering problem, in that it posits that observations are generated due to multiple latent factors (e.g., the words in each document are generated as a mixture of several active topics, as opposed to just one). This increased representational power comes at the cost of a more challenging unsupervised learning problem of estimating the topic probability vectors (the distributions over words for each topic), when only the words are observed and the corresponding topics are hidden. We provide a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of mixture models, including the popular latent Dirichlet allocation (LDA) model. For LDA, the procedure correctly recovers both the topic probability vectors and the prior over the topics, using only trigram statistics (i.e., third order moments, which may be estimated with documents containing just three words). The method, termed Excess Correlation Analysis (ECA), is based on a spectral decomposition of low order moments (third and fourth order) via two singular value decompositions (SVDs). Moreover, the algorithm is scalable since the SVD operations are carried out on @math matrices, where @math is the number of latent factors (e.g. the number of topics), rather than in the @math -dimensional observed space (typically @math ).",
"Fourier PCA is Principal Component Analysis of a matrix obtained from higher order derivatives of the logarithm of the Fourier transform of a distribution.We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in rank- @math decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an @math matrix @math from observations @math where @math is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions @math can be arbitrarily higher than the dimension @math and the columns of @math only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise.",
"The author presents a simple algebraic method for the extraction of independent components in multidimensional data. Since statistical independence is a much stronger property than uncorrelation, it is possible, using higher-order moments, to identify source signatures in array data without any a priori model for propagation or reception, that is, without directional vector parameterization, provided that the emitting sources are independent with different probability distributions. The author proposes such a blind identification procedure. Source signatures are directly identified as covariance eigenvectors after data have been orthonormalized and nonlinearly weighted. Potential applications to array processing are illustrated by a simulation consisting of a simultaneous range-bearing estimation with a passive array.",
"This work provides a computationally efficient and statistically consistent moment-based estimator for mixtures of spherical Gaussians. Under the condition that component means are in general position, a simple spectral decomposition technique yields consistent parameter estimates from low-order observable moments, without additional minimum separation assumptions needed by previous computationally efficient estimation procedures. Thus computational and information-theoretic barriers to efficient estimation in mixture models are precluded when the mixture components have means in general position and spherical covariances. Some connections are made to estimation problems related to independent component analysis."
]
} |
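The orthogonal rank-one decomposition invoked above is, for a real symmetric matrix, just the spectral theorem, and can be checked numerically (standard linear algebra, not code from the cited papers):

```python
import numpy as np

# A symmetric, covariance-like matrix M = sum_i lambda_i v_i v_i^T.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
M = A @ A.T                            # symmetric positive semidefinite

eigvals, eigvecs = np.linalg.eigh(M)   # orthonormal eigenvectors
# Rebuild M as a sum of rank-one terms lambda_i * outer(v_i, v_i).
M_rebuilt = sum(lam * np.outer(v, v) for lam, v in zip(eigvals, eigvecs.T))

print(np.allclose(M, M_rebuilt))       # True: the decomposition is exact
```

It is exactly this sum-of-rank-one structure whose tensor analogue is non-trivial (and in general NP-hard), as the paragraph above notes.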
1412.2204 | 2283248567 | A content can be replicated in more than one node, in Information Centric Networks (ICNs). Thus, more than one path can be followed to reach the same content, and it is necessary to decide the interface(s) to be selected in every network node to forward content requests towards such multiple content containers. A multipath forwarding strategy defines how to perform this choice. In this paper we propose a general analytical model to evaluate the effect of multipath forwarding strategies on the performance of an ICN content delivery, whose congestion control follows a receiver driven, loss-based AIMD scheme. We use the model to understand the behavior of ICN multipath forwarding strategies proposed in the literature so far, and to devise and evaluate a novel strategy. The considered multipath forwarding strategies are also evaluated in a realistic network setting, by using the PlanetLab testbed. | * Content Centric Network - CCN: A CCN addresses contents by using unique hierarchical names @cite_9 (e.g. foo.com/doc1). Big contents are split into chunks, uniquely addressed by names that include the content name and the chunk number (e.g. foo.com/doc1 @math CN1). The receiver starts the download by sending an Interest for the first chunk; at the reception of the related Data, the AIMD algorithm sets cwnd=2 and the receiver sends out two Interests for the next two chunks. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2014952121"
],
"abstract": [
"Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls."
]
} |
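The receiver-driven AIMD behavior described in this row can be traced with a toy window simulation; the additive-increase step, the halving factor, and the event sequence are illustrative parameters rather than the exact scheme of the paper:

```python
# Receiver-driven AIMD sketch: the receiver keeps a window (cwnd) of
# pending Interests; each received Data grows the window additively
# (+1 Interest per window's worth of Data), each detected loss halves it.
def aimd_trace(events, cwnd=1.0, incr=1.0, beta=0.5):
    trace = []
    for ev in events:
        if ev == "data":
            cwnd += incr / cwnd        # additive increase
        else:                          # "loss": multiplicative decrease
            cwnd = max(1.0, beta * cwnd)
        trace.append(round(cwnd, 2))
    return trace

print(aimd_trace(["data"] * 3 + ["loss"] + ["data"] * 2))
# [2.0, 2.5, 2.9, 1.45, 2.14, 2.61]
```

The trace shows the characteristic sawtooth: the Interest window grows while Data arrive and is cut multiplicatively on a loss, which is the loss-based AIMD dynamic the paper's model captures.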
1412.2204 | 2283248567 | A content can be replicated in more than one node, in Information Centric Networks (ICNs). Thus, more than one path can be followed to reach the same content, and it is necessary to decide the interface(s) to be selected in every network node to forward content requests towards such multiple content containers. A multipath forwarding strategy defines how to perform this choice. In this paper we propose a general analytical model to evaluate the effect of multipath forwarding strategies on the performance of an ICN content delivery, whose congestion control follows a receiver driven, loss-based AIMD scheme. We use the model to understand the behavior of ICN multipath forwarding strategies proposed in the literature so far, and to devise and evaluate a novel strategy. The considered multipath forwarding strategies are also evaluated in a realistic network setting, by using the PlanetLab testbed. | * TCP/IP multipath: In TCP/IP networks, multipath issues have been abundantly discussed in the literature @cite_7 . We briefly report some reference approaches to exploit end-to-end and in-network multipath. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2120545656"
],
"abstract": [
"The Internet would be more efficient and robust if routers could flexibly divide traffic over multiple paths. Often, having one or two extra paths is sufficient for customizing paths for different applications, improving security, reacting to failures, and balancing load. However, support for Internet-wide multipath routing faces two significant barriers. First, multipath routing could impose significant computational and storage overhead in a network the size of the Internet. Second, the independent networks that comprise the Internet will not relinquish control over the flow of traffic without appropriate incentives. In this article, we survey flexible multipath routing techniques that are both scalable and incentive compatible. Techniques covered include: multihoming, tagging, tunneling, and extensions to existing Internet routing protocols."
]
} |
1412.2204 | 2283248567 | A content can be replicated in more than one node, in Information Centric Networks (ICNs). Thus, more than one path can be followed to reach the same content, and it is necessary to decide the interface(s) to be selected in every network node to forward content requests towards such multiple content containers. A multipath forwarding strategy defines how to perform this choice. In this paper we propose a general analytical model to evaluate the effect of multipath forwarding strategies on the performance of an ICN content delivery, whose congestion control follows a receiver driven, loss-based AIMD scheme. We use the model to understand the behavior of ICN multipath forwarding strategies proposed in the literature so far, and to devise and evaluate a novel strategy. The considered multipath forwarding strategies are also evaluated in a realistic network setting, by using the PlanetLab testbed. | - The exploitation of end-to-end multipath requires discovering remote NICs and splitting data traffic among them. These operations, and congestion control, can be executed above the IP layer. Consequently, as shown in fig. , end-to-end multipath systems run above the IP network layer, at the communication end-points, without affecting network nodes. Usually, a multipath forwarding strategy splits traffic on a per-packet basis, to maximize the transfer rate. For instance, in the "multi-homed" scenario (fig. ) the MultiPath TCP (MPTCP) protocol @cite_12 sets up parallel subflows among couples of NICs discovered through TCP options, uses a TCP-friendly congestion control per subflow, and schedules traffic on the different subflows according to a specific forwarding strategy @cite_8 @cite_14 . In the "server pooling" scenario, a receiver can use a BitTorrent approach to concurrently fetch different file pieces from different sources, discovered with Web means. The transfer of each piece is controlled by a TCP connection. | {
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_8"
],
"mid": [
"1515106148",
"1823805418",
"2072152665"
],
"abstract": [
"Multipath TCP, as proposed by the IETF working group mptcp, allows a single data stream to be split across multiple paths. This has obvious benefits for reliability, and it can also lead to more efficient use of networked resources. We describe the design of a multipath congestion control algorithm, we implement it in Linux, and we evaluate it for multihomed servers, data centers and mobile clients. We show that some 'obvious' solutions for multipath congestion control can be harmful, but that our algorithm improves throughput and fairness compared to single-path TCP. Our algorithm is a drop-in replacement for TCP, and we believe it is safe to deploy.",
"TCP/IP communication is currently restricted to a single path per connection, yet multiple paths often exist between peers. The simultaneous use of these multiple paths for a TCP/IP session would improve resource usage within the network and, thus, improve user experience through higher throughput and improved resilience to network failure. Multipath TCP provides the ability to simultaneously use multiple paths between peers. This document presents a set of extensions to traditional TCP to support multipath operation. The protocol offers the same type of service to applications as TCP (i.e., reliable bytestream), and it provides the components necessary to establish and use multiple TCP flows across potentially disjoint paths. This document defines an Experimental Protocol for the Internet community.",
"Multipath transport protocols such as Multipath TCP can concurrently use several subflows to transmit a TCP flow over potentially different paths. Since more than one subflow is used, an efficient multipath scheduling algorithm is needed at the sender. The objective of the scheduler is to identify the subflow over which the current data packet should be sent. This paper compares the most important types of schedulers for multipath transfers. We model their performance analytically and derive key metrics, most notably the resulting end-to-end delay over heterogeneous paths. Our results show that a scheduler minimizing the packet delivery delay yields the best overall performance, but it is complex to realize. An alternative scheduler based on the sender queue size is simpler and has sufficient performance for relatively small asymmetry between the multiple paths. Our model results are confirmed by measurements with a real multipath transport protocol."
]
} |
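The delay-based multipath scheduler compared in the last abstract (each packet goes to the subflow with the smallest estimated delivery time) can be sketched as follows; the rates, one-way delays, and packet size are invented numbers:

```python
# Two subflows with different rates and delays; estimated delivery time
# of a packet is (queued bits + packet bits) / rate + one-way delay.
subflows = {
    "wifi": {"rate_bps": 2e6, "delay_s": 0.03, "queued_bits": 0.0},
    "lte":  {"rate_bps": 1e6, "delay_s": 0.01, "queued_bits": 0.0},
}

def schedule(pkt_bits):
    def eta(name):
        s = subflows[name]
        return (s["queued_bits"] + pkt_bits) / s["rate_bps"] + s["delay_s"]
    best = min(subflows, key=eta)      # smallest estimated delivery time
    subflows[best]["queued_bits"] += pkt_bits
    return best

sent = [schedule(100_000) for _ in range(8)]
print(sent)  # packets interleave over both paths as queues build up
```

As the faster subflow's queue grows, its estimated delivery time eventually exceeds the slower subflow's, so traffic naturally interleaves across both paths.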
1412.2204 | 2283248567 | A content can be replicated in more than one node, in Information Centric Networks (ICNs). Thus, more than one path can be followed to reach the same content, and it is necessary to decide the interface(s) to be selected in every network node to forward content requests towards such multiple content containers. A multipath forwarding strategy defines how to perform this choice. In this paper we propose a general analytical model to evaluate the effect of multipath forwarding strategies on the performance of an ICN content delivery, whose congestion control follows a receiver driven, loss-based AIMD scheme. We use the model to understand the behavior of ICN multipath forwarding strategies proposed in the literature so far, and to devise and evaluate a novel strategy. The considered multipath forwarding strategies are also evaluated in a realistic network setting, by using the PlanetLab testbed. | - The exploitation of in-network multipath requires discovering internal network paths and controlling the forwarding of traffic inside the network. Consequently, the path discovery and multipath forwarding mechanisms must necessarily be executed by the IP network layer (we do not consider the source routing IP option, since in practice it is not supported), thus involving network nodes. Instead, the congestion control mechanism can remain above IP, at the communication end-points. However, as shown in fig. , this division of multipath functionality between two different layers creates interoperation issues, a crucial one being packet reordering. In fact, if a per-packet strategy were used by IP routers, it would cause out-of-order packet delivery (as different paths may have different delays), and TCP-based congestion control would wrongly reduce the send rate, even in the absence of congestion @cite_16 . Conversely, out-of-order delivery does not occur in the case of per-flow multipath forwarding strategies.
Thus, per-flow strategies are the safest approach to exploiting in-network multipath in TCP/IP networks. | {
"cite_N": [
"@cite_16"
],
"mid": [
"1898772099"
],
"abstract": [
"In this paper, we investigate TCP performance over a multipath routing protocol. Multipath routing can improve the path availability in mobile environment. Thus, it has a great potential to improve TCP performance in ad hoc networks under mobility. Previous research on multipath routing mostly used UDP traffic for performance evaluation. When TCP is used, we find that most times, using multiple paths simultaneously may actually degrade TCP performance. This is partly due to frequent out-of-order packet delivery via different paths. We then test another multipath routing strategy called backup path routing. Under the backup path routing scheme, TCP is able to gain improvements against mobility. We then further study related issues to backup path routing, which can affect TCP performance. Some important discoveries are reported in the paper and simulation results show that by careful selection of the multipath routing strategies, we can improve TCP performance by more than 30 even under very high mobility."
]
} |
1412.2204 | 2283248567 | A content can be replicated in more than one node, in Information Centric Networks (ICNs). Thus, more than one path can be followed to reach the same content, and it is necessary to decide the interface(s) to be selected in every network node to forward content requests towards such multiple content containers. A multipath forwarding strategy defines how to perform this choice. In this paper we propose a general analytical model to evaluate the effect of multipath forwarding strategies on the performance of an ICN content delivery, whose congestion control follows a receiver driven, loss-based AIMD scheme. We use the model to understand the behavior of ICN multipath forwarding strategies proposed in the literature so far, and to devise and evaluate a novel strategy. The considered multipath forwarding strategies are also evaluated in a realistic network setting, by using the PlanetLab testbed. | Congestion control is an open ICN issue, and there is not yet a "standard" protocol. Out-of-order delivery may frequently happen in ICN, due to in-network caching and per-packet multipath forwarding strategies. Thus, recent works suggest using receiver-driven congestion control schemes that do not treat out-of-order delivery as a symptom of congestion, but rather infer congestion from other parameters such as increasing delay (delay-based congestion control) @cite_4 @cite_1 and packet loss (loss-based congestion control) @cite_19 @cite_0 . However, it is not yet clear which is the best indicator of congestion. In @cite_18 the authors raise concerns about delay-based approaches due to the small correlation between increased delays (or RTTs) and congestion-related losses in wired Internet measurements. Conversely, it is well known that loss-based congestion control dramatically suffers from the random packet losses of wireless environments @cite_10 .
In any case, the receiver-driven and connectionless nature of ICN congestion control enables receivers to select the best approach, depending on actual conditions. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_10"
],
"mid": [
"94696622",
"1994384380",
"1994523529",
"2026645245",
"2058905172",
"2101214690"
],
"abstract": [
"",
"The evolution of the Internet into a distributed Information access system calls for a paradigm shift to enable an evolvable future network architecture. Information-Centric Networking (ICN) proposals rethink the communication model around named data, in contrast with the host-centric transport view of TCP IP. Information retrieval is natively pull-based, driven by user requests, point-to-multipoint and intrinsically coupled with in-network caching. In this paper, we tackle the problem of joint multipath congestion control and request forwarding in ICN for the first time. We formulate it as a global optimization problem with the twofold objective of maximizing user throughput and minimizing overall network cost. We solve it via decomposition and derive a family of optimal congestion control strategies at the receiver and of distributed algorithms for dynamic request forwarding at network nodes. An experimental evaluation of our proposal is carried out in different network scenarios to assess the performance of our design and to highlight the benefits of an ICN approach.",
"Data communication across the Internet has significantly changed under the pressure of massive content delivery. Content-Centric Networking (CCN) rethinks Internet communication paradigm around named data retrieval, in contrast with the host-to-host transport model of TCP IP. Content retrieval is natively pull-based driven by user requests, point-to-multipoint and intrinsically coupled with the availability of network storage. By leveraging the key features of CCN transport, in this paper we propose for the first time a congestion control mechanism realizing efficient multipath communication over content-centric networks. Our proposal is based on a Remote Adaptive Active Queue Management (RAAQM) at the receiver that performs a per-route control of bottleneck queues along the paths. We analyze the stability of the proposed solution and assess its performance by means of CCN packet-level simulations under random and optimal route selection.",
"Content Centric Networking (CCN) is a recently proposed information-centric Internet architecture in which the main network abstraction is represented by location-agnostic content identifiers instead of node identifiers. In CCN each content object is divided into packet-size chunks. When a content object is transferred, routers on the path can cache single chunks which they can use to serve subsequent requests from other users. Since content chunks in CCN may be retrieved from a number of different nodes caches, implicit-feedback transport protocols will not be able to work efficiently, because it is not possible to set an appropriate timeout value based on RTT estimations given that the data source may change frequently during a flow. In order to address this problem, we propose in this paper a scalable, implicit-feedback congestion control protocol, capable of coping with RTT unpredictability using a novel anticipated interests mechanism to predict the location of chunks before they are actually served. Our evaluation shows that our protocol outperforms similar receiver-driven protocols, in particular when content chunks are scattered across network paths due to reduced cache sizes, long-tail content popularity distribution or the adoption of specific caching policies.",
"The Content Centric Network (CCNx) protocol introduces a new routing and forwarding paradigm for the waist of the future Internet architecture. Below CCN, a volatile set of transport and flow-control strategies is envisioned, which can match better different service requirements, than current TCP IP technology does. Although a broad range of possibilities has not been explored yet, there are already several proponents of strategies that come close to TCP's well-known flow control mechanism (timeout-driven, window-based, and AIMD- operated). In this paper, we carry out an empirical exploration of some proposed strategies in order to assert their feasibility and efficiency. Our contributions are twofold: First, we establish if receiver-based, timeout-driven, AIMD operated flow-control on Interest transmissions is sufficiently effective for CCN in a future Internet deployment, where it may co-exist with TCP. In this process we compare the performance of three different variants of this strategy, in presence of multi-homed content in the network (one of them proposed by the authors). Second, we provide indicators for the general efficiency of timeout-based flow-control at the CCN receiver, in presence of in-network caching, and exhibit some of the challenges faced by such strategies.",
"Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements."
]
} |
1412.2087 | 2950860300 | Although the Poisson point process (PPP) has been widely used to model base station (BS) locations in cellular networks, it is an idealized model that neglects the spatial correlation among BSs. The present paper proposes the use of determinantal point process (DPP) to take into account these correlations; in particular the repulsiveness among macro base station locations. DPPs are demonstrated to be analytically tractable by leveraging several unique computational properties. Specifically, we show that the empty space function, the nearest neighbor function, the mean interference and the signal-to-interference ratio (SIR) distribution have explicit analytical representations and can be numerically evaluated for cellular networks with DPP configured BSs. In addition, the modeling accuracy of DPPs is investigated by fitting three DPP models to real BS location data sets from two major U.S. cities. Using hypothesis testing for various performance metrics of interest, we show that these fitted DPPs are significantly more accurate than popular choices such as the PPP and the perturbed hexagonal grid model. | Cellular network performance metrics, such as the coverage probability and achievable rate, strongly depend on the spatial configuration of BSs. PPPs have become increasingly popular to model cellular BSs not only because they can describe highly irregular placements, but also because they allow the use of powerful tools from stochastic geometry and are amenable to tractable analysis @cite_1 . While cellular networks with PPP distributed BSs have been studied in early works such as @cite_29 @cite_8 @cite_35 , the coverage probability and average Shannon rate were derived only recently in @cite_1 . 
The analysis of cellular networks with PPP distributed BSs has been widely extended to other network scenarios, including heterogeneous cellular networks (HetNets) @cite_25 @cite_36 @cite_5 @cite_10 @cite_7 , MIMO cellular networks @cite_14 @cite_13 , and MIMO HetNets @cite_14 @cite_23 @cite_26 . | {
"cite_N": [
"@cite_13",
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_29",
"@cite_1",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_25"
],
"mid": [
"1612785869",
"2086605530",
"2081450055",
"2023633390",
"2039688938",
"2158581494",
"2057540419",
"1552861460",
"2150166076",
"2155860576",
"2059973889",
"2043180800",
""
],
"abstract": [
"We study a multiple input multiple output (MIMO) cellular system where each base-station (BS) is equipped with a large antenna array and serves some single antenna mobile stations (MSs). With the same setup as in [1], the influence of orthogonal and non-orthogonal pilot sequences on the system performance is analytically characterized when each BS has infinitely many antennas. Using stochastic geometric modeling of the BS and MS locations, closed-form expressions are derived for the distribution of signal-to-interference-ratio (SIR) for both uplink and downlink. Moreover, they are shown to be equivalent for the orthogonal pilots case. Further, it is shown that the downlink SIR is greatly influenced by the correlations between the pilot sequences in the non-orthogonal pilots case. Finally, the mathematical tools can be used to study system performances with other general channel estimation methods and transmission-reception schemes.",
"We define and analyze a random coverage process of the @math -dimensional Euclidian space which allows one to describe a continuous spectrum that ranges from the Boolean model to the Poisson-Voronoi tessellation to the Johnson-Mehl model. Like for the Boolean model, the minimal stochastic setting consists of a Poisson point process on this Euclidian space and a sequence of real valued random variables considered as marks of this point process. In this coverage process, the cell attached to a point is defined as the region of the space where the effect of the mark of this point exceeds an affine function of the cumulated effect of all marks. This cumulated effect is defined as the shot noise process associated with the marked point process. In addition to analyzing and visualizing this continuum, we study various basic properties of the coverage process such as the probability that a point or a pair of points be covered by a typical cell. We also determine the distribution of the number of cells which cover a given point, and show how to provide deterministic bounds on this number. Finally, we also analyze convergence properties of the coverage process using the framework of closed sets, and its differentiability properties using perturbation analysis. Our results require a pathwise continuity property for the shot noise process for which we provide sufficient conditions. The model in question stems from wireless communications where several antennas share the same (or different but interfering) channel(s). In this case, the area where the signal of a given antenna can be received is the area where the signal to interference ratio is large enough. We describe this class of problems in detail in the paper. 
The obtained results allow one to compute quantities of practical interest within this setting: for instance the outage probability is obtained as the complement of the volume fraction; the law of the number of cells covering a point allows one to characterize handover strategies etc.",
"Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.",
"We consider a heterogeneous cellular network (HetNet) where a macrocell tier with a large antenna array base station (BS) is overlaid with a dense tier of small cells (SCs). We investigate the potential benefits of incorporating a massive MIMO BS in a TDD-based HetNet and we provide analytical expressions for the coverage probability and the area spectral efficiency using stochastic geometry. The duplexing mode in which SCs should operate during uplink macrocell transmissions is optimized. Furthermore, we consider a reverse TDD scheme, in which the massive MIMO BS can estimate the SC interference covariance matrix. Our results suggest that significant throughput improvement can be achieved by exploiting interference nulling and implicit coordination across the tiers due to flexible and asymmetric TDD operation.",
"For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions.",
"This paper considers two-dimensional interference-limited cellular radio systems. It introduces the shotgun cellular system that places base stations randomly and assigns channels randomly. Such systems are shown to provide lower bounds to cellular performance that are easy to compute, independent of shadow fading, and apply to a number of design scenarios. Traditional hexagonal systems provide an upper performance bound. The difference between upper and lower bounds is small under operating conditions typical in modern TDMA and CDMA cellular systems. Furthermore, in the strong shadow fading limit, the bounds converge. To give insights into the design of practical systems, several variations are explored including mobile access methods, sectorizing, channel assignments, and placement with deviations. Together these results indicate cellular performance is very robust and little is lost in making rapid minimally planned deployments.",
"Random spatial models are attractive for modeling heterogeneous cellular networks (HCNs) due to their realism, tractability, and scalability. A major limitation of such models to date in the context of HCNs is the neglect of network traffic and load: all base stations (BSs) have typically been assumed to always be transmitting. Small cells in particular will have a lighter load than macrocells, and so their contribution to the network interference may be significantly overstated in a fully loaded model. This paper incorporates a flexible notion of BS load by introducing a new idea of conditionally thinning the interference field. For a K-tier HCN where BSs across tiers differ in terms of transmit power, supported data rate, deployment density, and now load, we derive the coverage probability for a typical mobile, which connects to the strongest BS signal. Conditioned on this connection, the interfering BSs of the i^ th tier are assumed to transmit independently with probability p_i, which models the load. Assuming — reasonably — that smaller cells are more lightly loaded than macrocells, the analysis shows that adding such access points to the network always increases the coverage probability. We also observe that fully loaded models are quite pessimistic in terms of coverage.",
"This paper proposes a new approach for communication networks planning based on stochastic geometry. We first summarize the state of the art in this domain, together with its economic implications, before sketching the main expectations of the proposed method. The main probabilistic tools are point processes and stochastic geometry. We show how several performance evaluation and optimization problems within this framework can actually be posed and solved by computing the mathematical expectation of certain functionals of point processes. We mainly analyze models based on Poisson point processes, for which analytical formulae can often be obtained, although more complex models can also be analyzed, for instance via simulation.",
"Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.",
"We develop a general downlink model for multi-antenna heterogeneous cellular networks (HetNets), where base stations (BSs) across tiers may differ in terms of transmit power, target signal-to-interference-ratio (SIR), deployment density, number of transmit antennas and the type of multi-antenna transmission. In particular, we consider and compare space division multiple access (SDMA), single user beamforming (SU-BF), and baseline single-input single-output (SISO) transmission. For this general model, the main contributions are: (i) ordering results for both coverage probability and per user rate in closed form for any BS distribution for the three considered techniques, using novel tools from stochastic orders, (ii) upper bounds on the coverage probability assuming a Poisson BS distribution, and (iii) a comparison of the area spectral efficiency (ASE). The analysis concretely demonstrates, for example, that for a given total number of transmit antennas in the network, it is preferable to spread them across many single-antenna BSs vs. fewer multi-antenna BSs. Another observation is that SU-BF provides higher coverage and per user data rate than SDMA, but SDMA is in some cases better in terms of ASE.",
"The Signal to Interference Plus Noise Ratio (SINR) on a wireless link is an important basis for consideration of outage, capacity, and throughput in a cellular network. It is therefore important to understand the SINR distribution within such networks, and in particular heterogeneous cellular networks, since these are expected to dominate future network deployments . Until recently the distribution of SINR in heterogeneous networks was studied almost exclusively via simulation, for selected scenarios representing pre-defined arrangements of users and the elements of the heterogeneous network such as macro-cells, femto-cells, etc. However, the dynamic nature of heterogeneous networks makes it difficult to design a few representative simulation scenarios from which general inferences can be drawn that apply to all deployments. In this paper, we examine the downlink of a heterogeneous cellular network made up of multiple tiers of transmitters (e.g., macro-, micro-, pico-, and femto-cells) and provide a general theoretical analysis of the distribution of the SINR at an arbitrarily-located user. Using physically realistic stochastic models for the locations of the base stations (BSs) in the tiers, we can compute the general SINR distribution in closed form. We illustrate a use of this approach for a three-tier network by calculating the probability of the user being able to camp on a macro-cell or an open-access (OA) femto-cell in the presence of Closed Subscriber Group (CSG) femto-cells. We show that this probability depends only on the relative densities and transmit powers of the macro- and femto-cells, the fraction of femto-cells operating in OA vs. Closed Subscriber Group (CSG) mode, and on the parameters of the wireless channel model. For an operator considering a femto overlay on a macro network, the parameters of the femto deployment can be selected from a set of universal curves.",
"Abstract--- This paper studies the carrier-to-interference ratio (CIR) and carrier-to-interference-plus-noise ratio (CINR) performance at the mobile station (MS) within a multi-tier network composed of M tiers of wireless networks, with each tier modeled as the homogeneous n-dimensional (n-D, n=1,2, and 3) shotgun cellular system, where the base station (BS) distribution is given by the homogeneous Poisson point process in n-D. The CIR and CINR at the MS in a single tier network are thoroughly analyzed to simplify the analysis of the multi-tier network. For the multi-tier network with given system parameters, the following are the main results of this paper: (1) semi-analytical expressions for the tail probabilities of CIR and CINR; (2) a closed form expression for the tail probability of CIR in the range [1,infinity); (3) a closed form expression for the tail probability of an approximation to CINR in the entire range [0,infinity); (4) a lookup table based approach for obtaining the tail probability of CINR, and (5) the study of the effect of shadow fading and BSs with ideal sectorized antennas on the CIR and CINR. Based on these results, it is shown that, in a practical cellular system, the installation of additional wireless networks (microcells, picocells and femtocells) with low power BSs over the already existing macrocell network will always improve the CINR performance at the MS.",
""
]
} |
1412.2087 | 2950860300 | Although the Poisson point process (PPP) has been widely used to model base station (BS) locations in cellular networks, it is an idealized model that neglects the spatial correlation among BSs. The present paper proposes the use of determinantal point process (DPP) to take into account these correlations; in particular the repulsiveness among macro base station locations. DPPs are demonstrated to be analytically tractable by leveraging several unique computational properties. Specifically, we show that the empty space function, the nearest neighbor function, the mean interference and the signal-to-interference ratio (SIR) distribution have explicit analytical representations and can be numerically evaluated for cellular networks with DPP configured BSs. In addition, the modeling accuracy of DPPs is investigated by fitting three DPP models to real BS location data sets from two major U.S. cities. Using hypothesis testing for various performance metrics of interest, we show that these fitted DPPs are significantly more accurate than popular choices such as the PPP and the perturbed hexagonal grid model. | For several reasons, determinantal point processes (DPPs) are a promising class of point processes to model cellular BS deployments. First, DPPs have soft and adaptable repulsiveness @cite_24 . Second, there are quite effective statistical inference tools for DPPs @cite_12 @cite_34 . Third, many stationary DPPs can be easily simulated @cite_34 @cite_21 @cite_30 . Fourth, DPPs have many attractive mathematical properties, which can be used for the analysis of cellular network performance @cite_9 @cite_18 . | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_34",
"@cite_12"
],
"mid": [
"2223302737",
"",
"2092812368",
"1938647246",
"1982849351",
"2091038426",
""
],
"abstract": [
"The Ginibre point process is one of the main examples of deter- minantal point processes on the complex plane. It forms a recurring model in stochastic matrix theory as well as in pratical applications. However, this model has mostly been studied from a probabilistic point of view in the fields of stochastic matrices and determinantal point processes, and thus using the Ginibre process to model random phenomena is a topic which is for the most part unexplored. In order to obtain a determinantal point process more suited for simulation, we introduce a modified version of the classical kernel. Then, we compare three different methods to simulate the Ginibre point process and discuss the most efficient one depending on the application at hand.",
"",
"We introduce certain classes of random point fields, including fermion and boson point processes, which are associated with Fredholm determinants of certain integral operators and study some of their basic properties: limit theorems, correlation functions, Palm measures etc. Also we propose a conjecture on an α-analogue of the determinant and permanent.",
"Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.",
"The spatial correlations in transmitter node locations introduced by common multiple access protocols make the analysis of interference, outage, and other related metrics in a wireless network extremely difficult. Most works therefore assume that nodes are distributed either as a Poisson point process (PPP) or a grid, and utilize the independence properties of the PPP (or the regular structure of the grid) to analyze interference, outage and other metrics. But, the independence of node locations makes the PPP a dubious model for nontrivial MACs which intentionally introduce correlations, e.g., spatial separation, while the grid is too idealized to model real networks. In this paper, we introduce a new technique based on the factorial moment expansion of functionals of point processes to analyze functions of interference, in particular outage probability. We provide a Taylor-series type expansion of functions of interference, wherein increasing the number of terms in the series provides a better approximation at the cost of increased complexity of computation. Various examples illustrate how this new approach can be used to find outage probability in both Poisson and non-Poisson wireless networks.",
"Statistical models and methods for determinantal point processes (DPPs) seem largely unexplored. We demonstrate that DPPs provide useful models for the description of spatial point pattern data sets where nearby points repel each other. Such data are usually modelled by Gibbs point processes, where the likelihood and moment expressions are intractable and simulations are time consuming. We exploit the appealing probabilistic properties of DPPs to develop parametric models, where the likelihood and moment expressions can be easily evaluated and realizations can be quickly simulated. We discuss how statistical inference is conducted by using the likelihood or moment properties of DPP models, and we provide freely available software for simulation and statistical inference.",
""
]
} |
1412.2087 | 2950860300 | Although the Poisson point process (PPP) has been widely used to model base station (BS) locations in cellular networks, it is an idealized model that neglects the spatial correlation among BSs. The present paper proposes the use of determinantal point process (DPP) to take into account these correlations; in particular the repulsiveness among macro base station locations. DPPs are demonstrated to be analytically tractable by leveraging several unique computational properties. Specifically, we show that the empty space function, the nearest neighbor function, the mean interference and the signal-to-interference ratio (SIR) distribution have explicit analytical representations and can be numerically evaluated for cellular networks with DPP configured BSs. In addition, the modeling accuracy of DPPs is investigated by fitting three DPP models to real BS location data sets from two major U.S. cities. Using hypothesis testing for various performance metrics of interest, we show that these fitted DPPs are significantly more accurate than popular choices such as the PPP and the perturbed hexagonal grid model. | The Ginibre point process, which is a type of DPP, has been recently proposed as a possible model for cellular BSs. Closed-form expressions of the coverage probability and the mean data rate were derived for Ginibre single-tier cellular networks in @cite_18 , and heterogeneous cellular networks in @cite_31 . @cite_32 , several spatial descriptive statistics and the coverage probability were derived for Ginibre single-tier networks. These results were empirically validated by comparing to real BS deployments. That being said, the modeling accuracy and analytical tractability of using general DPPs to model cellular BS deployments are still largely unexplored. | {
"cite_N": [
"@cite_18",
"@cite_31",
"@cite_32"
],
"mid": [
"",
"2076773434",
"1994267277"
],
"abstract": [
"",
"We consider spatial stochastic models of downlink heterogeneous cellular networks (HCNs) with multiple tiers, where the base stations (BSs) of each tier have a particular spatial density, transmission power and path-loss exponent. Prior works on such spatial models of HCNs assume, due to its tractability, that the BSs are deployed according to homogeneous Poisson point processes. This means that the BSs are located independently of each other and their spatial correlation is ignored. In the current paper, we propose two spatial models for the analysis of downlink HCNs, in which the BSs are deployed according to α-Ginibre point processes. The α-Ginibre point processes constitute a class of determinantal point processes and account for the repulsion between the BSs. Besides, the degree of repulsion is adjustable according to the value of α ∈ (0,1]. In one proposed model, the BSs of different tiers are deployed according to mutually independent α-Ginibre processes, where α can take different values for the different tiers. In the other model, all the BSs are deployed according to an α-Ginibre point process and they are classified into multiple tiers by mutually independent marks. For these proposed models, we derive computable representations for the coverage probability of a typical user, i.e., the probability that the downlink signal-to-interference-plus-noise ratio for the typical user achieves a target threshold. We exhibit the results of some numerical experiments and compare the proposed models and the Poisson based model.",
"The spatial structure of transmitters in wireless networks plays a key role in evaluating the mutual interference and hence the performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability and practicability tradeoffs, we introduce and promote the @math -GPP, an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second moment approach, and then provide approximations of the interference distribution by three known probability density functions. Besides, to show that the model is relevant for cellular systems, we derive the coverage probability of the typical user and also find that the fitted @math -GPP can closely model the deployment of actual base stations in terms of the coverage probability and other statistics."
]
} |
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | The Restorative Justice approach focuses on the personal needs of victims. As discussed above, achieving success in the VOM sessions depends largely on how the participants communicate with each other. A large number of techniques can be found in the literature for application in VOM. A good example of this is provided by Umbreit's handbook @cite_20 . This resource offers an empirically grounded, state-of-the-art analysis of the application and impact of VOM. It provides practical guidance and resources for VOM in the case of property crimes, minor assaults, and, more recently, crimes of severe violence, where family members of murder victims request a meeting with the offender. 
Since most of the cases addressed are of a highly sensitive nature, participants are likely to manifest emotional states when interacting with the others, which can be physically observed through their non-verbal communication @cite_31 . | {
"cite_N": [
"@cite_31",
"@cite_20"
],
"mid": [
"1544484417",
"572213402"
],
"abstract": [
"Preface. Part I: AN INTRODUCTION TO THE STUDY OF NONVERBAL COMMUNICATION. 1. Nonverbal Communication: Basic Perspectives. 2. The Roots of Nonverbal Behavior. 3. The Ability to Receive and Send Nonverbal Signals. Part II: THE COMMUNICATION ENVIRONMENT. 4. The Effects of the Environment on Human Communication. 5. The Effects of Territory and Personal Space on Human Communication. Part III: THE COMMUNICATORS. 6. The Effects of Physical Characteristics on Human Communication. Part IV: The Communicators' Behavior. 7. The Effects of Gesture and Posture on Human Communication. 8. The Effects of Touch on Human Communication. 9. The Effects of the Face on Human Communication. 10. The Effects of Eye Behavior on Human Communication. 11. The Effects of Vocal Cues That Accompany Spoken Words. Part V: COMMUNICATING IMPORTANT MESSAGES. 12. Using Nonverbal Behavior in Daily Interaction. 13. Nonverbal Messages in Special Contexts.",
"Foreword by Marlene Young. Introduction: Restorative Justice Through Victim Offender Mediation. PHILOSOPHY, PRACTICE, AND CONTEXT. Humanistic Mediation: A Transformative Journey of Peacemaking. Guidelines for Victim-Sensitive Mediation and Dialogue with Offenders. The Mediation Process: Phases and Tasks. Multicultural Implications of Victim Offender Mediation. Case Studies. National Survey of Victim Offender Mediation Programs. Program Development Issues. WHAT WE ARE LEARNING FROM RESEARCH. The Impact of Victim Offender Mediation: Two Decades of Research. Cross-National Assessment of Victim Offender Mediation. Victim Offender Mediation in the United States: A Multisite Assessment. Victim Offender Mediation in Canada: A Multisite Assessment. Victim Offender Mediation in England: A Multisite Assessment. EMERGING ISSUES. Advanced Mediation and Dialogue in Crimes of Severe Violence. Potential Hazards and Opportunities. Appendix A. Resources: Organizations, Publications, Videotapes. Appendix B. Directory of VOM Programs in the United States. Appendix C. Program Profiles. Appendix D. Promising Practices and Innovations. Appendix E. Summary of Forty VOM Empirical Studies. Appendix F. Assessing Participant Satisfaction with VOM. References. Index."
]
} |
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | Recently, a number of studies have proposed ways in which personality traits can be inferred from multimedia data @cite_27 and which can be applied directly to the approach taken by Restorative Justice. The prediction of these responses takes a particular interest in meetings involving a limited number of participants. For instance, in @cite_45 the goal was both to detect the social signals produced in small group interactions and to emphasize their importance markers. In addition, the works of @cite_36 @cite_21 combined several methodologies to analyze non-verbal behavior automatically by extracting communicative cues from both simulated and real scenarios. Thus, most of these social signal processing frameworks involve the detection of a set of visual indicators from the analysis of the participants' body and face. 
Additionally, information obtained from speech is commonly used @cite_40 @cite_49 @cite_37 , as is other information obtained from ambient and wearable sensors @cite_29 . | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_40",
"@cite_27",
"@cite_45",
"@cite_49"
],
"mid": [
"2136553199",
"2069462093",
"2092309660",
"2067360832",
"2139942200",
"2068823511",
"2009658355",
"2097128017"
],
"abstract": [
"The automatic discovery of group conversational behavior is a relevant problem in social computing. In this paper, we present an approach to address this problem by defining a novel group descriptor called bag of group-nonverbal-patterns (NVPs) defined on brief observations of group interaction, and by using principled probabilistic topic models to discover topics. The proposed bag of group NVPs allows fusion of individual cues and facilitates the eventual comparison of groups of varying sizes. The use of topic models helps to cluster group interactions and to quantify how different they are from each other in a formal probabilistic sense. Results of behavioral topics discovered on the Augmented Multi-Party Interaction (AMI) meeting corpus are shown to be meaningful using human annotation with multiple observers. Our method facilitates “group behavior-based” retrieval of group conversational segments without the need of any previous labeling.",
"Nonverbal communication plays an important role in many aspects of our lives, such as in job interviews, where vis-à-vis conversations take place. This paper proposes a method to automatically detect body communicative cues by using video sequences of the upper body of individuals in a conversational context. To our knowledge, our work brings novelty by explicitly addressing the recognition of visual activity in a seated, conversational setting from monocular video, compared to most existing work in video-based motion capture, which targets full-body with lower limb activities. We first detect the person hands in the sequence by searching for the higher speed parts along the whole video. Then, aided by training a set of typical conversational movements, we infer the approximate 3D upper body pose, that we transfer to a low-dimensionality space in order to perform action recognition. We test our system in the context of job interviews, with several new databases that we make publicly available.",
"In this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features and discuss our experience in designing and collecting a data corpus for this purpose. The ELEA Audio-Visual Synchronized corpus (ELEA AVS) was collected using a light portable setup and contains recordings of small group meetings. The participants in each group performed the winter survival task and filled in questionnaires related to personality and several social concepts such as leadership and dominance. In addition, the corpus includes annotations on participants’ performance in the survival task, and also annotations of social concepts from external viewers. Based on this corpus, we present the feasibility of predicting the emergent leader in small groups using automatically extracted audio and visual features, based on speaking turns and visual attention, and we focus specifically on multimodal features that make use of the looking at participants while speaking and looking at while not speaking measures. Our findings indicate that emergent leadership is related, but not equivalent, to dominance, and while multimodal features bring a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel are found to give better performance.",
"We present an analysis on personality prediction in small groups based on trait attributes from external observers. We use a rich set of automatically extracted audio-visual nonverbal features, including speaking turn, prosodic, visual activity, and visual focus of attention features. We also investigate whether the thin sliced impressions of external observers generalize to the whole meeting in the personality prediction task. Using ridge regression, we have analyzed both the regression and classification performance of personality prediction. Our experiments show that the extraversion trait can be predicted with high accuracy in a binary classification task and visual activity features give higher accuracies than audio ones. The highest accuracy for the extraversion trait, is 75 , obtained with a combination of audio-visual features. Openness to experience trait also has a significant accuracy, only when the whole meeting is used as the unit of processing.",
"This paper introduces social signal processing (SSP), the domain aimed at automatic understanding of social interactions through analysis of nonverbal behavior. The core idea of SSP is that nonverbal behavior is machine detectable evidence of social signals, the relational attitudes exchanged between interacting individuals. Social signals include (dis-)agreement, empathy, hostility, and any other attitude towards others that is expressed not only by words but by nonverbal behaviors such as facial expression and body posture as well. Thus, nonverbal behavior analysis is used as a key to automatic understanding of social interactions. This paper presents not only a survey of the related literature and the main concepts underlying SSP, but also an illustrative example of how such concepts are applied to the analysis of conflicts in competitive discussions.",
"Persuasive communication is part of everyone's daily life. With the emergence of social websites like YouTube, Facebook and Twitter, persuasive communication is now seen online on a daily basis. This paper explores the effect of multi-modality and perceived personality on persuasiveness of social multimedia content. The experiments are performed over a large corpus of movie review clips from Youtube which is presented to online annotators in three different modalities: only text, only audio and video. The annotators evaluated the persuasiveness of each review across different modalities and judged the personality of the speaker. Our detailed analysis confirmed several research hypotheses designed to study the relationships between persuasion, perceived personality and communicative channel, namely modality. Three hypotheses are designed: the first hypothesis studies the effect of communication modality on persuasion, the second hypothesis examines the correlation between persuasion and personality perception and finally the third hypothesis, derived from the first two hypotheses explores how communication modality influence the personality perception.",
"Identifying emergent leaders in organizations is a key issue in organizational behavioral research, and a new problem in social computing. This paper presents an analysis on how an emergent leader is perceived in newly formed, small groups, and then tackles the task of automatically inferring emergent leaders, using a variety of communicative nonverbal cues extracted from audio and video channels. The inference task uses rule-based and collective classification approaches with the combination of acoustic and visual features extracted from a new small group corpus specifically collected to analyze the emergent leadership phenomenon. Our results show that the emergent leader is perceived by his/her peers as an active and dominant person; that visual information augments acoustic information; and that adding relational information to the nonverbal cues improves the inference of each participant's leadership rankings in the group.",
"The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing."
]
} |
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | Many of the aforementioned studies demonstrate that indicators of agreement during communication are highly dependent on social signals. As such, it is possible to perform an exhaustive analysis to detect the role played by each participant in terms of influence, dominance, or submission. For instance, in @cite_41 , both the interest of observers and the dominant participants are predicted solely on the basis of behavioral motion information when looking at face-to-face (also called or dyadic) interactions. Furthermore, there are many interdisciplinary, state-of-the-art studies examining related fields from the point of view of social computing, some of which are summarized in @cite_33 @cite_2 . | {
"cite_N": [
"@cite_41",
"@cite_33",
"@cite_2"
],
"mid": [
"",
"2165734786",
"1493363822"
],
"abstract": [
"",
"Although developers of communication-support tools have certainly tried to create products that support group thinking, they usually do so without adequately accounting for social context, so that all too often these systems are jarring and even downright rude. In fact, most people would agree that today's communication technology seems to be at war with human society. Technology must account for this by recognizing that communication is always socially situated and that discussions are not just words but part of a larger social dialogue. This web of social interaction forms a sort of collective intelligence; it is the unspoken shared understanding that enforces the dominance hierarchy and passes judgment about it. We have found nonlinguistic social signals to be particularly powerful for analyzing and predicting human behavior, sometimes exceeding even expert human capabilities. Psychologists have firmly established that social signals are a powerful determinant of human behavior and speculate that they might have evolved as a way to establish hierarchy and group cohesion.",
"How can you know when someone is bluffing? Paying attention? Genuinely interested? The answer, writes Sandy Pentland in Honest Signals, is that subtle patterns in how we interact with other people reveal our attitudes toward them. These unconscious social signals are not just a back channel or a complement to our conscious language; they form a separate communication network. Biologically based \"honest signaling,\" evolved from ancient primate signaling mechanisms, offers an unmatched window into our intentions, goals, and values. If we understand this ancient channel of communication, Pentland claims, we can accurately predict the outcomes of situations ranging from job interviews to first dates. Pentland, an MIT professor, has used a specially designed digital sensor worn like an ID badge, a \"sociometer\", to monitor and analyze the back-and-forth patterns of signaling among groups of people. He and his researchers found that this second channel of communication, revolving not around words but around social relations, profoundly influences major decisions in our lives, even though we are largely unaware of it. Pentland presents the scientific background necessary for understanding this form of communication, applies it to examples of group behavior in real organizations, and shows how by \"reading\" our social networks we can become more successful at pitching an idea, getting a job, or closing a deal. Using this \"network intelligence\" theory of social signaling, Pentland describes how we can harness the intelligence of our social network to become better managers, workers, and communicators."
]
} |
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | The participants involved in these conversational settings usually appear either sitting or with some parts of their body occluded. Therefore, from a computer vision point of view, it might be best just to focus only on the upper body regions. Then, visual feature extraction techniques can focus on the most significant sources of information coming from the region of interest, which might be the face or hands, for example. These regions provide discriminative behavioral information, or adaptors, which are movements, such as head scratching, indicative of attitude, anxiety level and self-confidence @cite_0 ; or beat gestures, which are small baton-like movements of the hands used to emphasize important parts of speech with respect to the larger discourse @cite_14 . 
However, as explained in @cite_36 @cite_7 , body posture is also found to be an important indicator of a person's emotional state. Additionally, another potential source of information is provided by facial expressions @cite_40 @cite_49 @cite_42 @cite_23 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_42",
"@cite_0",
"@cite_40",
"@cite_49",
"@cite_23"
],
"mid": [
"1708911126",
"",
"2069462093",
"2123701509",
"2118163921",
"2139942200",
"2097128017",
"2138206939"
],
"abstract": [
"Gesturing is such an integral yet unconscious part of communication that we are mostly oblivious to it. But if you observe anyone in conversation, you are likely to see his or her fingers, hands, and arms in some form of spontaneous motion. Why? David McNeill, a pioneer in the ongoing study of the relationship between gesture and language, set about answering this question in \"Gesture and Thought\" with an unlikely accomplice - Tweety Bird. McNeill argues that gestures are active participants in both speaking and thinking. He posits that gestures are key ingredients in an \"imagery-language dialectic\" that fuels speech and thought; gestures are the \"imagery\" and also the components of \"language,\" rather than mere consequences. The smallest unit of this dialectic is the \"growth point,\" a snapshot of an utterance at its beginning psychological stage. Enter Tweety Bird. In \"Gesture and Thought\", the central growth point comes from a cartoon. In his quest to eat Tweety Bird, Sylvester the cat first scales the outside of a rain gutter to reach his prey. Unsuccessful, he makes a second attempt by climbing up the inside of the gutter. Tweety, however, drops a bowling ball down the gutter; Sylvester swallows the ball. Over the course of twenty-five years, McNeill showed this cartoon to numerous subjects who spoke a variety of languages. A fascinating pattern emerged. Those who remembered the exact sequence of the cartoon while retelling it all used the same gesture to describe Sylvester's position inside the gutter. Those who forgot, in the retelling, that Sylvester had first climbed the outside of the gutter did not use this gesture at all. Thus that gesture becomes part of the \"growth point\" - the building block of language and thought. 
An ambitious project in the ongoing study of the relationship between how we communicate and how we think, \"Gesture and Thought\" is a work of such consequence that it will influence all subsequent linguistic and evolutionary theory on the subject.",
"",
"Nonverbal communication plays an important role in many aspects of our lives, such as in job interviews, where vis-à-vis conversations take place. This paper proposes a method to automatically detect body communicative cues by using video sequences of the upper body of individuals in a conversational context. To our knowledge, our work brings novelty by explicitly addressing the recognition of visual activity in a seated, conversational setting from monocular video, compared to most existing work in video-based motion capture, which targets full-body with lower limb activities. We first detect the person hands in the sequence by searching for the higher speed parts along the whole video. Then, aided by training a set of typical conversational movements, we infer the approximate 3D upper body pose, that we transfer to a low-dimensionality space in order to perform action recognition. We test our system in the context of job interviews, with several new databases that we make publicly available.",
"In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by a HOG-based subject detection, face detection, and skin color model. Spatial information is included by Mean Shift clustering whereas temporal coherence is considered by the historical of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results over public datasets and in a new Human Limb dataset show a robust segmentation and recovery of both face and pose using the presented methodology.",
"What is the relation between gestures and speech? In terms of symbolic forms, of course, the spontaneous and unwitting gestures we make while talking differ sharply from spoken language itself. Whereas spoken language is linear, segmented, standardized, and arbitrary, gestures are global, synthetic, idiosyncratic, and imagistic. In Hand and Mind, David McNeill presents a bold theory of the essential unity of speech and the gestures that accompany it. This long-awaited, provocative study argues that the unity of gestures and language far exceeds the surface level of speech noted by previous researchers and in fact also includes the semantic and pragmatic levels of language. In effect, the whole concept of language must be altered to take into account the nonsegmented, instantaneous, and holistic images conveyed by gestures. McNeill and his colleagues carefully devised a standard methodology for examining the speech and gesture behavior of individuals engaged in narrative discourse. A research subject is shown a cartoon like the 1950 Canary Row--a classic Sylvester and Tweety Bird caper that features Sylvester climbing up a downspout, swallowing a bowling ball and slamming into a brick wall. After watching the cartoon, the subject is videotaped recounting the story from memory to a listener who has not seen the cartoon. Painstaking analysis of the videotapes revealed that although the research subjects--children as well as adults, some neurologically impaired--represented a wide variety of linguistic groupings, the gestures of people speaking English and a half dozen other languages manifest the same principles. Relying on data from more than ten years of research, McNeill shows that gestures do not simply form a part of what is said and meant but have an impact on thought itself. He persuasively argues that because gestures directly transfer mental images to visible forms, conveying ideas that language cannot always express, we must examine language and gesture",
"This paper introduces social signal processing (SSP), the domain aimed at automatic understanding of social interactions through analysis of nonverbal behavior. The core idea of SSP is that nonverbal behavior is machine detectable evidence of social signals, the relational attitudes exchanged between interacting individuals. Social signals include (dis-)agreement, empathy, hostility, and any other attitude towards others that is expressed not only by words but by nonverbal behaviors such as facial expression and body posture as well. Thus, nonverbal behavior analysis is used as a key to automatic understanding of social interactions. This paper presents not only a survey of the related literature and the main concepts underlying SSP, but also an illustrative example of how such concepts are applied to the analysis of conflicts in competitive discussions.",
"The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.",
"We propose a method for head-pose invariant facial expression recognition that is based on a set of characteristic facial points. To achieve head-pose invariance, we propose the Coupled Scaled Gaussian Process Regression (CSGPR) model for head-pose normalization. In this model, we first learn independently the mappings between the facial points in each pair of (discrete) nonfrontal poses and the frontal pose, and then perform their coupling in order to capture dependences between them. During inference, the outputs of the coupled functions from different poses are combined using a gating function, devised based on the head-pose estimation for the query points. The proposed model outperforms state-of-the-art regression-based approaches to head-pose normalization, 2D and 3D Point Distribution Models (PDMs), and Active Appearance Models (AAMs), especially in cases of unknown poses and imbalanced training data. To the best of our knowledge, the proposed method is the first one that is able to deal with expressive faces in the range from @math to @math pan rotation and @math to @math tilt rotation, and with continuous changes in head pose, despite the fact that training was conducted on a small set of discrete poses. We evaluate the proposed method on synthetic and real images depicting acted and spontaneously displayed facial expressions."
]
} |
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | In order to analyze these visual features automatically, most approaches are based on classic computer vision techniques applied to RGB data. However, extracting discriminative information from standard image sequences is sometimes unreliable. In this sense, recent studies have included compact multi-modal devices which allow partial @math information to be obtained from the scene. In @cite_3 , the authors proposed a system for real-time human pose recognition including depth information for each image pixel. In this case, information is obtained by means of a Kinect™ device, which estimates a depth map based on the inverse of the time response of an infrared sensor sampling the scene. 
This new source of information, which provides visual @math features, has recently been exploited to create new human pose descriptors that combine different state-of-the-art RGB-depth features @cite_48 , and is also used in a large number of Human-Computer Interaction (HCI) applications @cite_26 . | {
"cite_N": [
"@cite_48",
"@cite_26",
"@cite_3"
],
"mid": [
"",
"2071652770",
"2060280062"
],
"abstract": [
"",
"The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need of powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have been already proposed in the literature, the research of discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient since the SBSM complexity is linear to the number of object voxels. Experimental evaluation in public depth multiclass object data, 3-D facial expressions data, and a novel hand poses data sets show significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human computer interaction scenarios.",
"We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching."
]
} |
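The per-pixel, depth-based pose recognition cited above (@cite_3) rests on very cheap depth-comparison features: the depth map is read at two probe pixels whose offsets are normalized by the depth at the reference pixel, which makes the feature roughly invariant to how far the subject stands from the sensor. A minimal sketch of such a feature (the function name and the constant used for off-image probes are illustrative, not taken from the cited work):

```python
import numpy as np

def depth_comparison_feature(depth, x, u, v, background=10.0):
    """f(I, x) = d(x + u / d(x)) - d(x + v / d(x)).

    `depth` is a 2-D depth map in meters, `x` a (row, col) pixel, and
    `u`, `v` pixel offsets; dividing the offsets by the depth at `x`
    gives approximate invariance to the subject's distance."""
    def probe(offset):
        d_x = depth[x]  # depth at the reference pixel
        r = int(round(x[0] + offset[0] / d_x))
        c = int(round(x[1] + offset[1] / d_x))
        h, w = depth.shape
        if 0 <= r < h and 0 <= c < w:
            return depth[r, c]
        return background  # off-image probes read as far background
    return probe(u) - probe(v)
```

On a constant-depth region the feature is zero; across a body/background edge it is large, which is what makes thresholding it inside a decision forest effective.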
1412.2122 | 2949596139 | In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86 when predicting satisfaction, and 79 when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals. | Once data from the environment have been acquired and processed to define a set of behavioral features, they serve as the basis for modelling a set of communication indicators. For instance, in @cite_25 , the authors outline a system for real-time tracking of the human body with the objective of interpreting human behavior. However, in the context of conversations we are particularly interested in behavioral traits belonging to social signals captured in the communication and in the interactions between the participants in the VOM sessions. In this sense, levels of agitation (or energy), activity, stress, and engagement are analyzed not only from their body movements, but also from their speech, facial expressions, or gaze directions, in order to predict behavioral responses. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2140235142"
],
"abstract": [
"Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding."
]
} |
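The agitation (energy) indicators mentioned above are, in the simplest RGB-only case, often approximated by frame differencing: the more pixels change between consecutive frames, the more the subject is moving. A minimal sketch of such an indicator (the threshold value and the function name are arbitrary choices, not the measure actually used in the cited systems):

```python
import numpy as np

def agitation_level(frames, threshold=0.1):
    """Fraction of pixels whose intensity changes by more than `threshold`
    between consecutive frames, averaged over the whole sequence.

    Returns a value in [0, 1]; 0 corresponds to a perfectly static scene."""
    frames = np.asarray(frames, dtype=float)
    if len(frames) < 2:
        return 0.0
    diffs = np.abs(frames[1:] - frames[:-1])        # per-pixel change
    moving = (diffs > threshold).mean(axis=(1, 2))  # moving fraction per step
    return float(moving.mean())
```

In practice such a raw score would be computed per region (face, hands, torso) and smoothed over time before being fed to a classifier.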
1412.2433 | 2953259750 | Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring them to rely on the service provider and reveal their interest in each other. This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system - e.g., discovering 70 of all paths with only 40 of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps. | @cite_20 present a privacy-preserving social matching protocol based on property-preserving encryption (PPE), which, however, relies on a centralized approach. @cite_49 then propose a set of protocols for privacy-preserving matching of attribute sets of different OSN users. Similar works include @cite_30 , @cite_26 , and @cite_23 . Private friend discovery has also been investigated in @cite_8 and @cite_37 , which do not provide authenticity, as they are vulnerable to malicious users claiming non-existent friendships. While @cite_28 addresses the authenticity problem, it unfortunately comes at the cost of relatively expensive cryptographic techniques (specifically, a number of modular exponentiations linear in the size of friend lists and a quadratic number of modular multiplications). | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_23",
"@cite_49",
"@cite_20"
],
"mid": [
"2147866554",
"2144644951",
"2076195444",
"2165633818",
"",
"2114434656",
"",
""
],
"abstract": [
"Mobile social networks extend social networks in the cyberspace into the real world by allowing mobile users to discover and interact with existing and potential friends who happen to be in their physical vicinity. Despite their promise to enable many exciting applications, serious security and privacy concerns have hindered wide adoption of these networks. To address these concerns, in this paper we develop novel techniques and protocols to compute social proximity between two users to discover potential friends, which is an essential task for mobile social networks.We make three major contributions. First, we identify a range of potential attacks against friend discovery by analyzing real traces. Second, we develop a novel solution for secure proximity estimation, which allows users to identify potential friends by computing social proximity in a privacy-preserving manner. A distinctive feature of our solution is that it provides both privacy and verifiability, which are frequently at odds in secure multiparty computation. Third, we demonstrate the feasibility and effectiveness of our approaches using real implementation on smartphones and show it is efficient in terms of both computation time and power consumption.",
"Recently, mobile social software has become an active area of research and development. A multitude of systems have been proposed over the past years that try to follow the success of their Internet bound equivalents. Many mobile solutions try to augment the functionality of existing platforms with location awareness. The price for mobility, however, is typically either the lack of the popular friendship exploration features or the costs involved to access a central server required for this functionality. In this paper, we try to address this issue by introducing a decentralized method that is able to explore the social neighborhood of a user by detecting friends of friends. Rather than only exploiting information about the users of the system, the method relies on real friends, and adequately addresses the arising privacy issues. Moreover, we present VENETA, a mobile social networking platform which, among other features, implements our novel friend of friend detection algorithm.",
"Proximity-based mobile social networking (PMSN) refers to the social interaction among physically proximate mobile users directly through the Bluetooth WiFi interfaces on their smartphones or other mobile devices. It becomes increasingly popular due to the recently explosive growth of smartphone users. Profile matching means two users comparing their personal profiles and is often the first step towards effective PMSN. It, however, conflicts with users' growing privacy concerns about disclosing their personal profiles to complete strangers before deciding to interact with them. This paper tackles this open challenge by designing a suite of novel fine-grained private matching protocols. Our protocols enable two users to perform profile matching without disclosing any information about their profiles beyond the comparison result. In contrast to existing coarse-grained private matching schemes for PMSN, our protocols allow finer differentiation between PMSN users and can support a wide range of matching metrics at different privacy levels. The security and communication computation overhead of our protocols are thoroughly analyzed and evaluated via detailed simulations.",
"Smartphones are becoming some of our most trusted computing devices. People use them to store highly sensitive information including email, passwords, financial accounts, and medical records. These properties make smartphones an essential platform for privacy-preserving applications. To date, this area remains largely unexplored mainly because privacy-preserving computation protocols were thought to be too heavyweight for practical applications, even for standard desktops. We propose using smartphones to perform secure multi-party computation. The limitations of smartphones provide a number of challenges for building such applications. In this paper, we introduce the issues that make smartphones a unique platform for secure computation, identify some interesting potential applications, and describe our initial experiences creating privacy-preserving applications on Android devices.",
"",
"Many proximity-based mobile social networks are developed to facilitate connections between any two people, or to help a user to find people with a matched profile within a certain distance. A challenging task in these applications is to protect the privacy of the participants' profiles and personal interests. In this paper, we design novel mechanisms, when given a preference-profile submitted by a user, that search persons with matching-profile in decentralized multi-hop mobile social networks. Our mechanisms also establish a secure communication channel between the initiator and matching users at the time when the matching user is found. Our rigorous analysis shows that our mechanism is privacy-preserving (no participants' profile and the submitted preference-profile are exposed), verifiable (both the initiator and the unmatched user cannot cheat each other to pretend to be matched), and efficient in both communication and computation. Extensive evaluations using real social network data, and actual system implementation on smart phones show that our mechanisms are significantly more efficient than existing solutions.",
"",
""
]
} |
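At their core, the matching protocols surveyed above compute a set intersection over friend lists or attribute sets without revealing the non-matching items. The sketch below illustrates only that core idea under an honest-but-curious model with a pre-shared session key (an assumption for the example; the cited protocols use public-key techniques such as PSI precisely to avoid a shared key and to resist offline guessing of low-entropy attributes):

```python
import hashlib
import hmac

def blind(shared_key: bytes, attributes):
    """Map each attribute to its HMAC tag; without the key, the tags
    reveal nothing beyond equality (for high-entropy attributes)."""
    return {hmac.new(shared_key, a.encode(), hashlib.sha256).hexdigest(): a
            for a in attributes}

def match(shared_key: bytes, mine, their_tags):
    """Return one's own attributes whose tags also appear on the other side."""
    own = blind(shared_key, mine)
    return sorted(a for tag, a in own.items() if tag in their_tags)
```

Each party exchanges only its tag set, so non-matching attributes stay hidden from the peer; note that this sketch also inherits the authenticity problem discussed above, since nothing stops a party from tagging friendships it does not actually have.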
1412.2433 | 2953259750 | Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring them to rely on the service provider and reveal their interest in each other. This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system - e.g., discovering 70 of all paths with only 40 of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps. | SmokeScreen @cite_29 , SMILE @cite_18 , and PIKE @cite_53 support secure private device-to-device handshakes and proximity-based communication. SDDR @cite_33 allows a device to establish a secure encounter -- i.e., a secret key -- with every device in short radio range, and can be used to recognize previously encountered users while providing strong unlinkability guarantees. The EnCore platform @cite_42 builds on SDDR to provide privacy-preserving interaction between nearby devices, as well as event-based communication for mobile social applications. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_29",
"@cite_42",
"@cite_53"
],
"mid": [
"2139223971",
"",
"2142514943",
"1972656783",
"2139483853"
],
"abstract": [
"Conventional mobile social services such as Loopt and Google Latitude rely on two classes of trusted relationships: participants trust a centralized server to manage their location information and trust between users is based on existing social relationships. Unfortunately, these assumptions are not secure or general enough for many mobile social scenarios: centralized servers cannot always be relied upon to preserve data confidentiality, and users may want to use mobile social services to establish new relationships. To address these shortcomings, this paper describes SMILE, a privacy-preserving \"missed-connections\" service in which the service provider is untrusted and users are not assumed to have pre-established social relationships with each other. At a high-level, SMILE uses short-range wireless communication and standard cryptographic primitives to mimic the behavior of users in existing missed-connections services such as Craigslist: trust is founded solely on anonymous users' ability to prove to each other that they shared an encounter in the past. We have evaluated SMILE using protocol analysis, an informal study of Craigslist usage, and experiments with a prototype implementation and found it to be both privacy-preserving and feasible.",
"",
"Presence-sharing is an emerging platform for mobile applications, but presence-privacy remains a challenge. Privacy controls must be flexible enough to allow sharing between both trusted social relations and untrusted strangers. In this paper, we present a system called SmokeScreen that provides flexible and power-efficient mechanisms for privacy management. Broadcasting clique signals, which can only be interpreted by other trusted users, enables sharing between social relations; broadcasting opaque identifiers (OIDs), which can only be resolved to an identity by a trusted broker, enables sharing between strangers. Computing these messages is power-efficient since they can be pre-computed with acceptable storage costs. In evaluating these mechanisms we first analyzed traces from an actual presence-sharing application. Four months of traces provide evidence of anonymous snooping, even among trusted users. We have also implemented our mechanisms on two devices and found the power demands of clique signals and OIDs to be reasonable. A mobile phone running our software can operate for several days on a single charge.",
"Mobile social apps provide sharing and networking opportunities based on a user's location, activity, and set of nearby users. A platform for these apps must meet a wide range of communication needs while ensuring users' control over their privacy. In this paper, we introduce EnCore, a mobile platform that builds on secure encounters between pairs of devices as a foundation for privacy-preserving communication. An encounter occurs whenever two devices are within Bluetooth radio range of each other, and generates a unique encounter ID and associated shared key. EnCore detects nearby users and resources, bootstraps named communication abstractions called events for groups of proximal users, and enables communication and sharing among event participants, while relying on existing network, storage and online social network services. At the same time, EnCore puts users in control of their privacy and the confidentiality of the information they share. Using an Android implementation of EnCore and an app for event-based communication and sharing, we evaluate EnCore's utility using a live testbed deployment with 35 users.",
"Online collaboration tools such as Google+, Face-book or Dropbox have become an important and ubiquitous mediator of many human interactions. In the virtual world, they enable secure interaction by controlling access to shared resources. Yet relying on them to support synchronous direct interactions, such as face-to-face meetings, might be suboptimal as they require reliable online connectivity and even then often introduce delays. A much more efficient way of co-located resource sharing is the use of local communications, such as ad-hoc WiFi. Yet setting up the necessary encryption and authentication mechanisms is often cumbersome. In this paper, we present PIKE, a key exchange protocol that minimizes this configuration effort. PIKE piggybacks the exchange of keys on top of an existing service infrastructure. To support encryption or authentication without Internet connection, PIKE relies on triggers for upcoming personal interactions and exchanges keys before they take place. To evaluate PIKE, we present two example applications and we perform an experimental as well as an analytical analysis of its characteristics. The evaluation indicates that PIKE is broadly applicable, scales well enough to support larger events and provides a level of security that is (at least) comparable to the one provided by the underlying service."
]
} |
1412.2433 | 2953259750 | Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring them to rely on the service provider and reveal their interest in each other. This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system - e.g., discovering 70 of all paths with only 40 of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps. | @cite_14 present a routing protocol (called SimBet) for delay-tolerant networks (DTNs) based on social network data. Their protocol attempts to identify a routing bridge node based on the concepts of centrality and transitivity in social networks. @cite_31 design another DTN routing protocol (called Social Selfishness Aware Routing) which takes into account users' social selfishness and their willingness to forward data only to nodes with sufficiently strong social ties. Other works @cite_24 @cite_48 @cite_27 also propose adjusting message forwarding based on social metrics. | {
"cite_N": [
"@cite_14",
"@cite_48",
"@cite_24",
"@cite_27",
"@cite_31"
],
"mid": [
"2082674813",
"2143374307",
"2135712710",
"2110950500",
""
],
"abstract": [
"Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.",
"Multihop data delivery through vehicular ad hoc networks is complicated by the fact that vehicular networks are highly mobile and frequently disconnected. To address this issue, we adopt the idea of carry and forward, where a moving vehicle carries a packet until a new vehicle moves into its vicinity and forwards the packet. Being different from existing carry and forward solutions, we make use of predictable vehicle mobility, which is limited by traffic pattern and road layout. Based on the existing traffic pattern, a vehicle can find the next road to forward the packet to reduce the delay. We propose several vehicle-assisted data delivery (VADD) protocols to forward the packet to the best road with the lowest data-delivery delay. Experimental results show that the proposed VADD protocols outperform existing so- lutions in terms of packet-delivery ratio, data packet delay, and protocol overhead. Among the proposed VADD protocols, the Hybrid Probe (H-VADD) protocol has a much better performance.",
"In this paper we seek to improve our understanding of human mobility in terms of social structures, and to use these structures in the design of forwarding algorithms for Pocket Switched Networks (PSNs). Taking human mobility traces from the real world, we discover that human interaction is heterogeneous both in terms of hubs (popular individuals) and groups or communities. We propose a social based forwarding algorithm, BUBBLE, which is shown empirically to improve the forwarding efficiency significantly compared to oblivious forwarding schemes and to PROPHET algorithm. We also show how this algorithm can be implemented in a distributed way, which demonstrates that it is applicable in the decentralised environment of PSNs.",
"Content sharing through vehicle-to-vehicle communication can help people find their interested content on the road. In VANETs, due to limited contact duration time and the unreliable wireless connection, a vehicle can only get the useful data when it meets the vehicle which has the exactly matching data. However, the probability of such cases is very low. To improve the performance of content sharing in intermittently connected VANETs, we propose a novel P2P content sharing scheme called Roadcast. Roadcast relaxes user's query requirement a little bit so that each user can have more chances to get the requested content quickly. Furthermore, Roadcast ensures popular data is more likely to be shared with other vehicles so that the performance of overall query delay can be improved. Roadcast consists of two components called popularity aware content retrieval and popularity aware data replacement. The popularity aware content retrieval scheme makes use of Information Retrieval (IR) techniques to find the most relevant data towards user's query, but significantly different from IR techniques by taking the data popularity factor into consideration. The popularity aware data replacement algorithm ensures that the density of different data is proportional to the square-root of their popularity in the system steady state, which firmly obeys the optimal \"square-root\" replication rule [6]. Results based on real city map and real traffic model show that Roadcast outperforms other content sharing schemes in VANETs.",
""
]
} |
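SimBet's betweenness component is computed locally, from the ego network alone: a node mediates between each pair of its contacts that are not directly linked, discounted by how many other common neighbours that pair has. A sketch of this ego-betweenness calculation (SimBet additionally combines it with a similarity utility, omitted here; the graph encoding is an assumption of this example):

```python
def ego_betweenness(ego, adjacency):
    """Sum, over pairs of the ego's contacts that are not directly
    connected, of 1/k, where k is the number of common neighbours the
    pair has inside the ego network (the ego itself included).

    `adjacency` maps node -> set of neighbours (undirected graph)."""
    contacts = sorted(adjacency[ego])
    in_ego_net = set(contacts) | {ego}
    score = 0.0
    for i, a in enumerate(contacts):
        for b in contacts[i + 1:]:
            if b in adjacency[a]:  # directly linked: ego mediates nothing
                continue
            common = adjacency[a] & adjacency[b] & in_ego_net
            score += 1.0 / len(common)  # ego is always in `common`
    return score
```

For a star (no links among contacts) every pair is mediated solely by the ego, while a fully connected neighbourhood scores zero, matching the intuition that such a node is a poor bridge.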
1412.2433 | 2953259750 | Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly often used to let users base trust decisions on the existence and the strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring them to rely on the service provider and reveal their interest in each other. This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system - e.g., discovering 70 of all paths with only 40 of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps. | OSN Properties. Another line of work has studied properties of OSNs. @cite_0 and @cite_4 study the structure of Facebook social graph, revealing that the average social path length suggested by the small world experiment" @cite_12 (i.e., six) does not apply for Facebook, as the majority of people are separated by a 4-hop path. @cite_21 define the relationship between tie strengths (i.e., the importance of a social relationship between two users) and various variables retrieved from the OSN social graph. @cite_32 , investigate the link between the tie strength definition (given by Granovotter @cite_17 ) and a composition of factors describing the emotional closeness in online relationships. They demonstrate the existence of the (i.e., the maximum number of people a user can actively interact with) for Facebook. 
In follow-up work @cite_46 @cite_13, they also show the existence of four hierarchical layers of social relationships inside ego networks. Existence of the Dunbar number is also shown for Twitter in @cite_50. Finally, Saramäki @cite_3 find an uneven distribution of tie strengths within ego networks that is characterized by the presence of a few strong ties and a majority of weak ties. | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_50",
"@cite_46",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2097662241",
"1965056875",
"2112392909",
"",
"1928223220",
"",
"2168092462",
"1573579329",
"2109469951"
],
"abstract": [
"",
"We have friends we consider very close and acquaintances we barely know. The social sciences use the term tie strength to denote this differential closeness with the people in our lives. In this paper, we explore how well a tie strength model developed for one social medium adapts to another. Specifically, we present a Twitter application called We Meddle which puts a Facebook tie strength model at the core of its design. We Meddle estimated tie strengths for more than 200,000 online relationships from people in 52 countries. We focus on the mapping of Facebook relational features to relational features in Twitter. By examining We Meddle's mistakes, we find that the Facebook tie strength model largely generalizes to Twitter. This is early evidence that important relational properties may manifest similarly across different social media, a finding that would allow new social media sites to build around relational findings from old ones.",
"The widespread use of online social networks, such as Facebook and Twitter, is generating a growing amount of accessible data concerning social relationships. The aim of this work is twofold. First, we present a detailed analysis of a real Facebook data set aimed at characterising the properties of human social relationships in online environments. We find that certain properties of online social networks appear to be similar to those found “offline” (i.e., on human social networks maintained without the use of social networking sites). Our experimental results indicate that on Facebook there is a limited number of social relationships an individual can actively maintain and this number is close to the well-known Dunbar’s number (150) found in offline social networks. Second, we also present a number of linear models that predict tie strength (the key figure to quantitatively represent the importance of social relationships) from a reduced set of observable Facebook variables. Specifically, we are able to predict with good accuracy (i.e., higher than 80 ) the strength of social ties by exploiting only four variables describing different aspects of users interaction on Facebook. We find that the recency of contact between individuals – used in other studies as the unique estimator of tie strength – has the highest relevance in the prediction of tie strength. Nevertheless, using it in combination with other observable quantities, such as indices about the social similarity between people, can lead to more accurate predictions",
"The social network maintained by a focal individual, or ego, is intrinsically dynamic and typically exhibits some turnover in membership over time as personal circumstances change. However, the consequences of such changes on the distribution of an ego’s network ties are not well understood. Here we use a unique 18-mo dataset that combines mobile phone calls and survey data to track changes in the ego networks and communication patterns of students making the transition from school to university or work. Our analysis reveals that individuals display a distinctive and robust social signature, captured by how interactions are distributed across different alters. Notably, for a given ego, these social signatures tend to persist over time, despite considerable turnover in the identity of alters in the ego network. Thus, as new network members are added, some old network members either are replaced or receive fewer calls, preserving the overall distribution of calls across network members. This is likely to reflect the consequences of finite resources such as the time available for communication, the cognitive and emotional effort required to sustain close relationships, and the ability to make emotional investments.",
"",
"Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.",
"",
"Online Social Networks are amongst the most important platforms for maintaining social relationships online, supporting content generation and exchange between users. They are therefore natural candidate to be the basis of future humancentric networks and data exchange systems, in addition to novel forms of Internet services exploiting the properties of human social relationships. Understanding the structural properties of OSN and how they are influenced by human behaviour is thus fundamental to design such human-centred systems. In this paper we analyse a real Twitter data set to investigate whether well known structures of human social networks identified in \"offline\" environments can also be identified in the social networks maintained by users on Twitter. According to the well known model proposed by Dunbar, offline social networks are formed of circles of relationships having different social characteristics (e.g., intimacy, contact frequency and size). These circles can be directly ascribed to cognitive constraints of human brain, that impose limits on the number of social relationships maintainable at different levels of emotional closeness. Our results indicate that a similar structure can also be found in the Twitter users' social networks. This suggests that the structure of social networks also in online environments are controlled by the same cognitive properties of human brain that operate offline.",
"",
"Analysis of social networks is suggested as a tool for linking micro and macro levels of sociological theory. The procedure is illustrated by elaboration of the macro implications of one aspect of small-scale interaction: the strength of dyadic ties. It is argued that the degree of overlap of two individuals' friendship networks varies directly with the strength of their tie to one another. The impact of this principle on diffusion of influence and information, mobility opportunity, and community organization is explored. Stress is laid on the cohesive power of weak ties. Most network models deal, implicitly, with strong ties, thus confining their applicability to small, well-defined groups. Emphasis on weak ties lends itself to discussion of relations between groups and to analysis of segments of social structure not easily defined in terms of primary groups."
]
} |
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | Timestamp ordering is a concurrency control technique in which the serialization order of transactions is determined by assigning transactions monotonically increasing timestamps @cite_16 @cite_9 . In order to commit, conflicting transactions must execute in an order that is consistent with their timestamps. Reed designed a multi-version concurrency control protocol based on timestamp ordering @cite_27 . 
Unlike single-version timestamp ordering, reads always succeed, but readers may cause writers to abort, and the database needs to track the timestamp of each read in order to abort conflicting writers. In contrast, Bohm's design guarantees that reads never block writes and does not require any kind of tracking when a transaction reads the value of a record. By eliminating writes to shared memory, Bohm greatly improves multi-core scalability. | {
"cite_N": [
"@cite_27",
"@cite_9",
"@cite_16"
],
"mid": [
"2076627572",
"1963979953",
"2389944897"
],
"abstract": [
"Synchronization of accesses to shared data and recovering the state of such data in the case of failures are really two aspects of the same problem--implementing atomic actions on a related set of data items. In this paper a mechanism that solves both problems simultaneously in a way that is compatible with requirements of decentralized systems is described. In particular, the correct construction and execution of a new atomic action can be accomplished without knowledge of all other atomic actions in the system that might execute concurrently. Further, the mechanisms degrade gracefully if parts of the system fail: only those atomic actions that require resources in failed parts of the system are prevented from executing, and there is no single coordinator that can fail and bring down the whole system.",
"This paper presents the concurrency control strategy of SDD-1. SDD-1, a System for Distributed Databases, is a prototype distributed database system being developed by Computer Corporation of America. In SDD-1, portions of data distributed throughout a network may be replicated at multiple sites. The SDD-1 concurrency control guarantees database consistency in the face of such distribution and replication. This paper is one of a series of companion papers on SDD-1 [4, 10, 12, 21].",
"Concurrency control is necessary and important in any multiuser, especially distributed, database system. In the paper we have briefly discussed the questions about concurrency control in distributed database systems, its criterion of correctness, algorithms and techniques, and some questions related to its implementation such as deadlock, locks in relational databases, robustness and so on."
]
} |
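Reed's multiversion timestamp ordering, summarized above, can be illustrated with a toy record: a read always succeeds by returning the latest version no newer than the reader's timestamp, but each read is tracked so that a late-arriving writer with an earlier timestamp can be aborted. A minimal sketch (for brevity it tracks a single maximum read timestamp per record rather than one per version, which is stricter than necessary):

```python
class MVRecord:
    """One multiversioned record under Reed-style timestamp ordering."""

    def __init__(self):
        self.versions = [(0, None)]  # (write_ts, value) pairs
        self.max_read_ts = 0         # largest timestamp that has read this record

    def read(self, ts):
        """Reads always succeed: return the latest version written at or
        before ts, and remember ts so conflicting writers can be aborted."""
        self.max_read_ts = max(self.max_read_ts, ts)
        visible = [v for v in self.versions if v[0] <= ts]
        return max(visible, key=lambda v: v[0])[1]

    def write(self, ts, value):
        """A writer whose timestamp precedes an already-served read must
        abort (returns False); otherwise a new version is installed."""
        if ts < self.max_read_ts:
            return False
        self.versions.append((ts, value))
        return True
```

The `write(12, ...)` rejection below is exactly the reader-aborts-writer behavior the paragraph describes, and the kind of read-side bookkeeping (`max_read_ts`) that Bohm avoids.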
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | propose techniques for optimistic and pessimistic multi-version concurrency control in the context of main-memory databases @cite_8 . Their techniques address several limitations of traditional systems. For instance, their optimistic validation technique does not require the use of a global critical section. 
However, their design uses a global counter, accessible to many different threads, to generate timestamps, and thus inherits the scalability bottlenecks associated with contended global data structures. In contrast, Bohm avoids the use of a global counter to generate transactions' timestamps. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2011395086"
],
"abstract": [
"A database system optimized for in-memory storage can support much higher transaction rates than current systems. However, standard concurrency control methods used today do not scale to the high transaction rates achievable by such systems. In this paper we introduce two efficient concurrency control methods specifically designed for main-memory databases. Both use multiversioning to isolate read-only transactions from updates but differ in how atomicity is ensured: one is optimistic and one is pessimistic. To avoid expensive context switching, transactions never block during normal processing but they may have to wait before commit to ensure correct serialization ordering. We also implemented a main-memory optimized version of single-version locking. Experimental results show that while single-version locking works well when transactions are short and contention is low performance degrades under more demanding conditions. The multiversion schemes have higher overhead but are much less sensitive to hotspots and the presence of long-running transactions."
]
} |
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | propose an optimistic multi-version concurrency control protocol that minimizes version maintenance overhead @cite_19. Their protocol allows transactions' undo buffers to satisfy reads, minimizing the overhead of multi-versioning relative to single-version systems. However, their protocol's scalability is bottlenecked by the use of a contended global counter to generate transaction timestamps.
Furthermore, like the optimistic protocol of @cite_8, their protocol aborts reading transactions in the presence of read-write conflicts among concurrent transactions (Section ). | {
"cite_N": [
"@cite_19",
"@cite_8"
],
"mid": [
"2020129682",
"2011395086"
],
"abstract": [
"Multi-Version Concurrency Control (MVCC) is a widely employed concurrency control mechanism, as it allows for execution modes where readers never block writers. However, most systems implement only snapshot isolation (SI) instead of full serializability. Adding serializability guarantees to existing SI implementations tends to be prohibitively expensive. We present a novel MVCC implementation for main-memory database systems that has very little overhead compared to serial execution with single-version concurrency control, even when maintaining serializability guarantees. Updating data in-place and storing versions as before-image deltas in undo buffers not only allows us to retain the high scan performance of single-version systems but also forms the basis of our cheap and fine-grained serializability validation mechanism. The novel idea is based on an adaptation of precision locking and verifies that the (extensional) writes of recently committed transactions do not intersect with the (intensional) read predicate space of a committing transaction. We experimentally show that our MVCC model allows very fast processing of transactions with point accesses as well as read-heavy transactions and that there is little need to prefer SI over full serializability any longer.",
"A database system optimized for in-memory storage can support much higher transaction rates than current systems. However, standard concurrency control methods used today do not scale to the high transaction rates achievable by such systems. In this paper we introduce two efficient concurrency control methods specifically designed for main-memory databases. Both use multiversioning to isolate read-only transactions from updates but differ in how atomicity is ensured: one is optimistic and one is pessimistic. To avoid expensive context switching, transactions never block during normal processing but they may have to wait before commit to ensure correct serialization ordering. We also implemented a main-memory optimized version of single-version locking. Experimental results show that while single-version locking works well when transactions are short and contention is low performance degrades under more demanding conditions. The multiversion schemes have higher overhead but are much less sensitive to hotspots and the presence of long-running transactions."
]
} |
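The undo-buffer technique described above (letting transactions' undo buffers satisfy reads) can be illustrated with a toy record that is updated in place while before-images accumulate in an undo chain; a reader with an older snapshot rolls back the writes newer than its snapshot. This is a heavily simplified sketch of the idea, not the cited system's implementation:

```python
class Record:
    """In-place updates with before-image deltas in an undo chain."""

    def __init__(self, value):
        self.value = value
        self.undo = []  # (write_ts, before_image), newest first

    def write(self, ts, new_value):
        """Update in place, saving the overwritten value as a delta."""
        self.undo.insert(0, (ts, self.value))
        self.value = new_value

    def read(self, snapshot_ts):
        """Start from the current value and undo every write that is
        newer than the reader's snapshot timestamp."""
        val = self.value
        for write_ts, before in self.undo:
            if write_ts <= snapshot_ts:
                break
            val = before
        return val
```

Readers as of the newest timestamp pay nothing (they never touch the chain), which is what keeps the multi-versioning overhead close to a single-version system.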
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | Silo is a database system designed for main-memory, multicore machines @cite_2. Silo implements a variant of optimistic concurrency control and uses a decentralized, timestamp-based technique to validate transactions at commit time. Bohm shares some of Silo's design principles. For instance, it uses a low-contention technique to generate timestamps that decide the relative ordering of conflicting transactions.
Unlike Silo, Bohm does not use optimistic concurrency control; thus, it is able to perform much better on high-contention workloads, for which optimistic concurrency control leads to many aborts. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2141710443"
],
"abstract": [
"Silo is a new in-memory database that achieves excellent performance and scalability on modern multicore machines. Silo was designed from the ground up to use system memory and caches efficiently. For instance, it avoids all centralized contention points, including that of centralized transaction ID assignment. Silo's key contribution is a commit protocol based on optimistic concurrency control that provides serializability while avoiding all shared-memory writes for records that were only read. Though this might seem to complicate the enforcement of a serial order, correct logging and recovery is provided by linking periodically-updated epochs with the commit protocol. Silo provides the same guarantees as any serializable database without unnecessary scalability bottlenecks or much additional latency. Silo achieves almost 700,000 transactions per second on a standard TPC-C workload mix on a 32-core machine, as well as near-linear scalability. Considered per core, this is several times higher than previously reported results."
]
} |
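Silo's decentralized timestamp technique, referenced above, avoids a contended shared counter: each worker generates a transaction ID on its own that exceeds its previous ID and every ID the transaction observed, with the current epoch in the high bits. A rough sketch of that scheme; the bit layout, class, and method names here are illustrative assumptions, not Silo's actual code:

```python
EPOCH_BITS = 32  # assumed split: high bits hold the epoch number

class Worker:
    """Per-worker transaction-ID generation with no shared counter."""

    def __init__(self):
        self.last_tid = 0

    def next_tid(self, epoch, observed_tids):
        # The new TID must exceed every TID this transaction observed in
        # its read/write set, exceed the worker's own previous TID, and
        # fall within the current epoch.
        floor = max([self.last_tid, *observed_tids]) + 1
        tid = max(floor, epoch << EPOCH_BITS)
        self.last_tid = tid
        return tid
```

Because each worker only writes its own `last_tid`, no cache line is shared between cores, which is the scalability property the row above credits to Silo.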
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | propose a data-oriented architecture (DORA) in order to eliminate the impact of contended accesses to shared memory by transaction execution threads @cite_35 . DORA partitions a database among several physical cores of a multi-core system and executes a disjoint subset of each transaction's logic on multiple threads, a form of intra-transaction parallelism. 
Bohm uses intra-transaction parallelism to decide the order in which transactions must execute. However, the execution of a transaction's logic occurs on a single thread. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2101995353"
],
"abstract": [
"While hardware technology has undergone major advancements over the past decade, transaction processing systems have remained largely unchanged. The number of cores on a chip grows exponentially, following Moore's Law, allowing for an ever-increasing number of transactions to execute in parallel. As the number of concurrently-executing transactions increases, contended critical sections become scalability burdens. In typical transaction processing systems the centralized lock manager is often the first contended component and scalability bottleneck. In this paper, we identify the conventional thread-to-transaction assignment policy as the primary cause of contention. Then, we design DORA, a system that decomposes each transaction to smaller actions and assigns actions to threads based on which data each action is about to access. DORA's design allows each thread to mostly access thread-local data structures, minimizing interaction with the contention-prone centralized lock manager. Built on top of a conventional storage engine, DORA maintains all the ACID properties. Evaluation of a prototype implementation of DORA on a multicore system demonstrates that DORA attains up to 4.8x higher throughput than a state-of-the-art storage engine when running a variety of synthetic and real-world OLTP workloads."
]
} |
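DORA's thread-to-data assignment, described above, can be sketched by routing each action of a transaction to the worker that owns the touched key, so each worker operates mostly on thread-local structures rather than a centralized lock manager. The partitioning function and queue layout below are illustrative assumptions (integer keys, modulo partitioning), not DORA's actual routing rule:

```python
from collections import defaultdict

N_WORKERS = 4  # assumed number of cores/workers

def owner(key):
    """Each key is owned by exactly one worker (hash partitioning)."""
    return key % N_WORKERS

def dispatch(actions):
    """Split one transaction's actions across the owning workers' input
    queues (data-oriented assignment, simplified): actions on keys owned
    by different workers run on different threads."""
    queues = defaultdict(list)
    for key, action in actions:
        queues[owner(key)].append((key, action))
    return queues
```

A transaction touching keys in several partitions is thus executed as cooperating per-worker action lists, the intra-transaction parallelism the paragraph refers to.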
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | propose techniques for improving the scalability of lock-managers @cite_1 . Their design includes the pervasive use of the read-after-write pattern @cite_32 in order to avoid repeatedly bouncing'' cache-lines due to cache-coherence @cite_20 @cite_13 . In addition, to avoid the cost of reference counting locks, they use a technique to de-allocate locks in batches. 
Bohm similarly refrains from using reference counters to garbage-collect versions of records that are no longer visible to transactions. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_32",
"@cite_20"
],
"mid": [
"2001738739",
"2152211001",
"2163121173",
"2069278684"
],
"abstract": [
"Busy-wait techniques are heavily used for mutual exclusion and barrier synchronization in shared-memory parallel programs. Unfortunately, typical implementations of busy-waiting tend to produce large amounts of memory and interconnect contention, introducing performance bottlenecks that become markedly more pronounced as applications scale. We argue that this problem is not fundamental, and that one can in fact construct busy-wait synchronization algorithms that induce no memory or interconnect contention. The key to these algorithms is for every processor to spin on separate locally-accessible flag variables, and for some other processor to terminate the spin with a single remote write operation at an appropriate time. Flag variables may be locally-accessible as a result of coherent caching, or by virtue of allocation in the local portion of physically distributed shared memory. We present a new scalable algorithm for spin locks that generates O(1) remote references per lock acquisition, independent of the number of processors attempting to acquire the lock. Our algorithm provides reasonable latency in the absence of contention, requires only a constant amount of space per lock, and requires no hardware support other than a swap-with-memory instruction. We also present a new scalable barrier algorithm that generates O(1) remote references per processor reaching the barrier, and observe that two previously-known barriers can likewise be cast in a form that spins only on locally-accessible flag variables. None of these barrier algorithms requires hardware support beyond the usual atomicity of memory reads and writes. We compare the performance of our scalable algorithms with other software approaches to busy-wait synchronization on both a Sequent Symmetry and a BBN Butterfly. Our principal conclusion is that contention due to synchronization need not be a problem in large-scale shared-memory multiprocessors. The existence of scalable algorithms greatly weakens the case for costly special-purpose hardware support for synchronization, and provides a case against so-called “dance hall” architectures, in which shared memory locations are equally far from all processors. — From the Authors' Abstract",
"Modern implementations of DBMS software are intended to take advantage of high core counts that are becoming common in high-end servers. However, we have observed that several database platforms, including MySQL, Shore-MT, and a commercial system, exhibit throughput collapse as load increases into oversaturation (where there are more request threads than cores), even for a workload with little or no logical contention for locks, such as a read-only workload. Our analysis of MySQL identifies latch contention within the lock manager as the bottleneck responsible for this collapse. We design a lock manager with reduced latching, implement it in MySQL, and show that it avoids the collapse and generally improves performance. Our efficient implementation of a lock manager is enabled by a staged allocation and deallocation of locks. Locks are preallocated in bulk, so that the lock manager only has to perform simple list manipulation operations during the acquire and release phases of a transaction. Deallocation of the lock data structures is also performed in bulk, which enables the use of fast implementations of lock acquisition and release as well as concurrent deadlock checking.",
"Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad-hoc, time consuming and error-prone, but it often leaves designers pondering the question of: is it inherently impossible to eliminate certain synchronization, or is it that I was unable to eliminate it on this attempt and I should keep trying? In this paper we respond to this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications such as sets, queues, stacks, mutual exclusion and read-modify-write operations, that completely eliminate the use of expensive synchronization. We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to shared variable A is followed by a read to a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations. Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory ordering--also called fence or barrier--instructions must be used. To enforce AWAR, atomic instructions such as compare-and-swap are required. However, these instructions are typically substantially slower than regular instructions. Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible. On the flip side, our result can be used to guide designers towards new algorithms where RAW and AWAR can be eliminated.",
"The author examines the questions of whether there are efficient algorithms for software spin-waiting given hardware support for atomic instructions, or whether more complex kinds of hardware support are needed for performance. He considers the performance of a number of software spin-waiting algorithms. Arbitration for control of a lock is in many ways similar to arbitration for control of a network connecting a distributed system. He applies several of the static and dynamic arbitration methods originally developed for networks to spin locks. A novel method is proposed for explicitly queueing spinning processors in software by assigning each a unique number when it arrives at the lock. Control of the lock can then be passed to the next processor in line with minimal effect on other processors. >"
]
} |
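The two spin-lock abstracts above share one mechanism: give each waiter a unique slot and let it spin on its own private flag, so that a release performs a single write to hand the lock to its successor. The sketch below is an illustrative array-based queue lock in Python (class and method names are my own, not from either paper); `itertools.count` stands in for a hardware fetch-and-increment, and `time.sleep(0)` stands in for spinning on a cache-local flag, since Python's GIL makes this a simulation rather than a real contention experiment.

```python
import itertools
import threading
import time

class ArrayQueueLock:
    """Each acquirer takes a unique slot and spins on its own flag,
    so a release touches exactly one other waiter's flag (the O(1)
    remote-reference property described in the abstracts above)."""
    def __init__(self, nslots=64):
        self.flags = [False] * nslots
        self.flags[0] = True              # slot 0 starts out "granted"
        self.ticket = itertools.count()   # stands in for fetch-and-increment
        self.nslots = nslots

    def acquire(self):
        slot = next(self.ticket) % self.nslots
        while not self.flags[slot]:       # spin on a private flag
            time.sleep(0)                 # yield; real code spins in cache
        self.flags[slot] = False          # reset the slot for reuse
        return slot

    def release(self, slot):
        # Wake exactly one successor with a single write.
        self.flags[(slot + 1) % self.nslots] = True
```

Note this toy version breaks if more than `nslots` threads wait at once; real array-based and MCS locks handle that with per-waiter queue nodes.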
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | Latch contention on high-level intention locks has been identified as a scalability bottleneck in multi-core databases @cite_12 , and Speculative Lock Inheritance (SLI) was proposed as a technique to reduce the number of contended latch acquisitions. 
SLI effectively amortizes the cost of contended latch acquisitions across a batch of transactions by passing locks from transaction to transaction without requiring calls to the lock manager. Bohm similarly amortizes synchronization across batches of transactions in order to scale concurrency control. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2144430183"
],
"abstract": [
"Transaction processing workloads provide ample request level concurrency which highly parallel architectures can exploit. However, the resulting heavy utilization of core database services also causes resource contention within the database engine itself and limits scalability. Meanwhile, many database workloads consist of short transactions which access only a few database records each, often with stringent response time requirements. Performance of these short transactions is determined largely by the amount of overhead the database engine imposes for services such as logging, locking, and transaction management. This paper highlights the negative scalability impact of database locking, an effect which is especially severe for short transactions running on highly concurrent multicore hardware. We propose and evaluate Speculative Lock Inheritance, a technique where hot database locks pass directly from transaction to transaction, bypassing the lock manager bottleneck. We implement SLI in the Shore-MT storage manager and show that lock inheritance fundamentally improves scalability by decoupling the number of simultaneous requests for popular locks from the number of threads in the system, eliminating contention within the lock manager even as core counts continue to increase. We achieve this effect with only minor changes to the lock manager and without changes to consistency or other application-visible effects."
]
} |
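The "reads never block writes" property claimed in the abstract rests on the basic multi-version read path: writers install timestamped versions, and a read at timestamp t returns the latest version stamped at or before t, so readers take no locks and write nothing to shared memory. A minimal single-threaded sketch of that read path (my own names; this shows the general MVCC idea, not Bohm's actual protocol):

```python
import bisect

class MVStore:
    """Toy multi-version store: one sorted version chain per key."""
    def __init__(self):
        self.ts = {}    # key -> sorted list of version timestamps
        self.val = {}   # key -> values aligned with self.ts[key]

    def write(self, key, ts, value):
        # Install a new version stamped with the writer's timestamp.
        chain = self.ts.setdefault(key, [])
        i = bisect.bisect_left(chain, ts)
        chain.insert(i, ts)
        self.val.setdefault(key, []).insert(i, value)

    def read(self, key, ts):
        # Latest version with timestamp <= ts; no locks, no bookkeeping.
        i = bisect.bisect_right(self.ts.get(key, []), ts)
        return self.val[key][i - 1] if i else None
```

A reader positioned "in the past" simply sees an older version, which is why concurrent writers never block it.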
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | Calvin @cite_10 is a deterministic database system that executes transactions according to a pre-defined total order. Calvin uses deterministic transaction ordering to reduce the impact of distributed transactions on scalability. Furthermore, Calvin uses a modular architecture and separates key parts of concurrency control from transaction execution @cite_33 . 
Although similar to Bohm in its focus on scalability and modularity, Calvin is a single-versioned system and uses locking to avoid read-write and write-write conflicts, whereas Bohm is multi-versioned and ensures that reads do not block writes. Furthermore, Calvin is focused on horizontal shared-nothing scalability, while Bohm is focused on multi-core scalability. | {
"cite_N": [
"@cite_10",
"@cite_33"
],
"mid": [
"2060440895",
"2402836473"
],
"abstract": [
"Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"Calvin is a transaction scheduling and replication management layer for distributed storage systems. By first writing transaction requests to a durable, replicated log, and then using a concurrency control mechanism that emulates a deterministic serial execution of the log’s transaction requests, Calvin supports strongly consistent replication and fully ACID distributed transactions while incurring significantly lower inter-partition transaction coordination costs than traditional distributed database systems. Furthermore, Calvin’s declarative specification of target concurrency-control behavior allows system components to avoid interacting with actual transaction scheduling mechanisms—whereas in traditional DBMSs, the analogous components often have to explicitly observe concurrency control modules’ (highly nondeterministic) procedural behaviors in order to function correctly."
]
} |
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | Very lightweight locking (VLL) reduces lock-manager overhead by co-locating concurrency-control metadata with records @cite_37 . Unlike Bohm, VLL is not designed for systems with a large number of cores, because every transaction must execute a global critical section before it can execute. | {
"cite_N": [
"@cite_37"
],
"mid": [
"1814601622"
],
"abstract": [
"Locking is widely used as a concurrency control mechanism in database systems. As more OLTP databases are stored mostly or entirely in memory, transactional throughput is less and less limited by disk IO, and lock managers increasingly become performance bottlenecks. In this paper, we introduce very lightweight locking (VLL), an alternative approach to pessimistic concurrency control for main-memory database systems that avoids almost all overhead associated with traditional lock manager operations. We also propose a protocol called selective contention analysis (SCA), which enables systems implementing VLL to achieve high transactional throughput under high contention workloads. We implement these protocols both in a traditional single-machine multi-core database server setting and in a distributed database where data is partitioned across many commodity machines in a shared-nothing cluster. Our experiments show that VLL dramatically reduces locking overhead and thereby increases transactional throughput in both settings."
]
} |
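The VLL idea in the abstract above — storing the lock state with the record itself and touching the whole read/write set in one pass inside a global critical section — can be sketched as follows. This is a single-threaded simplification with invented names; the real system adds selective contention analysis (SCA) to decide when blocked transactions may run.

```python
class Record:
    """Lock state (cx = exclusive count, cs = shared count) is
    co-located with the record value itself, as in VLL."""
    __slots__ = ("value", "cx", "cs")
    def __init__(self, value=None):
        self.value, self.cx, self.cs = value, 0, 0

def request_locks(txn_writes, txn_reads, table):
    """One pass over the transaction's read/write set (performed inside
    a single global critical section in VLL). Returns True if the
    transaction obtained all locks and is 'free' to run immediately."""
    free = True
    for k in txn_writes:
        r = table[k]
        if r.cx or r.cs:      # someone else already holds this record
            free = False
        r.cx += 1
    for k in txn_reads:
        r = table[k]
        if r.cx:              # a writer holds this record
            free = False
        r.cs += 1
    return free

def release_locks(txn_writes, txn_reads, table):
    for k in txn_writes:
        table[k].cx -= 1
    for k in txn_reads:
        table[k].cs -= 1
```

Because lock acquisition is just counter increments on the records themselves, no separate lock-manager hash table is consulted.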
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | H-Store @cite_22 uses a shared-nothing architecture consisting of single-threaded partitions in order to reduce the impact of lock-manager overhead @cite_3 , and logging overhead @cite_24 . However, performance degrades rapidly if a workload contains multi-partition transactions. Furthermore, sub-optimal performance is observed if some partitions have more work to do than others. 
Bohm achieves scalability without hard-partitioning the data --- it is thus less susceptible to skew problems and does not suffer from the multi-partition transaction problem. | {
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_3"
],
"mid": [
"2071414195",
"1577169013",
"2054075931"
],
"abstract": [
"Fine-grained, record-oriented write-ahead logging, as exemplified by systems like ARIES, has been the gold standard for relational database recovery. In this paper, we show that in modern high-throughput transaction processing systems, this is no longer the optimal way to recover a database system. In particular, as transaction throughputs get higher, ARIES-style logging starts to represent a non-trivial fraction of the overall transaction execution time.",
"In previous papers [SC05, SBC+07], some of us predicted the end of \"one size fits all\" as a commercial relational DBMS paradigm. These papers presented reasons and experimental evidence that showed that the major RDBMS vendors can be outperformed by 1--2 orders of magnitude by specialized engines in the data warehouse, stream processing, text, and scientific database markets. Assuming that specialized engines dominate these markets over time, the current relational DBMS code lines will be left with the business data processing (OLTP) market and hybrid markets where more than one kind of capability is required. In this paper we show that current RDBMSs can be beaten by nearly two orders of magnitude in the OLTP market as well. The experimental evidence comes from comparing a new OLTP prototype, H-Store, which we have built at M.I.T. to a popular RDBMS on the standard transactional benchmark, TPC-C. We conclude that the current RDBMS code lines, while attempting to be a \"one size fits all\" solution, in fact, excel at nothing. Hence, they are 25 year old legacy code lines that should be retired in favor of a collection of \"from scratch\" specialized engines. The DBMS vendors (and the research community) should start with a clean sheet of paper and design systems for tomorrow's requirements, not continue to push code lines and architectures designed for yesterday's needs.",
"Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little. Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations."
]
} |
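The H-Store design above pins each partition to a single-threaded executor, so a transaction whose keys all fall in one partition runs without latching, while a multi-partition transaction must coordinate several executors. A toy classifier for that routing decision (hypothetical names; hash partitioning is an assumption, not something the abstracts specify):

```python
def partition_of(key, npartitions):
    # Assumed partitioning scheme: hash the key into a partition id.
    return hash(key) % npartitions

def classify(txn_keys, npartitions):
    """A txn touching one partition can run latch-free on that
    partition's single thread; otherwise it needs coordination."""
    parts = {partition_of(k, npartitions) for k in txn_keys}
    return "single-partition" if len(parts) == 1 else "multi-partition"
```

The performance cliff described in the related-work text is exactly the gap between these two classes: the second one serializes work across executors.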
1412.2324 | 2950501945 | Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any book-keeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees. | A technique for lazily evaluating transactions in deterministic database systems has been described @cite_11 . This lazy database design separates concurrency control from transaction execution --- a design element shared by Bohm. However, Bohm does not process transactions lazily, and it is far more scalable due to its use of intra-transaction parallelism and its avoidance of shared-memory writes on reads. 
Furthermore, Bohm is designed to be a generic multi-versioned concurrency control technique, and is motivated by existing limitations in multi-version concurrency control systems. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1988072050"
],
"abstract": [
"Existing database systems employ an transaction processing scheme---that is, upon receiving a transaction request, the system executes all the operations entailed in running the transaction (which typically includes reading database records, executing user-specified transaction logic, and logging updates and writes) before reporting to the client that the transaction has completed. We introduce a transaction execution engine, in which a transaction may be considered durably completed after only partial execution, while the bulk of its operations (notably all reads from the database and all execution of transaction logic) may be deferred until an arbitrary future time, such as when a user attempts to read some element of the transaction's write-set---all without modifying the semantics of the transaction or sacrificing ACID guarantees. Lazy transactions are processed deterministically, so that the final state of the database is guaranteed to be equivalent to what the state would have been had all transactions been executed eagerly. Our prototype of a lazy transaction execution engine improves temporal locality when executing related transactions, reduces peak provisioning requirements by deferring more non-urgent work until off-peak load times, and reduces contention footprint of concurrent transactions. However, we find that certain queries suffer increased latency, and therefore lazy database systems may not be appropriate for read-latency sensitive applications. We introduce a lazy transaction execution engine, in which a transaction may be considered durably completed after only partial execution, while the bulk of its operations (notably all reads from the database and all execution of transaction logic) may be deferred until an arbitrary future time, such as when a user attempts to read some element of the transaction's write-set---all without modifying the semantics of the transaction or sacrificing ACID guarantees. 
Lazy transactions are processed deterministically, so that the final state of the database is guaranteed to be equivalent to what the state would have been had all transactions been executed eagerly. Our prototype of a lazy transaction execution engine improves temporal locality when executing related transactions, reduces peak provisioning requirements by deferring more non-urgent work until off-peak load times, and reduces contention footprint of concurrent transactions. However, we find that certain queries suffer increased latency, and therefore lazy database systems may not be appropriate for read-latency sensitive applications."
]
} |
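The lazy scheme described above can be sketched as logging each transaction's logic as a deferred closure over its write-set, and forcing evaluation, in log order, only when a later read touches one of those keys. This toy version (my own names) ignores read dependencies between deferred transactions, which the real system must track:

```python
class LazyDB:
    """Toy lazy execution engine: txns are durably logged at submit
    time but their logic runs only when a read demands it."""
    def __init__(self):
        self.data = {}
        self.pending = []   # deterministic log of deferred transactions

    def submit(self, write_keys, logic):
        # 'Durably complete' the txn now; defer its actual execution.
        self.pending.append((set(write_keys), logic))

    def read(self, key):
        # Force, in log order, every deferred txn whose write-set
        # contains this key; leave unrelated txns deferred.
        still_pending = []
        for wk, logic in self.pending:
            if key in wk:
                logic(self.data)
            else:
                still_pending.append((wk, logic))
        self.pending = still_pending
        return self.data.get(key)
```

This mirrors the read-latency trade-off noted in the abstract: reads on "sticky" keys pay for all the deferred work at once.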
1412.2300 | 2949632354 | We consider an interval coverage problem. Given @math intervals of the same length on a line @math and a line segment @math on @math , we want to move the intervals along @math such that every point of @math is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in @math time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an @math time algorithm. We also show an @math time lower bound for this problem, which implies the optimality of our algorithm. | A Wireless Sensor Network (WSN) uses a large number of sensors to monitor some surrounding environmental phenomena @cite_0 . Each sensor is equipped with a sensing device with limited battery-supplied energy. The sensors process data obtained and forward the data to a base station. Intrusion detection and border surveillance constitute a major application category for WSNs. A main goal of these applications is to detect intruders as they cross the boundary of a region or domain. For example, research efforts were made to extend the scalability of WSNs to the monitoring of international borders @cite_16 @cite_1 . Unlike the traditional full coverage @cite_15 @cite_7 @cite_3 which requires an entire target region to be covered by the sensors, the barrier coverage @cite_8 @cite_9 @cite_10 @cite_14 @cite_1 only seeks to cover the perimeter of the region to ensure that any intruders are detected as they cross the region border. Since barrier coverage requires fewer sensors, it is often preferable to full coverage. Because sensors have limited battery-supplied energy, it is desired to minimize their movements. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"1530465016",
"2096812245",
"2001156795",
"2028589023",
"2045870157",
"2112953516",
"2168452204",
"1993852439",
"",
""
],
"abstract": [
"A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP complete even to approximate up to constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.",
"The efficiency of sensor networks depends on the coverage of the monitoring area. Although, in general, a sufficient number of sensors are used to ensure a certain degree of redundancy in coverage, a good sensor deployment is still necessary to balance the workload of sensors. In a sensor network with locomotion facilities, sensors can move around to self-deploy. The movement-assisted sensor deployment deals with moving sensors from an initial unbalanced state to a balanced state. Therefore, various optimization problems can be defined to minimize different parameters, including total moving distance, total number of moves, communication computation cost, and convergence rate. In this paper, we first propose a Hungarian-algorithm-based optimal solution, which is centralized. Then, a localized scan-based movement-assisted sensor deployment method (SMART) and several variations of it that use scan and dimension exchange to achieve a balanced state are proposed. An extended SMART is developed to address a unique problem called communication holes in sensor networks. Extensive simulations have been done to verify the effectiveness of the proposed scheme.",
"Intrusion detection, area coverage and border surveillance are important applications of wireless sensor networks today. They can be (and are being) used to monitor large unprotected areas so as to detect intruders as they cross a border or as they penetrate a protected area. We consider the problem of how to optimally move mobile sensors to the fence (perimeter) of a region delimited by a simple polygon in order to detect intruders from either entering its interior or exiting from it. We discuss several related issues and problems, propose two models, provide algorithms and analyze their optimal mobility behavior.",
"Global barrier coverage that requires much fewer sensors than full coverage, is known to be an appropriate model of coverage for movement detection applications such as intrusion detection. However, it has been proved that given a sensor deployment, sensors can not locally determine whether the deployment provides global barrier coverage, making it impossible to develop localized algorithms, thus limiting its use in practice. In this paper, we introduce the concept of local barrier coverage to address this limitation. Motivated by the observation that movements are likely to follow a shorter path in crossing a belt region, local barrier coverage guarantees the detection of all movements whose trajectory is confined to a slice of the belt region of deployment. We prove that it is possible for individual sensors to locally determine the existence of local barrier coverage, even when the region of deployment is arbitrarily curved. Although local barrier coverage does not always guarantee global barrier coverage, we show that for thin belt regions, local barrier coverage almost always provides global barrier coverage. To demonstrate that local barrier coverage can be used to design localized algorithms, we develop a novel sleep-wakeup algorithm for maximizing the network lifetime, called Localized Barrier Coverage Protocol (LBCP). We show that LBCP provides close to optimalenhancement in network lifetime, while providing global barrier coverage most of the time. It outperforms an existing algorithm called Randomized Independent Sleeping (RIS) by up to 6 times.",
"When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor. It is enough if the penetrating objects are detected at some point in their trajectory. If a sensor network guarantees that every penetrating object will be detected by at least k distinct sensors before it crosses the barrier of wireless sensors, we say the network provides k-barrier coverage. In this paper, we develop theoretical foundations for k-barrier coverage. We propose efficient algorithms using which one can quickly determine, after deploying the sensors, whether the deployment region is k-barrier covered. Next, we establish the optimal deployment pattern to achieve k-barrier coverage when deploying sensors deterministically. Finally, we consider barrier coverage with high probability when sensors are deployed randomly. The major challenge, when dealing with probabilistic barrier coverage, is to derive critical conditions using which one can compute the minimum number of sensors needed to ensure barrier coverage with high probability. Deriving critical conditions for k-barrier coverage is, however, still an open problem. We derive critical conditions for a weaker notion of barrier coverage, called weak k-barrier coverage.",
"Due to their low cost and small form factors, a large number of sensor nodes can be deployed in redundant fashion in dense sensor networks. The availability of redundant nodes increases network lifetime as well as network fault tolerance. It is, however, undesirable to keep all the sensor nodes active at all times for sensing and communication. An excessive number of active nodes lead to higher energy consumption and it places more demand on the limited network bandwidth. We present an efficient technique for the selection of active sensor nodes in dense sensor networks. The active node selection procedure is aimed at providing the highest possible coverage of the sensor field, i.e., the surveillance area. It also assures network connectivity for routing and information dissemination. We first show that the coverage-centric active nodes selection problem is NP-complete. We then present a distributed approach based on the concept of a connected dominating set (CDS). We prove that the set of active nodes selected by our approach provides full coverage and connectivity. We also describe an optimal coverage-centric centralized approach based on integer linear programming. We present simulation results obtained using an ns2 implementation of the proposed technique.",
"This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.",
"We pinpoint a new sensor self-deployment problem, achieving focused coverage around a Point of Interest (POI), and introduce an evaluation metric, coverage radius. We propose two purely localized solution protocols Greedy Advance (GA) and Greedy-Rotation-Greedy (GRG), both of which are resilient to node failures and work regardless of network partition. The two algorithms drive sensors to move along a locally-computed triangle tessellation (TT) to surround the POI. In GA, nodes greedily proceed as close to the POI as they can; in GRG, when their greedy advance is blocked, nodes rotate around the POI to a TT vertex where greedy advance can resume. They both yield a connected network of TT layout with hole-free coverage. Further, GRG ensures a hexagon coverage shape centered at the POI.",
"",
""
]
} |
1412.2300 | 2949632354 | We consider an interval coverage problem. Given @math intervals of the same length on a line @math and a line segment @math on @math , we want to move the intervals along @math such that every point of @math is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in @math time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an @math time algorithm. We also show an @math time lower bound for this problem, which implies the optimality of our algorithm. | Mehrandish @cite_13 @cite_2 considered another variant of the one-dimensional barrier coverage problem, in which the goal is to move the minimum number of sensors to form a barrier coverage. They proved that the problem is NP-hard if the sensors have different ranges, and gave polynomial-time algorithms otherwise. In addition, Li @cite_4 considered the linear coverage problem, in which each sensor is assigned an energy level and the goal is to form a coverage that minimizes the total cost of all sensors. There @cite_4 , the sensors are not allowed to move; the more energy a sensor has, the larger its covering range and the larger its cost. Another variation is considered in @cite_12 , where the goal is to maximize the barrier coverage lifetime subject to limited battery powers. | {
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_4",
"@cite_2"
],
"mid": [
"",
"1755547434",
"139009818",
"2125242971"
],
"abstract": [
"",
"Sensor networks are ubiquitously used for detection and tracking and as a result covering is one of the main tasks of such networks. We study the problem of maximizing the coverage lifetime of a barrier by mobile sensors with limited battery powers, where the coverage lifetime is the time until there is a breakdown in coverage due to the death of a sensor. Sensors are first deployed and then coverage commences. Energy is consumed in proportion to the distance traveled for mobility, while for coverage, energy is consumed in direct proportion to the radius of the sensor raised to a constant exponent. We study two variants which are distinguished by whether the sensing radii are given as part of the input or can be optimized, the fixed radii problem and the variable radii problem. We design parametric search algorithms for both problems for the case where the final order of the sensors is predetermined and for the case where sensors are initially located at barrier endpoints. In contrast, we show that the variable radii problem is strongly NP-hard and provide hardness of approximation results for fixed radii for the case where all the sensors are initially co-located at an internal point of the barrier.",
"One of the most fundamental tasks of wireless sensor networks is to provide coverage of the deployment region. In this paper, we study the coverage of a line segment with a set of wireless sensors with adjustable coverage ranges. Each coverage range of a sensor is an interval centered at that sensor whose length is decided by the power the sensor chooses. The objective is to find a range assignment with the minimum cost. There are two variants of the optimization problem. In the discrete variant, each sensor can only choose from a finite set of powers while in the continuous variant, each sensor can choose power from a given interval. For the discrete variant of the problem, we present a polynomial-time exact algorithm. For the continuous variant of the problem, we develop constant-approximation algorithms when the cost for all sensors is proportional to rk for some constant k ≥ 1, where r is the covering radius corresponding to the chosen power. Specifically, if k = 1, we give a simple 1.25-approximation algorithm and a fully polynomial-time approximation scheme (FPTAS); if k > 1, we give a simple 2-approximation algorithm.",
"We study the problem of achieving maximum barrier coverage by sensors on a barrier modeled by a line segment, by moving the minimum possible number of sensors, initially placed at arbitrary positions on the line containing the barrier. We consider several cases based on whether or not complete coverage is possible, and whether non-contiguous coverage is allowed in the case when complete coverage is impossible. When the sensors have unequal transmission ranges, we show that the problem of finding a minimum-sized subset of sensors to move in order to achieve maximum contiguous or non-contiguous coverage on a finite line segment barrier is NP-complete. In contrast, if the sensors all have the same range, we give efficient algorithms to achieve maximum contiguous as well as non-contiguous coverage. For some cases, we reduce the problem to finding a maximum-hop path of a certain minimum (maximum) weight on a related graph, and solve it using dynamic programming."
]
} |
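
The fixed-position range-assignment setting above (each sensor chooses a radius, with cost growing with the radius) can be illustrated by a small sketch: treating each (sensor, radius) choice as a weighted interval, a Dijkstra-style search over "covered up to x" states finds a minimum-cost cover of the segment. This is an illustrative toy for the discrete variant only, not the algorithm of @cite_4 ; the function name and the cost model are hypothetical.

```python
import heapq

def min_cost_cover(intervals, L):
    """Minimum-cost cover of the segment [0, L] by weighted intervals.
    intervals: list of (a, b, cost) with a < b; each tuple could come
    from one (sensor position, radius) choice."""
    best = {0.0: 0}
    pq = [(0, 0.0)]                      # (cost so far, rightmost covered point)
    while pq:
        cost, x = heapq.heappop(pq)
        if x >= L:
            return cost                  # segment fully covered
        if cost > best.get(x, float("inf")):
            continue                     # stale queue entry
        for a, b, c in intervals:
            if a <= x < b:               # interval extends coverage without a gap
                if cost + c < best.get(b, float("inf")):
                    best[b] = cost + c
                    heapq.heappush(pq, (cost + c, b))
    return None                          # the segment cannot be covered

# Sensors at 1, 3, 5 with radius choices {1, 2} and cost = radius:
choices = [(0, 2, 1), (-1, 3, 2), (2, 4, 1), (1, 5, 2), (4, 6, 1), (3, 7, 2)]
print(min_cost_cover(choices, 6))        # three unit-radius intervals suffice
```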
1412.2300 | 2949632354 | We consider an interval coverage problem. Given @math intervals of the same length on a line @math and a line segment @math on @math , we want to move the intervals along @math such that every point of @math is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in @math time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an @math time algorithm. We also show an @math time lower bound for this problem, which implies the optimality of our algorithm. | Bhattacharya @cite_8 studied a two-dimensional barrier coverage in which the barrier is a circle and the sensors, initially located inside the circle, are moved to the circle to minimize the sensor movements; the ranges of the sensors are not explicitly specified but the destinations of the sensors are required to form a regular @math -gon on the circle. Algorithms for both min-sum and min-max versions were given in @cite_8 and subsequent improvements were made in @cite_5 @cite_6 . | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_8"
],
"mid": [
"2568372686",
"1603500658",
"2001156795"
],
"abstract": [
"Given n points in a circular region C in the plane, we study the problem of moving these points to the boundary of C to form a regular n-gon such that the maximum of the Euclidean distances traveled by the points is minimized. These problems find applications in mobile sensor barrier coverage of wireless sensor networks. The problem further has two versions: the decision version and optimization version. In this paper, we present an O(nlog2 n) time algorithm for the decision version and an O(nlog3 n) time algorithm for the optimization version. The previously best algorithms for these two problem versions take O(n 3.5) time and O(n 3.5logn) time, respectively. A by-product of our techniques is an algorithm for dynamically maintaining the maximum matching of a circular convex bipartite graph; our algorithm performs each vertex insertion or deletion on the graph in O(log2 n) time. This result may be interesting in its own right.",
"Monitoring and surveillance are important aspects in modern wireless sensor networks. In applications of wireless sensor networks, it often asks for the sensors to quickly move from the interior of a specified region to the region's perimeter, so as to form a barrier coverage of the region. The region is usually given as a simple polygon or even a circle. In comparison with the traditional concept of full area coverage, barrier coverage requires fewer sensors for detecting intruders, and can thus be considered as a good approximation of full area coverage. In this paper, we present an O(n2.5 log n) time algorithm for moving n sensors to the perimeter of the given circle such that the new positions of sensors form a regular n-gon and the maximum of the distances travelled by mobile sensors is minimized. This greatly improves upon the previous time bound O(n3.5 log n). Also, we describe an O(n4) time algorithm for moving n sensors, whose initial positions are on the perimeter of the circle, to form a regular n-gon such that the sum of the travelled distances is minimized. This solves an open problem posed in [2]. Moreover, our algorithms are simpler and have more explicit geometric flavor.",
"Intrusion detection, area coverage and border surveillance are important applications of wireless sensor networks today. They can be (and are being) used to monitor large unprotected areas so as to detect intruders as they cross a border or as they penetrate a protected area. We consider the problem of how to optimally move mobile sensors to the fence (perimeter) of a region delimited by a simple polygon in order to detect intruders from either entering its interior or exiting from it. We discuss several related issues and problems, propose two models, provide algorithms and analyze their optimal mobility behavior."
]
} |
1412.2300 | 2949632354 | We consider an interval coverage problem. Given @math intervals of the same length on a line @math and a line segment @math on @math , we want to move the intervals along @math such that every point of @math is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in @math time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an @math time algorithm. We also show an @math time lower bound for this problem, which implies the optimality of our algorithm. | Some other barrier coverage problems have been studied. For example, Kumar @cite_1 proposed algorithms for determining whether a region is barrier covered after the sensors are deployed. They considered both the deterministic version (the sensors are deployed deterministically) and the randomized version (the sensors are deployed randomly); for the latter, the goal is to guarantee barrier coverage with high probability. Chen @cite_9 introduced a local barrier coverage problem in which individual sensors determine the barrier coverage locally. | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2028589023",
"2045870157"
],
"abstract": [
"Global barrier coverage that requires much fewer sensors than full coverage, is known to be an appropriate model of coverage for movement detection applications such as intrusion detection. However, it has been proved that given a sensor deployment, sensors can not locally determine whether the deployment provides global barrier coverage, making it impossible to develop localized algorithms, thus limiting its use in practice. In this paper, we introduce the concept of local barrier coverage to address this limitation. Motivated by the observation that movements are likely to follow a shorter path in crossing a belt region, local barrier coverage guarantees the detection of all movements whose trajectory is confined to a slice of the belt region of deployment. We prove that it is possible for individual sensors to locally determine the existence of local barrier coverage, even when the region of deployment is arbitrarily curved. Although local barrier coverage does not always guarantee global barrier coverage, we show that for thin belt regions, local barrier coverage almost always provides global barrier coverage. To demonstrate that local barrier coverage can be used to design localized algorithms, we develop a novel sleep-wakeup algorithm for maximizing the network lifetime, called Localized Barrier Coverage Protocol (LBCP). We show that LBCP provides close to optimalenhancement in network lifetime, while providing global barrier coverage most of the time. It outperforms an existing algorithm called Randomized Independent Sleeping (RIS) by up to 6 times.",
"When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor. It is enough if the penetrating objects are detected at some point in their trajectory. If a sensor network guarantees that every penetrating object will be detected by at least k distinct sensors before it crosses the barrier of wireless sensors, we say the network provides k-barrier coverage. In this paper, we develop theoretical foundations for k-barrier coverage. We propose efficient algorithms using which one can quickly determine, after deploying the sensors, whether the deployment region is k-barrier covered. Next, we establish the optimal deployment pattern to achieve k-barrier coverage when deploying sensors deterministically. Finally, we consider barrier coverage with high probability when sensors are deployed randomly. The major challenge, when dealing with probabilistic barrier coverage, is to derive critical conditions using which one can compute the minimum number of sensors needed to ensure barrier coverage with high probability. Deriving critical conditions for k-barrier coverage is, however, still an open problem. We derive critical conditions for a weaker notion of barrier coverage, called weak k-barrier coverage."
]
} |
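
For intuition, the k = 1 (weak) case of the deployment-time check described by @cite_1 reduces to a connectivity question: overlapping sensor disks must form a chain from one side of the belt region to the other. The sketch below is a simplified illustration of that reduction (general k requires disjoint paths, e.g., via max-flow, which is omitted here); the function name is hypothetical.

```python
from collections import deque
from math import hypot

def is_barrier_covered(sensors, width):
    """sensors: list of (x, y, r) sensing disks in a belt of the given width.
    Returns True if overlapping disks form a chain from the left edge
    (x = 0) to the right edge (x = width), i.e., 1-barrier coverage."""
    n = len(sensors)
    left = [i for i, (x, y, r) in enumerate(sensors) if x - r <= 0]
    right = {i for i, (x, y, r) in enumerate(sensors) if x + r >= width}
    # Connect sensors whose disks overlap.
    adj = [[] for _ in range(n)]
    for i in range(n):
        xi, yi, ri = sensors[i]
        for j in range(i + 1, n):
            xj, yj, rj = sensors[j]
            if hypot(xi - xj, yi - yj) <= ri + rj:
                adj[i].append(j)
                adj[j].append(i)
    # BFS from all left-touching sensors to any right-touching sensor.
    seen, queue = set(left), deque(left)
    while queue:
        u = queue.popleft()
        if u in right:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```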
1412.2378 | 2950018201 | Attributes of words and relations between two words are central to numerous tasks in Artificial Intelligence such as knowledge representation, similarity measurement, and analogy detection. Often when two words share one or more attributes in common, they are connected by some semantic relations. On the other hand, if there are numerous semantic relations between two words, we can expect some of the attributes of one of the words to be inherited by the other. Motivated by this close connection between attributes and relations, given a relational graph in which words are inter-connected via numerous semantic relations, we propose a method to learn a latent representation for the individual words. The proposed method considers not only the co-occurrences of words as done by existing approaches for word representation learning, but also the semantic relations in which two words co-occur. To evaluate the accuracy of the word representations learnt using the proposed method, we use the learnt word representations to solve semantic word analogy problems. Our experimental results show that it is possible to learn better word representations by using semantic relations between words. | Representing the semantics of a word is a fundamental step in many NLP tasks. Given word-level representations, numerous methods have been proposed in compositional semantics to construct phrase-level, sentence-level, or document-level representations @cite_3 @cite_24 . Existing methods for creating word representations can be broadly categorised into two groups: counting-based methods and prediction-based methods. | {
"cite_N": [
"@cite_24",
"@cite_3"
],
"mid": [
"1889268436",
"2952300142"
],
"abstract": [
"Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.",
"The development of compositional distributional models of semantics reconciling the empirical aspects of distributional semantics with the compositional aspects of formal semantics is a popular topic in the contemporary literature. This paper seeks to bring this reconciliation one step further by showing how the mathematical constructs commonly used in compositional distributional models, such as tensors and matrices, can be used to simulate different aspects of predicate logic. This paper discusses how the canonical isomorphism between tensors and multilinear maps can be exploited to simulate a full-blown quantifier-free predicate calculus using tensors. It provides tensor interpretations of the set of logical connectives required to model propositional calculi. It suggests a variant of these tensor calculi capable of modelling quantifiers, using few non-linear operations. It finally discusses the relation between these variants, and how this relation should constitute the subject of future work."
]
} |
1412.2378 | 2950018201 | Attributes of words and relations between two words are central to numerous tasks in Artificial Intelligence such as knowledge representation, similarity measurement, and analogy detection. Often when two words share one or more attributes in common, they are connected by some semantic relations. On the other hand, if there are numerous semantic relations between two words, we can expect some of the attributes of one of the words to be inherited by the other. Motivated by this close connection between attributes and relations, given a relational graph in which words are inter-connected via numerous semantic relations, we propose a method to learn a latent representation for the individual words. The proposed method considers not only the co-occurrences of words as done by existing approaches for word representation learning, but also the semantic relations in which two words co-occur. To evaluate the accuracy of the word representations learnt using the proposed method, we use the learnt word representations to solve semantic word analogy problems. Our experimental results show that it is possible to learn better word representations by using semantic relations between words. | Counting-based approaches follow the distributional hypothesis @cite_12 , which says that the meaning of a word can be represented by the co-occurrences it has with other words. By aggregating the words that occur within a pre-defined window of context surrounding all instances of a word in a corpus, and by appropriately weighting the co-occurrences, it is possible to represent the semantics of the word. Numerous definitions of co-occurrence (e.g., occurring within a proximity window, or being connected by a particular dependency relation) and numerous co-occurrence measures have been proposed in the literature @cite_23 . This counting-based, bottom-up approach often results in sparse word representations. Dimensionality reduction techniques such as the singular value decomposition (SVD) have been employed to overcome this problem in tasks such as measuring similarity between words using the learnt word representations @cite_2 . | {
"cite_N": [
"@cite_23",
"@cite_12",
"@cite_2"
],
"mid": [
"2128870637",
"",
"1662133657"
],
"abstract": [
"Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this \"one task, one model\" approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.",
"",
"Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field."
]
} |
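
The counting pipeline described above (window co-occurrence counts plus a weighting step) can be illustrated with a minimal sketch. Positive PMI is used here as one common weighting choice; the resulting sparse vectors are exactly the kind of representation that SVD is then used to densify. This is a hedged illustration, not the construction used by any specific cited system, and the function name is hypothetical.

```python
from collections import Counter
from math import log

def ppmi_vectors(corpus, window=2):
    """Counting-based sparse word vectors: co-occurrence counts within a
    symmetric context window, reweighted by positive PMI.
    corpus: list of tokenised sentences (lists of strings)."""
    pair = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            # Context = up to `window` words on each side of w.
            for c in sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]:
                pair[(w, c)] += 1
    total = sum(pair.values())
    wmarg, cmarg = Counter(), Counter()
    for (w, c), n in pair.items():
        wmarg[w] += n
        cmarg[c] += n
    vecs = {}
    for (w, c), n in pair.items():
        pmi = log(n * total / (wmarg[w] * cmarg[c]))
        if pmi > 0:                       # keep only positive PMI entries
            vecs.setdefault(w, {})[c] = pmi
    return vecs

vectors = ppmi_vectors([["the", "cat", "sat"], ["the", "dog", "sat"]])
```

The dict-of-dicts output makes the sparsity explicit; stacking the vectors into a matrix and applying a truncated SVD would yield the dense representations mentioned above.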
1412.1741 | 2065666475 | Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM that accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm that is amenable for parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads. | Holub and Stekr @cite_10 propose an approach for parallel REM via DFA that splits the input string into small chunks and runs these chunks on the individual cores; however, due to the pre-calculation of initial states for each sub-input, the approach is not efficient for general DFA. Their algorithm runs efficiently only for a specific type of DFA, so-called synchronizing automata, and relies on the input automaton being k-local. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1525776788"
],
"abstract": [
"We present implementations of parallel DFA run methods and find whether and under what conditions is worthy to use the parallel methods of simulation of run of finite automata. First, we introduce the parallel DFA run methods for general DFA, which are universal, but due to the dependency of simulation time on the number of states |Q | of automaton being run, they are suitable only for run of automata with the smaller number of states. Then we show that if we apply some restrictions to properties of automata being run, we can reach the linear speedup compared to the sequential simulation method. We designed methods benefiting from k -locality that allows optimum parallel run of exact and approximate pattern matching automata. Finally, we show the results of experiments conducted on two types of parallel computers (Cluster of workstations and Symmetric shared-memory multiprocessors)."
]
} |
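
The chunk-splitting idea can be sketched as follows: because the starting state of a chunk is not known in advance, the general scheme runs each chunk from every state of the DFA and composes the resulting per-chunk state maps, which is why its cost depends on the number of states |Q|. The sketch below simulates this sequentially; a real implementation would compute the per-chunk maps in parallel threads. Function names are illustrative.

```python
def run_dfa(delta, state, text):
    """Sequential DFA run: delta[state][symbol] -> next state."""
    for ch in text:
        state = delta[state][ch]
    return state

def chunked_run(delta, start, text, nchunks):
    """General parallel scheme: every chunk is processed from *all* states
    (hence the |Q| factor), and the per-chunk maps are then composed."""
    size = max(1, -(-len(text) // nchunks))            # ceiling division
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    # Each of these maps could be built by a separate thread.
    maps = [{q: run_dfa(delta, q, ch) for q in range(len(delta))}
            for ch in chunks]
    state = start
    for m in maps:                                     # cheap sequential composition
        state = m[state]
    return state

# DFA over {a, b} that tracks whether the last symbol was 'a':
delta = [{'a': 1, 'b': 0}, {'a': 1, 'b': 0}]
```

Composing the maps is cheap compared with scanning the chunks, so the per-chunk work dominates; the dependence on |Q| is exactly the limitation noted above for general DFA.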
1412.1741 | 2065666475 | Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM that accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm that is amenable for parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads. | Yang and Prasanna @cite_3 propose the segmentation of regular expressions and perform the REM evaluation via nondeterministic finite automata. The major aim is to optimize the use of the memory hierarchy for automata with many states and large transition tables. In contrast to our approach, the authors of @cite_3 focus on large automata but do not specifically address algorithmic optimizations with respect to large input strings. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1990473086"
],
"abstract": [
"Conventionally, regular expression matching (REM) has been performed by sequentially comparing the regular expression (regex) to the input stream, which can be slow due to excessive backtracking (smith:acsac06). Alternatively, the regex can be converted to a deterministic finite automaton (DFA) for efficient matching, which however may require an extremely large state transition table (STT) due to exponential state explosion (meyer:swat71, yu:ancs06). We propose the segmented regex-NFA (SR-NFA) architecture, where the regex is first compiled into modular nondeterministic finite automata (NFA), then partitioned, optimized, and matched efficiently on modern multi-core processors. SR-NFA offers attack-resilient multi-gigabit per second matching throughput, does not suffer from either backtracking or state explosion, and can be rapidly constructed. For regex sets that construct a DFA with moderate state explosion, i.e., on average 200k states in the STT, the proposed SR-NFA is 367k times faster to construct and update and use 23k times less memory than the DFA approach. Running on an 8-core 2.6 GHz Opteron platform, our prototype achieves 2.2 Gbps average matching throughput for regex sets with up to 4,000 SR-NFA states per regex set."
]
} |
1412.1741 | 2065666475 | Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM that accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm that is amenable for parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads. | Mytkowicz and Schulte @cite_15 propose an approach that exploits SIMD, instruction-level, and thread-level parallelism in the context of finite state machine computations. To increase the opportunities for data parallelism, they devised a method for breaking data dependencies with enumeration. This approach is not based on speculation with respect to initial state determination. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2132774949"
],
"abstract": [
"A finite-state machine (FSM) is an important abstraction for solving several problems, including regular-expression matching, tokenizing text, and Huffman decoding. FSM computations typically involve data-dependent iterations with unpredictable memory-access patterns making them difficult to parallelize. This paper describes a parallel algorithm for FSMs that breaks dependences across iterations by efficiently enumerating transitions from all possible states on each input symbol. This allows the algorithm to utilize various sources of data parallelism available on modern hardware, including vector instructions and multiple processors cores. For instance, on benchmarks from three FSM applications: regular expressions, Huffman decoding, and HTML tokenization, the parallel algorithm achieves up to a 3x speedup over optimized sequential baselines on a single core, and linear speedups up to 21x on 8 cores."
]
} |
1412.1741 | 2065666475 | Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM that accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm that is amenable for parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads. | @cite_17 address the issue of large-scale finite automata (also known as the state explosion problem) by splitting regular expressions into two parts: (1) a prefix that contains the frequently visited parts of the automaton, and (2) a suffix that is the rest of the automaton. The aim is to have a small DFA for the frequently accessed parts of the automaton that fits in cache memory. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2153244370"
],
"abstract": [
"The importance of network security has grown tremendously and a collection of devices have been introduced, which can improve the security of a network. Network intrusion detection systems (NIDS) are among the most widely deployed such system; popular NIDS use a collection of signatures of known security threats and viruses, which are used to scan each packet's payload. Today, signatures are often specified as regular expressions; thus the core of the NIDS comprises of a regular expressions parser; such parsers are traditionally implemented as finite automata. Deterministic Finite Automata (DFA) are fast, therefore they are often desirable at high network link rates. DFA for the signatures, which are used in the current security devices, however require prohibitive amounts of memory, which limits their practical use. In this paper, we argue that the traditional DFA based NIDS has three main limitations: first they fail to exploit the fact that normal data streams rarely match any virus signature; second, DFAs are extremely inefficient in following multiple partially matching signatures and explodes in size, and third, finite automaton are incapable of efficiently keeping track of counts. We propose mechanisms to solve each of these drawbacks and demonstrate that our solutions can implement a NIDS much more securely and economically, and at the same time substantially improve the packet throughput."
]
} |
1412.1741 | 2065666475 | Regular expression matching is essential for many applications, such as finding patterns in text, exploring substrings in large DNA sequences, or lexical analysis. However, sequential regular expression matching may be time-prohibitive for large problem sizes. In this paper, we describe a novel algorithm for parallel regular expression matching via deterministic finite automata. Furthermore, we present our tool PaREM that accepts regular expressions and finite automata as input and automatically generates the corresponding code for our algorithm that is amenable for parallel execution on shared-memory systems. We evaluate our parallel algorithm empirically by comparing it with a commonly used algorithm for sequential regular expression matching. Experiments on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 21× for 48 threads. | @cite_12 propose an approach that finds the correct initial state by speculation. They observe that DFA-based scanning in network intrusion detection spends most of its time in a few hot states, so a guess of the DFA state at a given position has a very good chance of reaching the correct state after a few steps. They validate these guesses using a history of speculated states. In contrast to our algorithm, convergence of the guessed state to the correct state is not guaranteed. Furthermore, if a thread does not converge on its sub-input, the next thread is forced to start from a new state, which limits scalability @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_12"
],
"mid": [
"2132774949",
"2110199304"
],
"abstract": [
"A finite-state machine (FSM) is an important abstraction for solving several problems, including regular-expression matching, tokenizing text, and Huffman decoding. FSM computations typically involve data-dependent iterations with unpredictable memory-access patterns making them difficult to parallelize. This paper describes a parallel algorithm for FSMs that breaks dependences across iterations by efficiently enumerating transitions from all possible states on each input symbol. This allows the algorithm to utilize various sources of data parallelism available on modern hardware, including vector instructions and multiple processors cores. For instance, on benchmarks from three FSM applications: regular expressions, Huffman decoding, and HTML tokenization, the parallel algorithm achieves up to a 3x speedup over optimized sequential baselines on a single core, and linear speedups up to 21x on 8 cores.",
"Intrusion prevention systems (IPSs) determine whether incoming traffic matches a database of signatures, where each signature is a regular expression and represents an attack or a vulnerability. IPSs need to keep up with ever-increasing line speeds, which has lead to the use of custom hardware. A major bottleneck that IPSs face is that they scan incoming packets one byte at a time, which limits their throughput and latency. In this paper, we present a method to search for arbitrary regular expressions by scanning multiple bytes in parallel using speculation. We break the packet in several chunks, opportunistically scan them in parallel, and if the speculation is wrong, correct it later. We present algorithms that apply speculation in single-threaded software running on commodity processors as well as algorithms for parallel hardware. Experimental results show that speculation leads to improvements in latency and throughput in both cases."
]
} |
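The speculation idea of @cite_12 discussed above can be caricatured in a single-threaded sketch. The fixed hot-state guess, the sequential "threads", and the validation pass are illustrative assumptions, not code from either paper: every chunk after the first starts from a guessed hot state and records its state history; a validation pass re-scans mis-speculated chunks, stopping early if the recomputed state converges with the recorded history (as noted above, convergence is not guaranteed in general).

```python
def scan(delta, state, chunk):
    """Run the DFA over a chunk, recording the state after each symbol."""
    history = []
    for sym in chunk:
        state = delta[(state, sym)]
        history.append(state)
    return state, history

def speculative_match(delta, start, hot_state, text, n_chunks=4):
    size = max(1, len(text) // n_chunks)
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    results = []
    for i, c in enumerate(chunks):  # independent: could run on separate threads
        guess = start if i == 0 else hot_state
        results.append((guess, *scan(delta, guess, c)))
    # Sequential validation pass over the speculated results.
    state = start
    for (guess, end, history), c in zip(results, chunks):
        if guess == state:
            state = end  # speculation was correct; reuse the result
            continue
        # Re-scan from the true state, stopping early on convergence.
        for j, sym in enumerate(c):
            state = delta[(state, sym)]
            if state == history[j]:  # converged: rest of history is valid
                state = history[-1]
                break
    return state
```

When a chunk never converges, the validation pass degenerates into a full re-scan of that chunk, which is exactly the scalability limit noted above.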
1412.1952 | 2288351814 | Vehicular energy network (VEN) is a vehicular network which can transport energy over a large geographical area by means of electric vehicles (EVs). In the near future, an abundance of EVs, plentiful generation of the renewables, and mature wireless energy transfer and vehicular communication technologies will expedite the realization of VEN. To transmit energy from a source to a destination, we need to establish energy paths, which are composed of segments of vehicular routes, while satisfying various design objectives. In this paper, we develop a method to construct all energy paths for a particular energy source-destination pair, followed by some analytical results of the method. We describe how to utilize the energy paths to develop optimization models for different design goals and propose two solutions. We also develop a heuristic for the power loss minimization problem. We compare the performance of the three solution methods with artificial and real-world traffic networks and provide a comprehensive comparison in terms of solution quality, computation time, solvable problem size, and applicability. This paper lays the foundations of VEN routing. | EVs take a very important role in energy management in the smart grid. When compared to the capacity of the power grid, the capacity of an EV is very small. However, an aggregation of many EVs can become a huge load or power source. An energy market can be set up to trade energy between aggregations of EVs with the main grid in a vehicle-to-grid system @cite_1 . EVs can also be used to provide regulation services to the power system in a distributed fashion @cite_11 . In practice, charging stations are currently the main source of energy supply to EVs and their locations can affect the mobility pattern of vehicles @cite_12 . With VEN, EVs are used to transport energy across an area, complementing the power network. EVs can also obtain energy to support mobility from VEN. 
We can see that VEN brings a new dimension of functionality in the smart grid. | {
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_11"
],
"mid": [
"2043922680",
"",
"2161715775"
],
"abstract": [
"In this paper, we propose a novel multi-layer market for analyzing the energy exchange process between electric vehicles and the smart grid. The proposed market consists essentially of two layers: a macro layer and a micro layer. At the macro layer, we propose a double auction mechanism using which the aggregators, acting as sellers, and the smart grid elements, acting as buyers, interact so as to trade energy. We show that this double auction mechanism is strategy-proof and converges asymptotically. At the micro layer, the aggregators, which are the sellers in the macro layer, are given monetary incentives so as to sell the energy of associated plug-in hybrid electric vehicles (PHEVs) and to maximize their revenues. We analyze the interaction between the macro and micro layers and study some representative cases. Depending on the elasticity of the supply and demand, the utility functions are analyzed under different scenarios. Simulation results show that the proposed approach can significantly increase the utility of PHEVs, compared to a classical greedy approach.",
"",
"Due to green initiatives adopted in many countries, renewable energy will be massively incorporated into the future smart grid. However, the intermittency of the renewables may result in power imbalance, thus adversely affecting the stability of a power system. Voltage regulation may be used to maintain the power balance at all times. As electric vehicles (EVs) become popular, they may be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation of EVs can be coordinated to provide voltage regulation services. However, V2G is a dynamic system where EVs are connected to the grid according to the owners' habits. In this paper, we model an aggregation of EVs with a queueing network, whose structure allows us to estimate the capacities for regulation up and regulation down, separately. The estimated capacities from the V2G system can be used for establishing a regulation contract between an aggregator and the grid operator, and facilitate a new business model for V2G."
]
} |
1412.1952 | 2288351814 | Vehicular energy network (VEN) is a vehicular network which can transport energy over a large geographical area by means of electric vehicles (EVs). In the near future, an abundance of EVs, plentiful generation of the renewables, and mature wireless energy transfer and vehicular communication technologies will expedite the realization of VEN. To transmit energy from a source to a destination, we need to establish energy paths, which are composed of segments of vehicular routes, while satisfying various design objectives. In this paper, we develop a method to construct all energy paths for a particular energy source-destination pair, followed by some analytical results of the method. We describe how to utilize the energy paths to develop optimization models for different design goals and propose two solutions. We also develop a heuristic for the power loss minimization problem. We compare the performance of the three solution methods with artificial and real-world traffic networks and provide a comprehensive comparison in terms of solution quality, computation time, solvable problem size, and applicability. This paper lays the foundations of VEN routing. | VEN is specially designed for conveying energy while VANET aims to disseminate information. Yet they both utilize the vehicular network to provide additional services over geographical areas other than transportation of passengers or goods. They share many similarities on the underlying routing principle making use of the opportunistic contacts of vehicles for energy or data exchanges. @cite_15 proposed an opportunistic routing protocol for VANET by exploiting vehicular mobility patterns and geographical information provided in navigation systems. @cite_4 focused on position-based routing with topological knowledge for VANET in a city environment. @cite_13 proposed an opportunistic forwarding scheme, which utilizes velocity information to make forwarding decisions. 
However, routing algorithms developed for VANET may not be applicable to VEN, as data and energy are different in nature. Data packets are distinct from one another, i.e., we are dealing with a multi-commodity routing problem, although packets can be replicated to increase the chance of transmission success. In contrast, "energy packets" are indistinguishable, i.e., we have a single-commodity routing problem, and we cannot replicate energy. | {
"cite_N": [
"@cite_13",
"@cite_15",
"@cite_4"
],
"mid": [
"2142651281",
"2144899618",
"2139041219"
],
"abstract": [
"When highly mobile nodes are interconnected via wireless links, the resulting network can be used as a transit network to connect other disjoint ad-hoc networks. In this paper, we compare five different opportunistic forwarding schemes, which vary in their overhead, their success rate, and the amount of knowledge about neighboring nodes that they require. In particular, we present the MoVe algorithm, which uses velocity information to make intelligent opportunistic forwarding decisions. Using auxiliary information to make forwarding decisions provides a reasonable trade-off between resource overhead and performance.",
"Vehicular networks can be seen as an example of hybrid delay tolerant network where a mixture of infostations and vehicles can be used to geographically route the information messages to the right location. In this paper we present a forwarding protocol which exploits both the opportunistic nature and the inherent characteristics of the vehicular network in terms of mobility patterns and encounters, and the geographical information present in navigator systems of vehicles. We also report about our evaluation of the protocol over a simulator using realistic vehicular traces and in comparison with other geographical routing protocols.",
"Routing of data in a vehicular ad hoc network is a challenging task due to the high dynamics of such a network. Recently, it was shown for the case of highway traffic that position-based routing approaches can very well deal with the high mobility of network nodes. However, baseline position-based routing has difficulties to handle two-dimensional scenarios with obstacles (buildings) and voids as it is the case for city scenarios. In this paper we analyze a position-based routing approach that makes use of the navigational systems of vehicles. By means of simulation we compare this approach with non-position-based ad hoc routing strategies (dynamic source routing and ad-hoc on-demand distance vector routing). The simulation makes use of highly realistic vehicle movement patterns derived from Daimler-Chrysler's Videlio traffic simulator. While DSR's performance is limited due to problems with scalability and handling mobility, both AODV and the position-based approach show good performances with the position-based approach outperforming AODV."
]
} |
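The velocity-based forwarding idea of @cite_13 (MoVe) can be illustrated with a toy geometric rule. This is our own simplification, assuming straight-line motion and ignoring radio range and timing: hand the message to whichever carrier's current heading yields the smallest closest-approach distance to the destination.

```python
import math

def closest_approach(pos, vel, dest):
    """Minimum future distance to dest for a node moving with constant vel."""
    px, py = dest[0] - pos[0], dest[1] - pos[1]
    vx, vy = vel
    speed2 = vx * vx + vy * vy
    if speed2 == 0:
        return math.hypot(px, py)  # stationary node: current distance
    t = max(0.0, (px * vx + py * vy) / speed2)  # time of closest approach
    return math.hypot(px - t * vx, py - t * vy)

def choose_relay(self_state, neighbours, dest):
    """Pick the carrier (self or a neighbour, each a (pos, vel) pair)
    whose trajectory passes closest to the destination."""
    best = self_state
    best_d = closest_approach(self_state[0], self_state[1], dest)
    for nb in neighbours:
        d = closest_approach(nb[0], nb[1], dest)
        if d < best_d:
            best, best_d = nb, d
    return best
```

The same skeleton applies to VEN if "message" is replaced by an energy packet, which is why the routing principles transfer even though the payloads do not.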
1412.1952 | 2288351814 | Vehicular energy network (VEN) is a vehicular network which can transport energy over a large geographical area by means of electric vehicles (EVs). In the near future, an abundance of EVs, plentiful generation of the renewables, and mature wireless energy transfer and vehicular communication technologies will expedite the realization of VEN. To transmit energy from a source to a destination, we need to establish energy paths, which are composed of segments of vehicular routes, while satisfying various design objectives. In this paper, we develop a method to construct all energy paths for a particular energy source-destination pair, followed by some analytical results of the method. We describe how to utilize the energy paths to develop optimization models for different design goals and propose two solutions. We also develop a heuristic for the power loss minimization problem. We compare the performance of the three solution methods with artificial and real-world traffic networks and provide a comprehensive comparison in terms of solution quality, computation time, solvable problem size, and applicability. This paper lays the foundations of VEN routing. | The mobile electrical grid (also called the EV energy network) proposed in @cite_7 has a design similar to, but distinct from, VEN. It does make use of EVs for energy transmission and distribution, but it requires the involved EVs to actively participate in the energy transmission process by stopping at particular locations for charging and discharging. In contrast, with dynamic (dis)charging technologies, VEN can function transparently to the EV drivers. In @cite_0 , we provided an extensive analytical framework for further performance study of VEN. | {
"cite_N": [
"@cite_0",
"@cite_7"
],
"mid": [
"206983966",
"2111773724"
],
"abstract": [
"The smart grid spawns many innovative ideas, but many of them cannot be easily integrated into the existing power system due to power system constraints, such as the lack of capacity to transport renewable energy in remote areas to the urban centers. An energy delivery system can be built upon the traffic network and electric vehicles (EVs) utilized as energy carriers to transport energy over a large geographical region. A generalized architecture called the vehicular energy network (VEN) is constructed and a mathematically tractable framework is developed. Dynamic wireless (dis)charging allows electric energy, as an energy packet, to be added and subtracted from EV batteries seamlessly. With proper routing, energy can be transported from the sources to destinations through EVs along appropriate vehicular routes. This paper gives a preliminary study of VEN. Models are developed to study its operational and economic feasibilities with real traffic data in U. K. This paper shows that a substantial amount of renewable energy can be transported from some remote wind farms to London under some reasonable settings and VEN is likely to be profitable in the near future. VEN can complement the power network and enhance its power delivery capability.",
"Vehicle-to-grid provides a viable approach that feeds the battery energy stored in electric vehicles (EVs) back to the power grid. Meanwhile, since EVs are mobile, the energy in EVs can be easily transported from one place to another. Based on these two observations, we introduce a novel concept called EV energy network for energy transmission and distribution using EVs. We present a concrete example to illustrate the usage of an EV energy network, and then study the optimization problem of how to deploy energy routers in an EV energy network. We prove that the problem is NP-hard and develop a greedy heuristic solution. Simulations using real-world data shows that our method is efficient."
]
} |
1412.1952 | 2288351814 | Vehicular energy network (VEN) is a vehicular network which can transport energy over a large geographical area by means of electric vehicles (EVs). In the near future, an abundance of EVs, plentiful generation of the renewables, and mature wireless energy transfer and vehicular communication technologies will expedite the realization of VEN. To transmit energy from a source to a destination, we need to establish energy paths, which are composed of segments of vehicular routes, while satisfying various design objectives. In this paper, we develop a method to construct all energy paths for a particular energy source-destination pair, followed by some analytical results of the method. We describe how to utilize the energy paths to develop optimization models for different design goals and propose two solutions. We also develop a heuristic for the power loss minimization problem. We compare the performance of the three solution methods with artificial and real-world traffic networks and provide a comprehensive comparison in terms of solution quality, computation time, solvable problem size, and applicability. This paper lays the foundations of VEN routing. | @cite_18 discussed routing in the mobile electrical grid in the presence of traffic congestion, assuming every route is capable of transmitting an unlimited amount of energy, and constructed energy routes heuristically as shortest paths. @cite_6 relaxed this unlimited-energy assumption and considered a simple flow model for constructing multiple routes. However, the shortest-path strategy may not be appropriate when the focus is not on energy loss, and even when it is, we will show that this strategy may not give optimal results. In this paper, we provide the fundamentals of VEN routing, which can be applied to problems with different system objectives. | {
"cite_N": [
"@cite_18",
"@cite_6"
],
"mid": [
"2073015190",
"1992509276"
],
"abstract": [
"Vehicle-to-Grid (V2G) is that the energy stored in the batteries of electric vehicles can be utilized to send back to the power grid. And then, the energy in the batteries of electric vehicles can move with electric vehicles (EVs). Based on above characteristics, this paper introduces the concept of a mobile electrical grid and discusses the energy routing problem. It focuses on the optimization problem of how to find routes from the energy sources to charge stations, especially, when some paths are clogged by traffic jam. A bipartite graph model is used to analyze the route problem and two algorithms are presented to compute minimal energy metric route. Both of algorithms are tested by real-world transporting data in Manhattan and the Pioneer Valley Transit Authority(PVTA). Simulations show that the method is efficient.",
"Vehicle-to-Grid (V2G) technology utilizes the stored energy in electric vehicle batteries to contribute electricity back to the grid. The energy in batteries can move with electric vehicles (EVs). Combining V2G and the mobility of vehicles, EVs can provide a natural energy transmission architecture called mobile electrical grid. The main idea of this paper focuses on multiple energy transmission route in mobile electrical grid from solar energy sources to places as capacity of every energy route is limited. The features of energy route in mobile electrical grid are analyzed and a minimum cost flow algorithm is presented. Simulations using real-world transporting data in Manhattan. Simulations show that this method is efficient."
]
} |
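The capacity-limited flow view taken in @cite_6 — and the reason greedy shortest paths alone can fall short — can be illustrated with a textbook successive-shortest-path min-cost flow. The graph, capacities, and "loss" costs below are made-up numbers, not data from the cited papers.

```python
def min_cost_flow(n, edges, s, t, demand):
    """Route `demand` units from s to t at minimum total cost.

    edges: list of [u, v, capacity, cost]. Builds a residual graph with
    reverse arcs and repeatedly augments along the cheapest residual path
    (Bellman-Ford, since residual arcs may have negative cost).
    """
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total_cost = 0
    while demand > 0:
        INF = float("inf")
        dist, parent = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):  # Bellman-Ford over the residual graph
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[t] == INF:
            raise ValueError("demand exceeds network capacity")
        push, v = demand, t
        while v != s:  # bottleneck capacity along the cheapest path
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:  # apply the augmentation to the residual graph
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total_cost += push * dist[t]
        demand -= push
    return total_cost
```

Once the cheapest route saturates, further demand is forced onto costlier routes — the effect a pure shortest-path heuristic ignores.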
1412.1353 | 2952897246 | Sharing information between multiple tasks enables algorithms to achieve good generalization performance even from small amounts of training data. However, in a realistic scenario of multi-task learning not all tasks are equally related to each other, hence it could be advantageous to transfer information only between the most related tasks. In this work we propose an approach that processes multiple tasks in a sequence with sharing between subsequent tasks instead of solving all tasks jointly. Subsequently, we address the question of curriculum learning of tasks, i.e. finding the best order of tasks to be learned. Our approach is based on a generalization bound criterion for choosing the task order that optimizes the average expected classification performance over all tasks. Our experimental results show that learning multiple related tasks sequentially can be more effective than learning them jointly, the order in which tasks are being solved affects the overall performance, and that our model is able to automatically discover the favourable order of tasks. | Methods based on the sharing of weight vectors have also been generalized since their original introduction in @cite_1 , in particular to relax the assumption that all tasks have to be related. In @cite_21 , Evgeniou achieved this by introducing a graph regularization. Alternatively, Chen @cite_28 proposed to penalize deviations in weight vectors for highly correlated tasks. However, these methods require prior knowledge about the degree of similarity between tasks. In contrast, the algorithm we present in this work does not assume all tasks to be related, yet does not need a priori information regarding their similarities, either. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_1"
],
"mid": [
"1769664844",
"2144752499",
"2143104527"
],
"abstract": [
"We consider the problem of learning a structured multi-task regression, where the output consists of multiple responses that are related by a graph and the correlated response variables are dependent on the common inputs in a sparse but synergistic manner. Previous methods such as l1 l2-regularized multi-task regression assume that all of the output variables are equally related to the inputs, although in many real-world problems, outputs are related in a complex manner. In this paper, we propose graph-guided fused lasso (GFlasso) for structured multi-task regression that exploits the graph structure over the output variables. We introduce a novel penalty function based on fusion penalty to encourage highly correlated outputs to share a common set of relevant inputs. In addition, we propose a simple yet efficient proximal-gradient method for optimizing GFlasso that can also be applied to any optimization problems with a convex smooth loss and the general class of fusion penalty defined on arbitrary graph structures. By exploiting the structure of the non-smooth ''fusion penalty'', our method achieves a faster convergence rate than the standard first-order method, sub-gradient method, and is significantly more scalable than the widely adopted second-order cone-programming and quadratic-programming formulations. In addition, we provide an analysis of the consistency property of the GFlasso model. Experimental results not only demonstrate the superiority of GFlasso over the standard lasso but also show the efficiency and scalability of our proximal-gradient method.",
"We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task.",
"Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs."
]
} |
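The coupling idea behind these regularizers can be sketched as a toy objective: a sum of per-task squared losses plus a penalty on weight differences along the edges of a task-similarity graph. The data, graph, and step size below are illustrative choices, not code from the cited papers.

```python
import numpy as np

def mtl_graph(Xs, ys, edges, lam=1.0, lr=0.01, iters=2000):
    """Gradient descent on
        sum_t ||X_t w_t - y_t||^2 + lam * sum_{(s,t) in edges} ||w_s - w_t||^2
    Xs, ys: per-task design matrices and targets; edges: related task pairs.
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            G[t] = 2 * Xs[t].T @ (Xs[t] @ W[t] - ys[t])  # per-task loss term
        for s, t in edges:                    # coupling term pulls related
            G[s] += 2 * lam * (W[s] - W[t])   # tasks' weight vectors together
            G[t] += 2 * lam * (W[t] - W[s])
        W -= lr * G
    return W
```

Setting `lam` requires exactly the prior knowledge of task similarity that the paragraph above points out: edges between unrelated tasks would drag their solutions toward each other.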
1412.1353 | 2952897246 | Sharing information between multiple tasks enables algorithms to achieve good generalization performance even from small amounts of training data. However, in a realistic scenario of multi-task learning not all tasks are equally related to each other, hence it could be advantageous to transfer information only between the most related tasks. In this work we propose an approach that processes multiple tasks in a sequence with sharing between subsequent tasks instead of solving all tasks jointly. Subsequently, we address the question of curriculum learning of tasks, i.e. finding the best order of tasks to be learned. Our approach is based on a generalization bound criterion for choosing the task order that optimizes the average expected classification performance over all tasks. Our experimental results show that learning multiple related tasks sequentially can be more effective than learning them jointly, the order in which tasks are being solved affects the overall performance, and that our model is able to automatically discover the favourable order of tasks. | The question of how to order a sequence of learning steps to achieve the best performance has previously been studied mainly in the context of single-task learning, where the question is in which order one should process the training examples. In @cite_20 , Bengio showed experimentally that choosing training examples in order of gradually increasing difficulty can lead to faster training and higher prediction quality. Similarly, Kumar @cite_17 introduced the self-paced learning algorithm, which automatically chooses the order in which training examples are processed when solving a non-convex learning problem. In the context of learning multiple tasks, the question of which order to learn them in was introduced in @cite_26 , where Lad proposed an algorithm for optimizing the task order based on pairwise preferences.
However, they considered only the setting in which tasks are performed in a sequence through user interaction and therefore their approach is not applicable in the standard multi-task scenario. In the setting of multi-label classification, the idea of decomposing a multi-target problem into a sequence of single-target ones was proposed by Read in @cite_22 . However, there the sharing of information occurs through augmentations of the feature vectors, not through a regularization term. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_20",
"@cite_17"
],
"mid": [
"176197218",
"1999954155",
"",
"2132984949"
],
"abstract": [
"In many practical applications, multiple interrelated tasks must be accomplished sequentially through user interaction. The ordering of the tasks may have a significant impact on the overall utility (or performance) of the systems; hence optimal ordering of tasks is desirable. However, manual specification of optimal ordering is often difficult when task dependencies are complex, and exhaustive search for the optimal order is computationally intractable when the number of tasks is large. We present the first attempt at solving the optimal task ordering problem by learning partial order preferences among tasks based on observed system behavior in context, and by reducing the order optimization problem to a well-known Linear Ordering Problem (LOP). For computational tractability of the LOP solution, we further use link analysis (HITS and PageRank) over a partial-order-preference graph as a heuristic approximation. These strategies allow us to find near-optimal solutions with efficient computation, scalable to large applications. We conducted a comparative evaluation of the proposed approach with two practical applications that involve computer-assisted trouble report generation and IT proposal annotation with heterogeneous classification labels (keywords, collaborators, customers, technical categories, etc.), and obtained highly encouraging results in both applications.",
"The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has often been overlooked in the literature due to the perceived inadequacy of not directly modelling label correlations. Most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, and that high predictive performance can be obtained without impeding scalability to large datasets. We exemplify this with a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity. We extend this approach further in an ensemble framework. An extensive empirical evaluation covers a broad range of multi-label datasets with a variety of evaluation metrics. The results illustrate the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.",
"",
"Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition."
]
} |
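The alternating select-then-fit loop of self-paced learning @cite_17 can be caricatured in a few lines, using least squares in place of the latent structural SVM and hand-picked loss thresholds; both are purely illustrative assumptions.

```python
import numpy as np

def self_paced_fit(X, y, thresholds=(700.0, 2000.0)):
    """Alternate between selecting 'easy' examples (loss below a threshold)
    and refitting on them, annealing the threshold to admit harder examples
    over time."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # rough initial fit on all data
    for tau in thresholds:                    # anneal the easiness threshold
        losses = (X @ w - y) ** 2
        mask = losses <= tau                  # self-paced example selection
        if mask.any():
            w = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    return w
```

On data with a gross outlier, the outlier is never admitted as "easy", so the fit stays close to the clean trend while a plain least-squares fit is skewed by it — the same intuition that ordering examples by difficulty helps optimization.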