aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1605.08325 | 2397612811 | We develop a scalable and extendable training framework that can utilize GPUs across nodes in a cluster and accelerate the training of deep learning models based on data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze the convergence and capability of the framework to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open-source for further research on distributed deep learning | Krizhevsky proposed his trick for parallelizing the training of AlexNet @cite_4 on multiple GPUs in a synchronous way @cite_12 . This work showed that eight GPU workers training on the same batch size of 128 can give up to a 6.25 @math data throughput speedup and nearly the same convergence as training on a single GPU when exploiting both model and data parallelism. Notably, the increase in effective batch size (effective batch size @math batch size @math number of workers) leads to very small changes in the final convergence of AlexNet when the learning rate is scaled properly. Following his work, a Theano-based two-GPU synchronous framework @cite_2 for accelerating the training of AlexNet was proposed, where both weights and momentum are averaged between two GPUs after each iteration. The model converges to the same level as using a single GPU but in less time. | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_2"
],
"mid": [
"",
"1598866093",
"315953870"
],
"abstract": [
"",
"I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.",
"In this report, we describe a Theano-based AlexNet (, 2012) implementation and its naive data parallelism on multiple GPUs. Our performance on 2 GPUs is comparable with the state-of-the-art Caffe library (, 2014) run on 1 GPU. To the best of our knowledge, this is the first open-source Python-based AlexNet implementation to-date."
]
} |
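The synchronous scheme described in the row above, where each worker computes a gradient on its own data shard and the results are averaged after every iteration, can be sketched in a few lines. This is an illustrative toy with a scalar quadratic loss, not the paper's actual framework; all function names here (`local_gradient`, `sync_step`) are invented for the example.

```python
# Toy sketch of synchronous data-parallel SGD: every worker computes a
# gradient on its own shard, the gradients are averaged (the "parameter
# exchange" step, an all-reduce in a real MPI framework), and all workers
# apply the same update. With k workers each seeing len(shard) samples,
# the effective batch size is len(shard) * k, which is why the learning
# rate must be rescaled as workers are added.

def local_gradient(w, shard):
    # gradient of the toy loss 0.5 * (w - x)**2 averaged over the shard
    return sum(w - x for x in shard) / len(shard)

def sync_step(w, shards, lr):
    grads = [local_gradient(w, s) for s in shards]
    avg_grad = sum(grads) / len(grads)   # stand-in for the all-reduce
    return w - lr * avg_grad             # identical update on every worker

w = 0.0
shards = [[1.0, 2.0], [3.0, 4.0]]  # 2 workers, per-worker batch of 2
for _ in range(100):
    w = sync_step(w, shards, lr=0.5)
# w converges to the mean of all samples, 2.5
```

Because the shards are equally sized, averaging the per-worker gradients gives exactly the gradient over the combined batch, which is why the synchronous scheme matches single-GPU convergence once the learning rate accounts for the larger effective batch.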
1605.08325 | 2397612811 | We develop a scalable and extendable training framework that can utilize GPUs across nodes in a cluster and accelerate the training of deep learning models based on data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze the convergence and capability of the framework to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open-source for further research on distributed deep learning | There has been further development in accelerating vision-based deep learning in recent years. NVIDIA developed a multi-GPU deep learning framework, DIGITS, which shows a 3.5 @math data throughput speedup when training AlexNet on 4 GPUs. Purine @cite_11 pipelines the propagation of gradients between iterations and overlaps the communication of large weights in fully connected layers with the rest of back-propagation, giving nearly a 12 @math data throughput speedup when training GoogLeNet @cite_20 on 12 GPUs. Similarly, MXNet @cite_5 also shows a super-linear data throughput speedup when training GoogLeNet under a distributed training setting. | {
"cite_N": [
"@cite_5",
"@cite_20",
"@cite_11"
],
"mid": [
"2186615578",
"2950179405",
"1863582480"
],
"abstract": [
"MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this paper, we introduce a novel deep learning framework, termed Purine. In Purine, a deep network is expressed as a bipartite graph (bi-graph), which is composed of interconnected operators and data tensors. With the bi-graph abstraction, networks are easily solvable with event-driven task dispatcher. We then demonstrate that different parallelism schemes over GPUs and or CPUs on single or multiple PCs can be universally implemented by graph composition. This eases researchers from coding for various parallelization schemes, and the same dispatcher can be used for solving variant graphs. Scheduled by the task dispatcher, memory transfers are fully overlapped with other computations, which greatly reduce the communication overhead and help us achieve approximate linear acceleration."
]
} |
1605.08361 | 2399994860 | We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically. | At first, it may seem hopeless to find any training error guarantee for MNNs. Since the loss of MNNs is highly non-convex, with multiple local minima @cite_12 , it seems reasonable that optimization with SGD would get stuck at some bad local minimum. Moreover, many theoretical hardness results (reviewed in @cite_6 ) have been proven for MNNs with one hidden layer. | {
"cite_N": [
"@cite_6",
"@cite_12"
],
"mid": [
"2171931454",
"1988485873"
],
"abstract": [
"We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture containing only a single neuron that applies a sigmoidal activation function σ: R → [α, β], satisfying certain natural axioms (e.g., the standard (logistic) sigmoid or saturated-linear function), to the weighted sum of n inputs is hard to train. In particular, the problem of finding the weights of such a unit that minimize the quadratic training error within (β - α)2 or its average (over a training set) within 5(β - α)2 (12n) of its infimum proves to be NP-hard. Hence, the well-known backpropagation learning algorithm appears not to be efficient even for one neuron, which has negative consequences in constructive learning.",
"Local minima and plateaus pose a serious problem in learning of neural networks. We investigate the hierarchical geometric structure of the parameter space of three-layer perceptrons in order to show the existence of local minima and plateaus. It is proved that a critical point of the model with H−1 hidden units always gives many critical points of the model with H hidden units. These critical points consist of many lines in the parameter space, which can cause plateaus in learning of neural networks. Based on this result, we prove that a point in the critical lines corresponding to the global minimum of the smaller model can be a local minimum or a saddle point of the larger model. We give a necessary and sufficient condition for this, and show that this kind of local minima exist as a line segment if any. The results are universal in the sense that they do not require special properties of the target, loss functions and activation functions, but only use the hierarchical structure of the model."
]
} |
1605.08361 | 2399994860 | We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically. | Previous works have shown that, given several limiting assumptions on the dataset, it is possible to get a low training error on a MNN with one hidden layer: @cite_25 proved convergence for linearly separable datasets; @cite_27 required either that @math or clustering of the classes. Going beyond training error, @cite_19 showed that MNNs with one hidden layer can learn low-order polynomials, under a product of Gaussians distributional assumption on the input. Also, @cite_16 devised a tensor method, instead of the standard SGD method, for which MNNs with one hidden layer are guaranteed to approximate arbitrary functions. Note, however, that the last two works require a rather large @math to get good guarantees. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_16",
"@cite_25"
],
"mid": [
"2113517874",
"2243086562",
"1839868949",
""
],
"abstract": [
"We study the effectiveness of learning low degree polynomials using neural networks by the gradient descent method. While neural networks have been shown to have great expressive power, and gradient descent has been widely used in practice for learning neural networks, few theoretical guarantees are known for such methods. In particular, it is well known that gradient descent can get stuck at local minima, even for simple classes of target functions. In this paper, we present several positive theoretical results to support the effectiveness of neural networks. We focus on twolayer neural networks where the bottom layer is a set of non-linear hidden nodes, and the top layer node is a linear function, similar to Barron (1993). First we show that for a randomly initialized neural network with sufficiently many hidden units, the generic gradient descent algorithm learns any low degree polynomial, assuming we initialize the weights randomly. Secondly, we show that if we use complex-valued weights (the target function can still be real), then under suitable conditions, there are no \"robust local minima\": the neural network can always escape a local minimum by performing a random perturbation. This property does not hold for real-valued weights. Thirdly, we discuss whether sparse polynomials can be learned with small neural networks, with the size dependent on the sparsity of the target function.",
"Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (\"overspecified\") networks, which accords with some recent empirical and theoretical observations.",
"Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum, under a set of mild non-degeneracy conditions. It consists of simple embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD), in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.",
""
]
} |
1605.08548 | 2403986473 | In this work we present a mobile application we designed and engineered to enable people to log their travels near and far, leave notes behind, and build a community around spaces in between destinations. Our design explores new ground for location-based social computing systems, identifying opportunities where these systems can foster the growth of on-line communities rooted at non-places. In our work we develop, explore, and evaluate several innovative features designed around four usage scenarios: daily commuting, long-distance traveling, quantified traveling, and journaling. We present the results of two small-scale user studies, and one large-scale, world-wide deployment, synthesizing the results as potential opportunities and lessons learned in designing social computing for non-places. | Check-in apps emerged around 2003 to enable a virtual social experience anchored around physical places @cite_34 . These apps allow users to broadcast their presence at venues, that is, the places they go to. Venues include restaurants, bars, offices, apartment buildings, homes, museums, parks, movie theatres, shops, and cafés. Dodgeball, Gowalla, Foursquare and now Swarm are some of the apps that embodied this concept of checking in to a venue. More recently, Facebook has adopted this as a feature that people can use to attach a location to their posts. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2099489485"
],
"abstract": [
"There have been many location sharing systems developed over the past two decades, and only recently have they started to be adopted by consumers. In this paper, we present the results of three studies focusing on the foursquare check-in system. We conducted interviews and two surveys to understand, both qualitatively and quantitatively, how and why people use location sharing applications, as well as how they manage their privacy. We also document surprising uses of foursquare, and discuss implications for design of mobile social services."
]
} |
1605.08548 | 2403986473 | In this work we present a mobile application we designed and engineered to enable people to log their travels near and far, leave notes behind, and build a community around spaces in between destinations. Our design explores new ground for location-based social computing systems, identifying opportunities where these systems can foster the growth of on-line communities rooted at non-places. In our work we develop, explore, and evaluate several innovative features designed around four usage scenarios: daily commuting, long-distance traveling, quantified traveling, and journaling. We present the results of two small-scale user studies, and one large-scale, world-wide deployment, synthesizing the results as potential opportunities and lessons learned in designing social computing for non-places. | Designers of social computing technologies have grappled with the concept of places for quite some time. In 1996, Harrison and Dourish @cite_1 argued for the importance of distinguishing between "places" and "spaces," with a special emphasis on the virtual. Ten years later, Dourish @cite_7 restated these concepts in the context of spatial technologies. | {
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2029544898",
"2051000932"
],
"abstract": [
"Many collaborative and communicative environments use notions of “space” and spatial organisation to facilitate and structure interaction. We argue that a focus on spatial models is misplaced. Drawing on understandings from architecture and urban design, as well as from our own research findings, we highlight the critical distinction between “space” and “place”. While designers use spatial models to support interaction, we show how it is actually a notion of “place” which frames interactive behaviour. This leads us to re-evaluate spatial systems, and discuss how “place”, rather than “space”, can support CSCW design.",
"In the ten years since the distinction between \"place\" and \"space\" emerged as a consideration for CSCW researchers and designers, the concepts have proven useful across a range of domains. In that same period of time, wireless and mobile technologies have given us new sites at which to examine the issues of space, practice, and mobility. These changes suggest that it might be fruitful to re-examine the issues of place and space in light of recent developments. In particular, the nature of space and spatiality deserve further consideration."
]
} |
1605.08548 | 2403986473 | In this work we present a mobile application we designed and engineered to enable people to log their travels near and far, leave notes behind, and build a community around spaces in between destinations. Our design explores new ground for location-based social computing systems, identifying opportunities where these systems can foster the growth of on-line communities rooted at non-places. In our work we develop, explore, and evaluate several innovative features designed around four usage scenarios: daily commuting, long-distance traveling, quantified traveling, and journaling. We present the results of two small-scale user studies, and one large-scale, world-wide deployment, synthesizing the results as potential opportunities and lessons learned in designing social computing for non-places. | Previous work has explored the design of technologies to connect people with one another while riding public transportation. For instance, researchers developed TrainRoulette, an app to promote "situated in-train social interaction between passengers" @cite_35 . The researchers found that people were interested in knowing who shares the rides with them, but wanted to do it semi-anonymously (exposing only certain aspects of their identity). Similarly, Belloni and colleagues @cite_6 explored the use of mobile technologies in "transitional spaces." Their work focused on the design of a location-based friend finder that displays any of the user's friends that are in the same subway train. The researchers found that users wanted the ability to "invisibly" log in to the system. This need for identity opaqueness inspired the design of our app's identity system. | {
"cite_N": [
"@cite_35",
"@cite_6"
],
"mid": [
"2146290028",
"2104154244"
],
"abstract": [
"Travelling by public transport is usually regarded as boring and uninteresting. Refraining from talking to the stranger next to you may be due to limitations that are self-imposed and further corroborated by social expectations and cultural norms that govern behaviour in public space. Our design research into passenger interactions on board urban commuter trains has informed the development of the TrainRoulette prototype -- a mobile app for situated, real-time chats between train passengers. We study the impact of our design intervention on shaping perceptions of the train journey experience. Moreover, we are interested in the implications of such ICT-mediated interactions within train journeys for stimulating social offline interactions and new forms of passenger engagement.",
"This project explores the social possibilities of mobile technology in transitional spaces such as public transport. Based on a cultural probes study of Stockholm subway commuters, we designed a location-based friend finder that displays only people in the same train as the user. We aim at reaching a critical mass of users and therefore decided to make the system compatible with as many phones as possible, thus it was designed as a simple web application. An initial informal study pointed out consequences of certain design decisions on the user experience and highlighted social tensions created by presence awareness."
]
} |
1605.08548 | 2403986473 | In this work we present a mobile application we designed and engineered to enable people to log their travels near and far, leave notes behind, and build a community around spaces in between destinations. Our design explores new ground for location-based social computing systems, identifying opportunities where these systems can foster the growth of on-line communities rooted at non-places. In our work we develop, explore, and evaluate several innovative features designed around four usage scenarios: daily commuting, long-distance traveling, quantified traveling, and journaling. We present the results of two small-scale user studies, and one large-scale, world-wide deployment, synthesizing the results as potential opportunities and lessons learned in designing social computing for non-places. | Other work has looked at designing applications to increase social engagement with the physical world. One project designed an application that let people convert free-form doodles to sharable walking routes on maps @cite_31 . Another designed an online community of people documenting their experiences at places through sharable city guides @cite_29 . Overall, a meta-analysis of pervasive technology and public transport @cite_5 proposed the development of applications that facilitate not only more efficient journeys, but also more enjoyable ones that people look forward to. This is what we set out to do. | {
"cite_N": [
"@cite_5",
"@cite_31",
"@cite_29"
],
"mid": [
"2148623276",
"1973704674",
"2097335278"
],
"abstract": [
"This review of IT-based services offered in public transportation focuses on the passenger's perspective. The authors suggest new directions for future services, stressing the need to develop frameworks for assessing service quality and customer satisfaction.",
"This paper describes a study of algorithmic living with Trace, a mobile mapping application that generates walking routes based on digital sketches people create and annotate without a map. In addition to creating walking paths, Trace enables people to send the paths to others. We designed Trace to explore the possibility of emphasizing guided wandering over precise, destination-oriented navigation. Studies of sixteen people's use of Trace over roughly one week reveal how walkers find Trace both delightful and disorienting, highlighting moments of surprise, frustration, and identification with GIS routing algorithms. We conclude by discussing how design interventions offers possibilities for understanding the work of mapping and how it might be done differently in HCI.",
"We report on our design of Curated City, a website that lets people build their own personal guide to the city's neighborhoods by chronicling their favorite experiences. Although users make their own personal guides, they are immersed in a social curatorial experience where they are influenced directly and indirectly by the guides of others. We use a 2-week field trial involving 20 residents of Pittsburgh as a technological probe to explore the initial design decisions, and we further refine the design landscape through subject interviews. Based on this study, we identify a set of design recommendations for building scalable social platforms for curating the experiences of the city."
]
} |
1605.08197 | 2407228066 | Corporations across the world are highly interconnected in a large global network of corporate control. This paper investigates the global board interlock network, covering 400,000 firms linked through 1,700,000 edges representing shared directors between these firms. The main focus is on the concept of centrality, which is used to investigate the embeddedness of firms from a particular country within the global network. The study results in three contributions. First, to the best of our knowledge for the first time we can investigate the topology as well as the concept of centrality in corporate networks at a global scale, allowing for the largest cross-country comparison ever done in interlocking directorates literature. We demonstrate, among other things, extremely similar network topologies, yet large differences between countries when it comes to the relation between economic prominence indicators and firm centrality. Second, we introduce two new metrics that are specifically suitable for comparing the centrality ranking of a partition to that of the full network. Using the notion of centrality persistence we propose to measure the persistence of a partition’s centrality ranking in the full network. In the board interlock network, it allows us to assess the extent to which the footprint of a national network is still present within the global network. Next, the measure of centrality ranking dominance tells us whether a partition (country) is more dominant at the top or the bottom of the centrality ranking of the full (global) network. Finally, comparing these two new measures of persistence and dominance between different countries allows us to classify these countries based the their embeddedness, measured using the relation between the centrality of a country’s firms on the national and the global scale of the board interlock network. 
| Apart from interlocking directorates, one can model relationships between firms based on a number of different types of ties, including trade @cite_10 , borrowing and lending of money @cite_35 and ownership, creating a network in which two firms are linked if one firm owns a certain percentage of another firm @cite_6 @cite_13 . In corporate networks, community detection algorithms, which find groups of firms that are more densely connected with each other than with the rest of the network, are frequently applied @cite_9 @cite_12 @cite_6 . For both board interlock as well as ownership networks, it has been suggested that the communities that arise from the global corporate network have a clear regional character. | {
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_9",
"@cite_6",
"@cite_13",
"@cite_12"
],
"mid": [
"2273968408",
"1597300810",
"",
"2167503236",
"2116075173",
"1982094702"
],
"abstract": [
"Traditional economic theory could not explain, much less predict, the near collapse of the financial system and its long-lasting effects on the global economy. Since the 2008 crisis, there has been increasing interest in using ideas from complexity theory to make sense of economic and financial markets. Concepts, such as tipping points, networks, contagion, feedback, and resilience have entered the financial and regulatory lexicon, but actual use of complexity models and results remains at an early stage. Recent insights and techniques offer potential for better monitoring and management of highly interconnected economic and financial systems and, thus, may help anticipate and manage future crises.",
"Trade requires search, negotiation, and exchange, which are activities that absorb resources. This paper investigates how different trade networks attend to these activities. An artificial market is constructed in which autonomous agents endowed with a stock of goods seek out partners, negotiate a price, and then trade with the agent offering the best deal. Different trade networks are imposed on the system by restricting the set of individuals with whom an agent can communicate. We then compare the path to the eventual equilibrium as well as the equilibrium characteristics of each trade network to see how each system dealt with the tasks of search, negotiation, and exchange. Initially, all agents are free to trade with any individual in the global market. In such a world, global resources are optimally allocated with few trades, but only after a tremendous amount of search and negotiation. If trade is restricted within disjoint local boundaries, search is simple but global efficiency elusive. However, a hybrid model in which most agents trade locally but a few agents trade globally results in an economy that quickly reaches a Pareto optimal equilibrium with significantly lower search and negotiation costs. Such 'small-world' networks occur in nature and may help explain the ease with which most of us acquire goods from around the world. We also show that there are private incentives for such a system to arise.",
"",
"We investigate the community structure of the global ownership network of transnational corporations. We find a pronounced organization in communities that cannot be explained by randomness. Despite the global character of this network, communities reflect first of all the geographical location of firms, while the industrial sector plays only a marginal role. We also analyze the network in which the nodes are the communities and the links are obtained by aggregating the links among firms belonging to pairs of communities. We analyze the network centrality of the top 50 communities and we provide the first quantitative assessment of the financial sector role in connecting the global economy.",
"The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.",
"The community structure of two real-world financial networks, namely the board network and the ownership network of the firms of the Italian Stock Exchange, is analyzed by means of the maximum modularity approach. The main result is that both networks exhibit a strong community structure and, moreover, that the two structures overlap significantly. This is due to a number of reasons, including the existence of pyramidal groups and directors serving in several boards. Overall, this means that the “small world” of listed companies is actually split into well identifiable “continents” (i.e., the communities)."
]
} |
1605.08197 | 2407228066 | Corporations across the world are highly interconnected in a large global network of corporate control. This paper investigates the global board interlock network, covering 400,000 firms linked through 1,700,000 edges representing shared directors between these firms. The main focus is on the concept of centrality, which is used to investigate the embeddedness of firms from a particular country within the global network. The study results in three contributions. First, to the best of our knowledge for the first time we can investigate the topology as well as the concept of centrality in corporate networks at a global scale, allowing for the largest cross-country comparison ever done in interlocking directorates literature. We demonstrate, among other things, extremely similar network topologies, yet large differences between countries when it comes to the relation between economic prominence indicators and firm centrality. Second, we introduce two new metrics that are specifically suitable for comparing the centrality ranking of a partition to that of the full network. Using the notion of centrality persistence we propose to measure the persistence of a partition’s centrality ranking in the full network. In the board interlock network, it allows us to assess the extent to which the footprint of a national network is still present within the global network. Next, the measure of centrality ranking dominance tells us whether a partition (country) is more dominant at the top or the bottom of the centrality ranking of the full (global) network. Finally, comparing these two new measures of persistence and dominance between different countries allows us to classify these countries based on their embeddedness, measured using the relation between the centrality of a country’s firms on the national and the global scale of the board interlock network.
| Centrality has long been a basic concept in the study of networks of interlocking directorates, in the beginning focusing mainly on degree centrality. As social network analysis gained more popularity, new centrality measures were proposed and applied to understand networks of interlocking directorates @cite_18 , for example to see differences between banks and nonbanks @cite_39 . In @cite_11 it is argued that the function of monitoring and the provisioning of resources of well-connected boards have an effect on firm performance. However, when it comes to the precise relationship between firm performance and topological board interlock network measurements, the results are diverse. Correlations between centrality and economic performance are frequently demonstrated, but differ in strength across studies. For example, in @cite_26 it was shown that higher node centrality in the United States results in better boardroom performance, measured using a number of economic performance indicators. In @cite_30 , it was found that in the United Kingdom, the connectedness of directors and thus their boards is positively associated with firm performance, and a similar conclusion is drawn in @cite_2 for director networks. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_39",
"@cite_2",
"@cite_11"
],
"mid": [
"2143993253",
"1565308051",
"2149407414",
"2313188366",
"1573737819",
"2117482595"
],
"abstract": [
"Using a sample of 4,278 listed UK firms, we construct a social network of directorship-interlocks that comprises 31,495 directors. We use social capital theory and techniques developed in social network analysis to measure a director's connectedness and investigate whether this connectedness is associated with their compensation level and their firms overall performance. We find connectedness is positively associated with compensation and with the firm's future performance. The results do not support the view that executive and outside directors use their connections to extract economic rents. Rather the company compensates these individuals for the resources these better connections provide to the firm.",
"0 Introduction.- I Themes and Problems.- 1.0 Introduction.- 1.1 The theory of finance capital.- 1.2 Interlocking directorates and economic power.- 1.3 Financial groups.- 1.4 Corporate elite and capitalist class.- 1.5 Summary.- II Between Market and Hierarchy.- 2.0 Introduction.- 2.1 The organization of firms and markets: a neo-classical explanation.- 2.2 Some definitions of firms.- 2.3 Power and control.- 2.4 Ownership and control.- 2.5 Competition, cooperation and control.- 2.5.1 Concentration and centralization.- 2.5.2 The level-of-analysis problem.- 2.6 Summary.- III Imperialism in the Seventies: two models.- 3.0 Introduction.- 3.1 Theories of imperialism.- 3.1.1 Their origins.- 3.1.2 After World War II.- 3.2 Two models.- 3.3 Research design.- 3.4 Summary.- IV The International Corporate Elite.- 4.0 Introduction.- 4.1 The organization of the supervising and executive function in different countries.- 4.2 Selection of the international corporate elite.- 4.3 Network characteristics of the international corporate elite.- 4.3.0 Introduction.- 4.3.1 The finance capitalists.- 4.3.2 The big linkers.- 4.4 Types of interlocking directorates.- 4.4.0 Introduction.- 4.4.1 Multiple interlocks.- 4.5 Summary.- V National Versus International Integration.- 5.0 Introduction to some graph-theoretical concepts.- 5.1 General patterns in the international network.- 5.1.1 Compactness of the international network.- 5.1.2 International integration of national networks.- 5.2 The international network of Western firms.- 5.2.0 Introduction.- 5.2.1 Local centrality.- 5.2.2 Global centrality.- 5.2.3 Overall centrality.- 5.3 Industrial concentration versus economic centralization.- 5.4 Summary.- VI Domination and Control.- 6.0 Introduction.- 6.1 Clusters of heavily interlocked firms.- 6.1.1 The network at multiplicity-level two.- 6.1.2 The network at multiplicity-level three.- 6.1.3 Conclusions.- 6.2 The network of officer-interlocks.- 6.2.0 Introduction.- 6.2.1 Domination and control.- 
6.2.1.1 The network of all officer-interlocks.- 6.2.1.2 The network of control.- 6.2.2 Financial groups.- 6.2.3 Constellations of interests.- 6.3 Summary.- VII Competition and Cooperation: the role of banks.- 7.0 Introduction.- 7.1 Interlocks among banks.- 7.2 Overlapping spheres of interests.- 7.3 International bank consortia.- 7.4 The American banks.- 7.5 Summary.- VIII The Impact of World Crisis: changes in the network.- 8.0 Introduction.- 8.1 A new economic world order?.- 8.2 Selection of the 1976 sample.- 8.3 The international corporate elite.- 8.3.1 Finance capitalists.- 8.3.2 Big linkers.- 8.4 International versus national integration.- 8.5 Centrality in the nested networks (1976).- 8.6 Domination and control in the 1976 network.- 8.7 Summary.- IX Summary and Conclusions.- 9.1 Introduction.- 9.2 The structure of the international corporate elite.- 9.3 Conflict or cooperation.- 9.4 The meaning of interlocking directorates.- 9.5 The international corporate elite.- References.- Authors Index.- Firms Index.- Appendix A.",
"Firms with central boards of directors earn superior risk-adjusted stock returns. A long (short) position in the most (least) central firms earns average annual returns of 4.68 . Firms with central boards also experience higher future return-on-assets growth and more positive analyst forecast errors. Return prediction, return-on-assets growth, and analyst errors are concentrated among high growth opportunity firms or firms confronting adverse circumstances, consistent with boardroom connections mattering most for firms standing to benefit most from information and resources exchanged through boardroom networks. Overall, our results suggest that director networks provide economic benefits that are not immediately reflected in stock prices.",
"I There is a terminological difficulty to be noted here. \"Centrality\" has both a generic and a particular meaning in this paper. Generically, \"centrality\" refers to any of three intuitive conceptions: degree (connectedness), closeness, and betweenness (Freeman, 1979). In its particular sense, \"centrality\" refers to any variation of a measurement technique introduced by Phillip Bonacich (1972a, 1972b) and further elaborated in (1975), Mariolis (1975), and Mariolis, Schwartz, and Mintz (1979). When using it in this latter sense, we generally refer to \"centrality scores,\" or to \"directional\" and \"nondirectional\" centrality. In other cases, the context should make clear which of the two meanings we intend. This paper addresses a series of empirical, methodological, and theoretical questions raised by examining the reliability and stability of centrality in corporate interlock networks. Data on the interlocking directorates of 1094 large U.S. corporations in 1962, 1964, and 1966 are analyzed with a test-retest simultaneous equation model. The results confirm the common, but little tested, assumptions that centrality measures are highly reliable and stable. Further, we find that, of three measures examined (number of interlocks, nondirectional centrality, and directional centrality), number of interlocks is slightly more reliable or stable than the other two. Finally, the results show that the centrality of banks is more stable than the centrality of nonbanks. We conclude with a discussion of the implications of these findings.*",
"The implications of the intricate pattern of relationships formed by company directors holding positions on multiple corporate boards, or 'interlocking', have long been the subject of speculation and investigation. While this web of inter-firm relationships is no longer regarded as prime facia evidence of collusive activity, a growing body of research on US firms has identified a range of performance effects on firms associated with information flows in these networks. Yet research on the role of director networks and firm performance is far from comprehensive and has largely been limited to the largest US corporates. This paper extends the existing research in this field by drawing together the principal findings to date and testing these in a different national context and with a much larger dataset than used previously. The relationship between director interlocks and corporate performance is examined among 6428 UK firms, those with annual turnover of £100 million or more. Social network and regression analysis is used to detect significant relationships between the pattern of director interlinking and corporate performance. A number of significant relationships are identified, broadly consistent with the US research but some phenomena distinctive to the UK is found, reflecting differences in the structure and sociology of capital markets in the two countries. In particular the role of executive directors is much less significant to the general financial performance of UK firms than to US firms and is more focused on reputational concerns in capital markets.",
"Boards of directors serve two important functions for organizations: monitoring management on behalf of shareholders and providing resources. Agency theorists assert that effective monitoring is a function of a board's incentives, whereas resource dependence theorists contend that the provision of resources is a function of board capital. We combine the two perspectives and argue that board capital affects both board monitoring and the provision of resources and that board incentives moderate these relationships."
]
} |
1605.08197 | 2407228066 | Corporations across the world are highly interconnected in a large global network of corporate control. This paper investigates the global board interlock network, covering 400,000 firms linked through 1,700,000 edges representing shared directors between these firms. The main focus is on the concept of centrality, which is used to investigate the embeddedness of firms from a particular country within the global network. The study results in three contributions. First, to the best of our knowledge for the first time we can investigate the topology as well as the concept of centrality in corporate networks at a global scale, allowing for the largest cross-country comparison ever done in interlocking directorates literature. We demonstrate, among other things, extremely similar network topologies, yet large differences between countries when it comes to the relation between economic prominence indicators and firm centrality. Second, we introduce two new metrics that are specifically suitable for comparing the centrality ranking of a partition to that of the full network. Using the notion of centrality persistence we propose to measure the persistence of a partition’s centrality ranking in the full network. In the board interlock network, it allows us to assess the extent to which the footprint of a national network is still present within the global network. Next, the measure of centrality ranking dominance tells us whether a partition (country) is more dominant at the top or the bottom of the centrality ranking of the full (global) network. Finally, comparing these two new measures of persistence and dominance between different countries allows us to classify these countries based on their embeddedness, measured using the relation between the centrality of a country’s firms on the national and the global scale of the board interlock network.
| There are also a number of works such as @cite_40 that, using the case of Germany, suggest a negative correlation between firm performance and centrality. In @cite_31 , using data on listed firms in Italy and a comparison with a number of previous works, it is argued that there are certainly significant differences between countries with respect to the correlation of board centrality and economic performance. Furthermore, the causal relationship between the connectedness of boards and the aforementioned consequences is not always clear, see for example the discussion in @cite_0 . In many papers it is left for future work to determine whether there is a causal effect, to study the differences between countries, or to scale up to sufficient data for a fair cross-country comparison. Such a comparison is difficult, because datasets of board interlock networks have different sources, and are frequently based on manually gathered data from annual reports. As a result, studies differ in terms of the number of firms studied and the point in time at which the study was done, making it hard to objectively compare results. | {
"cite_N": [
"@cite_0",
"@cite_40",
"@cite_31"
],
"mid": [
"2134672337",
"2097989697",
"1486537057"
],
"abstract": [
"Research on interlocking directorates has gained increasing prominence within the field of organizations, but it has come under increasing criticism as well. This chapter presents an in-depth examination of the study of interlocking directorates. I focus initially on both the determinants and the consequences of interlocking directorates, reviewing alternative accounts of both phenomena. Special attention is paid to the processual formulations implied by various interlock analyses. I then address the two primary criticisms of interlock research and evaluate the tenability of these criticisms. I conclude with a discussion of future directions for interlock research.",
"We investigate the relationship between firm governance and the board's position in the social network of directors. Using a sample of 133 German firms over the four-year period from 2003 to 2006, we find that firms with intensely connected supervisory boards are (1) associated with lower firm performance, and (2) pay their executives significantly more. We interpret these results as evidence of poor monitoring in firms with directors who are more embedded in the social network. In both cases, simple measures for busy directors that were used by other studies in the past fail to show any significant pattern. The findings suggest that the quality and structural position of additional board seats may play a bigger role than simply the number of board appointments.",
"We use measures of vertex centrality to examine interlocking directorates and their economic effects in Italy. We employ centrality measures like degree, eigenvector centrality, betweenness, and flow betweenness, along with the clustering coefficient. We document the existence of a negative relationship between both degree and eigenvector centrality and firm value. Betweenness and flow betweenness, on the other hand, are not associated with lower firm valuations. We argue that these differences derive from the different properties of these measures: while degree and eigenvector centrality measures the influence and the power of the connections, betweenness and flow betweenness are proxies for the volume of information that passes between the nodes. This result is robust with respect to the use of both stock market and operating performance measures, as well as several controlling variables."
]
} |
1605.08197 | 2407228066 | Corporations across the world are highly interconnected in a large global network of corporate control. This paper investigates the global board interlock network, covering 400,000 firms linked through 1,700,000 edges representing shared directors between these firms. The main focus is on the concept of centrality, which is used to investigate the embeddedness of firms from a particular country within the global network. The study results in three contributions. First, to the best of our knowledge for the first time we can investigate the topology as well as the concept of centrality in corporate networks at a global scale, allowing for the largest cross-country comparison ever done in interlocking directorates literature. We demonstrate, among other things, extremely similar network topologies, yet large differences between countries when it comes to the relation between economic prominence indicators and firm centrality. Second, we introduce two new metrics that are specifically suitable for comparing the centrality ranking of a partition to that of the full network. Using the notion of centrality persistence we propose to measure the persistence of a partition’s centrality ranking in the full network. In the board interlock network, it allows us to assess the extent to which the footprint of a national network is still present within the global network. Next, the measure of centrality ranking dominance tells us whether a partition (country) is more dominant at the top or the bottom of the centrality ranking of the full (global) network. Finally, comparing these two new measures of persistence and dominance between different countries allows us to classify these countries based on their embeddedness, measured using the relation between the centrality of a country’s firms on the national and the global scale of the board interlock network.
| In this paper we address a number of these issues, as we consider the largest @math million firms across the globe, allowing us to compare results with sufficient data in each country. The causal effects remain beyond the scope of this work, as we are foremost interested in understanding centrality at and between different national and global scales of the global corporate network. Our work differs from studies such as @cite_19 in the sense that we still want to take the connectedness of the nodes within a particular partition of the full network into account, rather than merging all of the subset's nodes into one. To the best of our knowledge, this work is the first study in which the global corporate network is analyzed at such a large scale, particularly in the context of centrality, investigating the embeddedness of countries in the global network of corporate control. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2089370556"
],
"abstract": [
"This paper extends the standard network centrality measures of degree, closeness and betweenness to apply to groups and classes as well as individuals. The group centrality measures will enable researchers to answer such questions as ‘how central is the engineering department in the informal influence network of this company?’ or ‘among middle managers in a given organization, which are more central, the men or the women?’ With these measures we can also solve the inverse problem: given the network of ties among organization members, how can we form a team that is maximally central? The measures are illustrated using two classic network data sets. We also formalize a measure of group centrality efficiency, which indicates the extent to which a group's centrality is principally due to a small subset of its members."
]
} |
1605.08125 | 2404544750 | Manual spatio-temporal annotation of human action in videos is laborious, requires several annotators and contains human biases. In this paper, we present a weakly supervised approach to automatically obtain spatio-temporal annotations of an actor in action videos. We first obtain a large number of action proposals in each video. To capture a few most representative action proposals in each video and evade processing thousands of them, we rank them using optical flow and saliency in a 3D-MRF based framework and select a few proposals using MAP based proposal subset selection method. We demonstrate that this ranking preserves the high quality action proposals. Several such proposals are generated for each video of the same action. Our next challenge is to iteratively select one proposal from each video so that all proposals are globally consistent. We formulate this as Generalized Maximum Clique Graph problem using shape, global and fine grained similarity of proposals across the videos. The output of our method is the most action representative proposals from each video. Our method can also annotate multiple instances of the same action in a video. We have validated our approach on three challenging action datasets: UCF Sport, sub-JHMDB and THUMOS'13 and have obtained promising results compared to several baseline methods. Moreover, on UCF Sports, we demonstrate that action classifiers trained on these automatically obtained spatio-temporal annotations have comparable performance to the classifiers trained on ground truth annotation. | Recently, there has been a lot of interest in weakly supervised object localization using multiple images and videos @cite_8 @cite_27 . These approaches compute object candidate locations using the objectness score @cite_30 and find similar boxes in multiple images or video frames to improve object localization.
To the best of our knowledge, no such analysis has been presented for action localization before. | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_8"
],
"mid": [
"2128715914",
"",
"1966601141"
],
"abstract": [
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors.",
"",
"In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | Although finding roots of a univariate polynomial, @math , modulo @math is difficult in general, if @math has a "small" root, then this root can be found efficiently using Coppersmith's method @cite_25 . | {
"cite_N": [
"@cite_25"
],
"mid": [
"2101040389"
],
"abstract": [
"We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order @math bits of P."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | A classic example listed in Coppersmith's original paper @cite_25 is decrypting "stereotyped" messages encrypted under low public exponent RSA, where an approximation to the solution is known in advance. The general RSA map is @math . For efficiency purposes, @math can be chosen to be as small as @math , so that a "ciphertext" is @math . Suppose we know some approximation @math to the message @math . Then we can set [ f(x) = (x_0 + x)^3 - c. ] Thus @math has a root (modulo @math ) at @math . If @math then this root can be found using Coppersmith's method. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2101040389"
],
"abstract": [
"We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order @math bits of P."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | The RSA function @math is assumed to be a one-way trapdoor permutation. Optimal Asymmetric Encryption Padding (OAEP) is a general method for taking a one-way trapdoor permutation and a random oracle @cite_9 , and creating a cryptosystem that achieves security against adaptive chosen ciphertext attacks (IND-CCA security). | {
"cite_N": [
"@cite_9"
],
"mid": [
"2052267638"
],
"abstract": [
"We argue that the random oracle model—where all parties have access to a public random oracle—provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we suggest, a practical protocol P is produced by first devising and proving correct a protocol P R for the random oracle model, and then replacing oracle accesses by the computation of an “appropriately chosen” function h . This paradigm yields protocols much more efficient than standard ones while retaining many of the advantages of provable security. We illustrate these gains for problems including encryption, signatures, and zero-knowledge proofs."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | When we output only @math bit per iteration, this was shown to be secure @cite_17 @cite_6 , and later this was increased to allow the generator to output any @math consecutive bits @cite_28 . The maximum number of bits that can be safely outputted by such a generator is tightly tied to the approximation @math necessary for recovering @math from @math . Thus a bound on our ability to find small roots of @math immediately translates into bounds on the maximum number of bits that can be safely outputted at each step of the RSA pseudo random generator. | {
"cite_N": [
"@cite_28",
"@cite_6",
"@cite_17"
],
"mid": [
"2115652664",
"2018738738",
""
],
"abstract": [
"We study the security of individual bits in an RSA encrypted message E_N(x). We show that given E_N(x), predicting any single bit in x with only a nonnegligible advantage over the trivial guessing strategy, is (through a polynomial-time reduction) as hard as breaking RSA. Moreover, we prove that blocks of O(log log N) bits of x are computationally indistinguishable from random bits. The results carry over to the Rabin encryption scheme. Considering the discrete exponentiation function g^x modulo p, with probability 1 − o(1) over random choices of the prime p, the analog results are demonstrated. The results do not rely on group representation, and therefore applies to general cyclic groups as well. Finally, we prove that the bits of ax + b modulo p give hard core predicates for any one-way function f. All our results follow from a general result on the chosen multiplier hidden number problem: given an integer N, and access to an algorithm P_x that on input a random a ∈ Z_N, returns a guess of the ith bit of ax mod N, recover x. We show that for any i, if P_x has at least a nonnegligible advantage in predicting the ith bit, we either recover x, or, obtain a nontrivial factor of N in polynomial time. The result also extends to prove the results about simultaneous security of blocks of O(log log N) bits.",
"The RSA and Rabin encryption functions are respectively defined as E_N(x) = x^e mod N and E_N(x) = x^2 mod N, where N is a product of two large random primes p, q and e is relatively prime to φ(N). We present a simpler and tighter proof of the result of [ACGS] that the following problems are equivalent by probabilistic polynomial time reductions: (1) given E_N(x) find x; (2) given E_N(x) predict the least-significant bit of x with success probability 1/2 + 1/poly(n), where N has n bits. The new proof consists of a more efficient algorithm for inverting the RSA/Rabin function with the help of an oracle that predicts the least-significant bit of x. It yields provable security guarantees for RSA message bits and for the RSA random number generator for moduli N of practical size.",
""
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | In order to construct a provably secure pseudo random generator that outputs @math pseudo random bits for each multiplication modulo @math @cite_26 assume there is no probabilistic polynomial time algorithm for solving the @math -SSRSA problem. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2171042141"
],
"abstract": [
"Pseudorandom Generators (PRGs) based on the RSA inversion (one-wayness) problem have been extensively studied in the literature over the last 25 years. These generators have the attractive feature of provable pseudorandomness security assuming the hardness of the RSA inversion problem. However, despite extensive study, the most efficient provably secure RSA-based generators output asymptotically only at most O(logn) bits per multiply modulo an RSA modulus of bitlength n, and hence are too slow to be used in many practical applications. To bring theory closer to practice, we present a simple modification to the proof of security by Fischlin and Schnorr of an RSA-based PRG, which shows that one can obtain an RSA-based PRG which outputs Ω(n) bits per multiply and has provable pseudorandomness security assuming the hardness of a well-studied variant of the RSA inversion problem, where a constant fraction of the plaintext bits are given. Our result gives a positive answer to an open question posed by Gennaro (J. of Cryptology, 2005) regarding finding a PRG beating the rate O(logn) bits per multiply at the cost of a reasonable assumption on RSA inversion."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | The Okamato-Uchiyama cryptosystem @cite_34 works with moduli of the form @math , and the authors assume that there is no polynomial time algorithm that can distinguish random samples from the set of @math th powers modulo @math from uniform elements in the multiplicative group modulo @math . Since @math , any algorithm that could find roots of @math would break the security of this cryptosystem. | {
"cite_N": [
"@cite_34"
],
"mid": [
"1529862094"
],
"abstract": [
"This paper proposes a novel public-key cryptosystem, which is practical, provably secure and has some other interesting properties as follows: 1. Its trapdoor technique is essentially different from any other previous schemes including RSA-Rabin and Diffie-Hellman. 2. It is a probabilistic encryption scheme. 3. It can be proven to be as secure as the intractability of factoring n = p^2 q (in the sense of the security of the whole plaintext) against passive adversaries. 4. It is semantically secure under the p-subgroup assumption, which is comparable to the quadratic residue and higher degree residue assumptions. 5. Under the most practical environment, the encryption and decryption speeds of our scheme are comparable to (around twice slower than) those of elliptic curve cryptosystems. 6. It has a homomorphic property: E(m0, r0)E(m1, r1) mod n = E(m0 + m1, r2), where E(m, r) means a ciphertext of plaintext m as randomized by r and m0 + m1 < p. 7. Anyone can change a ciphertext, C = E(m, r), into another ciphertext, C′ = C h^{r′} mod n, while preserving plaintext of C (i.e., C′ = E(m, r″)), and the relationship between C and C′ can be concealed."
]
} |
1605.08065 | 2405764872 | We draw a new connection between Coppersmith's method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith's method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith's bound for univariate polynomials is optimal in the sense that there are auxiliary polynomials of the type he used that would allow finding roots of size @math for monic degree- @math polynomials modulo @math . Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of even superpolynomial-time improvements to Coppersmith's bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size @math unless @math has a very small prime factor. | The security of the Paillier Cryptosystem @cite_15 rests on the assumption (DCR), which is the assumption that there is no polynomial time algorithm that can distinguish the uniform distribution on @math th powers in @math from uniform elements in @math . Thus any algorithm that finds roots of @math would break the security of this cryptosystem. Because of its homomorphic properties, the Paillier Cryptosystem is a building block for many cryptographic protocols e.g. private searching on streaming data @cite_7 and private information retrieval @cite_10 @cite_22 . | {
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_22",
"@cite_7"
],
"mid": [
"2132172731",
"1574024822",
"1511913323",
""
],
"abstract": [
"This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes : a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model.",
"We study the problem of single database private information retrieval, and present a solution with only logarithmic server-side communication complexity and a solution with only logarithmic user-side communication complexity. Previously the best result could only achieve polylogarithmic communication on each side, and was based on certain less well-studied assumptions in number theory [6]. On the contrary, our schemes are based on Paillier’s cryptosystem [16], which along with its variants have drawn extensive studies in recent cryptographic researches [3, 4, 8, 9], and have many important applications [7, 8].",
"We propose a one-round 1-out-of-n computationally-private information retrieval protocol for l-bit strings with low-degree polylogarithmic receiver-computation, linear sender-computation and communication Θ(klog2n+llogn), where k is a possibly non-constant security parameter. The new protocol is receiver-private if the underlying length-flexible additively homomorphic public-key cryptosystem is IND-CPA secure. It can be transformed to a one-round computationally receiver-private and information-theoretically sender-private 1-out-of-n oblivious-transfer protocol for l-bit strings, that has the same asymptotic communication and is private in the standard complexity-theoretic model.",
""
]
} |
1605.08143 | 2951649892 | Though deliberation is a critical component of democratic decision-making, existing deliberative processes do not scale to large groups of people. Motivated by this, we propose a model in which large-scale decision-making takes place through a sequence of small group interactions. Our model considers a group of participants, each having an opinion which together form a graph. We show that for median graphs, a class of graphs including grids and trees, it is possible to use a small number of three-person interactions to tightly approximate the wisdom of the crowd, defined here to be the generalized median of participant opinions, even when agents are strategic. Interestingly, we also show that this sharply contrasts with small groups of size two, for which we prove an impossibility result. Specifically, we show that it is impossible to use sequences of two-person interactions satisfying natural axioms to find a tight approximation of the generalized median, even when agents are non-strategic. Our results demonstrate the potential of small group interactions for reaching global decision-making properties. | The problem of scaling up decision-making is a well-studied problem in the context of voting and preference elicitation. In this context, one is typically trying to approximate or calculate the output of a social choice function while only eliciting small amounts of information from voters (e.g. through pairwise comparisons) @cite_9 @cite_23 @cite_14 @cite_30 . Our paper can be viewed as an attempt to create a thread of research mirroring preference elicitation, but for deliberative processes. Thus far, computational social choice has primarily viewed decision-making and preference elicitation from the perspective of efficiency, accuracy, and strategic issues. We propose that deliberation is a valuable new dimension to consider in social choice and provide a small step in this direction. 
We discuss this direction more in the concluding section. | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_14",
"@cite_23"
],
"mid": [
"",
"2121751780",
"99643971",
"1948781505"
],
"abstract": [
"",
"We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin.",
"While voting schemes provide an effective means for aggregating preferences, methods for the effective elicitation of voter preferences have received little attention. We address this problem by first considering approximate winner determination when incomplete voter preferences are provided. Exploiting natural scoring metrics, we use max regret to measure the quality or robustness of proposed winners, and develop polynomial time algorithms for computing the alternative with minimax regret for several popular voting rules. We then show how minimax regret can be used to effectively drive incremental preference vote elicitation and devise several heuristics for this process. Despite worst-case theoretical results showing that most voting protocols require nearly complete voter preferences to determine winners, we demonstrate the practical effectiveness of regret-based elicitation for determining both approximate and exact winners on several real-world data sets.",
"This paper considers the communication complexity of approximating common voting rules. Both upper and lower bounds are presented. For n voters and m alternatives, it is shown that for all ε ∈ (0, 1), the communication complexity of obtaining a 1 − ε approximation to Borda is O(log(1/ε)nm). A lower bound of Ω(nm) is provided for fixed small values of ε. The communication complexity of computing the true Borda winner is Ω(nm log(m)) [5]. Thus, in the case of Borda, one can obtain arbitrarily good approximations with less communication overhead than is required to compute the true Borda winner. For other voting rules, no such 1 ± ε approximation scheme exists. In particular, it is shown that the communication complexity of computing any constant factor approximation, ρ, to Bucklin is Ω(nm/ρ^2). Conitzer and Sandholm [5] show that the communication complexity of computing the true Bucklin winner is O(nm). However, we show that for all δ ∈ (0, 1), the communication complexity of computing a m^δ approximate winner in Bucklin elections is O(nm^(1−δ) log(m)). For δ ∈ (1/2, 1), a lower bound of ω(nm^(1−2δ)) is also provided. Similar lower bounds are presented on the communication complexity of computing approximate winners in Copeland elections."
]
} |
1605.08143 | 2951649892 | Though deliberation is a critical component of democratic decision-making, existing deliberative processes do not scale to large groups of people. Motivated by this, we propose a model in which large-scale decision-making takes place through a sequence of small group interactions. Our model considers a group of participants, each having an opinion which together form a graph. We show that for median graphs, a class of graphs including grids and trees, it is possible to use a small number of three-person interactions to tightly approximate the wisdom of the crowd, defined here to be the generalized median of participant opinions, even when agents are strategic. Interestingly, we also show that this sharply contrasts with small groups of size two, for which we prove an impossibility result. Specifically, we show that it is impossible to use sequences of two-person interactions satisfying natural axioms to find a tight approximation of the generalized median, even when agents are non-strategic. Our results demonstrate the potential of small group interactions for reaching global decision-making properties. | Median graphs have been studied before in the context of voting. For instance, it is known that for median graphs, the Condorcet winner is strongly related to the generalized median @cite_22 @cite_27 @cite_32 . Nehring and Puppe @cite_15 show that any single-peaked domain which admits a non-dictatorial and neutral strategy-proof social choice function is a median space. Clearwater et. al. also showed that any set of voters and alternatives on a median graph will have a Condorcet winner (their full result is stronger than this) @cite_42 . | {
"cite_N": [
"@cite_22",
"@cite_42",
"@cite_32",
"@cite_27",
"@cite_15"
],
"mid": [
"1994784589",
"",
"2136651023",
"2080695516",
"2165733306"
],
"abstract": [
"A median of a family of vertices in a graph is any vertex whose distance-sum to that family is minimum. In the framework of metric spaces the problem of minimizing a distance-sum is often referred to as the Fermat problem. On the other hand, medians have been studied from a purely order-theoretic or combinatorial point of view (for instance, in statistics, or in Jordan’s work [12] on trees). The aim of this paper is to investigate the mutual relationship of the metric and the ordinal combinatorial approaches to the median problem in the class of median graphs. A connected graph is a median graph if any three vertices admit a unique median (see Avann [1]). Note that trees and the covering graphs of distributive lattices are median graphs. Very little is known about medians in arbitrary graphs (cf. Slater [20]); so far, only trees (Zelinka [22], and many others) and the covering graphs of distributive lattices (Barbut [4]) have been considered. In both cases we get that (i) the medians of any family form an interval (a path in a tree, an order-theoretic interval in a distributive lattice), and (ii) medians of odd numbered families are unique (see Slater [19] for trees, and Barbut [4] for distributive lattices). These results point to the fact that (i) and (ii) must be true for any median graph. After recalling some basic definitions and facts concerning median graphs and median semilattices (for further information, see Bandelt and Hedlikova [3]), we establish (i) and (ii) for arbitrary median graphs. Our results are based on theorems of Avann, Sholander, and Barbut. In trees medians have nice local properties (cf. [7]). Indeed, median sets are related to mass centers (Zelinka [22]) and security centers (Slater [18]). In Section 3 this is extended to median graphs. The study of medians applies to social choice theory (see Barbut [5], and Barthelemy and Monjardet [8]). The median procedure is strongly related to the simple majority rule: the median of a family (A1, . . . , A2k+1) of subsets of a set X may be written as ∪ ∩ Ai (Barbut’s formula).",
"",
"We consider a competitive facility location problem on a network, in which consumers are located on the vertices and wish to connect to the nearest facility. Knowing this, competitive players locate their facilities on vertices that capture the largest possible market share. In 1991, Eiselt and Laporte established the first relation between Nash equilibria of a facility location game in a duopoly and the solutions to the 1-median problem. They showed that an equilibrium always exists in a tree because a location profile is at equilibrium if and only if both players select a 1-median of that tree [4]. In this work, we further explore the relations between the solutions to the 1-median problem and the equilibrium profiles. We show that if an equilibrium in a cycle exists, both players must choose a solution to the 1-median problem. We also obtain the same property for some other classes of graphs such as quasi-median graphs, median graphs, Helly graphs, and strongly-chordal graphs. Finally, we prove the converse for the latter class, establishing that, as for trees, any median of a strongly-chordal graph is a winning strategy that leads to an equilibrium.",
"Abstract This paper gives new perspectives in competitive location theory by considering new norms in two-dimensional problems and by considering (for the first time) the competitive location problem on a graph. The results in competitive location theory are generalized by exploiting an isomorphism to the literature in voting theory and by developing new results for competitive location problems on a graph.",
"We define a general notion of single-peaked preferences based on abstract betweenness relations. Special cases are the classical example of single-peaked preferences on a line, the separable preferences on the hypercube, the “multi-dimensionally single-peaked” preferences on the product of lines, but also the unrestricted preference domain. Generalizing and unifying the existing literature, we show that a social choice function is strategy-proof on a sufficiently rich domain of generalized single-peaked preferences if and only if it takes the form of voting by issues (“voting by committees”) satisfying a simple condition called the “Intersection Property.” Based on the Intersection Property, we show that the class of preference domains associated with “median spaces” gives rise to the strongest possibility results; in particular, we show that the existence of strategyproof social choice rules that are non-dictatorial and neutral requires an underlying median space. A space is a median space if, for every triple of elements, there is a fourth element that is between each pair of the triple; numerous examples are given (some well-known, some novel), and the structure of median spaces and the associated preference domains is analysed."
]
} |
1605.08143 | 2951649892 | Though deliberation is a critical component of democratic decision-making, existing deliberative processes do not scale to large groups of people. Motivated by this, we propose a model in which large-scale decision-making takes place through a sequence of small group interactions. Our model considers a group of participants, each having an opinion which together form a graph. We show that for median graphs, a class of graphs including grids and trees, it is possible to use a small number of three-person interactions to tightly approximate the wisdom of the crowd, defined here to be the generalized median of participant opinions, even when agents are strategic. Interestingly, we also show that this sharply contrasts with small groups of size two, for which we prove an impossibility result. Specifically, we show that it is impossible to use sequences of two-person interactions satisfying natural axioms to find a tight approximation of the generalized median, even when agents are non-strategic. Our results demonstrate the potential of small group interactions for reaching global decision-making properties. | A similar high-level triadic decision-making process was proposed in a prior paper by the authors @cite_41 and analyzed for the restricted case (described in Figure ) of opinions on a line. In this paper, we consider richer small group decision dynamics such as majority rule. We also consider spaces more complex than the line and provide a general framework for analyzing deliberative decision-making that enables us to prove an impossibility result for dyads. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2139125821"
],
"abstract": [
"Typical voting rules do not work well in settings with many candidates. If there are just several hundred candidates, then even a simple task such as choosing a top candidate becomes impractical. Motivated by the hope of developing group consensus mechanisms over the internet, where the numbers of candidates could easily number in the thousands, we study an urn-based voting rule where each participant acts as a voter and a candidate. We prove that when participants lie in a one-dimensional space, this voting protocol finds a @math approximation of the Condorcet winner with high probability while only requiring an expected @math comparisons on average per voter. Moreover, this voting protocol is shown to have a quasi-truthful Nash equilibrium: namely, a Nash equilibrium exists which may not be truthful, but produces a winner with the same probability distribution as that of the truthful strategy."
]
} |
1605.08143 | 2951649892 | Though deliberation is a critical component of democratic decision-making, existing deliberative processes do not scale to large groups of people. Motivated by this, we propose a model in which large-scale decision-making takes place through a sequence of small group interactions. Our model considers a group of participants, each having an opinion which together form a graph. We show that for median graphs, a class of graphs including grids and trees, it is possible to use a small number of three-person interactions to tightly approximate the wisdom of the crowd, defined here to be the generalized median of participant opinions, even when agents are strategic. Interestingly, we also show that this sharply contrasts with small groups of size two, for which we prove an impossibility result. Specifically, we show that it is impossible to use sequences of two-person interactions satisfying natural axioms to find a tight approximation of the generalized median, even when agents are non-strategic. Our results demonstrate the potential of small group interactions for reaching global decision-making properties. | To our knowledge, we are not aware of other literature on algorithmic or mathematical models for scaling deliberation. However, the problem of scaling deliberation has been discussed in the political science community. Deliberation was initially conceptualized as an exchange of arguments among a small group of rational individuals. Recent developments acknowledge that deliberation should be thought of in richer ways, including as an activity that takes place at the system level, among groups. @cite_11 . In other words, it has recently become possible to think of deliberation as a task that can be broken down into various components and performed at different social levels and by large numbers of individuals. 
There have also been several practical initiatives for scaling deliberation such as Deliberative Polling @cite_12 , in which a single representative sample of participants is brought together to deliberate, and the 21st Century Town Hall Meeting @cite_10 , in which the entire set of participants are divided into tables of size @math for a single round of deliberation followed by voting. | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2123778708",
"2020188026",
"1480441807"
],
"abstract": [
"Over the last decade we have watched democracy surge and ebb around the world. With its firm commitment to strengthening democratic movements, the United States has encouraged, directly assisted in, and even led many democratization efforts. Yet to maintain a credible leadership role, we must acknowledge that our own democracy has much room for improvement. A healthy democracy depends on the ability of citizens1 to affect the public policies that deeply influence their lives, and ours does not currently allow citizens their rightful voice in decision making. Special-interest groups have captured the processes for democratic input. They have skewed the agenda toward extreme positions and alienated many citizens who would tend toward a middle ground. For this reason and many others, citizens distrust their elected officials, don’t vote, and are deeply cynical about government. Conversely, policy makers believe citizens hold fast to uninformed opinions and operate from self-interest. In the end, the gap between people and the decision-making processes that affect their lives continues to widen. Despite this state of affairs, our experience working with citizens in all regions of the country leaves us confident that people want to get involved and change things for the better. Unfortunately, the traditional methods our government has used for involving citizens give little inspiration for the public to reinvest in civic life. Public hearings and typical town hall meetings are not a meaningful way for citizens to engage in governance and to have an impact on decision making. They are speaker-focused, with experts simply delivering information or responding to questions. Little learning occurs, for citizens or decision makers, because airing individual concerns too often devolves into repetitive ax grinding, grandstanding, or even a shouting match between various stakeholders. In the end, decision makers don’t know which points of view have the most salience for",
"",
"'Deliberative democracy' is often dismissed as a set of small-scale, academic experiments. This volume seeks to demonstrate how the deliberative ideal can work as a theory of democracy on a larger scale. It provides a new way of thinking about democratic engagement across the spectrum of political action, from towns and villages to nation states, and from local networks to transnational, even global systems. Written by a team of the world's leading deliberative theorists, Deliberative Systems explains the principles of this new approach, which seeks ways of ensuring that a division of deliberative labour in a system nonetheless meets both deliberative and democratic norms. Rather than simply elaborating the theory, the contributors examine the problems of implementation in a real world of competing norms, competing institutions and competing powerful interests. This pioneering book will inspire an exciting new phase of deliberative research, both theoretical and empirical."
]
} |
1605.08143 | 2951649892 | Though deliberation is a critical component of democratic decision-making, existing deliberative processes do not scale to large groups of people. Motivated by this, we propose a model in which large-scale decision-making takes place through a sequence of small group interactions. Our model considers a group of participants, each having an opinion which together form a graph. We show that for median graphs, a class of graphs including grids and trees, it is possible to use a small number of three-person interactions to tightly approximate the wisdom of the crowd, defined here to be the generalized median of participant opinions, even when agents are strategic. Interestingly, we also show that this sharply contrasts with small groups of size two, for which we prove an impossibility result. Specifically, we show that it is impossible to use sequences of two-person interactions satisfying natural axioms to find a tight approximation of the generalized median, even when agents are non-strategic. Our results demonstrate the potential of small group interactions for reaching global decision-making properties. | Majority rule dynamics, an important part of deliberative decision-making, have also long been studied. A particularly relevant experimental result analyzed the majority rule dynamic in groups of five and showed that the solution concept that performed best (out of 16 considered) was the majority rule equilibrium @cite_16 . Our work is a natural next step of this observation applied to the goal of scaling deliberative decision-making. Namely, given experimental evidence for the ability of small groups to come to consensus on the majority rule equilibrium, how can we use these small groups to make good decisions for larger groups? | {
"cite_N": [
"@cite_16"
],
"mid": [
"2083307720"
],
"abstract": [
"This article reports the findings of a series of experiments on committee decision making under majority rule. The committee members had relatively fixed preferences, so that the process was one of making decisions rather than one of problem solving. The predictions of a variety of models drawn from Economics, Sociology, Political Science and Game Theory were compared to the experimental results. One predictive concept, the core of the noncooperative game without side payments (equivalent to the majority rule equilibrium) consistently performed best. Significantly, however, even when such an outcome did not exist, the experimental results did not display the degree of unpredictability that some theoretical work would suggest. An important subsidiary finding concerns the difference between experiments conducted under conditions of high stakes versus those conducted under conditions of much lower stakes. The findings in the two conditions differed considerably, thus calling into question the political applicability of numerous social psychological experiments in which subjects had little or no motivation."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | In @cite_23 , the authors proposed an algorithm for minimizing the sum cost while considering load balancing, which has an approximation ratio of @math , where @math is the number of nodes in the physical graph.
The algorithm is based on linear program (LP) relaxation, and it allows only one node in each application graph to be placed on a particular physical node, thus excluding server resource sharing among different nodes of one application graph. It is shown that the approximation ratio of this algorithm is @math , which is trivial because one would achieve the same approximation ratio by placing the whole application graph onto a single physical node instead of distributing it across the whole physical graph. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1966600255"
],
"abstract": [
"Network virtualization allows multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. Efficient mapping of virtual nodes and virtual links of a VN request onto substrate network resources, also known as the VN embedding problem, is the first step toward enabling such multiplicity. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms that had clear separation between the node mapping and the link mapping phases. In this paper, we present ViNEYard-a collection of VN embedding algorithms that leverage better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program and devise two online VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. We also present a generalized window-based VN embedding algorithm (WiNE) to evaluate the effect of lookahead on VN embedding. Our simulation experiments on a large mix of VN requests show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | A theoretical work in @cite_34 proposed an algorithm with @math time-complexity and an approximation ratio of @math for placing a tree application graph with @math levels of nodes onto a physical graph. It uses LP relaxation and its goal is to minimize the sum cost. 
Based on this algorithm, the authors presented an online algorithm for minimizing the maximum load on each node and link, which is @math -competitive when the application lifetimes are equal. The LP formulation in @cite_34 is complex and requires @math variables and constraints. This means that when @math is not a constant, the space-complexity (i.e., the memory required by the algorithm) is exponential in @math . | {
"cite_N": [
"@cite_34"
],
"mid": [
"2121752873"
],
"abstract": [
"We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph with vertices denoting servers and edges corresponding to communication links. A workload is a set of processes with processing requirements and mutual communication requirements. The workloads arrive and depart over time, and the resource allocator must map each workload upon arrival to the physical network. We consider the objective of minimizing the congestion. We show that solving a subproblem about mapping a single workload to the physical graph essentially suffices to solve the general problem. In particular, an α-approximation for this single mapping problem gives an O(α log nD)-competitive algorithm for the general problem, where n is the number of nodes in the physical network and D is the maximum to minimum workload duration ratio. We also show how to solve the single mapping problem for two natural class of workloads, namely depth-d-trees and complete-graph workloads. For depth-d tree, we give an nO(d) time O(d2 log (nd))-approximation based on a strong LP relaxation inspired by the Sherali-Adams hierarchy."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | Another related theoretical work which proposed an LP-based method for offline placement of paths into trees in data-center networks was reported in @cite_12 . Here, the application nodes can only be placed onto the leaves of a tree physical graph, and the goal is to minimize link congestion. 
In our problem, the application nodes are distributed across users, MECs, and the core cloud, so they should not be placed only at the leaves of a tree; hence, the problem formulation in @cite_12 is inapplicable to our scenario. Additionally, @cite_12 focuses only on minimizing link congestion. The load balancing of nodes is not considered as part of the objective; only the capacity limits of nodes are considered. | {
"cite_N": [
"@cite_12"
],
"mid": [
"90769848"
],
"abstract": [
"Modern cloud infrastructure providers allow customers to rent computing capability in the form of a network of virtual machines (VMs) with bandwidth guarantees between pairs of VMs. Typical requests are in the form of a chain of VMs with an uplink bandwidth to the gateway node of the network (rooted path requests), and most data center architectures route network packets along a spanning tree of the physical network. VMs are instantiated inside servers which reside at the leaves of this network, leading to the following optimization problem: given a rooted tree network T and a set of rooted path requests, find an embedding of the requests that minimizes link congestion. Our main result is an algorithm that, given a rooted tree network T with n leaves and set of weighted rooted path requests, embeds a 1−e fraction of the requests with congestion at most poly(logn, logθ,e−1)·OPT (approximation is necessary since the problem is NP-hard). Here OPT is the congestion of the optimal embedding and θ is the ratio of the maximum to minimum weights of the path requests. We also obtain an O(Hlogn e2) approximation if node capacities can be augmented by a (1+e) factor (here H is the height of the tree). Our algorithm applies a randomized rounding scheme based on Group Steiner Tree rounding to a novel LP relaxation of the set of subtrees of T with a given number of leaves that may be of independent interest."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | Some other related work focuses on graph partitioning, such as @cite_13 and @cite_35 , where the physical graph is defined as a complete graph with edge costs associated with the distance or latency between physical servers. Such an abstraction combines multiple network links into one (abstract) physical edge, which may hide the actual status of individual links along a path. | {
"cite_N": [
"@cite_35",
"@cite_13"
],
"mid": [
"1986160824",
"1974099360"
],
"abstract": [
"The MapReduce Hadoop architecture has become very important and effective in cloud systems because many data-intensive applications are usually required to process big data. In such environments, big data is partitioned and stored over several data nodes; thus, the total completion time of a task would be delayed if the maximum access latency among all pairs of a data node and its assigned computation node is not bounded. Moreover, the computation nodes usually need to communicate with each other for aggregating the computation results; therefore, the maximum access latency among all pairs of assigned computation nodes also needs to be bounded. In the literature, it has been proved that the placement problem of computation nodes (virtual machines) to minimize the maximum access latency among all pairs of a data node and its assigned computation node and among all pairs of assigned computation nodes does not admit any approximation algorithm with a factor smaller than two, whereas no approximation algorithms have been proposed so far. In this paper, we first propose a 3-approximation algorithm for resolving the problem. Subsequently, we close the gap by proposing a 2-approximation algorithm, that is, an optimal approximation algorithm, for resolving the problem in the price of higher time complexity. Finally, we conduct simulations for evaluating the performance of our algorithms.",
"We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | A related problem that has emerged recently is the service chain embedding problem @cite_17 @cite_15 @cite_18 . Motivated by network function virtualization (NFV) applications, the goal is to place a linear application graph between fixed source and destination physical nodes, so that a series of operations are performed on data packets sent from the source to the destination. 
Within this body of work, only @cite_18 has studied the competitive ratio of online placement, which, however, does not consider link placement optimization. | {
"cite_N": [
"@cite_15",
"@cite_18",
"@cite_17"
],
"mid": [
"2340581052",
"1651984967",
"2481063713"
],
"abstract": [
"The SDN and NFV paradigms enable novel network services which can be realized and embedded in a flexible and rapid manner. For example, SDN can be used to flexibly steer traffic from a source to a destination through a sequence of virtualized middleboxes, in order to realize so-called service chains. The service chain embedding problem consists of three tasks: admission control, finding suitable locations to allocate the virtualized middleboxes and computing corresponding routing paths. This paper considers the offline batch embedding of multiple service chains. Concretely, we consider the objectives of maximizing the profit by embedding an optimal subset of requests or minimizing the costs when all requests need to be embedded. Interestingly, while the service chain embedding problem has recently received much attention, so far, only non- polynomial time algorithms (based on integer programming) as well as heuristics (which do not provide any formal guarantees) are known. This paper presents the first polynomial time service chain approximation algorithms both for the case with admission and without admission control. Our algorithm is based on a novel extension of the classic linear programming and randomized rounding technique, which may be of independent interest. In particular, we show that our approach can also be extended to more complex service graphs, containing cycles or sub-chains, hence also providing new insights into the classic virtual network embedding problem.",
"The virtualization and softwarization of modern computer networks enables the definition and fast deployment of novel network services called service chains: sequences of virtualized network functions e.g., firewalls, caches, traffic optimizers through which traffic is routed between source and destination. This paper attends to the problem of admitting and embedding a maximum number of service chains, i.e., a maximum number of source-destination pairs which are routed via a sequence of l to-be-allocated, capacitated network functions. We consider an Online variant of this maximum Service Chain Embedding Problem, short OSCEP, where requests arrive over time, in a worst-case manner. Our main contribution is a deterministic Ologl-competitive online algorithm, under the assumption that capacities are at least logarithmic in l. We show that this is asymptotically optimal within the class of deterministic and randomized online algorithms. We also explore lower bounds for offline approximation algorithms, and prove that the offline problem is APX-hard for unit capacities and smalli¾źli¾ź3, and even Poly-APX-hard in general, when there is no bound oni¾źl. These approximation lower bounds may be of independent interest, as they also extend to other problems such as Virtual Circuit Routing. Finally, we present an exact algorithm based on 0-1 programming, implying that the general offline SCEP is in NP and, by the above hardness results, it is NP-complete for constant l.",
"Recently, Network Function Virtualization (NFV) has been proposed to transform from network hardware appliances to software middleboxes. Normally, a demand needs to invoke several Virtual Network Functions (VNFs) in a particular order following the service chain along a routing path. In this paper, we study the joint problem of VNF placement and path selection to better utilize the network. We discover that the relation between the link and server usage plays a crucial role in the problem. We first propose a systematic way to elastically tune the proper link and server usage of each demand based on network conditions and demand properties. In particular, we compute a proper routing path length, and decide, for each VNF in the service chain, whether to use additional server resources or to reuse resources provided by existing servers. We then propose a chain deployment algorithm to follow the guidance of this link and server usage. Via simulations, we show that our design effectively adapts resource usage to network dynamics, and, hence, serves more demands than other heuristics."
]
} |
1605.08023 | 2401898190 | Mobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes. | One important aspect to note is that most existing works, including @cite_23 @cite_12 @cite_13 @cite_35 @cite_17 @cite_15 , do not specifically consider the online operation of the algorithms. Although some of them implicitly claim that one can apply the algorithm repeatedly for each newly arrived application, the competitive ratio of such a procedure is unclear.
To the best of our knowledge, @cite_34 is the only work that studied the competitive ratio of the online application placement problem that considers both node and link optimization. | {
"cite_N": [
"@cite_35",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1986160824",
"1966600255",
"2340581052",
"2121752873",
"1974099360",
"90769848",
"2481063713"
],
"abstract": [
"The MapReduce Hadoop architecture has become very important and effective in cloud systems because many data-intensive applications are usually required to process big data. In such environments, big data is partitioned and stored over several data nodes; thus, the total completion time of a task would be delayed if the maximum access latency among all pairs of a data node and its assigned computation node is not bounded. Moreover, the computation nodes usually need to communicate with each other for aggregating the computation results; therefore, the maximum access latency among all pairs of assigned computation nodes also needs to be bounded. In the literature, it has been proved that the placement problem of computation nodes (virtual machines) to minimize the maximum access latency among all pairs of a data node and its assigned computation node and among all pairs of assigned computation nodes does not admit any approximation algorithm with a factor smaller than two, whereas no approximation algorithms have been proposed so far. In this paper, we first propose a 3-approximation algorithm for resolving the problem. Subsequently, we close the gap by proposing a 2-approximation algorithm, that is, an optimal approximation algorithm, for resolving the problem in the price of higher time complexity. Finally, we conduct simulations for evaluating the performance of our algorithms.",
"Network virtualization allows multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. Efficient mapping of virtual nodes and virtual links of a VN request onto substrate network resources, also known as the VN embedding problem, is the first step toward enabling such multiplicity. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms that had clear separation between the node mapping and the link mapping phases. In this paper, we present ViNEYard-a collection of VN embedding algorithms that leverage better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program and devise two online VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. We also present a generalized window-based VN embedding algorithm (WiNE) to evaluate the effect of lookahead on VN embedding. Our simulation experiments on a large mix of VN requests show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.",
"The SDN and NFV paradigms enable novel network services which can be realized and embedded in a flexible and rapid manner. For example, SDN can be used to flexibly steer traffic from a source to a destination through a sequence of virtualized middleboxes, in order to realize so-called service chains. The service chain embedding problem consists of three tasks: admission control, finding suitable locations to allocate the virtualized middleboxes and computing corresponding routing paths. This paper considers the offline batch embedding of multiple service chains. Concretely, we consider the objectives of maximizing the profit by embedding an optimal subset of requests or minimizing the costs when all requests need to be embedded. Interestingly, while the service chain embedding problem has recently received much attention, so far, only non- polynomial time algorithms (based on integer programming) as well as heuristics (which do not provide any formal guarantees) are known. This paper presents the first polynomial time service chain approximation algorithms both for the case with admission and without admission control. Our algorithm is based on a novel extension of the classic linear programming and randomized rounding technique, which may be of independent interest. In particular, we show that our approach can also be extended to more complex service graphs, containing cycles or sub-chains, hence also providing new insights into the classic virtual network embedding problem.",
"We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph with vertices denoting servers and edges corresponding to communication links. A workload is a set of processes with processing requirements and mutual communication requirements. The workloads arrive and depart over time, and the resource allocator must map each workload upon arrival to the physical network. We consider the objective of minimizing the congestion. We show that solving a subproblem about mapping a single workload to the physical graph essentially suffices to solve the general problem. In particular, an α-approximation for this single mapping problem gives an O(α log nD)-competitive algorithm for the general problem, where n is the number of nodes in the physical network and D is the maximum to minimum workload duration ratio. We also show how to solve the single mapping problem for two natural class of workloads, namely depth-d-trees and complete-graph workloads. For depth-d tree, we give an nO(d) time O(d2 log (nd))-approximation based on a strong LP relaxation inspired by the Sherali-Adams hierarchy.",
"We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms.",
"Modern cloud infrastructure providers allow customers to rent computing capability in the form of a network of virtual machines (VMs) with bandwidth guarantees between pairs of VMs. Typical requests are in the form of a chain of VMs with an uplink bandwidth to the gateway node of the network (rooted path requests), and most data center architectures route network packets along a spanning tree of the physical network. VMs are instantiated inside servers which reside at the leaves of this network, leading to the following optimization problem: given a rooted tree network T and a set of rooted path requests, find an embedding of the requests that minimizes link congestion. Our main result is an algorithm that, given a rooted tree network T with n leaves and set of weighted rooted path requests, embeds a 1−ε fraction of the requests with congestion at most poly(log n, log θ, ε^{-1})·OPT (approximation is necessary since the problem is NP-hard). Here OPT is the congestion of the optimal embedding and θ is the ratio of the maximum to minimum weights of the path requests. We also obtain an O(H log n / ε^2) approximation if node capacities can be augmented by a (1+ε) factor (here H is the height of the tree). Our algorithm applies a randomized rounding scheme based on Group Steiner Tree rounding to a novel LP relaxation of the set of subtrees of T with a given number of leaves that may be of independent interest.",
"Recently, Network Function Virtualization (NFV) has been proposed to transform from network hardware appliances to software middleboxes. Normally, a demand needs to invoke several Virtual Network Functions (VNFs) in a particular order following the service chain along a routing path. In this paper, we study the joint problem of VNF placement and path selection to better utilize the network. We discover that the relation between the link and server usage plays a crucial role in the problem. We first propose a systematic way to elastically tune the proper link and server usage of each demand based on network conditions and demand properties. In particular, we compute a proper routing path length, and decide, for each VNF in the service chain, whether to use additional server resources or to reuse resources provided by existing servers. We then propose a chain deployment algorithm to follow the guidance of this link and server usage. Via simulations, we show that our design effectively adapts resource usage to network dynamics, and, hence, serves more demands than other heuristics."
]
} |
1605.07577 | 2403724605 | We introduce a new theorem prover for classical higher-order logic named auto2. The prover is designed to make use of human-specified heuristics when searching for proofs. The core algorithm is a best-first search through the space of propositions derivable from the initial assumptions, where new propositions are added by user-defined functions called proof steps. We implemented the prover in Isabelle/HOL, and applied it to several formalization projects in mathematics and computer science, demonstrating the high level of automation it can provide in a variety of possible proof tasks. | The author is particularly inspired by the work of Ganesalingam and Gowers @cite_11 , which describes a theorem prover that can output proofs in a form extremely similar to human exposition. Our terminology of ``box'' is taken from there (although the meaning here is slightly different). | {
"cite_N": [
"@cite_11"
],
"mid": [
"1598115799"
],
"abstract": [
"This paper describes a program that solves elementary mathematical problems, mostly in metric space theory, and presents solutions that are hard to distinguish from solutions that might be written by human mathematicians. The program is part of a more general project, which we also discuss."
]
} |
1605.07577 | 2403724605 | We introduce a new theorem prover for classical higher-order logic named auto2. The prover is designed to make use of human-specified heuristics when searching for proofs. The core algorithm is a best-first search through the space of propositions derivable from the initial assumptions, where new propositions are added by user-defined functions called proof steps. We implemented the prover in Isabelle/HOL, and applied it to several formalization projects in mathematics and computer science, demonstrating the high level of automation it can provide in a variety of possible proof tasks. | A similar ``blackboard'' approach is used for heuristic theorem proving by @cite_7 , where the focus is on proving real inequalities. The portion of our system concerning inequalities is not as sophisticated as what is implemented there. Instead, our work can be viewed as applying a similar technique to all forms of reasoning. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2504213494"
],
"abstract": [
"We describe a general method for verifying inequalities between real-valued expressions, especially the kinds of straightforward inferences that arise in interactive theorem proving. In contrast to approaches that aim to be complete with respect to a particular language or class of formulas, our method establishes claims that require heterogeneous forms of reasoning, relying on a Nelson-Oppen-style architecture in which special-purpose modules collaborate and share information. The framework is thus modular and extensible. A prototype implementation shows that the method is promising, complementing techniques that are used by contemporary interactive provers."
]
} |
1605.07369 | 2404609479 | We present a general framework and method for simultaneous detection and segmentation of an object in a video that moves (or comes into view of the camera) at some unknown time in the video. The method is an online approach based on motion segmentation, and it operates under dynamic backgrounds caused by a moving camera or moving nuisances. The goal of the method is to detect and segment the object as soon as it moves. Due to stochastic variability in the video and unreliability of the motion signal, several frames are needed to reliably detect the object. The method is designed to detect and segment with minimum delay subject to a constraint on the false alarm rate. The method is derived as a problem of Quickest Change Detection. Experiments on a dataset show the effectiveness of our method in minimizing detection delay subject to false alarm constraints. | The problem of detecting changes in a video has a large literature in computer vision, too extensive to review here, and thus we refer to @cite_16 for a good survey. That literature mainly addresses detection and segmentation of moving objects by background subtraction @cite_0 - subtraction of a known or adaptively determined background from the current video frame. We are interested in video with moving cameras or dynamic background nuisances, for which the methods in that literature largely do not apply, although there have been advancements. While there are methods that deal with dynamic cameras and detect and segment moving objects by motion (e.g., @cite_7 ), they do not address the issue of the tradeoff between detection delay and false alarms, which our approach addresses optimally by using QCD. We are not aware of the use of QCD in the detection and segmentation by motion of video. In the next section, we summarize the main ideas from QCD before framing our problem in that framework. | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_7"
],
"mid": [
"2003830783",
"",
"2115606256"
],
"abstract": [
"In this paper, we present a comparative study of several state of the art background subtraction methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested on different videos with ground truth. The goal of this study is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented motion detection methods. The methods are compared based on their robustness to different types of video, their memory requirement, and the computational effort they require. The impact of a Markovian prior as well as some post-processing operators are also evaluated. Most of the videos used in this study come from state-of-the-art benchmark databases and represent different challenges such as poor signal-to-noise ratio, multimodal background motion and camera jitter. Overall, this study not only helps better understand to which type of videos each method suits best but also estimate how better sophisticated methods are, compared to basic background subtraction methods.",
"",
"We address the problem of detection and tracking of moving objects in a video stream obtained from a moving airborne platform. The proposed method relies on a graph representation of moving objects which allows to derive and maintain a dynamic template of each moving object by enforcing their temporal coherence. This inferred template along with the graph representation used in our approach allows us to characterize objects trajectories as an optimal path in a graph. The proposed tracker allows to deal with partial occlusions, stop and go motion in very challenging situations. We demonstrate results on a number of different real sequences. We then define an evaluation methodology to quantify our results and show how tracking overcome detection errors."
]
} |
1605.07669 | 2396229782 | The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning. In real-world applications, using explicit user feedback as the reward signal is often unreliable and costly to collect. This problem can be mitigated if the user's intent is known in advance or data is available to pre-train a task success predictor off-line. In practice neither of these apply for most real world applications. Here we propose an on-line learning framework whereby the dialogue policy is jointly trained alongside the reward model via active learning with a Gaussian process model. This Gaussian process operates on a continuous space dialogue representation generated in an unsupervised fashion using a recurrent neural network encoder-decoder. The experimental results demonstrate that the proposed framework is able to significantly reduce data annotation costs and mitigate noisy user feedback in dialogue policy learning. | Dialogue evaluation has been an active research area since late 90s. proposed the PARADISE framework, where a linear function of task completion and various dialogue features such as dialogue duration were used to infer user satisfaction. This measure was later used as a reward function for learning a dialogue policy @cite_10 . However, as noted, task completion is rarely available when the system is interacting with real users and also concerns have been raised regarding the theoretical validity of the model @cite_23 . | {
"cite_N": [
"@cite_10",
"@cite_23"
],
"mid": [
"2057244568",
"2153971568"
],
"abstract": [
"We present a new data-driven methodology for simulation-based dialogue strategy learning, which allows us to address several problems in the field of automatic optimization of dialogue strategies: learning effective dialogue strategies when no initial data or system exists, and determining a data-driven reward function. In addition, we evaluate the result with real users, and explore how results transfer between simulated and real interactions. We use Reinforcement Learning (RL) to learn multimodal dialogue strategies by interaction with a simulated environment which is \"bootstrapped\" from small amounts of Wizard-of-Oz (WOZ) data. This use of WOZ data allows data-driven development of optimal strategies for domains where no working prototype is available. Using simulation-based RL allows us to find optimal policies which are not (necessarily) present in the original data. Our results show that simulation-based RL significantly outperforms the average (human wizard) strategy as learned from the data by using Supervised Learning. The bootstrapped RL-based policy gains on average 50 times more reward when tested in simulation, and almost 18 times more reward when interacting with real users. Users also subjectively rate the RL-based policy on average 10 higher. We also show that results from simulated interaction do transfer to interaction with real users, and we explicitly evaluate the stability of the data-driven reward function.",
"The paper presents results and conclusions about the current evaluation methodologies for spoken dialogue systems (SDS). The PARADISE paradigm, used for evaluation in the DARPA Communicator project, is briefly introduced and discussed through the application to the OVID home banking dialogue system. It is shown to provide results consistent with those obtained by the DARPA community, but a number of problems and limitations are pointed out. The issue of user attitude measures obtained through questionnaires is discussed. This is an area that has not received much attention from the speech technology community, but is important in order to obtain valid results and conclusions about usability. A general presentation of the issues that must be addressed when developing and employing questionnaires is given with a focus on how to ensure the reliability and validity of the results. Examples of results obtained from the OVID project are used to illustrate this."
]
} |
1605.07669 | 2396229782 | The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning. In real-world applications, using explicit user feedback as the reward signal is often unreliable and costly to collect. This problem can be mitigated if the user's intent is known in advance or data is available to pre-train a task success predictor off-line. In practice neither of these apply for most real world applications. Here we propose an on-line learning framework whereby the dialogue policy is jointly trained alongside the reward model via active learning with a Gaussian process model. This Gaussian process operates on a continuous space dialogue representation generated in an unsupervised fashion using a recurrent neural network encoder-decoder. The experimental results demonstrate that the proposed framework is able to significantly reduce data annotation costs and mitigate noisy user feedback in dialogue policy learning. | Several approaches have been adopted for learning a dialogue reward model given a corpus of annotated dialogues. used collaborative filtering to infer user preferences. The use of reward shaping has also been investigated in @cite_44 @cite_22 to enrich the reward function in order to speed up dialogue policy learning. Also, demonstrated that there is a strong correlation between expert's user satisfaction ratings and dialogue success. However, all these methods assume the availability of reliable dialogue annotations such as expert ratings, which in practice are hard to obtain. | {
"cite_N": [
"@cite_44",
"@cite_22"
],
"mid": [
"2214131199",
"2250679999"
],
"abstract": [
"Reinforcement learning-based spoken dialogue systems aim to compute an optimal strategy for dialogue management from interactions with users. They compare their different management strategies on the basis of a numerical reward function. Reward inference consists of learning a reward function from dialogues scored by users. A major issue for reward inference algorithms is that important parameters influence user evaluations and cannot be computed online. This is the case of task completion. This paper introduces Task Completion Transfer Learning (TCTL): a method to exploit the exact knowledge of task completion on a corpus of dialogues scored by users in order to optimise online learning. Compared to previously proposed reward inference techniques, TCTL returns a reward function enhanced with the possibility to manage the online non-observability of task completion. A reward function is learnt with TCTL on dialogues with a restaurant seeking system. It is shown that the reward function returned by TCTL is a better estimator of dialogue performance than the one returned by reward inference.",
"Adapting Spoken Dialogue Systems to the user is supposed to result in more efficient and successful dialogues. In this work, we present an evaluation of a quality-adaptive strategy with a user simulator adapting the dialogue initiative dynamically during the ongoing interaction and show that it outperforms conventional non-adaptive strategies and a random strategy. Furthermore, we indicate a correlation between Interaction Quality and dialogue completion rate, task success rate, and average dialogue length. Finally, we analyze the correlation between task success and interaction quality in more detail identifying the usefulness of interaction quality for modelling the reward of reinforcement learning strategy optimization."
]
} |
1605.07681 | 2461078667 | Most current semantic segmentation methods rely on fully convolutional networks (FCNs). However, their use of large receptive fields and many pooling layers cause low spatial resolution inside the deep layers. This leads to predictions with poor localization around the boundaries. Prior work has attempted to address this issue by post-processing predictions with CRFs or MRFs. But such models often fail to capture semantic relationships between objects, which causes spatially disjoint predictions. To overcome these problems, recent methods integrated CRFs or MRFs into an FCN framework. The downside of these new models is that they have much higher complexity than traditional FCNs, which renders training and testing more challenging. In this work we introduce a simple, yet effective Convolutional Random Walk Network (RWN) that addresses the issues of poor boundary localization and spatially fragmented predictions with very little increase in model complexity. Our proposed RWN jointly optimizes the objectives of pixelwise affinity and semantic segmentation. It combines these two objectives via a novel random walk layer that enforces consistent spatial grouping in the deep layers of the network. Our RWN is implemented using standard convolution and matrix multiplication. This allows an easy integration into existing FCN frameworks and it enables end-to-end training of the whole network via standard back-propagation. Our implementation of RWN requires just @math additional parameters compared to the traditional FCNs, and yet it consistently produces an improvement over the FCNs on semantic segmentation and scene labeling. | The recent introduction of fully convolutional networks (FCNs) @cite_8 has led to remarkable advances in semantic segmentation. However, due to the large receptive fields and many pooling layers, segments predicted by FCNs tend to be blobby and lack fine object boundary details. 
Recently there have been several attempts to address these problems. These approaches can be divided into several groups. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2952632681"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image."
]
} |
1605.07559 | 2950439296 | We study the achievable capacity regions of full-duplex links in the single- and multi-channel cases (in the latter case, the channels are assumed to be orthogonal -- e.g., OFDM). We present analytical results that characterize the uplink and downlink capacity region and efficient algorithms for computing rate pairs at the region's boundary. We also provide near-optimal and heuristic algorithms that "convexify" the capacity region when it is not convex. The convexified region corresponds to a combination of a few full-duplex rates (i.e., to time sharing between different operation modes). The algorithms can be used for theoretical characterization of the capacity region as well as for resource (time, power, and channel) allocation with the objective of maximizing the sum of the rates when one of them (uplink or downlink) must be guaranteed (e.g., due to QoS considerations). We numerically illustrate the capacity regions and the rate gains (compared to time division duplex) for various channel and cancellation scenarios. The analytical results provide insights into the properties of the full-duplex capacity region and are essential for future development of scheduling, channel allocation, and power control algorithms. | Various challenges related to FD wireless recently attracted significant attention. These include FD radio system design @cite_9 @cite_16 @cite_23 @cite_19 @cite_10 @cite_11 as well as rate gain evaluation and resource allocation @cite_14 @cite_30 @cite_25 @cite_26 @cite_24 @cite_3 @cite_18 @cite_29 @cite_21 . A large body of (analytical) work @cite_14 @cite_26 @cite_5 focuses on while we focus on the more realistic model of imperfect SIC. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2078325625",
"2139947445",
"",
"2122125652",
"1503817581",
"",
"1569508871",
"",
"",
"",
"",
"",
"2139015614",
"",
"2160030828"
],
"abstract": [
"",
"Given that full duplex (FD) and MIMO both employ multiple antenna resources, an important question that arises is how to make the choice between MIMO and FD? We show that optimal performance requires a combination of both to be used. Hence, we present the design and implementation of MIDU, the first MIMO full duplex system for wireless networks. MIDU employs antenna cancellation with symmetric placement of transmit and receive antennas as its primary RF cancellation technique. We show that MIDU's design provides large amounts of self-interference cancellation with several key advantages: (i) It allows for two stages of additive antenna cancellation in tandem, to yield as high as 45 dB self-interference suppression; (ii) It can potentially eliminate the need for other forms of analog cancellation, thereby avoiding the need for variable attenuator and delays; (iii) It easily scales to MIMO systems, therefore enabling the coexistence of MIMO and full duplex. We implemented MIDU on the WARP FPGA platform, and evaluated its performance against half duplex (HD)-MIMO. Our results reveal that, with the same number of RF chains, MIDU can potentially double the throughput achieved by half duplex MIMO in a single link; and provide median gains of at least 20 even in single cell scenarios, where full duplex encounters inter-client interference. Based on key insights from our results, we also highlight how to efficiently enable scheduling for a MIDU node.",
"In this paper, we study a three-node full-duplex network, where a base station is engaged in simultaneous up- and downlink communication in the same frequency band with two half-duplex mobile nodes. To reduce the impact of inter-node interference between the two mobile nodes on the system capacity, we study how an orthogonal side-channel between the two mobile nodes can be leveraged to achieve full-duplex-like multiplexing gains. We propose and characterize the achievable rates of four distributed full-duplex schemes, labeled bin-and-cancel, compress-and-cancel, estimate-and-cancel and decode-and-cancel. Of the four, bin-and-cancel is shown to achieve within 1 bit/s/Hz of the capacity region for all values of channel parameters. In contrast, the other three schemes achieve the near-optimal performance only in certain regimes of channel values. Asymptotic multiplexing gains of all proposed schemes are derived to show that the side-channel is extremely effective in regimes where inter-node interference has the highest impact.",
"",
"This paper discusses the design of a single channel full-duplex wireless transceiver. The design uses a combination of RF and baseband techniques to achieve full-duplexing with minimal effect on link reliability. Experiments on real nodes show the full-duplex prototype achieves median performance that is within 8 of an ideal full-duplexing system. This paper presents Antenna Cancellation, a novel technique for self-interference cancellation. In conjunction with existing RF interference cancellation and digital baseband interference cancellation, antenna cancellation achieves the amount of self-interference cancellation required for full-duplex operation. The paper also discusses potential MAC and network gains with full-duplexing. It suggests ways in which a full-duplex system can solve some important problems with existing wireless systems including hidden terminals, loss of throughput due to congestion, and large end-to-end delays.",
"Full-duplex (FD) technology has been regarded as a promising solution to improve the spectral efficiency of the 5th generation (5G) wireless communication system. In this paper, we focus on a small-area radio system, where a FD base-station serves two half-duplex users simultaneously with one in the uplink and the other one in downlink direction. However, the self-interference (SI) and the inter-user interference (IUI) may largely degrade the system performance. Therefore we propose a simple but effective inter-user interference cancellation (IIC) strategy to cope with the IUI. This method is based on the fact that the downlink user observes a multiple access channel (MAC) of two user, the BS and the uplink user, respectively. Hence, the downlink user has an opportunity to remove the IUI according to the transmission rates and its received powers. Therefore, we call this method as multi-access channel IIC (MIIC). Then, we analyze the achievable rate region of the FD system with and without MIIC when taking the SI into consideration. To obtain the rate region, analytic power allocation strategies are derived. Simulation results show that, by applying the proposed power allocation strategies, the performance of FD system with MIIC is as good as the system without IUI under the 3GPP assumptions.",
"",
"The recent breakthrough in wireless full-duplex communication makes possible a brand new way of multi-hop wireless communication, namely full-duplex cut-through transmission, where for a traffic flow that traverses through multiple links, every node along the route can receive a new packet and simultaneously forward the previously received packet. This wireless transmission scheme brings new challenges in the design of MAC layer algorithms that aim to reap its full benefit. First, the MAC layer rate region of the cut-through enabled network is directly a function of the routing decision, leading to a strong coupling between routing and scheduling. Second, it is unclear how to dynamically form change cut-through routes based on the traffic rates and patterns. In this work, we introduce a novel method to characterize the interference relationship between links in the network with cut-through transmission, which decouples the routing decision with the scheduling decision and enables a seamless adaptation of traditional half-duplex routing scheduling algorithm into wireless networks with full-duplex cut-through capabilities. Based on this interference model, a queue-length based CSMA-type scheduling algorithm is proposed, which both leverages the flexibility of full-duplex cut-through transmission and permits distributed implementation.",
"",
"",
"",
"",
"",
"Recent works have considered the feasibility of full duplex (FD) wireless communications in practice. While the first FD system by Choi et.al. relied on a specific antenna cancellation technique to achieve a significant portion of self-interference cancellation, the various limitations of this technique prompted latter works to move away from antenna cancellation and rely on analog cancellation achieved through channel estimation. However, the latter systems in turn require the use of variable attenuator and delay elements that need to be automatically tuned to compensate for the self-interference channel. This not only adds complexity to the overall system but also makes the performance sensitive to wide-band channels. More importantly, none of the existing FD schemes can be readily scaled to MIMO systems. In this context, we revisit the role of antenna cancellation in FD communications and show that it has more potential in its applicability to FD than previously thought. We advocate a design that overcomes the limitations that have been pointed out in the literature. We then extend this to a two-stage design that allows both transmit and receive versions of antenna cancellation to be jointly leveraged. Finally, we illustrate an extension of our design to MIMO systems, where a combination of both MIMO and FD can be realized in tandem.",
"",
"This paper presents the design and implementation of the first in-band full duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs and achieves close to the theoretical doubling of throughput in all practical deployment scenarios. Our design uses a single antenna for simultaneous TX RX (i.e., the same resources as a standard half duplex system). We also propose novel analog and digital cancellation techniques that cancel the self interference to the receiver noise floor, and therefore ensure that there is no degradation to the received signal. We prototype our design by building our own analog circuit boards and integrating them with a fully WiFi-PHY compatible software radio implementation. We show experimentally that our design works robustly in noisy indoor environments, and provides close to the expected theoretical doubling of throughput in practice."
]
} |
1605.07559 | 2950439296 | We study the achievable capacity regions of full-duplex links in the single- and multi-channel cases (in the latter case, the channels are assumed to be orthogonal -- e.g., OFDM). We present analytical results that characterize the uplink and downlink capacity region and efficient algorithms for computing rate pairs at the region's boundary. We also provide near-optimal and heuristic algorithms that "convexify" the capacity region when it is not convex. The convexified region corresponds to a combination of a few full-duplex rates (i.e., to time sharing between different operation modes). The algorithms can be used for theoretical characterization of the capacity region as well as for resource (time, power, and channel) allocation with the objective of maximizing the sum of the rates when one of them (uplink or downlink) must be guaranteed (e.g., due to QoS considerations). We numerically illustrate the capacity regions and the rate gains (compared to time division duplex) for various channel and cancellation scenarios. The analytical results provide insights into the properties of the full-duplex capacity region and are essential for future development of scheduling, channel allocation, and power control algorithms. | Power allocation for maximizing the sum of the UL and DL rates for the single- and multi-channel cases was studied in @cite_28 @cite_0 . The maximization only determines a single point on the capacity region and does not imply anything about the rest of the region, which is our focus. While @cite_28 (implicitly) constructs the FD capacity region in the single channel case (restated here as Proposition ), it does not derive any structural properties of the region, nor does it consider the multi-channel case or a combination of FD and TDD. | {
"cite_N": [
"@cite_28",
"@cite_0"
],
"mid": [
"2154641212",
"2060512747"
],
"abstract": [
"Full-duplex communication has the potential to substantially increase the throughput in wireless networks. However, the benefits of full-duplex are still not well understood. In this paper, we characterize the full-duplex rate gains in both single-channel and multi-channel use cases. For the single-channel case, we quantify the rate gain as a function of the remaining self-interference and SNR values. We also provide a sufficient condition under which the sum of uplink and downlink rates on a full-duplex channel is concave in the transmission power levels. Building on these results, we consider the multi-channel case. For that case, we introduce a new realistic model of a small form-factor (e.g., smartphone) full-duplex receiver and demonstrate its accuracy via measurements. We study the problem of jointly allocating power levels to different channels and selecting the frequency of maximum self-interference suppression, where the objective is maximizing the sum of the rates over uplink and downlink OFDM channels. We develop a polynomial time algorithm which is nearly optimal under very mild restrictions. To reduce the running time, we develop an efficient nearly-optimal algorithm under the high SINR approximation. Finally, we demonstrate via numerical evaluations the capacity gains in the different use cases and obtain insights into the impact of the remaining self-interference and wireless channel states on the performance.",
"We consider the full-duplex transmission over bidirectional channels with imperfect self-interference cancelation in wireless networks. In particular, together using propagation-domain interference suppression, analog-domain interference cancellation, and digital-domain interference cancellation, we develop the optimal dynamic power allocation schemes for the wireless full-duplex sum-rate optimization problem which aims at maximizing the sum-rate of wireless full-duplex bidirectional transmissions. In the high signal-to-interference-plus-noise ratio (SINR) region, the full-duplex sum-rate maximization problem is a convex optimization problem. For interference-dominated wireless full-duplex transmission in the high SINR region, we derive the closed-form expression for the optimal dynamic power allocation scheme. For non-interference-dominated wireless full-duplex transmission in the high SINR region, we obtain the optimal dynamic power allocation scheme by numerically solving the corresponding Karush-Kuhn-Tucker (KKT) conditions. While the full-duplex sum-rate maximization problem is usually not a convex optimization problem, by developing the tightest lower-bound function and using the logarithmic change of variables technique, we convert the full-duplex sum-rate maximization problem to a convex optimization problem. Then, using our proposed iteration algorithm, we can numerically derive the optimal dynamic power allocation scheme for the more generic scenario. Also presented are the numerical results which validate our developed optimal dynamic power allocation schemes."
]
} |
1605.07559 | 2950439296 | We study the achievable capacity regions of full-duplex links in the single- and multi-channel cases (in the latter case, the channels are assumed to be orthogonal -- e.g., OFDM). We present analytical results that characterize the uplink and downlink capacity region and efficient algorithms for computing rate pairs at the region's boundary. We also provide near-optimal and heuristic algorithms that "convexify" the capacity region when it is not convex. The convexified region corresponds to a combination of a few full-duplex rates (i.e., to time sharing between different operation modes). The algorithms can be used for theoretical characterization of the capacity region as well as for resource (time, power, and channel) allocation with the objective of maximizing the sum of the rates when one of them (uplink or downlink) must be guaranteed (e.g., due to QoS considerations). We numerically illustrate the capacity regions and the rate gains (compared to time division duplex) for various channel and cancellation scenarios. The analytical results provide insights into the properties of the full-duplex capacity region and are essential for future development of scheduling, channel allocation, and power control algorithms. | The capacity region for an FD MIMO two-way relay channel was studied in @cite_18 as a joint problem of beamforming and power allocation. For a fixed beamforming, the problem reduces to determining a single channel FD capacity region. Yet, the joint problem is significantly different from the problems considered here. The FD capacity region for multiple channels was considered in @cite_20 . While @cite_20 considers both fixed and general power allocation for determining an FD capacity region, the analytical results are obtained only for the fixed power case and the non-convex problem of general power allocation was addressed heuristically. 
Specifically, for the fixed power case, our proof of Lemma is more accurate than the proof of Theorem 3 in @cite_20 (see the proof of Lemma @cite_6 ). | {
"cite_N": [
"@cite_18",
"@cite_6",
"@cite_20"
],
"mid": [
"2078325625",
"",
"2086322506"
],
"abstract": [
"Given that full duplex (FD) and MIMO both employ multiple antenna resources, an important question that arises is how to make the choice between MIMO and FD? We show that optimal performance requires a combination of both to be used. Hence, we present the design and implementation of MIDU, the first MIMO full duplex system for wireless networks. MIDU employs antenna cancellation with symmetric placement of transmit and receive antennas as its primary RF cancellation technique. We show that MIDU's design provides large amounts of self-interference cancellation with several key advantages: (i) It allows for two stages of additive antenna cancellation in tandem, to yield as high as 45 dB self-interference suppression; (ii) It can potentially eliminate the need for other forms of analog cancellation, thereby avoiding the need for variable attenuator and delays; (iii) It easily scales to MIMO systems, therefore enabling the coexistence of MIMO and full duplex. We implemented MIDU on the WARP FPGA platform, and evaluated its performance against half duplex (HD)-MIMO. Our results reveal that, with the same number of RF chains, MIDU can potentially double the throughput achieved by half duplex MIMO in a single link; and provide median gains of at least 20% even in single cell scenarios, where full duplex encounters inter-client interference. Based on key insights from our results, we also highlight how to efficiently enable scheduling for a MIDU node.",
"",
"The rate regions of half- and full-duplex links using the orthogonal frequency division multiplexing (OFDM) technique are analyzed after taking into account the non-ideality of practical transceivers. The non-ideality is quantified by a measure named error vector magnitude (EVM) level in practical systems. It is approximated as a Gaussian noise added to the original signal by the transmitter. The assumed full-duplex transceiver suppresses the self-interference via a three-stage process. The stages are antenna isolation, RF cancellation, and digital baseband cancellation. The self-interference caused by the EVM noise and the original signal are suppressed differently in the three-stage process. The optimal power allocation algorithms are developed to maximize the rates of the half- and full-duplex OFDM links under two different strategies. The first one aims at a low complexity design, where each node uniformly allocates the power over sub-carriers, whereas the second one adaptively allocates the power over sub-carriers to achieve the largest rate. Using the developed algorithms, the achieved rate regions under frequency-flat and frequency-selective environments are compared."
]
} |
1605.07559 | 2950439296 | We study the achievable capacity regions of full-duplex links in the single- and multi-channel cases (in the latter case, the channels are assumed to be orthogonal -- e.g., OFDM). We present analytical results that characterize the uplink and downlink capacity region and efficient algorithms for computing rate pairs at the region's boundary. We also provide near-optimal and heuristic algorithms that "convexify" the capacity region when it is not convex. The convexified region corresponds to a combination of a few full-duplex rates (i.e., to time sharing between different operation modes). The algorithms can be used for theoretical characterization of the capacity region as well as for resource (time, power, and channel) allocation with the objective of maximizing the sum of the rates when one of them (uplink or downlink) must be guaranteed (e.g., due to QoS considerations). We numerically illustrate the capacity regions and the rate gains (compared to time division duplex) for various channel and cancellation scenarios. The analytical results provide insights into the properties of the full-duplex capacity region and are essential for future development of scheduling, channel allocation, and power control algorithms. | The TDFD capacity region was studied in @cite_2 only via simulation and in @cite_29 analytically but mainly for the single-channel case. The "convexification" of the FD region in @cite_29 is performed over a discrete set of rate pairs, which requires linear computation in the set size, assuming that the points are sorted (e.g., Ch. 33 in @cite_22 ). Our results for a single channel rely on the structural properties of the FD capacity region and do not require the set of FD rate pairs to be discrete. Moreover, the computation for determining the convexified region is logarithmic (see Section ). | {
"cite_N": [
"@cite_29",
"@cite_22",
"@cite_2"
],
"mid": [
"1503817581",
"2752885492",
""
],
"abstract": [
"Full-duplex (FD) technology has been regarded as a promising solution to improve the spectral efficiency of the 5th generation (5G) wireless communication system. In this paper, we focus on a small-area radio system, where a FD base-station serves two half-duplex users simultaneously with one in the uplink and the other one in downlink direction. However, the self-interference (SI) and the inter-user interference (IUI) may largely degrade the system performance. Therefore we propose a simple but effective inter-user interference cancellation (IIC) strategy to cope with the IUI. This method is based on the fact that the downlink user observes a multiple access channel (MAC) of two users, the BS and the uplink user, respectively. Hence, the downlink user has an opportunity to remove the IUI according to the transmission rates and its received powers. Therefore, we call this method as multi-access channel IIC (MIIC). Then, we analyze the achievable rate region of the FD system with and without MIIC when taking the SI into consideration. To obtain the rate region, analytic power allocation strategies are derived. Simulation results show that, by applying the proposed power allocation strategies, the performance of FD system with MIIC is as good as the system without IUI under the 3GPP assumptions.",
"From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.",
""
]
} |
1605.07081 | 2399238887 | A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set. | Interest in monocular depth estimation dates back to the early days of computer vision, with methods that reasoned about geometry from cues such as diffuse shading @cite_12 , or contours @cite_16 @cite_14 . However, the last decade has seen accelerated progress on this task @cite_7 @cite_1 @cite_0 @cite_3 @cite_8 @cite_2 @cite_11 @cite_15 @cite_4 @cite_6 , largely owing to the availability of cheap consumer depth sensors, and consequently, large amounts of depth data for training learning-based methods. Most recent methods are based on training neural networks to map RGB images to geometry @cite_7 @cite_1 @cite_0 @cite_3 @cite_8 @cite_2 @cite_11 . Eigen et al. @cite_1 @cite_0 set up their network to regress directly to per-pixel depth values, although they provide deeper supervision to their network by requiring an intermediate layer to explicitly output a coarse depth map. Other methods @cite_3 @cite_8 use conditional random fields (CRFs) to smooth their neural estimates. 
Moreover, the network in @cite_3 also learns to predict one aspect of depth structure, in the form of the CRF's pairwise potentials. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"1526242170",
"1992178727",
"2949447631",
"1915250530",
"2951713345",
"",
"2158211626",
"2951234442",
"2952623155",
"1545195129",
"",
"97083571",
"2221366145"
],
"abstract": [
"",
"The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.",
"In this paper we propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as a guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing.",
"Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"",
"We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.",
"We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large dataset containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.",
"",
"Understanding how the shape of a three dimensional object may be recovered from shading in a two-dimensional image of the object is one of the most important - and still unresolved - problems in machine vision. Although this important subfield is now in its second decade, this book is the first to provide a comprehensive review of shape from shading. It brings together all of the seminal papers on the subject, shows how recent work relates to more traditional approaches, and provides a comprehensive annotated bibliography. The book's 17 chapters cover: Surface Descriptions from Stereo and Shading. Shape and Source from Shading. The Eikonal Equation: some Results Applicable to Computer Vision. A Method for Enforcing Integrability in Shape from Shading Algorithms. Obtaining Shape from Shading Information. The Variational Approach to Shape from Shading. Calculating the Reflectance Map. Numerical Shape from Shading and Occluding Boundaries. Photometric Invariants Related to Solid Shape. Improved Methods of Estimating Shape from Shading Using the Light Source Coordinate System. A Provably Convergent Algorithm for Shape from Shading. Recovering Three Dimensional Shape from a Single Image of Curved Objects. Perception of Solid Shape from Shading. Local Shading Analysis Pentland. Radarclinometry for the Venus Radar Mapper. Photometric Method for Determining Surface Orientation from Multiple Images. Berthold K. P. Horn is Professor of Electrical Engineering and Computer Science at MIT. He has presided over the field of machine vision for more than a decade and is the author of \"Robot Vision.\" Michael Brooks is Reader in Computer Science at The Flinders University of South Australia. \"Shape from Shading\" is included in the Artificial Intelligence series, edited by Michael Brady, Daniel Bobrow, and Randall Davis.",
"We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this framework works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results."
]
} |
1605.07081 | 2399238887 | A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set. | Some methods are trained to exploit other individual aspects of geometric structure. Wang et al. @cite_2 train a neural network to output surface normals instead of depth (Eigen et al. @cite_1 do so as well, for a network separately trained for this task). In a novel approach, Zoran et al. @cite_11 were able to train a network to predict the relative depth ordering between pairs of points in the image---whether one surface is behind, in front of, or at the same depth as the other. However, their globalization scheme to combine these outputs was able to achieve limited accuracy at estimating actual depth, due to the limited information carried by ordinal pair-wise predictions. | {
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_2"
],
"mid": [
"2221366145",
"2951713345",
"2952623155"
],
"abstract": [
"We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this framework works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning."
]
} |
1605.07081 | 2399238887 | A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set. | In contrast, our network learns to reason about a more diverse set of structural relationships, by predicting a large number of coefficients at each location. Note that some prior methods @cite_7 @cite_8 also regress to coefficients in some basis instead of to depth values directly. However, their motivation for this is to the complexity of the output space, and use basis sets that have much lower dimensionality than the depth map itself. Our approach is different---our predictions are distributions over coefficients in an representation, motivated by the expectation that our network will be able to precisely characterize only a small subset of the total coefficients in our representation. | {
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2949447631",
"1915250530"
],
"abstract": [
"In this paper we propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as a guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing.",
"Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results."
]
} |
1605.07081 | 2399238887 | A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set. | Our overall approach is similar to, and indeed motivated by, the recent work of Chakrabarti al @cite_10 , who proposed estimating a scene map (they considered disparity estimation from stereo images) by first using local predictors to produce distributional outputs from many overlapping regions at multiple scales, followed by a globalization step to harmonize these outputs. However, in addition to the fact that we use a neural network to carry out local inference, our approach is different in that inference is not based on imposing a restrictive model (such as planarity) on our local outputs. Instead, we produce independent local distributions for various derivatives of the depth map. Consequently, our globalization method need not explicitly reason about which local predictions are outliers'' with respect to such a model. Moreover, since our coefficients can be related to the global depth map through convolutions, we are able to use Fourier-domain computations for efficient inference. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1910044562"
],
"abstract": [
"We introduce a multi-scale framework for low-level vision, where the goal is estimating physical scene values from image data—such as depth from stereo image pairs. The framework uses a dense, overlapping set of image regions at multiple scales and a “local model,” such as a slanted-plane model for stereo disparity, that is expected to be valid piecewise across the visual field. Estimation is cast as optimization over a dichotomous mixture of variables, simultaneously determining which regions are inliers with respect to the local model (binary variables) and the correct co-ordinates in the local model space for each inlying region (continuous variables). When the regions are organized into a multi-scale hierarchy, optimization can occur in an efficient and parallel architecture, where distributed computational units iteratively perform calculations and share information through sparse connections between parents and children. The framework performs well on a standard benchmark for binocular stereo, and it produces a distributional scene representation that is appropriate for combining with higher-level reasoning and other low-level cues."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | Topic detection has been one of the most active research areas in Machine Learning in the past decade. The most commonly used method is latent Dirichlet allocation (LDA) @cite_8 . LDA assumes that documents are generated as follows: First, a list @math of topics is drawn from a Dirichlet distribution. Then, for each document @math , a topic distribution @math is drawn from another Dirichlet distribution. Each word @math in the document is generated by first picking a topic @math according to the topic distribution @math , and then selecting a word according to the word distribution @math of the topic. 
Given a document collection, the generation process is reverted via statistical inference (sampling or variational inference) to determine the topics and topic compositions of the documents. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1880262756"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | LDA has been extended in various ways for additional modeling capabilities. Topic correlations are considered in @cite_15 @cite_21 ; topic evolution is modeled in @cite_6 @cite_38 @cite_33 ; topic structures are built in @cite_36 @cite_21 @cite_0 @cite_10 ; side information is exploited in @cite_22 @cite_16 ; supervised topic models are proposed in @cite_39 @cite_27 ; and so on. In the following, we discuss in more details three of the extensions that are more closely related to this paper than others. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_21",
"@cite_6",
"@cite_39",
"@cite_0",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"2171343266",
"2149655761",
"2158085718",
"2158266063",
"2106490775",
"",
"",
"",
"2102150019",
"",
"1506246224",
"2122678284"
],
"abstract": [
"This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.",
"Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations. The current practice for estimating the parameters of such models relies on local search heuristics (e.g., the EM algorithm) which are prone to failure, and existing consistent methods are unfavorable due to their high computational and sample complexity which typically scale exponentially with the number of mixture components. This work develops an efficient method of moments approach to parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians (such as mixtures of axis-aligned Gaussians) and hidden Markov models. The new method leads to rigorous unsupervised learning results for mixture models that were not achieved by previous works; and, because of its simplicity, it offers a viable alternative to EM for practical deployment.",
"Users of topic modeling methods often have knowledge about the composition of words that should have high or low probability in various topics. We incorporate such domain knowledge using a novel Dirichlet Forest prior in a Latent Dirichlet Allocation framework. The prior is a mixture of Dirichlet tree distributions with special structures. We present its construction, and inference via collapsed Gibbs sampling. Experiments on synthetic and real datasets demonstrate our model's ability to follow and generalize beyond user-specified domain knowledge.",
"We consider problems involving groups of data where each observation within a group is a draw from a mixture model and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes ...",
"Latent Dirichlet allocation (LDA) and other related topic models are increasingly popular tools for summarization and manifold discovery in discrete data. However, LDA does not capture correlations between topics. In this paper, we introduce the pachinko allocation model (PAM), which captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). The leaves of the DAG represent individual words in the vocabulary, while each interior node represents a correlation among its children, which may be words or other interior nodes (topics). PAM provides a flexible alternative to recent work by Blei and Lafferty (2006), which captures correlations only between pairs of topics. Using text data from newsgroups, historic NIPS proceedings and other research paper corpora, we show improved performance of PAM in document classification, likelihood of held-out data, the ability to support finer-grained topics, and topical keyword coherence.",
"",
"",
"",
"We introduce hierarchically supervised latent Dirichlet allocation (HSLDA), a model for hierarchically and multiply labeled bag-of-word data. Examples of such data include web pages and their placement in directories, product descriptions and associated categories from product hierarchies, and free-text clinical records and their assigned diagnosis codes. Out-of-sample label prediction is the primary goal of this work, but improved lower-dimensional representations of the bag-of-word data are also of interest. We demonstrate HSLDA on large-scale data from clinical document labeling and retail product categorization tasks. We show that leveraging the structure from hierarchical labels improves out-of-sample label prediction substantially when compared to models that do not.",
"",
"Topic models have great potential for helping users understand document corpora. This potential is stymied by their purely unsupervised nature, which often leads to topics that are neither entirely meaningful nor effective in extrinsic tasks (, 2009). We propose a simple and effective way to guide topic models to learn topics of specific interest to a user. We achieve this by providing sets of seed words that a user believes are representative of the underlying topics in a corpus. Our model uses these seeds to improve both topic-word distributions (by biasing topics to produce appropriate seed words) and to improve document-topic distributions (by biasing documents to select topics related to the seed words they contain). Extrinsic evaluation on a document clustering task reveals a significant improvement when using seed information, even over other models that use seed information naively.",
"The four-level pachinko allocation model (PAM) (Li & McCallum, 2006) represents correlations among topics using a DAG structure. It does not, however, represent a nested hierarchy of topics, with some topical word distributions representing the vocabulary that is shared among several more specific topics. This paper presents hierarchical PAM---an enhancement that explicitly represents a topic hierarchy. This model can be seen as combining the advantages of hLDA's topical hierarchy representation with PAM's ability to mix multiple leaves of the topic hierarchy. Experimental results show improvements in likelihood of held-out documents, as well as mutual information between automatically-discovered topics and humangenerated categories such as journals."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | Nested Chinese Restaurant Process (nCRP) @cite_7 and nested Hierarchical Dirichlet Process (nHDP) @cite_28 are proposed as HTD methods. They assume that there is a true topic tree behind data. A prior distribution is placed over all possible trees using nCRP and nHDP respectively. An assumption is made as to how documents are generated from the true topic tree, which, together with data, gives a likelihood function over all possible trees. In nCRP, the topics in a document are assumed to be from one path down the tree, while in nHDP, the topics in a document can be from multiple paths, i.e., a subtree within the entire topic tree. 
The true topic tree is estimated by combining the prior and the likelihood in posterior inference. During inference, one in theory deals with a tree with infinitely many levels and each node having infinitely many children. In practice, the tree is truncated so that it has a predetermined number of levels. In nHDP, each node also has a predetermined number of children, and nCRP uses hyperparameters to control the number. As such, the two methods in effect require the user to provide the structure of an hierarchy as input. | {
"cite_N": [
"@cite_28",
"@cite_7"
],
"mid": [
"2005564522",
"2150286230"
],
"abstract": [
"We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP generalizes the nested Chinese restaurant process (nCRP) to allow each word to follow its own path to a topic node according to a per-document distribution over the paths on a shared tree. This alleviates the rigid, single-path formulation assumed by the nCRP, allowing documents to easily express complex thematic borrowings. We derive a stochastic variational inference algorithm for the model, which enables efficient inference for massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 2.7 million documents from Wikipedia .",
"We present the nested Chinese restaurant process (nCRP), a stochastic process that assigns probability distributions to ensembles of infinitely deep, infinitely branching trees. We show how this stochastic process can be used as a prior distribution in a Bayesian nonparametric model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. This model exemplifies a recent trend in statistical machine learning—the use of Bayesian nonparametric methods to infer distributions on flexible data structures."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | The reader is referred to @cite_17 for a survey of research activities on latent tree models. The activities take place in three settings. In the first setting, data are assumed to be generated from an unknown LTM, Here data generated from a model are vectors of values for observed variables, not documents. and the task is to recover the generative model . Here one tries to discover relationships between the latent structure and observed marginals that hold in LTMs, and then use those relationships to reconstruct the true latent structure from data. And one can prove theoretical results on consistency and sample complexity. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2113165053"
],
"abstract": [
"In data analysis, latent variables play a central role because they help provide powerful insights into a wide variety of phenomena, ranging from biological to human sciences. The latent tree model, a particular type of probabilistic graphical models, deserves attention. Its simple structure - a tree - allows simple and efficient inference, while its latent variables capture complex relationships. In the past decade, the latent tree model has been subject to significant theoretical and methodological developments. In this review, we propose a comprehensive study of this model. First we summarize key ideas underlying the model. Second we explain how it can be efficiently learned from data. Third we illustrate its use within three types of applications: latent structure discovery, multidimensional clustering, and probabilistic inference. Finally, we conclude and give promising directions for future researches in this field."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | In the second setting, no assumption is made about how data are generated and the task is to fit an LTM to data @cite_37 . Here it does not make sense to talk about theoretical guarantees on consistency and sample complexity. Instead, algorithms are evaluated empirically using held-out likelihood. It has been shown that, on real-world datasets, better models can be obtained using methods developed in this setting than using those developed in the first setting @cite_29 . 
The reason is that, although the assumption of the first setting is reasonable for data from domains such as phylogeny, it is not reasonable for other types of data such as text data and survey data. | {
"cite_N": [
"@cite_37",
"@cite_29"
],
"mid": [
"2002575106",
"2112733287"
],
"abstract": [
"Existing models for cluster analysis typically consist of a number of attributes that describe the objects to be partitioned and one single latent variable that represents the clusters to be identified. When one analyzes data using such a model, one is looking for one way to cluster data that is jointly defined by all the attributes. In other words, one performs unidimensional clustering. This is not always appropriate. For complex data with many attributes, it is more reasonable to consider multidimensional clustering, i.e., to partition data along multiple dimensions. In this paper, we present a method for performing multidimensional clustering on categorical data and show its superiority over unidimensional clustering.",
"Real-world data are often multifaceted and can be meaningfully clustered in more than one way. There is a growing interest in obtaining multiple partitions of data. In previous work we learnt from data a latent tree model (LTM) that contains multiple latent variables ( 2012). Each latent variable represents a soft partition of data and hence multiple partitions result in. The LTM approach can, through model selection, automatically determine how many partitions there should be, what attributes define each partition, and how many clusters there should be for each partition. It has been shown to yield rich and meaningful clustering results. Our previous algorithm EAST for learning LTMs is only efficient enough to handle data sets with dozens of attributes. This paper proposes an algorithm called BI that can deal with data sets with hundreds of attributes. We empirically compare BI with EAST and other more efficient LTM learning algorithms, and show that BI outperforms its competitors on data sets with hundreds of attributes. In terms of clustering results, BI compares favorably with alternative methods that are not based on LTMs."
]
} |
1605.06650 | 2395651392 | We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies. | Another method to learn a hierarchy of latent variables from data is proposed by Ver Steeg and Galstyan @cite_11 . The method is named correlation explanation (CorEx) . Unlike HLTA, CorEx is proposed as a model-free method and it hence does not intend to provide a representation for the joint distribution of the observed variables. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2953096896"
],
"abstract": [
"We introduce a method to learn a hierarchy of successively more abstract representations of complex data based on optimizing an information-theoretic objective. Intuitively, the optimization searches for a set of latent factors that best explain the correlations in the data as measured by multivariate mutual information. The method is unsupervised, requires no model assumptions, and scales linearly with the number of variables which makes it an attractive approach for very high dimensional systems. We demonstrate that Correlation Explanation (CorEx) automatically discovers meaningful structure for data from diverse sources including personality tests, DNA, and human language."
]
} |
1605.06776 | 2398180441 | This work considers reconstructing a target signal in a context of distributed sparse sources. We propose an efficient reconstruction algorithm with the aid of other given sources as multiple side information (SI). The proposed algorithm takes advantage of compressive sensing (CS) with SI and adaptive weights by solving a proposed weighted @math - @math minimization. The proposed algorithm computes the adaptive weights in two levels, first each individual intra-SI and then inter-SI weights are iteratively updated at every reconstructed iteration. This two-level optimization leads the proposed reconstruction algorithm with multiple SI using adaptive weights (RAMSIA) to robustly exploit the multiple SIs with different qualities. We experimentally perform our algorithm on generated sparse signals and also correlated feature histograms as multiview sparse sources from a multiview image database. The results show that RAMSIA significantly outperforms both classical CS and CS with single SI, and RAMSIA with higher number of SIs gained more than the one with smaller number of SIs. | In this section, we review the fundamental problem of recovering signals from low-dimensional measurements @cite_4 @cite_1 @cite_5 and CS with SI @cite_6 @cite_14 @cite_15 @cite_8 @cite_3 @cite_12 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_12"
],
"mid": [
"",
"2050834445",
"",
"2129638195",
"2046345921",
"2963839871",
"2100556411",
"2106460483",
"1575252957"
],
"abstract": [
"",
"We consider linear equations y = Φx where y is a given vector in ℝn and Φ is a given n × m matrix with n 0 so that for large n and for all Φ's except a negligible fraction, the following property holds: For every y having a representation y = Φx0by a coefficient vector x0 ∈ ℝmwith fewer than ρ · n nonzeros, the solution x1of the 1-minimization problem is unique and equal to x0. In contrast, heuristic attempts to sparsely solve such systems—greedy algorithms and thresholding—perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices. © 2006 Wiley Periodicals, Inc.",
"",
"Suppose we are given a vector f in a class FsubeRopfN , e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision epsi in the Euclidean (lscr2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n)lesRmiddotn-1p , where R>0 and p>0. Suppose that we take measurements yk=langf# ,Xkrang,k=1,...,K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction ft, defined as the solution to the constraints yk=langf# ,Xkrang with minimal lscr1 norm, obeys parf-f#parlscr2lesCp middotRmiddot(K logN)-r, r=1 p-1 2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed",
"We address the problem of Compressed Sensing (CS) with side information. Namely, when reconstructing a target CS signal, we assume access to a similar signal. This additional knowledge, the side information, is integrated into CS via L1-L1 and L1-L2 minimization. We then provide lower bounds on the number of measurements that these problems require for successful reconstruction of the target signal. If the side information has good quality, the number of measurements is significantly reduced via L1-L1 minimization, but not so much via L1-L2 minimization. We provide geometrical interpretations and experimental results illustrating our findings.",
"Purpose: Repeated brain MRI scans are performed in many clinical scenarios, such as follow up of patients with tumors and therapy response assessment. In this paper, the authors show an approach to utilize former scans of the patient for the acceleration of repeated MRI scans. Methods: The proposed approach utilizes the possible similarity of the repeated scans in longitudinal MRI studies. Since similarity is not guaranteed, sampling and reconstruction are adjusted during acquisition to match the actual similarity between the scans. The baseline MR scan is utilized both in the sampling stage, via adaptive sampling, and in the reconstruction stage, with weighted reconstruction. In adaptive sampling, k-space sampling locations are optimized during acquisition. Weighted reconstruction uses the locations of the nonzero coefficients in the sparse domains as a prior in the recovery process. The approach was tested on 2D and 3D MRI scans of patients with brain tumors. Results: The longitudinal adaptive compressed sensing MRI (LACS-MRI) scheme provides reconstruction quality which outperforms other CS-based approaches for rapid MRI. Examples are shown on patients with brain tumors and demonstrate improved spatial resolution. Compared with data sampled at the Nyquist rate, LACS-MRI exhibits signal-to-error ratio (SER) of 24.8 dB with undersampling factor of 16.6 in 3D MRI. Conclusions: The authors presented an adaptive method for image reconstruction utilizing similarity of scans in longitudinal MRI studies, where possible. The proposed approach can significantly reduce scanning time in many applications that consist of disease follow-up and monitoring of longitudinal changes in brain MRI.",
"We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"This paper considers the problem of sparse signal recovery when the decoder has prior information on the sparsity pattern of the data. The data vector x=[x1,...,xN]T has a randomly generated sparsity pattern, where the i-th entry is non-zero with probability pi. Given knowledge of these probabilities, the decoder attempts to recover x based on M random noisy projections. Information-theoretic limits on the number of measurements needed to recover the support set of x perfectly are given, and it is shown that significantly fewer measurements can be used if the prior distribution is sufficiently non-uniform. Furthermore, extensions of Basis Pursuit, LASSO, and Orthogonal Matching Pursuit which exploit the prior information are presented. The improved performance of these methods over their standard counterparts is demonstrated using simulations.",
"We provide two novel adaptive-rate compressive sensing (CS) strategies for sparse, time-varying signals using side information. The first method uses extra cross-validation measurements, and the second one exploits extra low-resolution measurements. Unlike the majority of current CS techniques, we do not assume that we know an upper bound on the number of significant coefficients that comprises the images in the video sequence. Instead, we use the side information to predict the number of significant coefficients in the signal at the next time instant. We develop our techniques in the specific context of background subtraction using a spatially multiplexing CS camera such as the single-pixel camera. For each image in the video sequence, the proposed techniques specify a fixed number of CS measurements to acquire and adjust this quantity from image to image. We experimentally validate the proposed methods on real surveillance video sequences."
]
} |
1605.06776 | 2398180441 | This work considers reconstructing a target signal in a context of distributed sparse sources. We propose an efficient reconstruction algorithm with the aid of other given sources as multiple side information (SI). The proposed algorithm takes advantage of compressive sensing (CS) with SI and adaptive weights by solving a proposed weighted @math - @math minimization. The proposed algorithm computes the adaptive weights in two levels, first each individual intra-SI and then inter-SI weights are iteratively updated at every reconstructed iteration. This two-level optimization leads the proposed reconstruction algorithm with multiple SI using adaptive weights (RAMSIA) to robustly exploit the multiple SIs with different qualities. We experimentally perform our algorithm on generated sparse signals and also correlated feature histograms as multiview sparse sources from a multiview image database. The results show that RAMSIA significantly outperforms both classical CS and CS with single SI, and RAMSIA with higher number of SIs gained more than the one with smaller number of SIs. | Low-dimensional signal recovery arises in a wide range of applications in signal processing. Most signals in such applications have sparse representations in some domain. Let @math denote a sparse, and hence compressible, source. The source @math can be reduced in dimension by sampling via a projection @cite_1 . We denote a random measurement matrix for @math by @math , whose elements are sampled from an i.i.d. Gaussian distribution. Thus, we get a compressed vector @math , also called the measurement, consisting of @math elements. The source @math can be recovered @cite_1 @cite_4 by solving: where @math is the @math norm of @math and @math is an element of @math . | {
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2129638195",
"2050834445"
],
"abstract": [
"Suppose we are given a vector f in a class FsubeRopfN , e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision epsi in the Euclidean (lscr2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n)lesRmiddotn-1p , where R>0 and p>0. Suppose that we take measurements yk=langf# ,Xkrang,k=1,...,K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction ft, defined as the solution to the constraints yk=langf# ,Xkrang with minimal lscr1 norm, obeys parf-f#parlscr2lesCp middotRmiddot(K logN)-r, r=1 p-1 2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed",
"We consider linear equations y = Φx where y is a given vector in ℝn and Φ is a given n × m matrix with n 0 so that for large n and for all Φ's except a negligible fraction, the following property holds: For every y having a representation y = Φx0by a coefficient vector x0 ∈ ℝmwith fewer than ρ · n nonzeros, the solution x1of the 1-minimization problem is unique and equal to x0. In contrast, heuristic attempts to sparsely solve such systems—greedy algorithms and thresholding—perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices. © 2006 Wiley Periodicals, Inc."
]
} |
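The recovery step described in the row above — minimizing the ℓ1 norm of x subject to y = Φx with an i.i.d. Gaussian Φ — can be posed as a linear program by splitting x into nonnegative parts. A minimal sketch (the problem sizes, the function name `basis_pursuit`, and SciPy's HiGHS solver are illustrative assumptions, not taken from the cited works):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    # min ||x||_1  s.t.  Phi @ x = y, recast as an LP with x = u - v, u, v >= 0:
    # minimize sum(u) + sum(v) subject to [Phi, -Phi] @ [u; v] = y.
    m, n = Phi.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([Phi, -Phi])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                              # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian measurement matrix
y = Phi @ x_true                                 # compressed measurement vector
x_hat = basis_pursuit(Phi, y)
```

With m well above the sparsity level, the ℓ1 solution typically coincides exactly with the sparse source, matching the recovery guarantees discussed above.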
1605.06776 | 2398180441 | This work considers reconstructing a target signal in a context of distributed sparse sources. We propose an efficient reconstruction algorithm with the aid of other given sources as multiple side information (SI). The proposed algorithm takes advantage of compressive sensing (CS) with SI and adaptive weights by solving a proposed weighted @math - @math minimization. The proposed algorithm computes the adaptive weights in two levels, first each individual intra-SI and then inter-SI weights are iteratively updated at every reconstructed iteration. This two-level optimization leads the proposed reconstruction algorithm with multiple SI using adaptive weights (RAMSIA) to robustly exploit the multiple SIs with different qualities. We experimentally perform our algorithm on generated sparse signals and also correlated feature histograms as multiview sparse sources from a multiview image database. The results show that RAMSIA significantly outperforms both classical CS and CS with single SI, and RAMSIA with higher number of SIs gained more than the one with smaller number of SIs. | The problem becomes finding a solution to the composite objective, where @math is a smooth convex function whose gradient @math has Lipschitz constant @math @cite_5 and @math is a continuous convex function, possibly non-smooth. The problem above is obviously a special case of this formulation with @math , where @math is a regularization parameter, and @math . Proximal gradient methods @cite_5 give that @math at iteration @math can be iteratively computed by: where @math and @math is the proximal operator, defined by | {
"cite_N": [
"@cite_5"
],
"mid": [
"2100556411"
],
"abstract": [
"We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude."
]
} |
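For the ℓ1-regularized least-squares case described in the row above, the proximal operator has a closed form (soft thresholding), and the proximal-gradient iteration reduces to ISTA from the cited FISTA paper. A minimal sketch under that assumption (step size 1/L with L the squared spectral norm of Φ; the names and the small test problem are our own):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrink every entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(Phi, y, lam, n_iter=500):
    # Proximal gradient for  min_x  0.5*||y - Phi x||^2 + lam*||x||_1.
    # Gradient of the smooth term is Phi^T (Phi x - y); its Lipschitz
    # constant is the largest eigenvalue of Phi^T Phi.
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - Phi.T @ (Phi @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]           # 3-sparse source
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.01)
```

FISTA, as cited, accelerates the same iteration by evaluating the gradient at an extrapolated point; the per-iteration cost is unchanged.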
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | A common approach used for efficient path planning in dynamic environments involves modeling moving obstacles as static objects with a small window of high cost around the beginning of their projected trajectories @cite_7 @cite_17 . | {
"cite_N": [
"@cite_7",
"@cite_17"
],
"mid": [
"2113029345",
"2156037610"
],
"abstract": [
"In this paper, we present an algorithm for generating complex dynamically feasible maneuvers for autonomous vehicles traveling at high speeds over large distances. Our approach is based on performing anytime incremental search on a multi-resolution, dynamically feasible lattice state space. The resulting planner provides real-time performance and guarantees on and control of the suboptimality of its solution. We provide theoretical properties and experimental results from an implementation on an autonomous passenger vehicle that competed in, and won, the Urban Challenge competition.",
"In this paper we describe a novel path planning approach for mobile robots operating in indoor environments. In such scenarios, robots must be able to maneuver in crowded spaces, partially filled with static and dynamic obstacles (such as people). Our approach produces smooth, complex maneuvers over large distances through the use of an anytime graph search algorithm applied to a novel multi-resolution state lattice, where the resolution is adapted based on both environmental characteristics and task characteristics. In addition, we present a novel approach for generating fast globally optimal trajectories in constrained spaces (i.e. rooms connected via doors and hallways). This approach exploits offline precomputation to provide extremely efficient online performance and is applicable to a wide range of both indoor and outdoor navigation scenarios. By combining an anytime, multi-resolution lattice-based search algorithm with our precomputation technique, globally optimal trajectories in up to four dimensions (2D position, heading and velocity) are obtained in real-time."
]
} |
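The "small window of high cost" idea surveyed in the row above can be illustrated on a 2D grid cost map: add a penalty to every cell within a radius of the first few poses of an obstacle's projected trajectory, then run an ordinary static planner on the inflated map. The function below is a hypothetical sketch of that scheme, not the cited planners' actual implementation:

```python
import numpy as np

def inflate_dynamic_obstacle(cost, trajectory, horizon, radius, penalty):
    # Treat a moving obstacle as static: penalize cells within `radius`
    # of the first `horizon` poses of its projected (row, col) trajectory.
    H, W = cost.shape
    for (r, c) in trajectory[:horizon]:
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W and dr * dr + dc * dc <= radius * radius:
                    cost[rr, cc] += penalty
    return cost

grid = np.zeros((10, 10))
predicted = [(5, 5), (5, 6), (5, 7)]   # obstacle's projected trajectory
grid = inflate_dynamic_obstacle(grid, predicted, horizon=2, radius=1, penalty=10.0)
```

A 2D planner run on the inflated map steers clear of the obstacle's near-term positions without searching over time, which is precisely why such plans can become sub-optimal or incomplete once the obstacle moves on and re-planning is triggered.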
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | To plan and re-plan online, several approaches have been suggested that sacrifice near-optimality guarantees for efficiency @cite_6 , including sampling-based planners such as RRT-variants that can quickly obtain kinodynamically feasible paths in a high dimensional space @cite_14 @cite_0 . However, these sampling-based approaches do not provide any global optimality guarantees that we require in most cases. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_6"
],
"mid": [
"2000790002",
"2167869989",
"2111112078"
],
"abstract": [
"This paper addresses the problem of motion planning (MP) in dynamic environments. It is first argued that dynamic environments impose a real-time constraint upon MP: it has a limited time only to compute a motion, the time available being a function of the dynamicity of the environment. Now, given the intrinsic complexity of MP, computing a complete motion to the goal within the time available is impossible to achieve in most real situations. Partial motion planning (PMP) is the answer to this problem proposed in this paper. PMP is a motion planning scheme with an anytime flavor: when the time available is over, PMP returns the best partial motion to the goal computed so far. Like reactive navigation scheme, PMP faces a safety issue: what guarantee is there that the system will never end up in a critical situation yielding an inevitable collision? The answer proposed in this paper to this safety issue relies upon the concept of inevitable collision states (ICS). ICS takes into account the dynamics of both the system and the moving obstacles. By computing ICS-free partial motion, the system safety can be guaranteed. Application of PMP to the case of a car-like system in a dynamic environment is presented.",
"We consider motion planning problems for a vehicle with kinodynamic constraints, where there is partial knowledge about the environment and replanning is required. We present a new tree-based planner that explicitly deals with kinodynamic constraints and addresses the safety issues when planning under finite computation times, meaning that the vehicle avoids collisions in its evolving configuration space. In order to achieve good performance we incrementally update a tree data-structure by retaining information from previous steps and we bias the search of the planner with a greedy, yet probabilistically complete state space exploration strategy. Moreover, the number of collision checks required to guarantee safety is kept to a minimum. We compare our technique with alternative approaches as a standalone planner and show that it achieves favorable performance when planning with dynamics. We have applied the planner to solve a challenging replanning problem involving the mapping of an unknown workspace with a nonholonomic platform",
"We present an efficient, anytime method for path planning in dynamic environments. Current approaches to planning in such domains either assume that the environment is static and replan when changes are observed, or assume that the dynamics of the environment are perfectly known a priori. Our approach takes into account all prior information about both the static and dynamic elements of the environment, and efficiently updates the solution when changes to either are observed. As a result, it is well suited to robotic path planning in known or unknown environments in which there are mobile objects, agents or adversaries"
]
} |
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | Other approaches @cite_12 @cite_11 delegate the dynamic obstacle avoidance problem to a local reactive planner which can effectively avoid collision with dynamic obstacles. These methods have the disadvantage that they can get stuck in local minima and are generally not globally optimal. | {
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2117211893",
"2131505299"
],
"abstract": [
"This approach, designed for mobile robots equipped with synchro-drives, is derived directly from the motion dynamics of the robot. In experiments, the dynamic window approach safely controlled the mobile robot RHINO at speeds of up to 95 cm sec, in populated and dynamic environments.",
"Many applications in mobile robotics require the safe execution of a collision-free motion to a goal position. Planning approaches are well suited for achieving a goal position in known static environments, while real-time obstacle avoidance methods allow reactive motion behavior in dynamic and unknown environments. This paper proposes the global dynamic window approach as a generalization of the dynamic window approach. It combines methods from motion planning and real-time obstacle avoidance to result in a framework that allows robust execution of high-velocity, goal-directed reactive motion for a mobile robot in unknown and dynamic environments. The global dynamic window approach is applicable to nonholonomic and holonomic mobile robots."
]
} |
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | @cite_4 , highly accurate heuristic values are computed by solving a low-dimensional problem and are then used to direct high-dimensional planning. However, this approach does not explicitly decrease the dimensionality of the state-space and can lead to long planning times when the heuristic is incorrect. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2066852548"
],
"abstract": [
"This paper presents a solution to the problem of finding an effective yet admissible heuristic function for A* by precomputing a look-up table of solutions. This is necessary because traditional heuristic functions such as Euclidean distance often produce poor performance for certain problems. In this case, the technique is applied to the state lattice, which is used for full state space motion planning. However, the approach is applicable to many applications of heuristic search algorithms. The look-up table is demonstrated to be feasible to generate and store. A principled technique is presented for selecting which queries belong in the table. Finally, the results are validated through testing on a variety of path planning problems. Index Terms – motion planning, state lattice, heuristic, nonholonomic, mobile robot"
]
} |
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | By contrast, the Adaptive Dimensionality (AD) approach, @cite_5 , explicitly decreases the dimensionality of the state-space in regions where full-dimensional planning is not needed. This approach introduces a strategy for adapting the dimensionality of the search space to guarantee a solution that is still feasible with respect to a high dimensional motion model while making fast progress in regions that exhibit only low-dimensional structure. 
In @cite_3 , path planning with adaptive dimensionality has been shown to be efficient for high-dimensional planning such as mobile manipulation. The AD approach has been extended in @cite_18 to achieve faster planning times by introducing an incremental planning algorithm. @cite_10 extends this method in the context of mobile robots by using an adaptive-dimensionality state-space to combine the global and local path planning problems for navigation. Our approach builds on the AD approach and applies it to path planning in dynamic environments. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_10",
"@cite_3"
],
"mid": [
"",
"2403583150",
"2005784077",
"1986375212"
],
"abstract": [
"",
"Path planning is often a high-dimensional computationally-expensive planning problem as it requires reasoning about the kinodynamic constraints of the robot and collisions of the robot with the environment. However, large regions of the environment are typically benign enough that a much faster low-dimensional planning combined with a local path following controller suffice. Planning with Adaptive Dimensionality that was recently developed makes use of this observation and iteratively constructs and searches a state-space consisting of mainly low-dimensional states. It only introduces regions of high-dimensional states into the state-space where they are necessary to ensure completeness and bounds on sub-optimality. However, due to its iterative nature, the approach relies on running a series of weighted A* searches. In this paper, we introduce and apply to Planning with Adaptive Dimensionality a simple but very effective incremental version of weighted A* that reuses its previously generated search tree if available. On the theoretical side, the new algorithm preserves guarantees on completeness and bounds on sub-optimality. On the experimental side, it speeds up 3D (x,y,heading) path planning with a full-body collision checking by up to a factor of 5. Our results also show that it tends to be much faster than applying alternative incremental graph search techniques such as D* to Planning with Adaptive Dimensionality.",
"Planning with kinodynamic constraints is often required for mobile robots operating in cluttered, complex environments. A common approach is to use a two-dimensional (2-D) global planner for long range planning, and a short range higher dimensional planner or controller capable of satisfying all of the constraints on motion. However, this approach is incomplete and can result in oscillations and the inability to find a path to the goal. In this paper we present an approach to solving this problem by combining the global and local path planning problem into a single search using a combined 2-D and higher dimensional state-space.",
"Mobile manipulation planning is a hard problem composed of multiple challenging sub-problems, some of which require searching through large high-dimensional state-spaces. The focus of this work is on computing a trajectory to safely maneuver an object through an environment, given the start and goal configurations. In this work we present a heuristic search-based deterministic mobile manipulation planner, based on our recently-developed algorithm for planning with adaptive dimensionality. Our planner demonstrates reasonable performance, while also providing strong guarantees on completeness and suboptimality bounds with respect to the graph representing the problem."
]
} |
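The iterative structure behind planning with adaptive dimensionality can be sketched as follows. This is a toy illustration, not the cited planners: `toy_plan` and `toy_check` are hypothetical stand-ins for the (mostly) low-dimensional graph search and the full-dimensional feasibility/tracking check.

```python
# Minimal sketch of the adaptive-dimensionality loop: search a mostly
# low-dimensional state-space, verify the result against the full motion
# model, and grow the set of high-dimensional regions where that fails.
# `plan` and `check` are hypothetical callbacks, not the cited planners.

def adaptive_dim_plan(plan, check, max_iters=20):
    hd_regions = set()
    for _ in range(max_iters):
        path = plan(hd_regions)          # mostly low-D search
        bad = check(path, hd_regions)    # full-D feasibility check
        if bad is None:
            return path, hd_regions      # feasible within the subopt. bound
        hd_regions.add(bad)              # introduce a high-D region, replan
    return None, hd_regions

# Toy instance on cells 0..9: cell 4 is only handled correctly when planned
# in the high-dimensional space, which here forces a detour around it.
def toy_plan(hd_regions):
    return [c for c in range(10) if not (c == 4 and 4 in hd_regions)]

def toy_check(path, hd_regions):
    return 4 if (4 in path and 4 not in hd_regions) else None

path, regions = adaptive_dim_plan(toy_plan, toy_check)
```

The loop terminates as soon as the low-dimensional plan passes the full-dimensional check, which mirrors how the AD approach keeps high-dimensional expansions confined to the regions that actually need them.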
1605.06853 | 2394914289 | Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | Some approaches plan in the full-dimensional space-time search space only until the end of an obstacle's trajectory and then finish the plan in a low-dimensional state-space. Time-bounded lattice planning, @cite_2 , neglects dynamic obstacles and the time dimension in the search space after a certain point in time. 
Several works, @cite_16 @cite_13 , have extended this algorithm to account for kinematic and dynamic feasibility in the resulting paths by using a hybrid-dimensionality state-space. These approaches sacrifice optimality for faster planning times and do not provide theoretical guarantees on the sub-optimality of the solution. In contrast, our algorithm does not prune the dynamic obstacle trajectories; instead, it takes the entire obstacle trajectories into account and returns a bounded sub-optimal collision-free path. Considering the entire trajectory of the obstacles ensures a globally optimal solution. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_2"
],
"mid": [
"2020532707",
"204702450",
""
],
"abstract": [
"Abstract Safe and efficient path planning for mobile robots in large dynamic environments is still a challenging research topic. In order to plan collision-free trajectories, the time component of the path must be explicitly considered during the search. Furthermore, a precise planning near obstacles and in the vicinity of the robot is important. This results in a high computational burden of the trajectory planning algorithms. However, in large open areas and in the far future of the path, the planning can be performed more coarsely. In this paper, we present a novel algorithm that uses a hybrid-dimensional multi-resolution state × time lattice to efficiently compute trajectories with an adaptive fidelity according to the environmental requirements. We show how to construct this lattice in a consistent way and define the transitions between regions of different granularity. Finally, we provide some experimental results, which prove the real-time capability of our approach and show its advantages over single-dimensional single-resolution approaches.",
"Safe navigation for mobile robots in unstructured and dynamic environments is still a challenging research topic. Most approaches use separate algorithms for global path planning and local obstacle avoidance. However, this generally results in globally sub-optimal navigation strategies. In this paper, we present an algorithm which combines these two navigation tasks in a single integrated approach. For this purpose, we introduce a novel search space, namely, a × lattice with hybrid dimensionality. We describe a procedure for generating high-quality motion primitives for a mobile robot with four-wheel steering to define the motion in this lattice. Our algorithm computes a hybrid solution for the path planning problem consisting of a trajectory (i.e., a path with time component) in the imminent future, a dynamically feasible path in the near future, and a kinematically feasible path for the remaining time to the goal. Finally, we provide some results of our algorithm in action to prove its high solution quality and real-time capability.",
""
]
} |
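The time-bounded idea above can be illustrated with a small sketch: dynamic obstacles are collision-checked in full space-time only up to a horizon, after which planning falls back to a time-free check. The `{time_step: cell}` trajectory format and all names here are illustrative assumptions, not the cited systems' APIs.

```python
# Illustrative collision check for time-bounded lattice planning: the dynamic
# obstacle's trajectory (a hypothetical {time_step: cell} map) is consulted
# only up to T_HORIZON; beyond it, only the static map is checked.

T_HORIZON = 5

def in_collision(cell, t, obstacle_traj, static_cells):
    if cell in static_cells:            # static obstacles apply at all times
        return True
    if t <= T_HORIZON:                  # full space-time check near-term
        return obstacle_traj.get(t) == cell
    return False                        # obstacle pruned beyond the horizon

obstacle = {t: t for t in range(10)}    # moves one cell per time step
near = in_collision(3, 3, obstacle, set())   # inside horizon -> collision
far = in_collision(8, 8, obstacle, set())    # beyond horizon -> ignored
```

Note that the collision at time step 8 is silently missed: this is exactly the loss of guarantees the paragraph above points out, and the reason the paper's approach keeps the entire obstacle trajectories instead.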
1605.06561 | 2469560512 | Newton's method is a fundamental technique in optimization with quadratic convergence within a neighborhood around the optimum. However reaching this neighborhood is often slow and dominates the computational costs. We exploit two properties specific to empirical risk minimization problems to accelerate Newton's method, namely, subsampling training data and increasing strong convexity through regularization. We propose a novel continuation method, where we define a family of objectives over increasing sample sizes and with decreasing regularization strength. Solutions on this path are tracked such that the minimizer of the previous objective is guaranteed to be within the quadratic convergence region of the next objective to be optimized. Thereby every Newton iteration is guaranteed to achieve super-linear contractions with regard to the chosen objective, which becomes a moving target. We provide a theoretical analysis that motivates our algorithm, called DynaNewton, and characterizes its speed of convergence. Experiments on a wide range of data sets and problems consistently confirm the predicted computational savings. | The recent work of @cite_17 simultaneously exploits the properties of the empirical risk as well as the concentration bounds from learning theory to achieve fast convergence to the expected risk. This approach uses a dynamic sample size schedule that matches up optimization error and statistical accuracy. Although this approach was tailored specifically for variance-reduced stochastic gradient methods, we here show how a similar adaptive sample size strategy can be used in the context of non-stochastic approaches such as Newton's method. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2950991820"
],
"abstract": [
"For many machine learning problems, data is abundant and it may be prohibitive to make multiple passes through the full training set. In this context, we investigate strategies for dynamically increasing the effective sample size, when using iterative methods such as stochastic gradient descent. Our interest is motivated by the rise of variance-reduced methods, which achieve linear convergence rates that scale favorably for smaller sample sizes. Exploiting this feature, we show -- theoretically and empirically -- how to obtain significant speed-ups with a novel algorithm that reaches statistical accuracy on an @math -sample in @math , instead of @math steps."
]
} |
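The dynamic sample-size idea can be sketched in a few lines: solve the empirical risk on a subsample only up to (roughly) its statistical accuracy, then double the subsample and warm-start from the previous solution. The 1-D least-squares objective and the exact-solve helper are toy assumptions for illustration, not the cited method.

```python
# Sketch of a dynamic sample-size schedule: minimize the empirical risk on a
# subsample, then double the subsample size, warm-starting from the previous
# solution. With geometric growth, the total cost is ~2N instead of many
# full passes over all N points.
import random

def erm_minimizer(sample):
    # exact minimizer of mean((w - x_i)^2) over the subsample
    return sum(sample) / len(sample)

def dynamic_sample_schedule(data, n0=16):
    n, w, stages = n0, 0.0, []
    while n <= len(data):
        w = erm_minimizer(data[:n])   # a real solver would warm-start at w
        stages.append(n)
        n *= 2                        # geometric growth of the sample size
    return w, stages

random.seed(0)
data = [1.0 + random.gauss(0.0, 0.1) for _ in range(1024)]
w, stages = dynamic_sample_schedule(data)
```

Each stage's solution is already within the statistical accuracy of the next, larger objective, which is the same "moving target" property DynaNewton exploits for Newton steps.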
1605.06561 | 2469560512 | Newton's method is a fundamental technique in optimization with quadratic convergence within a neighborhood around the optimum. However reaching this neighborhood is often slow and dominates the computational costs. We exploit two properties specific to empirical risk minimization problems to accelerate Newton's method, namely, subsampling training data and increasing strong convexity through regularization. We propose a novel continuation method, where we define a family of objectives over increasing sample sizes and with decreasing regularization strength. Solutions on this path are tracked such that the minimizer of the previous objective is guaranteed to be within the quadratic convergence region of the next objective to be optimized. Thereby every Newton iteration is guaranteed to achieve super-linear contractions with regard to the chosen objective, which becomes a moving target. We provide a theoretical analysis that motivates our algorithm, called DynaNewton, and characterizes its speed of convergence. Experiments on a wide range of data sets and problems consistently confirm the predicted computational savings. | Free energy based continuation methods, often for non-convex or integer problems, have been popularized in computer vision and machine learning under the name of @cite_21 , a deterministic variant of simulated annealing @cite_18 . Here the family of objectives is parametrized by the computational analogue of temperature. Similar techniques known as graduated optimization have also been proposed in computer vision @cite_5 and in machine learning, a recent example being @cite_8 . | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_21",
"@cite_8"
],
"mid": [
"",
"2183434659",
"2161877964",
"1901429906"
],
"abstract": [
"",
"Simulated annealing is a stochastic optimization procedure which is widely applicable and has been found effective in several problems arising in computeraided circuit design. This paper derives the method in the context of traditional optimization heuristics and presents experimental .studies of its computational efficiency when applied to graph partitioning and traveling salesman problems. Dan Gelatt and I, with help from several of our colleagues, have explored a general framework for optimization which uses computer simulation methods from condensed matter physics and an equivalence (which can be made rigorous) between the many undetermined parameters of the system being optimized and the particles in an imaginary physical system. The energy of the physical system is given by the objective function of the optimization problem. States of low energy in the imaginary physical system are thus the near-global optimum configurations sought in the optimization problem. The trick we have used to find these is to model statistically the evolution of the physical system at a series of temperatures which allow it to \"anneal\" into a state of high order and very low energy. Arguments for the validity of this approach, and some ideas which help in understanding how to use it effectively, are given in a paper which",
"The deterministic annealing approach to clustering and its extensions has demonstrated substantial performance improvement over standard supervised and unsupervised learning methods in a variety of important applications including compression, estimation, pattern recognition and classification, and statistical regression. The application-specific cost is minimized subject to a constraint on the randomness of the solution, which is gradually lowered. We emphasize the intuition gained from analogy to statistical physics. Alternatively the method is derived within rate-distortion theory, where the annealing process is equivalent to computation of Shannon's rate-distortion function, and the annealing temperature is inversely proportional to the slope of the curve. The basic algorithm is extended by incorporating structural constraints to allow optimization of numerous popular structures including vector quantizers, decision trees, multilayer perceptrons, radial basis functions, and mixtures of experts.",
"The graduated optimization approach, also known as the continuation method, is a popular heuristic to solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known in terms of theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimiza- tion and analyze its performance. We characterize a parameterized family of non- convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an -approximate solution within O(1 ^2) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of zero-order optimization, and devise a a variant of our algorithm which converges at rate of O(d^2 ^4)."
]
} |
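A minimal 1-D sketch of the continuation idea: each stage locally minimizes a smoothed surrogate of a non-convex function, warm-started from the previous stage, while the smoothing (the "temperature") is annealed toward zero. The objective, the uniform-window smoothing, and the schedule are all illustrative choices, not any cited algorithm.

```python
# 1-D sketch of graduated optimization / continuation. f has a local minimum
# at x = -1 and the global minimum at x = +1; plain descent from x = -0.9
# gets stuck, while annealing the smoothing escapes the bad basin.

def f(x):
    return (x * x - 1.0) ** 2 + (0.3 if x < 0 else 0.0)

def smoothed(x, sigma, k=21):
    # crude uniform-window averaging standing in for Gaussian convolution
    pts = [x + sigma * (2.0 * i / (k - 1) - 1.0) for i in range(k)]
    return sum(f(p) for p in pts) / k

def descend(g, x0, step=1e-2, iters=2000):
    x = x0
    for _ in range(iters):
        grad = (g(x + 1e-4) - g(x - 1e-4)) / 2e-4   # numeric gradient
        x -= step * grad
    return x

x_plain = descend(f, -0.9)               # stays stuck near x = -1
x = -0.9
for sigma in (2.0, 1.0, 0.5, 0.1):       # annealing ("temperature") schedule
    x = descend(lambda t: smoothed(t, sigma), x)
```

At large sigma the surrogate is nearly convex and its minimizer already lies on the correct side, so the warm start carries the iterate into the global basin before the surrogate sharpens back into f.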
1605.06856 | 2399339171 | The database community has long recognized the importance of graphical query interface to the usability of data management systems. Yet, relatively less has been done. We present Orion, a visual interface for querying ultra-heterogeneous graphs. It iteratively assists users in query graph construction by making suggestions via machine learning methods. In its active mode, Orion automatically suggests top-k edges to be added to a query graph. In its passive mode, the user adds a new edge manually, and Orion suggests a ranked list of labels for the edge. Orion's edge ranking algorithm, Random Decision Paths (RDP), makes use of a query log to rank candidate edges by how likely they will match the user's query intent. Extensive user studies using Freebase demonstrated that Orion users have a 70 success rate in constructing complex query graphs, a significant improvement over the 58 success rate by the users of a baseline system that resembles existing visual query builders. Furthermore, using active mode only, the RDP algorithm was compared with several methods adapting other machine learning algorithms such as random forests and naive Bayes classifier, as well as class association rules and recommendation systems based on singular value decomposition. On average, RDP required 40 suggestions to correctly reach a target query graph (using only its active mode of suggestion) while other methods required 1.5--4 times as many suggestions. | QUBLE @cite_0 , GRAPHITE @cite_19 and @cite_30 provide visual query interfaces for querying a single large graph. But, they focus on efficient query processing, and only facilitate query graph formulation by giving options to quickly draw various components of the query graph. Instead of recommending query components that a user might be interested in, they alphabetically list all possible options for node labels (which may be extended to edge labels similarly). They also deal with smaller data graphs. 
For instance, the graph considered by QUBLE contains only around 10 thousand nodes with 300 distinct node types, and they do not consider edge types. Orion , on the other hand, considers large graphs such as Freebase, which has over 30 million distinct node types and 5 thousand distinct edge types. With such large graphs, it is impractical to expect users to browse through all options alphabetically to select the most appropriate edge to add to a query graph. Ranking these edges by their relevance to the user's query intent is a necessity, for which Orion is designed. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_30"
],
"mid": [
"2048416336",
"",
"2042845972"
],
"abstract": [
"In a previous paper, we laid out the vision of a novel graph query processing paradigm where instead of processing a visual query graph after its construction, it interleaves visual query formulation and processing by exploiting the latency offered by the GUI [4]. Our recent attempts at implementing this vision [4,6], show significant improvement in the system response time (SRT) for subgraph queries. However, these efforts are designed specifically for graph databases containing a large collection of small or medium-sized graphs. Consequently, its frequent fragment-based action-aware indexing schemes and query processing strategy are unsuitable for supporting subgraph queries on large networks containing thousands of nodes and edges. In this demonstration, we present a novel system called QUBLE (QUery Blender for Large nEtworks) to realize this novel paradigm on large networks. We demonstrate various innovative features of QUBLE and its promising performance.",
"",
"Given the explosive growth of modern graph data, new methods are needed that allow for the querying of complex graph structures without the need of a complicated querying languages; in short, interactive graph querying is desirable. We describe our work towards achieving our overall research goal of designing and developing an interactive querying system for large network data. We focus on three critical aspects: scalable data mining algorithms, graph visualization, and interaction design. We have already completed an approximate subgraph matching system called MAGE in our previous work that fulfills the algorithmic foundation allowing us to query a graph with hundreds of millions of edges. Our preliminary work on visual graph querying, Graphite, was the first step in the process to making an interactive graph querying system. We are in the process of designing the graph visualization and robust interaction needed to make truly interactive graph querying a reality."
]
} |
1605.06770 | 2951039930 | In this paper, a novel approach is proposed to automatically construct parallel discourse corpus for dialogue machine translation. Firstly, the parallel subtitle data and its corresponding monolingual movie script data are crawled and collected from Internet. Then tags such as speaker and discourse boundary from the script data are projected to its subtitle data via an information retrieval approach in order to map monolingual discourse to bilingual texts. We not only evaluate the mapping results, but also integrate speaker information into the translation. Experiments show our proposed method can achieve 81.79 and 98.64 accuracy on speaker and dialogue boundary annotation, and speaker-based language model adaptation can obtain around 0.5 BLEU points improvement in translation qualities. Finally, we publicly release around 100K parallel discourse data with manual speaker and dialogue boundary annotation. | There are two directions of work related to dialogue corpus construction. One is parallel corpora construction for dialogue or conversation MT @cite_21 @cite_10 @cite_12 @cite_17 @cite_6 @cite_3 @cite_24 . Thanks to the effects of crowdsourcing and fan translation in audiovisual translation @cite_22 , we can regard subtitles as parallel corpora. Prior work leveraged the existence of bilingual subtitles as a source of parallel data for the Chinese-English language pair to improve MT systems in the movie domain. However, that work only considers sentence-level data instead of extracting more useful information for dialogues. Besides, Japanese researchers constructed a speech dialogue corpus for a machine interpretation system @cite_18 @cite_0 @cite_5 @cite_7 . They collected speech dialogue corpora for machine interpretation research by recording and transcribing Japanese-English interpreters' consecutive and simultaneous interpreting in the booth. 
The German VERBMOBIL speech-to-speech translation programme @cite_28 also collected and transcribed task-oriented dialogue data. This related work focused on speech-to-speech translation, comprising three modules: automatic speech recognition (ASR), MT, and text-to-speech (TTS). | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"1587677094",
"",
"",
"",
"2222731017",
"118129934",
"1524027641",
"630532510",
"",
"365755509",
"1779819959",
"1533246198",
"1537949628"
],
"abstract": [
"",
"",
"",
"",
"This paper proposes to use DTW to construct parallel corpora from difficult data. Parallel corpora are considered as raw material for machine translation (MT), frequently, MT systems use European or Canadian parliament corpora. In order to achieve a realistic machine translation system, we decided to use movie subtitles. These data could be considered difficult because they contain unfamiliar expressions, abbreviations, hesitations, words which do not exist in classical dictionaries (as vulgar words), etc. The obtained parallel corpora can constitute a rich ressource to train decoding spontaneous speech translation system. From 40 movies, we align 43013 English subtitles with 42306 French subtitles. This leads to 37625 aligned pairs with a precision of 92,3 .",
"This paper presents a method for compiling a large-scale bilingual corpus from a database of movie subtitles. To create the corpus, we propose an algorithm based on Gale and Church’s sentence alignment algorithm(1993). However, our algorithm not only relies on character length information, but also uses subtitle-timing information, which is encoded in the subtitle files. Timing is highly correlated between subtitles in different versions (for the same movie), since subtitles that match should be displayed at the same time. However, the absolute time values can’t be used for alignment, since the timing is usually specified by frame numbers and not by real time, and converting it to real time values is not always possible, hence we use normalized subtitle duration instead. This results in a significant reduction in the alignment error rate.",
"This paper describes a methodology for constructing aligned German-Chinese corpora from movie subtitles. The corpora will be used to train a special machine translation system with intention to automatically translate the subtitles between German and Chinese. Since the common length-based algorithm for alignment shows weakness on short spoken sentences, especially on those from different language families, this paper studies to use dynamic programming based on time-shift information in subtitles, and extends it with statistical lexical cues to align the subtitle. In our experiment with around 4,000 Chinese and German sentences, the proposed alignment approach yields 83.8 precision. Furthermore, it is unrelated to languages, and leads to a general method of parallel corpora building between different language families.",
"This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
"",
"Speech-to-speech translation has been an important research topic with the advance of technologies for speech processing and language processing. This paper describes a bilingual speech dialogue corpus which has been constructed for research on simultaneous machine interpretation at the Center for Integrated Acoustic Information Research (CIAIR), Nagoya University. The corpus has been implemented by collecting simulated cross-lingual conversations between English speech and Japanese speech through simultaneous interpretation, and by transcribing them manually with bilingual sentence alignment. In the year 2002, 216 spoken dialogues have been collected under a real environment, and transcribed into text files consisting of about 300,000 morphemes. In order to utilize the bilingual corpus effectively, every source utterance speech has been segmented into interpreting units according to its word-for-word translation and the word alignment of them. The interpreting unit means a linguistic chunk that could be interpreted separately and simultaneously. This paper has investigated linguistic characters of such the unit, and examined the feasibility of simultaneous machine interpretation.",
"In this paper on-going work of creating an extensive multilingual parallel corpus of movie subtitles is presented. The corpus currently contains roughly 23,000 pairs of aligned subtitles covering about 2,700 movies in 29 languages. Subtitles mainly consist of transcribed speech, sometimes in a very condensed way. Insertions, deletions and paraphrases are very frequent which makes them a challenging data set to work with especially when applying automatic sentence alignment. Standard alignment approaches rely on translation consistency either in terms of length or term translations or a combination of both. In the paper, we show that these approaches are not applicable for subtitles and we propose a new alignment approach based on time overlaps specifically designed for subtitles. In our experiments we obtain a significant improvement of alignment accuracy compared to standard length-based",
"Sentence alignment is an essential step in building a parallel corpus. In this paper a specialized approach for the alignment of movie subtitles based on time overlaps is introduced. It is used for creating an extensive multilingual parallel subtitle corpus currently containing about 21 million aligned sentence fragments in 29 languages. Our alignment approach yields significantly higher accuracies compared to standard length-based approaches on this data. Furthermore, we can show that simple heuristics for subtitle synchronization can be used to improve the alignment accuracy even further.",
"This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach."
]
} |
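The time-overlap cue used by the subtitle aligners discussed above can be sketched as follows. The `(start_s, end_s, text)` tuple format and the greedy best-overlap matching are simplifying assumptions; real aligners also merge or split cues and correct timing drift between subtitle versions.

```python
# Sketch of time-overlap subtitle alignment: pair source and target subtitles
# whose on-screen display intervals overlap the most. Intervals are compared
# by intersection-over-union of their time spans.

def overlap_ratio(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def align(src, tgt, threshold=0.5):
    pairs = []
    for s in src:
        best = max(tgt, key=lambda t: overlap_ratio(s[:2], t[:2]))
        if overlap_ratio(s[:2], best[:2]) >= threshold:
            pairs.append((s[2], best[2]))
    return pairs

en = [(0.0, 2.0, "Hello."), (2.5, 5.0, "How are you?")]
fr = [(0.1, 2.1, "Bonjour."), (2.6, 5.2, "Comment ça va ?")]
pairs = align(en, fr)
```

Because matching subtitles are displayed at nearly the same moment regardless of language, this timing signal works even across unrelated language pairs where cognate-based anchors fail.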
1605.06457 | 2949907962 | Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking. | The degree of photorealism allowed by the recent progress in computer graphics and modern high-level generic graphics platforms enables a more widespread use of synthetic data generated under less constrained settings. First attempts to use synthetic data for training are mainly limited to using rough synthetic models or synthesized real examples (e.g., of pedestrians @cite_43 @cite_30 ). In contrast, Marín et al. @cite_37 @cite_35 @cite_14 went further and positively answered the intriguing question of whether one can learn appearance models of pedestrians in a virtual world and use the learned models for detection in the real world. A related approach is described in @cite_40 , but for scene- and scene-location-specific detectors with fixed calibrated surveillance cameras and a priori known scene geometry. 
In the context of video surveillance too, @cite_41 proposes a virtual simulation test bed for system design and evaluation. Several other works use 3D CAD models for more general object pose estimation @cite_32 @cite_17 and detection @cite_4 @cite_6 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_41",
"@cite_32",
"@cite_6",
"@cite_43",
"@cite_40",
"@cite_17"
],
"mid": [
"2165741220",
"1581020457",
"2153062878",
"2033547469",
"2083544878",
"2108004794",
"2010625607",
"2211115409",
"2106413791",
"",
""
],
"abstract": [
"Recognizing 3D objects from arbitrary view points is one of the most fundamental problems in computer vision. A major challenge lies in the transition between the 3D geometry of objects and 2D representations that can be robustly matched to natural images. Most approaches thus rely on 2D natural images either as the sole source of training data for building an implicit 3D representation, or by enriching 3D models with natural image features. In this paper, we go back to the ideas from the early days of computer vision, by using 3D object models as the only source of information for building a multi-view object class detector. In particular, we use these models for learning 2D shape that can be robustly matched to 2D natural images. Our experiments confirm the validity of our approach, which outperforms current state-of-the-art techniques on a multi-view detection data set.",
"Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate. Then, the system should self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in the use of samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], between virtual and real worlds it may be a dataset shift. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).",
"Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then appearance-based pedestrian classifiers are learnt using HOG and linear SVM. We test such classifiers in a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual and real based training give rise to classifiers of similar performance.",
"Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human intensive and subjective task worth to be minimized. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in real world, but it can also suffer the data set shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy than when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.",
"The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.",
"Object video virtual video (OVVV) is a publicly available visual surveillance simulation test bed based on a commercial game engine. The tool simulates multiple synchronized video streams from a variety of camera configurations, including static, PTZ and omni-directional cameras, in a virtual environment populated with computer or player controlled humans and vehicles. To support performance evaluation, OVVV generates detailed automatic ground truth for each frame including target centroids, bounding boxes and pixel-wise foreground segmentation. We describe several realistic, controllable noise effects including pixel noise, video ghosting and radial distortion to improve the realism of synthetic video and provide additional dimensions for performance testing. Several indoor and outdoor virtual environments developed by the authors are described to illustrate the range of testing scenarios possible using OVVV. Finally, we provide a practical demonstration of using OVVV to develop and evaluate surveillance algorithms.",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.",
"Pedestrian detection is a challenging vision task, especially applied to the automotive field where the background changes as the vehicle moves. This paper presents an extensive study upon human body models and the techniques suitable for being used in a pedestrian detection system. Several different approaches for building model sets, such as synthetic, real, and dynamic sets are presented and discussed. Comparative results are reported with reference to a case study of a real system. Preliminary results of current research status are shown together with further developments.",
"",
""
]
} |
1605.06457 | 2949907962 | Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking. | Only a few works use photo-realistic imagery for evaluation purposes, and in most cases these works focus on low-level image and video processing tasks. Kaneva et al. @cite_39 evaluate low-level image features, while Butler et al. @cite_26 propose a synthetic benchmark for optical flow estimation: the popular MPI Sintel Flow Dataset. The recent work of Chen et al. @cite_10 is another example for basic building blocks of autonomous driving. These approaches view photo-realistic imagery as a way of obtaining ground truth that cannot be easily obtained otherwise (e.g., optical flow). When ground truth can be collected, for instance via crowd-sourcing, real-world imagery is often preferred over synthetic data because of the artifacts the latter might introduce. | {
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_39"
],
"mid": [
"2950737880",
"2953248129",
"2036242214"
],
"abstract": [
"In this paper, we focus on the two key aspects of multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging on the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenarios. As another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data-association between targets and detections in a temporal window, that is performed repeatedly at every frame. While being efficient, NOMT achieves robustness via integrating multiple cues including ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over the other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, KITTI and MOT datasets. The NOMT method combined with ALFD metric achieves the best accuracy in both datasets with significant margins (about 10 higher MOTA) over the state-of-the-arts.",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Image features are widely used in computer vision applications. They need to be robust to scene changes and image transformations. Designing and comparing feature descriptors requires the ability to evaluate their performance with respect to those transformations. We want to know how robust the descriptors are to changes in the lighting, scene, or viewing conditions. For this, we need ground truth data of different scenes viewed under different camera or lighting conditions in a controlled way. Such data is very difficult to gather in a real-world setting. We propose using a photorealistic virtual world to gain complete and repeatable control of the environment in order to evaluate image features. We calibrate our virtual world evaluations by comparing against feature rankings made from photographic data of the same subject matter (the Statue of Liberty). We find very similar feature rankings between the two datasets. We then use our virtual world to study the effects on descriptor performance of controlled changes in viewpoint and illumination. We also study the effect of augmenting the descriptors with depth information to improve performance."
]
} |
1605.06457 | 2949907962 | Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking. | In this paper, we show that such issues can be partially circumvented using our approach, in particular for high-level video understanding tasks for which ground-truth data is tedious to collect. We believe current approaches face two major limitations that prevent broadening the scope of virtual data. First, the data generation is itself costly and time-consuming, as it often requires creating animation movies from scratch. This also limits the quantity of data that can be generated. An alternative consists in recording scenes from humans playing video games @cite_37 , but this faces similar time costs, and further restricts the variety of the generated scenes. The second limitation lies in the usefulness of synthetic data as a proxy to assess real-world performance on high-level computer vision tasks, including object detection and tracking. 
It is indeed difficult to evaluate how conclusions obtained from virtual data could be applied to the real world in general. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2153062878"
],
"abstract": [
"Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then appearance-based pedestrian classifiers are learnt using HOG and linear SVM. We test such classifiers in a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual and real based training give rise to classifiers of similar performance."
]
} |
1605.06636 | 2408201877 | Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets. | Transfer learning @cite_25 aims to build learning machines that generalize across different domains following different probability distributions @cite_21 @cite_0 @cite_13 @cite_22 @cite_38 . Transfer learning finds wide applications in computer vision @cite_14 @cite_15 @cite_17 @cite_36 and natural language processing @cite_24 @cite_33 . | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_22",
"@cite_33",
"@cite_36",
"@cite_21",
"@cite_0",
"@cite_24",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"2153929442",
"1722318740",
"2158815628",
"22861983",
"2951084305",
"2103851188",
"2115403315",
"2158899491",
"2128053425",
"2120149881",
"2165698076",
""
],
"abstract": [
"Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal PY changes, while the conditional PX Y stays the same (target shift), (2) the marginal PY is fixed, while the conditional PX Y changes with certain constraints (conditional shift), and (3) the marginal PY changes, and the conditional PX Y changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"Learning domain-invariant features is of vital importance to unsupervised domain adaptation, where classifiers trained on the source domain need to be adapted to a different target domain for which no labeled examples are available. In this paper, we propose a novel approach for learning such features. The central idea is to exploit the existence of landmarks, which are a subset of labeled data instances in the source domain that are distributed most similarly to the target domain. Our approach automatically discovers the landmarks and use them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks. The solutions of those auxiliary tasks form the basis to compose invariant features for the original task. We show how this composition can be optimized discriminatively without requiring labels from the target domain. We validate the method on standard benchmark datasets for visual object recognition and sentiment analysis of text. Empirical results show the proposed method outperforms the state-of-the-art significantly.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at",
"A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Simulations illustrate the usefulness of our approach.",
"Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
""
]
} |
1605.06636 | 2408201877 | Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets. | The main technical problem of transfer learning is how to reduce the shifts in data distributions across domains. Most existing methods learn a shallow representation model by which domain discrepancy is minimized, which cannot suppress domain-specific explanatory factors of variations. Deep networks learn abstract representations that disentangle the explanatory factors of variations behind data @cite_37 and extract transferable factors underlying different populations @cite_33 @cite_2, which can only reduce, but not remove, the cross-domain discrepancy @cite_43. Recent work on deep domain adaptation embeds domain-adaptation modules into deep networks to boost transfer performance @cite_40 @cite_20 @cite_18 @cite_31 @cite_32 @cite_27. These methods mainly correct the shifts in marginal distributions, assuming conditional distributions remain unchanged after the marginal distribution adaptation. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_33",
"@cite_32",
"@cite_43",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_20"
],
"mid": [
"2163922914",
"2949987290",
"22861983",
"2951670162",
"2949667497",
"1565327149",
"2950361018",
"2161381512",
"1882958252",
"2953226914"
],
"abstract": [
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.",
"The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can simultaneously learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We embed features of multiple layers into reproducing kernel Hilbert spaces (RKHSs) and match feature distributions for feature adaptation. The adaptation behaviors can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently using standard back-propagation. Empirical evidence exhibits that the approach outperforms state of art methods on standard domain adaptation datasets.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings."
]
} |
1605.06636 | 2408201877 | Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets. | Transfer learning will become more challenging as domains may change by the joint distributions @math of input features @math and output labels @math . The distribution shifts may stem from the marginal distributions @math (a.k.a. covariate shift @cite_42 @cite_21 ), the conditional distributions @math (a.k.a. conditional shift @cite_38 ), or both (a.k.a. dataset shift @cite_8 ). Another line of work @cite_38 @cite_7 correct both target and conditional shifts based on the theory of kernel embedding of conditional distributions @cite_28 @cite_34 @cite_9 . Since the target labels are unavailable, adaptation is performed by minimizing the discrepancy between marginal distributions instead of conditional distributions. In general, the presence of conditional shift leads to an ill-posed problem, and an additional assumption that the conditional distribution may only change under location-scale transformations on @math is commonly imposed to make the problem tractable @cite_38 . 
As it is not easy to justify which components of the joint distribution are changing in practice, our work is transparent to diverse scenarios by directly manipulating the joint distribution without assumptions on the marginal and conditional distributions. Furthermore, it remains unclear how to account for the shift in joint distributions within the regime of deep architectures. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_42",
"@cite_21",
"@cite_34"
],
"mid": [
"2153929442",
"2147520416",
"2162651021",
"2146641075",
"2124331852",
"2811380766",
"2103851188",
"2138153286"
],
"abstract": [
"Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal P(Y) changes, while the conditional P(X|Y) stays the same (target shift), (2) the marginal P(Y) is fixed, while the conditional P(X|Y) changes with certain constraints (conditional shift), and (3) the marginal P(Y) changes, and the conditional P(X|Y) changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.",
"Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source training domain) but only very limited training data for a second task (the target test domain) that is similar but not identical to the first. Previous work on transfer learning has focused on relatively restricted settings, where specific parts of the model are considered to be carried over between tasks. Recent work on covariate shift focuses on matching the marginal distributions on observations X across domains. Similarly, work on target conditional shift focuses on matching marginal distributions on labels Y and adjusting conditional distributions P(X|Y ), such that P(X) can be matched across domains. However, covariate shift assumes that the support of test P(X) is contained in the support of training P(X), i.e., the training set is richer than the test set. Target conditional shift makes a similar assumption for P(Y). Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Also little work has been done when all marginal and conditional distributions are allowed to change while the changes are smooth. In this paper, we consider a general case where both the support and the model change across domains. We transform both X and Y by a location-scale shift to achieve transfer between tasks. Since we allow more flexible transformations, the proposed method yields better results on both synthetic data and real-world data.",
"Dataset shift is a common problem in predictive modeling that occurs when the joint distribution of inputs and outputs differs between training and test stages. Covariate shift, a particular case of dataset shift, occurs when only the input distribution changes. Dataset shift is present in most practical applications, for reasons ranging from the bias introduced by experimental design to the irreproducibility of the testing conditions at training time. (An example is e-mail spam filtering, which may fail to recognize spam that differs in form from the spam the automatic filter has been built on.) Despite this, and despite the attention given to the apparently similar problems of semi-supervised learning and active learning, dataset shift has received relatively little attention in the machine learning community until recently. This volume offers an overview of current efforts to deal with dataset and covariate shift. The chapters offer a mathematical and philosophical introduction to the problem, place dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning, provide theoretical views of dataset and covariate shift (including decision theoretic and Bayesian perspectives), and present algorithms for covariate shift. Contributors: Shai Ben-David, Steffen Bickel, Karsten Borgwardt, Michael Brückner, David Corfield, Amir Globerson, Arthur Gretton, Lars Kai Hansen, Matthias Hein, Jiayuan Huang, Takafumi Kanamori, Klaus-Robert Müller, Sam Roweis, Neil Rubens, Tobias Scheffer, Marcel Schmittfull, Bernhard Schölkopf, Hidetoshi Shimodaira, Alex Smola, Amos Storkey, Masashi Sugiyama, Choon Hui Teo. Neural Information Processing series",
"In this paper, we extend the Hilbert space embedding approach to handle conditional distributions. We derive a kernel estimate for the conditional embedding, and show its connection to ordinary embeddings. Conditional embeddings largely extend our ability to manipulate distributions in Hilbert spaces, and as an example, we derive a nonparametric method for modeling dynamical systems where the belief state of the system is maintained as a conditional embedding. Our method is very general in terms of both the domains and the types of distributions that it can handle, and we demonstrate the effectiveness of our method in various dynamical systems. We expect that conditional embeddings will have wider applications beyond modeling dynamical systems.",
"A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as γk, indexed by the kernel function k that defines the inner product in the RKHS. We present three theoretical properties of γk. First, we consider the question of determining the conditions on the kernel k for which γk is a metric: such k are denoted characteristic kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g., on compact domains), and are difficult to check, our conditions are straightforward and intuitive: integrally strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on ℝ^d, then it is characteristic if and only if the support of its Fourier transform is the entire ℝ^d. Second, we show that the distance between distributions under γk results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies. Third, to understand the nature of the topology induced by γk, we relate γk to other popular metrics on probability measures, and present conditions on the kernel k under which γk metrizes the weak topology.",
"We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.",
"A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Simulations illustrate the usefulness of our approach.",
"Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations. And, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods such as those described below. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a local-minimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance."
]
} |
1605.06431 | 2541674938 | In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks. | Residual networks @cite_18 @cite_9 are neural networks in which each layer consists of a residual module @math and a skip connection bypassing @math (we only consider identity skip connections, but this framework readily generalizes to more complex projection skip connections when downsampling is required). Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of notation, we omit the initial pre-processing and final classification steps. With @math as its input, the output of the @math th block is recursively defined as where @math is some sequence of convolutions, batch normalization @cite_21 , and Rectified Linear Units (ReLU) as nonlinearities. Figure (a) shows a schematic view of this architecture. 
In the most recent formulation of residual networks @cite_9 , @math is defined by where @math and @math are weight matrices, @math denotes convolution, @math is batch normalization and @math . Other formulations are typically composed of the same operations, but may differ in their order. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_21"
],
"mid": [
"2949427019",
"2949650786",
"2949117887"
],
"abstract": [
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters."
]
} |
1605.06431 | 2541674938 | In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks. | Several investigative studies seek to better understand convolutional neural networks. For example, Zeiler and Fergus @cite_8 visualize convolutional filters to unveil the concepts learned by individual neurons. Further, @cite_5 investigate the function learned by neural networks and how small changes in the input called adversarial examples can lead to large changes in the output. Within this stream of research, the closest study to our work is from @cite_2 , which performs lesion studies on AlexNet. They discover that early layers exhibit little co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the common thread of exploring specific aspects of neural network performance. In our study, we focus our investigation on structural properties of neural networks. | {
"cite_N": [
"@cite_5",
"@cite_2",
"@cite_8"
],
"mid": [
"1673923490",
"",
"2952186574"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1605.06431 | 2541674938 | In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks. | @cite_3 show that dropping out individual neurons during training leads to a network that is equivalent to averaging over an ensemble of exponentially many networks. Similar in spirit, stochastic depth @cite_13 trains an ensemble of networks by dropping out entire layers during training. In this work, we show that one does not need a special training strategy such as stochastic depth to drop out layers. Entire layers can be removed from plain residual networks without impacting performance, indicating that they do not strongly depend on each other. | {
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"2949892913",
"1904365287"
],
"abstract": [
"Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10).",
"When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition."
]
} |
1605.06368 | 2949161171 | Lurking is a complex user-behavioral phenomenon that occurs in all large-scale online communities and social networks. It generally refers to the behavior characterizing users that benefit from the information produced by others in the community without actively contributing back to the production of social content. The amount and evolution of lurkers may strongly affect an online social environment, therefore understanding the lurking dynamics and identifying strategies to curb this trend are relevant problems. In this regard, we introduce the Lurker Game, i.e., a model for analyzing the transitions from a lurking to a non-lurking (i.e., active) user role, and vice versa, in terms of evolutionary game theory. We evaluate the proposed Lurker Game by arranging agents on complex networks and analyzing the system evolution, seeking relations between the network topology and the final equilibrium of the game. Results suggest that the Lurker Game is suitable to model the lurking dynamics, showing how the adoption of rewarding mechanisms combined with the modeling of hypothetical heterogeneity of users' interests may lead users in an online community towards a cooperative behavior. | In @cite_6 @cite_29 , the authors developed the first computational approach to lurker mining, focusing on ranking problems. To this purpose, they proposed a topology-driven definition of lurking behavior, based on principles of overconsumption, authoritativeness of the information received, and non-authoritativeness of the information produced. Quantitative and qualitative evaluation results showed how the proposed methods are effective in identifying and ranking lurkers in real-world OSNs. | {
"cite_N": [
"@cite_29",
"@cite_6"
],
"mid": [
"2071348697",
"2042866995"
],
"abstract": [
"The massive presence of silent members in online communities, the so-called lurkers, has long attracted the attention of researchers in social science, cognitive psychology, and computer–human interaction. However, the study of lurking phenomena represents an unexplored opportunity of research in data mining, information retrieval and related fields. In this paper, we take a first step towards the formal specification and analysis of lurking in social networks. We address the new problem of lurker ranking and propose the first centrality methods specifically conceived for ranking lurkers in social networks. Our approach utilizes only the network topology without probing into text contents or user relationships related to media. Using Twitter, Flickr, FriendFeed and GooglePlus as cases in point, our methods’ performance was evaluated against data-driven rankings as well as existing centrality methods, including the classic PageRank and alpha-centrality. Empirical evidence has shown the significance of our lurker ranking approach, and its uniqueness in effectively identifying and ranking lurkers in an online social network.",
"The massive presence of silent members in online communities, the so-called lurkers, has long attracted the attention of researchers in social science, cognitive psychology, and computer-human interaction. However, the study of lurking phenomena represents an unexplored opportunity of research in data mining, information retrieval and related fields. In this paper, we take a first step towards the formal specification and analysis of lurking in social networks. Particularly, focusing on the network topology, we address the new problem of lurker ranking and propose the first centrality methods specifically conceived for ranking lurkers in social networks. Using Twitter and FriendFeed as cases in point, our methods' performance was evaluated against data-driven rankings as well as existing centrality methods, including the classic PageRank and alpha-centrality. Empirical evidence has shown the significance of our lurker ranking approach, which substantially differs from other methods in effectively identifying and ranking lurkers."
]
} |
1605.06368 | 2949161171 | Lurking is a complex user-behavioral phenomenon that occurs in all large-scale online communities and social networks. It generally refers to the behavior characterizing users that benefit from the information produced by others in the community without actively contributing back to the production of social content. The amount and evolution of lurkers may strongly affect an online social environment, therefore understanding the lurking dynamics and identifying strategies to curb this trend are relevant problems. In this regard, we introduce the Lurker Game, i.e., a model for analyzing the transitions from a lurking to a non-lurking (i.e., active) user role, and vice versa, in terms of evolutionary game theory. We evaluate the proposed Lurker Game by arranging agents on complex networks and analyzing the system evolution, seeking relations between the network topology and the final equilibrium of the game. Results suggest that the Lurker Game is suitable to model the lurking dynamics, showing how the adoption of rewarding mechanisms combined with the modeling of hypothetical heterogeneity of users' interests may lead users in an online community towards a cooperative behavior. | The same authors also took a first step toward the definition of delurking strategies in @cite_2 , by proposing a targeted influence maximization problem under the linear-threshold diffusion model. In this context, a set of previously identified lurkers is taken as the target set of an influence maximization problem, whose objective function is defined upon the concept of delurking capital, i.e., the social capital gained by activating lurkers in an online community. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2291420660"
],
"abstract": [
"Lurkers are silent members of a social network (SN) who gain benefit from others' information without significantly giving back to the community. The study of lurking behaviors in SNs is nonetheless important, since these users acquire knowledge from the community, and as such they are social capital holders. Within this view, a major goal is to delurk such users, i.e., to encourage them to more actively be involved in the SN. Although delurking strategies have been conceptualized in social science and human-computer interaction research, no computational approach has been so far defined to turn lurkers into active participants in the SN. In this work we fill this gap by presenting a delurking-oriented targeted influence maximization problem under the linear threshold (LT) model. We define a novel objective function, in terms of the lurking scores associated with the nodes in the final active set, and we show it is monotone and submodular. We provide an approximate solution by developing a greedy algorithm, named DEvOTION, which computes a k-node set that maximizes the value of the delurking-capital-based objective function, for a given minimum lurking score threshold. Results on SN datasets of different sizes have demonstrated the significance of our delurking approach via LT-based targeted influence maximization."
]
} |
1605.06249 | 2404750071 | Patterns often appear in a variety of large, real-world networks, and interesting physical phenomena are often explained by network topology as in the case of the bow-tie structure of the World Wide Web, or the small world phenomenon in social networks. The discovery and modelling of such regular patterns has a wide application from disease propagation to financial markets. In this work we describe a newly discovered regularly occurring striation pattern found in the PageRank ordering of adjacency matrices that encode real-world networks. We demonstrate that these striations are the result of well-known graph generation processes resulting in regularities that are manifest in the typical neighborhood distribution. The spectral view explored in this paper encodes a tremendous amount about the explicit and implicit topology of a given network, so we also discuss the interesting network properties, outliers and anomalies that a viewer can determine from a brief look at the re-ordered matrix. | The origin of PageRank was rooted in the intent to rank web pages based on their link topology @cite_5 . Although there are alternative link-topology-based algorithms such as HITS @cite_41 and SALSA @cite_15 (which combines PageRank and HITS), PageRank enjoys brand recognition due to its early integration into and association with the Google search engine @cite_37 . Many studies have examined the type and quality of output of the PageRank algorithm. Page et al. advised that @math based on empirical evidence @cite_5 , and Becchetti and Castillo further showed that PageRank does not fit a power-law distribution for extreme @math values @cite_23 . Pandurangan et al. showed that the PageRank values of the web follow a power law @cite_6 , and Volkovich et al. showed the correlation between various parameters of the network (in-degree, out-degree, and percentage of dangling nodes) and the overall log-log shape of the PageRank plot @cite_40 . | {
"cite_N": [
"@cite_37",
"@cite_41",
"@cite_6",
"@cite_40",
"@cite_23",
"@cite_5",
"@cite_15"
],
"mid": [
"2094140553",
"2138621811",
"2170582301",
"1865446091",
"2119485457",
"1854214752",
"2089199911"
],
"abstract": [
"The roots of Google's PageRank can be traced back to several early, and equally remarkable, ranking techniques.",
"The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.",
"Recent work on modeling the Web graph has dwelt on capturing the degree distributions observed on the Web. Pointing out that this represents a heavy reliance on \"local\" properties of the Web graph, we study the distribution of PageRank values (used in the Google search engine) on the Web. This distribution is of independent interest in optimizing search indices and storage. We show that PageRank values on the Web follow a power law. We then develop detailed models for the Web graph that explain this observation, and moreover remain faithful to previously studied degree distributions. We analyze these models, and compare the analyses to both snapshots from the Web and to graphs generated by simulations on the new models. To our knowledge this represents the first modeling of the Web that goes beyond fitting degree distributions on the Web.",
"We study the relation between PageRank and other parameters of information networks such as in-degree, out-degree, and the fraction of dangling nodes. We model this relation through a stochastic equation inspired by the original definition of PageRank. Further, we use the theory of regular variation to prove that PageRank and in-degree follow power laws with the same exponent. The difference between these two power laws is in a multiplicative constant, which depends mainly on the fraction of dangling nodes, average in-degree, the power law exponent, and the damping factor. The out-degree distribution has a minor effect, which we explicitly quantify. Finally, we propose a ranking scheme which does not depend on out-degrees.",
"We show that the empirical distribution of the PageRank values in a large set of Web pages does not follow a power-law except for some particular choices of the damping factor. We argue that for a graph with an in-degree distribution following a power-law with exponent between 2.1 and 2.2, choosing a damping factor around 0.85 for PageRank yields a power-law distribution of its values. We suggest that power-law distributions of PageRank in Web graphs have been observed because the typical damping factor used in practice is between 0.85 and 0.90.",
"The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.",
"Abstract Today, when searching for information on the World Wide Web, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web sites whose contents match the query. For broad topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the World Wide Web. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web sites: hubs and authorities . Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship : a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative sites. We present SALSA, a new stochastic approach for link structure analysis, which examines random walks on graphs derived from the link structure. We show that both SALSA and Kleinberg's mutual reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of World Wide Web subgraphs, making it computationally more efficient than the mutual reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect (Tightly Knit Community) which, in certain cases, prevents the mutual reinforcement approach from identifying meaningful authorities."
]
} |
1605.06249 | 2404750071 | Patterns often appear in a variety of large, real-world networks, and interesting physical phenomena are often explained by network topology as in the case of the bow-tie structure of the World Wide Web, or the small world phenomenon in social networks. The discovery and modelling of such regular patterns has a wide application from disease propagation to financial markets. In this work we describe a newly discovered regularly occurring striation pattern found in the PageRank ordering of adjacency matrices that encode real-world networks. We demonstrate that these striations are the result of well-known graph generation processes resulting in regularities that are manifest in the typical neighborhood distribution. The spectral view explored in this paper encodes a tremendous amount about the explicit and implicit topology of a given network, so we also discuss the interesting network properties, outliers and anomalies that a viewer can determine from a brief look at the re-ordered matrix. | The adjacency matrix is a common tool for visualizing graph structure, and has been shown to perform well in providing an intuitive understanding of dense or large graphs, except in the task of path finding @cite_35 . The drawback of matrices, however, is that nodes can appear in any arbitrary order. Structures can often be revealed by re-ordering the nodes of the matrix to expose clusters of related nodes. A study by Mueller et al. provided a comparison of different methods of ordering the matrices, using Random, BFS, DFS, King's algorithm, Reverse Cuthill-McKee, Degree, Spectral, Separator tree partitioning algorithm, and Sloan ordering @cite_4 . They evaluated the different orderings in terms of their consistency and ability to reveal structure on graphs generated by different algorithms. They did not, however, investigate PageRank or any of the other ranking-models discussed in the present work. | {
"cite_N": [
"@cite_35",
"@cite_4"
],
"mid": [
"2157316473",
"2166220529"
],
"abstract": [
"In this article, we describe a taxonomy of generic graph related tasks along with a computer-based evaluation designed to assess the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation encompasses seven generic tasks and leads to insightful recommendations for the representation of graphs according to their size and density. Typically, we show that when graphs are bigger than twenty vertices, the matrix-based visualization outperforms node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation.",
"In this study, we examine the use of graph ordering algorithms for visual analysis of data sets using visual similarity matrices. Visual similarity matrices display the relationships between data items in a dot-matrix plot format, with the axes labeled with the data items and points drawn if there is a relationship between two data items. The biggest challenge for displaying data using this representation is finding an ordering of the data items that reveals the internal structure of the data set. Poor orderings are indistinguishable from noise whereas a good ordering can reveal complex and subtle features of the data. We consider three general classes of algorithms for generating orderings: simple graph theoretic algorithms, symbolic sparse matrix reordering algorithms, and spectral decomposition algorithms. We apply each algorithm to synthetic and real world data sets and evaluate each algorithm for interpretability (i.e., does the algorithm lead to images with usable visual features?) and stability (i.e., does the algorithm consistently produce similar results?). We also provide a detailed discussion of the results for each algorithm across the different graph types and include a discussion of some strategies for using ordering algorithms for data analysis based on these results."
]
} |
1605.06249 | 2404750071 | Patterns often appear in a variety of large, real-world networks, and interesting physical phenomena are often explained by network topology as in the case of the bow-tie structure of the World Wide Web, or the small world phenomenon in social networks. The discovery and modelling of such regular patterns has a wide application from disease propagation to financial markets. In this work we describe a newly discovered regularly occurring striation pattern found in the PageRank ordering of adjacency matrices that encode real-world networks. We demonstrate that these striations are the result of well-known graph generation processes resulting in regularities that are manifest in the typical neighborhood distribution. The spectral view explored in this paper encodes a tremendous amount about the explicit and implicit topology of a given network, so we also discuss the interesting network properties, outliers and anomalies that a viewer can determine from a brief look at the re-ordered matrix. | The idea of finding order in visual representations of matrices, especially adjacency matrices of graphs, has a long history. McCormick et al. introduced the Bond Energy Algorithm @cite_9 , which provided a method of reordering the columns and rows of matrices so that larger values were grouped together. This method used a nearest-neighbor heuristic to overcome the @math possible permutations, and is among the earliest algorithms that may be used as a graph clustering algorithm. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2119430004"
],
"abstract": [
"A new cluster-analysis method, the bond energy algorithm, has been developed recently; it operates upon a raw input object-object or object-attribute data array by permuting its rows and columns in order to find informative variable groups and their interrelations. This paper describes the algorithm and illustrates by several examples its use for both problem decomposition and data reorganization."
]
} |
1605.06249 | 2404750071 | Patterns often appear in a variety of large, real-world networks, and interesting physical phenomena are often explained by network topology as in the case of the bow-tie structure of the World Wide Web, or the small world phenomenon in social networks. The discovery and modelling of such regular patterns has a wide application from disease propagation to financial markets. In this work we describe a newly discovered regularly occurring striation pattern found in the PageRank ordering of adjacency matrices that encode real-world networks. We demonstrate that these striations are the result of well-known graph generation processes resulting in regularities that are manifest in the typical neighborhood distribution. The spectral view explored in this paper encodes a tremendous amount about the explicit and implicit topology of a given network, so we also discuss the interesting network properties, outliers and anomalies that a viewer can determine from a brief look at the re-ordered matrix. | The ordering of the Separator tree is based on an algorithm given by Blandford et al. @cite_27 . It removes vertices (separators) from the graph in order to partition the structure into subgraphs of a fixed size. Separator trees were a result of trying to reduce the size of storage required for large graphs by isolating and thereby removing the need to store empty parts of the adjacency matrix. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1994840070"
],
"abstract": [
"We consider the problem of representing graphs compactly while supporting queries efficiently. In particular we describe a data structure for representing n-vertex unlabeled graphs that satisfy an O(n^c)-separator theorem, c < 1. The structure uses O(n) bits, and supports adjacency and degree queries in constant time, and neighbor listing in constant time per neighbor. This generalizes previous results for graphs with constant genus, such as planar graphs. We present experimental results using many \"real world\" graphs including 3-dimensional finite element meshes, link graphs of the web, internet router graphs, VLSI circuits, and street map graphs. Compared to adjacency lists, our approach reduces space usage by almost an order of magnitude, while supporting depth-first traversal in about the same running time."
]
} |
1605.06249 | 2404750071 | Patterns often appear in a variety of large, real-world networks, and interesting physical phenomena are often explained by network topology as in the case of the bow-tie structure of the World Wide Web, or the small world phenomenon in social networks. The discovery and modelling of such regular patterns has a wide application from disease propagation to financial markets. In this work we describe a newly discovered regularly occurring striation pattern found in the PageRank ordering of adjacency matrices that encode real-world networks. We demonstrate that these striations are the result of well-known graph generation processes resulting in regularities that are manifest in the typical neighborhood distribution. The spectral view explored in this paper encodes a tremendous amount about the explicit and implicit topology of a given network, so we also discuss the interesting network properties, outliers and anomalies that a viewer can determine from a brief look at the re-ordered matrix. | The traditional adjacency matrix is not the only method available to tease patterns from graph topology. Prakash et al. @cite_26 found a pattern called EigenSpokes, which is observed by creating a scatter plot of the singular vectors (an EE-plot) of the nodes against each other upon the presence of an edge. This plotting method often forms lines that align along specific vectors (the "spokes"), which are then used in community detection. Kang et al. @cite_34 demonstrated that nodes which have high scores on the EE-plot will often form near-cliques or bipartite cores. | {
"cite_N": [
"@cite_26",
"@cite_34"
],
"mid": [
"2111639622",
"1889066674"
],
"abstract": [
"We report a surprising, persistent pattern in an important class of large sparse social graphs, which we term EigenSpokes. We focus on large Mobile Call graphs, spanning hundreds of thousands of nodes and edges, and find that the singular vectors of these graphs exhibit a striking EigenSpokes pattern wherein, when plotted against each other, they have clear, separate lines that often neatly align along specific axes (hence the term \"spokes\"). We show this phenomenon to be persistent across both temporal and geographic samples of Mobile Call graphs. Through experiments on synthetic graphs, EigenSpokes are shown to be associated with the presence of community structure in these social networks. This is further verified by analysing the eigenvectors of the Mobile Call graph, which yield nodes that form tightly-knit communities. The presence of such patterns in the singular spectra has useful applications, and could potentially be used to design simple, efficient community extraction algorithms.",
"Given a graph with billions of nodes and edges, how can we find patterns and anomalies? Are there nodes that participate in too many or too few triangles? Are there close-knit near-cliques? These questions are expensive to answer unless we have the first several eigenvalues and eigenvectors of the graph adjacency matrix. However, eigensolvers suffer from subtle problems (e.g., convergence) for large sparse matrices, let alone for billion-scale ones. We address this problem with the proposed HEIGEN algorithm, which we carefully design to be accurate, efficient, and able to run on the highly scalable MAPREDUCE (HADOOP) environment. This enables HEIGEN to handle matrices more than 1000× larger than those which can be analyzed by existing algorithms. We implement HEIGEN and run it on the M45 cluster, one of the top 50 supercomputers in the world. We report important discoveries about near-cliques and triangles on several real-world graphs, including a snapshot of the Twitter social network (38Gb, 2 billion edges) and the \"YahooWeb\" dataset, one of the largest publicly available graphs (120Gb, 1.4 billion nodes, 6.6 billion edges)."
]
} |
1605.06399 | 2953063227 | Modern computer systems typically combine multicore CPUs with accelerators like GPUs for improved performance and energy efficiency. However, these systems suffer from poor performance portability: code tuned for one device must be retuned to achieve high performance on another. Image processing is increasing in importance, with applications ranging from seismology and medicine to Photoshop. Based on our experience with medical image processing, we propose ImageCL, a high-level domain-specific language and source-to-source compiler, targeting heterogeneous hardware. ImageCL resembles OpenCL, but abstracts away performance optimization details, allowing the programmer to focus on algorithm development, rather than performance tuning. The latter is left to our source-to-source compiler and auto-tuner. From high-level ImageCL kernels, our source-to-source compiler can generate multiple OpenCL implementations with different optimizations applied. We rely on auto-tuning rather than machine models or expert programmer knowledge to determine which optimizations to apply, making our tuning procedure highly robust. Furthermore, we can generate high-performing implementations for different devices from a single source code, thereby improving performance portability. We evaluate our approach on three image processing benchmarks, on different GPU and CPU devices, and are able to outperform other state-of-the-art solutions in several cases, achieving speedups of up to 4.57x. | Auto-tuning is an established technique, used successfully in high performance libraries like FFTW @cite_8 for FFTs and ATLAS @cite_5 for linear algebra, as well as for bit-reversal @cite_21 . Methods to reduce the search effort of auto-tuning, such as analytical models @cite_9 or machine learning @cite_13 , have been developed. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_5",
"@cite_13"
],
"mid": [
"2102182691",
"2153637321",
"2166247098",
"2119395117",
"2033088400"
],
"abstract": [
"FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.",
"Empirical program optimizers estimate the values of key optimization parameters by generating different program versions and running them on the actual hardware to determine which values give the best performance. In contrast, conventional compilers use models of programs and machines to choose these parameters. It is widely believed that model-driven optimization does not compete with empirical optimization, but few quantitative comparisons have been done to date. To make such a comparison, we replaced the empirical optimization engine in ATLAS (a system for generating a dense numerical linear algebra library called the BLAS) with a model-driven optimization engine that used detailed models to estimate values for optimization parameters, and then measured the relative performance of the two systems on three different hardware platforms. Our experiments show that model-driven optimization can be surprisingly effective, and can generate code whose performance is comparable to that of code generated by empirical optimizers for the BLAS.",
"Fast bit-reversal algorithms have been of strong interest for many decades, especially after Cooley and Tukey introduced their FFT implementation in 1965. Many recent algorithms, including FFTW try to avoid the bit-reversal all together by doing in-place algorithms within their FFTs. We therefore motivate our work by showing that for FFTs of up to 65.536 points, a minimally tuned Cooley-Tukey FFT in C using our bit-reversal algorithm performs comparable or better than the default FFTW algorithm. In this paper, we present an extremely fast linear bit-reversal adapted for modern multithreaded architectures. Our bit-reversal algorithm takes advantage of recursive calls combined with the fact that it only generates pairs of indices for which the corresponding elements need to be exchanges, thereby avoiding any explicit tests. In addition we have implemented an adaptive approach which explores the trade-off between compile time and run-time work load. By generating look-up tables at compile time, our algorithm becomes even faster at run-time. Our results also show that by using more than one thread on tightly coupled architectures, further speed-up can be achieved.",
"The Basic Linear Algebra Subprograms lBLASr define one of the most heavily used performance-critical APIs in scientific computing today. It has long been understood that the most important of these routines, the dense Level 3 BLAS, may be written efficiently given a highly optimized general matrix multiply routine. In this paper, however, we show that an even larger set of operations can be efficiently maintained using a much simpler matrix multiply kernel. Indeed, this is how our own project, ATLAS lwhich provides one of the most widely used BLAS implementations in use todayr, supports a large variety of performance-critical routines. Copyright © 2004 John Wiley & Sons, Ltd.",
"The rapidly evolving landscape of multicore architectures makes the construction of efficient libraries a daunting task. A family of methods known collectively as “auto-tuning” has emerged to address this challenge. Two major approaches to auto-tuning are empirical and model-based: empirical autotuning is a generic but slow approach that works by measuring runtimes of candidate implementations, model-based auto-tuning predicts those runtimes using simplified abstractions designed by hand. We show that machine learning methods for non-linear regression can be used to estimate timing models from data, capturing the best of both approaches. A statistically-derived model offers the speed of a model-based approach, with the generality and simplicity of empirical auto-tuning. We validate our approach using the filterbank correlation kernel described in Pinto and Cox [2012], where we find that 0.1 seconds of hill climbing on the regression model (“predictive auto-tuning”) can achieve almost the same speed-up as is brought by minutes of empirical auto-tuning. Our approach is not specific to filterbank correlation, nor even to GPU kernel auto-tuning, and can be applied to almost any templated-code optimization problem, spanning a wide variety of problem types, kernel types, and platforms."
]
} |
1605.06399 | 2953063227 | Modern computer systems typically combine multicore CPUs with accelerators like GPUs for improved performance and energy efficiency. However, these systems suffer from poor performance portability: code tuned for one device must be retuned to achieve high performance on another. Image processing is increasing in importance, with applications ranging from seismology and medicine to Photoshop. Based on our experience with medical image processing, we propose ImageCL, a high-level domain-specific language and source-to-source compiler, targeting heterogeneous hardware. ImageCL resembles OpenCL, but abstracts away performance optimization details, allowing the programmer to focus on algorithm development, rather than performance tuning. The latter is left to our source-to-source compiler and auto-tuner. From high-level ImageCL kernels, our source-to-source compiler can generate multiple OpenCL implementations with different optimizations applied. We rely on auto-tuning rather than machine models or expert programmer knowledge to determine which optimizations to apply, making our tuning procedure highly robust. Furthermore, we can generate high performing implementations for different devices from a single source code, thereby improving performance portability. We evaluate our approach on three image processing benchmarks, on different GPU and CPU devices, and are able to outperform other state of the art solutions in several cases, achieving speedups of up to 4.57x. | Poor OpenCL performance portability has been the subject of many works. @cite_24 identified important tuning parameters greatly affecting performance. @cite_11 attempted to find application settings that would achieve good performance across different devices. Auto-tuning approaches have also been proposed in @cite_0 @cite_23 , but required the OpenCL code to be manually parameterized. | {
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_11"
],
"mid": [
"110609005",
"1790519466",
"",
"2081245617"
],
"abstract": [
"We study the performance portability of OpenCL across diverse architectures including NVIDIA GPU, Intel Ivy Bridge CPU, and AMD Fusion APU. We present detailed performance analysis at assembly level on three exemplar OpenCL benchmarks: SGEMM, SpMV, and FFT. We also identify a number of tuning knobs that are critical to performance portability, including threads-data mapping, data layout, tiling size, data caching, and operation-specific factors. We further demonstrate that proper tuning could improve the OpenCL portable performance from the current 15 to a potential 67 of the state-of-the-art performance on the Ivy Bridge CPU. Finally, we evaluate the current OpenCL programming model, and propose a list of extensions that improve performance portability.",
"Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programing such systems, and offers functional portability. It does, however, suffer from poor performance portability, code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1 , and is able to find configurations as little as 1.3 worse than the global minimum.",
"",
"This paper reports on the development of an MPI OpenCL implementation of LU, an application-level benchmark from the NAS Parallel Benchmark Suite. An account of the design decisions addressed during the development of this code is presented, demonstrating the importance of memory arrangement and work-item work-group distribution strategies when applications are deployed on different device types. The resulting platform-agnostic, single source application is benchmarked on a number of different architectures, and is shown to be 1.3-1.5x slower than native FORTRAN 77 or CUDA implementations on a single node and 1.3-3.1x slower on multiple nodes. We also explore the potential performance gains of OpenCL's device fissioning capability, demonstrating up to a 3x speed-up over our original OpenCL implementation."
]
} |
1605.06399 | 2953063227 | Modern computer systems typically combine multicore CPUs with accelerators like GPUs for improved performance and energy efficiency. However, these systems suffer from poor performance portability: code tuned for one device must be retuned to achieve high performance on another. Image processing is increasing in importance, with applications ranging from seismology and medicine to Photoshop. Based on our experience with medical image processing, we propose ImageCL, a high-level domain-specific language and source-to-source compiler, targeting heterogeneous hardware. ImageCL resembles OpenCL, but abstracts away performance optimization details, allowing the programmer to focus on algorithm development, rather than performance tuning. The latter is left to our source-to-source compiler and auto-tuner. From high-level ImageCL kernels, our source-to-source compiler can generate multiple OpenCL implementations with different optimizations applied. We rely on auto-tuning rather than machine models or expert programmer knowledge to determine which optimizations to apply, making our tuning procedure highly robust. Furthermore, we can generate high performing implementations for different devices from a single source code, thereby improving performance portability. We evaluate our approach on three image processing benchmarks, on different GPU and CPU devices, and are able to outperform other state of the art solutions in several cases, achieving speedups of up to 4.57x. | High performance DSLs for image processing have also been proposed, and many of these works resemble our own. Halide @cite_3 is a DSL embedded in C++, particularly targeting graphs of stencil operations for image processing. Halide separates the algorithm, specified in a purely functional manner, from the schedule, which specifies how the calculations should be carried out, including tiling, parallelization, and vectorization.
Optimization is done by changing the schedule, without modifying the algorithm, or its correctness. Schedules can be hand-tuned, or auto-tuned using stochastic search. GPUs can be targeted, but important GPU optimizations, such as using specific memories, are hard or impossible to express. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2055312318"
],
"abstract": [
"Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers."
]
} |
1605.06399 | 2953063227 | Modern computer systems typically combine multicore CPUs with accelerators like GPUs for improved performance and energy efficiency. However, these systems suffer from poor performance portability: code tuned for one device must be retuned to achieve high performance on another. Image processing is increasing in importance, with applications ranging from seismology and medicine to Photoshop. Based on our experience with medical image processing, we propose ImageCL, a high-level domain-specific language and source-to-source compiler, targeting heterogeneous hardware. ImageCL resembles OpenCL, but abstracts away performance optimization details, allowing the programmer to focus on algorithm development, rather than performance tuning. The latter is left to our source-to-source compiler and auto-tuner. From high-level ImageCL kernels, our source-to-source compiler can generate multiple OpenCL implementations with different optimizations applied. We rely on auto-tuning rather than machine models or expert programmer knowledge to determine which optimizations to apply, making our tuning procedure highly robust. Furthermore, we can generate high performing implementations for different devices from a single source code, thereby improving performance portability. We evaluate our approach on three image processing benchmarks, on different GPU and CPU devices, and are able to outperform other state of the art solutions in several cases, achieving speedups of up to 4.57x. | Combining code generation or source-to-source compilers with auto-tuners has also been explored. @cite_10 used a script-based auto-tuning compiler to translate serial C loop nests to CUDA. @cite_19 proposed combining a code generator for linear algebra kernels with an auto-tuner to achieve performance portability.
The PATUS @cite_22 framework can generate and auto-tune code for stencil computations for heterogeneous hardware, using a separate specification for the stencil and the computation strategy, similar to Halide. It lacks the general purpose capabilities of our work, and does not support all our optimizations. | {
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_22"
],
"mid": [
"2107911628",
"1975001341",
"2104512032"
],
"abstract": [
"In this work, we evaluate OpenCL as a programming tool for developing performance-portable applications for GPGPU. While the Khronos group developed OpenCL with programming portability in mind, performance is not necessarily portable. OpenCL has required performance-impacting initializations that do not exist in other languages such as CUDA. Understanding these implications allows us to provide a single library with decent performance on a variety of platforms. We choose triangular solver (TRSM) and matrix multiplication (GEMM) as representative level 3 BLAS routines to implement in OpenCL. We profile TRSM to get the time distribution of the OpenCL runtime system. We then provide tuned GEMM kernels for both the NVIDIA Tesla C2050 and ATI Radeon 5870, the latest GPUs offered by both companies. We explore the benefits of using the texture cache, the performance ramifications of copying data into images, discrepancies in the OpenCL and CUDA compilers' optimizations, and other issues that affect the performance. Experimental results show that nearly 50 of peak performance can be obtained in GEMM on both GPUs in OpenCL. We also show that the performance of these kernels is not highly portable. Finally, we propose the use of auto-tuning to better explore these kernels' parameter space using search harness.",
"This article presents a novel compiler framework for CUDA code generation. The compiler structure is designed to support autotuning, which employs empirical techniques to evaluate a set of alternative mappings of computation kernels and select the mapping that obtains the best performance. This article introduces a Transformation Strategy Generator, a meta-optimizer that generates a set of transformation recipes, which are descriptions of the mapping of the sequential code to parallel CUDA code. These recipes comprise a search space of possible implementations. This system achieves performance comparable and sometimes better than manually tuned libraries and exceeds the performance of a state-of-the-art GPU compiler.",
"Stencil calculations comprise an important class of kernels in many scientific computing applications ranging from simple PDE solvers to constituent kernels in multigrid methods as well as image processing applications. In such types of solvers, stencil kernels are often the dominant part of the computation, and an efficient parallel implementation of the kernel is therefore crucial in order to reduce the time to solution. However, in the current complex hardware micro architectures, meticulous architecture-specific tuning is required to elicit the machine's full compute power. We present a code generation and auto-tuning framework for stencil computations targeted at multi- and many core processors, such as multicore CPUs and graphics processing units, which makes it possible to generate compute kernels from a specification of the stencil operation and a parallelization and optimization strategy, and leverages the auto tuning methodology to optimize strategy-dependent parameters for the given hardware architecture."
]
} |
1605.06391 | 2952505042 | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | Some studies consider heterogeneous MTL, where tasks may have different numbers of outputs . This differs from the previously discussed studies which implicitly assume that each task has a single output. Heterogeneous MTL typically uses neural networks with multiple sets of outputs and losses. E.g., @cite_6 proposes a shared-hidden-layer DNN model for multilingual speech processing, where each task corresponds to an individual language. @cite_3 uses a DNN to find facial landmarks (regression) as well as recognise facial attributes (classification); while @cite_2 proposes a DNN for query classification and information retrieval (ranking for web search). A key commonality of these studies is that they all require a user-defined parameter sharing strategy. A typical design pattern is to use shared layers (same parameters) for lower layers of the DNN and then split (independent parameters) for the top layers. However, there is no systematic way to make such design choices, so researchers usually rely on trial-and-error, further complicating the already somewhat dark art of DNN design. 
In contrast, our method learns where and how much to share representation parameters across the tasks, hence significantly reducing the space of DNN design choices. | {
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"1896424170",
"2025198378",
"2131479143"
],
"abstract": [
"Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].",
"In the deep neural network (DNN), the hidden layers can be considered as increasingly complex feature transformations and the final softmax layer as a log-linear classifier making use of the most abstract features computed in the hidden layers. While the loglinear classifier should be different for different languages, the feature transformations can be shared across languages. In this paper we propose a shared-hidden-layer multilingual DNN (SHL-MDNN), in which the hidden layers are made common across many languages while the softmax layers are made language dependent. We demonstrate that the SHL-MDNN can reduce errors by 3-5 , relatively, for all the languages decodable with the SHL-MDNN, over the monolingual DNNs trained using only the language specific data. Further, we show that the learned hidden layers sharing across languages can be transferred to improve recognition accuracy of new languages, with relative error reductions ranging from 6 to 28 against DNNs trained without exploiting the transferred hidden layers. It is particularly interesting that the error reduction can be achieved for the target language that is in different families of the languages used to learn the hidden layers.",
"Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP."
]
} |
1605.06391 | 2952505042 | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | Our MTL approach dynamically generates DNN weights given some side information -- in the case of MTL, the task identity. In a related example of speaker-adaptive speech recognition, there may be several clusters in the data (e.g., gender, acoustic conditions), and each speaker's model could be a linear combination of these latent task clusters' models. They model each speaker @math 's weight matrix @math as a sum of @math base models @math , i.e., @math . The difference between speakers/tasks comes from @math and the base models are shared. An advantage of this is that, when new data come, one can choose to re-train @math parameters only, and keep @math fixed. This will significantly reduce the number of parameters to learn, and consequently the required training data. Beyond this, @cite_0 show that it is possible to train another neural network to predict those @math values from some abstract metadata. Thus a model for a new task can be generated on-the-fly, without training instances, given an abstract description of the task.
The techniques developed here are compatible with both these ideas of generating models with minimal or no effort. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1963826206"
],
"abstract": [
"The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables. Three methods of analysis to a type of extension of principal components analysis are discussed. Methods II and III are applicable to analysis of data collected for a large sample of individuals. An extension of the model is described in which allowance is made for unique variance for each combination variable when the data are collected for a large sample of individuals."
]
} |
1605.05717 | 2396432049 | Erasure codes offer an efficient way to decrease storage and communication costs while implementing atomic memory service in asynchronous distributed storage systems. In this paper, we provide erasure-code-based algorithms having the additional ability to perform background repair of crashed nodes. A repair operation of a node in the crashed state is triggered externally, and is carried out by the concerned node via message exchanges with other active nodes in the system. Upon completion of repair, the node re-enters active state, and resumes participation in ongoing and future read, write, and repair operations. To guarantee liveness and atomicity simultaneously, existing works assume either the presence of nodes with stable storage, or presence of nodes that never crash during the execution. We demand neither of these; instead we consider a natural, yet practical network stability condition @math that only restricts the number of nodes in the crashed/repair state during broadcast of any message. We present an erasure-code based algorithm @math that is always live, and guarantees atomicity as long as condition @math holds. In situations when the number of concurrent writes is limited, @math has significantly improved storage and communication cost over a replication-based algorithm @math , which also works under @math . We further show how a slightly stronger network stability condition @math can be used to construct algorithms that never violate atomicity. The guarantee of atomicity comes at the expense of having an additional phase during the read and write operations. | Our setting is closely related to the problem of implementing a consistent memory object in a dynamic setting, where nodes are allowed to voluntarily leave and join the network.
The problem involves dynamic reconfiguration of the set of nodes that take part in client operations, which is often implemented via a reconfiguration operation that is initiated by any of the participating processes, including the clients. Any node that wants to leave or join the network makes an announcement, via such an operation, before doing so. The problem is extensively studied in the field of distributed algorithms @cite_6 , @cite_26 , @cite_21 , @cite_22 , @cite_30 ; review and tutorial articles appear in @cite_20 , @cite_28 , @cite_17 . | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_20",
"@cite_17"
],
"mid": [
"2398339833",
"2100309652",
"2153637223",
"",
"2950965572",
"1587246893",
"2060204338",
"2070351064"
],
"abstract": [
"Simulating a shared register can mask the intricacies of designing algorithms for asynchronous message-passing systems subject to crash failures, since it allows them to run algorithms designed for the simpler shared-memory model. The simulation replicates the value of the register in multiple servers and requires readers and writers to communicate with a majority of servers. The success of this approach for static systems, where the set of nodes (readers, writers, and servers) is fixed, has motivated several similar simulations for dynamic systems, where nodes may enter and leave. However, all existing simulations need to assume that the system eventually stops changing for a long enough period or that the system size is fixed. This paper presents the first simulation of an atomic read write reg- ister in a crash-prone asynchronous system that can change size and withstand nodes continually entering and leaving. The simulation allows the system to keep changing, provided that the number of nodes entering and leaving during a fixed time interval is at most a constant fraction of the current system size.",
"This article deals with the emulation of atomic read write (R W) storage in dynamic asynchronous message passing systems. In static settings, it is well known that atomic R W storage can be implemented in a fault-tolerant manner even if the system is completely asynchronous, whereas consensus is not solvable. In contrast, all existing emulations of atomic storage in dynamic systems rely on consensus or stronger primitives, leading to a popular belief that dynamic R W storage is unattainable without consensus. In this article, we specify the problem of dynamic atomic read write storage in terms of the interface available to the users of such storage. We discover that, perhaps surprisingly, dynamic R W storage is solvable in a completely asynchronous system: we present DynaStore, an algorithm that solves this problem. Our result implies that atomic R W storage is in fact easier than consensus, even in dynamic systems.",
"Providing distributed processes with concurrent objects is a fundamental service that has to be offered by any distributed system. The classical shared read write register is one of the most basic ones. Several protocols have been proposed that build an atomic register on top of an asynchronous message-passing system prone to process crashes. In the same spirit, this paper addresses the implementation of a regular register (a weakened form of an atomic register) in an asynchronous dynamic message-passing system. The aim is here to cope with the net effect of the adversaries that are asynchrony and dynamicity (the fact that processes can enter and leave the system). The paper focuses on the class of dynamic systems the churn rate c of which is constant. It presents two protocols, one applicable to synchronous dynamic message passing systems, the other one to eventually synchronous dynamic systems. Both protocols rely on an appropriate broadcast communication service (similar to a reliable broadcast). Each requires a specific constraint on the churn rate c. Both protocols are first presented in an as intuitive as possible way, and are then proved correct.",
"",
"Dynamic distributed storage algorithms such as DynaStore, Reconfigurable Paxos, RAMBO, and RDS, do not ensure liveness (wait-freedom) in asynchronous runs with infinitely many reconfigurations. We prove that this is inherent for asynchronous dynamic storage algorithms, including ones that use @math or @math oracles. Our result holds even if only one process may fail, provided that machines that were successfully removed from the system's configuration may be switched off by an administrator. Intuitively, the impossibility relies on the fact that a correct process can be suspected to have failed at any time, i.e., its failure is indistinguishable to other processes from slow delivery of its messages, and so the system should be able to reconfigure without waiting for this process to complete its pending operations. To circumvent this result, we define a dynamic eventually perfect failure detector, and present an algorithm that uses it to emulate wait-free dynamic atomic storage (with no restrictions on reconfiguration rate). Together, our results thus draw a sharp line between oracles like @math and @math , which allow some correct process to continue to be suspected forever, and a dynamic eventually perfect one, which does not.",
"This paper presents an algorithm that emulates atomic read write shared objects in a dynamic network setting. To ensure availability and fault-tolerance, the objects are replicated. To ensure atomicity, reads and writes are performed using quorum configurations, each of which consists of a set of members plus sets of read-quorums and write-quorums. The algorithm is reconfigurable: the quorum configurations may change during computation, and such changes do not cause violations of atomicity. Any quorum configuration may be installed at any time. The algorithm tolerates processor stopping failure and message loss. The algorithm performs three major tasks, all concurrently: reading and writing objects, introducing new configurations, and \"garbage-collecting\" obsolete configurations. The algorithm guarantees atomicity for arbitrary patterns of asynchrony and failure. The algorithm satisfies a variety of conditional performance properties, based on timing and failure assumptions. In the \"normal case\", the latency of read and write operations is at most 8d, where d is the maximum message delay.",
"As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board. Table of Contents: Introduction Workloads and Software Infrastructure Hardware Building Blocks Datacenter Basics Energy and Power Efficiency Modeling Costs Dealing with Failures and Repairs Closing Remarks",
""
]
} |
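The register-emulation algorithms surveyed in the rows above all rest on the same quorum-intersection fact: any two majorities over the same server set must share at least one server, so a reader always contacts some server that saw the latest write. A minimal sketch of that property (illustrative only; it is not taken from any of the cited algorithms, and the function names are made up):

```python
# Why majority quorums let a simulated read/write register stay consistent:
# any two majority-sized subsets of the same server set intersect.
from itertools import combinations

def majority_quorums(n):
    """All server subsets of minimal majority size n // 2 + 1."""
    q = n // 2 + 1
    return [set(c) for c in combinations(range(n), q)]

def quorums_intersect(n):
    """Check that every pair of majority quorums shares a server."""
    qs = majority_quorums(n)
    return all(a & b for a in qs for b in qs)

# Holds for every small server count (exhaustive check).
print(all(quorums_intersect(n) for n in range(1, 8)))  # True
```

Quorums smaller than a majority do not have this property (e.g. {0, 1} and {2, 3} are disjoint halves of a 4-server system), which is why the majority threshold appears throughout these simulations.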
1605.05717 | 2396432049 | Erasure codes offer an efficient way to decrease storage and communication costs while implementing atomic memory service in asynchronous distributed storage systems. In this paper, we provide erasure-code-based algorithms having the additional ability to perform background repair of crashed nodes. A repair operation of a node in the crashed state is triggered externally, and is carried out by the concerned node via message exchanges with other active nodes in the system. Upon completion of repair, the node re-enters active state, and resumes participation in ongoing and future read, write, and repair operations. To guarantee liveness and atomicity simultaneously, existing works assume either the presence of nodes with stable storage, or presence of nodes that never crash during the execution. We demand neither of these; instead we consider a natural, yet practical network stability condition @math that only restricts the number of nodes in the crashed repair state during broadcast of any message. We present an erasure-code based algorithm @math that is always live, and guarantees atomicity as long as condition @math holds. In situations when the number of concurrent writes is limited, @math has significantly improved storage and communication cost over a replication-based algorithm @math , which also works under @math . We further show how a slightly stronger network stability condition @math can be used to construct algorithms that never violate atomicity. The guarantee of atomicity comes at the expense of having an additional phase during the read and write operations. | Recently, a large class of new erasure network codes for storage has been proposed (see @cite_11 for a survey), and also tested in networks @cite_19 , @cite_15 , @cite_1 , where the focus is efficient storage of immutable data, such as archival data.
These new codes are specifically designed to optimize performance metrics like repair-bandwidth and repair-time (of failed servers), and offer significant performance gains when compared to the traditional Reed-Solomon MDS codes @cite_9 . It remains to be explored whether these codes can be used in conjunction with the @math algorithm to further improve its performance costs. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_15",
"@cite_11"
],
"mid": [
"",
"2128268793",
"154253821",
"2949925098",
"2058863419"
],
"abstract": [
"",
"Erasure codes, such as Reed-Solomon (RS) codes, are increasingly being deployed as an alternative to data-replication for fault tolerance in distributed storage systems. While RS codes provide significant savings in storage space, they can impose a huge burden on the I O and network resources when reconstructing failed or otherwise unavailable data. A recent class of erasure codes, called minimum-storage-regeneration (MSR) codes, has emerged as a superior alternative to the popular RS codes, in that it minimizes network transfers during reconstruction while also being optimal with respect to storage and reliability. However, existing practical MSR codes do not address the increasingly important problem of I O overhead incurred during reconstructions, and are, in general, inferior to RS codes in this regard. In this paper, we design erasure codes that are simultaneously optimal in terms of I O, storage, and network bandwidth. Our design builds on top of a class of powerful practical codes, called the product-matrix-MSR codes. Evaluations show that our proposed design results in a significant reduction in the number of I Os consumed during reconstructions (a 5× reduction for typical parameters), while retaining optimality with respect to storage, reliability, and network bandwidth.",
"Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.",
"Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2x on the repair disk I O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14 more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.",
"Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic."
]
} |
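As a toy illustration of the repair operation that the storage codes in this row optimize, here is a single-parity XOR code (a trivial (k+1, k) MDS code): a lost block is rebuilt by XOR-ing all surviving blocks. The regenerating and locally repairable codes surveyed above improve on exactly this step by reducing how much data must be read and transferred during repair. This sketch is illustrative only; block contents and function names are made up:

```python
# Toy (k+1, k) erasure code: k data blocks plus one XOR parity block.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """Append one parity block: the bytewise XOR of all data blocks."""
    parity = data_blocks[0]
    for blk in data_blocks[1:]:
        parity = xor_bytes(parity, blk)
    return data_blocks + [parity]

def repair(stored, lost_index):
    """Rebuild the block at lost_index by XOR-ing all surviving blocks."""
    surviving = [b for i, b in enumerate(stored) if i != lost_index]
    rebuilt = surviving[0]
    for blk in surviving[1:]:
        rebuilt = xor_bytes(rebuilt, blk)
    return rebuilt

blocks = [b"abcd", b"efgh", b"ijkl"]
stored = encode(blocks)               # 3 data blocks + 1 parity block
assert repair(stored, 1) == b"efgh"   # lost data block recovered
```

Note that this toy code tolerates only a single erasure and must read every surviving block to repair it; the repair-bandwidth gains reported for regenerating codes come precisely from avoiding that full read.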
1605.05847 | 2399534206 | Personal data is collected and stored more than ever by the governments and companies in the digital age. Even though the data is only released after anonymization, deanonymization is possible by joining different datasets. This puts the privacy of individuals in jeopardy. Furthermore, data leaks can unveil personal identifiers of individuals when security is breached. Processing the leaked dataset can provide even more information than what is visible to naked eye. In this work, we report the results of our analyses on the recent "Turkish citizen database leak", which revealed the national identifier numbers of close to fifty million voters, along with personal information such as date of birth, birth place, and full address. We show that with automated processing of the data, one can uniquely identify (i) mother's maiden name of individuals and (ii) landline numbers, for a significant portion of people. This is a serious privacy and security threat because (i) identity theft risk is now higher, and (ii) scammers are able to access more information about individuals. The only and utmost goal of this work is to point out to the security risks and suggest stricter measures to related companies and agencies to protect the security and privacy of individuals. | Hackers or attackers have many techniques to obtain unauthorized information from the leaked or publicly shared (anonymized) data. The most well-known technique that can be used to learn more about individuals is profile matching (or deanonymization). Studies show that in today's digital world, anonymization is not an effective way of protecting sensitive data. For example, Latanya Sweeney showed that it is possible to de-anonymize individuals by using publicly available anonymized health records and other auxiliary information that can be publicly accessed on the Internet @cite_4 . 
It has been shown that anonymization is also an ineffective technique for sharing genomic data @cite_16 @cite_6 . For instance, genomic variants on the Y chromosome are correlated with the last name (for males). This last name can be inferred using public genealogy databases. With further effort (e.g., using voter registration forms) the complete identity of the individual can also be revealed @cite_6 . Also, unique features in patient-location visit patterns in a distributed health care environment can be used to link the genomic data to the identity of the individuals in publicly available records @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_4",
"@cite_6"
],
"mid": [
"2165067028",
"1968480526",
"2159024459",
"2168610667"
],
"abstract": [
"The increasing integration of patient-specific genomic data into clinical practice and research raises serious privacy concerns. Various systems have been proposed that protect privacy by removing or encrypting explicitly identifying information, such as name or social security number, into pseudonyms. Though these systems claim to protect identity from being disclosed, they lack formal proofs. In this paper, we study the erosion of privacy when genomic data, either pseudonymous or data believed to be anonymous, are released into a distributed healthcare environment. Several algorithms are introduced, collectively called RE-Identification of Data In Trails (REIDIT), which link genomic data to named individuals in publicly available records by leveraging unique features in patient-location visit patterns. Algorithmic proofs of re-identification are developed and we demonstrate, with experiments on real-world data, that susceptibility to re-identification is neither trivial nor the result of bizarre isolated occurrences. We propose that such techniques can be applied as system tests of privacy protection capabilities.",
"One concern in human genetics research is maintaining the privacy of study participants. The growth in genealogical registries may contribute to loss of privacy, given that genotypic information is accessible online to facilitate discovery of genetic relationships. Through iterative use of two such web archives, FamilySearch and Sorenson Molecular Genealogy Foundation, I was able to discern the likely haplotypes for the Y chromosomes of two men, Joseph Smith and Brigham Young, who were instrumental in the founding of the Latter-Day Saints Church. I then determined whether any of the Utahns who contributed to the HapMap project (the \"CEU\" set) is related to either man, on the basis of haplotype analysis of the Y chromosome. Although none of the CEU contributors appear to be a male-line relative, I discovered that predictions could be made for the surnames of the CEU participants by a similar process. For 20 of the 30 unrelated CEU samples, at least one exact match was revealed, and for 17 of these, a potential ancestor from Utah or a neighboring state could be identified. For the remaining ten samples, a match was nearly perfect, typically deviating by only one marker repeat unit. The same query performed in two other large databases revealed fewer individual matches and helped to clarify which surname predictions are more likely to be correct. Because large data sets of genotypes from both consenting research subjects and individuals pursuing genetic genealogy will be accessible online, this type of triangulation between databases may compromise the privacy of research subjects.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
"Sharing sequencing data sets without identifiers has become a common practice in genomics. Here, we report that surnames can be recovered from personal genomes by profiling short tandem repeats on the Y chromosome (Y-STRs) and querying recreational genetic genealogy databases. We show that a combination of a surname with other types of metadata, such as age and state, can be used to triangulate the identity of the target. A key feature of this technique is that it entirely relies on free, publicly accessible Internet resources. We quantitatively analyze the probability of identification for U.S. males. We further demonstrate the feasibility of this technique by tracing back with high probability the identities of multiple participants in public sequencing projects."
]
} |
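The linkage attacks described in this row (joining an anonymized release with a public dataset on shared quasi-identifiers) can be sketched in a few lines. All records below are fabricated for illustration, and the field names are made up:

```python
# Toy re-identification by joining on quasi-identifiers (date of birth + ZIP),
# in the spirit of Sweeney's linkage attack. Every record here is fictitious.
anonymized_health = [
    {"dob": "1970-01-02", "zip": "06511", "diagnosis": "flu"},
    {"dob": "1982-07-14", "zip": "06520", "diagnosis": "asthma"},
]
public_voters = [
    {"name": "Alice Example", "dob": "1970-01-02", "zip": "06511"},
    {"name": "Bob Example", "dob": "1985-03-09", "zip": "06511"},
]

def link(health, voters):
    """Return (name, diagnosis) pairs whose quasi-identifiers match uniquely."""
    out = []
    for h in health:
        matches = [v for v in voters
                   if v["dob"] == h["dob"] and v["zip"] == h["zip"]]
        if len(matches) == 1:          # unique match => re-identified
            out.append((matches[0]["name"], h["diagnosis"]))
    return out

print(link(anonymized_health, public_voters))  # [('Alice Example', 'flu')]
```

The attack succeeds whenever the combination of quasi-identifiers is unique in both datasets, which is why k-anonymity requires every such combination to be shared by at least k records.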
1605.05784 | 2402520960 | In this paper we apply a time series based Vector Auto Regressive (VAR) approach to the problem of predicting unemployment insurance claims in different census regions of the United States. Unemployment insurance claims data, reported weekly, are a leading indicator of the US unemployment rate. Gathering weekly unemployment claims and aggregating by region, we model correlation between the different census regions. Additionally, we explore the use of external variables such as Bing search query volumes and URL site clicks related to unemployment claims. To prevent any spurious predictors from appearing in the model we use sparse model based regularization. Preliminary results indicate that our approach is promising and in ongoing work we are extending the approach to a larger set of predictors and a longer data range. | Choi and Varian (see @cite_2) used Google Trends data to predict initial jobless claims. Our work differs from theirs in the modeling technique, selection of predictor variables and level of data aggregation. They looked at jobless claims only at the national level. Since initial jobless claims are often a leading indicator of the reported unemployment rate, it is valuable to review works predicting unemployment rates too. In this realm, Askitas and Zimmermann @cite_4 , Suhoy @cite_1 and D'Amuri and Marcucci @cite_6 use search data to predict unemployment in Germany, Israel, and the US, respectively. One downside of unemployment rate prediction is that most rates are reported on a monthly basis, while initial unemployment claims data are reported weekly. Additionally, Choi and Varian do a review of predicting various variables in the present, including unemployment along with travel and consumer confidence.
We expand on this approach by using a more general forecasting model that accounts for cross-dependence between regions and incorporates exogenous signals (query terms and clicks on links to specific sites) to further improve predictions. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_6",
"@cite_2"
],
"mid": [
"2148006902",
"2165707497",
"2137018055",
"2102177432"
],
"abstract": [
"* Research Department, Bank of Israel. http: www.boi.org.il Tanya Suhoy – Phone: 972-2-655-2620; Email: tanya suhoy@boi.org.il 1 This study was undertaken in 2008 at the suggestion of Prof. Stanley Fischer. I thank Prof. Hal Varian for help and access to the Google's experimental database, Jonathan Sidi for research assistance, and Ran Sharabani for his contribution to the discussion at the Research Department seminar.",
"The current economic crisis requires fast information to predict economic behavior early, which is difficult at times of structural changes. This paper suggests an innovative new method of using data on internet activity for that purpose. It demonstrates strong correlations between keyword searches and unemployment rates using monthly German data and exhibits a strong potential for the method used.",
"We suggest the use of an Internet job-search indicator (the Google Index, GI) as the best leading indicator to predict the US unemployment rate. We perform a deep out-of-sample forecasting comparison analyzing many models that adopt both our preferred leading indicator (GI), the more standard initial claims or combinations of both. We find that models augmented with the GI outperform the traditional ones in predicting the monthly unemployment rate, even in most state-level forecasts and in comparison with the Survey of Professional Forecasters.",
"In this paper we show how to use search engine data to forecast near-term values of economic indicators. Examples include automobile sales, unemployment claims, travel destination planning and consumer confidence."
]
} |
1605.05784 | 2402520960 | In this paper we apply a time series based Vector Auto Regressive (VAR) approach to the problem of predicting unemployment insurance claims in different census regions of the United States. Unemployment insurance claims data, reported weekly, are a leading indicator of the US unemployment rate. Gathering weekly unemployment claims and aggregating by region, we model correlation between the different census regions. Additionally, we explore the use of external variables such as Bing search query volumes and URL site clicks related to unemployment claims. To prevent any spurious predictors from appearing in the model we use sparse model based regularization. Preliminary results indicate that our approach is promising and in ongoing work we are extending the approach to a larger set of predictors and a longer data range. | De @cite_5 looked at a Vector Auto Regressive (VAR) model for predicting monthly unemployment rates in the census regions. While this approach has some resemblance to our method, the important thing to note is that we do not force any spatial constraints in the model to infer any dependence between the regions. We achieve this goal by using a sparse penalty in the model. More importantly our model also incorporates exogenous signals. Search data provides an interesting source of boundary crossing information, since an individual can work in one state or region but live in another where their search occurs. | {
"cite_N": [
"@cite_5"
],
"mid": [
"162947328"
],
"abstract": [
"We analyze spatio-temporal data on U.S. unemployment rates. For this purpose, we present a family of models designed for the analysis and time-forward prediction of spatio-temporal econometric data. Our model is aimed at applications with spatially sparse but temporally rich data, i.e. for observations collected at few spatial regions, but at many regular time intervals. The family of models utilized does not make spatial stationarity assumptions and consists in a vector autoregressive (VAR) specification, where there are as many time series as spatial regions. A model building strategy is used that takes into account the spatial dependence structure of the data. Model building may be performed either by displaying sample partial correlation functions, or automatically with an information criterion. Monthly data on unemployment rates in the nine census divisions of the U.S. are analyzed. We show with a residual analysis that our autoregressive model captures the dependence structure of the data better than with univariate time series modeling."
]
} |
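The VAR setup discussed in these two rows can be sketched with plain least squares on synthetic regional data. This is illustrative only: the paper's actual model additionally imposes a sparsity (l1) penalty to suppress spurious cross-region coefficients and includes exogenous search-query regressors, both of which are omitted here, and the coefficient matrix below is made up:

```python
# Minimal VAR(1) fit, one least-squares regression per region.
import numpy as np

rng = np.random.default_rng(0)
T, R = 1000, 3                        # time points, regions
A_true = np.array([[0.5, 0.2, 0.0],   # made-up cross-region dynamics
                   [0.0, 0.4, 0.0],
                   [0.1, 0.0, 0.6]])
y = np.zeros((T, R))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(R)

X, Y = y[:-1], y[1:]                  # lagged design matrix and targets
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.max(np.abs(A_hat - A_true)))  # small estimation error
```

In the sparse variant, each column regression would be replaced by a lasso fit, so that coefficients linking unrelated regions are driven exactly to zero rather than merely estimated as small.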
1605.05904 | 2400405110 | Object detection often suffers from a plenty of bootless proposals, selecting high quality proposals remains a great challenge. In this paper, we propose a semantic, class-specific approach to re-rank object proposals, which can consistently improve the recall performance even with less proposals. We first extract features for each proposal including semantic segmentation, stereo information, contextual information, CNN-based objectness and low-level cue, and then score them using class-specific weights learnt by Structured SVM. The advantages of the proposed model are twofold: 1) it can be easily merged to existing generators with few computational costs, and 2) it can achieve high recall rate under strict criteria even using less proposals. Experimental evaluation on the KITTI benchmark demonstrates that our approach significantly improves existing popular generators on recall performance. Moreover, in the experiment conducted for object detection, even with 1,500 proposals, our approach can still have higher average precision (AP) than baselines with 5,000 proposals. | Window scoring based methods attempt to score the objectness of each candidate proposal according to how likely it is to contain an object of interest. This category of methods first sample a set of candidate bounding boxes across scales and locations in an image, and measure the objectness scores based on a scoring model and return top scoring candidates as proposals. Objectness @cite_24 is one of the earliest proposal methods. This method samples a set of proposals from salient locations in an image, and then measures objectness of each proposal according to different low-level cues, such as saliency, colour, and edges. BING @cite_1 proposes a real-time proposal generator by training a simple linear SVM on binary features, the most obvious shortcoming of which is that it has a low localization accuracy.
EdgeBoxes @cite_3 uses contour information to score candidate windows without any parameter learning. In addition, it proposes a refinement process to improve localization. These methods are generally efficient, but suffer from poor localization quality. | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_3"
],
"mid": [
"2066624635",
"2010181071",
"7746136"
],
"abstract": [
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2 object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5 DR.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy."
]
} |
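The recall numbers quoted for these proposal generators (recall at the loose IoU threshold of 0.5 versus the stricter 0.7) are computed essentially as in this small sketch. The boxes below are made up; boxes are (x1, y1, x2, y2):

```python
# Proposal-recall metric: a ground-truth box is "recalled" if some proposal
# overlaps it with intersection-over-union (IoU) above a threshold.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def recall(gt_boxes, proposals, thresh):
    hit = [any(iou(g, p) >= thresh for p in proposals) for g in gt_boxes]
    return sum(hit) / len(gt_boxes)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(1, 1, 11, 11), (50, 50, 60, 60)]
print(recall(gt, props, 0.5))   # 0.5 -- only the first box is recalled
```

Raising the threshold from 0.5 to 0.7 is what exposes the poor localization of methods like BING: many of their proposals clear the loose bar but not the strict one.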
1605.05887 | 2407896220 | We propose in this paper a policies similarity approach which is performed in three steps. The first step is concerned with the formalization of a XACML policy as a term in a boolean algebra while taking into account the policy and rule combining algorithms. This formalization is based on Security Policy Language (SePL) which was proposed in a previous work. In the second step, the SePL term is transformed into a term in a boolean ring. In the third step, the two policy terms, which are derived from the previous step, are the input to a rewriting system to conclude which kind of relation exists between these security policies such as equivalence, restriction, inclusion, and divergence. We provide a case study of our approach based on real XACML policies and also an empirical evaluation of its performance. | This method of calculating similarity of policies is limited to policies having the same attribute. Similarly, it is not valid for a set of security policies (PolicySet) and handles only two combining algorithms, "Permit-overrides" and "Deny-overrides", out of the entire set of rule combining algorithms described in XACML. In @cite_2 , the authors propose an algorithm to verify the refinement of business confidentiality policies. The concept of policy refinement is close to the similarity of the policies in a certain sense, because this method checks whether one policy is a subset of the other. However, their study is based on EPAL rather than XACML. It is an important difference, because the sole rule combining algorithm that is considered in their work is the First-one-applicable. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2036265926"
],
"abstract": [
"It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is a tautology. Here “reduced” means, roughly speaking, that the first problem can be solved deterministically in polynomial time provided an oracle is available for solving the second. From this notion of reducible, polynomial degrees of difficulty are defined, and it is shown that the problem of determining tautologyhood has the same polynomial degree as the problem of determining whether the first of two given graphs is isomorphic to a subgraph of the second. Other examples are discussed. A method of measuring the complexity of proof procedures for the predicate calculus is introduced and discussed."
]
} |
1605.05887 | 2407896220 | We propose in this paper a policies similarity approach which is performed in three steps. The first step is concerned with the formalization of a XACML policy as a term in a boolean algebra while taking into account the policy and rule combining algorithms. This formalization is based on Security Policy Language (SePL) which was proposed in a previous work. In the second step, the SePL term is transformed into a term in a boolean ring. In the third step, the two policy terms, which are derived from the previous step, are the input to a rewriting system to conclude which kind of relation exists between these security policies such as equivalence, restriction, inclusion, and divergence. We provide a case study of our approach based on real XACML policies and also an empirical evaluation of its performance. | Lately, @cite_4 devised a new set-based language called SBA-XACML to capture the complex structures in XACML and endowed it with a semantics to detect access flaws, conflicts and redundancies between policies. Their work covers the well-known rules combining algorithms (e.g. Permit overrides and First Applicable). However, the semantic space lacks a distance to assess the similarity between rules. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1512556106"
],
"abstract": [
"Effective theorem provers are essential for automatic verification and generation of programs. The conventional resolution strategies, albeit complete, are inefficient. On the other hand, special purpose methods, such as term rewriting systems for solving word problems, are relatively efficient but applicable to only limited classes of problems."
]
} |
1605.05412 | 2405991429 | The explosion in the volumes of data being stored online has resulted in distributed storage systems transitioning to erasure coding based schemes. Yet, the codes being deployed in practice are fairly short. In this work, we address what we view as the main coding theoretic barrier to deploying longer codes in storage: at large lengths, failures are not independent and correlated failures are inevitable. This motivates designing codes that allow quick data recovery even after large correlated failures, and which have efficient encoding and decoding. We propose that code design for distributed storage be viewed as a two-step process. The first step is to choose a topology of the code, which incorporates knowledge about the correlated failures that need to be handled, and ensures local recovery from such failures. In the second step one specifies a code with the chosen topology by choosing coefficients from a finite field. In this step, one tries to balance reliability (which is better over larger fields) with encoding and decoding efficiency (which is better over smaller fields). This work initiates an in-depth study of this reliability efficiency tradeoff. We consider the field-size needed for achieving maximal recoverability: the strongest reliability possible with a given topology. We propose a family of topologies called grid-like topologies which unify a number of topologies considered both in theory and practice, and prove a collection of results about maximally recoverable codes in such topologies including the first super-polynomial lower bound on the field size. | Another related problem has recently been studied in @cite_17 in the context of derandomizing parallel algorithms for matching. The authors also consider the problem of assigning weights to edges of a graph, so that simple cycles carry non-zero weight. 
The key differences from our setting are: we need a single assignment while @cite_17 may have multiple assignments; we care about all simple cycles, while @cite_17 only needed non-zero weights on short cycles; we are interested in fields of characteristic @math while @cite_17 work in characteristic zero. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2399980850"
],
"abstract": [
"We show that the bipartite perfect matching problem is in quasi- NC2. That is, it has uniform circuits of quasi-polynomial size nO(logn), and O(log2 n) depth. Previously, only an exponential upper bound was known on the size of such circuits with poly-logarithmic depth. We obtain our result by an almost complete derandomization of the famous Isolation Lemma when applied to yield an efficient randomized parallel algorithm for the bipartite perfect matching problem."
]
} |
1605.05396 | 2949999304 | Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. | Generative adversarial networks @cite_5 have also benefited from convolutional decoder networks for the generator network module. One approach used a Laplacian pyramid of adversarial generators and discriminators to synthesize images at multiple resolutions. This work generated compelling high-resolution images and could also condition on class labels for controllable generation. Another line of work used a standard convolutional decoder, but developed a highly effective and stable architecture incorporating batch normalization to achieve striking image synthesis results. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2099471712"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
1605.05396 | 2949999304 | Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. | Other tasks besides conditional generation have been considered in recent work. One system generates answers to questions about the visual content of images. This approach was extended to incorporate an explicit knowledge base @cite_4 . Another work applied sequence models to both text (in the form of books) and movies to perform a joint alignment. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2231285690"
],
"abstract": [
"We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual ques- tion answering."
]
} |
1605.05396 | 2949999304 | Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. | Contemporary work generated images from text captions, using a variational recurrent autoencoder with attention to paint the image in multiple steps, similar to DRAW @cite_9 . Impressively, the model can perform reasonable synthesis of completely novel (unlikely for a human to write) text such as "a stop sign is flying in blue skies", suggesting that it does not simply memorize. While the results are encouraging, the problem is highly challenging and the generated images are not yet realistic, i.e., mistakeable for real. Our model can in many cases generate visually-plausible @math images conditioned on text, and is also distinct in that our entire model is a GAN, rather than only using a GAN for post-processing. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1850742715"
],
"abstract": [
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."
]
} |
1605.05396 | 2949999304 | Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. | Building on ideas from these many previous works, we develop a simple and effective approach for text-based image synthesis using a character-level text encoder and class-conditional GAN. We propose a novel architecture and learning strategy that leads to compelling visual results. We focus on the case of fine-grained image datasets, for which we use the recently collected descriptions for Caltech-UCSD Birds and Oxford Flowers with 5 human-generated captions per image @cite_19 . We train and test on class-disjoint sets, so that test performance can give a strong indication of generalization ability, which we also demonstrate on MS COCO images with multiple objects and various backgrounds. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2951805548"
],
"abstract": [
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
} |
1605.05549 | 2401233188 | In this paper, we present the actual risks of stealing user PINs by using mobile sensors versus the perceived risks by users. First, we propose PINlogger.js which is a JavaScript-based side channel attack revealing user PINs on an Android mobile phone. In this attack, once the user visits a website controlled by an attacker, the JavaScript code embedded in the web page starts listening to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams, it infers the user’s PIN using an artificial neural network. Based on a test set of fifty 4-digit PINs, PINlogger.js is able to correctly identify PINs in the first attempt with a success rate of 74%, which increases to 86% and 94% in the second and third attempts, respectively. The high success rates of stealing user PINs on mobile devices via JavaScript indicate a serious threat to user security. With the technical understanding of the information leakage caused by mobile phone sensors, we then study users’ perception of the risks associated with these sensors. We design user studies to measure the general familiarity with different sensors and their functionality, and to investigate how concerned users are about their PIN being discovered by an app that has access to all these sensors. Our studies show that there is significant disparity between the actual and perceived levels of threat with regard to the compromise of the user PIN. We confirm our results by interviewing our participants using two different approaches, within-subject and between-subject, and compare the results. We discuss how this observation, along with other factors, renders many academic and industry solutions ineffective in preventing such side channel attacks. | Obtaining sensitive information about users such as PINs based on mobile sensors has been actively explored by researchers in the field @cite_23 @cite_3 . 
In particular, there is a body of research that uses mobile sensors through a malicious app running in the background to extract PINs entered on the soft keyboard of the mobile device. For example, GyroPhone, by @cite_20 , shows that gyroscope data is sufficient to identify the speaker and even parse speech to some extent. Other examples include Accessory @cite_12 and Tapprints @cite_21 by Miluzzo. They infer passwords on full alphabetical soft keyboards based on accelerometer measurements. Touchlogger @cite_32 is another example, by Cai and Chen @cite_18 , which shows the possibility of distinguishing a user's input on a mobile numpad by using accelerometer and gyroscope. The same authors demonstrate a similar attack in @cite_27 on both numerical and full keyboards. The only work which relies on in-browser access to sensors to attack a numpad is our previous work, TouchSignatures @cite_10 . All of these works, however, aim for the individual digits or characters of a keyboard, rather than the entire PIN or password. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2009613904",
"2099468260",
"2294170611",
"",
"1533974647",
"2534482106",
"2227908937",
"2074367177",
"1438441013"
],
"abstract": [
"Abstract One of the most recently exposed security threats on smartphone platforms is the potential use of motion sensors to infer user keystrokes. Exploited as side channels, few researchers have demonstrated the ability of built-in accelerometers and gyroscopes in particular, to reveal information related to user input, though the practicality of such an attack remains an open question. This paper takes further steps along the path of exploring the aspects of the new threat, addressing the question of which available sensors can perform best in the context of the inference attack. We design and implement a benchmark experiment, against which the performances of several commodity smartphone-sensors are compared, in terms of inference accuracy. All available Android motion sensors are considered through different settings provided by the OS, and we add the option of fusing several sensors input into a single dataset, to examine the amount lack of improvement in the attack accuracy. Our results indicate an outstanding performance of the gyroscope sensor, and the potential improvement obtained out of sensors data fusion. On the other hand, it seems that sensors with magnetometer component or the accelerometer alone have less benefit in the adverted attack.",
"This paper shows that the location of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. Our findings have serious implications, as we demonstrate that an attacker can launch a background process on commodity smartphones and tablets, and silently monitor the user's inputs, such as keyboard presses and icon taps. While precise tap detection is nontrivial, requiring machine learning algorithms to identify fingerprints of closely spaced keys, sensitive sensors on modern devices aid the process. We present TapPrints, a framework for inferring the location of taps on mobile device touch-screens using motion sensor data combined with machine learning analysis. By running tests on two different off-the-shelf smartphones and a tablet computer we show that identifying tap locations on the screen and inferring English letters could be done with up to 90 and 80 accuracy, respectively. By optimizing the core tap detection capability with additional information, such as contextual priors, we are able to further magnify the core threat.",
"Attacks that use side channels, such as sound and electromagnetic emanation, to infer keystrokes on physical keyboards are ineffective on smartphones without physical keyboards. We describe a new side channel, motion, on touch screen smartphones with only soft keyboards. Since typing on different locations on the screen causes different vibrations, motion data can be used to infer the keys being typed. To demonstrate this attack, we developed TouchLogger, an Android application that extracts features from device orientation data to infer keystrokes. TouchLogger correctly inferred more than 70 of the keys typed on a number-only soft keyboard on a smartphone. We hope to raise the awareness of motion as a significant side channel that may leak confidential data.",
"",
"Recent researches have shown that motion sensors may be used as a side channel to infer keystrokes on the touchscreen of smartphones. However, the practicality of this attack is unclear. For example, does this attack work on different devices, screen dimensions, keyboard layouts, or keyboard types? Does this attack depend on specific users or is it user independent? To answer these questions, we conducted a user study where 21 participants typed a total of 47,814 keystrokes on four different mobile devices in six settings. Our results show that this attack remains effective even though the accuracy is affected by user habits, device dimension, screen orientation, and keyboard layout. On a number-only keyboard, after the attacker tries 81 4-digit PINs, the probability that she has guessed the correct PIN is 65 , which improves the accuracy rate of random guessing by 81 times. Our study also indicates that inference based on the gyroscope is more accurate than that based on the accelerometer. We evaluated two classification techniques in our prototype and found that they are similarly effective.",
"In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer the sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated from the observation that keystrokes on mobile devices will lead to different hand coverage and the finger motions, which will introduce a unique interference to the multi-path signals and can be reflected by the channel state information (CSI). The adversary can exploit the strong correlation between the CSI fluctuation and the keystrokes to infer the user's number input. WindTalker presents a novel approach to collect the target's CSI data by deploying a public WiFi hotspot. Compared with the previous keystroke inference approach, WindTalker neither deploys external devices close to the target device nor compromises the target device. Instead, it utilizes the public WiFi to collect user's CSI data, which is easy-to-deploy and difficult-to-detect. In addition, it jointly analyzes the traffic and the CSI to launch the keystroke inference only for the sensitive period where password entering occurs. WindTalker can be launched without the requirement of visually seeing the smart phone user's input process, backside motion, or installing any malware on the tablet. We implemented Windtalker on several mobile phones and performed a detailed case study to evaluate the practicality of the password inference towards Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high successful rate.",
"Conforming to W3C specifications, mobile web browsers allow JavaScript code in a web page to access motion and orientation sensor data without the user's permission. The associated risks to user security and privacy are however not considered in W3C specifications. In this work, for the first time, we show how user security can be compromised using these sensor data via browser, despite that the data rate is 3–5 times slower than what is available in app. We examine multiple popular browsers on Android and iOS platforms and study their policies in granting permissions to JavaScript code with respect to access to motion and orientation sensor data. Based on our observations, we identify multiple vulnerabilities, and propose TouchSignatures which implements an attack where malicious JavaScript code on an attack tab listens to such sensor data measurements. Based on these streams, TouchSignatures is able to distinguish the user's touch actions (i.e., tap, scroll, hold, and zoom) and her PINs, allowing a remote website to learn the client-side user activities. We demonstrate the practicality of this attack by collecting data from real users and reporting high success rates using our proof-of-concept implementations. We also present a set of potential solutions to address the vulnerabilities. The W3C community and major mobile browser vendors including Mozilla, Google, Apple and Opera have acknowledged our work and are implementing some of our proposed countermeasures.",
"We show that accelerometer readings are a powerful side channel that can be used to extract entire sequences of entered text on a smart-phone touchscreen keyboard. This possibility is a concern for two main reasons. First, unauthorized access to one's keystrokes is a serious invasion of privacy as consumers increasingly use smartphones for sensitive transactions. Second, unlike many other sensors found on smartphones, the accelerometer does not require special privileges to access on current smartphone OSes. We show that accelerometer measurements can be used to extract 6-character passwords in as few as 4.5 trials (median).",
"We show that the MEMS gyroscopes found on modern smart phones are sufficiently sensitive to measure acoustic signals in the vicinity of the phone. The resulting signals contain only very low-frequency information (<200Hz). Nevertheless we show, using signal processing and machine learning, that this information is sufficient to identify speaker information and even parse speech. Since iOS and Android require no special permissions to access the gyro, our results show that apps and active web content that cannot access the microphone can nevertheless eavesdrop on speech in the vicinity of the phone."
]
} |