| aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) |
|---|---|---|---|---|
1310.7297
|
2950767189
|
Recent advances in 3D modeling provide us with real 3D datasets to answer queries, such as "What is the best position for a new billboard?" and "Which hotel room has the best view?", in the presence of obstacles. These applications require measuring and differentiating the visibility of an object (target) from different viewpoints in a dataspace; e.g., a billboard may be seen from two viewpoints but is readable only from the viewpoint closer to the target. In this paper, we formulate the above problem of quantifying the visibility of (from) a target object from (of) the surrounding area with a visibility color map (VCM). A VCM is essentially a surface color map of the space, where each viewpoint of the space is assigned a color value that denotes the visibility measure of the target from that viewpoint. Measuring the visibility of a target even from a single viewpoint is an expensive operation, as we need to consider factors such as distance, angle, and obstacles between the viewpoint and the target. Hence, a straightforward approach to constructing the VCM, which requires visibility computation for every viewpoint in the surrounding space of the target, is prohibitively expensive in terms of both I/Os and computation, especially for a real dataset comprising thousands of obstacles. We propose an efficient approach to compute the VCM based on a key property of human vision that eliminates the need to compute the visibility for a large number of viewpoints of the space. To further reduce the computational overhead, we propose two approximations, namely the minimum bounding rectangle and tangential approaches, with guaranteed error bounds. Our extensive experiments demonstrate the effectiveness and efficiency of our solutions for constructing the VCM for real 2D and 3D datasets.
|
Given a collection of surfaces representing the boundaries of obstacles, the works in @cite_0 @cite_20 consider the visibility problem as determining the regions of space, or the surfaces of obstacles, that are visible to an observer. They model visibility as a binary notion and find the light and dark regions of a space for a point light source. Disregarding reflection, diffraction, and interference of light, they assume light rays are diminished upon contact with the surface of an obstacle. (An illustrative binary-visibility sketch follows this entry.)
|
{
"cite_N": [
"@cite_0",
"@cite_20"
],
"mid": [
"2073815206",
"1970123385"
],
"abstract": [
"We investigate the problem of determining visible regions given a set of (moving) obstacles and a (moving) vantage point. Our approach to this problem is through an implicit framework, where the obstacles are represented by a level set function. The visibility problem is formally formulated as a boundary value problem (BVP) of a first order partial differential equation. It is based on the continuation of values along the given ray field. We propose a one-pass, multi-level algorithm for the construction of the solution on a grid. Furthermore, we study the dynamics of shadow boundaries on the surfaces of the obstacles when the vantage point moves along a given trajectory. In all of these situations, topological changes such as merging and breaking occur in the regions of interest. These are automatically handled by the level set framework proposed here. Finally, we obtain additional useful information through simple operations in the level set framework.",
"Constructing the visible and invisible regions of an observer due to the presence of obstacles in the environment has played a central role in many applications. It can also be a first step. In this paper, we adopt a visibility algorithm that can produce a variety of general information to handle the optimization of visibility information. Through the use of level set tools, gradient flow, finite differencing, and solvers for ordinary differential equations, we introduce a set of distinct algorithms for several model problems involving the optimization of visibility information."
]
}
|
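The visibility measure described in the entry above combines occlusion, distance, and viewing angle. Below is a minimal, hypothetical 2D sketch of such a measure, not the paper's algorithm: obstacles are line segments, a viewpoint sees the target only if the connecting sight line crosses no obstacle, and an illustrative score decays with distance and with the angle to the target's facing direction. The function names, the scoring formula, and the `max_dist` cutoff are assumptions.

```python
import math

def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1-p2 properly intersects segment q1-q2."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visibility_score(viewpoint, target, target_normal, obstacles, max_dist=100.0):
    """Toy visibility measure: 0 if occluded, otherwise decays with
    distance and with the angle between the sight line and the target's facing direction."""
    for (q1, q2) in obstacles:                      # occlusion test against every obstacle segment
        if segments_intersect(viewpoint, target, q1, q2):
            return 0.0
    dx, dy = target[0] - viewpoint[0], target[1] - viewpoint[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_dist:
        return 0.0
    # cosine of the angle between the viewing direction and the target's facing direction
    cos_angle = max(0.0, (-dx * target_normal[0] - dy * target_normal[1]) / dist)
    return cos_angle * (1.0 - dist / max_dist)

# Example: a single wall partially blocking the target
obstacles = [((2.0, -1.0), (2.0, 1.0))]
print(visibility_score((0.0, 0.0), (4.0, 0.0), (-1.0, 0.0), obstacles))  # occluded -> 0.0
print(visibility_score((0.0, 3.0), (4.0, 0.0), (-1.0, 0.0), obstacles))  # visible  -> > 0
```

Sweeping such a score over a grid of viewpoints would give a crude, brute-force analogue of the VCM, which is exactly the exhaustive computation the paper describes as prohibitively expensive and sets out to avoid.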
1310.6376
|
2158829560
|
In biometrics, facial uniqueness is commonly inferred from impostor similarity scores. In this paper, we show that such uniqueness measures are highly unstable in the presence of image quality variations such as pose, noise, and blur. We also experimentally demonstrate the instability of the recently introduced impostor-based uniqueness measure of [Klare and Jain 2013] when subjected to poor-quality facial images.
|
Impostor score distributions have been widely used to identify subjects that exhibit a high level of similarity to other subjects in a population (i.e., "lambs"). The authors of @cite_15 investigated the existence of "lambs" in speech data by analyzing the relative difference between the maximum impostor score and the genuine score of a subject; they expected the "lambs" to have a very high maximum impostor score. A similar strategy was applied by @cite_14 to locate non-unique faces in a facial image dataset. The authors of @cite_13 tag a subject as a "lamb" if its mean impostor score lies above a certain threshold. Based on this knowledge of a subject's location in the "Doddington zoo" @cite_15 , they propose an adaptive fusion scheme for a multi-modal biometric system. Recently, @cite_12 proposed an Impostor-based Uniqueness Measure (IUM), which is based on the location of the mean impostor score relative to the maximum and minimum of the impostor score distribution. Using both genuine and impostor scores, @cite_11 investigated the existence of the biometric menagerie in a broad range of biometric modalities such as 2D and 3D face, fingerprint, iris, and speech. (A small sketch of one plausible IUM-style computation follows this entry.)
|
{
"cite_N": [
"@cite_14",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2125860240",
"1533303231",
"2072313242",
"2136808691",
"2129312524"
],
"abstract": [
"The issue of recognizability of subjects in biometric identification is of particular interest to the designers of these systems. We have applied the concept of Doddingtons biometric menagerie to the area of facial recognition. We performed a series of tests for the presence of goats, lambs, and wolves on FRGC 2.0 color image data. The data for the subjects that appeared at the extreme end of these tests was then visually examined. Even a cursory comparison of images showed that for this set of data, some images fell into the defined menagerie categories. Our tests show the statistical existence of these animal classifications within the constraints of this set of FRGC 2.0 data using the baseline matching algorithm. Ultimately, these tests were limited by the image data set and matching algorithm used. For further confirmation of the existence of the menagerie, the analysis must be expanded to include different image sets and matching algorithms..",
"Abstract : Performance variability in speech and speaker recognition systems can be attributed to many factors. One major factor, which is often acknowledged but seldom analyzed, is inherent differences in the recognizability of different speakers. In speaker recognition systems such differences are characterized by the use of animal names for different types of speakers, including sheep, goats, lambs and wolves, depending on their behavior with respect to automatic recognition systems. In this paper we propose statistical tests for the existence of these animals and apply these tests to hunt for such animals using results from the 1998 NIST speaker recognition evaluation.",
"Recent research in biometrics has suggested the existence of the “Biometric Menagerie” in which weak users contribute disproportionately to the error rate (FAR and FRR) of a biometric system. The aim of this work is to utilize this observation to design a multibiometric system where information is consolidated on a user-specific basis. To facilitate this, the users in a database are characterized into multiple categories and only users belonging to weak categories are required to provide additional biometric information. The contribution of this work lies in (a) the design of a selective fusion scheme where fusion is invoked only for a subset of users, and (b) evaluating the performance of such a scheme on two public datasets. Experiments on the multi-unit CASIA V3 iris database and multi-unit WVU fingerprint database indicate that selective fusion, as defined in this work, improves overall matching accuracy while potentially reducing overall computational time. This has positive implications in a large-scale system where the throughput can be substantially increased without compromising the verification accuracy of the system.",
"We present a framework, called uniqueness-based nonmatch estimates (UNE), which demonstrates the ability to improve face recognition performance of any face matcher. The first aspect of the framework is a novel metric for measuring the uniqueness of a given individual, called the impostor-based uniqueness measure (IUM). The UNE the maps face match scores from any any face matcher into non-match probability estimates that are conditionally dependent on the probe image's IUM. Using this framework we demonstrate: (i) an improved generalization of matching thresholds (and, subsequently, improved matching accuracy), (ii) a score normalization technique that improves the interoperability for users of different face matchers, and (iii) the predictive ability of IUM towards face recognition accuracy. Studies are conducted on an operational dataset with 16,000 subjects using three different face matchers (two commercial, one proprietary) to demonstrate the effectiveness of the proposed framework.",
"It is commonly accepted that users of a biometric system may have differing degrees of accuracy within the system. Some people may have trouble authenticating, while others may be particularly vulnerable to impersonation. Goats, wolves, and lambs are labels commonly applied to these problem users. These user types are defined in terms of verification performance when users are matched against themselves (goats) or when matched against others (lambs and wolves). The relationship between a user's genuine and impostor match results suggests four new user groups: worms, doves, chameleons, and phantoms. We establish formal definitions for these animals and a statistical test for their existence. A thorough investigation is conducted using a broad range of biometric modalities, including 2D and 3D faces, fingerprints, iris, speech, and keystroke dynamics. Patterns that emerge from the results expose novel, important, and encouraging insights into the nature of biometric match results. A new framework for the evaluation of biometric systems based on the biometric menagerie, as opposed to collective statistics, is proposed."
]
}
|
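The entry above describes the Impostor-based Uniqueness Measure (IUM) only qualitatively, as the location of the mean impostor score relative to the maximum and minimum of the impostor score distribution. The sketch below shows one plausible normalization consistent with that description; the exact definition is the one in @cite_12, and the function name, the formula, and the example scores are illustrative assumptions.

```python
import numpy as np

def impostor_uniqueness_measure(impostor_scores):
    """Illustrative IUM-style statistic: where the mean impostor score sits
    between the minimum and maximum of the impostor score distribution.
    A higher value (mean near the minimum) suggests a more distinctive subject;
    a lower value (mean near the maximum) suggests a 'lamb'-like subject."""
    scores = np.asarray(impostor_scores, dtype=float)
    lo, hi, mean = scores.min(), scores.max(), scores.mean()
    if hi == lo:                      # degenerate distribution
        return 0.0
    return (hi - mean) / (hi - lo)

# Hypothetical impostor similarity scores in [0, 1]
distinct_subject = [0.10, 0.15, 0.12, 0.20, 0.80]   # one stray high score, low mean
lamb_subject     = [0.75, 0.78, 0.80, 0.79, 0.60]   # consistently high impostor similarity
print(impostor_uniqueness_measure(distinct_subject))  # higher value
print(impostor_uniqueness_measure(lamb_subject))      # lower value
```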
1310.6460
|
2120465767
|
We consider the temporal homogenization of linear ODEs of the form $\dot{x} = Ax + \epsilon P(t)x + f(t)$, where $P(t)$ is periodic and $\epsilon$ is small. Using a 2-scale expansion approach, we obtain the long-time approximation $x(t) \approx \exp(At)\big(\bar{x}(t) + \int_0^t \exp(-A\tau) f(\tau)\, d\tau\big)$, where $\bar{x}$ solves the cell problem $\dot{\bar{x}} = B\bar{x} + F(t)$ with an effective matrix $B$ and an explicitly known $F(t)$. We provide necessary and sufficient conditions for the accuracy of the approximation (over an $O(\epsilon^{-1})$ time-scale), and show how $B$ can be computed (at a cost independent of $\epsilon$). As a direct application, we investigate the possibility of using RLC circuits to harvest the energy contained in small-scale oscillations of ambient electromagnetic fields (such as Schumann resonances). Although an RLC circuit parametrically coupled to the field may achieve such energy extraction via parametric resonance, its resistance $R$ needs to be smaller than a threshold $R^*$ proportional to the fluctuations of the field, thereby limiting practical applications. We show that if $n$ RLC circuits are appropriately coupled via mutual capacitances or inductances, then energy extraction can be achieved when the resistance of each circuit is smaller than $nR^*$. Hence, if the resistance of each circuit has a non-zero fixed value, energy extraction can be made possible through the coupling of a sufficiently large number $n$ of circuits ($n \approx 1000$ for the first mode of Schumann resonances and contemporary values of capacitances, inductances and resistances). The theory is also applied to the control of the oscillation amplitude of a (damped) oscillator.
|
The Magnus expansion @cite_66 allows for a representation of the solution of (note that @math has to be 0) as an infinite series of integrals of increasingly many nested matrix commutators. For practical applications (see @cite_30 for a review), the infinite series has to be truncated to a finite number of terms. In many cases, convergence after truncation is not guaranteed or is slow (e.g., @cite_31 ), and one often faces such a problem when studying the @math long-time behavior of our system of interest. (The standard leading terms of the Magnus series are recalled after this entry for orientation.)
|
{
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_66"
],
"mid": [
"2020615058",
"2062864779",
"2039763880"
],
"abstract": [
"Abstract Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem, shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory . Every Magnus approximant corresponds in Perturbation Theory to a partial re-summation of infinite terms with the important additional property of preserving, at any order, certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related non-perturbative expansions. Second, to provide a bridge with its implementation as generator of especial purpose numerical integration methods , a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from Magnus expansion, in comparison with those from both perturbative schemes and standard numerical integrators. We buttress this issue with a revision of the wide range of physical applications found by Magnus expansion in the literature.",
"Approximate solutions of matrix linear differential equations by matrix exponentials are considered. In particular, the convergence issue of Magnus and Fer expansions is treated. Upper bounds for the convergence radius in terms of the norm of the defining matrix of the system are obtained. The very few previously published bounds are improved. Bounds to the error of approximate solutions are also reported. All results are based just on algebraic manipulations of the recursive relation of the expansion generators.",
""
]
}
|
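For orientation, the standard leading terms of the Magnus series mentioned in the entry above are recalled below (textbook material, not taken from the cited abstracts): the solution of $\dot{Y}(t) = A(t)\,Y(t)$ with $Y(0) = I$ is written as $Y(t) = \exp(\Omega(t))$, and each successive term of $\Omega = \sum_k \Omega_k$ involves one more nested commutator, which is why the series must be truncated in practice.

```latex
% Leading terms of the Magnus expansion for \dot{Y}(t) = A(t) Y(t), Y(0) = I,
% with Y(t) = exp(\Omega(t)) and \Omega(t) = \sum_{k \ge 1} \Omega_k(t).
\begin{align*}
\Omega_1(t) &= \int_0^t A(t_1)\,\mathrm{d}t_1, \\
\Omega_2(t) &= \tfrac{1}{2} \int_0^t \!\!\int_0^{t_1} \bigl[\, A(t_1),\, A(t_2) \,\bigr]\,\mathrm{d}t_2\,\mathrm{d}t_1 .
\end{align*}
% Higher-order terms involve integrals of increasingly many nested commutators.
```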
1310.6012
|
2951101501
|
Animal grouping behaviors have been widely studied due to their implications for understanding social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. An important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior. In the past decade, researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator-prey models. The selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator, thus causing an endless cycle of movement toward the center of the group. Using an evolutionary model of a predator-prey system, we show that how predators attack is critical to the evolution of the selfish herd. Following this discovery, we show that density-dependent predation provides an abstraction of Hamilton's original formulation of "domains of danger." Finally, we verify that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital evolutionary model, refines the assumptions of the selfish herd hypothesis, and generalizes the domain of danger concept to density-dependent predation.
|
More broadly, in the past decade researchers have focused on the application of locally-interacting swarming agents to optimization problems, called Particle Swarm Optimization (PSO) @cite_26 . PSO applications range from feature selection for classifiers @cite_40 , to video processing @cite_12 , to open vehicle routing @cite_20 . A related technique within PSO seeks to combine PSO with coevolving "predator" and "prey" solutions to avoid local minima @cite_50 . Researchers have even sought to harness the collective problem-solving power of swarming agents to design robust autonomous robotic swarms @cite_19 . Thus, elaborations on the foundations of animal grouping behavior have the potential to improve our ability to solve engineering problems. (A minimal sketch of the canonical PSO update follows this entry.)
|
{
"cite_N": [
"@cite_26",
"@cite_19",
"@cite_40",
"@cite_50",
"@cite_12",
"@cite_20"
],
"mid": [
"1994445384",
"2149469814",
"2003723996",
"1586193897",
"2132721978",
"2151257341"
],
"abstract": [
"Particle swarm optimisation (PSO) has been enormously successful. Within little more than a decade hundreds of papers have reported successful applications of PSO. In fact, there are so many of them, that it is difficult for PSO practitioners and researchers to have a clear up-to-date vision of what has been done in the area of PSO applications. This brief paper attempts to fill this gap, by categorising a large number of publications dealing with PSO applications stored in the IEEE Xplore database at the time of writing.",
"Swarm robotics is a novel approach to the coordination of large numbers of relatively simple robots which takes its inspiration from social insects. This paper proposes a definition to this newly emerging approach by 1) describing the desirable properties of swarm robotic systems, as observed in the system-level functioning of social insects, 2) proposing a definition for the term swarm robotics, and putting forward a set of criteria that can be used to distinguish swarm robotics research from other multi-robot studies, 3) providing a review of some studies which can act as sources of inspiration, and a list of promising domains for the utilization of swarm robotic systems.",
"Feature selection (FS) is an important data preprocessing technique, which has two goals of minimising the classification error and minimising the number of features selected. Based on particle swarm optimisation (PSO), this paper proposes two multi-objective algorithms for selecting the Pareto front of non-dominated solutions (feature subsets) for classification. The first algorithm introduces the idea of non-dominated sorting based multi-objective genetic algorithm II into PSO for FS. In the second algorithm, multi-objective PSO uses the ideas of crowding, mutation and dominance to search for the Pareto front solutions. The two algorithms are compared with two single objective FS methods and a conventional FS method on nine datasets. Experimental results show that both proposed algorithms can automatically evolve a smaller number of features and achieve better classification performance than using all features and feature subsets obtained from the two single objective methods and the conventional method. Both the continuous and the binary versions of PSO are investigated in the two proposed algorithms and the results show that continuous version generally achieves better performance than the binary version. The second new algorithm outperforms the first algorithm in both continuous and binary versions.",
"In this paper we present and discuss the results of experimentally comparing the performance of several variants of the standard swarm particle optimiser and a new approach to swarm based optimisation. The new algorithm, which we call predator prey optimiser, combines the ideas of particle swarm optimisation with a predator prey inspired strategy, which is used to maintain diversity in the swarm and preventing premature convergence to local suboptima. This algorithm and the most common variants of the particle swarm optimisers are tested in a set of multimodal functions commonly used as benchmark optimisation problems in evolutionary computation.",
"In dynamic optimization problems, the optima location and fitness value change over time. Techniques in literature for dynamic optimization involve tracking one or more peaks moving in a sequential manner through the parameter space. However, many practical applications in, e.g., video and image processing involve optimizing a stream of recurrent problems, subject to noise. In such cases, rather than tracking one or more moving peaks, the focus is on managing a memory of solutions along with information allowing to associate these solutions with their respective problem instances. In this paper, Gaussian Mixture Modeling (GMM) of Dynamic Particle Swarm Optimization (DPSO) solutions is proposed for fast optimization of streams of recurrent problems. In order to avoid costly re-optimizations over time, a compact density representation of previously-found DPSO solutions is created through mixture modeling in the optimization space, and stored in memory. For proof of concept simulation, the proposed hybrid GMM-DPSO technique is employed to optimize embedding parameters of a bi-tonal watermarking system on a heterogeneous database of document images. Results indicate that the computational burden of this watermarking problem is reduced by up to 90.4 with negligible impact on accuracy.",
"Honey Bees Mating Optimization algorithm is a relatively new nature inspired algorithm. In this paper, this nature inspired algorithm is used in a hybrid scheme with other metaheuristic algorithms for successfully solving the Open Vehicle Routing Problem. More precisely, the proposed algorithm for the solution of the Open Vehicle Routing Problem, the Honey Bees Mating Optimization (HBMOOVRP), combines a Honey Bees Mating Optimization (HBMO) algorithm and the Expanding Neighborhood Search (ENS) algorithm. Two set of benchmark instances is used in order to test the proposed algorithm. The results obtained for both sets are very satisfactory. More specifically, in the fourteen instances proposed by Christofides, the average quality is 0.35 when a hierarchical objective function is used, where, first, the number of vehicles is minimized and, afterwards, the total travel distance is minimized and the average quality is 0.42 when only the travel distance is minimized, while for the eight instances proposed by when a hierarchical objective function is used the average quality is 0.21 ."
]
}
|
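The entry above surveys applications of Particle Swarm Optimization (PSO); the sketch below gives the canonical global-best PSO update to make the "locally-interacting swarming agents" idea concrete. The inertia and acceleration coefficients, the search bounds, and the test function are illustrative choices, not taken from the cited works.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (global-best) particle swarm optimization sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()             # best position found so far

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + attraction to personal best + attraction to global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function
best_x, best_val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_val)
```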
1310.5828
|
2953052357
|
We consider the problem of coordinating a collection of robots at an intersection area taking into account dynamical constraints due to actuator limitations. We adopt the coordination space approach, which is standard in multiple robot motion planning. Assuming the priorities between robots are assigned in advance and the existence of a collision-free trajectory respecting those priorities, we propose a provably safe trajectory planner satisfying kinodynamic constraints. The algorithm is shown to run in real time and to return safe (collision-free) trajectories. Simulation results on synthetic data illustrate the benefits of the approach.
|
As first introduced in @cite_3 @cite_1 , the path-velocity decomposition makes it possible to introduce an abstract space: the coordination space. It is a standard approach to robot motion planning @cite_0 @cite_6 , and the motion planning problem in the real space boils down to finding an optimal trajectory in the coordination space that is collision-free with respect to an obstacle region. The coordination space is a @math -dimensional space (where @math denotes the number of robots in the intersection), and the obstacle region has a cylindrical structure @cite_15 . In @cite_8 , we have revisited the notion of priorities to propose a novel framework for automated intersection management based on priority assignment. Priorities are a very intuitive notion: the priority graph indicates the relative order of the robots. Our framework makes it possible to decompose the motion planning problem in the coordination space into a combinatorial problem, priority assignment, and a continuous problem, finding an efficient trajectory with assigned priorities. (A small sketch of a pairwise obstacle-region test in the coordination space follows this entry.)
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_15"
],
"mid": [
"134193224",
"2169742265",
"",
"1926863402",
"101508493",
""
],
"abstract": [
"We consider the problem of cooperative intersection management. It arises in automated transportation systems for people or goods but also in multi-robots environment. Therefore many solutions have been proposed to avoid collisions. The main problem is to determine collision-free but also deadlock-free and optimal algorithms. Even with a simple definition of optimality, finding a global optimum is a problem of high complexity, especially for open systems involving a large and varying number of vehicles. This paper advocates the use of a mathematical framework based on a decomposition of the problem into a continuous optimization part and a scheduling problem. The paper emphasizes connections between the usual notion of vehicle priority and an abstract formulation of the scheduling problem in the coordination space. A constructive locally optimal algorithm is proposed. More generally, this work opens up for new computationally efficient cooperative motion planning algorithms.",
"This work makes two contributions to geometric motion planning for multiple robots: i) motion plans can be determined that simultaneously optimize an independent performance criterion for each robot; ii) a general spectrum is defined between decoupled and centralized planning. By considering independent performance criteria, we introduce a form of optimality that is consistent with concepts from multi-objective optimization and game theory research. Previous multiple-robot motion planning approaches that consider optimality combine individual criteria into a single criterion. As a result, these methods can fail to find many potentially useful motion plans. We present implemented, multi-robot motion planning algorithms that are derived from the principle of optimality, for three problem classes along the spectrum between centralized and decoupled planning: i) coordination along fixed, independent paths; ii) coordination along independent roadmaps; iii) general, unconstrained motion planning. Several computed examples are presented for all three problem classes that illustrate the concepts and algorithms.",
"",
"This paper presents a geometric based approach for multiple mobile robot motion coordination. All the robot paths being computed independently, we address the problem of coordinating the motion of the robots along their own path in such a way they do not collide each other. The proposed algorithm is based on a bounding box representation of the obstacles in the so-called coordination diagram. The algorithm is resolution-complete. Its efficiency is illustrated by examples involving more than 100 robots.",
"1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References.",
""
]
}
|
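The entry above introduces the coordination space: each axis is the curvilinear position of one robot along its fixed path, and the obstacle region collects the position combinations at which two robots would collide, which is what gives it a cylindrical (pairwise) structure. The small sketch below illustrates that idea for two robots; the paths, discretization, and safety radius are assumptions, and the actual planner of course does far more than this membership test.

```python
import numpy as np

def in_obstacle_region(path_a, path_b, s_a, s_b, safety_radius=1.0):
    """True if the configuration (s_a, s_b) of the 2D coordination space lies in
    the obstacle region, i.e. the two robots would be closer than safety_radius.
    path_a, path_b: arrays of shape (n, 2) with sampled positions along each path;
    s_a, s_b: indices (curvilinear abscissae) of each robot along its own path."""
    p, q = path_a[s_a], path_b[s_b]
    return float(np.hypot(*(p - q))) < safety_radius

# Two straight paths crossing at the origin (a toy "intersection")
t = np.linspace(-5.0, 5.0, 101)
path_a = np.stack([t, np.zeros_like(t)], axis=1)       # robot A travels along the x-axis
path_b = np.stack([np.zeros_like(t), t], axis=1)       # robot B travels along the y-axis

# The obstacle region is the set of index pairs where both robots are near the crossing.
obstacle = np.array([[in_obstacle_region(path_a, path_b, i, j)
                      for j in range(len(t))] for i in range(len(t))])
print(obstacle.sum(), "colliding configurations out of", obstacle.size)
```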
1310.5828
|
2953052357
|
We consider the problem of coordinating a collection of robots at an intersection area taking into account dynamical constraints due to actuator limitations. We adopt the coordination space approach, which is standard in multiple robot motion planning. Assuming the priorities between robots are assigned in advance and the existence of a collision-free trajectory respecting those priorities, we propose a provably safe trajectory planner satisfying kinodynamic constraints. The algorithm is shown to run in real time and to return safe (collision-free) trajectories. Simulation results on synthetic data illustrate the benefits of the approach.
|
The ambition of this framework is to enable more robustness and distribution in future automated intersection management systems. Indeed, existing intersection management systems such as those proposed in @cite_5 @cite_13 @cite_12 plan the complete trajectories of the robots through the intersection, and ensuring safety requires the robots to follow the planned trajectories precisely. By contrast, if only priorities are planned, the priority graph can be conserved even if some unpredictable event requires a robot to slow down for some time.
|
{
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"89499552",
"2099860849",
""
],
"abstract": [
"Recently, several artificial intelligence labs have suggested the use of fully driverless vehicles with the capability of sensing the surrounding environment to enhance the roadway safety. The idea of having fully automated vehicles running in streets was inapplicable for many years until recently when researchers succeeded in releasing fully automated vehicles (without human drivers). Consequently, the paper develops a heuristic optimization algorithm for driverless vehicles at unsignalized intersections using a multi-agent system. The proposed system models the driverless vehicles as autonomous agents controlled by the intersection controller (manager agent). The input information of the system consists of vehicles’ current location, speed and acceleration in addition to the surrounding environment (weather, intersection characteristics, etc.). The intersection controller processes the input information using a built-in simulator: “OSDI” (Optimization Simulator for Driverless vehicles at Intersections). The simulator objective is to optimize the movements of vehicles to reduce the total delay time for the entire intersection and prevent crashes simultaneously. Thereafter, the intersection controller uses the simulator output for controlling the speed profile of the driverless vehicles within the intersection study zone. The proposed system is compared to two different intersection control scenarios: an All-way stop control (AWSC) and an Intersection manager with built-in OSDI. For both scenarios, it is assumed that there are four driverless vehicles (one vehicle per approach) willing to cross a four-legged unsignalized intersection concurrently. The results show using Monte Carlo simulation show that the proposed system reduces the total delay by 35 seconds on average compared to traditional AWSC. This research is considered as a first step in developing an unmanned vehicle technology system.",
"Traffic congestion is one of the leading causes of lost productivity and decreased standard of living in urban settings. Recent advances in artificial intelligence suggest vehicle navigation by autonomous agents will be possible in the near future. In this paper, we propose a reservation-based system for alleviating traffic congestion, specifically at intersections, and under the assumption that the cars are controlled by agents. First, we describe a custom simulator that we have created to measure the different delays associated with conducting traffic through an intersection. Second, we specify a precise metric for evaluating the quality of traffic control at an intersection. Using this simulator and this metric, we show that our reservation-based system can perform two to three hundred times better than traffic lights. As a result, it can smoothly handle much heavier traffic conditions. We show that our system very closely approximates an overpass, which is the optimal solution for the problem with which we are dealing.",
""
]
}
|
1310.5828
|
2953052357
|
We consider the problem of coordinating a collection of robots at an intersection area taking into account dynamical constraints due to actuator limitations. We adopt the coordination space approach, which is standard in multiple robot motion planning. Assuming the priorities between robots are assigned in advance and the existence of a collision-free trajectory respecting those priorities, we propose a provably safe trajectory planner satisfying kinodynamic constraints. The algorithm is shown to run in real time and to return safe (collision-free) trajectories. Simulation results on synthetic data illustrate the benefits of the approach.
|
It is now clear that the combinatorial problem of assigning efficient priorities is inherently difficult, as noticed in @cite_7 and developed in the priority-based framework of @cite_8 . As a result, in the present paper we only consider the issue of planning "good" trajectories for already assigned priorities. When the robots can start and stop instantaneously, it is relatively easy to define an optimal trajectory for fixed priorities; this trajectory is referred to as the left-greedy trajectory @cite_7 @cite_8 . However, taking into account acceleration (and higher-derivative) constraints turns the optimization into a "highly non-trivial" problem (as suggested in the conclusion of @cite_7 ). In the present paper, we address the challenging problem of finding safe trajectories that respect this type of constraints. In @cite_10 , the problem is formulated as a mixed-integer nonlinear programming problem, and the proposed solution is suitable only for a "reasonable" and fixed number of robots. Moreover, priority assignment and trajectory planning are not decoupled. In the present paper, we focus on a low-complexity solution to the trajectory planning problem with assigned priorities that is applicable to a large and potentially varying number of robots. (A crude caricature of priority-respecting motion under the instantaneous-stop assumption follows this entry.)
|
{
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_8"
],
"mid": [
"",
"2005559889",
"134193224"
],
"abstract": [
"",
"Given a collection of robots sharing a common environment, assume that each possesses a graph (a one-dimensional complex also known as a roadmap) approximating its configuration space and, furthermore, that each robot wishes to travel to a goal while optimizing elapsed time. We consider vector-valued (or Pareto) optima for collision-free coordination on the product of these roadmaps with collision-type obstacles. Such optima are by no means unique: in fact, continua of Pareto optimal coordinations are possible. We prove a finite bound on the number of optimal coordinations in the physically relevant case where all obstacles are cylindrical (i.e., defined by pairwise collisions). The proofs rely crucially on perspectives from geometric group theory and CAT(0) geometry. In particular, the finiteness bound depends on the fact that the associated coordination space is devoid of positive curvature. We also demonstrate that the finiteness bound holds for systems with moving obstacles following known trajectories.",
"We consider the problem of cooperative intersection management. It arises in automated transportation systems for people or goods but also in multi-robots environment. Therefore many solutions have been proposed to avoid collisions. The main problem is to determine collision-free but also deadlock-free and optimal algorithms. Even with a simple definition of optimality, finding a global optimum is a problem of high complexity, especially for open systems involving a large and varying number of vehicles. This paper advocates the use of a mathematical framework based on a decomposition of the problem into a continuous optimization part and a scheduling problem. The paper emphasizes connections between the usual notion of vehicle priority and an abstract formulation of the scheduling problem in the coordination space. A constructive locally optimal algorithm is proposed. More generally, this work opens up for new computationally efficient cooperative motion planning algorithms."
]
}
|
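As a crude illustration of the instantaneous-stop setting mentioned in the entry above, the caricature below lets every robot advance at full speed along its own path but wait at the entry of a single shared conflict zone until all higher-priority robots have cleared it. This is only one loose reading of priority-respecting motion; it ignores the acceleration constraints that make the paper's actual problem hard, and all names and quantities are assumptions.

```python
def left_greedy_caricature(entry, exit_, priority_order, v_max=1.0, dt=0.1, horizon=400):
    """Crude caricature of priority-respecting motion with instantaneous stops:
    every robot moves at v_max along its own path, but waits at the entry of a
    single shared conflict zone until all higher-priority robots have cleared it."""
    n = len(priority_order)
    pos = [0.0] * n
    history = []
    for _ in range(horizon):
        for rank, i in enumerate(priority_order):
            higher_cleared = all(pos[j] >= exit_ for j in priority_order[:rank])
            step = v_max * dt
            if not higher_cleared and pos[i] < entry:
                step = min(step, entry - pos[i])   # wait at the conflict-zone entry
            pos[i] += step
        history.append(list(pos))
    return history

# Robot 2 has the highest priority, then robot 0, then robot 1.
final = left_greedy_caricature(entry=5.0, exit_=7.0, priority_order=[2, 0, 1])[-1]
print(final)
```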
1310.6079
|
1657057259
|
This paper introduces the synchrosqueezed curvelet transform as an optimal tool for 2D mode decomposition of wavefronts or banded wave-like components. The synchrosqueezed curvelet transform consists of a generalized curvelet transform with application dependent geometric scaling parameters, and a synchrosqueezing technique for a sharpened phase space representation. In the case of a superposition of banded wave-like components with well-separated wave-vectors, it is proved that the synchrosqueezed curvelet transform is capable of recognizing each component and precisely estimating local wave-vectors. A discrete analogue of the continuous transform and several clustering models for decomposition are proposed in detail. Some numerical examples with synthetic and real data are provided to demonstrate the above properties of the proposed transform.
|
There is another interesting line of work on mode decomposition: the empirical mode decomposition (EMD) initiated and refined in @cite_21 @cite_15 . Starting from the most oscillatory mode, the EMD method decomposes a signal into a collection of intrinsic mode functions (IMFs) and estimates instantaneous frequencies via the Hilbert transform. However, the dependence on local extrema limits its applications in noisy cases. To address the robustness problem, some variants were proposed in @cite_6 @cite_27 . Following the idea of EMD, there are two existing approaches to high-dimensional mode decomposition: the first is based on high-dimensional interpolation @cite_26 @cite_8 @cite_19 @cite_7 , and the second applies a 1D decomposition to each dimension and then combines the results with a proper combination strategy @cite_4 @cite_10 @cite_20 . In spite of their considerable success, the existing methods in this research line are not suitable for separating two modes with similar wave-numbers but different wave-vectors, due to the lack of anisotropic angular separation, as discussed in @cite_13 . (A minimal one-IMF sifting sketch follows this entry.)
|
{
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2042527479",
"1976836781",
"",
"2119589101",
"1485781512",
"2007221293",
"2160724632",
"2113193347",
"2120390927",
"2028497691",
"2478241477",
"2122470043"
],
"abstract": [
"This paper introduces the synchrosqueezed wave packet transform as a method for analyzing two-dimensional images. This transform is a combination of wave packet transforms of a certain geometric scaling, a reallocation technique for sharpening phase space representations, and clustering algorithms for modal decomposition. For a function that is a superposition of several wave-like components with a highly oscillatory pattern satisfying certain separation conditions, we prove that the synchrosqueezed wave packet transform identifies these components and estimates their local wavevectors. A discrete version of this transform is discussed in detail, and numerical results are given to demonstrate the properties of the proposed transform.",
"Recent developments in analysis methods on the non-linear and non-stationary data have received large attention by the image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and thanks to radial basis function for surface interpolation. The performance of the texture extraction algorithms, using BEMD method, is demonstrated in the experiment with both synthetic and natural images.",
"",
"Image empirical mode decomposition (IEMD) is an empirical mode decomposition concept used in Hilbert–Huang transform (HHT) expanded into two dimensions for the use on images. IEMD provides a tool for image processing by its special ability to locally separate superposed spatial frequencies. The tendency is that the intrinsic mode functions (IMFs) other than the first are low-frequency images. In this study we give an overview of the state-of-the-art methods to decompose an image into a number of IMFs and a residue image with a minimum number of extrema points, together with the use of the method. Ideas and open problems are presented.",
"This study introduces a new approach based on Bidimensional Empirical Mode Decomposition (BEMD) to extract texture features at multiple scales or spatial frequencies. Moreover, it can resolve the intrawave frequency modulation provided the frequency modulation. This decomposition, obtained by the bidimensional sifting process, plays an important role in the characterization of regions in textured images. The sifting process is realized using morphological operators to analyze the spatial frequencies and thanks to radial basis functions (RBF) for surface interpolation. We modified the original sifting algorithm to permit a pseudo bandpass decomposition of images by inserting scale criterion. Its effectiveness is demonstrated on synthetic and natural textures. In particular, we show that many different elements in textures can be extracted through the bidimensional empirical mode decomposition, which is fully unsupervised.",
"A new method for analysing nonlinear and non-stationary data has been developed. The key part of the method is the ‘empirical mode decomposition’ method with which any complicated data set can be decomposed into a finite and often small number of ‘intrinsic mode functions’ that admit well-behaved Hilbert transforms. This decomposition method is adaptive, and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and non-stationary processes. With the Hilbert transform, the ‘instrinic mode functions’ yield instantaneous frequencies as functions of time that give sharp identifications of imbedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert spectrum. In this method, the main conceptual innovations are the introduction of ‘intrinsic mode functions’ based on local properties of the signal, which make the instantaneous frequency meaningful; and the introduction of the instantaneous frequencies for complicated data sets, which eliminate the need for spurious harmonics to represent nonlinear and non-stationary signals. Examples from the numerical results of the classical nonlinear equation systems and data representing natural phenomena are given to demonstrate the power of this new method. Classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by the nonlinear and non-stationary effects in the energy-frequency-time distribution.",
"In this paper, we propose a variant of the Empirical Mode Decomposition method to decompose multiscale data into their intrinsic mode functions. Under the assumption that the multiscale data satisfy certain scale separation property, we show that the proposed method can extract the intrinsic mode functions accurately and uniquely.",
"Previous work on empirical mode decomposition in two dimensions typically generates a residue with many extrema points. In this paper we propose an improved method to decompose an image into a number of intrinsic mode functions and a residue image with a minimum number of extrema points. We further propose a method for the variable sampling of the two-dimensional empirical mode decomposition. Since traditional frequency concept is not applicable in this work, we introduce the concept of empiquency, shortform for empirical mode frequency, to describe the signal oscillations. The very special properties of the intrinsic mode functions are used for variable sampling in order to reduce the number of parameters to represent the image. This is done blockwise using the occurrence of extrema points of the intrinsic mode function to steer the sampling rate of the block. A method of using overlapping 7 × 7 blocks is introduced to overcome blocking artifacts and to further reduce the number of parameters required to represent the image. The results presented here shows that an image can be successfully decomposed into a number of intrinsic mode functions and a residue image with a minimum number of extrema points. The results also show that subsampling offers a way to keep the total number of samples generated by empirical mode decomposition approximately equal to the number of pixels of the original image.",
"A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach consists of sifting an ensemble of white noise-added signal (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different scale signals to collate in the proper intrinsic mode functions (IMF) dictated by the dyadic filter banks. As EEMD is a time–space analysis method, the added white noise is averaged out with sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physical meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturall...",
"Instantaneous frequency (IF) is necessary for understanding the detailed mechanisms for nonlinear and nonstationary processes. Historically, IF was computed from analytic signal (AS) through the Hilbert transform. This paper offers an overview of the difficulties involved in using AS, and two new methods to overcome the difficulties for computing IF. The first approach is to compute the quadrature (defined here as a simple 90° shift of phase angle) directly. The second approach is designated as the normalized Hilbert transform (NHT), which consists of applying the Hilbert transform to the empirically determined FM signals. Additionally, we have also introduced alternative methods to compute local frequency, the generalized zero-crossing (GZC), and the teager energy operator (TEO) methods. Through careful comparisons, we found that the NHT and direct quadrature gave the best overall performance. While the TEO method is the most localized, it is limited to data from linear processes, the GZC method is the m...",
"",
"A multi-dimensional ensemble empirical mode decomposition (MEEMD) for multi-dimensional data (such as images or solid with variable density) is proposed here. The decomposition is based on the applications of ensemble empirical mode decomposition (EEMD) to slices of data in each and every dimension involved. The final reconstruction of the corresponding intrinsic mode function (IMF) is based on a comparable minimal scale combination principle. For two-dimensional spatial data or images, f(x,y), we consider the data (or image) as a collection of one-dimensional series in both x-direction and y-direction. Each of the one-dimensional slices is decomposed through EEMD with the slice of the similar scale reconstructed in resulting two-dimensional pseudo-IMF-like components. This new two-dimensional data is further decomposed, but the data is considered as a collection of one-dimensional series in y-direction along locations in x-direction. In this way, we obtain a collection of two-dimensional components. Thes..."
]
}
|
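The entry above summarizes empirical mode decomposition (EMD), in which modes are extracted by "sifting": repeatedly subtracting the mean of upper and lower envelopes interpolated through the local extrema. The sketch below performs one such sifting loop for a single IMF candidate in 1D; the fixed number of sifting iterations and the cubic-spline envelopes are common conventions, not details taken from the cited papers.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_one_imf(signal, t=None, n_sift=10):
    """Extract a single intrinsic-mode-function candidate by sifting:
    repeatedly remove the mean of upper/lower envelopes through local extrema."""
    x = np.asarray(signal, dtype=float)
    t = np.arange(len(x)) if t is None else np.asarray(t, dtype=float)
    h = x.copy()
    for _ in range(n_sift):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:       # too few extrema to build envelopes
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0                # subtract the local mean of the envelopes
    return h                                         # IMF candidate; residue is signal - h

# Example: a fast oscillation riding on a slow one
t = np.linspace(0.0, 1.0, 1000)
signal = np.cos(2 * np.pi * 40 * t) + 0.5 * np.cos(2 * np.pi * 3 * t)
imf1 = sift_one_imf(signal, t)
residue = signal - imf1                              # should mostly contain the slow component
```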
1310.6079
|
1657057259
|
This paper introduces the synchrosqueezed curvelet transform as an optimal tool for 2D mode decomposition of wavefronts or banded wave-like components. The synchrosqueezed curvelet transform consists of a generalized curvelet transform with application dependent geometric scaling parameters, and a synchrosqueezing technique for a sharpened phase space representation. In the case of a superposition of banded wave-like components with well-separated wave-vectors, it is proved that the synchrosqueezed curvelet transform is capable of recognizing each component and precisely estimating local wave-vectors. A discrete analogue of the continuous transform and several clustering models for decomposition are proposed in detail. Some numerical examples with synthetic and real data are provided to demonstrate the above properties of the proposed transform.
|
Following the same methodology of extracting modes one by one, starting from the most oscillatory one, an optimization scheme for mode decomposition was proposed in @cite_32 @cite_14 . Inspired by recent developments in compressive sensing, the first paper @cite_32 is based on total variation, while the second one @cite_14 is based on a sparse representation in a data-driven time-frequency dictionary. The convergence of the data-driven time-frequency analysis method under a certain sparsity assumption was recently proved in @cite_28 . However, the analysis of the high-dimensional case is still under active research. (The sparsity-driven formulation is recalled after this entry.)
|
{
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_32"
],
"mid": [
"2963917297",
"2952801764",
"2110983594"
],
"abstract": [
"In a recent paper [11], Hou and Shi introduced a new adaptive data analysis method to analyze nonlinear and non-stationary data. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form a(t)cos(θ(t)) , where a∈V(θ),V(θ) consists of the functions that are less oscillatory than cos(θ(t)) and θ′⩾0. This problem was formulated as a nonlinear L^0 optimization problem and an iterative nonlinear matching pursuit method was proposed to solve this nonlinear optimization problem. In this paper, we prove the convergence of this nonlinear matching pursuit method under some scale separation assumptions on the signal. We consider both well-resolved and poorly sampled signals, as well as signals with noise. In the case without noise, we prove that our method gives exact recovery of the original signal.",
"In this paper, we introduce a new adaptive data analysis method to study trend and instantaneous frequency of nonlinear and non-stationary data. This method is inspired by the Empirical Mode Decomposition method (EMD) and the recently developed compressed (compressive) sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form @math , where @math , @math consists of the functions smoother than @math and @math . This problem can be formulated as a nonlinear @math optimization problem. In order to solve this optimization problem, we propose a nonlinear matching pursuit method by generalizing the classical matching pursuit for the @math optimization problem. One important advantage of this nonlinear matching pursuit method is it can be implemented very efficiently and is very stable to noise. Further, we provide a convergence analysis of our nonlinear matching pursuit method under certain scale separation assumptions. Extensive numerical examples will be given to demonstrate the robustness of our method and comparison will be made with the EMD EEMD method. We also apply our method to study data without scale separation, data with intra-wave frequency modulation, and data with incomplete or under-sampled data.",
"We introduce a new adaptive method for analyzing nonlinear and nonstationary data. This method is inspired by the empirical mode decomposition (EMD) method and the recently developed compressed sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form a(t )c os(θ(t)) ,w herea ≥ 0i s assumed to be smoother than cos(θ(t)) and θ is a piecewise smooth increasing function. We formulate this problem as a nonlinear L 1 optimization problem. Further, we propose an iterative algorithm to solve this nonlinear optimization problem recursively. We also introduce an adaptive filter method to decompose data with noise. Numerical examples are given to demonstrate the robustness of our method and comparison is made with the EMD method. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal, such as trend and instantaneous frequency. Our method shares many important properties of the original EMD method. Because our method is based on a solid mathematical formulation, its performance does not depend on numerical parameters such as the number of shifting or stop criterion, which seem to have a major effect on the original EMD method. Our method is also less sensitive to noise perturbation and the end effect compared with the original EMD method."
]
}
|
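For reference, the sparsity-driven formulation described in the entry above, and spelled out in the quoted abstracts of @cite_32 and @cite_14, can be written schematically as the nonlinear optimization below, where $V(\theta)$ denotes the functions less oscillatory than $\cos\theta(t)$ and $M$ is the number of intrinsic mode functions.

```latex
% Sparsest decomposition over the dictionary of intrinsic mode functions a(t) cos(theta(t))
\begin{equation*}
\min_{M,\ \{a_k,\,\theta_k\}} \; M
\quad \text{subject to} \quad
f(t) = \sum_{k=1}^{M} a_k(t)\cos\theta_k(t),
\qquad a_k \in V(\theta_k), \quad \theta_k' \ge 0 .
\end{equation*}
```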
1310.6079
|
1657057259
|
This paper introduces the synchrosqueezed curvelet transform as an optimal tool for 2D mode decomposition of wavefronts or banded wave-like components. The synchrosqueezed curvelet transform consists of a generalized curvelet transform with application dependent geometric scaling parameters, and a synchrosqueezing technique for a sharpened phase space representation. In the case of a superposition of banded wave-like components with well-separated wave-vectors, it is proved that the synchrosqueezed curvelet transform is capable of recognizing each component and precisely estimating local wave-vectors. A discrete analogue of the continuous transform and several clustering models for decomposition are proposed in detail. Some numerical examples with synthetic and real data are provided to demonstrate the above properties of the proposed transform.
|
There is another research line of adaptive time-frequency representations: the empirical transforms proposed in @cite_24 and generalized to 2D in @cite_33 . The 2D methods in @cite_33 fall into two categories. The first is based on the Fourier spectra of 1D data slices and, hence, lacks anisotropic angular separation, for the same reason as the 2D EMD methods. The second is based on the 2D pseudo-polar FFT @cite_17 @cite_31 and suffers from an inconsistency problem, i.e., the Fourier boundaries detected in different directions of the 2D Fourier domain are discontinuous. To avoid this problem, the authors compute an average spectrum, where the averaging is taken with respect to the angle. The resulting methods lack angular separation for the same reason as the synchrosqueezed wavelet transform in @cite_23 , as discussed in @cite_13 . (A minimal 1D synchrosqueezing sketch, illustrating the reassignment idea behind these transforms, follows this entry.)
|
{
"cite_N": [
"@cite_33",
"@cite_24",
"@cite_23",
"@cite_31",
"@cite_13",
"@cite_17"
],
"mid": [
"2025016692",
"2019900743",
"2952969989",
"2085744431",
"2042527479",
"1979887666"
],
"abstract": [
"A recently developed approach, called “empirical wavelet transform,” aims to build one-dimensional (1D) adaptive wavelet frames accordingly to the analyzed signal. In this paper, we present several extensions of this approach to two-dimensional (2D) signals (images). We revisit some well-known transforms (tensor wavelets, Littlewood--Paley wavelets, ridgelets, and curvelets) and show that it is possible to build their empirical counterparts. We prove that such constructions lead to different adaptive frames which show some promising properties for image analysis and processing.",
"Some recent methods, like the empirical mode decomposition (EMD), propose to decompose a signal accordingly to its contained information. Even though its adaptability seems useful for many applications, the main issue with this approach is its lack of theory. This paper presents a new approach to build adaptive wavelets. The main idea is to extract the different modes of a signal by designing an appropriate wavelet filter bank. This construction leads us to a new wavelet transform, called the empirical wavelet transform. Many experiments are presented showing the usefulness of this method compared to the classic EMD.",
"The synchrosqueezing method aims at decomposing 1D functions as superpositions of a small number of \"Intrinsic Modes\", supposed to be well separated both in time and frequency. Based on the unidimensional wavelet transform and its reconstruction properties, the synchrosqueezing transform provides a powerful representation of multicomponent signals in the time-frequency plane, together with a reconstruction of each mode. In this paper, a bidimensional version of the synchrosqueezing transform is defined, by considering a well-adapted extension of the concept of analytic signal to images: the monogenic signal. The natural bidimensional counterpart of the notion of Intrinsic Mode is then the concept of \"Intrinsic Monogenic Mode\" that we define. Thereafter, we investigate the properties of its associated Monogenic Wavelet Decomposition. This leads to a natural bivariate extension of the Synchrosqueezed Wavelet Transform, for decomposing and processing multicomponent images. Numerical tests validate the effectiveness of the method for different examples.",
"The Fourier transform of a continuous function, evaluated at frequencies expressed in polar coordinates, is an important conceptual tool for understanding physical continuum phenomena. An analogous tool, suitable for computations on discrete grids, could be very useful; however, no exact analogue exists in the discrete case. In this paper we present the notion of pseudopolar grid (pp grid) and the pseudopolar Fourier transform (ppFT), which evaluates the discrete Fourier transform at points of the pp grid. The pp grid is a type of concentric-squares grid in which the radial density of squares is twice as high as usual. The pp grid consists of equally spaced samples along rays, where different rays are equally spaced in slope rather than angle. We develop a fast algorithm for the ppFT, with the same complexity order as the Cartesian fast Fourier transform; the algorithm is stable, invertible, requires only one-dimensional operations, and uses no approximate interpolations. We prove that the ppFT is invertible and develop two algorithms for its inversion: iterative and direct, both with complexity @math , where @math is the size of the reconstructed image. The iterative algorithm applies conjugate gradients to the Gram operator of the ppFT. Since the transform is ill-conditioned, we introduce a preconditioner, which significantly accelerates the convergence. The direct inversion algorithm utilizes the special frequency domain structure of the transform in two steps. First, it resamples the pp grid to a Cartesian frequency grid and then recovers the image from the Cartesian frequency grid.",
"This paper introduces the synchrosqueezed wave packet transform as a method for analyzing two-dimensional images. This transform is a combination of wave packet transforms of a certain geometric scaling, a reallocation technique for sharpening phase space representations, and clustering algorithms for modal decomposition. For a function that is a superposition of several wave-like components with a highly oscillatory pattern satisfying certain separation conditions, we prove that the synchrosqueezed wave packet transform identifies these components and estimates their local wavevectors. A discrete version of this transform is discussed in detail, and numerical results are given to demonstrate the properties of the proposed transform.",
"Abstract In a wide range of applied problems of 2D and 3D imaging a continuous formulation of the problem places great emphasis on obtaining and manipulating the Fourier transform in Polar coordinates. However, the translation of continuum ideas into practical work with data sampled on a Cartesian grid is problematic. In this article we develop a fast high accuracy Polar FFT. For a given two-dimensional signal of size N × N , the proposed algorithm's complexity is O ( N 2 log N ) , just like in a Cartesian 2D-FFT. A special feature of our approach is that it involves only 1D equispaced FFT's and 1D interpolations. A central tool in our method is the pseudo-Polar FFT, an FFT where the evaluation frequencies lie in an oversampled set of nonangularly equispaced points. We describe the concept of pseudo-Polar domain, including fast forward and inverse transforms. For those interested primarily in Polar FFT's, the pseudo-Polar FFT plays the role of a halfway point—a nearly-Polar system from which conversion to Polar coordinates uses processes relying purely on 1D FFT's and interpolation operations. We describe the conversion process, and give an error analysis of it. We compare accuracy results obtained by a Cartesian-based unequally-sampled FFT method to ours, both algorithms using a small-support interpolation and no pre-compensating, and show marked advantage to the use of the pseudo-Polar initial grid."
]
}
|
1310.5099
|
1625692685
|
In this paper, we introduce random walks with absorbing states on simplicial complexes. Given a simplicial complex of dimension @math , a random walk with an absorbing state is defined which relates to the spectrum of the @math -dimensional Laplacian for @math and which relates to the local random walk on a graph defined by Fan Chung. We also examine an application of random walks on simplicial complexes to a semi-supervised learning problem. Specifically, we consider a label propagation algorithm on oriented edges, which applies to a generalization of the partially labelled classification problem on graphs.
|
Both @cite_0 @cite_11 have examined the relation between graph random walks and the geometry of graphs with Dirichlet boundary conditions. In section we show that under certain conditions the Dirichlet random walk in codimension 0 coincides with the notion of a random walk on a graph with Dirichlet boundary. A natural question to ask concerning random walks on simplicial complexes is: what would be the analogous process on manifolds? In general we are not aware of results on the continuum limit of these walks. However, the Dirichlet random walk in codimension zero is analogous to the concept of Brownian motion with killing as described by Lawler and Sokal in @cite_9 .
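As a toy illustration of the absorbing-boundary (Dirichlet) walk discussed above, the following Python sketch simulates a walk that moves to a uniformly random neighbour and is killed on hitting a boundary vertex; the graph, boundary set, and function name are illustrative assumptions, not the construction of the cited works.

import random

def dirichlet_walk(adj, boundary, start, max_steps=1000):
    # adj: adjacency lists; boundary: set of absorbing vertices
    v, path = start, [start]
    for _ in range(max_steps):
        if v in boundary:          # the walk is killed (absorbed) here
            return path
        v = random.choice(adj[v])
        path.append(v)
    return path

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(dirichlet_walk(adj, boundary={3}, start=0))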
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_11"
],
"mid": [
"2038098381",
"",
"2032336879"
],
"abstract": [
"The difference Laplacian on a square lattice in Rn has been stud- ied by many authors. In this paper an analogous difference operator is studied for an arbitrary graph. It is shown that many properties of the Laplacian in the continuous setting (e.g. the maximum principle, the Harnack inequality, and Cheeger's bound for the lowest eigenvalue) hold for this difference oper- ator. The difference Laplacian governs the random walk on a graph, just as the Laplace operator governs the Brownian motion. As an application of the theory of the difference Laplacian, it is shown that the random walk on a class of graphs is transient. The random walks we consider are defined as follows. Let K be a connected graph (i.e. a one dimensional simplicial complex). For a vertex x E K, let m(x) denote the number of edges emanating from x. The probability that a particle moves from x to another vertex y E K is l m(x) if x and y are connected by an edge and it is zero otherwise. As observed by Courant, Friedrichs and Lewy (CFL) for the case of a square lattice in the plane this random walk is intimately related to the difference analog of the Laplacian",
"",
"For a specified subset S of vertices in a graph G we consider local cuts that separate a subset of S. We consider the local Cheeger constant which is the minimum Cheeger ratio over all subsets of S, and we examine the relationship between the local Cheeger constant and the Dirichlet eigenvalue of the induced subgraph on S. These relationships are summarized in a local Cheeger inequality. The proofs are based on the methods of establishing isoperimetric inequalities using random walks and the spectral methods for eigenvalues with Dirichlet boundary conditions."
]
}
|
1310.5111
|
1919907269
|
In this paper, we explore complex network properties of word collocation networks (Ferret, 2002) from four different genres. Each document of a particular genre was converted into a network of words with word collocations as edges. We analyzed graphically and statistically how the global properties of these networks varied across different genres, and among different network types within the same genre. Our results indicate that the distributions of network properties are visually similar but statistically apart across different genres, and interesting variations emerge when we consider different network types within a single genre. We further investigate how the global properties change as we add more and more collocation edges to the graph of one particular genre, and observe that except for the number of vertices and the size of the largest connected component, network properties change in phases, via jumps and drops.
|
That language shows complex network structure at the word level was shown more than a decade ago by at least two independent groups of researchers @cite_20 @cite_12 . went further ahead, and designed an unsupervised keyword extraction algorithm using the small-world property of word collocation networks. extended the collocation network idea to concepts rather than words, and observed a small-world structure in the resulting network. Edges between concepts were defined as entries in an English thesaurus. compared word collocation networks of Chinese and English text, and pointed out their similarities and differences. They further constructed character co-occurrence networks in Chinese, showed their small-world structure, and used these networks in a follow-up study to accurately segregate Chinese essays from different literary periods @cite_8 .
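For concreteness, a minimal Python sketch of how a word collocation network might be built from a tokenized document, linking words that co-occur within a small sliding window; the networkx dependency, the window size, and the function name are illustrative assumptions rather than the exact setup of the cited studies.

import networkx as nx

def collocation_network(tokens, window=2):
    # connect each word to the words appearing within `window` positions after it
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            if tokens[j] != w:
                g.add_edge(w, tokens[j])
    return g

tokens = "the quick brown fox jumps over the lazy dog".split()
g = collocation_network(tokens)
print(g.number_of_nodes(), g.number_of_edges())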
|
{
"cite_N": [
"@cite_12",
"@cite_20",
"@cite_8"
],
"mid": [
"1491844874",
"",
"1984535507"
],
"abstract": [
"A document is represented by a network; the nodes represent terms, and the edges represent the co-occurrence of terms. This paper shows that the network has the characteristics of being small world, i.e., highly clustered and short path length. Based on the topology, we can extract important terms, even if they are rare, by measuring their contribution to the graph being small world.",
"",
"Co-occurrence networks of Chinese characters are constructed from collections of essays in different periods of China: the ancient Chinese language, the Chinese language in Wei, Jin, and Southern-Northern Dynasties, the recent Chinese language, and the modern Chinese language, and their statistical parameters are studied. It has been found that 99.6 networks have the scale-free feature and 95.0 networks have the smallworld effect. This study reveals some commonalities and differences among articles in different periods of China from a complex network perspective. There has been a controversial question as to whether the literatures in Wei, Jin, and Southern-Northern Dynasties should belong to the ancient Chinese language or the recent Chinese language in the linguistic study. Our work shows that the statistical parameters of networks in Wei, Jin, and Southern-Northern Dynasties are clearly different from those of networks in the other periods of China, and it seems more reasonable that the literatures in Wei, Jin, and Southern-Northern Dynasties belong to the recent Chinese language."
]
}
|
1310.5111
|
1919907269
|
In this paper, we explore complex network properties of word collocation networks (Ferret, 2002) from four different genres. Each document of a particular genre was converted into a network of words with word collocations as edges. We analyzed graphically and statistically how the global properties of these networks varied across different genres, and among different network types within the same genre. Our results indicate that the distributions of network properties are visually similar but statistically apart across different genres, and interesting variations emerge when we consider different network types within a single genre. We further investigate how the global properties change as we add more and more collocation edges to the graph of one particular genre, and observe that except for the number of vertices and the size of the largest connected component, network properties change in phases, via jumps and drops.
|
Word collocation networks have been used by for opinion mining, and by Mihalcea and Tarau for keyword extraction. While the former study used complex network properties as features for machine learning algorithms, the latter ran PageRank @cite_1 on word collocation networks to sieve out most important words.
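A hedged sketch of the PageRank-based keyword extraction idea mentioned above: run PageRank over a word collocation network and take the highest-scoring words. The toy graph below and the damping factor are illustrative assumptions, not the setup of the cited works.

import networkx as nx

def top_keywords(collocation_graph, k=3):
    scores = nx.pagerank(collocation_graph, alpha=0.85)
    return sorted(scores, key=scores.get, reverse=True)[:k]

g = nx.Graph([("word", "collocation"), ("collocation", "network"),
              ("network", "word"), ("network", "genre")])
print(top_keywords(g))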
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1854214752"
],
"abstract": [
"The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation."
]
}
|
1310.5042
|
2115008472
|
There have been several efforts to extend distributional semantics beyond individual words, to measure the similarity of word pairs, phrases, and sentences (briefly, tuples; ordered sets of words, contiguous or noncontiguous). One way to extend beyond words is to compare two tuples using a function that combines pairwise similarities between the component words in the tuples. A strength of this approach is that it works with both relational similarity (analogy) and compositional similarity (paraphrase). However, past work required hand-coding the combination function for different tasks. The main contribution of this paper is that combination functions are generated by supervised learning. We achieve state-of-the-art results in measuring relational similarity between word pairs (SAT analogies and SemEval 2012 Task 2) and measuring compositional similarity between noun-modifier phrases and unigrams (multiple-choice paraphrase questions).
|
In SemEval 2012, Task 2 was concerned with measuring the degree of relational similarity between two word pairs @cite_13 and Task 6 @cite_25 examined the degree of semantic equivalence between two sentences. These two areas of research have been mostly independent, although and Turney present unified perspectives on the two tasks. We first discuss some work on relational similarity, then some work on compositional similarity, and lastly work that unifies the two types of similarity.
|
{
"cite_N": [
"@cite_13",
"@cite_25"
],
"mid": [
"2098801107",
"2251861449"
],
"abstract": [
"Up to now, work on semantic relations has focused on relation classification: recognizing whether a given instance (a word pair such as virus:flu) belongs to a specific relation class (such as CAUSE:EFFECT). However, instances of a single relation class may still have significant variability in how characteristic they are of that class. We present a new SemEval task based on identifying the degree of prototypicality for instances within a given class. As a part of the task, we have assembled the first dataset of graded relational similarity ratings across 79 relation categories. Three teams submitted six systems, which were evaluated using two methods.",
"Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. This paper presents the results of the STS pilot task in Semeval. The training data contained 2000 sentence pairs from previously existing paraphrase datasets and machine translation evaluation resources. The test data also comprised 2000 sentences pairs for those datasets, plus two surprise datasets with 400 pairs from a different machine translation evaluation corpus and 750 pairs from a lexical resource mapping exercise. The similarity of pairs of sentences was rated on a 0-5 scale (low to high similarity) by human judges using Amazon Mechanical Turk, with high Pearson correlation scores, around 90 . 35 teams participated in the task, submitting 88 runs. The best results scored a Pearson correlation >80 , well above a simple lexical baseline that only scored a 31 correlation. This pilot task opens an exciting way ahead, although there are still open issues, specially the evaluation metric."
]
}
|
1310.4632
|
2951512769
|
Wireless medium access control (MAC) and routing protocols are fundamental building blocks of the Internet of Things (IoT). As new IoT networking standards are being proposed and different existing solutions patched, evaluating the end-to-end performance of the network becomes challenging. Specific solutions designed to be beneficial, when stacked may have detrimental effects on the overall network performance. In this paper, an analysis of MAC and routing protocols for IoT is provided with focus on the IEEE 802.15.4 MAC and the IETF RPL standards. It is shown that existing routing metrics do not account for the complex interactions between MAC and routing, and thus novel metrics are proposed. This enables a protocol selection mechanism for selecting the routing option and adapting the MAC parameters, given specific performance constraints. Extensive analytical and experimental results show that the behavior of the MAC protocol can hurt the performance of the routing protocol and vice versa, unless these two are carefully optimized together by the proposed method.
|
The importance of mathematical modeling of MAC protocols for sensor networking has been advocated in recent literature @cite_19 Park_TON -- @cite_16 . The critical effect of MAC parameters on performance has been shown and discussed in @cite_19 for IEEE 802.15.4 networks. In @cite_11 , a Markov chain model is used to design a distributed adaptive algorithm for minimizing the power consumption of single-hop star networks using the IEEE 802.15.4 MAC, while guaranteeing a given successful packet reception probability and delay constraints. In @cite_16 , an automatic MAC protocol selection mechanism is proposed. The main idea of this approach is to provide a mathematical analysis of various MAC protocols, including the IEEE 802.15.4 MAC, and to choose the optimal MAC protocol and adapt its parameters for the selected modality, topology, and packet generation rate. In particular, the designed mechanism takes into account the corresponding physical layer technology and hardware, while satisfying constraints on energy, reliability, and delay. The value of the aforementioned approaches is that the algorithms do not require any modification of the IEEE 802.15.4 standard. However, their application is limited to single-hop networks.
|
{
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_11"
],
"mid": [
"2115871474",
"2104758807",
"2144261701"
],
"abstract": [
"The recent IEEE 802.15.4 standard defines a low rate wireless personal area network which can be used to implement sensor networks. In particular, the beacon enabled mode with slotted CSMA-CA contention resolution mechanism appears suitable for deployment in simple hierarchical sensor clusters. However, a detailed analysis of the operation of the MAC layer according to the 802.15.4 standard reveals a number of issues which may potentially become performance bottlenecks and thus lead to serious performance degradation. we analyze those issues and their impact, and suggest modifications of the coordinator function that allow the network to handle higher traffic loads and thus offer much improved performance to its clients.",
"We present a novel approach for Medium Access Control (MAC) protocol design based on protocol engine. Current way of designing MAC protocols for a specific application is based on two steps: First the application specifications (such as network topology and packet generation rate), the requirements for energy consumption, delay and reliability, and the resource constraints from the underlying physical layer (such as energy consumption and data rate) are specified, and then the protocol that satisfies all these constraints is designed. Main drawback of this procedure is that we have to restart the design process for each possible application, which may be a waste of time and efforts. The goal of a MAC protocol engine is to provide a library of protocols together with their analysis such that for each new application the optimal protocol is chosen automatically among its library with optimal parameters. We illustrate the MAC engine idea by including an original analysis of IEEE 802.15.4 unslotted random access and Time Division Multiple Access (TDMA) protocols, and implementing these protocols in the software framework called SPINE, which runs on top of TinyOS and is designed for health care applications. Then we validate the analysis and demonstrate how the protocol engine chooses the optimal protocol under different application scenarios via an experimental implementation.",
"Distributed processing through ad hoc and sensor networks is having a major impact on scale and applications of computing. The creation of new cyber-physical services based on wireless sensor devices relies heavily on how well communication protocols can be adapted and optimized to meet quality constraints under limited energy resources. The IEEE 802.15.4 medium access control protocol for wireless sensor networks can support energy efficient, reliable, and timely packet transmission by a parallel and distributed tuning of the medium access control parameters. Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to the changes of the network and traffic regimes by algorithms that can run on resource-constrained devices. In this paper, a Markov chain is proposed to model these relations by simple expressions without giving up the accuracy. In contrast to previous work, the presence of limited number of retransmissions, acknowledgments, unsaturated traffic, packet size, and packet copying delay due to hardware limitations is accounted for. The model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while guaranteeing a given successful packet reception probability and delay constraints in the packet transmission. The algorithm does not require any modification of the IEEE 802.15.4 medium access control and can be easily implemented on network devices. The algorithm has been experimentally implemented and evaluated on a testbed with off-the-shelf wireless sensor devices. Experimental results show that the analysis is accurate, that the proposed algorithm satisfies reliability and delay constraints, and that the approach reduces the energy consumption of the network under both stationary and transient conditions. Specifically, even if the number of devices and traffic configuration change sharply, the proposed parallel and distributed algorithm allows the system to operate close to its optimal state by estimating the busy channel and channel access probabilities. Furthermore, results indicate that the protocol reacts promptly to errors in the estimation of the number of devices and in the traffic load that can appear due to device mobility. It is also shown that the effect of imperfect channel and carrier sensing on system performance heavily depends on the traffic load and limited range of the protocol parameters."
]
}
|
1310.4632
|
2951512769
|
Wireless medium access control (MAC) and routing protocols are fundamental building blocks of the Internet of Things (IoT). As new IoT networking standards are being proposed and different existing solutions patched, evaluating the end-to-end performance of the network becomes challenging. Specific solutions designed to be beneficial, when stacked may have detrimental effects on the overall network performance. In this paper, an analysis of MAC and routing protocols for IoT is provided with focus on the IEEE 802.15.4 MAC and the IETF RPL standards. It is shown that existing routing metrics do not account for the complex interactions between MAC and routing, and thus novel metrics are proposed. This enables a protocol selection mechanism for selecting the routing option and adapting the MAC parameters, given specific performance constraints. Extensive analytical and experimental results show that the behavior of the MAC protocol can hurt the performance of the routing protocol and vice versa, unless these two are carefully optimized together by the proposed method.
|
Continuing our literature review on MAC design and adaptation, a framework for MAC parameter adaptation based on analytical modeling has also been presented in @cite_4 . This approach has been developed for the X-MAC and LPP protocols, but the results are not intended for the IEEE 802.15.4 MAC. An adaptation mechanism for the IEEE 802.15.4 MAC has been proposed in @cite_14 . The mechanism was experimentally validated in a wireless body sensor network deployed for medical applications. However, this approach does not include any analytical modeling of the MAC performance, and the adaptations are performed based on observations made during the experimental study.
|
{
"cite_N": [
"@cite_14",
"@cite_4"
],
"mid": [
"2159389203",
"2095720104"
],
"abstract": [
"For the first time, this paper presents an analysis of the performance of the IEEE 802.15.4 low power, low data rate wireless standard in relation to medical sensor body area networks. This is an emerging application of wireless sensor networking with particular performance constraints, including power consumption, physical size, robustness and security. In the analysis presented, the star network configuration of the 802.15.4 standard at 2.4 GHz was considered for a body area network consisting of a wearable or desk mounted coordinator outside of the body with up to 10 body implanted sensors. The main consideration in this work was the long-term power consumption of devices, since for practical reasons, implanted medical devices and sensors must function for at least 10 to 15 years without battery replacement. The results show that when properly configured, 802.15.4 can be used for medical sensor networking when configured in non-beacon mode with low data rate asymmetric traffic. Beacon mode may also be used, but with more severe restrictions on data rate and crystal tolerance.",
"We present pTunes, a framework for runtime adaptation of low-power MAC protocol parameters. The MAC operating parameters bear great influence on the system performance, yet their optimal choice is a function of the current network state. Based on application requirements expressed as network lifetime, end-to-end latency, and end-to-end reliability, pTunes automatically determines optimized parameter values to adapt to link, topology, and traffic dynamics. To this end, we introduce a flexible modeling approach, separating protocol-dependent from protocol-independent aspects, which facilitates using pTunes with different MAC protocols, and design an efficient system support that integrates smoothly with the application. To demonstrate its effectiveness, we apply pTunes to X-MAC and LPP. In a 44-node testbed, pTunes achieves up to three-fold lifetime gains over static MAC parameters optimized for peak traffic, the latter being current - and almost unavoidable - practice in real deployments. pTunes promptly reacts to changes in traffic load and link quality, reducing packet loss by 80 during periods of controlled wireless interference. Moreover, pTunes helps the routing protocol recover quickly from critical network changes, reducing packet loss by 70 in a scenario where multiple core routing nodes fail."
]
}
|
1310.4632
|
2951512769
|
Wireless medium access control (MAC) and routing protocols are fundamental building blocks of the Internet of Things (IoT). As new IoT networking standards are being proposed and different existing solutions patched, evaluating the end-to-end performance of the network becomes challenging. Specific solutions designed to be beneficial, when stacked may have detrimental effects on the overall network performance. In this paper, an analysis of MAC and routing protocols for IoT is provided with focus on the IEEE 802.15.4 MAC and the IETF RPL standards. It is shown that existing routing metrics do not account for the complex interactions between MAC and routing, and thus novel metrics are proposed. This enables a protocol selection mechanism for selecting the routing option and adapting the MAC parameters, given specific performance constraints. Extensive analytical and experimental results show that the behavior of the MAC protocol can hurt the performance of the routing protocol and vice versa, unless these two are carefully optimized together by the proposed method.
|
Within the literature on routing protocols for IoT applications, @cite_18 presents an experimental performance evaluation of RPL using the basic hop-count routing metric and the expected transmission count (ETX) @cite_3 metric. ETX is a reliability metric that indicates the number of transmissions, including retransmissions, a node expects to make to successfully deliver a packet to the destination node. However, the study in @cite_18 does not consider the performance of RPL when a contention-based MAC protocol is active, and it does not propose new routing metrics. In the back-pressure collection protocol (BCP) @cite_20 , an extension of the ETX metric led to the introduction of a dynamic back-pressure routing metric. In BCP, routing and forwarding decisions are made on a per-packet basis by computing a back-pressure weight for each outgoing link as a function of the node queues and the link state information. BCP is tested over a low-power contention-based MAC. However, the effects of the limited number of backoffs and retransmissions, present in the IEEE 802.15.4 standard, are not taken into account.
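To make the ETX metric concrete, here is a small Python sketch in the spirit of @cite_3: with forward delivery probability df and reverse (ACK) delivery probability dr on a link, the expected number of transmissions is 1 / (df * dr), and a path metric sums the per-link values. Probe-based estimation of df and dr, and the back-pressure weights of BCP, are omitted, and the function names are illustrative assumptions.

def link_etx(df, dr):
    # expected transmissions on one link; infinite if the link never delivers
    return float("inf") if df * dr == 0 else 1.0 / (df * dr)

def path_etx(links):
    # links: list of (df, dr) pairs along the path
    return sum(link_etx(df, dr) for df, dr in links)

print(path_etx([(0.9, 0.8), (0.95, 0.95)]))   # about 2.5 expected transmissions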
|
{
"cite_N": [
"@cite_18",
"@cite_20",
"@cite_3"
],
"mid": [
"1964246093",
"",
"2041248965"
],
"abstract": [
"In this paper, a performance evaluation of the Routing Protocol for Low power and Lossy Networks (RPL) is presented. Detailed simulations are carried out to produce several routing performance metrics using a set of real-life scenarios. A real outdoor network was reproduced in simulation with the help of topology and link quality data. Behaviors of such a network in presence of RPL, their reason and potential areas of further study improvement are pointed out.",
"",
"This paper presents the expected transmission count metric (ETX), which finds high-throughput paths on multi-hop wireless networks. ETX minimizes the expected total number of packet transmissions (including retransmissions) required to successfully deliver a packet to the ultimate destination. The ETX metric incorporates the effects of link loss ratios, asymmetry in the loss ratios between the two directions of each link, and interference among the successive links of a path. In contrast, the minimum hop-count metric chooses arbitrarily among the different paths of the same minimum length, regardless of the often large differences in throughput among those paths, and ignoring the possibility that a longer path might offer higher throughput.This paper describes the design and implementation of ETX as a metric for the DSDV and DSR routing protocols, as well as modifications to DSDV and DSR which allow them to use ETX. Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance. For long paths the throughput improvement is often a factor of two or more, suggesting that ETX will become more useful as networks grow larger and paths become longer."
]
}
|
1310.4632
|
2951512769
|
Wireless medium access control (MAC) and routing protocols are fundamental building blocks of the Internet of Things (IoT). As new IoT networking standards are being proposed and different existing solutions patched, evaluating the end-to-end performance of the network becomes challenging. Specific solutions designed to be beneficial, when stacked may have detrimental effects on the overall network performance. In this paper, an analysis of MAC and routing protocols for IoT is provided with focus on the IEEE 802.15.4 MAC and the IETF RPL standards. It is shown that existing routing metrics do not account for the complex interactions between MAC and routing, and thus novel metrics are proposed. This enables a protocol selection mechanism for selecting the routing option and adapting the MAC parameters, given specific performance constraints. Extensive analytical and experimental results show that the behavior of the MAC protocol can hurt the performance of the routing protocol and vice versa, unless these two are carefully optimized together by the proposed method.
|
In @cite_9 , the authors propose a metric for opportunistic routing over very low duty-cycled MACs that considers the expected number of duty-cycled wakeups required to successfully deliver a packet from source to destination. In @cite_24 , a multi-path opportunistic routing scheme is proposed for time-constrained operations over the IEEE 802.15.4 MAC. However, neither of these approaches considers load balancing or the effects of contention-based access. A study on the interaction of RPL with the MAC layer is presented in @cite_1 , where the use of a receiver-initiated MAC protocol to enhance the performance of RPL is investigated. In @cite_15 , a cross-layer framework has been proposed for IoT applications, considering SMAC and RPL. However, these works do not take into account the specifications of the IEEE 802.15.4 MAC, which is widely used as the default MAC protocol for the IoT @cite_10 .
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_15",
"@cite_10"
],
"mid": [
"2095409851",
"2150662454",
"",
"2135126436",
"2120629158"
],
"abstract": [
"Opportunistic routing is widely known to have substantially better performance than traditional unicast routing in wireless networks with lossy links. However, wireless sensor networks are heavily duty-cycled, i.e. they frequently enter deep sleep states to ensure long network life-time. This renders existing opportunistic routing schemes impractical, as they assume that nodes are always awake and can overhear other transmissions. In this paper, we introduce a novel opportunistic routing metric that takes duty cycling into account. By analytical performance modeling and simulations, we show that our routing scheme results in significantly reduced delay and improved energy efficiency compared to traditional unicast routing. The method is based on a new metric, EDC, that reflects the expected number of duty cycled wakeups that are required to successfully deliver a packet from source to destination. We devise distributed algorithms that find the EDC-optimal forwarding, i.e. the optimal subset of neighbors that each node should permit to forward its packets. We compare the performance of the new routing with ETX-optimal single path routing in both simulations and testbed-based experiments.",
"Wireless Sensor Networks (WSNs) allow for untethered sensing of the environment. It is anticipated that, within the next few years, sensors will be deployed in a variety of scenarios, ranging from environmental monitoring to health care, from public to private sector and other areas. This paper investigate the utilization of receiver-based Medium Access Control (MAC) protocol in enhancing the performance of routing protocols such as IETF ROLL's RPL. Receiver-Based MAC (RB-MAC) is a preamble-sampling MAC protocol which dynamically elects the next-hop among a number of potential relay neighbors, based on current channel conditions and status of the sensor nodes. The proposed scheme is resilient to lossy links by nature, and hence reduces the number of retransmissions. We show by analysis how it outperforms the state-of-the-art sender-based preamble sampling MAC protocols in terms of energy and delay.",
"",
"The Internet of Things (IoT) is a novel networking paradigm which allows the communication among all sorts of physical objects over the Internet. The IoT defines a world-wide cyber-physical system with a plethora of applications in the fields of domotics, e-health, goods monitoring and logistics, among others. The use of cross-layer communication schemes to provide adaptive solutions for the IoT is motivated by the high heterogeneity in the hardware capabilities and the communication requirements among things. In this paper, a novel cross-layer module for the IoT is proposed to accurately capture both the high heterogeneity of the IoT and the impact of the Internet as part of the network architecture. The fundamental part of the module is a mathematical framework, which is developed to obtain the optimal routing paths and the communication parameters among things, by exploiting the interrelations among different layer functionalities in the IoT. Moreover, a cross-layer communication protocol is presented to implement this optimization framework in practical scenarios. The results show that the proposed solution can achieve a global communication optimum and outperforms existing layered solutions. The novel cross-layer module is a primary step towards providing efficient and reliable end-to-end communication in the IoT.",
"We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO OSI and TCP IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality."
]
}
|
1310.4938
|
2205343574
|
We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to identify automatically the type of a logical relation between two input texts. In particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for a high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources like, e.g., WordNet, YAGO, and OpenCyc, or other, more experimental sources with, e.g., manually defined presupposition resolutions, or with axiomatized general and common sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results.
|
Finally, the inference process of our system applies ontological knowledge coming from different external sources such as WordNet or YAGO @cite_4 . In the last few years there has been a considerable growth of interest in large ontologies and their applications. There are also a number of large ontology integration projects, e.g., YAGO together with the Suggested Upper Merged Ontology (SUMO, see melo2008integrating ), DBpedia @cite_17 , or the Linking Open Data Project @cite_26 , whose aim is to extract and combine ontological data from many sources. Since YAGO is a part of those ontology projects, it should be possible to integrate them (at least partly) into the RTE application by applying the integration procedure from .
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_17"
],
"mid": [
"1807555267",
"2114544510",
"102708294"
],
"abstract": [
"In this article we describe VAMPIRE: a high-performance theorem prover for first-order logic. As our description is mostly targeted to the developers of such systems and specialists in automated reasoning, it focuses on the design of the system and some key implementation features. We also analyze the performance of the prover at CASC-JC.",
"This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95 -as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.",
"DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data."
]
}
|
1310.4645
|
2293063927
|
Collective communications are ubiquitous in parallel applications. We present two new algorithms for performing a reduction. The operation associated with our reduction needs to be associative and commutative. The two algorithms are developed under two different communication models (unidirectional and bidirectional). Both algorithms use a greedy scheduling scheme. For a unidirectional, fully connected network, we prove that our greedy algorithm is optimal when some realistic assumptions are respected. Previous algorithms fit the same assumptions and are only appropriate for some given configurations. Our algorithm is optimal for all configurations. We note that there are some configurations where our greedy algorithm significantly outperforms any existing algorithm. This result represents a contribution to the state of the art. For a bidirectional, fully connected network, we present a different greedy algorithm. We verify by experimental simulations that our algorithm matches the time complexity of an optimal broadcast (with the addition of the computation). Besides reversing an optimal broadcast algorithm, the greedy algorithm is the first known reduction algorithm to experimentally attain this time complexity. Simulations show that this greedy algorithm performs well in practice, outperforming state-of-the-art reduction algorithms. Positive experiments on a parallel distributed machine are also presented.
|
Optimizing the reduction operation is closely related to optimizing the broadcast operation. Any broadcast algorithm can be reversed to perform a reduction. Bar- @cite_10 and Träff and Ripke @cite_1 both provide algorithms that produce an optimal broadcast schedule for a bidirectional system. Here the messages are split into segments and the segments are broadcast in rounds. In both cases the optimality is in the sense that the algorithm meets the lower bound on the number of communication rounds. For the theoretical case when @math , reversing an optimal broadcast will provide an optimal reduction. However, for @math this is no longer valid, as an optimal schedule will most likely need to take the computation into account.
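As a hedged illustration of the round counts involved, assuming the usual fully connected, one-port round model: broadcasting a message split into N blocks needs at least N - 1 + ceil(log2 p) rounds, which is the lower bound the cited optimal schedules meet; reversing such a schedule gives the corresponding reduction round count when computation is ignored. The small Python helper below simply evaluates that bound.

from math import ceil, log2

def broadcast_round_lower_bound(p, n_blocks):
    # n_blocks - 1 rounds to pipeline the blocks out of the root, plus
    # ceil(log2 p) rounds for the last block to reach all p processors
    return n_blocks - 1 + ceil(log2(p))

print(broadcast_round_lower_bound(16, 1))    # 4: a single block, binomial tree is optimal
print(broadcast_round_lower_bound(16, 32))   # 35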
|
{
"cite_N": [
"@cite_1",
"@cite_10"
],
"mid": [
"2062810083",
"2054243578"
],
"abstract": [
"We develop and implement an optimal broadcast algorithm for fully connected processor networks under a bidirectional communication model in which each processor can simultaneously send a message to one processor and receive a message from another, possibly different processor. For any number of processors p the algorithm requires [email protected][email protected]? communication rounds to broadcast N blocks of data from a root processor to the remaining processors, meeting the lower bound in the model. For data of size m, assuming that sending and receiving data of size m^' takes time @[email protected]^', the best running time that can be achieved by the division of m into equal-sized blocks is ((@[email protected]?-1)@[email protected])^2. The algorithm uses a regular, circulant graph communication pattern, and degenerates into a binomial tree broadcast when the number of blocks to be broadcast is one. The algorithm is furthermore well suited to fully connected clusters of SMP (Symmetric Multi-Processor) nodes. The algorithm is implemented as part of an MPI (Message Passing Interface) library. We demonstrate significant practical bandwidth improvements of up to a factor 1.5 over several other, commonly used broadcast algorithms on both a small SMP cluster and a 72 node NEC SX vector supercomputer.",
"We consider the problem of broadcasting multiple messages from one processor to many processors in telephone-like communication systems. In such systems, processors communicate in rounds, where in every round, each processor can communicate with exactly one other processor by exchanging messages with it. Finding an optimal solution for this problem was open for over a decade. In this paper, we present an optimal algorithm for this problem when the number of processors is even. For an odd number of processors, we provide an algorithm which is within an additive term of 3 of the optimum. A by-product of our solution is an optimal algorithm for the problem of broadcasting multiple messages for any number of processors in the simultaneous send receive model. In this latter model, in every round, each processor can send a message to one processor and receive a message from another processor."
]
}
|
1310.4645
|
2293063927
|
Collective communications are ubiquitous in parallel applications. We present two new algorithms for performing a reduction. The operation associated with our reduction needs to be associative and commutative. The two algorithms are developed under two different communication models (unidirectional and bidirectional). Both algorithms use a greedy scheduling scheme. For a unidirectional, fully connected network, we prove that our greedy algorithm is optimal when some realistic assumptions are respected. Previous algorithms fit the same assumptions and are only appropriate for some given configurations. Our algorithm is optimal for all configurations. We note that there are some configurations where our greedy algorithm significantly outperforms any existing algorithm. This result represents a contribution to the state of the art. For a bidirectional, fully connected network, we present a different greedy algorithm. We verify by experimental simulations that our algorithm matches the time complexity of an optimal broadcast (with the addition of the computation). Besides reversing an optimal broadcast algorithm, the greedy algorithm is the first known reduction algorithm to experimentally attain this time complexity. Simulations show that this greedy algorithm performs well in practice, outperforming state-of-the-art reduction algorithms. Positive experiments on a parallel distributed machine are also presented.
|
Rabenseifner @cite_5 provides a reduce-scatter/gather algorithm (butterfly) which provides optimal load balancing to minimize the computation. Rabenseifner does not predefine a segmentation of the message, but rather uses the techniques of recursive halving and recursive doubling. The algorithm proceeds in two phases. In the first phase the message is repeatedly halved in size and exchanged among processes. At the end of the first phase the final result is distributed among all the processors. This phase is known as a reduce-scatter. The second phase gathers the results, recursively doubling the size of the message. In @cite_3 Rabenseifner and Träff improve on the algorithm for a non-power-of-two number of processors.
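The recursive-halving phase can be illustrated with a short sequential simulation in Python (numpy assumed, p a power of two, vector length divisible by p); the gather phase is replaced here by a direct check that each rank ends up owning one fully reduced segment. This is only a data-flow sketch, not the MPI implementation of @cite_5.

import numpy as np

def recursive_halving_reduce_scatter(vectors):
    p, m = len(vectors), len(vectors[0])
    assert p & (p - 1) == 0 and m % p == 0, "sketch assumes p = 2^k and p divides m"
    buf = [np.asarray(v, dtype=float).copy() for v in vectors]
    lo, length = [0] * p, [m] * p              # segment each rank is currently responsible for
    d = p // 2
    while d >= 1:
        new = [b.copy() for b in buf]
        for r in range(p):
            partner, half = r ^ d, length[r] // 2
            keep = lo[r] if (r & d) == 0 else lo[r] + half   # keep the half this rank stays responsible for
            new[r][keep:keep + half] = buf[r][keep:keep + half] + buf[partner][keep:keep + half]
            lo[r], length[r] = keep, half
        buf, d = new, d // 2
    return [buf[r][lo[r]:lo[r] + m // p] for r in range(p)]   # rank r ends up owning segment r

vecs = [np.arange(8) + 10 * r for r in range(4)]
parts = recursive_halving_reduce_scatter(vecs)
assert np.allclose(np.concatenate(parts), np.sum(vecs, axis=0))
print(np.concatenate(parts))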
|
{
"cite_N": [
"@cite_5",
"@cite_3"
],
"mid": [
"1572016165",
"1518038022"
],
"abstract": [
"A 5-year-profiling in production mode at the University of Stuttgart has shown that more than 40 of the execution time of Message Passing Interface (MPI) routines is spent in the collective communication routines MPI_Allreduce and MPI_Reduce. Although MPI implementations are now available for about 10 years and all vendors are committed to this Message Passing Interface standard, the vendors’ and publicly available reduction algorithms could be accelerated with new algorithms by a factor between 3 (IBM, sum) and 100 (Cray T3E, maxloc) for long vectors. This paper presents five algorithms optimized for different choices of vector size and number of processes. The focus is on bandwidth dominated protocols for power-of-two and non-power-of-two number of processes, optimizing the load balance in communication and computation.",
"We present improved algorithms for global reduction operations for message-passing systems. Each of p processors has a vector of m data items, and we want to compute the element-wise “sum” under a given, associative function of the p vectors. The result, which is also a vector of m items, is to be stored at either a given root processor (MPI_Reduce), or all p processors (MPI_Allreduce). A further constraint is that for each data item and each processor the result must be computed in the same order, and with the same bracketing. Both problems can be solved in O(m+log2 p) communication and computation time. Such reduction operations are part of MPI (the Message Passing Interface), and the algorithms presented here achieve significant improvements over currently implemented algorithms for the important case where p is not a power of 2. Our algorithm requires ⌈log2 p⌉ + 1 rounds – one round off from optimal – for small vectors. For large vectors twice the number of rounds is needed, but the communication and computation time is less than 3mβ and 3 2mγ, respectively, an improvement from 4mβ and 2mγ achieved by previous algorithms (with the message transfer time modeled as α + mβ, and reduction-operation execution time as mγ). For p=3× 2 n and p=9× 2 n and small m ≤ b for some threshold b, and p=q 2 n with small q, our algorithm achieves the optimal ⌈log2 p⌉ number of rounds."
]
}
|
1310.4227
|
2950871198
|
The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving expected estimations.
|
There are several results on measure concentration for Lipschitz functions of random variables (cf. Maurey and ). In this work we use logarithmic Sobolev inequalities @cite_0 and prove a new measure concentration result for random variables. To do this we generalize a classic result of on Poincaré inequalities to non-strongly log-concave distributions, and also recover the concentration result of for functions of Laplace random variables.
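For context, a standard statement of the strongly log-concave case that the paragraph says is being generalized (a textbook Bakry-Emery / Brascamp-Lieb-type bound, not the new result of this paper): if dμ = e^{-V(x)} dx on R^n with ∇²V ⪰ ρ·I for some ρ > 0, then

\mathrm{Var}_{\mu}(f) \le \frac{1}{\rho}\int |\nabla f|^2 \, d\mu,
\qquad
\mathrm{Ent}_{\mu}(f^2) \le \frac{2}{\rho}\int |\nabla f|^2 \, d\mu,

i.e., a Poincaré and a logarithmic Sobolev inequality with constants governed by ρ; the work above concerns relaxing the requirement that ρ be strictly positive.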
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2140777099"
],
"abstract": [
"We describe a new approach for phoneme recognition which aims at minimizing the phoneme error rate. Building on structured prediction techniques, we formulate the phoneme recognizer as a linear combination of feature functions. We state a PAC-Bayesian generalization bound, which gives an upper-bound on the expected phoneme error rate in terms of the empirical phoneme error rate. Our algorithm is derived by finding the gradient of the PAC-Bayesian bound and minimizing it by stochastic gradient descent. The resulting algorithm is iterative and easy to implement. Experiments on the TIMIT corpus show that our method achieves the lowest phoneme error rate compared to other discriminative and generative models with the same expressive power."
]
}
|
1310.4284
|
1826471959
|
Due to non-homogeneous spread of sunlight, sensing nodes possess non-uniform energy budget in rechargeable Wireless Sensor Networks (WSNs). An energy-aware workload distribution strategy is therefore necessary to achieve good data accuracy subject to energy-neutral operation. Recently proposed signal approximation strategies assume uniform sampling and fail to ensure energy neutral operation in rechargeable wireless sensor networks. We propose EAST (Energy Aware Sparse approximation Technique), which approximates a signal by adapting sensor node sampling workload according to solar energy availability. To the best of our knowledge, we are the first to propose sparse approximation to model energy-aware workload distribution in rechargeable WSNs. Experimental results, using data from an outdoor WSN deployment suggest that EAST significantly improves the approximation accuracy offering approximately 50% higher sensor on-time. EAST requires the approximation error to be known beforehand to determine the number of measurements. However, it is not always possible to decide the accuracy a-priori. We improve EAST and propose EAST+, which, given only the energy budget of the nodes, computes the optimal number of measurements subject to the energy neutral operation.
|
Work presented in @cite_22 proposes a harvest-aware adaptive sampling approach to dynamically identify the maximum duty cycle. However, their focus is not on signal approximation from the network.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"1991781995"
],
"abstract": [
"Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work."
]
}
|
1310.4284
|
1826471959
|
Due to non-homogeneous spread of sunlight, sensing nodes possess non-uniform energy budgets in rechargeable Wireless Sensor Networks (WSNs). An energy-aware workload distribution strategy is therefore necessary to achieve good data accuracy subject to energy-neutral operation. Recently proposed signal approximation strategies assume uniform sampling and fail to ensure energy-neutral operation in rechargeable wireless sensor networks. We propose EAST (Energy Aware Sparse approximation Technique), which approximates a signal by adapting sensor node sampling workload according to solar energy availability. To the best of our knowledge, we are the first to propose sparse approximation to model energy-aware workload distribution in rechargeable WSNs. Experimental results, using data from an outdoor WSN deployment, suggest that EAST significantly improves the approximation accuracy, offering approximately 50% higher sensor on-time. EAST requires the approximation error to be known beforehand to determine the number of measurements. However, it is not always possible to decide the accuracy a-priori. We improve EAST and propose EAST+, which, given only the energy budget of the nodes, computes the optimal number of measurements subject to energy-neutral operation.
|
In @cite_17 , a Bayesian estimation technique is presented to estimate wind speed and wind direction signals. The authors supplement their estimation with the assumption that the wind speed and wind direction signals are correlated with hourly tide data. In our work, by contrast, we assume that signals are compressible due to the presence of spatial-temporal correlation among the data collected at different sensing nodes.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"1975250521"
],
"abstract": [
"We present in this article a Bayesian estimation method for the joint segmentation of a set of piecewise stationary processes. The estimate we propose is based on the maximization of the posterior distribution of the change instants conditionally to the process parameter estimation. It is defined as a penalized contrast function with a first term related to the fit to the observation and a second term of penalty. The expression of the contrast function is deduced from the log-likelihood of the parametric distribution that models the statistic evolution of processes in the stationary segments. In the case of joint segmentation the penalty term is deduced from the prior law of change instants. It is composed of parameters that guide the number and the position of changes and of parameters that will bring prior information on the joint behavior of processes. This work is applied to the estimation of wind statistics parameters. We use data available from a cup anemometer and a wind vane, supposed to be piecewise stationary. The contrast function is deduced from the circular Von Mises distribution for the wind direction and from the log-normal distribution for the speed. The feasibility and the contribution of our method are shown on synthetic and real data."
]
}
|
1310.4284
|
1826471959
|
Due to non-homogeneous spread of sunlight, sensing nodes possess non-uniform energy budgets in rechargeable Wireless Sensor Networks (WSNs). An energy-aware workload distribution strategy is therefore necessary to achieve good data accuracy subject to energy-neutral operation. Recently proposed signal approximation strategies assume uniform sampling and fail to ensure energy-neutral operation in rechargeable wireless sensor networks. We propose EAST (Energy Aware Sparse approximation Technique), which approximates a signal by adapting sensor node sampling workload according to solar energy availability. To the best of our knowledge, we are the first to propose sparse approximation to model energy-aware workload distribution in rechargeable WSNs. Experimental results, using data from an outdoor WSN deployment, suggest that EAST significantly improves the approximation accuracy, offering approximately 50% higher sensor on-time. EAST requires the approximation error to be known beforehand to determine the number of measurements. However, it is not always possible to decide the accuracy a-priori. We improve EAST and propose EAST+, which, given only the energy budget of the nodes, computes the optimal number of measurements subject to energy-neutral operation.
|
A number of studies @cite_26 @cite_7 @cite_4 have utilized the spatial-temporal correlation of the signal to reduce sampling requirements. Although our approach relies on a similar assumption, we additionally consider the non-uniform energy profiles of the sensors. Moreover, we use Sparse Random Projections, which also differs from these approaches.
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7"
],
"mid": [
"2134238238",
"2097995023",
"2149119325"
],
"abstract": [
"Limited energy supply is one of the major constraints in wireless sensor networks. A feasible strategy is to aggressively reduce the spatial sampling rate of sensors, that is, the density of the measure points in a field. By properly scheduling, we want to retain the high fidelity of data collection. In this paper, we propose a data collection method that is based on a careful analysis of the surveillance data reported by the sensors. By exploring the spatial correlation of sensing data, we dynamically partition the sensor nodes into clusters so that the sensors in the same cluster have similar surveillance time series. They can share the workload of data collection in the future since their future readings may likely be similar. Furthermore, during a short-time period, a sensor may report similar readings. Such a correlation in the data reported from the same sensor is called temporal correlation, which can be explored to further save energy. We develop a generic framework to address several important technical challenges, including how to partition the sensors into clusters, how to dynamically maintain the clusters in response to environmental changes, how to schedule the sensors in a cluster, how to explore temporal correlation, and how to restore the data in the sink with high fidelity. We conduct an extensive empirical study to test our method using both a real test bed system and a large-scale synthetic data set.",
"Declarative queries are proving to be an attractive paradigm for ineracting with networks of wireless sensors. The metaphor that \"the sensornet is a database\" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques.",
"Wireless sensor networks provide an attractive approach to spatially monitoring environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental trade-off: using higher densities of sensors provides more measurements, higher resolution and better accuracy, but requires more communications and processing. This paper proposes a new approach, called \"back-casting,\" which can significantly reduce communications and energy consumption while maintaining high accuracy. Back-casting operates by first having a small subset of the wireless sensors communicate their information to a fusion center. This provides an initial estimate of the environment being sensed, and guides the allocation of additional network resources. Specifically, the fusion center backcasts information based on the initial estimate to the network at large, selectively activating additional sensor nodes in order to achieve a target error level. The key idea is that the initial estimate can detect correlations in the environment, indicating that many sensors may not need to be activated by the fusion center. Thus, adaptive sampling can save energy compared to dense, non-adaptive sampling. This method is theoretically analyzed in the context of field estimation and it is shown that the energy savings can be quite significant compared to conventional approaches. For example, when sensing a piecewise smooth field with an array of 100 spl times 100 sensors, adaptive sampling can reduce the energy consumption by roughly a factor of 10 while providing the same accuracy achievable if all sensors were activated."
]
}
|
1310.4284
|
1826471959
|
Due to non-homogeneous spread of sunlight, sensing nodes possess non-uniform energy budgets in rechargeable Wireless Sensor Networks (WSNs). An energy-aware workload distribution strategy is therefore necessary to achieve good data accuracy subject to energy-neutral operation. Recently proposed signal approximation strategies assume uniform sampling and fail to ensure energy-neutral operation in rechargeable wireless sensor networks. We propose EAST (Energy Aware Sparse approximation Technique), which approximates a signal by adapting sensor node sampling workload according to solar energy availability. To the best of our knowledge, we are the first to propose sparse approximation to model energy-aware workload distribution in rechargeable WSNs. Experimental results, using data from an outdoor WSN deployment, suggest that EAST significantly improves the approximation accuracy, offering approximately 50% higher sensor on-time. EAST requires the approximation error to be known beforehand to determine the number of measurements. However, it is not always possible to decide the accuracy a-priori. We improve EAST and propose EAST+, which, given only the energy budget of the nodes, computes the optimal number of measurements subject to energy-neutral operation.
|
Recently, we have extended the theory of compressive sensing to show that it can support non-uniform sampling @cite_23 @cite_1 . However, in this paper we choose sparse random projections over compressive sensing, since the decoding process of compressive sensing is computationally expensive: the complexity of decoding a @math data point vector is @math , whereas the decoding complexity of sparse random projections is as low as @math , where @math is the number of projections. In this paper the projections are generated locally without any coordination between the base station and the nodes, and the final signal recovery takes place at the resource-rich base station. In future work we seek to conduct the signal recovery at the resource-limited sensor nodes, for which a low-complexity decoder will be very useful.
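To make the idea of locally generated projections concrete, here is a minimal sketch of an Achlioptas-style sparse random projection in Python; it is not the exact construction used in the cited work. The sparsity parameter s, the shared seed, the function name and the toy signal are all illustrative assumptions; the point is only that, given a common seed, every node can compute its own entries of the projection matrix without coordinating with the base station.

import numpy as np

def sparse_random_projection(n, k, s=3, seed=0):
    # k x n projection matrix with entries +sqrt(s), 0, -sqrt(s), taken with
    # probabilities 1/(2s), 1 - 1/s, 1/(2s) (Achlioptas-style sparse projections).
    rng = np.random.default_rng(seed)
    u = rng.random((k, n))
    phi = np.zeros((k, n))
    phi[u < 1 / (2 * s)] = np.sqrt(s)
    phi[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return phi / np.sqrt(k)

# Toy example: n nodes holding a slowly varying (hence compressible) field,
# compressed into k measurements that are shipped to the base station.
n, k = 100, 20
x = np.sin(np.linspace(0, 3, n))      # hypothetical sensor readings
phi = sparse_random_projection(n, k)
y = phi @ x                           # measurements available for decoding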
|
{
"cite_N": [
"@cite_1",
"@cite_23"
],
"mid": [
"2144364471",
"2093157150"
],
"abstract": [
"In this paper, we consider the problem of using wireless sensor networks (WSNs) to measure the temporal-spatial profile of some physical phenomena. We base our work on two observations. First, most physical phenomena are compressible in some transform domain basis. Second, most WSNs have some form of heterogeneity. Given these two observations, we propose a nonuniform compressive sensing method to improve the performance of WSNs by exploiting both compressibility and heterogeneity. We apply our proposed method to real WSN data sets. We find that our method can provide a more accurate temporal-spatial profile for a given energy budget compared with other sampling methods.",
"In this paper, we consider the problem of using wireless sensor networks (WSNs) to measure the temporal-spatial profile of some physical phenomena. We base our work on two observations. Firstly, most physical phenomena are compressible in some transform domain basis. Secondly, most WSNs have some form of heterogeneity. Given these two observations, we propose a non-uniform compressive sensing method to improve the performance of WSNs by exploiting both compressibility and heterogeneity. We apply our proposed method to a real WSN data set. We find that our method can provide a more accurate temporal-spatial profile for a given energy budget compared with other sampling methods."
]
}
|
1310.4113
|
2949799415
|
We study the behavior of the entangled value of two-player one-round projection games under parallel repetition. We show that for any projection game @math of entangled value 1-eps < 1, the value of the @math -fold repetition of G goes to zero as O((1-eps^c)^k), for some universal constant c 1. Previously parallel repetition with an exponential decay in @math was only known for the case of XOR and unique games. To prove the theorem we extend an analytical framework recently introduced by Dinur and Steurer for the study of the classical value of projection games under parallel repetition. Our proof, as theirs, relies on the introduction of a simple relaxation of the entangled value that is perfectly multiplicative. The main technical component of the proof consists in showing that the relaxed value remains tightly connected to the entangled value, thereby establishing the parallel repetition theorem. More generally, we obtain results on the behavior of the entangled value under products of arbitrary (not necessarily identical) projection games. Relating our relaxed value to the entangled value is done by giving an algorithm for converting a relaxed variant of quantum strategies that we call "vector quantum strategy" to a quantum strategy. The algorithm is considerably simpler in case the bipartite distribution of questions in the game has good expansion properties. When this is not the case, rounding relies on a quantum analogue of Holenstein's correlated sampling lemma which may be of independent interest. Our "quantum correlated sampling lemma" generalizes results of van Dam and Hayden on universal embezzlement.
|
After the completion of this work, two new results established an exponential parallel repetition theorem for two-player one-round games with entangled players in which the distribution on questions is a product distribution. In @cite_16 it is shown that the entangled value of games in which the distribution on questions is uniform decreases as @math . Very recently, @cite_15 extended the result to arbitrary product distributions on the questions, while also removing the dependence on the number of questions: they obtained the bound @math . Both results are based on the use of information-theoretic techniques. They are incomparable to ours, as they apply to games in which the acceptance predicate is general but the input distribution is required to be a product distribution. In addition, both bounds above have a dependence on the number of answers in the game; while for the case of the classical value such a dependence is necessary @cite_33 , for the entangled value it is not yet known whether it can be avoided.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_33"
],
"mid": [
"2952499036",
"2136241552",
"1937872493"
],
"abstract": [
"We show a parallel repetition theorem for the entangled value @math of any two-player one-round game @math where the questions @math to Alice and Bob are drawn from a product distribution on @math . We show that for the @math -fold product @math of the game @math (which represents the game @math played in parallel @math times independently), @math , where @math and @math represent the sets from which the answers of Alice and Bob are drawn.",
"In a two-player game, two cooperating but non communicating players, Alice and Bob, receive inputs taken from a probability distribution. Each of them produces an output and they win the game if they satisfy some predicate on their inputs outputs. The entangled value ω *(G) of a game G is the maximum probability that Alice and Bob can win the game if they are allowed to share an entangled state prior to receiving their inputs.",
"We show that no fixed number of parallel repetitions suffices in order to reduce the error in two-prover one-round proof systems from one constant to another. Our results imply that the recent bounds proven by Ran Raz (1995), showing that the number of rounds that suffice is inversely proportional to the answer length, are nearly best possible."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
In recent years, many researchers have studied how to aggregate data coming from disparate Social Web systems with the goal of enhancing the level of personalization offered to end users @cite_16 @cite_18 or of improving the description of Web resources @cite_28 .
|
{
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_16"
],
"mid": [
"2094526067",
"1966355874",
"1968130046"
],
"abstract": [
"The Social Web is successfully established and poised for continued growth. Web 2.0 applications such as blogs, bookmarking, music, photo and video sharing systems are among the most popular; and all of them incorporate a social aspect, i.e., users can easily share information with other users. But due to the diversity of these applications -- serving different aims -- the Social Web is ironically divided. Blog users who write about music for example, could possibly benefit from other users registered in other social systems operating within the same domain, such as a social radio station. Although these sites are two different and disconnected systems, offering distinct services to the users, the fact that domains are compatible could benefit users from both systems with interesting and multi-faceted information. In this paper we propose to automatically establish social links between distinct social systems through cross-tagging, i.e., enriching a social system with the tags of other similar social system(s). Since tags are known for increasing the prediction quality of recommender systems (RS), we propose to quantitatively evaluate the extent to which users can benefit from cross-tagging by measuring the impact of different cross-tagging approaches on tag-aware RS for personalized resource recommendations. We conduct experiments in real world data sets and empirically show the effectiveness of our approaches.",
"In order to adapt functionality to their individual users, systems need information about these users. The Social Web provides opportunities to gather user data from outside the system itself. Aggregated user data may be useful to address cold-start problems as well as sparse user profiles, but this depends on the nature of individual user profiles distributed on the Social Web. For example, does it make sense to re-use Flickr profiles to recommend bookmarks in Delicious? In this article, we study distributed form-based and tag-based user profiles, based on a large dataset aggregated from the Social Web. We analyze the completeness, consistency and replication of form-based profiles, which users explicitly create by filling out forms at Social Web systems such as Twitter, Facebook and LinkedIn. We also investigate tag-based profiles, which result from social tagging activities in systems such as Flickr, Delicious and StumbleUpon: to what extent do tag-based profiles overlap between different systems, what are the benefits of aggregating tag-based profiles. Based on these insights, we developed and evaluated the performance of several cross-system user modeling strategies in the context of recommender systems. The evaluation results show that the proposed methods solve the cold-start problem and improve recommendation quality significantly, even beyond the cold-start.",
"Recommender systems generally face the challenge of making predictions using only the relatively few user ratings available for a given domain. Cross-domain collaborative filtering (CF) aims to alleviate the effects of this data sparseness by transferring knowledge from other domains. We propose a novel algorithm, Tag-induced Cross-Domain Collaborative Filtering (TagCDCF), which exploits user-contributed tags that are common to multiple domains in order to establish the cross-domain links necessary for successful cross-domain CF. TagCDCF extends the state-of-the-art matrix factorization by introducing a constraint involving tag-based similarities between pairs of users and pairs of items across domains. The method requires no common users or items across domains. Using two publicly available CF data sets as different domains, we experimentally demonstrate that TagCDCF substantially outperforms other state-of-the-art single domain CF and cross-domain CF approaches. Additional experiments show that TagCDCF addresses data sparseness and illustrate the influence of the number of tags used by users in both domains."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
To the best of our knowledge, one of the first approaches aimed at merging data residing on different folksonomies is the already mentioned work of @cite_4 .
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2069692868"
],
"abstract": [
"As the popularity of the web increases, particularly the use of social networking sites and style sharing platforms, users are becoming increasingly connected, sharing more and more information, resources, and opinions. This vast array of information presents unique opportunities to harvest knowledge about user activities and interests through the exploitation of large-scale, complex systems. Communal tagging sites, and their respective folksonomies, are one example of such a complex system, providing huge amounts of information about users, spanning multiple domains of interest. However, the current Web infrastructure provides no mechanism for users to consolidate and exploit this information since it is spread over many desperate and unconnected resources. In this paper we compare user tag-clouds from multiple folksonomies to: (a) show how they tend to overlap, regardless of the focus of the folksonomy (b) demonstrate how this comparison helps finding and aligning the user's separate identities, and (c) show that cross-linking distributed user tag-clouds enriches users profiles. During this process, we find that significant user interests are often reflected in multiple Web2.0 profiles, even though they may operate over different domains. However, due to the free-form nature of tagging, some correlations are lost, a problem we address through the implementation and evaluation of a user tag filtering architecture."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
While the approach of @cite_4 focuses on the construction of global user profiles, other authors observe that the process of aggregating tagging data can be beneficial not only for producing more detailed user profiles but also for better describing the resources available in a Social Web system. A relevant example is provided in @cite_28 ; in that paper the authors consider users of blogs about music and users of Last.fm, a popular folksonomy whose resources are musical tracks. The ultimate goal of @cite_28 is to enrich each Social Web system by re-using tags already exploited in other environments. This activity has a twofold effect: it allows the automatic annotation of resources that were not originally labeled, and it enriches user profiles in such a way that user similarities can be computed more precisely.
|
{
"cite_N": [
"@cite_28",
"@cite_4"
],
"mid": [
"2094526067",
"2069692868"
],
"abstract": [
"The Social Web is successfully established and poised for continued growth. Web 2.0 applications such as blogs, bookmarking, music, photo and video sharing systems are among the most popular; and all of them incorporate a social aspect, i.e., users can easily share information with other users. But due to the diversity of these applications -- serving different aims -- the Social Web is ironically divided. Blog users who write about music for example, could possibly benefit from other users registered in other social systems operating within the same domain, such as a social radio station. Although these sites are two different and disconnected systems, offering distinct services to the users, the fact that domains are compatible could benefit users from both systems with interesting and multi-faceted information. In this paper we propose to automatically establish social links between distinct social systems through cross-tagging, i.e., enriching a social system with the tags of other similar social system(s). Since tags are known for increasing the prediction quality of recommender systems (RS), we propose to quantitatively evaluate the extent to which users can benefit from cross-tagging by measuring the impact of different cross-tagging approaches on tag-aware RS for personalized resource recommendations. We conduct experiments in real world data sets and empirically show the effectiveness of our approaches.",
"As the popularity of the web increases, particularly the use of social networking sites and style sharing platforms, users are becoming increasingly connected, sharing more and more information, resources, and opinions. This vast array of information presents unique opportunities to harvest knowledge about user activities and interests through the exploitation of large-scale, complex systems. Communal tagging sites, and their respective folksonomies, are one example of such a complex system, providing huge amounts of information about users, spanning multiple domains of interest. However, the current Web infrastructure provides no mechanism for users to consolidate and exploit this information since it is spread over many desperate and unconnected resources. In this paper we compare user tag-clouds from multiple folksonomies to: (a) show how they tend to overlap, regardless of the focus of the folksonomy (b) demonstrate how this comparison helps finding and aligning the user's separate identities, and (c) show that cross-linking distributed user tag-clouds enriches users profiles. During this process, we find that significant user interests are often reflected in multiple Web2.0 profiles, even though they may operate over different domains. However, due to the free-form nature of tagging, some correlations are lost, a problem we address through the implementation and evaluation of a user tag filtering architecture."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
Building on the findings of @cite_4 and @cite_28 , many researchers have studied how to derive real benefits from the aggregation of social data. A popular research trend consists of exploiting aggregated social data to generate high-quality recommendations. For instance, in @cite_16 , the authors suggest using tags present in different Social Web systems to establish links between items located in each system.
|
{
"cite_N": [
"@cite_28",
"@cite_16",
"@cite_4"
],
"mid": [
"2094526067",
"1968130046",
"2069692868"
],
"abstract": [
"The Social Web is successfully established and poised for continued growth. Web 2.0 applications such as blogs, bookmarking, music, photo and video sharing systems are among the most popular; and all of them incorporate a social aspect, i.e., users can easily share information with other users. But due to the diversity of these applications -- serving different aims -- the Social Web is ironically divided. Blog users who write about music for example, could possibly benefit from other users registered in other social systems operating within the same domain, such as a social radio station. Although these sites are two different and disconnected systems, offering distinct services to the users, the fact that domains are compatible could benefit users from both systems with interesting and multi-faceted information. In this paper we propose to automatically establish social links between distinct social systems through cross-tagging, i.e., enriching a social system with the tags of other similar social system(s). Since tags are known for increasing the prediction quality of recommender systems (RS), we propose to quantitatively evaluate the extent to which users can benefit from cross-tagging by measuring the impact of different cross-tagging approaches on tag-aware RS for personalized resource recommendations. We conduct experiments in real world data sets and empirically show the effectiveness of our approaches.",
"Recommender systems generally face the challenge of making predictions using only the relatively few user ratings available for a given domain. Cross-domain collaborative filtering (CF) aims to alleviate the effects of this data sparseness by transferring knowledge from other domains. We propose a novel algorithm, Tag-induced Cross-Domain Collaborative Filtering (TagCDCF), which exploits user-contributed tags that are common to multiple domains in order to establish the cross-domain links necessary for successful cross-domain CF. TagCDCF extends the state-of-the-art matrix factorization by introducing a constraint involving tag-based similarities between pairs of users and pairs of items across domains. The method requires no common users or items across domains. Using two publicly available CF data sets as different domains, we experimentally demonstrate that TagCDCF substantially outperforms other state-of-the-art single domain CF and cross-domain CF approaches. Additional experiments show that TagCDCF addresses data sparseness and illustrate the influence of the number of tags used by users in both domains.",
"As the popularity of the web increases, particularly the use of social networking sites and style sharing platforms, users are becoming increasingly connected, sharing more and more information, resources, and opinions. This vast array of information presents unique opportunities to harvest knowledge about user activities and interests through the exploitation of large-scale, complex systems. Communal tagging sites, and their respective folksonomies, are one example of such a complex system, providing huge amounts of information about users, spanning multiple domains of interest. However, the current Web infrastructure provides no mechanism for users to consolidate and exploit this information since it is spread over many desperate and unconnected resources. In this paper we compare user tag-clouds from multiple folksonomies to: (a) show how they tend to overlap, regardless of the focus of the folksonomy (b) demonstrate how this comparison helps finding and aligning the user's separate identities, and (c) show that cross-linking distributed user tag-clouds enriches users profiles. During this process, we find that significant user interests are often reflected in multiple Web2.0 profiles, even though they may operate over different domains. However, due to the free-form nature of tagging, some correlations are lost, a problem we address through the implementation and evaluation of a user tag filtering architecture."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
In @cite_6 the authors show how to merge the ratings provided by users on different Social Web platforms to compute reputation values (which are subsequently used to generate recommendations).
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1999509537"
],
"abstract": [
"Social internetworking systems are a significantly emerging new reality; they group together a set of social networks and allow their users to share resources, to acquire opinions and, more in general, to interact, even if these users belong to different social networks and, therefore, did not previously know each other. In this context the notions of trust and reputation play a very relevant role. These notions have been widely studied in the past in several contexts whereas they have been largely neglected in the social internetworking research; however, since this application field presents several peculiarities, the results found in other application contexts are not automatically valid here. This paper introduces a model to represent and handle trust and reputation in a social internetworking system and proposes an approach that exploits these parameters to provide users with suggestions about the most reliable persons they can contact or social networks they can register to."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
In @cite_18 the system Mypes is presented. Mypes supports the linkage, aggregation, alignment and semantic enrichment of user profiles available in various Social Web systems, such as Flickr, Delicious and Facebook.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"1966355874"
],
"abstract": [
"In order to adapt functionality to their individual users, systems need information about these users. The Social Web provides opportunities to gather user data from outside the system itself. Aggregated user data may be useful to address cold-start problems as well as sparse user profiles, but this depends on the nature of individual user profiles distributed on the Social Web. For example, does it make sense to re-use Flickr profiles to recommend bookmarks in Delicious? In this article, we study distributed form-based and tag-based user profiles, based on a large dataset aggregated from the Social Web. We analyze the completeness, consistency and replication of form-based profiles, which users explicitly create by filling out forms at Social Web systems such as Twitter, Facebook and LinkedIn. We also investigate tag-based profiles, which result from social tagging activities in systems such as Flickr, Delicious and StumbleUpon: to what extent do tag-based profiles overlap between different systems, what are the benefits of aggregating tag-based profiles. Based on these insights, we developed and evaluated the performance of several cross-system user modeling strategies in the context of recommender systems. The evaluation results show that the proposed methods solve the cold-start problem and improve recommendation quality significantly, even beyond the cold-start."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
The approaches presented in this section neglect social relationships, whereas in our approach the social and tagging behavior of a user play an equally relevant role in inferring user preferences. For these reasons, in this paper we do not investigate new techniques for recommending items or tags in social systems (which is the core of another research line @cite_12 ), but rather aim at finding potential forms of correlation between the social and tagging behavior of a user.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2035346118"
],
"abstract": [
"Matrix Factorization techniques have been successfully applied to raise the quality of suggestions generated by Collaborative Filtering Systems (CFSs). Traditional CFSs based on Matrix Factorization operate on the ratings provided by users and have been recently extended to incorporate demographic aspects such as age and gender. In this paper we propose to merge CFS based on Matrix Factorization and information regarding social friendships in order to provide users with more accurate suggestions and rankings on items of their interest. The proposed approach has been evaluated on a real-life online social network; the experimental results show an improvement against existing CFSs. A detailed comparison with related literature is also present."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
To the best of our knowledge, the first attempt to match user identities was proposed in @cite_32 . In that paper, the authors performed an in-depth analysis of 25 adaptive systems with the goal of identifying which user profile attributes recurred frequently across systems; the outcome of this analysis was the isolation of attributes such as username, name, location and e-mail address.
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2056835025"
],
"abstract": [
"Currently, there is an increasing demand for user-adaptive systems for various purposes in many different domains. Typically, personalisation in information systems occurs separately within each system. The recent trends in user modeling rely on cross-system personalisation, i.e., the opportunity to share information across multiple information systems in order to improve user adaptation. Cooperation among systems in order to exchange user model knowledge is a complex task. This paper addresses a key challenge for cross-system personalisation which is often taken as a starting assumption, i.e., user identification. In this paper, we describe the conceptualization and implementation of a framework that provides a common base for user identification for cross-system personalisation among web-based user-adaptive systems. However, the framework can be easily adopted in different working environments and for different purposes. The framework represents a hybrid approach which draws parallels both from centralized and decentralized solutions for user modeling. To perform user identification, we propose to exploit a set of identification properties that are combined using an identification algorithm."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
More recently, other authors have studied the problem of user identification in the context of Social Networks. One of the first studies was proposed in @cite_11 . In that paper the authors focused on Facebook and StudiVZ (http://studivz.net) and investigated which profile attributes can be used to identify users.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2170379762"
],
"abstract": [
"Today, more and more people have their virtual identities on the web. It is common that people are users of more than one social network and also their friends may be registered on multiple websites. A facility to aggregate our online friends into a single integrated environment would enable the user to keep up-to-date with their virtual contacts more easily, as well as to provide improved facility to search for people across different websites. In this paper, we propose a method to identify users based on profile matching. We use data from two popular social networks to study the similarity of profile definition. We evaluate the importance of fields in the web profile and develop a profile comparison tool. We demonstrate the effectiveness and efficiency of our tool in identifying and consolidating duplicated users on different websites."
]
}
|
1310.4399
|
2152608246
|
In this work we present an in-depth analysis of the user behaviors on different Social Sharing systems. We consider three popular platforms, Flickr, Delicious and StumbleUpon, and, by combining techniques from social network analysis with techniques from semantic analysis, we characterize the tagging behavior as well as the tendency to create friendship relationships of the users of these platforms. The aim of our investigation is to see if (and how) the features and goals of a given Social Sharing system reflect on the behavior of its users and, moreover, if there exists a correlation between the social and tagging behavior of the users. We report our findings in terms of the characteristics of user profiles according to three different dimensions: (i) intensity of user activities, (ii) tag-based characteristics of user profiles, and (iii) semantic characteristics of user profiles.
|
In @cite_15 , the authors studied 12 different Social Web systems (such as Delicious, Flickr and YouTube) with the goal of finding a mapping among the different user accounts; such a mapping can be obtained by means of a traditional search engine.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"1496564895"
],
"abstract": [
"One of the most interesting challenges in the area of social computing and social media analysis is the so-called community analysis. A well known barrier in cross-community (multiple website) analysis is the disconnectedness of these websites. In this paper, our aim is to provide evidence on the existence of a mapping among identities across multiple communities, providing a method for connecting these websites. Our studies have shown that simple, yet effective approaches, which leverage social media’s collective patterns can be utilized to find such a mapping. The employed methods successfully reveal this mapping with 66 accuracy."
]
}
|
1310.4389
|
2065749455
|
Humans describe images in terms of nouns and adjectives while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images versus their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this article we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that can possibly be used to interact with new generation devices (e.g., smartphones, Google Glass, livingroom devices). We demonstrate our system on a large number of real-world images with varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study.
|
Semantic-based region selection: Manipulation in the semantic space @cite_45 is a powerful tool, and a number of approaches exist. An example is Photo Clip Art @cite_29 , which allows users to directly insert new semantic objects into existing images by retrieving suitable objects from a database. This work has been further extended to sketch-based image composition by automatically extracting and selecting suitable salient object candidates @cite_41 from Internet images @cite_3 @cite_26 @cite_47 . Carroll et al. enable perspective-aware image warps by using user-annotated lines as projective constraints. Cheng et al. analyze semantic object regions as well as layer relations according to user scribble input, enabling interesting interactions across repeating elements. Zhou et al. propose to reshape human image regions by fitting an appropriate @math d human model. Zheng et al. partially recover the @math d structure of man-made environments, enabling intuitive non-local editing. However, none of these methods attempts interactive verbally guided image parsing, which has the added difficulty of supporting verbal commands that provide only vague guidance cues.
|
{
"cite_N": [
"@cite_26",
"@cite_41",
"@cite_29",
"@cite_3",
"@cite_45",
"@cite_47"
],
"mid": [
"",
"2037954058",
"2134921974",
"2026019603",
"1970369748",
"2070173541"
],
"abstract": [
"",
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"We present a system for inserting new objects into existing photographs by querying a vast image-based object library, pre-computed using a publicly available Internet object database. The central goal is to shield the user from all of the arduous tasks typically involved in image compositing. The user is only asked to do two simple things: 1) pick a 3D location in the scene to place a new object; 2) select an object to insert using a hierarchical menu. We pose the problem of object insertion as a data-driven, 3D-based, context-sensitive object retrieval task. Instead of trying to manipulate the object to change its orientation, color distribution, etc. to fit the new image, we simply retrieve an object of a specified class that has all the required properties (camera pose, lighting, resolution, etc) from our large object library. We present new automatic algorithms for improving object segmentation and blending, estimating true 3D object size and orientation, and estimating scene lighting conditions. We also present an intuitive user interface that makes object insertion fast and simple even for the artistically challenged.",
"We present a system that composes a realistic picture from a simple freehand sketch annotated with text labels. The composed picture is generated by seamlessly stitching several photographs in agreement with the sketch and text labels; these are found by searching the Internet. Although online image search generates many inappropriate results, our system is able to automatically select suitable photographs to generate a high quality composition, using a filtering scheme to exclude undesirable images. We also provide a novel image blending algorithm to allow seamless image composition. Each blending result is given a numeric score, allowing us to find an optimal combination of discovered images. Experimental results show the method is very successful; we also evaluate our system using the results from two user studies.",
"We present a framework for generating content-adaptive macros that can transfer complex photo manipulations to new target images. We demonstrate applications of our framework to face, landscape, and global manipulations. To create a content-adaptive macro, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to learn the dependencies between image features and the parameters of each selection, brush stroke, and image processing operation in the macro. Although our approach is limited to learning manipulations where there is a direct dependency between image features and operation parameters, we show that our framework is able to learn a large class of the most commonly used manipulations using as few as 20 training demonstrations. Our framework also provides interactive controls to help macro authors and users generate training demonstrations and correct errors due to incorrect labeling or poor parameter estimation. We ask viewers to compare images generated using our content-adaptive macros with and without corrections to manually generated ground-truth images and find that they consistently rate both our automatic and corrected results as close in appearance to the ground truth. We also evaluate the utility of our proposed macro generation workflow via a small informal lab study with professional photographers. The study suggests that our workflow is effective and practical in the context of real-world photo editing.",
"We present a framework for interactively manipulating objects in a photograph using related objects obtained from internet images. Given an image, the user selects an object to modify, and provides keywords to describe it. Objects with a similar shape are retrieved and segmented from online images matching the keywords, and deformed to correspond with the selected object. By matching the candidate object and adjusting manipulation parameters, our method appropriately modifies candidate objects and composites them into the scene. Supported manipulations include transferring texture, color and shape from the matched object to the target in a seamless manner. We demonstrate the versatility of our framework using several inputs of varying complexity, for object completion, augmentation, replacement and revealing. Our results are evaluated using a user study. © 2012 Wiley Periodicals, Inc. (This work was performed while Chen Goldberg was a visiting researcher at Tsinghua University, Beijing, China.)"
]
}
|
1310.4389
|
2065749455
|
Humans describe images in terms of nouns and adjectives while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images versus their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this article we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that can possibly be used to interact with new generation devices (e.g., smartphones, Google Glass, livingroom devices). We demonstrate our system on a large number of real-world images with varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study.
|
Speech interface Speech interfaces are deployed when mouse-based interactions are infeasible or cumbersome. Although research on integrating speech interfaces into software started in the 1980s @cite_50 , such interfaces have only recently been widely deployed (e.g., Apple's Siri, PixelTone). However, most speech-interface research focuses on natural language processing, and to our knowledge there has been no prior work addressing image region selection through speech. The speech interface that most resembles our work is PixelTone, which allows users to attach object labels to scribble-based segments; these labels allow subsequent voice reference. Independently, we have developed hands-free parsing of an image into pixel-wise object attribute labels that correspond to human semantics. This provides a verbal option for selecting objects of interest and is potentially a powerful additional tool for speech interfaces.
|
{
"cite_N": [
"@cite_50"
],
"mid": [
"2135643110"
],
"abstract": [
"Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality. The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."
]
}
|
1310.3745
|
2949960673
|
Mixed linear regression involves the recovery of two (or more) unknown vectors from unlabeled linear measurements; that is, where each sample comes from exactly one of the vectors, but we do not know which one. It is a classic problem, and the natural and empirically most popular approach to its solution has been the EM algorithm. As in other settings, this is prone to bad local minima; however, each iteration is very fast (alternating between guessing labels, and solving with those labels). In this paper we provide a new initialization procedure for EM, based on finding the leading two eigenvectors of an appropriate matrix. We then show that with this, a re-sampled version of the EM algorithm provably converges to the correct vectors, under natural assumptions on the sampling distribution, and with nearly optimal (unimprovable) sample complexity. This provides not only the first characterization of EM's performance, but also much lower sample complexity as compared to both standard (randomly initialized) EM, and other methods for this problem.
|
A quite similar problem that has attracted extensive attention is subspace clustering, where the goal is to learn an unknown number of linear subspaces of varying dimensions from sample points. Putting our problem in this setting, each sample @math is a vector in @math ; the points from @math correspond to one @math -dimensional subspace, and those from @math to another @math -dimensional subspace. Note that this makes for a very hard instance of subspace clustering: not only are the dimensions of each subspace very high (only one less than ambient), but the projections of the points onto the first @math coordinates are exactly the same. Even without the latter restriction, a typical method @cite_4 , @cite_7 -- as an example -- requires @math to have a unique solution.
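To make the alternating structure concrete, the following is a minimal, hedged sketch of the EM-style loop that the abstract describes: assign each sample to the closer regressor, then re-fit each group by least squares. The random initialization, function names, and synthetic data are illustrative only; the paper's actual algorithm additionally uses a spectral initialization from the leading two eigenvectors of an appropriate matrix and re-sampling, both omitted here.

```python
import numpy as np

def em_mixed_linear_regression(X, y, iters=50, seed=0):
    """Alternating minimization for two-component mixed linear regression.

    Each sample satisfies y_i = <x_i, beta_1> or y_i = <x_i, beta_2>, with
    unknown labels. This is a plain, randomly initialized version of the
    EM-style loop: assign each sample to the closer line, then re-fit each
    line by least squares.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b1, b2 = rng.normal(size=d), rng.normal(size=d)
    for _ in range(iters):
        # E-step: assign each sample to the vector with the smaller residual.
        r1 = (y - X @ b1) ** 2
        r2 = (y - X @ b2) ** 2
        mask = r1 <= r2
        # M-step: least-squares re-fit on each group (skip empty groups).
        if mask.any():
            b1 = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
        if (~mask).any():
            b2 = np.linalg.lstsq(X[~mask], y[~mask], rcond=None)[0]
    return b1, b2

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
beta_a, beta_b = rng.normal(size=5), rng.normal(size=5)
labels = rng.integers(0, 2, size=400)
y = np.where(labels == 0, X @ beta_a, X @ beta_b)
est1, est2 = em_mixed_linear_regression(X, y)
```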
|
{
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"2165964979",
"2952285266"
],
"abstract": [
"We propose an algebraic geometric approach to the problem of estimating a mixture of linear subspaces from sample data points, the so-called generalized principal component analysis (GPCA) problem. In the absence of noise, we show that GPCA is equivalent to factoring a homogeneous polynomial whose degree is the number of subspaces and whose factors (roots) represent normal vectors to each subspace. We derive a formula for the number of subspaces n and provide an analytic solution to the factorization problem using linear algebraic techniques. The solution is closed form if and only if n spl les 4. In the presence of noise, we cast GPCA as a constrained nonlinear least squares problem and derive an optimal function from which the subspaces can be directly recovered using standard nonlinear optimization techniques. We apply GPCA to the motion segmentation problem in computer vision, i.e. the problem of estimating a mixture of motion models from 2D imagery.",
"In many real-world problems, we are dealing with collections of high-dimensional data, such as images, videos, text and web documents, DNA microarray data, and more. Often, high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories the data belongs to. In this paper, we propose and study an algorithm, called Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of subspaces and the distribution of data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm can be solved efficiently and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal with data nuisances, such as noise, sparse outlying entries, and missing entries, directly by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering."
]
}
|
1310.3452
|
2951843153
|
We propose a new model, together with advanced optimization, to separate a thick scattering media layer from a single natural image. It is able to handle challenging underwater scenes and images taken in fog and sandstorm, both of which suffer from significantly reduced visibility. Our method addresses the critical issue -- that is, originally unnoticeable impurities will be greatly magnified after removing the scattering media layer -- with transmission-aware optimization. We introduce non-local structure-aware regularization to properly constrain transmission estimation without introducing halo artifacts. A selective-neighbor criterion is presented to convert the unconventional constrained optimization problem to an unconstrained one that can be efficiently solved.
|
Central to visual restoration from scattering media is transmission estimation. On the hardware side, polarizers were used during picture taking to help estimate part of the medium transmission @cite_16 or to augment visibility for underwater vision @cite_6 . 3D scene models were used in @cite_10 to guide transmission estimation.
|
{
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_6"
],
"mid": [
"1998363013",
"2081418206",
"2145213600"
],
"abstract": [
"We present an approach for easily removing the effects of haze from passively acquired images. Our approach is based on the fact that usually the natural illuminating light scattered by atmospheric particles (airlight) is partially polarized. Optical filtering alone cannot remove the haze effects, except in restricted situations. Our method, however, stems from physics-based analysis that works under a wide range of atmospheric and viewing conditions, even if the polarization is low. The approach does not rely on specific scattering models such as Rayleigh scattering and does not rely on the knowledge of illumination directions. It can be used with as few as two images taken through a polarizer at different orientations. As a byproduct, the method yields a range map of the scene, which enables scene rendering as if imaged from different viewpoints. It also yields information about the atmospheric particles. We present experimental results of complete dehazing of outdoor scenes, in far-from-ideal conditions for polarization filtering. We obtain a great improvement of scene contrast and correction of color.",
"In this paper, we introduce a novel system for browsing, enhancing, and manipulating casual outdoor photographs by combining them with already existing georeferenced digital terrain and urban models. A simple interactive registration process is used to align a photograph with such a model. Once the photograph and the model have been registered, an abundance of information, such as depth, texture, and GIS data, becomes immediately available to our system. This information, in turn, enables a variety of operations, ranging from dehazing and relighting the photograph, to novel view synthesis, and overlaying with geographic information. We describe the implementation of a number of these applications and discuss possible extensions. Our results show that augmenting photographs with already available 3D models of the world supports a wide variety of new ways for us to experience and interact with our everyday snapshots.",
"Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach which easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation. We show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process, to recover a good visibility image of the object. The algorithm is based on a couple of images taken through a polarizer at different orientations. As a by product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea using a system we built. We obtained great improvement of scene contrast and color correction, and nearly doubled the underwater visibility range."
]
}
|
1310.3452
|
2951843153
|
We propose a new model, together with advanced optimization, to separate a thick scattering media layer from a single natural image. It is able to handle challenging underwater scenes and images taken in fog and sandstorm, both of which suffer from significantly reduced visibility. Our method addresses the critical issue -- that is, originally unnoticeable impurities will be greatly magnified after removing the scattering media layer -- with transmission-aware optimization. We introduce non-local structure-aware regularization to properly constrain transmission estimation without introducing halo artifacts. A selective-neighbor criterion is presented to convert the unconventional constrained optimization problem to an unconstrained one that can be efficiently solved.
|
Single-image software solutions are also popular @cite_7 @cite_14 @cite_15 @cite_18 @cite_1 . They are generally based on priors on transmission and scene radiance. Tan @cite_15 developed a method mainly based on the observation that images with enhanced visibility have higher contrast and that airlight depends on the distance to the viewer. Fattal @cite_18 regarded transmission and surface shading (reflection) as locally uncorrelated in a hazy image; Independent Component Analysis (ICA) was employed to estimate scene albedo and medium transmission. A dark-channel prior was proposed in @cite_1 to initialize transmission estimation, followed by refinement through soft matting.
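As a concrete illustration of the transmission priors discussed above, here is a minimal numpy sketch of a dark-channel-style initial transmission estimate under the standard haze model I(x) = J(x) t(x) + A (1 - t(x)). The patch size, omega value, and the assumed airlight are illustrative; the soft-matting refinement and the paper's own transmission-aware optimization are not reproduced here.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local square patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Dark-channel-style initial transmission: t = 1 - omega * dark(I / A)."""
    normalized = img / np.maximum(airlight, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

# Toy usage on a random "hazy" image with an assumed (illustrative) airlight.
hazy = np.random.rand(64, 64, 3)
A = np.array([0.9, 0.9, 0.95])
t = estimate_transmission(hazy, A)
```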
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_15"
],
"mid": [
"",
"2097900287",
"2003709967",
"2028990532",
"2114867966"
],
"abstract": [
"",
"Images of outdoor scenes captured in bad weather suffer from poor contrast. Under bad weather conditions, the light reaching a camera is severely scattered by the atmosphere. The resulting decay in contrast varies across the scene and is exponential in the depths of scene points. Therefore, traditional space invariant image processing techniques are not sufficient to remove weather effects from images. We present a physics-based model that describes the appearances of scenes in uniform bad weather conditions. Changes in intensities of scene points under different weather conditions provide simple constraints to detect depth discontinuities in the scene and also to compute scene structure. Then, a fast algorithm to restore scene contrast is presented. In contrast to previous techniques, our weather removal algorithm does not require any a priori scene structure, distributions of scene reflectances, or detailed knowledge about the particular weather condition. All the methods described in this paper are effective under a wide range of weather conditions including haze, mist, fog, and conditions arising due to other aerosols. Further, our methods can be applied to gray scale, RGB color, multispectral and even IR images. We also extend our techniques to restore contrast of scenes with moving objects, captured using a video camera.",
"Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from \"bad\" weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics. Next, we identify effects caused by bad weather that can be turned to our advantage. Since the atmosphere modulates the information carried from a scene point to the observer it can be viewed as a mechanism of visual information coding. Based on this observation, we develop models and methods for recovering pertinent scene properties, such as three-dimensional structure, from images taken under poor weather conditions.",
"In this paper we present a new method for estimating the optical transmission in hazy scenes given a single input image. Based on this estimation, the scattered light is eliminated to increase scene visibility and recover haze-free scene contrasts. In this new approach we formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows us to resolve ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are locally statistically uncorrelated. A similar principle is used to estimate the color of the haze. Results demonstrate the new method abilities to remove the haze layer as well as provide a reliable transmission estimate which can be used for additional applications such as image refocusing and novel view synthesis.",
"Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images."
]
}
|
1310.3404
|
2135914354
|
Coroutines and events are two common abstractions for writing concurrent programs. Because coroutines are often more convenient, but events more portable and efficient, it is natural to want to translate the former into the latter. CPC is such a source-to-source translator for C programs, based on a partial conversion into continuation-passing style (CPS conversion) of functions annotated as cooperative. In this article, we study the application of the CPC translator to QEMU, an open-source machine emulator which also uses annotated coroutine functions for concurrency. We first propose a new type of annotations to identify functions which never cooperate, and we introduce CoroCheck, a tool for the static analysis and inference of cooperation annotations. Then, we improve the CPC translator, defining CPS conversion as a calling convention for the C language, with support for indirect calls to CPS-converted function through function pointers. Finally, we apply CoroCheck and CPC to QEMU (750 000 lines of C code), fixing hundreds of missing annotations and comparing performance of the translated code with existing implementations of coroutines in QEMU. Our work shows the importance of static annotation checking to prevent actual concurrency bugs, and demonstrates that CPS conversion is a flexible, portable, and efficient compilation technique, even for very large programs written in an imperative language.
|
The former approach is best illustrated by Concurrent ML constructs @cite_3 , implemented on top of SML/NJ's first-class continuations, or by the way coroutines are typically implemented in Scheme using the call/cc operator @cite_25 . More recently, Scala uses first-class delimited continuations to implement concurrency primitives @cite_41 @cite_39 . Anton and Thiemann build pure OCaml coroutines @cite_21 on top of Kiselyov's delimcc library for delimited continuations @cite_27 .
|
{
"cite_N": [
"@cite_41",
"@cite_21",
"@cite_3",
"@cite_39",
"@cite_27",
"@cite_25"
],
"mid": [
"2133051483",
"1883975342",
"2613691754",
"2021978684",
"2137235328",
"2004105509"
],
"abstract": [
"We describe the implementation of first-class polymorphic delimited continuations in the programming language Scala. We use Scala's pluggable typing architecture to implement a simple type and effect system, which discriminates expressions with control effects from those without and accurately tracks answer type modification incurred by control effects. To tackle the problem of implementing first-class continuations under the adverse conditions brought upon by the Java VM, we employ a selective CPS transform, which is driven entirely by effect-annotated types and leaves pure code in direct style. Benchmarks indicate that this high-level approach performs competitively.",
"Starting from reduction semantics for several styles of coroutines from the literature, we apply Danvy's method to obtain equivalent functional implementations (definitional interpreters) for them. By applying existing type systems for programs with continuations, we obtain sound type systems for coroutines through the translation. The resulting type systems are similar to earlier hand-crafted ones. As a side product, we obtain implementations for these styles of coroutines in OCaml.",
"",
"There is an impedance mismatch between message-passing concurrency and virtual machines, such as the JVM. VMs usually map their threads to heavyweight OS processes. Without a lightweight process abstraction, users are often forced to write parts of concurrent applications in an event-driven style which obscures control flow, and increases the burden on the programmer. In this paper we show how thread-based and event-based programming can be unified under a single actor abstraction. Using advanced abstraction mechanisms of the Scala programming language, we implement our approach on unmodified JVMs. Our programming model integrates well with the threading model of the underlying VM.",
"We describe the first implementation of multi-prompt delimited control operators in OCaml that is direct in that it captures only the needed part of the control stack. The implementation is a library that requires no changes to the OCaml compiler or run-time, so it is perfectly compatible with existing OCaml source and binary code. The library has been in fruitful practical use since 2006. We present the library as an implementation of an abstract machine derived by elaborating the definitional machine. The abstract view lets us distill a minimalistic API, scAPI, sufficient for implementing multi-prompt delimited control. We argue that a language system that supports exception and stack-overflow handling supports scAPI. With byte- and native-code OCaml systems as two examples, our library illustrates how to use scAPI to implement multi-prompt delimited control in a typed language. The approach is general and has been used to add multi-prompt delimited control to other existing language systems.",
"Abstract Continuations, when made available to the programmer as first class objects, provide a general control abstraction for sequential computation. The power of first class continuations is demonstrated by implementing a variety of coroutine mechanisms using only continuations and functional abstraction. The importance of general abstraction mechanisms such as continuations is discussed."
]
}
|
1310.3404
|
2135914354
|
Coroutines and events are two common abstractions for writing concurrent programs. Because coroutines are often more convenient, but events more portable and efficient, it is natural to want to translate the former into the latter. CPC is such a source-to-source translator for C programs, based on a partial conversion into continuation-passing style (CPS conversion) of functions annotated as cooperative. In this article, we study the application of the CPC translator to QEMU, an open-source machine emulator which also uses annotated coroutine functions for concurrency. We first propose a new type of annotations to identify functions which never cooperate, and we introduce CoroCheck, a tool for the static analysis and inference of cooperation annotations. Then, we improve the CPC translator, defining CPS conversion as a calling convention for the C language, with support for indirect calls to CPS-converted function through function pointers. Finally, we apply CoroCheck and CPC to QEMU (750 000 lines of C code), fixing hundreds of missing annotations and comparing performance of the translated code with existing implementations of coroutines in QEMU. Our work shows the importance of static annotation checking to prevent actual concurrency bugs, and demonstrates that CPS conversion is a flexible, portable, and efficient compilation technique, even for very large programs written in an imperative language.
|
Explicit translation into continuation-passing style, often encapsulated within a monad, is used in languages lacking first-class continuations. In Haskell, Claessen proposes a monad transformer yielding a concurrent version of existing monads @cite_17 . Li and Zdancewic also use a monadic approach to build event-driven network servers @cite_13 . In OCaml, Vouillon's Lwt @cite_16 provides a lightweight alternative to native threads. The asynchronous model in F# is implemented with a localized continuation-passing translation of control flow and heap-based allocation of closures, using three continuations for success, exceptions, and cancellation @cite_11 .
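The following toy Python sketch illustrates the general idea behind such a continuation-passing translation of a cooperative function: the cooperation point is replaced by registering the rest of the computation as a callback with a small scheduler. The names and the scheduler are hypothetical, and this is not CPC's or Lwt's actual output; it only shows why converted code no longer needs a native stack per thread.

```python
from collections import deque

ready = deque()          # minimal event loop: a queue of pending continuations

def cpc_yield(cont):
    """Cooperation point: park the continuation and return to the scheduler."""
    ready.append(cont)

def count_cps(name, n, k):
    """CPS-converted version of a threaded-style loop
       (for i in range(n): print(...); yield): the loop body becomes a
       continuation that re-schedules itself until the counter runs out."""
    def step(i):
        if i == n:
            k()                              # loop finished: call the continuation
        else:
            print(f"{name}: {i}")
            cpc_yield(lambda: step(i + 1))   # cooperate, then resume with i + 1
    step(0)

def run():
    while ready:
        ready.popleft()()

# Two cooperative "threads" interleaved by the scheduler.
count_cps("a", 3, lambda: print("a done"))
count_cps("b", 3, lambda: print("b done"))
run()
```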
|
{
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_11",
"@cite_17"
],
"mid": [
"2062177228",
"1976194690",
"1587375298",
"2014257084"
],
"abstract": [
"We present a cooperative thread library for Objective Caml. The library is entirely written in Objective Caml and does not rely on any external C function. Programs involving threads are written in a monadic style. This makes it possible to write threaded code almost as regular ML code, even though it has a different semantics. Cooperative threads are especially well suited for concurrent network applications, where threads perform little computation and spend most of their time waiting for input or output, at which time other threads can run. This library has been successfully used in the Unison file synchronizer and the Ocsigen Web server.",
"This paper proposes to combine two seemingly opposed programming models for building massively concurrent network services: the event-driven model and the multithreaded model. The result is a hybrid design that offers the best of both worlds--the ease of use and expressiveness of threads and the flexibility and performance of events. This paper shows how the hybrid model can be implemented entirely at the application level using concurrency monads in Haskell, which provides type-safe abstractions for both events and threads. This approach simplifies the development of massively concurrent software in a way that scales to real-world network services. The Haskell implementation supports exceptions, symmetrical multiprocessing, software transactional memory, asynchronous I O mechanisms and application-level network protocol stacks. Experimental results demonstrate that this monad-based approach has good performance: the threads are extremely lightweight (scaling to ten million threads), and the I O performance compares favorably to that of Linux NPTL. tens of thousands of simultaneous, mostly-idle client connections. Such massively-concurrent programs are difficult to implement, especially when other requirements, such as high performance and strong security, must also be met.",
"We describe the asynchronous programming model in F#, and its applications to reactive, parallel and concurrent programming. The key feature combines a core language with a non-blocking modality to author lightweight asynchronous tasks, where the modality has control flow constructs that are syntactically a superset of the core language and are given an asynchronous semantic interpretation. This allows smooth transitions between synchronous and asynchronous code and eliminates callback-style treatments of inversion of control, without disturbing the foundation of CPU-intensive programming that allows F# to interoperate smoothly and compile efficiently. An adapted version of this approach has recently been announced for a future version of C#.",
"Without adding any primitives to the language, we define a concurrency monad transformer in Haskell. This allows us to add a limited form of concurrency to any existing monad. The atomic actions of the new monad are lifted actions of the underlying monad. Some extra operations, such as fork, to initiate new processes, are provided. We discuss the implementation, and use some examples to illustrate the usefulness of this construction."
]
}
|
1310.3404
|
2135914354
|
Coroutines and events are two common abstractions for writing concurrent programs. Because coroutines are often more convenient, but events more portable and efficient, it is natural to want to translate the former into the latter. CPC is such a source-to-source translator for C programs, based on a partial conversion into continuation-passing style (CPS conversion) of functions annotated as cooperative. In this article, we study the application of the CPC translator to QEMU, an open-source machine emulator which also uses annotated coroutine functions for concurrency. We first propose a new type of annotations to identify functions which never cooperate, and we introduce CoroCheck, a tool for the static analysis and inference of cooperation annotations. Then, we improve the CPC translator, defining CPS conversion as a calling convention for the C language, with support for indirect calls to CPS-converted function through function pointers. Finally, we apply CoroCheck and CPC to QEMU (750 000 lines of C code), fixing hundreds of missing annotations and comparing performance of the translated code with existing implementations of coroutines in QEMU. Our work shows the importance of static annotation checking to prevent actual concurrency bugs, and demonstrates that CPS conversion is a flexible, portable, and efficient compilation technique, even for very large programs written in an imperative language.
|
Deriving state machines from threaded-style code is as old as @cite_38 . Implementations have since been improved in multiple directions: as C preprocessor macros @cite_6 , as source-to-source transformations on C++ @cite_20 or Java @cite_44 programs, as a transformation on JVM bytecode @cite_43 , or as LLVM code blocks and macros based on GCC's nested functions @cite_29 .
|
{
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_6",
"@cite_44",
"@cite_43",
"@cite_20"
],
"mid": [
"2810731713",
"2115429665",
"2161566505",
"2054564983",
"1581908531",
"1487134436"
],
"abstract": [
"",
"This paper introduces AC, a set of language constructs for composable asynchronous IO in native languages such as C C++. Unlike traditional synchronous IO interfaces, AC lets a thread issue multiple IO requests so that they can be serviced concurrently, and so that long-latency operations can be overlapped with computation. Unlike traditional asynchronous IO interfaces, AC retains a sequential style of programming without requiring code to use multiple threads, and without requiring code to be \"stack-ripped\" into chains of callbacks. AC provides an \"async\" statement to identify opportunities for IO operations to be issued concurrently, a \"do..finish\" block that waits until any enclosed \"async\" work is complete, and a \"cancel\" statement that requests cancellation of unfinished IO within an enclosing \"do..finish\". We give an operational semantics for a core language. We describe and evaluate implementations that are integrated with message passing on the Barrelfish research OS, and integrated with asynchronous file and network IO on Microsoft Windows. We show that AC offers comparable performance to existing C C++ interfaces for asynchronous IO, while providing a simpler programming model.",
"Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.",
"The event-driven programming style is pervasive as an efficient method for interacting with the environment. Unfortunately, the event-driven style severely complicates program maintenance and understanding, as it requires each logical flow of control to be fragmented across multiple independent callbacks. We propose tasks as a new programming model for organizing event-driven programs. Tasks are a variant of cooperative multi-threading and allow each logical control flow to be modularized in the traditional manner, including usage of standard control mechanisms like procedures and exceptions. At the same time, by using method annotations, task-based programs can be automatically and modularly translated into efficient event-based code, using a form of continuation passing style (CPS) translation. A linkable scheduler architecture permits tasks to be used in many different contexts. We have instantiated our model as a backward-compatible extension to Java, called TaskJava. We illustrate the benefits of our language through a formalization in an extension to Featherweight Java, and through a case study based on an open-source web server.",
"This paper describes Kilim, a framework that employs a combination of techniques to help create robust, massively concurrent systems in mainstream languages such as Java: (i) ultra-lightweight, cooperatively-scheduled threads (actors), (ii) a message-passing framework (no shared memory, no locks) and (iii) isolation-aware messaging. Isolation is achieved by controlling the shape and ownership of mutable messages --- they must not have internal aliases and can only be owned by a single actor at a time. We demonstrate a static analysis built around isolation type qualifiers to enforce these constraints. Kilim comfortably scales to handle hundreds of thousands of actors and messages on modest hardware. It is fast as well --- task-switching is 1000x faster than Java threads and 60x faster than other lightweight tasking frameworks, and message-passing is 3x faster than Erlang (currently the gold standard for concurrency-orientedprogramming).",
"Tame is a new event-based system for managing concurrency in network applications. Code written with Tame abstractions does not suffer from the \"stack-ripping\" problem associated with other event libraries. Like threaded code, tamed code uses standard control flow, automatically-managed local variables, and modular interfaces between callers and callees. Tame's implementation consists of C++ libraries and a source-to-source translator; no platform-specific support or compiler modifications are required, and Tame induces little runtime overhead. Experience with Tame in real-world systems, including a popular commercial Web site, suggests it is easy to adopt and deploy."
]
}
|
1310.3404
|
2135914354
|
Coroutines and events are two common abstractions for writing concurrent programs. Because coroutines are often more convenient, but events more portable and efficient, it is natural to want to translate the former into the latter. CPC is such a source-to-source translator for C programs, based on a partial conversion into continuation-passing style (CPS conversion) of functions annotated as cooperative. In this article, we study the application of the CPC translator to QEMU, an open-source machine emulator which also uses annotated coroutine functions for concurrency. We first propose a new type of annotations to identify functions which never cooperate, and we introduce CoroCheck, a tool for the static analysis and inference of cooperation annotations. Then, we improve the CPC translator, defining CPS conversion as a calling convention for the C language, with support for indirect calls to CPS-converted function through function pointers. Finally, we apply CoroCheck and CPC to QEMU (750 000 lines of C code), fixing hundreds of missing annotations and comparing performance of the translated code with existing implementations of coroutines in QEMU. Our work shows the importance of static annotation checking to prevent actual concurrency bugs, and demonstrates that CPS conversion is a flexible, portable, and efficient compilation technique, even for very large programs written in an imperative language.
|
The constraint that only cooperative functions can call cooperative functions is a very natural and common one in concurrent systems. In functional languages with static type-checking, it is generally enforced by the monadic structure or by the type system itself @cite_17 @cite_16 @cite_41 . Interestingly enough, authors of similar systems for imperative languages commonly acknowledge that static checking would be preferable, but do not implement it @cite_32 @cite_29 . It seems that Kilim statically checks +@pausable+ annotations, although the authors do not mention it explicitly @cite_43 .
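As an illustration of the constraint itself, the sketch below shows the kind of whole-program check such an analysis performs, assuming a call graph and a set of functions already annotated as cooperative: any unannotated function that transitively reaches a cooperative one is reported. This is only a hypothetical fixed-point computation, not CoroCheck's implementation.

```python
def missing_cps_annotations(call_graph, annotated):
    """Return callers that reach a cooperative function but lack the annotation.

    call_graph: dict mapping a function name to the names it calls.
    annotated:  set of function names declared cooperative (cps).
    """
    cooperative = set(annotated)
    changed = True
    while changed:                       # fixed point: propagate "cooperative"
        changed = False                  # backwards along call edges
        for caller, callees in call_graph.items():
            if caller not in cooperative and cooperative & set(callees):
                cooperative.add(caller)
                changed = True
    return cooperative - set(annotated)  # cooperative in fact, but not annotated

# Toy example: handler -> read_packet -> cpc_io_wait (annotated cooperative).
graph = {
    "main": ["handler", "log"],
    "handler": ["read_packet"],
    "read_packet": ["cpc_io_wait"],
    "log": [],
    "cpc_io_wait": [],
}
print(missing_cps_annotations(graph, {"cpc_io_wait"}))
# -> {'read_packet', 'handler', 'main'} need the cooperative annotation
```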
|
{
"cite_N": [
"@cite_41",
"@cite_29",
"@cite_32",
"@cite_43",
"@cite_16",
"@cite_17"
],
"mid": [
"2133051483",
"2115429665",
"1521891776",
"1581908531",
"2062177228",
"2014257084"
],
"abstract": [
"We describe the implementation of first-class polymorphic delimited continuations in the programming language Scala. We use Scala's pluggable typing architecture to implement a simple type and effect system, which discriminates expressions with control effects from those without and accurately tracks answer type modification incurred by control effects. To tackle the problem of implementing first-class continuations under the adverse conditions brought upon by the Java VM, we employ a selective CPS transform, which is driven entirely by effect-annotated types and leaves pure code in direct style. Benchmarks indicate that this high-level approach performs competitively.",
"This paper introduces AC, a set of language constructs for composable asynchronous IO in native languages such as C C++. Unlike traditional synchronous IO interfaces, AC lets a thread issue multiple IO requests so that they can be serviced concurrently, and so that long-latency operations can be overlapped with computation. Unlike traditional asynchronous IO interfaces, AC retains a sequential style of programming without requiring code to use multiple threads, and without requiring code to be \"stack-ripped\" into chains of callbacks. AC provides an \"async\" statement to identify opportunities for IO operations to be issued concurrently, a \"do..finish\" block that waits until any enclosed \"async\" work is complete, and a \"cancel\" statement that requests cancellation of unfinished IO within an enclosing \"do..finish\". We give an operational semantics for a core language. We describe and evaluate implementations that are integrated with message passing on the Barrelfish research OS, and integrated with asynchronous file and network IO on Microsoft Windows. We show that AC offers comparable performance to existing C C++ interfaces for asynchronous IO, while providing a simpler programming model.",
"",
"This paper describes Kilim, a framework that employs a combination of techniques to help create robust, massively concurrent systems in mainstream languages such as Java: (i) ultra-lightweight, cooperatively-scheduled threads (actors), (ii) a message-passing framework (no shared memory, no locks) and (iii) isolation-aware messaging. Isolation is achieved by controlling the shape and ownership of mutable messages --- they must not have internal aliases and can only be owned by a single actor at a time. We demonstrate a static analysis built around isolation type qualifiers to enforce these constraints. Kilim comfortably scales to handle hundreds of thousands of actors and messages on modest hardware. It is fast as well --- task-switching is 1000x faster than Java threads and 60x faster than other lightweight tasking frameworks, and message-passing is 3x faster than Erlang (currently the gold standard for concurrency-orientedprogramming).",
"We present a cooperative thread library for Objective Caml. The library is entirely written in Objective Caml and does not rely on any external C function. Programs involving threads are written in a monadic style. This makes it possible to write threaded code almost as regular ML code, even though it has a different semantics. Cooperative threads are especially well suited for concurrent network applications, where threads perform little computation and spend most of their time waiting for input or output, at which time other threads can run. This library has been successfully used in the Unison file synchronizer and the Ocsigen Web server.",
"Without adding any primitives to the language, we define a concurrency monad transformer in Haskell. This allows us to add a limited form of concurrency to any existing monad. The atomic actions of the new monad are lifted actions of the underlying monad. Some extra operations, such as fork, to initiate new processes, are provided. We discuss the implementation, and use some examples to illustrate the usefulness of this construction."
]
}
|
1310.3404
|
2135914354
|
Coroutines and events are two common abstractions for writing concurrent programs. Because coroutines are often more convenient, but events more portable and efficient, it is natural to want to translate the former into the latter. CPC is such a source-to-source translator for C programs, based on a partial conversion into continuation-passing style (CPS conversion) of functions annotated as cooperative. In this article, we study the application of the CPC translator to QEMU, an open-source machine emulator which also uses annotated coroutine functions for concurrency. We first propose a new type of annotations to identify functions which never cooperate, and we introduce CoroCheck, a tool for the static analysis and inference of cooperation annotations. Then, we improve the CPC translator, defining CPS conversion as a calling convention for the C language, with support for indirect calls to CPS-converted function through function pointers. Finally, we apply CoroCheck and CPC to QEMU (750 000 lines of C code), fixing hundreds of missing annotations and comparing performance of the translated code with existing implementations of coroutines in QEMU. Our work shows the importance of static annotation checking to prevent actual concurrency bugs, and demonstrates that CPS conversion is a flexible, portable, and efficient compilation technique, even for very large programs written in an imperative language.
|
There is a long history of static analysis to enforce safety properties, in particular for real-world programs written in languages lacking a strong type system @cite_19 @cite_24 . However, most such tools require adding explicit annotations in ad-hoc domain-specific languages. A noteworthy exception is Dialyzer, a static analyser reusing the annotation format already found in the documentation of many Erlang programs @cite_12 .
|
{
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_12"
],
"mid": [
"1993255342",
"2097990218",
"2103704182"
],
"abstract": [
"Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, and testing, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and datastructures, and their compliance to a common specification language. This foundational article presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements.",
"CCured is a program transformation system that adds memory safety guarantees to C programs by verifying statically that memory errors cannot occur and by inserting run-time checks where static verification is insufficient.This paper addresses major usability issues in a previous version of CCured, in which many type casts required the use of pointers whose representation was expensive and incompatible with precompiled libraries. We have extended the CCured type inference algorithm to recognize and verify statically a large number of type casts; this goal is achieved by using physical subtyping and pointers with run-time type information to allow parametric and subtype polymorphism. In addition, we present a new instrumentation scheme that splits CCured's metadata into a separate data structure whose shape mirrors that of the original user data. This scheme allows instrumented programs to invoke external functions directly on the program's data without the use of a wrapper function.With these extensions we were able to use CCured on real-world security-critical network daemons and to produce instrumented versions without memory-safety vulnerabilities.",
"Currently most Erlang programs contain no or very little type information. This sometimes makes them unreliable, hard to use, and difficult to understand and maintain. In this paper we describe our experiences from using static analysis tools to gradually add type information to a medium sized Erlang application that we did not write ourselves: the code base of Wrangler. We carefully document the approach we followed, the exact steps we took, and discuss possible difficulties that one is expected to deal with and the effort which is required in the process. We also show the type of software defects that are typically brought forward, the opportunities for code refactoring and improvement, and the expected benefits from embarking in such a project. We have chosen Wrangler for our experiment because the process is better explained on a code base which is small enough so that the interested reader can retrace its steps, yet large enough to make the experiment quite challenging and the experiences worth writing about. However, we have also done something similar on large parts of Erlang OTP. The result can partly be seen in the source code of Erlang OTP R12B-3."
]
}
|
1310.3407
|
2950582402
|
One major bottleneck in the practical implementation of received signal strength (RSS) based indoor localization systems is the extensive deployment efforts required to construct the radio maps through fingerprinting. In this paper, we aim to design an indoor localization scheme that can be directly employed without building a full fingerprinted radio map of the indoor environment. By accumulating the information of localized RSSs, this scheme can also simultaneously construct the radio map with limited calibration. To design this scheme, we employ a source data set that possesses the same spatial correlation of the RSSs in the indoor environment under study. The knowledge of this data set is then transferred to a limited number of calibration fingerprints and one or several RSS observations with unknown locations, in order to perform direct localization of these observations using manifold alignment. We test two different source data sets, namely a simulated radio propagation map and the environment's plan coordinates. For moving users, we exploit the correlation of their observations to improve the localization accuracy. The online testing in two indoor environments shows that the plan coordinates achieve better results than the simulated radio maps, and a negligible degradation with 70-85% reduction in calibration load.
|
In @cite_9 @cite_2 , LANDMARC and LEASE propose an adaptive offset of the RSS variations using deployed reference sniffers. This approach adapts the radio map to environmental dynamics using real-time samples, but still requires an initial map to start with; moreover, it has been shown to be successful only with densely distributed reference sniffers. @cite_3 proposed LEMT, which learns the functional relationship between the initial map and real-time readings (again from deployed sniffers) using nonlinear regression analysis and model trees, then applies a nearest-neighbor-based method to find locations. LEMT requires fewer reference sniffers than LANDMARC and LEASE and accommodates RSS variation more effectively. However, LEMT requires extensive processing after each RSS sniffing period, since a huge number of model trees must be built in every period.
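For concreteness, the following is a minimal sketch of the weighted nearest-neighbor fingerprinting step that such schemes ultimately rely on: given calibration fingerprints (RSS vector, location) and an online RSS observation, the position is estimated as a distance-weighted average of the k closest fingerprints in signal space. The data, k value, and weighting are illustrative and do not reproduce LEMT's model-tree adaptation.

```python
import numpy as np

def knn_localize(fingerprints, locations, observation, k=3):
    """Weighted k-nearest-neighbor localization in RSS space.

    fingerprints: (m, a) array of calibration RSS vectors (a access points).
    locations:    (m, 2) array of the corresponding 2D coordinates.
    observation:  (a,) online RSS reading with unknown location.
    """
    d = np.linalg.norm(fingerprints - observation, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)            # closer fingerprints weigh more
    return (locations[idx] * w[:, None]).sum(axis=0) / w.sum()

# Toy radio map: 4 fingerprints over 3 access points.
fp = np.array([[-60, -70, -80],
               [-65, -62, -75],
               [-75, -68, -60],
               [-80, -74, -58]], dtype=float)
xy = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
print(knn_localize(fp, xy, np.array([-66.0, -64.0, -72.0])))
```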
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"2163993204",
"2103236029",
"2134440651"
],
"abstract": [
"Growing convergence among mobile computing devices and embedded technology sparks the development and deployment of \"context-aware\" applications, where location is the most essential context. We present LANDMARC, a location sensing prototype system that uses Radio Frequency Identification (RFID) technology for locating objects inside buildings. The major advantage of LANDMARC is that it improves the overall accuracy of locating objects by utilizing the concept of reference tags. Based on experimental analysis, we demonstrate that active RFID is a viable and cost-effective candidate for indoor location sensing. Although RFID is not designed for indoor location sensing, we point out three major features that should be added to make RFID technologies competitive in this new and growing market.",
"In wireless networks, a client's locations can be estimated using the signals received from various signal transmitters. Static fingerprint-based techniques are commonly used for location estimation, in which a radio map is built by calibrating signal-strength values in the offline phase. These values, compiled into deterministic or probabilistic models, are used for online localization. However, the radio map can be outdated when the signal-strength values change with time due to environmental dynamics, and repeated data calibration is infeasible or expensive. In this paper, we present a novel algorithm, known as LEMT (Location Estimation using Model Trees), to reconstruct a radio map using real-time signal- strength readings received at the reference points. This algorithm can take into account real-time signal-strength values at each time point and make use of the dependency between the estimated locations and reference points. We show that this technique can effectively accommodate the variations of signal strength over different time periods without the need to rebuild the radio maps repeatedly. We demonstrate the effectiveness of our proposed technique on realistic data sets collected from an 802.11b wireless network and a RFID-based network.",
"We present LEASE, a new system and framework for location estimation assisted by stationary emitters for indoor RF wireless networks. Unlike previous studies, we emphasize the deployment aspect of location estimation engines. Motivated thus, we present an adaptable infrastructure-based system that uses a small number of stationary emitters (SEs) and sniffers employed in a novel way to locate standard wireless clients in an enterprise. We present the components of the system and its architecture, and new non-parametric techniques for location estimation that work with a small number of SEs. Our techniques for location estimation can also be used in a client-based deployment. We present experimental results of using our techniques at two sites demonstrating the ability to perform location estimation with good accuracy in our new adaptable framework."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
There has been much previous work dedicated to technical solutions for depth perception issues and visual occlusion in 3D data visualizations. To name a few, a rich set of landmarks and context cues @cite_6 as well as shading and transparency @cite_8 both contribute to enhancing visual perception in the depth dimension while alleviating occlusion problems within overlapping structures. Focusing on strengthening depth perception, volumetric halos have been employed to improve the 3D legibility of visualized volume data @cite_1 . Different halos are introduced according to different ways of combining halos with the volume, and halos are also used to construct inconsistent lighting, which accentuates depth even further from another aspect.
|
{
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_8"
],
"mid": [
"2158707602",
"2147394532",
""
],
"abstract": [
"Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates.",
"Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: power-law spatial seating enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy",
""
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
@cite_13 give a comprehensive discussion of occlusion management in 3D visualization, focusing on reducing 3D occlusion. Occlusion management for visualization is a more general class of visibility problem in computer graphics, concerned with improving human perception in specialized visual tasks involving occlusion, size, and shape. These techniques have substantially improved the legibility of 3D data visualizations. In contrast, we investigate how to manipulate typical retinal variables in graphics perception to achieve better depth legibility.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2143203329"
],
"abstract": [
"While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the \"gaps\" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
Even direct volume rendering techniques often suffer from poor depth cues because the data sets commonly contain a large number of overlapping structures. With MIP (maximum intensity projection) rendering @cite_24 , however, little effort is required to gain a good understanding of the structures represented by high signal intensities. The cited algorithm adds two visual cues: occlusion revealing and depth-based color. In the first, the MIP color is modified in the presence of occluding objects made of the same material as the one at the point of maximum intensity, while in the second, the actual position of the shaded fragment is used to change its color via a supporting spherical map. In this paper, we explore depth enhancement in dense geometry visualizations by encoding depth information with various visual variables.
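As a rough illustration of the depth-cue idea (not the implementation of @cite_24), the following Python sketch computes an orthographic MIP along the z axis with numpy and tints each pixel by the depth at which the maximum intensity was found; the function name and the two-color ramp are hypothetical choices made only for this example.

```python
import numpy as np

def depth_colored_mip(volume, near_color, far_color):
    """Maximum intensity projection along z, tinted by the depth of the maximum.

    volume:     (nx, ny, nz) array of scalar intensities
    near_color: RGB triple used for samples close to the viewer
    far_color:  RGB triple used for samples far from the viewer
    """
    nz = volume.shape[2]
    depth_idx = volume.argmax(axis=2)          # index of the max sample along each ray
    intensity = volume.max(axis=2)             # classic MIP value
    t = depth_idx / max(nz - 1, 1)             # normalized depth in [0, 1]
    near = np.asarray(near_color, dtype=float)
    far = np.asarray(far_color, dtype=float)
    # Blend the two cue colors by depth, then scale by the MIP intensity.
    rgb = (1.0 - t)[..., None] * near + t[..., None] * far
    return rgb * intensity[..., None]

# Example: a random volume rendered with a warm-to-cool depth ramp.
vol = np.random.rand(64, 64, 128)
image = depth_colored_mip(vol, near_color=(1.0, 0.6, 0.2), far_color=(0.2, 0.4, 1.0))
```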
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1591049691"
],
"abstract": [
"The two most common methods for the visualization of volumetric data are Direct Volume Rendering (DVR) and Maximum Intensity Projection (MIP). Direct Volume Rendering is superior to MIP in providing a larger amount of properly shaded details, because it employs a more complex shading model together with the use of user-defined transfer functions. However, the generation of adequate transfer functions is a laborious and time costly task, even for expert users. As a consequence, medical doctors often use MIP because it does not require the definition of complex transfer functions and because it gives good results on contrasted images. Unfortunately, MIP does not allow to perceive depth ordering and therefore spatial context is lost. In this paper we present a new approach to MIP rendering that uses depth and simple color blending to disambiguate the ordering of internal structures, while maintaining most of the details visible through MIP. It is usually faster than DVR and only requires the transfer function used by MIP rendering."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
@cite_3 employ hatching strokes to communicate shape while using distance-encoded shadows to further enhance depth perception in their vascular structure visualization. In addition, they achieve real-time performance with a GPU-based hatching algorithm that is efficient for rendering complex tubular structures with emphasized depth. Similarly, we handle tubular shapes in our visualization scenario, but we aim to improve depth perception in much denser 3D tube geometries derived from human brain MRI data. We also seek to provide a cheaper interactive rendering solution on common multi-core CPUs than the GPU rendering they employ.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2132933030"
],
"abstract": [
"We present real-time vascular visualization methods, which extend on illustrative rendering techniques to particularly accentuate spatial depth and to improve the perceptive separation of important vascular properties such as branching level and supply area. The resulting visualization can and has already been used for direct projection on a patient's organ in the operation theater where the varying absorption and reflection characteristics of the surface limit the use of color. The important contributions of our work are a GPU-based hatching algorithm for complex tubular structures that emphasizes shape and depth as well as GPU-accelerated shadow-like depth indicators, which enable reliable comparisons of depth distances in a static monoscopic 3D visualization. In addition, we verify the expressiveness of our illustration methods in a large, quantitative study with 160 subjects"
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
Parallelization has been extensively harnessed in visualization scenarios where performance becomes a challenge. In @cite_17 , the authors developed a scalable and portable parallel visualization system based on augmenting VTK for efficiently visualizing large-scale time-varying data. Their system provides parallelism at both the task and pipeline levels and is primarily addressed to visualization programmers. Also at a system scale but even earlier, SCIRun @cite_11 offered task and data parallelism as a data-flow-based visualization system running on shared-memory multiprocessor machines. This system was later extended to support task parallelism on distributed-memory architectures @cite_0 . We present a lightweight parallelization method for large geometry visualization that uses existing facilities such as MPI and VTK instead of providing a fully featured system or an extended programming library.
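To make the data-parallel pattern concrete, a minimal sketch follows that partitions tube centerlines across MPI ranks and gathers per-rank results on a root process. It assumes the mpi4py binding and synthetic data, and is an illustration of the general scatter/gather pattern rather than the system of @cite_17 or the framework described in this paper.

```python
from mpi4py import MPI   # assumed Python binding; the cited systems use MPI from C++/VTK
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Root rank loads (here, fabricates) the tube centerlines and scatters chunks.
if rank == 0:
    tubes = [np.random.rand(100, 3) for _ in range(1000)]   # 1000 polylines of 100 points
    chunks = [tubes[i::size] for i in range(size)]           # round-robin split, one list per rank
else:
    chunks = None
local_tubes = comm.scatter(chunks, root=0)

# Each rank does its share of per-tube work (e.g., computing a depth key per tube).
local_keys = [float(t[:, 2].mean()) for t in local_tubes]

# Gather partial results back on the root for final ordering / compositing.
all_keys = comm.gather(local_keys, root=0)
if rank == 0:
    flat = sorted(k for part in all_keys for k in part)
    print(f"collected {len(flat)} depth keys from {size} ranks")
```

The script would typically be launched with something like `mpiexec -n 4 python script.py`.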
|
{
"cite_N": [
"@cite_0",
"@cite_11",
"@cite_17"
],
"mid": [
"1868106643",
"138679572",
"2126507263"
],
"abstract": [
"Building systems that alter program behavior during execution based on user-specified criteria (computational steering systems) has been a recent research topic, particularly among the high performance computing community. To enable a computational steering system with powerful visualization capabilities to run on distributed memory architectures, a distributed infrastructure (or runtime system) must first be built. This infrastructure would permit harnessing a variety of machines to collaborate on an interactive simulation. Building such an infrastructure requires strategies for coordinating execution across machines (concurrency control mechanisms), mechanisms for fast data transfer between machines, and mechanisms for user manipulation of remote execution. We are creating a distributed infrastructure for the SCIRun computational steering system. SCIRun, a scientific problem solving environment (PSE), provides the ability to interactively guide or steer a running computation. Initially designed for a shared memory multiprocessor, SCIRun is a tightly integrated, multi-threaded framework for composing scientific applications from existing or new components. High performance computing is needed to maintain interactivity for scientists and engineers running simulations. Extending such a performance-sensitive application toolkit to enable pieces of the computation to run on different machine architectures all within the same computation would prove very useful. Not only could many different machines execute this framework, but also several machines could be configured to work synergistically on computations.",
"",
"A significant unsolved problem in scientific visualization is how to efficiently visualize extremely large time-varying datasets. Using parallelism provides a promising solution. One drawback of this approach is the high overhead and specialized knowledge often required to create parallel visualization programs. In this paper, we present a parallel visualization system that is scalable, portable and encapsulates parallel programming details for its users. Our approach was to augment an existing visualization system, the visualization toolkit(VTK). Process and communication abstractions were added in order to support task, pipeline and data parallelism. The resulting system allows users to quickly write parallel visualization programs and avoid rewriting these programs when porting to new platforms. The performance of a collection of parallel visualization programs written using this system and run on both a cluster of SGI Origin 2000s and a Linux-based PC cluster is presented. In addition to showing the utility of our approach, the results offer a comparison of the performance of commodity-based computing clusters."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
Compared to system-level solutions, many more parallelization efforts for visualization focus on parallel rendering, ranging from photo-realistic rendering @cite_15 and volume rendering @cite_20 to parallel iso-surfacing @cite_21 . Among the large body of previous work specific to parallel polygon rendering, Crockett @cite_7 harnessed message-passing architectures for polygon rendering parallelism that reduces memory usage and network contention while overlapping computation and communication. He later also gave an overview of parallel rendering techniques from both hardware and software perspectives @cite_12 .
|
{
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"2043339287",
"2126053368",
"2752853835",
"",
"1987288184"
],
"abstract": [
"Applications such as real-time animation and scientific visualization demand high performance for rendering complex 3D abstract data models into 2D images. As large applications migrate to highly parallel supercomputers, how can we exploit the available parallelism to keep the rendering on the supercomputer? To answer this question, we developed a parallel polygon renderer for general-purpose MIMD distributed-memory message-passing systems. It exploits object-level and image-level parallelism, and can run on systems containing from one processor to a number bounded by the number of scan lines in the resulting image. Unlike earlier approaches, ours multiplexes the transformation and rasterization phases on the same machine. This reduces memory usage and network contention, and overlaps computation and communication. >",
"We show that it is feasible to perform interactive isosurfacing of very large rectilinear datasets with brute-force ray tracing on a conventional (distributed) shared-memory multiprocessor machine. Rather than generate geometry representing the isosurface and render with a z-buffer, for each pixel we trace a ray through a volume and do an analytic isosurface intersection computation. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large datasets on current high-end systems. Incorporating simple optimizations, such as volume bricking and a shallow hierarchy, enables interactive rendering (i.e. 10 frames per second) of the 1 GByte full resolution Visible Woman dataset on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.",
"A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid met al cooled fast breeder reactors.",
"",
"In computer graphics, rendering is the process by which an abstract description of a scene is converted to an image. When the scene is complex, or when high-quality images or high frame rates are required, the rendering process becomes computationally demanding. To provide the necessary levels of performance, parallel computing techniques must be brought to bear. Today, parallel hardware is routinely used in graphics workstations, and numerous software-based rendering systems have been developed for general-purpose parallel architectures. This article provides an overview of the parallel rendering field, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel renderers. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition and load balancing, are considered in relation to the rendering problem. Our survey explores a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
Other researchers have probed indirect aspects, such as image composition schemes @cite_23 and data decomposition strategies @cite_18 , to improve polygon rendering performance. More recently, various parallel rendering algorithms, including sort-first, sort-last, and hybrids of the two, were evaluated on shared-memory computers, although these algorithms originally target distributed-memory architectures @cite_2 . In our work, we also explore polygon rendering parallelization and employ image compositing, but we specifically serve depth enhancement and thus provide a more legible 3D geometry visualization by overlapping parallel depth sorting with parallel polygonal data rendering.
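The sketch below illustrates the basic sort-last compositing step assumed in this discussion: each rendering node contributes a partial color and depth buffer, and per pixel the fragment closest to the viewer wins. It is a simplified, hypothetical numpy version written for illustration, not the compositing schemes of @cite_23 or @cite_2.

```python
import numpy as np

def depth_composite(colors, depths):
    """Sort-last style compositing: per pixel, keep the fragment nearest the viewer.

    colors: list of (H, W, 3) partial framebuffers, one per rendering node
    depths: list of (H, W) depth buffers matching the colors
    """
    out_color = colors[0].copy()
    out_depth = depths[0].copy()
    for c, d in zip(colors[1:], depths[1:]):
        closer = d < out_depth                 # pixels where this node's fragment wins
        out_color[closer] = c[closer]
        out_depth[closer] = d[closer]
    return out_color, out_depth

# Two hypothetical nodes, each rendering one half of a 4x4 image.
c0, d0 = np.zeros((4, 4, 3)), np.full((4, 4), np.inf)
c1, d1 = np.zeros((4, 4, 3)), np.full((4, 4), np.inf)
c0[:2], d0[:2] = (1, 0, 0), 1.0                # node 0 covers the top rows in red
c1[2:], d1[2:] = (0, 0, 1), 2.0                # node 1 covers the bottom rows in blue
final_color, final_depth = depth_composite([c0, c1], [d0, d1])
```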
|
{
"cite_N": [
"@cite_18",
"@cite_23",
"@cite_2"
],
"mid": [
"2159116333",
"2019596980",
"2049750333"
],
"abstract": [
"Using parallel processing for visualization speeds up computer graphics rendering of complex data sets. A parallel algorithm designed for polygon scan conversion and rendering is presented which supports fast rendering of highly complex data sets using advanced lighting models. Dedicated graphics rendering engines do not necessarily suit such data sets, although they can support real-time update of moderately complex scenes using simple lighting. Advantages to using a software-based approach include the feasibility of adding special rendering features to the program and the capability of integrating a parallel scientific application with a parallel graphics renderer. A new work decomposition strategy presented, called task adaptive, is based on dynamically partitioning the amount of computational work left at a given time. The algorithm uses a heuristic for dynamic task decomposition in which image space tasks are partitioned without requiring interruption of the partitioned processor. A sophisticated memory referencing strategy lets local memory access graphics data during rendering. This permits implementation of the algorithm on a distributed memory multiprocessor. An in-depth analysis of the overhead costs accompanying parallel processing shows where performance is adequate or could be improved. >",
"In a sort-last polygon rendering system, the efficiency of image composition is very important for achieving fast rendering. In this paper, the implementation of a sort-last rendering system on a general purpose multicomputer system is described. A two-phase sort-last-full image composition scheme is described first, and then many variants of it are presented for 2D mesh message-passing multicomputers, such as the Intel Delta and Paragon. All the proposed schemes are analyzed and experimentally evaluated on Caltech's Intel Delta machine for our sort-last parallel polygon renderer. Experimental results show that sort-last-sparse strategies are better suited than sort-last-full schemes for software implementation on a general purpose multicomputer system. Further, interleaved composition regions perform better than coherent regions. In a large multicomputer system. Performance can be improved by carefully scheduling the tasks of rendering and communication. Using 512 processors to render our test scenes, the peak rendering rate achieved on a 282,144 triangle dataset is dose to 4.6 million triangles per second which is comparable to the speed of current state-of-the-art graphics workstations.",
"Increasing the core count of CPUs to increase computational performance has been a significant trend for the better part of a decade. This has led to an unprecedented availability of large shared memory machines. Programming paradigms and systems are shifting to take advantage of this architectural change, so that intra-node parallelism can be fully utilized. Algorithms designed for parallel execution on distributed systems will also need to be modified to scale in these new shared and hybrid memory systems. In this paper, we reinvestigate parallel rendering algorithms with the goal of finding one that achieves favorable performance in this new environment. We test and analyze various methods, including sort-first, sort-last, and a hybrid scheme, to find an optimal parallel algorithm that maximizes shared memory performance."
]
}
|
1310.2994
|
1922007388
|
We present a parallel visualization algorithm for the illustrative rendering of depth-dependent stylized dense tube data at interactive frame rates. While this computation could be efficiently performed on a GPU device, we target a parallel framework to enable it to run efficiently on an ordinary multi-core CPU platform which is much more available than GPUs for common users. Our approach is to map the depth information in each tube onto each of the visual dimensions of shape, color, texture, value, and size on the basis of Bertin's semiology theory. The purpose is to enable more legible displays in the dense tube environments. A major contribution of our work is an efficient and effective parallel depth-ordering algorithm that makes use of the message passing interface (MPI) with VTK. We evaluated our framework with visualizations of depth-stylized tubes derived from 3D diffusion tensor MRI data by comparing its efficiency with several other alternative parallelization platforms running the same computations. As our results show, the parallelization framework we proposed can efficiently render highly dense 3D data sets like the tube data and thus is useful as a complement to parallel visualization environments that rely on GPUs.
|
Note that the parallel sorting problem @cite_5 @cite_4 , which is at the core of our parallelization framework, could be solved highly efficiently on GPU computing platforms, for which many algorithms already exist @cite_10 @cite_16 . In this paper, we instead target a cheaper solution that does not rely on high-end computing resources such as GPUs. Alternatively and complementarily, we use CPU-based parallel sorting algorithms that leverage a single processor with multiple cores, which has become almost a baseline configuration for modern commodity computers.
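As a hedged example of such CPU-side parallel sorting, the following Python sketch sorts chunks of depth keys in worker processes and then merges the sorted runs sequentially. The multiprocessing and heapq modules are standard-library facilities; the chunking strategy and worker count are illustrative assumptions, not the depth-ordering algorithm proposed in the paper.

```python
import heapq
from multiprocessing import Pool

def parallel_sort(values, workers=4):
    """Sort a list on a multi-core CPU: sort chunks in parallel, then k-way merge."""
    chunks = [values[i::workers] for i in range(workers)]
    with Pool(processes=workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)   # each worker sorts its own chunk
    return list(heapq.merge(*sorted_chunks))       # cheap sequential merge on the host

if __name__ == "__main__":
    import random
    depth_keys = [random.random() for _ in range(1_000_000)]
    ordered = parallel_sort(depth_keys, workers=4)
    assert ordered == sorted(depth_keys)
```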
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_4",
"@cite_10"
],
"mid": [
"2752885492",
"1995596660",
"1591066263",
"1597508330"
],
"abstract": [
"From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition,this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition,Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity,and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition,this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further,the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm,a design technique,an application area,or a related topic. The chapters are not dependent on one another,so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally,the new edition offers a 25 increase over the first edition in the number of problems,giving the book 155 problems and over 900 exercises thatreinforcethe concepts the students are learning.",
"This paper presents an algorithm for fast sorting of large lists using modern GPUs. The method achieves high speed by efficiently utilizing the parallelism of the GPU throughout the whole algorithm. Initially, GPU-based bucketsort or quicksort splits the list into enough sublists then to be sorted in parallel using merge-sort. The algorithm is of complexity nlogn, and for lists of 8 M elements and using a single Geforce 8800 GTS-512, it is 2.5 times as fast as the bitonic sort algorithms, with standard complexity of n(logn)^2, which for a long time was considered to be the fastest for GPU sorting. It is 6 times faster than single CPU quicksort, and 10 faster than the recent GPU-based radix sort. Finally, the algorithm is further parallelized to utilize two graphics cards, resulting in yet another 1.8 times speedup.",
"Apparatus for supporting different nets for various sporting purposes including interengaging tubular rods which are arranged to interconnect and have ground engaging portions suitable to be useful for the several functions. The frame of the net support structure includes a pair of spaced apart, vertically extending posts; each of the posts is divided into a pair of telescoping sections. An upper horizontally extending multi-section member extends and connects the upper end of the vertical posts. A U-shaped clip is provided to engage the frame support with resilient holding pressure for supporting a net on the frame.",
"“GPU Gems 2 isn't meant to simply adorn your bookshelf-it's required reading for anyone trying to keep pace with the rapid evolution of programmable graphics. If you're serious about graphics, this book will take you to the edge of what the GPU can do.” -Remi Arnaud, Graphics Architect at Sony Computer Entertainment “The topics covered in GPU Gems 2 are critical to the next generation of game engines.” -Gary McTaggart, Software Engineer at Valve, Creators of Half-Life and Counter-Strike This sequel to the best-selling, first volume of GPU Gems details the latest programming techniques for today's graphics processing units (GPUs). As GPUs find their way into mobile phones, handheld gaming devices, and consoles, GPU expertise is even more critical in today's competitive environment. Real-time graphics programmers will discover the latest algorithms for creating advanced visual effects, strategies for managing complex scenes, and advanced image processing techniques. Readers will also learn new methods for using the substantial processing power of the GPU in other computationally intensive applications, such as scientific computing and finance. Twenty of the book's forty-eight chapters are devoted to GPGPU programming, from basic concepts to advanced techniques. Written by experts in cutting-edge GPU programming, this book offers readers practical means to harness the enormous capabilities of GPUs.Major topics covered include: Geometric Complexity Shading, Lighting, and Shadows High-Quality Rendering General-Purpose Computation on GPUs: A Primer Image-Oriented Computing Simulation and Numerical Algorithms Contributors are from the following corporations and universities:1C: Maddox Games 2015 Apple Computer Armstrong State University Climax Entertainment Crytek discreet ETH Zurich GRAVIR IMAG-INRIA GSC Game World Lionhead Studios Lund University Massachusetts Institute of Technology mental images Microsoft Research NVIDIA Corporation Piranha Bytes Siemens Corporate Research Siemens Medical Solutions Simutronics Corporation Sony Pictures Imageworks Stanford University Stony Brook University Technische Universitat Munchen University of California, Davis University of North Carolina at Chapel Hill University of Potsdam University of Tokyo University of Toronto University of Utah University of Virginia University of Waterloo Vienna University of Technology VRVis Research CenterSection editors include NVIDIA engineers: Kevin Bjorke, Cem Cebenoyan, Simon Green, Mark Harris, Craig Kolb, and Matthias WlokaThe accompanying CD-ROM includes complementary examples and sample programs."
]
}
|
1310.2916
|
2038294257
|
We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, comprised of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. And when noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-the-art on both synthetic images and captured photographs.
|
Background on shape inference from diffuse shading can be found in several reviews and surveys @cite_5 @cite_12 @cite_0 . An important question is whether shape is uniquely determined by a noiseless image, which has been addressed by a variety of PDE-based formulations. For example, Oliensis considered @math surfaces and showed that shape can be uniquely determined for the entire image by singular points and occluding boundaries together @cite_1 , and in many parts of the image by singular points alone @cite_17 . For the more general class of @math surfaces, Prados and Faugeras @cite_10 employed a smoothness constraint to prove uniqueness properties in a more general perspective setup @cite_20 @cite_9 , given appropriate boundary conditions. In this paper, we use a more restrictive local surface model but prove local uniqueness without any boundary conditions or knowledge of singular points. This generalizes previous studies of local uniqueness, which have considered locally-spherical @cite_24 and fronto-parallel @cite_14 surfaces.
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_24",
"@cite_5",
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2008840450",
"",
"2057534882",
"2118304946",
"2129666524",
"2033664843",
"1509009208",
"1967434879",
"97083571",
"2113061379"
],
"abstract": [
"The shading cue is supposed to be a major factor in monocular stereopsis. However, the hypothesis is hardly corroborated by available data. For instance, the conventional stimulus used in perception research, which involves a circular disk with monotonic luminance gradient on a uniform surround, is theoretically ‘explained’ by any quadric surface, including spherical caps or cups (the conventional response categories), cylindrical ruts or ridges, and saddle surfaces. Whereas cylindrical ruts or ridges are reported when the outline is changed from circular to square, saddle surfaces are never reported. We introduce a method that allows us to differentiate between such possible responses. We report observations on a number of variations of the conventional stimulus, including variations of shape and quality of the boundary, and contexts that allow the observer to infer illumination direction. We find strong and expected influences of outline shape, but, perhaps surprisingly, we fail to find any influence of context, and only partial influence of outline quality. Moreover, we report appreciable differences within the generic population. We trace some of the idiosyncrasies (as compared to shape from shading algorithms) of the human observer to generic properties of the environment, in particular the fact that many objects are limited in size and elliptically convex over most of their boundaries.",
"",
"For general images of smooth objects wholly contained in the field of view, and for illumination symmetric around the viewing direction, it is proven that shape is uniquely determined by shading. Thus, shape from shading is a well-posed problem under these illumination conditions; and regularization is unnecessary for surface reconstruction and should be avoided. Generic properties of surfaces and images are established. Questions of existence are also discussed. Under the conditions above, it is argued that most images are effectively impossible, with no corresponding physically reasonable surface, and that any image can be rendered effectively impossible by a small perturbation of its intensities. This is explicitly illustrated for a synthetic image. The proofs are based on ideas of dynamical systems theory and global analysis.",
"Since the first shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many different approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error, and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster.",
"Local analysis of image shading, in the absence of prior knowledge about the viewed scene, may be used to provide information about the scene. The following has been proved. Every image point has the same image intensity and first and second derivatives as the image of some point on a Lambertian surface with principal curvatures of equal magnitude. Further, if the principal curvatures are assumed to be equal there is a unique combination of image formation parameters (up to a mirror reversal) that will produce a particular set of image intensity and first and second derivatives. A solution for the unique combination of surface orientation, etc., is presented. This solution has been extended to natural imagery by using general position and regional constraints to obtain estimates of the following: ? surface orientation at each image point; ? the qualitative type of the surface, i.e., whether the surface is planar, cylindrical, convex, concave, or saddle; ? the illuminant direction within a region. Algorithms to recover illuminant direction and estimate surface orientation have been evaluated on both natural and synthesized images, and have been found to produce useful information about the scene.",
"Many algorithms have been suggested for the shape-from-shading problem, and some years have passed since the publication of the survey paper by [R. Zhang, P.-S. Tsai, J.E. Cryer, M. Shah, Shape from shading: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (8) (1999) 690-706]. In this new survey paper, we try to update their presentation including some recent methods which seem to be particularly representative of three classes of methods: methods based on partial differential equations, methods using optimization and methods approximating the image irradiance equation. One of the goals of this paper is to set the comparison of these methods on a firm basis. To this end, we provide a brief description of each method, highlighting its basic assumptions and mathematical properties. Moreover, we propose some numerical benchmarks in order to compare the methods in terms of their efficiency and accuracy in the reconstruction of surfaces corresponding to synthetic, as well as to real images.",
"This article proposes a solution of the Lambertian Shape From Shading (SFS) problem by designing a new mathematical framework based on the notion of viscosity solutions. The power of our approach is twofolds: 1) it defines a notion of weak solutions (in the viscosity sense) which does not necessarily require boundary data. Note that, in the previous SFS work of [23,15], [8], [22,20], the characterization of a viscosity solution and its computation require the knowledge of its values on the boundary of the image. This was quite unrealistic because in practice such values are not known. 2) it unifies the work of [23,15], [8], [22,20], based on the notion of viscosity solutions and the work of Dupuis and Oliensis [6] dealing with classical (C 1) solutions. Also, we generalize their work to the “perspective SFS” problem recently introduced by Prados and Faugeras 20.",
"We describe a mathematical and algorithmic study of the Lambertian \"Shape-From-Shading\" problem for orthographic and pinhole cameras. Our approach is based upon the notion of viscosity solutions of Hamilton-Jacobi equations. This approach provides a mathematical framework in which we can show that the problem is well-posed (we prove the existence of a solution and we characterize all the solutions). Our contribution is threefold. First, we model the camera both as orthographic and as perspective (pinhole), whereas most authors assume an orthographic projection (see Horn and Brooks (1989) for a survey of the SFS problem up to 1989 and (1999), Kozera (1998), (2004) for more recent ones); thus we extend the applicability of shape from shading methods to more realistic acquisition models. In particular it extends the work of (2002a) and Rouy and Tourin (1992). We provide some novel mathematical formulations of this problem yielding new partial differential equations. Results about the existence and uniqueness of their solutions are also obtained. Second, by introducing a \"generic\" Hamiltonian, we define a general framework allowing to deal with both models (orthographic and perspective), thereby simplifying the formalization of the problem. Thanks to this unification, each algorithm we propose can compute numerical solutions corresponding to all the modeling. Third, our work allows us to come up with two new generic algorithms for computing numerical approximations of the \"continuous solution of the \"Shape-From-Shading\" problem as well as a proof of their convergence toward that solution. Moreover, our two generic algorithms are able to deal with discontinuous images as well as images containing black shadows.",
"Understanding how the shape of a three dimensional object may be recovered from shading in a two-dimensional image of the object is one of the most important - and still unresolved - problems in machine vision. Although this important subfield is now in its second decade, this book is the first to provide a comprehensive review of shape from shading. It brings together all of the seminal papers on the subject, shows how recent work relates to more traditional approaches, and provides a comprehensive annotated bibliography.The book's 17 chapters cover: Surface Descriptions from Stereo and Shading. Shape and Source from Shading. The Eikonal Equation: some Results Applicable to Computer Vision. A Method for Enforcing Integrability in Shape from Shading Algorithms. Obtaining Shape from Shading Information. The Variational Approach to Shape from Shading. Calculating the Reflectance Map. Numerical Shape from Shading and Occluding Boundaries. Photometric Invariants Related to Solid Shape. Improved Methods of Estimating Shape from Shading Using the Light Source Coordinate System. A Provably Convergent Algorithm for Shape from Shading. Recovering Three Dimensional Shape from a Single Image of Curved Objects. Perception of Solid Shape from Shading. Local Shading Analysis Pentland. Radarclinometry for the Venus Radar Mapper. Photometric Method for Determining Surface Orientation from Multiple Images.Berthold K. P. Horn is Professor of Electrical Engineering and Computer Science at MIT. He has presided over the field of machine vision for more than a decade and is the author of \"Robot Vision. \"Michael Brooks is Reader in Computer Science at The Flinders University of South Australia. \"Shape from Shading\" is included in the Artificial Intelligence series, edited by Michael Brady, Daniel Bobrow, and Randall Davis.",
"For general objects, and for illumination from a general direction, the constraints on shape imposed by shading are studied. It is argued that, for a typical image, shading determines shape essentially up to a finite ambiguity. Thus regularization is often unnecessary and should be avoided. For some images, shape from shading is a partially well-constrained problem: the surface is uniquely determined over most of the image, but infinitely ambiguous in small regions bordering the image boundary, even though the image contains singular points. The main result is that, contrary to previous belief, the image of the occluding boundary does not strongly constrain the surface solution. It is shown that characteristic strips are curves of steepest ascent on the imaged surface. A theorem characterizing the properties of generic images is presented. >"
]
}
|
1310.2916
|
2038294257
|
We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, comprised of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. And when noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-the-art on both synthetic images and captured photographs.
|
Global uniqueness analyses have inspired global propagation and energy-based methods for global shape inference ( @cite_5 @cite_7 @cite_15 ), some of which rely on identifying occluding boundaries and/or singular points. While most methods typically do not provide any measurement of uncertainty in their output, progress toward representing shape ambiguity was made by Ecker and Jepson @cite_11 , who use a polynomial formulation of global shape from shading to numerically generate distinct global surfaces that are equally close to an input image. In this paper, we study uniqueness and uncertainty at the local level, and infer distributions over candidate local shapes.
|
{
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_7",
"@cite_11"
],
"mid": [
"2033664843",
"2114539798",
"2076130323",
"2026551261"
],
"abstract": [
"Many algorithms have been suggested for the shape-from-shading problem, and some years have passed since the publication of the survey paper by [R. Zhang, P.-S. Tsai, J.E. Cryer, M. Shah, Shape from shading: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (8) (1999) 690-706]. In this new survey paper, we try to update their presentation including some recent methods which seem to be particularly representative of three classes of methods: methods based on partial differential equations, methods using optimization and methods approximating the image irradiance equation. One of the goals of this paper is to set the comparison of these methods on a firm basis. To this end, we provide a brief description of each method, highlighting its basic assumptions and mathematical properties. Moreover, we propose some numerical benchmarks in order to compare the methods in terms of their efficiency and accuracy in the reconstruction of surfaces corresponding to synthetic, as well as to real images.",
"Resolving local ambiguities is an important issue for shape from shading (SFS). Pixel ambiguities of SFS can be eliminated by propagation approaches. However, patch ambiguities still exist. Therefore, we formulate the global disambiguation problem to resolve these ambiguities. Intuitively, it can be interpreted as flipping patches and adjusting heights such that the result surface has no kinks. The problem is intractable because exponentially many possible configurations need to be checked. Alternatively, we solve the integrability testing problems closely related to the original one. It can be viewed as finding a surface which satisfies the global integrability constraint. To encode the constraints, we introduce a graph formulation called configuration graph. Searching the solution on this graph can be reduced to a Max-cut problem and its solution is computuble using semidefinite programming (SDP) relaxation. Tests carried out on synthetic and real images show that the global disambiguation works well for complex shapes.",
"The traditional shape-from-shading problem, with a single light source and Lambertian reflectance, is challenging since the constraints implied by the illumination are not sufficient to specify local orientation. Photometric stereo algorithms, a variant of shape-from-shading, simplify the problem by controlling the illumination to obtain additional constraints. In this paper, we demonstrate that many natural lighting environments already have sufficient variability to constrain local shape. We describe a novel optimization scheme that exploits this variability to estimate surface normals from a single image of a diffuse object in natural illumination. We demonstrate the effectiveness of our method on both simulated and real images.",
"We examine the shape from shading problem without boundary conditions as a polynomial system. This view allows, in generic cases, a complete solution for ideal polyhedral objects. For the general case we propose a semidefinite programming relaxation procedure, and an exact line search iterative procedure with a new smoothness term that favors folds at edges. We use this numerical technique to inspect shading ambiguities."
]
}
|
1310.2916
|
2038294257
|
We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, comprised of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. And when noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-the-art on both synthetic images and captured photographs.
|
Our work is related to patch-based approaches that use synthetically-generated reference databases. The idea there is to reconstruct depth (or other scene properties @cite_23 ) by synthesizing a database of aligned image and depth-map pairs, and then finding and stitching together depth patches from this database to match the input image and be spatially consistent. Hassner and Basri @cite_19 obtain plausible results this way when the input image and the database are of similar object categories, and @cite_3 pursue a similar goal for textureless objects using a database of rendered Lambertian spheres. @cite_22 focus on patches located at detected keypoints near an object's occlusion boundaries, combining shading and contour cues. We also describe global shape as a mosaic of per-patch depth primitives, but instead of relying on primitives from a pre-chosen set of 3D models, we consider a continuous five-parameter family of depth primitives corresponding to graphs of quadratic functions at multiple scales.
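For concreteness, the following numpy sketch renders one such five-parameter quadratic depth primitive under Lambertian shading from a single distant light. The parameterization z(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y, the grid resolution, and the function name are assumptions chosen purely for illustration; this is not the inference procedure of the paper.

```python
import numpy as np

def shade_quadratic_patch(params, light, half_width=1.0, n=65):
    """Lambertian shading of the graph of a quadratic depth primitive.

    params: (a, b, c, d, e) in z(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y
    light:  3-vector pointing toward the distant light source
    """
    a, b, c, d, e = params
    xs = np.linspace(-half_width, half_width, n)
    x, y = np.meshgrid(xs, xs)
    zx = 2 * a * x + b * y + d            # partial derivative dz/dx
    zy = b * x + 2 * c * y + e            # partial derivative dz/dy
    # Surface normal of the graph z(x, y), normalized per pixel.
    normals = np.stack([-zx, -zy, np.ones_like(zx)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, None)   # diffuse intensity image

# Example: a saddle-like patch lit from slightly off-axis.
patch_image = shade_quadratic_patch((0.5, 0.0, -0.3, 0.1, 0.0), light=(0.3, 0.2, 1.0))
```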
|
{
"cite_N": [
"@cite_19",
"@cite_3",
"@cite_22",
"@cite_23"
],
"mid": [
"2116264069",
"2109846213",
"2122175759",
""
],
"abstract": [
"We present a novel solution to the problem of depth reconstruction from a single image. Single view 3D reconstruction is an ill-posed problem. We address this problem by using an example-based synthesis approach. Our method uses a database of objects from a single class (e.g. hands, human figures) containing example patches of feasible mappings from the appearance to the depth of each object. Given an image of a novel object, we combine the known depths of patches from similar objects to produce a plausible depth estimate. This is achieved by optimizing a global target function representing the likelihood of the candidate depth. We demonstrate how the variability of 3D shapes and their poses can be handled by updating the example database on-the-fly. In addition, we show how we can employ our method for the novel task of recovering an estimate for the occluded backside of the imaged objects. Finally, we present results on a variety of object classes and a range of imaging conditions.",
"Traditional Shape-from-Shading (SFS) techniques aim to solve an under-constrained problem: estimating depth map from one single image. The results are usually brittle from real images containing detailed shapes. Inspired by recent advances in texture synthesis, we present an exemplar-based approach to improve the robustness and accuracy of SFS. In essence, we utilize an appearance database synthesized from known 3D models where each image pixel is associated with its ground-truth normal. The input image is compared against the images in the database to find the most likely normals. The prior knowledge from the database is formulated as an additional cost term under an energy minimization framework to solve the depth map. Using a generic small database consisting of 50 spheres with different radius, our approach has demonstrated its capability to obviously improve the reconstruction quality from both synthetic and real images with different shapes, in particular those with small details.",
"This paper presents an example-based method to interpret a 3D shape from a single image depicting that shape. A major difficulty in applying an example-based approach to shape interpretation is the combinatorial explosion of shape possibilities that occur at occluding contours. Our key technical contribution is a new shape patch representation and corresponding pairwise compatibility terms that allow for flexible matching of overlapping patches, avoiding the combinatorial explosion by allowing patches to explain only the parts of the image they best fit. We infer the best set of localized shape patches over a graph of keypoints at multiple scales to produce a discontinuous shape representation we term a shape collage. To reconstruct a smooth result, we fit a surface to the collage using the predicted confidence of each shape patch. We demonstrate the method on shapes depicted in line drawing, diffuse and glossy shading, and textured styles.",
""
]
}
|
1310.2916
|
2038294257
|
We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, comprised of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. And when noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-the-art on both synthetic images and captured photographs.
|
In independent work, Kunsberg and Zucker @cite_13 have recently derived local uniqueness results that are related to, and consistent with, our results in Sec. . Their elegant analysis, which uses differential geometry and applies to continuous images, is complementary to the discrete and algebraic approach employed in this paper. Kunsberg and Zucker also observe that the analysis of shading in patches instead of at isolated points is consistent with early processing in the visual cortex, and they discuss the possibility of local shading distributions being computed there. Indeed, the notion of such distributions is compatible with evidence that humans perceive shape in some diffuse regions more accurately than others @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_13"
],
"mid": [
"2008840450",
"1830639927"
],
"abstract": [
"The shading cue is supposed to be a major factor in monocular stereopsis. However, the hypothesis is hardly corroborated by available data. For instance, the conventional stimulus used in perception research, which involves a circular disk with monotonic luminance gradient on a uniform surround, is theoretically ‘explained’ by any quadric surface, including spherical caps or cups (the conventional response categories), cylindrical ruts or ridges, and saddle surfaces. Whereas cylindrical ruts or ridges are reported when the outline is changed from circular to square, saddle surfaces are never reported. We introduce a method that allows us to differentiate between such possible responses. We report observations on a number of variations of the conventional stimulus, including variations of shape and quality of the boundary, and contexts that allow the observer to infer illumination direction. We find strong and expected influences of outline shape, but, perhaps surprisingly, we fail to find any influence of context, and only partial influence of outline quality. Moreover, we report appreciable differences within the generic population. We trace some of the idiosyncrasies (as compared to shape from shading algorithms) of the human observer to generic properties of the environment, in particular the fact that many objects are limited in size and elliptically convex over most of their boundaries.",
"Shape from shading is a classical inverse problem in computer vision. This shape reconstruction problem is inherently ill-dened; it depends on the assumed light source direction. We introduce a novel mathematical formulation for calculating local surface shape based on covariant derivatives of the shading ow eld, rather than the customary integral minimization or P.D.E approaches. On smooth surfaces, we show second derivatives of brightness are independent of the light sources and can be directly related to surface properties. We use these measurements to dene the matching local family of surfaces that can result from any given shading patch, changing the emphasis to characterizing ambiguity in the problem. We give an example of how these local surface ambiguities collapse along certain image contours and how this can be used for the reconstruction problem."
]
}
|
1310.2923
|
1561369684
|
We present the design and prototype implementation of a scientific visualization language called Zifazah for composing 3D visualizations of diffusion tensor magnetic resonance imaging (DT-MRI or DTI) data. Unlike existing tools allowing flexible customization of data visualizations that are programmer-oriented, we focus on domain scientists as end users in order to enable them to freely compose visualizations of their scientific data set. We analyzed end-user descriptions extracted from interviews with neurologists and physicians conducting clinical practices using DTI about how they would build and use DTI visualizations to collect syntax and semantics for the language design, and have discovered the elements and structure of the proposed language. Zifazah makes use of the initial set of lexical terms and semantics to provide a declarative language in the spirit of intuitive syntax and usage. This work contributes three, among others, main design principles for scientific visualization language design as well as a practice of such language for DTI visualization with Zifazah. First, Zifazah incorporated visual symbolic mapping based on color, size and shape, which is a sub-set of Bertin's taxonomy migrated to scientific visualizations. Second, Zifazah is defined as a spatial language whereby lexical representation of spatial relationship for 3D object visualization and manipulations, which is characteristic of scientific data, can be programmed. Third, built on top of Bertin's semiology, flexible data encoding specifically for scientific visualizations is integrated in our language in order to allow end users to achieve optimal visual composition at their best. Along with sample scripts representative of our language design features, some new DTI visualizations as the running results created by end users using the novel visualization language have also been presented.
|
Many other powerful tools have also been developed for exploring DTI visualizations @cite_12 @cite_1 @cite_28 @cite_3 @cite_20 . However, due to the complexity of the data, these tools have not yet fully satisfied domain users' needs for performing their various tasks in daily practice. To give users more flexibility, some of the visualization tools are made highly configurable by allowing a wide range of settings @cite_25 @cite_11 . Nevertheless, it remains challenging to design a thoroughly effective visualization tool that meets all the needs of users. For instance, although higher flexibility sometimes helps meet specific requirements, it may also make a tool more complex for domain users to use @cite_10 .
|
{
"cite_N": [
"@cite_25",
"@cite_28",
"@cite_1",
"@cite_3",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"1857578225",
"",
"",
"2157570155",
"2147394532",
"2151856059",
"2170656593",
"2167790496"
],
"abstract": [
"Processing and visualization of 3D medical data is nowadays a common problem. However, it remains challenging because the diversification and complexification of the available sources of information, as well as the specific requirements of clinicians, make it difficult to solve in a computer science point of view. Indeed, clinicians need ergonomic, efficient, intuitive and reactive softwares. Moreover, they need new solutions to fully exploit their data, but they often cannot access state-of-the-art methods as those are mostly available in complicated softwares. The MedINRIA software was born to fill this lack and consists of a collection of tools that optimally exploit various types of data (e.g., 3D images, diffusion tensor fields, neural fibers as obtained in DT-MRI). It provides state-of-the-art algorithms while keeping a user-friendly graph-ical interface. For each of these tools, we first introduce its dedicated application and the processing methods it contains. Then, we focus on the features that make interactions with data even more intuitive. Med-INRIA is a free software, available on Windows, Linux and MacOSX. Other MedINRIA tools are underway to make cutting edge research in medical imaging rapidly available to clinicians. The interest clinicians have shown in MedINRIA so far indicates that the need of such simple, yet powerful softwares is real and increasing.",
"",
"",
"This paper describes a participatory design process employed to invent an interface for 3D selection of neural pathways estimated from MRI imaging of human brains. Existing pathway selection interfaces are frustratingly difficult to use, since they require the 3D placement of regions-of-interest within the brain data using only a mouse and keyboard. The proposed system addresses these usability problems by providing an interface that is potentially more intuitive and powerful: converting 2D mouse gestures into 3D path selections. The contributions of this work are twofold: 1) we introduce a participatory design process in which users invent and test their own gestural selection interfaces using a Wizard of Oz prototype, and 2) this process has helped to yield the design of an interface for 3D pathway selection, a problem that is known to be difficult. Aspects of both the design process and the interface may generalize to other interface design problems.",
"Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: power-law spatial seating enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy",
"To disentangle and analyze neural pathways estimated from magnetic resonance imaging data, scientists need an interface to select 3D pathways. Broad adoption of such an interface requires the use of commodity input devices such as mice and pens, but these devices offer only two degrees of freedom. CINCH solves this problem by providing a marking interface for 3D pathway selection. CINCH interprets pen strokes as pathway selections in 3D using a marking language designed together with scientists. Its bimanual interface employs a pen and a trackball (see Figure 1), allowing alternating selections and scene rotations without changes of mode. CINCH was evaluated by observing four scientists using the tool over a period of three weeks as part of their normal work activity. Event logs and interviews revealed dramatic improvements in both the speed and quality of scientists' everyday work, and a set of principles that should inform the design of future 3D marking interfaces. More broadly, CINCH demonstrates the value of the iterative, participatory design process that catalyzed its evolution.",
"Diffusion tensor imaging (DTI) is an MRI-based technique for quantifying water diffusion in living tissue. In the white matter of the brain, water diffuses more rapidly along the neuronal axons than in the perpendicular direction. By exploiting this phenomenon, DTI can be used to determine trajectories of fiber bundles, or neuronal connections between regions, in the brain. The resulting bundles can be visualized. However, the resulting visualizations can be complex and difficult to interpret. An effective approach is to pre-determine trajectories from a large number of positions throughout the white matter (full brain fiber tracking) and to offer facilities to aid the user in selecting fiber bundles of interest. Two factors are crucial for the use and acceptance of this technique in clinical studies: firstly, the selection of the bundles by brain experts should be interactive, supported by real-time visualization of the trajectories registered with anatomical MRI scans. Secondly, the fiber selections should be reproducible, so that different experts will achieve the same results. In this paper we present a practical technique for the interactive selection of fiber-bundles using multiple convex objects that is an order of magnitude faster than similar techniques published earlier. We also present the results of a clinical study with ten subjects that show that our selection approach is highly reproducible for fractional anisotropy (FA) calculated over the selected fiber bundles.",
"Diffusion tensor imaging (DTI) is a magnetic resonance imaging method that can be used to measure local information about the structure of white matter within the human brain. Combining DTI data with the computational methods of MR tractography, neuroscientists can estimate the locations and sizes of nerve bundles (white matter pathways) that course through the human brain. Neuroscientists have used visualization techniques to better understand tractography data, but they often struggle with the abundance and complexity of the pathways. In this paper, we describe a novel set of interaction techniques that make it easier to explore and interpret such pathways. Specifically, our application allows neuroscientists to place and interactively manipulate box or ellipsoid-shaped regions to selectively display pathways that pass through specific anatomical areas. These regions can be used in coordination with a simple and flexible query language which allows for arbitrary combinations of these queries using Boolean logic operators. A representation of the cortical surface is provided for specifying queries of pathways that may be relevant to gray matter structures and for displaying activation information obtained from functional magnetic resonance imaging. By precomputing the pathways and their statistical properties, we obtain the speed necessary for interactive question-and-answer sessions with brain researchers. We survey some questions that researchers have been asking about tractography data and show how our system can be used to answer these questions efficiently."
]
}
|
1310.2923
|
1561369684
|
We present the design and prototype implementation of a scientific visualization language called Zifazah for composing 3D visualizations of diffusion tensor magnetic resonance imaging (DT-MRI or DTI) data. Unlike existing tools allowing flexible customization of data visualizations that are programmer-oriented, we focus on domain scientists as end users in order to enable them to freely compose visualizations of their scientific data set. We analyzed end-user descriptions extracted from interviews with neurologists and physicians conducting clinical practices using DTI about how they would build and use DTI visualizations to collect syntax and semantics for the language design, and have discovered the elements and structure of the proposed language. Zifazah makes use of the initial set of lexical terms and semantics to provide a declarative language in the spirit of intuitive syntax and usage. This work contributes three, among others, main design principles for scientific visualization language design as well as a practice of such language for DTI visualization with Zifazah. First, Zifazah incorporated visual symbolic mapping based on color, size and shape, which is a sub-set of Bertin's taxonomy migrated to scientific visualizations. Second, Zifazah is defined as a spatial language whereby lexical representation of spatial relationship for 3D object visualization and manipulations, which is characteristic of scientific data, can be programmed. Third, built on top of Bertin's semiology, flexible data encoding specifically for scientific visualizations is integrated in our language in order to allow end users to achieve optimal visual composition at their best. Along with sample scripts representative of our language design features, some new DTI visualizations as the running results created by end users using the novel visualization language have also been presented.
|
Since Mackinlay pioneered the automatic generation of graphical presentations @cite_27 , his work has been extended lately into a visual analysis system armed with a set of interface commands and defaults representing the best practices of graphical design @cite_32 , upon which the commercial software Tableau was developed. In that work, the generation of visualizations was automated by applying a series of design rules, and was made adaptable to users with a wide range of design expertise through the constrained flexibility imposed by those rules. With Zifazah, we also intend to provide an environment in which end users can flexibly build their own visualizations, as in Tableau. However, instead of targeting visual analysis in the context of two-dimensional (2D) information visualization, Zifazah primarily aims at end-user visualization making and exploration with 3D scientific data such as DTI. Also, in contrast to the visual specifications in Tableau, like those in its predecessor Polaris @cite_6 , textual programming is the main means for end users to interact with visualizations of interest in Zifazah. Similar to Polaris in terms of using visual operations to build visualizations, the tool designed in @cite_11 aims to support retrieving DTI fibers rather than querying relational databases as Polaris does.
|
{
"cite_N": [
"@cite_27",
"@cite_32",
"@cite_6",
"@cite_11"
],
"mid": [
"2132881639",
"2152922709",
"2160382748",
"2167790496"
],
"abstract": [
"The goal of the research described in this paper is to develop an application-independent presentation tool that automatically designs effective graphical presentations (such as bar charts, scatter plots, and connected graphs) of relational information. Two problems are raised by this goal: The codification of graphic design criteria in a form that can be used by the presentation tool, and the generation of a wide variety of designs so that the presentation tool can accommodate a wide variety of information. The approach described in this paper is based on the view that graphical presentations are sentences of graphical languages. The graphic design issues are codified as expressiveness and effectiveness criteria for graphical languages. Expressiveness criteria determine whether a graphical language can express the desired information. Effectiveness criteria determine whether a graphical language exploits the capabilities of the output medium and the human visual system. A wide variety of designs can be systematically generated by using a composition algebra that composes a small set of primitive graphical languages. Artificial intelligence techniques are used to implement a prototype presentation tool called APT (A Presentation Tool), which is based on the composition algebra and the graphic design criteria.",
"This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.",
"In the last several years, large multidimensional databases have become common in a variety of applications, such as data warehousing and scientific computing. Analysis and exploration tasks place significant demands on the interfaces to these databases. Because of the size of the data sets, dense graphical representations are more effective for exploration than spreadsheets and charts. Furthermore, because of the exploratory nature of the analysis, it must be possible for the analysts to change visualizations rapidly as they pursue a cycle involving first hypothesis and then experimentation. In this paper, we present Polaris, an interface for exploring large multidimensional databases that extends the well-known pivot table interface. The novel features of Polaris include an interface for constructing visual specifications of table-based graphical displays and the ability to generate a precise set of relational queries from the visual specifications. The visual specifications can be rapidly and incrementally developed, giving the analyst visual feedback as he constructs complex queries and visualizations.",
"Diffusion tensor imaging (DTI) is a magnetic resonance imaging method that can be used to measure local information about the structure of white matter within the human brain. Combining DTI data with the computational methods of MR tractography, neuroscientists can estimate the locations and sizes of nerve bundles (white matter pathways) that course through the human brain. Neuroscientists have used visualization techniques to better understand tractography data, but they often struggle with the abundance and complexity of the pathways. In this paper, we describe a novel set of interaction techniques that make it easier to explore and interpret such pathways. Specifically, our application allows neuroscientists to place and interactively manipulate box or ellipsoid-shaped regions to selectively display pathways that pass through specific anatomical areas. These regions can be used in coordination with a simple and flexible query language which allows for arbitrary combinations of these queries using Boolean logic operators. A representation of the cortical surface is provided for specifying queries of pathways that may be relevant to gray matter structures and for displaying activation information obtained from functional magnetic resonance imaging. By precomputing the pathways and their statistical properties, we obtain the speed necessary for interactive question-and-answer sessions with brain researchers. We survey some questions that researchers have been asking about tractography data and show how our system can be used to answer these questions efficiently."
]
}
|
1310.2923
|
1561369684
|
We present the design and prototype implementation of a scientific visualization language called Zifazah for composing 3D visualizations of diffusion tensor magnetic resonance imaging (DT-MRI or DTI) data. Unlike existing tools allowing flexible customization of data visualizations that are programmer-oriented, we focus on domain scientists as end users in order to enable them to freely compose visualizations of their scientific data set. We analyzed end-user descriptions extracted from interviews with neurologists and physicians conducting clinical practices using DTI about how they would build and use DTI visualizations to collect syntax and semantics for the language design, and have discovered the elements and structure of the proposed language. Zifazah makes use of the initial set of lexical terms and semantics to provide a declarative language in the spirit of intuitive syntax and usage. This work contributes three, among others, main design principles for scientific visualization language design as well as a practice of such language for DTI visualization with Zifazah. First, Zifazah incorporated visual symbolic mapping based on color, size and shape, which is a sub-set of Bertin's taxonomy migrated to scientific visualizations. Second, Zifazah is defined as a spatial language whereby lexical representation of spatial relationship for 3D object visualization and manipulations, which is characteristic of scientific data, can be programmed. Third, built on top of Bertin's semiology, flexible data encoding specifically for scientific visualizations is integrated in our language in order to allow end users to achieve optimal visual composition at their best. Along with sample scripts representative of our language design features, some new DTI visualizations as the running results created by end users using the novel visualization language have also been presented.
|
Although a natural-language interface for visualizations like WordsEye @cite_18 might be appealing to ordinary users without any programming knowledge, we do not attempt an entirely descriptive design for Zifazah at the current stage, as WordsEye did. In terms of lexical and syntax design, Zifazah is similar to Yahoo!'s Pig Latin @cite_13 , a data processing language associated with the Yahoo! Pig data handling environment that strikes a balance between a declarative language and a low-level procedural one. Pig Latin supports data filtering and grouping with parallelism through its map-reduce programming capability. However, it does not handle visualizations or any form of graphical representation, focusing instead on ad-hoc data analysis. Zifazah is also set apart from Pig Latin in its target audience, since the latter mainly serves software engineers.
|
{
"cite_N": [
"@cite_18",
"@cite_13"
],
"mid": [
"2068676460",
"2098935637"
],
"abstract": [
"Natural language is an easy and effective medium for describing visual ideas and mental images. Thus, we foresee the emergence of language-based 3D scene generation systems to let ordinary users quickly create 3D scenes without having to learn special software, acquire artistic skills, or even touch a desktop window-oriented interface. WordsEye is such a system for automatically converting text into representative 3D scenes. WordsEye relies on a large database of 3D models and poses to depict entities and actions. Every 3D model can have associated shape displacements, spatial tags, and functional properties to be used in the depiction process. We describe the linguistic analysis and depiction techniques used by WordsEye along with some general strategies by which more abstract concepts are made depictable.",
"There is a growing need for ad-hoc analysis of extremely large data sets, especially at internet companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products, e.g., Teradata, offer a solution, but are usually prohibitively expensive at this scale. Besides, many of the people who analyze this data are entrenched procedural programmers, who find the declarative, SQL style to be unnatural. The success of the more procedural map-reduce programming model, and its associated scalable implementations on commodity hardware, is evidence of the above. However, the map-reduce paradigm is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain, and reuse. We describe a new language called Pig Latin that we have designed to fit in a sweet spot between the declarative style of SQL, and the low-level, procedural style of map-reduce. The accompanying system, Pig, is fully implemented, and compiles Pig Latin into physical plans that are executed over Hadoop, an open-source, map-reduce implementation. We give a few examples of how engineers at Yahoo! are using Pig to dramatically reduce the time required for the development and execution of their data analysis tasks, compared to using Hadoop directly. We also report on a novel debugging environment that comes integrated with Pig, that can lead to even higher productivity gains. Pig is an open-source, Apache-incubator project, and available for general use."
]
}
|
1310.2923
|
1561369684
|
We present the design and prototype implementation of a scientific visualization language called Zifazah for composing 3D visualizations of diffusion tensor magnetic resonance imaging (DT-MRI or DTI) data. Unlike existing tools allowing flexible customization of data visualizations that are programmer-oriented, we focus on domain scientists as end users in order to enable them to freely compose visualizations of their scientific data set. We analyzed end-user descriptions extracted from interviews with neurologists and physicians conducting clinical practices using DTI about how they would build and use DTI visualizations to collect syntax and semantics for the language design, and have discovered the elements and structure of the proposed language. Zifazah makes use of the initial set of lexical terms and semantics to provide a declarative language in the spirit of intuitive syntax and usage. This work contributes three, among others, main design principles for scientific visualization language design as well as a practice of such language for DTI visualization with Zifazah. First, Zifazah incorporated visual symbolic mapping based on color, size and shape, which is a sub-set of Bertin's taxonomy migrated to scientific visualizations. Second, Zifazah is defined as a spatial language whereby lexical representation of spatial relationship for 3D object visualization and manipulations, which is characteristic of scientific data, can be programmed. Third, built on top of Bertin's semiology, flexible data encoding specifically for scientific visualizations is integrated in our language in order to allow end users to achieve optimal visual composition at their best. Along with sample scripts representative of our language design features, some new DTI visualizations as the running results created by end users using the novel visualization language have also been presented.
|
The Protovis specification language @cite_5 is a declarative domain-specific language (DSL) that supports the specification of interactive information visualizations with animated transitions. It provides an approach to composing custom views of data using graphical primitives called marks, which can encode data through dynamic properties; this is similar to the mapping of object properties to graphical representations in another InfoVis language presented by Lucas and Shieber @cite_23 . To some extent, both languages are comparable to Microsoft's ongoing Vedea project, which aims at a new visualization language @cite_8 , in terms of syntactic design and programming style, although Vedea's design goals are closer to those of Processing.
|
{
"cite_N": [
"@cite_5",
"@cite_23",
"@cite_8"
],
"mid": [
"2158711339",
"1519642193",
""
],
"abstract": [
"We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.",
"While information visualization tools support the representation of abstract data, their ability to enhance one’s understanding of complex relationships can be hindered by a limited set of predefined charts. To enable novel visualization over multiple variables, we propose a declarative language for specifying informational graphics from first principles. The language maps properties of generic objects to graphical representations based on scaled interpretations of data values. An iterative approach to constraint solving that involves user advice enables the optimization of graphic layouts. The flexibility and expressiveness of a powerful but relatively easy to use grammar supports the expression of visualizations ranging from the simple to the complex.",
""
]
}
|
1310.2923
|
1561369684
|
We present the design and prototype implementation of a scientific visualization language called Zifazah for composing 3D visualizations of diffusion tensor magnetic resonance imaging (DT-MRI or DTI) data. Unlike existing tools allowing flexible customization of data visualizations that are programmer-oriented, we focus on domain scientists as end users in order to enable them to freely compose visualizations of their scientific data set. We analyzed end-user descriptions extracted from interviews with neurologists and physicians conducting clinical practices using DTI about how they would build and use DTI visualizations to collect syntax and semantics for the language design, and have discovered the elements and structure of the proposed language. Zifazah makes use of the initial set of lexical terms and semantics to provide a declarative language in the spirit of intuitive syntax and usage. This work contributes three, among others, main design principles for scientific visualization language design as well as a practice of such language for DTI visualization with Zifazah. First, Zifazah incorporated visual symbolic mapping based on color, size and shape, which is a sub-set of Bertin's taxonomy migrated to scientific visualizations. Second, Zifazah is defined as a spatial language whereby lexical representation of spatial relationship for 3D object visualization and manipulations, which is characteristic of scientific data, can be programmed. Third, built on top of Bertin's semiology, flexible data encoding specifically for scientific visualizations is integrated in our language in order to allow end users to achieve optimal visual composition at their best. Along with sample scripts representative of our language design features, some new DTI visualizations as the running results created by end users using the novel visualization language have also been presented.
|
Recently, Metoyer et al. @cite_29 reported, from an exploratory study, a set of design implications for the design of visualization languages and toolkits. More specifically, their findings inform visualization language design through the way end users describe visualizations and their inclination to use ambiguous and relative, instead of definite and absolute, terms that can be refined later via a feedback loop provided by the language. Notably, their findings also show that end users tend to express themselves in generally high-level semantics. During the design of our visualization language, we have benefited from these findings and have reflected them in the development of Zifazah.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2147699512"
],
"abstract": [
"Tools exist for people to create visualizations with their data; however, they are often designed for programmers or they restrict less technical people to pre-defined templates. This can make creating novel, custom visualizations difficult for the average person. For example, existing tools typically do not support syntax or interaction techniques that are natural to end users. To explore how to support a more natural production of data visualizations by end users, we conducted an exploratory study to illuminate the structure and content of the language employed by end users when describing data visualizations. We present our findings from the study and discuss their design implications for future visualization languages and toolkits."
]
}
|
1310.2648
|
2951847816
|
This paper considers a time-varying game with @math players. Every time slot, players observe their own random events and then take a control action. The events and control actions affect the individual utilities earned by each player. The goal is to maximize a concave function of time average utilities subject to equilibrium constraints. Specifically, participating players are provided access to a common source of randomness from which they can optimally correlate their decisions. The equilibrium constraints incentivize participation by ensuring that players cannot earn more utility if they choose not to participate. This form of equilibrium is similar to the notions of Nash equilibrium and correlated equilibrium, but is simpler to attain. A Lyapunov method is developed that solves the problem in an online fashion by selecting actions based on a set of time-varying weights. The algorithm does not require knowledge of the event probabilities and has polynomial convergence time. A similar method can be used to compute a standard correlated equilibrium, albeit with increased complexity.
|
The notion of coarse correlated equilibrium (CCE) was introduced in @cite_5 in the static case where there is no event process @math . The CCE definition is similar to a correlated equilibrium (CE) @cite_3 @cite_19 @cite_4 . The difference is as follows: A correlated equilibrium (CE) is more stringent and requires the utility achieved by each player @math to be at least as large as the utility she could achieve if she did not participate. It is known that both CCE and CE constraints can be written as linear programs. Adaptive methods that converge to a CE for static games are developed in @cite_0 @cite_10 @cite_9 . The concept of Nash equilibrium (NE) is more stringent still: The NE constraint requires all players to act independently and without the aid of a message process @math @cite_6 @cite_4 . Unfortunately, the problem of computing a Nash equilibrium is nonconvex.
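For reference, in the static case both constraint families can be written as linear inequalities in the joint action distribution p (generic notation, not taken from the cited works):
\[
\text{CCE:}\quad \sum_{a} p(a)\,u_i(a) \;\ge\; \sum_{a} p(a)\,u_i(a_i', a_{-i}) \qquad \forall i,\ \forall a_i',
\]
\[
\text{CE:}\quad \sum_{a_{-i}} p(a_i, a_{-i})\,u_i(a_i, a_{-i}) \;\ge\; \sum_{a_{-i}} p(a_i, a_{-i})\,u_i(a_i', a_{-i}) \qquad \forall i,\ \forall a_i,\ \forall a_i'.
\]
Both systems are linear in p, which is why each can be expressed as a linear program; the CE family is the more stringent one, since it imposes a separate set of inequalities for every suggested action a_i.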
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"2109100253",
"2013478841",
"2112794046",
"",
"2073242866",
"2118929276",
"1988716172",
"2126776273"
],
"abstract": [
"A Course in Game Theory presents the main ideas of game theory at a level suitable for graduate students and advanced undergraduates, emphasizing the theory's foundations and interpretations of its basic concepts. The authors provide precise definitions and full proofs of results, sacrificing generalities and limiting the scope of the material in order to do so. The text is organized in four parts: strategic games, extensive games with perfect information, extensive games with imperfect information, and coalitional games. It includes over 100 exercises.",
"Players choose an action before learning an outcome chosen according to an unknown and history-dependent stochastic rule. Procedures that categorize outcomes, and use a randomized variation on fictitious play within each category are studied. These procedures are “conditionally consistent:†they yield almost as high a time-average payoff as if the player knew the conditional distributions of actions given categories. Moreover, given any alternative procedure, there is a conditionally consistent procedure whose performance is no more than epsilon worse regardless of the discount factor. We also discuss cycles, and argue that the time-average of play should resemble a correlated equilibrium.",
"",
"",
"Abstract Suppose two players repeatedly meet each other to play a game where 1. each uses a learning rule with the property that it is a calibrated forecast of the other's plays, and 2. each plays a myopic best response to this forecast distribution. Then, the limit points of the sequence of plays are correlated equilibria. In fact, for each correlated equilibrium there is some calibrated learning rule that the players can use which results in their playing this correlated equilibrium in the limit. Thus, the statistical concept of a calibration is strongly related to the game theoretic concept of correlated equilibrium. Journal of Economic Literature Classification Numbers: C72,D83,C44.",
"If it is common knowledge that the players in a game are Bayesian utility maximizers who treat uncertainty about other players' actions like any other uncertainty, then the outcome is necessarily a correlated equilibrium. Random strategies appear as an expression of each player's uncertainty about what the others will do, not as the result of willful randomization. Use is made of the common prior assumption, according to which differences in probability assessments by different individuals are due to the different information that they have (where \"information\" may be interpreted broadly, to include experience, upbringing, and genetic makeup). Copyright 1987 by The Econometric Society. (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of th (This abstract was borrowed from another version of this item.)",
"In this paper we propose a new class of games, the “strategically zero-sum games,” which are characterized by a special payoff structure. We show that for a large body of correlation schemes which includes the correlated strategies “a la Aumann”, strategically zero-sum games are exactly these games for which no completely mixed Nash equilibrium can be improved upon.",
"We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guaranties that with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation where every player knows only his own realized payoff history (but not his payoff function)."
]
}
|
1310.2648
|
2951847816
|
This paper considers a time-varying game with @math players. Every time slot, players observe their own random events and then take a control action. The events and control actions affect the individual utilities earned by each player. The goal is to maximize a concave function of time average utilities subject to equilibrium constraints. Specifically, participating players are provided access to a common source of randomness from which they can optimally correlate their decisions. The equilibrium constraints incentivize participation by ensuring that players cannot earn more utility if they choose not to participate. This form of equilibrium is similar to the notions of Nash equilibrium and correlated equilibrium, but is simpler to attain. A Lyapunov method is developed that solves the problem in an online fashion by selecting actions based on a set of time-varying weights. The algorithm does not require knowledge of the event probabilities and has polynomial convergence time. A similar method can be used to compute a standard correlated equilibrium, albeit with increased complexity.
|
In contrast, while the current paper treats a stochastic problem with more limited structure, the resulting solution is simple and grows as @math . Specifically, the algorithm uses a number of virtual queues that is linear in @math , rather than exponential in @math , resulting in polynomial bounds on convergence time. Furthermore, the number of virtual queues grows only linearly in the size of each set @math . This improves on the original conference version of this paper @cite_12 , which required a number of virtual queues that was exponential in the size of @math . The exponential-to-polynomial improvement is achieved by equivalently modeling the constraints via a grouping of conditional expectations given an observed random event.
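For context, Lyapunov methods of this kind typically enforce each time-average constraint with a virtual queue whose backlog serves as the time-varying weight; a generic sketch (notation assumed here, not taken from the paper) is
\[
Q_k(t+1) \;=\; \max\bigl[\,Q_k(t) + g_k(t),\ 0\,\bigr],
\]
where g_k(t) denotes the instantaneous violation of the k-th equilibrium constraint at slot t. Keeping every Q_k(t) stable then forces the corresponding time-averaged constraint to be satisfied, so the number of such queues is what drives the linear-versus-exponential scaling discussed above.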
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2045465809"
],
"abstract": [
"This paper considers a time-varying game with N players. Every time slot, players observe their own random events and then take a control action. The events and control actions affect the individual utilities earned by each player. The goal is to maximize a concave function of time average utilities subject to equilibrium constraints. Specifically, participating players are provided access to a common source of randomness from which they can optimally correlate their decisions. The equilibrium constraints incentivize participation by ensuring that players cannot earn more utility if they choose not to participate. This form of equilibrium is similar to the notions of Nash equilibrium and correlated equilibrium, but is simpler to attain. A Lyapunov method is developed that solves the problem in an online max-weight fashion by selecting actions based on a set of time-varying weights. The algorithm does not require knowledge of the event probabilities. A similar method can be used to compute a standard correlated equilibrium, albeit with increased complexity."
]
}
|
1310.2743
|
2950812726
|
This paper proposes an approach for the adaptation of spatial or temporal cases in a case-based reasoning system. Qualitative algebras are used as spatial and temporal knowledge representation languages. The intuition behind this adaptation approach is to apply a substitution and then repair potential inconsistencies, thanks to belief revision on qualitative algebras. A temporal example from the cooking domain is given. (The paper on which this extended abstract is based was the recipient of the best paper award of the 2012 International Conference on Case-Based Reasoning.)
|
Several research works have focused on the representation of time within the CBR framework. Most were interested in the analysis or the prediction of temporal processes (e.g., breakdown or disease diagnosis starting from regular observations or successive events). The temporal aspect is generally taken into account through sequences of events or, sometimes, through relative or absolute time stamps @cite_3 @cite_9 @cite_8 . In particular, the problem of temporal adaptation has been given much attention in CBR with a workflow representation @cite_0 . Only a few works @cite_11 @cite_2 adopted a qualitative representation. In @cite_2 , cases are represented by temporal graphs and the retrieval step is based on graph matching. In @cite_11 , cases are indexed by chronicles and temporal constraints, which are represented with a subset of Allen relations.
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_11"
],
"mid": [
"1480501482",
"",
"2075773714",
"",
"1582515563",
"1519963781"
],
"abstract": [
"In recent years, several researchers have studied the suitability of CBR to cope with dynamic or continuous or temporal domains. In these domains, the current state depends on the past temporal states. This feature really makes difficult to cope with these domains. This means that classical individual case retrieval is not very accurate, as the dynamic domain is structured in a temporally related stream of cases rather than in single cases. The CBR system solutions should also be dynamic and continuous, and temporal dependencies among cases should be taken into account. This paper proposes a new approach and a new framework to develop temporal CBR systems: Episode-Based Reasoning. It is based on the abstraction of temporal sequences of cases, which are named as episodes. Our preliminary evaluation in the wastewater treatment plants domain shows that Episode-Based Reasoning seems to outperform classical CBR systems.",
"",
"The recognition of high level clinical scenes is fundamental in patient monitoring. In this paper, we propose a technique for recognizing a session, i.e. the clinical process evolution, by comparison against a predetermined set of scenarios, i.e. the possible behaviors for this process. We use temporal constraint networks to represent both scenario and session. Specific operations on networks are then applied to perform the recognition task. An index of temporal proximity is introduced to quantify the degree of matching between two temporal networks in order to select the best scenario fitting a session. We explore the application of our technique, implemented in the Deja Vu system, to the recognition of typical medical scenarios with both precise and imprecise temporal information.",
"",
"Cases are descriptions of situations limited in time and space. The research reported here introduces a method for representation and reasoning with time-dependent situations, or temporal cases, within a knowledge-intensive CBR framework. Most current CBR methods deal with snapshot cases, descriptions of a world state at a single time stamp. In many timedependent situations, value sets at particular time points are less important than the value changes over some interval of time. Our focus is on prediction problems for avoiding faulty situations. Based on a well-established theory of temporal intervals, we have developed a method for representing temporal cases inside the knowledge-intensive CBR system Creek. The paper presents the theoretical foundation of the method, the representation formalism and basic reasoning algorithms, and an example applied to the prediction of unwanted events in oil well drilling.",
"In this paper, we present our case-based browsing advisor for the Web, called Broadway. Broadway follows a group of users during their navigations and supports an indirect collaboration to recommend Web pages to visit next. Broadway uses case-based reasoning to reuse precise experiences extracted from past navigations with a time-extended situation assessment, i.e. the recommendations are based mainly on the similarity of ordered sequences of past accessed documents. A first experimental evaluation shows that the system improves the information searching task."
]
}
|
1310.2841
|
2949412086
|
A recent trend in parameterized algorithms is the application of polytope tools (specifically, LP-branching) to FPT algorithms (e.g., , 2011; , 2012). However, although interesting results have been achieved, the methods require the underlying polytope to have very restrictive properties (half-integrality and persistence), which are known only for few problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node Multiway Cut (, 1994)). Taking a slightly different approach, we view half-integrality as a relaxation of a problem, e.g., a relaxation of the search space from @math to @math such that the new problem admits a polynomial-time exact solution. Using tools from CSP (in particular Thapper and Z ivn 'y, 2012) to study the existence of such relaxations, we provide a much broader class of half-integral polytopes with the required properties, unifying and extending previously known cases. In addition to the insight into problems with half-integral relaxations, our results yield a range of new and improved FPT algorithms, including an @math -time algorithm for node-deletion Unique Label Cover with label set @math and an @math -time algorithm for Group Feedback Vertex Set, including the setting where the group is only given by oracle access. All these significantly improve on previous results. The latter result also implies the first single-exponential time FPT algorithm for Subset Feedback Vertex Set, answering an open question of (2012). Additionally, we propose a network flow-based approach to solve some cases of the relaxation problem. This gives the first linear-time FPT algorithm to edge-deletion Unique Label Cover.
|
Hochbaum @cite_38 gave a general framework for half-integral relaxations of certain optimisation problems (as discussed above), via a form of integer program called IP2 (which in turn is solved via relaxation to a polynomial-time solvable problem class called monotone IP2). Without going into too much technical detail, we note that monotone IP2s are covered in a VCSP framework by problems submodular @cite_15 @cite_10 @cite_30 , and that the Boolean-domain case of IP2 reduces directly to , a.k.a. . However, we have not reconstructed a direct VCSP interpretation of the full case of half-integral IP2. Hochbaum @cite_38 asks in her paper whether the problems of and can be brought into her framework; the problem of remains open to us.
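For orientation, the constraint forms involved are, following the description in @cite_38 (notation as in that work),
\[
\text{monotone IP2:}\quad a x - b y \;\le\; z + c, \qquad a, b \ge 0,
\]
\[
\text{(general) IP2:}\quad a x + b y \;\le\; z + c \quad \text{(no sign restriction on } a, b\text{)},
\]
where in each case the variable z appears only in that one constraint. The monotone case is solved exactly in polynomial time via a minimum cut, while the general case admits half-integral superoptimal solutions in polynomial time via a transformation to the monotone case.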
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_15",
"@cite_10"
],
"mid": [
"2128059732",
"2020282424",
"2953362411",
""
],
"abstract": [
"This paper presents a combinatorial polynomial-time algorithm for minimizing submodular functions, answering an open question posed in 1981 by Grotschel, Lovasz, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest absolute function value. The paper also presents a strongly polynomial version in which the number of steps is bounded by a polynomial in the size of the underlying set, independent of the function values.",
"Abstract We define a class of monotone integer programs with constraints that involve up to three variables each. A generic constraint in such integer program is of the form ax − by ⩽ z + c , where a and b are nonnegative and the variable z appears only in that constraint. We devise an algorithm solving such problems in time polynomial in the length of the input and the range of variables U . The solution is obtained from a minimum cut on a graph with O( nU ) nodes and O( mU ) arcs where n is the number of variables of the types x and y and m is the number of constraints. Our algorithm is also valid for nonlinear objective functions. Nonmonotone integer programs are optimization problems with constraints of the type ax + by ⩽ z + c without restriction on the signs of a and b . Such problems are in general NP-hard. We devise here an algorithm, relying on a transformation to the monotone case, that delivers half integral superoptimal solutions in polynomial time. Such solutions provide bounds on the optimum value that can only be superior to bounds provided by linear programming relaxation. When the half integral solution can be rounded to an integer feasible solution, this is a 2-approximate solution. In that the technique is a unified 2-approximation technique for a large class of problems. The results apply also for general integer programming problems with worse approximation factors that depend on a quantifier measuring how far the problem is from the class of problems we describe. The algorithm described here has a wide array of problem applications. An additional important consequence of our results is that nonmonotone problems in the framework are MAX SNP-hard and at least as hard to approximate as vertex cover. Problems that are amenable to the analysis provided here are easily recognized. The analysis itself is entirely technical and involves manipulating the constraints and transforming them to a totally unimodular system while losing no more than a factor of 2 in the integrality.",
"We report new results on the complexity of the valued constraint satisfaction problem (VCSP). Under the unique games conjecture, the approximability of finite-valued VCSP is fairly well-understood. However, there is yet no characterisation of VCSPs that can be solved exactly in polynomial time. This is unsatisfactory, since such results are interesting from a combinatorial optimisation perspective; there are deep connections with, for instance, submodular and bisubmodular minimisation. We consider the Min and Max CSP problems (i.e. where the cost functions only attain values in 0,1 ) over four-element domains and identify all tractable fragments. Similar classifications were previously known for two- and three-element domains. In the process, we introduce a new class of tractable VCSPs based on a generalisation of submodularity. We also extend and modify a graph-based technique by Kolmogorov and Zivny (originally introduced by Takhanov) for efficiently obtaining hardness results in our setting. This allow us to prove the result without relying on computer-assisted case analyses (which otherwise are fairly common when studying the complexity and approximability of VCSPs.) The hardness results are further simplified by the introduction of powerful reduction techniques.",
""
]
}
|
1310.2841
|
2949412086
|
A recent trend in parameterized algorithms is the application of polytope tools (specifically, LP-branching) to FPT algorithms (e.g., , 2011; , 2012). However, although interesting results have been achieved, the methods require the underlying polytope to have very restrictive properties (half-integrality and persistence), which are known only for few problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node Multiway Cut (, 1994)). Taking a slightly different approach, we view half-integrality as a relaxation of a problem, e.g., a relaxation of the search space from @math to @math such that the new problem admits a polynomial-time exact solution. Using tools from CSP (in particular Thapper and Z ivn 'y, 2012) to study the existence of such relaxations, we provide a much broader class of half-integral polytopes with the required properties, unifying and extending previously known cases. In addition to the insight into problems with half-integral relaxations, our results yield a range of new and improved FPT algorithms, including an @math -time algorithm for node-deletion Unique Label Cover with label set @math and an @math -time algorithm for Group Feedback Vertex Set, including the setting where the group is only given by oracle access. All these significantly improve on previous results. The latter result also implies the first single-exponential time FPT algorithm for Subset Feedback Vertex Set, answering an open question of (2012). Additionally, we propose a network flow-based approach to solve some cases of the relaxation problem. This gives the first linear-time FPT algorithm to edge-deletion Unique Label Cover.
|
Kolmogorov @cite_49 gave close connections between functions with half-integral minima and bisubmodular functions, in particular showing that bisubmodular functions correspond (in a certain sense) to a class of (continuous-domain) functions referred to as totally half-integral relaxations (the standard definition of bisubmodularity is recalled after this record). See for more details.
|
{
"cite_N": [
"@cite_49"
],
"mid": [
"2076534077"
],
"abstract": [
"Consider a convex relaxation f@? of a pseudo-Boolean function f. We say that the relaxation is totally half-integral if f@?(x) is a polyhedral function with half-integral extreme points x, and this property is preserved after adding an arbitrary combination of constraints of the form x\"i=x\"j, x\"i=1-x\"j, and x\"i=@c where @c@? 0,1,12 is a constant. A well-known example is the roof duality relaxation for quadratic pseudo-Boolean functions f. We argue that total half-integrality is a natural requirement for generalizations of roof duality to arbitrary pseudo-Boolean functions. Our contributions are as follows. First, we provide a complete characterization of totally half-integral relaxations f@? by establishing a one-to-one correspondence with bisubmodular functions. Second, we give a new characterization of bisubmodular functions. Finally, we show some relationships between general totally half-integral relaxations and relaxations based on the roof duality. On the conceptual level, our results show that bisubmodular functions provide a natural generalization of the roof duality approach to higher-order terms. This can be viewed as a non-submodular analogue of the fact that submodular functions generalize the s-t minimum cut problem with non-negative weights to higher-order terms."
]
}
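For reference, the bisubmodular functions mentioned in the record above admit the following standard definition (recalled here for convenience, in generic notation; it is not text from the cited paper). Identifying a point of {0,1,2}^V with a pair of disjoint subsets (X_1, X_2) of the ground set V, a function f on such pairs is bisubmodular if, for all pairs,

\[
f(X_1,X_2) + f(Y_1,Y_2) \;\ge\; f\bigl((X_1,X_2)\sqcap(Y_1,Y_2)\bigr) + f\bigl((X_1,X_2)\sqcup(Y_1,Y_2)\bigr),
\]

where the meet is the componentwise intersection and the join is the reduced union:

\[
(X_1,X_2)\sqcap(Y_1,Y_2) = (X_1\cap Y_1,\; X_2\cap Y_2), \qquad
(X_1,X_2)\sqcup(Y_1,Y_2) = \bigl((X_1\cup Y_1)\setminus(X_2\cup Y_2),\; (X_2\cup Y_2)\setminus(X_1\cup Y_1)\bigr).
\]

The cited abstract establishes a one-to-one correspondence between such functions and totally half-integral relaxations.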
|
1310.2841
|
2949412086
|
A recent trend in parameterized algorithms is the application of polytope tools (specifically, LP-branching) to FPT algorithms (e.g., , 2011; , 2012). However, although interesting results have been achieved, the methods require the underlying polytope to have very restrictive properties (half-integrality and persistence), which are known only for a few problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node Multiway Cut (, 1994)). Taking a slightly different approach, we view half-integrality as a relaxation of a problem, e.g., a relaxation of the search space from @math to @math such that the new problem admits a polynomial-time exact solution. Using tools from CSP (in particular Thapper and Živný, 2012) to study the existence of such relaxations, we provide a much broader class of half-integral polytopes with the required properties, unifying and extending previously known cases. In addition to the insight into problems with half-integral relaxations, our results yield a range of new and improved FPT algorithms, including an @math -time algorithm for node-deletion Unique Label Cover with label set @math and an @math -time algorithm for Group Feedback Vertex Set, including the setting where the group is only given by oracle access. All these significantly improve on previous results. The latter result also implies the first single-exponential time FPT algorithm for Subset Feedback Vertex Set, answering an open question of (2012). Additionally, we propose a network flow-based approach to solve some cases of the relaxation problem. This gives the first linear-time FPT algorithm for edge-deletion Unique Label Cover.
|
Submodular and bisubmodular functions also occur as rank functions of, respectively, matroids @cite_31 and delta-matroids @cite_11 ; there are also connections to polytope theory (e.g., @cite_50 ). Similar, but less well-explored, connections exist for @math -submodular functions; see the theory of multi-matroids @cite_34 @cite_19 @cite_27 @cite_32 , and the polytope connection given by Huber and Kolmogorov @cite_53 .
|
{
"cite_N": [
"@cite_53",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_50",
"@cite_31",
"@cite_34",
"@cite_11"
],
"mid": [
"1930126570",
"1987044417",
"1501742082",
"1972360724",
"2066894529",
"2309721867",
"2033956728",
""
],
"abstract": [
"In this paper we investigate k-submodular functions. This natural family of discrete functions includes submodular and bisubmodular functions as the special cases k=1 and k=2 respectively. In particular we generalize the known Min-Max-Theorem for submodular and bisubmodular functions. This theorem asserts that the minimum of the (bi)submodular function can be found by solving a maximization problem over a (bi)submodular polyhedron. We define a k-submodular polyhedron, prove a Min-Max-Theorem for k-submodular functions, and give a greedy algorithm to construct the vertices of the polyhedron.",
"Abstract This paper completes the series A. Bouchet, Multimatroids I, SIAM J. Disc. Math.; A. Bouchet, Multimatroid II. Minors and connectivity; A. Bouchet, Multimatroids III. Tightness, fundamental graphs and pivotings, devoted to the introduction of multimatroids. Here we define a notion of linear representation that encompasses the isotropic systems and the linear representations of matroids and delta-matroids. We show that every Eulerian multimatroid is representable with a symplectic vector space over GF(2). Finally we adapt the construction to symplectic matroids.",
"A multimatroid is a combinatorial structure that encompasses matroids, delta-matroids and isotropic systems. This structure has been introduced to unify a theorem of Edmonds on the coverings of a matroid by independent sets and a theorem of Jackson on the existence of pairwise compatible Euler tours in a 4-regular graph. Here we investigate some basic concepts and properties related with multimatroids: matroid orthogonality, minor operations and connectivity.",
"This paper continues the study of multimatroids. Here we introduce the subclass of tight multimatroids, which contains the liftings of even delta-matroids, the 3-matroids derived from isotropic systems, the Eulerian 3-matroids associated to 4-regular graphs and the Eulerian 2-matroids associated to evenly directed 4-regular graphs. The local properties of a tight multimatroid in the vicinity of a base are reflected by a fundamental graph, as in matroid theory. We describe how the fundamental graph is transformed when the base is modified. As an application we derive some connectivity properties of tight multimatroids.",
"This paper relates an axiomatic generalization of matroids, called a jump system, to polyhedra arising from bisubmodular functions. Unlike the case for usual submodularity, the points of interest are not all the integral points in the relevant polyhedron but form a subset of them. However, it is shown that the convex hull of the set of points of a jump system is a bisubmodular polyhedron, and that the integral points of an integral bisubmodular polyhedron determine a (special) jump system. The authors prove addition and composition theorems for jump systems, which have several applications for delta-matroids and matroids.",
"",
"Multimatroids are combinatorial structures that generalize matroids and arise in the study of Eulerian graphs. We prove, by means of an efficient algorithm, a covering theorem for multimatroids. This theorem extends Edmonds' covering theorem for matroids. It also generalizes a theorem of Jackson on the Euler tours of a 4-regular graph.",
""
]
}
|
1310.2148
|
2044074685
|
Server clustering is a common design principle employed by many organisations who require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, whether it be the application(s) installed, the role of the server or server accessibility, for example. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration, an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia - an open source scalable system performance monitoring tool - by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where roles of servers may change frequently. Furthermore, we complement group monitoring with a control element allowing administrator-specified actions to be performed over servers within service groups as well as introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
|
The C2MS uses Ganglia as its foundation for infrastructure monitoring. Ganglia is a popular, open-source, scalable system performance monitoring tool @cite_13 @cite_5 @cite_11 that is widely used in the High Performance Computing (HPC) community. Its popularity, easy installation process, easy-to-use web interface and extensibility were the main reasons we chose to build the C2MS upon Ganglia.
|
{
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2046861417",
"2154983209",
"2159102814"
],
"abstract": [
"In this paper, we present a structure for monitoring a large set of computational clusters. We illustrate methods for scaling a monitor network comprised of many clusters while keeping processing requirements low. A design for presenting high-level web-based summaries of the monitor network is provided, along with a generalization to a distributed, multipleresolution monitoring tree. Emphasis is placed on scalability, fast query response, fault tolerance, and grid compatibility. Experimental evidence is presented that demonstrates the performance of our design.",
"Abstract Ganglia is a scalable distributed monitoring system for high performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It relies on a multicast-based listen announce protocol to monitor state within clusters and uses a tree of point-to-point connections amongst representative cluster nodes to federate clusters and aggregate their state. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on over 500 clusters around the world. This paper presents the design, implementation, and evaluation of Ganglia along with experience gained through real world deployments on systems of widely varying scale, configurations, and target application domains over the last two and a half years.",
"Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum's Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system's scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research."
]
}
|
1310.2148
|
2044074685
|
Server clustering is a common design principle employed by many organisations who require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, whether it be the application(s) installed, the role of the server or server accessibility, for example. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration, an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia - an open source scalable system performance monitoring tool - by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where roles of servers may change frequently. Furthermore, we complement group monitoring with a control element allowing administrator-specified actions to be performed over servers within service groups as well as introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
|
The Ganglia framework relies on two daemons: gmond and gmetad @cite_14 . Briefly, the gmond daemon collects resource usage information about the host it runs on (a remote server) and sends periodic heartbeat messages via a UDP multicast protocol to the entire Ganglia cluster. The gmetad daemon collects the aggregated XML and exports it to the PHP web interface hosted on a central server (a minimal sketch of querying this XML is given after this record).
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2339804275"
],
"abstract": [
"Written by Ganglia designers and maintainers, this book shows you how to collect and visualize metrics from clusters, grids, and cloud infrastructures at any scale. Want to track CPU utilization from 20,000 hosts every ten seconds? Ganglia is just the tool you need, once you know how its main components work together. This hands-on book helps experienced system administrators take advantage of Ganglia 3.x. Learn how to extend the base set of metrics you collect, fetch current values, see aggregate views of metrics, and observe time-series trends in your data. Youll also examine real-world case studies of Ganglia installs that feature challenging monitoring requirements. Determine whether Ganglia is a good fit for your environment Learn how Ganglias gmond and gmetad daemons build a metric collection overlay Plan for scalability early in your Ganglia deployment, with valuable tips and advice Take data visualization to a new level with gweb, Ganglias web frontend Write plugins to extend gmonds metric-collection capability Troubleshoot issues you may encounter with a Ganglia installation Integrate Ganglia with the sFlow and Nagios monitoring systems Contributors include: Robert Alexander, Jeff Buchbinder, Frederiko Costa, Alex Dean, Dave Josephsen, Peter Phaal, and Daniel Pocock."
]
}
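To make the gmond/gmetad pipeline described in the record above concrete, the following minimal Python sketch polls a gmetad daemon for its aggregated XML and prints a couple of per-host metrics. This is an illustrative sketch rather than C2MS code: it assumes gmetad's default XML port (8651), the usual GANGLIA_XML/CLUSTER/HOST/METRIC element layout and the common metric names load_one and mem_free, all of which should be checked against a real deployment.

import socket
import xml.etree.ElementTree as ET

def fetch_ganglia_xml(host="localhost", port=8651):
    """Read the aggregated XML dump that gmetad serves on its xml_port (default 8651 is assumed here)."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def summarize(xml_text):
    """Print cluster name, host name and two common metrics for every host in the dump."""
    root = ET.fromstring(xml_text)  # the root element is GANGLIA_XML
    for cluster in root.iter("CLUSTER"):
        for host in cluster.iter("HOST"):
            metrics = {m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")}
            print(cluster.get("NAME"), host.get("NAME"),
                  "load_one=" + str(metrics.get("load_one")),
                  "mem_free=" + str(metrics.get("mem_free")))

if __name__ == "__main__":
    summarize(fetch_ganglia_xml())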
|
1310.2148
|
2044074685
|
Server clustering is a common design principle employed by many organisations who require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, whether it be the application(s) installed, the role of the server or server accessibility, for example. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration, an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia - an open source scalable system performance monitoring tool - by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where roles of servers may change frequently. Furthermore, we complement group monitoring with a control element allowing administrator-specified actions to be performed over servers within service groups as well as introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
|
Birman et al. outline their distributed, self-configuring monitoring and adaptation tool called Astrolabe @cite_10 . Astrolabe works like any other monitoring tool by observing the state of the infrastructure where the tool is installed. However, it differs by essentially creating a virtual system-wide hierarchical relational database based on a peer-to-peer protocol, meaning that no central server needs to exist to collect monitoring data. By performing distributed data analysis, Astrolabe can create performance summaries of zones --- machines typically grouped based on the shortest latency between one another or simply an administrator-specified group --- by data aggregation, a method we use to create graphs of cloudlets (a toy aggregation sketch is given after this record).
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2132237855"
],
"abstract": [
"The dramatic growth of computer networks creates both an opportunity and a daunting distributed computing problem for users seeking to build applications that can configure themselves and adapt as disruptions occur. The problem is that data often resides on large numbers of devices and evolves rapidly. Systems that collect data at a single location scale poorly and suffer from single-point failures. Here, we discuss the use of a new system, Astrolabe, to automate self-configuration, monitoring, and to control adaptation. Astrolabe operates by creating a virtual system-wide hierarchical database, which evolves as the underlying information changes. Astrolabe is secure, robust under a wide range of failure and attack scenarios, and imposes low loads even under stress."
]
}
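The zone-style aggregation described in the record above can be illustrated with a few lines of Python. This is an illustrative sketch only, with made-up group names and metric samples, and it does not reproduce Astrolabe's actual SQL-like aggregation interface; it simply reduces per-server samples to per-group summaries of the kind a front end could plot for each cloudlet.

from collections import defaultdict
from statistics import mean

# Hypothetical per-server samples: (group, metric, value).
samples = [
    ("web", "cpu_percent", 41.0), ("web", "cpu_percent", 73.5),
    ("db", "cpu_percent", 12.0), ("db", "mem_free_mb", 2048.0),
    ("web", "mem_free_mb", 512.0),
]

def aggregate(samples):
    """Summarise each (group, metric) pair with min / mean / max over its samples."""
    buckets = defaultdict(list)
    for group, metric, value in samples:
        buckets[(group, metric)].append(value)
    return {key: {"min": min(vals), "mean": mean(vals), "max": max(vals)}
            for key, vals in buckets.items()}

for (group, metric), summary in sorted(aggregate(samples).items()):
    print(group, metric, summary)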
|
1310.2474
|
2034095341
|
Software Product Lines (SPLs) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g. pairwise feature interaction or dissimilarity) to generate tractable, fault-finding, lists of configurations to be tested. Prioritization can also be used to sort or generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling prioritization techniques barely take product behaviour into account. We explore how ideas of statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behaviour. We discuss possible scenarios and give a prioritization procedure validated on a web-based learning management software.
|
There have been SPL test efforts to sample products for testing, such as t-wise approaches (e.g. @cite_16 @cite_30 @cite_24 ). More recently, sampling was combined with prioritization thanks to the addition of weights on feature models and the definition of multiple objectives @cite_6 @cite_9 . However, these approaches do not consider SPL behavior in their analyses. (A toy pairwise-coverage sketch is given after this record.)
|
{
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_16"
],
"mid": [
"2165096715",
"2023501626",
"1979306994",
"2143361941",
"2071160808"
],
"abstract": [
"Software product line modeling has received a great deal of attention for its potential in fostering reuse of software artifacts across development phases. Research on the testing phase, has focused on identifying the potential for reuse of test cases across product line instances. While this offers potential reductions in test development effort for a given product line instance, it does not focus on and leverage the fundamental abstraction that is inherent in software product lines - variability.In this paper, we illustrate how rich software product line modeling notations can be mapped onto an underlying relational model that captures variability in the feasible product line instances. This relational model serves as the semantic basis for defining a family of coverage criteria for testing of a product line. These criteria make it possible to accumulate test coverage information for the product line itself over the course of multiple product line instance development efforts. Cumulative coverage, in turn, enables targeted testing efforts for new product line instances. We describe how combinatorial interaction testing methods can be applied to define test configurations that achieve a desired level of coverage and identify challenges to scaling such methods to large, complex software product lines.",
"Software Products Lines (SPLs) are families of products sharing common assets representing code or functionalities of a software product. These assets are represented as features, usually organized into Feature Models (FMs) from which the user can configure software products. Generally, few features are sufficient to allow configuring millions of software products. As a result, selecting the products matching given testing objectives is a difficult problem. The testing process usually involves multiple and potentially conflicting testing objectives to fulfill, e.g. maximizing the number of optional features to test while at the same time both minimizing the number of products and minimizing the cost of testing them. However, most approaches for generating products usually target a single objective, like testing the maximum amount of feature interactions. While focusing on one objective may be sufficient in certain cases, this practice does not reflect real-life testing situations. The present paper proposes a genetic algorithm to handle multiple conflicting objectives in test generation for SPLs. Experiments conducted on FMs of different sizes demonstrate the effectiveness, feasibility and practicality of the introduced approach.",
"Combinatorial interaction testing is an approach for testing product lines. A set of products to test can be set up from the covering array generated from a feature model. The products occurring in a partial covering array, however, may not focus on the important feature interactions nor resemble any actual product in the market. Knowledge about which interactions are prevalent in the market can be modeled by assigning weights to sub-product lines. Such models enable a covering array generator to select important interactions to cover first for a partial covering array, enable it to construct products resembling those in the market and enable it to suggest simple changes to an existing set of products to test for incremental adaption to market changes. We report experiences from the application of weighted combinatorial interaction testing for test product selection on an industrial product line, TOMRA's Reverse Vending Machines.",
"Combinatorial interaction testing (CIT) is a method to sample configurations of a software system systematically for testing. Many algorithms have been developed that create CIT samples, however few have considered the practical concerns that arise when adding constraints between combinations of options. In this paper, we survey constraint handling techniques in existing algorithms and discuss the challenges that they present. We examine two highly-configurable software systems to quantify the nature of constraints in real systems. We then present a general constraint representation and solving technique that can be integrated with existing CIT algorithms and compare two constraint-enhanced algorithm implementations with existing CIT tools to demonstrate feasibility.",
"Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability, which in turn leads to combinatorial explosion of the number of derivable products. Exhaustive testing in such a large products space is hardly feasible. Hence, one possible option is to test SPLs by generating test configurations that cover all possible t feature interactions (t-wise). It dramatically reduces the number of test products while ensuring reasonable SPL coverage. In this paper, we report our experience on applying t-wise techniques for SPL with two independent toolsets developed by the authors. One focuses on generality and splits the generation problem according to strategies. The other emphasizes providing efficient generation. To evaluate the respective merits of the approaches, measures such as the number of generated test configurations and the similarity between them are provided. By applying these measures, we were able to derive useful insights for pairwise and t-wise testing of product lines."
]
}
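The pairwise (2-wise) sampling idea mentioned in the record above can be illustrated with a short Python sketch. This is a toy illustration rather than the algorithm of any cited tool: the feature names are made up, the feature model is Boolean and constraint-free, and a realistic feature model would additionally need constraint handling. The sketch greedily keeps configurations until every pair of feature assignments is covered.

from itertools import combinations, product

FEATURES = ["search", "payment", "chat", "stats"]  # hypothetical toy feature model

def pairs_of(config):
    """All (feature, value, feature, value) tuples exercised by one configuration."""
    items = sorted(config.items())
    return {(f1, v1, f2, v2) for (f1, v1), (f2, v2) in combinations(items, 2)}

def greedy_pairwise_sample(features):
    """Greedily pick configurations until all pairwise feature interactions are covered."""
    all_configs = [dict(zip(features, values))
                   for values in product([False, True], repeat=len(features))]
    uncovered = set().union(*(pairs_of(c) for c in all_configs))
    sample = []
    while uncovered:
        best = max(all_configs, key=lambda c: len(pairs_of(c) & uncovered))
        sample.append(best)
        uncovered -= pairs_of(best)
    return sample

for config in greedy_pairwise_sample(FEATURES):
    print(config)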
|
1310.2474
|
2034095341
|
Software Product Lines (SPLs) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g. pairwise feature interaction or dissimilarity) to generate tractable, fault-finding, lists of configurations to be tested. Prioritization can also be used to sort or generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling prioritization techniques barely take product behaviour into account. We explore how ideas of statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behaviour. We discuss possible scenarios and give a prioritization procedure validated on a web-based learning management software.
|
To consider behavior in an abstract way, a full-fledged MBT approach @cite_19 is required. Although behavioural MBT is well established for single-system testing @cite_10 , a survey @cite_13 shows insufficient support for SPL-based MBT. However, there have been efforts to combine sampling techniques with modeling ones (e.g. @cite_28 ). These approaches are also product-based, meaning that they may miss opportunities for test reuse amongst sampled products @cite_23 . We believe that, by benefiting from the recent advances in behavioral modeling provided by the model checking community @cite_4 @cite_0 @cite_1 @cite_27 @cite_12 @cite_14 @cite_2 @cite_20 , sound MBT approaches for SPL can be derived and interesting family-based scenarios combining verification and testing can be devised @cite_25 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_28",
"@cite_10",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_20",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2120119918",
"2106509924",
"1977497804",
"2126818664",
"2167672803",
"2168086947",
"1608087281",
"2130195901",
"",
"2142960988",
"1498432697",
"899947898",
"2160196744",
"2015315253"
],
"abstract": [
"Software product line engineering combines the individual developments of systems to the development of a family of systems consisting of common and variable assets.In this paper we introduce the process algebra PL-CCS as a product line extension of CCS and show how to model the overall behavior of an entire family within PL-CCS. PL-CCS models incorporate behavioral variability and allow the derivation of individual systems in a systematic way due to a semantics given in terms of multi-valued modal Kripke structures. Furthermore, we introduce multi-valued modal μ-calculus as a property specification language for system families specified in PL-CCS and show how model checking techniques operate on such structures. In our setting the result of model checking is no longer a simple yesor noanswer but the set of systems of the product line that do meet the specified properties.",
"We propose an emerging solution technique, pushing the application of model-checking techniques to the design and validation of variability in a product line (PL), mainly aimed at those industrial domains where model-based development is adopted for the development of safety-critical systems.",
"Testing software product lines (SPLs) is very challenging due to a high degree of variability leading to an enormous number of possible products. The vast majority of today's testing approaches for SPLs validate products individually using different kinds of reuse techniques for testing. Because of their reusability and adaptability capabilities, model-based approaches are suitable to describe variability and are therefore frequently used for implementation and testing purposes of SPLs. Due to the enormous number of possible products, individual product testing becomes more and more infeasible. Pairwise testing offers one possibility to test a subset of all possible products. However, according to the best of our knowledge, there is no contribution discussing and rating this approach in the SPL context. In this contribution, we provide a mapping between feature models describing the common and variable parts of an SPL and a reusable test model in the form of statecharts. Thereby, we interrelate feature model-based coverage criteria and test model-based coverage criteria such as control and data flow coverage and are therefore able to discuss the potentials and limitations of pairwise testing. We pay particular attention to test requirements for feature interactions constituting a major challenge in SPL engineering. We give a concise definition of feature dependencies and feature interactions from a testing point of view, and we discuss adequacy criteria for SPL coverage under pairwise feature interaction testing and give a generalization to the T-wise case. The concept and implementation of our approach are evaluated by means of a case study from the automotive domain.",
"Model based testing is one of the promising technologies to meet the challenges imposed on software testing. In model based testing an implementation under test is tested for compliance with a model that describes the required behaviour of the implementation. This tutorial chapter describes a model based testing theory where models are expressed as labelled transition systems, and compliance is defined with the 'ioco' implementation relation. The ioco-testing theory, on the one hand, provides a sound and well-defined foundation for labelled transition system testing, having its roots in the theoretical area of testing equivalences and refusal testing. On the other hand, it has proved to be a practical basis for several model based test generation tools and applications. Definitions, underlying assumptions, an algorithm, properties, and several examples of the ioco-testing theory are discussed, involving specifications, implementations, tests, the ioco implementation relation and some of its variants, a test generation algorithm, and the soundness and exhaustiveness of this algorithm.",
"We study the problem of model checking software product line (SPL) behaviours against temporal properties. This is more difficult than for single systems because an SPL with n features yields up to 2n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they showed to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This lead us to consider computation tree logic (CTL) which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.",
"We illustrate how to manage variability in a single logical framework consisting of a Modal Transition System (MTS) and an associated set of formulae expressed in the branching-time temporal logic MHML interpreted in a deontic way over such MTSs. We discuss the commonalities and differences with the framework of based on Featured Transition Systems and Linear-time Temporal Logic.",
"This book gives a practical introduction to model-based testing, showing how to write models for testing purposes and how to use model-based testing tools to generate test suites. It is aimed at testers and software developers who wish to use model-based testing, rather than at tool-developers or academics. The book focuses on the mainstream practice of functional black-box testing and covers different styles of models, especially transition-based models (UML state machines) and pre post models (UML OCL specifications and B notation). The steps of applying model-based testing are demonstrated on examples and case studies from a variety of software domains, including embedded software and information systems. From this book you will learn: * The basic principles and terminology of model-based testing * How model-based testing differs from other testing processes * How model-based testing fits into typical software lifecycles such as agile methods and the Unified Process * The benefits and limitations of model-based testing, its cost effectiveness and how it can reduce time-to-market * A step-by-step process for applying model-based testing * How to write good models for model-based testing * How to use a variety of test selection criteria to control the tests that are generated from your models * How model-based testing can connect to existing automated test execution platforms such as Mercury Test Director, Java JUnit, and proprietary test execution environments * Presents the basic principles and terminology of model-based testing * Shows how model-based testing fits into the software lifecycle, its cost-effectiveness, and how it can reduce time to market * Offers guidance on how to use different kinds of modeling techniques, useful test generation strategies, how to apply model-based testing techniques to real applications using case studies",
"In product line engineering, systems are developed in families and differences between family members are expressed in terms of features. Formal modelling and verification is an important issue in this context as more and more critical systems are developed this way. Since the number of systems in a family can be exponential in the number of features, two major challenges are the scalable modelling and the efficient verification of system behaviour. Currently, the few attempts to address them fail to recognise the importance of features as a unit of difference, or do not offer means for automated verification. In this paper, we tackle those challenges at a fundamental level. We first extend transition systems with features in order to describe the combined behaviour of an entire system family. We then define and implement a model checking technique that allows to verify such transition systems against temporal properties. An empirical evaluation shows substantial gains over classical approaches.",
"",
"In product line engineering individual products are derived from the domain artifacts of the product line. The reuse of the domain artifacts is constraint by the product line variability. Since domain artifacts are reused in several products, product line engineering benefits from the verification of domain artifacts. For verifying development artifacts, model checking is a well-established technique in single system development. However, existing model checking approaches do not incorporate the product line variability and are hence of limited use for verifying domain artifacts. In this paper we present an extended model checking approach which takes the product line variability into account when verifying domain artifacts. Our approach is thus able to verify that every permissible product (specified with I O-automata) which can be derived from the product line fulfills the specified properties (specified with CTL). Moreover, we use two examples to validate the applicability of our approach and report on the preliminary validation results.",
"Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.",
"",
"The Software Product Lines (SPLs) paradigm promises faster development cycles and increased quality by systematically reusing software assets. This paradigm considers a family of systems, each of which can be obtained by a selection of features in a variability model. Though essential, providing Quality Assurance (QA) techniques for SPLs has long been perceived as a very difficult challenge due to the combinatorics induced by variability and for which very few techniques were available. Recently, important progress has been made by the model-checking and testing communities to address this QA challenge, in a very disparate way though. We present our vision for a unified framework combining model-checking and testing approaches applied to behavioural models of SPLs. Our vision relies on Featured Transition Systems (FTSs), an extension of transition systems supporting variability. This vision is also based on model-driven technologies to support practical SPL modelling and orchestrate various QA scenarios. We illustrate one of such scenarios on a vending machine SPL.",
"Software product lines or families represent an emerging paradigm that is enabling companies to engineer applications with similar functionality and user requirements more effectively. Behaviour modelling at the architecture level has the potential for supporting behaviour analysis of entire product lines, as well as defining optional and variable behaviour for different products of a family. However, to do so rigorously, a well defined notion of behavioural conformance of a product to its product line must exist. In this paper we provide a discussion on the shortcomings of traditional behaviour modelling formalisms such as Labelled Transition Systems for characterising conformance and propose Modal Transition Systems as an alternative. We discuss existing semantics for such models, exposing their limitations and finally propose a novel semantics for Modal Transition Systems, branching semantics, that can provide the formal underpinning for a notion of behaviour conformance for software product line architectures."
]
}
|
1310.2474
|
2034095341
|
Software Product Lines (SPLs) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g. pairwise feature interaction or dissimilarity) to generate tractable, fault-finding, lists of configurations to be tested. Prioritization can also be used to sort or generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling prioritization techniques barely take product behaviour into account. We explore how ideas of statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behaviour. We discuss possible scenarios and give a prioritization procedure validated on a web-based learning management software.
|
Our aim is to apply ideas stemming from statistical testing and adapt them to an SPL context. For example, combining structural criteria with statistical testing has been discussed in @cite_5 @cite_21 . We do not make any assumption on the way the DTMC is obtained: via an operational profile @cite_3 or by analyzing the source code or the specification @cite_21 . However, a uniform distribution of probabilities over the DTMC would probably be less interesting. As noted by Whittaker @cite_22 , in such a case only the structure of traces would be considered, and therefore basing their selection on their probabilities would just be a means to limit their number in a mainly random-testing approach. In such cases, structural test generation has to be employed @cite_29 . (A small random-walk sketch over a toy DTMC usage model is given after this record.)
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_5"
],
"mid": [
"2144956041",
"2125364373",
"1880675778",
"1566634528",
"2133250787"
],
"abstract": [
"Statistical testing of software establishes a basis for statistical inference about a software system's expected field quality. This paper describes a method for statistical testing based on a Markov chain model of software usage. The significance of the Markov chain is twofold. First, it allows test input sequences to be generated from multiple probability distributions, making it more general than many existing techniques. Analytical results associated with Markov chains facilitate informative analysis of the sequences before they are generated, indicating how the test is likely to unfold. Second, the test input sequences generated from the chain and applied to the software are themselves a stochastic model and are used to create a second Markov chain to encapsulate the history of the test, including any observed failure information. The influence of the failures is assessed through analytical computations on this chain. We also derive a stopping criterion for the testing process based on a comparison of the sequence generating properties of the two chains. >",
"Markov chains with Labelled Transitions can be used to generate test cases in a model-based approach. These test cases are generated by random walks on the model according to probabilities associated with transitions. When these probabilities correspond to a usage profile, reliability may be estimated. However, in early stages of development, such probabilities are not easy to determine, thus default profiles must be considered. In such a case it may be interesting to target some coverage criteria rather to use classical uniform probability generation approach. In this paper we enrich an existing industrial tool based on usage profile with 3 possibilities to create default profiles that improve transition coverage. We report experiments that compare the improvement of the coverage rates by our approaches with respect to uniform probabilities on transitions from a given state, which is the current default profile.",
"Software validation embodies two notions: fault removal and fault forecasting. Statistical testing involves exercising a piece of software by supplying it with input values that are randomly selected according to a defined probability distribution on its input domain. It can be used as a practical tool for revealing faults in a fault removal phase, and for assessing software dependability in a fault forecasting phase. In both of these, its efficiency is linked to the adequacy of the input probability distribution with respect to the test experiment goal. In this paper a mixed validation strategy combining deterministic and random test data is defined, and the theoretical and experimental work performed to support the strategy is reported. The quoted results relate to the unit testing of four real programs from the nuclear field. They confirm the high fault revealing power of statistical structural testing. Two main directions for further investigation of statistical testing are indicated by the reported work. These are described and solutions to the associated problems are outlined.",
"Operational profiles are an important part of the technology and practice of software reliability engineering. The concept was developed originally ( 1987) to make it possible to specify the nature of the use of a software-based system so that testing could be made as realistic as possible and so that reliability measurements would reflect that realism. However, the operational profile rapidly became useful for additional purposes in software reliability engineering (Musa 1993). In fact, it is also proving useful for purposes outside of software reliability engineering as well. This paper gives an overview of operational profile practice, discussing what the operational profile is, why it is important, and how it is developed and applied. It also presents some current open research questions; work in these areas can be expected to affect the practice of the future.",
"We propose a novel way of automating statistical structural testing of software, based on the combination of uniform generation of combinatorial structures, and of randomized constraint solving techniques. More precisely, we show how to draw test cases which balance the coverage of program structures according to structural testing criteria. The control flow graph is formalized as a combinatorial structure specification. This provides a way of uniformly drawing execution paths which have suitable properties. Once a path has been drawn, the predicate characterizing those inputs which lead to its execution is solved using a constraint solving library. The constraint solver is enriched with powerful heuristics in order to deal with resolution failures and random choice strategies."
]
}
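The usage-model idea discussed in the record above can be illustrated with a short Python sketch. This is a toy illustration, not the paper's prioritization procedure: the DTMC below is made up, its transition probabilities stand in for an assumed usage profile, traces are drawn by random walk, and each trace carries the probability with which the profile exercises it, which is the quantity one would use to rank behaviours (and hence configurations) for testing.

import random

# Hypothetical usage model: state -> list of (next_state, probability, action label).
DTMC = {
    "init":   [("menu", 1.0, "start")],
    "menu":   [("browse", 0.7, "openCourse"), ("admin", 0.2, "openAdmin"), ("end", 0.1, "quit")],
    "browse": [("menu", 0.6, "back"), ("end", 0.4, "quit")],
    "admin":  [("menu", 1.0, "back")],
    "end":    [],
}

def draw_trace(dtmc, start="init", max_len=20, rng=random):
    """Random walk from `start`; returns the action sequence and its probability under the profile."""
    state, trace, prob = start, [], 1.0
    while dtmc[state] and len(trace) < max_len:
        nexts, probs, labels = zip(*dtmc[state])
        i = rng.choices(range(len(nexts)), weights=probs)[0]
        trace.append(labels[i])
        prob *= probs[i]
        state = nexts[i]
    return trace, prob

random.seed(0)
for _ in range(3):
    trace, prob = draw_trace(DTMC)
    print(round(prob, 3), " -> ".join(trace))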
|
1310.2545
|
2950483073
|
Agile software development methodologies focus on software projects which are behind schedule or highly likely to have a problematic development phase. In the last decade, Agile methods have transformed from cult techniques to mainstream methodologies. Scrum, an Agile software development method, has been widely adopted due to its adaptive nature. This paper presents a metric that measures the quality of the testing process in a Scrum process. As product quality and process quality correlate, improved test quality can ensure high quality products. Also, gaining experience from eight years of successful Scrum implementation at SoftwarePeople, we describe the Scrum process emphasizing the testing process. We propose a metric Product Backlog Rating (PBR) to assess the testing process in Scrum. PBR considers the complexity of the features to be developed in an iteration of Scrum, assesses test ratings and offers a numerical score of the testing process. This metric is able to provide a comprehensive overview of the testing process over the development cycle of a product. We present a case study which shows how the metric is used at SoftwarePeople. The case study explains some features that have been developed in a Sprint in terms of feature complexity and potential test assessment difficulties and shows how PBR is calculated during the Sprint. We propose a test process assessment metric that provides insights into the Scrum testing process. However, the metric needs further evaluation considering associated resources (e.g., quality assurance engineers, the length of the Scrum cycle).
|
Software metrics quantify specific attributes of a software product or of the software development process @cite_57 . A wide body of research attempts to numerically evaluate the quality characteristics of a software product. Also, software process improvement has been studied for a long time. However, to the best of our knowledge, we are the first to evaluate the quality of software testing in an Agile process.
|
{
"cite_N": [
"@cite_57"
],
"mid": [
"2123964455"
],
"abstract": [
"The word success is very powerful. It creates strong, but widely varied, images that may range from the final seconds of an athletic contest to a graduation ceremony to the loss of 10 pounds. Success makes us feel good; it's cause for celebration. All these examples of success are marked by a measurable end point, whether externally or self-created. Most of us who create software approach projects with some similar idea of success. Our feelings from project start to end are often strongly influenced by whether we spent any early time describing this success and how we might measure progress. Software metrics measure specific attributes of a software product or a software development process. In other words, they are measures of success. It's convenient to group the ways that we apply metrics to measure success into four areas. What do you need to measure and analyze to make your project a success? We show examples from many projects and Hewlett Packard divisions which may help you chart your course. >"
]
}
|
1310.2545
|
2950483073
|
Agile software development methodologies focus on software projects which are behind schedule or highly likely to have a problematic development phase. In the last decade, Agile methods have transformed from cult techniques to mainstream methodologies. Scrum, an Agile software development method, has been widely adopted due to its adaptive nature. This paper presents a metric that measures the quality of the testing process in a Scrum process. As product quality and process quality correlate, improved test quality can ensure high quality products. Also, gaining experience from eight years of successful Scrum implementation at SoftwarePeople, we describe the Scrum process emphasizing the testing process. We propose a metric Product Backlog Rating (PBR) to assess the testing process in Scrum. PBR considers the complexity of the features to be developed in an iteration of Scrum, assesses test ratings and offers a numerical score of the testing process. This metric is able to provide a comprehensive overview of the testing process over the development cycle of a product. We present a case study which shows how the metric is used at SoftwarePeople. The case study explains some features that have been developed in a Sprint in terms of feature complexity and potential test assessment difficulties and shows how PBR is calculated during the Sprint. We propose a test process assessment metric that provides insights into the Scrum testing process. However, the metric needs further evaluation considering associated resources (e.g., quality assurance engineers, the length of the Scrum cycle).
|
To address product quality characteristics and subcharacteristics, the Joint Technical Committee @math of the International Organization for Standardization and the International Electrotechnical Commission defined a set of software product quality standards known as ISO IEC 9126 @cite_36 (see a complete listing in Table ). They defined six high-level product quality characteristics, which are as follows: functionality---the extent to which each function of the software system operates in conformance with the requirement specification; reliability---the extent to which software can be expected to perform its intended function with the required precision; usability---the effort required to learn, operate, prepare input for, and interpret output of a program; efficiency---the amount of computing resources and code required by software to perform a function; maintainability---the effort required to diagnose and fix a bug in operational software; and portability---the effort required to transfer software from one hardware configuration and/or software system environment to another @cite_17 . These quality characteristics have been studied extensively, and researchers have proposed a wide range of metrics to measure them (e.g., @cite_12 @cite_75 @cite_40 @cite_1 @cite_16 @cite_7 @cite_50 ).
|
{
"cite_N": [
"@cite_7",
"@cite_36",
"@cite_1",
"@cite_40",
"@cite_50",
"@cite_16",
"@cite_75",
"@cite_12",
"@cite_17"
],
"mid": [
"2096532105",
"2114574011",
"2114728368",
"2171242934",
"2025043404",
"2044802063",
"103971554",
"2153704451",
"2095032754"
],
"abstract": [
"In component-based software development, it is necessary to measure the reusability of components in order to realize the reuse of components effectively. There are some product metrics for measuring the reusability of object-oriented software. However, in application development with reuse, it is difficult to use conventional metrics because the source codes of components cannot be obtained, and these metrics require analysis of source codes. We propose a metrics suite for measuring the reusability of such black-box components based on limited information that can be obtained from the outside of components without any source codes. We define five metrics for measuring a component's understandability, adaptability, and portability, with confidence intervals that were set by statistical analysis of a number of JavaBeans components. Moreover, we provide a reusability metric by combining these metrics based on a reusability model. As a result of evaluation experiments, it is found that our metrics can effectively identify black-box components with high reusability.",
"To address the issues of software product quality, the Joint Technical Committee 1 of the International Organization for Standardization and International Electrotechnical Commission published a set of software product quality standards known as ISO IEC 9126. These standards specify software product quality's characteristics and subcharacteristics and their metrics. Based on a user survey, this study of the standard helps clarity quality attributes and provides guidance for the resulting standards.",
"It is noted that the factors of software that determine or influence maintainability can be organized into a hierarchical structure of measurable attributes. For each of these attributes the authors show a metric definition consistent with the published definitions of the software characteristic being measured. The result is a tree structure of maintainability metrics which can be used for purposes of evaluating the relative maintainability of the software system. The authors define metrics for measuring the maintainability of a target software system and discuss how those metrics can be combined into a single index of maintainability. >",
"A number of analytical models have been proposed during the past 15 years for assessing the reliability of a software system. In this paper we present an overview of the key modeling approaches, provide a critical analysis of the underlying assumptions, and assess the limitations and applicability of these models during the software development cycle. We also propose a step-by-step procedure for fitting a model and illustrate it via an analysis of failure data from a medium-sized real-time command and control software system.",
"Structured design methodologies provide a disciplined and organized guide to the construction of software systems. However, while the methodology structures and documents the points at which design decisions are made, it does not provide a specific, quantitative basis for making these decisions. Typically, the designers' only guidelines are qualitative, perhaps even vague, principles such as \"functionality,\" \"data transparency,\" or \"clarity.\" This paper, like several recent publications, defines and validates a set of software metrics which are appropriate for evaluating the structure of large-scale systems. These metrics are based on the measurement of information flow between system components. Specific metrics are defined for procedure complexity, module complexity, and module coupling. The validation, using the source code for the UNIX operating system, shows that the complexity measures are strongly correlated with the occurrence of changes. Further, the metrics for procedures and modules can be interpreted to reveal various types of structural flaws in the design and implementation.",
"Software metrics have been much criticized in the last few years, sometimes justly but more often unjustly, because critics misunderstand the intent behind the technology. Software complexity metrics, for example, rarely measure the \"inherent complexity\" embedded in software systems, but they do a very good job of comparing the relative complexity of one portion of a system with another. In essence, they are good modeling tools. Whether they are also good measuring tools depends on how consistently and appropriately they are applied. >",
"We are developing a suite of metrics for early assessment of software reliability and to provide feedback to the developer on the quality of their testing effort. The suite consists of easy-to-measure information collected from the source code and test programs. We are studying correlation between these metrics and the reliability of the developed software. The results of an initial case study demonstrate the feasibility of the approach. Metrics will be added to or deleted from the suite based on further validation work.",
"The last decade marked the first real attempt to turn software development into engineering through the concepts of Component-Based Software Development (CBSD) and Commercial Off-The-Shelf (COTS) components, with the goal of creating high-quality parts that could be joined together to form a functioning system. One of the most critical processes in CBSD is the selection of a set of software components from in-house or external repositories that fulfil some architectural and user-defined requirements. However, there is a lack of quality models and metrics that can help evaluate the quality characteristics of software components during this selection process. This paper presents a set of measures to assess the Usability of software components, and describes the method followed to obtain and validate them. Such a method can be re-used as a pattern for defining and validating measures for further quality characteristics.",
"Research in software metrics incorporated in a framework established for software quality measurement can potentially provide significant benefits to software quality assurance programs. The research described has been conducted by General Electric Company for the Air Force Systems Command Rome Air Development Center. The problems encountered defining software quality and the approach taken to establish a framework for the measurement of software quality are described in this paper."
]
}
|
1310.1896
|
2950977994
|
After a sequence of improvements, Boyd, Sitters, van der Ster, and Stougie proved that any 2-connected graph whose n vertices have degree 3, i.e., a cubic 2-connected graph, has a Hamiltonian tour of length at most (4/3)n, establishing in particular that the integrality gap of the subtour LP is at most 4/3 for cubic 2-connected graphs and matching the conjectured value of the famous 4/3 conjecture. In this paper we improve upon this result by designing an algorithm that finds a tour of length (4/3 - 1/61236)n, implying that cubic 2-connected graphs are among the few interesting classes of graphs for which the integrality gap of the subtour LP is strictly less than 4/3. With the previous result, and by considering an even smaller epsilon, we show that the integrality gap of the TSP relaxation is at most 4/3 - epsilon, even if the graph is not 2-connected (i.e. for cubic connected graphs), implying that the approximability threshold of the TSP in cubic graphs is strictly below 4/3. Finally, using similar techniques we show, as an additional result, that every Barnette graph admits a tour of length at most (4/3 - 1/18)n.
|
There have been several improvements for important special cases of the metric TSP in the last couple of years. Oveis Gharan, Saberi, and Singh @cite_1 design a @math -approximation algorithm for the case of graph metrics, while Mömke and Svensson @cite_16 improve that to @math , using a different approach. Mucha @cite_0 then showed that the approximation guarantee of the Mömke and Svensson algorithm is @math . Finally, still in the shortest path metric case, Sebő and Vygen @cite_17 find an algorithm with a guarantee of @math . These results in particular show that the integrality gap of the subtour LP is below @math in case the metric comes from a graph. Another notable direction of recent improvements concerns the @math path version of the TSP on arbitrary metrics, where the natural extension of Christofides' heuristic guarantees a solution within a factor of @math of the optimum. An, Shmoys, and Kleinberg @cite_13 find a @math -approximation algorithm for this version of the TSP, while Sebő @cite_4 further improved this result, obtaining a @math -approximation algorithm.
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"1546311336",
"",
"2950382808",
"2950432526",
"2051585135",
"1999799345"
],
"abstract": [
"We prove the approximation ratio 8 5 for the metric s,t -path-TSP, and more generally for shortest connected T-joins. The algorithm that achieves this ratio is the simple \"Best of Many\" version of Christofides' algorithm (1976), suggested by An, Kleinberg and Shmoys (2012), which consists in determining the best Christofides s,t -tour out of those constructed from a family @math of trees having a convex combination dominated by an optimal solution x* of the Held-Karp relaxation. They give the approximation guarantee @math for such an s,t -tour, which is the first improvement after the 5 3 guarantee of Hoogeveen's Christofides type algorithm (1991). Cheriyan, Friggstad and Gao (2012) extended this result to a 13 8-approximation of shortest connected T-joins, for |T|≥4. The ratio 8 5 is proved by simplifying and improving the approach of An, Kleinberg and Shmoys that consists in completing x* 2 in order to dominate the cost of \"parity correction\" for spanning trees. We partition the edge-set of each spanning tree in @math into an s,t -path (or more generally, into a T-join) and its complement, which induces a decomposition of x*. This decomposition can be refined and then efficiently used to complete x* 2 without using linear programming or particular properties of T, but by adding to each cut deficient for x* 2 an individually tailored explicitly given vector, inherent in x*. A simple example shows that the Best of Many Christofides algorithm may not find a shorter s,t -tour than 3 2 times the incidentally common optima of the problem and of its fractional relaxation.",
"",
"The Travelling Salesman Problem is one the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with approximation factor of 3 2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have the integrality gap of only 4 3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis , and then by Momke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Momke and Svensson yielding a bound of 13 9 on the approximation factor, as well as a bound of 19 12+epsilon for any epsilon>0 for a more general Travelling Salesman Path Problem in graphic metrics.",
"We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4 3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.",
"We present a deterministic (1+√5 2)-approximation algorithm for the s-t path TSP for an arbitrary metric. Given a symmetric metric cost on @math vertices including two prespecified endpoints, the problem is to find a shortest Hamiltonian path between the two endpoints; Hoogeveen showed that the natural variant of Christofides' algorithm is a 5 3-approximation algorithm for this problem, and this asymptotically tight bound in fact had been the best approximation ratio known until now. We modify this algorithm so that it chooses the initial spanning tree based on an optimal solution to the Held-Karp relaxation rather than a minimum spanning tree; we prove this simple but crucial modification leads to an improved approximation ratio, surpassing the 20-year-old barrier set by the natural Christofides' algorithm variant. Our algorithm also proves an upper bound of 1+√5 2 on the integrality gap of the path-variant Held-Karp relaxation. The techniques devised in this paper can be applied to other optimization problems as well: these applications include improved approximation algorithms and improved LP integrality gap upper bounds for the prize-collecting s-t path problem and the unit-weight graphical metric s-t path TSP.",
"We prove new results for approximating the graph-TSP and some related problems. We obtain polynomial-time algorithms with improved approximation guarantees. For the graph-TSP itself, we improve the approximation ratio to 7=5. For a generalization, the minimum T-tour problem, we obtain the first nontrivial approximation algorithm, with ratio 3=2. This contains the s-t-path graph-TSP as a special case. Our approximation guarantee for finding a smallest 2-edge-connected spanning subgraph is 4=3. The key new ingredient of all our algorithms is a special kind of ear-decomposition optimized using forest representations of hypergraphs. The same methods also provide the lower bounds (arising from LP relaxations) that we use to deduce the approximation ratios."
]
}
|
1310.1693
|
2951492986
|
Recently it has been shown that an aggregation of Thermostatically Controlled Loads (TCLs) can be utilized to provide fast regulating reserve service for power grids and the behavior of the aggregation can be captured by a stochastic battery with dissipation. In this paper, we address two practical issues associated with the proposed battery model. First, we address clustering of a heterogeneous collection and show that by finding the optimal dissipation parameter for a given collection, one can divide these units into few clusters and improve the overall battery model. Second, we analytically characterize the impact of imposing a no-short-cycling requirement on TCLs as constraints on the ramping rate of the regulation signal. We support our theorems by providing simulation results.
|
TCLs have recently been considered for providing load following and regulation services to the grid @cite_4 @cite_0 @cite_24 @cite_7 @cite_17 @cite_20 . In particular, it has recently been shown that the aggregate flexibility offered by a collection of TCLs can be succinctly modeled as a stochastic battery with dissipation @cite_10 @cite_2 . The power limits and energy capacity of this battery model can be calculated in terms of TCL model parameters and exogenous variables such as ambient temperature and user-specified set-points. Simple battery models are also considered in @cite_0 @cite_20 . Clustering and no-short-cycling of TCLs have been reported in @cite_15 @cite_9 .
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"1999913837",
"1836404886",
"2044689796",
"2031238918",
"2181681996",
"",
"2091510875",
"2000502807",
"1787290749",
"2010029789"
],
"abstract": [
"This paper develops new methods to model and control the aggregated power demand from a population of thermostatically controlled loads, with the goal of delivering services such as regulation and load following. Previous work on direct load control focuses primarily on peak load shaving by directly interrupting power to loads. In contrast, the emphasis of this paper is on controlling loads to produce relatively short time scale responses (hourly to sub-hourly), and the control signal is applied by manipulation of temperature set points, possibly via programmable communicating thermostats or advanced metering infrastructure. To this end, the methods developed here leverage the existence of system diversity and use physically-based load models to inform the development of a new theoretical model that accurately predicts – even when the system is not in equilibrium – changes in load resulting from changes in thermostat temperature set points. Insight into the transient dynamics that result from set point changes is developed by deriving a new exact solution to a well-known hybrid state aggregated load model. The eigenvalues of the solution, which depend only on the thermal time constant of the loads under control, are shown to have a strong effect on the accuracy of the model. The paper also shows that load heterogeneity – generally something that must be assumed away in direct load control models – actually has a positive effect on model accuracy. System identification techniques are brought to bear on the problem, and it is shown that identified models perform only marginally better than the theoretical model. The paper concludes by deriving a minimum variance control law, and demonstrates its effectiveness in simulations wherein a population of loads is made to follow the output of a wind plant with very small changes in the nominal thermostat temperature set points.",
"As the penetration of intermittent energy sources grows substantially, loads will be required to play an increasingly important role in compensating the fast time-scale fluctuations in generated power. Recent numerical modeling of thermostatically controlled loads (TCLs) has demonstrated that such load following is feasible, but analytical models that satisfactorily quantify the aggregate power consumption of a group of TCLs are desired to enable controller design. We develop such a model for the aggregate power response of a homogeneous population of TCLs to uniform variation of all TCL setpoints. A linearized model of the response is derived, and a linear quadratic regulator (LQR) has been designed. Using the TCL setpoint as the control input, the LQR enables aggregate power to track reference signals that exhibit step, ramp and sinusoidal variations. Although much of the work assumes a homogeneous population of TCLs with deterministic dynamics, we also propose a method for probing the dynamics of systems where load characteristics are not well known.",
"Demand-side Demand-side control is playing an increasingly important role in smart grid control strategies. Modeling the dynamical behavior of a large population of appliances is especially important to evaluate the effectiveness of various load control strategies. In this paper, a high accuracy aggregated model is first developed for a population of HVAC units. The model efficiently includes statistical information of the population, systematically deals with heterogeneity, and accounts for a second-order effect necessary to accurately capture the transient dynamics in the collective response. Furthermore, the model takes into account the lockout effect of the compressor in order to represent the dynamics of the system under control more accurately. Then, a novel closed loop load control strategy is designed to track a desired demand curve and to ensure a stable and smooth response.",
"Abstract In this paper, a coordination approach for thermostat-controlled household appliances is developed. Under consideration are heating and cooling devices such as refrigerators, freezers, water boilers and heat pumps, which are usually characterized by an intermittent (duty cycle) operation. Without influence from the outside, an internal hysteresis switching controller toggles the “on off” state of a device when its temperature boundary is reached. In order to realize the approach proposed here, the devices are connected to a central control entity by two-way communication. The coordination consists of a step-wise heuristic solution of a binary optimization problem, which serves to select a number of devices in each time step that are subject to compulsory switching, i.e. toggling the “on off” state. It allows a large group of devices to track a setpoint trajectory with its aggregated power consumption, acting like a distributed virtual energy storage, while the individual temperature bounds of the appliances are not violated. This behavior can be used for grid-control purposes, such as the provision of active power reserves. It will be shown that the group of appliances can be characterized by an approximate aggregated dynamical model consisting of a first-order differential equation together with an approximate aggregated nonlinear cost function penalizing the control actions. The developed methodology is evaluated in a numerical simulation with a small appliance cluster corresponding to a residential housing area.",
"This paper presents a novel modeling and con- trol approach for the aggregation of large numbers of hetero- geneous thermostatically controlled loads, such as refrigera- tors, electric water heaters, and air conditioners, and their usage for Demand Response. Unlike traditional Demand Re- sponse methods that act on time scales of hours, this ap- proach is able to provide short-term (e.g., second-to-second) ancillary services, such as balancing and frequency control. A statistical modeling approach based on Markov Chains is used to describe the evolution of probability mass in a tem- perature state space. The Markov state transition matrix is identified using state information from the population of thermostatically controlled loads. A predictive controller is used to control the aggregate population of loads such that it tracks a signal. A simulation example shows the applicabil- ity of the approach to realistic systems, and includes a com- parison of control performance depending on available state information and controller parameterization.",
"",
"Due to the potentially large number of Distributed Energy Resources (DERs) - demand response, distributed generation, distributed storage - that are expected to be deployed, it is impractical to use detailed models of these resources when integrated with the transmission system. Being able to accurately estimate the transients caused by demand response is especially important to analyze the stability of the system under different demand response strategies, where dynamics on time scales of seconds to minutes are important. On the other hand, a less complex model is more amenable to study stability of a large power system, and to design feedback control strategies for the population of devices to provide ancillary services. The main contribution of this paper is to develop an aggregated model for a heterogeneous population of Thermostatic Controlled Loads (TCLs) to accurately capture their collective behavior under demand response. The aggregated model efficiently includes statistical information of the population, systematically deals with heterogeneity, and accounts for a second-order effect necessary to accurately capture the transient dynamics in the collective response. The developed aggregated model is validated against simulations of thousands of detailed building models using GridLAB-D (an open source distribution simulation software) under both steady state and severe dynamic conditions.",
"The thermal storage potential of Thermostatically Controlled Loads (TCLs) is a tremendous flexible resource for providing various ancillary services to the grid. In this work, we study aggregate modeling, characterization, and control of TCLs for frequency regulation service provision. We propose a generalized battery model for aggregating flexibility of a collection of TCLs. A theoretical characterization of the aggregate power limits and energy capacity of TCLs is provided. Moreover, we propose a priority-stack-based control strategy to manipulate the power consumption of TCLs for frequency regulation, while preventing short cycling on the units. Numerical experiments are provided to show the accuracy of the proposed model and the efficacy of the developed control method.",
"We investigate the potential for aggregations of residential thermostatically controlled loads (TCLs), such as air conditioners, to arbitrage intraday wholesale electricity market prices via non-disruptive direct load control. Since wholesale electricity prices reflect power system conditions, arbitrage provides a service to the grid, helping to balance real-time supply and demand. While previous work on the energy arbitrage problem has used simple energy storage models, we use high fidelity TCL-specific models which allow us to understand and quantify the full capabilities and constraints of these time-varying systems. We explore two optimization control frameworks for solving the arbitrage problem, both based on receding horizon linear programming. Since we find that the first approach requires significant computation, we develop a second approach involving decomposition of the optimal control problem into separate optimization and control problems. Simulation results show that TCLs could save on the order of 10 of wholesale energy costs via arbitrage, with savings decreasing with price forecast error.",
"Thermostatically controlled loads (TCLs), such as refrigerators, air conditioners, and electric water heaters, can be aggregated and used to deliver power systems services. The effectiveness of control strategies depends on the level of infrastructure and communications. This paper explores the use of TCLs for load following when measured state information is not available in real time. We use Markov Chain models to describe the temperature state evolution of populations of TCLs, and Kalman filtering techniques for both state estimation and joint parameter state estimation. We find power tracking RMS errors in the range of 2-16 of the aggregate steady state power consumption of the TCL population. Results depend upon the information available for system identification, state estimation, and control. If high precision tracking is not required, TCLs may not need to be metered to provide state information to the central controller in real time or at all."
]
}
|
1310.1757
|
2952295562
|
The Neural Autoregressive Distribution Estimator (NADE) and its real-valued version RNADE are competitive density models of multidimensional data across a variety of domains. These models use a fixed, arbitrary ordering of the data dimensions. One can easily condition on variables at the beginning of the ordering, and marginalize out variables at the end of the ordering, however other inference tasks require approximate inference. In this work we introduce an efficient procedure to simultaneously train a NADE model for each possible ordering of the variables, by sharing parameters across all these models. We can thus use the most convenient model for each inference task at hand, and ensembles of such models with different orderings are immediately available. Moreover, unlike the original NADE, our training procedure scales to deep models. Empirically, ensembles of Deep NADE models obtain state of the art density estimation performance.
|
Our algorithm also bears similarity to denoising autoencoders @cite_5 trained using so-called "masking noise". There are, however, two crucial differences. The first is that our procedure corresponds to training on the average reconstruction of only the inputs that are missing from the input layer. The second is that, unlike denoising autoencoders, the NADE models that we train can be used as tractable density estimators.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2025768430"
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite."
]
}
|
1310.1115
|
1517563762
|
We study properties of an attractive-repulsive energy functional based on power-kernels, which can be used for halftoning of images. In the first part of this work, using a variational framework for probability measures, we examine existence and behavior of minimizers to the functional and to a regularization of it by a total variation term. Moreover, we introduce particle approximations to the functional and to its regularized version and prove their consistency in terms of Gamma-convergence, which we additionally illustrate by numerical examples. In the second part, we consider the gradient flow of the functional in the 2-Wasserstein space and prove statements about its asymptotic behavior for large times, for which we employ the pseudo-inverse technique for probability measures in 1D. Depending on the parameter range, this includes existence of a subsequence converging to a steady state or even convergence of the whole trajectory to a limit which we can specify explicitly. For both parts of the work, a key ingredient is the generalized Fourier transform, which allows us to verify the conditional positive definiteness of the interaction kernel for coinciding attractive and repulsive exponents.
|
The range of mathematical questions arising when investigating such models is diverse. Firstly, one can study the continuous functional to find conditions for the existence of (local or global) minimizers and afterwards determine some of their properties. Examples of this are the so-called non-local isoperimetric problem studied in @cite_17 and @cite_19 , where a total variation term like the one considered here not only appears but is in fact critical for the model, and the non-local Ginzburg-Landau energies for diblock copolymer systems as in @cite_25 @cite_29 .
|
{
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_25",
"@cite_17"
],
"mid": [
"2962724035",
"1893146140",
"",
"2963161858"
],
"abstract": [
"We give a detailed description of the geometry of single droplet patterns in a non- local isoperimetric problem. In particular we focus on the sharp interface limit of the Ohta- Kawasaki free energy for diblock copolymers, regarded as a paradigm for those energies modeling physical systems characterized by a competition between short and a long-range interactions. Exploiting fine properties of the regularity theory for minimal surfaces, we extend previous partial results in different directions and give robust tools for the geometric analysis of more complex patterns. where u is the order parameter of a two-phases system confined in ⊂ R n , and and m are two nonnegative numerical parameters. The two terms in the energy mimic attractive short-range and repulsive long-range energies between the phases. More precisely the first term is local, it favors minimal interface area and drives the system toward a partition into few pure phases while the second term, involving a Coulomb-like kernel G, is non-local and favor a fine mixing of the phases. A detailed description of the energy is given in § 1. The competition between these two terms is expected to induce the formation of highly regular mesoscopic patterns, whose geometry strongly depends on the choice of the parameters and m (e.g. spherical spots, cylinders, gyroids and lamellae). 0.1. The Ohta-Kawasaki functional for diblock copolymers. The model we consider arises as a simplification of a Ginzburg-Landau functional proposed by Ohta and Kawasaki in their pioneering paper (41) as a possible description of a diblock copolymers' (DBC) system. Even though it is questionable whether such an energy actually describes DBCs (see Choksi and Ren (15), Muratov (39) and Niethammer and Oshita (40)), nevertheless it is a first, and mathematically non-trivial, attempt to capture some of the main features of these systems. For such a reason it deserved over the last twenty years great attention from both the mathematical and the physical",
"This is the second in a series of papers in which we derive a @math -expansion for the two-dimensional non-local Ginzburg-Landau energy with Coulomb repulsion known as the Ohta-Kawasaki model in connection with diblock copolymer systems. In this model, two phases appear, which interact via a nonlocal Coulomb type energy. Here we focus on the sharp interface version of this energy in the regime where one of the phases has very small volume fraction, thus creating small \"droplets\" of the minority phase in a \"sea\" of the majority phase. In our previous paper, we computed the @math -limit of the leading order energy, which yields the averaged behavior for almost minimizers, namely that the density of droplets should be uniform. Here we go to the next order and derive a next order @math -limit energy, which is exactly the Coulombian renormalized energy obtained by Sandier and Serfaty as a limiting interaction energy for vortices in the magnetic Ginzburg-Landau model. The derivation is based on the abstract scheme of Sandier-Serfaty that serves to obtain lower bounds for 2-scale energies and express them through some probabilities on patterns via the multiparameter ergodic theorem. Without thus appealing to the Euler-Lagrange equation, we establish for all configurations which have \"almost minimal energy\" the asymptotic roundness and radius of the droplets, and the fact that they asymptotically shrink to points whose arrangement minimizes the renormalized energy in some averaged sense. Via a kind of @math -equivalence, the obtained results also yield an expansion of the minimal energy for the original Ohta-Kawasaki energy. This leads to expecting to see triangular lattices of droplets as energy minimizers.",
"",
"This paper is the continuation of a previous paper (H. Knupfer and C. B. Muratov, Comm. Pure Appl. Math. 66 (2013), 1129‐1162). We investigate the classical isoperimetric problem modified by an addition of a nonlocal repulsive term generated by a kernel given by an inverse power of the distance. In this work, we treat the case of a general space dimension. We obtain basic existence results for minimizers with sufficiently small masses. For certain ranges of the exponent in the kernel, we also obtain nonexistence results for sufficiently large masses, as well as a characterization of minimizers as balls for sufficiently small masses and low spatial dimensionality. The physically important special case of three space dimensions and Coulombic repulsion is included in all the results mentioned above. In particular, our work yields a negative answer to the question if stable atomic nuclei at arbitrarily high atomic numbers can exist in the framework of the classical liquid drop model of nuclear matter. In all cases the minimal energy scales linearly with mass for large masses, even if the infimum of energy cannot be attained. © 2014 Wiley Periodicals, Inc."
]
}
|
1310.1115
|
1517563762
|
We study properties of an attractive-repulsive energy functional based on power-kernels, which can be used for halftoning of images. In the first part of this work, using a variational framework for probability measures, we examine existence and behavior of minimizers to the functional and to a regularization of it by a total variation term. Moreover, we introduce particle approximations to the functional and to its regularized version and prove their consistency in terms of Gamma-convergence, which we additionally illustrate by numerical examples. In the second part, we consider the gradient flow of the functional in the 2-Wasserstein space and prove statements about its asymptotic behavior for large times, for which we employ the pseudo-inverse technique for probability measures in 1D. Depending on the parameter range, this includes existence of a subsequence converging to a steady state or even convergence of the whole trajectory to a limit which we can specify explicitly. For both parts of the work, a key ingredient is the generalized Fourier transform, which allows us to verify the conditional positive definiteness of the interaction kernel for coinciding attractive and repulsive exponents.
|
With respect to our particular problem and the static setting, see @cite_28 for efficient optimization algorithms to find local minima of ( E ) and @cite_6 for the relationship between minimizers of ( E ) and the error of quadrature formulas, which also highlights the connection between those minimizers and the problem of optimal quantization of measures (see @cite_23 @cite_32 ). As for the gradient flow, see for example @cite_3 for the analysis of symmetric steady states of the gradient flow of interaction functionals similar to ( W ), but composed of the sum of an attractive and a repulsive power function.
|
{
"cite_N": [
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_23"
],
"mid": [
"2059201851",
"",
"2950380698",
"1975318518",
"1980999467"
],
"abstract": [
"Motivated by a recent halftoning method which is based on electrostatic principles, we analyze a halftoning framework where one minimizes a functional consisting of the difference of two convex functions. One describes attracting forces caused by the image's gray values; the other one enforces repulsion between points. In one dimension, the minimizers of our functional can be computed analytically and have the following desired properties: The points are pairwise distinct, lie within the image frame, and can be placed at grid points. In the two-dimensional setting, we prove some useful properties of our functional, such as its coercivity, and propose computing a minimizer by a forward-backward splitting algorithm. We suggest computing the special sums occurring in each iteration step of our dithering algorithm by a fast summation technique based on the fast Fourier transform at nonequispaced knots, which requires only @math arithmetic operations for @math points. Finally, we present numerical results showing the excellent performance of our dithering method.",
"",
"In this paper, we investigate nonlocal interaction equations with repulsive-attractive radial potentials. Such equations describe the evolution of a continuum density of particles in which they repulse each other in the short range and attract each other in the long range. We prove that under some conditions on the potential, radially symmetric solutions converge exponentially fast in some transport distance toward a spherical shell stationary state. Otherwise we prove that it is not possible for a radially symmetric solution to converge weakly toward the spherical shell stationary state. We also investigate under which condition it is possible for a non-radially symmetric solution to converge toward a singular stationary state supported on a general hypersurface. Finally we provide a detailed analysis of the specific case of the repulsive-attractive power law potential as well as numerical results. We point out the the conditions of radial ins stability are sharp.",
"Moving groups of animals, including fish, ungulates, birds and honeybee swarms seem able to take complex decisions in the absence of signalling mechanisms, and when group members cannot establish who has or has not got information. A numerical simulation shows how such groups can make accurate consensus decisions, and that the larger the group, the smaller the proportion of informed individuals needed to guide the group. A very small proportion of informed individuals is sufficient for near maximal accuracy. This has implications for our understanding of the evolution of information transfer in groups, and also suggests a new design protocol for the guidance of grouping robots. Cover photo, by Phillip Colla Natural History Photography ( http: www.OceanLight.com ), shows schooling jack mackerel.",
"Minimum sums of moments or, equivalently, distortion of optimum quantizers play an important role in several branches of mathematics. Fejes Toth's inequality for sums of moments in the plane and Zador's asymptotic formula for minimum distortion in Euclidean d-space are the first precise pertinent results in dimension d⩾2. In this article these results are generalized in the form of asymptotic formulae for minimum sums of moments, resp. distortion of optimum quantizers on Riemannian d-manifolds and normed d-spaces. In addition, we provide geometric and analytic information on the structure of optimum configurations. Our results are then used to obtain information on (i) the minimum distortion of high-resolution vector quantization and optimum quantizers, (ii) the error of best approximation of probability measures by discrete measures and support sets of best approximating discrete measures, (iii) the minimum error of numerical integration formulae for classes of Holder continuous functions and optimum sets of nodes, (iv) best volume approximation of convex bodies by circumscribed convex polytopes and the form of best approximating polytopes, and (v) the minimum isoperimetric quotient of convex polytopes in Minkowski spaces and the form of the minimizing polytopes."
]
}
|
1310.1137
|
2087133238
|
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle --- unlike a CAPTCHA [44]. Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image --- a much weaker usability assumption than previous password systems based on Inkblots which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme.
|
We stress that our use of Inkblot images is different in two ways: (1) Usability: We do not require users to recall the word or phrase associated with each Inkblot. Instead, we require users to recognize the word or phrase associated with each Inkblot so that they can match each phrase with the appropriate Inkblot image. Recognition is widely accepted to be easier than the task of recall @cite_13 @cite_16 . (2) Security: We do not need to assume that it would be difficult for other humans to match the phrases with each Inkblot. We only assume that it is difficult for a computer to perform this matching automatically.
|
{
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2079981839",
"1706881574"
],
"abstract": [
"This paper offers a critical appreciation of the theory that the recall of an event entails a generation (or search) process followed by a recognition (or decision) process. The theory provides an elegant account of a variety of experimental findings. On the other hand, the results of a large number of studies run counter to its predictions or are otherwise not readily accommodated by it. It is suggested that the general notion of recall as generation-plus-recognition should be retained as a useful model, but not as a theory to be held strictly accountable to the data.",
"Why Do We Need Memory? Perceiving and Remembering. How Many Kinds of Memory? The Evidence for STM. The Role of Memory in Cognition - Working Memory. Visual Memory and the Visuo-spatial Sketchpad. Attention and the Control of Memory. When Practice Makes Perfect. Organizing and Learning. Acquiring Habits. When Memory Fails. Retrieval. Recollection and Autobiographical Memory. Where Next? Connectionism Rides Again. Knowledge. Memory, Emotion and Cognition. Understanding Amnesia. Treating Memory Problems. Consciousness. Implicit Knowledge and Learning. Implicit Memory and Recollection."
]
}
|
1310.1137
|
2087133238
|
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle --- unlike a CAPTCHA [44]. Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image --- a much weaker usability assumption than previous password systems based on Inkblots which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme.
|
CAPTCHAs --- formally introduced by von Ahn et al. @cite_24 --- have gained widespread adoption on the internet to prevent bots from automatically registering for accounts. A CAPTCHA is a program that generates a puzzle --- which should be easy for a human to solve and difficult for a computer to solve --- as well as a solution. Many popular forms of CAPTCHAs (e.g., reCAPTCHA @cite_44 ) generate garbled text, which is easy (admittedly, some people would dispute the use of the label "easy") for a human to read, but difficult for a computer to decipher. Other versions of CAPTCHAs rely on the natural human capacity for audio @cite_25 or image recognition @cite_5 .
|
{
"cite_N": [
"@cite_24",
"@cite_44",
"@cite_5",
"@cite_25"
],
"mid": [
"1603565383",
"2022710553",
"2156749117",
""
],
"abstract": [
"We introduce captcha, an automated test that humans can pass, but current computer programs can't pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. captchas based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels.",
"CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99 , matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words.",
"We present Asirra (Figure 1), a CAPTCHA that asks users to identify cats out of a set of 12 photographs of both cats and dogs. Asirra is easy for users; user studies indicate it can be solved by humans 99.6 of the time in under 30 seconds. Barring a major advance in machine vision, we expect computers will have no better than a 1 54,000 chance of solving it. Asirra’s image database is provided by a novel, mutually beneficial partnership with Petfinder.com. In exchange for the use of their three million images, we display an “adopt me” link beneath each one, promoting Petfinder’s primary mission of finding homes for homeless animals. We describe the design of Asirra, discuss threats to its security, and report early deployment experiences. We also describe two novel algorithms for amplifying the skill gap between humans and computers that can be used on many existing CAPTCHAs.",
""
]
}
|
1310.1137
|
2087133238
|
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle --- unlike a CAPTCHA [44]. Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image --- a much weaker usability assumption than previous password systems based on Inkblots which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme.
|
The only HOSP construction proposed in @cite_23 involved stuffing a hard drive with unsolved CAPTCHAs. The problem of finding a HOSP construction that does not rely on a dataset of unsolved CAPTCHAs was left as an open problem @cite_23 . Several other candidate HOSP constructions have been experimentally evaluated in subsequent work @cite_34 (they are called POSHs in the second paper), but the usability results for every scheme that did not rely on a large dataset of unsolved CAPTCHAs were underwhelming.
|
{
"cite_N": [
"@cite_34",
"@cite_23"
],
"mid": [
"2142442406",
"1823595194"
],
"abstract": [
"A puzzle only solvable by humans, or POSH, is a prompt or question with three important properties: it can be generated by a computer, it can be answered consistently by a human, and a human answer cannot be efficiently predicted by a computer. In fact, unlike a CAPTCHA, a POSH does not necessarily have to be verifiable by a computer at all. One application of POSHes is a scheme proposed by that limits off-line dictionary attacks against password-protected local storage, without the use of any secure hardware or secret storage. We explore the area of POSHes, implement several candidate POSHes and have users solve them, to evaluate their effectiveness. Given these data, we then implement the above scheme as an extension to the Mozilla Firefox web browser, where it is used to protect user certificates and saved passwords. In the course of doing so, we also define certain aspects of the threat model for our implementation (and the scheme) more precisely.",
"We address the issue of encrypting data in local storage using a key that is derived from the user's password. The typical solution in use today is to derive the key from the password using a cryptographic hash function. This solution provides relatively weak protection, since an attacker that gets hold of the encrypted data can mount an off-line dictionary attack on the user's password, thereby recovering the key and decrypting the stored data. We propose an approach for limiting off-line dictionary attacks in this setting without relying on secret storage or secure hardware. In our proposal, the process of deriving a key from the password requires the user to solve a puzzle that is presumed to be solvable only by humans (e.g, a CAPTCHA). We describe a simple protocol using this approach: many different puzzles are stored on the disk, the user's password is used to specify which of them need to be solved, and the encryption key is derived from the password and the solutions of the specified puzzles. Completely specifying and analyzing this simple protocol, however, raises a host of modeling and technical issues, such as new properties of human-solvable puzzles and some seemingly hard combinatorial problems. Here we analyze this protocol in some interesting special cases."
]
}
|
1310.1137
|
2087133238
|
We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle --- unlike a CAPTCHA [44]. Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image --- a much weaker usability assumption than previous password systems based on Inkblots which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme.
|
Users are often advised (or required) to follow strict guidelines when selecting their password (e.g., use a mix of upper- and lower-case letters, include numbers, and change the password frequently) @cite_32 . However, empirical studies show that users are often frustrated by restrictive policies and commonly forget their passwords @cite_6 @cite_7 @cite_8 . In fact, the resulting passwords are sometimes more vulnerable to an offline attack @cite_6 @cite_7 . Furthermore, the cost of these restrictive policies can be quite high. For example, a Gartner case study @cite_26 estimated that it cost over $17 per password-reset call. Florencio and Herley @cite_47 studied the economic factors that institutions consider before adopting password policies and found that they often value usability over security.
|
{
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_47"
],
"mid": [
"",
"2184919743",
"2171920515",
"",
"2113266120",
"2119545418"
],
"abstract": [
"",
"AbstractSince passwords are one of the main mechanisms used to protect data and information, it is important to ensure that passwords are managed correctly and that those factors which will have a significant impact on password management are identified and prioritized. Therefore, in order for an information and communication technology (ICT) overall security program to be successful, a security awareness program or component must be included. The aim of this paper is to perform an exploratory study with the objective of introducing certain fundamental causes that may impact password management. Empirical results, followed by a survey as well as the application of several management science techniques are presented.",
"We report the results of a large scale study of password use andpassword re-use habits. The study involved half a million users over athree month period. A client component on users' machines recorded a variety of password strength, usage and frequency metrics. This allows us to measure or estimate such quantities as the average number of passwords and average number of accounts each user has, how many passwords she types per day, how often passwords are shared among sites, and how often they are forgotten. We get extremely detailed data on password strength, the types and lengths of passwords chosen, and how they vary by site. The data is the first large scale study of its kind, and yields numerous other insights into the role the passwords play in users' online experience.",
"",
"Text-based passwords are the most common mechanism for authenticating humans to computer systems. To prevent users from picking passwords that are too easy for an adversary to guess, system administrators adopt password-composition policies (e.g., requiring passwords to contain symbols and numbers). Unfortunately, little is known about the relationship between password-composition policies and the strength of the resulting passwords, or about the behavior of users (e.g., writing down passwords) in response to different policies. We present a large-scale study that investigates password strength, user behavior, and user sentiment across four password-composition policies. We characterize the predictability of passwords by calculating their entropy, and find that a number of commonly held beliefs about password composition and strength are inaccurate. We correlate our results with user behavior and sentiment to produce several recommendations for password-composition policies that result in strong passwords without unduly burdening users.",
"We examine the password policies of 75 different websites. Our goal is understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength. We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement."
]
}
|
1310.0894
|
2949800700
|
We present techniques to characterize which data is important to a recommender system and which is not. Important data is data that contributes most to the accuracy of the recommendation algorithm, while less important data contributes less to the accuracy or even decreases it. Characterizing the importance of data has two potential direct benefits: (1) increased privacy and (2) reduced data management costs, including storage. For privacy, we enable increased recommendation accuracy for comparable privacy levels using existing data obfuscation techniques. For storage, our results indicate that we can achieve large reductions in recommendation data and yet maintain recommendation accuracy. Our main technique is called differential data analysis. The name is inspired by other sorts of differential analysis, such as differential power analysis and differential cryptanalysis, where insight comes through analysis of slightly differing inputs. In differential data analysis we chunk the data and compare results in the presence or absence of each chunk. We present results applying differential data analysis to two datasets and three different kinds of attributes. The first attribute is called user hardship. This is a novel attribute, particularly relevant to location datasets, that indicates how burdensome a data point was to achieve. The second and third attributes are more standard: timestamp and user rating. For user rating, we confirm previous work concerning the increased importance to the recommender of data corresponding to high and low user ratings.
|
A body of work on location privacy concentrates on localizing the user at certain points in time, e.g., see @cite_13 for a survey. The data generally consists of timestamped location checkins, and the goal is to identify or link these points. For our problem of privacy-preserving location recommendation, we consider the location checkins to be sporadic (even without a timestamp), so the privacy concerns are more about each location in isolation. However, for both this body of work and ours, the distance metric on location checkins is of central importance. For this body of work, distance is an integral part of obfuscation and inference algorithms. For us, distance is a critical ingredient for evaluating the relative importance of data points to the recommender system.
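For illustration only, a minimal Python sketch of the chunk-and-compare idea behind differential data analysis; the `train_and_evaluate` callable and the round-robin chunking are hypothetical stand-ins, not the authors' pipeline.

```python
# Sketch of differential data analysis: score each chunk by the accuracy change
# observed when that chunk is removed from the training data.
# `train_and_evaluate` is a hypothetical stand-in for the recommender pipeline.

def chunk(data, n_chunks):
    """Split the data points into n_chunks roughly equal lists (round-robin)."""
    return [data[i::n_chunks] for i in range(n_chunks)]

def differential_importance(data, n_chunks, train_and_evaluate):
    baseline = train_and_evaluate(data)                  # accuracy with all data
    chunks = chunk(data, n_chunks)
    scores = []
    for i in range(n_chunks):
        held_out = [x for j, ch in enumerate(chunks) if j != i for x in ch]
        scores.append((i, baseline - train_and_evaluate(held_out)))
    # Large positive score: chunk is important; negative: chunk hurts accuracy.
    return sorted(scores, key=lambda s: -s[1])
```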
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2170166043"
],
"abstract": [
"This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of peoples' attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats."
]
}
|
1310.0242
|
2951997523
|
Nowadays a vast and growing body of open source software (OSS) project data is publicly available on the internet. Despite this public body of project data, the field of software analytics has not yet settled on a solid quantitative base for basic properties such as code size, growth, team size, activity, and project failure. What is missing is a quantification of the base rates of such properties, where other fields (such as medicine) commonly rely on base rates for decision making and the evaluation of experimental results. The lack of knowledge in this area impairs both research activities in the field of software analytics and decision making on software projects in general. This paper contributes initial results of our research towards obtaining base rates using the data available at Ohloh (a large-scale index of OSS projects). Zooming in on the venerable 'lines of code' metric for code size and growth, we present and discuss summary statistics and identify further research challenges.
|
Software Analytics is a term recently introduced by Zhang et al. @cite_0 to label research aimed at supporting decision making in software. Our work can be seen as an instance of Software Analytics. Closely related work has been done by Herraiz, who studied the statistical properties of data available on SourceForge and in the FreeBSD package base. We see our work as a follow-up that extends the scope and diversity: by studying Ohloh, we use a larger and more diverse data source (one that does not primarily focus on C).
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2122772101"
],
"abstract": [
"Software analytics is to enable software practitioners to perform data exploration and analysis in order to obtain insightful and actionable information for data-driven tasks around software and services. In this position paper, we advocate that when applying analytic technologies in practice of software analytics, one should (1) incorporate a broad spectrum of domain knowledge and expertise, e.g., management, machine learning, large-scale data processing and computing, and information visualization; and (2) investigate how practitioners take actions on the produced information, and provide effective support for such information-based action taking. Our position is based on our experiences of successful technology transfer on software analytics at Microsoft Research Asia."
]
}
|
1310.0005
|
2950075274
|
This paper proposes a new measure of node centrality in social networks, the Harmonic Influence Centrality, which emerges naturally in the study of social influence over networks. Using an intuitive analogy between social and electrical networks, we introduce a distributed message passing algorithm to compute the Harmonic Influence Centrality of each node. Although its design is based on theoretical results which assume the network to have no cycle, the algorithm can also be successfully applied on general graphs.
|
In social science and network science there is a large literature on defining and computing centrality measures @cite_5 . Among the most popular definitions, we mention degree centrality, node betweenness, information centrality @cite_13 , and Bonacich centrality @cite_1 , which is related to the well-studied Google PageRank algorithm. These notions have proven useful in a range of applications, but none of them is universally the appropriate concept. Our interest in opinion dynamics thus motivates us to define a centrality measure tailored to this setting. The problem of computing the HIC has previously been solved, via a centralized solution, in @cite_3 for a special case of the opinion dynamics model considered here.
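To make the classical measures above concrete, a small illustrative sketch using networkx; the Harmonic Influence Centrality itself is not part of networkx and is not shown here, and the example graph is arbitrary.

```python
# Illustrative comparison of classical centrality measures on a small graph.
import networkx as nx

G = nx.karate_club_graph()

degree = nx.degree_centrality(G)             # fraction of nodes each node touches
betweenness = nx.betweenness_centrality(G)   # shortest-path based
pagerank = nx.pagerank(G, alpha=0.85)        # related to Bonacich/eigenvector ideas

top = lambda d: sorted(d, key=d.get, reverse=True)[:5]
print("degree     :", top(degree))
print("betweenness:", top(betweenness))
print("pagerank   :", top(pagerank))
```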
|
{
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_13",
"@cite_3"
],
"mid": [
"2087194317",
"2053507997",
"1986310535",
"61072750"
],
"abstract": [
"Although network centrality is generally assumed to produce power, recent research shows that this is not the case in exchange networks. This paper proposes a generalization of the concept of centrality that accounts for both the usual positive relationship between power and centrality and 's recent exceptional results.",
"Three measures of actors' network centrality are derived from an elementary rocess model of social influence. The measures are closely related to, and cast new light on, widely used measures of actors' centrality; for example, the essential social organization of status that has been assumed by Hubbell, Bonacich, Coleman, and Burt appears as a deducible outcome of this social influence process. Unlike previous measures, which have been viewed as competing alternatives, the present measures are complementary and, in their juxtaposition, provide for a rich description of social structure. The complementary indicates a degree of theoretical unification in the work on network centrality that was heretofore unsuspected.",
"Abstract A new model of centrality is proposed for networks. The centrality measure is based on the “information” contained in all possible paths between pairs of points. The method does not require path enumeration and is not limited to the shortest paths or geodesies. We apply this measure to two examples: a network of homosexual men diagnosed with AIDS, and observations on a colony of baboons. Comparisons are made with “betweenness” and “closeness” centrality measures. The processes by which structural changes in networks occur over time are also discussed.",
"We study discrete opinion dynamics in a social network with \"stubborn agents\" who influence others but do not change their opinions. We generalize the classical voter model by introducing nodes (stubborn agents) that have a fixed state. We show that the presence of stubborn agents with opposing opinions precludes convergence to consensus; instead, opinions converge in distribution with disagreement and fluctuations. In addition to the first moment of this distribution typically studied in the literature, we study the behavior of the second moment in terms of network properties and the opinions and locations of stubborn agents. We also study the problem of \"optimal placement of stubborn agents\" where the location of a fixed number of stubborn agents is chosen to have the maximum impact on the long-run expected opinions of agents."
]
}
|
1310.0005
|
2950075274
|
This paper proposes a new measure of node centrality in social networks, the Harmonic Influence Centrality, which emerges naturally in the study of social influence over networks. Using an intuitive analogy between social and electrical networks, we introduce a distributed message passing algorithm to compute the Harmonic Influence Centrality of each node. Although its design is based on theoretical results which assume the network to have no cycle, the algorithm can also be successfully applied on general graphs.
|
In contrast to centralized algorithms, interest in distributed algorithms for computing centrality measures has arisen more recently. In @cite_8 , a randomized algorithm is used to compute PageRank centrality. In @cite_12 , distributed algorithms are designed to compute node (and edge) betweenness on trees, but they are not suitable for general graphs.
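A rough sketch of the local-update flavor that makes such distributed schemes possible: a synchronous PageRank iteration in which each node uses only the values and out-degrees of its in-neighbors. The randomized, asynchronous aspects of the scheme in @cite_8 are not modeled here.

```python
# Synchronous PageRank iteration: each node only needs the current values and
# out-degrees of its in-neighbors, which is what distributed variants exploit.
def pagerank_local(out_links, alpha=0.85, iters=100):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    in_links = {v: [u for u in nodes if v in out_links[u]] for v in nodes}
    for _ in range(iters):
        rank = {
            v: (1 - alpha) / n
               + alpha * sum(rank[u] / len(out_links[u]) for u in in_links[v])
            for v in nodes
        }
    return rank

print(pagerank_local({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```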
|
{
"cite_N": [
"@cite_12",
"@cite_8"
],
"mid": [
"2060973893",
"2106101949"
],
"abstract": [
"Knowing how important a node or an edge is, within a network, can be very valuable. In the area of complex networks, a variety of centrality measures, which assign to each node or edge a number representing its importance, have been proposed. In this paper, we consider two such measures, namely, node and edge betweenness that characterize how often a node or edge lies on the shortest paths between all pairs of nodes. For each measure, we use a dynamical systems approach to develop continuous- and discrete-time distributed algorithms, which enable every node in an undirected and unweighted tree graph to compute its own measure with only local interaction and without any centralized coordination. We show that the algorithms are simple and scalable, with the continuous-time version being unconditionally exponentially convergent, and the discrete-time version unconditionally exhibiting a deadbeat response. Moreover, we show that they require minimal node memories to execute, bypass entirely the need to construct shortest paths, and can handle time-varying topologies.",
"In the search engine of Google, the PageRank algorithm plays a crucial role in ranking the search results. The algorithm quantifies the importance of each web page based on the link structure of the web. We first provide an overview of the original problem setup. Then, we propose several distributed randomized schemes for the computation of the PageRank, where the pages can locally update their values by communicating to those connected by links. The main objective of the paper is to show that these schemes asymptotically converge in the mean-square sense to the true PageRank values. A detailed discussion on the close relations to the multi-agent consensus problems is also given."
]
}
|
1310.0234
|
2951847103
|
A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost low-power remote radio heads (RRHs), producing a green and low-cost infrastructure. However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near- optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted @math -norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption.
|
A main design tool applied in this paper is optimization with group-sparsity-inducing norms. Following the recent theoretical breakthroughs in compressed sensing @cite_11 @cite_20 , sparsity patterns have been exploited for more efficient system design in various signal processing and communications applications, e.g., pilot-aided sparse channel estimation @cite_1 . Sparsity-inducing norms have been widely applied in high-dimensional statistics, signal processing, and machine learning over the last decade @cite_31 . The @math -norm regularization has been successfully applied in compressed sensing @cite_11 @cite_20 . More recently, mixed @math -norms have been widely investigated for the case where the variables forming a group are selected or removed simultaneously; the mixed @math -norm @cite_27 and the mixed @math -norm @cite_13 are two commonly used choices for inducing group sparsity, owing to their computational and analytical convenience.
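As a concrete illustration, a short numpy sketch of the mixed l1/l2 group norm (a weighted sum of per-group l2 norms); the grouping and weights are arbitrary and only meant to show that the penalty is additive across groups, which is what drives entire groups (e.g., all beamforming coefficients of one RRH) to zero when it is minimized together with a data-fit term.

```python
# Mixed l1/l2 norm: sum over groups of the l2 norm of each group's coefficients.
import numpy as np

def mixed_l1_l2(w, groups, weights=None):
    """w: 1-D coefficient vector; groups: list of index arrays; weights: per-group."""
    weights = weights if weights is not None else np.ones(len(groups))
    return sum(beta * np.linalg.norm(w[g]) for beta, g in zip(weights, groups))

w = np.array([0.0, 0.0, 0.0, 1.5, -2.0, 0.3])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(mixed_l1_l2(w, groups))   # only the second (nonzero) group contributes
```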
|
{
"cite_N": [
"@cite_1",
"@cite_27",
"@cite_31",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2168625863",
"2138019504",
"2952139899",
"1480312878",
"",
""
],
"abstract": [
"Compressive sensing is a topic that has recently gained much attention in the applied mathematics and signal processing communities. It has been applied in various areas, such as imaging, radar, speech recognition, and data acquisition. In communications, compressive sensing is largely accepted for sparse channel estimation and its variants. In this article we highlight the fundamental concepts of compressive sensing and give an overview of its application to pilot aided channel estimation. We point out that a popular assumption - that multipath channels are sparse in their equivalent baseband representation - has pitfalls. There are over-complete dictionaries that lead to much sparser channel representations and better estimation performance. As a concrete example, we detail the application of compressive sensing to multicarrier underwater acoustic communications, where the channel features sparse arrivals, each characterized by its distinct delay and Doppler scale factor. To work with practical systems, several modifications need to be made to the compressive sensing framework as the channel estimation error varies with how detailed the channel is modeled, and how data and pilot symbols are mixed in the signal design.",
"Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.",
"Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted @math -penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.",
"Given a collection of r ≥ 2 linear regression problems in p dimensions, suppose that the regression coefficients share partially common supports of size at most s. This set-up suggests the use of l1 l∞-regularized regression for joint estimation of the p×r matrix of regression coefficients. We analyze the high-dimensional scaling of l1 l∞-regularized quadratic programming, considering both consistency rates in l∞-norm, and how the minimal sample size n required for consistent variable selection scales with model dimension, sparsity, and overlap between the supports. We first establish bounds on the l∞-error as well sufficient conditions for exact variable selection for fixed design matrices, as well as for designs drawn randomly from general Gaussian distributions. Specializing to the case r = 2 linear regression problems with standard Gaussian designs whose supports overlap in a fraction α ∈ [0,1] of their entries, we prove that l1 l∞-regularized method undergoes a phase transition characterized by the rescaled sample size θ1,∞(n, p, s, α) = n (4 - 3 α) s log(p-(2- α) s) . An implication is that the use of l1 l∞-regularization yields improved statistical efficiency if the overlap parameter is large enough ( α >; 2 3), but has worse statistical efficiency than a naive Lasso-based approach for moderate to small overlap (α <; 2 3 ). Empirical simulations illustrate the close agreement between theory and actual behavior in practice. These results show that caution must be exercised in applying l1 l∞ block regularization: if the data does not match its structure very closely, it can impair statistical performance relative to computationally less expensive schemes.",
"",
""
]
}
|
1310.0512
|
2950554872
|
In standard clustering problems, data points are represented by vectors, and by stacking them together, one forms a data matrix with row or column cluster structure. In this paper, we consider a class of binary matrices, arising in many applications, which exhibit both row and column cluster structure, and our goal is to exactly recover the underlying row and column clusters by observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. Then, we propose three algorithms with different running time and compare the number of observations needed by them for successful cluster recovery. Our analytical results show smooth time-data trade-offs: one can gradually reduce the computational complexity when increasingly more observations are available.
|
Much of the prior work on graph clustering, as surveyed in @cite_30 , focuses on graphs with a single node type, where nodes in the same cluster are more likely to have edges among them. A low-rank plus sparse matrix decomposition approach is proved to exactly recover the clusters with the best known performance guarantee in @cite_13 . The same approach is used to recover the clusters from a partially observed graph in @cite_20 . A spectral method for exact cluster recovery is proposed and analyzed in @cite_6 with the number of clusters held fixed. More recently, @cite_25 proved an upper bound on the number of nodes "mis-clustered" by a spectral clustering algorithm in the high-dimensional setting with a growing number of clusters.
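For intuition, a generic spectral-clustering sketch on a planted-partition adjacency matrix (embed nodes by the top eigenvectors, then run k-means); this is a simplified illustration, not the specific algorithms analyzed in @cite_6 or @cite_25.

```python
# Generic spectral clustering of an undirected graph given its adjacency matrix.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(A, k):
    # eigh returns eigenvalues in ascending order for symmetric matrices
    vals, vecs = np.linalg.eigh(A.astype(float))
    embedding = vecs[:, -k:]          # eigenvectors of the k largest eigenvalues
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

# Two planted clusters: dense intra-cluster edges, sparse inter-cluster edges.
rng = np.random.default_rng(0)
B = (rng.random((20, 20)) < 0.7).astype(int)
C = (rng.random((20, 20)) < 0.05).astype(int)
A = np.block([[B, C], [C.T, B]])
A = np.triu(A, 1); A = A + A.T        # symmetrize, zero diagonal
print(spectral_clusters(A, 2))
```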
|
{
"cite_N": [
"@cite_30",
"@cite_6",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2127048411",
"1605711022",
"2111040408",
"1967184357",
"2952000622"
],
"abstract": [
"The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.",
"Problems such as bisection, graph coloring, and clique are generally believed hard in the worst case. However, they can be solved if the input data is drawn randomly from a distribution over graphs containing acceptable solutions. In this paper we show that a simple spectral algorithm can solve all three problems above in the average case, as well as a more general problem of partitioning graphs based on edge density. In nearly all cases our approach meets or exceeds previous parameters, while introducing substantial generality. We apply spectral techniques, using foremost the observation that in all of these problems, the expected adjacency matrix is a low rank matrix wherein the structure of the solution is evident.",
"We develop a new algorithm to cluster sparse unweighted graphs - i.e. partition the nodes into disjoint clusters so that there is higher density within clusters, and low across clusters. By sparsity we mean the setting where both the in-cluster and across cluster edge densities are very small, possibly vanishing in the size of the graph. Sparsity makes the problem noisier, and hence more difficult to solve. Any clustering involves a tradeoff between minimizing two kinds of errors: missing edges within clusters and present edges across clusters. Our insight is that in the sparse case, these must be penalized differently. We analyze our algorithm's performance on the natural, classical and widely studied \"planted partition\" model (also called the stochastic block model); we show that our algorithm can cluster sparser graphs, and with smaller clusters, than all previous methods. This is seen empirically as well.",
"Networks or graphs can easily represent a diverse set of data sources that are characterized by interacting units or actors. Social ne tworks, representing people who communicate with each other, are one example. Communities or clusters of highly connected actors form an essential feature in the structure of several empirical networks. Spectral clustering is a popular and computationally feasi ble method to discover these communities. The Stochastic Block Model (, 1983) is a social network model with well defined communities; each node is a member of one community. For a network generated from the Stochastic Block Model, we bound the number of nodes \"misclus- tered\" by spectral clustering. The asymptotic results in th is paper are the first clustering results that allow the number of clusters in the model to grow with the number of nodes, hence the name high-dimensional. In order to study spectral clustering under the Stochastic Block Model, we first show that under the more general latent space model, the eigenvectors of the normalized graph Laplacian asymptotically converge to the eigenvectors of a \"population\" normal- ized graph Laplacian. Aside from the implication for spectral clustering, this provides insight into a graph visualization technique. Our method of studying the eigenvectors of random matrices is original. AMS 2000 subject classifications: Primary 62H30, 62H25; secondary 60B20.",
"This paper considers the problem of clustering a partially observed unweighted graph---i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense (observed) connectivity within clusters, and sparse across clusters. We take a novel yet natural approach to this problem, by focusing on finding the clustering that minimizes the number of \"disagreements\"---i.e., the sum of the number of (observed) missing edges within clusters, and (observed) present edges across clusters. Our algorithm uses convex optimization; its basis is a reduction of disagreement minimization to the problem of recovering an (unknown) low-rank matrix and an (unknown) sparse matrix from their partially observed sum. We evaluate the performance of our algorithm on the classical Planted Partition Stochastic Block Model. Our main theorem provides sufficient conditions for the success of our algorithm as a function of the minimum cluster size, edge density and observation probability; in particular, the results characterize the tradeoff between the observation probability and the edge density gap. When there are a constant number of clusters of equal size, our results are optimal up to logarithmic factors."
]
}
|
1310.0512
|
2950554872
|
In standard clustering problems, data points are represented by vectors, and by stacking them together, one forms a data matrix with row or column cluster structure. In this paper, we consider a class of binary matrices, arising in many applications, which exhibit both row and column cluster structure, and our goal is to exactly recover the underlying row and column clusters by observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. Then, we propose three algorithms with different running time and compare the number of observations needed by them for successful cluster recovery. Our analytical results show smooth time-data trade-offs: one can gradually reduce the computational complexity when increasingly more observations are available.
|
Biclustering @cite_8 @cite_9 @cite_14 @cite_1 tries to find sub-matrices (which may overlap) with particular patterns in a data matrix. Many of the proposed algorithms are based on heuristic searches without provable performance guarantees. Our cluster recovery problem can be viewed as a special case where the data matrix consists of non-overlapping sub-matrices with constant binary entries, and our paper provides a thorough study of this special biclustering problem. Recently, a line of work has studied another special case of the biclustering problem, which tries to detect a single small submatrix with elevated mean in a large, fully observed noisy matrix @cite_17 . Interesting statistical and computational trade-offs are summarized in @cite_0 .
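A minimal sketch of the row/column average thresholding baseline mentioned in @cite_17 for locating a single submatrix with elevated mean; the planted signal and the assumption that the bicluster size k is known are illustrative simplifications.

```python
# Row/column average ranking for locating a single elevated submatrix in a
# noisy data matrix; a simple baseline, not the scan-statistic procedure.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(100, 100))
X[:20, :20] += 2.0                       # planted bicluster with elevated mean

row_means, col_means = X.mean(axis=1), X.mean(axis=0)
k = 20                                   # assumed (known) bicluster size
rows = np.sort(np.argsort(row_means)[-k:])
cols = np.sort(np.argsort(col_means)[-k:])
print("candidate rows:", rows)
print("candidate cols:", cols)
```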
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_17"
],
"mid": [
"2144544802",
"2036328877",
"1493217831",
"",
"",
"2145799156"
],
"abstract": [
"A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix has been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred in the literature as coclustering and direct clustering, among others names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering, and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications.",
"Abstract Clustering algorithms are now in widespread use for sorting heterogeneous data into homogeneous blocks. If the data consist of a number of variables taking values over a number of cases, these algorithms may be used either to construct clusters of variables (using, say, correlation as a measure of distance between variables) or clusters of cases. This article presents a model, and a technique, for clustering cases and variables simultaneously. The principal advantage in this approach is the direct interpretation of the clusters on the data.",
"An efficient node-deletion algorithm is introduced to find submatrices in expression data that have low mean squared residue scores and it is shown to perform well in finding co-regulation patterns in yeast and human. This introduces \"biclustering’, or simultaneous clustering of both genes and conditions, to knowledge discovery from expression data. This approach overcomes some problems associated with traditional clustering methods, by allowing automatic discovery of similarity based on a subset of attributes, simultaneous clustering of genes and conditions, and overlapped grouping that provides a better representation for genes with multiple functions or regulated by many factors.",
"",
"",
"We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries. This problem of identifying groups from a collection of bipartite variables such as proteins and drugs, biological species and gene sequences, malware and signatures, etc is commonly referred to as biclustering or co-clustering. Despite its great practical relevance, and although several ad-hoc methods are available for biclustering, theoretical analysis of the problem is largely non-existent. The problem we consider is also closely related to structured multiple hypothesis testing, an area of statistics that has recently witnessed a flurry of activity. We make the following contributions 1. We prove lower bounds on the minimum signal strength needed for successful recovery of a bicluster as a function of the noise variance, size of the matrix and bicluster of interest. 2. We show that a combinatorial procedure based on the scan statistic achieves this optimal limit. 3. We characterize the SNR required by several computationally tractable procedures for biclustering including element-wise thresholding, column row average thresholding and a convex relaxation approach to sparse singular vector decomposition."
]
}
|
1310.0512
|
2950554872
|
In standard clustering problems, data points are represented by vectors, and by stacking them together, one forms a data matrix with row or column cluster structure. In this paper, we consider a class of binary matrices, arising in many applications, which exhibit both row and column cluster structure, and our goal is to exactly recover the underlying row and column clusters by observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. Then, we propose three algorithms with different running time and compare the number of observations needed by them for successful cluster recovery. Our analytical results show smooth time-data trade-offs: one can gradually reduce the computational complexity when increasingly more observations are available.
|
Under our model, the underlying true data matrix is a specific type of low-rank matrix. If we recover the true data matrix, we immediately get the user (or movie) clusters by assigning identical rows (or columns) of the matrix to the same cluster. In the noiseless setting with no flipping, the nuclear norm minimization approach @cite_19 @cite_7 @cite_35 can be directly applied to recover the true data matrix and further recover the row and column clusters. Alternating minimization is another popular and empirically successful approach for low-rank matrix completion @cite_3 . However, it is harder to analyze, and its performance guarantees are weaker than those of nuclear norm minimization @cite_29 . In the low-noise setting, with the flipping probability restricted to a small constant, the low-rank plus sparse matrix decomposition approach @cite_24 @cite_23 @cite_5 can be applied to exactly recover the data matrix and further recover the row and column clusters.
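A minimal sketch of noiseless matrix completion by nuclear norm minimization using cvxpy; the cluster-structured rank-2 matrix and the 50% sampling rate are synthetic, and this generic completion step is not the set of algorithms proposed in the paper.

```python
# Noiseless matrix completion: among all matrices agreeing with the observed
# entries, pick the one with minimal nuclear norm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
# Rank-2 cluster-structured matrix: 2 row clusters x 2 column clusters.
M = np.kron(np.array([[1, 0], [0, 1]]), np.ones((10, 10)))
mask = rng.random(M.shape) < 0.5                  # observe ~50% of the entries

X = cp.Variable(M.shape)
constraints = [X[i, j] == M[i, j] for i, j in zip(*np.nonzero(mask))]
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints)
problem.solve()
print("max abs error on unobserved entries:",
      np.abs(X.value - M)[~mask].max())
```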
|
{
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_29",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_5"
],
"mid": [
"2120872934",
"",
"2952489973",
"2054141820",
"2003753589",
"2134332047",
"2145962650",
"2158121106"
],
"abstract": [
"This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candes and Recht (2009), Candes and Tao (2009), and (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.",
"",
"Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. @math ; the algorithm then alternates between finding the best @math and the best @math . Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation to splitting the specified matrix into its components by minimizing a linear combination of the l1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pat- tern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.",
"This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).",
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both 1) erasures, most entries are not observed, and 2) errors, values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when minimizing nuclear norm plus l1 norm succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. By specializing this one single result in different ways, we recover (up to poly-log factors) as corollaries all the existing results in exact matrix completion, and exact sparse and low-rank matrix decomposition. Our unified result also provides the first guarantees for 1) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and 2) deterministic matrix completion."
]
}
|
1309.7960
|
2208756836
|
Using results on the topology of moduli space of polygons [Jaggi, 92; Kapovich and Millson, 94], it can be shown that for a planar robot arm with @math segments there are some values of the base-length, @math , at which the configuration space of the constrained arm (arm with its end effector fixed) has two disconnected components, while at other values the constrained configuration space has one connected component. We first review some of these known results. Then the main design problem addressed in this paper is the construction of pairs of continuous inverse kinematics for arbitrary robot arms, with the property that the two inverse kinematics agree when the constrained configuration space has a single connected component, but they give distinct configurations (one in each connected component) when the configuration space of the constrained arm has two components. This design is made possible by a fundamental theoretical contribution in this paper -- a classification of configuration spaces of robot arms such that the type of path that the system (robot arm) takes through certain critical values of the forward kinematics function is completely determined by the class to which the configuration space of the arm belongs. This classification result makes the aforesaid design problem tractable, making it sufficient to design a pair of inverse kinematics for each class of configuration spaces (three of them in total). We discuss the motivation for this work, which comes from a more extensive problem of motion planning for the end effector of a robot arm requiring us to continuously sample one configuration from each connected component of the constrained configuration spaces. We demonstrate the low complexity of the presented algorithm through a Javascript + HTML5 based implementation available at this http URL
|
[1.] In the literature, for planar and spatial arms with a few segments, this has often been achieved by explicit trigonometric formulae developed from the geometry of the specific arm @cite_10 @cite_18 . However, such formulations are usually limited to the specific arm that the IK is designed for, become increasingly complex as the number of segments grows, and often suffer from isolated singularities and/or discontinuities. For more complex arms, numerical gradient-descent-type algorithms have traditionally been used @cite_3 @cite_7 @cite_21 @cite_9 . However, since the problem is very often nonlinear and non-convex, guarantees of completeness or even continuity are difficult to achieve. Furthermore, numerical techniques are often computationally expensive. A mixed numerical and analytic technique has been used in @cite_13 for computing inverse kinematics in the context of path planning for the end effector of an arm with @math segments and point obstacles in the environment. A closed-form solution to the inverse kinematics problem has recently been hinted at in @cite_11 using a triangulation approach; however, since the problem addressed there involves a spatial arm, some numerical techniques were also adopted.
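To make the explicit trigonometric formulae concrete in the simplest case, a textbook closed-form IK for a two-link planar arm via the law of cosines; the link lengths and target are illustrative, and this is not the recursive construction developed in this paper.

```python
# Closed-form IK for a 2-link planar arm via the law of cosines.
# Returns the "elbow-up" and "elbow-down" solutions when the target is reachable.
import math

def two_link_ik(x, y, l1, l2):
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)    # cos(theta2)
    if abs(c2) > 1.0:
        return []                                       # target out of reach
    solutions = []
    for s2 in (math.sqrt(1.0 - c2 * c2), -math.sqrt(1.0 - c2 * c2)):
        theta2 = math.atan2(s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

print(two_link_ik(1.0, 1.0, 1.0, 1.0))   # two mirror-image configurations
```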
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_10",
"@cite_11"
],
"mid": [
"2113407491",
"1564897360",
"2081897478",
"2146097584",
"2159806973",
"2139691007",
"1495118766",
"2159911618"
],
"abstract": [
"In this paper we develop a systematic topological approach to motion planning for a planar 2-R manipulator with point obstacles. By considering components in the free space for the second joint as the first joint varies, we build a two-dimensional array representing the cells of the free space and an associated graph representing the boundaries of those cells. Using this graph, we derive a closed formula for the number of components of the free space. At the same time we solve the motion existence problem, namely, when are two arbitrary configurations in the same component? If so, we develop two explicit algorithms for constructing the path - a middle path method and a linear interpolation method. These algorithms give complete solutions to the path planning problem. Extensive examples are worked out which verify the correctness and efficiency of the resulting program. Then we briefly discuss how these methods generalize to a 3-R planar manipulator.",
"INTRODUCTION: Brief History. Multifingered Hands and Dextrous Manipulation. Outline of the Book. Bibliography. RIGID BODY MOTION: Rigid Body Transformations. Rotational Motion in R3. Rigid Motion in R3. Velocity of a Rigid Body. Wrenches and Reciprocal Screws. MANIPULATOR KINEMATICS: Introduction. Forward Kinematics. Inverse Kinematics. The Manipulator Jacobian. Redundant and Parallel Manipulators. ROBOT DYNAMICS AND CONTROL: Introduction. Lagrange's Equations. Dynamics of Open-Chain Manipulators. Lyapunov Stability Theory. Position Control and Trajectory Tracking. Control of Constrained Manipulators. MULTIFINGERED HAND KINEMATICS: Introduction to Grasping. Grasp Statics. Force-Closure. Grasp Planning. Grasp Constraints. Rolling Contact Kinematics. HAND DYNAMICS AND CONTROL: Lagrange's Equations with Constraints. Robot Hand Dynamics. Redundant and Nonmanipulable Robot Systems. Kinematics and Statics of Tendon Actuation. Control of Robot Hands. NONHOLONOMIC BEHAVIOR IN ROBOTIC SYSTEMS: Introduction. Controllability and Frobenius' Theorem. Examples of Nonholonomic Systems. Structure of Nonholonomic Systems. NONHOLONOMIC MOTION PLANNING: Introduction. Steering Model Control Systems Using Sinusoids. General Methods for Steering. Dynamic Finger Repositioning. FUTURE PROSPECTS: Robots in Hazardous Environments. Medical Applications for Multifingered Hands. Robots on a Small Scale: Microrobotics. APPENDICES: Lie Groups and Robot Kinematics. A Mathematica Package for Screw Calculus. Bibliography. Index Each chapter also includes a Summary, Bibliography, and Exercises",
"A necessary feature of robot systems is a capability to process robot paths in terms of Cartesian coordinates. However, the so-called articulated robot arms which represent the more advanced systems operate in terms of their revolute and sliding joint coordinates, and these do not lend themselves easily to translation into Cartesian equivalents. As a rule, the direct (joint-to-Cartesian space) coordinate transformation can be solved for in closed form. As for the inverse (Cartesian-to-joint) transformation, for some rather common arm designs practical solutions cannot be found in closed form. An iterative procedure for one such class of articulated robots is presented. The procedure produces an exact solution for the wrist tip position coordinates and an approximate solution for the wrist orientation coordinates. The convergence characteristics of the procedure are discussed.",
"A new method for computing numerical solutions to the inverse kinematics problem of robotic manipulators is developed. The method is based on a combination of two nonlinear programming techniques and the forward recursion formulas, with the joint limitations of the robot being handled implicitly as simple boundary constraints. This method is numerically stable since it converges to the correct answer with virtually any initial approximation, and it is not sensitive to the singular configuration of the manipulator. In addition, this method is computationally efficient and can be applied to serial manipulators having any number of degrees of freedom. >",
"The kinematic transformation between task space and joint configuration coordinates is nonlinear and configuration dependent. A solution to the inverse kinematics is a vector of joint configuration coordinates that corresponds to a set of task space coordinates. For a class of robots closed form solutions always exist, but constraints on joint displacements cannot be systematically incorporated in the process of obtaining a solution. An iterative solution is presented that is suitable for any class of robots having rotary or prismatic joints, with any arbitrary number of degrees of freedom, including both standard and kinematically redundant robots. The solution can be obtained subject to specified constraints and based on certain performance criteria. The solution is based on a new rapidly convergent constrained nonlinear optimization algorithm which uses a modified Newton-Raphson technique for solving a system nonlinear equations. The algorithm is illustrated using as an example a kinematically redundant robot.",
"Numerical solution of the inverse kinematics of robots using modified Newton-Raphson (MNR) and modified predictor-corrector (MPC) algorithms is discussed. These modified algorithms are highly reliable and stable. Both algorithms always find a solution if a physically realizable robot configuration exists. They are also capable of approaching singular configurations of the robot as E = C em, where E is the error between the theoretical and numerically computed singular joint values, the convergence criteria, and C and m are appropriate constants. It is also shown that the modified predictor-corector (MPC) algorithm is 5-15 times faster than the modified Newton-Raphson (MNR) algorithm.",
"\"Richard Paul is perhaps the world's leading authority on the science of robot manipulation. He has contributed to almost every aspect of the field. His impressive publication record includes important articles on the kinematics of robot arms, their dynamics, and their control. He has developed a succession of interesting ideas concerning representation, specifically the use of homogeneous matrices.... Paul's book is written in his usual clear style, and it contains numerous interesting examples.\"--Patrick H. Winston and Mike Brady, editors, The MIT Press Artificial Intelligence Series\"Robot Manipulators\" is firmly grounded on the theoretical principles of the subject and makes considerable use of vector and matrix methods in its development. It is the first full treatment to be published, and it is designed for graduate courses in robotics as well as for practicing engineers. Following an introduction, the book's ten chapters cover homogeneous transformations, defining transformation equations, solving transformation equations, differential transformation relationships, motion trajectories, dynamics, digital servo systems, force transformations, compliance, and manipulation languages.Paul writes that the impact of robot manipulators on the workplace and the economy over the coming decade could be profound: \"While currently available industrial robots will probably not have a major impact on manufacturing, a low-cost, mass-produced, sensor-controlled robot could have a revolutionary effect.... Such robots would represent the conclusion of the industrial revolution, replacing the type of labor required at its outset to perform the repetitive machine-linked tasks whose ideal performance is characterized by our conception of a robot, not a human. Based on current research work, laboratory demonstrations, and the general level of technology in this country, we believe that it is possible to achieve such a robot within the coming decade.\"",
"Inverse Kinematics is used in computer graphics and robotics to control the posture of an articulated body. We introduce a new method utilising the law of cosines, which quickly determines the joint angles of a kinematic chain when given a target. Our method incurs a lower computational and rotational cost than Cyclic-Coordinate Descent (CCD), yet is also guaranteed to find a solution if one is available. Additionally it moves the upper joints of any kinematic chain first, thus ensuring a closer simulation of natural movement than CCD, which tends to overemphasise the movement of joints closer to the end-eector of the kinematic chain."
]
}
|
1309.7960
|
2208756836
|
Using results on the topology of moduli space of polygons [Jaggi, 92; Kapovich and Millson, 94], it can be shown that for a planar robot arm with @math segments there are some values of the base-length, @math , at which the configuration space of the constrained arm (arm with its end effector fixed) has two disconnected components, while at other values the constrained configuration space has one connected component. We first review some of these known results. Then the main design problem addressed in this paper is the construction of pairs of continuous inverse kinematics for arbitrary robot arms, with the property that the two inverse kinematics agree when the constrained configuration space has a single connected component, but they give distinct configurations (one in each connected component) when the configuration space of the constrained arm has two components. This design is made possible by a fundamental theoretical contribution in this paper -- a classification of configuration spaces of robot arms such that the type of path that the system (robot arm) takes through certain critical values of the forward kinematics function is completely determined by the class to which the configuration space of the arm belongs. This classification result makes the aforesaid design problem tractable, making it sufficient to design a pair of inverse kinematics for each class of configuration spaces (three of them in total). We discuss the motivation for this work, which comes from a more extensive problem of motion planning for the end effector of a robot arm requiring us to continuously sample one configuration from each connected component of the constrained configuration spaces. We demonstrate the low complexity of the presented algorithm through a Javascript + HTML5 based implementation available at this http URL
|
The IK algorithm that we propose and use in this paper is an analytically computable one of @math complexity ( @math being the number of segments of the arm), akin to the triangulation approach of @cite_11 . However, our additional requirement of identifying and passing through certain critical points in the configuration space, as will be described in the next point, has necessitated a more careful and formal construction of the IK algorithm. In particular, our algorithm has a recursive or incremental flavor to it, wherein we construct the inverse kinematics by breaking up the arm into smaller components.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2159911618"
],
"abstract": [
"Inverse Kinematics is used in computer graphics and robotics to control the posture of an articulated body. We introduce a new method utilising the law of cosines, which quickly determines the joint angles of a kinematic chain when given a target. Our method incurs a lower computational and rotational cost than Cyclic-Coordinate Descent (CCD), yet is also guaranteed to find a solution if one is available. Additionally it moves the upper joints of any kinematic chain first, thus ensuring a closer simulation of natural movement than CCD, which tends to overemphasise the movement of joints closer to the end-eector of the kinematic chain."
]
}
|
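The law-of-cosines idea referenced in the record above (the triangulation-style IK of @cite_11, as opposed to iterative CCD) is easiest to see for a planar two-link chain. The sketch below is only a minimal illustration of that idea: the function name, the choice of one of the two mirror-image solutions, the rejection of the fully folded pose, and the clamping against rounding error are my own simplifications and are not taken from the cited papers, which handle chains with arbitrarily many segments.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic planar two-link IK based on the law of cosines.

    Returns (theta1, theta2) in radians for one of the two mirror-image
    solutions, or None if the target (x, y) cannot be reached.
    """
    d = math.hypot(x, y)  # distance from the base joint to the target
    # Reject unreachable targets; the folded pose d == 0 is also skipped
    # here to keep the sketch free of a division-by-zero special case.
    if d > l1 + l2 or d < abs(l1 - l2) or d == 0:
        return None
    # Interior elbow angle of the triangle with sides l1, l2, d,
    # clamped to [-1, 1] to absorb floating-point rounding.
    cos_gamma = (l1 ** 2 + l2 ** 2 - d ** 2) / (2 * l1 * l2)
    gamma = math.acos(max(-1.0, min(1.0, cos_gamma)))
    theta2 = math.pi - gamma  # elbow joint angle
    # Angle between link 1 and the base-to-target direction.
    cos_alpha = (l1 ** 2 + d ** 2 - l2 ** 2) / (2 * l1 * d)
    alpha = math.acos(max(-1.0, min(1.0, cos_alpha)))
    theta1 = math.atan2(y, x) - alpha  # shoulder joint angle
    return theta1, theta2

# Example: unit-length links reaching for the point (1.2, 0.8).
print(two_link_ik(1.2, 0.8, 1.0, 1.0))
```

A longer chain can reuse the same per-joint computation segment by segment, which is roughly why the related-work text above can describe such an IK as analytically computable in linear time, although the exact per-joint procedure of the cited work differs from this two-link reduction.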
1309.7341
|
2952209837
|
In collaborative agile ontology development projects, support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The Apache Maven approach has proven its success in distributed collaborative Software Engineering by its widespread adoption. The contribution of this paper is a new design artifact called OntoMaven. OntoMaven adopts the Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories.
|
There are many existing ontology engineering methodologies and ontology editors available. With its Maven-based approach for structuring the development phases into different goals that provide different functionalities during the development project's life cycle, OntoMaven supports, in particular, agile ontology development methods, such as RapidOWL @cite_6 and COLM @cite_12 , as well as development methods which are inherently based on modularization, such as aspect-oriented ontology development @cite_15 .
|
{
"cite_N": [
"@cite_15",
"@cite_12",
"@cite_6"
],
"mid": [
"2402632475",
"1583043953",
"2089256692"
],
"abstract": [
"In this paper, we describe our ongoing work on the application of the Aspect-Oriented Programming paradigm to the problem of ontology modularization driven by overlapping modularization requirements. We examine commonalities between ontology modules and software aspects and propose an approach to applying the latter to the problem of a priori construction of modular ontologies and a posteriori ontology modularization.",
"Corporate Semantic Web describes the application of semantic technologies within enterprises for better knowledge management or enhanced IT service management. But, well-known cost- and process-oriented problems of ontology engineering hinder the employment of ontologies as a flexible, scalable, and cost effective means for integrating data in small and mid-sized enterprises.We propose an innovative ontology lifecycle, examine existing tools towards the functional requirements of the lifecycle phases, and propose the vision of an architecture supporting them integratively.",
"Agile methodologies have recently gained growing success in many economic and technical spheres. This is due to the fact that flexibility, in particular fast and efficient reactions to changed prerequisites, is becoming increasingly important in the information society. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new, agile knowledge engineering methodology, called RapidOWL is proposed. This methodology is based on the idea of iterative refinement, annotation and structuring of a knowledge base. A central paradigm for the RapidOWL methodology is the concentration on smallest possible information chunks. The collaborative aspect comes into play, when those information chunks can be selectively added, removed, annotated with comments or ratings. Design rationales for the RapidOWL methodology are to be light-weight, easy-to-implement, and support of spatially distributed and highly collaborative scenarios."
]
}
|
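OntoMaven's transitive dependency management, mentioned in the abstract above, can be pictured in greatly simplified form as computing the transitive closure of owl:imports over a set of ontology artifacts. The sketch below assumes a hypothetical in-memory registry (`direct_imports`) standing in for remote Maven-style ontology repositories; the function name and the example IRIs are illustrative only and are not part of OntoMaven.

```python
def resolve_transitive_imports(root_iri, direct_imports):
    """Dependency-first resolution of the transitive owl:imports closure.

    `direct_imports` maps each ontology IRI to the IRIs it directly imports.
    Returns the IRIs ordered so that every ontology appears after the
    ontologies it depends on; raises ValueError on an import cycle.
    """
    resolved = []      # ontologies whose whole import tree is already handled
    visiting = set()   # ontologies on the current DFS path (cycle detection)

    def visit(iri, path):
        if iri in resolved:
            return
        if iri in visiting:
            raise ValueError("import cycle detected: " + " -> ".join(path + [iri]))
        visiting.add(iri)
        for dep in direct_imports.get(iri, []):
            visit(dep, path + [iri])
        visiting.remove(iri)
        resolved.append(iri)

    visit(root_iri, [])
    return resolved

# Example with made-up ontology IRIs standing in for repository artifacts.
registry = {
    "http://example.org/onto/project": ["http://example.org/onto/domain"],
    "http://example.org/onto/domain":  ["http://example.org/onto/upper"],
    "http://example.org/onto/upper":   [],
}
print(resolve_transitive_imports("http://example.org/onto/project", registry))
# -> upper, then domain, then project
```

The dependency-first ordering mirrors how Maven resolves and installs artifacts before building the project that depends on them.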
1309.7341
|
2952209837
|
In collaborative agile ontology development projects, support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The Apache Maven approach has proven its success in distributed collaborative Software Engineering by its widespread adoption. The contribution of this paper is a new design artifact called OntoMaven. OntoMaven adopts the Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories.
|
New standardization efforts such as OMG Application Programming Interfaces for Knowledge Bases (OMG API4KB, www.omgwiki.org API4KB) and OMG OntoIOp aim at the accessibility and interoperability of heterogeneous ontologies via standardized interfaces and semantic transformations defined on the meta-level of the ontology models, e.g., by the Distributed Ontology Language (DOL) @cite_10 . These approaches do not address ontology engineering directly, but can provide a standardized repository back-end for OntoMaven ontology development projects.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2952247164"
],
"abstract": [
"The Distributed Ontology Language (DOL) is currently being standardized within the OntoIOp (Ontology Integration and Interoperability) activity of ISO TC 37 SC 3. It aims at providing a unified framework for (1) ontologies formalized in heterogeneous logics, (2) modular ontologies, (3) links between ontologies, and (4) annotation of ontologies. This paper focuses on an application of DOL's meta-theoretical features in mathematical formalization: validating relationships between ontological formalizations of mathematical concepts in COLORE (Common Logic Repository), which provide the foundation for formalizing real-world notions such as spatial and temporal relations."
]
}
|