| aid (string, 9-15 chars) | mid (string, 7-10 chars) | abstract (string, 78-2.56k chars) | related_work (string, 92-1.77k chars) | ref_abstract (dict) |
|---|---|---|---|---|
1306.6169
|
2951249203
|
Small cell networks have recently been proposed as an important evolution path for the next-generation cellular networks. However, with more and more irregularly deployed base stations (BSs), it is becoming increasingly difficult to quantify the achievable network throughput or energy efficiency. In this paper, we develop an analytical framework for downlink performance evaluation of small cell networks, based on a random spatial network model, where BSs and users are modeled as two independent spatial Poisson point processes. A new simple expression for the outage probability is derived, which is analytically tractable and is especially useful with multi-antenna transmissions. This new result is then applied to evaluate the network throughput and energy efficiency. It is analytically shown that deploying more BSs or more BS antennas can always increase the network throughput, but the performance gain critically depends on the BS-user density ratio and the number of BS antennas. On the other hand, increasing the BS density or the number of transmit antennas will first increase and then decrease the energy efficiency if different components of BS power consumption satisfy certain conditions, and the optimal BS density and the optimal number of BS antennas can be found. Otherwise, the energy efficiency will always decrease. Simulation results demonstrate that our conclusions based on the random network model are general and also hold in a regular grid-based model.
|
Previous works that investigate throughput, energy efficiency, and their tradeoff have mainly focused on a point-to-point communication link or the single-cell case @cite_10 @cite_32 @cite_36 @cite_33, while interference from other cells is neglected. Meanwhile, the throughput analysis of conventional cellular networks has received much attention, and different models have been proposed, such as the Wyner model @cite_0 or the grid model @cite_35 @cite_3. While the Wyner model is commonly used due to its tractability, it may fail to capture essential characteristics of real networks @cite_26. On the other hand, the regular grid model becomes intractable as the network size grows, and it cannot capture the irregular network structure of small cell networks. In general, accurately evaluating the performance of cellular networks is challenging due to the complexity of the network topology and the effects of multi-path propagation. A more common way to evaluate cellular networks is through simulation. For example, in @cite_3, different cellular network architectures were compared by simulation. While evaluating network performance through simulation can provide insights into specific settings, the results may not extend to other scenarios, and the computational cost is high.
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_36",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_10"
],
"mid": [
"2057029508",
"2131070905",
"",
"2096380022",
"",
"2076138763",
"2156315971",
""
],
"abstract": [
"In this paper, we present a systematic study of the uplink capacity and coverage of pico-cell wireless networks. Both the one dimensional as well as the two dimensional cases are investigated. Our goal is to compute the size of pico-cells that maximizes the spatial throughput density. To achieve this goal, we consider fluid models that allow us to obtain explicit expressions for the interference and the total received power at a base station. We study the impact of various parameters on the performance: the path loss factor, the spatial reuse factor and the receiver structure (matched filter or multiuser detector). We relate the performance of the fluid models to that of the original discrete system and show that the fluid model provides a bound for the discrete one.",
"The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and the deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outage-based and average-based metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model - the interference intensity term, which depends on the path loss exponent.",
"",
"The dramatic increase of network infrastructure comes at the cost of rapidly increasing energy consumption, which makes optimization of energy efficiency (EE) an important topic. Since EE is often modeled as the ratio of rate to power, we present a mathematical framework called fractional programming that provides insight into this class of optimization problems, as well as algorithms for computing the solution. The main idea is that the objective function is transformed to a weighted sum of rate and power. A generic problem formulation for systems dissipating transmit-independent circuit power in addition to transmit-dependent power is presented. We show that a broad class of EE maximization problems can be solved efficiently, provided the rate is a concave function of the transmit power. We elaborate examples of various system models including time-varying parallel channels. Rate functions with an arbitrary discrete modulation scheme are also treated. The examples considered lead to water-filling solutions, but these are different from the dual problems of power minimization under rate constraints and rate maximization under power constraints, respectively, because the constraints need not be active. We also demonstrate that if the solution to a rate maximization problem is known, it can be utilized to reduce the EE problem into a one-dimensional convex problem.",
"",
"The energy consumption of different cellular network architectures are analyzed. In particular, a comparison of the transmit energy consumption between a single large cell with multiple co-located antennas, multiple micro-cells with a single antenna at each cell, and a large cell with a distributed antenna system are presented. The influence of different system parameters such as cell size, spatial distribution of the users, and the availability of channel state information (CSI) toward the total required transmit energy are analyzed. It is shown that the current macro-cellular architecture with co-located antennas has poor energy efficiency in the absence of CSI, but has better energy efficiency than small cells when perfect CSI is available. Moreover, macro-cells with distributed antennas have the best energy efficiency of all three architectures under perfect CSI. These results shed light on design guidelines to improve the energy efficiency of cellular network architectures.",
"We obtain Shannon-theoretic limits for a very simple cellular multiple-access system. In our model the received signal at a given cell site is the sum of the signals transmitted from within that cell plus a factor spl alpha (0 spl les spl alpha spl les 1) times the sum of the signals transmitted from the adjacent cells plus ambient Gaussian noise. Although this simple model is scarcely realistic, it nevertheless has enough meat so that the results yield considerable insight into the workings of real systems. We consider both a one dimensional linear cellular array and the familiar two-dimensional hexagonal cellular pattern. The discrete-time channel is memoryless. We assume that N contiguous cells have active transmitters in the one-dimensional case, and that N sup 2 contiguous cells have active transmitters in the two-dimensional case. There are K transmitters per cell. Most of our results are obtained for the limiting case as N spl rarr spl infin . The results include the following. (1) We define C sub N ,C spl circ sub N as the largest achievable rate per transmitter in the usual Shannon-theoretic sense in the one- and two-dimensional cases, respectively (assuming that all signals are jointly decoded). We find expressions for limN spl rarr spl infin C sub N and limN spl rarr spl infin C spl circ sub N . (2) As the interference parameter spl alpha increases from 0, C sub N and C spl circ sub N increase or decrease according to whether the signal-to-noise ratio is less than or greater than unity. (3) Optimal performance is attainable using TDMA within the cell, but using TDMA for adjacent cells is distinctly suboptimal. (4) We suggest a scheme which does not require joint decoding of all the users, and is, in many cases, close to optimal. >",
""
]
}
|
1306.6169
|
2951249203
|
Small cell networks have recently been proposed as an important evolution path for the next-generation cellular networks. However, with more and more irregularly deployed base stations (BSs), it is becoming increasingly difficult to quantify the achievable network throughput or energy efficiency. In this paper, we develop an analytical framework for downlink performance evaluation of small cell networks, based on a random spatial network model, where BSs and users are modeled as two independent spatial Poisson point processes. A new simple expression for the outage probability is derived, which is analytically tractable and is especially useful with multi-antenna transmissions. This new result is then applied to evaluate the network throughput and energy efficiency. It is analytically shown that deploying more BSs or more BS antennas can always increase the network throughput, but the performance gain critically depends on the BS-user density ratio and the number of BS antennas. On the other hand, increasing the BS density or the number of transmit antennas will first increase and then decrease the energy efficiency if different components of BS power consumption satisfy certain conditions, and the optimal BS density and the optimal number of BS antennas can be found. Otherwise, the energy efficiency will always decrease. Simulation results demonstrate that our conclusions based on the random network model are general and also hold in a regular grid-based model.
|
Recently, Andrews proposed a random spatial model in which BSs are modeled as a spatial Poisson point process (PPP) @cite_23. This kind of random network model has been used extensively for wireless ad hoc networks @cite_34 @cite_30 @cite_18 @cite_4 @cite_7, and it is well suited to small cell networks, where BS positions are increasingly irregular. Moreover, with the help of stochastic geometry and point process theory @cite_16 @cite_27 @cite_8, this model has been shown to be tractable and accurate, and it can be used to analyze the outage probability and throughput of cellular networks. The same random spatial model has also been used to analyze other networks, such as heterogeneous cellular networks @cite_15 @cite_21 @cite_24 @cite_19, distributed antenna systems @cite_20, and cognitive radio networks @cite_22 @cite_17.
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_16",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_20",
"@cite_17"
],
"mid": [
"2132987440",
"2137079066",
"2012396676",
"1980857000",
"1565287731",
"2145873277",
"2109830484",
"2118166339",
"2087240286",
"2149170915",
"635250944",
"2150166076",
"2120419969",
"2095796369",
"2165706200",
"2117059442"
],
"abstract": [
"An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.",
"This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent alpha > 2 in which nodes use: (1) static beamforming through M sectorized antennas, for which the increase in transmission capacity is shown to be thetas(M2) if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigenbeamforming (maximal ratio transmission combining), in which the increase is shown to be thetas(M 2 alpha ); (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-m fading for increasing m. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains.",
"We develop a general framework for the analysis of a broad class of point-to-point linear multiple-input multiple-output (MIMO) transmission schemes in decentralized wireless ad hoc networks. New general closed-form expressions are derived for the outage probability, throughput and transmission capacity. For the throughput, we investigate the optimal number of data streams in various asymptotic regimes, which is shown to be dependent on different network parameters. For the transmission capacity, we prove that it scales linearly with the number of antennas, provided that the number of data streams also scales linearly with the number of antennas, in addition to meeting some mild technical conditions. We also characterize the optimal number of data streams for maximizing the transmission capacity. To make our discussion concrete, we apply our general framework to investigate three popular MIMO schemes, each requiring different levels of feedback. In particular, we consider eigenmode selection with MIMO singular value decomposition, multiple transmit antenna selection, and open-loop spatial multiplexing. Our analysis of these schemes reveals that significant performance gains are achieved by utilizing feedback under a range of network conditions.",
"This paper analyzes spectrum sharing between multiple systems. The efficiency of spectrum sharing is determined primarily by interference, which is a function of the spatial densities of the transmitters in systems dependent on the chosen spectrum sharing method. One method is underlay, which allows all systems to concurrently use the whole spectrum, and the other is overlay, in which a system only utilizes its own assigned spectrum. We define the spectrum-sharing transmission capacity (S-TC) as the number of successful transmissions per unit area subject to outage probability constraints for each system. To prevent some systems from monopolizing access to the spectrum, we also propose a fair coexistence constraint and derive the optimal spatial densities and relative transmission powers both with and without this constraint in terms of the sum S-TC. Through analytical results, the overlay and underlay methods are compared, verifying that the overlay method is generally preferred, and the underlay method is equally good only for optimal transmission power ratios under a fair coexistence constraint.",
"This paper investigates the performance of open-loop multi-antenna point-to-point links in ad hoc networks with slotted ALOHA medium access control (MAC). We consider spatial multiplexing transmission with linear maximum ratio combining and zero forcing receivers, as well as orthogonal space time block coded transmission. New closed-form expressions are derived for the outage probability, throughput and transmission capacity. Our results demonstrate that both the best performing scheme and the optimum number of transmit antennas depend on different network parameters, such as the node intensity and the signal-to-interference-and-noise ratio operating value. We then compare the performance to a network consisting of single-antenna devices and an idealized fully centrally coordinated MAC. These results show that multi-antenna schemes with a simple decentralized slotted ALOHA MAC can outperform even idealized single-antenna networks in various practical scenarios.",
"Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.",
"The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities.",
"Mathematical Foundation. Point Processes I--The Poisson Point Process. Random Closed Sets I--The Boolean Model. Point Processes II--General Theory. Point Processes III--Construction of Models. Random Closed Sets II--The General Case. Random Measures. Random Processes of Geometrical Objects. Fibre and Surface Processes. Random Tessellations. Stereology. References. Indexes.",
"In this paper, we adopt stochastic geometry theory to analyze the optimal macro micro BS (base station) density for energy-efficient heterogeneous cellular networks with QoS constraints. We first derive the upper and lower bounds of the optimal BS density for homogeneous scenarios and, based on these, we analyze the optimal BS density for heterogeneous networks. The optimal macro micro BS density can be calculated numerically through our analysis, and the closed-form approximation is also derived. Our results reveal the best type of BSs to be deployed for capacity extension, or to be switched off for energy saving. Specifically, if the ratio between the micro BS cost and the macro BS cost is lower than a threshold, which is a function of path loss and their transmit power, the micro BSs are preferred, i.e., deploy more micro BSs for capacity extension or switch off certain macro BSs for energy saving. Otherwise, the optimal choice is the opposite. Our work provides guidance for energy efficient cellular network planning and dynamic operation control.1",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.",
"Preface. Preface to Volume II. Contents of Volume II. Part IV Medium Access Control 1 Spatial Aloha: the Bipole Model 2 Receiver Selection in Spatial 3 Carrier Sense Multiple 4 Code Division Multiple Access in Cellular Networks Bibliographical Notes on Part IV. Part V Multihop Routing in Mobile ad Hoc Networks: 5 Optimal Routing 6 Greedy Routing 7 Time-Space Routing Bibliographical Notes on Part V. Part VI Appendix:Wireless Protocols and Architectures: 8 RadioWave Propagation 9 Signal Detection 10 Wireless Network Architectures and Protocols Bibliographical Notes on Part VI Bibliography Table of Notation Index.",
"Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.",
"Two-tier networks, comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays), offer an economically viable way to improve cellular system capacity. The capacity-limiting factor in such networks is interference. The cross-tier interference between macrocells and femtocells can suffocate the capacity due to the near-far problem, so in practice hotspots should use a different frequency channel than the potentially nearby high-power macrocell users. Centralized or coordinated frequency planning, which is difficult and inefficient even in conventional cellular networks, is all but impossible in a two-tier network. This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA). The proposed allocation is optimal in terms of area spectral efficiency (ASE), and is subjected to a sensible quality of service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate. Results show the dependence of this allocation on the QoS requirement, hotspot density and the co-channel interference from the macrocell and femtocells. Design interpretations are provided.",
"In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M sup 1-2 spl alpha , where M is the spreading factor and spl alpha >2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.",
"In a cellular distributed antenna system (DAS), distributed antenna elements (AEs) are connected to the base station via an offline dedicated link, e.g. fiber optics or line-of-sight RF. Distributed antennas have been recently shown to provide considerable gains in coverage and capacity, at much lower cost than decreasing cell size. Previous studies have neglected the key sources of randomness in such systems, notably (i) random channel effects (fading and shadowing) and (ii) the random quantity and locations of both the mobile users and the AEs. Typically, path loss has been the focus, and the AEs are assumed to be regularly spaced, both of which are significant idealizations. First, we develop an analytical framework that allows random channels to be accommodated. We use this approach to show that selection transmission (using a single AE) is preferable to maximum ratio transmission (which uses all the AEs) in a multicell environment. Interestingly, the opposite is true in an isolated cell. Second, since AEs are placed opportunistically (on tall structures with backhaul access) rather than regularly, we develop a stochastic geometry-inspired approach to determine the outage probability as a function of the number of randomly placed AEs, which we model as a point process. With selection transmission, the outage probability is shown to decrease exponentially with the number of AEs and users. In the most general setup - with multiple distributed antennas and users, and both AE selection and user selection - we show that randomly deployed AEs provide nearly the same performance as regularly spaced AEs.",
"Consider a cognitive radio network with two types of users: primary users (PUs) and cognitive users (CUs), whose locations follow two independent Poisson point processes. The cognitive users follow the policy that a cognitive transmitter is active only when it is outside the primary user exclusion regions. We found that under this setup the active cognitive users form a point process called the Poisson hole process. Due to the interaction between the primary users and the cognitive users through exclusion regions, an exact calculation of the interference and the outage probability seems unfeasible. Instead, two different approaches are taken to tackle this problem. First, bounds for the interference (in the form of Laplace transforms) and the outage probability are derived, and second, it is shown how to use a Poisson cluster process to model the interference in this kind of network. Furthermore, the bipolar network model with different exclusion region settings is analyzed."
]
}
|
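The random spatial (PPP) model discussed in the related-work paragraph of the row above lends itself to a short simulation. The Python sketch below is purely illustrative: it places BSs according to a homogeneous PPP, serves a typical user at the origin from the nearest BS under Rayleigh fading, and estimates the downlink outage probability empirically. All parameter values (density, path-loss exponent, SIR threshold, window size) are assumptions chosen for illustration, not values taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the cited works).
lambda_bs = 1e-5       # BS density [BSs per m^2]
side = 2000.0          # side length of the square observation window [m]
alpha = 4.0            # path-loss exponent
theta = 1.0            # SIR threshold (0 dB) defining outage
n_trials = 2000

outage_count, valid_trials = 0, 0
for _ in range(n_trials):
    # Homogeneous PPP: Poisson number of points, placed uniformly in the window.
    n_bs = rng.poisson(lambda_bs * side ** 2)
    if n_bs < 2:
        continue
    bs = rng.uniform(-side / 2, side / 2, size=(n_bs, 2))

    # Typical user at the origin, associated with the nearest BS; Rayleigh fading.
    dist = np.linalg.norm(bs, axis=1)
    fading = rng.exponential(1.0, size=n_bs)      # |h|^2 ~ Exp(1)
    rx_power = fading * dist ** (-alpha)

    serving = np.argmin(dist)
    signal = rx_power[serving]
    interference = rx_power.sum() - signal

    valid_trials += 1
    if signal / interference < theta:
        outage_count += 1

print("empirical outage probability ~", outage_count / valid_trials)
```

With the parameters above, roughly forty BSs fall in the window per realization; edge effects and noise are ignored, so this is only a rough numerical counterpart of the analytical results discussed in the abstracts.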
1306.6169
|
2951249203
|
Small cell networks have recently been proposed as an important evolution path for the next-generation cellular networks. However, with more and more irregularly deployed base stations (BSs), it is becoming increasingly difficult to quantify the achievable network throughput or energy efficiency. In this paper, we develop an analytical framework for downlink performance evaluation of small cell networks, based on a random spatial network model, where BSs and users are modeled as two independent spatial Poisson point processes. A new simple expression for the outage probability is derived, which is analytically tractable and is especially useful with multi-antenna transmissions. This new result is then applied to evaluate the network throughput and energy efficiency. It is analytically shown that deploying more BSs or more BS antennas can always increase the network throughput, but the performance gain critically depends on the BS-user density ratio and the number of BS antennas. On the other hand, increasing the BS density or the number of transmit antennas will first increase and then decrease the energy efficiency if different components of BS power consumption satisfy certain conditions, and the optimal BS density and the optimal number of BS antennas can be found. Otherwise, the energy efficiency will always decrease. Simulation results demonstrate that our conclusions based on the random network model are general and also hold in a regular grid-based model.
|
So far, most studies that adopt the random network model to analyze cellular networks focus only on the spatial distribution of BSs, while the distribution of mobile users is largely ignored. Specifically, BSs are modeled as a PPP, and each BS is assumed to always have a mobile user to serve, so the user density and the BS-user association are irrelevant. Such an assumption holds only when the user density is much larger than the BS density, which is not the case in small cell networks, where the user density is comparable to the BS density. In this paper, we explicitly consider the user density and the BS-user association. Moreover, most previous works consider only single-antenna BSs. As shown in previous works on wireless ad hoc networks @cite_18 @cite_4 @cite_7, random network models with multi-antenna transmission are much more challenging to analyze than single-antenna systems. In cellular networks, stochastic orders were introduced in @cite_13 to provide a qualitative comparison between different multi-antenna techniques, but such a method cannot be used for quantitative analysis. In our work, we consider multi-antenna transmission in small cell networks and investigate the effect of multiple BS antennas on the system performance.
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_4",
"@cite_7"
],
"mid": [
"",
"2137079066",
"2012396676",
"1565287731"
],
"abstract": [
"",
"This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent alpha > 2 in which nodes use: (1) static beamforming through M sectorized antennas, for which the increase in transmission capacity is shown to be thetas(M2) if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigenbeamforming (maximal ratio transmission combining), in which the increase is shown to be thetas(M 2 alpha ); (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-m fading for increasing m. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains.",
"We develop a general framework for the analysis of a broad class of point-to-point linear multiple-input multiple-output (MIMO) transmission schemes in decentralized wireless ad hoc networks. New general closed-form expressions are derived for the outage probability, throughput and transmission capacity. For the throughput, we investigate the optimal number of data streams in various asymptotic regimes, which is shown to be dependent on different network parameters. For the transmission capacity, we prove that it scales linearly with the number of antennas, provided that the number of data streams also scales linearly with the number of antennas, in addition to meeting some mild technical conditions. We also characterize the optimal number of data streams for maximizing the transmission capacity. To make our discussion concrete, we apply our general framework to investigate three popular MIMO schemes, each requiring different levels of feedback. In particular, we consider eigenmode selection with MIMO singular value decomposition, multiple transmit antenna selection, and open-loop spatial multiplexing. Our analysis of these schemes reveals that significant performance gains are achieved by utilizing feedback under a range of network conditions.",
"This paper investigates the performance of open-loop multi-antenna point-to-point links in ad hoc networks with slotted ALOHA medium access control (MAC). We consider spatial multiplexing transmission with linear maximum ratio combining and zero forcing receivers, as well as orthogonal space time block coded transmission. New closed-form expressions are derived for the outage probability, throughput and transmission capacity. Our results demonstrate that both the best performing scheme and the optimum number of transmit antennas depend on different network parameters, such as the node intensity and the signal-to-interference-and-noise ratio operating value. We then compare the performance to a network consisting of single-antenna devices and an idealized fully centrally coordinated MAC. These results show that multi-antenna schemes with a simple decentralized slotted ALOHA MAC can outperform even idealized single-antenna networks in various practical scenarios."
]
}
|
1306.6115
|
1579807273
|
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation work covers editors, means for comparing types at development time and at runtime, a tool connection to resolve incompatibilities, and an AspectJ-based infrastructure to ensure behavioral type correctness of a system at runtime. Furthermore, the implementation comprises various auxiliary operations. We present some evaluation work based on examples.
|
Specification and contract languages for component-based systems have been studied in the context of web services. A process-algebra-like language and deductive techniques are studied in @cite_2. Another process-algebra-based contract language for web services is studied in @cite_17; the formalism emphasizes compliance, a correctness guarantee for properties such as deadlock and livelock freedom. Another algebraic approach to service composition is featured in @cite_33.
|
{
"cite_N": [
"@cite_17",
"@cite_33",
"@cite_2"
],
"mid": [
"2164834097",
"40768353",
"2032399648"
],
"abstract": [
"We investigate, in a process algebraic setting, a new notion of correctness for service compositions, which we call strong service compliance: composed services are strong compliant if their composition is both deadlock and livelock free (this is the traditional notion of compliance), and whenever a message can be sent to invoke a service, it is guranteed to be ready to serve the invocation. We also define a new notion of refinement, called strong subcontract pre-order, suitable for strong compliance: given a composition of strong compliant services, we can replace any service with any other service in subcontract relation while preserving the overall strong compliance. Finally, we present a characterisation of the strong subcontract pre-order by resorting to the theory of a (should) testing pre-order.",
"We address the problem of ensuring that, when an application executing a service binds to a service that matches required functional properties, both the application and the service can work together, i.e., their composition is consistent. Our approach is based on a component algebra for service-oriented computing in which the configurations of applications and of services are modelled as asynchronous relational nets typed with logical interfaces. The techniques that we propose allow for the consistency of composition to be guaranteed based on properties of service orchestrations (implementations) and interfaces that can be checked at design time, which is essential for supporting the levels of dynamicity required by run-time service binding.",
"Contracts are behavioral descriptions of Web services. We devise a theory of contracts that formalizes the compatibility of a client with a service, and the safe replacement of a service with another service. The use of contracts statically ensures the successful completion of every possible interaction between compatible clients and services. The technical device that underlies the theory is the filter, which is an explicit coercion preventing some possible behaviors of services and, in doing so, make services compatible with different usage scenarios. We show that filters can be seen as proofs of a sound and complete subcontracting deduction system which simultaneously refines and extends Hennessy's classical axiomatization of the must testing preorder. The relation is decidable, and the decision algorithm is obtained via a cut-elimination process that proves the coherence of subcontracting as a logical system. Despite the richness of the technical development, the resulting approach is based on simple ideas and basic intuitions. Remarkably, its application is mostly independent of the language used to program the services or the clients. We outline the practical aspects of our theory by studying two different concrete syntaxes for contracts and applying each of them to Web services languages. We also explore implementation issues of filters and discuss the perspectives of future research this work opens."
]
}
|
1306.6115
|
1579807273
|
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation work covers editors, means for comparing types at development time and at runtime, a tool connection to resolve incompatibilities, and an AspectJ-based infrastructure to ensure behavioral type correctness of a system at runtime. Furthermore, the implementation comprises various auxiliary operations. We present some evaluation work based on examples.
|
Behavioral types as a means for runtime behavioral checks of component-based systems have been investigated in @cite_19. In the work presented here, the focus is instead on the definition of a suitable formal representation for expressing types and on the investigation of their methodical application in the context of a model-based development process.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2770168982"
],
"abstract": [
"The notion of Abstract Data Type (ADT) has served as a foundation model for structured and object oriented programming for some thirty years. The current trend in software engineering toward component based systems requires a foundation model as well. The most basic inherent property of an ADT, i.e., that it provides a set of operations, subverts some highly desirable properties in emerging formal models for components that are based on the object oriented paradigm."
]
}
|
1306.6115
|
1579807273
|
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation work covers editors, means for comparing types at development time and at runtime, a tool connection to resolve incompatibilities, and an AspectJ-based infrastructure to ensure behavioral type correctness of a system at runtime. Furthermore, the implementation comprises various auxiliary operations. We present some evaluation work based on examples.
|
Monitoring of performance and availability attributes of OSGi systems has been studied in @cite_14; there, the focus is on the dynamic reconfiguration ability of OSGi. Another work, using the .NET framework for runtime monitor integration, is described in @cite_21. Runtime monitors for interface specifications of web services, in the context of a concrete e-commerce service, have been studied in @cite_9. Behavioral conformance of web services and corresponding runtime verification have also been investigated in @cite_6. Runtime monitoring for web services in which the monitors are derived from UML diagrams is studied in @cite_30.
|
{
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_6"
],
"mid": [
"2100716485",
"93958208",
"",
"2167056683",
"2089812122"
],
"abstract": [
"For a system of distributed processes, correctness can be ensured by (statically) checking whether their composition satisfies properties of interest. In contrast, Web services are being designed so that each partner discovers properties of others dynamically, through a published interface. Since the overall system may not be available statically and since each business process is supposed to be relatively simple, we propose to use runtime monitoring of conversations between partners as a means of checking behavioural correctness of the entire web service system. Specifically, we identify a subset of UML 2.0 Sequence Diagrams as a property specification language and show that it is sufficiently expressive for capturing safety and liveness properties. By transforming these diagrams to automata, we enable conformance checking of finite execution traces against the specification. We describe an implementation of our approach as part of an industrial system and report on preliminary experience.",
"There is an increasing need to monitor quality attributes (e.g., performance and availability) in SOA environments. Existing approaches to monitor these attributes (commonly referred to as QoS attributes) do not allow reconfiguration while services are in execution. This paper presents a generic QoS-aware SOA mechanism able to monitor runtime quality attributes of services. The solution is dynamic, event-based, extensible, transparent and lightweight in such way that the performance impact on the application is minimal and the overall mechanism is easily reconfigurable. To validate our solution, we propose a typical SOA scenario and evaluate its impact on the performance of the service execution.",
"",
"MOBILE is an extension of the .NET Common Intermediate Language that supports certified In-Lined Reference Monitoring. Mobile programs have the useful property that if they are well-typed with respect to a declared security policy, then they are guaranteed not to violate that security policy when executed. Thus, when an In-Lined Reference Monitor (IRM) is expressed in Mobile, it can be certified by a simple type-checker to eliminate the need to trust the producer of the IRM.Security policies in Mobile are declarative, can involve unbounded collections of objects allocated at runtime, and can regard infinite-length histories of security events exhibited by those objects. The prototype Mobile implementation enforces properties expressed by finite-state security automata - one automaton for each security-relevant object - and can type-check Mobile programs in the presence of exceptions, finalizers, concurrency, and non-termination. Executing Mobile programs requires no change to existing .NET virtual machine implementations, since Mobile programs consist of normal managed CIL code with extra typing annotations stored in .NET attributes.",
"This paper presents a methodology to perform passive testing of behavioural conformance for the web services based on the security rule. The proposed methodology can be used either to check a trace (offline checking) or to runtime verification (online checking) with timing constraints, including future and past time. In order to perform this: firstly, we use the Nomad language to define the security rules. Secondly, we propose an algorithm that can check simultaneously multi instances. Afterwards, with each security rule, we propose a graphical statistics, with some fixed properties, that helps the tester to easy assess about the service. In addition to the theoretical framework we have developed a software tool, called RV4WS (Runtime Verification engine for Web Service), that helps in the automation of our passive testing approach. In particular the algorithm presented in this paper is fully implemented in the tool. We also present a mechanism to collect the observable trace in this paper."
]
}
|
1306.6115
|
1579807273
|
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation work covers editors, means for comparing types at development time and at runtime, a tool connection to resolve incompatibilities, and an AspectJ-based infrastructure to ensure behavioral type correctness of a system at runtime. Furthermore, the implementation comprises various auxiliary operations. We present some evaluation work based on examples.
|
Runtime enforcement of safety properties was initiated with security automata @cite_13, which are able to halt the underlying program upon a deviation from the expected behavior. In our behavioral types framework, the enforcement of specifications is partly left to the system developer, who may or may not take potential Java exceptions resulting from behavioral type violations into account.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2097368879"
],
"abstract": [
"A precise characterization is given for the class of security policies that can be enforced using mechanisms that work by monitoring system execution, and a class of automata is introduced for specifying those security policies. Techniques to enforce security policies specified by such automata are also discussed. READERS NOTE: A substantially revised version of this document is available at http: cs-tr.cs.cornell.edu:80 Dienst UI 1.0 Display ncstrl.cornell TR99-1759"
]
}
|
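As a concrete illustration of the security-automata idea referenced in the row above (a monitor that tracks observed events and halts the program when an event would violate the policy), here is a minimal Python sketch. The event names and the example policy ("no send after reading a secret") are hypothetical and serve only to show the mechanism; they are not taken from the cited work or from the OSGi framework.

```python
class SecurityAutomaton:
    """Minimal truncation-style security automaton: it tracks a state over
    observed events and rejects (halts on) any event that would violate the policy."""

    def __init__(self):
        self.state = "clean"   # states: "clean" -> "tainted"

    def step(self, event):
        if self.state == "clean" and event == "read_secret":
            self.state = "tainted"
        elif self.state == "tainted" and event == "send":
            # An enforcement mechanism would terminate the monitored program here.
            raise RuntimeError("policy violation: send after read_secret")
        # every other event is accepted without changing the state


monitor = SecurityAutomaton()
for ev in ["open", "read_secret", "compute", "send"]:
    try:
        monitor.step(ev)
        print("accepted:", ev)
    except RuntimeError as err:
        print("halted -", err)
        break
```

In the report's setting, the comparable mechanism is the AspectJ-based infrastructure that raises Java exceptions on behavioral type violations; the automaton above only illustrates the general halt-on-violation principle.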
1306.6115
|
1579807273
|
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation work covers editors, means for comparing types at development time and at runtime, a tool connection to resolve incompatibilities, and an AspectJ-based infrastructure to ensure behavioral type correctness of a system at runtime. Furthermore, the implementation comprises various auxiliary operations. We present some evaluation work based on examples.
|
Our behavioral types represent an abstract view of the semantics of OSGi. We have summarized our work on the OSGi semantics in a report @cite_12. Other work describes OSGi and its semantics only at a very high level. A specification based on process algebras is featured in @cite_15. Means for ensuring OSGi bundle compatibility, realized through an advanced versioning system for OSGi bundles based on their type information, are studied in @cite_31. The relation between OSGi and more formal component models has been investigated in @cite_27. Aspects of formal security models for OSGi have been studied in @cite_25.
|
{
"cite_N": [
"@cite_31",
"@cite_27",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2110138217",
"102096933",
"",
"126473541",
"1480723928"
],
"abstract": [
"Consistency of component software is a crucial condition required for correct program execution. The existing consistency controls of OSGi at build time or in runtime cannot prevent type mismatch failures caused by independent client and server bundle development. This paper describes our solution to this problem using automated versioning of components. Version identifiers are generated from results of subtype-based comparison of component representations, thus achieving a consistent and formally backed interpretation of the version numbering scheme. The implementation of the approach allows its integration into standard OSGi bundle development and build cycle.",
"Formal component models have been subject to research for decades, but current component frameworks hardly reflect their capabilities with respect to composition, dependency management and interaction modeling. Thus the frameworks don’t exploit the benefits of formal component models like understandability and ease of maintenance, which are enabled when software is composed of hierarchical and reusable components that are loosely coupled, self-describing and self-contained. In this contribution, we try to examine the discrepancies between the state of research and the capabilities of an existing module framework, the widely-used OSGi bundle management framework for the Java platform. Based on this we propose modifications and enhancements to the OSGi framework that allow to exploit the benefits of formal component models in OSGi-based applications.",
"",
"The natural business model of OSGi is dynamic loading and removal of bundles or services on an OSGi platform. If bundles can come from different stakeholders, how do we make sure that one’s services will only be invoked by the authorized bundles? A simple solution is to interweave functional and security logic within each bundle, but this decreases the benefits of using a common platform for service deployment and is a well-known source of errors. Our solution is to use the Security-by-Contract methodology (SxC) for loading time security verification to separate the security from the business logic while controlling access to applications. The basic idea is that each bundle has a contract embedded into its manifest, that contains details on functional requirements and permissions for access by other bundles on the platform. During bundle installation the contract is matched with the platform security policy (aggregating the contracts of the installed bundles). We illustrate the SxC methodology on a concrete case study for home gateways and discuss how it can help to overcome the OSGi security management shortcomings.",
"We present a formalization of the OSGi component framework. Our formalization is intended to be used as a basis for describing behavior of OSGi based systems. Furthermore, we describe specification formalisms for describing properties of OSGi based systems. One application is its use for behavioral types. Potential uses comprise the derivation of runtime monitors, checking compatibility of component composition, discovering components using brokerage services and checking the compatibility of implementation artifacts towards a specification."
]
}
|
1306.6206
|
2950831269
|
System dynamics and agent-based simulation models can both be used to model and understand interactions of entities within a population. Our modeling work presented here is concerned with understanding the suitability of the different types of simulation for immune system aging problems and comparing their results. We are trying to answer questions such as: How fit is the immune system given a certain age? Would an immune boost be of therapeutic value, e.g., to improve the effectiveness of a simultaneous vaccination? Understanding the processes of immune system aging and degradation may also help in the development of therapies that reverse some of the damage caused, thus improving life expectancy. Therefore, as a first step, our research focuses on T cells, major contributors to immune system functionality. One of the main factors influencing immune system aging is the output rate of naive T cells. Of further interest is the number and phenotypical variety of these cells in an individual, which will be the case study focused on in this paper.
|
In @cite_6, the authors apply both system dynamics (SD) and agent-based simulation (ABS) to simulate non-equilibrium ligand-receptor dynamics over a broad range of concentrations. They conclude that both approaches are powerful tools and complement each other. In their case study, they do not indicate a preferred paradigm, although SD is an obvious choice when studying systems at a high level of aggregation and abstraction, while ABS is well suited to studying phenomena at the level of individual receptors and molecules.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2129213500"
],
"abstract": [
"Cellular receptor dynamics are often analyzed using differential equations, making system dynamics (SD) a candidate methodology. In some cases it may be useful to model the phenomena at the biomolecular level, especially when concentrations and reaction probabilities are low and might lead to unexpected behavior modes. In such cases, agent-based simulation (ABS) may be useful. We show the application of both SD and ABS to simulate non-equilibrium ligand-receptor dynamics over a broad range of concentrations, where the probability of interaction varies from low to very low. Both approaches offer much to the researcher and are complementary. We did not find a clear demarcation indicating when one paradigm or the other would be strongly preferred, although SD is an obvious choice when studying systems at a high level of aggregation and abstraction, and ABS is well suited to studying phenomena at the level of individual receptors and molecules."
]
}
|
1306.6206
|
2950831269
|
System dynamics and agent based simulation models can both be used to model and understand interactions of entities within a population. Our modeling work presented here is concerned with understanding the suitability of the different types of simulation for the immune system aging problems and comparing their results. We are trying to answer questions such as: How fit is the immune system given a certain age? Would an immune boost be of therapeutic value, e.g. to improve the effectiveness of a simultaneous vaccination? Understanding the processes of immune system aging and degradation may also help in development of therapies that reverse some of the damages caused thus improving life expectancy. Therefore as a first step our research focuses on T cells; major contributors to immune system functionality. One of the main factors influencing immune system aging is the output rate of naive T cells. Of further interest is the number and phenotypical variety of these cells in an individual, which will be the case study focused on in this paper.
|
The work presented in @cite_12 also compares these modelling approaches and identifies a list of likely opportunities for cross-fertilization. The list is not exhaustive and is intended as a starting point for other researchers to take such synergistic views even further.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2093587840"
],
"abstract": [
"Summary form only given. This work proposes a systematic approach to identify opportunities for cross-fertilization between two modeling paradigms: system dynamics and agent-based modeling. The motivation for this work is the authors' belief that there are gains to be made by crossing the boundaries between different domains of research, or different scientific approaches. This paper presents a comparison between the two modeling approaches, which is brought one step beyond the simple statement of the similarities and differences between them by introducing the novel aspect of taking a synergistic view specifically aimed at identifying a list of likely opportunities for cross-fertilization. The list presented here is not exhaustive and should be regarded more as a starting point than an ending point, and an invitation to other scientists to take such synergistic views even further."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
Here, we will review relevant work on random sampling algorithms for computing approximate solutions to the general overconstrained LS problem @cite_19 @cite_32 @cite_1 . These algorithms choose (in general, nonuniformly) a subsample of the data, e.g., a small number of rows of @math and the corresponding elements of @math , and then they perform (typically weighted) LS on the subsample. Importantly, these algorithms make no assumptions on the input data @math and @math , except that @math .
|
{
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_32"
],
"mid": [
"163118316",
"2616345629",
"2951863880"
],
"abstract": [
"",
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments.",
"Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
A prototypical example of this approach is given by the following meta-algorithm @cite_19 @cite_32 @cite_1 , which we call SubsampleLS , and which takes as input an @math matrix @math , where @math , a vector @math , and a probability distribution @math , and which returns as output an approximate solution @math , which is an estimate of @math of Eqn. ).
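The following is a minimal sketch of this sampling-and-rescaling meta-algorithm, under the usual convention that a row drawn with probability p_i is rescaled by 1/sqrt(r p_i); the function name subsample_ls and the use of numpy's lstsq are our own choices for illustration, not part of the cited work.

```python
import numpy as np

def subsample_ls(A, b, probs, r, rng=np.random.default_rng(0)):
    """Sample r rows of (A, b) with replacement according to `probs`,
    rescale each drawn row by 1/sqrt(r * p_i), and solve the resulting
    smaller weighted least-squares problem."""
    n = A.shape[0]
    idx = rng.choice(n, size=r, replace=True, p=probs)
    w = 1.0 / np.sqrt(r * probs[idx])          # importance-sampling rescaling
    A_s = A[idx] * w[:, None]
    b_s = b[idx] * w
    beta_hat, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)
    return beta_hat

# Tiny usage example with uniform sampling probabilities.
rng = np.random.default_rng(1)
n, d = 2000, 5
A = rng.normal(size=(n, d))
beta_true = np.arange(1.0, d + 1)
b = A @ beta_true + 0.1 * rng.normal(size=n)
print(subsample_ls(A, b, np.full(n, 1.0 / n), r=200))
```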
|
{
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_32"
],
"mid": [
"163118316",
"2616345629",
"2951863880"
],
"abstract": [
"",
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments.",
"Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
Since SubsampleLS samples constraints and not variables, the dimensionality of the vector @math that solves the (still overconstrained, but smaller) weighted LS subproblem is the same as that of the vector @math that solves the original LS problem. The former may thus be taken as an approximation of the latter, where, of course, the quality of the approximation depends critically on the choice of @math . There are several distributions that have been considered previously @cite_19 @cite_32 @cite_1 .
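Two of the simplest choices for this distribution can be sketched as follows: uniform probabilities, and probabilities proportional to the exact statistical leverage scores (computed here from a thin SVD). The function names are ours; the leverage scores are the squared row norms of the left singular vector matrix, as in the cited work.

```python
import numpy as np

def uniform_probs(A):
    """UNIF: every row of A is equally likely to be sampled."""
    n = A.shape[0]
    return np.full(n, 1.0 / n)

def leverage_probs(A):
    """LEV: probabilities proportional to the leverage scores h_ii,
    i.e. the squared row norms of the left singular vectors U."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(U**2, axis=1)                 # the h_ii sum to d
    return lev / lev.sum()

# Example: a heavy-tailed design concentrates leverage on a few rows.
rng = np.random.default_rng(2)
A = rng.standard_t(df=1, size=(1000, 4))
print("max uniform prob :", uniform_probs(A).max())
print("max leverage prob:", leverage_probs(A).max())
```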
|
{
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_32"
],
"mid": [
"163118316",
"2616345629",
"2951863880"
],
"abstract": [
"",
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments.",
"Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
Although Uniform Subsampling (with or without replacement) is very simple to implement, it is easy to construct examples where it will perform very poorly (e.g., see below or see @cite_19 @cite_32 ). On the other hand, it has been shown that, for a parameter @math to be tuned, if then the following relative-error bounds hold: where @math is the condition number of @math and where @math is a parameter defining the amount of the mass of @math inside the column space of @math @cite_19 @cite_32 @cite_1 .
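A small constructed example of this failure mode is sketched below (our own illustration, not from the cited papers): all of the information about one coordinate of the regression vector sits in a handful of high-leverage rows, so a small uniform subsample usually misses those rows entirely, while leverage-based sampling picks them up.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, r = 10000, 2, 100

# All but 5 rows vary only in the first coordinate; the last 5 rows carry
# essentially all of the information about the second coordinate.
A = np.zeros((n, d))
A[:, 0] = rng.normal(size=n)
A[-5:, 1] = 100.0
beta_true = np.array([1.0, 2.0])
b = A @ beta_true + 0.1 * rng.normal(size=n)

U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = np.sum(U**2, axis=1)

def estimate(probs):
    """Subsample r rows according to `probs`, rescale, and solve the small LS problem."""
    idx = rng.choice(n, size=r, replace=True, p=probs)
    w = 1.0 / np.sqrt(r * probs[idx])
    beta, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return beta

print("uniform :", estimate(np.full(n, 1.0 / n)))  # usually misses the second coordinate
print("leverage:", estimate(lev / lev.sum()))      # typically recovers both coordinates
```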
|
{
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_32"
],
"mid": [
"163118316",
"2616345629",
"2951863880"
],
"abstract": [
"",
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments.",
"Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, and this work was performed by individuals from many different research communities. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. An emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. Randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
Although it is not our main focus, the running time for leverage-based sampling algorithms is of interest. The running times of these algorithms depend on both the time to construct the probability distribution, @math , and the time to solve the subsampled problem. For UNIF, the former is trivial and the latter depends on the size of the subproblem. For estimators that depend on the exact or approximate (recall the flexibility in Eqn. ) provided by @math ) leverage scores, the running time is dominated by the exact or approximate computation of those scores. A naïve algorithm involves using a QR decomposition or the thin SVD of @math to obtain the exact leverage scores. Unfortunately, this exact algorithm takes @math time and is thus no faster than solving the original LS problem exactly. Of greater interest is the algorithm of @cite_1 that computes relative-error approximations to all of the leverage scores of @math in @math time.
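For concreteness, the naïve exact computation can be sketched as follows: the leverage scores are the squared row norms of any orthonormal basis for the column space of @math , obtained from either a thin QR factorization or a thin SVD, both of which cost on the order of n d^2 operations. The function names below are our own.

```python
import numpy as np

def leverage_scores_qr(A):
    """Exact leverage scores via a thin QR factorization (roughly O(n d^2))."""
    Q, _ = np.linalg.qr(A, mode='reduced')
    return np.sum(Q**2, axis=1)

def leverage_scores_svd(A):
    """Exact leverage scores via a thin SVD (also roughly O(n d^2))."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

A = np.random.default_rng(4).normal(size=(500, 8))
# Both orthonormal bases span the same column space, so the scores agree.
assert np.allclose(leverage_scores_qr(A), leverage_scores_svd(A))
```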
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2616345629"
],
"abstract": [
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
In more detail, given as input an arbitrary @math matrix @math , with @math , and an error parameter @math , the main algorithm of @cite_1 (also described below) computes numbers @math , for all @math , that are relative-error approximations to the leverage scores @math , in the sense that @math , for all @math . This algorithm runs in roughly @math time. (In more detail, the asymptotic running time of the main algorithm of @cite_1 is @math . To simplify this expression, suppose that @math and treat @math as a constant; then the asymptotic running time is @math , which for appropriate parameter settings is @math time @cite_1 .) Given the numbers @math , for all @math , we can let @math , which yields probabilities of the form of Eqn. ) with (say) @math or @math . Thus, we can use these @math in place of @math in BELV, SLEV, or LEVUNW, thereby providing a way to implement these procedures in @math time.
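The two-stage structure of this fast approximation can be sketched as follows. Note that this is only a rough illustration: a plain Gaussian sketch stands in for the randomized Hadamard transform used in @cite_1 , and the sketch sizes r1 and r2 are placeholder choices rather than the theoretically prescribed values.

```python
import numpy as np

def approx_leverage_scores(A, r1, r2, rng=np.random.default_rng(5)):
    """Approximate leverage scores of A (n x d, n >> d).

    Structure: (i) sketch A down to r1 rows and take a QR factorization to
    obtain an approximate whitening matrix R; (ii) estimate the squared row
    norms of A R^{-1} with a second, r2-dimensional random projection."""
    n, d = A.shape
    Pi1 = rng.normal(size=(r1, n)) / np.sqrt(r1)     # stands in for the SRHT
    _, R = np.linalg.qr(Pi1 @ A, mode='reduced')     # R is d x d
    Pi2 = rng.normal(size=(d, r2)) / np.sqrt(r2)     # JL-style projection
    X = A @ np.linalg.solve(R, Pi2)                  # n x r2; avoids forming A R^{-1}
    return np.sum(X**2, axis=1)

rng = np.random.default_rng(6)
A = rng.normal(size=(5000, 10))
exact = np.sum(np.linalg.qr(A, mode='reduced')[0]**2, axis=1)
approx = approx_leverage_scores(A, r1=200, r2=50)
print("median relative error:", np.median(np.abs(approx - exact) / exact))
```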
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2616345629"
],
"abstract": [
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
The running time of the relative-error approximation algorithm of @cite_1 depends on the time needed to premultiply @math by a randomized Hadamard transform (i.e., a "structured" random projection). Recently, high-quality numerical implementations of such random projections have been provided; see, e.g., Blendenpik @cite_28 , as well as LSRN @cite_5 , which extends these implementations to large-scale parallel environments. These implementations demonstrate that, for matrices as small as several thousand by several hundred, leverage-based algorithms such as LEV and SLEV can be faster than computing a QR decomposition or the SVD of the full matrix. See @cite_28 @cite_5 for details, and see @cite_34 for the application of these methods to the fast computation of leverage scores. Below, we will evaluate an implementation of a variant of the main algorithm of @cite_1 in the software environment R.
|
{
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_34",
"@cite_1"
],
"mid": [
"",
"2952826487",
"2949526110",
"2616345629"
],
"abstract": [
"",
"We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to @math , where @math with @math or @math , and where @math may be rank-deficient. Tikhonov regularization may also be included. Since @math is only involved in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when @math is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size @math , where @math is moderately larger than 1, e.g., @math . We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results demonstrate that on a shared-memory machine, LSRN outperforms LAPACK's DGELSD on large dense problems, and MATLAB's backslash (SuiteSparseQR) on sparse problems. Further experiments demonstrate that LSRN scales well on an Amazon Elastic Compute Cloud cluster.",
"We reconsider randomized algorithms for the low-rank approximation of symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel matrices that arise in data analysis and machine learning applications. Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices. Our results highlight complementary aspects of sampling versus projection methods; they characterize the effects of common data preprocessing steps on the performance of these algorithms; and they point to important differences between uniform sampling and nonuniform sampling methods based on leverage scores. In addition, our empirical results illustrate that existing theory is so weak that it does not provide even a qualitative guide to practice. Thus, we complement our empirical results with a suite of worst-case theoretical bounds for both random sampling and random projection methods. These bounds are qualitatively superior to existing bounds---e.g. improved additive-error bounds for spectral and Frobenius norm error and relative-error bounds for trace norm error---and they point to future directions to make these algorithms useful in even larger-scale machine learning applications.",
"The statistical leverage scores of a matrix A are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and that returns as output relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of n and d) in O(nd logn) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with n ≈ d, and the extension to streaming environments."
]
}
|
1306.5362
|
2953237849
|
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
|
Our leverage-based methods for estimating @math are related to resampling methods such as the bootstrap @cite_3 , and many of these resampling methods enjoy desirable asymptotic properties @cite_6 . Resampling methods in linear models were studied extensively in @cite_29 and are related to the jackknife @cite_35 @cite_4 @cite_20 @cite_41 . These methods usually produce resamples of a size similar to that of the full data, whereas algorithmic leveraging is primarily concerned with constructing subproblems that are much smaller than the full data. In addition, the goal of resampling is traditionally to perform statistical inference, not to improve the running time of an algorithm, with the very recent work @cite_24 being an exception. Additional related work in statistics includes @cite_0 @cite_30 @cite_23 @cite_36 @cite_2 .
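The contrast between the two regimes can be made concrete with the following small sketch (our own illustration, with arbitrary data and sizes): a classical paired bootstrap resamples n rows to assess variability of the full-data estimator, whereas a leveraging subproblem uses only r much smaller than n rows to approximate that estimator cheaply.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, r = 5000, 3, 100
A = rng.normal(size=(n, d))
b = A @ np.ones(d) + rng.normal(size=n)

# Classical paired bootstrap: resample n rows (same size as the data)
# to assess the variability of the full-data estimator.
idx_boot = rng.choice(n, size=n, replace=True)
beta_boot, *_ = np.linalg.lstsq(A[idx_boot], b[idx_boot], rcond=None)

# Algorithmic leveraging: solve a much smaller (r << n) rescaled subproblem
# to approximate the full-data estimator at reduced cost.
U, _, _ = np.linalg.svd(A, full_matrices=False)
probs = np.sum(U**2, axis=1) / d
idx_lev = rng.choice(n, size=r, replace=True, p=probs)
w = 1.0 / np.sqrt(r * probs[idx_lev])
beta_lev, *_ = np.linalg.lstsq(A[idx_lev] * w[:, None], b[idx_lev] * w, rcond=None)

print("bootstrap resample size:", n, " leveraging subsample size:", r)
print("bootstrap estimate :", beta_boot)
print("leveraging estimate:", beta_lev)
```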
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_4",
"@cite_41",
"@cite_36",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_20"
],
"mid": [
"2001994823",
"1981492171",
"2033260623",
"1976927254",
"",
"2104499761",
"2327088997",
"2117897510",
"2949432502",
"",
"1963828038",
"",
""
],
"abstract": [
"",
"SUMMARY Research on the jackknife technique since its introduction by Quenouille and Tukey is reviewed. Both its role in bias reduction and in robust interval estimation are treated. Some speculations and suggestions about future research are made. The bibliography attempts to include all published work on jackknife methodology.",
"",
"Abstract This is an invited expository article for The American Statistician. It reviews the nonparametric estimation of statistical error, mainly the bias and standard error of an estimator, or the error rate of a prediction rule. The presentation is written at a relaxed mathematical level, omitting most proofs, regularity conditions, and technical details.",
"",
"On etudie les methodes classiques de reechantillonnage dans le contexte des modeles de regression et on propose de nouvelles methodes",
"15. The Jackknife and Bootstrap. By J. Shao and D. Tu. ISBN 0 387 94515 6. Springer, New York, 1995. xviii + 516 pp. DM 74.",
"We discuss the following problem given a random sample X = (X 1, X 2,…, X n) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F), on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case R(X, F) = ( ( F ) - ( F ) ), θ some parameter of interest.) A general method, called the “bootstrap”, is introduced, and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.",
"The bootstrap provides a simple and powerful means of assessing the quality of estimators. However, in settings involving large datasets, the computation of bootstrap-based quantities can be prohibitively demanding. As an alternative, we present the Bag of Little Bootstraps (BLB), a new procedure which incorporates features of both the bootstrap and subsampling to obtain a robust, computationally efficient means of assessing estimator quality. BLB is well suited to modern parallel and distributed computing architectures and retains the generic applicability, statistical efficiency, and favorable theoretical properties of the bootstrap. We provide the results of an extensive empirical and theoretical investigation of BLB's behavior, including a study of its statistical correctness, its large-scale implementation and performance, selection of hyperparameters, and performance on real data.",
"",
"Abstract We discuss ways of combining rejection sampling and importance sampling methods in Monte Carlo computations and demonstrate their usefulness in updating dynamic systems. Specifically, we propose the rejection controlled sequential importance sampling (RC-SIS) algorithm, which is designed to simultaneously reduce Monte Carlo variation and retain independent samples in sequential importance sampling. The proposed method is demonstrated by three examples taken from econometrics, hierarchical Bayes analysis, and digital telecommunications. They all show significant improvements over previous results.",
"",
""
]
}
|
1306.5204
|
2949731435
|
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
|
Twitter's Streaming API has been used throughout social media and network analysis research to understand how users behave on these platforms. It has been used to collect data for topic modeling @cite_15 @cite_17 , network analysis @cite_21 , and statistical analysis of content @cite_6 , among other tasks. Researchers' reliance upon this data source is significant, and these examples represent only a small fraction of the studies that depend on it. Given the widespread use of Twitter's Streaming API across scientific fields, it is important to understand how working with a sub-sample of the generated data affects such results.
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_6",
"@cite_17"
],
"mid": [
"2063904635",
"2060009247",
"2018165284",
"2010273307"
],
"abstract": [
"Social networks such as Facebook, LinkedIn, and Twitter have been a crucial source of information for a wide spectrum of users. In Twitter, popular information that is deemed important by the community propagates through the network. Studying the characteristics of content in the messages becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis and others. While many researchers wish to use standard text mining tools to understand messages on Twitter, the restricted length of those messages prevents them from being employed to their full potential. We address the problem of using standard topic models in micro-blogging environments by studying how the models can be trained on the dataset. We propose several schemes to train a standard topic model and compare their quality and effectiveness through a set of carefully designed experiments from both qualitative and quantitative perspectives. We show that by training a topic model on aggregated messages we can obtain a higher quality of learned model which results in significantly better performance in two real-world classification problems. We also discuss how the state-of-the-art Author-Topic model fails to model hierarchical relationships between entities in Social Media.",
"In this work we developed a surveillance architecture to detect diseases-related postings in social networks using Twitter as an example for a high-traffic social network. Our real-time architecture uses Twitter streaming API to crawl Twitter messages as they are posted. Data mining techniques have been used to index, extract and classify postings. Finally, we evaluate the performance of the classifier with a dataset of public health postings and also evaluate the run-time performance of whole system with respect to latency and throughput.",
"We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario.",
"Human-generated textual data streams from services such as Twitter increasingly become geo-referenced. The spatial resolution of their coverage improves quickly, making them a promising instrument for sensing various aspects of evolution and dynamics of social systems. This work explores spacetime structures of the topical content of short textual messages in a stream available from Twitter in Ireland. It uses a streaming Latent Dirichlet Allocation topic model trained with an incremental variational Bayes method. The posterior probabilities of the discovered topics are post-processed with a spatial kernel density and subjected to comparative analysis. The identified prevailing topics are often found to be spatially contiguous. We apply Markov-modulated non-homogeneous Poisson processes to quantify a proportion of novelty in the observed abnormal patterns. A combined use of these techniques allows for real-time analysis of the temporal evolution and spatial variability of population's response to various stimuli such as large scale sportive, political or cultural events."
]
}
|
1306.5204
|
2949731435
|
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
|
From a statistical point of view, the law of large numbers (the mean of a sample converges to the mean of the entire population) and the Glivenko-Cantelli theorem (the unknown distribution @math of an attribute in a population can be approximated by the observed distribution @math ) guarantee satisfactory results from sampled data when the randomly selected sub-sample is large enough. From a network-algorithmic perspective @cite_25 , the question is more complicated. Previous efforts have delved into the topic of network sampling and how working with a restricted set of data can affect common network measures. The problem was studied early on in @cite_5 , where the author proposes an algorithm to sample networks in a way that allows one to estimate basic network properties. More recently, @cite_32 and @cite_27 studied the effect of data error on common network centrality measures by randomly deleting and adding nodes and edges, finding that centrality measures are usually most resilient on dense networks. In @cite_37 , the authors study global properties of simulated random graphs to better understand data error in social networks, and @cite_35 proposes a strategy for sampling large graphs so as to preserve network measures.
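A small experiment in the spirit of these robustness studies (our own illustration, not taken from the cited work) is sketched below: randomly delete a fraction of nodes from a graph and check how well degree centrality on the damaged graph tracks the original values. The random graph model, deletion fraction, and use of a simple correlation are placeholder choices.

```python
# Illustrative sketch: how much does random node deletion distort degree centrality?
# Requires networkx and numpy.
import networkx as nx
import numpy as np

rng = np.random.default_rng(8)
G = nx.erdos_renyi_graph(n=500, p=0.05, seed=42)
true_cent = nx.degree_centrality(G)

frac_removed = 0.10
drop = [int(v) for v in rng.choice(list(G.nodes()),
                                   size=int(frac_removed * G.number_of_nodes()),
                                   replace=False)]
H = G.copy()
H.remove_nodes_from(drop)
damaged_cent = nx.degree_centrality(H)

# Compare centrality of the surviving nodes before and after the deletion.
survivors = sorted(H.nodes())
orig_vals = np.array([true_cent[v] for v in survivors])
dam_vals = np.array([damaged_cent[v] for v in survivors])
corr = np.corrcoef(orig_vals, dam_vals)[0, 1]
print(f"correlation after removing {frac_removed:.0%} of nodes: {corr:.3f}")
```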
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_32",
"@cite_27",
"@cite_5",
"@cite_25"
],
"mid": [
"2146008005",
"2131492273",
"",
"2042901984",
"2150797113",
"2061901927"
],
"abstract": [
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential.The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success?.We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample.In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15 of the original graph.",
"We perform sensitivity analyses to assess the impact of missing data on the structural properties of social networks. The social network is conceived of as being generated by a bipartite graph, in which actors are linked together via multiple interaction contexts or affiliations. We discuss three principal missing data mechanisms: network boundary specification (non-inclusion of actors or affiliations), survey non-response, and censoring by vertex degree (fixed choice design), examining their impact on the scientific collaboration network from the Los Alamos E-print Archive as well as random bipartite graphs. The simulation results show that network boundary specification and fixed choice designs can dramatically alter estimates of network-level statistics. The observed clustering and assortativity coefficients are overestimated via omission of affiliations or fixed choice thereof, and underestimated via actor non-response, which results in inflated measurement error. We also find that social networks with multiple interaction contexts may have certain interesting properties due to the presence of overlapping cliques. In particular, assortativity by degree does not necessarily improve network robustness to random omission of nodes as predicted by current theory.",
"",
"An analysis is conducted on the robustness of measures of centrality in the face of random error in the network data. We use random networks of varying sizes and densities and subject them (separately) to four kinds of random error in varying amounts. The types of error are edge deletion, node deletion, edge addition, and node addition. The results show that the accuracy of centrality measures declines smoothly and predictably with the amount of error. This suggests that, for random networks and random error, we shall be able to construct confidence intervals around centrality scores. In addition, centrality measures were highly similar in their response to error. Dense networks were the most robust in the face of all kinds of error except edge deletion. For edge deletion, sparse networks were more accurately measured.",
"Social network research has been confined to small groups because large networks are intractable, and no systematic theory of network sampling exists. This paper describes a practical method for sampling average acquaintance volume (the average number of people known by each person) from large populations and derives confidence limits on the resulting estimates. It is shown that this average figure also yields an estimate of what has been called \"network density.\" Applications of the procedure to community studies, hierarchical structures, and interorganizational networks are proposed. Problems in developing a general theory of network sampling are discussed.",
"Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrixes Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17. Future directions."
]
}
|
1306.5204
|
2949731435
|
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
|
In this work we compare the datasets by analyzing facets commonly used in the literature. We start by comparing the top hashtags found in the tweets, a feature of the text commonly used for analysis. In @cite_19 , the authors try to predict the number of tweets that will mention a particular hashtag. Using a regression model trained with features extracted from the text, they find that the content of the idea behind the tag is vital to the count of tweets employing it. Tweeting a hashtag automatically adds the tweet to a page showing tweets published by other users that contain the same hashtag. In @cite_3 , the authors find that this communal property of hashtags, along with the meaning of the tag itself, drives the adoption of hashtags on Twitter. @cite_14 studies the propagation patterns of URLs on sampled Twitter data.
|
{
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_3"
],
"mid": [
"2114544578",
"49973572",
"2104894372"
],
"abstract": [
"Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error. Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.",
"Platforms such as Twitter have provided researchers with ample opportunities to analytically study social phenomena. There are however, significant computational challenges due to the enormous rate of production of new information: researchers are therefore, often forced to analyze a judiciously selected “sample” of the data. Like other social media phenomena, information diffusion is a social process–it is affected by user context, and topic, in addition to the graph topology. This paper studies the impact of different attribute and topology based sampling strategies on the discovery of an important social media phenomena–information diffusion. We examine several widely-adopted sampling methods that select nodes based on attribute (random, location, and activity) and topology (forest fire) as well as study the impact of attribute based seed selection on topology based sampling. Then we develop a series of metrics for evaluating the quality of the sample, based on user activity (e.g. volume, number of seeds), topological (e.g. reach, spread) and temporal characteristics (e.g. rate). We additionally correlate the diffusion volume metric with two external variables–search and news trends. Our experiments reveal that for small sample sizes (30 ), a sample that incorporates both topology and user context (e.g. location, activity) can improve on naive methods by a significant margin of 15-20 .",
"Researchers and social observers have both believed that hashtags, as a new type of organizational objects of information, play a dual role in online microblogging communities (e.g., Twitter). On one hand, a hashtag serves as a bookmark of content, which links tweets with similar topics; on the other hand, a hashtag serves as the symbol of a community membership, which bridges a virtual community of users. Are the real users aware of this dual role of hashtags? Is the dual role affecting their behavior of adopting a hashtag? Is hashtag adoption predictable? We take the initiative to investigate and quantify the effects of the dual role on hashtag adoption. We propose comprehensive measures to quantify the major factors of how a user selects content tags as well as joins communities. Experiments using large scale Twitter datasets prove the effectiveness of the dual role, where both the content measures and the community measures significantly correlate to hashtag adoption on Twitter. With these measures as features, a machine learning model can effectively predict the future adoption of hashtags that a user has never used before."
]
}
|
1306.5204
|
2949731435
|
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
|
Topic analysis can also be used to better understand the content of tweets. @cite_26 narrows the problem to disaster-related tweets, discovering two main types of topics: informational and emotional. @cite_34 @cite_9 @cite_17 all study the problem of identifying topics in geographical Twitter datasets, proposing models to extract topics relevant to different geographical areas in the data. Finally, @cite_13 studies how the topics users discuss drive their geolocation.
|
{
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_34",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2149510050",
"2127860643",
"2047240862",
"2010273307"
],
"abstract": [
"",
"Micro-blogging services have become indispensable communication tools for online users for disseminating breaking news, eyewitness accounts, individual expression, and protest groups. Recently, Twitter, along with other online social networking services such as Foursquare, Gowalla, Facebook and Yelp, have started supporting location services in their messages, either explicitly, by letting users choose their places, or implicitly, by enabling geo-tagging, which is to associate messages with latitudes and longitudes. This functionality allows researchers to address an exciting set of questions: 1) How is information created and shared across geographical locations, 2) How do spatial and linguistic characteristics of people vary across regions, and 3) How to model human mobility. Although many attempts have been made for tackling these problems, previous methods are either complicated to be implemented or oversimplified that cannot yield reasonable performance. It is a challenge task to discover topics and identify users' interests from these geo-tagged messages due to the sheer amount of data and diversity of language variations used on these location sharing services. In this paper we focus on Twitter and present an algorithm by modeling diversity in tweets based on topical diversity, geographical diversity, and an interest distribution of the user. Furthermore, we take the Markovian nature of a user's location into account. Our model exploits sparse factorial coding of the attributes, thus allowing us to deal with a large and diverse set of covariates efficiently. Our approach is vital for applications such as user profiling, content recommendation and topic tracking. We show high accuracy in location estimation based on our model. Moreover, the algorithm identifies interesting topics based on location and language.",
"This paper studies the problem of discovering and comparing geographical topics from GPS-associated documents. GPS-associated documents become popular with the pervasiveness of location-acquisition technologies. For example, in Flickr, the geo-tagged photos are associated with tags and GPS locations. In Twitter, the locations of the tweets can be identified by the GPS locations from smart phones. Many interesting concepts, including cultures, scenes, and product sales, correspond to specialized geographical distributions. In this paper, we are interested in two questions: (1) how to discover different topics of interests that are coherent in geographical regions? (2) how to compare several topics across different geographical locations? To answer these questions, this paper proposes and compares three ways of modeling geographical topics: location-driven model, text-driven model, and a novel joint model called LGTA (Latent Geographical Topic Analysis) that combines location and text. To make a fair comparison, we collect several representative datasets from Flickr website including Landscape, Activity, Manhattan, National park, Festival, Car, and Food. The results show that the first two methods work in some datasets but fail in others. LGTA works well in all these datasets at not only finding regions of interests but also providing effective comparisons of the topics across different locations. The results confirm our hypothesis that the geographical distributions can help modeling topics, while topics provide important cues to group different geographical regions.",
"In this work, we use foursquare check-ins to cluster users via topic modeling, a technique commonly used to classify text documents according to latent \"themes\". Here, however, the latent variables which group users can be thought of not as themes but rather as factors which drive check in behaviors, allowing for a qualitative understanding of influences on user check ins. Our model is agnostic of geo-spatial location, time, users' friends on social networking sites and the venue categories-we treat the existence of and intricate interactions between these factors as being latent, allowing them to emerge entirely from the data. We instantiate our model on data from New York and the San Francisco Bay Area and find evidence that the model is able to identify groups of people which are of different types (e.g. tourists), communities (e.g. users tightly clustered in space) and interests (e.g. people who enjoy athletics).",
"Human-generated textual data streams from services such as Twitter increasingly become geo-referenced. The spatial resolution of their coverage improves quickly, making them a promising instrument for sensing various aspects of evolution and dynamics of social systems. This work explores spacetime structures of the topical content of short textual messages in a stream available from Twitter in Ireland. It uses a streaming Latent Dirichlet Allocation topic model trained with an incremental variational Bayes method. The posterior probabilities of the discovered topics are post-processed with a spatial kernel density and subjected to comparative analysis. The identified prevailing topics are often found to be spatially contiguous. We apply Markov-modulated non-homogeneous Poisson processes to quantify a proportion of novelty in the observed abnormal patterns. A combined use of these techniques allows for real-time analysis of the temporal evolution and spatial variability of population's response to various stimuli such as large scale sportive, political or cultural events."
]
}
|
1306.5204
|
2949731435
|
Twitter is a social media giant famous for the exchange of short, 140-character messages called "tweets". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a "Streaming API" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API.
|
Geolocation has become a prominent area in the study of social media data. In @cite_22 , the authors try to classify towns based upon the content of the geotagged tweets that originate from within them. @cite_2 studies Twitter's use as a sensor for disaster information by analyzing the geographical properties of users' tweets. The authors discover that, in the later stages of a crisis, Twitter's information is accurate for information dissemination and retrieval.
|
{
"cite_N": [
"@cite_22",
"@cite_2"
],
"mid": [
"1980402475",
"2070722606"
],
"abstract": [
"The advent of location-based social networking sites provides an open sharing space of crowd-sourced lifelogs that can be regarded as a novel source to monitor massive crowds' lifestyles in the real world. In this paper, we challenge to analyze urban characteristics in terms of crowd behavior by utilizing the crowd lifelogs in urban area. In order to collect crowd behavioral data, we utilize Twitter where enormous numbers of geo-tagged crowd's micro lifelogs can be easily acquired. We model the crowd behavior on the social network sites as a feature, which will be used to derive crowd-based urban characteristics. Based on this crowd behavior feature, we analyze significant crowd behavioral patterns for extracting urban characteristics. In the experiment, we actually conduct the urban characterization over the crowd behavioral patterns using a large number of geo-tagged tweets found in Japan from Twitter and report a comparison result with map-based observation of cities as an evaluation.",
"The emergence of innovative web applications, often labelled as Web 2.0, has permitted an unprecedented increase of content created by non-specialist users. In particular, Location-based Social Networks (LBSN) are designed as platforms allowing the creation, storage and retrieval of vast amounts of georeferenced and user-generated contents. LBSN can thus be seen by Geographic Information specialists as a timely and cost-effective source of spatio-temporal information for many fields of application, provided that they can set up workflows to retrieve, validate and organise such information. This paper aims to improve the understanding on how LBSN can be used as a reliable source of spatio-temporal information, by analysing the temporal, spatial and social dynamics of Twitter activity during a major forest fire event in the South of France in July 2009."
]
}
|
1306.5226
|
2085482877
|
Consider @math points in @math and @math local coordinate systems that are related through unknown rigid transforms. For each point we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the @math points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, though non-convex, has a well known closed-form solution when @math (based on the singular value decomposition). However, no closed form solution is known for @math . In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the semidefinite program (i.e., we are able to solve the original non-convex least-squares problem) up to a certain noise threshold, and (b) the semidefinite program performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
|
Another closely related work is the paper on global registration @cite_3 , where the optimal transforms (rotations, to be specific) are computed by extending the objective in to the multipatch case. The resulting mathematical formulation bears a strong resemblance to ours and, in fact, leads to a subproblem similar to . @cite_3 propose the use of manifold optimization to solve , where the manifold is the product manifold of rotations. However, as mentioned earlier, manifold methods generally do not offer guarantees on convergence (to the global minimum) and stability. Moreover, the manifold in is not connected. Therefore, a local method will fail to attain the global optimum of if the initial guess is on the wrong component of the manifold.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2109323956"
],
"abstract": [
"We propose a novel algorithm to register multiple 3D point sets within a common reference frame using a manifold optimization approach. The point sets are obtained with multiple laser scanners or a mobile scanner. Unlike most prior algorithms, our approach performs an explicit optimization on the manifold of rotations, allowing us to formulate the registration problem as an unconstrained minimization on a constrained manifold. This approach exploits the Lie group structure of SO 3 and the simple representation of its associated Lie algebra so 3 in terms of R3. Our contributions are threefold. We present a new analytic method based on singular value decompositions that yields a closed-form solution for simultaneous multiview registration in the noise-free scenario. Secondly, we use this method to derive a good initial estimate of a solution in the noise-free case. This initialization step may be of use in any general iterative scheme. Finally, we present an iterative scheme based on Newton's method on SO 3 that has locally quadratic convergence. We demonstrate the efficacy of our scheme on scan data taken both from the Digital Michelangelo project and from scans extracted from models, and compare it to some of the other well known schemes for multiview registration. In all cases, our algorithm converges much faster than the other approaches, (in some cases orders of magnitude faster), and generates consistently higher quality registrations."
]
}
|
1306.5226
|
2085482877
|
Consider @math points in @math and @math local coordinate systems that are related through unknown rigid transforms. For each point we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the @math points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, though non-convex, has a well known closed-form solution when @math (based on the singular value decomposition). However, no closed form solution is known for @math . In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the semidefinite program (i.e., we are able to solve the original non-convex least-squares problem) up to a certain noise threshold, and (b) the semidefinite program performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
|
It is exactly at this point that we depart from @cite_3 , namely, we propose to relax into a tractable semidefinite program (SDP). This was motivated by a long line of work on the use of SDP relaxations for non-convex (particularly NP-hard) problems. See, for example, @cite_33 @cite_6 @cite_7 @cite_51 @cite_62 @cite_18 , and these reviews @cite_11 @cite_8 @cite_70 . Note that for @math , is a quadratic Boolean optimization, similar to the MAX-CUT problem. An SDP-based algorithm with randomized rounding for solving MAX-CUT was proposed in the seminal work of Goemans and Williamson @cite_6 . The semidefinite relaxation that we consider in Section is motivated by this work. In connection with the present work, we note that provably stable SDP algorithms have been considered for low rank matrix completion @cite_62 , phase retrieval @cite_19 @cite_26 , and graph localization @cite_23 .
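As a rough illustration of the SDP-relaxation idea invoked here, the hedged Python sketch below sets up the Goemans-Williamson relaxation of MAX-CUT @cite_6 (a PSD matrix with unit diagonal standing in for the rank-one matrix of signs) and recovers a cut by random-hyperplane rounding. It is illustrative only: it is not the SDP proposed in this paper, it assumes the numpy and cvxpy packages are available, and the weight matrix W is made up for the example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical example: SDP relaxation of MAX-CUT with randomized rounding,
# in the spirit of Goemans-Williamson (illustrative only, not this paper's SDP).
rng = np.random.default_rng(1)
n = 8
W = rng.integers(0, 2, size=(n, n)).astype(float)
W = np.triu(W, 1) + np.triu(W, 1).T        # symmetric 0/1 edge weights, zero diagonal

X = cp.Variable((n, n), PSD=True)          # relaxes the rank-one matrix x x^T with x in {-1,+1}^n
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
prob = cp.Problem(objective, [cp.diag(X) == 1])
prob.solve()                                # cvxpy picks an SDP-capable solver (e.g. SCS)

# Random-hyperplane rounding: factor X = V V^T and take signs of a random projection.
vals, vecs = np.linalg.eigh(X.value)
V = vecs * np.sqrt(np.clip(vals, 0, None))  # rows of V are the relaxed vectors
cut = np.sign(V @ rng.standard_normal(n))
cut[cut == 0] = 1.0
print("SDP bound:", prob.value, " rounded cut value:", 0.25 * np.sum(W * (1 - np.outer(cut, cut))))
```

The rank constraint is what the relaxation drops; when the optimal matrix happens to be rank one, the relaxation is tight and the rounding recovers an exact solution, which is the relax-then-round pattern alluded to in the text.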
|
{
"cite_N": [
"@cite_18",
"@cite_62",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_70",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_51",
"@cite_11"
],
"mid": [
"2050058873",
"2611328865",
"",
"2106264302",
"2023710022",
"",
"",
"2109323956",
"1985123706",
"2078397124",
"2142153450",
"2082401067",
""
],
"abstract": [
"Consider a data set of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called reaper, that can reliably fit a low-dimensional model to this type of data. This approach parameterizes linear subspaces using orthogonal projectors and uses a relaxation of the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the reaper problem, and it documents numerical experiments that confirm that reaper can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, there is a rigorous theory that describes when reaper can approximate this subspace.",
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys @math for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.",
"",
"It has been recognized recently that to represent a polyhedron as the projection of a higher-dimensional, but simpler, polyhedron, is a powerful tool in polyhedral combinatorics. A general method is developed to construct higher-dimensional polyhedra (or, in some cases, convex sets) whose projection approximates the convex hull of 0–1 valued solutions of a system of linear inequalities. An important feature of these approximations is that one can optimize any linear objective function over them in polynomial time.In the special case of the vertex packing polytope, a sequence of systems of inequalities is obtained such that the first system already includes clique, odd hole, odd antihole, wheel, and orthogonality constraints. In particular, for perfect (and many other) graphs, this first system gives the vertex packing polytope. For various classes of graphs, including t-perfect graphs, it follows that the stable set polytope is the projection of a polytope with a polynomial number of facets.An extension o...",
"Semidefinite programming (SDP) is currently one of the most active areas of research in optimization. SDP has attracted researchers from a wide variety of areas because of its theoretical and numerical elegance as well as its wide applicability. In this paper we present a survey of two major areas of application for SDP, namely discrete optimization and matrix completion problems.In the first part of this paper we present a recipe for finding SDP relaxations based on adding redundant constraints and using Lagrangian relaxation. We illustrate this with several examples. We first show that many relaxations for the max-cut problem (MC) are equivalent to both the Lagrangian and the well-known SDP relaxation. We then apply the recipe to obtain new strengthened SDP relaxations for MC as well as known SDP relaxations for several other hard discrete optimization problems.In the second part of this paper we discuss two completion problems, the positive semidefinite matrix completion problem and the Euclidean distance matrix completion problem. We present some theoretical results on the existence of such completions and then proceed to the application of SDP to find approximate completions. We conclude this paper with a new application of SDP to find approximate matrix completions for large and sparse instances of Euclidean distance matrices.",
"",
"",
"We propose a novel algorithm to register multiple 3D point sets within a common reference frame using a manifold optimization approach. The point sets are obtained with multiple laser scanners or a mobile scanner. Unlike most prior algorithms, our approach performs an explicit optimization on the manifold of rotations, allowing us to formulate the registration problem as an unconstrained minimization on a constrained manifold. This approach exploits the Lie group structure of SO 3 and the simple representation of its associated Lie algebra so 3 in terms of R3. Our contributions are threefold. We present a new analytic method based on singular value decompositions that yields a closed-form solution for simultaneous multiview registration in the noise-free scenario. Secondly, we use this method to derive a good initial estimate of a solution in the noise-free case. This initialization step may be of use in any general iterative scheme. Finally, we present an iterative scheme based on Newton's method on SO 3 that has locally quadratic convergence. We demonstrate the efficacy of our scheme on scan data taken both from the Digital Michelangelo project and from scans extracted from models, and compare it to some of the other well known schemes for multiview registration. In all cases, our algorithm converges much faster than the other approaches, (in some cases orders of magnitude faster), and generates consistently higher quality registrations.",
"We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least.87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1 2 for MAX CUT and 3 4 or MAX 2SAT. Slight extensions of our analysis lead to a.79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a.758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1 4 and 3 4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.",
"Suppose we wish to recover a signal amssym @math from m intensity measurements of the form , ; that is, from data in which phase information is missing. We prove that if the vectors are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program–-a trace-norm minimization problem; this holds with large probability provided that m is on the order of , and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-a-vis additive noise. © 2012 Wiley Periodicals, Inc.",
"We consider the problem of positioning a cloud of points in the Euclidean space ℝ d , using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. It is also closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: in the noiseless case, we find a radius r 0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.",
"Let B i be deterministic real symmetric m × m matrices, and ξ i be independent random scalars with zero mean and “of order of one” (e.g., @math ). We are interested to know under what conditions “typical norm” of the random matrix @math is of order of 1. An evident necessary condition is @math , which, essentially, translates to @math ; a natural conjecture is that the latter condition is sufficient as well. In the paper, we prove a relaxed version of this conjecture, specifically, that under the above condition the typical norm of S N is @math : @math for all Ω > 0 We outline some applications of this result, primarily in investigating the quality of semidefinite relaxations of a general quadratic optimization problem with orthogonality constraints @math , where F is quadratic in X = (X 1,... ,X k ). We show that when F is convex in every one of X j , a natural semidefinite relaxation of the problem is tight within a factor slowly growing with the size m of the matrices @math .",
""
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Computing the correspondence between deformed shapes has received considerable attention in recent years, and the surveys of van @cite_23 and @cite_18 give a comprehensive overview of existing methods. The review in this paper focuses on techniques that do not employ a priori skeletal models or manually placed marker positions, as we aim to minimize assumptions about the structure of the surface. None of the following approaches combines physics-based models with tracking.
|
{
"cite_N": [
"@cite_18",
"@cite_23"
],
"mid": [
"1965805571",
"2173758409"
],
"abstract": [
"Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) To give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting in which it comprises three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparison of the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for increasing interest in intrinsic techniques. We further summarize some practical issues of registration which include initializations and evaluations, and discuss some of our own observations, insights and foreseeable research trends.",
"We review methods designed to compute correspondences between geometric shapes represented by triangle meshes, contours or point sets. This survey is motivated in part by recent developments in space–time registration, where one seeks a correspondence between non-rigid and time-varying surfaces, and semantic shape analysis, which underlines a recent trend to incorporate shape understanding into the analysis pipeline. Establishing a meaningful correspondence between shapes is often difficult because it generally requires an understanding of the structure of the shapes at both the local and global levels, and sometimes the functionality of the shape parts as well. Despite its inherent complexity, shape correspondence is a recurrent problem and an essential component of numerous geometry processing applications. In this survey, we discuss the different forms of the correspondence problem and review the main solution methods, aided by several classification criteria arising from the problem definition. The main categories of classification are defined in terms of the input and output representation, objective function and solution approach. We conclude the survey by discussing open problems and future perspectives."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
The following techniques solve the tracking problem using a template as a shape prior. De @cite_35 tracked multi-view stereo data of a human subject acquired using a set of cameras. The algorithm uses a template of the human subject that was acquired in a posture similar to that of the first frame. The tracking approach first uses Laplacian deformations of a volumetric mesh to find the rough shape deformation and then refines the shape using a surface deformation. The deformation makes use of automatically computed features that are found based on color information. @cite_30 developed a similar system to track multi-view stereo data of human subjects. Tung and Matsuyama @cite_24 extended the approach of @cite_35 by using 3D shape rather than color information to find the features.
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_24"
],
"mid": [
"2122578066",
"2109752307",
"1993167366"
],
"abstract": [
"Details in mesh animations are difficult to generate but they have great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints and an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. Captured meshes are in full correspondence making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning.",
"This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. Furthermore, since our algorithm is purely meshbased and makes as few as possible prior assumptions about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser-scan of the tracked subject mimic the recorded performance. Also small-scale time-varying shape detail is recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.",
"This paper presents a novel approach that achieves complete matching of 3D dynamic surfaces. Surfaces are captured from multi-view video data and represented by sequences of 3D manifold meshes in motion (3D videos). We propose to perform dense surface matching between 3D video frames using geodesic diffeomorphisms. Our algorithm uses a coarse-to-fine strategy to derive a robust correspondence map, then a probabilistic formulation is coupled with a voting scheme in order to obtain local unicity of matching candidates and a smooth mapping. The significant advantage of the proposed technique compared to existing approaches is that it does not rely on a color-based feature extraction process. Hence, our method does not lose accuracy in poorly textured regions and is not bounded to be used on video sequences of a unique subject. Therefore our complete surface mapping can be applied to: (1) texture transfer between surface models extracted from different sequences, (2) dense motion flow estimation in 3D video, and (3) motion transfer from a 3D video to an unanimated 3D model. Experiments are performed on challenging publicly available real-world datasets and show compelling results."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
@cite_19 @cite_21 proposed a generic data-driven technique for mesh tracking. A template is used to model the rough geometry of the deformed object, and the algorithm deforms this template to each observed frame. A deformation graph is used to derive a coarse-to-fine strategy that decouples the complexity of the original mesh geometry from the representation of the deformation. @cite_34 proposed an alternative where the template is decomposed into a set of patches to which vertices are attached. The template is then deformed to each observed frame using a data term that encourages inter-patch rigidity. @cite_37 extended this technique to allow for outliers by using a probabilistic framework.
|
{
"cite_N": [
"@cite_19",
"@cite_37",
"@cite_21",
"@cite_34"
],
"mid": [
"2098466221",
"1489824013",
"",
"2040322039"
],
"abstract": [
"We present a registration algorithm for pairs of deforming and partial range scans that addresses the challenges of non-rigid registration within a single non-linear optimization. Our algorithm simultaneously solves for correspondences between points on source and target scans, confidence weights that measure the reliability of each correspondence and identify non-overlapping areas, and a warping field that brings the source scan into alignment with the target geometry. The optimization maximizes the region of overlap and the spatial coherence of the deformation while minimizing registration error. All optimization parameters are chosen automatically; hand-tuning is not necessary. Our method is not restricted to part-in-whole matching, but addresses the general problem of partial matching, and requires no explicit prior correspondences or feature points. We evaluate the performance and robustness of our method using scan data acquired by a structured light scanner and compare our method with existing non-rigid registration algorithms.",
"In this paper, we address the problem of tracking the temporal evolution of arbitrary shapes observed in multi-camera setups. This is motivated by the ever growing number of applications that require consistent shape information along temporal sequences. The approach we propose considers a temporal sequence of independently reconstructed surfaces and iteratively deforms a reference mesh to fit these observations. To effectively cope with outlying and missing geometry, we introduce a novel probabilistic mesh deformation framework. Using generic local rigidity priors and accounting for the uncertainty in the data acquisition process, this framework effectively handles missing data, relatively large reconstruction artefacts and multiple objects. Extensive experiments demonstrate the effectiveness and robustness of the method on various 4D datasets.",
"",
"In this paper, we consider the problem of tracking nonrigid surfaces and propose a generic data-driven mesh deformation framework. In contrast to methods using strong prior models, this framework assumes little on the observed surface and hence easily generalizes to most free-form surfaces while effectively handling large deformations. To this aim, the reference surface is divided into elementary surface cells or patches. This strategy ensures robustness by providing natural integration domains over the surface for noisy data, while enabling to express simple patch-level rigidity constraints. In addition, we associate to this scheme a robust numerical optimization that solves for physically plausible surface deformations given arbitrary constraints. In order to demonstrate the versatility of the proposed framework, we conducted experiments on open and closed surfaces, with possibly non-connected components, that undergo large deformations and fast motions. We also performed quantitative and qualitative evaluations in multicameras and monocular environments, and with different types of data including 2D correspondences and 3D point clouds."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
The following techniques solve the tracking problem without using a shape prior. However, the methods assume prior information on the deformation of the object. @cite_4 modeled surface tracking as the problem of finding a smooth space-time surface in four-dimensional space. To achieve this, they exploited the temporal coherence in data that is densely sampled in both time and space. @cite_20 used a similar concept to find a volumetric space-time solid. Their approach assumes that each cell of the discretized four-dimensional space contains the amount of material that flowed into it.
|
{
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2105465549",
"2041855012"
],
"abstract": [
"We propose an algorithm that performs registration of large sets of unstructured point clouds of moving and deforming objects without computing correspondences. Given as input a set of frames with dense spatial and temporal sampling, such as the raw output of a fast scanner, our algorithm exploits the underlying temporal coherence in the data to directly compute the motion of the scanned object and bring all frames into a common coordinate system. In contrast with existing methods which usually perform pairwise alignments between consecutive frames, our algorithm computes a globally consistent motion spanning multiple frames. We add a time coordinate to all the input points based on the ordering of the respective frames and pose the problem of computing the motion of each frame as an estimation of certain kinematic properties of the resulting space-time surface. By performing this estimation for each frame as a whole we are able to compute rigid inter-frame motions, and by adapting our method to perform a local analysis of the space-time surface, we extend the basic algorithm to handle registration of deformable objects as well. We demonstrate the performance of our algorithm on a number of synthetic and scanned examples, each consisting of hundreds of scans.",
"We introduce a volumetric space-time technique for the reconstruction of moving and deforming objects from point data. The output of our method is a four-dimensional space-time solid, made up of spatial slices, each of which is a three-dimensional solid bounded by a watertight manifold. The motion of the object is described as an incompressible flow of material through time. We optimize the flow so that the distance material moves from one time frame to the next is bounded, the density of material remains constant, and the object remains compact. This formulation overcomes deficiencies in the acquired data, such as persistent occlusions, errors, and missing frames. We demonstrate the performance of our flow-based technique by reconstructing coherent sequences of watertight models from incomplete scanner data."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
@cite_7 used a probabilistic model based on Bayesian statistics to track a deformable surface. The surface is modeled as a graph of oriented particles that move in time. The position and orientation of the particles are controlled by statistical potentials that trade off data fitting and surface smoothness. @cite_32 extended this approach by first tracking a few stable landmarks and subsequently computing a dense matching.
|
{
"cite_N": [
"@cite_32",
"@cite_7"
],
"mid": [
"2016663152",
"1944319588"
],
"abstract": [
"In this article, we consider the problem of animation reconstruction, that is, the reconstruction of shape and motion of a deformable object from dynamic 3D scanner data, without using user-provided template models. Unlike previous work that addressed this problem, we do not rely on locally convergent optimization but present a system that can handle fast motion, temporally disrupted input, and can correctly match objects that disappear for extended time periods in acquisition holes due to occlusion. Our approach is motivated by cartography: We first estimate a few landmark correspondences, which are extended to a dense matching and then used to reconstruct geometry and motion. We propose a number of algorithmic building blocks: a scheme for tracking landmarks in temporally coherent and incoherent data, an algorithm for robust estimation of dense correspondences under topological noise, and the integration of local matching techniques to refine the result. We describe and evaluate the individual components and propose a complete animation reconstruction pipeline based on these ideas. We evaluate our method on a number of standard benchmark datasets and show that we can obtain correct reconstructions in situations where other techniques fail completely or require additional user guidance such as a template model.",
"In this paper, we describe a system for the reconstruction of deforming geometry from a time sequence of unstructured, noisy point clouds, as produced by recent real-time range scanning devices. Our technique reconstructs both the geometry and dense correspondences over time. Using the correspondences, holes due to occlusion are filled in from other frames. Our reconstruction technique is based on a statistical framework: The reconstruction should both match the measured data points and maximize prior probability densities that prefer smoothness, rigid deformation and smooth movements over time. The optimization procedure consists of an inner loop that optimizes the 4D shape using continuous numerical optimization and an outer loop that infers the discrete 4D topology of the data set using an iterative model assembly algorithm. We apply the technique to a variety of data sets, demonstrating that the new approach is capable of robustly retrieving animated models with correspondences from data sets suffering from significant noise, outliers and acquisition holes."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Furukawa and Ponce @cite_16 proposed a technique to track data from a multi-camera setup. Instead of using a template of the shape as prior information, their technique computes the polyhedral mesh that captures the first frame and deforms this mesh to the data in subsequent frames.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2121227088"
],
"abstract": [
"This paper proposes a novel approach to non-rigid, markerless motion capture from synchronized video streams acquired by calibrated cameras. The instantaneous geometry of the observed scene is represented by a polyhedral mesh with fixed topology. The initial mesh is constructed in the first frame using the publicly available PMVS software for multi-view stereo [7]. Its deformation is captured by tracking its vertices over time, using two optimization processes at each frame: a local one using a rigid motion model in the neighborhood of each vertex, and a global one using a regularized nonrigid model for the whole mesh. Qualitative and quantitative experiments using seven real datasets show that our algorithm effectively handles complex nonrigid motions and severe occlusions."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
@cite_33 took images acquired with a single depth camera from different viewpoints while the object deformed, and assembled them into a complete deformable model over time. @cite_43 used a similar approach that is tailored to allow for topological consistency across the motion.
|
{
"cite_N": [
"@cite_43",
"@cite_33"
],
"mid": [
"1988179100",
"2105906154"
],
"abstract": [
"Most objects deform gradually over time, without abrupt changes in geometry or topology, such as changes in genus. Correct space-time reconstruction of such objects should satisfy this gradual change prior. This requirement necessitates a globally consistent interpretation of spatial adjacency. Consider the capture of a surface that comes in contact with itself during the deformation process, such as a hand with different fingers touching one another in parts of the sequence. Naive reconstruction would glue the contact regions together for the duration of each contact and keep them apart in other parts of the sequence. However such reconstruction violates the gradual change prior as it enforces a drastic intrinsic change in the object's geometry at the transition between the glued and unglued sub-sequences. Instead consistent global reconstruction should keep the surfaces separate throughout the entire sequence. We introduce a new method for globally consistent space-time geometry and motion reconstruction from video capture. We use the gradual change prior to resolve inconsistencies and faithfully reconstruct the geometry and motion of the scanned objects. In contrast to most previous methods our algorithm doesn't require a strong shape prior such as a template and provides better results than other template-free approaches.",
"We propose a novel approach to reconstruct complete 3D deformable models over time by a single depth camera, provided that most parts of the models are observed by the camera at least once. The core of this algorithm is based on the assumption that the deformation is continuous and predictable in a short temporal interval. While the camera can only capture part of a whole surface at any time instant, partial surfaces reconstructed from different times are assembled together to form a complete 3D surface for each time instant, even when the shape is under severe deformation. A mesh warping algorithm based on linear mesh deformation is used to align different partial surfaces. A volumetric method is then used to combine partial surfaces, fix missing holes, and smooth alignment errors. Our experiment shows that this approach is able to reconstruct visually plausible 3D surface deformation results with a single camera."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
@cite_28 avoid the use of a template model by initializing the tracking procedure with the visual hull of the object. @cite_42 track a deformable model using a skeleton-based approach, where the skeleton is computed from the data.
|
{
"cite_N": [
"@cite_28",
"@cite_42"
],
"mid": [
"2081927584",
"2146700972"
],
"abstract": [
"We present a novel shape completion technique for creating temporally coherent watertight surfaces from real-time captured dynamic performances. Because of occlusions and low surface albedo, scanned mesh sequences typically exhibit large holes that persist over extended periods of time. Most conventional dynamic shape reconstruction techniques rely on template models or assume slow deformations in the input data. Our framework sidesteps these requirements and directly initializes shape completion with topology derived from the visual hull. To seal the holes with patches that are consistent with the subject's motion, we first minimize surface bending energies in each frame to ensure smooth transitions across hole boundaries. Temporally coherent dynamics of surface patches are obtained by unwarping all frames within a time window using accurate interframe correspondences. Aggregated surface samples are then filtered with a temporal visibility kernel that maximizes the use of nonoccluded surfaces. A key benefit of our shape completion strategy is that it does not rely on long-range correspondences or a template model. Consequently, our method does not suffer error accumulation typically introduced by noise, large deformations, and drastic topological changes. We illustrate the effectiveness of our method on several high-resolution scans of human performances captured with a state-of-the-art multiview 3D acquisition system.",
"We introduce the notion of consensus skeletons for non-rigid space-time registration of a deforming shape. Instead of basing the registration on point features, which are local and sensitive to noise, we adopt the curve skeleton of the shape as a global and descriptive feature for the task. Our method uses no template and only assumes that the skelet al structure of the captured shape remains largely consistent over time. Such an assumption is generally weaker than those relying on large overlap of point features between successive frames, allowing for more sparse acquisition across time. Building our registration framework on top of the low-dimensional skeletontime structure avoids heavy processing of dense point or volumetric data, while skeleton consensusization provides robust handling of incompatibilities between per-frame skeletons. To register point clouds from all frames, we deform them by their skeletons, mirroring the skeleton registration process, to jump-start a non-rigid ICP. We present results for non-rigid space-time registration under sparse and noisy spatio-temporal sampling, including cases where data was captured from only a single view."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Several authors suggested learning the parameters of linear finite element models from a set of observations. We combine such a method with tracking to obtain an accurate tracking result for both the observed and the unobserved sides of the model. For a summary of linear finite element methods for elastic deformations, refer to Bro-Nielsen @cite_9 .
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2129494071"
],
"abstract": [
"Modeling the deformation of human organs for surgery simulation systems has turned out to be quite a challenge. Not only is very little known about the physical properties of general human tissue but in addition, most conventional modeling techniques are not applicable because of the timing requirements of simulation systems. To produce a video-like visualization of a deforming organ, the deformation must be determined at rates of 10-20 times s. In the fields of elasticity and related modeling paradigms, the main interest has been the development of accurate mathematical models. The speed of these models has been a secondary interest. But for surgery simulation systems, the priorities are reversed. The main interest is the speed and robustness of the models, and accuracy is of less concern. Recent years have seen the development of different practical modeling techniques that take into account the reversed priorities and can be used in practice for real-time modeling of deformable organs. The paper discusses some of these new techniques in the reference frame of finite element models. In particular, it builds on the recent work by the author on fast finite element models and discusses the advantages and disadvantages of these models in comparison to previous models."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Lang and Pai @cite_41 used a surface displacement and contact force at one point, along with range-flow data, to estimate elastic constants of homogeneous isotropic materials using numerical optimization. Becker and Teschner @cite_17 presented an approach to estimate the elasticity parameters for isotropic materials using a linear finite element method. The approach takes as input a set of displacement and force measurements and uses them to compute Young's modulus and Poisson's ratio (see Bro-Nielsen @cite_9 ) with the help of an efficient quadratic programming technique. @cite_1 presented a similar approach to estimate both the elasticity and viscosity parameters for viscoelastic materials using a linear finite element method. This approach reduces the problem to solving a linear system of equations and has been shown to run in real time.
|
{
"cite_N": [
"@cite_41",
"@cite_9",
"@cite_1",
"@cite_17"
],
"mid": [
"",
"2129494071",
"2006303959",
"852449"
],
"abstract": [
"",
"Modeling the deformation of human organs for surgery simulation systems has turned out to be quite a challenge. Not only is very little known about the physical properties of general human tissue but in addition, most conventional modeling techniques are not applicable because of the timing requirements of simulation systems. To produce a video-like visualization of a deforming organ, the deformation must be determined at rates of 10-20 times s. In the fields of elasticity and related modeling paradigms, the main interest has been the development of accurate mathematical models. The speed of these models has been a secondary interest. But for surgery simulation systems, the priorities are reversed. The main interest is the speed and robustness of the models, and accuracy is of less concern. Recent years have seen the development of different practical modeling techniques that take into account the reversed priorities and can be used in practice for real-time modeling of deformable organs. The paper discusses some of these new techniques in the reference frame of finite element models. In particular, it builds on the recent work by the author on fast finite element models and discusses the advantages and disadvantages of these models in comparison to previous models.",
"The linear dynamic finite element model can be formulated such that the elasticity and viscosity of the elements appear as the parameters in a linear system of equations. The resulting system of equations can be solved directly using singular value decomposition or a similar technique or through defining a quadratic functional. A priori knowledge and regularity measures can be added as equality or inequality constraints. The sensitivity of the inverse problem solution to the displacement noise and model imperfections are tested in simulations, where the parameters were successfully reconstructed with a displacement signal-to-noise ratio as low as 20 dB. Also, the viscoelastic parameters have been successfully estimated for a breast phantom with an embedded hard inclusion. The study of the computation speed demonstrates the potential of the new method for real-time implementations.",
"Realistic elasticity parameters are important for the accurate simulation of deformable objects, e. g. in medical simulations. In this paper, we present an approach for estimating elasticity parameters for isotropic elastic materials using the linear Finite Element Method. Employing the initial undeformed geometry and a measured forcedeformation relation, the method computes the elasticity parameters based on Quadratic Programming. The structure of the stiffness matrix is employed to accelerate the estimation process. Experiments suggest, that the parameter estimation approach can be used for noisy data."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Syllebranque and Boivin @cite_14 estimated the parameters of a quasi-static finite element simulation of a deformable solid object from a video sequence. The problems of optimizing the Young's modulus and the Poisson ratio were solved sequentially. @cite_38 used finite element models to validate the non-rigid image registration of magnetic resonance images. Nguyen and Boyce @cite_31 presented an approach to estimate the anisotropic material properties of the cornea.
|
{
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_14"
],
"mid": [
"2127374940",
"2091593985",
"2038837307"
],
"abstract": [
"Presents a novel method for validation of nonrigid medical image registration. This method is based on the simulation of physically plausible, biomechanical tissue deformations using finite-element methods. Applying a range of displacements to finite-element models of different patient anatomies generates model solutions which simulate gold standard deformations. From these solutions, deformed images are generated with a range of deformations typical of those likely to occur in vivo. The registration accuracy with respect to the finite-element simulations is quantified by co-registering the deformed images with the original images and comparing the recovered voxel displacements with the biomechanically simulated ones. The functionality of the validation method is demonstrated for a previously described nonrigid image registration technique based on free-form deformations using B-splines and normalized mutual information as a voxel similarity measure, with an application to contrast-enhanced magnetic resonance mammography image pairs. The exemplar nonrigid registration technique is shown to be of subvoxel accuracy on average for this particular application. The validation method presented here is an important step toward more generic simulations of biomechanically plausible tissue deformations and quantification of tissue motion recovery using nonrigid image registration. It will provide a basis for improving and comparing different nonrigid registration techniques for a diversity of medical applications, such as intrasubject tissue deformation or motion correction in the brain, liver or heart.",
"An inverse finite element method was developed to determine the anisotropic properties of bovine cornea from an in vitro inflation experiment. The experiment used digital image correlation (DIC) to measure the three-dimensional surface geometry and displacement field of the cornea at multiple pressures. A finite element model of a bovine cornea was developed using the DIC measured surface geometry of the undeformed specimen. The model was applied to determine five parameters of an anisotropic hyperelastic model that minimized the error between the measured and computed surface displacement field and to investigate the sensitivity of the measured bovine inflation response to variations in the anisotropic properties of the cornea. The results of the parameter optimization revealed that the collagen structure of bovine cornea exhibited a high degree of anisotropy in the limbus region, which agreed with recent histological findings, and a transversely isotropic central region. The parameter study showed that the bovine corneal response to the inflation experiment was sensitive to the shear modulus of the matrix at pressures below the intraocular pressure, the properties of the collagen lamella at higher pressures, and the degree of anisotropy in the limbus region. It was not sensitive to a weak collagen anisotropy in the central region.",
"In this paper, we present a new method to estimate the mechanical parameters of soft bodies directly from videos of solids getting deformed under external user action. Our method requires one standard camera, a deformable solid made of homogeneous material, and a regular light source. We make estimations using an inverse method based on a quasi-static FEM simulation and a visual error metric. The result is a set of two parameters, the Young modulus and the Poisson ratio, that can be used for more complex simulations, or force feedback applications like virtual surgery, for example. We also present a new device for capturing the external forces applied on the deformable solids."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
@cite_22 proposed a physics-based approach to predict shape deformations. First, a set of deformations is measured by recording both the force applied to an object and the resulting shape of the object. This information is used to learn a relationship between the applied force and the shape deformation, which makes it possible to predict the shape deformation for new force inputs. The technique assumes that the object material has linear elastic properties. @cite_15 extended this approach to allow the modeling and fabrication of materials with a desired deformation behavior using stacked layers of homogeneous materials.
|
{
"cite_N": [
"@cite_15",
"@cite_22"
],
"mid": [
"2082254676",
"2168685057"
],
"abstract": [
"This paper introduces a data-driven process for designing and fabricating materials with desired deformation behavior. Our process starts with measuring deformation properties of base materials. For each base material we acquire a set of example deformations, and we represent the material as a non-linear stress-strain relationship in a finite-element model. We have validated our material measurement process by comparing simulations of arbitrary stacks of base materials with measured deformations of fabricated material stacks. After material measurement, our process continues with designing stacked layers of base materials. We introduce an optimization process that finds the best combination of stacked layers that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multi-material 3D printers.",
"This paper introduces a data-driven representation and modeling technique for simulating non-linear heterogeneous soft tissue. It simplifies the construction of convincing deformable models by avoiding complex selection and tuning of physical material parameters, yet retaining the richness of non-linear heterogeneous behavior. We acquire a set of example deformations of a real object, and represent each of them as a spatially varying stress-strain relationship in a finite-element model. We then model the material by non-linear interpolation of these stress-strain relationships in strain-space. Our method relies on a simple-to-build capture system and an efficient run-time simulation algorithm based on incremental loading, making it suitable for interactive computer graphics applications. We present the results of our approach for several non-linear materials and biological soft tissue, with accurate agreement of our model to the measured data."
]
}
|
1306.4478
|
2036688247
|
We present a method to robustly track the geometry of an object that deforms over time. We fit a FEM-based model to the data leading to physically plausible results. We evaluate the performance of our method using synthetic and scanned data. We present an approach to robustly track the geometry of an object that deforms over time from a set of input point clouds captured from a single viewpoint. The deformations we consider are caused by applying forces to known locations on the object's surface. Our method combines the use of prior information on the geometry of the object modeled by a smooth template and the use of a linear finite element method to predict the deformation. This allows the accurate reconstruction of both the observed and the unobserved sides of the object. We present tracking results for noisy low-quality point clouds acquired by either a stereo camera or a depth camera, and simulations with point clouds corrupted by different error terms. We show that our method is also applicable to large non-linear deformations.
|
Recently, Choi and Szymczak @cite_40 used FEM to predict a consistent set of deformations from a sequence of coarse watertight meshes, but their method does not apply to single-view tracking, which is the focus of our method.
|
{
"cite_N": [
"@cite_40"
],
"mid": [
"2091843001"
],
"abstract": [
"Computing correspondence between time frames of a time-dependent 3D surface is essential for the understanding of its motion and deformation. In particular, it can be a useful tool in compression, editing, texturing, or analysis of the physical or structural properties of deforming objects. However, correspondence information is not trivial to obtain for experimentally acquired 3D animations, such as time-dependent visual hulls (typically represented as either a binary occupancy grid or as a sequence of meshes of varying connectivity). In this article we present a new nonrigid fitting method that can compute such correspondence information for objects that do not undergo large volume or topological changes, such as living creatures. Experimental results show that it is robust enough to handle visual hull data, allowing to convert it into a constant connectivity mesh with vertices moving in time. Our procedure first creates a rest-state mesh from one of the input frames. This rest-state mesh is then fitted to the consecutive frames. We do this by iteratively displacing its vertices so that a combination of surface distance and elastic potential energy is minimized. A novel rotation compensation method enables us to obtain high-quality results with linear elasticity, even in presence of significant bending."
]
}
|
1306.3882
|
1538041170
|
Testing of synchronous reactive systems is challenging because long input sequences are often needed to drive them into a state at which a desired feature can be tested. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the time required for resetting is high. This paper presents an approach to discovering a test case chain---a single software execution that covers a group of test goals and minimises overall test execution time. Our technique targets the scenario in which test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain and minimise the number of test chains if a single test chain is infeasible. We report experimental results with a prototype tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators.
|
There are many approaches to reactive system testing: while random testing @cite_27 is still commonly used, approaches have been developed that combine random testing with techniques that guide exhaustive path enumeration @cite_12 @cite_3 @cite_22 . Other approaches employ test specifications to guide test case generation towards a particular functionality @cite_18 @cite_33 @cite_31 . These methods restrict the input space using static analysis and apply (non-uniform) random test case generation. Model-based testing (see @cite_38 @cite_13 for surveys on this topic) considers specification models based on labelled transition systems. For instance, extended finite state machines (EFSM) @cite_15 @cite_11 @cite_19 are commonly used in communication protocol testing to provide exhaustive test case generation for conformance testing @cite_6 @cite_4 . Available tools include @cite_6 and @cite_4 .
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_33",
"@cite_15",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2083965123",
"2025914262",
"",
"1710734607",
"2012163960",
"2103095043",
"",
"1522465028",
"",
"",
"2086082110",
"",
"2096449544",
"2164283480"
],
"abstract": [
"Model-based testing is focused on testing techniques which rely on the use of models. The diversity of systems and software to be tested implies the need for research on a variety of models and methods for test automation. We briefly review this research area and introduce several papers selected from the 22nd International Conference on Testing Software and Systems (ICTSS).",
"Several studies have shown that automated testing is a promising approach to save significant amounts of time and money in the industry of reactive software. But automated testing requires a formal framework and adequate means to generate test data. In the context of synchronous reactive software, we have built such a framework and its associated tool-Lutess-to integrate various well-founded testing techniques. This tool automatically constructs test harnesses for fully automated test data generation and verdict return. The generation conforms to different formal descriptions: software environment constraints, functional and safety-oriented properties to be satisfied by the software, software operational profiles and software behavior patterns. These descriptions are expressed in an extended executable temporal logic. They correspond to more and more complex test objectives raised by the first pre-industrial applications of Lutess. This paper concentrates on the latest development of the tool and its use in the validation of standard feature specifications in telephone systems. The four testing techniques which are coordinated in Lutess uniform framework are shown to be well-suited to efficient software testing. The lessons learnt from the use of Lutess in the context of industrial partnerships are discussed.",
"",
"We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90 per tool (median: over 94 ) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100 coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.",
"Lurette is an automated testing tool dedicated to reactive programs. The test process is automated at two levels: given a formal description of the System Under Test (SUT) environment, Lurette generates realistic input sequences; and, given a formal description of expected properties, Lurette performs the test results analysis. Lurette has been re-implemented from scratch. In this new version, the main novelty lies in the way the SUT environment is described. This is done by means of a new language called Lucky, dedicated to the programming of probabilistic reactive systems. This article recalls the principles of Lurette, briefly presents the Lucky language, and describes some case studies from the IST project Safeair II. The objective is to illustrate the usefulness of Lurette on real case studies, and the expressiveness of Lucky in accurately describing SUT environments. We show in particular how Lurette can be used to test a typical fault-tolerant system; we also present case studies conducted with Hispano-Suiza and Renault.",
"A method for automated selection of test sequences from a protocol specification given in Estelle for the purpose of testing both control and data flow aspects of a protocol implementation is discussed. First, a flowgraph modeling the flow of both control and data expressed in the given specification is constructed. In the flowgraph, definitions and uses of each context variable, as well as each input and output interaction parameter employed in the specification, are identified. Based on this information, associations between each output and those inputs that influence the output are established. Test sequences are selected to cover each such association at least once. The resulting test sequences are shown to provide the capability of checking whether a protocol implementation under test establishes the desired flow of both control and data expressed in the protocol specification. The proposed method is illustrated by using the class 0 transport protocol as an example. >",
"",
"This paper presents the TGV tool, which allows for the automatic synthesis of conformance test cases from a formal specification of a (non-deterministic) reactive system. TGV was developed by Irisa Rennes and Verimag Grenoble, with the support of the Vasy team of Inria Rhones-Alpes. The paper describes the main elements of the underlying testing theory, which is based on a model of transitions system which distinguishes inputs, outputs and internal actions, and is based on the concept of conformance relation. The principles of the test synthesis process, as well as the main algorithms, are explained. We then describe the main characteristics of the TGV tool and refer to some industrial experiments that have been conducted to validate the approach. As a conclusion, we describe some ongoing work on test synthesis.",
"",
"",
"This paper presents the language Lutin and its operational semantics. This language specifically targets the domain of reactive systems, where an execution is a (virtually) infinite sequence of input output reactions. More precisely, it is dedicated to the description and the execution of constrained random scenarios. Its first use is for test sequence specification and generation. It can also be useful for early simulation of huge systems, where Lutin programs can be used to describe and simulate modules that are not yet fully developed. Basic statements are input output relations expressing constraints on a single reaction. Those constraints are then combined to describe non deterministic sequences of reactions. The language constructs are inspired by regular expressions and process algebra (sequence, choice, loop, concurrency). Moreover, the set of statements can be enriched with user-defined operators. A notion of stochastic directives is also provided in order to finely influence the selection of a particular class of scenarios.",
"",
"We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.",
"In feature testing of communication protocols, we want to construct a minimal number of tests with a desirable fault coverage. We model the protocols by extended finite state machines and reduce the test generation process to optimization problems on graphs. We study efficient algorithms and their complexity. We report experimental results on real systems, including Personal HandyPhone System, a 5ESS based ISDN wireless system, and 5ESS Intelligent Network Application Protocol."
]
}
|
1306.4363
|
2117151108
|
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo:Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
Work related to the analysis of networks began with the study of static networks. Researchers identified important structural patterns such as heavy-tailed degree distributions @cite_1 , the small-world phenomenon @cite_8 @cite_28 , and communities, as well as probabilistic models and algorithms that produce and detect them @cite_3 @cite_21 .
|
{
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_3"
],
"mid": [
"1573579329",
"2112090702",
"2047940964",
"2008620264",
"2148606196"
],
"abstract": [
"",
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks."
]
}
|
1306.4363
|
2117151108
|
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo:Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
As new online social systems continue to emerge on the web, the static analysis of social networks continues to be an area of great interest. Researchers have a steady stream of new empirical network data with which they can test new and existing theories about social dynamics, whose sources include Twitter @cite_17 , Facebook @cite_12 , Orkut and Flickr @cite_15 .
|
{
"cite_N": [
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"2115022330",
"1893161742",
"2101196063"
],
"abstract": [
"Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.",
"We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91 of individuals belonging to a single large connected component, and we confirm the \"six degrees of separation\" phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which \"your friends have more friends than you\". Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it."
]
}
|
1306.4363
|
2117151108
|
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo:Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
More recently, researchers have begun to study how time influences @cite_14 and changes network structure @cite_32 . New dynamical patterns, such as densification @cite_9 @cite_29 and shrinking @math -cores @cite_6 , as well as probabilistic models, have been identified that shed light on how changes in the underlying processes that produce these networks affect their structure.
|
{
"cite_N": [
"@cite_14",
"@cite_29",
"@cite_9",
"@cite_32",
"@cite_6"
],
"mid": [
"1782565250",
"2122710250",
"2111708605",
"2121761994",
"2123726161"
],
"abstract": [
"The topology of social networks can be understood as being inherently dynamic, with edges having a distinct position in time. Most characterizations of dynamic networks discretize time by converting temporal information into a sequence of network \"snapshots\" for further analysis. Here we study a highly resolved data set of a dynamic proximity network of 66 individuals. We show that the topology of this network evolves over a very broad distribution of time scales, that its behavior is characterized by strong periodicities driven by external calendar cycles, and that the conversion of inherently continuous-time data into a sequence of snapshots can produce highly biased estimates of network structure. We suggest that dynamic social networks exhibit a natural time scale nat , and that the best conversion of such dynamic data to a discrete sequence of networks is done at this natural rate.",
"In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"How do real graphs evolve over time? What are \"normal\" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a \"forest fire\" spreading process, that has a simple, intuitive justification, requires very few parameters (like the \"flammability\" of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.",
"Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two e ponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks.",
"We empirically analyze five online communities: Friendster, Livejournal, Facebook, Orkut, and Myspace, to study how social networks decline. We define social resilience as the ability of a community to withstand changes. We do not argue about the cause of such changes, but concentrate on their impact. Changes may cause users to leave, which may trigger further leaves of others who lost connection to their friends. This may lead to cascades of users leaving. A social network is said to be resilient if the size of such cascades can be limited. To quantify resilience, we use the k-core analysis, to identify subsets of the network in which all users have at least k friends. These connections generate benefits (b) for each user, which have to outweigh the costs (c) of being a member of the network. If this difference is not positive, users leave. After all cascades, the remaining network is the k-core of the original network determined by the cost-to-benefit (c b) ratio. By analysing the cumulative distribution of k-cores we are able to calculate the number of users remaining in each community. This allows us to infer the impact of the c b ratio on the resilience of these online communities. We find that the different online communities have different k-core distributions. Consequently, similar changes in the c b ratio have a different impact on the amount of active users. Further, our resilience analysis shows that the topology of a social network alone cannot explain its success of failure. As a case study, we focus on the evolution of Friendster. We identify time periods when new users entering the network observed an insufficient c b ratio. This measure can be seen as a precursor of the later collapse of the community. Our analysis can be applied to estimate the impact of changes in the user interface, which may temporarily increase the c b ratio, thus posing a threat for the community to shrink, or even to collapse."
]
}
|
1306.4363
|
2117151108
|
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo:Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
Since the underlying processes that produce complex networks are typically not random, a rich body of work related to mathematically modeling various networks whose structure cannot be generated by an Erdős-Rényi random graph model has emerged @cite_4 . One such model is the configuration model, which produces a graph with a predefined degree distribution by randomly assigning edges between vertices according to their degree sequence @cite_24 . Another is the preferential attachment model, a generative model that produces heavy-tailed degree distributions by assigning edges to vertices according to the notion of "the rich get richer". That is, edges are assigned to vertices according to how many they already have. Other approaches that produce specific properties, such as short diameters and communities, include the Watts-Strogatz model and the stochastic block model, respectively @cite_28 @cite_26 .
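For concreteness, the generative models named above can be instantiated in a few lines. The following minimal Python sketch assumes the networkx library; every size, probability, and seed below is an arbitrary illustrative choice of ours, not a value taken from the cited papers.

# Illustrative sketch of the graph models discussed above (assumes networkx is installed).
import networkx as nx

# Configuration model: a random graph with a prescribed degree sequence.
G_config = nx.configuration_model([3, 3, 2, 2, 2, 1, 1], seed=42)

# Preferential attachment ("rich get richer"): heavy-tailed degree distribution.
G_pa = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Watts-Strogatz small-world model: short path lengths with high clustering.
G_ws = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=42)

# Stochastic block model: two planted communities, dense inside and sparse across.
G_sbm = nx.stochastic_block_model([50, 50], [[0.20, 0.01], [0.01, 0.20]], seed=42)

print(G_pa.number_of_edges(), G_ws.number_of_edges(), G_sbm.number_of_edges())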
|
{
"cite_N": [
"@cite_24",
"@cite_28",
"@cite_4",
"@cite_26"
],
"mid": [
"2169015768",
"2112090702",
"210736762",
"1971421925"
],
"abstract": [
"Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the worldwide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.",
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases."
]
}
|
1306.4363
|
2117151108
|
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo:Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
Lastly, we would be remiss if we did not also mention work related to the study of online games. Most uses of online game data have focused on understanding certain aspects of human social behavior in online environments. Examples include individual and team performance @cite_33 @cite_30 @cite_11 @cite_10 , expert behavior @cite_16 @cite_18 , homophily @cite_20 , group formation @cite_27 , economic activity @cite_5 @cite_31 , and deviant behavior @cite_7 . Most of this work has focused on massively multiplayer online role-playing games (MMORPGs), e.g., World of Warcraft, although a few have examined social behavior in first-person shooter (FPS) games like @cite_11 . Relatively little of this work has focused on the structure and dynamics of social networks.
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2074434617",
"2105177883",
"1724761733",
"2155022902",
"2098027474",
"2131719610",
"1490158239",
"2143427414",
"1998882870",
"2102994316",
"2116441593"
],
"abstract": [
"In this study, we propose a comprehensive performance management tool for measuring and reporting operational activities of game players. This study uses performance data of game players in EverQuest II, a popular MMORPG developed by Sony Online Entertainment, to build performance prediction models for game players. The prediction models provide a projection of player’s future performance based on his past performance, which is expected to be a useful addition to existing player performance monitoring tools. First, we show that variations of PECOTA [2] and MARCEL [3], two most popular baseball home run prediction methods, can be used for game player performance prediction. Second, we evaluate the effects of varying lengths of past performance and show that past performance can be a good predictor of future performance up to a certain degree. Third, we show that game players do not regress towards the mean and that prediction models built on buckets using discretization based on binning and histograms lead to higher prediction coverage.",
"How do video game skills develop, and what sets the top players apart? We study this question of skill through a rating generated from repeated multiplayer matches called TrueSkill. Using these ratings from 7 months of games from over 3 million players, we look at how play intensity, breaks in play, skill change over time, and other games affect skill. These analyzed factors are then combined to model future skill and games played; the results show that skill change in early matches is a useful metric for modeling future skill, while play intensity explains eventual games played. The best players in the 7-month period, who we call \"Master Blasters\", have varied skill patterns that often run counter to the trends we see for typical players. The data analysis is supplemented with a 70 person survey to explore how players' self-perceptions compare to the gameplay data; most survey responses align well with the data and provide insight into player beliefs and motivation. Finally, we wrap up with a discussion about hiding skill information from players, and implications for game designers.",
"In this study, we propose a comprehensive performance management tool for measuring and reporting operational activities of game players This study uses performance data of game players in EverQuest II, a popular MMORPG developed by Sony Online Entertainment, to build performance prediction models for game players The prediction models provide a projection of player's future performance based on his past performance, which is expected to be a useful addition to existing player performance monitoring tools First, we show that variations of PECOTA [2] and MARCEL [3], two most popular baseball home run prediction methods, can be used for game player performance prediction Second, we evaluate the effects of varying lengths of past performance and show that past performance can be a good predictor of future performance up to a certain degree Third, we show that game players do not regress towards the mean and that prediction models built on buckets using discretization based on binning and histograms lead to higher prediction coverage.",
"Gold farming refers to the illicit practice of gathering and selling virtual goods in online games for real money. Although around one million gold farmers engage in gold farming related activities, to date a systematic study of identifying gold farmers has not been done. In this paper we use data from the massively-multiplayer online role-playing game (MMORPG) EverQuest II to identify gold farmers. We perform an exploratory logistic regression analysis to identify salient descriptive statistics followed by a machine learning binary classification problem to identify a set of features for classification purposes. Given the cost associated with investigating gold farmers, we also give criteria for evaluating gold farming detection techniques, and provide suggestions for future testing and evaluation techniques.",
"Advanced communication technologies enable strangers to work together on the same tasks or projects in virtual environments. Understanding the formation of task-oriented groups is an important first step to study the dynamics of team collaboration. In this paper, we investigated group combat activities in Sony’s EverQuest II game to identify the role of player and group attributes on group formation. We found that group formation is highly influenced by players’ common interests on challenging tasks. Players with less combat experience are more likely to participate in group events for difficult tasks and team performance is positively correlated to group size.",
"This article proposes an empirical test of whether aggregate economic behavior maps from the real to the virtual. Transaction data from a large commercial virtual world — the first such data set provided to outside researchers — is used to calculate metrics for production, consumption and money supply based on real-world definitions. Movements in these metrics over time were examined for consistency with common theories of macroeconomic change. The results indicated that virtual economic behavior follows real-world patterns. Moreover, a natural experiment occurred, in that a new version of the virtual world with the same rules came online during the study. The new world's macroeconomic aggregates quickly grew to be nearly exact replicas of those of the existing worlds, suggesting that Code is Law': macroeconomic outcomes in a virtual world may be explained largely by design structure.",
"This paper examines social structures underlying economic activity in Second Life (SL), a massively multiplayer virtual world that allows users to create and trade virtual objects and commodities. We find that users conduct many of their transactions both within their social networks and within groups. Using frequency of chat as a proxy of tie strength, we observe that free items are more likely to be exchanged as the strength of the tie increases. Social ties particularly play a significant role in paid transactions for sellers with a moderately sized customer base. We further find that sellers enjoying repeat business are likely to be selling to niche markets, because their customers tend to be contained in a smaller number of groups. But while social structure and interaction can help explain a seller's revenues and repeat business, they provide little information in the forecasting a seller's future performance. Our quantitative analysis is complemented by a novel method of visualizing the transaction activity of a seller, including revenue, customer base growth, and repeat business.",
"We examine the social behaviors of game experts in Everquest II, a popular massive multiplayer online role-playing game (MMO). We rely on Exponential Random Graph Models (ERGM) to examine the anonymous privacy-protected social networks of 1,457 players over a five-day period. We find that those who achieve the most in the game send and receive more communication, while those who perform the most efficiently at the game show no difference in communication behavior from other players. Both achievement and performance experts tend to communicate with those at similar expertise levels, and higher-level experts are more likely to receive communication from other players.",
"In this paper, we report findings from an exploratory study of player and team performance in Halo 3, a popular First-Person-Shooter game developed by Bungie. In the study, we first analyze player and team statistics obtained from the 2008 and 2009 seasons for professional Halo 3 games in order to investigate the impact of change in team composition on player and team performance. We then examine the impact of past performance on future performance of players and teams. Performing a large-scale experiment on a real-world dataset, we observe that player and team performance can be predicted with fairly high accuracy and that information about change in team composition can further improve the prediction results.",
"Virtual space eliminates the constraints of physical distances on communication and interaction. In this study, we examine the impact of offline proximity and homophily of players on their online interactions in EverQuest II. The results show that spatial proximity as well as homophily still influence players’ online behavior.",
"The market for video games has skyrocketed over the past decade. In the United States alone, the video game industry in 2009 generated almost US$20 billion in sales. Furthermore, according to (2008), an estimated 97 of the teenage population and 53 of the adult population are regular game players. Massively multiplayer online games (MMOGs) have become increasingly popular and amassed communities comprised of over 47 million subscribers by the year 2008. MMOGs are online spaces providing users with comprehensive virtual universes, each with its own unique context and mechanics. They range from the fantastical world of elves, dwarfs, and humans to space faring corporations and mirrors of our world. Large numbers of users interact and role-play via in-game mechanics."
]
}
|
1306.3525
|
1578424623
|
In this paper, we consider several finite-horizon Bayesian multi-armed bandit problems with side constraints which are computationally intractable (NP-hard) and for which no optimal (or near-optimal) algorithms are known to exist with sub-exponential running time. All of these problems violate the standard exchange property, which assumes that the reward from the play of an arm is not contingent upon when the arm is played. Not only are index policies suboptimal in these contexts, but there has also been little analysis of such policies in these problem settings. We show that if we consider near-optimal policies, in the sense of approximation algorithms, then there exist (near) index policies. Conceptually, if we can find policies that satisfy an approximate version of the exchange property, namely, that the reward from the play of an arm depends on when the arm is played to within a constant factor, then we have an avenue towards solving these problems. However, such an approximate version of the idling bandit property does not hold on a per-play basis, but it is shown to hold in a global sense. Clearly, such a property is not necessarily true of arbitrary single-arm policies, and finding such single-arm policies is nontrivial. We show that by restricting the state spaces of arms we can find single-arm policies, and that these single-arm policies can be combined into global (near) index policies for which the approximate version of the exchange property is true in expectation. The number of different bandit problems that can be addressed by this technique already demonstrates its wide applicability.
|
Multi-armed bandit problems have been extensively studied since their introduction by Robbins in @cite_6 . From that starting point the literature has diverged into a number of (often incomparable) directions, based on the objective and the information available. In the context of theoretical results, one typical goal @cite_28 @cite_59 @cite_64 @cite_41 @cite_15 @cite_62 @cite_33 @cite_36 has been to assume that the agent has absolutely no knowledge of @math (the model-free assumption) and then to minimize the "regret", i.e., the reward lost relative to an omniscient policy that plays the best arm from the start. However, these results require both a large reward rate and a long time horizon @math compared to the number of arms (that is, vanishing @math ). In the application scenarios mentioned above, the number of arms is typically very large and comparable to the optimization horizon, and the reward rates are low.
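As a hedged illustration of this model-free regret setting (UCB1 is a standard textbook policy, not one proposed in the works cited above), the sketch below plays Bernoulli arms whose means, horizon, and seed are arbitrary assumptions, and reports the regret against always playing the best arm.

# Minimal UCB1 sketch on Bernoulli arms; all parameters are illustrative assumptions.
import math, random

def ucb1_regret(means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # number of plays of each arm
    totals = [0.0] * k      # cumulative reward of each arm
    reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1     # play every arm once to initialize the indices
        else:
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
    # Regret: shortfall relative to the omniscient policy that plays the best arm throughout.
    return horizon * max(means) - reward

print(ucb1_regret([0.2, 0.5, 0.7], horizon=10000))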
|
{
"cite_N": [
"@cite_64",
"@cite_62",
"@cite_33",
"@cite_28",
"@cite_41",
"@cite_36",
"@cite_6",
"@cite_59",
"@cite_15"
],
"mid": [
"1570963478",
"1979675141",
"2169401877",
"",
"2121671791",
"2952840318",
"1998498767",
"2168405694",
"159910205"
],
"abstract": [
"1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix.",
"We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts . Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictins. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition learning algorithms with performance bounds that improve on the best results currently know in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.",
"In an online decision problem, one makes a sequence of decisions without knowledge of the future. Each period, one pays a cost based on the decision and observed state. We give a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. A natural idea is to follow the leader, i.e. each period choose the decision which has done best so far. We show that by slightly perturbing the totals and then choosing the best decision, the expected performance is nearly as good as the best decision in hindsight. Our approach, which is very much like Hannan's original game-theoretic approach from the 1950s, yields guarantees competitive with the more modern exponential weighting algorithms like Weighted Majority. More importantly, these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient.",
"",
"We present an algorithm for solving a broad class of online resource allocation problems. Our online algorithm can be applied in environments where abstract jobs arrive one at a time, and one can complete the jobs by investing time in a number of abstract activities, according to some schedule. We assume that the fraction of jobs completed by a schedule is a monotone, submodular function of a set of pairs (v, τ), where τ is the time invested in activity v. Under this assumption, our online algorithm performs near-optimally according to two natural metrics: (i) the fraction of jobs completed within time T, for some fixed deadline T > 0, and (ii) the average time required to complete each job. We evaluate our algorithm experimentally by using it to learn, online, a schedule for allocating CPU time among solvers entered in the 2007 SAT solver competition.",
"We consider a the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a signle point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if the each function is revealed after the choice is made, then one can achieve vanishingly small regret relative the best single decision chosen in hindsight. We extend this to the bandit setting where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, with access to the gradient (only being able to evaluate the function at a single point).",
"Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.",
"Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.",
"We present an asymptotically optimal algorithm for the max variant of the k-armed bandit problem. Given a set of k slot machines, each yielding payoff from a fixed (but unknown) distribution, we wish to allocate trials to the machines so as to maximize the expected maximum payoff received over a series of n trials. Subject to certain distributional assumptions, we show that O(k ln(k δ) ln(n)2 e2) trials are sufficient to identify, with probability at least 1 - δ, a machine whose expected maximum payoff is within e of optimal. This result leads to a strategy for solving the problem that is asymptotically optimal in the following sense: the gap between the expected maximum payoff obtained by using our strategy for n trials and that obtained by pulling the single best arm for all n trials approaches zero as n → ∞."
]
}
|
1306.3525
|
1578424623
|
In this paper, we consider several finite-horizon Bayesian multi-armed bandit problems with side constraints which are computationally intractable (NP-hard) and for which no optimal (or near-optimal) algorithms are known to exist with sub-exponential running time. All of these problems violate the standard exchange property, which assumes that the reward from the play of an arm is not contingent upon when the arm is played. Not only are index policies suboptimal in these contexts, but there has also been little analysis of such policies in these problem settings. We show that if we consider near-optimal policies, in the sense of approximation algorithms, then there exist (near) index policies. Conceptually, if we can find policies that satisfy an approximate version of the exchange property, namely, that the reward from the play of an arm depends on when the arm is played to within a constant factor, then we have an avenue towards solving these problems. However, such an approximate version of the idling bandit property does not hold on a per-play basis, but it is shown to hold in a global sense. Clearly, such a property is not necessarily true of arbitrary single-arm policies, and finding such single-arm policies is nontrivial. We show that by restricting the state spaces of arms we can find single-arm policies, and that these single-arm policies can be combined into global (near) index policies for which the approximate version of the exchange property is true in expectation. The number of different bandit problems that can be addressed by this technique already demonstrates its wide applicability.
|
The problem considered in @cite_15 is similar in name but is different from the problem we have posed herein --- that paper maximizes the single maximum value seen across all the arms and across all the @math steps. The authors of @cite_49 show that several natural index policies for the budgeted learning problem are constant-factor approximations, using analysis arguments which are different from those presented here. The specific approximation ratios proved in @cite_49 are improved upon by the results herein, with better running times. The authors of @cite_57 present a constant approximation for non-martingale finite-horizon bandits; however, these problems require techniques that are orthogonal to those in this paper. The problem considered in @cite_48 is an infinite-horizon restless bandit problem. Though that work also uses a weakly coupled relaxation and its Lagrangian (as is standard for many MAB problems), the techniques we use here are different.
|
{
"cite_N": [
"@cite_57",
"@cite_48",
"@cite_15",
"@cite_49"
],
"mid": [
"2950215956",
"2040774021",
"159910205",
"2949517871"
],
"abstract": [
"In the stochastic knapsack problem, we are given a knapsack of size B, and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, we know the actual size and reward only when the job completes. How should we schedule jobs to maximize the expected total reward? We know O(1)-approximations when we assume that (i) rewards and sizes are independent random variables, and (ii) we cannot prematurely cancel jobs. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of B times to maximize the reward obtained. Much recent work on this problem focus on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied. Indeed, we can show that previously proposed LP relaxations have large integrality gaps. We propose new time-indexed LP relaxations, and convert the fractional solutions into distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts.",
"The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit (MAB) problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-Hard to approximate to any nontrivial factor, and little progress has been made on this problem despite its significance in modeling activity allocation under uncertainty. In this article, we consider the Feedback MAB problem, where the reward obtained by playing each of n independent arms varies according to an underlying on off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite horizon time average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP), and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the Feedback MAB problem does not admit to greedy index-based optimal policies. We develop a novel duality-based algorithmic technique that yields a surprisingly simple and intuitive (2+e)-approximate greedy policy to this problem. We show that both in terms of approximation factor and computational efficiency, our policy is closely related to the Whittle index, which is widely used for its simplicity and efficiency of computation. Subsequently we define a multi-state generalization, that we term Monotone bandits, which remains subclass of the restless bandit problem. We show that our policy remains a 2-approximation in this setting, and further, our technique is robust enough to incorporate various side-constraints such as blocking plays, switching costs, and even models where determining the state of an arm is a separate operation from playing it. Our technique is also of independent interest for other restless bandit problems, and we provide an example in nonpreemptive machine replenishment. Interestingly, in this case, our policy provides a constant factor guarantee, whereas the Whittle index is provably polynomially worse. By presenting the first O(1) approximations for nontrivial instances of restless bandits as well as of POMDPs, our work initiates the study of approximation algorithms in both these contexts.",
"We present an asymptotically optimal algorithm for the max variant of the k-armed bandit problem. Given a set of k slot machines, each yielding payoff from a fixed (but unknown) distribution, we wish to allocate trials to the machines so as to maximize the expected maximum payoff received over a series of n trials. Subject to certain distributional assumptions, we show that O(k ln(k δ) ln(n)2 e2) trials are sufficient to identify, with probability at least 1 - δ, a machine whose expected maximum payoff is within e of optimal. This result leads to a strategy for solving the problem that is asymptotically optimal in the following sense: the gap between the expected maximum payoff obtained by using our strategy for n trials and that obtained by pulling the single best arm for all n trials approaches zero as n → ∞.",
"In the budgeted learning problem, we are allowed to experiment on a set of alternatives (given a fixed experimentation budget) with the goal of picking a single alternative with the largest possible expected payoff. Approximation algorithms for this problem were developed by Guha and Munagala by rounding a linear program that couples the various alternatives together. In this paper we present an index for this problem, which we call the ratio index, which also guarantees a constant factor approximation. Index-based policies have the advantage that a single number (i.e. the index) can be computed for each alternative irrespective of all other alternatives, and the alternative with the highest index is experimented upon. This is analogous to the famous Gittins index for the discounted multi-armed bandit problem. The ratio index has several interesting structural properties. First, we show that it can be computed in strongly polynomial time. Second, we show that with the appropriate discount factor, the Gittins index and our ratio index are constant factor approximations of each other, and hence the Gittins index also gives a constant factor approximation to the budgeted learning problem. Finally, we show that the ratio index can be used to create an index-based policy that achieves an O(1)-approximation for the finite horizon version of the multi-armed bandit problem. Moreover, the policy does not require any knowledge of the horizon (whereas we compare its performance against an optimal strategy that is aware of the horizon). This yields the following surprising result: there is an index-based policy that achieves an O(1)-approximation for the multi-armed bandit problem, oblivious to the underlying discount factor."
]
}
|
1306.3484
|
2115817980
|
Timing side channels in two-user schedulers are studied. When two users share a scheduler, one user may learn the other user's behavior from patterns of service timings. We measure the information leakage of the resulting timing side channel in schedulers serving a legitimate user and a malicious attacker, using a privacy metric defined as the Shannon equivocation of the user's job density. We show that the commonly used first-come-first-serve (FCFS) scheduler provides no privacy, as the attacker is able to learn the user's job pattern completely. Furthermore, we introduce a scheduling policy, the accumulate-and-serve scheduler, which services jobs from the user and the attacker in batches after buffering them. The information leakage in this scheduler is mitigated at the price of service delays, and the maximum privacy is achievable when large delays are added.
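A toy simulation can make the FCFS leak concrete. The sketch below is our own illustration, not the paper's model or code: it assumes a slotted, unit-rate server, a user who submits a unit-size job in each slot with some probability, and an attacker who submits a small probe every slot and observes only its own probe delays. Because FCFS delays are a deterministic function of the queued work, the attacker can invert the queue recursion and recover the user's arrival pattern exactly.

# Toy FCFS timing side channel; every parameter below is an arbitrary illustrative assumption.
import random

def fcfs_probe_attack(p_user=0.3, slots=30, probe_size=0.1, seed=1):
    rng = random.Random(seed)
    backlog = 0.0                            # outstanding work in the queue (service-time units)
    arrivals, delays = [], []
    for _ in range(slots):
        user_job = rng.random() < p_user     # hidden arrival of the legitimate user
        arrivals.append(user_job)
        backlog += 1.0 if user_job else 0.0  # user's job joins the FCFS queue first
        backlog += probe_size                # attacker's probe joins behind it
        delays.append(backlog)               # probe completes after all work queued ahead of it
        backlog = max(backlog - 1.0, 0.0)    # server completes one unit of work per slot
    # The attacker knows the service rate and probe size, so it can invert
    # delay[i] = max(delay[i-1] - 1, 0) + user_i + probe_size slot by slot.
    prev, guesses = 0.0, []
    for d in delays:
        guesses.append(d - max(prev - 1.0, 0.0) - probe_size > 0.5)
        prev = d
    return sum(g == a for g, a in zip(guesses, arrivals)) / slots

print(fcfs_probe_attack())   # 1.0: in this toy model, FCFS leaks the job pattern completely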
|
We study the timing side channel between two users, an attacker and a regular user, sending jobs that are scheduled through a shared server. Information leakage in this side channel is determined by the scheduling policy. In this paper, we quantify information leakage using Shannon equivocation and analyze the privacy of the commonly used FCFS policy. Similar studies under this model can be found in @cite_21 @cite_23 @cite_3 , where minimum-mean-square-error (MMSE) and attack-dependent metrics were used.
|
{
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_23"
],
"mid": [
"2021931360",
"",
"2141422616"
],
"abstract": [
"In this work, we study information leakage in timing side channels that arise in the context of shared event schedulers. Consider two processes, one of them an innocuous process (referred to as Alice) and the other a malicious one (referred to as Bob), using a common scheduler to process their jobs. Based on when his jobs get processed, Bob wishes to learn about the pattern (size and timing) of jobs of Alice. Depending on the context, knowledge of this pattern could have serious implications on Alice's privacy and security. For instance, shared routers can reveal traffic patterns, shared memory access can reveal cloud usage patterns, and suchlike. We present a formal framework to study the information leakage in shared resource schedulers using the pattern estimation error as a performance metric. In this framework, a uniform upper bound is derived to benchmark different scheduling policies. The first-come-first-serve scheduling policy is analyzed, and shown to leak significant information when the scheduler is loaded heavily. To mitigate the timing information leakage, we propose an “Accumulate-and-Serve” policy which trades in privacy for a higher delay. The policy is analyzed under the proposed framework and is shown to leak minimum information to the attacker, and is shown to have comparatively lower delay than a fixed scheduler that preemptively assigns service times irrespective of traffic patterns.",
"",
"Traditionally, scheduling policies have been optimized to perform well on metrics such as throughput, delay and fairness. In the context of shared event schedulers, where a common processor is shared among multiple users, one also has to consider the privacy offered by the scheduling policy. The privacy offered by a scheduling policy measures how much information about the usage pattern of one user of the system can be learnt by another as a consequence of sharing the scheduler. In [1], we introduced an estimation error based metric to quantify this privacy. We showed that the most commonly deployed scheduling policy, the first-come-first-served (FCFS) offers very little privacy to its users. We also proposed a parametric non-work-conserving policy which traded off delay for improved privacy. In this work, we ask the question, is a trade-off between delay and privacy fundamental to the design to scheduling policies? In particular, is there a work-conserving, possibly randomized, scheduling policy that scores high on the privacy metric? Answering the first question, we show that there does exist a fundamental limit on the privacy performance of a work-conserving scheduling policy. We quantify this limit. Furthermore, answering the second question, we demonstrate that the round-robin scheduling policy (a deterministic policy) is privacy optimal within the class of work-conserving policies."
]
}
|
1306.2918
|
2950635915
|
Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function, and have no information on the other player. Furthermore, we assume that they have restrictions on their own action set such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.
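The flavor of such payoff-based learning can be illustrated with a generic bandit-feedback rule. The sketch below is not the procedure of the paper: it runs two EXP3-style exponential-weights learners in self-play on Matching Pennies, where each player observes only its own realized payoff; the learning rate, exploration rate, horizon, and payoffs are all illustrative assumptions. The empirical frequencies of play should end up close to the mixed equilibrium (1/2, 1/2).

# Two EXP3-style learners in self-play on Matching Pennies; parameters are illustrative.
import math, random

def exp3_self_play(rounds=50000, eta=0.01, gamma=0.1, seed=0):
    rng = random.Random(seed)
    payoff_row = [[1.0, -1.0], [-1.0, 1.0]]   # row player wins on a match, loses on a mismatch
    s1, s2 = [0.0, 0.0], [0.0, 0.0]           # cumulative importance-weighted reward estimates
    counts1, counts2 = [0, 0], [0, 0]

    def probs(s):
        m = max(s)
        e = [math.exp(eta * (v - m)) for v in s]           # numerically stable softmax
        z = sum(e)
        return [(1.0 - gamma) * v / z + gamma / 2.0 for v in e]

    for _ in range(rounds):
        p1, p2 = probs(s1), probs(s2)
        a1 = 0 if rng.random() < p1[0] else 1
        a2 = 0 if rng.random() < p2[0] else 1
        counts1[a1] += 1
        counts2[a2] += 1
        g1 = (payoff_row[a1][a2] + 1.0) / 2.0  # own realized payoff, rescaled to [0, 1]
        g2 = 1.0 - g1
        s1[a1] += g1 / p1[a1]                  # bandit feedback: importance-weighted estimate
        s2[a2] += g2 / p2[a2]
    return [c / rounds for c in counts1], [c / rounds for c in counts2]

print(exp3_self_play())   # both empirical distributions should be near [0.5, 0.5]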
|
Let @math be a compact convex subset of a Euclidean space and, for all @math , let @math . We are interested in the asymptotic behavior of the random sequence @math . In order to maintain the terminology employed in @cite_25 , we introduce the following definition, which is stated in a slightly different form (see Definition 2.4 of br10 ).
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2091328462"
],
"abstract": [
"This paper studies a class of non-Markovian and nonhomogeneous stochastic processes on a finite state space. Relying on a recent paper by Benaim, Hofbauer, and Sorin [SIAM J. Control Optim., 44 (2005), pp. 328-348] it is shown that, under certain assumptions, the asymptotic behavior of occupation measures can be described in terms of a certain set-valued deterministic dynamical system. This provides a unified approach to simulated annealing type processes and permits the study of new models of vertex reinforced random walks and new models of learning in games such as Markovian fictitious play."
]
}
|
1306.2918
|
2950635915
|
Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function, and have no information on the other player. Furthermore, we assume that they have restrictions on their own action set such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.
|
Under the assumptions above, it is well known (see Aubin and Cellina @cite_0 ) that the differential inclusion admits at least one solution (i.e., an absolutely continuous mapping @math such that @math for almost every @math ) through any initial point.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1514218977"
],
"abstract": [
"0. Background Notes.- 1. Continuous Partitions of Unity.- 2. Absolutely Continuous Functions.- 3. Some Compactness Theorems.- 4. Weak Convergence and Asymptotic Center of Bounded Sequences.- 5. Closed Convex Hulls and the Mean-Value Theorem.- 6. Lower Semicontinuous Convex Functions and Projections of Best Approximation.- 7. A Concise Introduction to Convex Analysis.- 1. Set-Valued Maps.- 1. Set-Valued Maps and Continuity Concepts.- 2. Examples of Set-Valued Maps.- 3. Continuity Properties of Maps with Closed Convex Graph.- 4. Upper Hemicontinuous Maps and the Convergence Theorem.- 5. Hausdorff Topology.- 6. The Selection Problem.- 7. The Minimal Selection.- 8. Chebishev Selection.- 9. The Barycentric Selection.- 10. Selection Theorems for Locally Selectionable Maps.- 11. Michael's Selection Theorem.- 12. The Approximate Selection Theorem and Kakutani's Fixed Point Theorem.- 13. (7-Selectionable Maps.- 14. Measurable Selections.- 2. Existence of Solutions to Differential Inclusions.- 1. Convex Valued Differential Inclusions.- 2. Qualitative Properties of the Set of Trajectories of Convex-Valued Differential Inclusions.- 3. Nonconvex-Valued Differential Inclusions.- 4. Differential Inclusions with Lipschitzean Maps and the Relaxation Theorem.- 5. The Fixed-Point Approach.- 6. The Lower Semicontinuous Case.- 3. Differential Inclusions with Maximal Monotone Maps.- 1. Maximal Monotone Maps.- 2. Existence and Uniqueness of Solutions to Differential Inclusions with Maximal Monotone Maps.- 3. Asymptotic Behavior of Trajectories and the Ergodic Theorem.- 4. Gradient Inclusions.- 5. Application: Gradient Methods for Constrained Minimization Problems.- 4. Viability Theory: The Nonconvex Case.- 1. Bouligand's Contingent Cone.- 2. Viable and Monotone Trajectories.- 3. Contingent Derivative of a Set-Valued Map.- 4. The Time Dependent Case.- 5. A Continuous Version of Newton's Method.- 6. A Viability Theorem for Continuous Maps with Nonconvex Images..- 7. Differential Inclusions with Memory.- 5. Viability Theory and Regulation of Controled Systems: The Convex Case.- 1. Tangent Cones and Normal Cones to Convex Sets.- 2. Viability Implies the Existence of an Equilibrium.- 3. Viability Implies the Existence of Periodic Trajectories.- 4. Regulation of Controled Systems Through Viability.- 5. Walras Equilibria and Dynamical Price Decentralization.- 6. Differential Variational Inequalities.- 7. Rate Equations and Inclusions.- 6. Liapunov Functions.- 1. Upper Contingent Derivative of a Real-Valued Function.- 2. Liapunov Functions and Existence of Equilibria.- 3. Monotone Trajectories of a Differential Inclusion.- 4. Construction of Liapunov Functions.- 5. Stability and Asymptotic Behavior of Trajectories.- Comments."
]
}
|
1306.2918
|
2950635915
|
Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function, and have no information on the other player. Furthermore, we assume that they have restrictions on their own action set such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.
|
Bena "im and Raimond @cite_25 introduce an adaptive process they call (MFP). As in , we consider that players have constraints on their action set, i.e. each player has an exploration matrix @math which is supposed to be irreducible and reversible with respect to its unique invariant measure @math .
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2091328462"
],
"abstract": [
"This paper studies a class of non-Markovian and nonhomogeneous stochastic processes on a finite state space. Relying on a recent paper by Benaim, Hofbauer, and Sorin [SIAM J. Control Optim., 44 (2005), pp. 328-348] it is shown that, under certain assumptions, the asymptotic behavior of occupation measures can be described in terms of a certain set-valued deterministic dynamical system. This provides a unified approach to simulated annealing type processes and permits the study of new models of vertex reinforced random walks and new models of learning in games such as Markovian fictitious play."
]
}
|
1306.2918
|
2950635915
|
Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure, where the players only observe their own realized payoff at each stage. We assume that agents do not know their own payoff function, and have no information on the other player. Furthermore, we assume that they have restrictions on their own action set such that, at each stage, their choice is limited to a subset of their action set. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum and potential games, and games where one player has two actions.
|
Proposition 3.4 in @cite_25 shows that the norm of @math can be controlled as a function of the spectral gap @math . If, in addition, the constants @math are sufficiently small, then the required condition holds.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2091328462"
],
"abstract": [
"This paper studies a class of non-Markovian and nonhomogeneous stochastic processes on a finite state space. Relying on a recent paper by Benaim, Hofbauer, and Sorin [SIAM J. Control Optim., 44 (2005), pp. 328-348] it is shown that, under certain assumptions, the asymptotic behavior of occupation measures can be described in terms of a certain set-valued deterministic dynamical system. This provides a unified approach to simulated annealing type processes and permits the study of new models of vertex reinforced random walks and new models of learning in games such as Markovian fictitious play."
]
}
|
1306.1901
|
2089005875
|
Program termination is a hot research topic in program analysis. The last few years have witnessed the development of termination analyzers for programming languages such as C and Java with remarkable precision and performance. These systems are largely based on techniques and tools coming from the field of declarative constraint programming. In this paper, we first recall an algorithm based on Farkas' Lemma for discovering linear ranking functions proving termination of a certain class of loops. Then we propose an extension of this method for showing the existence of eventual linear ranking functions, i.e., linear functions that become ranking functions after a finite unrolling of the loop. We show correctness and completeness of this algorithm.
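As a small, hedged companion to the discussion of linear ranking functions (this checks a candidate function for a toy loop rather than synthesizing one via Farkas' Lemma, and it assumes the z3 Python bindings are available):

# Verify that f(x) = x is a linear ranking function for the loop: while (x >= 0) { x = x - 1; }
from z3 import Ints, And, Not, Solver, unsat

x, x1 = Ints("x x1")          # x: value before an iteration, x1: value after
guard = x >= 0                # loop guard
update = x1 == x - 1          # loop body
bounded = x >= 0              # f is bounded from below whenever the body executes
decreasing = x1 <= x - 1      # f decreases by at least 1 on every iteration

s = Solver()
s.add(guard, update, Not(And(bounded, decreasing)))   # ask for a counterexample
if s.check() == unsat:
    print("f(x) = x is a linear ranking function, so the loop terminates")
else:
    print("counterexample:", s.model())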
|
The method proposed in @cite_15 repeatedly divides the state space to find a linear ranking function on each subspace, and then checks that the transitive closure of the transition relation is included in the union of the ranking relations. As the process may not terminate, one needs to bound the search. @cite_15 also proposes a test suite, upon which we tested our approach. As expected, every loop (Table 1 of @cite_15 ) which terminates with a linear ranking function also has an eventual linear ranking function. Moreover, loops 6, 12, 13, 18, 21, 23, 24, 26, 27, 28, 31, 32, 35, and 36 admit an eventual linear ranking function (which is discovered using neither @math nor its relaxation). These are all shown terminating with the tool of @cite_15 . On the other hand, loops 14, 34, and 38 do have a disjunctive ranking relation (following the terminology of @cite_15 ), but do not admit an eventual linear ranking function.
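To make the notion concrete, the following Python sketch shows a standard illustrative loop (a hypothetical example, not one of the numbered loops from the cited test suite) that has no linear ranking function, yet the linear function f(x, y) = x becomes strictly decreasing, and hence ranking, after a finite number of iterations (once y < 0):

```python
# Hypothetical simple loop:  while x >= 0: (x, y) := (x + y, y - 1)
# While y > 0 the value of x may grow, so x is not a ranking function from
# the start; once y < 0, x decreases by at least 1 per iteration and is
# bounded below by the guard, so x is an *eventual* linear ranking function.
def run(x, y):
    steps = 0
    while x >= 0:
        x, y = x + y, y - 1
        steps += 1
    return steps

print(run(0, 100))   # terminates after finitely many iterations
print(run(50, -3))
```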
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2110820023"
],
"abstract": [
"Analysis of termination and other liveness properties of an imperative program can be reduced to termination proof synthesis for simple loops, i.e., loops with only variable updates in the loop body. Among simple loops, the subset of Linear Simple Loops (LSLs) is particular interesting because it is common in practice and expressive in theory. Existing techniques can successfully synthesize a linear ranking function for an LSL if there exists one. However, when a terminating LSL does not have a linear ranking function, these techniques fail. In this paper we describe an automatic method that generates proofs of universal termination for LSLs based on the synthesis of disjunctive ranking relations. The method repeatedly finds linear ranking functions on parts of the state space and checks whether the transitive closure of the transition relation is included in the union of the ranking relations. Our method extends the work of Podelski and Rybalchenko [27]. We have implemented a prototype of the method and have shown experimental evidence of the effectiveness of our method."
]
}
|
1306.1901
|
2089005875
|
Program termination is a hot research topic in program analysis. The last few years have witnessed the development of termination analyzers for programming languages such as C and Java with remarkable precision and performance. These systems are largely based on techniques and tools coming from the field of declarative constraint programming. In this paper, we first recall an algorithm based on Farkas' Lemma for discovering linear ranking functions proving termination of a certain class of loops. Then we propose an extension of this method for showing the existence of eventual linear ranking functions, i.e., linear functions that become ranking functions after a finite unrolling of the loop. We show correctness and completeness of this algorithm.
|
@cite_20 shows how to partition the loop relation into behaviors that are known to terminate and behaviors that must be analyzed in a subsequent, refined termination proof. This work addresses both the termination and the conditional termination problem in the same framework. Concerning the benchmarks proposed in Table 1 of @cite_20 , loops 6--41 all have an eventual linear ranking function except for loops 11, 14, 30, 34, and 38.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2102026074"
],
"abstract": [
"We present a novel technique for proving program termination which introduces a new dimension of modularity. Existing techniques use the program to incrementally construct a termination proof. While the proof keeps changing, the program remains the same. Our technique goes a step further. We show how to use the current partial proof to partition the transition relation into those behaviors known to be terminating from the current proof, and those whose status (terminating or not) is not known yet. This partition enables a new and unexplored dimension of incremental reasoning on the program side. In addition, we show that our approach naturally applies to conditional termination which searches for a precondition ensuring termination. We further report on a prototype implementation that advances the state-of-the-art on the grounds of termination and conditional termination."
]
}
|
1306.1901
|
2089005875
|
Program termination is a hot research topic in program analysis. The last few years have witnessed the development of termination analyzers for programming languages such as C and Java with remarkable precision and performance. These systems are largely based on techniques and tools coming from the field of declarative constraint programming. In this paper, we first recall an algorithm based on Farkas' Lemma for discovering linear ranking functions proving termination of a certain class of loops. Then we propose an extension of this method for showing the existence of eventual linear ranking functions, i.e., linear functions that become ranking functions after a finite unrolling of the loop. We show correctness and completeness of this algorithm.
|
A method based on abstract interpretation for synthesizing ranking functions is described in @cite_14 . Although the work contains no completeness result, the approach is able to discover piecewise-defined ranking functions.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2114695797"
],
"abstract": [
"We present a parameterized abstract domain for proving program termination by abstract interpretation. The domain automatically synthesizes piecewise-defined ranking functions and infers sufficient conditions for program termination. The analysis uses over-approximations but we prove its soundness, meaning that all program executions respecting these sufficient conditions are indeed terminating."
]
}
|
1306.1901
|
2089005875
|
Program termination is a hot research topic in program analysis. The last few years have witnessed the development of termination analyzers for programming languages such as C and Java with remarkable precision and performance. These systems are largely based on techniques and tools coming from the field of declarative constraint programming. In this paper, we first recall an algorithm based on Farkas' Lemma for discovering linear ranking functions proving termination of a certain class of loops. Then we propose an extension of this method for showing the existence of eventual linear ranking functions, i.e., linear functions that become ranking functions after a finite unrolling of the loop. We show correctness and completeness of this algorithm.
|
Finally, let us point out that the concept of eventual ranking functions appeared first in @cite_3 @cite_2 . The class of loops studied in these works is wider but, as the technique of @cite_2 relies on finite differences, this approach is incomplete. On the other hand, while @cite_3 is also based on Farkas' Lemma, it seems [A. R. Bradley, Personal communication, May 2013] that the approach cannot prove, e.g., termination of the SLC loop @math , which admits an eventual linear ranking function.
|
{
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"1585194019",
"1541144457"
],
"abstract": [
"Although every terminating loop has a ranking function, not every loop has a ranking function of a restricted form, such as a lexicographic tuple of polynomials over program variables. The polyranking principle is proposed as a generalization of polynomial ranking for analyzing termination of loops. We define lexicographic polyranking functions in the context of loops with parallel transitions consisting of polynomial assertions, including inequalities, over primed and unprimed variables. Next, we address synthesis of these functions with a complete and automatic method for synthesizing lexicographic linear polyranking functions with supporting linear invariants over linear loops.",
"We present a technique to prove termination of multipath polynomial programs, an expressive class of loops that enables practical code abstraction and analysis. The technique is based on finite differences of expressions over transition systems. Although no complete method exists for determining termination for this class of loops, we show that our technique is useful in practice. We demonstrate that our prototype implementation for C source code readily scales to large software projects, proving termination for a high percentage of targeted loops."
]
}
|
1306.1977
|
2951121256
|
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation and investigate a joint embedding method that results in a commensurate representation of disparate dissimilarities. We further assume that there are "matched" observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, @math , to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
|
Our problem is very similar to the 3-way multidimensional scaling problem, where the dissimilarity data is a ( @math ) tensor which represents pairwise dissimilarities between @math objects as measured in @math different conditions. However, 3-way MDS methods @cite_2 @cite_0 find a single configuration of @math points representing each object (referred to as the group space) that is as consistent as possible with the dissimilarity data under the different conditions. DISTATIS @cite_12 accomplishes this goal by finding a compromise inner product matrix that is a weighted combination of the inner product matrices from the different conditions. 3-way MDS methods such as INDSCAL @cite_2 and PROXSCAL @cite_0 assume a common configuration in group space, from which the individual dissimilarity matrices are computed after being distorted by weight matrices. In contrast, our embedding approach always results in a distinct point for each object under each condition, and we never estimate the representation of the objects in the common space. In fact, some of our inference tasks, such as match detection, make sense only if we represent each object under each condition as a distinct point.
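For reference, the weighted Euclidean model underlying INDSCAL-style 3-way MDS can be sketched as follows; the notation is chosen here for illustration and may differ from the cited papers. Each condition k distorts a single group-space configuration X = (x_{i,a}) through nonnegative dimension weights w_{k,a}:

```latex
% Sketch of the INDSCAL-style weighted Euclidean model (illustrative notation):
% the squared dissimilarity of objects i and j in condition k is modeled from a
% single common configuration X, distorted by per-condition dimension weights.
d_{ij,k}^{2} \;\approx\; \sum_{a=1}^{r} w_{k,a}\,\bigl(x_{i,a}-x_{j,a}\bigr)^{2},
\qquad k = 1,\dots,K .
```

This makes explicit why such methods produce a single point per object (a row of X), whereas the weighted-raw-stress embedding discussed above keeps a distinct point for each object in each condition.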
|
{
"cite_N": [
"@cite_0",
"@cite_12",
"@cite_2"
],
"mid": [
"",
"2097264885",
"2000215628"
],
"abstract": [
"",
"In this paper we present a generalization of classical multidimensional scaling called DISTATIS which is a new method that can be used to compare algorithms when their outputs consist of distance matrices computed on the same set of objects. The method first evaluates the similarity between algorithms using a coefficient called the RV coefficient. From this analysis, a compromise matrix is computed which represents the best aggregate of the original matrices. In order to evaluate the differences between algorithms, the original distance matrices are then projected onto the compromise. We illustrate this method with a \"toy example\" in which four different \"algorithms\" (two computer programs and two sets of human observers) evaluate the similarity among faces.",
"An individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common “psychological space”. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus by dimensions coordinate matrix and a subjects by dimensions matrix of weights. This method is illustrated with data on auditory stimuli and on perception of nations."
]
}
|
1306.1977
|
2951121256
|
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation and investigate a joint embedding method that results in a commensurate representation of disparate dissimilarities. We further assume that there are "matched" observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, @math , to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
|
Another classical method relevant to our inference task is canonical correlation analysis (CCA) @cite_8 @cite_5 . CCA can be used to find a pair of orthogonal projections that map the data of each modality to a common space. The results for this approach are not presented here for brevity and can be found in @cite_9 . In terms of performance on the match detection task, the CCA-based method was very competitive with our dissimilarity-centric approach.
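For concreteness, a CCA-based mapping of two modalities into a common space can be sketched with scikit-learn as follows; the data, dimensions, and match-detection statistic below are made up for illustration, and this is not the code used in @cite_9 .

```python
# Illustrative sketch of the CCA-based alternative described above, using
# scikit-learn; the data and dimensions are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, p1, p2, d = 100, 10, 8, 2          # matched observations from two modalities
latent = rng.normal(size=(n, d))      # shared structure driving both modalities
X = latent @ rng.normal(size=(d, p1)) + 0.1 * rng.normal(size=(n, p1))
Y = latent @ rng.normal(size=(d, p2)) + 0.1 * rng.normal(size=(n, p2))

cca = CCA(n_components=d)
X_c, Y_c = cca.fit_transform(X, Y)    # both modalities mapped to a common d-dim space

# Matched pairs should now be close; a match-detection test statistic could be
# the distance between the two projections of a candidate pair.
dists = np.linalg.norm(X_c - Y_c, axis=1)
print(dists.mean())
```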
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_8"
],
"mid": [
"2100235303",
"2606438765",
"2025341678"
],
"abstract": [
"We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model.",
"",
"Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each"
]
}
|
1306.1977
|
2951121256
|
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation and investigate a joint embedding method that results in a commensurate representation of disparate dissimilarities. We further assume that there are "matched" observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, @math , to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
|
There have been many efforts toward solving the related problem of "manifold alignment". "Manifold alignment" seeks to find correspondences between disparate datasets in different conditions (which are sometimes referred to as "domains") by aligning their underlying manifolds. A common data setting found in the literature is the semi-supervised setting @cite_16 , where correspondences between two collections of observations are given and the task is to find correspondences between a new set of observations in each condition. The proposed solutions @cite_7 @cite_15 @cite_3 follow the common approach of seeking a common latent space for multiple conditions such that the representations (either projections or embeddings) of the observations match (are commensurate) in this space.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_3",
"@cite_7"
],
"mid": [
"2056612725",
"194200989",
"",
"2123261262"
],
"abstract": [
"In this paper, we propose a novel manifold alignment method by learning the underlying common manifold with supervision of corresponding data pairs from different observation sets. Different from the previous algorithms of semi-supervised manifold alignment, our method learns the explicit corresponding projections from each original observation space to the common embedding space everywhere. Benefiting from this property, our method could process new test data directly rather than re-alignment. Furthermore, our approach doesn’t have any assumption on the data structures, thus it could handle more complex cases and get better results compared with previous work. In the proposed algorithm, manifold alignment is formulated as a minimization problem with proper constraints, which could be solved in an analytical manner with closed-form solution. Experimental results on pose manifold alignment of different objects and faces demonstrate the effectiveness of our proposed method.",
"In this paper, we study a family of semisupervised learning algorithms for “aligning” different data sets that are characterized by the same underlying manifold. The optimizations of these algorithms are based on graphs that provide a discretized approximation to the manifold. Partial alignments of the data sets—obtained from prior knowledge of their manifold structure or from pairwise correspondences of subsets of labeled examples— are completed by integrating supervised signals with unsupervised frameworks for manifold learning. As an illustration of this semisupervised setting, we show how to learn mappings between different data sets of images that are parameterized by the same underlying modes of variability (e.g., pose and viewing angle). The curse of dimensionality in these problems is overcome by exploiting the low dimensional structure of image manifolds.",
"",
"In this paper we introduce a novel approach to manifold alignment, based on Procrustes analysis. Our approach differs from \"semi-supervised alignment\" in that it results in a mapping that is defined everywhere - when used with a suitable dimensionality reduction method - rather than just on the training data points. We describe and evaluate our approach both theoretically and experimentally, providing results showing useful knowledge transfer from one domain to another. Novel applications of our method including cross-lingual information retrieval and transfer learning in Markov decision processes are presented."
]
}
|
1306.1977
|
2951121256
|
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation and investigate a joint embedding method that results in a commensurate representation of disparate dissimilarities. We further assume that there are "matched" observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, @math , to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
|
@cite_15 solves an optimization problem with respect to two projection matrices for the observations in the two domains. The energy function that is optimized contains three terms: two locality-preserving terms and one correspondence term. The locality-preserving terms ensure that the local neighborhoods of points are preserved in the low-dimensional space, by making use of the reconstruction error of Locally Linear Embedding @cite_4 . The correspondence term ensures that "matched" points are mapped to nearby locations in the commensurate space.
|
{
"cite_N": [
"@cite_15",
"@cite_4"
],
"mid": [
"2056612725",
"2053186076"
],
"abstract": [
"In this paper, we propose a novel manifold alignment method by learning the underlying common manifold with supervision of corresponding data pairs from different observation sets. Different from the previous algorithms of semi-supervised manifold alignment, our method learns the explicit corresponding projections from each original observation space to the common embedding space everywhere. Benefiting from this property, our method could process new test data directly rather than re-alignment. Furthermore, our approach doesn’t have any assumption on the data structures, thus it could handle more complex cases and get better results compared with previous work. In the proposed algorithm, manifold alignment is formulated as a minimization problem with proper constraints, which could be solved in an analytical manner with closed-form solution. Experimental results on pose manifold alignment of different objects and faces demonstrate the effectiveness of our proposed method.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in"
]
}
|
1306.1977
|
2951121256
|
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation and investigate a joint embedding method that results in a commensurate representation of disparate dissimilarities. We further assume that there are "matched" observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, @math , to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
|
@cite_16 solve the problem in the semi-supervised setting via a similar approach, optimizing an energy function with three terms analogous to those in @cite_15 .
|
{
"cite_N": [
"@cite_15",
"@cite_16"
],
"mid": [
"2056612725",
"194200989"
],
"abstract": [
"In this paper, we propose a novel manifold alignment method by learning the underlying common manifold with supervision of corresponding data pairs from different observation sets. Different from the previous algorithms of semi-supervised manifold alignment, our method learns the explicit corresponding projections from each original observation space to the common embedding space everywhere. Benefiting from this property, our method could process new test data directly rather than re-alignment. Furthermore, our approach doesn’t have any assumption on the data structures, thus it could handle more complex cases and get better results compared with previous work. In the proposed algorithm, manifold alignment is formulated as a minimization problem with proper constraints, which could be solved in an analytical manner with closed-form solution. Experimental results on pose manifold alignment of different objects and faces demonstrate the effectiveness of our proposed method.",
"In this paper, we study a family of semisupervised learning algorithms for “aligning” different data sets that are characterized by the same underlying manifold. The optimizations of these algorithms are based on graphs that provide a discretized approximation to the manifold. Partial alignments of the data sets—obtained from prior knowledge of their manifold structure or from pairwise correspondences of subsets of labeled examples— are completed by integrating supervised signals with unsupervised frameworks for manifold learning. As an illustration of this semisupervised setting, we show how to learn mappings between different data sets of images that are parameterized by the same underlying modes of variability (e.g., pose and viewing angle). The curse of dimensionality in these problems is overcome by exploiting the low dimensional structure of image manifolds."
]
}
|
1306.2158
|
2949101325
|
With the increasing empirical success of distributional models of compositional semantics, it is timely to consider the types of textual logic that such models are capable of capturing. In this paper, we address shortcomings in the ability of current models to capture logical operations such as negation. As a solution we propose a tripartite formulation for a continuous vector space representation of semantics and subsequently use this representation to develop a formal compositional notion of negation within such models.
|
The first class of approaches seeks to use distributional models of word semantics to enhance logic-based models of textual inference. The work which best exemplifies this strand of research is found in the efforts of and, more recently, . This line of research converts logical representations obtained from syntactic parses using Bos' Boxer @cite_21 into Markov Logic Networks @cite_2 , and uses distributional semantics-based models such as that of to deal with issues of polysemy and ambiguity.
|
{
"cite_N": [
"@cite_21",
"@cite_2"
],
"mid": [
"2033194278",
"1977970897"
],
"abstract": [
"Boxer is an open-domain software component for semantic analysis of text, based on Combinatory Categorial Grammar (CCG) and Discourse Representation Theory (DRT). Used together with the CC (b) discourse structure triggered by conditionals, negation or discourse adverbs was overall correctly computed; (c) some measure and time expressions are correctly analysed, others aren't; (d) several shallow analyses are given for lexical phrases that require deep analysis; (e) bridging references and pronouns are not resolved in most cases. Boxer is distributed with the C&C tools and freely available for research purposes.",
"We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach."
]
}
|
1306.2158
|
2949101325
|
With the increasing empirical success of distributional models of compositional semantics, it is timely to consider the types of textual logic that such models are capable of capturing. In this paper, we address shortcomings in the ability of current models to capture logical operations such as negation. As a solution we propose a tripartite formulation for a continuous vector space representation of semantics and subsequently use this representation to develop a formal compositional notion of negation within such models.
|
The second class of approaches seeks to integrate boolean-like logical operations into distributional semantic models using existing mechanisms for representing and composing semantic vectors. postulate a mathematical framework generalising the syntax-semantic passage of Montague Grammar @cite_17 to other forms of syntactic and semantic representation. They show that the parses yielded by syntactic calculi satisfying certain structural constraints can be canonically mapped to vector combination operations in distributional semantic models. They illustrate their framework by demonstrating how the truth-value of sentences can be obtained from the combination of vector representations of words and multi-linear maps standing for logical predicates and relations. They furthermore give a matrix interpretation of negation as a 'swap' matrix which inverts the truth-value of vectorial sentence representations, and show how it can be embedded in sentence structure.
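The 'swap'-matrix treatment of negation can be illustrated in a few lines of Python; encoding true and false as the two standard basis vectors is an assumption made for this sketch and is not necessarily the encoding used in the cited framework.

```python
# Minimal numpy illustration of negation as a 'swap' matrix acting on
# vectorial truth values.
import numpy as np

TRUE  = np.array([1.0, 0.0])
FALSE = np.array([0.0, 1.0])
NOT   = np.array([[0.0, 1.0],      # swap matrix: exchanges the two coordinates,
                  [1.0, 0.0]])     # hence inverts the truth value

assert np.allclose(NOT @ TRUE, FALSE)
assert np.allclose(NOT @ FALSE, TRUE)
assert np.allclose(NOT @ (NOT @ TRUE), TRUE)   # double negation
```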
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2952300142"
],
"abstract": [
"The development of compositional distributional models of semantics reconciling the empirical aspects of distributional semantics with the compositional aspects of formal semantics is a popular topic in the contemporary literature. This paper seeks to bring this reconciliation one step further by showing how the mathematical constructs commonly used in compositional distributional models, such as tensors and matrices, can be used to simulate different aspects of predicate logic. This paper discusses how the canonical isomorphism between tensors and multilinear maps can be exploited to simulate a full-blown quantifier-free predicate calculus using tensors. It provides tensor interpretations of the set of logical connectives required to model propositional calculi. It suggests a variant of these tensor calculi capable of modelling quantifiers, using few non-linear operations. It finally discusses the relation between these variants, and how this relation should constitute the subject of future work."
]
}
|
1306.2360
|
169938165
|
This paper studies the problem of serving multiple live video streams to several different clients from a single access point over unreliable wireless links, which is expected to be a major consumer of future wireless capacity. This problem involves two characteristics. On the streaming side, different video streams may generate variable-bit-rate traffic with different traffic patterns. On the network side, the wireless transmissions are unreliable, and the link qualities differ from client to client. In order to alleviate the above stochastic aspects of both video streams and link unreliability, each client typically buffers incoming packets before playing the video. The quality of the video playback subscribed to by each flow depends, among other factors, on both the delay of packets as well as their throughput. In this paper we characterize precisely the capacity of the wireless video server in terms of what combination of joint per-packet-delays and throughputs can be supported for the set of flows, as a function of the buffering delay introduced at the server. We also address how to schedule packets at the access point to satisfy the joint per-packet-delay-throughput performance measure. We test the designed policy on the traces of three movies. From our tests, it appears to outperform other policies by a large margin.
|
Hou et al. @cite_13 have proposed a model that jointly considers the per-packet delay bound and the per-flow throughput requirement. This model has been extended to consider variable-bit-rate traffic @cite_15 , fading wireless channels @cite_1 @cite_11 , mixtures of real-time and non-real-time traffic @cite_14 , and multi-hop wireless transmissions @cite_17 . However, these studies assume that all flows are synchronized and generate packets at the same time, and the results depend critically on this assumption. Moreover, they assume that all flows start playback immediately without buffering any packets, even though buffering is a critical feature of the playback process. Dutta et al. @cite_2 have studied serving video streams when receivers may buffer packets before playing them. They assume, however, that all packets are available at the server when the system starts, which is applicable to on-demand videos but not to live videos.
|
{
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_17",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2056459334",
"",
"2951365801",
"2150396122",
"1985441584",
"",
"2154594768"
],
"abstract": [
"This paper studies the problem of congestion control and scheduling in ad hoc wireless networks that have to support a mixture of best-effort and real-time traffic. Optimization and stochastic network theory have been successful in designing architectures for fair resource allocation to meet long-term throughput demands. However, to the best of our knowledge, strict packet delay deadlines were not considered in this framework previously. In this paper, we propose a model for incorporating the quality-of-service (QoS) requirements of packets with deadlines in the optimization framework. The solution to the problem results in a joint congestion control and scheduling algorithm that fairly allocates resources to meet the fairness objectives of both elastic and inelastic flows and per-packet delay requirements of inelastic flows.",
"",
"We study the problem of scheduling periodic real-time tasks so as to meet their individual minimum reward requirements. A task generates jobs that can be given arbitrary service times before their deadlines. A task then obtains rewards based on the service times received by its jobs. We show that this model is compatible to the imprecise computation models and the increasing reward with increasing service models. In contrast to previous work on these models, which mainly focus on maximize the total reward in the system, we aim to fulfill different reward requirements by different tasks, which offers better fairness and allows fine-grained tradeoff between tasks. We first derive a necessary and sufficient condition for a system, along with reward requirements of tasks, to be feasible. We also obtain an off-line feasibility optimal scheduling policy. We then studies a sufficient condition for a policy to be feasibility optimal or achieves some approximation bound. This condition can serve as a guideline for designing on-line scheduling policy and we obtains a greedy policy based on it. We prove that the on-line policy is feasibility optimal when all tasks have the same periods and also obtain an approximation bound for the policy under general cases.",
"Managing the Quality-of-Experience (QoE) of video streaming for wireless clients is becoming increasingly important due to the rapid growth of video traffic on wireless networks. The inherent variability of the wireless channel as well as the Variable Bit Rate (VBR) of the compressed video streams make QoE management a challenging problem. Prior work has studied this problem in the context of transmitting a single video stream. In this paper, we investigate multiplexing schemes to transmit multiple video streams from a base station to mobile clients that use number of playout stalls as a performance metric. In this context, we present an epoch-by-epoch framework to fairly allocate wireless transmission slots to streaming videos. In each epoch our scheme essentially reduces the vulnerability to stalling by allocating slots to videos in a way that maximizes the minimum ‘playout lead’ across all videos. Next, we show that the problem of allocating slots fairly is NP-complete even for a constant number of videos. We then present a fast lead-aware greedy algorithm for the problem. Our choice of greedy algorithm is motivated by the fact that this algorithm is optimal when the channel quality of a user remains unchanged within an epoch (but different users may experience different channel quality). Moreover, our experimental results based on public MPEG-4 video traces and wireless channel traces that we collected from a WiMAX test-bed show that the lead-aware greedy approach performs a fair distribution of stalls across the clients when compared to other algorithms, while still maintaining similar or lower average number of stalls per client.",
"Providing differentiated Quality of Service (QoS) over unreliable wireless channels is an important challenge for supporting several future applications. We analyze a model that has been proposed to describe the QoS requirements by four criteria: traffic pattern, channel reliability, delay bound, and throughput bound. We study this mathematical model and extend it to handle variable bit rate applications. We then obtain a sharp characterization of schedulability vis-a-vis latencies and timely throughput. Our results extend the results so that they are general enough to be applied on a wide range of wireless applications, including MPEG Variable-Bit-Rate (VBR) video streaming, VoIP with differentiated quality, and wireless sensor networks (WSN). Two major issues concerning QoS over wireless are admission control and scheduling. Based on the model incorporating the QoS criteria, we analytically derive a necessary and sufficient condition for a set of variable bit-rate clients to be feasible. Admission control is reduced to evaluating the necessary and sufficient condition. We further analyze two scheduling policies that have been proposed, and show that they are both optimal in the sense that they can fulfill every set of clients that is feasible by some scheduling algorithms. The policies are easily implemented on the IEEE 802.11 standard. Simulation results under various settings support the theoretical study.",
"",
"This paper studies the problem of scheduling in single-hop wireless networks with real-time traffic, where every packet arrival has an associated deadline and a minimum fraction of packets must be transmitted before the end of the deadline. Using optimization and stochastic network theory we study the problem of scheduling to meet quality of service (QoS) requirements under heterogeneous delay constraints and time-varying channel conditions. Our analysis results in an optimal scheduling algorithm which fairly allocates data rates to all flows while meeting long-term delay demands. We also prove that under a simplified scenario our solution translates into a greedy strategy that makes optimal decisions with low complexity."
]
}
|
1306.1556
|
2950074180
|
The interference in wireless networks is temporally correlated, since the node or user locations are correlated over time and the interfering transmitters are a subset of these nodes. For a wireless network where (potential) interferers form a Poisson point process and use ALOHA for channel access, we calculate the joint success and outage probabilities of n transmissions over a reference link. The results are based on the diversity polynomial, which captures the temporal interference correlation. The joint outage probability is used to determine the diversity gain (as the SIR goes to infinity), and it turns out that there is no diversity gain in simple retransmission schemes, even with independent Rayleigh fading over all links. We also determine the complete joint SIR distribution for two transmissions and the distribution of the local delay, which is the time until a repeated transmission over the reference link succeeds.
|
A separate line of work focuses on the local delay, which is the time it takes for a node to connect to a nearby neighbor. The local delay, introduced in @cite_8 and further investigated in @cite_2 @cite_1 , is a sensitive indicator of correlations in the network. In @cite_6 the two lines of work are combined, and approximate joint temporal statistics of the interference are used to derive throughput and local delay results in the high-reliability regime. In @cite_7 , the mean local delays for ALOHA and frequency-hopping multiple access (FHMA) are compared, and it is shown that FHMA has comparable performance in the mean delay but is significantly more efficient than ALOHA in terms of the delay variance.
|
{
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2"
],
"mid": [
"2104046024",
"2168164497",
"2040269208",
"2154107426",
"1986417832"
],
"abstract": [
"The capacity of wireless networks is fundamentally limited by interference. However, little research has focused on the interference correlation, which may greatly increase the local delay (namely the number of time slots required for a node to successfully transmit a packet). This paper focuses on the question whether increasing randomness in the MAC, specifically frequency-hopping multiple access (FHMA) and ALOHA, helps to reduce the effect of interference correlation. We derive closed-form results for the mean and variance of the local delay for the two MAC protocols and evaluate the optimal parameters that minimize the mean local delay. Based on the optimal parameters, we identify two operating regimes, the correlation-limited regime and the bandwidth-limited regime. Our results reveal that while the mean local delays for FHMA with N sub-bands and for ALOHA with transmit probability p essentially coincide when p=1 N, a fundamental discrepancy exists between their variances. We also discuss implications from the analysis, including an interesting mean delay-jitter tradeoff, and convenient bounds on the tail probability of the local delay, which shed useful insights into system design.",
"We study a slotted version of the Aloha Medium Access (MAC) protocol in a Mobile Ad-hoc Network (MANET). Our model features transmitters randomly located in the Euclidean plane, according to a Poisson point process and a set of receivers representing the next-hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of times slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has finite-mean geometric random delay and thus a positive next hop throughput. However, the spatial (or large population) averaging of these individual finite mean-delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above. To the best of our knowledge, this phenomenon, which we propose to call the wireless contention phase transition, has not been discussed in the literature. We comment on the relationships between the above facts and the heavy tails found in the so-called \"RESTART\" algorithm. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the \"RESTART\" mechanism, which in turn explains why we have infinite spatial average. Adaptive coding offers another nice way of breaking the outage RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.",
"For communication between two neighboring nodes in wireless networks, the local delay is defined as the time it takes a node to successfully transmit a packet. Previous research focuses on the local delay in static or infinitely mobile Poisson networks with ALOHA. In this paper, we extend the local delay results to Poisson networks with finite mobility. The results obtained show that mobility helps reduce the local delay. Bounds of the local delay in mobile Poisson networks are derived for different mobility and transmission models. The phase transition that marks the jump of the local delay from finite to infinite is also characterized.",
"Communication in decentralized wireless networks is limited by interference. Because transmissions typically last for more than a single contention time slot, interference often exhibits a strong statistical dependence over time that results in temporally correlated communication performance. The temporal dependence in interference increases as user mobility decreases and or the total transmission time increases. We propose a network model that spans the extremes of temporal independence to long-term temporal dependence. Using the proposed model, closed-form single hop communication performance metrics are derived that are asymptotically exact in the low outage regime. The primary contributions are (i) deriving the joint temporal statistics of network interference and showing that it follows a multivariate symmetric alpha stable distribution; (ii) utilizing the joint interference statistics to derive closed-form expressions for local delay, throughput outage probability, and average network throughput; and (iii) using the joint interference statistics to redefine and analyze the network transmission capacity that captures the throughput-delay-reliability tradeoffs in single hop transmissions. Simulation results verify the closed-form expressions derived in this paper and we demonstrate up to 2× gain in network throughput and reliability by optimizing certain parameters of medium access control layer protocol in view of the temporal correlations.",
"Communication between two neighboring nodes is a very basic operation in wireless networks. Yet very little research has focused on the local delay in networks with randomly placed nodes, defined as the mean time it takes a node to connect to its nearest neighbor. We study this problem for Poisson networks, first considering interference only, then noise only, and finally and briefly, interference plus noise. In the noiseless case, we analyze four different types of nearest-neighbor communication and compare the extreme cases of high mobility, where a new Poisson process is drawn in each time slot, and no mobility, where only a single realization exists and nodes stay put forever. It turns out that the local delay behaves rather differently in the two cases. We also provide the low- and high-rate asymptotic behavior of the minimum achievable delay in each case. In the cases with noise, power control is essential to keep the delay finite, and randomized power control can drastically reduce the required (mean) power for finite local delay."
]
}
|
1306.1265
|
2953324705
|
For all @math , we show that the set of Poisson Binomial distributions on @math variables admits a proper @math -cover in total variation distance of size @math , which can also be computed in polynomial time. We discuss the implications of our construction for approximation algorithms and the computation of approximate Nash equilibria in anonymous games.
|
In Probability and Statistics there is a broad literature studying various properties of these distributions; see @cite_18 for an introduction to some of this work. Many results provide approximations to the Poisson Binomial distribution via simpler distributions. In a well-known result, Le Cam @cite_19 shows that, for any vector @math , @math where @math is the Poisson distribution with parameter @math . Subsequently, many other proofs of this bound and improved ones, such as Theorem of , were given using a range of different techniques; @cite_21 @cite_2 @cite_24 @cite_8 are a sampling of work along these lines, and Steele @cite_5 gives an extensive list of relevant references. Much work has also been done on approximating PBDs by Normal distributions (see e.g. @cite_25 @cite_11 @cite_13 @cite_27 @cite_23 ) and by Binomial distributions; see e.g. Ehm's result @cite_7 , given as Theorem of , as well as Soon's result @cite_4 and Roos's result @cite_22 , given as Theorem of .
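As a quick numerical illustration (not taken from any of the cited works), one can compute a Poisson Binomial distribution exactly by convolving Bernoulli probability mass functions and compare it, in total variation distance, with the Poisson distribution of the same mean. The reference quantity sum_i p_i^2 reflects one commonly cited form of Le Cam's bound; the exact constant is not asserted here.

```python
# Numerical sketch: exact PBD pmf via convolution, compared with the Poisson
# approximation of the same mean in total variation distance.
import numpy as np
from scipy.stats import poisson

def pbd_pmf(ps):
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])   # add one Bernoulli(p) variable
    return pmf

ps = np.array([0.05, 0.1, 0.02, 0.2, 0.15, 0.01])
pmf = pbd_pmf(ps)
lam = ps.sum()
pois = poisson.pmf(np.arange(len(pmf)), lam)

# TV distance: sum over the PBD support plus the Poisson tail mass beyond it.
tv = 0.5 * (np.abs(pmf - pois).sum() + max(0.0, 1.0 - pois.sum()))
print(f"TV distance  : {tv:.4f}")
print(f"sum of p_i^2 : {(ps**2).sum():.4f}")
```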
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2085786299",
"2337787166",
"2094799272",
"2063522056",
"2017388832",
"2049446512",
"2084454857",
"2095575865",
"2047376378",
"364436601",
"2067676737",
"2052974900",
"",
"2020658897",
""
],
"abstract": [
"",
"A binomial approximation theorem for dependent indicators using Stein's method and coupling is proved. The approximating binomial distribution B(n � ,p � ) is chosen in such a way that its first moment is equal to that of W and its variance is asymptotically equal to that of W as ntends to infinity where W is the sum of independent indicators and pis bounded away from 1. Three examples, one of which concerns two different approximations for the hypergeometric distribution, are given to illustrate applications of the theorem obtained.",
"The Poisson binomial distribution is approximated by a binomial distribution and also by finite signed measures resulting from the corresponding Krawtchouk expansion. Bounds and asymptotic relations for the total variation distance and the point metric are given.",
"Upper and lower bounds are given for the total variation distance between the distribution of a sum S of n independent, non-identically distributed 0-1 random variables and the binomial distribution (n, p) having the same expectation as S. The proof uses the Stein--Chen technique. Equivalence of the total variation and the Kolmogorov distance is established, and an application to sampling with and without replacement is presented.",
"",
"",
"On utilise l'adaptation elegante de Chen de la methode de Stein pour ameliorer des estimateurs",
"",
"This paper introduces an estimate of approximation errors for the distribution function of a sum of random indicators. The approximation is demonstrated with the help of the problem of estimating the distribution function of the empty cells number in the equiprobable scheme for group distribution of particles.",
"Preface.- 1.Introduction.- 2.Fundamentals of Stein's Method.- 3.Berry-Esseen Bounds for Independent Random Variables.- 4.L^1 Bounds.- 5.L^1 by Bounded Couplings.- 6 L^1: Applications.- 7.Non-uniform Bounds for Independent Random Variables.- 8.Uniform and Non-uniform Bounds under Local Dependence.- 9.Uniform and Non-Uniform Bounds for Non-linear Statistics.- 10.Moderate Deviations.- 11.Multivariate Normal Approximation.- 12.Discretized normal approximation.- 13.Non-normal Approximation.- 14.Extensions.- References.- Author Index .- Subject Index.- Notation.",
"",
"where A = P1 + P2 + * .. + Pn Naturally, this inequality contains the classical Poisson limit law (Just set pi = A n and note that the right side simplifies to 2A2 n), but it also achieves a great deal more. In particular, Le Cam's inequality identifies the sum of the squares of the pi as a quantity governing the quality of the Poisson approximation. Le Cam's inequality also seems to be one of those facts that repeatedly calls to be proved-and improved. Almost before the ink was dry on Le Cam's 1960 paper, an elementary proof was given by Hodges and Le Cam [18]. This proof was followed by numerous generalizations and refinements including contributions by Kerstan [19], Franken [15], Vervatt [30], Galambos [17], Freedman [16], Serfling [24], and Chen [11, 12]. In fact, for raw simplicity it is hard to find a better proof of Le Cam's inequality than that given in the survey of Serfling [25]. One purpose of this note is to provide a proof of Le Cam's inequality using some basic facts from matrix analysis. This proof is simple, but simplicity is not its raison d'etre. It also serves as a concrete introduction to the semi-group method for approximation of probability distributions. This method was used in Le Cam [20], and it has been used again most recently by Deheuvels and Pfeifer [13] to provide impressively precise results. The semi-group method is elegant and powerful, but it faces tough competition, especially from the coupling method and the Chen-Stein method. The literature of these methods is reviewed, and it is shown how they also lead to proofs of Le Cam's inequality.",
"",
"The sum of finitely many variates possesses, under familiar conditions, an almost Gaussian probability distribution. This already much discussed \"central limit theorem\"(x) in the theory of probability is the object of further investigation in the present paper. The cases of Liapounoff(2), Lindeberg(3), and Feller(4) will be reviewed. Numerical estimates for the degrees of approximation attained in these cases will be presented in the three theorems of §4. Theorem 3, the arithmetical refinement of the general theorem of Feller, constitutes our principal result. As the foregoing implies, we require throughout the paper that the given variates be totally independent. And we consider only one-dimensional variates. The first three sections of the paper are devoted to the preparatory Theorem 1 in which the variates.meet the further condition of possessing finite third order absolute moments. Let X , Xi, • • • , Xn be the given variates. For each k k = ,2, ■ ■ ■ , n) let ^(Xk) and ixs Xk) denote, respectively, the second and third order absolute moments of Xk about its mean (expected) value a*. These moments are either both zero or both positive. The former case arises only when Xk is essentially constant, i.e., differs from its mean value at most in cases of total probability zero. To avoid trivialities we suppose that PziXk) >0 for at least one k (k = 1, 2, • • • , n). The non-negative square root of m Xk) is the standard deviation of Xk and will be denoted by ak. We call",
""
]
}
|
1306.1265
|
2953324705
|
For all @math , we show that the set of Poisson Binomial distributions on @math variables admits a proper @math -cover in total variation distance of size @math , which can also be computed in polynomial time. We discuss the implications of our construction for approximation algorithms and the computation of approximate Nash equilibria in anonymous games.
|
These results provide structural information about PBDs that can be well approximated by simpler distributions, but fall short of our goal of approximating a PBD to within arbitrary accuracy . Indeed, the approximations obtained in the probability literature (such as the Poisson, Normal and Binomial approximations) typically depend on the first few moments of the PBD being approximated, while higher moments are crucial for arbitrary approximation @cite_22 . At the same time, algorithmic applications often require that the approximating distribution is of the same kind as the distribution that is being approximated. E.g., in the anonymous game application mentioned earlier, the parameters of the given PBD correspond to mixed strategies of players at Nash equilibrium, and the parameters of the approximating PBD correspond to mixed strategies at approximate Nash equilibrium. Approximating the given PBD via a Poisson or a Normal distribution would not have any meaning in the context of a game.
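As a rough numerical illustration of this point (not taken from the paper, and with arbitrary example parameters), the following Python sketch compares a Poisson Binomial distribution against a Poisson and a Binomial approximation in total variation distance, together with the Le Cam-type bound driven by the sum of squared parameters:

# Illustrative sketch (not from the paper): compare a Poisson Binomial
# distribution (PBD) with a Poisson and a Binomial approximation in total
# variation distance. Parameter values below are arbitrary examples.
import numpy as np
from scipy import stats

def pbd_pmf(p):
    """Exact pmf of a sum of independent Bernoulli(p_i), by convolution."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

def tv_distance(f, g):
    n = max(len(f), len(g))
    a = np.pad(f, (0, n - len(f)))
    b = np.pad(g, (0, n - len(g)))
    return 0.5 * np.abs(a - b).sum()

p = np.array([0.1, 0.2, 0.05, 0.4, 0.3, 0.15])   # hypothetical PBD parameters
pbd = pbd_pmf(p)
lam = p.sum()
k = np.arange(len(pbd) + 30)                     # extend support to capture the Poisson tail

poisson_approx = stats.poisson.pmf(k, lam)       # matches the first moment only
binom_approx = stats.binom.pmf(k, len(p), p.mean())

# Le Cam's inequality bounds the L1 distance (= 2 * TV) by 2 * sum(p_i^2),
# i.e. TV(PBD, Poisson(lam)) <= sum(p_i^2).
print("TV(PBD, Poisson)  =", tv_distance(pbd, poisson_approx))
print("TV bound sum(p^2) =", (p ** 2).sum())
print("TV(PBD, Binomial) =", tv_distance(pbd, binom_approx))

The printed distances can be compared with the bound to see how tight such moment-based guarantees are for this particular parameter vector.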
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2094799272"
],
"abstract": [
"The Poisson binomial distribution is approximated by a binomial distribution and also by finite signed measures resulting from the corresponding Krawtchouk expansion. Bounds and asymptotic relations for the total variation distance and the point metric are given."
]
}
|
1306.1356
|
2953076353
|
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
|
Let us briefly discuss related theoretical studies on the recovery of analysis-sparse vectors and compare them with our main results. An earlier version of Theorem was shown by Candès and Needell in @cite_21 . However, they were only able to treat the case that the analysis operator is given by a tight frame, that is, when @math . Moreover, their analysis is based on a version of the restricted isometry property and does not provide explicit constants in the corresponding bound on the required number of measurements. To be fair, we note, however, that their analysis applies to general subgaussian random matrices. The results of @cite_21 were extended to the case of non-tight frames and Weibull matrices in the work of Foucart in @cite_15 . The analysis in @cite_15 incorporates the robust null space property, the verification of which for Weibull matrices relies on a variant of the classical restricted isometry property. In our work we prove that Gaussian random matrices satisfy the robust null space property by referring to a modification of Gordon's escape through a mesh theorem.
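For concreteness, here is a minimal numerical sketch of analysis l1-minimization with a Gaussian measurement matrix; it is an illustration only (assuming numpy and cvxpy are available), the problem sizes and the random tight frame construction are arbitrary choices, and it is not the paper's code:

# Minimal sketch (not the paper's code): analysis l1-minimization
#     minimize ||D^T z||_1   subject to   A z = y
# with a Gaussian measurement matrix A and a random tight frame D.
# Requires numpy and cvxpy; sizes are arbitrary illustration values.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, m = 64, 128, 40            # signal dim, frame size, number of measurements

# Random tight frame D (n x p) with D @ D.T = I_n.
G = rng.standard_normal((n, p))
U, _, Vt = np.linalg.svd(G, full_matrices=False)
D = U @ Vt

# Signal built from a few frame atoms, so D.T @ x_true is compressible
# (not exactly sparse for a redundant frame; this is only an illustration).
support = rng.choice(p, size=5, replace=False)
x_true = D[:, support] @ rng.standard_normal(5)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian measurement matrix
y = A @ x_true

z = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(D.T @ z)), [A @ z == y])
problem.solve()

print("relative error:", np.linalg.norm(z.value - x_true) / np.linalg.norm(x_true))

Rerunning the sketch for increasing m gives a crude empirical view of how many Gaussian measurements this particular setup needs for accurate recovery.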
|
{
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"2083945855",
"2103955025"
],
"abstract": [
"Abstract We investigate the recovery of almost s -sparse vectors x ∈ C N from undersampled and inaccurate data y = A x + e ∈ C m by means of minimizing ‖ z ‖ 1 subject to the equality constraints A z = y . If m ≍ s ln ( N s ) and if Gaussian random matrices A ∈ R m × N are used, this equality-constrained l 1 -minimization is known to be stable with respect to sparsity defects and robust with respect to measurement errors. If m ≍ s ln ( N s ) and if Weibull random matrices are used, we prove here that the equality-constrained l 1 -minimization remains stable and robust. The arguments are based on two key ingredients, namely the robust null space property and the quotient property. The robust null space property relies on a variant of the classical restricted isometry property where the inner norm is replaced by the l 1 -norm and the outer norm is replaced by a norm comparable to the l 2 -norm. For the l 1 -minimization subject to inequality constraints, this yields stability and robustness results that are also valid when considering sparsity relative to a redundant dictionary. As for the quotient property, it relies on lower estimates for the tail probability of sums of independent Weibull random variables.",
"This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ‘1-analysis optimization problem. We introduce a condition on the measurement sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the rst of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ‘1-analysis for such problems."
]
}
|
1306.1356
|
2953076353
|
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
|
A recent contribution by Needell and Ward @cite_17 provides theoretical recovery guarantees for the special case of total variation minimization, which corresponds to analysis @math -minimization with a certain difference operator. Unfortunately, we cannot cover this situation with our main results because the difference operator is not a frame. Nevertheless, it would be interesting to pursue theoretical recovery guarantees for total variation minimization and Gaussian random matrices using the approach of this paper.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2054010646"
],
"abstract": [
"This paper presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from @math nonadaptive linear measurements, an image can be reconstructed to within the best @math -term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix."
]
}
|
1306.1356
|
2953076353
|
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
|
The work in @cite_24 provides a systematic introduction to the analysis sparsity model and also treats greedy recovery methods; see also @cite_25 . Further contributions are contained in @cite_31 @cite_27 .
|
{
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_25"
],
"mid": [
"2152171106",
"2142211803",
"",
"2107059427"
],
"abstract": [
"Abstract After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get a similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments, and provide a detailed study of the model associated with the 2D finite difference analysis operator, a close cousin of the TV norm.",
"This paper investigates the theoretical guarantees of l1-analysis regularization when solving linear inverse problems. Most of previous works in the literature have mainly focused on the sparse synthesis prior where the sparsity is measured as the l1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the l1 norm of the correlations between the signal and the atoms in the dictionary, where these correlations define the analysis support. The corresponding variational problem encompasses several well-known regularizations such as the discrete total variation and the fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in presence of noise. More precisely, we give a sufficient condition to ensure that a signal is the unique solution of the l1 -analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and l2-robustness of the l1-analysis minimizer vis-a-vis an enough small noise in the measurements. This condition turns to be sharp for the robustness of the sign pattern. To show partial support recovery and l2 -robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to the l1-synthesis regularization, our results recover some corresponding recovery and robustness guarantees previously known in the literature. From this perspective, our work is a generalization of these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation, shift-invariant Haar dictionary, and fused Lasso regularizations.",
"",
"The cosparse analysis model has been introduced recently as an interesting alternative to the standard sparse synthesis approach. A prominent question brought up by this new construction is the analysis pursuit problem – the need to find a signal belonging to this model, given a set of corrupted measurements of it. Several pursuit methods have already been proposed based on l1 relaxation and a greedy approach. In this work we pursue this question further, and propose a new family of pursuit algorithms for the cosparse analysis model, mimicking the greedy-like methods – compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). Assuming the availability of a near optimal projection scheme that finds the nearest cosparse subspace to any vector, we provide performance guarantees for these algorithms. Our theoretical study relies on a restricted isometry property adapted to the context of the cosparse analysis model. We explore empirically the performance of these algorithms by adopting a plain thresholding projection, demonstrating their good performance."
]
}
|
1306.1356
|
2953076353
|
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
|
Our nonuniform recovery guarantees rely on a geometric characterization of successful recovery. We obtain quantitative estimates by bounding a certain Gaussian width, which can be thought of as an intrinsic complexity measure. The authors of @cite_3 exploit the geometry of optimality conditions to study phase transition phenomena in random linear inverse problems and random demixing problems. They express their results in terms of the statistical dimension, which is essentially equivalent to the Gaussian width; see Section 10.3 of @cite_3 for further details.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"1800334520"
],
"abstract": [
"Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the @math minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. @PARASPLIT This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. @PARASPLIT These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone."
]
}
|
1306.1356
|
2953076353
|
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
|
Also, we note that the optimization problems considered above often appear in image processing @cite_16 @cite_20 .
|
{
"cite_N": [
"@cite_16",
"@cite_20"
],
"mid": [
"2086670019",
"1579559187"
],
"abstract": [
"Split Bregman methods introduced in [T. Goldstein and S. Osher, SIAM J. Imaging Sci., 2 (2009), pp. 323–343] have been demonstrated to be efficient tools for solving total variation norm minimization problems, which arise from partial differential equation based image restoration such as image denoising and magnetic resonance imaging reconstruction from sparse samples. In this paper, we prove the convergence of the split Bregman iterations, where the number of inner iterations is fixed to be one. Furthermore, we show that these split Bregman iterations can be used to solve minimization problems arising from the analysis based approach for image restoration in the literature. We apply these split Bregman iterations to the analysis based image restoration approach whose analysis operator is derived from tight framelets constructed in [A. Ron and Z. Shen, J. Funct. Anal., 148 (1997), pp. 408–447]. This gives a set of new frame based image restoration algorithms that cover several topics in image restorations...",
"Preface 1. Introduction 2. Some modern image analysis tools 3. Image modeling and representation 4. Image denoising 5. Image deblurring 6. Image inpainting 7. Image processing: segmentation Bibliography Index."
]
}
|
1306.0963
|
2952998053
|
We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on-the-fly by teams of human planners. A human operator then translates the agreed upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our approach combines probabilistic generative modeling with logical plan validation used to compute a highly structured prior over possible plans. This hybrid approach enables us to overcome the challenge of performing inference over the large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experimentation and show we are able to infer a human team's final plan with 83 accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work that integrates a logical planning technique within a generative model to perform plan inference.
|
This problem could also be approached as a logical constraint problem of partial order planning, if there were no noise in the utterances. In other words, if the team discussed only the partial plans relating to the final plan and did not make any errors or revisions, then a plan generator such as a PDDL solver @cite_3 could produce the final plan with global sequencing. Unfortunately human conversation data is sufficiently noisy to preclude this approach.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"1974951007"
],
"abstract": [
"Abstract Metric temporal planning involves both selecting and organising actions to satisfy the goals and also assigning to each of these actions its start time and, where necessary, its duration. The assignment of start times to actions is a central concern of scheduling. In pddl 2.1, the widely adopted planning domain description language standard, metric temporal planning problems are described using actions with durations. A large number of planners have been developed to handle this language, but the great majority of them are fundamentally limited in the class of temporal problems they can solve. In this paper, we review the source of this limitation and present an approach to metric temporal planning that is not so restricted. Our approach links planning and scheduling algorithms into a planner, Crikey , that can successfully tackle a wide range of temporal problems. We show how Crikey can be simplified to solve a wide and interesting subset of metric temporal problems, while remaining competitive with other temporal planners that are unable to handle required concurrency. We provide empirical data comparing the performance of this planner, CRIKEY SHE , our original version, Crikey , and a range of other modern temporal planners. Our contribution is to describe the first competitive planner capable of solving problems that require concurrent actions."
]
}
|
1306.0963
|
2952998053
|
We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on-the-fly by teams of human planners. A human operator then translates the agreed upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our approach combines probabilistic generative modeling with logical plan validation used to compute a highly structured prior over possible plans. This hybrid approach enables us to overcome the challenge of performing inference over the large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experimentation and show we are able to infer a human team's final plan with 83 accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work that integrates a logical planning technique within a generative model to perform plan inference.
|
This motivates a combined approach. We build a probabilistic generative model for the structured utterance observations. We use a logic-based plan validator @cite_11 to compute a highly structured prior distribution over possible plans, which encodes our assumption that the final plan is likely, but not required, to be a valid plan. This combined approach naturally deals with noise in the data and the challenge of performing inference over plans with only a small amount of data. We perform sampling inference in the model using Gibbs sampling with Metropolis-Hastings steps to approximate the posterior distribution over final plans. We show through empirical validation with human subject experiments that the algorithm achieves 83% accuracy on average.
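As a generic illustration of this inference pattern (and emphatically not the paper's actual model), the following toy sketch runs Metropolis-Hastings steps inside a Gibbs sweep over discrete latent assignments, with a soft validity prior standing in for the validator-induced structured prior; all variable names, sizes and noise levels are hypothetical:

# Toy illustration (not the paper's model): Metropolis-Hastings steps inside a
# Gibbs sweep over discrete latent assignments z, with a soft "validity" prior
# and noisy categorical observations. All sizes and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
K, M = 6, 4          # number of latent slots, number of possible values
NOISE = 0.2          # probability that an observation is corrupted

def log_prior(z):
    # Structured prior: "valid" configurations (here, nondecreasing sequences)
    # receive extra weight; validity is encouraged, not enforced.
    valid = all(z[i] <= z[i + 1] for i in range(len(z) - 1))
    return 0.0 if valid else -3.0

def log_likelihood(z, obs):
    ll = 0.0
    for zi, oi in zip(z, obs):
        p = (1 - NOISE) + NOISE / M if zi == oi else NOISE / M
        ll += np.log(p)
    return ll

def log_post(z, obs):
    return log_prior(z) + log_likelihood(z, obs)

z_true = np.sort(rng.integers(0, M, size=K))                    # a "valid" plan
obs = np.where(rng.random(K) < NOISE, rng.integers(0, M, size=K), z_true)

z = rng.integers(0, M, size=K)
samples = []
for sweep in range(2000):
    for i in range(K):                       # Gibbs sweep over coordinates
        proposal = z.copy()
        proposal[i] = rng.integers(0, M)     # symmetric Metropolis-Hastings proposal
        if np.log(rng.random()) < log_post(proposal, obs) - log_post(z, obs):
            z = proposal
    if sweep >= 500:                         # discard burn-in
        samples.append(z.copy())

mode = np.array([np.bincount([s[i] for s in samples], minlength=M).argmax()
                 for i in range(K)])
print("true:", z_true, "inferred:", mode)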
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2149361370"
],
"abstract": [
"This work describes aspects of our plan validation tool, VAL. The tool was initially developed to support the 3rd International Planning Competition, but has subsequently been extended in order to exploit its capabilities in plan validation and development. In particular, the tool has been extended to include advanced features of PDDL2.1 which have proved important in mixed-initiative planning in a space operations project. Amongst these features, treatment of continuous effects is the most significant, with important effects on the semantic interpretation of plans. The tool has also been extended to keep abreast of developments in PDDL, providing critical support to participants and organisers of the 4th IPC."
]
}
|
1306.0963
|
2952998053
|
We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on-the-fly by teams of human planners. A human operator then translates the agreed upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our approach combines probabilistic generative modeling with logical plan validation used to compute a highly structured prior over possible plans. This hybrid approach enables us to overcome the challenge of performing inference over the large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experimentation and show we are able to infer a human team's final plan with 83 accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work that integrates a logical planning technique within a generative model to perform plan inference.
|
Combining a logical approach with probabilistic modeling has gained interest in recent years. @cite_5 introduce a language for describing statistical models over typed relational domains and demonstrate model learning using noisy and uncertain real-world data. @cite_22 introduce statistical sampling to improve the efficiency of search for satisfiability testing. @cite_2 @cite_13 @cite_23 @cite_7 introduce Markov logic networks, and form the joint distribution of a probabilistic graphical model by weighting the formulas in a first-order logic.
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"",
"2149612380",
"2150406842",
"",
"2002826893",
"1977970897"
],
"abstract": [
"",
"The past few years have witnessed an significant interest in probabilistic logic learning, i.e. in research lying at the intersection of probabilistic reasoning, logical representations, and machine learning. A rich variety of different formalisms and learning techniques have been developed. This paper provides an introductory survey and overview of the state-of-the-art in probabilistic logic learning through the identification of a number of important probabilistic, logical and learning concepts.",
"We present the first unsupervised approach to the problem of learning a semantic parser, using Markov logic. Our USP system transforms dependency trees into quasi-logical forms, recursively induces lambda forms from these, and clusters them to abstract away syntactic variations of the same meaning. The MAP semantic parse of a sentence is obtained by recursively assigning its parts to lambda-form clusters and composing them. We evaluate our approach by using it to extract a knowledge base from biomedical abstracts and answer questions. USP substantially outperforms TextRunner, DIRT and an informed baseline on both precision and recall on this task.",
"",
"Statistical Relational Learning (SRL) is a subarea of machine learning which combines elements from statistical and probabilistic modeling with languages which support structured data representations. In this survey, we will: 1) provide an introduction to SRL, 2) describe some of the distinguishing characteristics of SRL systems, including relational feature construction and collective classification, 3) describe three SRL systems in detail, 4) discuss applications of SRL techniques to important data management problems such as entity resolution, selectivity estimation, and information integration, and 5) discuss connections between SRL methods and existing database research such as probabilistic databases.",
"We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach."
]
}
|
1306.1157
|
2949141898
|
Discrete polymatroids are the multi-set analogue of matroids. In this paper, we explore the connections among linear network coding, linear index coding and representable discrete polymatroids. We consider vector linear solutions of networks over a field @math with possibly different message and edge vector dimensions, which are referred to as linear fractional solutions. We define a network and show that a linear fractional solution over a field @math exists for a network if and only if the network is discrete polymatroidal with respect to a discrete polymatroid representable over @math . An algorithm to construct networks starting from a certain class of discrete polymatroids is provided. Every representation over @math for the discrete polymatroid results in a linear fractional solution over @math for the constructed network. Next, we consider the index coding problem and show that a linear solution to an index coding problem exists if and only if there exists a representable discrete polymatroid satisfying certain conditions which are determined by the index coding problem considered. El Rouayheb et al. showed that the problem of finding a multi-linear representation for a matroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that matroid. We generalize the result of El Rouayheb et al. by showing that the problem of finding a representation for a discrete polymatroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that discrete polymatroid.
|
The concept of network coding, originally introduced by Ahlswede et al. in @cite_16 , allows a communication network to achieve more throughput than pure routing solutions can provide. For multicast networks, it was shown in @cite_6 that linear solutions exist for sufficiently large field size. An algebraic framework for finding linear solutions in networks was introduced in @cite_7 .
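The standard butterfly-network example illustrates this throughput gain from coding at intermediate nodes; the short sketch below (textbook material, not code from the cited works) checks that XOR coding on the bottleneck edge lets both sinks recover both source bits in a single network use:

# Textbook butterfly-network example (not code from the cited papers): linear
# network coding over GF(2). XORing the two source bits on the bottleneck edge
# lets both sinks decode both bits in a single use of the network.
def butterfly(b1: int, b2: int):
    left, right = b1, b2               # source sends b1 left and b2 right
    bottleneck = left ^ right          # coding node: XOR instead of forwarding one bit
    sink1 = (left, left ^ bottleneck)      # sink 1 sees (left, bottleneck)
    sink2 = (right ^ bottleneck, right)    # sink 2 sees (right, bottleneck)
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        s1, s2 = butterfly(b1, b2)
        assert s1 == (b1, b2) and s2 == (b1, b2)
print("both sinks recover (b1, b2) for every input combination")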
|
{
"cite_N": [
"@cite_16",
"@cite_7",
"@cite_6"
],
"mid": [
"2105831729",
"2138928022",
"2106403318"
],
"abstract": [
"We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.",
"We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays.",
"Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node."
]
}
|
1306.1157
|
2949141898
|
Discrete polymatroids are the multi-set analogue of matroids. In this paper, we explore the connections among linear network coding, linear index coding and representable discrete polymatroids. We consider vector linear solutions of networks over a field @math with possibly different message and edge vector dimensions, which are referred to as linear fractional solutions. We define a network and show that a linear fractional solution over a field @math exists for a network if and only if the network is discrete polymatroidal with respect to a discrete polymatroid representable over @math . An algorithm to construct networks starting from a certain class of discrete polymatroids is provided. Every representation over @math for the discrete polymatroid results in a linear fractional solution over @math for the constructed network. Next, we consider the index coding problem and show that a linear solution to an index coding problem exists if and only if there exists a representable discrete polymatroid satisfying certain conditions which are determined by the index coding problem considered. El Rouayheb et al. showed that the problem of finding a multi-linear representation for a matroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that matroid. We generalize the result of El Rouayheb et al. by showing that the problem of finding a representation for a discrete polymatroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that discrete polymatroid.
|
In scalar and vector network coding, it is inherently assumed that the dimensions of the message vectors are the same and also the same as the dimensions of the vectors carried in the edges of the network. It is possible that a network does not admit any scalar or vector solution, but admits a solution if all the dimensions of the message vectors are not equal to the edge vector dimension. Such network coding solutions, called Fractional Network Coding (FNC) solutions have been considered in @cite_12 @cite_9 @cite_4 . The work in @cite_12 primarily focusses on fractional routing, which is a special case of FNC. In @cite_9 , algorithms were provided to compute the capacity region for a network, which was defined to be the closure of all rates achievable using FNC. In @cite_4 , achievable rate regions for certain specific networks were found and it was shown that achievable rate regions using linear FNC need not be convex.
|
{
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_12"
],
"mid": [
"",
"2951824038",
"2114578240"
],
"abstract": [
"",
"Determining the achievable rate region for networks using routing, linear coding, or non-linear coding is thought to be a difficult task in general, and few are known. We describe the achievable rate regions for four interesting networks (completely for three and partially for the fourth). In addition to the known matrix-computation method for proving outer bounds for linear coding, we present a new method which yields actual characteristic-dependent linear rank inequalities from which the desired bounds follow immediately.",
"We define the routing capacity of a network to be the supremum of all possible fractional message throughputs achievable by routing. We prove that the routing capacity of every network is achievable and rational, we present an algorithm for its computation, and we prove that every rational number in (0, 1] is the routing capacity of some solvable network. We also determine the routing capacity for various example networks. Finally, we discuss the extension of routing capacity to fractional coding solutions and show that the coding capacity of a network is independent of the alphabet used"
]
}
|
1306.0813
|
1772235642
|
A UK-based online questionnaire investigating aspects of usage of user-generated media (UGM), such as Facebook, LinkedIn and Twitter, attracted 587 participants. Results show a high degree of engagement with social networking media such as Facebook, and a significant engagement with other media such as professional media, microblogs and blogs. Participants who experience information overload are those who engage less frequently with the media, rather than those who have fewer posts to read. Professional users show different behaviours to social users. Microbloggers complain of information overload to the greatest extent. Two thirds of Twitter-users have felt that they receive too many posts, and over half of Twitter-users have felt the need for a tool to filter out the irrelevant posts. Generally speaking, participants express satisfaction with the media, though a significant minority express a range of concerns including information overload and privacy.
|
Social-graph services such as Facebook and LinkedIn enforce relationship reciprocity by requiring that both parties consent to being linked. By not doing this, Twitter has allowed a fascinating evolution to take place, from a social-based small messaging service to a highly versatile social environment, finding its niche as the main interest-graph medium but not limited to that. Naaman @cite_16 found that over 40% of tweets were "me now" posts: posts by a user describing what they are currently doing, in the manner encouraged by the "what's happening" prompt. Next most common were statements and random thoughts, opinions and complaints, and information sharing such as links, each taking over 20%. Less common tweet themes were self-promotion, questions to followers, presence maintenance e.g. "I'm back!", anecdotes about oneself and anecdotes about another. Messages posted from mobile devices are more likely to be "me now" messages (51%); females post more "me now" messages than males. A relatively small number of people undertake information sharing as a major activity; users can be grouped into "informers" and "meformers", where meformers mostly share information about themselves. Informers and meformers differ in various ways. Informers tend to be more conversational and have more contacts.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2139575250"
],
"abstract": [
"In this work we examine the characteristics of social activity and patterns of communication on Twitter, a prominent example of the emerging class of communication systems we call \"social awareness streams.\" We use system data and message content from over 350 Twitter users, applying human coding and quantitative analysis to provide a deeper understanding of the activity of individuals on the Twitter network. In particular, we develop a content-based categorization of the type of messages posted by Twitter users, based on which we examine users' activity. Our analysis shows two common types of user behavior in terms of the content of the posted messages, and exposes differences between users in respect to these activities."
]
}
|
1306.0813
|
1772235642
|
A UK-based online questionnaire investigating aspects of usage of user-generated media (UGM), such as Facebook, LinkedIn and Twitter, attracted 587 participants. Results show a high degree of engagement with social networking media such as Facebook, and a significant engagement with other media such as professional media, microblogs and blogs. Participants who experience information overload are those who engage less frequently with the media, rather than those who have fewer posts to read. Professional users show different behaviours to social users. Microbloggers complain of information overload to the greatest extent. Two thirds of Twitter-users have felt that they receive too many posts, and over half of Twitter-users have felt the need for a tool to filter out the irrelevant posts. Generally speaking, participants express satisfaction with the media, though a significant minority express a range of concerns including information overload and privacy.
|
As well as enforced reciprocity, Facebook and LinkedIn differ from Twitter in that they involve a privacy system. Whereas all tweets are public, posts to Facebook and LinkedIn can be seen only by those to whom permission has been granted. Despite this, privacy is more of an issue among Facebook users than Twitter users, perhaps because some kind of decision-making process is entailed. Hoadly @cite_21 investigate the reaction among users when Facebook, in 2006, introduced the news feed, in which information users previously would have had to seek out by going to each others' pages was now aggregated into a time-ordered digest and placed front and centre on the site. "Perceived control" and "ease of information access" were determined to be factors in how comfortable a person feels with privacy aspects of using Facebook. Anecdotally, people seem far more likely to run into trouble with privacy on Facebook than on Twitter; in the UK for example, there have been several convictions resulting from criminal evidence posted by offenders @cite_20 . There is also some suggestion that privacy settings, despite correct use by users, may be overcome relatively easily, making privacy relatively illusory @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_20"
],
"mid": [
"2103133870",
"2113977894",
""
],
"abstract": [
"In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles.",
"Increasingly, millions of people, especially youth, post personal information in online social networks (OSNs). In September 2006, one of the most popular sites—Facebook.com—introduced the features of News Feed and Mini Feed, revealing no more information than before, but resulting in immediate criticism from users. To investigate the privacy controversy, we conducted a survey among 172 current Facebook users in a large US university to explore their usage behaviors and privacy attitudes toward the",
""
]
}
|
1306.0813
|
1772235642
|
A UK-based online questionnaire investigating aspects of usage of user-generated media (UGM), such as Facebook, LinkedIn and Twitter, attracted 587 participants. Results show a high degree of engagement with social networking media such as Facebook, and a significant engagement with other media such as professional media, microblogs and blogs. Participants who experience information overload are those who engage less frequently with the media, rather than those who have fewer posts to read. Professional users show different behaviours to social users. Microbloggers complain of information overload to the greatest extent. Two thirds of Twitter-users have felt that they receive too many posts, and over half of Twitter-users have felt the need for a tool to filter out the irrelevant posts. Generally speaking, participants express satisfaction with the media, though a significant minority express a range of concerns including information overload and privacy.
|
LinkedIn capitalises on its status as a permission-requiring social-graph medium in that it presents itself as a professional introductions service. The site encourages strictly professional information to be shared, and tends to attract older professionals @cite_0 . In their review of social and professional media in the workplace context, Skeels and Grudin @cite_0 find tension around privacy in social media use. However, they also report rapid adoption, as numerous benefits are found. They found, as will be echoed in our own findings, that users post more frequently to Facebook than LinkedIn, demonstrating a particular degree of attraction to that medium. Though users report great value to LinkedIn, they do not seem driven to visit the site so often. Our own investigation supports this observation; reasons for this will be explored later in the paper. LinkedIn usage seems limited to the professional context, whereas Twitter seems broadly undifferentiated in that regard, and a certain amount of Facebook usage will tend to be professional; for example, Facebook allows businesses to create pages on which they may publicise themselves.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2018739138"
],
"abstract": [
"The use of social networking software by professionals is increasing dramatically. How it is used, whether it enhances or reduces productivity, and how enterprise-friendly design and use might evolve are open questions. We examine attitudes and behaviors in a large, technologically-savvy organization through a broad survey and thirty focused interviews. We find extensive social and work uses, with complex patterns that differ with software system and networker age. Tensions arise when use spans social groups and the organization's firewall. Although use is predominantly to support weak ties whose contribution to productivity can be difficult to prove, we anticipate rapid uptake of social networking technology by organizations."
]
}
|
1306.1073
|
1622387430
|
Maintenance of multiple, distributed up-to-date copies of collections of changing Web resources is important in many application contexts and is often achieved using ad hoc or proprietary synchronization solutions. ResourceSync is a resource synchronization framework that integrates with the Web architecture and leverages XML sitemaps. We define a model for the ResourceSync framework as a basis for understanding its properties. We then describe experiments in which simulations of a variety of synchronization scenarios illustrate the effects of model configuration on consistency, latency, and data transfer efficiency. These results provide insight into which configurations are appropriate for various application scenarios.
|
The synchronization problem has been discussed in the context of clock synchronization @cite_3 , concurrency in distributed databases (e.g. @cite_13 ), and large-scale cloud computing (e.g., @cite_11 ). ResourceSync is certainly not designed for that purpose but focuses on global synchronization of resources across system boundaries.
|
{
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_11"
],
"mid": [
"1993505169",
"1973501242",
"2153704625"
],
"abstract": [
"In this paper we survey, consolidate, and present the state of the art in distributed database concurrency control. The heart of our analysts is a decomposition of the concurrency control problem into two major subproblems: read-write and write-write synchronization. We describe a series of synchromzation techniques for solving each subproblem and show how to combine these techniques into algorithms for solving the entire concurrency control problem. Such algorithms are called \"concurrency control methods.\" We describe 48 principal methods, including all practical algorithms that have appeared m the literature plus several new ones. We concentrate on the structure and correctness of concurrency control algorithms. Issues of performance are given only secondary treatment.",
"The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.",
"Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use."
]
}
|
1306.1073
|
1622387430
|
Maintenance of multiple, distributed up-to-date copies of collections of changing Web resources is important in many application contexts and is often achieved using ad hoc or proprietary synchronization solutions. ResourceSync is a resource synchronization framework that integrates with the Web architecture and leverages XML sitemaps. We define a model for the ResourceSync framework as a basis for understanding its properties. We then describe experiments in which simulations of a variety of synchronization scenarios illustrate the effects of model configuration on consistency, latency, and data transfer efficiency. These results provide insight into which configurations are appropriate for various application scenarios.
|
The problem of changing resources has been discussed by Cho and Garcia-Molina @cite_16 for general Web documents and by @cite_17 for Linked Data resources. Linner @cite_2 gives an overview of existing techniques and proposes an instant state synchronization approach for Web hypertext applications. His work is closely related to ours, with the main difference being that ResourceSync does not target real-time synchronization of application events.
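As a generic illustration of the change-frequency estimation idea underlying such studies (a sketch under a simple Poisson-change assumption, not part of ResourceSync and not necessarily the estimator of the cited work), one can back out a change rate from polls that only reveal whether a resource changed since the previous visit:

# Generic illustration (not part of ResourceSync, and not necessarily the cited
# estimator): recovering a Poisson change rate from polls that only report
# whether a resource changed since the previous visit. Values are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
lam_true = 0.3        # true change rate (changes per day)
T = 1.0               # polling interval (days)
n_polls = 500

changes = rng.poisson(lam_true * T, size=n_polls)     # changes per interval
detected = (changes > 0).sum()                        # a poll sees "changed or not"

frac = detected / n_polls
lam_hat = -np.log(1.0 - frac) / T     # invert P(detect) = 1 - exp(-lam * T)
print(f"true rate {lam_true:.3f}, estimated rate {lam_hat:.3f} changes per day")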
|
{
"cite_N": [
"@cite_16",
"@cite_2",
"@cite_17"
],
"mid": [
"1976624301",
"2501449638",
"2167764583"
],
"abstract": [
"Many online data sources are updated autonomously and independently. In this article, we make the case for estimating the change frequency of data to improve Web crawlers, Web caches and to help data mining. We first identify various scenarios, where different applications have different requirements on the accuracy of the estimated frequency. Then we develop several \"frequency estimators\" for the identified scenarios, showing analytically and experimentally how precise they are. In many cases, our proposed estimators predict change frequencies much more accurately and improve the effectiveness of applications. For example, a Web crawler could achieve 35p improvement in \"freshness\" simply by adopting our proposed estimator.",
"",
"Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets."
]
}
|
1306.0772
|
2008364132
|
We consider a general heterogeneous network in which, besides general propagation effects (shadowing and or fading), individual base stations can have different emitting powers and be subject to different parameters of Hata-like path-loss models (path-loss exponent and constant) due to, for example, varying antenna heights. We assume also that the stations may have varying parameters of, for example, the link layer performance (SINR threshold, etc). By studying the propagation processes of signals received by the typical user from all antennas marked by the corresponding antenna parameters, we show that seemingly different heterogeneous networks based on Poisson point processes can be equivalent from the point of view a typical user. These neworks can be replaced with a model where all the previously varying propagation parameters (including path-loss exponents) are set to constants while the only trade-off being the introduction of an isotropic base station density. This allows one to perform analytic comparisons of different network models via their isotropic representations. In the case of a constant path-loss exponent, the isotropic representation simplifies to a homogeneous modification of the constant intensity of the original network, thus generalizing a previous result showing that the propagation processes only depend on one moment of the emitted power and propagation effects. We give examples and applications to motivate these results and highlight an interesting observation regarding random path-loss exponents.
|
In the context of multi-tier (heterogeneous) cellular networks, the authors of @cite_11 and Mukherjee @cite_2 both derived results for the distribution of the (downlink) SINR based on models consisting of independent superpositions of Poisson processes with Rayleigh fading. @cite_0 obtained similar SINR expressions, but derived and used the above propagation invariance result to show that their (and by extension, the above) results hold for arbitrary propagation effects. Independently, the authors of @cite_7 used the same argument to derive the SINR-based @math -coverage probability for a single-tier network by first assuming Rayleigh fading, then lifting the assumption via propagation invariance. It should be stressed that this approach applies to all results based on functions of propagation processes and not just results involving sums or interference terms. A worthy pursuit would be to list all such results that hold under arbitrary propagation effects, but this is beyond the scope of this paper.
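For intuition about the quantities discussed here, a crude Monte Carlo sketch of the downlink SINR coverage probability for a typical user in a single-tier Poisson network with Rayleigh fading is given below; the density, path-loss exponent and threshold are arbitrary example values, and this simulation is only an illustration, not the closed-form results of the cited papers:

# Rough Monte Carlo sketch (illustration only, not the analytical results of the
# cited papers): downlink SINR coverage probability for a typical user at the
# origin in a homogeneous Poisson network with Rayleigh fading, power-law path
# loss and strongest-base-station association. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
lam = 1e-5         # base station density per square metre
alpha = 4.0        # path-loss exponent
R = 10000.0        # radius of the simulation window (metres)
noise = 0.0        # interference-limited regime
theta = 1.0        # SINR threshold (0 dB)

def coverage_probability(trials=2000):
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * R ** 2)
        if n == 0:
            continue                              # no BS in window: not covered
        r = R * np.sqrt(rng.random(n))            # uniform BS locations in a disc
        fading = rng.exponential(1.0, size=n)     # Rayleigh fading -> exp. power
        power = fading * r ** (-alpha)            # received powers at the origin
        signal = power.max()                      # attach to the strongest BS
        interference = power.sum() - signal
        if signal / (interference + noise) > theta:
            covered += 1
    return covered / trials

print("estimated coverage probability:", coverage_probability())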
|
{
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_7",
"@cite_11"
],
"mid": [
"2043180800",
"2059973889",
"2049496443",
"2149170915"
],
"abstract": [
"Abstract--- This paper studies the carrier-to-interference ratio (CIR) and carrier-to-interference-plus-noise ratio (CINR) performance at the mobile station (MS) within a multi-tier network composed of M tiers of wireless networks, with each tier modeled as the homogeneous n-dimensional (n-D, n=1,2, and 3) shotgun cellular system, where the base station (BS) distribution is given by the homogeneous Poisson point process in n-D. The CIR and CINR at the MS in a single tier network are thoroughly analyzed to simplify the analysis of the multi-tier network. For the multi-tier network with given system parameters, the following are the main results of this paper: (1) semi-analytical expressions for the tail probabilities of CIR and CINR; (2) a closed form expression for the tail probability of CIR in the range [1,infinity); (3) a closed form expression for the tail probability of an approximation to CINR in the entire range [0,infinity); (4) a lookup table based approach for obtaining the tail probability of CINR, and (5) the study of the effect of shadow fading and BSs with ideal sectorized antennas on the CIR and CINR. Based on these results, it is shown that, in a practical cellular system, the installation of additional wireless networks (microcells, picocells and femtocells) with low power BSs over the already existing macrocell network will always improve the CINR performance at the MS.",
"The Signal to Interference Plus Noise Ratio (SINR) on a wireless link is an important basis for consideration of outage, capacity, and throughput in a cellular network. It is therefore important to understand the SINR distribution within such networks, and in particular heterogeneous cellular networks, since these are expected to dominate future network deployments . Until recently the distribution of SINR in heterogeneous networks was studied almost exclusively via simulation, for selected scenarios representing pre-defined arrangements of users and the elements of the heterogeneous network such as macro-cells, femto-cells, etc. However, the dynamic nature of heterogeneous networks makes it difficult to design a few representative simulation scenarios from which general inferences can be drawn that apply to all deployments. In this paper, we examine the downlink of a heterogeneous cellular network made up of multiple tiers of transmitters (e.g., macro-, micro-, pico-, and femto-cells) and provide a general theoretical analysis of the distribution of the SINR at an arbitrarily-located user. Using physically realistic stochastic models for the locations of the base stations (BSs) in the tiers, we can compute the general SINR distribution in closed form. We illustrate a use of this approach for a three-tier network by calculating the probability of the user being able to camp on a macro-cell or an open-access (OA) femto-cell in the presence of Closed Subscriber Group (CSG) femto-cells. We show that this probability depends only on the relative densities and transmit powers of the macro- and femto-cells, the fraction of femto-cells operating in OA vs. Closed Subscriber Group (CSG) mode, and on the parameters of the wireless channel model. For an operator considering a femto overlay on a macro network, the parameters of the femto deployment can be selected from a set of universal curves.",
"We give numerically tractable, explicit integral expressions for the distribution of the signal-to-interference-and-noise-ratio (SINR) experienced by a typical user in the downlink channel from the k-th strongest base stations of a cellular network modelled by Poisson point process on the plane. Our signal propagation-loss model comprises of a power-law path-loss function with arbitrarily distributed shadowing, independent across all base stations, with and without Rayleigh fading. Our results are valid in the whole domain of SINR, in particular for SINR <; 1, where one observes multiple coverage. In this latter aspect our paper complements previous studies reported in [1].",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR."
]
}
|
1306.0772
|
2008364132
|
We consider a general heterogeneous network in which, besides general propagation effects (shadowing and/or fading), individual base stations can have different emitting powers and be subject to different parameters of Hata-like path-loss models (path-loss exponent and constant) due to, for example, varying antenna heights. We assume also that the stations may have varying parameters of, for example, the link layer performance (SINR threshold, etc.). By studying the propagation processes of signals received by the typical user from all antennas marked by the corresponding antenna parameters, we show that seemingly different heterogeneous networks based on Poisson point processes can be equivalent from the point of view of a typical user. These networks can be replaced with a model where all the previously varying propagation parameters (including path-loss exponents) are set to constants, with the only trade-off being the introduction of an isotropic base station density. This allows one to perform analytic comparisons of different network models via their isotropic representations. In the case of a constant path-loss exponent, the isotropic representation simplifies to a homogeneous modification of the constant intensity of the original network, thus generalizing a previous result showing that the propagation processes only depend on one moment of the emitted power and propagation effects. We give examples and applications to motivate these results and highlight an interesting observation regarding random path-loss exponents.
|
Our second result involves the equivalence of heterogeneous networks with random parameters, including path-loss exponents. For tractability, the aforementioned multi-tier results all assumed constant path-loss exponents across all tiers. @cite_9 extended this to a model with a different (but constant) path-loss exponent on each tier, but only assumed Rayleigh fading in their work and examined the SINR based on the base station with the smallest distance to the typical user. Also assuming different (but constant) path-loss exponents, @cite_6 generalized this approach to arbitrary propagation effects by using propagation invariance. For constant (but different) parameters across all tiers, they also showed that a multi-tier network is stochastically equivalent to a single-tier network with unity parameters while all the original parameters are incorporated into the density of the (inhomogeneous Poisson) propagation process. In the context of cellular networks or related fields, we are unaware of work involving random path-loss exponents or equivalence results to the level of generality (due to more randomized parameters) presented here.
|
{
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"2034420299",
"1977905728"
],
"abstract": [
"In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed-forms in some plausible special cases. We also derive the average ergodic rate of the typical user, and the minimum average user throughput - the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs or tiers changes the outage probability or average ergodic rate in an interference-limited full-loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics.",
"In this paper, we consider the downlink signal-to-interference-plus-noise ratio (SINR) analysis in a heterogeneous cellular network with K tiers. Each tier is characterized by a base-station (BS) arrangement according to a homogeneous Poisson point process with certain BS density, transmission power, random shadow fading factors with arbitrary distribution, arbitrary path-loss exponent and a certain bias towards admitting the mobile-station (MS). The MS associates with the BS that has the maximum instantaneous biased received power under the open access cell association scheme. For such a general setting, we provide an analytical characterization of the coverage probability at the MS."
]
}
|
1306.0195
|
1833224505
|
While passwords, by definition, are meant to be secret, recent trends have witnessed an increasing number of people sharing their email passwords with friends, colleagues, and significant others. However, leading websites like Google advise their users not to share their passwords with anyone, to avoid security and privacy breaches. To understand users' general password sharing behavior and practices, we conducted an online survey with 209 Indian participants and found that 64.35% of the participants felt a need to share their email passwords. Further, about 77% of the participants said that they would want to use a system which could provide them access control features, to maintain their privacy while sharing emails. To address the privacy concerns of users who need to share emails, we propose ChaMAILeon, a system which enables users to share their email passwords while maintaining their privacy. ChaMAILeon allows users to create multiple passwords for their email account. Each such password corresponds to a different set of access control rules, and gives a different view of the same email account. We conducted a controlled experiment with 30 participants to evaluate the usability of the system. Each participant was required to perform 5 tasks. Each task corresponded to different access control rules, which the participant was required to set, for a dummy email account. We found that, with a reasonable number of multiple attempts, all 30 participants were able to perform all 5 tasks given to them. The system usability score was found to be 75.42. Moreover, 56.6% of the participants said that they would like to use ChaMAILeon frequently.
|
Mifrenz (http://mifrenz.com) is another application, which allows parents to control the email accounts of their children. This application gives parents the ability to let their children safely use email, with the minimum of intervention @cite_25 . Although Mifrenz does not directly relate to the problem of password sharing, it overlaps with ChaMAILeon's objective of allowing partial, controlled access to an individual's emails.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"155891624"
],
"abstract": [
"Products currently available for monitoring children 's email usage are either considered to encourage dubious ethical behaviour or are time consuming for parents to administer. This paper describes the development of a new email client application for children called Mifrenz. This new application gives parents the ability to let their children safely use email, with the minimum of intervention. It was developed using mostly free software and also with the desire to provide real first hand programming examples to demonstrate to students."
]
}
|
1306.0193
|
2058210123
|
The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.
|
For most participatory sensing applications, availability of participants in terms of geographic and temporal coverage of the sensing area is important. For example, in a noise map sensing campaign, it is desirable to recruit participants who regularly pass through the sensing region and cover as much of the area as possible. @cite_4 proposed a recruitment framework for participatory sensing systems which aims at identifying suitable participants based on parameters such as the geographical and temporal availability of the participants. Participants have previously collected location traces for a period of time that represent their typical behaviour. The location traces are used by the qualifier component to select participants who meet the task's minimum location requirements. Once participants are selected, the assessment component identifies which subset of participants maximizes coverage over the task-specific area. These participants are then recruited and begin producing sensor data. During the campaign, the progress review component periodically monitors the coverage and availability of participants to see whether they remain consistent with task requirements.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"1553085258"
],
"abstract": [
"Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm - participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved. In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus."
]
}
|
1306.0193
|
2058210123
|
The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.
|
@cite_10 also proposed a distributed recruitment and data collection framework for opportunistic sensing. The recruitment component exploits the suitability of user behaviours and, based on mobility history information, recruits only the nodes that are likely to be in the sensing area when the sensing activity is taking place. As a distributed recruitment framework, a set of recruiting nodes visits the sensing area before the campaign is launched and then disseminates recruitment messages. In order to transfer the collected sensor data to the requester, a collection of nodes called data sinks is used, and participating nodes opportunistically exploit ad hoc encounters to reach data sinks that are temporarily deployed in the sensed area.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2042014342"
],
"abstract": [
"People-centric sensing is a novel apporach that exploits the sensing capabilities offered by smartphones and the mobility of users to sense large scale areas without requiring the deployment of sensors in-situ. Given the ubiquitous nature of smartphones, people-centric sensing is a viable and efficient solution for crowdsourcing data. In this work, we propose a fully distributed, opportunistic sensing framework that involves two main components which both work in an ad hoc fashion: Recruitment and Data Collection. We analyzed the feasibility of our distributed approach for both components through preliminary simulations. The results show that our recruitment method is able to select 66 of the nodes that are appropriate for the sensing activity and 88 of the messages sent by these selected nodes reach the sink by using our data collection method."
]
}
|
1306.0193
|
2058210123
|
The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.
|
Narrowing it down to participatory sensing, the authors of @cite_15 have presented a set of participatory sensing reputation metrics categorized into two groups: cross-campaign metrics, such as the number of previous campaigns taken and the success of the participant in previous campaigns, and campaign-specific metrics, such as timeliness, relevancy and the quality of sensor data. However, in @cite_4 , they have limited reputation to considering participants' willingness (given the opportunity, is data collected) and diligence in collecting samples (timeliness, relevance and quality of data).
|
{
"cite_N": [
"@cite_15",
"@cite_4"
],
"mid": [
"2131848448",
"1553085258"
],
"abstract": [
"Because participatory sensing – targeted campaigns where people harness mobile phones as tools for data collection – involves large and distributed groups of people, participatory sensing systems benefit from tools to measure and evaluate the contributions of individual participants. This paper develops a set of metrics to help participatory sensing organizers determine individual participants’ fit with any given sensing project, and describes experiments evaluating the resulting reputation system. I. INTRODUCTION The rapid adoption of mobile phones over the last decade and an increasing ability to capture, classify, and transmit a wide variety of data (image, audio, and location) have enabled a new sensing paradigm – participatory urban sensing – where humans carrying mobile phones act as, and contribute to, sensing systems [1], [2], [3]. In this paper, we discuss an important factor in participatory sensing systems: measurement and evaluation of participation and performance during sensing projects. In participatory sensing, mobile phone-based data gathering is coordinated across a potentially large number of participants over wide spans of space and time. We draw from three pilot projects to illustrate participatory sensing and describe the unique challenges to measurement and evaluation provided by “campaigns”: distributed and targeted efforts to collect data. Project Budburst [4], Personal Environmental Impact Report (PEIR) [5], and Walkability all situate “humans in the loop”, but have critical differences in their goals and challenges (Table I).",
"Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm - participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved. In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus."
]
}
|
1306.0193
|
2058210123
|
The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.
|
@cite_2 proposes a reputation-based framework which makes use of Beta reputation @cite_11 to assign a reputation score to each sensor node in a wireless sensor network. Beta reputation has simple updating rules and facilitates easy integration of ageing. However, it is less aggressive in penalizing users with poor-quality contributions. A reputation framework for participatory sensing was proposed in @cite_16 . A watchdog module computes a cooperative rating for each device according to its short-term behaviour, which acts as input to the reputation module that utilizes the Gompertz function to build a long-term reputation score.
|
{
"cite_N": [
"@cite_11",
"@cite_16",
"@cite_2"
],
"mid": [
"1604936042",
"2103112159",
"2034419870"
],
"abstract": [
"Reputation systems can be used to foster good behaviour and to encourage adherence to contracts in e-commerce. Several reputation systems have been deployed in practical applications or proposed in the literature. This paper describes a new system called the beta reputation system which is based on using beta probability density functions to combine feedback and derive reputation ratings. The advantage of the beta reputation system is flexibility and simplicity as well as its foundation on the theory of statistics.",
"The continual advancement in semiconductor technology has enabled vendors to integrate an increasing number of sensors, e.g., accelerometer, gyroscope, digital compass, high resolution cameras and others, in modern mobile phones. Combining with the ever expanding market penetration of mobile phones, the onboard sensors are driving the sensing community towards a new paradigm called participatory sensing in which ordinary citizens voluntarily collect and share information from their local environment using personal mobile devices. The inherent openness of this platform is a double-edged sword. While encouraging user participation on the one hand, it also makes it easy to contribute corrupted data. As such, data trustworthiness becomes a key issue which needs to be addressed to ensure sustainable development of this emerging paradigm. In this paper, we propose a reputation system in which a reputation score is calculated for each device as a reflection of the trustworthiness of its sensor data. We adopt the Gompertz function as a fundamental building block of our system, since it is better suited to deal with the unique characteristics of participatory sensing. We present an application-agnostic implementation of the system, which can work with a wide variety of participatory sensing applications. We evaluate the performance of our reputation system in the context of two real-world participatory sensing applications. Our results show that the proposed reputation system outperforms existing solutions by up to a factor of six.",
"Sensor network technology promises a vast increase in automatic data collection capabilities through efficient deployment of tiny sensing devices. The technology will allow users to measure phenomena of interest at unprecedented spatial and temporal densities. However, as with almost every data-driven technology, the many benefits come with a significant challenge in data reliability. If wireless sensor networks are really going to provide data for the scientific community, citizen-driven activism, or organizations which test that companies are upholding environmental laws, then an important question arises: How can a user trust the accuracy of information provided by the sensor networkq Data integrity is vulnerable to both node and system failures. In data collection systems, faults are indicators that sensor nodes are not providing useful information. In data fusion systems the consequences are more dire; the final outcome is easily affected by corrupted sensor measurements, and the problems are no longer visibly obvious. In this article, we investigate a generalized and unified approach for providing information about the data accuracy in sensor networks. Our approach is to allow the sensor nodes to develop a community of trust. We propose a framework where each sensor node maintains reputation metrics which both represent past behavior of other nodes and are used as an inherent aspect in predicting their future behavior. We employ a Bayesian formulation, specifically a beta reputation system, for the algorithm steps of reputation representation, updates, integration and trust evolution. This framework is available as a middleware service on motes and has been ported to two sensor network operating systems, TinyOS and SOS. We evaluate the efficacy of this framework using multiple contexts: (1) a lab-scale test bed of Mica2 motes, (2) Avrora simulations, and (3) real data sets collected from sensor network deployments in James Reserve."
]
}
|
1306.0193
|
2058210123
|
The idea of social participatory sensing provides a substrate to benefit from friendship relations in recruiting a critical mass of participants willing to attend in a sensing campaign. However, the selection of suitable participants who are trustable and provide high quality contributions is challenging. In this paper, we propose a recruitment framework for social participatory sensing. Our framework leverages multi-hop friendship relations to identify and select suitable and trustworthy participants among friends or friends of friends, and finds the most trustable paths to them. The framework also includes a suggestion component which provides a cluster of suggested friends along with the path to them, which can be further used for recruitment or friendship establishment. Simulation results demonstrate the efficacy of our proposed recruitment framework in terms of selecting a large number of well-suited participants and providing contributions with high overall trust, in comparison with one-hop recruitment architecture.
|
Expert-based recruitment consists of identifying persons with relevant expertise or experience for a given topic. Expert finding has been extensively studied in social networks. @cite_17 developed a Bayesian hierarchical model for expert finding that accounts for both social relationships and content. The model assumes that social links are determined by expertise similarity between candidates. @cite_13 proposed a propagation-based approach for finding experts in a social network. The approach consists of two steps. In the first step, they make use of a person's local information to estimate an initial expert score for each person and select the top-ranked persons as candidates. The selected persons are used to construct a sub-graph. In the second step, each person's expert score is propagated to the persons with whom they have relationships.
|
{
"cite_N": [
"@cite_13",
"@cite_17"
],
"mid": [
"139119103",
"2072268076"
],
"abstract": [
"This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"Expert finding is a task of finding knowledgeable people on a given topic. State-of-the-art expertise retrieval algorithms identify matching experts based on analysis of textual content of documents experts are associated with. While powerful, these models ignore social structure that might be available. In this paper, we develop a Bayesian hierarchical model for expert finding that accounts for both social relationships and content. The model assumes that social links are determined by expertise similarity between candidates. We demonstrate the improved retrieval performance of our model over the baseline on a realistic data set."
]
}
|
1306.0424
|
2129211331
|
Citation cascades in blog networks are often considered as traces of information spreading on this social medium. In this work, we question this point of view using both a structural and semantic analysis of five months' activity of the most representative blogs of the French-speaking community. Statistical measures reveal that our dataset shares many features with those that can be found in the literature, suggesting the existence of an identical underlying process. However, a closer analysis of the post content indicates that the popular epidemic-like descriptions of cascades are misleading in this context. A basic model, taking only into account the behavior of bloggers and their restricted social network, accounts for several important statistical features of the data. These arguments support the idea that citations' primary goal may not be information spreading on the blogosphere.
|
The rise of blogs among online social media has been discussed from a sociological point of view in numerous papers, e.g., @cite_8 give some insights about bloggers' demographics and cultural behaviors; in @cite_29 , the author describes their influence on society. Given the size and richness of the blog datasets, automatic classification and text-mining tools have been widely used to study the dynamics of trends and opinions in the blogosphere @cite_19 @cite_7 @cite_26 @cite_14 @cite_28 . For example, some studies concentrate on the political blogosphere to understand the ties between political parties, in particular the way information spreads from one group to another @cite_18 @cite_22 . The question of trends is closely related to the definition of authority, influence and trust in social networks @cite_29 @cite_22 . As such, these text-mining tools are also used for various practical purposes such as online advertising @cite_5 or search engines, ranking blogs according to their spreading ability @cite_2 .
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_19",
"@cite_2",
"@cite_5"
],
"mid": [
"2152284345",
"",
"",
"2951840452",
"2107666336",
"",
"2006802316",
"",
"2113889316",
"118766000",
"2094382041"
],
"abstract": [
"In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.",
"",
"",
"The blogosphere can be construed as a knowledge network made of bloggers who are interacting through a social network to share, exchange or produce information. We claim that the social and semantic dimensions are essentially co-determined and propose to investigate the co-evolutionary dynamics of the blogosphere by examining two intertwined issues: First, how does knowledge distribution drive new interactions and thus influence the social network topology? Second, which role structural network properties play in the information circulation in the system? We adopt an empirical standpoint by analyzing the semantic and social activity of a portion of the US political blogosphere, monitored on a period of four months.",
"Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of \"infection.\" Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.",
"",
"The last decade has seen an explosion in blogging and the blogosphere is continuing to grow, having a large global reach and many vibrant communities. Researchers have been pouring over blog data with the goal of finding communities, tracking what people are saying, finding influencers, and using many social network analytic tools to analyze the underlying social networks embedded within the blogosphere. One of the key technical problems with analyzing large social networks such as those embedded in the blogosphere is that there are many links between individuals and we often do not know the context or meaning of those links. This is problematic because it makes it difficult if not impossible to tease out the true communities, their behavior, how information flows, and who the central players are (if any). This paper seeks to further our understanding of how to analyze large blog networks and what they can tell us. We analyze 1.13M blogs posted by 185K bloggers over a period of 3 weeks. These bloggers span private blog sites through large blog-sites such as LiveJournal and Blogger. We show that we can, in fact, tag links in meaningful ways by leveraging topic-detection over the blogs themselves. We use these topics to contextually tag links coming from a particular blog post. This enrichment enables us to create smaller topic-specific graphs which we can analyze in some depth. We show that these topic-specific graphs not only have a different topology from the general blog graph but also enable us to find central bloggers which were otherwise hard to find. We further show that a temporal analysis identifies behaviors in terms of how components form as well as how bloggers continue to link after components form. These behaviors come to light when doing an analysis on the topic-specific graphs but are hidden or not easily discernable when analyzing the general blog graph.",
"",
"We study the dynamics of information propagation in environments of low-overhead personal publishing, using a large collection of WebLogs over time as our example domain. We characterize and model this collection at two levels. First, we present a macroscopic characterization of topic propagation through our corpus, formalizing the notion of long-running \"chatter\" topics consisting recursively of \"spike\" topics generated by outside world events, or more rarely, by resonances within the community. Second, we present a microscopic characterization of propagation from individual to individual, drawing on the theory of infectious diseases to model the flow. We propose, validate, and employ an algorithm to induce the underlying propagation network from a sequence of posts, and report on the results.",
"",
"Allowing global distribution of information to large audiences at very low cost, the Internet has emerged as a vital medium for marketing and advertising. Weblogs, a new form of self publication on the Internet, have attracted online advertisers because of their incredible growth-rate in recent years. In this paper, we propose to discover information diffusion paths from the blogosphere to track how information frequently flows from blog to blog. This knowledge can be used in various applications of online campaign. Our approach is based on analyzing the content of blogs. After detecting trackable topics of blogs, we model a blog community as a blog sequence database. Then, the discovery of information diffusion paths is formalized as a problem of frequent pattern mining. We develop a new data mining algorithm to discover information diffusion paths. Experiments conducted on real life dataset show that our algorithm discovers information diffusion paths efficiently. The discovered information diffusion paths are accurate in predicting the future information flow in the blog community."
]
}
|
1306.0424
|
2129211331
|
Citation cascades in blog networks are often considered as traces of information spreading on this social medium. In this work, we question this point of view using both a structural and semantic analysis of five months' activity of the most representative blogs of the French-speaking community. Statistical measures reveal that our dataset shares many features with those that can be found in the literature, suggesting the existence of an identical underlying process. However, a closer analysis of the post content indicates that the popular epidemic-like descriptions of cascades are misleading in this context. A basic model, taking only into account the behavior of bloggers and their restricted social network, accounts for several important statistical features of the data. These arguments support the idea that citations' primary goal may not be information spreading on the blogosphere.
|
In this work, we use data in which the connection between users is explicit and combine it with a popular cascade-like description of the spreading @cite_7 @cite_6 @cite_3 @cite_20 @cite_11 @cite_0 @cite_21 . More precisely, we adopt definitions very similar to the ones developed in @cite_6 , and will therefore often refer to this study for comparison purposes. In addition to the statistical analyses of these datasets, models have been proposed to explain the observed features @cite_19 @cite_26 @cite_6 @cite_11 . Among them, many are inspired by virus spreading models in epidemiology; in the following, we address the issue of the relevance of this particular class of models in the context of blog networks.
|
{
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2107666336",
"4503277",
"2151118940",
"1489677531",
"",
"2113889316",
"",
"1606050636"
],
"abstract": [
"",
"Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of \"infection.\" Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.",
"With an increasing number of people that read, write and comment on blogs, the blogosphere has established itself as an essential medium of communication. A fundamental characteristic of the blogging activity is that bloggers often link to each other. The succession of linking behavior determines the way in which information propagates in the blogosphere, forming cascades. Analyzing cascades can be useful in various applications, such as providing insight of public opinion on various topics and developing better cascade models. This paper presents the results of an excessive study on cascading behavior in the blogosphere. Our objective is to present trends on the degree of engagement and reaction of bloggers in stories that become available in blogs under various parameters and constraints. To this end, we analyze cascades that are attributed to different population groups constrained by factors of gender, age, and continent. We also analyze how cascades differentiate depending on their subject. Our analysis is performed on one of the largest available datasets, including 30M active blogs and 700M posts. The study reveals large variations in the properties of cascades.",
"Can we cluster blogs into types by considering their typical posting and linking behavior? How do blogs evolve over time? In this work we answer these questions, by providing several sets of blog and post features that can help distinguish between blogs. The first two sets of features focus on the topology of the cascades that the blogs are involved in, and the last set of features focuses on the temporal evolution, using chaotic and fractal ideas. We also propose to use PCA to reduce dimensionality, so that we can visualize the resulting clouds of points. We run all our proposed tools on the icwsm dataset. Our findings are that (a) topology features can help us distinguish blogs, like ‘humor’ versus ‘conservative’ blogs (b) the temporal activity of blogs is very non-uniform and bursty but (c) surprisingly often, it is self-similar and thus can be compactly characterized by the so-called bias factor (the ‘80’ in a recursive 80-20 distribution).",
"How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.",
"",
"We study the dynamics of information propagation in environments of low-overhead personal publishing, using a large collection of WebLogs over time as our example domain. We characterize and model this collection at two levels. First, we present a macroscopic characterization of topic propagation through our corpus, formalizing the notion of long-running \"chatter\" topics consisting recursively of \"spike\" topics generated by outside world events, or more rarely, by resonances within the community. Second, we present a microscopic characterization of propagation from individual to individual, drawing on the theory of infectious diseases to model the flow. We propose, validate, and employ an algorithm to induce the underlying propagation network from a sequence of posts, and report on the results.",
"",
"How do blogs produce posts? What local, underlying mechanisms lead to the bursty temporal behaviors observed in blog networks? Earlier work analyzed network patterns of blogs and found that blog behavior is bursty and often follows power laws in both topological and temporal characteristics. However, no intuitive and realistic model has yet been introduced, that can lead to such patterns. This is exactly the focus of this work. We propose a generative model that uses simple and intuitive principles for each individual blog, and yet it is able to produce the temporal characteristics of the blogosphere together with global topological network patterns, like power-laws for degree distributions, for inter-posting times, and several more. Our model ZC uses a novel ‘zero-crossing’ approach based on a random walk, combined with other powerful ideas like exploration and exploitation. This makes it the first model to simultaneously model the topology and temporal dynamics of the blogosphere. We validate our model with experiments on a large collection of 45,000 blogs and 2.2 million posts."
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is the direct Device-to-Device (D2D) communications, which enables direct links between the wireless devices that reutilize the cellular spectrum and radio interface. The second is that of Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that the low transmission power can lead to outage problems for the cell-edge devices. Another issue imminent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable the cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and of Cognitive Radio underlay. We show through analysis and numerical results the gains of the proposed schemes.
|
The research literature and 3GPP feature several efforts to enable efficient MTC within GSM and LTE cellular networks @cite_55 @cite_50 . The current focus is mostly on the characterization of the effect of a massive number of MTDs requesting access in a cell @cite_23 @cite_17 @cite_57 , on adaptive contention and load control schemes to handle massive contention access @cite_49 @cite_41 @cite_42 @cite_16 , and on traffic aggregation in cellular devices from non-cellular networks @cite_36 . In 3GPP there is an effort on enabling low-cost LTE devices @cite_36 @cite_6 for MTC, where the reduction in cost comes mainly from lowering the device bandwidth and baseband processing complexity, and from using half-duplex transceivers. Among the many challenges in enabling these devices to coexist with normal LTE devices @cite_3 , the main one is the reduced coverage, which is commonly addressed by reducing the cell sizes, introducing relays, or increasing the transmission time. The latter option is the one gaining traction @cite_3 @cite_36 , since the former ones would lead to prohibitively expensive solutions.
|
{
"cite_N": [
"@cite_41",
"@cite_36",
"@cite_55",
"@cite_42",
"@cite_6",
"@cite_3",
"@cite_57",
"@cite_23",
"@cite_50",
"@cite_49",
"@cite_16",
"@cite_17"
],
"mid": [
"2163814902",
"2142179860",
"1965046040",
"2123503096",
"",
"2007107924",
"2110262254",
"2114220728",
"",
"2116633494",
"2119977207",
"2021153196"
],
"abstract": [
"We propose a novel distributed random access scheme for wireless networks based on slotted ALOHA, motivated by the analogies between successive interference cancellation and iterative belief-propagation decoding on erasure channels. The proposed scheme assumes that each user independently accesses the wireless link in each slot with a predefined probability, resulting in a distribution of user transmissions over slots. The operation bears analogy with rateless codes, both in terms of probability distributions as well as to the fact that the ALOHA frame becomes fluid and adapted to the current contention process. Our aim is to optimize the slot access probability in order to achieve rateless-like distributions, focusing both on the maximization of the resolution probability of user transmissions and the throughput of the scheme.",
"We evaluate the feasibility of cognitive machine-to-machine communication on cellular bands from engineering and business perspectives. We propose a hierarchical network structure for cognitive M2M communication, where cluster headers gather M2M traffic using cognitive radio and forward it to the cellular networks. This structure can resolve the congestion problem that arises in conventional M2M systems. We obtain the optimal network parameters that minimize congestion at the radio access network. In addition, we investigate the business value of cognitive M2M on cellular bands. Taking into account the network usage fee, service fee, and hardware production costs, we model the profit structure of M2M services and derive the condition under which the CR type of M2M communication is superior to conventional M2M communication. We find that the optimal network design parameters (i.e., the number of cluster headers and cluster size) for business value would be different from the parameters expected from an engineering point of view. We believe that cognitive M2M communication can be a good solution for dealing with excessive M2M traffic in cellular networks, in terms of technical feasibility and business opportunity.",
"Machine-to-machine (M2M) communications emerge to autonomously operate to link interactions between Internet cyber world and physical systems. We present the technological scenario of M2M communications consisting of wireless infrastructure to cloud, and machine swarm of tremendous devices. Related technologies toward practical realization are explored to complete fundamental understanding and engineering knowledge of this new communication and networking technology front.",
"Supporting trillions of devices is the critical challenge in machine-to-machine (M2M) communications, which results in severe congestions in random access channels of cellular systems that have been recognized as promising scenarios enabling M2M communications. 3GPP thus developed the access class barring (ACB) for individual stabilization in each base station (BS). However, without cooperations among BSs, devices within dense areas suffer severe access delays. To facilitate devices escaping from continuous congestions, we propose the cooperative ACB for global stabilization and access load sharing to eliminate substantial defects in the ordinary ACB, thus significantly improving access delays.",
"",
"This paper describes issues related to one of the most significant challenges that 3GPP faces in supporting M2M communications over LTE: facilitating the support for low cost M2M devices. The paper considers the requirements for low cost M2M devices and the most significant contributors to device cost. System bandwidth is identified as being a significant cost contributor and backwards compatible approaches to reducing system bandwidth are identified.",
"Cellular network based Machine-to-Machine (M2M) communication is fast becoming a market-changing force for a wide spectrum of businesses and applications such as telematics, smart metering, point-of-sale terminals, and home security and automation systems. In this paper, we aim to answer the following important question: Does traffic generated by M2M devices impose new requirements and challenges for cellular network design and management? To answer this question, we take a first look at the characteristics of M2M traffic and compare it with traditional smartphone traffic. We have conducted our measurement analysis using a week-long traffic trace collected from a tier-1 cellular network in the United States. We characterize M2M traffic from a wide range of perspectives, including temporal dynamics, device mobility, application usage, and network performance. Our experimental results show that M2M traffic exhibits significantly different patterns than smartphone traffic in multiple aspects. For instance, M2M devices have a much larger ratio of uplink to downlink traffic volume, their traffic typically exhibits different diurnal patterns, they are more likely to generate synchronized traffic resulting in bursty aggregate traffic volumes, and are less mobile compared to smartphones. On the other hand, we also find that M2M devices are generally competing with smartphones for network resources in co-located geographical regions. These and other findings suggest that better protocol design, more careful spectrum allocation, and modified pricing schemes may be needed to accommodate the rise of M2M devices.",
"Increased attention has been drawn to machine type communication (MTC) lately, which represents important extension for user functionality in today's technologies, as well as an important potential for marketing expansion. GERAN studies over MTC has mainly focused on applications requiring low throughput and wide coverage such as smart meter, which could cause some impacts over normal data users if operating in a synchronous manner. This paper proposes a new methodology for studying the impact on signaling channels when mixture of synchronous and asynchronous traffic is present in the network. This is based on statistical analysis, considering signaling accessing attempts as driven by a Poisson process. In this approach successive random access attempts by the mobile stations are considered as well as the response on the network side. First implementation is held for GSM GPRS network, although generalization of the method for other technologies is possible. The simulated results show potential problems to be solved when a large number of synchronized users is present. Additionally, an analysis on the effects of the different levels of synchronization and the addition of extra signaling channels is included. Results also point out possible development paths that could be taken when designing new features to make networks more robust to synchronized traffic.",
"",
"The random access methods used for support of machine-type communications (MTC) in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. Motivated by the random access method employed in LTE, we propose a novel approach that is able to sustain a wide random access load range, while preserving the physical layer unchanged and incurring minor changes in the medium access control layer. The proposed scheme increases the amount of available contention resources, without resorting to the increase of system resources, such as contention sub-frames and preambles. This increase is accomplished by expanding the contention space to the code domain, through the creation of random access codewords. Specifically, in the proposed scheme, users perform random access by transmitting one or none of the available LTE orthogonal preambles in multiple random access sub-frames, thus creating access codewords that are used for contention. In this way, for the same number of random access sub-frames and orthogonal preambles, the amount of available contention resources is drastically increased, enabling the support of an increased number of MTC users. We present the framework and analysis of the proposed code-expanded random access method and show that our approach supports load regions that are beyond the reach of current systems.",
"As Machine-Type-Communications (MTC) continues to burgeon rapidly, a comprehensive study on overload control approach to manage the data and signaling traffic from massive MTC devices is required. In this work, we study the problem of RACH overload, survey several types of RAN-level contention resolution methods, and introduce the current development of CN (core network) overload mechanisms in 3GPP LTE. Additionally, we simulate and compare different methods and offer further observations on the solution design.",
"The need to deploy large number of wireless devices, such as electricity or water meters, is becoming a key challenge for any utility. Furthermore, such a deployment should be functional for more than a decade. Many cellular operators consider LTE to be the single long term solution for wide area connectivity serving all types of wireless traffic. On the other hand, GSM is a well-adopted technology and represents a valuable asset to build M2M infrastructure due to the good coverage, device maturity, and low cost. In this paper we assess the potential of GSM GPRS EDGE to operate as a dedicated network for M2M communications. In order to enable M2M-dedicated operation in the near future, we reengineer the GSM GPRS EDGE protocol in a way that requires only minor software updates of the protocol stack. We propose different schemes to boost the number of M2M devices in the system without affecting the network stability. We show that a single GSM cell can support simultaneous low-data rate connections (e. g. to smart meters) in the order of 104 devices."
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is direct Device-to-Device (D2D) communication, which enables direct links between wireless devices that reuse the cellular spectrum and radio interface. The second is Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate, low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that their low transmission power can lead to outage problems for cell-edge devices. Another issue inherent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and Cognitive Radio underlaying. We show through analysis and numerical results the gains of the proposed schemes.
|
In contrast, in this paper we argue that the coverage of MTDs can be enhanced through the use of the D2D communication paradigm. We note that within the literature dedicated to D2D, the synergies to be gained by combining MTC and D2D have not until now been identified, although the authors of @cite_24 have argued that uplink relaying is one of the key aspects enabling MTC in a cellular network.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1988701064"
],
"abstract": [
"Relaying can replace one transmission over a long distance with several transmissions over shorter distances, and the total transmission power of mobile stations thus can be lower in the case of relaying. As such, the relaying concept and protocols in a cellular environment have been much discussed in the literature recently. We mathematically analyze the idea of uplink relaying schemes, using a simple model that represents their essential functions. In order to simplify our performance analysis, we assume that the originator of a packet sets the maximum number of hops to the base station to be 2. Our results indicate that the performance benefits of relaying are significant"
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is direct Device-to-Device (D2D) communication, which enables direct links between wireless devices that reuse the cellular spectrum and radio interface. The second is Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate, low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that their low transmission power can lead to outage problems for cell-edge devices. Another issue inherent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and Cognitive Radio underlaying. We show through analysis and numerical results the gains of the proposed schemes.
|
Within the D2D context, there are already several proposals on underlaying @cite_14 @cite_25 @cite_37 and overlaying @cite_7 D2D with a cellular network. Underlay schemes, in which D2D shares the same air interface as the cellular network, were first proposed in @cite_14 @cite_25 @cite_37 . Further, we note that the approaches currently applied to D2D are also applicable to Vehicle-to-Vehicle (V2V) communication, as shown in @cite_2 . There are two main design directions currently found in the literature: (i) network-controlled D2D, where the network makes all the decisions with regard to resource sharing, mode selection (D2D or via the cellular infrastructure), power control, scheduling, and selection of the transmission format (such as modulation, coding rate, multi-antenna transmission mode, etc.) @cite_14 @cite_25 @cite_4 @cite_44 @cite_10 @cite_37 @cite_9 @cite_26 @cite_2 @cite_33 @cite_8 @cite_43 @cite_28 @cite_19 ; and (ii) autonomous D2D, where the network provides at most only synchronization signals to the devices @cite_31 @cite_39 @cite_27 @cite_22 .
|
{
"cite_N": [
"@cite_22",
"@cite_44",
"@cite_43",
"@cite_2",
"@cite_10",
"@cite_4",
"@cite_8",
"@cite_39",
"@cite_37",
"@cite_26",
"@cite_7",
"@cite_28",
"@cite_19",
"@cite_27",
"@cite_25",
"@cite_14",
"@cite_33",
"@cite_9",
"@cite_31"
],
"mid": [
"2068443796",
"2026860654",
"2130171753",
"2011991135",
"2096820671",
"2100372817",
"2066106876",
"2079334758",
"2136375880",
"2138580126",
"2044576662",
"2045954096",
"1994004937",
"2136393527",
"2106535821",
"2140656373",
"1975186862",
"2117537207",
"2156858898"
],
"abstract": [
"The demand for high-data-rate transmission has triggered the design and development of advanced cellular networks, such as fourth-generation (4G) Long-Term Evolution (LTE) networks. However, their poor coverage and relatively high interference can significantly impair data transmission. Femtocells and device-to-device (D2D) communication are promising solutions that have recently gathered interest. Operating in the licensed spectrum, femtocells provide users with higher capacity and coverage than their earlier designs, and they will be offered by cellular operators to enable high-data-rate local services in future LTE-Advanced networks. As an underlay to the cellular networks, D2D communication enables nearby devices to communicate directly with each other by reusing cellular resources, which reduces interference in the cellular network compared with communication through base stations (BSs). However, the tight integration of femtocells and D2D communication creates a challenge for existing network design and has been studied less in the literature. In this paper, we propose an open-access algorithm for femtocell BSs (FBSs) in D2D LTE-Advanced networks, to optimize network connectivity. Additionally, a greedy algorithm is proposed to balance macrocell BSs and FBSs to maximize network connectivity. The simulation results demonstrate the effectiveness of the proposed algorithms.",
"Device-to-Device communication underlaying a cellular network enables local services with limited interference to the cellular network. In this paper we study the optimal selection of possible resource sharing modes with the cellular network in a single cell. Based on the learning from the single cell studies we propose a mode selection procedure for a multi-cell environment. Our evaluation results of the proposed procedure show that it enables a much more reliable device-to-device communication with limited interference to the cellular network compared to simpler mode selection procedures. A well performing and practical mode selection is critical to enable the adoption of underlay device-to-device communication in cellular networks.",
"We consider Device-to-Device (D2D) communication underlaying cellular networks to improve local services. The system aims to optimize the throughput over the shared resources while fulfilling prioritized cellular service constraints. Optimum resource allocation and power control between the cellular and D2D connections that share the same resources are analyzed for different resource sharing modes. Optimality is discussed under practical constraints such as minimum and maximum spectral efficiency restrictions, and maximum transmit power or energy limitation. It is found that in most of the considered cases, optimum power control and resource allocation for the considered resource sharing modes can either be solved in closed form or searched from a finite set. The performance of the D2D underlay system is evaluated in both a single-cell scenario, and a Manhattan grid environment with multiple WINNER II A1 office buildings. The results show that by proper resource management, D2D communication can effectively improve the total throughput without generating harmful interference to cellular networks.",
"This paper investigates the resource-sharing problem in vehicular networks, including both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication links. A novel underlaying resource-sharing communication mode for vehicular networks is proposed, in which different V2V and V2I communication links are permitted to access the same resources for their individual data transmission. To solve the resource-sharing problem in vehicular networks, we, for the first time, apply graph theory and propose the following two interference graph-based resource-sharing schemes: 1) the interference-aware graph-based resource-sharing scheme and 2) the interference-classified graph-based resource-sharing scheme. Compared with the traditional orthogonal communication mode in vehicular networks, the proposed two resource-sharing schemes express better network sum rate. The utility of the proposed V2V and V2I underlaying communication mode and the two proposed interference graph-based resource-sharing schemes are verified by simulations.",
"An underlaying direct Device-to-Device (D2D) communication mode in future cellular networks, such as IMT-Advanced, is expected to provide spectrally efficient and low latency support of e.g. rich multi-media local services. Enabling D2D links in a cellular network presents a challenge in transceiver design due to the potentially severe interference between the cellular network and D2D radios. In this paper we propose MIMO transmission schemes for cellular downlink that avoid generating interference to a D2D receiver operating on the same time-frequency resource. System simulations demonstrate that substantial gains in D2D SINR of up to 15 dB and around 10 total cell capacity gains can be obtained by using the proposed scheme.",
"We address resource sharing of the cellular network and a device-to-device (D2D) underlay communication assuming that the cellular network has control over the transmit power and the radio resources of D2D links. We show that by proper power control, the interference between two services can be coordinated to benefit the overall performance. In addition, we consider a scenario with prioritized cellular communication and an upper limit on the maximum transmission rate of all links. We derive the optimum power allocation for the considered resource sharing modes. The results show that cellular service can be effectively guaranteed while having a comparable sum rate with a none power control case in most of the cell area.",
"A new interference management strategy is proposed to enhance the overall capacity of cellular networks (CNs) and device-to-device (D2D) systems. We consider M out of K cellular user equipments (CUEs) and one D2D pair exploiting the same resources in the uplink (UL) period under the assumption of M multiple antennas at the base station (BS). First, we use the conventional mechanism which limits the maximum transmit power of the D2D transmitter so as not to generate harmful interference from D2D systems to CNs. Second, we propose a δD-interference limited area (ILA) control scheme to manage interference from CNs to D2D systems. The method does not allow the coexistence (i.e., use of the same resources) of CUEs and a D2D pair if the CUEs are located in the δD-ILA defined as the area in which the interference to signal ratio (ISR) at the D2D receiver is greater than the predetermined threshold, δD. Next, we analyze the coverage of the δD-ILA and derive the lower bound of the ergodic capacity as a closed form. Numerical results show that the δD-ILA based D2D gain is much greater than the conventional D2D gain, whereas the capacity loss to the CNs caused by using the δD-ILA is negligibly small.",
"Device-to-device (D2D) communications underlaying a cellular infrastructure has recently been proposed as a means of increasing the resource utilization, improving the user throughput and extending the battery lifetime of user equipments. In this paper we propose a new distributed power control algorithm that iteratively determines the signal- to-noise-and- interference-ratio (SINR) targets in a mixed cellular and D2D environment and allocates transmit powers such that the overall power consumption is minimized subject to a sum-rate constraint. The performance of the distributed power control algorithm is benchmarked with respect to the optimal SINR target setting that we obtain using the Augmented Lagrangian Penalty Function (ALPF) method. The proposed scheme shows consistently near optimum performance both in a single-input-multiple-output (SIMO) and a multiple-input-multiple-output (MIMO) setting.",
"In this article we propose to facilitate local peer-to-peer communication by a Device-to-Device (D2D) radio that operates as an underlay network to an IMT-Advanced cellular network. It is expected that local services may utilize mobile peer-to-peer communication instead of central server based communication for rich multimedia services. The main challenge of the underlay radio in a multi-cell environment is to limit the interference to the cellular network while achieving a reasonable link budget for the D2D radio. We propose a novel power control mechanism for D2D connections that share cellular uplink resources. The mechanism limits the maximum D2D transmit power utilizing cellular power control information of the devices in D2D communication. Thereby it enables underlaying D2D communication even in interference-limited networks with full load and without degrading the performance of the cellular network. Secondly, we study a single cell scenario consisting of a device communicating with the base station and two devices that communicate with each other. The results demonstrate that the D2D radio, sharing the same resources as the cellular network, can provide higher capacity (sum rate) compared to pure cellular communication where all the data is transmitted through the base station.",
"We address device-to-device (D2D) communication as a potential resource reuse technique underlaying the cellular network. We consider the shared channel of the two systems as an interference channel and formulate the statistics of the signal to interference plus noise ratio (SINR) of all users. The potential performance of D2D communication is evaluated by considering a scenario where only limited interference coordination between the cellular and the D2D communication is possible. We apply a simple power control method to the D2D communication which constrains the SINR degradation of the cellular link to a certain level. Results show that the SINR statistics of the D2D users is comparable to that of the cellular user in most of the cell area. Scheduling gain is possible by properly assigning either of the downlink (DL) or the uplink (UL) resources to the D2D communication.",
"Spectrum sharing is a novel opportunistic strategy to improve spectral efficiency of wireless networks. Much of the research to quantify such a gain is done under the premise that the spectrum is being used inefficiently by the primary network. Our main result is that even in a spectrally efficient network, device to device users can exploit the network topology to render gains in additional throughput. The focus will be on providing ad-hoc multihop access to a network for device to device users, that are transparent to the primary wireless cellular network, while sharing the primary network's resources.",
"An innovative resource allocation scheme is proposed to improve the performance of device-to-device (D2D) communications as an underlay in the downlink (DL) cellular networks. To optimize the system sum rate over the resource sharing of both D2D and cellular modes, we introduce a sequential second price auction as the allocation mechanism. In the auction, all the spectrum resources are considered as a set of resource units, which are auctioned off by groups of D2D pairs in sequence. We first formulate the value of each resource unit for each D2D pair, as a basis of the proposed auction. And then a detailed auction algorithm is explained using a N-ary tree. The equilibrium path of a sequential second price auction is obtained in the auction process, and the state value of the leaf node in the end of the path represents the final allocation. The simulation results show that the proposed auction algorithm leads to a good performance on the system sum rate, efficiency and fairness.",
"Device-to-device (D2D) communication as an underlaying cellular network empowers user-driven rich multimedia applications and also has proven to be network efficient offloading eNodeB traffic. However, D2D transmitters may cause significant amount of interference to the primary cellular network when radio resources are shared between them. During the downlink (DL) phase, primary cell UE (user equipment) may suffer from interference by the D2D transmitter. On the other hand, the immobile eNodeB is the victim of interference by the D2D transmitter during the uplink (UL) phase when radio resources are allocated randomly. Such interference can be avoided otherwise diminish if radio resource allocated intelligently with the coordination from the eNodeB. In this paper, we formulate the problem of radio resource allocation to the D2D communications as a mixed integer nonlinear programming (MINLP). Such an optimization problem is notoriously hard to solve within fast scheduling period of the Long Term Evolution (LTE) network. We therefore propose an alternative greedy heuristic algorithm that can lessen interference to the primary cellular network utilizing channel gain information. We also perform extensive simulation to prove the efficacy of the proposed algorithm.",
"Future cellular networks such as IMT-Advanced are expected to allow underlaying direct Device-to-Device (D2D) communication for spectrally efficient support of e.g. rich multimedia local services. Enabling D2D links in a cellular network presents a challenge in radio resource management due to the potentially severe interference it may cause to the cellular network. We propose a practical and efficient scheme for generating local awareness of the interference between the cellular and D2D terminals at the base station, which then exploits the multiuser diversity inherent in the cellular network to minimize the interference. System simulations demonstrate that substantial gains in cellular and D2D performance can be obtained using the proposed scheme.",
"In this paper the possibility of device-to-device (D2D) communications as an underlay of an LTE-A network is introduced. The D2D communication enables new service opportunities and reduces the eNB load for short range data intensive peer-to-peer communication. The cellular network may establish a new type of radio bearer dedicated for D2D communications and stay in control of the session setup and the radio resources without routing the user plane traffic. The paper addresses critical issues and functional blocks to enable D2D communication as an add-on functionality to the LTE SAE architecture. Unlike 3G spread spectrum cellular and OFDM WLAN techniques, LTE-A resource management is fast and operates in high time-frequency resolution. This could allow the use of non-allocated time-frequency resources, or even partial reuse of the allocated resources for D2D with eNB controlled power constraints. The feasibility and the range of D2D communication, and its impact to the power margins of cellular communications are studied by simulations in two example scenarios. The results demonstrate that by tolerating a modest increase in interference, D2D communication with practical range becomes feasible. By tolerating higher interference power the D2D range will increase.",
"In this article device-to-device (D2D) communication underlaying a 3GPP LTE-Advanced cellular network is studied as an enabler of local services with limited interference impact on the primary cellular network. The approach of the study is a tight integration of D2D communication into an LTE-Advanced network. In particular, we propose mechanisms for D2D communication session setup and management involving procedures in the LTE System Architecture Evolution. Moreover, we present numerical results based on system simulations in an interference limited local area scenario. Our results show that D2D communication can increase the total throughput observed in the cell area.",
"Device-to-device (D2D) communications underlaying a cellular infrastructure has recently been proposed as a means of increasing the cellular capacity, improving the user throughput and extending the battery lifetime of user equipments by facilitating the reuse of spectrum resources between D2D and cellular links. In network assisted D2D communications, when two devices are in the proximity of each other, the network can not only help the devices to set the appropriate transmit power and schedule time and frequency resources but also to determine whether communication should take place via the direct D2D link (D2D mode) or via the cellular base station (cellular mode). In this paper we formulate the joint mode selection, scheduling and power control task as an optimization problem that we first solve assuming the availability of a central entity. We also propose a distributed suboptimal joint mode selection and resource allocation scheme that we benchmark with respect to the centralized optimal solution. We find that the distributed scheme performs close to the optimal scheme both in terms of resource efficiency and user fairness.",
"Device-to-device (D2D) communications underlaying a cellular infrastructure has been proposed as a means of taking advantage of the physical proximity of communicating devices, increasing resource utilization, and improving cellular coverage. Relative to the traditional cellular methods, there is a need to design new peer discovery methods, physical layer procedures, and radio resource management algorithms that help realize the potential advantages of D2D communications. In this article we use the 3GPP Long Term Evolution system as a baseline for D2D design, review some of the key design challenges, and propose solution approaches that allow cellular devices and D2D pairs to share spectrum resources and thereby increase the spectrum and energy efficiency of traditional cellular networks. Simulation results illustrate the viability of the proposed design.",
"Aura-net is a mobile communications system whose function realizes a new form of proximityaware networking, and whose form points in the direction of a \"Proximity-aware Internetwork.\" The system is founded on an implementation of a \"wireless sense.\" The existence of such a sense, it is argued, is essential for realization of a vision of Ubiquitous Computing famously expounded by Mark Weiser [1]. Moreover, current wireless technologies are ill-suited to enabling this vision. The proposed wireless technology (FlashLinQ) is described at a conceptual and tutorial level."
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is direct Device-to-Device (D2D) communication, which enables direct links between wireless devices that reuse the cellular spectrum and radio interface. The second is Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate, low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that their low transmission power can lead to outage problems for cell-edge devices. Another issue inherent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and Cognitive Radio underlaying. We show through analysis and numerical results the gains of the proposed schemes.
|
In @cite_12 the authors introduce three receiving modes for reliable D2D communication when the D2D UEs share the cellular radio resources. The first mode treats the interference as noise, while the second mode decodes the interference and then cancels it. In the low-interference regime, the first mode offers a higher sum rate, since treating interference as noise is optimal in that regime @cite_30 . The second mode is applicable in the regime of very strong interference, in which interference cancellation can be applied @cite_18 . For the transient region between low and very strong interference, instead of the conventional approach based on orthogonalization, the authors propose a third mode in which the interference is retransmitted to the receiver, which then cancels it from the original transmission. Nevertheless, the third mode is only feasible with multiple antennas. (We note that the schemes proposed in this paper do not depend on multiple-antenna capability, although they could be greatly enhanced by it; such a discussion is left for future work.)
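To make the first two receive modes concrete, consider a simplified single-antenna sketch (the notation $S$, $I$, $N$, $R_d$, $R_i$ is introduced here for illustration only and is not taken from @cite_12 ). Treating the interference as noise (the first mode) supports the desired link as long as its rate satisfies
$$R_d \le \log_2\!\left(1+\frac{S}{I+N}\right),$$
where $S$ and $I$ are the received powers of the desired and interfering signals and $N$ is the noise power. Decoding and canceling the interference first (the second mode) is possible only if the interfering rate satisfies
$$R_i \le \log_2\!\left(1+\frac{I}{S+N}\right),$$
after which the desired signal can be decoded interference-free at rates up to $\log_2(1+S/N)$. This is why the first mode is preferable at low interference and the second at very strong interference, with the third mode targeting the intermediate region.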
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_12"
],
"mid": [
"2130781639",
"1992583700",
"2104086762"
],
"abstract": [
"Establishing the capacity region of a Gaussian interference network is an open problem in information theory. Recent progress on this problem has led to the characterization of the capacity region of a general two-user Gaussian interference channel within one bit. In this paper, we develop new, improved outer bounds on the capacity region. Using these bounds, we show that treating interference as noise achieves the sum capacity of the two-user Gaussian interference channel in a low-interference regime, where the interference parameters are below certain thresholds. We then generalize our techniques and results to Gaussian interference networks with more than two users. In particular, we demonstrate that the total interference threshold, below which treating interference as noise achieves the sum capacity, increases with the number of users.",
"This paper presents the capacity region of frequency-selective Gaussian interference channels under the condition of strong interference, assuming an average power constraint per user. First, a frequency-selective Gaussian interference channel is modeled as a set of independent parallel memoryless Gaussian interference channels. Using nonfrequency selective results, the capacity region of frequency-selective Gaussian interference channels under strong interference is expressed mathematically. Exploiting structures inherent in the problem, a dual problem is constructed for each independent memoryless channel, in which both mathematical and numerical analysis are performed. Furthermore, three suboptimal methods are compared to the capacity-achieving coding and power allocation scheme. Iterative waterfilling, a suboptimal scheme, provides close-to-optimum performance and has a distributed coding and power allocation scheme, which are attractive in practice.",
"A new interference management scheme is proposed to improve the reliability of a device-to-device (D2D) communication in the uplink (UL) period without reducing the power of cellular user equipment (UE). To improve the reliability of the D2D receiver, two conventional receive techniques and one proposed method are introduced. One of the conventional methods is demodulating the desired signal first (MODE1), while the other is demodulating an interference first (MODE2), and the proposed method is exploiting a retransmission of the interference from the base station (BS) (MODE3). We derive their outage probabilities in closed forms and explain the mechanism of receive mode selection which selects the mode guaranteeing the minimum outage probability among three modes. Numerical results show that by applying the receive mode selection, the D2D receiver achieves a remarkable enhancement of outage probability in the middle interference regime from the usage of MODE3 compared to the conventional ways of using only MODE1 or MODE2."
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is direct Device-to-Device (D2D) communication, which enables direct links between wireless devices that reuse the cellular spectrum and radio interface. The second is Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate, low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that their low transmission power can lead to outage problems for cell-edge devices. Another issue inherent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and Cognitive Radio underlaying. We show through analysis and numerical results the gains of the proposed schemes.
|
The underlaying approach in this paper takes advantage of Opportunistic Interference Cancellation (OIC), first proposed in the context of cognitive radio in @cite_32 . The main idea of OIC can be explained as follows. The receiver @math observes a multiple access channel, created by the desired signal and an interfering signal. The interfering signal is a useful signal for a different node. However, if the current channel conditions and the rate selection allow, then @math can decode and cancel the interference and retrieve the desired signal at a higher rate.
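A minimal way to see the opportunistic element of OIC is the following sketch, stated for a single-antenna Gaussian channel (the symbols below are introduced here for exposition and are not those of @cite_32 ): let $P_d$ and $P_i$ denote the received powers of the desired and interfering signals, $N$ the noise power, and $R_i$ the rate chosen by the interfering transmitter. The receiver can then opportunistically achieve
$$R_d = \begin{cases}\log_2\!\left(1+\dfrac{P_d}{N}\right), & \text{if } R_i \le \log_2\!\left(1+\dfrac{P_i}{P_d+N}\right)\ \text{(interference decodable and cancellable)},\\[6pt] \log_2\!\left(1+\dfrac{P_d}{P_i+N}\right), & \text{otherwise (interference treated as noise)},\end{cases}$$
so the gain over always treating interference as noise materializes whenever the interferer's rate and received power happen to permit decoding.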
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2100066584"
],
"abstract": [
"In this paper, we investigate the problem of spectrally efficient operation of a cognitive radio, also called secondary spectrum user, under an interference from the primary system. A secondary receiver observes a multiple access channel of two users, the secondary and the primary transmitter, respectively. The secondary receiver applies Opportunistic Interference Cancelation (OIC) and Suboptimal Opportunistic Interference Cancelation (S-OIC) thus decoding the primary signal when such an opportunity is created by the rate selected at the primary transmitter and the power received from the primary transmitter. First, we investigate how the secondary transmitter, when using OIC and S-OIC for fixed transmitting power, should select its rate in order to meet its target outage probability under different assumptions about the channel-state-information available at the secondary transmitter. We study three different cases and for each of them identify the region of achievable primary and secondary rates. Second, we determine how the secondary transmitter should select its transmitting power not to violate the target outage probability at the primary terminals. Our numerical results show that the best secondary performance is always obtained when the secondary transmitter knows the instantaneous channel-state-information toward the intended receiver. We also evaluate the degradation in terms of achievable rate at the secondary receiver when it uses suboptimal decoding (S-OIC rather than OIC) and the interplay between the allowed power at the secondary transmitter (which depends on the target outage probability at the primary receiver) and the decodability at the secondary receiver."
]
}
|
1305.6783
|
1782636746
|
Wireless cellular networks feature two emerging technological trends. The first is direct Device-to-Device (D2D) communication, which enables direct links between wireless devices that reuse the cellular spectrum and radio interface. The second is Machine-Type Communications (MTC), where the objective is to attach a large number of low-rate, low-power devices, termed Machine-Type Devices (MTDs), to the cellular network. MTDs pose new challenges to the cellular network, one of which is that their low transmission power can lead to outage problems for cell-edge devices. Another issue inherent to MTC is the massive access that can lead to overload of the radio interface. In this paper we explore the opportunity opened by D2D links for supporting MTDs, since it can be desirable to carry the MTC traffic not through direct links to a Base Station, but through a nearby relay. MTC is modeled as fixed-rate traffic with an outage requirement. We propose two network-assisted D2D schemes that enable cooperation between MTDs and standard cellular devices, thereby meeting the MTC outage requirements while maximizing the rate of the broadband services for the other devices. The proposed schemes apply the principles of Opportunistic Interference Cancellation and Cognitive Radio underlaying. We show through analysis and numerical results the gains of the proposed schemes.
|
To the best of our knowledge, the MTC setting has not been considered within the context of D2D, underlaying, and cognitive radio. The closest match within cognitive radio can be found in @cite_51 , although OIC was not considered there. We note that interference cancellation schemes have recently attracted increased attention both in academia @cite_47 @cite_5 and in industry @cite_29 @cite_46 .
|
{
"cite_N": [
"@cite_47",
"@cite_29",
"@cite_5",
"@cite_46",
"@cite_51"
],
"mid": [
"2066452435",
"1955204823",
"2039876598",
"2246074592",
"2149165606"
],
"abstract": [
"Interference plays a crucial role for performance degradation in communication networks nowadays. An appealing approach to interference avoidance is the Interference Cancellation (IC) methodology. Particularly, the Successive IC (SIC) method represents the most effective IC-based reception technique in terms of Bit-Error-Rate (BER) performance and, thus, yielding to the overall system robustness. Moreover, SIC in conjunction with Orthogonal Frequency Division Multiplexing (OFDM), in the context of SIC-OFDM, is shown to approach the Shannon capacity when single-antenna infrastructures are applied while this capacity limit can be further extended with the aid of multiple antennas. Recently, SIC-based reception has studied for Orthogonal Frequency and Code Division Multiplexing or (spread-OFDM systems), namely OFCDM. Such systems provide extremely high error resilience and robustness, especially in multi-user environments. In this paper, we present a comprehensive survey on the performance of SIC for single- and multiple-antenna OFDM and spread OFDM (OFCDM) systems. Thereby, we focus on all the possible OFDM formats that have been developed so far. We study the performance of SIC by examining closely two major aspects, namely the BER performance and the computational complexity of the reception process, thus striving for the provision and optimization of SIC. Our main objective is to point out the state-of-the-art on research activity for SIC-OF(C)DM systems, applied on a variety of well-known network implementations, such as cellular, ad hoc and infrastructure-based platforms. Furthermore, we introduce a Performance-Complexity Tradeoff (PCT) in order to indicate the contribution of the approaches studied in this paper. Finally, we provide analytical performance comparison tables regarding to the surveyed techniques with respect to the PCT level.",
"Techniques to process a number of “received” symbol streams in a Multiple-Input Multiple-Output (MIMO) system with multipath channels such that improved performance may be achieved when using successive interference cancellation (SIC) processing. In an aspect, metrics indicative of the quality or “goodness” of a “detected” symbol stream are provided. These metrics consider the frequency selective response of the multipath channels used to transmit the symbol stream. For example, the metrics may relate to (1) an overall channel capacity for all transmission channels used for the symbol stream, or (2) an equivalent signal-to-interference noise ratio (SNR) of an Additive White Gaussian Noise (AWGN) channel modeling these transmission channels. In another aspect, techniques are provided to process the received symbol streams, using SIC processing, to recover a number of transmitted symbol streams. The particular order in which the symbol streams are recovered is determined based on the metrics determined for symbol streams detected at each SIC processing stage.",
"In this paper, we address the problem of computing the probability that r out of n interfering wireless signals are \"captured,\" i.e., received with sufficiently large Signal to Interference plus Noise Ratio (SINR) to correctly decode the signals by a receiver with multi-packet reception (MPR) and Successive Interference Cancellation (SIC) capabilities. We start by considering the simpler case of a pure MPR system without SIC, for which we provide an expression for the distribution of the number of captured packets, whose computational complexity scales with n and r. This analysis makes it possible to investigate the system throughput as a function of the MPR capabilities of the receiver. We then generalize the analysis to SIC systems. In addition to the exact expressions for the capture probability and the normalized system throughput, we also derive approximate expressions that are much easier to compute and provide accurate results in some practical scenarios. Finally, we present selected results for some case studies with the purpose of illustrating the potential of the proposed mathematical framework and validating the approximate methods.",
"In an ad hoc peer-to-peer communication network between wireless devices, a low priory first transmitter device adjusts it transmit power based on a received transmission request response from a higher priority second receiver device. The first transmitter device broadcasts a first transmission request to a corresponding first receiver device and may receive a first transmission request response from a different second receiver device. The second transmission request response is sent by the second receiver device in response to a second transmission request from a second transmitter device. The first transmitter device calculates an interference cost to the second receiver device as a function of the received power of the first transmission request response. A transmission power is obtained as a function of the calculated interference cost and the transmission power of the first transmission request, the transmission power to be used for traffic transmissions corresponding to the first transmission request.",
"Spectrum sharing between wireless networks improves the efficiency of spectrum usage, and thereby alleviates spectrum scarcity due to growing demands for wireless broadband access. To improve the usual underutilization of the cellular uplink spectrum, this paper addresses spectrum sharing between a cellular uplink and a mobile ad hoc networks. These networks access either all frequency subchannels or their disjoint subsets, called spectrum underlay and spectrum overlay, respectively. Given these spectrum sharing methods, the capacity trade-off between the coexisting networks is analyzed based on the transmission capacity of a network with Poisson distributed transmitters. This metric is defined as the maximum density of transmitters subject to an outage constraint for a given signal-to-interference ratio (SIR). Using tools from stochastic geometry, the transmission-capacity trade-off between the coexisting networks is analyzed, where both spectrum overlay and underlay as well as successive interference cancellation (SIC) are considered. In particular, for small target outage probability, the transmission capacities of the coexisting networks are proved to satisfy a linear equation, whose coefficients depend on the spectrum sharing method and whether SIC is applied. This linear equation shows that spectrum overlay is more efficient than spectrum underlay. Furthermore, this result also provides insight into the effects of network parameters on transmission capacities, including link diversity gains, transmission distances, and the base station density. In particular, SIC is shown to increase the transmission capacities of both coexisting networks by a linear factor, which depends on the interference-power threshold for qualifying canceled interferers."
]
}
|
1305.5522
|
2086149975
|
In this paper, we propose a conceptual framework in which a centralized system classifies roads based upon the level of damage. The centralized system also identifies the traffic intensity, thereby prioritizing the roads that require quick action. Moreover, the system helps the driver to detect the level of damage to the road stretch and route the vehicle along an alternative path to its destination. The system sends feedback to the concerned authorities for a quick response to the condition of the roads. The system we use comprises a laser sensor and pressure sensors in shock absorbers to detect and quantify the intensity of a pothole, and a centralized server which maintains a database of the locations of all potholes, which can be accessed by another unit inside the vehicle. A point-to-point connection device is also installed in vehicles so that, when a vehicle detects a pothole which is not in the database, all the vehicles within a range of 20 meters are warned about the pothole. The system computes a route with the least number of potholes that is nearest to the desired destination. If the destination is unknown, then the system will check for potholes in the current road and display the level of damage. The system is flexible enough that the destination can be added, removed, or changed at any time during travel. The best possible route is suggested by the system upon such an alteration. We prove that the algorithm returns an efficient path with the least number of potholes.
|
The problem has also been well studied for autonomous robots. A detection-and-avoidance mechanism was introduced as a navigational aid for autonomous vehicles @cite_10 ; that work discusses a solution for detecting and avoiding simulated potholes in the path of an autonomous vehicle operating in an unstructured environment. An obstacle avoidance system was also developed for a custom-made autonomous navigational robotic vehicle (ANROV), based on an intelligent sensor network and fuzzy logic control @cite_9 .
|
{
"cite_N": [
"@cite_9",
"@cite_10"
],
"mid": [
"1527370562",
"2008059592"
],
"abstract": [
"An obstacle avoidance system was developed for a custom-made autonomous navigational robotic vehicle (ANROV), based on an intelligent sensor network and fuzzy logic control. Unlike conventional crisp control systems, this system is based on soft computing, such that the system mimics human decision making processes and can better recognize obstacles and decide the best course of action. The fuzzy logic control system was developed in MATLAB, the GUI was designed in LabView, and the hardware realization was installed in ANROV for evaluation and development",
"In the navigation of an autonomous vehicle, tracking and avoidance of the obstacles presents an interesting problem as this involves the integration of the vision and the motion systems. In an unstructured environment, the problem becomes much more severe as the obstacles have to be clearly recognized for any decisive action to be taken. In this paper, we discuss a solution to detection and avoidance of simulated potholes in the path of an autonomous vehicle operating in an unstructured environment. Pothole avoidance may be considered similar to other obstacle avoidance except that the potholes are depressions rather than extrusions form a surface. A non-contact vision approach has been taken since potholes usually are significantly different visually from a background surface. Large potholes more than 2 feet in diameter will be detected. Furthermore, only white potholes will be detected on a background of grass, asphalt, sand or green painted bridges. The signals from the environment are captured by the vehicle's vision systems and pre-processed appropriately. A histogram is used to determine a brightness threshold to determine if a pothole is within the field of view. Then, a binary image is formed. Regions are then detected in the binary image. Regions that have a diameter close to 2 feet and a ratio of circumference to diameter close to pi are considered potholes. The neuro-fuzzy logic controller where navigational strategies are evaluated uses these signals to decide a final course of navigation. The primary significance of the solution is that it is interfaced seamlessly into the existing central logic controller. The solution can also be easily extended to detect and avoid any two dimensional shape."
]
}
|
1305.5981
|
2950781204
|
Extensive research has been conducted on query log analysis. A query log is generally represented as a bipartite graph on a query set and a URL set. Most of the traditional methods used the raw click frequency to weigh the link between a query and a URL on the click graph. In order to address the disadvantages of raw click frequency, researchers proposed the entropy-biased model, which incorporates raw click frequency with inverse query frequency of the URL as the weighting scheme for query representation. In this paper, we observe that the inverse query frequency can be considered a global property of the URL on the click graph, which is more informative than raw click frequency, which can be considered a local property of the URL. Based on this insight, we develop the global consistency model for query representation, which utilizes the click frequency and the inverse query frequency of a URL in a consistent manner. Furthermore, we propose a new scheme called inverse URL frequency as an effective way to capture the global property of a URL. Experiments have been conducted on the AOL search engine log data. The result shows that our global consistency model achieved better performance than the current models.
|
Extensive research has been conducted on click graphs to exploit implicit feedback @cite_7 . Frequently studied topics include agglomerative clustering @cite_13 , query clustering for URL recommendation @cite_15 , query suggestion @cite_17 , which used hitting time to generate semantically consistent suggestions, and rare query suggestion @cite_23 . Moreover, @cite_1 addressed query classification by increasing the amount of training data through semi-supervised learning on the click graph instead of enriching the feature representation. While there are works studying different aspects of user click information, @cite_10 revealed that the click probability of a webpage is influenced by its position on the result page. The sequential nature of user clicks has been considered in @cite_18 , whereas @cite_23 combined both the click and skip information from users. In addition, having noticed that click graphs are very sparse and that click frequency follows a power law, @cite_0 made use of co-click information for document annotation. Random walks have been applied to click graphs @cite_4 to improve the performance of image retrieval. @cite_16 also employed random walks to smooth the click graph and tackle the sparseness issue.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_10",
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"",
"2125771191",
"1992549066",
"2163375626",
"2129235726",
"2098326081",
"",
"",
"1972645849",
""
],
"abstract": [
"",
"",
"We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31 relative to the original performance.",
"Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A 'cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"This work presents the use of click graphs in improving query intent classifiers, which are critical if vertical search and general-purpose search services are to be offered in a unified user interface. Previous works on query classification have primarily focused on improving feature representation of queries, e.g., by augmenting queries with search engine results. In this work, we investigate a completely orthogonal approach --- instead of enriching feature representation, we aim at drastically increasing the amounts of training data by semi-supervised learning with click graphs. Specifically, we infer class memberships of unlabeled queries from those of labeled ones according to their proximities in a click graph. Moreover, we regularize the learning with click graphs by content-based classification to avoid propagating erroneous labels. We demonstrate the effectiveness of our algorithms in two different applications, product intent and job intent classification. In both cases, we expand the training data with automatically labeled queries by over two orders of magnitude, leading to significant improvements in classification performance. An additional finding is that with a large amount of training data obtained in this fashion, classifiers using only query words phrases as features can work remarkably well.",
"The performance of web search engines may often deteriorate due to the diversity and noisy information contained within web pages. User click-through data can be used to introduce more accurate description (metadata) for web pages, and to improve the search performance. However, noise and incompleteness, sparseness, and the volatility of web pages and queries are three major challenges for research work on user click-through log mining. In this paper, we propose a novel iterative reinforced algorithm to utilize the user click-through data to improve search performance. The algorithm fully explores the interrelations between queries and web pages, and effectively finds \"virtual queries\" for web pages and overcomes the challenges discussed above. Experiment results on a large set of MSN click-through log data show a significant improvement on search performance over the naive query log mining algorithm as well as the baseline search engine.",
"Query suggestion has been an effective approach to help users narrow down to the information they need. However, most of existing studies focused on only popular head queries. Since rare queries possess much less information (e.g., clicks) than popular queries in the query logs, it is much more difficult to efficiently suggest relevant queries to a rare query. In this paper, we propose an optimal rare query suggestion framework by leveraging implicit feedbacks from users in the query logs. Our model resembles the principle of pseudo-relevance feedback which assumes that top-returned results by search engines are relevant. However, we argue that the clicked URLs and skipped URLs contain different levels of information and thus should be treated differently. Hence, our framework optimally combines both the click and skip information from users and uses a random walk model to optimize the query correlation. Our model specifically optimizes two parameters: (1) the restarting (jumping) rate of random walk, and (2) the combination ratio of click and skip information. Unlike the Rocchio algorithm, our learning process does not involve the content of the URLs but simply leverages the click and skip counts in the query-URL bipartite graphs. Consequently, our model is capable of scaling up to the need of commercial search engines. Experimental results on one-month query logs from a large commercial search engine with over 40 million rare queries demonstrate the superiority of our framework, with statistical significance, over the traditional random walk models and pseudo-relevance feedback models.",
"",
"",
"!# @math ) 4 ' \" + @math 4 ) @ & A B C D E . * . ! F ' G H ' IJ K L * M & + *) N z'cC g) ' & p : q : r Htd|#t9 ,s tvuw x yFz |;w]tS 9 x |; C n+ U <dU' 4 2",
""
]
}
|
1305.5981
|
2950781204
|
Extensive research has been conducted on query log analysis. A query log is generally represented as a bipartite graph on a query set and a URL set. Most of the traditional methods used the raw click frequency to weigh the link between a query and a URL on the click graph. In order to address the disadvantages of raw click frequency, researchers proposed the entropy-biased model, which incorporates raw click frequency with inverse query frequency of the URL as the weighting scheme for query representation. In this paper, we observe that the inverse query frequency can be considered a global property of the URL on the click graph, which is more informative than raw click frequency, which can be considered a local property of the URL. Based on this insight, we develop the global consistency model for query representation, which utilizes the click frequency and the inverse query frequency of a URL in a consistent manner. Furthermore, we propose a new scheme called inverse URL frequency as an effective way to capture the global property of a URL. Experiments have been conducted on the AOL search engine log data. The result shows that our global consistency model achieved better performance than the current models.
|
In contrast, less work has been carried out on the study of query representation on click graphs. @cite_9 represented each query as a point in a high-dimensional space, with each dimension corresponding to a distinct URL. @cite_20 introduced the query-set based model for document representation, using query terms as features for summarizing the clicked webpages. The entropy-biased model for query representation was proposed in @cite_12 to replace the raw click frequency on the click graph. It assumed that less-clicked URLs are more effective in representing a given query than heavily clicked ones; thus, the raw click frequency was weighted by the inverse query frequency of the URL. However, the entropy-biased model utilized the raw click frequency and the inverse query frequency in the same manner as TF-IDF does, which may not be appropriate in the context of a click graph, because user click information is content-ignorant while text retrieval is content-aware. Our work is most closely related to @cite_12 , and our contribution is to study how to combine the raw click frequency and the global weight of a URL in a consistent way for query representation.
|
{
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_20"
],
"mid": [
"2086378526",
"",
"2022830384"
],
"abstract": [
"In this paper we study a large query log of more than twenty million queries with the goal of extracting the semantic relations that are implicitly captured in the actions of users submitting queries and clicking answers. Previous query log analyses were mostly done with just the queries and not the actions that followed after them. We first propose a novel way to represent queries in a vector space based on a graph derived from the query-click bipartite graph. We then analyze the graph produced by our query log, showing that it is less sparse than previous results suggested, and that almost all the measures of these graphs follow power laws, shedding some light on the searching user behavior as well as on the distribution of topics that people want in the Web. The representation we introduce allows to infer interesting semantic relationships between queries. Second, we provide an experimental analysis on the quality of these relations, showing that most of them are relevant. Finally we sketch an application that detects multitopical URLs.",
"",
"In this paper we present a new document representation model based on implicit user feedback obtained from search engine queries. The main objective of this model is to achieve better results in non-supervised tasks, such as clustering and labeling, through the incorporation of usage data obtained from search engine queries. This type of model allows us to discover the motivations of users when visiting a certain document. The terms used in queries can provide a better choice of features, from the user's point of view, for summarizing the Web pages that were clicked from these queries. In this work we extend and formalize as \"query model\" an existing but not very well known idea of \"query view\" for document representation. Furthermore, we create a novel model based on \"frequent query patterns\" called the \"query-set model\". Our evaluation shows that both \"query-based\" models outperform the vector-space model when used for clustering and labeling documents in a website. In our experiments, the query-set model reduces by more than 90 the number of features needed to represent a set of documents and improves by over 90 the quality of the results. We believe that this can be explained because our model chooses better features and provides more accurate labels according to the user's expectations."
]
}
|