| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1112.2188
|
1934586604
|
In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantage of other agents’ experiences through learning, or make decisions earlier to avoid competition from large crowds. Both of these effects, social learning and negative network externality, play important roles in an agent’s decision process. While there are existing works on either social learning or negative network externality, a general study considering both of these contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called the Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese restaurant game, we derive the optimal strategy of each agent and provide a recursive method to achieve it. How social learning and negative network externality influence each other under various settings is also studied through simulations.
|
Costain provided a more general dynamic global game with an unknown binary state and a general utility function in @cite_0 . The utility function includes information revelation, strategic complementarities, and payoff heterogeneity. To simplify the analysis, the positions of the agents in the game are assumed to be unknown. Nevertheless, most of these works study the multiplicity of equilibria in dynamic global games with simplified models, such as binary-state or binary-investment models. Moreover, the network externalities considered in these models are mostly positive. By proposing the Chinese restaurant game, we hereby provide a more general game-theoretic framework for studying social learning in a network with negative network externality, which has many applications in various research fields.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2024712778"
],
"abstract": [
"Recently, it has been claimed that full-information multiple equilibria in games with strategic complementarities are not robust, because generalizing to allow slightly heterogeneous information implies uniqueness. This paper argues that this \"global games\" uniqueness result is itself not robust. If we generalize by allowing most agents to observe a few previous actions before choosing, instead of forcing players to move exactly simultaneously, then multiplicity of outcomes is restored. Only a small sample of observations is needed to make our herding equilibrium behave like a full-information sunspot equilibrium instead of a global games equilibrium."
]
}
|
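The seating rule underlying the Chinese restaurant game is easy to state concretely. Below is a minimal sketch (our own illustration, not code from the paper) of the non-strategic Chinese restaurant process: each arriving customer joins an occupied table with probability proportional to its occupancy, or opens a new table with probability proportional to a concentration parameter alpha.

```python
import random

def chinese_restaurant_process(n_customers, alpha=1.0, seed=0):
    """Simulate the (non-strategic) CRP: each customer joins an occupied
    table with probability proportional to its occupancy, or opens a new
    table with probability proportional to the concentration alpha."""
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    assignment = []    # table index chosen by each customer
    for _ in range(n_customers):
        weights = tables + [alpha]        # existing tables, then a new one
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)              # open a new table
        else:
            tables[k] += 1                # join existing table k
        assignment.append(k)
    return assignment

print(chinese_restaurant_process(10))    # table index per customer
```

The game-theoretic version replaces this probabilistic rule with a utility-maximizing table choice under social learning and negative network externality; the sketch shows only the baseline process.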
1112.1178
|
1649129612
|
We study the problem of assigning @math identical servers to a set of @math parallel queues in a time-slotted queueing system. The connectivity of each queue to each server is randomly changing with time; each server can serve at most one queue and each queue can be served by at most one server during each time slot. Such a queueing model has been used in addressing resource allocation problems in wireless networks. It has been previously proven that Maximum Weighted Matching (MWM) is a throughput-optimal server assignment policy for such a queueing system. In this paper, we prove that for a system with i.i.d. Bernoulli packet arrivals and connectivities, MWM minimizes, in stochastic ordering sense, a broad range of cost functions of the queue lengths such as total queue occupancy (which implies minimization of average queueing delays). Then, we extend the model by considering imperfect services where it is assumed that the service of a scheduled packet fails randomly with a certain probability. We prove that the same policy is still optimal for the extended model. We finally show that the results are still valid for more general connectivity and arrival processes which follow conditional permutation invariant distributions.
|
In contrast to the single-server system (where LCQ is both throughput-optimal and delay-optimal), in the MQMS-Type1 system the MW policy is not necessarily delay-optimal. More specifically, it was also shown in @cite_29 that although the MW policy is throughput-optimal, in its general form it is not delay-optimal, even for a system with i.i.d. Bernoulli arrival and connectivity processes.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"1979213681"
],
"abstract": [
"Network capacity region of multi-queue multi-server queueing system with random connectivities and stationary arrival processes is studied in this paper. Specifically, the necessary and sufficient conditions for the stability of the system are derived under general arrival processes with finite first and second moments. In the case of stationary arrival processes, these conditions establish the network capacity region of the system. It is also shown that AS LCQ (Any Server Longest Connected Queue) policy stabilizes the system when it is stabilizable. Furthermore, an upper bound for the average queue occupancy is derived for this policy."
]
}
|
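For concreteness, one time slot of the MWM policy discussed above can be sketched as follows (our illustration; the weights are queue lengths masked by the random connectivities, and scipy's Hungarian solver stands in for any maximum-weight bipartite matching routine, since each queue serves at most one server and vice versa):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mwm_schedule(queue_lengths, connectivity):
    """One time slot of the MWM policy: weight each (queue, server) pair
    by queue length times connectivity, then pick a max-weight matching.

    queue_lengths: (N,) array of current queue occupancies.
    connectivity:  (N, K) 0/1 array, 1 if queue i sees server j this slot.
    Returns the (queue, server) pairs to serve.
    """
    weights = queue_lengths[:, None] * connectivity
    rows, cols = linear_sum_assignment(weights, maximize=True)
    # Drop zero-weight pairs: disconnected links or empty queues serve nothing.
    return [(i, j) for i, j in zip(rows, cols) if weights[i, j] > 0]

q = np.array([3, 0, 5, 2])
c = np.array([[1, 0], [1, 1], [0, 1], [1, 1]])
print(mwm_schedule(q, c))  # [(0, 0), (2, 1)]
```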
1112.1178
|
1649129612
|
We study the problem of assigning @math identical servers to a set of @math parallel queues in a time-slotted queueing system. The connectivity of each queue to each server is randomly changing with time; each server can serve at most one queue and each queue can be served by at most one server during each time slot. Such a queueing model has been used in addressing resource allocation problems in wireless networks. It has been previously proven that Maximum Weighted Matching (MWM) is a throughput-optimal server assignment policy for such a queueing system. In this paper, we prove that for a system with i.i.d. Bernoulli packet arrivals and connectivities, MWM minimizes, in stochastic ordering sense, a broad range of cost functions of the queue lengths such as total queue occupancy (which implies minimization of average queueing delays). Then, we extend the model by considering imperfect services where it is assumed that the service of a scheduled packet fails randomly with a certain probability. We prove that the same policy is still optimal for the extended model. We finally show that the results are still valid for more general connectivity and arrival processes which follow conditional permutation invariant distributions.
|
For more information on optimal scheduling and resource allocation problems in wireless networks, the reader is encouraged to also consult @cite_7 @cite_28 @cite_14 @cite_19 @cite_21 @cite_24 .
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_24",
"@cite_19"
],
"mid": [
"171504317",
"2138993731",
"2052836712",
"2293221123",
"2120086682",
"2173655337"
],
"abstract": [
"With an abstraction of serving rate-adaptive sources on a broadcast-type wireless channel as a utility maximization problem, it is shown how one can design many intuitive online scheduling policies based upon the feedback that one obtains at the scheduler. Using a stochastic approximation argument it is then shown that the constructed algorithms converge to optimal solutions of the utility maximization problem over different sets which critically depend on the quality of the feedback information.",
"Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided.",
"Scheduling has been extensively studied in various disciplines in operations research and wireline networking. However, the unique characteristics of wireless communication systems - namely, time-varying channel conditions and multiuser diversity - means that new scheduling solutions need to be developed that are specifically tailored for this environment. In this paper, we summarize various opportunistic scheduling schemes that exploit the time-varying nature of the radio environment to improve the spectrum efficiency while maintaining a certain level of satisfaction for each user. We also discuss the advantages and costs associated with opportunistic scheduling, and identify possible future research directions.",
"In a preferred embodiment, a drill gas turbine motor in which the housing structure directs pressurized gas both radially inwardly and axially along the longitudinal axis of the elongated housing, against vanes arranged along a circumscribing surface of the revolvable (rotatable) rotor mounted within rotor space within the housing to effect rotation of the rotor, and a drivable shaft extending from an opening at one end of the housing, and a dampening-spring mechanism at an opposite end of the housing and rotor arranged to dampen pressure and movement of the rotor in a direction away from the end of the drivable shaft thereby reducing wear on bearings supporting the rotar at opposite ends thereof, together with brake mechanism for manually exerting braking pressure against the rotor, and key mechanism for locking the rotor in a non-revolvable state during change of chuck on the drivable shaft, and the drivable shaft having formed in a distal end thereof a female receptacle receivable of a male end of a chuck.",
"Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair-scheduling algorithms because of two unique characteristics of wireless media: (1) bursty channel errors and (2) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however, a base station has only a limited knowledge of the arrival processes of uplink flows. We propose a new model for wireless fair-scheduling based on an adaptation of fluid fair queueing (FFQ) to handle location-dependent error bursts. We describe an ideal wireless fair-scheduling algorithm which provides a packetized implementation of the fluid mode, while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless FFQ model.",
"We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel conditions may be time-varying and different for different receivers. It is well-known that appropriately chosen queue-length based policies are throughput-optimal while other policies based on the estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability."
]
}
|
1112.0708
|
2953338282
|
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .
|
More broadly, message passing algorithms for compressed sensing were the object of a number of studies, starting with @cite_29 . As mentioned, we will focus on approximate message passing (AMP) as introduced in @cite_19 @cite_40 . As shown in @cite_11 , these algorithms can be used in conjunction with a rich class of denoisers @math . A subset of these denoisers arise as the posterior mean associated with a prior @math . Several interesting examples were studied by Schniter and collaborators @cite_25 @cite_49 @cite_36 , and by Rangan and collaborators @cite_26 @cite_4 .
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_29",
"@cite_19",
"@cite_40",
"@cite_49",
"@cite_25",
"@cite_11"
],
"mid": [
"2166670884",
"",
"1500149156",
"2135859872",
"2082029531",
"2963206527",
"2132381218",
"2154153158",
"2949898670"
],
"abstract": [
"We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang.",
"",
"We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed “turbo” message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-measurement structure. For the latter, we leverage Donoho, Maleki, and Montanari's recently proposed approximate message passing (AMP) algorithm. Experiments with a large image database suggest that, relative to existing schemes, our turbo LBP approach yields state-of-the-art reconstruction performance with substantial reduction in complexity.",
"Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log2(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.",
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"We propose a factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM). In contrast to existing designs, ours is capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths. In order to exploit these channel-tap structures, we adopt a two-state Gaussian mixture prior in conjunction with a Markov model on the hidden state. For loopy belief propagation, we exploit a “generalized approximate message passing” (GAMP) algorithm recently developed in the context of compressed sensing, and show that it can be successfully coupled with soft-input soft-output decoding, as well as hidden Markov inference, through the standard sum-product framework. For N subcarriers and any channel length L<;N, the resulting JCED-GAMP scheme has a computational complexity of only O(N log2 N +N|S|), where |S| is the constellation size. Numerical experiments using IEEE 802.15.4a channels show that our scheme yields BER performance within 1 dB of the known-channel bound and 3-4 dB better than soft equalization based on LMMSE and LASSO.",
"This paper considers the reconstruction of structured-sparse signals from noisy linear observations. In particular, the support of the signal coefficients is parameterized by hidden binary pattern, and a structured probabilistic prior (e.g., Markov random chain field tree) is assumed on the pattern. Exact inference is discussed and an approximate inference scheme, based on loopy belief propagation (BP), is proposed. The proposed scheme iterates between exploitation of the observation-structure and exploitation of the pattern-structure, and is closely related to noncoherent turbo equalization, as used in digital communication receivers. An algorithm that exploits the observation structure is then detailed based on approximate message passing ideas. The application of EXIT charts is discussed, and empirical phase transition plots are calculated for Markov-chain structured sparsity.1",
"Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples including scalar denoisers not derived from convex penalization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding, monotone regression, and total variation minimization. Let the variables eps = k N and delta = n N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x_0 according to y=Ax_0. Here A is an n N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve delta = delta(eps) separating successful from unsuccessful reconstruction of x_0 by AMP is given by: delta = M(eps| Denoiser), where M(eps| Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally-tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem."
]
}
|
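A minimal sketch of the AMP iteration referenced above, using the scalar soft-thresholding denoiser and the Onsager correction term of @cite_19 @cite_40 ; the threshold rule theta * tau and the fixed iteration count are simplifying assumptions, not the tuned choices of the cited works:

```python
import numpy as np

def soft_threshold(x, t):
    """Scalar soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, theta=1.0):
    """AMP for sparse recovery with a soft-threshold denoiser.

    Iterates  x <- eta(x + A^T z; theta * tau)  and
              z <- y - A x + (z / delta) * mean(eta'),
    where the last term is the Onsager correction and tau is an
    empirical estimate of the effective noise level.
    """
    n, N = A.shape
    delta = n / N                        # undersampling rate
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        tau = np.sqrt(np.mean(z ** 2))   # effective noise estimate
        pseudo = x + A.T @ z
        x = soft_threshold(pseudo, theta * tau)
        # mean(|x| > 0) equals the average derivative of the denoiser here.
        z = y - A @ x + (z / delta) * np.mean(np.abs(x) > 0)
    return x

# Tiny demo: recover a sparse vector from undersampled Gaussian measurements.
rng = np.random.default_rng(0)
N, n, k = 400, 200, 20
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(n, N)) / np.sqrt(n)
y = A @ x0
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # relative reconstruction error
```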
1112.0708
|
2953338282
|
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .
|
Spatial coupling has been the object of growing interest within coding theory over the last few years. The first instances of spatially coupled code ensembles were the convolutional LDPC codes of Felström and Zigangirov @cite_48 . While the excellent performance of such codes had been known for quite some time @cite_22 , the fundamental reason was not elucidated until recently @cite_14 (see also @cite_50 ). In particular, @cite_14 proved, for communication over the binary erasure channel (BEC), that the thresholds of spatially coupled ensembles under message passing decoding coincide with the thresholds of the base LDPC code under MAP decoding. This implies that spatially coupled ensembles achieve capacity over the BEC. The analogous statement for general memoryless symmetric channels was first elucidated in @cite_37 and finally proved in @cite_12 . The paper @cite_2 discusses similar ideas in a number of graphical models.
|
{
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_22",
"@cite_48",
"@cite_50",
"@cite_2",
"@cite_12"
],
"mid": [
"2951577762",
"2172679141",
"2410688423",
"1991528082",
"2156991284",
"2963578263",
"2139012746"
],
"abstract": [
"We consider spatially coupled code ensembles. A particular instance are convolutional LDPC ensembles. It was recently shown that, for transmission over the binary erasure channel, this coupling increases the belief propagation threshold of the ensemble to the maximum a-priori threshold of the underlying component ensemble. We report on empirical evidence which suggest that the same phenomenon also occurs when transmission takes place over a general binary memoryless symmetric channel. This is confirmed both by simulations as well as by computing EBP GEXIT curves and by comparing the empirical BP thresholds of coupled ensembles to the empirically determined MAP thresholds of the underlying regular ensembles. We further consider ways of reducing the rate-loss incurred by such constructions.",
"Convolutional low-density parity-check (LDPC) ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon “threshold saturation.” This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error correcting radius that is increasing in the blocklength. Although we prove the “threshold saturation” only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis.",
"An ensemble of LDPC convolutional codes with parity-check matrices composed of permutation matrices is introduced. The convergence of the iterative belief propagation based decoder for terminated convolutional codes in the ensemble when operating on the erasure channel is analyzed. The structured irregularity in the Tanner graph of the codes leads to significantly better thresholds when compared to the corresponding LDPC block codes.",
"We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulation shows that for the rate R=1 2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097 showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10 sup -5 and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10.",
"A threshold analysis of terminated generalized LDPC convolutional codes (GLDPC CCs) is presented for the binary erasure channel. Different ensembles of protograph-based GLDPC CCs are considered, including braided block codes (BBCs). It is shown that the terminated PG-GLDPC CCs have better thresholds than their block code counterparts. Surprisingly, our numerical analysis suggests that for large termination factors the belief propagation decoding thresholds of PG-GLDPC CCs coincide with the ML decoding thresholds of the corresponding PG-GLDPC block codes.",
"The excellent performance of convolutional low-density parity-check codes is the result of the spatial coupling of individual underlying codes across a window of growing size, but much smaller than the length of the individual codes. Remarkably, the belief-propagation threshold of the coupled ensemble is boosted to the maximum-a-posteriori one of the individual system. We investigate the generality of this phenomenon beyond coding theory: we couple general graphical models into a one-dimensional chain of large individual systems. For the later we take the Curie-Weiss, random field Curie-Weiss, If-satisfiability, and Q-coloring models. We always find, based on analytical as well as numerical calculations, that the message passing thresholds of the coupled systems come very close to the static ones of the individual models. The remarkable properties of convolutional low-density parity-check codes are a manifestation of this very general phenomenon.",
"We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a-priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble code which is good for all channels if we assume that the channel is known at the receiver. The key technical result is a proof that under belief propagation decoding spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems."
]
}
|
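The threshold-saturation phenomenon described above can be reproduced with a few lines of density evolution. The sketch below (our illustration, using the standard recursion for a terminated coupled chain) runs a spatially coupled (l, r)-regular ensemble on the BEC; erasure rates between the uncoupled BP threshold (about 0.4294 for the (3,6) ensemble) and its MAP threshold (about 0.4881) still decode under coupling, while rates above the MAP threshold do not:

```python
import numpy as np

def de_coupled_bec(eps, L=50, w=3, l=3, r=6, iters=2000):
    """Density evolution for a terminated, spatially coupled (l, r)-regular
    LDPC ensemble on the BEC(eps): chain length L, coupling window w.
    Positions outside [0, L) are perfectly known (the seed at the boundary).
    Returns the worst residual erasure probability along the chain.
    """
    x = np.full(L, eps)  # erasure prob of variable-to-check messages
    for _ in range(iters):
        x_new = np.empty(L)
        for i in range(L):
            acc = 0.0
            for j in range(w):                    # checks this variable touches
                c = i + j
                avg = np.mean([x[c - k] if 0 <= c - k < L else 0.0
                               for k in range(w)])
                acc += 1.0 - (1.0 - avg) ** (r - 1)
            x_new[i] = eps * (acc / w) ** (l - 1)
        x = x_new
    return x.max()

# Uncoupled (3,6) BP threshold ~ 0.4294; MAP threshold ~ 0.4881.
for eps in (0.45, 0.47, 0.49):
    print(eps, de_coupled_bec(eps))   # decodes below ~0.488, stuck above
```

Increasing the chain length L reduces the rate loss from the known boundary positions, at the cost of more iterations for the decoding wave to traverse the chain.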
1112.0708
|
2953338282
|
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .
|
The first application of spatial coupling ideas to compressed sensing is due to Kudekar and Pfister @cite_10 . They consider a class of sparse spatially coupled sensing matrices, very similar to parity check matrices for spatially coupled LDPC codes. On the other hand, their proposed message passing algorithms do not make use of the signal distribution @math and do not fully exploit the potential of spatially coupled matrices. The message passing algorithm used here belongs to the general class introduced in @cite_19 . The specific use of the minimum mean square error denoiser was suggested in @cite_40 . The same choice is made in @cite_3 , which also considers Gaussian matrices with heteroscedastic entries, although the variance structure is somewhat less general.
|
{
"cite_N": [
"@cite_19",
"@cite_40",
"@cite_10",
"@cite_3"
],
"mid": [
"2082029531",
"2963206527",
"",
"2073868986"
],
"abstract": [
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"",
"Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases."
]
}
|
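The band-diagonal, spatially coupled sensing matrices discussed here can be sketched as a block matrix whose Gaussian blocks are nonzero only within a band around the diagonal. The block sizes, bandwidth, and unit-column normalization below are illustrative assumptions, not the exact ensemble of the paper:

```python
import numpy as np

def band_diagonal_matrix(rows_blocks, cols_blocks, m, n, bandwidth, seed=0):
    """Sensing matrix with a band-diagonal block structure: block (r, c) of
    size m x n holds i.i.d. standard Gaussian entries if |r - c| <= bandwidth
    and is zero otherwise; columns are then scaled to unit norm."""
    rng = np.random.default_rng(seed)
    A = np.zeros((rows_blocks * m, cols_blocks * n))
    for r in range(rows_blocks):
        for c in range(cols_blocks):
            if abs(r - c) <= bandwidth:
                A[r*m:(r+1)*m, c*n:(c+1)*n] = rng.normal(size=(m, n))
    # Every block-column contains its diagonal block, so no column is zero.
    return A / np.linalg.norm(A, axis=0, keepdims=True)

A = band_diagonal_matrix(8, 8, 16, 32, bandwidth=1)
print(A.shape)  # (128, 256): undersampling rate delta = 0.5
```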
1112.0708
|
2953338282
|
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .
|
Finally, let us mention that robust sparse recovery of @math -sparse vectors from @math measurements is possible using suitable 'adaptive' sensing schemes @cite_31 .
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2168831133"
],
"abstract": [
"The goal of (stable) sparse recovery is to recover a @math -sparse approximation @math of a vector @math from linear measurements of @math . Specifically, the goal is to recover @math such that @math for some constant @math and norm parameters @math and @math . It is known that, for @math or @math , this task can be accomplished using @math non-adaptive measurements CRT06:Stable-Signal and that this bound is tight DIPW, FPRU, PW11 . In this paper we show that if one is allowed to perform measurements that are adaptive , then the number of measurements can be considerably reduced. Specifically, for @math and @math we show A scheme with @math measurements that uses @math rounds. This is a significant improvement over the best possible non-adaptive bound. A scheme with @math measurements that uses two rounds. This improves over the best possible non-adaptive bound. To the best of our knowledge, these are the first results of this type."
]
}
|
1112.0674
|
2951268556
|
Interference management techniques are critical to the performance of heterogeneous cellular networks, which will have dense and overlapping coverage areas, and experience high levels of interference. Fractional frequency reuse (FFR) is an attractive interference management technique due to its low complexity and overhead, and significant coverage improvement for low-percentile (cell-edge) users. Instead of relying on system simulations based on deterministic access point locations, this paper proposes an analytical model for evaluating Strict FFR and Soft Frequency Reuse (SFR) deployments based on the spatial Poisson point process. Our results both capture the non-uniformity of heterogeneous deployments and produce tractable expressions which can be used for system design with Strict FFR and SFR. We observe that the use of Strict FFR bands reserved for the users of each tier with the lowest average SINR provides the highest gains in terms of coverage and rate, while the use of SFR allows for more efficient use of shared spectrum between the tiers, while still mitigating much of the interference. Additionally, in the context of multi-tier networks with closed access in some tiers, the proposed framework shows the impact of cross-tier interference on closed access FFR, and informs the selection of key FFR parameters in open access.
|
Early work on frequency partitioning for two-tier networks is found in @cite_4 . Their proposed strategy maximizes spectral efficiency subject to a minimum QoS requirement for a given number of users per tier. They assume that the femtocells are given a separate frequency band from the macrocells, such that there is no cross-tier interference.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2120419969"
],
"abstract": [
"Two-tier networks, comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays), offer an economically viable way to improve cellular system capacity. The capacity-limiting factor in such networks is interference. The cross-tier interference between macrocells and femtocells can suffocate the capacity due to the near-far problem, so in practice hotspots should use a different frequency channel than the potentially nearby high-power macrocell users. Centralized or coordinated frequency planning, which is difficult and inefficient even in conventional cellular networks, is all but impossible in a two-tier network. This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA). The proposed allocation is optimal in terms of area spectral efficiency (ASE), and is subjected to a sensible quality of service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate. Results show the dependence of this allocation on the QoS requirement, hotspot density and the co-channel interference from the macrocell and femtocells. Design interpretations are provided."
]
}
|
1112.0674
|
2951268556
|
Interference management techniques are critical to the performance of heterogeneous cellular networks, which will have dense and overlapping coverage areas, and experience high levels of interference. Fractional frequency reuse (FFR) is an attractive interference management technique due to its low complexity and overhead, and significant coverage improvement for low-percentile (cell-edge) users. Instead of relying on system simulations based on deterministic access point locations, this paper proposes an analytical model for evaluating Strict FFR and Soft Frequency Reuse (SFR) deployments based on the spatial Poisson point process. Our results both capture the non-uniformity of heterogeneous deployments and produce tractable expressions which can be used for system design with Strict FFR and SFR. We observe that the use of Strict FFR bands reserved for the users of each tier with the lowest average SINR provides the highest gains in terms of coverage and rate, while the use of SFR allows for more efficient use of shared spectrum between the tiers, while still mitigating much of the interference. Additionally, in the context of multi-tier networks with closed access in some tiers, the proposed framework shows the impact of cross-tier interference on closed access FFR, and informs the selection of key FFR parameters in open access.
|
The authors in @cite_9 consider an adaptive FFR strategy for mitigating inter-femtocell interference while keeping spectral efficiency as high as possible. They vary the size of the FFR partitions and the transmit power based on the amount of estimated interference. However, they use a deterministic model for femtocells inside a single building and neglect macrocell and femtocell interference from outside the building. Very recent work in @cite_30 presents a deterministic-model analysis of the spectral efficiency of femtocells as a function of a femtocell's location in a two-tier network, with base stations modeled as a hexagonal grid and femtocells uniformly deployed in each cell. They fix the macrocell FFR sub-band allocations and then consider the spectral efficiency of a femtocell as a function of its distance from the cell center.
|
{
"cite_N": [
"@cite_30",
"@cite_9"
],
"mid": [
"2121469866",
"2076855206"
],
"abstract": [
"In OFDMA systems adopting fractional frequency reuse, we can allocate orthogonal bandwidth to femtocells, which mitigates throughput degradation of macrocell users. In this paper, we discuss the optimal power allocation for femtocells with different orthogonal subbands, based on analysis of macrocell interferences.",
"In this paper, we consider the use of fractional frequency reuse (FFR) to mitigate inter-femtocell interference in multi-femtocell environments. The use of universal frequency reuse can be said optimum in terms of the ergodic system spectral efficiency. However, it may cause inter-femtocell interference near the cell edge, making it difficult to serve the whole cell coverage. To alleviate this problem, we adjust the frequency reuse factor with the aid of femtocell location information. The proposed scheme can provide reasonably high ergodic system spectral efficiency, while assuring a desired performance near the cell boundary. Simulation results show that the proposed scheme is quite effective in indoor environments such as residential or office buildings."
]
}
|
1112.0674
|
2951268556
|
Interference management techniques are critical to the performance of heterogeneous cellular networks, which will have dense and overlapping coverage areas, and experience high levels of interference. Fractional frequency reuse (FFR) is an attractive interference management technique due to its low complexity and overhead, and significant coverage improvement for low-percentile (cell-edge) users. Instead of relying on system simulations based on deterministic access point locations, this paper proposes an analytical model for evaluating Strict FFR and Soft Frequency Reuse (SFR) deployments based on the spatial Poisson point process. Our results both capture the non-uniformity of heterogeneous deployments and produce tractable expressions which can be used for system design with Strict FFR and SFR. We observe that the use of Strict FFR bands reserved for the users of each tier with the lowest average SINR provides the highest gains in terms of coverage and rate, while the use of SFR allows for more efficient use of shared spectrum between the tiers, while still mitigating much of the interference. Additionally, in the context of multi-tier networks with closed access in some tiers, the proposed framework shows the impact of cross-tier interference on closed access FFR, and informs the selection of key FFR parameters in open access.
|
Frequency partitioning between macrocells and femtocells is revisited in @cite_31 . They propose a model in which some sub-bands are reserved for macrocell-only or femtocell-only users, in addition to a common group of sub-bands, similar in concept to the proposed Strict FFR model. They also consider partitioning in the time domain. They provide a large number of simulation results based on a deterministic model for the AP locations and motivate dynamic partitioning based on interference levels measured by users in either tier.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2011220219"
],
"abstract": [
"We consider the problem of sharing spectrum between different base stations in an OFDM network where some cells have a small radius. Such scenarios will become increasingly common in fourth generation networks where the need for ubiquitous high-speed coverage will lead to an increased use of small cells as well as indoor femtocells. Our aim is to devise autonomous algorithms for small cells and femtocells to choose spectrum so that they can achieve high data rates without causing interference to users in the traditional macro cells. We present a number of algorithms that perform combinations of frequency and time sharing based on the channel conditions reported by the mobile users. Our schemes bear some resemblance to the traditional 802.11 MAC algorithms. However, they differ in the fact that they are able to use better information about channel conditions from the mobiles and they are allowed to adjust the amount of spectrum that they are using. We evaluate our schemes using a platform that combines a physical-layer ray-tracing tool for indoor and outdoor environments with an upper layer OFDM simulation tool. We believe that this type of simulation capability will become increasingly important as cellular networks target the provision of high-speed performance in dense urban environments. Our results suggest that user channel quality measurements can be used to set the level of sharing between femtocells and macrocells and that finding the correct level of sharing is important for optimal network performance."
]
}
|
1112.0674
|
2951268556
|
Interference management techniques are critical to the performance of heterogeneous cellular networks, which will have dense and overlapping coverage areas, and experience high levels of interference. Fractional frequency reuse (FFR) is an attractive interference management technique due to its low complexity and overhead, and significant coverage improvement for low-percentile (cell-edge) users. Instead of relying on system simulations based on deterministic access point locations, this paper proposes an analytical model for evaluating Strict FFR and Soft Frequency Reuse (SFR) deployments based on the spatial Poisson point process. Our results both capture the non-uniformity of heterogeneous deployments and produce tractable expressions which can be used for system design with Strict FFR and SFR. We observe that the use of Strict FFR bands reserved for the users of each tier with the lowest average SINR provides the highest gains in terms of coverage and rate, while the use of SFR allows for more efficient use of shared spectrum between the tiers, while still mitigating much of the interference. Additionally, in the context of multi-tier networks with closed access in some tiers, the proposed framework shows the impact of cross-tier interference on closed access FFR, and informs the selection of key FFR parameters in open access.
|
The two primary user association policies for heterogeneous networks are closed access and open access. Under closed access, mobiles are restricted from connecting with certain tiers of access points based on system performance metrics or, in some cases, economic or legal factors @cite_35 . Open access instead allows users to connect to APs of different tiers based on the association policy, which may be measured signal-to-interference ratio ( @math ) or traffic load, and can be used as an interference management technique @cite_22 . The authors in @cite_16 consider performance tradeoffs for closed and open femtocell networks. Their analysis uses stochastic geometry tools from @cite_10 to derive @math distributions for different deployment scenarios, at the cell edge or interior, and for varying femtocell densities. However, their analysis is constrained to the interior of a single macrocell and does not consider the effect of inter-cell interference or the use of FFR on the @math distributions.
|
{
"cite_N": [
"@cite_35",
"@cite_16",
"@cite_10",
"@cite_22"
],
"mid": [
"2122496159",
"2167340083",
"2096536332",
"2114539661"
],
"abstract": [
"The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.",
"A fundamental choice in femtocell deployments is the set of users which are allowed to access each femtocell. Closed access restricts the set to specifically registered users, while open access allows any mobile subscriber to use any femtocell. The main results of the paper are lemmas which provide expressions for the SINR distribution for various zones within a cell as a function of this MBS-femto distance. The average sum throughput (or any other SINR-based metric) of home and cellular users under open and closed access can be readily determined from these expressions. We show that unlike in the uplink, the interests of home and cellular users are in conflict, with home users preferring closed access and cellular users preferring open access. The conflict is most pronounced for femtocells near the cell edge, when there are many cellular users and fewer femtocells.",
"Cellular networks are usually modeled by placing the base stations according to a regular geometry such as a grid, with the mobile users scattered around the network either as a Poisson point process (i.e. uniform distribution) or deterministically. These models have been used extensively for cellular design and analysis but suffer from being both highly idealized and not very tractable. Thus, complex simulations are used to evaluate key metrics such as coverage probability for a specified target rate (equivalently, the outage probability) or average sum rate. We develop general models for multi-cell signal-to-noise-plus-interference ratio (SINR) based on homogeneous Poisson point processes and derive the coverage probability, which is one minus the outage probability. Under very general assumptions, the resulting expressions for the SINR cumulative distribution function involve quickly computable integrals, and in some important special cases of practical interest these integrals can be simplified to common integrals (e.g., the Q-function) or even to exact and quite simple closed-form expressions. We compare our coverage predictions to the standard grid model and an actual base station deployment. We observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in urban cellular networks with highly variable coverage radii.",
"Femtocells are assuming an increasingly important role in the coverage and capacity of cellular networks. In contrast to existing cellular systems, femtocells are end-user deployed and controlled, randomly located, and rely on third party backhaul (e.g. DSL or cable modem). Femtocells can be configured to be either open access or closed access. Open access allows an arbitrary nearby cellular user to use the femtocell, whereas closed access restricts the use of the femtocell to users explicitly approved by the owner. Seemingly, the network operator would prefer an open access deployment since this provides an inexpensive way to expand their network capabilities, whereas the femtocell owner would prefer closed access, in order to keep the femtocell's capacity and backhaul to himself. We show mathematically and through simulations that the reality is more complicated for both parties, and that the best approach depends heavily on whether the multiple access scheme is orthogonal (TDMA or OFDMA, per subband) or non-orthogonal (CDMA). In a TDMA OFDMA network, closed-access is typically preferable at high user densities, whereas in CDMA, open access can provide gains of more than 300 for the home user by reducing the near-far problem experienced by the femtocell. The results of this paper suggest that the interests of the femtocell owner and the network operator are more compatible than typically believed, and that CDMA femtocells should be configured for open access whereas OFDMA or TDMA femtocells should adapt to the cellular user density."
]
}
|
1112.0031
|
1942505030
|
The communities of a social network are sets of vertices with more connections inside the set than outside. We theoretically demonstrate that two commonly observed properties of social networks, heavy-tailed degree distributions and large clustering coefficients, imply the existence of vertex neighborhoods (also known as egonets) that are themselves good communities. We evaluate these neighborhood communities on a range of graphs. We find that the neighborhood communities often exhibit conductance scores that are as good as the Fiedler cut. Also, the conductance of neighborhood communities shows behavior similar to that of the network community profile computed with a personalized PageRank community detection method. The latter requires sweeping over a great many starting vertices, which can be expensive. By using a small and easy-to-compute set of neighborhood communities as seeds for these PageRank communities, however, we find communities that precisely capture the behavior of the network community profile when seeded everywhere in the graph, and at a significant reduction in total work.
|
Much of the modern work on networks rests on surprising empirical observations about the structure of real world connections. For instance, information networks were found to have a power-law in the degree distribution @cite_3 @cite_29 . These same networks were also found to have considerable local structure in the form of large clustering coefficients @cite_12 , but retained a small global diameter. Our theory shows that a third potential observation -- the existence of vertex neighborhoods with low conductance -- is in fact implied by these other two properties. We formally show that heavy-tailed degree distributions and high clustering coefficients imply the existence of large dense cores.
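The claim above can be probed empirically. Below is a minimal sketch, assuming the networkx library and a small built-in example graph (both are illustrative stand-ins, not the datasets evaluated in the paper), that scores every vertex neighborhood by its conductance, the community-quality measure used throughout this line of work.

```python
# Score each vertex neighborhood (egonet) by conductance: the ratio of cut
# edges to the smaller of the two volumes; lower means a better community.
import networkx as nx
from networkx.algorithms.cuts import conductance

G = nx.karate_club_graph()  # stand-in example graph

def neighborhood_community(G, v):
    """The egonet of v: the vertex itself together with its neighbors."""
    return set(G[v]) | {v}

scores = {v: conductance(G, neighborhood_community(G, v)) for v in G}
best = min(scores, key=scores.get)
print(f"best egonet seed: {best}, conductance: {scores[best]:.3f}")
```

Sweeping such precomputed egonet scores is exactly the cheap seeding step the abstract describes, in contrast to sweeping personalized PageRank communities from every starting vertex.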
|
{
"cite_N": [
"@cite_29",
"@cite_12",
"@cite_3"
],
"mid": [
"1976969221",
"",
"2008620264"
],
"abstract": [
"Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45 growth of its size during that period. We show that our power-laws fit the real data very well resulting in correlation coefficients of 96 or higher.Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes.",
"",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
]
}
|
1112.0031
|
1942505030
|
The communities of a social network are sets of vertices with more connections inside the set than outside. We theoretically demonstrate that two commonly observed properties of social networks, heavy-tailed degree distributions and large clustering coefficients, imply the existence of vertex neighborhoods (also known as egonets) that are themselves good communities. We evaluate these neighborhood communities on a range of graphs. We find that the neighborhood communities often exhibit conductance scores that are as good as the Fiedler cut. Also, the conductance of neighborhood communities shows behavior similar to that of the network community profile computed with a personalized PageRank community detection method. The latter requires sweeping over a great many starting vertices, which can be expensive. By using a small and easy-to-compute set of neighborhood communities as seeds for these PageRank communities, however, we find communities that precisely capture the behavior of the network community profile when seeded everywhere in the graph, and at a significant reduction in total work.
|
Predictable behavior in the structure of egonets makes them a useful tool for detecting anomalous patterns in the structure of the network. For instance, Akoglu et al. @cite_38 compute a small collection of measures on each egonet, such as the average degree and the largest eigenvalue. Vertices that are outliers in this feature space are often genuinely anomalous. Our work is, in contrast, a precise statement about the regularity of egonets: we always expect a large egonet to be a good community.
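To make the contrast concrete, here is a sketch of the OddBall-style detection loop the paragraph describes; the particular features and the power-law residual score are illustrative choices, not the exact recipe of @cite_38 . It assumes networkx and numpy.

```python
# Extract simple per-egonet features and flag vertices whose edge count
# deviates most from the power-law trend e ~ c * n^a fitted over all egonets.
import networkx as nx
import numpy as np

def egonet_features(G, v):
    ego = G.subgraph(set(G[v]) | {v})
    n, e = ego.number_of_nodes(), ego.number_of_edges()
    lam = max(abs(np.linalg.eigvalsh(nx.to_numpy_array(ego))))  # largest |eigenvalue|
    return n, e, lam

G = nx.karate_club_graph()  # stand-in example graph
feats = np.array([egonet_features(G, v) for v in G])
log_n = np.log(feats[:, 0])
log_e = np.log(np.maximum(feats[:, 1], 1.0))
a, log_c = np.polyfit(log_n, log_e, 1)          # fit log e = a log n + log c
outlier_score = np.abs(log_e - (a * log_n + log_c))
print("most anomalous vertex:", int(np.argmax(outlier_score)))
```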
|
{
"cite_N": [
"@cite_38"
],
"mid": [
"1492581097"
],
"abstract": [
"Given a large, weighted graph, how can we find anomalies? Which rules should be violated, before we label a node as an anomaly? We propose the oddball algorithm, to find such nodes The contributions are the following: (a) we discover several new rules (power laws) in density, weights, ranks and eigenvalues that seem to govern the so-called “neighborhood sub-graphs” and we show how to use these rules for anomaly detection; (b) we carefully choose features, and design oddball, so that it is scalable and it can work un-supervised (no user-defined constants) and (c) we report experiments on many real graphs with up to 1.6 million nodes, where oddball indeed spots unusual nodes that agree with intuition."
]
}
|
1112.0266
|
1560518130
|
We present an approximation to the Brunet--Derrida model of supercritical branching Brownian motion on the real line with selection of the @math right-most particles, valid when the population size @math is large. It consists of introducing a random space-time barrier at which particles are instantaneously killed in such a way that the population size stays almost constant over time. We prove that the suitably recentered position of this barrier converges at the @math timescale to a Lévy process, which we identify. This validates the physicists' predictions about the fluctuations in the Brunet--Derrida model.
|
The author is aware of only two mathematically rigorous articles on the @math -BBM or the @math -BRW (branching random walk): Bérard and Gouéré @cite_17 prove the @math correction of the linear speed of the @math -BRW, thereby showing the validity of the approximation by a deterministic traveling wave with cutoff. Durrett and Remenik @cite_43 study the empirical distribution of the @math -BRW and show that it converges to a system of integro-differential equations with moving boundary. BBM with absorption at a linear space-time barrier, however, is a well-studied process (see for example @cite_5 @cite_39 @cite_66 @cite_0 @cite_34 ) and is much more tractable than the @math -BBM due to the greater independence between the particles and its connection with some differential equations @cite_66 @cite_25 @cite_34 .
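For readers unfamiliar with the model, the branching-selection rule is simple enough to simulate directly. The sketch below (plain numpy; binary branching, Gaussian displacements, and all parameter values are illustrative choices) keeps the N right-most particles at each step and estimates the front speed, the quantity whose finite-population correction is studied in @cite_17 .

```python
# N-particle branching random walk with selection (Brunet--Derrida rule):
# every particle branches into two offspring with independent Gaussian
# displacements, then only the N right-most offspring survive.
import numpy as np

def n_brw_front_speed(N=1000, steps=500, rng=None):
    rng = rng or np.random.default_rng(0)
    pos = np.zeros(N)
    for _ in range(steps):
        children = np.repeat(pos, 2) + rng.normal(size=2 * N)  # branch + move
        pos = np.sort(children)[-N:]                           # selection step
    return pos.mean() / steps  # empirical front speed

print("front speed for N=1000:", n_brw_front_speed())
```

Comparing the output across several values of N illustrates the unexpectedly slow (log N)^{-2} convergence of the speed established in @cite_17 .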
|
{
"cite_N": [
"@cite_25",
"@cite_39",
"@cite_0",
"@cite_43",
"@cite_5",
"@cite_34",
"@cite_66",
"@cite_17"
],
"mid": [
"2112662214",
"2086030421",
"1989904735",
"2078356107",
"2025859223",
"1531393797",
"2031810620",
"2592832780"
],
"abstract": [
"We study supercritical branching Brownian motion on the real line starting at the origin and with constant drift @math . At the point @math , we add an absorbing barrier, i.e. individuals touching the barrier are instantly killed without producing offspring. It is known that there is a critical drift @math , such that this process becomes extinct almost surely if and only if @math . In this case, if @math denotes the number of individuals absorbed at the barrier, we give an asymptotic for @math as @math goes to infinity. If @math and the reproduction is deterministic, this improves upon results of [L. Addario-Berry and N. Broutin (2009), http: arxiv.org abs 0908.1083v1 ] and [E. A \" dekon (2009), http: arxiv.org abs 0911.0877v1 ] on a conjecture by David Aldous about the total progeny of a branching random walk. The main technique used in the proofs is analysis of the generating function of @math near its singular point @math , based on classical results on some complex differential equations.",
"Considerons une marche aleatoire branchante surcritique a temps discret. Nous nous interessons a la probabilite qu'il existe un rayon infini du support de la marche aleatoire branchante, le long duquel elle croit plus vite qu'une fonction lineaire de pente y - e, ou γ designe la vitesse asymptotique de la position de la particule la plus a droite dans la marche aleatoire branchante. Sous des hypotheses generales peu restrictives, nous prouvons que, lorsque e → 0, cette probabilite decroit comme exp —β+o(1) e 1 2 , ou β est une constante strictement positive dont la valeur depend de la loi de la marche aleatoire branchante. Dans le cas special ou des variables aleatoires i.i.d. de Bernoulli(p) (avec 0 < p < 1 2 ) sont placees sur les aretes d'un arbre binaire enracine, ceci repond a une question ouverte de Robin Pemantle (Ann. Appl. Probab. 19 (2009) 1273-1291).",
"We consider a branching diffusion Zt t[greater-or-equal, slanted]0 in which particles move during their life time according to a Brownian motion with drift -[mu] and variance coefficient [sigma]2, and in which each particle which enters the negative half line is instantaneously removed from the population. If particles die with probability c dt+o(dt) in [t,t+dt] and if the mean number of offspring per particle is m>1, then Zt dies out w.p.l. if [mu][greater-or-equal, slanted][mu]0[reverse not equivalent] 2[sigma]2c(m-1) 1 2. If [mu] 0 is only exp -const.T1 3+0(logT)2 , and conditionally on ZT>0 there are with high probability much fewer particles alive at time T than E ZTZT0 .",
"We consider a branching-selection system in R with N particles which give birth independently at rate 1 and where after each birth the leftmost particle is erased, keeping the number of particles constant. We show that, as N → ∞, the empirical measure process associated to the system converges in distribution to a deterministic measure-valued process whose densities solve a free boundary integrodifferential equation. We also show that this equation has a unique traveling wave solution traveling at speed c or no such solution depending on whether c ≥ a or c < a, where a is the asymptotic speed of the branching random walk obtained by ignoring the removal of the leftmost particles in our process. The traveling wave solutions correspond to solutions of Wiener-Hopf equations.",
"A branching random walk in presence of an absorbing wall moving at a constant velocity v undergoes a phase transition as v varies. The problem can be analyzed using the properties of the Fisher-Kolmogorov-Petrovsky-Piscounov (F-KPP) equation. We find that the survival probability of the branching random walk vanishes at a critical velocity vc of the wall with an essential singularity and we characterize the divergences of the relaxation times for v vc. At v=vc the survival probability decays like a stretched exponential. Using the F-KPP equation, one can also calculate the distribution of the population size at time t conditioned by the survival of one individual at a later time T>t. Our numerical results indicate that the size of the population diverges like the exponential of (vc−v)−1 2 in the quasi-stationary regime below vc. Moreover for v>vc, our data indicate that there is no quasi-stationary regime.",
"Out of simplicity, we restrict ourselves to consider the dyadic brownian branching process (Nt, t ∈ R+) on the real line. By definition of this process, its particles perform independent brownian motions untill they split into exactly two particles at independent and mean one exponential times; then Nt denotes the point process formed on R by the particles alive at time t.",
"Abstract At the heart of this article will be the study of a branching Brownian motion (BBM) with killing , where individual particles move as Brownian motions with drift − ρ , perform dyadic branching at rate β and are killed on hitting the origin. Firstly, by considering properties of the right-most particle and the extinction probability, we will provide a probabilistic proof of the classical result that the ‘one-sided’ FKPP travelling-wave equation of speed − ρ with solutions f : [ 0 , ∞ ) → [ 0 , 1 ] satisfying f ( 0 ) = 1 and f ( ∞ ) = 0 has a unique solution with a particular asymptotic when ρ 2 β , and no solutions otherwise. Our analysis is in the spirit of the standard BBM studies of [S.C. Harris, Travelling-waves for the FKPP equation via probabilistic arguments, Proc. Roy. Soc. Edinburgh Sect. A 129 (3) (1999) 503–517] and [A.E. Kyprianou, Travelling wave solutions to the K-P-P equation: alternatives to Simon Harris' probabilistic analysis, Ann. Inst. H. Poincare Probab. Statist. 40 (1) (2004) 53–72] and includes an intuitive application of a change of measure inducing a spine decomposition that, as a by product, gives the new result that the asymptotic speed of the right-most particle in the killed BBM is 2 β − ρ on the survival set. Secondly, we introduce and discuss the convergence of an additive martingale for the killed BBM, W λ , that appears of fundamental importance as well as facilitating some new results on the almost-sure exponential growth rate of the number of particles of speed λ ∈ ( 0 , 2 β − ρ ) . Finally, we prove a new result for the asymptotic behaviour of the probability of finding the right-most particle with speed λ > 2 β − ρ . This result combined with Chauvin and Rouault's [B. Chauvin, A. Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. Theory Related Fields 80 (2) (1988) 299–314] arguments for standard BBM readily yields an analogous Yaglom-type conditional limit theorem for the killed BBM and reveals W λ as the limiting Radon–Nikodým derivative when conditioning the right-most particle to travel at speed λ into the distant future.",
"We consider a class of branching-selection particle systems on ( R ) similar to the one considered by E. Brunet and B. Derrida in their 1997 paper “Shift in the velocity of a front due to a cutoff”. Based on numerical simulations and heuristic arguments, Brunet and Derrida showed that, as the population size N of the particle system goes to infinity, the asymptotic velocity of the system converges to a limiting value at the unexpectedly slow rate (log N)−2. In this paper, we give a rigorous mathematical proof of this fact, for the class of particle systems we consider. The proof makes use of ideas and results by R. Pemantle, and by N. Gantert, Y. Hu and Z. Shi, and relies on a comparison of the particle system with a family of N independent branching random walks killed below a linear space-time barrier."
]
}
|
1112.0266
|
1560518130
|
We present an approximation to the Brunet--Derrida model of supercritical branching Brownian motion on the real line with selection of the @math right-most particles, valid when the population size @math is large. It consists of introducing a random space-time barrier at which particles are instantaneously killed in such a way that the population size stays almost constant over time. We prove that the suitably recentered position of this barrier converges at the @math timescale to a Lévy process, which we identify. This validates the physicists' predictions about the fluctuations in the Brunet--Derrida model.
|
Let us also note that branching Brownian motion without selection has a long history: starting with @cite_61 , it has been studied by many authors and from various angles, along with its discrete counterpart, the branching random walk. Since @cite_21 , its connection to the FKPP equation has generated very fruitful interactions between analysis and probability theory (see for example @cite_31 and the references therein). BBM has been used in applications, for example to model ecological and epidemic spread @cite_51 or directed polymers on disordered trees @cite_65 . In recent years, there has been renewed interest in the behavior of its extremal particles, be it the right-most particle only @cite_64 @cite_22 @cite_35 or the whole point process formed by the particles at the right edge @cite_16 @cite_28 @cite_38 @cite_6 . The extremal statistics of several other models have actually been shown, or are conjectured, to belong to the same universality class as BBM, such as the Gaussian Free Field on a two-dimensional lattice @cite_59 @cite_52 @cite_54 , or the cover time of a 2D box by a random walk (see e.g. @cite_18 and the references therein).
|
{
"cite_N": [
"@cite_61",
"@cite_35",
"@cite_38",
"@cite_64",
"@cite_18",
"@cite_22",
"@cite_28",
"@cite_54",
"@cite_21",
"@cite_65",
"@cite_52",
"@cite_6",
"@cite_59",
"@cite_31",
"@cite_16",
"@cite_51"
],
"mid": [
"",
"2139310454",
"",
"2127129887",
"2949555217",
"2012860068",
"2963458180",
"1984642139",
"1972194505",
"2072790958",
"2076567039",
"1495091143",
"1572521168",
"2119847065",
"2014296995",
"76152643"
],
"abstract": [
"",
"We consider the minimum of a super-critical branching random walk. Addario-Berry and Reed [Ann. Probab. 37 (2009) 1044–1079] proved the tightness of the minimum centered around its mean value. We show that a convergence in law holds, giving the analog of a well-known result of Bramson [Mem. Amer. Math. Soc. 44 (1983) iv+190] in the case of the branching Brownian motion.",
"",
"Given a branching random walk, let @math be the minimum position of any member of the @math th generation. We calculate @math to within O(1) and prove exponential tail bounds for @math , under quite general conditions on the branching random walk. In particular, together with work by Bramson [Z. Wahrsch. Verw. Gebiete 45 (1978) 89―108], our results fully characterize the possible behavior of @math when the branching random walk has bounded branching and step size.",
"We study the cover time @math by (continuous-time) random walk on the 2D box of side length @math with wired boundary or on the 2D torus, and show that in both cases with probability approaching 1 as @math increases, @math . This improves a result of Dembo, Peres, Rosen, and Zeitouni (2004) and makes progress towards a conjecture of Bramson and Zeitouni (2009).",
"We establish a second-order almost sure limit theorem for the minimal position in a one-dimensional super-critical branching random walk, and also prove a martingale convergence theorem which answers a question of Big-gins and Kyprianou [Electron. J. Probab. 10 (2005) 609-631]. Our method applies, furthermore, to the study of directed polymers on a disordered tree. In particular, we give a rigorous proof of a phase transition phenomenon for the partition function (from the point of view of convergence in probability), already described by Derrida and Spohn [J. Statist. Phys. 51 (1988) 817-840]. Surprisingly, this phase transition phenomenon disappears in the sense of upper almost sure limits.",
"Branching Brownian motion describes a system of particles that diffuse in space and split into offspring according to a certain random mechanism. By virtue of the groundbreaking work by M. Bramson on the convergence of solutions of the Fisher-KPP equation to traveling waves, the law of the rightmost particle in the limit of large times is rather well understood. In this work, we address the full statistics of the extremal particles (first-, second-, third-largest, etc.). In particular, we prove that in the large t-limit, such particles descend with overwhelming probability from ancestors having split either within a distance of order 1 from time 0, or within a distance of order 1 from time t. The approach relies on characterizing, up to a certain level of precision, the paths of the extremal particles. As a byproduct, a heuristic picture of branching Brownian motion “at the edge” emerges, which sheds light on the still unknown limiting extremal process. © 2011 Wiley Periodicals, Inc.",
"We consider the maximum of the discrete two dimensional Gaussian free field (GFF) in a box, and prove that its maximum, centered at its mean, is tight, settling a long-standing conjecture. The proof combines a recent observation of Bolthausen, Deuschel and Zeitouni with elements from (Bramson 1978) and comparison theorems for Gaussian fields. An essential part of the argument is the precise evaluation, up to an error of order 1, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two-dimensional torus, are also discussed.",
"",
"We show that the problem of a directed polymer on a tree with disorder can be reduced to the study of nonlinear equations of reaction-diffusion type. These equations admit traveling wave solutions that move at all possible speeds above a certain minimal speed. The speed of the wavefront is the free energy of the polymer problem and the minimal speed corresponds to a phase transition to a glassy phase similar to the spin-glass phase. Several properties of the polymer problem can be extracted from the correspondence with the traveling wave: probability distribution of the free energy, overlaps, etc.",
"We consider the maximum of the discrete two dimensional Gaussian free field in a box, and prove the existence of a (dense) deterministic subsequence along which the maximum, centered at its mean, is tight. The method of proof relies on an argument developed by Dekking and Host for branching random walks with bounded increments and on comparison results specific to Gaussian fields.",
"Consider a critical branching random walk on the real line. In a recent paper, Aidekon (2011) developed a powerful method to obtain the convergence in law of its minimum after a log-factor translation. By an adaptation of this method, we show that the point process formed by the branching random walk seen from the minimum converges in law to a decorated Poisson point process. This result, confirming a conjecture of Brunet and Derrida (J Stat Phys 143:420–446, 2011), can be viewed as a discrete analog of the corresponding results for the branching Brownian motion, previously established by (2010, 2011) and (2011).",
"We consider the lattice version of the free eld in two dimensions (also called harmonic crystal). The main aim of the paper is to discuss quantitatively the entropic repulsion of the random surface in the presence of a hard wall. The basic ingredient of the proof is the analysis of the maximum of the eld which requires a multiscale analysis reducing the problem essentially to a problem on a eld with a tree structure. 2000 MSC: 60K35, 60G15, 82B41",
"Abstract Recently Harris [Proc. Roy. Soc. Edinburgh Sect. A 129 (1999) 503], using probabilistic methods alone, has given new proofs for the existence, asymptotics and uniqueness of travelling wave solutions to the K-P-P equation. Following in this vein we outline alternative probabilistic proofs. Specifically the techniques are confined to the study of additive and multiplicative martingales and spinal path decompositions along the lines of [B. Chauvin, A. Rouault, Probab. Theory Related Fields 80 (1988) 299], [R. Lyons, in: K.B. Athreya, P. Jagers (eds.), Classical and Modern Branching Processes, Vol. 84, Springer-Verlag, New York, 1997, pp. 217–222] and [R. , Ann. Probab. 23 (1995) 1125]. We also make use of a new decomposition where the spine is a conditioned process. Some new results concerning martingale convergence are obtained as a by-product of the analysis.",
"It has been conjectured since the work of Lalley and Sellke (Ann. Probab., 15, 1052–1061, 1987) that branching Brownian motion seen from its tip (e.g. from its rightmost particle) converges to an invariant point process. Very recently, it emerged that this can be proved in several different ways (see e.g. Brunet and Derrida, A branching random walk seen from the tip, 2010, Poissonian statistics in the extremal process of branching Brownian motion, 2010; , The extremal process of branching Brownian motion, 2011). The structure of this extremal point process turns out to be a Poisson point process with exponential intensity in which each atom has been decorated by an independent copy of an auxiliary point process. The main goal of the present work is to give a complete description of the limit object via an explicit construction of this decoration point process. Another proof and description has been obtained independently by (The extremal process of branching Brownian motion, 2011).",
""
]
}
|
1111.7246
|
2950553168
|
We provide a complete description of important geometric invariants of the Laplacian lattice of a multigraph under the distance function induced by a regular simplex, namely Voronoi Diagram, Delaunay Triangulation, Delaunay Polytope and its combinatorial structure, Shortest Vectors, Covering and Packing Radius. We use this information to obtain the following results: i. Every multigraph defines a Delaunay triangulation of its Laplacian lattice and this Delaunay triangulation contains complete information about the multigraph up to isomorphism. ii. The number of multigraphs with a given Laplacian lattice is controlled, in particular upper bounded, by the number of different Delaunay triangulations. iii. We obtain formulas for the covering and packing densities of a Laplacian lattice and deduce that in the space of Laplacian lattices of undirected connected multigraphs, the Laplacian lattices of highly connected multigraphs such as Ramanujan multigraphs possess good covering and packing properties.
|
A substantial body of work is devoted to the study of lattices constructed from graphs, the most well studied ones being the lattice of integral cuts and the lattice of integral flows. This line of investigation was pioneered by the work of Bacher, de la Harpe and Nagnibeda @cite_3 , who provide a combinatorial interpretation of various parameters of the lattice of integral flows and the lattice of integral cuts under the Euclidean distance function; for example, they show that the square norm of the shortest vector of the lattice of integral flows is equal to the girth of the graph. In contrast, the study of the Laplacian lattice under the simplicial distance function gives rise to some new and interesting phenomena that are not seen in the Euclidean case and in some cases provides more refined information about the graph. Here we note some such instances (the shortest-vector fact just quoted is checked numerically in the sketch below):
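As a concrete sanity check of the Euclidean-case fact above, the following brute-force sketch builds a cycle-space basis of a small graph, searches small integer combinations for the shortest nonzero flow vector, and compares its squared norm with the girth. The graph, the coefficient range, and the use of networkx/numpy are all illustrative assumptions.

```python
# Verify, on K4, that the squared norm of a shortest vector of the lattice
# of integral flows equals the girth (here 3), by brute force over small
# integer combinations of a cycle-space basis.
import itertools
import networkx as nx
import numpy as np

G = nx.complete_graph(4)                       # girth of K4 is 3
edges = list(G.edges())
index = {e: i for i, e in enumerate(edges)}    # fixes an edge orientation

def cycle_vector(cycle_nodes):
    """Signed edge-incidence vector of a closed node cycle."""
    v = np.zeros(len(edges), dtype=int)
    for u, w in zip(cycle_nodes, cycle_nodes[1:] + cycle_nodes[:1]):
        if (u, w) in index:
            v[index[(u, w)]] += 1
        else:
            v[index[(w, u)]] -= 1
    return v

basis = [cycle_vector(c) for c in nx.cycle_basis(G)]
norms = []
for coeffs in itertools.product(range(-2, 3), repeat=len(basis)):
    vec = sum(c * b for c, b in zip(coeffs, basis))
    if isinstance(vec, np.ndarray) and vec.any():
        norms.append(int(vec @ vec))
print("shortest flow vector squared norm:", min(norms))  # -> 3 = girth
```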
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2254723345"
],
"abstract": [
"Les flots entiers sur un graphe fini Γ constituent naturellement un reseau entier Λ 1 (Γ) dans l'espace euclidien Ker(Δ 1 ) des fonctions harmoniques a valeurs reelles sur l'ensemble des aretes de Γ. On montre l'equivalence de diverses proprietes de Γ (caractere biparti, tour de taille, complexite, separabilite) avec des proprietes convenables de Λ l (Γ) (parite, norme minimales determinant, decomposabilite). Le reseau dual de Λ 1 (Γ) est identifie a la cohomologie enliere H 1 (Γ,Z), plongee dans Ker(Δ 1 ). On montre des traductions analogues pour le reseau des coupures entieres et les proprietes convenables du graphe (caractere eulerien, connectivite d'aretes, complexite, separabilite). Ces reseaux ont un groupe determinant qui joue pour les graphes le meme role que la jacobienne pour une surface de Riemann close. Ce sont alors les fonctions harmoniques sur un graphe (a valeurs dans un groupe abelien) qui tiennent lieu d'applications holomorphes."
]
}
|
1111.7246
|
2950553168
|
We provide a complete description of important geometric invariants of the Laplacian lattice of a multigraph under the distance function induced by a regular simplex, namely Voronoi Diagram, Delaunay Triangulation, Delaunay Polytope and its combinatorial structure, Shortest Vectors, Covering and Packing Radius. We use this information to obtain the following results: i. Every multigraph defines a Delaunay triangulation of its Laplacian lattice and this Delaunay triangulation contains complete information about the multigraph up to isomorphism. ii. The number of multigraphs with a given Laplacian lattice is controlled, in particular upper bounded, by the number of different Delaunay triangulations. iii. We obtain formulas for the covering and packing densities of a Laplacian lattice and deduce that in the space of Laplacian lattices of undirected connected multigraphs, the Laplacian lattices of highly connected multigraphs such as Ramanujan multigraphs possess good covering and packing properties.
|
ii. The Delaunay triangulation under the simplicial distance function provides more refined information on the underlying (connected) multigraph and in fact characterizes the multigraph completely up to isomorphism. On the other hand, in the Euclidean case such an analogue holds only for three-connected graphs; a weaker result is known for general connected graphs @cite_0 .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1794573077"
],
"abstract": [
"Algebraic curves have a discrete analog in finite graphs. Pursuing this analogy, we prove a Torelli theorem for graphs. Namely, we show that two graphs have the same Albanese torus if and only if the graphs obtained from them by contracting all separating edges are 2-isomorphic. In particular, the strong Torelli theorem holds for 3-connected graphs. Next, using the correspondence between compact tropical curves and metric graphs, we prove a tropical Torelli theorem giving necessary and sufficient conditions for two tropical curves to have the same principally polarized tropical Jacobian. By contrast, we prove that, in a suitably defined sense, the tropical Torelli map has degree one. Finally, we describe some natural posets associated to a graph and prove that they characterize its Delaunay decomposition."
]
}
|
1111.7251
|
1715852318
|
We study the problem of computing the rank of a divisor on a finite graph, a quantity that arises in the Riemann-Roch theory on a finite graph developed by Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007). Our work consists of two parts: the first part is an algorithm whose running time is polynomial for a multigraph with a fixed number of vertices. More precisely, our algorithm has running time O(2^n n) · poly(size(G)), where n+1 is the number of vertices of the graph G. The second part consists of a new proof of the fact that testing whether the rank of a divisor is non-negative is in the complexity class NP ∩ co-NP; motivated by this proof and its generalisations, we construct a new graph invariant that we call the critical automorphism group of the graph.
|
The central quantity in the Riemann-Roch theorem is the rank of a divisor, and the efficient computation of the rank is a natural problem, attributed to Hendrik Lenstra (see @cite_0 ). In fact, this work gave an algorithm, i.e., a procedure that terminates in a finite number of steps, to compute the rank of a divisor on a tropical curve. But the algorithm does not run in polynomial time in the size of the multigraph even when the number of vertices is fixed, since it involves iterating over all the spanning trees of the graph (see the proof of Theorem 23 in @cite_0 ), and the number of spanning trees is indeed not polynomially bounded in the size of the multigraph even if the number of vertices is fixed. On the other hand, there are polynomial time algorithms for deciding if the rank of a divisor on a finite multigraph is non-negative; see @cite_2 , @cite_12 and @cite_5 .
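One standard route to such a non-negativity test is to compute the q-reduced divisor equivalent to D and inspect its value at the sink q. The sketch below implements the core burning/set-firing loop of Dhar's algorithm; it is offered as an illustration of the idea, not as the exact algorithm of the cited works, and it assumes the input divisor is already non-negative away from q (the general case requires a preliminary equivalence-preserving step).

```python
# Dhar's burning algorithm: repeatedly fire the maximal set of vertices that
# does not burn, until everything burns from the sink q; the result is the
# q-reduced divisor equivalent to D. Then rank(D) >= 0 iff D_reduced[q] >= 0.
import networkx as nx

def q_reduce(G, D, q):
    D = dict(D)
    while True:
        burnt = {q}
        while True:  # burn until stable: v burns when edges-to-burnt > D[v]
            new = [v for v in G if v not in burnt
                   and sum(G.number_of_edges(v, u) for u in burnt) > D[v]]
            if not new:
                break
            burnt.update(new)
        unburnt = set(G) - burnt
        if not unburnt:
            return D  # all vertices burnt: D is q-reduced
        for v in unburnt:  # fire the unburnt set (legal, since no v burnt)
            D[v] -= sum(G.number_of_edges(v, u) for u in burnt)
            for u in burnt:
                D[u] += G.number_of_edges(v, u)

G = nx.MultiGraph([(0, 1), (1, 2), (2, 3), (3, 0)])  # a 4-cycle, for example
D = {0: -1, 1: 1, 2: 1, 3: 0}                        # non-negative off q = 0
Dq = q_reduce(G, D, q=0)
print("q-reduced:", Dq, "-> rank(D) >= 0:", Dq[0] >= 0)
```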
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_12",
"@cite_2"
],
"mid": [
"2099082421",
"1818095014",
"2026915189",
""
],
"abstract": [
"We investigate, using purely combinatorial methods, structural and algorithmic properties of linear equivalence classes of divisors on tropical curves. In particular, we confirm a conjecture of Baker asserting that the rank of a divisor D on a (non-metric) graph is equal to the rank of D on the corresponding metric graph, and construct an algorithm for computing the rank of a divisor on a tropical curve.",
"Kirchhoff's matrix-tree theorem states that the number of spanning trees of a graph G is equal to the value of the determinant of the reduced Laplacian of @math . We outline an efficient bijective proof of this theorem, by studying a canonical finite abelian group attached to @math whose order is equal to the value of same matrix determinant. More specifically, we show how one can efficiently compute a bijection between the group elements and the spanning trees of the graph. The main ingredient for computing the bijection is an efficient algorithm for finding the unique @math -parking function (reduced divisor) in a linear equivalence class defined by a chip-firing game. We also give applications, including a new and completely algebraic algorithm for generating random spanning trees. Other applications include algorithms related to chip-firing games and sandpile group law, as well as certain algorithmic problems about the Riemann-Roch theory on graphs.",
"Bjorner, Lvasz, and Shor have introduced a chip firing game on graphs. This paper proves a polynomial bound on the length of the game in terms of the number of vertices of the graph provided the length is finite. The obtained bound is best possible within a constant factor.",
""
]
}
|
1111.7251
|
1715852318
|
We study the problem of computing the rank of a divisor on a finite graph, a quantity that arises in the Riemann-Roch theory on a finite graph developed by Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007). Our work consists of two parts: the first part is an algorithm whose running time is polynomial for a multigraph with a fixed number of vertices. More precisely, our algorithm has running time O(2^n n) · poly(size(G)), where n+1 is the number of vertices of the graph G. The second part consists of a new proof of the fact that testing whether the rank of a divisor is non-negative is in the complexity class NP ∩ co-NP; motivated by this proof and its generalisations, we construct a new graph invariant that we call the critical automorphism group of the graph.
|
Our paper is centered around the problem of computing the rank. More precisely (Section ), we obtain an algorithm whose running time is polynomial for a multigraph with a fixed number of vertices: our algorithm has running time @math , where @math is the number of vertices of the multigraph @math . Recall that we are working with arbitrary undirected connected multigraphs, or equivalently graphs with positive integer weights on the edges; indeed, the original Riemann-Roch theory was also developed in this setting. The main tools involved are the Riemann-Roch formula and a formula for the rank (Theorem ) that is used in the proof of the Riemann-Roch formula; these results were first obtained in the work of Baker and Norine @cite_8 . We obtain a geometric interpretation of the rank (Theorem ) and combine this geometric interpretation with algorithms from the geometry of numbers to obtain the algorithm for computing the rank (Algorithm ). We find it satisfying that geometric tools seem to be essential in obtaining the algorithm, even though the definition of the rank of a divisor can be stated in purely combinatorial terms.
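For reference, the Baker--Norine Riemann--Roch formula invoked here reads as follows in the usual notation (this is the standard statement from @cite_8 , reproduced for the reader's convenience):

```latex
% For a divisor D on a connected multigraph G = (V, E), with canonical divisor
% K = \sum_{v \in V} (\deg(v) - 2)\,(v) and genus g = |E| - |V| + 1:
r(D) - r(K - D) = \deg(D) - g + 1
```

Here r(D) denotes the rank of the divisor D, with r(D) = -1 by convention when D is not equivalent to any effective divisor.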
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2083387674"
],
"abstract": [
"Abstract It is well known that a finite graph can be viewed, in many respects, as a discrete analogue of a Riemann surface. In this paper, we pursue this analogy further in the context of linear equivalence of divisors. In particular, we formulate and prove a graph-theoretic analogue of the classical Riemann–Roch theorem. We also prove several results, analogous to classical facts about Riemann surfaces, concerning the Abel–Jacobi map from a graph to its Jacobian. As an application of our results, we characterize the existence or non-existence of a winning strategy for a certain chip-firing game played on the vertices of a graph."
]
}
|
1111.6244
|
2951717444
|
In this paper, we present a new family of fountain codes which overcome adversarial errors. That is, we consider the possibility that some portion of the arriving packets of a rateless erasure code are corrupted in an undetectable fashion. In practice, the corrupted packets may be attributed to a portion of the communication paths which are controlled by an adversary or to a portion of the sources that are malicious. The presented codes resemble and extend LT and Raptor codes. Yet, their benefits over existing coding schemes are manifold. First, to overcome the corrupted packets, our codes use information theoretic techniques, rather than cryptographic primitives. Thus, no secret channel between the senders and the receivers is required. Second, the encoders in the suggested scheme are oblivious to the strength of the adversary, yet perform as if its strength were known in advance. Third, the sparse structure of the codes facilitates efficient decoding. Finally, the codes easily fit a decentralized scenario with several sources, when no communication between the sources is allowed. We present both exhaustive as well as efficient decoding rules. Beyond the obvious use as rateless codes, our codes have important applications in distributed computing.
|
In @cite_3 , a slightly different technique for packet verification in rateless codes is used. Therein, a Merkle-tree @cite_4 based signature structure is suggested. However, the solution proposed is, still, only valid against computationally bounded adversaries and relies on the existence of homomorphic, collision-resistant hash functions. Furthermore, as the size of a Merkle tree is linear in the size of the original message, the authors propose a process of repeated hashing to reduce the size of the tree. Such recursive application of a hash function is more likely to be susceptible to attack. An efficient scheme for signature generation in rateless codes appears in @cite_7 , where the authors use the computational hardness of the discrete logarithm to provide a PKI which enables the sender to efficiently sign each packet transmitted. The scheme is based on viewing the data being sent as spanning a specific vector space, and treating packets as valid as long as they belong to that vector space. The verification step uses standard cryptographic primitives to facilitate the check.
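To make the Merkle-tree approach concrete, here is a minimal sketch of per-packet verification against an authenticated root hash; the construction below is the textbook one and only illustrates the style of scheme in @cite_3 , whose actual design differs in the details noted above. Note that it presumes the receiver obtained the root through some authenticated channel, which is exactly the kind of cryptographic assumption the present paper avoids.

```python
# Textbook Merkle tree: each packet ships with its authentication path
# (sibling hashes up to the root); the receiver recomputes the root.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _levels(leaves):
    level = [h(x) for x in leaves]
    yield level
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
        yield level

def merkle_root(leaves):
    *_, last = _levels(leaves)
    return last[0]

def merkle_proof(leaves, i):
    proof = []
    for level in _levels(leaves):
        if len(level) == 1:
            break
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[i ^ 1])    # sibling at this level
        i //= 2
    return proof

def verify(packet, i, proof, root):
    node = h(packet)
    for sib in proof:
        node = h(node + sib) if i % 2 == 0 else h(sib + node)
        i //= 2
    return node == root

packets = [b"pkt%d" % k for k in range(8)]
root = merkle_root(packets)
assert verify(packets[3], 3, merkle_proof(packets, 3), root)
```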
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_3"
],
"mid": [
"319917506",
"2153098283",
"2058972589"
],
"abstract": [
"",
"Recent research has shown that network coding can be used in content distribution systems to improve the speed of downloads and the robustness of the systems. However, such systems are very vulnerable to attacks by malicious nodes, and we need to have a signature scheme that allows nodes to check the validity of a packet without decoding. In this paper, we propose such a signature scheme for network coding. Our scheme makes use of the linearity property of the packets in a coded system, and allows nodes to check the integrity of the packets received easily. We show that the proposed scheme is secure, and its overhead is negligible for large files.",
"An Information Dispersal Algorithm (IDA) is developed that breaks a file F of length L = u F u into n pieces F i , l ≤ i ≤ n , each of length u F i u = L m , so that every m pieces suffice for reconstructing F . Dispersal and reconstruction are computationally efficient. The sum of the lengths u F i u is ( n m ) · L . Since n m can be chosen to be close to l, the IDA is space efficient. IDA has numerous applications to secure and reliable storage of information in computer networks and even on single disks, to fault-tolerant and efficient transmission of information in networks, and to communications between processors in parallel computers. For the latter problem provably time-efficient and highly fault-tolerant routing on the n -cube is achieved, using just constant size buffers."
]
}
|
1111.6244
|
2951717444
|
In this paper, we present a new family of fountain codes which overcome adversarial errors. That is, we consider the possibility that some portion of the arriving packets of a rateless erasure code are corrupted in an undetectable fashion. In practice, the corrupted packets may be attributed to a portion of the communication paths which are controlled by an adversary or to a portion of the sources that are malicious. The presented codes resemble and extend LT and Raptor codes. Yet, their benefits over existing coding schemes are manifold. First, to overcome the corrupted packets, our codes use information theoretic techniques, rather than cryptographic primitives. Thus, no secret channel between the senders and the receivers is required. Second, the encoders in the suggested scheme are oblivious to the strength of the adversary, yet perform as if its strength were known in advance. Third, the sparse structure of the codes facilitates efficient decoding. Finally, the codes easily fit a decentralized scenario with several sources, when no communication between the sources is allowed. We present both exhaustive as well as efficient decoding rules. Beyond the obvious use as rateless codes, our codes have important applications in distributed computing.
|
In @cite_8 , Koetter and Kschischang present a different approach, based on high-dimensional vector spaces; a message of @math bits is encoded into a vector space, @math , of dimension @math , which is a subspace of an ambient vector space @math of dimension @math . @math is a parameter of the encoding scheme, @math is the number of bits in a message block and @math is the number of blocks in the message. Each packet the sender creates is a randomly chosen vector (of @math bits) in @math . The receiver, upon collecting enough vectors -- @math linearly independent vectors -- can proceed to reconstruct the original message from the received vector space @math . The authors present a minimum-distance decoder, which can recover @math from @math provided that, when writing @math as @math where @math is the error space, @math and @math , it holds that @math .
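The transmission side of this idea is easy to prototype. The following toy sketch works over GF(2) with plain numpy: the sender injects random vectors from the message subspace, and the receiver recovers a basis of the collected span by Gaussian elimination. All dimensions are illustrative, and no rank-metric redundancy is added, so this is only the error-free skeleton of the scheme rather than the code of @cite_8 itself.

```python
# Toy subspace transmission over GF(2): packets are random vectors of the
# message subspace V; the receiver row-reduces what it collects.
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 8                          # dim(V) and ambient dimension dim(W)
basis = rng.integers(0, 2, (k, n))   # rows (generically) span V

def random_packet():
    return rng.integers(0, 2, k) @ basis % 2   # random GF(2) combination

def row_reduce(M):
    """Return a basis of the row span of M over GF(2)."""
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, len(M)) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(len(M)):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return M[:r]

received = np.vstack([random_packet() for _ in range(10)])
U = row_reduce(received)             # with high probability, a basis of V
print("recovered subspace dimension:", len(U))
```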
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2139416652"
],
"abstract": [
"The problem of error-control in random linear network coding is considered. A ldquononcoherentrdquo or ldquochannel obliviousrdquo model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modeled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum-distance decoder for this metric achieves correct decoding if the dimension of the space V capU is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ldquolist-1rdquo minimum-distance decoding algorithm is provided."
]
}
|
1111.6244
|
2951717444
|
In this paper, we present a new family of fountain codes which overcome adversarial errors. That is, we consider the possibility that some portion of the arriving packets of a rateless erasure code are corrupted in an undetectable fashion. In practice, the corrupted packets may be attributed to a portion of the communication paths which are controlled by an adversary or to a portion of the sources that are malicious. The presented codes resemble and extend LT and Raptor codes. Yet, their benefits over existing coding schemes are manifold. First, to overcome the corrupted packets, our codes use information theoretic techniques, rather than cryptographic primitives. Thus, no secret channel between the senders and the receivers is required. Second, the encoders in the suggested scheme are oblivious to the strength of the adversary, yet perform as if its strength were known in advance. Third, the sparse structure of the codes facilitates efficient decoding. Finally, the codes easily fit a decentralized scenario with several sources, when no communication between the sources is allowed. We present both exhaustive as well as efficient decoding rules. Beyond the obvious use as rateless codes, our codes have important applications in distributed computing.
|
The codes presented in @cite_8 have theoretical merits. Nevertheless, they suffer from several severe implementation problems; to boost the error resiliency of the code, one should (a) increase @math or (b) decrease @math . The implications are the need for sending more redundant information in each packet (increasing @math ) and having larger packet sizes (decreasing @math ). Moreover, to recover from a Byzantine attack on at most a third of the packets sent, the codes presented require that @math (that is, each block must be of length @math bits), where @math is the size of the message. In contrast, our codes are not limited by block sizes and can cope with one third of the packets being Byzantine-corrupted, regardless of the block size.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2139416652"
],
"abstract": [
"The problem of error-control in random linear network coding is considered. A ldquononcoherentrdquo or ldquochannel obliviousrdquo model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modeled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum-distance decoder for this metric achieves correct decoding if the dimension of the space V capU is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ldquolist-1rdquo minimum-distance decoding algorithm is provided."
]
}
|
1111.6580
|
2137001571
|
Many proposed quantum mechanical models of black holes include highly nonlocal interactions. The time required for thermalization to occur in such models should reflect the relaxation times associated with classical black holes in general relativity. Moreover, the time required for a particularly strong form of thermalization to occur, sometimes known as scrambling, determines the time scale on which black holes should start to release information. It has been conjectured that black holes scramble in a time logarithmic in their entropy, and that no system in nature can scramble faster. In this article, we address the conjecture from two directions. First, we exhibit two examples of systems that do indeed scramble in logarithmic time: Brownian quantum circuits and the antiferromagnetic Ising model on a sparse random graph. Unfortunately, both fail to be truly ideal fast scramblers for reasons we discuss. Second, we use Lieb-Robinson techniques to prove a logarithmic lower bound on the scrambling time of systems with finite norm terms in their Hamiltonian. The bound holds in spite of any nonlocal structure in the Hamiltonian, which might permit every degree of freedom to interact directly with every other one.
|
Asplund, Berenstein and Trancanelli @cite_30 have numerically investigated relaxation in matrix models. Their approach is to look at the classical dynamics of the system, with initial states selected stochastically in such a way as to enforce the uncertainty principle. They do indeed find what appears to be very rapid relaxation of the system to an attractor state, but their article only considers a fixed-size and relatively small system, so it cannot directly address the scaling of relaxation time with system size. The relationship between this classical relaxation time and quantum mechanical scrambling is also an interesting and currently unexplored question.
|
{
"cite_N": [
"@cite_30"
],
"mid": [
"1970767085"
],
"abstract": [
"We report on a numerical simulation of the classical evolution of the plane-wave matrix model with semiclassical initial conditions. Some of these initial conditions thermalize and are dual to a black hole forming from the collision of D-branes in the plane-wave geometry. In particular, we consider a large fuzzy sphere (a D2-brane) plus a single eigenvalue (a D0 particle) going exactly through the center of the fuzzy sphere and aimed to intersect it. Including quantum fluctuations of the off-diagonal modes in the initial conditions, with sufficient kinetic energy the configuration collapses to a small size. We also find evidence for fast thermalization: rapidly decaying autocorrelation functions at late times with respect to the natural time scale of the system."
]
}
|
1111.6580
|
2137001571
|
Many proposed quantum mechanical models of black holes include highly nonlocal interactions. The time required for thermalization to occur in such models should reflect the relaxation times associated with classical black holes in general relativity. Moreover, the time required for a particularly strong form of thermalization to occur, sometimes known as scrambling, determines the time scale on which black holes should start to release information. It has been conjectured that black holes scramble in a time logarithmic in their entropy, and that no system in nature can scramble faster. In this article, we address the conjecture from two directions. First, we exhibit two examples of systems that do indeed scramble in logarithmic time: Brownian quantum circuits and the antiferromagnetic Ising model on a sparse random graph. Unfortunately, both fail to be truly ideal fast scramblers for reasons we discuss. Second, we use Lieb-Robinson techniques to prove a logarithmic lower bound on the scrambling time of systems with finite norm terms in their Hamiltonian. The bound holds in spite of any nonlocal structure in the Hamiltonian, which might permit every degree of freedom to interact directly with every other one.
|
Barbon and Magan @cite_34 have approached the conjecture from a different direction. They suggest that the logarithmic factor in the black hole scrambling time arises from the hyperbolic geometry of the so-called "optical metric" @math associated to a simple coordinatization of Rindler space. Specifically, they argue that the Lyapunov time for a classical billiards game on such a geometry agrees with the scrambling time.
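For orientation, here is a schematic version of the geometry being invoked; this is the standard textbook computation and the usual fast-scrambling estimate, reproduced as a hedged sketch rather than as the cited paper's derivation.

```latex
% Rindler space: ds^2 = -\rho^2 \, dt^2 + d\rho^2 + dx_\perp^2.
% The optical metric rescales by the redshift factor so that null rays
% travel at unit coordinate speed:
d\bar{s}^2 = \frac{ds^2}{\rho^2}
           = -dt^2 + \frac{d\rho^2 + dx_\perp^2}{\rho^2},
% whose spatial sections are hyperbolic. Chaotic spreading at a Lyapunov
% rate of order the inverse temperature then gives the logarithmic estimate
t_* \sim \frac{\beta}{2\pi} \, \log S .
```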
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2011011348"
],
"abstract": [
"Fast scramblers process information in characteristic times scaling logarithmically with the entropy, a behavior which has been conjectured for black hole horizons. In this note we use the AdS CFT fold to argue that causality bounds on information flow only depend on the properties of a single thermal cell, and admit a geometrical interpretation in terms of the optical depth, i.e. the thickness of the Rindler region in the so-called optical metric. The spatial sections of the optical metric are well approximated by constant-curvature hyperboloids. We use this fact to propose an effective kinetic model of scrambling which can be assimilated to a compact hyperbolic billiard, furnishing a classic example of hard chaos. It is suggested that classical chaos at large N is a crucial ingredient in reconciling the notion of fast scrambling with the required saturation of causality."
]
}
|
1111.6580
|
2137001571
|
Many proposed quantum mechanical models of black holes include highly nonlocal interactions. The time required for thermalization to occur in such models should reflect the relaxation times associated with classical black holes in general relativity. Moreover, the time required for a particularly strong form of thermalization to occur, sometimes known as scrambling, determines the time scale on which black holes should start to release information. It has been conjectured that black holes scramble in a time logarithmic in their entropy, and that no system in nature can scramble faster. In this article, we address the conjecture from two directions. First, we exhibit two examples of systems that do indeed scramble in logarithmic time: Brownian quantum circuits and the antiferromagnetic Ising model on a sparse random graph. Unfortunately, both fail to be truly ideal fast scramblers for reasons we discuss. Second, we use Lieb-Robinson techniques to prove a logarithmic lower bound on the scrambling time of systems with finite norm terms in their Hamiltonian. The bound holds in spite of any nonlocal structure in the Hamiltonian, which might permit every degree of freedom to interact directly with every other one.
|
More indirectly, while most work prior to @cite_40 argued that black holes held information for an amount of time comparable to the black hole lifetime, if not forever, occasional hints were found that information might leak out faster @cite_35 . Reversing the reasoning, one could interpret such arguments as evidence in favour of the fast scrambling conjecture.
|
{
"cite_N": [
"@cite_35",
"@cite_40"
],
"mid": [
"1979169890",
"2129872856"
],
"abstract": [
"We investigate a recently proposed model for a full quantum description of two-dimensional black hole evaporation, in which a reflecting boundary condition is imposed in the strong-coupling region. It is shown that in this model each initial state is mapped to a well-defined asymptotic out state, provided one performs a certain projection in the gravitational zero mode sector. We find that for an incoming localized energy pulse, the corresponding outgoing state contains approximately thermal radiation, in accordance with semiclassical predictions. In addition, our model allows for certain acausal strong-coupling effects near the singularity that give rise to corrections to the Hawking spectrum and restore the coherence of the out state. To an asymptotic observer these corrections appear to originate from behind the receding apparent horizon and start to influence the outgoing state long before the black hole has emitted most of its mass. Finally, by putting the system in a finite box, we are able to derive some algebraic properties of the scattering matrix and prove that the final state contains all initial information.",
"We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the half-way'' point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole's information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis."
]
}
|
1111.5312
|
1874144543
|
Temporal networks are ubiquitous and evolve over time by the addition, deletion, and changing of links, nodes, and attributes. Although many relational datasets contain temporal information, the majority of existing techniques in relational learning focus on static snapshots and ignore the temporal dynamics. We propose a framework for discovering temporal representations of relational data to increase the accuracy of statistical relational learning algorithms. The temporal relational representations serve as a basis for classification, ensembles, and pattern mining in evolving domains. The framework includes (1) selecting the time-varying relational components (links, attributes, nodes), (2) selecting the temporal granularity, (3) predicting the temporal influence of each time-varying relational component, and (4) choosing the weighted relational classifier. Additionally, we propose temporal ensemble methods that exploit the temporal dimension of relational data. These ensembles outperform traditional and more sophisticated relational ensembles while avoiding the issue of learning the optimal representation. Finally, the space of temporal-relational models is evaluated using a sample of classifiers. In all cases, the proposed temporal-relational classifiers outperform competing models that ignore the temporal information. The results demonstrate the capability and necessity of the temporal-relational representations for classification, ensembles, and for mining temporal datasets.
|
Most previous work uses static snapshots or significantly limits the amount of temporal information used for relational learning. Sharan et al. @cite_4 assume a strict temporal representation that uses kernel estimation for links and includes these in a classifier. They do not consider multiple temporal granularities (all information is used statically), and the attributes and nodes are not weighted. In addition, they focus on only one specific temporal pattern and ignore the rest, whereas we explore many temporal-relational representations and propose a flexible framework capable of capturing the temporal patterns of links, attributes, and nodes. Moreover, they evaluate only static prediction tasks. Other work has focused on discovering temporal patterns between attributes @cite_8 . There are also temporal centrality measures that capture properties of the network structure @cite_7 .
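As a rough illustration of the kernel-smoothing step described in @cite_4 (a minimal sketch under our own assumptions; the exponential kernel, decay constant and function names are illustrative, not taken from the cited work):

```python
import math

def kernel_weight(t_now, t_link, decay=0.3):
    """Exponential decay kernel: recent links count more (illustrative choice)."""
    return math.exp(-decay * (t_now - t_link))

def smoothed_link_weights(links, t_now, decay=0.3):
    """Summarize a temporal edge list [(u, v, t), ...] into static weights.

    The resulting weighted graph can then be fed to any static relational
    classifier, which is the two-phase idea described above.
    """
    weights = {}
    for u, v, t in links:
        weights[(u, v)] = weights.get((u, v), 0.0) + kernel_weight(t_now, t, decay)
    return weights

# Example: three timestamped interactions summarized at time t = 5.
links = [("a", "b", 1), ("a", "b", 4), ("b", "c", 2)]
print(smoothed_link_weights(links, t_now=5))
```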
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8"
],
"mid": [
"2163032501",
"2139322472",
"2040466507"
],
"abstract": [
"Many relational domains contain temporal information and dynamics that are important to model (e.g., social networks, protein networks). However, past work in relational learning has focused primarily on modeling static \"snapshots\" of the data and has largely ignored the temporal dimension of these data. In this work, we extend relational techniques to temporally-evolving domains and outline a representational framework that is capable of modeling both temporal and relational dependencies in the data. We develop efficient learning and inference techniques within the framework by considering a restricted set of temporal-relational dependencies and using parameter-tying methods to generalize across relationships and entities. More specifically, we model dynamic relational data with a two-phase process, first summarizing the temporal-relational information with kernel smoothing, and then moderating attribute dependencies with the summarized relational information. We develop a number of novel temporal-relational models using the framework and then show that the current approaches to modeling static relational data are special cases within the framework. We compare the new models to the competing static relational methods on three real-world datasets and show that the temporal-relational models consistently outperform the relational models that ignore temporal information - achieving significant reductions in error ranging from 15 to 70 .",
"The study of influential members of human networks is an important research question in social network analysis. However, the current state-of-the-art is based on static or aggregated representation of the network topology. We argue that dynamically evolving network topologies are inherent in many systems, including real online social and technological networks: fortunately the nature of these systems is such that they allow the gathering of large quantities of finegrained temporal data on interactions amongst the network members. In this paper we propose novel temporal centrality metrics which take into account such dynamic interactions over time. Using a real corporate email dataset we evaluate the important individuals selected by means of static and temporal analysis taking two perspectives: firstly, from a semantic level, we investigate their corporate role in the organisation; and secondly, from a dynamic process point of view, we measure information dissemination and the role of information mediators. We find that temporal analysis provides a better understanding of dynamic processes and a more accurate identification of important people compared to traditional static methods.",
"Temporal Text Mining (TTM) is concerned with discovering temporal patterns in text information collected over time. Since most text information bears some time stamps, TTM has many applications in multiple domains, such as summarizing events in news articles and revealing research trends in scientific literature. In this paper, we study a particular TTM task -- discovering and summarizing the evolutionary patterns of themes in a text stream. We define this new text mining problem and present general probabilistic methods for solving this problem through (1) discovering latent themes from text; (2) constructing an evolution graph of themes; and (3) analyzing life cycles of themes. Evaluation of the proposed methods on two different domains (i.e., news articles and literature) shows that the proposed methods can discover interesting evolutionary theme patterns effectively."
]
}
|
1111.5596
|
2397607139
|
Extensions to the C++ implementation of the QCD Data Parallel Interface are provided enabling acceleration of expression evaluation on NVIDIA GPUs. Single expressions are off-loaded to the device memory and execution domain leveraging the Portable Expression Template Engine and using Just-in-Time compilation techniques. Memory management is automated by a software implementation of a cache controlling the GPU's memory. Interoperability with existing Krylov space solvers is demonstrated and special attention is paid on 'Chroma readiness'. Non-kernel routines in lattice QCD calculations typically not subject of hand-tuned optimisations are accelerated which can reduce the effects otherwise suffered from Amdahl's Law.
|
Efforts similar to this work are underway at Jefferson Lab @cite_9 , which underlines the necessity of this approach.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2295191825"
],
"abstract": [
"Graphics Processing Units (GPUs) are having a transformational effect on numerical lattice quantum chromo- dynamics (LQCD) calculations of importance in nuclear and particle physics. The QUDA library provides a package of mixed precision sparse matrix linear solvers for LQCD applications, supporting single GPUs based on NVIDIA's Compute Unified Device Architecture (CUDA). This library, interfaced to the QDP++ Chroma framework for LQCD calculations, is currently in production use on the \"9g\" cluster at the Jefferson Laboratory, enabling unprecedented price performance for a range of problems in LQCD. Nevertheless, memory constraints on current GPU devices limit the problem sizes that can be tackled. In this contribution we describe the parallelization of the QUDA library onto multiple GPUs using MPI, including strategies for the overlapping of communication and computation. We report on both weak and strong scaling for up to 32 GPUs interconnected by InfiniBand, on which we sustain in excess of 4 Tflops."
]
}
|
1111.5596
|
2397607139
|
Extensions to the C++ implementation of the QCD Data Parallel Interface are provided enabling acceleration of expression evaluation on NVIDIA GPUs. Single expressions are off-loaded to the device memory and execution domain leveraging the Portable Expression Template Engine and using Just-in-Time compilation techniques. Memory management is automated by a software implementation of a cache controlling the GPU's memory. Interoperability with existing Krylov space solvers is demonstrated and special attention is paid on 'Chroma readiness'. Non-kernel routines in lattice QCD calculations typically not subject of hand-tuned optimisations are accelerated which can reduce the effects otherwise suffered from Amdahl's Law.
|
In previous work, QDP++/Chroma was extended in a similar way, targeting a different heterogeneous multicore architecture, the Cell processor as used in QPACE @cite_3 @cite_7 .
|
{
"cite_N": [
"@cite_7",
"@cite_3"
],
"mid": [
"2955839182",
"1838254"
],
"abstract": [
"QPACE is a novel massively parallel architecture optimized for lattice QCD simulations. A single QPACE node is based on the IBM PowerXCell 8i processor. The nodes are interconnected by a custom 3-dimensional torus network implemented on an FPGA. The compute power of the processor is provided by 8 Synergistic Processing Units. Making efficient use of these accelerator cores in scientific applications is challenging. In this paper we describe our strategies for porting applications to the QPACE architecture and report on performance numbers.",
"Observables relevant for the understanding of the structure of baryons were determined by means of Monte Carlo simulations of Lattice Quantum Chromodynamics (QCD) using 2+1 dynamical quark flavours. Especial emphasis was placed on how these observables change when flavour symmetry is broken in comparison to choosing equal masses for the two light and the strange quark. The first two moments of unpolarised, longitudinally, and transversely polarised parton distribution functions were calculated for the nucleon and hyperons. The latter are baryons which comprise a strange quark. @PARASPLIT Lattice QCD simulations tend to be extremely expensive, reaching the need for petaflop computing and beyond, a regime of computing power we just reach today. Heterogeneous multicore computing is getting increasingly important in high performance scientific computing. The strategy of deploying multiple types of processing elements within a single workflow, and allowing each to perform the tasks to which it is best suited is likely to be part of the roadmap to exascale. In this work new design concepts were developed for an active library (QDP++) harnessing the compute power of a heterogeneous multicore processor (IBM PowerXCell 8i processor). Not only a proof-of-concept is given furthermore it was possible to run a QDP++ based physics application (Chroma) achieving a reasonable performance on the IBM BladeCenter QS22."
]
}
|
1111.4316
|
2951419967
|
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
|
Languages for navigating and specifying nodes in a graph have a deep research background. Nevertheless, most of these developments assume that data is stored in a central repository (e.g., Web query languages @cite_2 , XPath, navigational versions of SPARQL @cite_13 @cite_22 ). They were the inspiration for the navigational core of our language.
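To make the navigational core concrete, the following is a minimal sketch of evaluating a regular path expression over an edge-labelled graph, in the spirit of nSPARQL-style navigation @cite_13 ; the pattern encoding and graph representation are our own simplifications:

```python
from collections import deque

def eval_path(graph, start, path):
    """Evaluate a simple path expression over an edge-labelled graph.

    `graph` maps a node to a list of (label, target) pairs; `path` is a
    sequence of (label, starred) steps, e.g. [("knows", False),
    ("worksFor", True)] meaning knows / worksFor*.  Returns all reachable nodes.
    """
    frontier = {start}
    for label, starred in path:
        if starred:
            # Kleene star: closure over edges with this label (zero or more).
            seen, queue = set(frontier), deque(frontier)
            while queue:
                node = queue.popleft()
                for lbl, tgt in graph.get(node, []):
                    if lbl == label and tgt not in seen:
                        seen.add(tgt)
                        queue.append(tgt)
            frontier = seen
        else:
            frontier = {tgt for node in frontier
                        for lbl, tgt in graph.get(node, []) if lbl == label}
    return frontier

g = {"alice": [("knows", "bob")], "bob": [("worksFor", "acme")],
     "acme": [("worksFor", "megacorp")]}
print(eval_path(g, "alice", [("knows", False), ("worksFor", True)]))
# -> {'bob', 'acme', 'megacorp'}
```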
|
{
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_2"
],
"mid": [
"2126079271",
"2092512344",
"2072881690"
],
"abstract": [
"Navigational features have been largely recognized as fundamental for graph database query languages. This fact has motivated several authors to propose RDF query languages with navigational capabilities. In this paper, we propose the query language nSPARQL that uses nested regular expressions to navigate RDF data. We study some of the fundamental properties of nSPARQL and nested regular expressions concerning expressiveness and complexity of evaluation. Regarding expressiveness, we show that nSPARQL is expressive enough to answer queries considering the semantics of the RDFS vocabulary by directly traversing the input graph. We also show that nesting is necessary in nSPARQL to obtain this last result, and we study the expressiveness of the combination of nested regular expressions and SPARQL operators. Regarding complexity of evaluation, we prove that given an RDF graph G and a nested regular expression E, this problem can be solved in time O(|G|@?|E|).",
"RDF is a knowledge representation language dedicated to the annotation of resources within the framework of the semantic web. Among the query languages for RDF, SPARQL allows querying RDF through graph patterns, i.e., RDF graphs involving variables. Other languages, inspired by the work in databases, use regular expressions for searching paths in RDF graphs. Each approach can express queries that are out of reach of the other one. Hence, we aim at combining these two approaches. For that purpose, we define a language, called PRDF (for ''Path RDF'') which extends RDF such that the arcs of a graph can be labeled by regular expression patterns. We provide PRDF with a semantics extending that of RDF, and propose a correct and complete algorithm which, by computing a particular graph homomorphism, decides the consequence between an RDF graph and a PRDF graph. We then define the PSPARQL query language, extending SPARQL with PRDF graph patterns and complying with RDF model theoretic semantics. PRDF thus offers both graph patterns and path expressions. We show that this extension does not increase the computational complexity of SPARQL and, based on the proposed algorithm, we have implemented a correct and complete PSPARQL query engine.",
""
]
}
|
1111.4316
|
2951419967
|
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
|
Specification (and retrieval) of collections of sites was addressed early on; a good example is the well-known tool wget. Besides being non-declarative, it is restricted to almost purely syntactic features. At the semantic level, @cite_8 proposed LDSpider, a crawler for the Web of Data able to retrieve RDF data by following RDF links according to different crawling strategies. Such crawlers offer little flexibility and are not declarative. The execution philosophy of wget was a source of inspiration for the incorporation of actions into our language and for the design of swget.
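A schematic rendering of the crawling idea behind LDSpider @cite_8 follows (a minimal sketch under our own assumptions; the `deref` callback stands in for an actual HTTP dereference and RDF parse, and all names are illustrative):

```python
from collections import deque

def crawl(deref, seeds, max_depth=2):
    """Breadth-first Linked Data crawl, in the spirit of LDSpider.

    `deref(uri)` stands in for an HTTP lookup and returns (triples, out_links);
    a real crawler would dereference the URI and parse the returned RDF.
    """
    store, visited = [], set(seeds)
    frontier = deque((uri, 0) for uri in seeds)
    while frontier:
        uri, depth = frontier.popleft()
        triples, links = deref(uri)
        store.extend(triples)
        if depth < max_depth:
            for link in links:
                if link not in visited:
                    visited.add(link)
                    frontier.append((link, depth + 1))
    return store

# Toy "web": u1 links to u2; both serve a triple each.
data = {"u1": ([("u1", "knows", "u2")], ["u2"]),
        "u2": ([("u2", "name", "Bob")], [])}
print(crawl(lambda u: data.get(u, ([], [])), ["u1"]))
```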
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2394815666"
],
"abstract": [
"The Web of Linked Data is growing and currently consists of several hundred interconnected data sources altogether serving over 25 billion RDF triples to the Web. What has hampered the exploitation of this global dataspace up till now is the lack of an open-source Linked Data crawler which can be employed by Linked Data applications to localize (parts of) the dataspace for further processing. With LDSpider, we are closing this gap in the landscape of publicly available Linked Data tools. LDSpider traverses the Web of Linked Data by following RDF links between data items, it supports different crawling strategies and allows crawled data to be stored either in files or in an RDF store."
]
}
|
1111.4316
|
2951419967
|
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
|
Distributed data management has been explored and implemented by P2P and similar approaches @cite_14 . For RDF, RDFPeers @cite_16 and YARS2 use P2P techniques to answer RDF queries. Systems for distributed query processing on the Web have also been devised, e.g., DIASPORA @cite_18 . Our distributed version of the language borrows some ideas from these approaches.
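The placement rule of RDFPeers @cite_16 can be sketched as follows (an illustration of the idea, not the actual implementation; node-id assignment is simplified):

```python
import hashlib

def node_for(value, n_nodes):
    """Map a string to a responsible node id via a globally known hash."""
    digest = hashlib.sha1(value.encode()).hexdigest()
    return int(digest, 16) % n_nodes

def placements(triple, n_nodes):
    """RDFPeers-style placement: store the triple at three nodes, chosen by
    hashing its subject, predicate and object independently, so that an
    exact-match lookup on any single component knows where to route."""
    s, p, o = triple
    return {node_for(s, n_nodes), node_for(p, n_nodes), node_for(o, n_nodes)}

print(placements(("alice", "knows", "bob"), n_nodes=16))
```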
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_16"
],
"mid": [
"1687803333",
"",
"2171708479"
],
"abstract": [
"Current proposals for web querying systems have assumed a centralized processing architecture wherein data is shipped from the remote sites to the user's site. We present here the design and implementation of DIASPORA, a highly distributed query processing system for the web. It is based on the premise that several web applications are more naturally processed in a distributed manner, opening up possibilities of significant reductions in network traffic and user response times. DIASPORA is built over an expressive graph-based data model that utilizes simple heuristics and lends itself to automatic generation. The model captures both the content of web documents and the hyperlink structural framework of a web site. Distributed queries on the model are expressed through a declarative language that permits users to explicitly specify navigation. DIASPORA implements a query-shipping model wherein queries are autonomously forwarded from one web-site to another, without requiring much coordination from the query originating site. Its design addresses a variety of interesting issues that arise in the distributed web context including determining query completion, handling query rewriting, supporting query termination and preventing multiple computations of a query at a site due to the same query arriving through different paths in the hyperlink framework. The DIASPORA system is currently operational and is undergoing testing on our campus network. In this paper we describe the design of the system and report initial performance results that indicate significant performance improvements over comparable centralized approaches.",
"",
"Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definition of RDF schemas and designation of super peers. We present a scalable distributed RDF repository (RDFPeers) that stores each triple at three places in a multi-attribute addressable network by applying globally known hash functions to its subject predicate and object. Thus all nodes know which node is responsible for storing triple values they are looking for and both exact-match and range queries can be efficiently routed to those nodes. RDFPeers has no single point of failure nor elevated peers and does not require the prior definition of RDF schemas. Queries are guaranteed to find matched triples in the network if the triples exist. In RDFPeers both the number of neighbors per node and the number of routing hops for inserting RDF triples and for resolving most queries are logarithmic to the number of nodes in the network. We further performed experiments that show that the triple-storing load in RDFPeers differs by less than an order of magnitude between the most and the least loaded nodes for real-world RDF data."
]
}
|
1111.4316
|
2951419967
|
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
|
Process queries in a distributed fashion using a federated query processor. DARQ @cite_0 and FedX @cite_3 provide mechanisms for transparent query answering over multiple query services. The query is split into sub-queries that are forwarded to the individual data sources, and their results are processed together. An evaluation of federated query approaches can be found in @cite_24 .
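The split-and-join idea can be sketched as follows (a deliberately naive illustration; DARQ and FedX add service descriptions, source selection and cost-based optimization on top of this, and real endpoints are SPARQL services rather than callbacks):

```python
def federated_join(endpoints, patterns):
    """Naive federation: send each triple pattern to every endpoint that can
    answer it, then join the partial solutions on shared variables.

    `endpoints` is a list of functions mapping a pattern to a list of
    variable bindings (dicts).
    """
    solutions = [{}]
    for pattern in patterns:
        rows = [b for ep in endpoints for b in ep(pattern)]
        joined = []
        for sol in solutions:
            for row in rows:
                # Compatible if every shared variable agrees.
                if all(sol.get(k, v) == v for k, v in row.items()):
                    joined.append({**sol, **row})
        solutions = joined
    return solutions

ep1 = lambda p: [{"x": "alice"}] if p == "?x a Person" else []
ep2 = lambda p: [{"x": "alice", "y": "bob"}] if p == "?x knows ?y" else []
print(federated_join([ep1, ep2], ["?x a Person", "?x knows ?y"]))
```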
|
{
"cite_N": [
"@cite_0",
"@cite_24",
"@cite_3"
],
"mid": [
"1555486317",
"2169080911",
"1484056211"
],
"abstract": [
"Integrated access to multiple distributed and autonomous RDF data sources is a key challenge for many semantic web applications. As a reaction to this challenge, SPARQL, the W3C Recommendation for an RDF query language, supports querying of multiple RDF graphs. However, the current standard does not provide transparent query federation, which makes query formulation hard and lengthy. Furthermore, current implementations of SPARQL load all RDF graphs mentioned in a query to the local machine. This usually incurs a large overhead in network traffic, and sometimes is simply impossible for technical or legal reasons. To overcome these problems we present DARQ, an engine for federated SPARQL queries. DARQ provides transparent query access to multiple SPARQL services, i.e., it gives the user the impression to query one single RDF graph despite the real data being distributed on the web. A service description language enables the query engine to decompose a query into sub-queries, each of which can be answered by an individual service. DARQ also uses query rewriting and cost-based query optimization to speed-up query execution. Experiments show that these optimizations significantly improve query performance even when only a very limited amount of statistical information is available. DARQ is available under GPL License at http: darq.sf.net .",
"The Web has evolved from a global information space of linked documents to a web of linked data. The Web of Data enables answering complex, structured queries that could not be answered by a single data source alone. While the current procedure to work with multiple, distributed linked data sources is to load the desired data into a single RDF store and process queries in a centralized way against the merged data set, such an approach may not always be practically feasible or desired. In this paper, we analyze alternative approaches to federated query processing over linked data and how different design alternatives affect the performance and practicality of query processing. To this end, we define a benchmark for federated query processing, comprising a selection of data sources in various domains and representative queries. Using the benchmark, we perform experiments with different federation alternatives and provide insights about their advantages and disadvantages.",
"Motivated by the ongoing success of Linked Data and the growing amount of semantic data sources available on theWeb, new challenges to query processing are emerging. Especially in distributed settings that require joining data provided by multiple sources, sophisticated optimization techniques are necessary for efficient query processing. We propose novel join processing and grouping techniques to minimize the number of remote requests, and develop an effective solution for source selection in the absence of preprocessed metadata. We present FedX, a practical framework that enables efficient SPARQL query processing on heterogeneous, virtually integrated Linked Data sources. In experiments, we demonstrate the practicability and efficiency of our framework on a set of real-world queries and data sources from the Linked Open Data cloud. With FedX we achieve a significant improvement in query performance over state-of-the-art federated query engines."
]
}
|
1111.4316
|
2951419967
|
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
|
Extend SPARQL with navigational features. The SERVICE feature of SPARQL 1.1 and proposals like those of @cite_12 @cite_23 extend the scope of SPARQL queries with navigational features. The system SQUIN, based on link traversal, a query execution paradigm that discovers on the fly the data sources relevant for a query, can automatically navigate to other sources while executing a query.
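A schematic view of link-traversal execution follows (our own simplification; `deref` and `pattern_matches` are assumed callbacks standing in for HTTP lookups and local pattern matching, respectively):

```python
def link_traversal_eval(deref, pattern_matches, patterns, seed_uris):
    """Schematic link-traversal execution: start from the URIs in the query,
    grow a local dataset by dereferencing URIs that show up in intermediate
    solutions, and re-match patterns against the growing dataset.

    `deref(uri)` returns RDF triples for a URI; `pattern_matches(dataset,
    pattern)` returns variable bindings (dicts) over the local dataset.
    """
    dataset, seen = set(), set()
    todo = list(seed_uris)
    while todo:
        uri = todo.pop()
        if uri in seen:
            continue
        seen.add(uri)
        dataset.update(deref(uri))
        # Any URI mentioned in a current solution may hold relevant data.
        for pattern in patterns:
            for binding in pattern_matches(dataset, pattern):
                todo.extend(v for v in binding.values() if v not in seen)
    return [b for p in patterns for b in pattern_matches(dataset, p)]
```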
|
{
"cite_N": [
"@cite_23",
"@cite_12"
],
"mid": [
"1679455373",
"1809515864"
],
"abstract": [
"Link traversal based query execution is a new query execution paradigm for the Web of Data. This approach allows the execution engine to discover potentially relevant data during the query execution and, thus, enables users to tap the full potential of the Web. In earlier work we propose to implement the idea of link traversal based query execution using a synchronous pipeline of iterators. While this idea allows for an easy and efficient implementation, it introduces restrictions that cause less comprehensive result sets. In this paper we address this limitation. We analyze the restrictions and discuss how the evaluation order of a query may affect result set size and query execution costs. To identify a suitable order, we propose a heuristic for our scenario where no a-priory information about relevant data sources is present. We evaluate this heuristic by executing real-world queries over the Web of Data.",
"The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges."
]
}
|
1111.4649
|
2950834478
|
We study integer programming instances over polytopes P(A,b) = {x : Ax <= b} where the constraint matrix A is random, i.e., its entries are i.i.d. Gaussian or, more generally, its rows are i.i.d. from a spherically symmetric distribution. The radius of the largest inscribed ball is closely related to the existence of integer points in the polytope. We show that for m = 2^{O(sqrt(n))}, there exist constants c_0 < c_1 such that with high probability, random polytopes are integer feasible if the radius of the largest ball contained in the polytope is at least c_1 sqrt(log(m/n)); and integer infeasible if the largest ball contained in the polytope is centered at (1/2, ..., 1/2) and has radius at most c_0 sqrt(log(m/n)). Thus, random polytopes transition from having no integer points to being integer feasible within a constant factor increase in the radius of the largest inscribed ball. We show integer feasibility via a randomized polynomial-time algorithm for finding an integer point in the polytope. Our main tool is a simple new connection between integer feasibility and linear discrepancy. We extend a recent algorithm for finding low-discrepancy solutions (Lovett-Meka, FOCS '12) to give a constructive upper bound on the linear discrepancy of random matrices. By our connection between discrepancy and integer feasibility, this upper bound on linear discrepancy translates to the radius lower bound that guarantees integer feasibility of random polytopes.
|
Lovász, Spencer and Vesztergombi @cite_21 showed the following relation between hereditary discrepancy and linear discrepancy.
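The relation itself appears to have been lost in extraction; presumably it is the classical bound, reproduced here for completeness:

```latex
% Classical bound of Lovász, Spencer and Vesztergombi: for any real
% matrix A, the linear discrepancy is at most twice the hereditary
% discrepancy, where herdisc maximizes disc over column submatrices.
\operatorname{lindisc}(A) \;\le\; 2\,\operatorname{herdisc}(A),
\qquad
\operatorname{herdisc}(A) \;=\; \max_{S \subseteq [n]} \operatorname{disc}(A|_S)
```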
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2080064821"
],
"abstract": [
"The discrepancy of a set-system is the minimum number d for which the vertices can be 2-coloured red and blue so that in each. of the given sets, the difference between the numbers of red and blue vertices is at most d. In this paper. we introduce various mathematically more tractable variants of this notion. We prove several inequalities relating these numbers, and formulate several further conjectures. We extend the notion to a general matrix, and formulate it as a problem of covering the unit cube by convex bodies."
]
}
|
1111.4395
|
2401406216
|
Supporting top-k document retrieval queries on general text databases, that is, finding the k documents where a given pattern occurs most frequently, has become a topic of interest with practical applications. While the problem has been solved in optimal time and linear space, the actual space usage is a serious concern. In this paper we study various reduced-space structures that support top-k retrieval and propose new alternatives. Our experimental results show that our novel algorithms and data structures dominate almost all the space/time tradeoff.
|
We consider the plain and the compressed wavelet tree representations, and the straightforward and novel representations of 's succinct structure. We compare these alternatives with the original 's method (on plain and compressed wavelet trees) to test the hypothesis that adding 's structure is worth the extra space. Similarly, we include in the comparison the basic 's method (with their structure compressed or not) over 's @cite_18 sequence representation, to test the hypothesis that 's method over the wavelet tree is worthwhile compared to the brute-force method over the fastest sequence representation @cite_18 . This brute-force method is also at the core of the new proposal of @cite_22 .
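For background, a minimal pointer-based wavelet tree supporting rank queries is sketched below (a pedagogical sketch only; production implementations replace the Python lists with succinct bitvectors offering constant-time binary rank):

```python
class WaveletTree:
    """Minimal pointer-based wavelet tree over a small alphabet.

    Supports rank(c, i): number of occurrences of symbol c in s[:i].
    """
    def __init__(self, s, alphabet=None):
        self.alphabet = sorted(set(s)) if alphabet is None else alphabet
        if len(self.alphabet) == 1:
            self.leaf = True
            return
        self.leaf = False
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        # Bitvector: 0 sends a symbol to the left child, 1 to the right.
        self.bits = [0 if c in left_set else 1 for c in s]
        self.left = WaveletTree([c for c in s if c in left_set],
                                self.alphabet[:mid])
        self.right = WaveletTree([c for c in s if c not in left_set],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        if self.leaf:
            return i
        mid = len(self.alphabet) // 2
        zeros = self.bits[:i].count(0)  # binary rank, naive here
        if c in self.alphabet[:mid]:
            return self.left.rank(c, zeros)
        return self.right.rank(c, i - zeros)

wt = WaveletTree("abracadabra")
print(wt.rank("a", 8))  # occurrences of 'a' in "abracada" -> 4
```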
|
{
"cite_N": [
"@cite_18",
"@cite_22"
],
"mid": [
"2044014345",
"2951300344"
],
"abstract": [
"We consider a generalization of the problem of supporting rank and select queries on binary strings. Given a string of length n from an alphabet of size σ, we give the first representation that supports rank and access operations in O(lg lg σ) time, and select in O(1) time while using the optimal n lg σ + o(n lg σ) bits. The best known previous structure for this problem required O(lg σ) time, for general values of σ. Our results immediately improve the search times of a variety of text indexing methods.",
"Let @math @math be a given set of @math string documents of total length @math , our task is to index @math , such that the @math most relevant documents for an online query pattern @math of length @math can be retrieved efficiently. We propose an index of size @math bits and @math query time for the basic relevance metric , where @math is the size (in bits) of a compressed full text index of @math , with @math time for searching a pattern of length @math . We further reduce the space to @math bits, however the query time will be @math , where @math is the alphabet size and @math is any constant."
]
}
|
1111.4807
|
2952906957
|
It is highly desirable and challenging for a wireless ad hoc network to have self-organization properties in order to achieve network wide characteristics. Studies have shown that Small World properties, primarily low average path length and high clustering coefficient, are desired properties for networks in general. However, due to the spatial nature of the wireless networks, achieving small world properties remains highly challenging. Studies also show that wireless ad hoc networks with small world properties show a degree distribution that lies between geometric and power law. In this paper, we show that in a wireless ad hoc network with non-uniform node density with only local information, we can significantly reduce the average path length and retain the clustering coefficient. To achieve our goal, our algorithm first identifies logical regions using Lateral Inhibition technique, then identifies the nodes that beamform and finally the beam properties using Flocking. We use Lateral Inhibition and Flocking because they enable us to use local state information as opposed to other techniques. We support our work with simulation results and analysis, which show that a reduction of up to 40% can be achieved for a high-density network. We also show the effect of hopcount used to create regions on average path length, clustering coefficient and connectivity.
|
Decades of research on network and graph theory have led researchers to derive many fundamental concepts related to the importance of a node in a network. Centrality was one such concept, developed and used to characterize the topological role of network nodes. Proposed centrality measures include those that use global parameters as well as those that use only local information. Examples of global centrality measures are Socio-Centric Betweenness @cite_28 @cite_8 and Closeness Centrality @cite_28 , while Degree Centrality @cite_28 and Egocentric Betweenness Centrality @cite_65 @cite_20 are examples of local centrality measures.
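For reference, the standard definitions of these measures are (a compact summary in our own notation, not that of the cited works):

```latex
% Degree, closeness and (socio-centric) betweenness centrality of a
% node v in a connected graph on n nodes; d(v,u) is shortest-path
% distance, \sigma_{st} counts shortest s-t paths and \sigma_{st}(v)
% those passing through v.  Egocentric betweenness restricts C_B to
% the ego network of v (v, its neighbours, and the edges among them).
C_D(v) = \deg(v), \qquad
C_C(v) = \frac{n-1}{\sum_{u \neq v} d(v,u)}, \qquad
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
```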
|
{
"cite_N": [
"@cite_28",
"@cite_65",
"@cite_20",
"@cite_8"
],
"mid": [
"",
"2012909434",
"2082674813",
"1971937094"
],
"abstract": [
"",
"In this paper, we look at the betweenness centrality of ego in an ego network. We discuss the issue of normalization and develop an efficient and simple algorithm for calculating the betweenness score. We then examine the relationship between the ego betweenness and the betweenness of the actor in the whole network. Whereas, we can show that there is no theoretical link between the two we undertake a simulation study, which indicates that the local ego betweenness is highly correlated with the betweenness of the actor in the complete network.",
"Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.",
"A family of new measures of point and graph centrality based on early intuitions of Bavelas (1948) is introduced. These measures define centrality in terms of the degree to which a point falls on the shortest path between others and there fore has a potential for control of communication. They may be used to index centrality in any large or small network of symmetrical relations, whether connected or unconnected."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
Kleinberg's work @cite_17 on navigation in a small world was the first paper to shed light on the navigation problem in complex networks. The paper highlighted the fact that it is easier to find short chains between points in some networks than in others.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"1572272766"
],
"abstract": [
"It is easier to find short chains between points in some networks than others. The small-world phenomenon — the principle that most of us are linked by short chains of acquaintances — was first investigated as a question in sociology1,2 and is a feature of a range of networks arising in nature and technology3,4,5. Experimental study of the phenomenon1 revealed that it has two fundamental components: first, such short chains are ubiquitous, and second, individuals operating with purely local information are very adept at finding these chains. The first issue has been analysed2,3,4, and here I investigate the second by modelling how individuals can find short chains in a large social network."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
A rigorous graph-theoretic approach to the path-finding problem was proposed in @cite_15 . The authors provide a method of navigation based on vertex labeling: they establish a labeling strategy for the vertices of a graph that allows one to compute the distance between any two vertices directly from the labels, without using any additional information about the network.
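A toy instance of the idea, for illustration only (the schemes of @cite_15 are far more general and come with tight label-length bounds):

```python
def label(node):
    """Toy distance labeling for an n x m grid graph: the label is just the
    coordinate pair; no other knowledge of the graph is needed."""
    return node  # (x, y)

def dist_from_labels(lu, lv):
    # Shortest-path (Manhattan) distance in the grid, computed with no
    # access to the graph itself -- only the two labels.
    return abs(lu[0] - lv[0]) + abs(lu[1] - lv[1])

print(dist_from_labels(label((0, 2)), label((3, 0))))  # -> 5
```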
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2034501275"
],
"abstract": [
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log2 n). For planar graphs, we show an upper bound of O(√nlogn) and a lower bound of Ω(n1 3). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with a r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
The authors of @cite_13 address the question of how participants in a small-world experiment are able to find short paths in a social network using only local information about their immediate contacts. They conduct their experiment on an email network and demonstrate with empirical data that small-world search strategies using a contact's position in physical space, or in an organizational hierarchy relative to the target, can effectively locate most individuals.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2147952642"
],
"abstract": [
"We address the question of how participants in a small world experiment are able to find short paths in a social network using only local information about their immediate contacts. We simulate such experiments on a network of actual email contacts within an organization as well as on a student social networking website. On the email network we find that small world search strategies using a contact’s position in physical space or in an organizational hierarchy relative to the target can effectively be used to locate most individuals. However, we find that in the online student network, where the data is incomplete and hierarchical structures are not well defined, local search strategies are less effective. We compare our findings to recent theoretical hypotheses about underlying social structure that would enable these simple search strategies to succeed and discuss the implications to social software design."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
A detailed survey on decentralized search algorithms on networks that exhibit small-world phenomena is given by Kleinberg in @cite_21 . This survey also contains an exhaustive list of open problems in this area.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2243945130"
],
"abstract": [
"The study of complex networks has emerged over the past several years as a theme spanning many disciplines, ranging from mathematics and computer science to the social and biological sciences. A significant amount of recent work in this area has focused on the development of random graph models that capture some of the qualitative properties observed in large-scale network data; such models have the potential to help us reason, at a general level, about the ways in which real-world networks are organized. We survey one particular line of network research, concerned with small-world phenomena and decentralized search algorithms, that illustrates this style of analysis. We begin by describing awell-known experiment that provided the first empirical basis for the �six degrees of separation� phenomenon in social networks; wethen discuss some probabilistic network models motivated by this work, illustrating how these models lead to novel algorithmic and graph-theoretic questions, and how they are supported by recent empirical studies of large social networks."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
The authors of @cite_18 propose a navigational technique based on a hybrid method that uses the degree and homophily of vertices, which yields better results than previously known techniques.
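Schematically, one step of such a navigation rule might look as follows (an illustration under our own assumptions; the similarity function is application specific and the names are ours, not those of @cite_18 ):

```python
def greedy_hop(graph, degree, similarity, current, target):
    """Forward to the neighbour maximizing degree times homophily with
    the target, mirroring the hybrid rule described above."""
    return max(graph[current], key=lambda n: degree[n] * similarity(n, target))

graph = {"s": ["a", "b"]}
degree = {"a": 3, "b": 5}
sim = lambda u, v: 1.0 if u == "b" else 0.4  # toy homophily score
print(greedy_hop(graph, degree, sim, "s", "t"))  # -> 'b'
```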
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2027447600"
],
"abstract": [
"Many large distributed systems can be characterized as networks where short paths exist between nearly every pair of nodes. These include social, biological, communication, and distribution networks, which often display power-law or small-world structure. A central challenge of distributed systems is directing messages to specific nodes through a sequence of decisions made by individual nodes without global knowledge of the network. We present a probabilistic analysis of this navigation problem that produces a surprisingly simple and effective method for directing messages. This method requires calculating only the product of the two measures widely used to summarize all local information. It outperforms prior approaches reported in the literature by a large margin, and it provides a formal model that may describe how humans make decisions in sociological studies intended to explore the social network as well as how they make decisions in more naturalistic settings."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
The authors of @cite_14 proposed a learning framework based on a first-visit Monte Carlo algorithm. They show that the navigation difficulty and the learning velocity are strongly related to the network topology. In @cite_29 , Cajueiro proposed a strategy where the walker is assumed to take optimal paths in order to minimize the cost of walking, providing an approach that generalizes several concepts presented in the literature concerning random navigation and direct navigation.
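A minimal sketch of a first-visit Monte Carlo estimate in a navigation setting follows (our own simplified stand-in for the framework of @cite_14 , estimating expected steps to a target under uniform random walks):

```python
import random

def first_visit_mc_steps(graph, target, episodes=1000, max_len=100, seed=0):
    """First-visit Monte Carlo estimate of the expected number of steps
    from each node to `target` under a uniform random walk."""
    rng = random.Random(seed)
    totals, counts = {}, {}
    nodes = [n for n in graph if n != target]
    for _ in range(episodes):
        walk = [rng.choice(nodes)]
        while walk[-1] != target and len(walk) < max_len:
            walk.append(rng.choice(graph[walk[-1]]))
        if walk[-1] != target:
            continue  # discard truncated episodes
        first = {}
        for i, n in enumerate(walk[:-1]):
            first.setdefault(n, i)  # count only the first visit to n
        for n, i in first.items():
            totals[n] = totals.get(n, 0) + (len(walk) - 1 - i)
            counts[n] = counts.get(n, 0) + 1
    return {n: totals[n] / counts[n] for n in totals}

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(first_visit_mc_steps(ring, target=2))
```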
|
{
"cite_N": [
"@cite_29",
"@cite_14"
],
"mid": [
"2068094787",
"2053544387"
],
"abstract": [
"Recent literature has presented evidence that the study of navigation in complex networks is useful to understand their dynamics and topology. Two main approaches are usually considered: navigation of random walkers and navigation of directed walkers. Unlike these approaches ours supposes that a traveler walks optimally in order to minimize the cost of the walking. If this happens, two extreme regimes arise—one dominated by directed walkers and the other by random walkers. We try to characterize the critical point of the transition from one regime to the other in function of the connectivity and the size of the network. Furthermore, we show that this approach can be used to generalize several concepts presented in the literature concerning random navigation and direct navigation. Finally, we defend that investigating the extreme regimes dominated by random walkers and directed walkers is not sufficient to correctly assess the characteristics of navigation in complex networks.",
"This letter addresses the issue of learning shortest paths in complex networks, which is of utmost importance in real-life navigation. The approach has been partially motivated by recent progress in characterizing navigation problems in networks, having as extreme situations the completely ignorant (random) walker and the rich directed walker, which can pay for information that will guide to the target node along the shortest path. A learning framework based on a first-visit Monte Carlo algorithm is implemented, together with four independent measures that characterize the learning process. The methodology is applied to a number of network classes, as well as to networks constructed from actual data. The results indicate that the navigation difficulty and learning velocity are strongly related to the network topology."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
Recently, a novel method of navigating a network using the underlying spanning tree was proposed in @cite_16 . The paper provides a thorough graph-theoretic treatment of the navigation problem: the authors reduce the navigation problem on a graph @math to that of the underlying spanning tree and navigate on the latter.
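The reduction can be illustrated with a few lines of code (a toy sketch under our own assumptions; tree paths may of course be longer than true shortest paths, which is the price of the reduction):

```python
from collections import deque

def bfs_tree(graph, root):
    """Parent pointers of a BFS spanning tree rooted at `root`."""
    parent, queue = {root: None}, deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def tree_path(parent, u, v):
    """Path from u to v that only uses spanning-tree edges (via their
    lowest common ancestor)."""
    anc_u, node = [], u
    while node is not None:
        anc_u.append(node)
        node = parent[node]
    seen = set(anc_u)
    path_v, node = [], v
    while node not in seen:
        path_v.append(node)
        node = parent[node]
    lca = node
    return anc_u[:anc_u.index(lca) + 1] + list(reversed(path_v))

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
p = bfs_tree(g, 1)
print(tree_path(p, 4, 3))  # tree path [4, 2, 1, 3]; the graph path is shorter
```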
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"1994320209"
],
"abstract": [
"The well-known least squares semidefinite programming (LSSDP) problem seeks the nearest adjustment of a given symmetric matrix in the intersection of the cone of positive semidefinite matrices and a set of linear constraints, and it captures many applications in diversing fields. The task of solving large-scale LSSDP with many linear constraints, however, is numerically challenging. This paper mainly shows the applicability of the classical alternating direction method (ADM) for solving LSSDP and convinces the efficiency of the ADM approach. We compare the ADM approach with some other existing approaches numerically, and we show the superiority of ADM for solving large-scale LSSDP."
]
}
|
1111.4898
|
2950676698
|
Human navigation has been a topic of interest in spatial cognition from the past few decades. It has been experimentally observed that humans accomplish the task of way-finding a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find their destination, learn the top ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation has the same properties as the paths obtained in human based experiment. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with the existing techniques to find a path between a pair of nodes in a network.
|
On the application front, such navigational techniques are useful for finding paths between vertices in peer-to-peer systems. The authors of @cite_27 , in their work on routing indices for peer-to-peer systems, propose a novel method of forwarding queries from vertices to those neighbors that are more likely to have answers. They present several novel routing schemes and evaluate their performance.
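A caricature of the forwarding decision follows (illustrative only; actual routing indices aggregate per-topic document counts over paths and come in compound, hop-count and exponential variants, and all numbers here are made up):

```python
def forward_query(neighbours, ri, topic, fanout=1):
    """Forward the query to the neighbours whose routing index promises
    the most documents on `topic`, instead of flooding all of them.

    `ri[n][topic]` estimates how many matching documents are reachable
    through neighbour n."""
    ranked = sorted(neighbours, key=lambda n: ri[n].get(topic, 0), reverse=True)
    return ranked[:fanout]

ri = {"n1": {"db": 40, "ai": 5}, "n2": {"db": 10, "ai": 60}, "n3": {}}
print(forward_query(["n1", "n2", "n3"], ri, topic="ai"))  # -> ['n2']
```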
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2135805607"
],
"abstract": [
"Finding information in a peer-to-peer system currently requires either a costly and vulnerable central index, or flooding the network with queries. We introduce the concept of routing indices (RIs), which allow nodes to forward queries to neighbors that are more likely to have answers. If a node cannot answer a query, it forwards the query to a subset of its neighbors, based on its local RI, rather than by selecting neighbors at random or by flooding the network by forwarding the query to all neighbors. We present three RI schemes: the compound, the hop-count, and the exponential routing indices. We evaluate their performance via simulations, and find that RIs can improve performance by one or two orders of magnitude vs. a flooding-based system, and by up to 100 vs. a random forwarding system. We also discuss the tradeoffs between the different RI schemes and highlight the effects of key design variables on system performance."
]
}
|
1111.3567
|
2949547753
|
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy enhancing-technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable more grounded and systematic approach to measuring privacy, as well as to assist systems designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability and Bayes decision.
|
Mixes were proposed by Chaum @cite_41 in 1981 and are a basic building block for implementing high-latency anonymous communications. A mix takes a number of input messages and outputs them in such a way that it is infeasible to link an output to its corresponding input. To achieve this goal, the mix changes the appearance of messages (by encrypting and padding them) and their flow (by delaying and reordering them). Mixmaster @cite_14 and Mixminion @cite_25 are more advanced versions of the Chaumian mix @cite_41 , and they have been deployed to provide anonymous email services.
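The following toy threshold mix illustrates the batching, padding and reordering just described (a pedagogical sketch, not any deployed design; real mixes additionally use layered public-key encryption and more sophisticated pool strategies):

```python
import random

class ThresholdMix:
    """Toy threshold mix: buffer messages, pad them to equal length, and
    flush in random order once `threshold` messages have arrived, so an
    observer cannot link outputs to inputs by timing, order or size."""
    def __init__(self, threshold=3, size=32, seed=None):
        self.threshold, self.size = threshold, size
        self.pool, self.rng = [], random.Random(seed)

    def submit(self, message: bytes):
        # Pad (or truncate) so all outputs have identical length.
        self.pool.append(message[:self.size].ljust(self.size, b"\x00"))
        if len(self.pool) >= self.threshold:
            batch, self.pool = self.pool, []
            self.rng.shuffle(batch)  # reorder: break input/output linkage
            return batch
        return None

mix = ThresholdMix(threshold=3, seed=7)
for m in [b"hi", b"secret plan", b"third msg"]:
    out = mix.submit(m)
print(out)  # three padded messages, in shuffled order
```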
|
{
"cite_N": [
"@cite_41",
"@cite_14",
"@cite_25"
],
"mid": [
"2103647628",
"2170385606",
"2150248082"
],
"abstract": [
"A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients.",
"",
"We present Mixminion, a message-based anonymous remailer protocol with secure single-use reply blocks. Mix nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. We add directory servers that allow users to learn public keys and performance statistics of participating remailers, and we describe nymservers that provide long-term pseudonyms using single-use reply blocks as a primitive. Our design integrates link encryption between remailers to provide forward anonymity. Mixminion works in a real-world Internet environment, requires little synchronization or coordination between nodes, and protects against known anonymity-breaking attacks as well as or better than other systems with similar design parameters."
]
}
|
1111.3567
|
2949547753
|
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist systems designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability and Bayes decision.
|
Several metrics have been proposed in the literature to assess the level of anonymity provided by anonymous communication systems (ACSs). Reiter and Rubin @cite_11 define the degree of anonymity as a probability @math , where @math is the probability assigned by an attacker to the potential initiators of a communication. In this model, users are more anonymous as they appear (towards a certain adversary) to be less likely to have sent a message, and the metric is thus computed individually for each user and for each communication. The authors of @cite_32 , on the other hand, define the degree of anonymity as the binary logarithm of the number of users of the system, which may be regarded as a Hartley entropy. This metric depends only on the number of users of the system, and does not take into account that some users might appear as more likely senders of a message than others.
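The contrast between the two notions is easy to see numerically. The sketch below computes Reiter and Rubin's per-user degree 1 - p next to the Hartley-style degree log2(N) of @cite_32, which ignores how skewed the attacker's distribution over users actually is.

import math

def degree_reiter_rubin(p_user: float) -> float:
    """Per-user degree of anonymity: 1 - p, where p is the probability the
    attacker assigns to this user being the initiator of the communication."""
    return 1.0 - p_user

def degree_hartley(num_users: int) -> float:
    """Hartley-style degree: log2 of the anonymity-set size, independent of
    the attacker's actual distribution over the users."""
    return math.log2(num_users)

dist = [0.7, 0.1, 0.1, 0.1]   # a skewed attacker distribution over 4 users
print([round(degree_reiter_rubin(p), 2) for p in dist])   # [0.3, 0.9, 0.9, 0.9]
print(degree_hartley(len(dist)))                          # 2.0 bits, regardless of skew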
|
{
"cite_N": [
"@cite_32",
"@cite_11"
],
"mid": [
"2116843749",
"1978884755"
],
"abstract": [
"There are different methods to build an anonymity service using MIXes. A substantial decision for doing so is the method of choosing the MIX route. In this paper we compare two special configurations: a fixed MIX route used by all participants and a network of freely usable MIXes where each participant chooses his own route. The advantages and disadvantages in respect to the freedom of choice are presented and examined. We'll show that some additional attacks are possible in networks with freely chosen MIX routes. After describing these attacks, we estimate their impact on the achievable degree of anonymity. Finally, we evaluate the relevance of the described attacks with respect to existing systems like e.g. Mixmaster, Crowds, and Freedom.",
"In this paper we introduce a system called Crowds for protecting users' anonymity on the world-wide-web. Crowds, named for the notion of “blending into a crowd,” operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties."
]
}
|
1111.3567
|
2949547753
|
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist systems designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability and Bayes decision.
|
@math @cite_19 @cite_22 is the requirement that each tuple of key attribute values be shared by at least @math records in the database. This condition is illustrated in Fig. , where a microdata set is @math anonymized before publishing it. In particular, this privacy criterion is enforced by using generalization and suppression, two mechanisms by which key attribute values are respectively coarsened and eliminated. As a result, all key attribute values within each group are replaced by a common tuple, and thus a record cannot be unambiguously linked to any public database containing identifiers. Consequently, @math anonymity is said to protect microdata against identity disclosure.
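A minimal sketch of this criterion (writing the @math parameter as k; the column names and the ZIP-code generalization rule below are invented for illustration): after generalization, records are grouped by their quasi-identifier tuple, and the release is k-anonymous iff every group contains at least k records.

from collections import Counter

def generalize_zip(zipcode: str, digits_kept: int = 3) -> str:
    """Coarsen a ZIP code by replacing its trailing digits with '*'."""
    return zipcode[:digits_kept] + "*" * (len(zipcode) - digits_kept)

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values is shared
    by at least k records in the released table."""
    groups = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

table = [
    {"zip": generalize_zip("47677"), "age_band": "20-29", "disease": "flu"},
    {"zip": generalize_zip("47678"), "age_band": "20-29", "disease": "cancer"},
    {"zip": generalize_zip("47602"), "age_band": "30-39", "disease": "flu"},
    {"zip": generalize_zip("47606"), "age_band": "30-39", "disease": "flu"},
]
print(is_k_anonymous(table, ["zip", "age_band"], k=2))   # True: 2 records per group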
|
{
"cite_N": [
"@cite_19",
"@cite_22"
],
"mid": [
"2119067110",
"2159024459"
],
"abstract": [
"Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection."
]
}
|
1111.3567
|
2949547753
|
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist systems designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability and Bayes decision.
|
All these vulnerabilities motivated the appearance of a number of proposals, some of which we now overview. An enhancement of @math anonymity called @math sensitive @math anonymity @cite_7 incorporates the additional restriction that there be at least @math distinct values for each confidential attribute within each @math anonymous group. With the aim of addressing the data-utility loss incurred by large values of @math , @math diversity @cite_6 proposes instead that there be at least @math ``well-represented'' values for each confidential attribute. Unfortunately, both proposals are still vulnerable to similarity attacks and skewness attacks.
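In their basic (distinct) form, both refinements reduce to a per-group count of sensitive values, as the sketch below shows. Note that stronger readings of ``well-represented'' (entropy or recursive variants) are deliberately not captured by this simple check.

from collections import defaultdict

def is_distinct_l_diverse(records, quasi_ids, sensitive, l):
    """Basic (distinct) l-diversity: each equivalence class must contain at
    least l distinct values of the sensitive attribute. With l = p, this is
    also the extra condition imposed by p-sensitive k-anonymity."""
    classes = defaultdict(set)
    for r in records:
        classes[tuple(r[a] for a in quasi_ids)].add(r[sensitive])
    return all(len(values) >= l for values in classes.values())

table = [
    {"zip": "476**", "age_band": "20-29", "disease": "flu"},
    {"zip": "476**", "age_band": "20-29", "disease": "cancer"},
    {"zip": "476**", "age_band": "30-39", "disease": "flu"},
    {"zip": "476**", "age_band": "30-39", "disease": "flu"},
]
# False: the second class holds a single disease value, so it allows
# attribute disclosure even though the table is 2-anonymous.
print(is_distinct_l_diverse(table, ["zip", "age_band"], "disease", l=2))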
|
{
"cite_N": [
"@cite_6",
"@cite_7"
],
"mid": [
"2136114025",
"2116241118"
],
"abstract": [
"The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain \"identifying\" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.",
"In this paper, we introduce a new privacy protection property called p-sensitive k-anonymity. The existing kanonymity property protects against identity disclosure, but it fails to protect against attribute disclosure. The new introduced privacy model avoids this shortcoming. Two necessary conditions to achieve p-sensitive kanonymity property are presented, and used in developing algorithms to create masked microdata with p-sensitive k-anonymity property using generalization and suppression."
]
}
|
1111.3530
|
2183102678
|
In this paper we provide a preliminary analysis of Google+ privacy. We identified that Google+ shares photo metadata with users who can access the photograph, and discuss its potential impact on privacy. We also identified that Google+ encourages the provision of other names, including maiden name, which may help criminals performing identity theft. We show that Facebook lists are a superset of Google+ circles, both functionally and logically, even though Google+ provides a better user interface. Finally, we compare the use of encryption and the depth of privacy control in Google+ versus Facebook.
|
Social network privacy and its potential threats have been widely studied in recent years. One of the earliest works on potential threats to individuals' privacy, including stalking, embarrassment and identity theft, was done by Gross @cite_12 .
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2016563917"
],
"abstract": [
"Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences."
]
}
|
1111.3806
|
813710634
|
Offloading work to cloud is one of the proposed solutions for increasing the battery life of mobile devices. Most prior research has focused on computation-intensive applications, even though such applications are not the most popular ones. In this paper, we first study the feasibility of method-level offloading in network-intensive applications, using an open source Twitter client as an example. Our key observation is that implementing offloading transparently to the developer is difficult: various constraints heavily limit the offloading possibilities, and estimation of the potential benefit is challenging. We then propose a toolkit, SmartDiet, to assist mobile application developers in creating code which is suitable for energy-efficient offloading. SmartDiet provides fine-grained offloading constraint identification and energy usage analysis for Android applications. In addition to outlining the overall functionality of the toolkit, we study some of its key mechanisms and identify the remaining challenges.
|
Two main approaches have been suggested for mobile application offloading. MAUI @cite_9 , Cuckoo @cite_10 and ThinkAir @cite_5 implement a framework on top of the existing runtime system. These three systems are fairly easy to deploy because they only require access to the program source code, and they do not need any special support from the operating system. The second approach, used by CloneCloud @cite_0 , is to modify the underlying virtual machine or operating system in order to implement richer mechanisms for offloading. CloneCloud is a fully automated system and does not require having the source code of the program, because it works directly on bytecode. We claim that the developer should participate in the offloading process and therefore focus on the first approach.
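The per-method decision that such runtime frameworks make can be summarised by a simple energy inequality: offload a method when shipping its state and idling through remote execution costs less energy than computing locally. The sketch below is a generic back-of-the-envelope model whose constants are all assumptions; it is not MAUI's actual optimization engine, which solves this globally across methods.

# Back-of-the-envelope offloading decision for a single method call.
# Every constant below is an illustrative assumption, not a measurement.

J_PER_MB_WIFI = 0.5    # assumed radio energy per transferred megabyte
J_PER_SEC_CPU = 0.9    # assumed device power while computing locally
J_PER_SEC_IDLE = 0.3   # assumed device power while waiting for the cloud

def should_offload(local_secs, state_mb, remote_secs, rtt_secs=0.05):
    e_local = local_secs * J_PER_SEC_CPU
    e_offload = (state_mb * J_PER_MB_WIFI
                 + (remote_secs + rtt_secs) * J_PER_SEC_IDLE)
    return e_offload < e_local, round(e_local, 3), round(e_offload, 3)

# Compute-heavy method with little state: offloading wins.
print(should_offload(local_secs=4.0, state_mb=0.2, remote_secs=0.5))
# Network-heavy method with bulky state (the common case in a Twitter
# client): offloading loses, which is what SmartDiet-style analysis flags.
print(should_offload(local_secs=0.1, state_mb=5.0, remote_secs=0.05))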
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_9",
"@cite_10"
],
"mid": [
"2023380813",
"1677698498",
"2101788345",
"1474786465"
],
"abstract": [
"Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.",
"Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method level computation offloading. Advancing on previous works, it focuses on the elasticity and scalability of the server side and enhances the power of mobile cloud computing by parallelizing method execution using multiple Virtual Machine (VM) images. We evaluate the system using a range of benchmarks starting from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease two orders of magnitude for the N-queens puzzle and one order of magnitude for a face detection and a virus scan application, using cloud offloading. We then show that if a task is parallelizable, the user can request more than one VM to execute it, and these VMs will be provided dynamically. In fact, by exploiting parallelization, we achieve a greater reduction on the execution time and energy consumption for the previous applications. Finally, we use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.",
"This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at run-time which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constrains. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely.",
"Offloading computation from smartphones to remote cloud resources has recently been rediscovered as a technique to enhance the performance of smartphone applications, while reducing the energy usage."
]
}
|
1111.3806
|
813710634
|
Offloading work to cloud is one of the proposed solutions for increasing the battery life of mobile devices. Most prior research has focused on computation-intensive applications, even though such applications are not the most popular ones. In this paper, we first study the feasibility of method-level offloading in network-intensive applications, using an open source Twitter client as an example. Our key observation is that implementing offloading transparently to the developer is difficult: various constraints heavily limit the offloading possibilities, and estimation of the potential benefit is challenging. We then propose a toolkit, SmartDiet, to assist mobile application developers in creating code which is suitable for energy-efficient offloading. SmartDiet provides fine-grained offloading constraint identification and energy usage analysis for Android applications. In addition to outlining the overall functionality of the toolkit, we study some of its key mechanisms and identify the remaining challenges.
|
Specific solutions, such as Catnap @cite_4 , have been proposed in the literature for reducing the communication energy cost by applying a proxy or middlebox approach. However, these solutions will provide energy savings only for the communication part of the program, whereas offloading simultaneously provides savings in computational costs. Furthermore, since systems such as Catnap do not execute application logic at the proxy, all traffic must eventually reach the mobile device. With smart offloading, some part of the traffic (e.g., signaling) might never need to reach the mobile device, because the offloaded part of the program handles it directly.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2020240447"
],
"abstract": [
"Energy management is a critical issue for mobile devices, with network activity often consuming a significant portion of the total system energy. In this paper, we propose Catnap, a system that reduces energy consumption of mobile devices by allowing them to sleep during data transfers. Catnap exploits high bandwidth wireless interfaces -- which offer significantly higher bandwidth compared to available bandwidth across the Internet -- by combining small gaps between packets into meaningful sleep intervals, thereby allowing the NIC as well as the device to doze off. Catnap targets data oriented applications, such as web and file transfers, which can afford delay of individual packets as long as the overall transfer times do not increase. Our evaluation shows that for small transfers (128kB to 5MB), Catnap allows the NIC to sleep for up to 70 of the total transfer time and for larger transfers, it allows the whole device to sleep for a significant fraction of the total transfer time. This results in battery life improvement of up to 2-5x for real devices like Nokia N810 and Thinkpad T60."
]
}
|
1111.2904
|
2169796496
|
We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 4000 topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of all the tweets posted by these users between June 2009 and August 2009 (approximately 200 million tweets), we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively.
|
Leskovec, Backstrom and Kleinberg's seminal work on the evolution of topics in the news sphere was the starting point for this paper @cite_11 . They studied how the growth of one topic affects the growth of other topics in the blogosphere. They identified and tracked a small number of popular threads, and showed that the growth of the number of posts on a thread negatively impacts the growth of other threads. The basic question that arose on reading that work was this: can the nuances of the temporal evolution of topics be explained by a more thorough study of their spatial evolution? Working with a data set taken from Twitter, we were able to extract high-level structural and geographical information about the actors in the process, which has allowed us to answer this question in the affirmative. This allows us to challenge the line of research that studies only the temporal evolution of topics @cite_6 , or seeks to explain this evolution on the basis of content @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_11"
],
"mid": [
"160748399",
"2112056172",
"2127492100"
],
"abstract": [
"We study the relationship between content and temporal dynamics of information on Twitter, focusing on the persistence of information. We compare two extreme temporal patterns in the decay rate of URLs embedded in tweets, defining a prediction task to distinguish between URLs that fade rapidly following their peak of popularity and those that fade more slowly. Our experiments show a strong association between the content and the temporal dynamics of information: given unigram features extracted from corresponding HTML webpages, a linear SVM classifier can predict the temporal pattern of URLs with high accuracy. We further explore the content of URLs in the two temporal classes using various textual analysis techniques (via LIWC and trend detection). We find that the rapidly-fading information contains significantly more words related to negative emotion, actions, and more complicated cognitive processes, whereas the persistent information contains more words related to positive emotion, leisure, and lifestyle.",
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention.",
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits."
]
}
|
1111.2315
|
2951694529
|
In this work, we evaluate the local capacity of wireless ad hoc networks with several medium access protocols and identify the optimal protocol. We define local capacity as the average information rate received by a receiver randomly located in the network. We analyze grid pattern protocols, where simultaneous transmitters are positioned in a regular grid pattern; pure ALOHA protocols, where simultaneous transmitters are dispatched according to a uniform Poisson distribution; and exclusion protocols, where simultaneous transmitters are dispatched according to an exclusion rule, such as node coloring and carrier sense protocols. Our analysis allows us to conjecture that local capacity is optimal when simultaneous transmitters are positioned in a grid pattern based on equilateral triangles, and our results show that this optimal local capacity is at most double the local capacity of the simple ALOHA protocol. Our results also show that node coloring and carrier sense protocols approach the optimal local capacity by an almost negligible difference.
|
In other related works, the authors of @cite_9 analyzed local (single-hop) throughput and capacity with slotted ALOHA in networks with random and deterministic node placement, and with TDMA in @math line-networks only. The authors of @cite_11 determined the optimum transmission range under the assumption that interferers are distributed according to a PPP, whereas the authors of @cite_5 gave a detailed analysis of the optimal transmission probability for ALOHA, which optimizes the product of the density of simultaneously successful transmissions per unit of space and the average range of each transmission.
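The flavour of these optimizations can be reproduced with a small Monte Carlo experiment: scatter potential interferers uniformly, let each transmit with probability p, and score p by the product of the density of successful transmissions and the transmission range, the objective optimized in @cite_5. The sketch below is a crude toy with assumed parameters, not the stochastic-geometry derivation.

import math, random

random.seed(1)
SIDE = 30.0           # side of the square deployment region (assumed)
DENSITY = 0.3         # density of potential transmitters (assumed)
R = 1.0               # transmitter-to-receiver distance
ETA, BETA = 3.5, 4.0  # path-loss exponent and SIR capture threshold

def success_probability(p, trials=300):
    """Estimate P(SIR >= BETA) at a receiver at distance R from its
    transmitter when every other node transmits independently w.p. p."""
    n_nodes = int(DENSITY * SIDE * SIDE)
    rx, ry = SIDE / 2 + R, SIDE / 2       # receiver near the centre
    signal = R ** (-ETA)
    ok = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(n_nodes):
            if random.random() < p:
                x, y = random.uniform(0, SIDE), random.uniform(0, SIDE)
                d = max(math.hypot(x - rx, y - ry), 1e-3)
                interference += d ** (-ETA)
        if signal >= BETA * interference:
            ok += 1
    return ok / trials

# Score p by (density of successes) x (range): too-small p wastes slots,
# too-large p drowns the receiver in interference.
for p in (0.05, 0.15, 0.3, 0.6):
    print(f"p={p:.2f}  spatial progress ~ {DENSITY * p * success_probability(p) * R:.4f}")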
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_11"
],
"mid": [
"2132987440",
"2012912688",
"2106334285"
],
"abstract": [
"An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.",
"Outage probabilities and single-hop throughput are two important performance metrics that have been evaluated for certain specific types of wireless networks. However, there is a lack of comprehensive results for larger classes of networks, and there is no systematic approach that permits the convenient comparison of the performance of networks with different geometries and levels of randomness. The uncertainty cube is introduced to categorize the uncertainty present in a network. The three axes of the cube represent the three main potential sources of uncertainty in interference-limited networks: the node distribution, the channel gains (fading), and the channel access scheme (set of transmitting nodes). For the performance analysis, a new parameter, the so- called spatial contention, is defined. It measures the slope of the outage probability in an ALOHA network as a function of the transmit probability p at p = 0. Outage is defined as the event that the signal-to-interference ratio (SIR) is below a certain threshold in a given time slot. It is shown that the spatial contention is sufficient to characterize outage and throughput in large classes of wireless networks, corresponding to different positions on the uncertainty cube. Existing results are placed in this framework, and new ones are derived. Further, interpreting the outage probability as the SIR distribution, the ergodic capacity of unit-distance links is determined and compared to the throughput achievable for fixed (yet optimized) transmission rates.",
"The evaluation of optimum transmission ranges in a packet radio network in a fading and shadowing environment is considered. It is shown that the optimal probability of transmission of each user is independent of the system model and is p sub o spl sime 0.271. The optimum range should be chosen so that on the average there are spl chi (G b) sup 2 spl eta terminals closer to the transmitter than the receiver, where G is the spread spectrum processing gain, b is the outage signal-to-noise ratio threshold, spl eta is the power loss factor and spl chi depends on the system parameters and the propagation model. The performance index is given in terms of the optimal normalized expected progress per slot, given by spl thetav (G b) sup 1 spl eta where spl thetav is proportional to the square root of spl chi . A comparison with the results obtained by using deterministic propagation models shows, for typical values of fading and shadowing parameters, a reduction up to 40 of the performance index. >"
]
}
|
1111.2111
|
1519179477
|
In this paper we introduce a generic model for multiplicative algorithms which is suitable for the MapReduce parallel programming paradigm. We implement three typical machine learning algorithms to demonstrate how similarity comparison, gradient descent, power method and other classic learning techniques fit this model well. Two versions of large-scale matrix multiplication are discussed in this paper, and different methods are developed for both cases with regard to their unique computational characteristics and problem settings. In contrast to earlier research, we focus on fundamental linear algebra techniques that establish a generic approach for a range of algorithms, rather than specific ways of scaling up algorithms one at a time. Experiments show promising results when evaluated on both speedup and accuracy. Compared with a standard implementation with computational complexity @math in the worst case, the large-scale matrix multiplication experiments prove our design is considerably more efficient and maintains a good speedup as the number of cores increases. Algorithm-specific experiments also produce encouraging results on runtime performance.
|
Efforts by individuals or groups other than Google have also contributed a number of ideas for running machine learning algorithms on MapReduce. One of the most recent efforts indicates a novel way of scaling up Non-Negative Matrix Factorization on MapReduce by using the multiplicative method, where the iterative update approach described in @cite_8 is recast as a series of matrix multiplications. Several multiplication strategies are developed for the different stages. In order to balance the load across servers and maximize parallelization, a partitioning strategy is applied to large matrices. However, partitioning a huge matrix into single rows or columns and combining them into multiplicative permutations may consume considerable computational resources. Considering the problem setting they need to handle (extremely sparse matrices), this is acceptable in most cases. In contrast, a more generalized plan is illustrated in section .
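The multiplicative update of @cite_8 that such schemes parallelize is a pair of element-wise rescalings built entirely from matrix products, which is exactly what makes it decomposable into MapReduce multiplication jobs. A minimal single-machine reference version is sketched below (NumPy; the distributed partitioning is deliberately elided).

import numpy as np

def nmf_multiplicative(V, k, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W H||_F^2. Every
    factor (W^T V, W^T W H, V H^T, W H H^T) is a plain matrix product, so
    each one can be computed as a distributed multiplication job."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((8, 6))   # toy non-negative data matrix
W, H = nmf_multiplicative(V, k=3)
print(round(float(np.linalg.norm(V - W @ H)), 4))   # small reconstruction error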
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2135029798"
],
"abstract": [
"Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence."
]
}
|
1111.2246
|
2950452273
|
We quantify the throughput capacity of wireless multi-hop networks with several medium access schemes. We analyze the pure ALOHA scheme, where simultaneous transmitters are dispatched according to a uniform Poisson distribution, and exclusion schemes, where simultaneous transmitters are dispatched according to an exclusion rule such as node coloring and carrier sense based schemes. We consider both no-fading and standard Rayleigh fading channel models. Our results show that, under no fading, slotted ALOHA can achieve at least one-third (or half under Rayleigh fading) of the throughput capacity of the node coloring scheme, whereas the carrier sense based scheme can achieve almost the same throughput capacity as node coloring.
|
It can be noticed that most of the related works are limited to single-hop transmissions and may not give a realistic view of the actual performance of medium access schemes in wireless multi-hop networks. In this article, we develop a hybrid model, based on an analytical model and the Monte Carlo method, to compute the throughput capacity of various medium access schemes in random wireless networks consisting of @math nodes. We expect that our results will follow the @math scaling law and will also give additional insight into the constant factors associated with this scaling law. In the case of multi-hop networks, some of the related works are as follows. The authors of @cite_5 analyzed the optimal transmission probability for ALOHA, which optimizes the product of the density of simultaneously successful transmissions per unit of space and the average range of each transmission. The authors of @cite_11 evaluated the transport capacity of random wireless networks without taking into account any particular routing scheme, under the strong assumptions that interferers form a PPP and that relays are equally spaced on a straight line between source and destination.
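For intuition on why an optimal ALOHA transmission probability exists at all, the classical collision-channel toy model already exhibits it: with n contenders each transmitting independently with probability p, a slot is useful iff exactly one node transmits, so per-slot throughput is n p (1 - p)^(n - 1), maximized at p = 1/n and approaching 1/e for large n. The SINR-based spatial analyses discussed above refine this picture with geometry; the sketch below only checks the toy optimum numerically.

import math

def slot_throughput(n: int, p: float) -> float:
    """Probability that exactly one of n contenders transmits in a slot
    (collision channel: any simultaneous transmissions destroy each other)."""
    return n * p * (1 - p) ** (n - 1)

n = 20
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: slot_throughput(n, p))
print(best_p)                                  # ~0.05, i.e. p = 1/n
print(round(slot_throughput(n, best_p), 4))    # ~0.3774, close to 1/e
print(round(1 / math.e, 4))                    # 0.3679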
|
{
"cite_N": [
"@cite_5",
"@cite_11"
],
"mid": [
"2132987440",
"1979262703"
],
"abstract": [
"An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.",
"We consider a network where each route comprises a backlogged source, a number of relays and a destination at a finite distance. The locations of the sources and the relays are realizations of independent Poisson point processes. Given that the nodes observe a TDMA ALOHA MAC protocol, our objective is to determine the number of relays and their placement such that the mean end-to-end delay in a typical route of the network is minimized.We first study an idealistic network model where all routes have the same number of hops, the same distance per hop and their own dedicated relays. Combining tools from queueing theory and stochastic geometry, we provide a precise characterization of the mean end-to-end delay. We find that the delay is minimized if the first hop is much longer than the remaining hops and that the optimal number of hops scales sublinearly with the source-destination distance. Simulating the original network scenario reveals that the analytical results are accurate, provided that the density of the relay process is sufficiently large. We conclude that, given the considered MAC protocol, our analysis provides a delay-minimizing routing strategy for random, multihop networks involving a small number of hops."
]
}
|
1111.1896
|
2952051872
|
Micro-blogging systems such as Twitter expose digital traces of social discourse with an unprecedented degree of resolution of individual behaviors. They offer an opportunity to investigate how a large-scale social system responds to exogenous or endogenous stimuli, and to disentangle the temporal, spatial and topical aspects of users' activity. Here we focus on spikes of collective attention in Twitter, and specifically on peaks in the popularity of hashtags. Users employ hashtags as a form of social annotation, to define a shared context for a specific event, topic, or meme. We analyze a large-scale record of Twitter activity and find that the evolution of hashtag popularity over time defines discrete classes of hashtags. We link these dynamical classes to the events the hashtags represent and use text mining techniques to provide a semantic characterization of the hashtag classes. Moreover, we track the propagation of hashtags in the Twitter social network and find that epidemic spreading plays a minor role in hashtag popularity, which is mostly driven by exogenous factors.
|
Several aspects of Twitter have been extensively investigated in the literature, including its network topology @cite_11 @cite_20 @cite_30 , the relations and types of messages between users @cite_7 @cite_8 , the internal information propagation @cite_33 @cite_35 @cite_28 , the credibility of information @cite_22 @cite_23 , and even its potential as an indicator of the state of mind of a population @cite_16 @cite_24 @cite_13 @cite_27 @cite_25 @cite_28 .
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_22",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"1752870744",
"1943015726",
"2140173168",
"",
"2003998314",
"2050619059",
"2100974526",
"2099366530",
"2084591134",
"",
"202178741",
"2027860007",
"2000200507",
"2046804949"
],
"abstract": [
"",
"Social networks have emerged as a critical factor in information dissemination, search, marketing, expertise and influence discovery, and potentially an important tool for mobilizing people. Social media has made social networks ubiquitous, and also given researchers access to massive quantities of data for empirical analysis. These data sets offer a rich source of evidence for studying dynamics of individual and group behavior, the structure of networks and global patterns of the flow of information on them. However, in most previous studies, the structure of the underlying networks was not directly visible but had to be inferred from the flow of information from one individual to another. As a result, we do not yet understand dynamics of information spread on networks or how the structure of the network affects it. We address this gap by analyzing data from two popular social news sites. Specifically, we extract social networks of active users on Digg and Twitter, and track how interest in news stories spreads among them. We show that social networks play a crucial role in the spread of information on these sites, and that network structure affects dynamics of information flow.",
"Microblogging sites are a unique and dynamic Web 2.0 communication medium. Understanding the information flow in these systems can not only provide better insights into the underlying sociology, but is also crucial for applications such as content ranking, recommendation and filtering, spam detection and viral marketing. In this paper, we characterize the propagation of URLs in the social network of Twitter, a popular microblogging site. We track 15 million URLs exchanged among 2.7 million users over a 300 hour period. Data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades and the communication dynamics. Based on these results we propose a propagation model that predicts which users are likely to mention which URLs. The model correctly accounts for more than half of the URL mentions in our data set, while maintaining a false positive rate lower than 15 .",
"The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In order to determine how well Twitter supports user-touser exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitters current design for collaborative use.",
"",
"The recent wave of mobilizations in the Arab world and across Western countries has generated much discussion on how digital media is connected to the diffusion of protests. We examine that connection using data from the surge of mobilizations that took place in Spain in May 2011. We study recruitment patterns in the Twitter network and find evidence of social influence and complex contagion. We identify the network position of early participants (i.e. the leaders of the recruitment process) and of the users who acted as seeds of message cascades (i.e. the spreaders of information). We find that early participants cannot be characterized by a typical topological position but spreaders tend to be more central in the network. These findings shed light on the connection between online networks, social contagion, and collective dynamics, and offer an empirical test to the recruitment mechanisms theorized in formal models of collective action.",
"In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We demonstrate a web service that tracks political memes in Twitter and helps detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We also present some cases of abusive behaviors uncovered by our service. Our web service is based on an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events.",
"Individual happiness is a fundamental societ al metric. Normally measured through self-report, happiness has often been indirectly characterized and overshadowed by more readily quantifiable economic indicators such as gross domestic product. Here, we examine expressions made on the online, global microblog and social networking service Twitter, uncovering and explaining temporal variations in happiness and information levels over timescales ranging from hours to years. Our data set comprises over 46 billion words contained in nearly 4.6 billion expressions posted over a 33 month span by over 63 million unique users. In measuring happiness, we construct a tunable, real-time, remote-sensing, and non-invasive, text-based hedonometer. In building our metric, made available with this paper, we conducted a survey to obtain happiness evaluations of over 10,000 individual words, representing a tenfold size improvement over similar existing word sets. Rather than being ad hoc, our word list is chosen solely by frequency of usage, and we show how a highly robust and tunable metric can be constructed and defended.",
"We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. On this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results shows that there are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70 to 80 .",
"",
"We study astroturf political campaigns on microblogging platforms: politically-motivated individuals and organizations that use multiple centrally-controlled accounts to create the appearance of widespread support for a candidate or opinion. We describe a machine learning framework that combines topological, content-based and crowdsourced features of information diffusion networks on Twitter to detect the early stages of viral spreading of political misinformation. We present promising preliminary results with better than 96 accuracy in the detection of astroturf content in the run-up to the 2010 U.S. midterm elections.",
"Behavioral finance researchers can apply computational methods to large-scale social media data to better understand and predict markets.",
"Web 2.0 has brought about several new applications that have enabled arbitrary subsets of users to communicate with each other on a social basis. Such communication increasingly happens not just on Facebook and MySpace but on several smaller network applications such as Twitter and Dodgeball. We present a detailed characterization of Twitter, an application that allows users to send short messages. We gathered three datasets (covering nearly 100,000 users) including constrained crawls of the Twitter network using two different methodologies, and a sampled collection from the publicly available timeline. We identify distinct classes of Twitter users and their behaviors, geographic growth patterns and current size of the network, and compare crawl results obtained under rate limiting constraints.",
"Microblogging is a new form of communication in which users can describe their current status in short posts distributed by instant messages, mobile phones, email or the Web. Twitter, a popular microblogging tool has seen a lot of growth since it launched in October, 2006. In this paper, we present our observations of the microblogging phenomena by studying the topological and geographical properties of Twitter's social network. We find that people use microblogging to talk about their daily activities and to seek or share information. Finally, we analyze the user intentions associated at a community level and show how users with similar intentions connect with each other."
]
}
|
1111.1896
|
2952051872
|
Micro-blogging systems such as Twitter expose digital traces of social discourse with an unprecedented degree of resolution of individual behaviors. They offer an opportunity to investigate how a large-scale social system responds to exogenous or endogenous stimuli, and to disentangle the temporal, spatial and topical aspects of users' activity. Here we focus on spikes of collective attention in Twitter, and specifically on peaks in the popularity of hashtags. Users employ hashtags as a form of social annotation, to define a shared context for a specific event, topic, or meme. We analyze a large-scale record of Twitter activity and find that the evolution of hashtag popularity over time defines discrete classes of hashtags. We link these dynamical classes to the events the hashtags represent and use text mining techniques to provide a semantic characterization of the hashtag classes. Moreover, we track the propagation of hashtags in the Twitter social network and find that epidemic spreading plays a minor role in hashtag popularity, which is mostly driven by exogenous factors.
|
The possibility that popular trends or hashtags could be classified in groups has been discussed in Refs. @cite_12 @cite_36 @cite_5 , and the effect of semantic differences on the persistence of a hashtag has also been considered @cite_34 . The shape of peaks in popularity profiles has been used to classify the events in groups @cite_21 @cite_12 @cite_32 @cite_5 . The hypothesis that both the increase and decrease of public attention follow a power-law-like functional shape whose exponents define universality classes, in parallel to what occurs with phase transitions in critical phenomena, has been explored @cite_21 . This approach, however, is difficult to apply to Twitter: the fast timescales involved and the highly reactive nature of Twitter make the time series very noisy and pose the challenge of characterizing activity dynamics in a way which is both robust and scalable.
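To make the peak-shape idea concrete, here is a minimal sketch (not the exact procedure of any of the cited works) that summarizes a hashtag's daily activity by the fractions falling before, on, and after its busiest day; the function name and the toy series are hypothetical.

```python
import numpy as np

def peak_profile(counts):
    """Fractions of a hashtag's total activity before, on, and after
    its single busiest day (a hypothetical, illustrative feature set)."""
    counts = np.asarray(counts, dtype=float)
    p = int(np.argmax(counts))
    total = counts.sum()
    return (counts[:p].sum() / total,      # anticipation before the peak
            counts[p] / total,             # concentration on the peak day
            counts[p + 1:].sum() / total)  # relaxation after the peak

# Activity concentrated almost entirely on one day would suggest an
# exogenous, unanticipated event; a long build-up suggests anticipation.
print(peak_profile([1, 2, 3, 120, 6, 2, 1]))
```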
|
{
"cite_N": [
"@cite_36",
"@cite_21",
"@cite_32",
"@cite_5",
"@cite_34",
"@cite_12"
],
"mid": [
"",
"2042034885",
"1569089799",
"2112056172",
"",
"2101196063"
],
"abstract": [
"",
"We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.",
"Twitter enjoys enormous popularity as a micro-blogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real world entities.",
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention.",
"",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it."
]
}
|
1111.1896
|
2952051872
|
Micro-blogging systems such as Twitter expose digital traces of social discourse with an unprecedented degree of resolution of individual behaviors. They offer an opportunity to investigate how a large-scale social system responds to exogenous or endogenous stimuli, and to disentangle the temporal, spatial and topical aspects of users' activity. Here we focus on spikes of collective attention in Twitter, and specifically on peaks in the popularity of hashtags. Users employ hashtags as a form of social annotation, to define a shared context for a specific event, topic, or meme. We analyze a large-scale record of Twitter activity and find that the evolution of hashtag popularity over time defines discrete classes of hashtags. We link these dynamical classes to the events the hashtags represent and use text mining techniques to provide a semantic characterization of the hashtag classes. Moreover, we track the propagation of hashtags in the Twitter social network and find that epidemic spreading plays a minor role in hashtag popularity, which is mostly driven by exogenous factors.
|
The causes that underlie the existence of distinct classes of popularity are thought to be a combination of all the mechanisms that drive public attention. News regarding a popular item can propagate either over the social network of the users of a given system -- a so-called endogenous process -- or it can be injected through mass media (exogenous driving). The duality between exogenous and endogenous information propagation has permeated the analysis of popularity in several recent studies @cite_21 @cite_12 @cite_38 @cite_15 , even though it is not always clear how to distinguish between them based solely on the shape of the respective popularity profiles @cite_15 .
|
{
"cite_N": [
"@cite_38",
"@cite_15",
"@cite_21",
"@cite_12"
],
"mid": [
"1987986834",
"2011832962",
"2042034885",
"2101196063"
],
"abstract": [
"Understanding content popularity growth is of great importance to Internet service providers, content creators and online marketers. In this work, we characterize the growth patterns of video popularity on the currently most popular video sharing application, namely YouTube. Using newly provided data by the application, we analyze how the popularity of individual videos evolves since the video's upload time. Moreover, addressing a key aspect that has been mostly overlooked by previous work, we characterize the types of the referrers that most often attracted users to each video, aiming at shedding some light into the mechanisms (e.g., searching or external linking) that often drive users towards a video, and thus contribute to popularity growth. Our analyses are performed separately for three video datasets, namely, videos that appear in the YouTube top lists, videos removed from the system due to copyright violation, and videos selected according to random queries submitted to YouTube's search engine. Our results show that popularity growth patterns depend on the video dataset. In particular, copyright protected videos tend to get most of their views much earlier in their lifetimes, often exhibiting a popularity growth characterized by a viral epidemic-like propagation process. In contrast, videos in the top lists tend to experience sudden significant bursts of popularity. We also show that not only search but also other YouTube internal mechanisms play important roles to attract users to videos in all three datasets.",
"Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...",
"We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it."
]
}
|
1111.1227
|
2111561864
|
Social media, such as blogs, are often seen as democratic entities that allow more voices to be heard than the conventional mass or elite media. Some also feel that social media exhibits a balancing force against the arguably slanted elite media. A systematic comparison between social and mainstream media is necessary but challenging due to the scale and dynamic nature of modern communication. Here we propose empirical measures to quantify the extent and dynamics of social (blog) and mainstream (news) media bias. We focus on a particular form of bias---coverage quantity---as applied to stories about the 111th US Congress. We compare observed coverage of Members of Congress against a null model of unbiased coverage, testing for biases with respect to political party, popular front runners, regions of the country, and more. Our measures suggest distinct characteristics in news and blog media. A simple generative model, in agreement with data, reveals differences in the process of coverage selection between the two media.
|
There have been controversial responses to prior studies, partly because of the difficulty of separating the recognition of bias from the belief of bias. A dependence on viewers' beliefs has been observed in several studies @cite_1 @cite_6 , which is relevant to theories on how supply-side forces or profit-related factors cause slants in media @cite_5 @cite_7 . Because of such a dependency, computationally identifying bias from media content remains an emerging research topic, and requires insights from other language analysis studies such as sentiment analysis @cite_12 or the study of partisan features in texts @cite_3 @cite_7 .
|
{
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"2060704337",
"226959768",
"",
"2162068143",
"2097726431"
],
"abstract": [
"",
"We measure media bias by estimating ideological scores for several major media outlets. To compute this, we count the times that a particular media outlet cites various think tanks and policy groups, and then compare this with the times that members of Congress cite the same groups. Our results show a strong liberal bias: all of the news outlets we examine, except Fox News' Special Report and the Washington Times, received scores to the left of the average member of Congress. Consistent with claims made by conservative critics, CBS Evening News and the New York Times received scores far to the left of center. The most centrist media outlets were PBS NewsHour, CNN's Newsnight, and ABC's Good Morning America; among print outlets, USA Today was closest to the center. All of our findings refer strictly to news content; that is, we exclude editorials, letters, and the like. \"The editors in Los Angeles killed the story. They told Witcover that it didn't ‘come off’ and that it was an ‘opinion’ story.… The solution was simple, they told him. All he had to do was get other people to make the same points and draw the same conclusions and then write the article in their words\" (emphasis in original). Timothy Crouse, Boys on the Bus [1973, p. 116].",
"This paper considers the linguistic indicators of bias in political text. We used Amazon Mechanical Turk judgments about sentences from American political blogs, asking annotators to indicate whether a sentence showed bias, and if so, in which political direction and through which word tokens. We also asked annotators questions about their own political views. We conducted a preliminary analysis of the data, exploring how different groups perceive bias in different blogs, and showing some lexical indicators strongly associated with perceived bias.",
"",
"We investigate the market for news under two assumptions: that readers hold beliefs which they like to see confirmed, and that newspapers can slant stories toward these beliefs. We show that, on the topics where readers share common beliefs, one should not expect accuracy even from competitive media: competition results in lower prices, but common slanting toward reader biases. On topics where reader beliefs diverge (such as politically divisive issues), however, newspapers segment the market and slant toward extreme positions. Yet in the aggregate, a reader with access to all news sources could get an unbiased perspective. Generally speaking, reader heterogeneity is more important for accuracy in media than competition per se.",
"An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided."
]
}
|
1111.0948
|
2951350170
|
Video streaming represents a large fraction of Internet traffic. Surprisingly, little is known about the network characteristics of this traffic. In this paper, we study the network characteristics of the two most popular video streaming services, Netflix and YouTube. We show that the streaming strategies vary with the type of the application (Web browser or native mobile application), and the type of container (Silverlight, Flash, or HTML5) used for video streaming. In particular, we identify three different streaming strategies that produce traffic patterns from non-ack clocked ON-OFF cycles to bulk TCP transfer. We then present an analytical model to study the potential impact of these streaming strategies on the aggregate traffic and make recommendations accordingly.
|
Plissonneau et al. @cite_13 , Saxena et al. @cite_15 , and Alcock et al. @cite_20 observe rate limitations on YouTube traffic, but they do not identify the streaming strategies discussed in our paper. Akhshabi et al. @cite_2 only observed a rate limitation in the steady-state phase for Netflix. Saxena et al. @cite_15 show that YouTube videos streamed from Google's servers have a buffering phase, whereas the legacy servers of YouTube do not show this buffering phase. Alcock et al. @cite_20 only characterized the strategy of short ON-OFF cycles for Flash videos on YouTube.
|
{
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_2"
],
"mid": [
"2047702325",
"",
"2079646068",
"2156133547"
],
"abstract": [
"Serving multimedia content over the Internet with negligible delay remains a challenge. With the advent of Web 2.0, numerous video sharing sites using different storage and content delivery models have become popular. Yet, little is known about these models from a global perspective. Such an understanding is important for designing systems which can efficiently serve video content to users all over the world. In this paper, we analyze and compare the underlying distribution frameworks of three video sharing services - YouTube, Dailymotion and Metacafe - based on traces collected from measurements over a period of 23 days. We investigate the variation in service delay with the user's geographical location and with video characteristics such as age and popularity. We leverage multiple vantage points distributed around the globe to validate our observations. Our results represent some of the first measurements directed towards analyzing these recently popular services.",
"",
"This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40 of packet loss events in YouTube flows in a residential DSL dataset and the retransmission of over 1 of all YouTube data sent after the application flow control began. We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams.",
"Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network available bandwidth. Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? We identify major differences between the three players, and significant inefficiencies in each of them."
]
}
|
1111.0084
|
2951598575
|
Lattice codes are known to achieve capacity in the Gaussian point-to-point channel, achieving the same rates as independent, identically distributed (i.i.d.) random Gaussian codebooks. Lattice codes are also known to outperform random codes for certain channel models that are able to exploit their linearity. In this work, we show that lattice codes may be used to achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and show several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. In particular, we present a nested lattice list decoding technique by which lattice codes are shown to achieve the Decode-and-Forward (DF) rate of single source, single destination Gaussian relay channels with one or more relays. We next present two examples of how this DF scheme may be combined with the linearity of lattice codes to achieve new rate regions which for some channel conditions outperform analogous known Gaussian random coding techniques in multi-source relay channels. That is, we derive a new achievable rate region for the two-way relay channel with direct links and compare it to existing schemes, and derive another achievable rate region for the multiple access relay channel. We furthermore present a lattice Compress-and-Forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner-Ziv binning scheme and achieves the same rate as the Cover-El Gamal CF rate evaluated for Gaussian random codes. These results suggest that structured lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks.
|
Relay channels. Two of our main results demonstrate that nested lattice codes may be used to achieve the DF and CF rates achieved by random Gaussian codes @cite_27 . For the DF scheme, we mimic the regular-encoding/sliding-window-decoding DF strategy @cite_12 @cite_2 in which the relay decodes the message of the source, re-encodes it, and then forwards it. The destination combines the information from the source and the relay by intersecting two independent lists of messages obtained from the source and relayed links respectively, over two transmission blocks. We will re-derive the DF rate, but with lattice codes replacing the random i.i.d. Gaussian codes. Of particular importance is constructing and utilizing a lattice version of the list decoder. It is worth mentioning that the concurrent work @cite_59 uses a different lattice coding scheme to achieve the DF rate in the three-node relay channel which does not rely on list decoding but rather on a careful nesting structure of the lattice codes.
|
{
"cite_N": [
"@cite_59",
"@cite_27",
"@cite_12",
"@cite_2"
],
"mid": [
"2101872481",
"2167447263",
"2162180430",
""
],
"abstract": [
"It has been conjectured that lattice codes are good for (almost) everything. As an additional bit of evidence for this claim, we offer a few results showing the utility of lattice codes for the AWGN relay channel. We show that the decode-and-forward rates of the relay channel can be achieved using lattice encoding and decoding. We present an encoding decoding technique that uses a doubly-nested lattice code. Encoding is accomplished using a combination of superposition encoding and block Markov encoding, while decoding is accomplished using a strategy reminiscent of Cover and El Gamal's list decoding. Our technique can be extended to a wide variety of relay topologies, including the half-duplex relay channel and the cooperative multiple-access channel.",
"A relay channel consists of an input x_ l , a relay output y_ 1 , a channel output y , and a relay sender x_ 2 (whose transmission is allowed to depend on the past symbols y_ 1 . The dependence of the received symbols upon the inputs is given by p(y,y_ 1 |x_ 1 ,x_ 2 ) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_ 1 , then C : = : !_ p(x_ 1 ,x_ 2 ) , I(X_ 1 ,X_ 2 ;Y), I(X_ 1 ; Y_ 1 |X_ 2 ) . 2)If y_ 1 is a degraded form of y , then C : = : !_ p(x_ 1 ) x_ 2 I(X_ 1 ;Y|x_ 2 ) . 3)If p(y,y_ 1 |x_ 1 ,x_ 2 ) is an arbitrary relay channel with feedback from (y,y_ 1 ) to both x_ 1 x_ 2 , then C : = : p(x_ 1 ,x_ 2 ) , I(X_ 1 ,X_ 2 ;Y),I ,(X_ 1 ;Y,Y_ 1 |X_ 2 ) . 4)For a general relay channel, C : : p(x_ 1 ,x_ 2 ) , I ,(X_ 1 , X_ 2 ;Y),I(X_ 1 ;Y,Y_ 1 |X_ 2 ) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.",
"How much information can be carried over a wireless network with a multiplicity of nodes, and how should the nodes cooperate to transfer information? To study these questions, we formulate a model of wireless networks that particularly takes into account the distances between nodes, and the resulting attenuation of radio signals, and study a performance measure that weights information by the distance over which it is transported. Consider a network with the following features. I) n nodes located on a plane, with minimum separation distance spl rho sub min >0. II) A simplistic model of signal attenuation e sup - spl gamma spl rho spl rho sup spl delta over a distance spl rho , where spl gamma spl ges 0 is the absorption constant (usually positive, unless over a vacuum), and spl delta >0 is the path loss exponent. III) All receptions subject to additive Gaussian noise of variance spl sigma sup 2 . The performance measure we mainly, but not exclusively, study is the transport capacity C sub T :=sup spl Sigma on sub spl lscr =1 sup m R sub spl lscr spl middot spl rho sub spl lscr , where the supremum is taken over m, and vectors (R sub 1 ,R sub 2 ,...,R sub m ) of feasible rates for m source-destination pairs, and spl rho sub spl lscr is the distance between the spl lscr th source and its destination. It is the supremum distance-weighted sum of rates that the wireless network can deliver. We show that there is a dichotomy between the cases of relatively high and relatively low attenuation. When spl gamma >0 or spl delta >3, the relatively high attenuation case, the transport capacity is bounded by a constant multiple of the sum of the transmit powers of the nodes in the network. However, when spl gamma =0 and spl delta <3 2, the low-attenuation case, we show that there exist networks that can provide unbounded transport capacity for fixed total power, yielding zero energy priced communication. Examples show that nodes can profitably cooperate over large distances using coherence and multiuser estimation when the attenuation is low. These results are established by developing a coding scheme and an achievable rate for Gaussian multiple-relay channels, a result that may be of interest in its own right.",
""
]
}
|
1111.0084
|
2951598575
|
Lattice codes are known to achieve capacity in the Gaussian point-to-point channel, achieving the same rates as independent, identically distributed (i.i.d.) random Gaussian codebooks. Lattice codes are also known to outperform random codes for certain channel models that are able to exploit their linearity. In this work, we show that lattice codes may be used to achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and show several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. In particular, we present a nested lattice list decoding technique by which lattice codes are shown to achieve the Decode-and-Forward (DF) rate of single source, single destination Gaussian relay channels with one or more relays. We next present two examples of how this DF scheme may be combined with the linearity of lattice codes to achieve new rate regions which for some channel conditions outperform analogous known Gaussian random coding techniques in multi-source relay channels. That is, we derive a new achievable rate region for the two-way relay channel with direct links and compare it to existing schemes, and derive another achievable rate region for the multiple access relay channel. We furthermore present a lattice Compress-and-Forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner-Ziv binning scheme and achieves the same rate as the Cover-El Gamal CF rate evaluated for Gaussian random codes. These results suggest that structured lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks.
|
Lattice codes for single-hop channels. Lattice codes are known to be ``good'' for almost everything in Gaussian point-to-point, single-hop channels @cite_56 @cite_65 @cite_60 , from both source and channel coding perspectives. In particular, nested lattice codes have been shown to be capacity achieving for the AWGN channel, the AWGN broadcast channel @cite_75 and the AWGN multiple access channel @cite_6 . Lattice codes may further be used in achieving the capacity of Gaussian channels with interference or state known at the transmitter (but not receiver) @cite_29 using a lattice equivalent @cite_75 of dirty-paper coding (DPC) @cite_36 . The nested lattice approach of @cite_75 for the dirty-paper channel is extended to dirty-paper networks in @cite_31 , where in some scenarios lattice codes are interestingly shown to outperform random codes. In @math -user interference channels, their structure has enabled the decoding of (portions of) ``sums of interference'' terms @cite_39 @cite_9 @cite_23 @cite_61 , allowing receivers to subtract off this sum rather than try to decode individual interference terms in order to remove them. From a source coding perspective, lattices have been useful in distributed Gaussian source coding when reconstructing a linear function @cite_1 @cite_35 .
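As a toy illustration of the nested-lattice machinery referenced above, the following one-dimensional sketch builds a fine lattice nested in a coarse one and decodes through quantization plus a modulo-lattice reduction; the step sizes, noise level, and helper names are arbitrary choices, and real constructions use high-dimensional lattices.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step):
    # Nearest-neighbor quantization onto the 1-D lattice step*Z.
    return step * np.round(x / step)

def mod_lattice(x, step):
    # Reduce x modulo step*Z into the fundamental cell [-step/2, step/2).
    return x - quantize(x, step)

step_f, k = 1.0, 8           # fine-lattice step and nesting ratio
step_c = k * step_f          # coarse lattice is a sublattice of the fine one
# Codebook: the k fine-lattice points inside one coarse Voronoi cell.
codebook = step_f * (np.arange(k) - k // 2)

msg = 5
x = codebook[msg]                                  # encode
y = x + 0.1 * rng.standard_normal()                # AWGN channel
x_hat = mod_lattice(quantize(y, step_f), step_c)   # lattice decode + mod
decoded = int(np.argmin(np.abs(codebook - x_hat)))
print(msg, decoded)  # agree when the noise is small
```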
|
{
"cite_N": [
"@cite_61",
"@cite_35",
"@cite_60",
"@cite_36",
"@cite_29",
"@cite_9",
"@cite_65",
"@cite_1",
"@cite_6",
"@cite_56",
"@cite_39",
"@cite_23",
"@cite_31",
"@cite_75"
],
"mid": [
"",
"",
"",
"1976109068",
"",
"",
"",
"2109053700",
"2005196269",
"",
"2151027523",
"",
"",
"2111992817"
],
"abstract": [
"",
"",
"",
"A channel with output Y = X + S + Z is examined, The state S N(0, QI) and the noise Z N(0, NI) are multivariate Gaussian random variables ( I is the identity matrix.). The input X R^ n satisfies the power constraint (l n) i=1 ^ n X_ i ^ 2 P . If S is unknown to both transmitter and receiver then the capacity is 1 2 (1 + P ( N + Q)) nats per channel use. However, if the state S is known to the encoder, the capacity is shown to be C^ = 1 2 (1 + P N) , independent of Q . This is also the capacity of a standard Gaussian channel with signal-to-noise power ratio P N . Therefore, the state S does not affect the capacity of the channel, even though S is unknown to the receiver. It is shown that the optimal transmitter adapts its signal to the state S rather than attempting to cancel it.",
"",
"",
"",
"Consider a pair of correlated Gaussian sources (X 1,X 2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X 1 and X 2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X 1 and X 2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by ldquocorrelatedrdquo lattice-structured binning.",
"Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.",
"",
"Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit s Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.",
"",
"",
"Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, lack of structured coding schemes limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only for lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach."
]
}
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ) achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for a constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in parallel into the bins. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
@cite_23 showed that the two-choice paradigm can be applied effectively in a different context, namely, that of routing virtual circuits in interconnection networks with low congestion. They showed how to incorporate the two-choice approach into a well-studied paradigm due to Valiant for routing virtual circuits, achieving significantly lower congestion.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"1980177572"
],
"abstract": [
"In this paper we study randomized algorithms for circuit switching on multistage networks related to the butterfly. We devise algorithms that route messages by constructing circuits (or paths) for the messages with small congestion, dilation, and setup time. Our algorithms are based on the idea of having each message choose a route from two possibilities, a technique that has previously proven successful in simpler load balancing settings. As an application of our techniques, we propose a novel design for a data server."
]
}
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ) achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for a constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in parallel into the bins. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
Kunal et al. @cite_21 prove that for weighted balls (weight distribution with finite fourth moment) and @math , the expected gap is independent of the number of balls and is less than @math , where @math depends on the weight distribution. They first prove the weak gap theorem, which says that w.h.p. @math . Since in the weighted case the @math choice process is not dominated by the one-choice process, they prove the weak gap theorem via a potential function argument. Then, the short memory theorem is proved. While in @cite_10 the short memory theorem is proven via coupling, @cite_21 uses similar coupling arguments but defines a different distance function and uses a sophisticated argument to show that the coupling converges.
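A quick simulation can illustrate the claim that the weighted gap does not grow with the number of balls; the exponential weight distribution (finite variance) and the parameters below are illustrative choices, not those of the cited analysis.

```python
import random

def weighted_two_choice(n, m, seed=0):
    """Two-choice with i.i.d. exponential ball weights: each ball goes to
    the lighter of two uniformly chosen bins. Returns the gap between the
    heaviest bin and the average bin weight."""
    rng = random.Random(seed)
    load = [0.0] * n
    for _ in range(m):
        w = rng.expovariate(1.0)
        i, j = rng.randrange(n), rng.randrange(n)
        load[i if load[i] <= load[j] else j] += w
    return max(load) - sum(load) / n

# The gap should stay roughly flat as m grows.
for m in (10_000, 100_000, 1_000_000):
    print(m, round(weighted_two_choice(n=1_000, m=m), 2))
```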
|
{
"cite_N": [
"@cite_21",
"@cite_10"
],
"mid": [
"2021124283",
"1996492982"
],
"abstract": [
"We investigate balls-and-bins processes where m weighted balls areplaced into n bins using the \"power of two choices\" paradigm,whereby a ball is inserted into the less loaded of two randomly chosen bins. The case where each of the m balls has unit weight had been studied extensively. In a seminal paper Azar et.al. showed that when m=n the most loaded bin has Θ(log log n) balls with high probability. Surprisingly, thegap in load between the heaviest bin and the average bin does not increase with m and was shown by tobe Θ(log log n) with high probability for arbitrarily large m. We generalize this result to the weighted case where balls have weights drawn from an arbitrary weight distribution. We show that aslong as the weight distribution has finite second moment andsatisfies a mild technical condition, the gap between the weight of the heaviest bin and the weight of the average bin is independent ofthe number balls thrown. This is especially striking whenconsidering heavy tailed distributions such as Power-Law andLog-Normal distributions. In these cases, as more balls are thrown,heavier and heavier weights are encountered. Nevertheless with high probability, the imbalance in the load distribution does notincrease. Furthermore, if the fourth moment of the weight distribution is finite, the expected value of the gap is shown to beindependent of the number of balls.",
"We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of @math randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when @math . In this paper we present the first tight analysis in the heavily loaded case, that is, when @math rather than @math .The best previously known results for the multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound of the maximum load of bins of @math O @math with high probability. We show, however, that the multiple-choice..."
]
}
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ) achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for a constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in parallel into the bins. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
The @math -choice scheme @cite_9 proved that if a ball chooses, with probability @math , the least loaded of @math randomly chosen bins, and otherwise a single bin i.u.r., then the @math becomes independent of @math and is given by @math .
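For concreteness, here is a minimal simulation of the (1 + β)-choice process just described; β and the sizes are illustrative, and the gap is measured as the maximum load minus the average m/n.

```python
import random

def one_plus_beta_choice(n, m, beta, seed=0):
    """(1+beta)-choice: with probability beta place the ball in the lesser
    loaded of two uniform bins, otherwise in a single uniform bin."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        if rng.random() < beta:
            i, j = rng.randrange(n), rng.randrange(n)
            load[i if load[i] <= load[j] else j] += 1
        else:
            load[rng.randrange(n)] += 1
    return max(load) - m / n  # the gap

# The gap should stay of order (log n)/beta, independent of m.
for m in (10_000, 100_000, 1_000_000):
    print(m, round(one_plus_beta_choice(n=1_000, m=m, beta=0.5), 1))
```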
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1968904909"
],
"abstract": [
"Suppose m balls are sequentially thrown into n bins where each ball goes into a random bin. It is well-known that the gap between the load of the most loaded bin and the average is Θ (√mlog n n), for large m. If each ball goes to the lesser loaded of two random bins, this gap dramatically reduces to Θ (log log n) independent of m. Consider now the following \"(1 + β)-choice\" process for some parameter β ∈ (0, 1): each ball goes to a random bin with probability (1 - β) and the lesser loaded of two random bins with probability β. How does the gap for such a process behave? Suppose that the weight of each ball was drawn from a geometric distribution. How is the gap (now defined in terms of weight) affected? In this work, we develop general techniques for analyzing such balls-into-bins processes. Specifically, we show that for the (1 + β)-choice process above, the gap is Θ(log n β), irrespective of m. Moreover the gap stays at Θ(log n β) in the weighted case for a large class of weight distributions. No non-trivial explicit bounds were previously known in the weighted case, even for the 2-choice paradigm."
]
}
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ) achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for a constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in parallel into the bins. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
In the parallel setting, @cite_12 showed that a constant bound on the gap is achievable with @math communication rounds. Adler et al. @cite_22 consider parallel balls and bins with multiple rounds, and present an analysis giving an @math bound on the gap (for @math ) using @math rounds of communication.
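The flavor of round-based parallel allocation can be sketched as follows; this toy synchronous protocol (uniform requests, a fixed bin capacity, rejected balls retrying) is an assumption for illustration, not the exact protocol analyzed in the cited works.

```python
import random
from collections import defaultdict

def parallel_rounds(n, capacity=2, rounds=5, seed=0):
    """Toy synchronous allocation of m = n balls: each round every
    unplaced ball requests one uniform bin; a bin accepts requesters up
    to a fixed capacity and rejects the rest, who retry next round."""
    rng = random.Random(seed)
    load, unplaced = [0] * n, n
    for _ in range(rounds):
        requests = defaultdict(int)
        for _ in range(unplaced):
            requests[rng.randrange(n)] += 1
        unplaced = 0
        for b, r in requests.items():
            accepted = min(r, capacity - load[b])
            load[b] += accepted
            unplaced += r - accepted
    return max(load), unplaced  # max load and balls still unplaced

print(parallel_rounds(n=10_000))
```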
|
{
"cite_N": [
"@cite_22",
"@cite_12"
],
"mid": [
"2069817080",
"2951571245"
],
"abstract": [
"It is well known that after placing n balls independently and uniformly at Ž . random into n bins, the fullest bin holds Q log nrlog log n balls with high probability. More recently, analyzed the following process: randomly choose d bins for each ball, and then place the balls, one by one, into the least full bin from its d choices. They show that after all n balls have been placed, the fullest bin contains only Ž . log log nrlog dqQ 1 balls with high probability. We explore extensions of this result to parallel and distributed settings. Our results focus on the tradeoff between the amount of Correspondence to: M. Mitzenmacher * A preliminary version of this work appeared in the Proceedings of the Twenty-Se enth Annual ACM Symposium on the Theory of Computing, May 1995, pp. 238]247. † This work was primarily done while attending U.C. Berkeley, and was supported by a Schlumberger Foundation graduate fellowship. ‡ This work was primarily done while attending U.C. Berkeley, and was supported in part by ARPA Ž . under contract DABT63-92-C-0026, by NSF numbers CCR-9210260 and CDA-8722788 , and by Lawrence Livermore National Laboratory. § This work was primarily done while attending U.C. Berkeley, and was supported by the Office of Naval Research and by NSF grant CCR-9505448. ¶ Supported by a fellowship from U.C. Berkeley. Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r020159-30",
"We explore the fundamental limits of distributed balls-into-bins algorithms. We present an adaptive symmetric algorithm that achieves a bin load of two in log* n+O(1) communication rounds using O(n) messages in total. Larger bin loads can be traded in for smaller time complexities. We prove a matching lower bound of (1-o(1))log* n on the time complexity of symmetric algorithms that guarantee small bin loads at an asymptotically optimal message complexity of O(n). For each assumption of the lower bound, we provide an algorithm violating it, in turn achieving a constant maximum bin load in constant time. As an application, we consider the following problem. Given a fully connected graph of n nodes, where each node needs to send and receive up to n messages, and in each round each node may send one message over each link, deliver all messages as quickly as possible to their destinations. We give a simple and robust algorithm of time complexity O(log* n) for this task and provide a generalization to the case where all nodes initially hold arbitrary sets of messages. A less practical algorithm terminates within asymptotically optimal O(1) rounds. All these bounds hold with high probability."
]
}
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ) achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for a constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in parallel into the bins. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
For the offline balls-into-bins problem, it was shown using maximum flow computations that the maximum load of a bin is w.h.p. @math . @cite_24 showed that for @math balls, where @math is a sufficiently large constant, a perfect distribution of the balls is possible w.h.p. However, no similar result is known in the literature for the online sequential case with constant @math choices.
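The max-flow formulation mentioned above is easy to state in code: build a source-balls-bins-sink network and check whether all balls fit under a load cap L. The sketch below assumes networkx is available; the instance (two uniform choices per ball) is a toy one.

```python
import random
import networkx as nx

def feasible(choices, n, L):
    """Max-flow check: can every ball be placed in one of its chosen
    bins with no bin receiving more than L balls?"""
    G = nx.DiGraph()
    for ball, bins in enumerate(choices):
        G.add_edge("s", ("ball", ball), capacity=1)
        for b in bins:
            G.add_edge(("ball", ball), ("bin", b), capacity=1)
    for b in range(n):
        G.add_edge(("bin", b), "t", capacity=L)
    value, _ = nx.maximum_flow(G, "s", "t")
    return value == len(choices)

rng = random.Random(0)
n = 200
choices = [(rng.randrange(n), rng.randrange(n)) for _ in range(n)]
# Smallest load cap for which an offline placement exists.
print(min(L for L in range(1, n + 1) if feasible(choices, n, L)))
```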
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1577307925"
],
"abstract": [
"We investigate randomized processes underlying load balancing based on the multiple-choice paradigm: m balls have to be placed in n bins, and each ball can be placed into one out of 2 randomly selected bins. The aim is to distribute the balls as evenly as possible among the bins. Previously, it was known that a simple process that places the balls one by one in the least loaded bin can achieve a maximum load of m n + Θ(loglogn) with high probability. Furthermore, it was known that it is possible to achieve (with high probability) a maximum load of at most ⌈m n ⌉ + 1 using maximum flow computations."
]
}
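The d-choice process summarized above is simple enough to simulate directly. Below is a minimal Python sketch (ours, not from any cited paper; parameter values are arbitrary) of the sequential allocation in which each ball samples d bins uniformly at random and joins the least loaded one:

import random

def d_choice_allocate(m, n, d=2, rng=random):
    # Place m balls into n bins; each ball goes to the least loaded
    # of d bins sampled uniformly at random (ties broken by min()).
    load = [0] * n
    for _ in range(m):
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: load[b])
        load[best] += 1
    return load

loads = d_choice_allocate(m=100000, n=1000, d=2)
print(max(loads), max(loads) - 100000 // 1000)  # max load and gap over the average

Running this for growing n gives an empirical view of the m/n + O(log log n) gap behaviour discussed above.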
|
1111.0801
|
1834311253
|
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations parallel , multi-dimensional allocations multi , weighted balls weight etc. For sequential multi-choice allocation, where @math balls are thrown into @math bins with each ball choosing @math (constant) bins independently uniformly at random, the maximum load of a bin is @math with high probability heavily_load . This offers the current best known allocation scheme. However, for @math , the gap reduces to @math soda08 . A similar constant gap bound has been established for parallel allocations with @math communication rounds lenzen . In this paper we propose a novel multi-choice allocation algorithm, ( @math ), achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant @math . We achieve a maximum load of @math with high probability for the constant @math choice scheme with a constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, @math . Further, we generalize this result to (i) the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii) the multi-dimensional setting, where balls have @math dimensions with @math randomly and uniformly chosen filled dimensions for @math , and (iii) the parallel case, where @math balls arrive and are placed in the bins in parallel. We show that the gap in these cases is also a constant w.h.p. (independent of @math ) for constant values of @math with an expected constant number of retries per ball.
|
Mitzenmacher et al. in @cite_11 address both the single-choice and d-choice paradigms for multidimensional balls and bins under the assumption that the balls are uniform D-dimensional 0-1 vectors, where each ball has exactly @math populated dimensions. They show that the gap for multidimensional balls and bins, using the two-choice process, is bounded by O(log log(nD)). We provide a better bound of @math w.h.p. for the @math case.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2018079147"
],
"abstract": [
"We consider a multidimensional variant of the balls-and-bins problem, where balls correspond to random D-dimensional 0-1 vectors. This variant is motivated by a problem in load balancing documents for distributed search engines. We demonstrate the utility of the power of two choices in this domain."
]
}
|
1111.0492
|
2181338740
|
We show the existence of rigid combinatorial objects which previously were not known to exist. Specifically, for a wide range of the underlying parameters, we show the existence of non-trivial orthogonal arrays, t-designs, and t-wise permutations. In all cases, the sizes of the objects are optimal up to polynomial overhead. The proof of existence is probabilistic. We show that a randomly chosen such object has the required properties with positive yet tiny probability. The main technical ingredient is a special local central limit theorem for suitable lattice random walks with finitely many steps.
|
One relaxation is to allow a set @math with a non-uniform distribution @math . For many practical applications of @math -designs and @math -wise permutations in statistics and computer science, but not quite every application, this relaxation is as good as the uniform question. The existence of a solution with small support is guaranteed by Carathéodory's theorem, using the fact that the constraints on @math are all linear equalities and inequalities. Moreover, such a solution can be found efficiently, as was shown by Karp and Papadimitriou @cite_2 and in more general settings by Koller and Megiddo @cite_6 . Alon and Lovett @cite_3 give a strongly explicit analog of this in the case of @math -wise permutations and more generally in the case of group actions.
|
{
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2006637813",
"2018198126"
],
"abstract": [
"",
"The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr (Event) = @math . It is shown that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them, which is supported by a \"small\" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of independent random variables, a small sample space can be constructed in polynomial time. This last result can be used to derandomize algorithms; this is demonstrated by an application to the problem of finding large independent sets in sparse hypergraphs.",
"We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP—a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities."
]
}
|
1111.0492
|
2181338740
|
We show the existence of rigid combinatorial objects which previously were not known to exist. Specifically, for a wide range of the underlying parameters, we show the existence of non-trivial orthogonal arrays, t-designs, and t-wise permutations. In all cases, the sizes of the objects are optimal up to polynomial overhead. The proof of existence is probabilistic. We show that a randomly chosen such object has the required properties with positive yet tiny probability. The main technical ingredient is a special local central limit theorem for suitable lattice random walks with finitely many steps.
|
A different relaxation is to require the uniform distribution on @math to only approximately satisfy equation . Then it is trivial that a sufficiently large random subset @math satisfies the requirement with high probability, and the question is to find an explicit solution. For instance, we can relax the problem of @math -wise permutations to @math -wise permutations. For this variant an optimal solution (up to polynomial factors) was achieved by Kaplan, Naor and Reingold @cite_10 , who gave a construction of such almost @math -wise permutations of size @math . Alternatively, one can start with the constant-size expanding generating set of @math given by Kassabov @cite_17 and take a random walk on it of length @math .
|
{
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2175800841",
"2959799481"
],
"abstract": [
"Constructions of k-wise almost independent permutations have been receiving a growing amount of attention in recent years. However, unlike the case of k-wise independent functions, the size of previously constructed families of such permutations is far from optimal. This paper gives a new method for reducing the size of families given by previous constructions. Our method relies on pseudorandom generators for space-bounded computations. In fact, all we need is a generator, that produces “pseudorandom walks” on undirected graphs with a consistent labelling. One such generator is implied by Reingold's log-space algorithm for undirected connectivity [21,22]. We obtain families of k-wise almost independent permutations, with an optimal description length, up to a constant factor. More precisely, if the distance from uniform for any k tuple should be at most δ, then the size of the description of a permutation in the family is @math .",
"We construct an explicit generating sets @math and @math of the alternating and the symmetric groups, which make the Cayley graphs @math and @math a family of bounded degree expanders for all sufficiently large @math . These expanders have many applications in the theory of random walks on groups and other areas of mathematics."
]
}
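The random-walk alternative mentioned above can be sketched in a few lines. The sketch below is a toy illustration only: the generating set is an arbitrary generating set of S_4, not Kassabov's expander generators, and the walk length is illustrative.

import random

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(p)))

def random_walk_permutation(generators, length, rng=random):
    # Compose `length` generators chosen uniformly at random; over an
    # expanding generating set the result is close to a uniform permutation.
    n = len(generators[0])
    current = tuple(range(n))  # start at the identity
    for _ in range(length):
        current = compose(rng.choice(generators), current)
    return current

gens = [(1, 0, 2, 3), (1, 2, 3, 0)]  # a transposition and a 4-cycle in S_4
print(random_walk_permutation(gens, length=50))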
|
1111.0045
|
1782779125
|
Entity resolution is the problem of reconciling database references corresponding to the same real-world entities. Given the abundance of publicly available databases that have unresolved entities, we motivate the problem of query-time entity resolution: quick and accurate resolution for answering queries over such 'unclean' databases at query-time. Since collective entity resolution approaches -- where related references are resolved jointly -- have been shown to be more accurate than independent attribute-based resolution for off-line entity resolution, we focus on developing new algorithms for collective resolution for answering entity resolution queries at query-time. For this purpose, we first formally show that, for collective resolution, precision and recall for individual entities follow a geometric progression as neighbors at increasing distances are considered. Unfolding this progression leads naturally to a two stage 'expand and resolve' query processing strategy. In this strategy, we first extract the related records for a query using two novel expansion operators, and then resolve the extracted records collectively. We then show how the same strategy can be adapted for query-time entity resolution by identifying and resolving only those database references that are the most helpful for processing the query. We validate our approach on two large real-world publication databases where we show the usefulness of collective resolution and at the same time demonstrate the need for adaptive strategies for query processing. We then show how the same queries can be answered in real-time using our adaptive approach while preserving the gains of collective resolution. In addition to experiments on real datasets, we use synthetically generated data to empirically demonstrate the validity of the performance trends predicted by our analysis of collective entity resolution over a wide range of structural characteristics in the data.
|
The entity resolution problem has been studied in many different areas under different names --- deduplication, record linkage, co-reference resolution, reference reconciliation, object consolidation, etc. Much of the work has focused on traditional attribute-based entity resolution. Extensive research has been done on defining approximate string similarity measures @cite_2 @cite_15 @cite_18 @cite_6 that may be used for unsupervised entity resolution. The other approach uses adaptive supervised algorithms that learn similarity measures from labeled data @cite_8 @cite_9 .
|
{
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_2",
"@cite_15"
],
"mid": [
"2135223301",
"2170902582",
"2164456230",
"2105423800",
"2150698190",
"2001496424"
],
"abstract": [
"Identifying approximately duplicate database records that refer to the same entity is essential for information integration. The authors compare and describe methods for combining and learning textual similarity measures for name matching.",
"When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects' shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects, as well as domain-dependent transformations for recognizing format inconsistencies. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to simultaneously tailor both mapping rules and a set of general transformations to a specific application domain, through limited user input. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.",
"The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we present a framework for improving duplicate detection using trainable measures of textual similarity. We propose to employ learnable text distance functions for each database field, and show that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can improve duplicate detection accuracy over traditional techniques.",
"To ensure high data quality, data warehouses must validate and cleanse incoming data tuples from external sources. In many situations, clean tuples must match acceptable tuples in reference tables. For example, product name and description fields in a sales record from a distributor must match the pre-recorded name and description fields in a product reference relation.A significant challenge in such a scenario is to implement an efficient and accurate fuzzy match operation that can effectively clean an incoming tuple if it fails to match exactly with any tuple in the reference relation. In this paper, we propose a new similarity function which overcomes limitations of commonly used similarity functions, and develop an efficient fuzzy match algorithm. We demonstrate the effectiveness of our techniques by evaluating them on real datasets.",
"To combine information from heterogeneous sources, equivalent data in the multiple sources must be identified. This task is the field matching problem. Specifically, the task is to determine whether or not two syntactic values are alternative designations of the same semantic entity. For example the addresses Dept. of Comput. Sci. and Eng., University of California, San Diego, 9500 Gilman Dr. Dept. 0114, La Jolla. CA 92093 and UCSD, Computer Science and Engineering Department, CA 92093-0114 do designate the same department. This paper describes three field matching algorithms, and evaluates their performance on real-world datasets. One proposed method is the well-known Smith-Waterman algorithm for comparing DNA and protein sequences. Several applications of field matching in knowledge discovery are described briefly, including WEBFIND, which is a new software tool that discovers scientific papers published on the worldwide web. WEBFIND uses external information sources to guide its search for authors and papers. Like many other worldwide web tools, WEBFIND needs to solve the field matching problem in order to navigate between information sources.",
"We survey the current techniques to cope with the problem of string matching that allows errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices. We conclude with some directions for future work and open problems."
]
}
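As a minimal illustration of the unsupervised, attribute-based similarity measures surveyed above (a sketch of ours: the helper name and the 0.5 threshold are arbitrary, and production systems use tuned, domain-specific measures such as learned edit distances):

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Crude string similarity in [0, 1] via difflib's ratio.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

r1 = "Dept. of Comput. Sci. and Eng., University of California, San Diego"
r2 = "UCSD, Computer Science and Engineering Department"
print(similarity(r1, r2), similarity(r1, r2) > 0.5)  # threshold is illustrative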
|
1111.0045
|
1782779125
|
Entity resolution is the problem of reconciling database references corresponding to the same real-world entities. Given the abundance of publicly available databases that have unresolved entities, we motivate the problem of query-time entity resolution: quick and accurate resolution for answering queries over such 'unclean' databases at query-time. Since collective entity resolution approaches -- where related references are resolved jointly -- have been shown to be more accurate than independent attribute-based resolution for off-line entity resolution, we focus on developing new algorithms for collective resolution for answering entity resolution queries at query-time. For this purpose, we first formally show that, for collective resolution, precision and recall for individual entities follow a geometric progression as neighbors at increasing distances are considered. Unfolding this progression leads naturally to a two stage 'expand and resolve' query processing strategy. In this strategy, we first extract the related records for a query using two novel expansion operators, and then resolve the extracted records collectively. We then show how the same strategy can be adapted for query-time entity resolution by identifying and resolving only those database references that are the most helpful for processing the query. We validate our approach on two large real-world publication databases where we show the usefulness of collective resolution and at the same time demonstrate the need for adaptive strategies for query processing. We then show how the same queries can be answered in real-time using our adaptive approach while preserving the gains of collective resolution. In addition to experiments on real datasets, we use synthetically generated data to empirically demonstrate the validity of the performance trends predicted by our analysis of collective entity resolution over a wide range of structural characteristics in the data.
|
Probabilistic approaches that cast entity resolution as a classification problem have been extensively studied. The groundwork was done by Fellegi and Sunter. Others @cite_21 @cite_22 have more recently built upon this work. Adaptive machine learning approaches have been proposed for data integration @cite_14 @cite_8 , where active learning requires the user to label informative examples. Probabilistic models that use relationships for collective entity resolution have been applied to named entity recognition and citation matching . These probabilistic approaches are superior to similarity-based clustering algorithms in that they associate a degree of confidence with every decision, and learned models provide valuable insight into the domain. However, probabilistic inference for collective entity resolution is not known to be scalable in practice, particularly when relationships are also considered. These approaches have mostly been shown to work for small datasets, and are significantly slower than their clustering counterparts.
|
{
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_22",
"@cite_8"
],
"mid": [
"2067566391",
"1564630549",
"",
"2170902582"
],
"abstract": [
"Deduplication is a key operation in integrating data from multiple sources. The main challenge in this task is designing a function that can resolve when a pair of records refer to the same entity in spite of various data inconsistencies. Most existing systems use hand-coded functions. One way to overcome the tedium of hand-coding is to train a classifier to distinguish between duplicates and non-duplicates. The success of this method critically hinges on being able to provide a covering and challenging set of training pairs that bring out the subtlety of deduplication function. This is non-trivial because it requires manually searching for various data inconsistencies between any two records spread apart in large lists.We present our design of a learning-based deduplication system that uses a novel method of interactively discovering challenging training pairs using active learning. Our experiments on real-life datasets show that active learning significantly reduces the number of instances needed to achieve high accuracy. We investigate various design issues that arise in building a system to provide interactive response, fast convergence, and interpretable output.",
"Although terminology differs, there is considerable overlap between record linkage methods based on the Fellegi-Sunter model (JASA 1969) and Bayesian networks used in machine learning (Mitchell 1997). Both are based on formal probabilistic models that can be shown to be equivalent in many situations (Winkler 2000). When no missing data are present in identifying fields and training data are available, then both can efficiently estimate parameters of interest. EM and MCMC methods can be used for automatically estimating parameters and error rates in some of the record linkage situations (Belin and Rubin 1995, Larsen and Rubin 2001).",
"",
"When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects' shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects, as well as domain-dependent transformations for recognizing format inconsistencies. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to simultaneously tailor both mapping rules and a set of general transformations to a specific application domain, through limited user input. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains."
]
}
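The Fellegi-Sunter groundwork mentioned above scores a candidate record pair by a log-likelihood ratio over per-field agreement patterns. A minimal sketch, with hypothetical per-field probabilities (the field set and numbers below are ours):

import math

def match_weight(agreements, m_probs, u_probs):
    # Sum over fields: log(m/u) on agreement, log((1-m)/(1-u)) on disagreement,
    # where m = P(agree | match) and u = P(agree | non-match).
    w = 0.0
    for agree, m, u in zip(agreements, m_probs, u_probs):
        w += math.log(m / u) if agree else math.log((1 - m) / (1 - u))
    return w

m_probs = [0.95, 0.90, 0.80]  # hypothetical P(agree | match) for (name, year, venue)
u_probs = [0.01, 0.10, 0.05]  # hypothetical P(agree | non-match)
print(match_weight([True, True, False], m_probs, u_probs))  # higher => more likely a match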
|
1110.5972
|
1569944010
|
Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as "spot instances", at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour and VMs will run only when the current price is lower than the user's bid. This paper proposes a resource allocation policy that addresses the problem of running deadline-constrained compute-intensive jobs on a pool composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide what are the best types of VMs to run each job and when jobs should run. Several estimation methods are evaluated and compared, using trace-based simulations, which take real price variation traces obtained from Amazon Web Services as input, as well as an application trace from the Parallel Workload Archive. Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60% lower) of the price that would normally be paid on fixed-priced resources.
|
Research on building virtual clusters using cloud resources can generally be divided into two categories: (1) techniques to extend the capacity of in-house clusters at times of peak demand, and (2) assembling resource pools using only public cloud resources and using them to run compute-intensive applications. For instance, @cite_9 have evaluated a set of well-known scheduling policies, including backfilling techniques, in a system that extends the capacity of a local cluster using fixed-priced cloud resources. Similarly, @cite_10 have evaluated policies that offload extra demand from a local cluster to a resource pool composed of Amazon EC2 spot instances. In contrast to these works, our system model does not consider the existence of a local cluster; instead, all resources are cloud-based spot instances.
|
{
"cite_N": [
"@cite_9",
"@cite_10"
],
"mid": [
"1974555555",
"2120964770"
],
"abstract": [
"In this paper, we investigate the benefits that organisations can reap by using \"Cloud Computing\" providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for virtual machines are submitted to the organisation's cluster, but additional virtual machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users' requests. Naive scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate six scheduling strategies that consider the use of resources from the \"Cloud\", to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests' response times.",
"Dedicated computing clusters are typically sized based on an expected average workload over a period of years, rather than on peak workloads, which might exist for relatively short times of weeks or months. Recent work has proposed temporarily adding capacity to dedicated clusters during peak periods, by purchasing additional resources from Infrastructure as a Service (IaaS) providers such as Amazon's EC2. In this paper, we consider the economics of purchasing such resources by taking advantage of new opportunities offered for renting virtual infrastructure such as the spot pricing model introduced by Amazon. Furthermore, we define different provisioning policies and investigate the use of spot instances compared to normal instances in terms of cost savings and total breach time of tasks in the queue."
]
}
|
1110.5972
|
1569944010
|
Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as "spot instances", at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour and VMs will run only when the current price is lower than the user's bid. This paper proposes a resource allocation policy that addresses the problem of running deadline-constrained compute-intensive jobs on a pool composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide what are the best types of VMs to run each job and when jobs should run. Several estimation methods are evaluated and compared, using trace-based simulations, which take real price variation traces obtained from Amazon Web Services as input, as well as an application trace from the Parallel Workload Archive. Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60% lower) of the price that would normally be paid on fixed-priced resources.
|
A few recently published works have touched on the subject of leveraging variable-priced cloud resources in high-performance computing. @cite_3 have proposed a probabilistic decision model to help users decide how much to bid for a certain spot instance type in order to meet a given monetary budget or a deadline. The model suggests bid values based on the probability of failures calculated using a mean of past prices from Amazon EC2. It can then estimate, with a given confidence, values for a budget and a deadline that can be achieved if the given bid is used.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2162570510"
],
"abstract": [
"With the recent introduction of Spot Instances in the Amazon Elastic Compute Cloud (EC2), users can bid for resources and thus control the balance of reliability versus monetary costs. A critical challenge is to determine bid prices that minimize monetary costs for a user while meeting Service Level Agreement (SLA) constraints (for example, sufficient resource availability to complete a computation within a desired deadline). We propose a probabilistic model for the optimization of monetary costs, performance, and reliability, given user and application requirements and dynamic conditions. Using real instance price traces and workload models, we evaluate our model and demonstrate how users should bid optimally on Spot Instances to reach different objectives with desired levels of confidence."
]
}
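In the spirit of the decision model described above, one can estimate from past prices how likely a given bid is to be outbid. The sketch below is only an empirical-frequency stand-in for the cited probabilistic model; the function name and prices are hypothetical:

def out_of_bid_probability(price_history, bid):
    # Empirical fraction of observed spot prices that exceed the bid.
    if not price_history:
        raise ValueError("need at least one observed price")
    return sum(p > bid for p in price_history) / len(price_history)

history = [0.031, 0.035, 0.040, 0.029, 0.050, 0.033]  # hypothetical $/hour samples
for bid in (0.032, 0.040, 0.055):
    print(bid, out_of_bid_probability(history, bid))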
|
1110.5972
|
1569944010
|
Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as "spot instances", at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour and VMs will run only when the current price is lower than the user's bid. This paper proposes a resource allocation policy that addresses the problem of running deadline-constrained compute-intensive jobs on a pool composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide what are the best types of VMs to run each job and when jobs should run. Several estimation methods are evaluated and compared, using trace-based simulations, which take real price variation traces obtained from Amazon Web Services as input, as well as an application trace from the Parallel Workload Archive. Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60% lower) of the price that would normally be paid on fixed-priced resources.
|
@cite_6 proposed a method to reduce the cost of computations and provide fault tolerance when using EC2 spot instances. Based on the price history, they simulated how several checkpointing policies would perform when faced with out-of-bid situations. Their evaluation has shown that checkpointing schemes, in spite of the inherent overhead, can tolerate instance failures while reducing the price paid, as compared to normal on-demand instances.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2008793665"
],
"abstract": [
"Recently introduced spot instances in the Amazon Elastic Compute Cloud (EC2) offer lower resource costs in exchange for reduced reliability; these instances can be revoked abruptly due to price and demand fluctuations. Mechanisms and tools that deal with the cost-reliability trade-offs under this schema are of great value for users seeking to lessen their costs while maintaining high reliability. We study how one such a mechanism, namely check pointing, can be used to minimize the cost and volatility of resource provisioning. Based on the real price history of EC2 spot instances, we compare several adaptive check pointing schemes in terms of monetary costs and improvement of job completion times. Trace-based simulations show that our approach can reduce significantly both price and the task completion times."
]
}
|
1110.5969
|
2949220488
|
Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs will run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. In spite of an apparent economic advantage, due to the intermittent nature of biddable resources, application execution times may be prolonged or they may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault tolerance techniques, namely checkpointing, task duplication and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces, as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures.
|
@cite_9 proposed a method to reduce the cost of computations and provide fault tolerance when using EC2 spot instances. Based on the price history, they simulated how several checkpointing policies would perform when faced with out-of-bid situations. The proposed policies used two distinct techniques for deciding when to checkpoint a running program: at hour boundaries and at price rising edges. In the hour boundary scheme, checkpoints are taken periodically every hour, while in the rising edge scheme, checkpoints are taken when the spot price for a given instance type is increasing. The authors proposed combinations of the above-mentioned schemes, including adaptive decisions, such as taking or skipping a checkpoint at certain times. Their evaluation has shown that checkpointing schemes, in spite of the inherent overhead, can tolerate instance failures while reducing the price paid, as compared to normal on-demand instances. Similarly, we evaluate a checkpointing mechanism implemented according to this work, with the objective of comparing it with other fault tolerance approaches.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2008793665"
],
"abstract": [
"Recently introduced spot instances in the Amazon Elastic Compute Cloud (EC2) offer lower resource costs in exchange for reduced reliability; these instances can be revoked abruptly due to price and demand fluctuations. Mechanisms and tools that deal with the cost-reliability trade-offs under this schema are of great value for users seeking to lessen their costs while maintaining high reliability. We study how one such a mechanism, namely check pointing, can be used to minimize the cost and volatility of resource provisioning. Based on the real price history of EC2 spot instances, we compare several adaptive check pointing schemes in terms of monetary costs and improvement of job completion times. Trace-based simulations show that our approach can reduce significantly both price and the task completion times."
]
}
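A minimal sketch of the two checkpoint triggers described above, as we read them (hour boundaries and price rising edges); the function signature and the price trace are illustrative, not taken from the cited implementation:

def should_checkpoint(t_seconds, prev_price, curr_price,
                      hour_boundary=True, rising_edge=True):
    # Hour boundary policy: checkpoint when t falls on a whole hour.
    # Rising edge policy: checkpoint when the spot price just increased.
    on_hour = hour_boundary and t_seconds % 3600 == 0
    price_rose = rising_edge and curr_price > prev_price
    return on_hour or price_rose

trace = [(0, 0.030), (1800, 0.030), (3600, 0.040), (5400, 0.035), (7200, 0.035)]
for (t0, p0), (t1, p1) in zip(trace, trace[1:]):
    print(t1, should_checkpoint(t1, p0, p1))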
|
1110.6267
|
1633278913
|
The use of the internet, and in particular web browsing, offers many potential advantages for educational institutions as students have access to a wide range of information previously not available. However, there are potential negative effects due to factors such as time-wasting and asocial behaviour. In this study, we conducted an empirical investigation of the academic performance and the web-usage pattern of 2153 undergraduate students. Data from university proxy logs allows us to examine usage patterns and we compared this data to the students' academic performance. The results show that there is a small but significant (both statistically and educationally) association between heavier web browsing and poorer academic results (lower average mark, higher failure rates). In addition, among good students, the proportion of students who are relatively light users of the internet is significantly greater than would be expected by chance.
|
Internet use (or abuse) by university students has been one focus of research. Some research focuses on general Internet use by students (e.g. @cite_5 , which looks at gender differences in Internet use, and @cite_3 @cite_18 , which both consider race/ethnicity differences). Some research considers how and/or why students use the Web/Internet. For example, one study surveyed 548 students from 3 universities to see how many students regularly use the Internet, how many hours per week regular users spend on the Internet, and what computers they use. Respondents were also asked their views on the future use of the Internet in their careers. Another study investigated controversial uses of the Internet by university students (e.g. academic cheating, fake emails, pornography, etc.). Others found that college students report that they rely very heavily on the Web for general and academic information; that their use includes research (getting information) for school work, banking and stock market information, email, checking sports scores and downloading music; and that they believe that this use will increase over time.
|
{
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_3"
],
"mid": [
"2042311361",
"2010446440",
"2097460093"
],
"abstract": [
"A review of the recent literature concerning Internet usage among Americans reveals that the once stark gender gap is closing rapidly, but disparities remain in the purposes for which males and females use the Internet. Almost all of this research, however, is based on cross sections of American adults. Much less Internet research has focused on the college student population and, in particular, on female students; the few published studies show that female college students use the Internet less than males. However, even these recent studies may already be dated. This study, based on a large survey of college students from institutions of higher learning in Georgia, Hawaii, New Jersey, Massachusetts, and Rhode Island, considers these questions: (1) Has the gender gap in Internet use narrowed among college students to the same extent as it has in the general adult population? (2) Do female students differ from males in how they spend their time on the Internet? (3) Does family income, parental education or...",
"Given debate about the existence of a digital divide in the United States, the question remains: If individuals are in situations where all have access to the Internet (e.g., a university), will aspects of a digital divide still exist? The authors examine whether a racial digital divide exists among college students in the odds of their using the Internet and the different levels and types of usage. Data are from a random sample of full-time, residential college freshmen. Results indicate that aspects of a digital divide exist in terms of whether one uses the Internet for specific purposes; however, once individuals begin using the Internet, few racial differences exist. Internet experience and gender affect particular types of Internet usage, suggesting that the digital divide is multilayered. A policy implication from this study is that bringing individuals into structured environments with assured access may help to decrease aspects of the digital divide.",
"The purpose of the study was to gather descriptive information about college students' Internet use and to explore the relationship between types of Internet use and well-being. The sample consisted of 312 college students (67 female; age range 18-49 years; M = 21.34 years, SD = 5.05). Self-report questionnaires were administered in a large undergraduate psychology course. Exploratory factor analyses suggested 5 specific types of use: Meeting People, Information Seeking, Distraction, Coping, and E-mail. Confirmatory factor analyses on a new sample from the same university (N = 169) verified the 5-factor structure. Using the Internet for coping purposes related to depression, social anxiety, and family cohesion more so than frequency of use. This study highlights the importance of examining types of Internet use in relation to well-being."
]
}
|
1110.6267
|
1633278913
|
The use of the internet, and in particular web browsing, offers many potential advantages for educational institutions as students have access to a wide range of information previously not available. However, there are potential negative effects due to factors such as time-wasting and asocial behaviour. In this study, we conducted an empirical investigation of the academic performance and the web-usage pattern of 2153 undergraduate students. Data from university proxy logs allows us to examine usage patterns and we compared this data to the students' academic performance. The results show that there is a small but significant (both statistically and educationally) association between heavier web browsing and poorer academic results (lower average mark, higher failure rates). In addition, among good students, the proportion of students who are relatively light users of the internet is significantly greater than would be expected by chance.
|
One study reported on internet use, abuse and dependence among students at a regional U.S. university. Once again, a survey was used to gather information about the students in the sample. It was found that the majority of students use the internet daily, and that half of the sample met the defined criteria for internet abuse. There were no gender differences in terms of daily access to the internet; however, males and females did seem to use the internet for different reasons. Finally, depression was found to be positively correlated with more frequent internet use @cite_6 .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2002978390"
],
"abstract": [
"Objective: To assess Internet use, abuse, and dependence. Participants: 411 undergraduate students. Results: Ninety percent of participants reported daily Internet use. Approximately half of the sample met criteria for Internet abuse, and one-quarter met criteria for Internet dependence. Men and women did not differ on the mean amount of time accessing the Internet each day; however, the reasons for accessing the Internet differed between the 2 groups. Depression was correlated with more frequent use of the Internet to meet people, socially experiment, and participate in chat rooms, and with less frequent face-to-face socialization. In addition, individuals meeting criteria for Internet abuse and dependence endorsed more depressive symptoms, more time online, and less face-to-face socialization than did those not meeting the criteria. Conclusions: Mental health and student affairs professionals should be alert to the problems associated with Internet overuse, especially as computers become an integral par..."
]
}
|
1110.6267
|
1633278913
|
The use of the internet, and in particular web browsing, offers many potential advantages for educational institutions as students have access to a wide range of information previously not available. However, there are potential negative effects due to factors such as time-wasting and asocial behaviour. In this study, we conducted an empirical investigation of the academic performance and the web-usage pattern of 2153 undergraduate students. Data from university proxy logs allows us to examine usage patterns and we compared this data to the students' academic performance. The results show that there is a small but significant (both statistically and educationally) association between heavier web browsing and poorer academic results (lower average mark, higher failure rates). In addition, among good students, the proportion of students who are relatively light users of the internet is significantly greater than would be expected by chance.
|
* Academic performance: One study presents early findings on how internet use affects collegiate academic performance. This study focuses on students' dependence on the internet and attempts to quantify to what extent students are addicted to the internet. It was found that a significant percentage of students whose academic performance was bad indicated that the internet kept them up late at night, thereby making them tired for lectures the following day @cite_2 . Strong evidence was found to suggest that students' excessive use of the internet is associated with academic problems; however, it was unclear if these students would have had similar problems even without the internet being so readily available. These findings demonstrate the need for more research in this field, and specifically the need for hard data to be analysed and compared to the self-reported data available from other surveys.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2024372620"
],
"abstract": [
"Recent research at colleges and universities has suggested that some college students’ academic performance might be impaired by heavier use of the Internet. This study reviews the relevant literature and presents data from a survey of 572 students at a large public university. Heavier recreational Internet use was shown to be correlated highly with impaired academic performance. Loneliness, staying up late, tiredness, and missing class were also intercorrelated with self-reports of Internet-caused impairment. Self-reported Internet dependency and impaired academic performance were both associated with greater use of all Internet applications, but particularly with much greater use of synchronous communication applications such as chat rooms and MUDs, as opposed to asynchronous applications such as email and Usenet newsgroups."
]
}
|
1110.5794
|
1729261319
|
Anonymity networks hide user identities with the help of relayed anonymity routers. However, the state-of-the-art anonymity networks do not provide an effective trust model. As a result, users cannot circumvent malicious or vulnerable routers, thus making them susceptible to malicious router based attacks (e.g., correlation attacks). In this paper, we propose a novel social network based trust model to help anonymity networks circumvent malicious routers and obtain secure anonymity. In particular, we design an input independent fuzzy model to determine trust relationships between friends based on qualitative and quantitative social attributes, both of which can be readily obtained from existing social networks. Moreover, we design an algorithm for propagating trust over an anonymity network. We integrate these two elements in STor, a novel social network based Tor. We have implemented STor by modifying the Tor's source code and conducted experiments on PlanetLab to evaluate the effectiveness of STor. Both simulation and PlanetLab experiment results have demonstrated that STor can achieve secure anonymity by establishing trust-based circuits in a distributed way. Although the design of STor is based on Tor network, the social network based trust model can be adopted by other anonymity networks.
|
A pioneering security analysis of Onion Routing implicitly indicated the necessity of trust-based routing algorithms @cite_9 . Furthermore, recognizing the importance of trust, adversary models and routing algorithms for trust-based anonymous communication have been demonstrated @cite_20 @cite_1 . Unlike these studies, which focus on why trust is necessary for anonymous communication, STor is a practical solution for how to introduce trust into anonymous communication. Besides that, many studies @cite_38 @cite_59 @cite_12 have used peer-to-peer approaches for scalable anonymous communication. They mainly focus on the design of anonymous P2P lookup mechanisms in a scalable architecture. Unlike those, the social network based trust model introduces trust-based scalability to anonymity networks.
|
{
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_1",
"@cite_59",
"@cite_12",
"@cite_20"
],
"mid": [
"2081307968",
"",
"2164414844",
"1969314716",
"2161829417",
"2109187517"
],
"abstract": [
"We introduce Torsk, a structured peer-to-peer low-latency anonymity protocol. Torsk is designed as an interoperable replacement for the relay selection and directory service of the popular Tor anonymity network, that decreases the bandwidth cost of relay selection and maintenance from quadratic to quasilinear while introducing no new attacks on the anonymity provided by Tor, and no additional delay to connections made via Tor. The resulting bandwidth savings make a modest-sized Torsk network significantly cheaper to operate, and allows low-bandwidth clients to join the network. Unlike previous proposals for P2P anonymity schemes, Torsk does not require all users to relay traffic for others. Torsk utilizes a combination of two P2P lookup mechanisms with complementary strengths in order to avoid attacks on the confidentiality and integrity of lookups. We show by analysis that previously known attacks on P2P anonymity schemes do not apply to Torsk, and report on experiments conducted with a 336-node wide-area deployment of Torsk, demonstrating its efficiency and feasibility.",
"",
"We introduce a novel model of routing security that incorporates the ordinarily overlooked variations in trust that users have for different parts of the network. We focus on anonymous communication, and in particular onion routing, although we expect the approach to apply more broadly. This paper provides two main contributions. First, we present a novel model to consider the various security concerns for route selection in anonymity networks when users vary their trust over parts of the network. Second, to show the usefulness of our model, we present as an example a new algorithm to select paths in onion routing. We analyze its effectiveness against deanonymization and other information leaks, and particularly how it fares in our model versus existing algorithms, which do not consider trust. In contrast to those, we find that our trust-based routing strategy can protect anonymity against an adversary capable of attacking a significant fraction of the network.",
"Network information distribution is a fundamental service for any anonymization network. Even though anonymization and information distribution about the network are two orthogonal issues, the design of the distribution service has a direct impact on the anonymization. Requiring each node to know about all other nodes in the network (as in Tor and AN.ON -- the most popular anonymization networks) limits scalability and offers a playground for intersection attacks. The distributed designs existing so far fail to meet security requirements and have therefore not been accepted in real networks. In this paper, we combine probabilistic analysis and simulation to explore DHT-based approaches for distributing network information in anonymization networks. Based on our findings we introduce NISAN, a novel approach that tries to scalably overcome known security problems. It allows for selecting nodes uniformly at random from the full set of all available peers, while each of the nodes has only limited knowledge about the network. We show that our scheme has properties similar to a centralized directory in terms of preventing malicious nodes from biasing the path selection. This is done, however, without requiring to trust any third party. At the same time our approach provides high scalability and adequate performance. Additionally, we analyze different design choices and come up with diverse proposals depending on the attacker model. The proposed combination of security, scalability, and simplicity, to the best of our knowledge, is not available in any other existing network information distribution system.",
"The ability to locate random relays is a key challenge for peer-to-peer (P2P) anonymous communication systems. Earlier attempts like Salsa and AP3 used distributes hash table lookups to locate relays, but the lack of anonymity in their lookup mechanisms enables an adversary to infer the path structure and compromise used anonymity. NISAN and Torsk are state-of-the-art systems for P2P anonymous communication. Their designs include mechanisms that are specifically tailored to mitigate information leak attacks. NISAN proposes to add anonymity into the lookup mechanism itself, while Torsk proposes the use of secret buddy nodes to anonymize the lookup initiator. In this paper, we attack the key mechanisms that hide the relationship between a lookup initiator and its selected relays in NISAN and Torsk. We present passive attacks on the NISAN lookup and show that it is not as anonymous as previously thought. We analyze three circuit construction mechanisms for anonymous communication using the NISAN lookup, and show that the information leaks in the NISAN lookup lead to a significant reduction in user anonymity. We also propose active attacks on Torsk that defeat its secret buddy mechanism and consequently compromise user anonymity. Our results are backed up by probabilistic modeling and extensive simulations. Our study motivates the search for a DHT lookup mechanism that is both secure and anonymous.",
"We consider using trust information to improve the anonymity provided by onion-routing networks. In particular, we introduce a model of trust in network nodes and use it to design path-selection strategies that minimize the probability that the adversary can successfully control the entrance to and exit from the network. This minimizes the chance that the adversary can observe and correlate patterns in the data flowing over the path and thereby deanonymize the user. We first describe the general case in which onion routers can be assigned arbitrary levels of trust. Selecting a strategy can be formulated in a straightforward way as a linear program, but it is exponential in size. We thus analyze a natural simplification of path selection for this case. More importantly, however, when choosing routes in practice, only a very coarse assessment of trust in specific onion routers is likely to be feasible. Therefore, we focus next on the special case in which there are only two trust levels. For this more practical case we identify three optimal route-selection strategies such that at least one is optimal, depending on the trust levels of the two classes, their size, and the reach of the adversary. This can yield practical input into routing decisions. We set out the relevant parameters and choices for making such decisions."
]
}
|
1110.5794
|
1729261319
|
Anonymity networks hide user identities with the help of relayed anonymity routers. However, the state-of-the-art anonymity networks do not provide an effective trust model. As a result, users cannot circumvent malicious or vulnerable routers, thus making them susceptible to malicious router based attacks (e.g., correlation attacks). In this paper, we propose a novel social network based trust model to help anonymity networks circumvent malicious routers and obtain secure anonymity. In particular, we design an input independent fuzzy model to determine trust relationships between friends based on qualitative and quantitative social attributes, both of which can be readily obtained from existing social networks. Moreover, we design an algorithm for propagating trust over an anonymity network. We integrate these two elements in STor, a novel social network based Tor. We have implemented STor by modifying the Tor's source code and conducted experiments on PlanetLab to evaluate the effectiveness of STor. Both simulation and PlanetLab experiment results have demonstrated that STor can achieve secure anonymity by establishing trust-based circuits in a distributed way. Although the design of STor is based on Tor network, the social network based trust model can be adopted by other anonymity networks.
|
A number of fuzzy-model-based approaches have been proposed to calculate trust according to quantitative social properties and propagate it over semantic web social networks @cite_11 @cite_56 @cite_15 @cite_36 @cite_54 . However, these studies calculate trust using the traditional fuzzy model, thus losing the ability to convert qualitative social attributes. Moreover, a basic model for the propagation of trust and distrust over a trust graph is proposed by @cite_18 , and Friend-to-Friend networks (e.g., @cite_37 @cite_27 ) have been designed to use trust from real-world social networks for data sharing. STor, on the other hand, introduces trust and trust propagation to anonymity networks from real-world social networks.
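As a rough sketch of path-based trust propagation in the spirit of the models surveyed above (multiply trust along edges, maximize over simple paths, similar to TidalTrust-style schemes; this is not STor's specific algorithm, and the graph below is a toy):

def propagate_trust(graph, source, target, visited=None):
    # Best trust from source to target: multiply direct trust values
    # (in [0, 1]) along a path and take the maximum over simple paths.
    if visited is None:
        visited = {source}
    best = 0.0
    for neighbor, t in graph.get(source, {}).items():
        if neighbor == target:
            best = max(best, t)
        elif neighbor not in visited:
            sub = propagate_trust(graph, neighbor, target, visited | {neighbor})
            best = max(best, t * sub)
    return best

g = {"alice": {"bob": 0.9, "carol": 0.6}, "bob": {"dave": 0.8}, "carol": {"dave": 0.9}}
print(propagate_trust(g, "alice", "dave"))  # max(0.9*0.8, 0.6*0.9) = 0.72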
|
{
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_36",
"@cite_54",
"@cite_56",
"@cite_27",
"@cite_15",
"@cite_11"
],
"mid": [
"2144780381",
"2520602384",
"2166061604",
"1579695482",
"2054424221",
"2136347453",
"",
"2143450308"
],
"abstract": [
"A (directed) network of people connected by ratings or trust scores, and a model for propagating those trust scores, is a fundamental building block in many of today's most successful e-commerce and recommendation systems. We develop a framework of trust propagation schemes, each of which may be appropriate in certain circumstances, and evaluate the schemes on a large trust network consisting of 800K trust scores expressed among 130K people. We show that a small number of expressed trusts distrust per individual allows us to predict trust between any two people in the system with high accuracy. Our work appears to be the first to incorporate distrust in a computational trust propagation setting.",
"",
"Virtual marketplaces on the Web provide people with great facilities to buy and sell goods similar to conventional markets. In traditional business, reputation is subjectively built for known persons and companies as the deals are made in the course of time. As it is important to do business with trustful individuals and companies, there is a need to survive the reputation concept in virtual markets. Auction sites generally employ reputation systems based on feedbacks that provide a global view to a cyber dealer. In contrast to global trust, people usually infer their personal trust about someone whose reputation is completely or partially unknown by asking their trusted friends. Personal reputation is what makes a person trusted for some people and untrusted for others. There should be a facility for users in a virtual market to specify how much they trust a friend and also a mechanism that infers the trust of a user to another user who is not directly a friend of her. There are two main issues that should be addressed in trust inference. First, the trust modeling and aggregation problem needs to be challenged. Second, algorithms should be introduced to find and select the best paths among the existing trust paths from a source to a sink. First, as trust to a person can be stated more naturally using linguistic expressions, this work suggests employing linguistic terms for trust specification. To this end, corresponding fuzzy sets are defined for trust linguistic terms and a fuzzy trust aggregation method is also proposed. Comparing the fuzzy aggregation method to the existing aggregation methods shows superiority of fuzzy approach especially at aggregating contradictory information. Second, this paper proposes an incremental trust inference algorithm. The results show improvement in preciseness of inference for the proposed inference algorithm over the existing and recently proposed algorithm named TidalTrust.",
"Social networks let the people find and know other people and benefit form their information. Semantic Web standard ontologies support social network sites for making use of other social networks information and hence help their expansion and unification, making them a huge social network. As social networks are public virtual social places much information may exist in them that may not be trustworthy to all. A mechanism in needed to rate coming news, reviews and opinions about a definite subject from users, according to each user preference. There should be a feature for users to specify how much they trust a friend and a mechanism to infer the trust from one user to another that is not directly a friend of the user so that a recommender site can benefit from these trust ratings for showing trustworthy information to each user from her or his point of view from not only her or his directly trusted friends but also the other indirectly trusted users. This work suggests using fuzzy linguistic terms to specify trust to other users and proposes an algorithm for inferring trust from a person to another person that may be not directly connected in the trust graph of a social network. The algorithm is implemented and compared to an algorithm that let the users to specify their trust with a number in a definite range. While according to the imprecise nature of the trust concept writing and reading a linguistic expression for trust is much more natural than a number for users, the results show that the algorithm offers more precise information than the previously used algorithm especially when contradictory beliefs should be composed and also when a more precise inference is potentially possible in searching deeper paths. As the trust graphs and inference are viewed abstractly, they can be well employed in other multi agent systems.",
"People generate information or get it from the others. When one gets information from the others it is important to get it from trusted ones. Each individual in a society can get the information he needs form his trusted friends but there are also many other people in the society that he or she indirectly trusts and can benefit from their information. The idea of benefiting from the indirectly trusted people can well be employed in social networks where finding trusted people can be automated. There should be a feature for users to specify how much they trust a friend and a mechanism to infer the trust in the society trust graph from one user to another that is not directly a friend of the user so that a recommender site can benefit from these inferred trust ratings for showing trustworthy information to each user from her or his point of view from not only her or his directly trusted friends but also the other indirectly trusted users. A problem that is faced in inference in such a large network is contradictory information. This work suggests using fuzzy linguistic terms to specify trust to other users and proposes an algorithm for inferring trust from a person to another person that may be not directly connected in the trust graph of a social network. The algorithm is implemented and compared to the previous one that models trust as numbers in a range. While according to the imprecise nature of trust concept, writing and reading a linguistic expression for trust is much more natural than a number for users, the fuzzy composing strategy performs better than the previous algorithm in integrating conflicting beliefs and conveying the conflict in the inferred result.",
"Privacy -- the protection of information from unauthorized disclosure -- is increasingly scarce on the Internet. The lack of privacy is particularly true for popular peer-to-peer data sharing applications such as BitTorrent where user behavior is easily monitored by third parties. Anonymizing overlays such as Tor and Freenet can improve user privacy, but only at a cost of substantially reduced performance. Most users are caught in the middle, unwilling to sacrifice either privacy or performance. In this paper, we explore a new design point in this tradeoff between privacy and performance. We describe the design and implementation of a new P2P data sharing protocol, called OneSwarm, that provides users much better privacy than BitTorrent and much better performance than Tor or Freenet. A key aspect of the OneSwarm design is that users have explicit configurable control over the amount of trust they place in peers and in the sharing model for their data: the same data can be shared publicly, anonymously, or with access control, with both trusted and untrusted peers. OneSwarm's novel lookup and transfer techniques yield a median factor of 3.4 improvement in download times relative to Tor and a factor of 6.9 improvement relative to Freenet. OneSwarm is publicly available and has been downloaded by hundreds of thousands of users since its release.",
"",
"The use of previous direct interactions is probably the best way to calculate a reputation but, unfortunately this information is not always available. This is especially true in large multi-agent systems where interaction is scarce. In this paper we present a reputation system that takes advantage, among other things, of social relations between agents to overcome this problem."
]
}
|
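The STor abstract above combines two ingredients: a fuzzy model that maps social attributes to a trust score, and a rule for propagating trust across the social graph. The sketch below illustrates both in generic form; the attribute names, membership ranges, averaging rule, and max-product propagation are illustrative assumptions, not STor's actual design.

def triangular(x, a, b, c):
    # Triangular fuzzy membership function on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def direct_trust(interaction_years, common_friends):
    # Average the memberships of two illustrative social attributes in
    # a "trustworthy" fuzzy set; attributes and ranges are placeholders.
    m1 = triangular(interaction_years, 0, 5, 10)
    m2 = triangular(common_friends, 0, 20, 40)
    return (m1 + m2) / 2

def propagate(graph, source, target):
    # graph[u][v]: direct trust of u in v, in [0, 1]. Trust along a path
    # is the product of edge trusts; overall trust is the best path.
    best = {source: 1.0}
    stack = [source]
    while stack:
        u = stack.pop()
        for v, t in graph.get(u, {}).items():
            cand = best[u] * t
            if cand > best.get(v, 0.0):
                best[v] = cand
                stack.append(v)
    return best.get(target, 0.0)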
1110.5051
|
1778185879
|
In this paper, we describe our approach to the Wikipedia Participation Challenge which aims to predict the number of edits a Wikipedia editor will make in the next 5 months. The best submission from our team, "zeditor", achieved 41.7% improvement over WMF's baseline predictive model and the final rank of 3rd place among 96 teams. An interesting characteristic of our approach is that only temporal dynamics features (i.e., how the number of edits changes in recent periods, etc.) are used in a self-supervised learning framework, which makes it easy to generalise to other application domains.
|
The global slowdown of Wikipedia's growth rate (both in the number of editors and the number of edits per month) has been studied @cite_8 . It is found that medium-frequency editors now cover a lower percentage of the total population while high frequency editors continue to increase the number of their edits. Moreover, there are increased patterns of conflict and dominance (e.g., greater resistance to new edits, in particular those from occasional editors), which may be the consequence of the increasingly limited opportunities for making novel contributions. These findings could guide us to generate other kinds of useful features to tackle the problem of edit number prediction. Furthermore, researchers have also investigated other activities of Wikipedia's editors, such as voting on the promotion of Wikipedia admins @cite_9 . In addition to Wikipedia, the temporal dynamics of online users' behaviour has been explored and exploited in web search @cite_7 @cite_11 @cite_6 @cite_4 , social tagging @cite_3 @cite_5 , blogging @cite_15 , twittering @cite_13 , and collaborative filtering @cite_10 . Power-law distributions @cite_14 and temporal fading effects @cite_2 seem to be recurrent themes across application domains.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2000042664",
"2057034832",
"1509214810",
"1993500013",
"",
"2117134415",
"2108280221",
"2118866100",
"2117487426",
"2127246734",
"2035621475",
"2114510817",
"1997310773"
],
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.",
"Web search is strongly influenced by time. The queries people issue change over time, with some queries occasionally spiking in popularity (e.g., earthquake) and others remaining relatively constant (e.g., youtube). The documents indexed by the search engine also change, with some documents always being about a particular query (e.g., the Wikipedia page on earthquakes is about the query earthquake) and others being about the query only at a particular point in time (e.g., the New York Times is only about earthquakes following a major seismic activity). The relationship between documents and queries can also change as people's intent changes (e.g., people sought different content for the query earthquake before the Haitian earthquake than they did after). In this paper, we explore how queries, their associated documents, and the query intent change over the course of 10 weeks by analyzing query log data, a daily Web crawl, and periodic human relevance judgments. We identify several interesting features by which changes to query popularity can be classified, and show that presence of these features, when accompanied by changes in result content, can be a good indicator of change in query intent.",
"We address the problem of online term recurrence prediction: for a stream of terms, at each time point predict what term is going to recur next in the stream given the term occurrence history so far. It has many applications, for example, in Web search and social tagging. In this paper, we propose a time-sensitive language modelling approach to this problem that effectively combines term frequency and term recency information, and describe how this approach can be implemented efficiently by an online learning algorithm. Our experiments on a real-world Web query log dataset show significant improvements over standard language modelling.",
"Prior research on Wikipedia has characterized the growth in content and editors as being fundamentally exponential in nature, extrapolating current trends into the future. We show that recent editing activity suggests that Wikipedia growth has slowed, and perhaps plateaued, indicating that it may have come against its limits to growth. We measure growth, population shifts, and patterns of editor and administrator activities, contrasting these against past results where possible. Both the rate of page growth and editor growth has declined. As growth has declined, there are indicators of increased coordination and overhead costs, exclusion of newcomers, and resistance to new edits. We discuss some possible explanations for these new developments in Wikipedia including decreased opportunities for sharing existing knowledge and increased bureaucratic stress on the socio-technical system itself.",
"",
"Social media sites are often guided by a core group of committed users engaged in various forms of governance. A crucial aspect of this type of governance is deliberation, in which such a group reaches decisions on issues of importance to the site. Despite its crucial — though subtle — role in how a number of prominent social media sites function, there has been relatively little investigation of the deliberative aspects of social media governance. Here we explore this issue, investigating a particular deliberative process that is extensive, public, and recorded: the promotion of Wikipedia admins, which is determined by elections that engage committed members of the Wikipedia community. We find that the group decision-making at the heart of this process exhibits several fundamental forms of relative assessment. First we observe that the chance that a voter will support a candidate is strongly dependent on the relationship between characteristics of the voter and the candidate. Second we investigate how both individual voter decisions and overall election outcomes can be based on models that take into account the sequential, public nature of the voting.",
"Many web documents are dynamic, with content changing in varying amounts at varying frequencies. However, current document search algorithms have a static view of the document content, with only a single version of the document in the index at any point in time. In this paper, we present the first published analysis of using the temporal dynamics of document content to improve relevance ranking. We show that there is a strong relationship between the amount and frequency of content change and relevance. We develop a novel probabilistic document ranking algorithm that allows differential weighting of terms based on their temporal characteristics. By leveraging such content dynamics we show significant performance improvements for navigational queries.",
"How often do tags recur? How hard is predicting tag recurrence? What tags are likely to recur? We try to answer these questions by analysing the RSDC08 dataset, in both individual and collective settings. Our findings provide useful insights for the development of tag suggestion techniques etc.",
"The data stream problem has been studied extensively in recent years, because of the great ease in collection of stream data. The nature of stream data makes it essential to use algorithms which require only one pass over the data. Recently, single-scan, stream analysis methods have been proposed in this context. However, a lot of stream data is high-dimensional in nature. High-dimensional data is inherently more complex in clustering, classification, and similarity search. Recent research discusses methods for projected clustering over high-dimensional data sets. This method is however difficult to generalize to data streams because of the complexity of the method and the large volume of the data streams. In this paper, we propose a new, high-dimensional, projected data stream clustering method, called HPStream. The method incorporates a fading cluster structure, and the projection based clustering methodology. It is incrementally updatable and is highly scalable on both the number of dimensions and the size of the data streams, and it achieves better clustering quality in comparison with the previous stream clustering methods. Our performance study with both real and synthetic data sets demonstrates the efficiency and effectiveness of our proposed framework and implementation methods.",
"The debate within the Web community over the optimal means by which to organize information often pits formalized classifications against distributed collaborative tagging systems. A number of questions remain unanswered, however, regarding the nature of collaborative tagging systems including whether coherent categorization schemes can emerge from unsupervised tagging by users. This paper uses data from the social bookmarking site delicio. us to examine the dynamics of collaborative tagging systems. In particular, we examine whether the distribution of the frequency of use of tags for \"popular\" sites with a long history (many tags and many users) can be described by a power law distribution, often characteristic of what are considered complex systems. We produce a generative model of collaborative tagging in order to understand the basic dynamics behind tagging, including how a power law distribution of tags could arise. We empirically examine the tagging history of sites in order to determine how this distribution arises over time and to determine the patterns prior to a stable distribution. Lastly, by focusing on the high-frequency tags of a site where the distribution of tags is a stabilized power law, we show how tag co-occurrence networks for a sample domain of tags can be used to analyze the meaning of particular tags given their relationship to other tags.",
"This article addresses the problem of spam blog (splog) detection using temporal and structural regularity of content, post time and links. Splogs are undesirable blogs meant to attract search engine traffic, used solely for promoting affiliate sites. Blogs represent popular online media, and splogs not only degrade the quality of search engine results, but also waste network resources. The splog detection problem is made difficult due to the lack of stable content descriptors. We have developed a new technique for detecting splogs, based on the observation that a blog is a dynamic, growing sequence of entries (or posts) rather than a collection of individual pages. In our approach, splogs are recognized by their temporal characteristics and content. There are three key ideas in our splog detection framework. (a) We represent the blog temporal dynamics using self-similarity matrices defined on the histogram intersection similarity measure of the time, content, and link attributes of posts, to investigate the temporal changes of the post sequence. (b) We study the blog temporal characteristics using a visual representation derived from the self-similarity measures. The visual signature reveals correlation between attributes and posts, depending on the type of blogs (normal blogs and splogs). (c) We propose two types of novel temporal features to capture the splog temporal characteristics. In our splog detector, these novel features are combined with content based features. We extract a content based feature vector from blog home pages as well as from different parts of the blog. The dimensionality of the feature vector is reduced by Fisher linear discriminant analysis. We have tested an SVM-based splog detector using proposed features on real world datasets, with appreciable results (90p accuracy).",
"Social Web describes a new culture of participation on the Web where more and more people actively participate in publishing and organizing Web content. As part of this culture, people leave a variety of traces when interacting with (other people via) Social Web systems. In this paper, we investigate user modeling strategies for inferring personal interest profiles from Social Web interactions. In particular, we analyze individual micro-blogging activities on Twitter. We compare different strategies for creating user profiles based on the Twitter messages a user has published and study how these profiles change over time. Moreover, we evaluate the quality of the user modeling strategies in the context of personalized recommender systems and show that those strategies which consider the temporal dynamics of the individual profiles allow for the best performance.",
"We study the recurrence dynamics of queries in Web search by analysing a large real-world query log dataset. We find that query frequency is more useful in predicting collective query recurrence whereas query recency is more useful in predicting individual query recurrence. Our findings provide valuable insights for understanding and improving Web search."
]
}
|
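The approach described above uses only temporal dynamics features inside a self-supervised framework. A minimal sketch of what that could look like, assuming per-period edit-count histories; the window sizes, log transform, and regression target are illustrative choices rather than the "zeditor" team's actual features:

import numpy as np

def temporal_features(history):
    # history: per-period edit counts, most recent last. Window sizes
    # and the log transform are illustrative choices.
    h = np.asarray(history, dtype=float)
    recent = [h[-w:].sum() for w in (1, 2, 4, 8)]
    trend = max(h[-1] - h[-2], 0.0) if len(h) >= 2 else 0.0
    return np.log1p(np.array(recent + [trend]))

def make_training_set(histories, horizon=5):
    # Self-supervised labelling: hold out the last `horizon` periods of
    # each history and use their total as the regression target.
    X, y = [], []
    for h in histories:
        past, future = h[:-horizon], h[-horizon:]
        if len(past) >= 8:
            X.append(temporal_features(past))
            y.append(np.log1p(sum(future)))
    return np.array(X), np.array(y)

# A plain least-squares fit then gives a baseline predictor:
# w, *_ = np.linalg.lstsq(X, y, rcond=None)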
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
The problem we consider, of maximizing degrees-of-freedom using linear strategies for the @math -user MIMO IC with a finite number of transmit and receive antennas, has received significant attention in the last several years. Jafar and Fakhereddin @cite_6 determined the degrees of freedom of the two-user MIMO IC with an arbitrary number of antennas at each of the four terminals. Cadambe and Jafar @cite_18 considered the problem for @math users and @math antennas, and showed that @math DoF was achievable. For more than @math users or @math they assumed infinite time or frequency diversity and applied their main @math result. @cite_21 @cite_12 posed the problem of determining feasibility of linear alignment in the constant channel setting, but left it unanswered and proposed a heuristic iterative numerical algorithm.
|
{
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_12",
"@cite_6"
],
"mid": [
"1979408141",
"2167357515",
"2098363461",
"2162370583"
],
"abstract": [
"For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=K 2log(SNR)+o(log(SNR)) . Thus, the K user time-varying interference channel almost surely has K 2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also provided of fully connected K user interference channels with constant (not time-varying) coefficients where the capacity is exactly achieved by interference alignment at all SNR values.",
"Recent results establish the optimality of interference alignment to approach the Shannon capacity of interference networks at high SNR. However, the extent to which interference can be aligned over a finite number of signalling dimensions remains unknown. Another important concern for interference alignment schemes is the requirement of global channel knowledge. In this work we provide examples of iterative algorithms that utilize the reciprocity of wireless networks to achieve interference alignment with only local channel knowledge at each node. These algorithms also provide numerical insights into the feasibility of interference alignment that are not yet available in theory.",
"Recent results establish the optimality of interference alignment to approach the Shannon capacity of interference networks at high SNR. However, the extent to which interference can be aligned over a finite number of signalling dimensions remains unknown. Another important concern for interference alignment schemes is the requirement of global channel knowledge. In this work, we provide examples of iterative algorithms that utilize the reciprocity of wireless networks to achieve interference alignment with only local channel knowledge at each node. These algorithms also provide numerical insights into the feasibility of interference alignment that are not yet available in theory.",
"In this correspondence, we show that the exact number of spatial degrees of freedom (DOF) for a two user nondegenerate (full rank channel matrices) multiple-input-multiple-output (MIMO) Gaussian interference channel with M1, M2 antennas at transmitters 1, 2 and N1, N2 antennas at the corresponding receivers, and perfect channel knowledge at all transmitters and receivers, is min M1 + M2, N1 + M2, max(M1, N2), max(M2, N1) . A constructive achievability proof shows that zero forcing is sufficient to achieve all the available DOF on the two user MIMO interference channel. We also show through an example of a share-and-transmit scheme how the gains of transmitter cooperation may be entirely offset by the cost of enabling that cooperation so that the available DOF are not increased."
]
}
|
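The two-user result of Jafar and Fakhereddin quoted above is a closed-form expression, so it translates directly into code:

def dof_two_user_mimo_ic(M1, N1, M2, N2):
    # min{M1 + M2, N1 + N2, max(M1, N2), max(M2, N1)}, achievable by
    # zero forcing according to the abstract quoted above.
    return min(M1 + M2, N1 + N2, max(M1, N2), max(M2, N1))

print(dof_two_user_mimo_ic(2, 2, 2, 2))  # -> 2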
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
Also at the heuristic level, @cite_10 proposed to determine feasibility of alignment by counting the number of equations and comparing to the number of variables. This approach was carried out rigorously to show a necessary and sufficient condition in @cite_0 for the symmetric square case of @math antennas at all @math transmitters and receivers, and by @cite_20 for the case where the number of transmit dimensions @math divides both the number of transmit and receive antennas. Several other works have subsequently pursued a similar approach for related problems, including @cite_11 , @cite_3 (both heuristic), and @cite_19 .
|
{
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2111785671",
"1525037223",
"2011127971",
"2168095411",
"1971328044",
"2154415981"
],
"abstract": [
"We consider interference alignment in the partially connected K-user MIMO interference channel (IC). Conversely to the fully-connected case, we show that interference alignment can be achievable for an arbitrary number of users K in the network, while the per-user signaling dimension remains fixed, provided that the number of interference links per user is bounded. For this class of channels, which we denote by L-interfering K-user MIMO IC, we provide a criterion applicable to symmetric systems for the system of IA equations to be proper, according to the framework introduced earlier by Properness is a necessary condition for IA to be feasible. Interestingly, this criterion is independent from the number of users K. Furthermore, we propose an iterative algorithm to solve the alignment problem for this class of channels.",
"Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N>= d(K+1) 2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.",
"We study the degrees of freedom (DoF) of the @math -user interference channel with coordinated multipoint (CoMP) transmission and reception. Each message is jointly transmitted by @math successive transmitters, and is jointly received by @math successive receivers. We refer to this channel as the CoMP channel with a transmit cooperation order of @math and receive cooperation order of @math . Since the channel has a total of @math transmit antennas and @math receive antennas, the maximum possible DoF is equal to @math . We show that the CoMP channel has @math DoF if and only if @math . The key idea is that the zero forcing of the interference corresponding to the @math message at the decoder of the @math message, where @math , can be viewed as a shared responsibility between the @math transmitters carrying the @math message, and the @math receivers decoding the @math message. For the general case, we derive an outer bound that states that the DoF is bounded above by @math . For the special case with only CoMP transmission, i.e, @math , we propose a scheme that can achieve @math DoF for all @math K @math M_ t @math M_ t -1$ receivers, thereby allowing each of these receivers to enjoy 1 DoF, and asymptotic interference alignment is used to align the interfering signals at each other receiver to occupy half the signal space. The achievability proofs are based on the notion of algebraic independence from algebraic geometry.",
"We explore the feasibility of interference alignment in signal vector space-based only on beamforming-for K-user MIMO interference channels. Our main contribution is to relate the feasibility issue to the problem of determining the solvability of a multivariate polynomial system which is considered extensively in algebraic geometry. It is well known, e.g., from Bezout's theorem, that generic polynomial systems are solvable if and only if the number of equations does not exceed the number of variables. Following this intuition, we classify signal space interference alignment problems as either proper or improper based on the number of equations and variables. Rigorous connections between feasible and proper systems are made through Bernshtein's theorem for the case where each transmitter uses only one beamforming vector. The multibeam case introduces dependencies among the coefficients of a polynomial system so that the system is no longer generic in the sense required by both theorems. In this case, we show that the connection between feasible and proper systems can be further strengthened (since the equivalency between feasible and proper systems does not always hold) by including standard information theoretic outer bounds in the feasibility analysis.",
"Consider a K-user flat fading MIMO interference channel where the kth transmitter (or receiver) is equipped with Mk (respectively Nk) antennas. If an exponential (in K) number of generic channel extensions are used either across time or frequency, Cadambe and Jafar [1] showed that the total achievable degrees of freedom (DoF) can be maximized via interference alignment, resulting in a total DoF that grows linearly with A even if Mk and Nk are bounded. In this work we consider the case where no channel extension is allowed, and establish a general condition that must be satisfied by any degrees of freedom tuple (d1. d2....dK) achievable through linear interference alignment. For a symmetric system with Mk = M, Nk = N, dk = d for all k, this condition implies that the total achievable DoF cannot grow linearly with K, and is in fact no more than K(M + N) (K + 1). We also show that this bound is tight when the number of antennas at each transceiver is divisible by d, the number of data streams per user.",
"We explore the feasibility of linear interference alignment (IA) in MIMO cellular networks. Each base station (BTS) has N t transmit antennas, each mobile has N r receive antennas, and a BTS transmits a single beam to each active user. We present a necessary Zero-Forcing (ZF) condition for zero interference in terms of the number of users, the number of cells, N t and N r . We then examine the performance of iterative (forward-backward) algorithms for jointly optimizing the transmit precoders with linear receivers. Modifications of the max-SINR and minimum leakage algorithms are presented, which are observed to converge to a ZF solution whenever the necessary conditions are satisfied. In contrast, convergence of the (original) max-SINR algorithm is problematic when the necessary conditions are satisfied with (near) equality. A more restrictive ZF condition is presented, which predicts when these convergence problems are unlikely to occur."
]
}
|
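The equation-counting heuristic discussed above compares the number of free variables in the precoders and receive filters against the number of alignment constraints. A sketch for the symmetric (M x N, d)^K system, using the standard counts (d(M-d) variables per precoder, d(N-d) per receive filter, K(K-1)d^2 alignment equations); as the abstract stresses, this "properness" test is necessary but not sufficient for feasibility:

def is_proper(M, N, d, K=3):
    # Free variables: d(M - d) per precoder and d(N - d) per receive
    # filter; alignment constraints: K(K - 1)d^2 in total.
    variables = K * (d * (M - d) + d * (N - d))
    equations = K * (K - 1) * d * d
    return variables >= equations

# For the symmetric system this reduces to d <= (M + N) / (K + 1),
# i.e. the K(M + N)/(K + 1) total-DoF bound quoted above; the paper's
# point is that proper does not imply feasible.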
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
The remaining work on linear alignment for the MIMO IC has focused on heuristic algorithms, mainly iterative in nature (see @cite_12 , @cite_17 , @cite_7 , @cite_15 , and @cite_1 ). Some have proofs of convergence, but no performance guarantees are known.
|
{
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"2028676993",
"2164275514",
"2151381597",
"2098363461",
"2139960897"
],
"abstract": [
"Consider a MIMO interference channel whereby each transmitter and receiver are equipped with multiple antennas. The basic problem is to design optimal linear transceivers (or beamformers) that can maximize system throughput. The recent work [13] suggests that optimal beamformers should maximize the total degrees of freedom and achieve interference alignment in high SNR. In this paper we first consider the interference alignment problem in spatial domain and prove that the problem of maximizing the total degrees of freedom for a given MIMO interference channel is NP-hard. Furthermore, we show that even checking the achievability of a given tuple of degrees of freedom for all receivers is NP-hard when each receiver is equipped with at least three antennas. Moreover, in case where each transmitter and receiver use at most two antennas, the same problem is polynomial time solvable. Finally, we propose a distributed algorithm for transmit covariance matrix design, while assuming each receiver uses a linear MMSE beamformer. The simulation results show that the proposed algorithm outperforms the existing interference alignment algorithms in terms of system throughput.",
"Alternating minimization algorithms are typically used to find interference alignment (IA) solutions for multiple- input multiple-output (MIMO) interference channels with more than K =3 users. For these scenarios many IA solutions exit, and the initial point determines which one is obtained upon convergence. In this paper, we propose a new iterative algorithm that aims at finding the IA solution that maximizes the average sum-rate. At each step of the alternating minimization algorithm, either the precoders or the decoders are moved along the direction given by the gradient of the sum-rate. Since IA solutions are defined by a set of subspaces, the gradient optimization is performed on the Grassmann manifold. The step size of the gradient ascent algorithm is annealed to zero over the iterations in such a way that during the last iterations only the interference leakage is being minimized and a perfect alignment solution is finally reached. Simulation examples are provided showing that the proposed algorithm obtains IA solutions with significant higher throughputs than the conventional IA algorithms.",
"We show that the maximization of the sum degrees-of-freedom for the static flat-fading multiple-input multiple-output (MIMO) interference channel is equivalent to a rank constrained rank minimization problem (RCRM), when the signal spaces span all available dimensions. The rank minimization corresponds to maximizing interference alignment (IA) so that interference spans the lowest dimensional subspace possible. The rank constraints account for the useful signal spaces spanning all available spatial dimensions. That way, we reformulate all IA requirements to requirements involving ranks. Then, we present a convex relaxation of the RCRM problem inspired by recent results in compressed sensing and low-rank matrix completion theory that rely on approximating rank with the nuclear norm. We show that the convex envelope of the sum of ranks of the interference matrices is the normalized sum of their corresponding nuclear norms and introduce tractable constraints that are asymptotically equivalent to the rank constraints for the initial problem. We also show that our heuristic relaxation can be tuned for the multi-cell interference channel. Furthermore, we experimentally show that in many cases the proposed algorithm attains perfect interference alignment and in some cases outperforms previous approaches for finding precoding and zero-forcing matrices for interference alignment.",
"Recent results establish the optimality of interference alignment to approach the Shannon capacity of interference networks at high SNR. However, the extent to which interference can be aligned over a finite number of signalling dimensions remains unknown. Another important concern for interference alignment schemes is the requirement of global channel knowledge. In this work, we provide examples of iterative algorithms that utilize the reciprocity of wireless networks to achieve interference alignment with only local channel knowledge at each node. These algorithms also provide numerical insights into the feasibility of interference alignment that are not yet available in theory.",
"Using interference alignment, it has been shown that the number of degrees of freedom in the interference channel scales linearly with the number of users. Unfortunately, closed-form solutions for interference alignment over constant-coefficient channels with more than 3 users are difficult to derive. This paper proposes an algorithm for interference alignment in the MIMO interference channel with an arbitrary number of users, antennas, or spatial streams. The algorithm is an alternating minimization over the precoding matrices at the transmitters and the interference subspaces at the receivers, and is proven to converge. Numerical results show how the algorithm is useful for simulation and can give insight into the limitations of interference alignment."
]
}
|
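Several of the cited papers (e.g., the alternating-minimization and distributed algorithms) iterate between updating receive subspaces and precoders to shrink interference leakage. The following numpy sketch captures the common structure of such leakage-minimization iterations for real-valued channels; it is a generic illustration, not the exact algorithm of any cited paper.

import numpy as np

def min_leakage_ia(H, d, iters=100, seed=0):
    # H[k][j]: N x M channel matrix from transmitter j to receiver k.
    rng = np.random.default_rng(seed)
    K = len(H)
    M = H[0][0].shape[1]
    V = [np.linalg.qr(rng.standard_normal((M, d)))[0] for _ in range(K)]
    U = [None] * K
    for _ in range(iters):
        # Receiver side: keep the d least-interfered directions
        # (eigenvectors of the interference covariance with the
        # smallest eigenvalues).
        for k in range(K):
            Q = sum(H[k][j] @ V[j] @ V[j].T @ H[k][j].T
                    for j in range(K) if j != k)
            _, vecs = np.linalg.eigh(Q)
            U[k] = vecs[:, :d]
        # Reciprocal step: update precoders symmetrically.
        for k in range(K):
            Q = sum(H[j][k].T @ U[j] @ U[j].T @ H[j][k]
                    for j in range(K) if j != k)
            _, vecs = np.linalg.eigh(Q)
            V[k] = vecs[:, :d]
    return U, V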
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
We emphasize that in this paper we restrict attention to vector space interference alignment, where the effect of finite channel diversity can be observed. Interfering signals can also be aligned on the signal scale using lattice codes (first proposed in @cite_23 , see also @cite_14 , @cite_4 , @cite_13 ); however, the understanding of this type of alignment is currently at the stage corresponding to infinite time or frequency diversity in the vector space setting. In other words, essentially "perfect" alignment is possible due to the infinite channel precision available at infinite signal-to-noise ratios.
|
{
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_4",
"@cite_23"
],
"mid": [
"2130172876",
"2168957483",
"",
"2151027523"
],
"abstract": [
"In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is (K 2) for almost all channel parameters. We also prove that the sum DoF of the X-channel with K transmitters and M receivers is (K M K + M - 1) for almost all channel parameters.",
"An interference alignment example is constructed for the deterministic channel model of the K-user interference channel. The deterministic channel example is then translated into the Gaussian setting, creating the first known example of a fully connected Gaussian K-user interference network with single antenna nodes, real, nonzero and constant channel coefficients, and no propagation delays where the degrees of freedom outerbound is achieved. An analogy is drawn between the propagation delay based interference alignment examples and the deterministic channel model which also allows similar constructions for the two-user X channel as well.",
"",
"Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit s Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level."
]
}
|
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
@cite_16 apply alignment on the signal scale to the @math -user @math MIMO IC. The converse arguments in that paper are obtained by forming a two-user interference channel with two users transmitting and decoding jointly; thus they obtain the inequality @math corresponding to @math in the present paper.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2170023479"
],
"abstract": [
"We consider the K user Multiple Input Multiple Output (MIMO) Gaussian interference channel with M antennas at each transmitter and N antennas at each receiver. It is assumed that channel coefficients are fixed and are available at all transmitters and at all receivers. The main objective of this paper is to characterize the total Degrees Of Freedom (DOF) for this channel. Using a new interference alignment technique which has been recently introduced in [1], we show that MN over M+N K degrees of freedom can be achieved for almost all channel realizations. Also, a new upper-bound on the total DOF for this channel is derived. This upper-bound coincides with our achievable DOF for K ≥ K u ≜ M+N over gcd(M,N) where gcd(M,N) denotes the greatest common divisor of M and N. This gives an exact characterization of DOF for MIMO Gaussian interference channel in the case of K ≥ K u ."
]
}
|
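The quoted abstract gives both the achievable DoF and the threshold K_u in closed form, which the following sketch computes directly:

from math import gcd

def per_user_dof(M, N):
    # MN/(M+N) DoF per user, per the abstract quoted above.
    return M * N / (M + N)

def tightness_threshold(M, N):
    # K_u = (M + N) / gcd(M, N): for K >= K_u the achievable total DoF
    # K * MN/(M+N) matches the derived upper bound.
    return (M + N) // gcd(M, N)

print(per_user_dof(2, 3) * 5, tightness_threshold(2, 3))  # 6.0 5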
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
In exactly the same setting as the present paper, @cite_8 have independently proposed a similar achievable strategy for critical @math satisfying both the counting condition and @math . @cite_8 is limited to critical values of @math and contains no converse arguments beyond the equation-counting bound of @cite_0 and @cite_20 .
|
{
"cite_N": [
"@cite_0",
"@cite_20",
"@cite_8"
],
"mid": [
"1525037223",
"1971328044",
"2949418501"
],
"abstract": [
"Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N>= d(K+1) 2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.",
"Consider a K-user flat fading MIMO interference channel where the kth transmitter (or receiver) is equipped with Mk (respectively Nk) antennas. If an exponential (in K) number of generic channel extensions are used either across time or frequency, Cadambe and Jafar [1] showed that the total achievable degrees of freedom (DoF) can be maximized via interference alignment, resulting in a total DoF that grows linearly with A even if Mk and Nk are bounded. In this work we consider the case where no channel extension is allowed, and establish a general condition that must be satisfied by any degrees of freedom tuple (d1. d2....dK) achievable through linear interference alignment. For a symmetric system with Mk = M, Nk = N, dk = d for all k, this condition implies that the total achievable DoF cannot grow linearly with K, and is in fact no more than K(M + N) (K + 1). We also show that this bound is tight when the number of antennas at each transceiver is divisible by d, the number of data streams per user.",
"In this paper, the 3-user multiple-input multiple-output Gaussian interference channel with M antennas at each transmitter and N antennas at each receiver is considered. It is assumed that the channel coefficients are constant and known to all transmitters and receivers. A novel scheme is presented that spans a new achievable degrees of freedom region. For some values of M and N, the proposed scheme achieve higher number of DoF than are currently achievable, while for other values it meets the best known upperbound. Simulation results are presented showing the superior performance of the proposed schemes to earlier approaches."
]
}
|
1110.5092
|
2069544271
|
This paper studies vector space interference alignment for the three-user MIMO interference channel with no time or frequency diversity. The main result is a characterization of the feasibility of interference alignment in the symmetric case where all transmitters have M antennas and all receivers have N antennas. If N >= M and all users desire d transmit dimensions, then alignment is feasible if and only if (2r+1)d = N. It turns out that, just as for the 3-user parallel interference channel [BT09], the length of alignment paths captures the essence of the problem. In fact, for each feasible value of M and N the maximum alignment path length dictates both the converse and achievability arguments. One of the implications of our feasibility criterion is that simply counting equations and comparing to the number of variables does not predict feasibility. Instead, a more careful investigation of the geometry of the alignment problem is required. The necessary condition obtained by counting equations is implied by our new feasibility criterion.
|
Finally, also independently, @cite_2 very recently posted a paper to the arXiv containing many similar results. Their converse is information-theoretic and, unlike ours, is not limited to linear strategies.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2953367734"
],
"abstract": [
"We show that the 3 user M_T x M_R MIMO interference channel has d(M,N)=min(M (2-1 k),N (2+1 k)) degrees of freedom (DoF) normalized by time, frequency, and space dimensions, where M=min(M_T,M_R), N=max(M_T,M_R), k=ceil M (N-M) . While the DoF outer bound is established for every M_T, M_R value, the achievability is established in general subject to normalization with respect to spatial-extensions. Given spatial-extensions, the achievability relies only on linear beamforming based interference alignment schemes with no need for time frequency extensions. In the absence of spatial extensions, we show through examples how essentially the same scheme may be applied over time frequency extensions. The central new insight to emerge from this work is the notion of subspace alignment chains as DoF bottlenecks. The DoF value d(M,N) is a piecewise linear function of M,N, with either M or N being the bottleneck within each linear segment. The corner points of these piecewise linear segments correspond to A= 1 2,2 3,3 4,... and B= 1 3,3 5,5 7,... . The set A contains all values of M N and only those for which there is redundancy in both M and N. The set B contains all values of M N and only those for which there is no redundancy in either M or N. Our results settle the feasibility of linear interference alignment, introduced by , for the 3 user M_T x M_R MIMO interference channel, completely for all values of M_T, M_R. Specifically, the linear interference alignment problem (M_T x M_R, d)^3 (as defined in previous work by ) is feasible if and only if d<=floor d(M,N) . With and only with the exception of the values M N B, we show that for every M N value there are proper systems that are not feasible. Our results show that M N A are the only values for which there is no DoF benefit of joint processing among co-located antennas at the transmitters or receivers."
]
}
|
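Taking the (reconstructed) formula in the abstract above at face value, the per-user DoF value d(M,N) and the associated linear-alignment feasibility test can be computed as follows; treat this as a sketch of the stated result, with the M = N case handled as the k -> infinity limit:

from math import ceil, floor

def d_per_user(M_T, M_R):
    # d(M,N) = min(M/(2 - 1/k), N/(2 + 1/k)), k = ceil(M/(N - M)),
    # with M = min(M_T, M_R), N = max(M_T, M_R); M = N is treated as
    # the k -> infinity limit, giving N/2.
    M, N = min(M_T, M_R), max(M_T, M_R)
    if M == N:
        return N / 2
    k = ceil(M / (N - M))
    return min(M / (2 - 1 / k), N / (2 + 1 / k))

def linear_ia_feasible(M_T, M_R, d):
    # Per the abstract: (M_T x M_R, d)^3 is feasible iff d <= floor(d(M,N)).
    return d <= floor(d_per_user(M_T, M_R))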
1110.5395
|
1998419734
|
Tor is one of the more popular systems for anonymizing near-real-time communications on the Internet. [2007] proposed a denial-of-service-based attack on Tor (and related systems) that significantly increases the probability of compromising the anonymity provided. In this article, we analyze the effectiveness of the attack using both an analytic model and simulation. We also describe two algorithms for detecting such attacks, one deterministic and proved correct, the other probabilistic and verified in simulation.
|
The MorphMix system @cite_4 , like Tor, is a peer-to-peer system for low-latency anonymous communication on the Internet. The system's design includes a collusion detection mechanism. Later, [morphmix:pet2006] showed that local knowledge of the network does not suffice to detect colluding adversaries.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2078813897"
],
"abstract": [
"Traditional mix-based systems are composed of a small set of static, well known, and highly reliable mixes. To resist traffic analysis attacks at a mix, cover traffic must be used, which results in significant bandwidth overhead. End-to-end traffic analysis attacks are even more difficult to counter because there are only a few entry-and exit-points in the system. Static mix networks also suffer from scalability problems and in several countries, institutions operating a mix could be targeted by legal attacks. In this paper, we introduce MorphMix, a system for peer-to-peer based anonymous Internet usage. Each MorphMix node is a mix and anyone can easily join the system. We believe that MorphMix overcomes or reduces several drawbacks of static mix networks. In particular, we argue that our approach offers good protection from traffic analysis attacks without employing cover traffic. But MorphMix also introduces new challenges. One is that an adversary can easily operate several malicious nodes in the system and try to break the anonymity of legitimate users by getting full control over their anonymous paths. To counter this attack, we have developed a collusion detection mechanism, which allows to identify compromised paths with high probability before they are being used."
]
}
|
1110.5454
|
2951057303
|
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial. Further, the original IBP is a special case of the dd-IBP. In this paper, we develop the dd-IBP and theoretically characterize its feature-sharing properties. We derive a Markov chain Monte Carlo sampler for a linear Gaussian model with a dd-IBP prior and study its performance on several non-exchangeable data sets.
|
Conditional on a draw from the beta process, the feature representation @math of data point @math is generated by drawing from the Bernoulli process (BeP) with base measure @math : @math . If @math is discrete, then @math , where @math . In other words, feature @math is activated with probability @math independently for all data points. Sampling @math from the compound beta-Bernoulli process is equivalent to sampling @math directly from the IBP when @math and @math @cite_14 .
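To make the compound beta-Bernoulli construction concrete, here is a minimal sketch (our own illustration, not code from @cite_14; the truncation level K and all constants are arbitrary) of the standard finite approximation: each of K candidate features gets an activation probability drawn from Beta(alpha/K, 1), and every data point then activates each feature independently.

```python
# Minimal sketch (our own): finite beta-Bernoulli approximation to the IBP.
# q_k ~ Beta(alpha/K, 1), z_{nk} ~ Bernoulli(q_k); as K -> infinity the
# number of active features and the sharing pattern approach those of the
# IBP with concentration alpha.
import numpy as np

def beta_bernoulli_features(n_points, alpha, K, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.beta(alpha / K, 1.0, size=K)   # per-feature activation probs
    Z = rng.random((n_points, K)) < q      # one row of features per point
    return Z.astype(int)

Z = beta_bernoulli_features(n_points=10, alpha=2.0, K=1000)
print("features used by at least one point:", Z.any(axis=0).sum())
```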
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1517266559"
],
"abstract": [
"We show that the beta process is the de Finetti mixing distribution underlying the Indian buffet process of [2]. This result shows that the beta process plays the role for the Indian buffet process that the Dirichlet process plays for the Chinese restaurant process, a parallel that guides us in deriving analogs for the beta process of the many known extensions of the Dirichlet process. In particular we define Bayesian hierarchies of beta processes and use the connection to the beta process to develop posterior inference algorithms for the Indian buffet process. We also present an application to document classification, exploring a relationship between the hierarchical beta process and smoothed naive Bayes models."
]
}
|
1110.5454
|
2951057303
|
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial. Further, the original IBP is a special case of the dd-IBP. In this paper, we develop the dd-IBP and theoretically characterize its feature-sharing properties. We derive a Markov chain Monte Carlo sampler for a linear Gaussian model with a dd-IBP prior and study its performance on several non-exchangeable data sets.
|
Miller, Griffiths and Jordan @cite_12 proposed a "phylogenetic IBP" that encodes tree-structured dependencies between data. Doshi-Velez and Ghahramani @cite_13 proposed a "correlated IBP" that couples data points and features through a set of latent clusters. Both of these models relax exchangeability, but they do not allow dependencies to be specified directly in terms of distances between data. Furthermore, inference for these models requires more intensive computation than does the standard IBP. The MCMC algorithm presented by @cite_12 for the phylogenetic IBP involves both dynamic programming and auxiliary variable sampling. Similarly, the MCMC algorithm for the correlated IBP involves sampling latent clusters in addition to latent features. Our model also incurs extra computational cost relative to the traditional IBP due to the computation of reachability (quadratic in the number of observations); however, it permits a richer specification of the dependency structure between observations than either the phylogenetic or the correlated IBP.
|
{
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"1491018170",
"2132521624"
],
"abstract": [
"We are often interested in explaining data through a set of hidden factors or features. When the number of hidden features is unknown, the Indian Buffet Process (IBP) is a nonparametric latent feature model that does not bound the number of active features in dataset. However, the IBP assumes that all latent features are uncorrelated, making it inadequate for many realworld problems. We introduce a framework for correlated non-parametric feature models, generalising the IBP. We use this framework to generate several specific models and demonstrate applications on realworld datasets.",
"Nonparametric Bayesian models are often based on the assumption that the objects being modeled are exchangeable. While appropriate in some applications (e.g., bag-of-words models for documents), exchangeability is sometimes assumed simply for computational reasons; non-exchangeable models might be a better choice for applications based on subject matter. Drawing on ideas from graphical models and phylogenetics, we describe a non-exchangeable prior for a class of nonparametric latent feature models that is nearly as efficient computationally as its exchangeable counterpart. Our model is applicable to the general setting in which the dependencies between objects can be expressed using a tree, where edge lengths indicate the strength of relationships. We demonstrate an application to modeling probabilistic choice."
]
}
|
1110.5454
|
2951057303
|
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial. Further, the original IBP is a special case of the dd-IBP. In this paper, we develop the dd-IBP and theoretically characterize its feature-sharing properties. We derive a Markov chain Monte Carlo sampler for a linear Gaussian model with a dd-IBP prior and study its performance on several non-exchangeable data sets.
|
Recently, @cite_28 presented a novel way of introducing dependency into latent feature models based on the beta process. Instead of defining distances between customers, each dish is associated with a latent covariate vector, and distances are defined between each customer's (observed) covariates and the dish-specific covariates. Customers then choose dishes with probability proportional to the customer-dish proximity. This construction comes with a significant computational advantage for data sets where the time complexity is tied predominantly to the number of observations. The downside of this construction is that the MCMC algorithm used for inference must sample a separate covariate vector for each dish, which may scale poorly if the covariate dimensionality is high.
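A minimal sketch of this construction follows (our own illustration; the squared-exponential kernel, the one-dimensional covariates, and every constant are assumptions made for the example, not details fixed by @cite_28): each customer first "considers" a dish with probability given by the kernel between the customer's observed covariate and the dish's latent covariate, and only then applies the usual beta-weighted coin flip.

```python
# Minimal sketch (our own) of covariate-dependent feature selection in the
# spirit of the kernel beta process: a distance-based "consider" step
# followed by the usual Bernoulli selection with beta weights.
import numpy as np

rng = np.random.default_rng(1)
n, K, length_scale = 8, 20, 0.2
x = rng.uniform(0, 1, size=n)          # observed customer covariates
x_star = rng.uniform(0, 1, size=K)     # latent dish covariates
pi = rng.beta(0.5, 1.0, size=K)        # beta-process-style dish weights

# squared-exponential kernel between customers and dishes (an assumption)
kern = np.exp(-((x[:, None] - x_star[None, :]) ** 2) / (2 * length_scale**2))
considered = rng.random((n, K)) < kern         # consider nearby dishes
Z = considered & (rng.random((n, K)) < pi)     # then select with weight pi
print(Z.astype(int))
```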
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2134032957"
],
"abstract": [
"A new Levy process prior is proposed for an uncountable collection of covariate-dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample (\"customer\"), and latent covariates learned for each feature (\"dish\"). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to \"consider\" a dish, based on the distance in covariate space between the customer and dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks."
]
}
|
1110.4697
|
1966102514
|
We consider a switched (queuing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal queue-size scaling for a class of switched networks including input-queued switches. In particular, it establishes the validity of a conjecture (documented in Shah, Tsitsiklis and Zhong [Queueing Syst. 68 (2011) 375-384]) about optimal queue-size scaling for input-queued switches.
|
Another line of work -- so-called large-deviations analysis -- concerns exponentially decaying bounds on the tail probability of the steady-state distributions of queue sizes. It has been established that the maximum weight policy with weight parameter @math , MW- @math , optimizes the tail exponent of the @math norm of the queue-size vector, and that a so-called "exponential rule" optimizes the tail exponent of the max norm of the queue-size vector. However, these works do not characterize the tail exponent explicitly; see @cite_24 , which has the best known explicit bounds on the tail exponent.
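For concreteness, here is a minimal sketch (our own illustration, not from the cited works) of one MW-α decision on an input-queued switch: among all matchings of inputs to outputs, serve the one maximizing the sum of queue lengths raised to the power alpha. The brute-force enumeration of permutations is exponential and only meant for tiny examples; practical implementations use maximum-weight matching algorithms.

```python
# Minimal sketch (our own): one scheduling decision of the MW-alpha policy
# on an n x n input-queued switch. q[i][j] is the backlog of the virtual
# output queue at input i destined to output j.
from itertools import permutations

def mw_alpha_schedule(q, alpha=1.0):
    n = len(q)
    best, best_w = None, -1.0
    for perm in permutations(range(n)):  # perm[i] = output matched to input i
        w = sum(q[i][perm[i]] ** alpha for i in range(n) if q[i][perm[i]] > 0)
        if w > best_w:
            best, best_w = perm, w
    return best

queues = [[3, 0, 1], [2, 5, 0], [0, 1, 4]]
print(mw_alpha_schedule(queues, alpha=0.5))   # -> (0, 1, 2) on this example
```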
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2951664871"
],
"abstract": [
"We consider a switched network, a fairly general constrained queueing network model that has been used successfully to model the detailed packet-level dynamics in communication networks, such as input-queued switches and wireless networks. The main operational issue in this model is that of deciding which queues to serve, subject to certain constraints. In this paper, we study qualitative performance properties of the well known @math -weighted scheduling policies. The stability, in the sense of positive recurrence, of these policies has been well understood. We establish exponential upper bounds on the tail of the steady-state distribution of the backlog. Along the way, we prove finiteness of the expected steady-state backlog when @math , a property that was known only for @math . Finally, we analyze the excursions of the maximum backlog over a finite time horizon for @math . As a consequence, for @math , we establish the full state space collapse property."
]
}
|
1110.4697
|
1966102514
|
We consider a switched (queuing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal queue-size scaling for a class of switched networks including input-queued switches. In particular, it establishes the validity of a conjecture (documented in Shah, Tsitsiklis and Zhong [Queueing Syst. 68 (2011) 375-384]) about optimal queue-size scaling for input-queued switches.
|
In the context of input-queued switches, the example that has primarily motivated this work, the policy that we propose has the average total queue size bounded within factor @math of the same quantity induced by policy, in the heavy-traffic limit. Furthermore, this result does not require conditions like complete resource pooling. More generally, our policy provides non-asymptotic bounds on queue sizes for every arrival rate and switch size. The policy even admits exponential tail bounds with respect to the stationary distribution; and the exponent of these tail bounds is . These results are significant improvements on the state-of-the-art bounds for best performing policies for input-queued switches. As noted in the introduction, our bound on the average total queue size is @math times better than the existing bound for the maximum-weight policy, and @math times better than that for the batching policy in @cite_17 . (Here @math is the number of queues, and @math the system load.) For more details of these results, see @cite_2 .
|
{
"cite_N": [
"@cite_2",
"@cite_17"
],
"mid": [
"2065649422",
"2136380306"
],
"abstract": [
"We review some known results and state a few versions of an open problem related to the scaling of the total queue size (in steady state) in an n×n input-queued switch, as a function of the port number n and the load factor ?. Loosely speaking, the question is whether the total number of packets in queue, under either the maximum weight policy or under an optimal policy, scales (ignoring any logarithmic factors) as O(n (1??)).",
"We consider the fundamental delay bounds for scheduling packets In an N times N packet switch operating under the crossbar constraint. Algorithms that make scheduling decisions without considering queue backlog are shown to incur an average delay of at least O(N). We then prove that O(log(N)) delay is achievable with a simple frame based algorithm that uses queue backlog information. This is the best known delay bound for packet switches, and is the first analytical proof that sublinear delay is achievable in a packet switch with random inputs."
]
}
|
1110.4723
|
2095897379
|
In many real-world situations, different and often opposite opinions, innovations, or products are competing with one another for their social influence in a networked society. In this paper, we study competitive influence propagation in social networks under the competitive linear threshold (CLT) model, an extension to the classic linear threshold model. Under the CLT model, we focus on the problem that one entity tries to block the influence propagation of its competing entity as much as possible by strategically selecting a number of seed nodes that could initiate its own influence propagation. We call this problem the influence blocking maximization (IBM) problem. We prove that the objective function of IBM in the CLT model is submodular, and thus a greedy algorithm could achieve a 1 - 1/e approximation ratio. However, the greedy algorithm requires Monte-Carlo simulations of competitive influence propagation, which makes the algorithm inefficient. We design an efficient algorithm CLDAG, which utilizes the properties of the CLT model, to address this issue. We conduct extensive simulations of CLDAG, the greedy algorithm, and other baseline algorithms on real-world and synthetic datasets. Our results show that CLDAG is able to provide the best accuracy, on par with the greedy algorithm and often better than other algorithms, while being two orders of magnitude faster than the greedy algorithm.
|
The independent cascade (IC) model and the linear threshold (LT) model are two extensively studied influence diffusion models, originally summarized by @cite_21 based on earlier works of @cite_16 @cite_24 @cite_6 . The generalized versions of these two models are proven equivalent @cite_21 . Based on the IC and LT models, Kempe et al. @cite_21 @cite_15 propose a greedy algorithm for the influence maximization problem (first raised by Richardson @cite_8 ), which seeks to maximize the spread of a single idea, innovation, etc. under these two models. Many follow-up studies propose alternative heuristics and try to solve the influence maximization problem more efficiently @cite_14 @cite_12 @cite_13 @cite_22 @cite_2 @cite_18 . In terms of efficient algorithm design, our work follows the idea in @cite_22 @cite_2 of finding efficient local graph structures to speed up the computation. In particular, our CLDAG algorithm is similar to the LDAG algorithm of @cite_2 , which is also based on the DAG structure, but CLDAG is novel in handling competitive influence diffusion via dynamic programming.
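The greedy skeleton referred to above is easy to state in code. The following is our own sketch under the independent cascade model with a made-up toy graph and uniform edge probability; the IBM problem of this paper swaps the objective (blocked influence instead of spread) and the CLDAG algorithm avoids the costly Monte-Carlo loop entirely, but the outer greedy structure is the same.

```python
# Minimal sketch (our own): Monte-Carlo greedy seed selection for influence
# maximization under the independent cascade (IC) model.
import random

def ic_spread(graph, seeds, p=0.1, runs=200):
    """Average number of activated nodes over `runs` IC simulations."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def greedy_seeds(graph, k, **kw):
    seeds = set()
    for _ in range(k):                  # add the best marginal node k times
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: ic_spread(graph, seeds | {v}, **kw))
        seeds.add(best)
    return seeds

random.seed(0)
toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(greedy_seeds(toy, k=2, p=0.3, runs=100))
```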
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"1944367023",
"1984069252",
"2056609785",
"",
"2187996512",
"",
"",
"",
"2041157860",
"",
"1543143196"
],
"abstract": [
"",
"When we consider the problem of finding influential nodes for information diffusion in a large-scale social network based on the Independent Cascade Model (ICM), we need to compute the expected number of nodes influenced by a given set of nodes. However, a good estimate of this quantity needs a large amount of computation in the ICM. In this paper, we propose two natural special cases of the ICM such that a good estimate of this quantity can be efficiently computed. Using real large-scale social networks, we experimentally demonstrate that for extracting influential nodes, the proposed models can provide novel ranking methods that are different from the ICM, typical methods of social network analysis, and “PageRank” method. Moreover, we experimentally demonstrate that when the propagation probabilities through links are small, they can give good approximations to the ICM for finding sets of influential nodes.",
"Influence maximization, defined by Kempe, Kleinberg, and Tardos (2003), is the problem of finding a small set of seed nodes in a social network that maximizes the spread of influence under certain influence cascade models. The scalability of influence maximization is a key factor for enabling prevalent viral marketing in large-scale online social networks. Prior solutions, such as the greedy algorithm of (2003) and its improvements are slow and not scalable, while other heuristic algorithms do not provide consistently good performance on influence spreads. In this paper, we design a new heuristic algorithm that is easily scalable to millions of nodes and edges in our experiments. Our algorithm has a simple tunable parameter for users to control the balance between the running time and the influence spread of the algorithm. Our results from extensive simulations on several real-world and synthetic networks demonstrate that our algorithm is currently the best scalable solution to the influence maximization problem: (a) our algorithm scales beyond million-sized graphs where the greedy algorithm becomes infeasible, and (b) in all size ranges, our algorithm performs consistently well in influence spread --- it is always among the best algorithms, and in most cases it significantly outperforms all other scalable heuristics to as much as 100 --260 increase in influence spread.",
"Viral marketing takes advantage of networks of influence among customers to inexpensively achieve large changes in behavior. Our research seeks to put it on a firmer footing by mining these networks from data, building probabilistic models of them, and using these models to choose the best viral marketing plan. Knowledge-sharing sites, where customers review products and advise each other, are a fertile source for this type of data mining. In this paper we extend our previous techniques, achieving a large reduction in computational cost, and apply them to data from a knowledge-sharing site. We optimize the amount of marketing funds spent on each customer, rather than just making a binary decision on whether to market to him. We take into account the fact that knowledge of the network is partial, and that gathering that knowledge can itself have a cost. Our results show the robustness and utility of our approach.",
"",
"Aggregate level simulation procedures have been used in many areas of marketing. In this paper we show how individual level simulations may be used support marketing theory development. More specifically, we show how a certain type of simulations that is based on complex systems studies (in this case Stochastic Cellular Automata) may be used to generalize diffusion theory one of the fundamental theories of new product marketing. Cellular Automata models are simulations of global consequences, based on local interactions between individual members of a population, that are widely used in complex system analysis across disciplines. In this study we demonstrate how the Cellular Automata approach can help untangle complex marketing research problems. Specifically, we address two major issues facing current theory of innovation diffusion: The first is general lack of data at the individual level, while the second is the resultant inability of marketing researchers to empirically validate the main assumptions used in the aggregate models of innovation diffusion. Using a computer-based Cellular Automata Diffusion Simulation, we demonstrate how such problems can be overcome. More specifically, we show that relaxing the commonly used assumption of homogeneity in the consumers’ communication behavior is not a barrier to aggregate modeling. Thus we show that notwithstanding some exceptions, the well-known Bass model performs well on aggregate data when the assumption that that all adopters have a possible equal effect on all other potential adopters is relaxed. Through Cellular Automata we are better able to understand how individual level assumptions influence aggregate level parameter values, and learn the strengths and limitations of the aggregate level analysis. We believe that this study can serve as a demonstration towards a much wider use of Cellular Automata models for complex marketing research phenomena.",
"",
"",
"",
"Models of collective behavior are developed for situations where actors have two alternatives and the costs and or benefits of each depend on how many other actors choose which alternative. The key concept is that of \"threshold\": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or \"equilibrium\" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ...",
"",
"In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings such as analysis of co-authorship networks, diffusion of information, viral marketing, etc. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value, for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence it is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org), having 8361 authors."
]
}
|
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
Grammar-based methods can achieve universal compression @cite_20 . Unlike statistical methods, which exploit symbol frequencies to achieve compression, grammar-based methods exploit repetitions in the text, and thus they are especially suitable for compressing highly repetitive sequence collections. These collections, containing long identical substrings, possibly far away from each other, arise when managing software repositories, versioned documents, temporal databases, transaction logs, periodic publications, and computational biology sequence databases.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2130956967"
],
"abstract": [
"We investigate a type of lossless source code called a grammar-based code, which, in response to any input data string x over a fixed finite alphabet, selects a context-free grammar G sub x representing x in the sense that x is the unique string belonging to the language generated by G sub x . Lossless compression of x takes place indirectly via compression of the production rules of the grammar G sub x . It is shown that, subject to some mild restrictions, a grammar-based code is a universal code with respect to the family of finite-state information sources over the finite alphabet. Redundancy bounds for grammar-based codes are established. Reduction rules for designing grammar-based codes are presented."
]
}
|
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
Finding the smallest grammar @math that represents a given text @math is NP-complete @cite_26 @cite_8 . Moreover, the smallest grammar is never smaller than an LZ77 parse @cite_14 of @math . A simple method to achieve an @math -approximation to the smallest grammar size is to parse @math using LZ77 and then to convert it into a grammar @cite_26 . A more sophisticated approximation achieves ratio @math , where @math is the size of @math .
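For the first step, here is a minimal sketch (our own illustration) of a greedy LZ77-style parse without self-references: each phrase is the longest prefix of the remaining suffix that occurs in the already-parsed prefix, plus one fresh character. The subsequent conversion of the k phrases into a grammar of size O(k log n) @cite_26 uses balanced grammars built over the phrase sources and is not shown here.

```python
# Minimal sketch (our own): greedy LZ77-style parse (no self-references).
# On repetitive inputs the number of phrases is small, which is exactly
# what a grammar built on top of the parse then exploits.
def lz77_parse(text):
    phrases, i = [], 0
    while i < len(text):
        length = 0
        # grow the match while it still occurs in the parsed prefix text[:i]
        while i + length < len(text) and text[:i].find(text[i:i + length + 1]) != -1:
            length += 1
        phrases.append(text[i:i + length + 1])  # match plus one fresh char (if any remains)
        i += length + 1
    return phrases

print(lz77_parse("abababababb"))  # ['a', 'b', 'aba', 'babab', 'b']
```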
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_8"
],
"mid": [
"2107745473",
"1973608346",
"2113004376"
],
"abstract": [
"A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.",
"We introduce new type of context-free grammars, AVL-grammars, and show their applicability to grammar-based compression. Using this type of grammars we present O(n log |Σ|) time and O(log n)-ratio approximation of minimal grammar-based compression of a given string of length n over an alphabet Σ and O(k log n) time transformation of LZ77 encoding of size k into a grammar-based encoding of size O(k log n). A preliminary version of this paper has been presented in Rytter (Combinatorial Pattern Matching, Lecture Notes in Computer Science, vol. 2373, Springer, Berlin, June 2000, pp. 20-31), independently of (STOC, 2002), where grammar-based approximation has been attacked with different construction and a more complicated type of grammars (α-balanced grammars for α ≤ 1 - ½ √2). The AVL-grammar is a very natural and simple tool for grammar based compression, it is a straightforward extension of the classical AVL-tree.",
"This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string spl sigma ? This is a natural question about a fundamental object connected to many fields such as data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem's inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and implicitly, the worst case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results are concern the hardness of approximating the smallest grammar problem. Most notably, we show that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569 8568 unless P=NP. We then bound approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, B ISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n sup 1 2 ). We finish by presenting two novel algorithms with exponentially better ratios of O(log sup 3 n) and O(log(n m sup * )), where m sup * is the size of the smallest grammar for that input. The latter algorithm highlights a connection between grammar-based compression and LZ77."
]
}
|
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
More ambitious than just extracting arbitrary substrings from @math is to support indexed searches, that is, finding all the @math occurrences in @math of a given pattern @math . Self-indexes are compressed text representations that support both operations, extract and search, in time depending only polylogarithmically on @math . They have appeared in the last decade @cite_35 , and have focused mostly on statistical compression. As a result, they work well on classical texts, but not on repetitive collections @cite_33 . Some of those self-indexes have been adapted to repetitive collections @cite_33 , but they cannot reach the compression ratios of the best grammar-based methods. Searching for patterns on grammar-compressed text has been addressed mostly in sequential form @cite_13 , that is, scanning the whole grammar. The best result @cite_34 achieves time @math . This may be @math , but is still linear in the size of the compressed text. There exist a few self-indexes based on LZ78-like compression @cite_3 @cite_31 @cite_27 , but LZ78 is among the weakest grammar-based compressors. In particular, LZ78 has been shown not to be competitive on highly repetitive collections @cite_33 .
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_3",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2159647614",
"2147217460",
"",
"1979271813",
"2584243174",
"2111487449",
"2160748429"
],
"abstract": [
"We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form.Our first compressed data structure retrieves the occ occurrences of a pattern P[1,p] within a text T[1,n] in O(p p occ log1pe n) time for any chosen e, 0 k (T) p o(n) bits of storage, where H k (T) is the kth order empirical entropy of T. The space usage is Θ(n) bits in the worst case and o(n) bits for compressible texts. This data structure exploits the relationship between suffix arrays and the Burrows--Wheeler Transform, and can be regarded as a compressed suffix array.Our second compressed data structure achieves O(ppocc) query time using O(nH k (T)loge n) p o(n) bits of storage for any chosen e, 0<e<1. Therefore, it provides optimal output-sensitive query time using o(nlog n) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows--Wheeler Transform and the LZ78 algorithm.",
"A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N . Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on a such typically huge collection is plausible using suffix trees. However, suffix tree occupies O (N logN ) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of suffix tree to O (N log*** ) bits, where *** is the alphabet size. In practice, the space reduction is more than 10-fold, for example on suffix tree of Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N n . We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available due to rapid progress in the sequencing technologies.",
"",
"A compressed full-text self-index for a text T, of size u, is a data structure used to search for patterns P, of size m, in T, that requires reduced space, i.e. space that depends on the empirical entropy (H k or H 0) of T, and is, furthermore, able to reproduce any substring of T. In this paper we present a new compressed self-index able to locate the occurrences of P in O((m + occ)log u) time, where occ is the number of occurrences. The fundamental improvement over previous LZ78 based indexes is the reduction of the search time dependency on m from O(m 2) to O(m). To achieve this result we point out the main obstacle to linear time algorithms based on LZ78 data compression and expose and explore the nature of a recurrent structure in LZ-indexes, the @math suffix tree. We show that our method is very competitive in practice by comparing it against other state of the art compressed indexes.",
"The LZ-index is a compressed full-text self-index able to represent a text T 1...u , over an alphabet of size a = O(polylog(u)) and with k-th order empirical entropy H k (T), using 4uH k (T) + o(u log σ) bits for any k = o(log σ u). It can report all the occ occurrences of a pattern P 1...m in T in O(m3 log σ + (m + occ) log u) worst case time. Its main drawback is the factor 4 in its space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the space requirement of LZ-index. In both cases we achieve (2 + e)uH k (T) + o(u log σ) bits of space, for any constant e > 0, and we simultaneously improve the search time to O(m 2 log m + (m + occ) log u). Both indexes support displaying any sub-text of length l in optimal O(l logσ u) time. In addition, we show how the space can be squeezed to (1 + ∈)uH k (T) + o(u log σ) to obtain a structure with O(m 2 ) average search time for m ≥ 2log σ , u.",
"We introduce a general framework which is suitable to capture the essence of compressed pattern matching according to various dictionary-based compressions. It is a formal system to represent a string by a pair of dictionary D and sequence S of phrases in D. The basic operations are concatenation, truncation, and repetition. We also propose a compressed pattern matching algorithm for the framework. The goal is to find all occurrences of a pattern in a text without decompression, which is one of the most active topics in string matching. Our framework includes such compression methods as Lempel-Ziv family (LZ77, LZSS, LZ78, LZW), RE-PAIR, SEQUITUR, and the static dictionary-based method. The proposed algorithm runs in O((||D|| + |S|)- height(D) + m2 + r) time with O(||D|| + m2) space, where ||D|| is the size of D, |S| is the number of tokens in S, height(D) is the maximum dependency of tokens in D, m is the pattern length, and r is the number of pattern occurrences. For a subclass of the framework that contains no truncation, the time complexity is O(||D|| + |S| + m2 + r).",
"Digitized images are known to be extremely space consuming. However, regularities in the images can often be exploited to reduce the necessary storage area. Thus, many systems store images in a compressed form. The authors propose that compression be used as a time saving tool, in addition to its traditional role of space saving. They introduce a new pattern matching paradigm, compressed matching. A text array T and pattern array P are given in compressed forms c(T) and c(P). They seek all appearances of P in T, without decompressing T. This achieves a search time that is sublinear in the size of the uncompressed text mod T mod . They show that for the two-dimensional run-length compression there is a O( mod c(T) mod log mod P mod + mod P mod ), or almost optimal algorithm. The algorithm uses a novel multidimensional pattern matching technique, two-dimensional periodicity analysis. >"
]
}
|
1110.4493
|
2951532727
|
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text @math that is represented by a (context-free) grammar of @math (terminal and nonterminal) symbols and size @math (measured as the sum of the lengths of the right hands of the rules), a basic grammar-based representation of @math takes @math bits of space. Our representation requires @math bits of space, for any @math . It can find the positions of the @math occurrences of a pattern of length @math in @math in @math time, and extract any substring of length @math of @math in time @math , where @math is the height of the grammar tree.
|
The only self-index supporting general grammar compressors @cite_21 operates on "straight-line programs" (SLPs), where the right hands of the rules are of length 1 or 2. Given such a grammar they achieve, among other tradeoffs, @math bits of space and @math search time, where @math is the height of the parse tree of the grammar. A general grammar of @math symbols and size @math can be converted into an SLP of @math rules.
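The conversion mentioned in the last sentence is a simple pairwise folding of long right-hand sides; here is a minimal sketch (our own illustration; rule and symbol names are made up):

```python
# Minimal sketch (our own): turn a general grammar (right-hand sides of any
# length) into an SLP-style grammar with right-hand sides of length <= 2.
# A rule of length L spawns L-2 fresh nonterminals, so the total number of
# rules stays linear in the size of the input grammar.
def to_slp(grammar):
    slp, counter = {}, [0]
    def fresh():
        counter[0] += 1
        return f"_X{counter[0]}"
    for nt, rhs in grammar.items():
        while len(rhs) > 2:           # fold the last two symbols into one
            aux = fresh()
            slp[aux] = [rhs[-2], rhs[-1]]
            rhs = rhs[:-2] + [aux]
        slp[nt] = rhs
    return slp

g = {"S": ["a", "B", "a", "B", "c"], "B": ["b", "b"]}
for lhs, rhs in to_slp(g).items():
    print(lhs, "->", " ".join(rhs))
```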
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"1570532020"
],
"abstract": [
"Straight-line programs (SLPs) offer powerful text compression by representing a text T[1,u] in terms of a restricted context-free grammar of n rules, so that T can be recovered in O(u) time. However, the problem of operating the grammar in compressed form has not been studied much. We present a grammar representation whose size is of the same order of that of a plain SLP representation, and can answer other queries apart from expanding nonterminals. This can be of independent interest. We then extend it to achieve the first grammar representation able of extracting text substrings, and of searching the text for patterns, in time o(n). We also give byproducts on representing binary relations."
]
}
|