id | title | categories | abstract |
|---|---|---|---|
1310.6945 | Optimal Scalar Quantization for Parameter Estimation | cs.IT math.IT | In this paper, we study an asymptotic approximation of the Fisher information
for the estimation of a scalar parameter using quantized measurements. We show
that, as the number of quantization intervals tends to infinity, the loss of
Fisher information induced by quantization decreases exponentially as a
function of the number of quantization bits. A characterization of the optimal
quantizer through its interval density and an analytical expression for the
Fisher information are obtained. A comparison between optimal uniform and
non-uniform quantization for the location and scale estimation problems shows
that non-uniform quantization is only slightly better. As the optimal
quantization intervals are shown to depend on the unknown parameters, by
applying adaptive algorithms that jointly estimate the parameter and set the
thresholds in the location and scale estimation problems, we show that the
asymptotic results can be approximately obtained in practice using only 4 or 5
quantization bits.
|
1310.6998 | Predicting the NFL using Twitter | cs.SI cs.LG physics.soc-ph stat.ML | We study the relationship between social media output and National Football
League (NFL) games, using a dataset containing messages from Twitter and NFL
game statistics. Specifically, we consider tweets pertaining to specific teams
and games in the NFL season and use them alongside statistical game data to
build predictive models for future game outcomes (which team will win?) and
sports betting outcomes (which team will win with the point spread? will the
total points be over/under the line?). We experiment with several feature sets
and find that simple features using large volumes of tweets can match or exceed
the performance of more traditional features that use game statistics.
|
1310.7001 | Scalable Synchronization and Reciprocity Calibration for Distributed
Multiuser MIMO | cs.NI cs.IT math.IT | Large-scale distributed Multiuser MIMO (MU-MIMO) is a promising wireless
network architecture that combines the advantages of "massive MIMO" and "small
cells." It consists of several Access Points (APs) connected to a central
server via a wired backhaul network and acting as a large distributed antenna
system. We focus on the downlink, which is both more demanding in terms of
traffic and more challenging in terms of implementation than the uplink. In
order to enable multiuser joint precoding of the downlink signals, channel
state information at the transmitter side is required. We consider Time
Division Duplex (TDD), where the {\em downlink} channels can be learned from
the user uplink pilot signals, thanks to channel reciprocity. Furthermore,
coherent multiuser joint precoding is possible only if the APs maintain a
sufficiently accurate relative timing and phase synchronization. AP
synchronization and TDD reciprocity calibration are two key problems to be
solved in order to enable distributed MU-MIMO downlink. In this paper, we
propose novel over-the-air synchronization and calibration protocols that scale
well with the network size. The proposed schemes can be applied to networks
formed by a large number of APs, each of which is driven by an inexpensive
802.11-grade clock and has a standard RF front-end, not explicitly designed to
be reciprocal. Our protocols can incorporate, as a building block, any suitable
timing and frequency estimator. Here we revisit the problem of joint ML timing
and frequency estimation and use the corresponding Cramer-Rao bound to evaluate
the performance of the synchronization protocol. Overall, the proposed
synchronization and calibration schemes are shown to achieve sufficient
accuracy for satisfactory distributed MU-MIMO performance.
|
1310.7028 | Multiplicativity of completely bounded $p$-norms implies a strong
converse for entanglement-assisted capacity | quant-ph cs.IT math.IT | The fully quantum reverse Shannon theorem establishes the optimal rate of
noiseless classical communication required for simulating the action of many
instances of a noisy quantum channel on an arbitrary input state, while also
allowing for an arbitrary amount of shared entanglement of an arbitrary form.
Turning this theorem around establishes a strong converse for the
entanglement-assisted classical capacity of any quantum channel. This paper
proves the strong converse for entanglement-assisted capacity by a completely
different approach and identifies a bound on the strong converse exponent for
this task. Namely, we exploit the recent entanglement-assisted "meta-converse"
theorem of Matthews and Wehner, several properties of the recently established
sandwiched Renyi relative entropy (also referred to as the quantum Renyi
divergence), and the multiplicativity of completely bounded $p$-norms due to
Devetak et al. The proof here demonstrates the extent to which the Arimoto
approach can be helpful in proving strong converse theorems, it provides an
operational relevance for the multiplicativity result of Devetak et al., and it
adds to the growing body of evidence that the sandwiched Renyi relative entropy
is the correct quantum generalization of the classical concept for all
$\alpha>1$.
|
1310.7048 | Scaling SVM and Least Absolute Deviations via Exact Data Reduction | cs.LG stat.ML | The support vector machine (SVM) is a widely used method for classification.
Although many efforts have been devoted to developing efficient solvers, it
remains challenging to apply SVM to large-scale problems. A nice property of
SVM is that the non-support vectors have no effect on the resulting classifier.
Motivated by this observation, we present fast and efficient screening rules to
discard non-support vectors by analyzing the dual problem of SVM via
variational inequalities (DVI). As a result, the number of data instances to be
entered into the optimization can be substantially reduced. Some appealing
features of our screening method are: (1) DVI is safe in the sense that the
vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data
set needs to be scanned only once to run the screening, whose computational
cost is negligible compared to that of solving the SVM problem; (3) DVI is
independent of the solvers and can be integrated with any existing efficient
solvers. We also show that the DVI technique can be extended to detect
non-support vectors in the least absolute deviations regression (LAD). To the
best of our knowledge, there are currently no screening methods for LAD. We
have evaluated DVI on both synthetic and real data sets. Experiments indicate
that DVI significantly outperforms the existing state-of-the-art screening
rules for SVM, and is very effective in discarding non-support vectors for LAD.
The speedup gained by DVI rules can be up to two orders of magnitude.
|
1310.7062 | Real-Time Planning with Primitives for Dynamic Walking over Uneven
Terrain | cs.SY cs.RO | We present an algorithm for receding-horizon motion planning using a finite
family of motion primitives for underactuated dynamic walking over uneven
terrain. The motion primitives are defined as virtual holonomic constraints,
and the special structure of underactuated mechanical systems operating subject
to virtual constraints is used to construct closed-form solutions and a special
binary search tree that dramatically speed up motion planning. We propose a
greedy depth-first search and discuss improvement using energy-based
heuristics. The resulting algorithm can plan several footsteps ahead in a
fraction of a second for both the compass-gait walker and a planar
7-degree-of-freedom/five-link walker.
|
1310.7112 | Computation Over Gaussian Networks With Orthogonal Components | cs.IT math.IT | Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
|
1310.7114 | Efficient Information Theoretic Clustering on Discrete Lattices | cs.CV | We consider the problem of clustering data that reside on discrete,
low-dimensional lattices. Canonical examples for this setting are found in image
segmentation and key point extraction. Our solution is based on a recent
approach to information theoretic clustering where clusters result from an
iterative procedure that minimizes a divergence measure. We replace costly
processing steps in the original algorithm by means of convolutions. These
allow for highly efficient implementations and thus significantly reduce
runtime. This paper therefore bridges a gap between machine learning and signal
processing.
|
1310.7115 | Studying a Chaotic Spiking Neural Model | cs.AI cs.NE | The dynamics of a chaotic spiking neuron model are studied mathematically
and experimentally. The Nonlinear Dynamic State neuron (NDS) is analysed to
further understand the model and improve it. Chaos has many interesting
properties such as sensitivity to initial conditions, space filling, control
and synchronization. As suggested by biologists, these properties may be
exploited and play a vital role in carrying out computational tasks in the human
brain. The NDS model has some limitations; in this paper the model is
investigated to overcome some of these limitations in order to enhance the
model. Therefore, the model's parameters are tuned and the resulting dynamics
are studied. Also, the discretization method of the model is considered.
Moreover, a mathematical analysis is carried out to reveal the underlying
dynamics of the model after tuning of its parameters. The results of the
aforementioned methods reveal some facts regarding the NDS attractor and
suggest the stabilization of a large number of unstable periodic orbits (UPOs),
which might correspond to memories in phase space.
|
1310.7123 | Nomographic Functions: Efficient Computation in Clustered Gaussian
Sensor Networks | cs.IT math.IT | In this paper, a clustered wireless sensor network is considered that is
modeled as a set of coupled Gaussian multiple-access channels. The objective of
the network is not to reconstruct individual sensor readings at designated
fusion centers but rather to reliably compute some functions thereof. Our
particular focus is on real-valued functions that can be represented as a
post-processed sum of pre-processed sensor readings. Such functions are called
nomographic functions and their special structure permits the utilization of
the interference property of the Gaussian multiple-access channel to reliably
compute many linear and nonlinear functions at significantly higher rates than
those achievable with standard schemes that combat interference. Motivated by
this observation, a computation scheme is proposed that combines a suitable
data pre- and post-processing strategy with a nested lattice code designed to
protect the sum of pre-processed sensor readings against the channel noise.
After analyzing its computation rate performance, it is shown that at the cost
of a reduced rate, the scheme can be extended to compute every continuous
function of the sensor readings in a finite succession of steps, where in each
step a different nomographic function is computed. This demonstrates the
fundamental role of nomographic representations.
|
1310.7134 | Modeling Oligarchs' Campaign Donations and Ideological Preferences with
Simulated Agent-Based Spatial Elections | cs.MA physics.soc-ph | In this paper, we investigate the interactions among oligarchs, political
parties, and voters using an agent-based modeling approach. We introduce the
OLIGO model, which is based on the spatial model of democracy, where voters
have positions in a policy space and vote for the party that appears closest to
them, and parties move in policy space to seek more votes. We extend the
existing literature on agent-based models of political economy in the following
manner: (1) by introducing a new class of agents - oligarchs - that represent
leaders of firms in a common industry who lobby for beneficial subsidies
through campaign donations; and (2) by investigating the effects of ideological
preferences of the oligarchs on legislative action. We test hypotheses from the
literature in political economics on the behavior of oligarchs and political
parties as they interact, under conditions of imperfect information and bounded
rationality. Our key results indicate that (1) oligarchs tend to donate less to
political campaigns when the parties are more resistant to changing their
policies, or when voters are more informed; and (2) if oligarchs donate to
parties based on a combination of ideological and profit motivations, oligarchs
will tend to donate at a lower equilibrium level, due to the influence of lost
profits. We validate these outcomes via comparisons to real world polling data
on changes in party support over time.
|
1310.7158 | Outage Constrained Robust Secure Transmission for MISO Wiretap Channels | cs.IT math.IT | In this paper we consider the robust secure beamformer design for MISO
wiretap channels. Assuming that the eavesdroppers' channels are only partially
available at the transmitter, we seek to maximize the secrecy rate under
transmit power and secrecy rate outage probability constraints. The outage
probability constraint requires that the secrecy rate exceed a certain
threshold with high probability. Therefore, including such a constraint in the
design naturally ensures the desired robustness. Unfortunately, the presence of the
probabilistic constraints makes the problem non-convex and hence difficult to
solve. In this paper, we investigate the outage probability constrained secrecy
rate maximization problem using a novel two-step approach. Under a wide range
of uncertainty models, our developed algorithms can obtain high-quality
solutions, sometimes even exact global solutions, for the robust secure
beamformer design problem. Simulation results are presented to verify the
effectiveness and robustness of the proposed algorithms.
|
1310.7159 | Using concatenated algebraic geometry codes in channel polarization | cs.IT math.AG math.IT | Polar codes were introduced by Arikan in 2008 and are the first family of
error-correcting codes achieving the symmetric capacity of an arbitrary
binary-input discrete memoryless channel under low complexity encoding and
using an efficient successive cancellation decoding strategy. Recently,
non-binary polar codes have been studied, in which one can use different
algebraic geometry codes to achieve better error decoding probability. In this
paper, we study the performance of binary polar codes that are obtained from
non-binary algebraic geometry codes using concatenation. For binary polar codes
(i.e. binary kernels) of a given length $n$, we compare numerically the use of
short algebraic geometry codes over large fields versus long algebraic geometry
codes over small fields. We find that for each $n$ there is an optimal choice.
For binary kernels of size up to $n \leq 1,800$ a concatenated Reed-Solomon
code outperforms other choices. For larger kernel sizes concatenated Hermitian
codes or Suzuki codes will do better.
|
1310.7163 | Generalized Thompson Sampling for Contextual Bandits | cs.LG cs.AI stat.ML stat.OT | Thompson Sampling, one of the oldest heuristics for solving multi-armed
bandits, has recently been shown to demonstrate state-of-the-art performance.
The empirical success has led to great interest in the theoretical understanding
of this heuristic. In this paper, we approach this problem in a way very
different from existing efforts. In particular, motivated by the connection
between Thompson Sampling and exponentiated updates, we propose a new family of
algorithms called Generalized Thompson Sampling in the expert-learning
framework, which includes Thompson Sampling as a special case. Similar to most
expert-learning algorithms, Generalized Thompson Sampling uses a loss function
to adjust the experts' weights. General regret bounds are derived, which are
also instantiated to two important loss functions: square loss and logarithmic
loss. In contrast to existing bounds, our results apply to quite general
contextual bandits. More importantly, they quantify the effect of the "prior"
distribution on the regret bounds.
|
1310.7170 | Object Recognition System Design in Computer Vision: a Universal
Approach | cs.CV | The first contribution of this paper is the architecture of a multipurpose
system, which delegates a range of object detection tasks to a classifier,
applied in special grid positions of the tested image. The second contribution
is Gray Level-Radius Co-occurrence Matrix, which describes local image texture
and topology and, unlike common second order statistics methods, is robust to
image resolution. The third contribution is a parametrically controlled
automatic synthesis of an unlimited number of numerical features for
classification. The fourth contribution is a method of optimizing parameters C
and gamma in a LibSVM-based classifier, which is 20-100 times faster than the
commonly applied method. The work is essentially experimental, with
demonstration of various methods for definition of objects of interest in
images and video sequences.
|
1310.7198 | Anti-rumor dynamics and emergence of the timing threshold on complex
network | physics.soc-ph cs.SI | Anti-rumor dynamics is proposed on the basis of rumor dynamics and the
characteristics of anti-rumor dynamics are explored by both mean-field
equations and numerical simulations on complex networks. The main metrics we
study are the timing effect of combating a rumor and the identification of
influential nodes, which are what an efficient strategy against rumors must
take into account. The results indicate that there exists a robust time
dependence of anti-rumor dynamics and that the timing threshold emerges as a consequence of
launching the anti-rumor at different delay time after the beginning of rumor
spreading. The timing threshold as a critical feature is further verified on a
series of Barabasi-Albert scale-free networks (BA networks), where anti-rumor
dynamics arises explicitly. The timing threshold is a network-dependent
quantity and its value decreases as the average degree of the BA network
increases until close to zero. Meanwhile, coreness also constitutes a better
topological descriptor to identify hubs. Our results will hopefully be useful
for the understanding of spreading behaviors of rumor and anti-rumor and
suggest a possible avenue for further study of the interplay of multiple pieces
of information on complex networks.
|
1310.7205 | Algorithms for Timed Consistency Models | cs.DC cs.DB cs.DS | One of the major challenges in distributed systems is establishing
consistency among replicated data in a timely fashion. While the consistent
ordering of events has been extensively researched, the time span to reach a
consistent state is mostly considered an effect of the chosen consistency
model, rather than being considered a parameter itself. This paper argues that
it is possible to give guarantees on the timely consistency of an operation.
Subsequent to an update, the cloud and all connected clients will either be
consistent with the update within the defined upper bound of time or the update
will be returned. This paper suggests the respective algorithms and protocols
capable of producing such comprehensive Timed Consistency, as conceptually
proposed by Torres-Rojas et al. The solution offers business customers an
increasing level of predictability and adjustability. The temporal certainty
concerning the execution makes the cloud a more attractive tool for
time-critical or mission-critical applications fearing the poor availability of
Strong Consistency in cloud environments.
|
1310.7217 | Compressed Sensing SAR Imaging with Multilook Processing | cs.IT cs.CV math.IT | Multilook processing is a widely used speckle reduction approach in synthetic
aperture radar (SAR) imaging. Conventionally, it is achieved by incoherently
summing independent low-resolution images formed from overlapping
subbands of the SAR signal. However, in the context of compressive sensing (CS)
SAR imaging, where the samples are collected at a sub-Nyquist rate, the data
spectrum is highly aliased, which hinders the direct application of existing
multilook techniques. In this letter, we propose a new CS-SAR imaging method
that can realize multilook processing simultaneously during image
reconstruction. The main idea is to replace the SAR observation matrix by the
inverse of the multilook procedures, which is then combined with a random
sampling matrix to yield a multilook CS-SAR observation model. Then a joint sparse
regularization model, considering pixel dependency of subimages, is derived to
form multilook images. The suggested SAR imaging method can not only
reconstruct sparse scenes efficiently below the Nyquist rate, but is also able to
achieve a comparable reduction of speckles during reconstruction. Simulation
results are finally provided to demonstrate the effectiveness of the proposed
method.
|
1310.7262 | Input Design for Model Discrimination and Fault Detection via Convex
Relaxation | cs.SY math.OC | This paper addresses the design of input signals for the purpose of
discriminating among a finite set of models of dynamic systems within a given
finite time interval. A motivating application is fault detection and
isolation. We propose several specific optimization problems, with objectives
or constraints based on signal power, signal amplitude, and probability of
successful model discrimination. Since these optimization problems are
nonconvex, we suggest a suboptimal solution via a random search algorithm
guided by the semidefinite relaxation (SDR) and analyze the accuracy of the
suboptimal solution. We conclude with a simple example taken from a benchmark
problem on fault detection for wind turbines.
|
1310.7276 | On the extraction of instantaneous frequencies from ridges in
time-frequency representations of signals | cs.CE math.NA physics.data-an | The extraction of oscillatory components and their properties from different
time-frequency representations, such as windowed Fourier transform and wavelet
transform, is an important topic in signal processing. The first step in this
procedure is to find an appropriate ridge curve: a sequence of amplitude peak
positions (ridge points), corresponding to the component of interest. This is
not a trivial issue, and the optimal method for extraction is still not settled
or agreed upon. We discuss and develop procedures that can be used for this task and
compare their performance on both simulated and real data. In particular, we
propose a method which, in contrast to many other approaches, is highly
adaptive so that it does not need any parameter adjustment for the signal to be
analysed. Being based on dynamic path optimization and fixed point iteration,
the method is very fast, and its superior accuracy is also demonstrated. In
addition, we investigate the advantages and drawbacks that synchrosqueezing
offers in relation to curve extraction. The codes used in this work are freely
available for download.
|
1310.7282 | Massive MIMO Systems: Signal Processing Challenges and Research Trends | cs.IT math.IT | This article presents a tutorial on multiuser multiple-antenna wireless
systems with a very large number of antennas, known as massive multi-input
multi-output (MIMO) systems. Signal processing challenges and future trends in
the area of massive MIMO systems are presented and key application scenarios
are detailed. A linear algebra approach is considered for the description of
the system and data models of massive MIMO architectures. The operational
requirements of massive MIMO systems are discussed along with their operation
in time-division duplexing mode, resource allocation and calibration
requirements. In particular, transmit and receive processing algorithms are
examined in light of the specific needs of massive MIMO systems. Simulation
results illustrate the performance of transmit and receive processing
algorithms under scenarios of interest. Key problems are discussed and future
trends in the area of massive MIMO systems are pointed out.
|
1310.7297 | Scalable Visibility Color Map Construction in Spatial Databases | cs.DB | Recent advances in 3D modeling provide us with real 3D datasets to answer
queries, such as "What is the best position for a new billboard?" and "Which
hotel room has the best view?" in the presence of obstacles. These applications
require measuring and differentiating the visibility of an object (target) from
different viewpoints in a dataspace, e.g., a billboard may be seen from two
viewpoints but is readable only from the viewpoint closer to the target. In
this paper, we formulate the above problem of quantifying the visibility of
(from) a target object from (of) the surrounding area with a visibility color
map (VCM). A VCM is essentially defined as a surface color map of the space,
where each viewpoint of the space is assigned a color value that denotes the
visibility measure of the target from that viewpoint. Measuring the visibility
of a target even from a single viewpoint is an expensive operation, as we need
to consider factors such as distance, angle, and obstacles between the
viewpoint and the target. Hence, a straightforward approach to constructing the
VCM, which requires visibility computation for every viewpoint of the
surrounding space of the target, is prohibitively expensive in terms of both
I/Os and computation, especially for a real dataset comprising thousands of
obstacles. We propose an efficient approach to compute the VCM based on a key
property of the human vision that eliminates the necessity of computing the
visibility for a large number of viewpoints of the space. To further reduce the
computational overhead, we propose two approximations; namely, minimum bounding
rectangle and tangential approaches with guaranteed error bounds. Our extensive
experiments demonstrate the effectiveness and efficiency of our solutions to
construct the VCM for real 2D and 3D datasets.
|
1310.7300 | Relax but stay in control: from value to algorithms for online Markov
decision processes | cs.LG math.OC stat.ML | Online learning algorithms are designed to perform in non-stationary
environments, but generally there is no notion of a dynamic state to model
constraints on current and future actions as a function of past actions.
State-based models are common in stochastic control settings, but commonly used
frameworks such as Markov Decision Processes (MDPs) assume a known stationary
environment. In recent years, there has been a growing interest in combining
the above two frameworks and considering an MDP setting in which the cost
function is allowed to change arbitrarily after each time step. However, most
of the work in this area has been algorithmic: given a problem, one would
develop an algorithm almost from scratch. Moreover, the presence of the state
and the assumption of an arbitrarily varying environment complicate both the
theoretical analysis and the development of computationally efficient methods.
This paper describes a broad extension of the ideas proposed by Rakhlin et al.
to give a general framework for deriving algorithms in an MDP setting with
arbitrarily changing costs. This framework leads to a unifying view of existing
methods and provides a general procedure for constructing new ones. Several new
methods are presented, and one of them is shown to have important advantages
over a similar method developed from scratch via an online version of
approximate dynamic programming.
|
1310.7305 | Optimized Markov Chain Monte Carlo for Signal Detection in MIMO Systems:
an Analysis of Stationary Distribution and Mixing Time | cs.IT math.IT | In this paper we introduce an optimized Markov Chain Monte Carlo (MCMC)
technique for solving the integer least-squares (ILS) problems, which include
Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO)
systems. Two factors contribute to the speed of finding the optimal solution by
the MCMC detector: the probability of the optimal solution in the stationary
distribution, and the mixing time of the MCMC detector. Firstly, we compute the
optimal value of the "temperature" parameter, in the sense that the temperature
has the desirable property that once the Markov chain has mixed to its
stationary distribution, there is a polynomially small probability
($1/\mbox{poly}(N)$, instead of exponentially small) of encountering the
optimal solution. This temperature is shown to be at most
$O(\sqrt{SNR}/\ln(N))$, where $SNR$ is the signal-to-noise ratio, and $N$ is
the problem dimension. Secondly, we study the mixing time of the underlying
Markov chain of the proposed MCMC detector. We find that the mixing time of
MCMC is closely related to whether there is a local minimum in the lattice
structures of ILS problems. For some lattices without local minima, the mixing
time of the Markov chain is independent of $SNR$, and grows polynomially in the
problem dimension; for lattices with local minima, the mixing time grows
unboundedly as $SNR$ grows, when the temperature is set, as in conventional
wisdom, to be the standard deviation of the noise. Our results suggest that, to
ensure fast mixing for a fixed dimension $N$, the temperature for MCMC should
instead be set as $\Omega(\sqrt{SNR})$ in general. Simulation results show that
the optimized MCMC detector efficiently achieves approximately ML detection in
MIMO systems having a huge number of transmit and receive dimensions.
|
1310.7311 | On the Degrees of Freedom of Asymmetric MIMO Interference Broadcast
Channels | cs.IT math.IT | In this paper, we study the degrees of freedom (DoF) of the asymmetric
multi-input-multi-output interference broadcast channel (MIMO-IBC). By
introducing a notion of connection pattern chain, we generalize the genie chain
proposed in [11] to derive and prove the necessary condition of IA feasibility
for asymmetric MIMO-IBC, which is denoted as irreducible condition. It is
necessary for both linear interference alignment (IA) and asymptotic IA
feasibility in MIMO-IBC with arbitrary configurations. In a special class of
asymmetric two-cell MIMO-IBC, the irreducible condition is proved to be the
sufficient and necessary condition for asymptotic IA feasibility, while the
combination of the proper condition and the irreducible condition is proved to be the
sufficient and necessary condition for linear IA feasibility. From these
conditions, we derive the information theoretic maximal DoF per user and the
maximal DoF per user achieved by linear IA, and these DoFs are also the DoF per
user upper-bounds of asymmetric G-cell MIMO-IBC with asymptotic IA and linear
IA, respectively.
|
1310.7320 | High Dimensional Robust M-Estimation: Asymptotic Variance via
Approximate Message Passing | math.ST cs.IT math.IT stat.TH | In a recent article (Proc. Natl. Acad. Sci., 110(36), 14557-14562), El Karoui
et al. study the distribution of robust regression estimators in the regime in
which the number of parameters p is of the same order as the number of samples
n. Using numerical simulations and `highly plausible' heuristic arguments, they
unveil a striking new phenomenon. Namely, the regression coefficients contain
an extra Gaussian noise component that is not explained by classical concepts
such as the Fisher information matrix. We show here that this phenomenon
can be characterized rigorously using techniques that were developed by the authors
to analyze the Lasso estimator under high-dimensional asymptotics. We introduce
an approximate message passing (AMP) algorithm to compute M-estimators and
deploy state evolution to evaluate the operating characteristics of AMP and so
also M-estimates. Our analysis clarifies that the `extra Gaussian noise'
encountered in this problem is fundamentally similar to phenomena already
studied for regularized least squares in the setting n<p.
|
1310.7324 | Distributed Estimation of a Parametric Field: Algorithms and Performance
Analysis | cs.IT math.IT | This paper presents a distributed estimator for a deterministic parametric
physical field sensed by a homogeneous sensor network and develops a new
transformed expression for the Cramer-Rao lower bound (CRLB) on the variance of
distributed estimates. The proposed transformation reduces a multidimensional
integral representation of the CRLB to an expression involving an infinite sum.
Stochastic models used in this paper assume additive noise in both the
observation and transmission channels. Two cases of data transmission are
considered. The first case assumes a linear analog modulation of raw
observations prior to their transmission to a fusion center. In the second
case, each sensor quantizes its observation to $M$ levels, and the quantized
data are communicated to a fusion center. In both cases, parallel additive
white Gaussian channels are assumed. The paper develops an iterative
expectation-maximization (EM) algorithm to estimate unknown parameters of a
parametric field, and its linearized version is adopted for numerical analysis.
The performance of the developed numerical solution is compared to the
performance of a simple iterative approach based on Newton's approximation.
While the developed solution has a higher complexity than Newton's solution, it
is more robust with respect to the choice of initial parameters and has a
better estimation accuracy. Numerical examples are provided for the case of a
field modeled as a Gaussian bell, and illustrate the advantages of using the
transformed expression for the CRLB. However, the distributed estimator and the
derived CRLB are general and can be applied to any parametric field. The
dependence of the mean-square error (MSE) on the number of quantization levels,
the number of sensors in the network and the SNR of the observation and
transmission channels are analyzed. The variance of the estimates is compared
to the derived CRLB.
|
1310.7346 | Excitable human dynamics driven by extrinsic events in massive
communities | physics.soc-ph cs.CE cs.SI | Using empirical data from a social media site (Twitter) and on trading
volumes of financial securities, we analyze the correlated human activity in
massive social organizations. The activity, typically excited by real-world
events and measured by the occurrence rate of international brand names and
trading volumes, is characterized by intermittent fluctuations with bursts of
high activity separated by quiescent periods. These fluctuations are broadly
distributed with an inverse cubic tail and have long-range temporal
correlations with a $1/f$ power spectrum. We describe the activity by a
stochastic point process and derive the distribution of activity levels from
the corresponding stochastic differential equation. The distribution and the
corresponding power spectrum are fully consistent with the empirical
observations.
|
1310.7367 | Semantic Description of Web Services | cs.AI | The tasks of semantic web service (discovery, selection, composition, and
execution) are supposed to enable seamless interoperation between systems,
whereby human intervention is kept at a minimum. In the field of Web service
description research, the exploitation of semantic descriptions of services
provides better support for the life-cycle of Web services. The large
number of developed ontologies, languages of representations, and integrated
frameworks supporting the discovery, composition and invocation of services is
a good indicator that research in the field of Semantic Web Services (SWS) has
been considerably active. We provide in this paper a detailed classification of
the approaches and solutions, indicating their core characteristics and
objectives, and provide pointers for the interested reader to follow up on
further insights and details about these solutions and related software.
|
1310.7368 | Formulation and Steady-state Analysis of LMS Adaptive Networks for
Distributed Estimation in the Presence of Transmission Errors | cs.SY cs.NI | This article presents the formulation and steady-state analysis of the
distributed estimation algorithms based on the diffusion cooperation scheme in
the presence of errors due to the unreliable data transfer among nodes. In
particular, we highlight the impact of transmission errors on the least-mean
squares (LMS) adaptive networks. We develop closed-form expressions for the
steady-state mean-square deviation (MSD), which are helpful to assess the
effects of imperfect information flow on the behavior of the diffusion LMS
algorithm in terms of the steady-state error. The model is then validated by
performing Monte Carlo simulations. It is shown that local and global MSD
curves are not necessarily monotonic increasing functions of the error
probability. We also assess sufficient conditions that ensure mean and
mean-square stability of diffusion LMS strategies in the presence of
transmission errors. Moreover, issues such as scalability in the sense of
network size and regressor size, spatially correlated observations, as well as
the effect of the distribution of the noise variance are studied.
While the proposed theoretical framework is general in the sense that it is
not confined to a particular source of error during information diffusion, for
practical reasons we additionally study a specific scenario where errors occur
at the medium access control (MAC) level. We develop a model to quantify the
MAC-level transmission errors according to the network topology and system
parameters for a set of nodes employing a backoff procedure to access the
channel. To overcome the problem of unreliable data exchange, we propose an
enhanced combining rule that can be deployed in order to improve the
performance of diffusion estimation algorithms by using the knowledge of the
properties of the transmission errors.
|
1310.7418 | Infinite Secret Sharing -- Examples | cs.CR cs.IT math.IT math.PR | The motivation for extending secret sharing schemes to cases when either the
set of players is infinite or the domain from which the secret and/or the
shares are drawn is infinite or both, is similar to the case when switching to
abstract probability spaces from classical combinatorial probability. It might
shed new light on old problems, could connect seemingly unrelated problems, and
unify diverse phenomena.
Definitions that are equivalent in the finitary case can differ substantially
when switching to infinity. The standard requirement
that qualified subsets should be able to determine the secret has different
interpretations in spite of the fact that, by assumption, all participants have
infinite computing power. The requirement that unqualified subsets should have
no, or limited information on the secret suggests that we also need some
probability distribution. In the infinite case events with zero probability are
not necessarily impossible, and we should decide whether bad events with zero
probability are allowed or not.
In this paper, rather than giving precise definitions, we present an abundance
of hopefully interesting infinite secret sharing schemes. These schemes touch
quite diverse areas of mathematics such as projective geometry, stochastic
processes and Hilbert spaces. Nevertheless our main tools are from probability
theory. The examples discussed here serve as foundation and illustration for
the more theory-oriented companion paper.
|
1310.7423 | Infinite Probabilistic Secret Sharing | cs.CR cs.IT math.IT math.PR | A probabilistic secret sharing scheme is a joint probability distribution of
the shares and the secret together with a collection of secret recovery
functions. The study of schemes using arbitrary probability spaces and
unbounded number of participants allows us to investigate their abstract
properties, to connect the topic to other branches of mathematics, and to
discover new design paradigms. A scheme is perfect if unqualified subsets have
no information on the secret, that is, their total share is independent of the
secret. By relaxing this security requirement, three other scheme types are
defined. Our first result is that every (infinite) access structure can be
realized by a perfect scheme where the recovery functions are non-measurable.
The construction is based on a paradoxical pair of independent random variables
which determine each other. Restricting the recovery functions to be measurable
ones, we give a complete characterization of access structures realizable by
each type of the schemes. In addition, either a vector-space or a Hilbert-space
based scheme is constructed realizing the access structure. While the former
one uses the traditional uniform distributions, the latter one uses Gaussian
distributions, leading to a new design paradigm.
|
1310.7425 | User Selection in MIMO Interfering Broadcast Channels | cs.IT math.IT | Interference alignment aims to achieve maximum degrees of freedom in an
interference system. For achieving interference alignment in interfering
broadcast systems, a closed-form solution is proposed in [1], which is an
extension of the grouping scheme in [2]. In a downlink scenario where there are
a large number of users, the base station is required to select a subset of
users such that the sum rate is maximized. To search for the optimal user
subset using a brute-force approach is computationally prohibitive because of
the large number of possible user subset combinations. We propose a user
selection algorithm achieving a sum rate close to that of the optimal solution. The algorithm
employs coordinate ascent approach and exploits orthogonality between the
desired signal space and the interference channel space in the reciprocal
system to select the user at each step. For the sake of completeness, we have
also extended the sum-rate-based algorithm to the interfering broadcast
channel. The complexity of both these algorithms is shown to be linear with
respect to the total number of users as compared to exponential in brute-force
search.
|
1310.7428 | Musical recommendations and personalization in a social network | cs.IR | This paper presents a set of algorithms used for music recommendations and
personalization in a general purpose social network www.ok.ru, the second
largest social network in the CIS, visited by more than 40 million users per
day. In addition to classical recommendation features like "recommend a
sequence" and "find similar items" the paper describes novel algorithms for
construction of context aware recommendations, personalization of the service,
handling of the cold-start problem, and more. All algorithms described in the
paper work online and are able to detect and address changes in the
user's behavior and needs in real time.
The core component of the algorithms is a taste graph containing information
about different entities (users, tracks, artists, etc.) and relations between
them (for example, user A likes song B with certainty X, track B created by
artist C, artist C is similar to artist D with certainty Y and so on). Using
the graph it is possible to select tracks a user would most probably like, to
arrange them in a way that they match each other well, to estimate which items
from a fixed list are most relevant for the user, and more.
In addition, the paper describes the approach used to estimate algorithms
efficiency and analyze the impact of different recommendation related features
on the users' behavior and overall activity at the service.
|
1310.7433 | Unified Subharmonic Oscillation Conditions for Peak or Average Current
Mode Control | cs.SY math.DS nlin.CD | This paper is an extension of the author's recent research in which only buck
converters were analyzed. Similar analysis can be equally applied to other
types of converters. In this paper, a unified model is proposed for buck,
boost, and buck-boost converters under peak or average current mode control to
predict the occurrence of subharmonic oscillation. Based on the unified model,
the associated stability conditions are derived in closed forms. The same
stability condition can be applied to buck, boost, and buck-boost converters.
Based on the closed-form conditions, the effects of various converter
parameters including the compensator poles and zeros on the stability can be
clearly seen, and these parameters can be consolidated into a few ones.
High-order compensators such as type-II and PI compensators are considered.
Some new plots are also proposed for design purposes to avoid instability.
The instability is found to be associated with large crossover frequency. A
conservative stability condition, in agreement with past research, is derived.
The effect of the voltage loop ripple on the instability is also analyzed.
|
1310.7440 | Neural perceptual model to global-local vision for recognition of the
logical structure of administrative documents | cs.CV | This paper gives the definition of Transparent Neural Network "TNN" for the
simulation of global-local vision and its application to the segmentation of
administrative document images. We have developed and adapted a recognition
method that models the contextual effects reported in studies in
experimental psychology. We then evaluated and compared the TNN with the
multi-layer perceptron "MLP", which has shown its effectiveness in recognition
tasks, to demonstrate that the TNN is clearer for the user and more
powerful in recognition. Indeed, the TNN is the only system
which makes it possible to recognize the document and its structure.
|
1310.7441 | Hierarchical Clustering of Hyperspectral Images using Rank-Two
Nonnegative Matrix Factorization | cs.CV cs.IR math.OC | In this paper, we design a hierarchical clustering algorithm for
high-resolution hyperspectral images. At the core of the algorithm, a new
rank-two nonnegative matrix factorization (NMF) algorithm is used to split the
clusters, which is motivated by convex geometry concepts. The method starts
with a single cluster containing all pixels, and, at each step, (i) selects a
cluster in such a way that the error at the next step is minimized, and (ii)
splits the selected cluster into two disjoint clusters using rank-two NMF in
such a way that the clusters are well balanced and stable. The proposed method
can also be used as an endmember extraction algorithm in the presence of pure
pixels. The effectiveness of this approach is illustrated on several synthetic
and real-world hyperspectral images, and shown to outperform standard
clustering techniques such as k-means, spherical k-means and standard NMF.
|
1310.7442 | Ranking basic belief assignments in decision making under uncertain
environment | cs.AI | Dempster-Shafer theory (D-S theory) is widely used in decision making under
uncertainty. Ranking basic belief assignments (BBAs) is currently an
open issue. Existing evidence distance measures cannot rank BBAs in
situations where the propositions have their own ranking order or an inherent
measure of closeness. To address this issue, a new ranking evidence distance
(RED) measure is proposed. Compared with the existing evidence distance
measures including the Jousselme's distance and the distance between betting
commitments, the proposed RED measure is much more general due to the fact that
the order of the propositions in the systems is taken into consideration. If
there is no order or no inherent measure of closeness in the propositions, our
proposed RED measure is reduced to the existing evidence distance. Numerical
examples show that the proposed RED measure is an efficient alternative to rank
BBAs in decision making under uncertainty.
|
1310.7443 | On Convergent Finite Difference Schemes for Variational - PDE Based
Image Processing | cs.CV math.NA | We study an adaptive anisotropic Huber functional based image restoration
scheme. By using a combination of L2-L1 regularization functions, an adaptive
Huber functional based energy minimization model provides denoising with edge
preservation in noisy digital images. We study a convergent finite difference
scheme based on continuous piecewise linear functions and use a variable
splitting scheme, namely the Split Bregman, to obtain the discrete minimizer.
Experimental results are given for image denoising, and comparison with additive
operator splitting, dual fixed-point, and projected gradient schemes illustrates
that the best convergence rates are obtained for our algorithm.
|
1310.7447 | Impulse Noise Removal In Speech Using Wavelets | cs.CV | A new method for removing impulse noise from speech in the wavelet transform
domain is proposed. The method utilizes the multiresolution property of the
wavelet transform, which provides finer time resolution at the higher
frequencies than the short-time Fourier transform (STFT), to effectively
identify and remove impulse noise. It uses two features of speech to
discriminate speech from impulse noise: one is the slow time-varying nature of
speech and the other is the Lipschitz regularity of the speech components. On
the basis of these features, an algorithm has been developed to identify and
suppress wavelet coefficients that correspond to impulse noise. Experimental
results show that the new method is able to significantly reduce impulse noise
without degrading the quality of the speech signal or introducing any audible
artifacts.
|
1310.7448 | An iterative algorithm for computed tomography image reconstruction from
limited-angle projections | cs.CV | In tomographic imaging applications, the limited-angle problem is a
practical and important issue. In this paper, an iterative
reprojection-reconstruction (IRR) algorithm using a modified Papoulis-Gerchberg
(PG) iterative scheme is developed for reconstruction from limited-angle
projections which contain noise. The proposed algorithm has two iterative
update processes: one is the extrapolation of unknown data, and the other is
the modification of the known noisy observation data. The algorithm
introduces scaling factors to control the two processes, respectively. The
convergence of the algorithm is guaranteed, and the method of choosing the
scaling factors is given under energy constraints. The simulation results
demonstrate our conclusions and indicate that the proposed algorithm
can noticeably improve the reconstruction quality.
|
1310.7473 | Connectivity of confined 3D Networks with Anisotropically Radiating
Nodes | cs.IT cond-mat.dis-nn cond-mat.stat-mech cs.NI math.IT | Nodes in ad hoc networks with randomly oriented directional antenna patterns
typically have fewer short links and more long links which can bridge together
otherwise isolated subnetworks. This network feature is known to improve
overall connectivity in 2D random networks operating at low channel path loss.
To this end, we advance recently established results to obtain analytic
expressions for the mean degree of 3D networks for simple but practical
anisotropic gain profiles, including those of patch, dipole and end-fire array
antennas. Our analysis reveals that for homogeneous systems (i.e. neglecting
boundary effects) directional radiation patterns are superior to the isotropic
case only when the path loss exponent is less than the spatial dimension.
Moreover, we establish that ad hoc networks utilizing directional transmit and
isotropic receive antennas (or vice versa) are always sub-optimally connected
regardless of the environment path loss. We extend our analysis to investigate
boundary effects in inhomogeneous systems, and study the geometrical reasons
why directional radiating nodes are at a disadvantage to isotropic ones.
Finally, we discuss multi-directional gain patterns consisting of many equally
spaced lobes which could be used to mitigate boundary effects and improve
overall network connectivity.
|
1310.7525 | Coding theorems for compound problems via quantum R\'enyi divergences | quant-ph cs.IT math-ph math.IT math.MP | Recently, a new notion of quantum R\'enyi divergences has been introduced by
M\"uller-Lennert, Dupuis, Szehr, Fehr and Tomamichel, J.Math.Phys. 54:122203,
(2013), and Wilde, Winter, Yang, Commun.Math.Phys. 331:593--622, (2014), that
has found a number of applications in strong converse theorems. Here we show
that these new R\'enyi divergences are also useful tools to obtain coding
theorems in the direct domain of various problems. We demonstrate this by
giving new and considerably simplified proofs for the achievability parts of
Stein's lemma with composite null hypothesis, universal state compression, and
the classical capacity of compound classical-quantum channels, based on
single-shot error bounds already available in the literature, and simple
properties of the quantum R\'enyi divergences. The novelty of our proofs is
that the composite/compound coding theorems can be almost directly obtained
from the single-shot error bounds, with essentially the same effort as for the
case of simple null-hypothesis/single source/single channel.
|
1310.7529 | Successive Nonnegative Projection Algorithm for Robust Nonnegative Blind
Source Separation | stat.ML cs.LG math.NA math.OC | In this paper, we propose a new fast and robust recursive algorithm for
near-separable nonnegative matrix factorization, a particular nonnegative blind
source separation problem. This algorithm, which we refer to as the successive
nonnegative projection algorithm (SNPA), is closely related to the popular
successive projection algorithm (SPA), but takes advantage of the nonnegativity
constraint in the decomposition. We prove that SNPA is more robust than SPA and
can be applied to a broader class of nonnegative matrices. This is illustrated
on some synthetic data sets, and on a real-world hyperspectral image.
|
1310.7532 | Matchmaker, Matchmaker, Make Me a Match: Migration of Populations via
Marriages in the Past | physics.soc-ph cond-mat.dis-nn cs.CE nlin.AO q-bio.PE | The study of human mobility is both of fundamental importance and of great
potential value. For example, it can be leveraged to facilitate efficient city
planning and improve prevention strategies when faced with epidemics. The
newfound wealth of rich sources of data---including banknote flows, mobile
phone records, and transportation data---has led to an explosion of attempts to
characterize modern human mobility. Unfortunately, the dearth of comparable
historical data makes it much more difficult to study human mobility patterns
from the past. In this paper, we present an analysis of long-term human
migration, which is important for processes such as urbanization and the spread
of ideas. We demonstrate that the data record from Korean family books (called
"jokbo") can be used to estimate migration patterns via marriages from the past
750 years. We apply two generative models of long-term human mobility to
quantify the relevance of geographical information to human marriage records in
the data, and we find that the wide variety in the geographical distributions
of the clans poses interesting challenges for the direct application of these
models. Using the different geographical distributions of clans, we quantify
the "ergodicity" of clans in terms of how widely and uniformly they have spread
across Korea, and we compare these results to those obtained using surname data
from the Czech Republic. To examine population flow in more detail, we also
construct and examine a population-flow network between regions. Based on the
correlation between ergodicity and migration in Korea, we identify two
different types of migration patterns: diffusive and convective. We expect the
analysis of diffusive versus convective effects in population flows to be
widely applicable to the study of mobility and migration patterns across
different cultures.
|
1310.7536 | New Constructions of Codes for Asymmetric Channels via Concatenation | cs.IT math.IT | We present new constructions of codes for asymmetric channels for both binary
and nonbinary alphabets, based on methods of generalized code concatenation.
For the binary asymmetric channel, our methods construct nonlinear
single-error-correcting codes from ternary outer codes. We show that some of
the Varshamov-Tenengol'ts-Constantin-Rao codes, a class of binary nonlinear
codes for this channel, have a nice structure when viewed as ternary codes. In
many cases, our ternary construction yields even better codes. For the
nonbinary asymmetric channel, our methods construct linear codes for many
lengths and distances which are superior to the linear codes of the same length
capable of correcting the same number of symmetric errors.
In the binary case, Varshamov has shown that almost all good linear codes for
the asymmetric channel are also good for the symmetric channel. Our results
indicate that Varshamov's argument does not extend to the nonbinary case, i.e.,
one can find better linear codes for asymmetric channels than for symmetric
ones.
|
1310.7552 | An Algorithm for Exact Super-resolution and Phase Retrieval | cs.IT cs.NA math.IT | We explore a fundamental problem of super-resolving a signal of interest from
a few measurements of its low-pass magnitudes. We propose a 2-stage tractable
algorithm that, in the absence of noise, admits perfect super-resolution of an
$r$-sparse signal from $2r^2-2r+2$ low-pass magnitude measurements. The spike
locations of the signal can assume any value over a continuous disk, without
increasing the required sample size. The proposed algorithm first employs a
conventional super-resolution algorithm (e.g. the matrix pencil approach) to
recover unlabeled sets of signal correlation coefficients, and then applies a
simple sorting algorithm to disentangle and retrieve the true parameters in a
deterministic manner. Our approach can be adapted to multi-dimensional spike
models and random Fourier sampling by replacing its first step with other
harmonic retrieval algorithms.
|
1310.7568 | Interlimb neural connection is not required for gait transition in
quadruped locomotion | q-bio.QM cs.RO cs.SY | Quadrupeds transition spontaneously to various gait patterns (e.g., walk,
trot, pace, gallop) in response to the locomotion speed. The generation of
these gait patterns has been the subject of debate for a long time. We propose
a coupled oscillator model in which the oscillators are coupled through the physical interactions of
the body. The results of this study showed that the gait pattern transitions
spontaneously to walking/trotting/pacing/bounding in a manner similar to that of
real quadruped animals when the resonating portion of the body is changed
according to the speed of leg movement. We also observed that pacing is
expressed exclusively, instead of trotting, when the physical
characteristics are changed. In addition to furthering understanding of the principles of
locomotion in living things, the coupled oscillator model proposed in this
study is expected to lead to the creation of a legged robot that can select an
energy-efficient gait and transition to it spontaneously.
|
1310.7610 | Distributed Reinforcement Learning via Gossip | cs.DC cs.AI math.OC | We consider the classical TD(0) algorithm implemented on a network of agents
wherein the agents also incorporate the updates received from neighboring
agents using a gossip-like mechanism. The combined scheme is shown to converge
for both discounted and average cost problems.
|
1310.7648 | Wireless-Powered Relays in Cooperative Communications: Time-Switching
Relaying Protocols and Throughput Analysis | cs.IT math.IT | We consider wireless-powered amplify-and-forward and decode-and-forward
relaying in cooperative communications, where an energy-constrained relay node
first harvests energy through the received radio-frequency signal from the
source and then uses the harvested energy to forward the source information to
the destination node. We propose time-switching based energy harvesting (EH)
and information transmission (IT) protocols with two modes of EH at the relay.
For continuous time EH, the EH time can be any percentage of the total
transmission block time. For discrete time EH, the whole transmission block is
either used for EH or IT. The proposed protocols are attractive because they do
not require channel state information at the transmitter side and enable relay
transmission with preset fixed transmission power. We derive analytical
expressions of the achievable throughput for the proposed protocols. The
derived expressions are verified by comparison with simulations and allow the
system performance to be determined as a function of the system parameters.
Finally, we show that the proposed protocols outperform the existing fixed time
duration EH protocols in the literature, since they intelligently track the
level of the harvested energy to switch between EH and IT in an online fashion,
allowing efficient use of resources.
|
1310.7652 | Connectivity and Giant Component of Stochastic Kronecker Graphs | math.CO cs.DM cs.SI | Stochastic Kronecker graphs are a model for complex networks where each edge
is present independently according to the Kronecker (tensor) product of a fixed
k-by-k matrix P with entries in [0,1]. We develop a novel correspondence
between the adjacencies in a general stochastic Kronecker graph and the action
of a fixed Markov chain. Using this correspondence we are able to generalize
the arguments of Horn and Radcliffe on the emergence of the giant component
from the case where k = 2 to arbitrary k. We are also able to use this
correspondence to completely analyze the connectivity of a general stochastic
Kronecker graph.
|
1310.7665 | Counting Triangles in Real-World Graph Streams: Dealing with Repeated
Edges and Time Windows | cs.DS cs.DM cs.SI | Real-world graphs often manifest as a massive temporal stream of edges. The
need for real-time analysis of such large graph streams has led to progress on
low memory, one-pass streaming graph algorithms. These algorithms were designed
for simple graphs, assuming an edge is not repeated in the stream. Real graph
streams, however, are almost always multigraphs, i.e., they contain many
duplicate edges. The assumption of no repeated edges requires an extra pass
*storing all the edges* just for deduplication, which defeats the purpose of
small memory algorithms.
We describe an algorithm for estimating the triangle count of a multigraph
stream of edges. We show that all previous streaming algorithms for triangle
counting fail for multigraph streams, despite their impressive accuracies for
simple graphs. The bias created by duplicate edges is a major problem, and
leads these algorithms astray. Our algorithm avoids these biases through
careful debiasing strategies and has provable theoretical guarantees and
excellent empirical performance. Our algorithm builds on the previously
introduced wedge sampling methodology.
Another challenge in analyzing temporal graphs is finding the right temporal
window size. Our algorithm seamlessly handles multiple time windows, and does
not require committing to any window size(s) a priori. We apply our algorithm
to discover fascinating transitivity and triangle trends in real-world graph
streams.
|
1310.7679 | Structured Optimal Transmission Control in Network-coded Two-way Relay
Channels | cs.SY stat.ML | This paper considers a transmission control problem in network-coded two-way
relay channels (NC-TWRC), where the relay buffers random symbol arrivals from
two users, and the channels are assumed to be fading. The problem is modeled by
a discounted infinite horizon Markov decision process (MDP). The objective is
to find a transmission control policy that minimizes the symbol delay, buffer
overflow and transmission power consumption and error rate simultaneously and
in the long run. By using the concepts of submodularity, multimodularity and
L-natural convexity, we study the structure of the optimal policy searched by
dynamic programming (DP) algorithm. We show that the optimal transmission
policy is nondecreasing in queue occupancies and/or channel states under
certain conditions such as the chosen values of parameters in the MDP model,
channel modeling method, modulation scheme and the preservation of stochastic
dominance in the transitions of system states. The results derived in this
paper can be used to relieve the high complexity of DP and facilitate real-time
control.
|
1310.7682 | Contextualizing concepts using a mathematical generalization of the
quantum formalism | q-bio.NC cs.AI quant-ph | We outline the rationale and preliminary results of using the state context
property (SCOP) formalism, originally developed as a generalization of quantum
mechanics, to describe the contextual manner in which concepts are evoked, used
and combined to generate meaning. The quantum formalism was developed to cope
with problems arising in the description of (i) the measurement process, and
(ii) the generation of new states with new properties when particles become
entangled. Similar problems arising with concepts motivated the formal
treatment introduced here. Concepts are viewed not as fixed representations,
but entities existing in states of potentiality that require interaction with a
context (a stimulus or another concept) to 'collapse' to an instantiated form
(e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus
situation plays the role of the measurement in physics, acting as context that
induces a change of the cognitive state from superposition state to collapsed
state. The collapsed state is more likely to consist of a conjunction of
concepts for associative than analytic thought because more stimulus or concept
properties take part in the collapse. We provide two contextual measures of
conceptual distance, one using collapse probabilities and the other weighted
properties, and show how they can be applied to conjunctions using the pet fish
problem.
|
1310.7729 | Optimal cooperative motion planning for vehicles at intersections | cs.SY | We consider the problem of cooperative intersection management. It arises in
automated transportation systems for people or goods, but also in multi-robot
environments. Many solutions have therefore been proposed to avoid collisions.
The main problem is to determine collision-free but also deadlock-free and
optimal algorithms. Even with a simple definition of optimality, finding a
global optimum is a problem of high complexity, especially for open systems
involving a large and varying number of vehicles. This paper advocates the use
of a mathematical framework based on a decomposition of the problem into a
continuous optimization part and a scheduling problem. The paper emphasizes
connections between the usual notion of vehicle priority and an abstract
formulation of the scheduling problem in the coordination space. A constructive
locally optimal algorithm is proposed. More generally, this work opens the way for
new computationally efficient cooperative motion planning algorithms.
|
1310.7769 | Temporal stability in human interaction networks | cs.SI physics.soc-ph | This paper reports on stable (or invariant) properties of human interaction
networks, with benchmarks derived from public email lists. Activity, recognized
through messages sent, was observed along time and topology in snapshots of a
timeline and at different scales. Our analysis shows that activity is
practically the same for all networks across timescales ranging from seconds to
months. The principal components of the participants in the topological metrics
space remain practically unchanged as different sets of messages are
considered. The activity of participants follows the expected scale-free trace,
thus yielding the hub, intermediary and peripheral classes of vertices by
comparison against the Erd\"os-R\'enyi model. The relative sizes of these three
sectors are essentially the same for all email lists and the same along time.
Typically, $<15\%$ of the vertices are hubs, 15-45\% are intermediary and
$>45\%$ are peripheral vertices. Similar results for the distribution of
participants in the three sectors and for the relative importance of the
topological metrics were obtained for 12 additional networks from Facebook,
Twitter and ParticipaBR. These properties are consistent with the literature
and may be general for human interaction networks, which has important
implications for establishing a typology of participants based on quantitative
criteria.
|
1310.7780 | The Information Geometry of Mirror Descent | stat.ML cs.LG | Information geometry applies concepts in differential geometry to probability
and statistics and is especially useful for parameter estimation in exponential
families where parameters are known to lie on a Riemannian manifold.
Connections between the geometric properties of the induced manifold and
statistical properties of the estimation problem are well-established. However
developing first-order methods that scale to larger problems has been less of a
focus in the information geometry community. The best known algorithm that
incorporates manifold structure is the second-order natural gradient descent
algorithm introduced by Amari. On the other hand, stochastic approximation
methods have led to the development of first-order methods for optimizing noisy
objective functions. A recent generalization of the Robbins-Monro algorithm
known as mirror descent, developed by Nemirovski and Yudin is a first order
method that induces non-Euclidean geometries. However current analysis of
mirror descent does not precisely characterize the induced non-Euclidean
geometry nor does it consider performance in terms of statistical relative
efficiency. In this paper, we prove that mirror descent induced by Bregman
divergences is equivalent to the natural gradient descent algorithm on the dual
Riemannian manifold. Using this equivalence, it follows that (1) mirror descent
is the steepest descent direction along the Riemannian manifold of the
exponential family; (2) mirror descent with log-likelihood loss applied to
parameter estimation in exponential families asymptotically achieves the
classical Cram\'er-Rao lower bound and (3) natural gradient descent for
manifolds corresponding to exponential families can be implemented as a
first-order method through mirror descent.
|
1310.7782 | Individual Biases, Cultural Evolution, and the Statistical Nature of
Language Universals: The Case of Colour Naming Systems | physics.soc-ph cs.CL cs.MA q-bio.PE | Language universals have long been attributed to an innate Universal Grammar.
An alternative explanation states that linguistic universals emerged
independently in every language in response to shared cognitive or perceptual
biases. A computational model has recently shown how this could be the case,
focusing on the paradigmatic example of the universal properties of colour
naming patterns, and producing results in quantitative agreement with the
experimental data. Here we investigate the role of an individual perceptual
bias in the framework of the model. We study how, and to what extent, the
structure of the bias influences the corresponding linguistic universal
patterns. We show that the cultural history of a group of speakers introduces
population-specific constraints that act against the pressure for uniformity
arising from the individual bias, and we clarify the interplay between these
two forces.
|
1310.7794 | Energy Efficiency Optimization in Relay-Assisted MIMO Systems with
Perfect and Statistical CSI | cs.IT math.IT math.OC | A framework for energy-efficient resource allocation in a single-user,
amplify-and-forward relay-assisted MIMO system is devised in this paper.
Previous results in this area have focused on rate maximization or sum power
minimization problems, whereas fewer results are available when bits/Joule
energy efficiency (EE) optimization is the goal. The performance metric to
optimize is the ratio between the system's achievable rate and the total
consumed power. The optimization is carried out with respect to the source and
relay precoding matrices, subject to QoS and power constraints. Such a
challenging non-convex problem is tackled by means of fractional programming
and alternating maximization algorithms, for various CSI assumptions at the
source and relay. In particular the scenarios of perfect CSI and those of
statistical CSI for either the source-relay or the relay-destination channel
are addressed. Moreover, sufficient conditions for beamforming optimality are
derived, which is useful in simplifying the system design. Numerical results
are provided to corroborate the validity of the theoretical findings.
|
1310.7795 | An Unsupervised Feature Learning Approach to Improve Automatic Incident
Detection | cs.LG | Sophisticated automatic incident detection (AID) technology plays a key role
in contemporary transportation systems. Although many papers have been devoted
to studying incident classification algorithms, few have investigated how to
enhance the feature representation of incidents to improve AID performance. In
this paper,
we propose to use an unsupervised feature learning algorithm to generate higher
level features to represent incidents. We used real incident data in the
experiments and found that an effective feature mapping function can be learnt
from data across the test sites. With the enhanced features, the detection
rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are
significantly improved in all of the three representative cases. This approach
also provides an alternative way to reduce the amount of labeled data, which is
expensive to obtain, required to train better incident classifiers, since the
feature learning is unsupervised.
|
1310.7799 | Backhaul Limited Asymmetric Cooperation for MIMO Cellular Networks via
Semidefinite Relaxation | cs.IT math.IT | Multicell cooperation has recently attracted tremendous attention because of
its ability to eliminate intercell interference and increase spectral
efficiency. However, the enormous amount of information being exchanged,
including channel state information and user data, over backhaul links may
deteriorate the network performance in a realistic system. This paper adopts a
backhaul cost metric that considers the number of active directional
cooperation links, which gives a first order measurement of the backhaul
loading required in asymmetric Multiple-Input Multiple-Output (MIMO)
cooperation. We focus on a downlink scenario for multi-antenna base stations
and single-antenna mobile stations. The design problem is minimizing the number
of active directional cooperation links and jointly optimizing the beamforming
vectors among the cooperative BSs subject to
signal-to-interference-and-noise-ratio (SINR) constraints at the mobile
station. This problem is non-convex and solving it requires combinatorial
search. A practical algorithm based on smooth approximation and semidefinite
relaxation is proposed to solve the combinatorial problem efficiently. We show
that semidefinite relaxation is tight with probability 1 in our algorithm and
stationary convergence is guaranteed. Simulation results show that the savings
in backhaul cost and power consumption are notable compared with several
baseline schemes, and the effectiveness of the algorithm is demonstrated.
|
1310.7813 | Smoothness-Constrained Image Recovery from Block-Based Random
Projections | cs.CV cs.IT math.IT | In this paper we address the problem of visual quality of images
reconstructed from block-wise random projections. Independent reconstruction of
the blocks can severely affect visual quality, by displaying artifacts along
block borders. We propose a method to enforce smoothness across block borders
by modifying the sensing and reconstruction process so as to employ partially
overlapping blocks. The proposed algorithm accomplishes this by computing a
fast preview from the blocks, whose purpose is twofold. On the one hand, it
allows enforcing a set of constraints to drive the reconstruction algorithm
towards a smooth solution, imposing the similarity of block borders. On the
other hand, the preview is used as a predictor of the entire block, allowing
recovery of the prediction error only. The quality improvement over the result
of independent
reconstruction can be easily assessed both visually and in terms of PSNR and
SSIM index.
|
1310.7828 | A Complete Parameterized Complexity Analysis of Bounded Planning | cs.AI cs.DS | The propositional planning problem is a notoriously difficult computational
problem, which remains hard even under strong syntactical and structural
restrictions. Given its difficulty it becomes natural to study planning in the
context of parameterized complexity. In this paper we continue the work
initiated by Downey, Fellows and Stege on the parameterized complexity of
planning with respect to the parameter "length of the solution plan." We
provide a complete classification of the parameterized complexity of the
planning problem under two of the most prominent syntactical restrictions,
i.e., the so called PUBS restrictions introduced by Baeckstroem and Nebel and
restrictions on the number of preconditions and effects as introduced by
Bylander. We also determine which of the considered fixed-parameter tractable
problems admit a polynomial kernel and which don't.
|
1310.7829 | About Summarization in Large Fuzzy Databases | cs.DB cs.IR | Motivated by the increased need for modeling fuzzy data and by the success of
systems for exact summary generation from data, we propose in this paper a new
approach for generating summaries from fuzzy data, called Fuzzy-SaintEtiQ.
This approach extends the SaintEtiQ model to support fuzzy data. It offers the
following optimizations: 1) minimization of the expert risk; 2) construction of
a more detailed and more precise summary hierarchy; and 3) cooperation with the
user by providing fuzzy summaries at different hierarchical levels.
|
1310.7839 | Achieving maximum energy-efficiency in multi-relay OFDMA cellular
networks: a fractional programming approach | cs.IT math.IT | In this paper, the joint power and subcarrier allocation problem is solved in
the context of maximizing the energy-efficiency (EE) of a multi-user,
multi-relay orthogonal frequency division multiple access (OFDMA) cellular
network, where the objective function is formulated as the ratio of the
spectral-efficiency (SE) over the total power dissipation. It is proven that
the fractional programming problem considered is quasi-concave so that
Dinkelbach's method may be employed for finding the optimal solution at a low
complexity. This method solves the above-mentioned master problem by solving a
series of parameterized concave secondary problems. These secondary problems
are solved using a dual decomposition approach, where each secondary problem is
further decomposed into a number of similar subproblems. The impact of various
system parameters on the attainable EE and SE of the system employing both EE
maximization (EEM) and SE maximization (SEM) algorithms is characterized. In
particular, it is observed that increasing the number of relays for a range of
cell sizes, although it marginally increases the attainable SE, reduces the EE
significantly. It is noted that the highest SE and EE are achieved when the
relays are placed closer to the BS to take advantage of the resultant
line-of-sight link. Furthermore, increasing both the number of available
subcarriers and the number of active user equipment (UE) increases both the EE
and the total SE of the system as a benefit of the increased frequency and
multi-user diversity, respectively. Finally, it is demonstrated that as
expected, increasing the available power tends to improve the SE, when using
the SEM algorithm. By contrast, given a sufficiently high available power, the
EEM algorithm attains the maximum achievable EE and a suboptimal SE.
|
1310.7852 | Conditional Entropy based User Selection for Multiuser MIMO Systems | cs.IT math.IT | We consider the problem of user subset selection for maximizing the sum rate
of downlink multi-user MIMO systems. The brute-force search for the optimal
user set becomes impractical as the total number of users in a cell increases.
We propose a user selection algorithm based on conditional differential
entropy. We apply the proposed algorithm to the block diagonalization scheme.
Simulation results show that the proposed conditional entropy based algorithm
offers a better alternative to the existing user selection algorithms.
Furthermore, in terms of sum rate, the solution obtained by the proposed
algorithm turns out to be close to the optimal solution with significantly
lower computational complexity than brute-force search.
|
1310.7868 | Automatic Classification of Variable Stars in Catalogs with missing data | astro-ph.IM cs.LG stat.ML | We present an automatic classification method for astronomical catalogs with
missing data. We use Bayesian networks, a probabilistic graphical model that
allows us to perform inference to predict missing values given observed data
and dependency relationships between variables. To learn a Bayesian network
from incomplete data, we use an iterative algorithm that utilises sampling
methods and expectation maximization to estimate the distributions and
probabilistic dependencies of variables from data with missing values. To test
our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and
one complete catalog (MACHO). We examine how classification accuracy changes
when information from missing data catalogs is included, how our method
compares to traditional missing data approaches and at what computational cost.
Integrating these catalogs with missing data, we find that classification of
variable objects improves by a few percent, and by 15% for quasar detection,
while keeping the computational cost the same.
|
1310.7935 | The Unreasonable Fundamental Incertitudes Behind Bitcoin Mining | cs.CR cs.CE cs.SI | Bitcoin is a "crypto currency", a decentralized electronic payment scheme
based on cryptography which has recently gained excessive popularity.
Scientific research on bitcoin is less abundant. A paper at Financial
Cryptography 2012 conference explains that it is a system which "uses no fancy
cryptography", and is "by no means perfect". It depends on a well-known
cryptographic standard SHA-256. In this paper we revisit the cryptographic
process which allows one to make money by producing bitcoins. We reformulate
this problem as a Constrained Input Small Output (CISO) hashing problem and
reduce the problem to a pure block cipher problem. We estimate the speed of
this process and we show that the cost of this process is less than it seems
and it depends on a certain cryptographic constant which we estimated to be at
most 1.86. These optimizations enable bitcoin miners to save tens of millions
of dollars per year in electricity bills. Miners who set up mining operations
face many economic incertitudes such as high volatility. In this paper we point
out that there are fundamental incertitudes which depend very strongly on the
bitcoin specification. The energy efficiency of bitcoin miners has already
been improved by a factor of about 10,000, and we claim that further
improvements are inevitable. Better technology is bound to be invented, be it
quantum miners. More importantly, the specification is likely to change. A
major change was proposed in May 2013 at the Bitcoin conference in San Diego
by Dan Kaminsky. However, any sort of change could be flatly rejected by the
community, which has heavily invested in mining with the current technology.
Another question is the reward halving scheme in bitcoin. The
current bitcoin specification mandates a strong 4-year cyclic property. We find
this property totally unreasonable and harmful and explain why and how it needs
to be changed.
|
1310.7950 | Technical Report: Distribution Temporal Logic: Combining Correctness
with Quality of Estimation | cs.SY cs.AI cs.LO | We present a new temporal logic called Distribution Temporal Logic (DTL)
defined over predicates of belief states and hidden states of partially
observable systems. DTL can express properties involving uncertainty and
likelihood that cannot be described by existing logics. A co-safe formulation
of DTL is defined and algorithmic procedures are given for monitoring
executions of a partially observable Markov decision process with respect to
such formulae. A simulation case study of a rescue robotics application
outlines our approach.
|
1310.7951 | IRM4MLS: the influence reaction model for multi-level simulation | cs.MA | In this paper, a meta-model called IRM4MLS, which aims to be a generic ground
for specifying and executing multi-level agent-based models, is presented. It relies
on the influence/reaction principle and more specifically on IRM4S. Simulation
models for IRM4MLS are defined. The capabilities and possible extensions of the
meta-model are discussed.
|
1310.7957 | A Random Walk Model for Item Recommendation in Folksonomies | cs.IR | Social tagging, as a novel approach to information organization and
discovery, has been widely adopted in many Web2.0 applications. The tags
provide a new type of information that can be exploited by recommender systems.
Nevertheless, the sparsity of ternary <user, tag, item> interaction data limits
the performance of tag-based collaborative filtering. This paper proposes a
random-walk-based algorithm to deal with the sparsity problem in social tagging
data, which captures the potential transitive associations between users and
items through their interaction with tags. In particular, two smoothing
strategies are presented from both the user-centric and item-centric
perspectives. Experiments on real-world data sets empirically demonstrate the
efficacy of the proposed algorithm.
|
1310.7961 | Evaluation the efficiency of artificial bee colony and the firefly
algorithm in solving the continuous optimization problem | cs.NE cs.AI | Meta-heuristic algorithms are now widely used for solving continuous
optimization problems. In this paper the Artificial Bee Colony (ABC) algorithm
and the Firefly Algorithm (FA) are evaluated. To demonstrate the efficiency of
the algorithms and to analyze them further, they are tested on continuous
optimization problems characterized by a vast range of candidate answers and
closely spaced optima. The paper thus presents the efficiency of the ABC
algorithm and FA for solving continuous optimization problems, and studies the
algorithms in terms of accuracy in reaching the optimal solution, running
time, and reliability of the obtained optimal points.
|
1310.7981 | Toward a Formal Model of the Shifting Relationship between Concepts and
Contexts during Associative Thought | q-bio.NC cs.AI | The quantum inspired State Context Property (SCOP) theory of concepts is
unique amongst theories of concepts in offering a means of incorporating the fact that
for each concept in each different context there are an unlimited number of
exemplars, or states, of varying degrees of typicality. Working with data from
a study in which participants were asked to rate the typicality of exemplars of
a concept for different contexts, and introducing an exemplar typicality
threshold, we built a SCOP model of how states of a concept arise differently
in associative versus analytic (or divergent and convergent) modes of thought.
Introducing measures of state robustness and context relevance, we show that by
varying the threshold, the relevance of different contexts changes, and
seemingly atypical states can become typical. The formalism provides a pivotal
step toward a formal explanation of creative thought processes.
|
1310.7991 | Learning Sparsely Used Overcomplete Dictionaries via Alternating
Minimization | cs.LG math.OC stat.ML | We consider the problem of sparse coding, where each sample consists of a
sparse linear combination of a set of dictionary atoms, and the task is to
learn both the dictionary elements and the mixing coefficients. Alternating
minimization is a popular heuristic for sparse coding, where the dictionary and
the coefficients are estimated in alternate steps, keeping the other fixed.
Typically, the coefficients are estimated via $\ell_1$ minimization, keeping
the dictionary fixed, and the dictionary is estimated through least squares,
keeping the coefficients fixed. In this paper, we establish local linear
convergence for this variant of alternating minimization and establish that the
basin of attraction for the global optimum (corresponding to the true
dictionary and the coefficients) is $O(1/s^2)$, where $s$ is the sparsity
level in each sample and the dictionary satisfies RIP. Combined with the recent
results of approximate dictionary estimation, this yields provable guarantees
for exact recovery of both the dictionary elements and the coefficients, when
the dictionary elements are incoherent.
|
1310.7994 | Necessary and Sufficient Conditions for Novel Word Detection in
Separable Topic Models | cs.LG cs.IR stat.ML | The simplicial condition and other stronger conditions that imply it have
recently played a central role in developing polynomial time algorithms with
provable asymptotic consistency and sample complexity guarantees for topic
estimation in separable topic models. Of these algorithms, those that rely
solely on the simplicial condition are impractical while the practical ones
need stronger conditions. In this paper, we demonstrate, for the first time,
that the simplicial condition is a fundamental, algorithm-independent,
information-theoretic necessary condition for consistent separable topic
estimation. Furthermore, under solely the simplicial condition, we present a
practical quadratic-complexity algorithm based on random projections which
consistently detects all novel words of all topics using only up to
second-order empirical word moments. This algorithm is amenable to distributed
implementation making it attractive for 'big-data' scenarios involving a
network of large distributed databases.
|
1310.8004 | Online Ensemble Learning for Imbalanced Data Streams | cs.LG stat.ML | While both cost-sensitive learning and online learning have been studied
extensively, the effort in simultaneously dealing with these two issues is
limited. Aiming at this challenging task, a novel learning framework is proposed
in this paper. The key idea is based on the fusion of online ensemble
algorithms and the state-of-the-art batch-mode cost-sensitive bagging/boosting
algorithms. Within this framework, two separately developed research areas are
bridged together, and a batch of theoretically sound online cost-sensitive
bagging and online cost-sensitive boosting algorithms are first proposed.
Unlike other online cost-sensitive learning algorithms lacking theoretical
analysis of asymptotic properties, the convergence of the proposed algorithms
is guaranteed under certain conditions, and the experimental evidence with
benchmark data sets also validates the effectiveness and efficiency of the
proposed methods.
|
1310.8038 | Community Structures Are Definable in Networks: A Structural Theory of
Networks | cs.SI physics.soc-ph | We found that neither randomness in the ER model nor the preferential
attachment in the PA model is the mechanism of community structures of
networks, that community structures are universal in real networks, that
community structures are definable in networks, that communities are
interpretable in networks, and that homophyly is the mechanism of community
structures, yielding a structural theory of networks. We proposed the notions of
entropy- and conductance-community structures. It was shown that the two
definitions of the entropy- and conductance-community structures and the notion
of modularity proposed by physicists are all equivalent in defining community
structures of networks, that neither randomness in the ER model nor
preferential attachment in the PA model is the mechanism of community
structures of networks, and that the existence of community structures is a
universal phenomenon in real networks. This poses a fundamental question: What
are the mechanisms of community structures of real networks? To answer this
question, we proposed a homophyly model of networks. It was shown that networks
of our model satisfy a series of new topological, probabilistic and
combinatorial principles, including a fundamental principle, a community
structure principle, a degree priority principle, a widths principle, an
inclusion and infection principle, a king node principle and a predicting
principle etc. The new principles provide a firm foundation for a structural
theory of networks. Our homophyly model demonstrates that homophyly is the
underlying mechanism of community structures of networks, that nodes of the
same community share common features, that power law and small world property
are never obstacles of the existence of community structures in networks, that
community structures are {\it definable} in networks, and that (natural)
communities are {\it interpretable}.
|
1310.8040 | Homophyly and Randomness Resist Cascading Failure in Networks | cs.SI physics.soc-ph | The universal properties of power law and small world phenomenon of networks
seem to be unavoidable obstacles to the security of networking systems.
Existing models never give secure networks. We found that the essence of
security is security against cascading failures of attacks, and that nature
achieves security by mechanisms. We proposed a model of networks based on the natural
mechanisms of homophyly, randomness and preferential attachment. It was shown
that homophyly creates a community structure, that homophyly and randomness
introduce ordering in the networks, and that homophyly creates inclusiveness
and introduces rules of infections. These principles allow us to provably
guarantee the security of the networks against any attacks. Our results show
that security can be achieved provably by structures, that there is a tradeoff
between the roles of structures and of thresholds in security engineering, and
that power law and small world property are never obstacles for security of
networks.
|
1310.8057 | User Effects in Beam-Space MIMO | cs.IT math.IT | The performance and design of the novel single-RF-chain beam-space MIMO
antenna concept is evaluated for the first time in the presence of the user.
First, the variations of different performance parameters are evaluated when
placing a beam-space MIMO antenna in close proximity to the user body in
several typical operating scenarios. In addition to the typical degradation of
conventional antennas in terms of radiation efficiency and impedance matching,
it is observed that the user body corrupts the power balance and the
orthogonality of the beam-space MIMO basis. However, capacity analyses show
that throughput reduction mainly stems from the absorption in user body tissues
rather than from the power imbalance and the correlation of the basis. These
results confirm that the beam-space MIMO concept, so far only demonstrated in
the absence of external perturbation, still performs very well in typical human
body interaction scenarios.
|
1310.8059 | Description and Evaluation of Semantic Similarity Measures Approaches | cs.CL | In recent years, semantic similarity measures have attracted great interest
in the Semantic Web and Natural Language Processing (NLP) communities. Several
similarity measures have been developed, given the availability of structured
knowledge representations offered by ontologies and corpora, which enable the
semantic interpretation of terms. Semantic similarity measures compute the
similarity between concepts/terms included in knowledge sources in order to
perform estimations. This paper discusses the existing semantic similarity
methods based on structure, information content and feature approaches.
Additionally, we present a critical evaluation of several categories of
semantic similarity approaches based on two standard benchmarks. The aim of
this paper is to provide an efficient evaluation of all these measures, which
helps researchers and practitioners to select the measure that best fits their
requirements.
|
1310.8067 | Convergence Constrained Multiuser Transmitter-Receiver Optimization in
Single Carrier FDMA | cs.IT math.IT | Convergence constrained power allocation (CCPA) in single carrier multiuser
(MU) single-input multiple-output (SIMO) systems with turbo equalization is
considered in this paper. To exploit the full benefit of the iterative
receiver, its convergence properties also need to be considered at the
transmitter side. The proposed scheme can guarantee that the desired quality of
service (QoS) is achieved after a sufficient number of iterations. We propose two
different successive convex approximations for solving the non-convex power
minimization problem subject to user specific QoS constraints. The results of
extrinsic information transfer (EXIT) chart analysis demonstrate that the
proposed CCPA scheme can achieve the design objective. Numerical results show
that the proposed schemes can achieve superior performance in terms of power
consumption as compared to linear receivers with and without precoding as well
as to the iterative receiver without precoding.
|
1310.8097 | Guaranteed Collision Detection With Toleranced Motions | cs.CG cs.RO | We present a method for guaranteed collision detection with toleranced
motions. The basic idea is to consider the motion as a curve in the
12-dimensional space of affine displacements, endowed with an object-oriented
Euclidean metric, and cover it with balls. The associated orbits of points,
lines, planes and polygons have particularly simple shapes that lend themselves
well to exact and fast collision queries. We present formulas for elementary
collision tests with these orbit shapes and we suggest an algorithm, based on
motion subdivision and computation of bounding balls, that can give a
no-collision guarantee. It allows a robust and efficient implementation and
parallelization. Using several examples, we explore the asymptotic behavior
of the algorithm and compare different implementation strategies.
|
1310.8107 | Scalable Frames and Convex Geometry | math.NA cs.IT math.FA math.IT | The recently introduced and characterized scalable frames can be considered
as those frames which allow for perfect preconditioning in the sense that the
frame vectors can be rescaled to yield a tight frame. In this paper we define
$m$-scalability, a refinement of scalability based on the number of non-zero
weights used in the rescaling process, and study the connection between this
notion and elements from convex geometry. Finally, we provide results on the
topology of scalable frames. In particular, we prove that the set of scalable
frames with "small" redundancy is nowhere dense in the set of frames.
|
1310.8120 | On the Tractability of Minimal Model Computation for Some CNF Theories | cs.AI cs.LO | Designing algorithms capable of efficiently constructing minimal models of
CNFs is an important task in AI. This paper provides new results along this
research line and presents new algorithms for performing minimal model finding
and checking over positive propositional CNFs and model minimization over
propositional CNFs. An algorithmic schema, called the Generalized Elimination
Algorithm (GEA), is presented that computes a minimal model of any positive
CNF. The schema generalizes the Elimination Algorithm (EA) [BP97], which
computes a minimal model of positive head-cycle-free (HCF) CNF theories. While
the EA always runs in polynomial time in the size of the input HCF CNF, the
complexity of the GEA depends on the complexity of the specific eliminating
operator invoked therein, which may in general turn out to be exponential.
Therefore, a specific eliminating operator is defined by which the GEA
computes, in polynomial time, a minimal model for a class of CNF that strictly
includes head-elementary-set-free (HEF) CNF theories [GLL06], which form, in
their turn, a strict superset of HCF theories. Furthermore, in order to deal
with the high complexity associated with recognizing HEF theories, an
"incomplete" variant of the GEA (called IGEA) is proposed: the resulting
schema, once instantiated with an appropriate elimination operator, always
constructs a model of the input CNF, which is guaranteed to be minimal if the
input theory is HEF. In the light of the above results, the main contribution
of this work is the enlargement of the tractability frontier for the minimal
model finding, minimal model checking, and model minimization problems.
|
1310.8135 | Large-Scale Sensor Network Localization via Rigid Subnetwork
Registration | cs.NI cs.IT cs.SY math.IT math.OC | In this paper, we describe an algorithm for sensor network localization (SNL)
that proceeds by dividing the whole network into smaller subnetworks, then
localizes them in parallel using some fast and accurate algorithm, and finally
registers the localized subnetworks in a global coordinate system. We
demonstrate that this divide-and-conquer algorithm can be used to leverage
existing high-precision SNL algorithms to large-scale networks, which could
otherwise only be applied to small-to-medium sized networks. The main
contribution of this paper concerns the final registration phase. In
particular, we consider a least-squares formulation of the registration problem
(both with and without anchor constraints) and demonstrate how this otherwise
non-convex problem can be relaxed into a tractable convex program. We provide
some preliminary simulation results for large-scale SNL demonstrating that the
proposed registration algorithm (together with an accurate localization scheme)
offers a good tradeoff between run time and accuracy.
|
1310.8146 | Dynamic adjustment: an electoral method for relaxed double
proportionality | math.OC cs.SI physics.soc-ph | We describe an electoral system for distributing seats in a parliament. It
gives proportionality for the political parties and close to proportionality
for constituencies. The system suggested here is a version of the system used
in Sweden and other Nordic countries with permanent seats in each constituency
and adjustment seats to give proportionality on the national level. In the
national election of 2010 the current Swedish system failed to give
proportionality between parties. We examine here one possible cure for this
unwanted behavior. The main difference compared to the current Swedish system
is that the number of adjustment seats is not fixed, but rather dynamically
determined to be as low as possible while still ensuring proportionality between
parties.
|
1310.8185 | Dynamics of popstar record sales on phonographic market -- stochastic
model | stat.AP cs.SY math.DS physics.soc-ph | We investigate the weekly record sales of the world's 30 most popular
artists (2003-2013). The sales time series exhibit a non-trivial kind of memory
(anticorrelations, strong seasonality, and a constant autocorrelation decay
within 120 weeks). An artist's record sales are usually highest in the first
week after the premiere of a brand new record, and then decrease to fluctuate
around zero until the next album release. We model this behavior by a discrete
mean-reverting geometric jump diffusion (MRGJD) with a Markov regime switching
(MRS) mechanism between the base and promotion regimes. Through such a toy
model, which quantifies linear and nonlinear dynamical components (with
stationary and nonstationary parameter sets), we can build up evidence and
measure the local divergence of the system with collective behavior phenomena.
We find a special kind of disagreement between the model and the data around
Christmas time, due to unusual shopping behavior. Analogies to earthquakes,
product life-cycles, and energy markets are also discussed.
|
1310.8187 | SmartLoc: Sensing Landmarks Silently for Smartphone Based Metropolitan
Localization | cs.NI cs.CY cs.SY | We present \emph{SmartLoc}, a localization system that estimates the location
and the traveling distance by leveraging the low-power inertial sensors
embedded in smartphones as a supplement to GPS. To minimize the negative
impact of sensor noise, \emph{SmartLoc} exploits the intermittent strong GPS
signals and uses linear regression to build a prediction model based on the
trace estimated from the inertial sensors and the one computed from the GPS.
Furthermore, we utilize landmarks (e.g., bridges, traffic lights) detected
automatically, together with special driving patterns (e.g., turning, uphill,
and downhill) extracted from inertial sensory data, to improve the localization
accuracy when the GPS signal is weak. Our evaluations of \emph{SmartLoc} in the
city demonstrate its technical viability and a significant improvement in
localization accuracy compared with GPS and other approaches: the error is
approximately 20m for 90% of the time, while the known mean error of GPS is 42.22m.
|
1310.8220 | Prediction of highly cited papers | physics.soc-ph cs.DL cs.SI | In an article written five years ago [arXiv:0809.0522], we described a method
for predicting which scientific papers will be highly cited in the future, even
if they are currently not highly cited. Applying the method to real citation
data we made predictions about papers we believed would end up being well
cited. Here we revisit those predictions, five years on, to see how well we
did. Among the over 2000 papers in our original data set, we examine the fifty
that, by the measures of our previous study, were predicted to do best and we
find that they have indeed received substantially more citations in the
intervening years than other papers, even after controlling for the number of
prior citations. On average these top fifty papers have received 23 times as
many citations in the last five years as the average paper in the data set as a
whole, and 15 times as many as the average paper in a randomly drawn control
group that started out with the same number of citations. Applying our
prediction technique to current data, we also make new predictions of papers
that we believe will be well cited in the next few years.
|
1310.8224 | Transitive Reduction of Citation Networks | physics.soc-ph cs.DL cs.SI | In many complex networks the vertices are ordered in time, and edges
represent causal connections. We propose methods of analysing such directed
acyclic graphs taking into account the constraints of causality and
highlighting the causal structure. We illustrate our approach using citation
networks formed from academic papers, patents, and US Supreme Court verdicts.
We show how transitive reduction reveals fundamental differences in the
citation practices of different areas, how it highlights particularly
interesting work, and how it can correct for the effect that the age of a
document has on its citation count. Finally, we transitively reduce null models
of citation networks with similar degree distributions and show the difference
in degree distributions after transitive reduction to illustrate the lack of
causal structure in such models.
|
1310.8226 | Bibliometric-enhanced Information Retrieval | cs.IR cs.DL physics.soc-ph | Bibliometric techniques are not yet widely used to enhance retrieval
processes in digital libraries, although they offer value-added effects for
users. In this workshop we will explore how statistical modelling of
scholarship, such as Bradfordizing or network analysis of coauthorship networks,
can improve retrieval services for specific communities, as well as for large,
cross-domain collections. This workshop aims to raise awareness of the missing
link between information retrieval (IR) and bibliometrics/scientometrics and to
create a common ground for the incorporation of bibliometric-enhanced services
into retrieval at the digital library interface.
|
1310.8243 | Para-active learning | cs.LG stat.ML | Training examples are not all equally informative. Active learning strategies
leverage this observation in order to massively reduce the number of examples
that need to be labeled. We leverage the same observation to build a generic
strategy for parallelizing learning algorithms. This strategy is effective
because the search for informative examples is highly parallelizable and
because we show that its performance does not deteriorate when the sifting
process relies on a slightly outdated model. Parallel active learning is
particularly attractive for training nonlinear models with nonlinear
representations, because there are few practical parallel learning algorithms
for such models. We report preliminary experiments using both kernel SVMs and
SGD-trained neural networks.
|
1310.8278 | Satisfiability Modulo ODEs | cs.LO cs.SY | We study SMT problems over the reals containing ordinary differential
equations. They are important for formal verification of realistic hybrid
systems and embedded software. We develop delta-complete algorithms for SMT
formulas that are purely existentially quantified, as well as exists-forall
formulas whose universal quantification is restricted to the time variables. We
demonstrate scalability of the algorithms, as implemented in our open-source
solver dReal, on SMT benchmarks with several hundred nonlinear ODEs and
variables.
|
1310.8293 | Dimensions, Structures and Security of Networks | cs.SI physics.soc-ph | One of the main issues in modern network science is the phenomenon of
cascading failures triggered by a small number of attacks. Here we define the
dimension of a network to be the maximal number of functions or features of the
nodes of the network. We show that there exist linear networks which are
provably secure, where a network is linear if it has dimension one; that high
dimensionality of networks is the mechanism behind overlapping communities;
that overlapping communities are obstacles to network security; and that there
exists an algorithm that reduces high dimensional networks to low dimensional
ones while simultaneously preserving all the network properties and
significantly amplifying the security of the networks. Our results show that
dimension is a fundamental measure of networks, that there exist linear
networks which are provably secure, that high dimensional networks are
insecure, and that the security of networks can be amplified by reducing
dimensions.
|
1310.8294 | Community Structures Are Definable in Networks, and Universal in Real
World | cs.SI physics.soc-ph | Community detection is one of the main approaches to understanding networks
\cite{For2010}.
However, it has been a longstanding challenge to give a definition for the
community structures of networks. Here we find that community structures are
definable in networks, and are universal in the real world. We propose the
notions of entropy- and conductance-community structure ratios. We show that
the definition of modularity proposed in \cite{NG2004} and our entropy- and
conductance-community structures are equivalent in defining the community
structures of networks, that randomness in the ER model \cite{ER1960} and
preferential attachment in the PA model \cite{Bar1999} are not mechanisms of
the community structures of networks, and that the existence of community
structures is a universal phenomenon in real networks. Our results demonstrate
that community structure is a universal phenomenon in the real world that is
definable, solving the challenge of defining community structures in networks.
This progress provides a foundation for a structural theory of networks.
|
1310.8295 | Homophyly Networks -- A Structural Theory of Networks | cs.SI physics.soc-ph | A grand challenge in network science is the lack of a structural theory of
networks. The authors have shown that the existence of community structures is
a universal phenomenon in real networks, and that neither randomness nor
preferential attachment is a mechanism of the community structures of networks
\footnote{A. Li, J. Li, and Y. Pan, Community structures
are definable in networks, and universal in the real world, To appear.}. This
poses a fundamental question: What are the mechanisms of the community
structures of real networks? Here we find that homophyly is the mechanism of
community structures and the basis of a structural theory of networks. We
propose a homophyly model. We show that networks of our model satisfy a series
of new topological, probabilistic and combinatorial principles, including a
fundamental principle, a community structure principle, a degree priority
principle, a widths principle, an inclusion and infection principle, a king
node principle, and a predicting principle, etc., leading to a structural
theory of networks. Our model demonstrates that homophyly is the underlying
mechanism of the community structures of networks, that nodes of the same
community share common features, that the power law and small world properties
are not obstacles to the existence of community structures in networks, and
that community structures are definable in networks.
|
1310.8320 | Safe and Efficient Screening For Sparse Support Vector Machine | cs.LG stat.ML | Screening is an effective technique for speeding up the training process of a
sparse learning model by removing the features that are guaranteed to be
inactive in the process. In this paper, we present an efficient screening
technique for the sparse support vector machine based on variational
inequality. The technique is both efficient and safe.
|
1310.8347 | Quantum Imaging of High-Dimensional Hilbert Spaces with Radon Transform | quant-ph cs.IT math.IT | High-dimensional Hilbert spaces possess large information encoding and
transmission capabilities. Characterizing exactly the real potential of
high-dimensional entangled systems is a cornerstone of tomography and quantum
imaging. The accuracy of the measurement apparatus and devices used in quantum
imaging is physically limited, which leaves no room for further improvement.
To extend the possibilities, we introduce a post-processing method for quantum
imaging that is based on the Radon transform and the projection-slice theorem.
The proposed solution leads to an enhanced precision and a deeper
parameterization of the information conveying capabilities of high-dimensional
Hilbert spaces. We demonstrate the method for the analysis of high-dimensional
position-momentum photonic entanglement. We show that the entropic separability
bound in terms of standard deviations is violated considerably more strongly in
comparison to the standard setting and current data processing. The results
indicate that the possibilities of the quantum imaging of high-dimensional
Hilbert spaces can be extended by applying appropriate calculations in the
post-processing phase.
|
1310.8369 | On the inverses of some classes of permutations of finite fields | math.NT cs.IT math.IT | We study the compositional inverses of some general classes of permutation
polynomials over finite fields. We show that we can write these inverses in
terms of the inverses of two other polynomials bijecting subspaces of the
finite field, where one of these is a linearized polynomial. In some cases we
are able to explicitly obtain these inverses, thus obtaining the compositional
inverse of the permutation in question. In addition we show how to compute a
linearized polynomial inducing the inverse map over subspaces on which a
prescribed linearized polynomial induces a bijection. We also obtain the
explicit compositional inverses of two classes of permutation polynomials
generalizing those whose compositional inverses were recently obtained in [22]
and [24], respectively.
|
1310.8387 | Density-based and transport-based core-periphery structures in networks | physics.soc-ph cond-mat.dis-nn cs.SI q-bio.OT | Networks often possess mesoscale structures, and studying them can yield
insights into both structure and function. It is most common to study community
structure, but numerous other types of mesoscale structures also exist. In this
paper, we examine core-periphery structures based on both density and
transport. In such structures, core network components are well-connected both
among themselves and to peripheral components, which are not well-connected to
anything. We examine core-periphery structures in a wide range of examples of
transportation, social, and financial networks---including road networks in
large urban areas, a rabbit warren, a dolphin social network, a European
interbank network, and a migration network between counties in the United
States. We illustrate that a recently developed transport-based notion of node
coreness is very useful for characterizing transportation networks. We also
generalize this notion to examine core versus peripheral edges, and we show
that the resulting diagnostic is also useful for transportation networks. To
examine the properties of transportation networks further, we develop a family
of generative models of roadlike networks. We illustrate the effect of the
dimensionality of the embedding space on transportation networks, and we
demonstrate that the correlations between different measures of coreness can be
very different for different types of networks.
|