id | title | categories | abstract |
|---|---|---|---|
1212.3530 | A Multi-Orientation Analysis Approach to Retinal Vessel Tracking | cs.CV | This paper presents a method for retinal vasculature extraction based on
biologically inspired multi-orientation analysis. We apply multi-orientation
analysis via so-called invertible orientation scores, modeling the cortical
columns in the visual system of higher mammals. This allows us to generically
deal with many hitherto complex problems inherent to vessel tracking, such as
crossings, bifurcations, parallel vessels, vessels of varying widths and
vessels with high curvature. Our approach applies tracking in invertible
orientation scores via a novel geometrical principle for curve optimization in
the Euclidean motion group SE(2). The method runs fully automatically and
provides a detailed model of the retinal vasculature, which is crucial as a
sound basis for further quantitative analysis of the retina, especially in
screening applications.
|
1212.3536 | The network structure of mathematical knowledge according to the
Wikipedia, MathWorld, and DLMF online libraries | cs.SI cs.IR math.HO physics.soc-ph | We study the network structure of Wikipedia (restricted to its mathematical
portion), MathWorld, and DLMF. We approach these three online mathematical
libraries from the perspective of several global and local network-theoretic
features, providing for each one the appropriate value or distribution, along
with comparisons that, where possible, also include the whole of Wikipedia or
the Web. We identify some distinguishing characteristics of all three
libraries, most of them presumably traceable to the libraries' shared focus
on a very specialized domain. Among these characteristics are the
presence of a very large strongly connected component in each of the
corresponding directed graphs, the complete absence of any clear power laws
describing the distribution of local features, and the rise to prominence of
some local features (e.g., stress centrality) that can be used to effectively
search for keywords in the libraries.
|
1212.3540 | Social Network Based Search for Experts | cs.SI cs.HC cs.IR physics.soc-ph | Our system illustrates how information retrieved from social networks can be
used for suggesting experts for specific tasks. The system is designed to
facilitate the task of finding the appropriate person(s) for a job, such as a
conference committee member or an advisor. This short description will
demonstrate how the system works in the context of the HCIR2012 published
tasks.
|
1212.3544 | Tracking of a Mobile Target Using Generalized Polarization Tensors | cs.NA cs.CE math.NA | In this paper we apply an extended Kalman filter to track both the location
and the orientation of a mobile target from multistatic response measurements.
We also analyze the effect of the limited-view aspect on the stability and the
efficiency of our tracking approach. Our algorithm is based on the use of the
generalized polarization tensors, which can be reconstructed from the
multistatic response measurements by solving a linear system. The system has
the remarkable property that low-order generalized polarization tensors are not
affected by the error caused by the instability of the higher-order ones in the
presence of measurement noise.
|
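The predict/update recursion behind such Kalman tracking can be sketched with a plain linear Kalman filter on a 1-D constant-velocity target; this is a simplified stand-in with made-up noise levels and no orientation state, not the paper's extended Kalman filter on multistatic measurements:

```python
import numpy as np

# Linear Kalman filter on a 1-D constant-velocity target, a simplified
# stand-in for the paper's extended Kalman filter (illustrative values).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance (std 0.5)

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])           # true initial position and velocity
x_est, P = np.zeros(2), np.eye(2)       # filter state and covariance
for _ in range(50):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0, 0.5, size=1)   # noisy position reading
    # Predict step.
    x_est, P = F @ x_est, F @ P @ F.T + Q
    # Update step.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
# x_est now tracks the true position and velocity within the noise level.
```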
1212.3550 | State-Dependent Multiple Access Channels with Feedback | cs.IT math.IT | In this paper, we examine discrete memoryless Multiple Access Channels (MACs)
with two-sided feedback in the presence of two channel states that are
correlated in the sense of Slepian-Wolf (SW). We derive an achievable rate
region for this channel when the states are provided non-causally to the
transmitters and show that this region subsumes the Cover-Leung achievable
rate region for the discrete memoryless MAC with two-sided feedback as a
special case. We also find the capacity region of the discrete memoryless MAC with
two-sided feedback and with SW-type correlated states available causally or
strictly causally to the transmitters. We also study the discrete memoryless MAC
with partial feedback in the presence of two SW-type correlated channel states
that are provided non-causally, causally, or strictly causally to the
transmitters. An achievable rate region is found when channel states are
non-causally provided to the transmitters whereas capacity regions are
characterized when channel states are causally or strictly causally available
at the transmitters.
|
1212.3557 | Compound Multiple Access Channel with Common Message and Intersymbol
Interference | cs.IT math.IT | In this paper, we characterize the capacity region for the two-user linear
Gaussian compound Multiple Access Channel with common message (MACC) and with
intersymbol interference (ISI) under an input power constraint. The region is
obtained by converting the channel to its equivalent memoryless one by defining
an n-block memoryless circular Gaussian compound MACC model and applying the
discrete Fourier transform (DFT) to decompose the n-block channel into a set of
independent parallel channels whose capacities can be found easily. Indeed, the
capacity region of the original Gaussian compound MACC equals that of the
n-block circular Gaussian compound MACC in the limit of infinite block length.
Then by using the obtained capacity region, we derive the capacity region of
the strong interference channel with common message and ISI.
|
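The DFT decomposition step described above rests on a standard fact: a circulant (n-block circular) channel matrix is diagonalized by the DFT, so the n-block channel splits into n independent parallel scalar channels. A generic numpy illustration of that fact, not the paper's Gaussian compound MACC model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
c = rng.normal(size=n)            # channel impulse response (first column)

# Circulant channel matrix: C[i, j] = c[(i - j) mod n], i.e. an n-block
# circular convolution channel.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = rng.normal(size=n)            # one n-block input
y = C @ x                         # n-block circular channel output

# The DFT diagonalizes the circulant channel: in the frequency domain the
# n-block channel splits into n independent scalar (parallel) channels,
# one per DFT bin, with gains given by the DFT of the impulse response.
assert np.allclose(np.fft.fft(y), np.fft.fft(c) * np.fft.fft(x))
```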
1212.3559 | A Dynamic Network Approach to Breakthrough Innovation | cs.SI cs.DL physics.soc-ph | This paper outlines a framework for the study of innovation that treats
discoveries as additions to evolving networks. As inventions enter they expand
or limit the reach of the ideas they build on by influencing how successive
discoveries use those ideas. The approach is grounded in novel measures of the
extent to which an innovation amplifies or disrupts the status quo. Those
measures index the effects inventions have on subsequent uses of prior
discoveries. In so doing, they characterize a theoretically important but
elusive feature of innovation. We validate our approach by showing it: (1)
discriminates among innovations of similar impact in analyses of U.S. patents;
(2) identifies discoveries that amplify and disrupt technology streams in
select case studies; (3) implies disruptive patents decrease the use of their
predecessors by 60% in difference-in-differences estimation; and, (4) yields
novel findings in analyses of patenting at 110 U.S. universities.
|
1212.3618 | Machine Learning in Proof General: Interfacing Interfaces | cs.AI cs.LG cs.LO | We present ML4PG - a machine learning extension for Proof General. It allows
users to gather proof statistics related to shapes of goals, sequences of
applied tactics, and proof tree structures from the libraries of interactive
higher-order proofs written in Coq and SSReflect. The gathered data is
clustered using the state-of-the-art machine learning algorithms available in
MATLAB and Weka. ML4PG provides automated interfacing between Proof General and
MATLAB/Weka. The results of clustering are used by ML4PG to provide proof hints
in the process of interactive proof development.
|
1212.3621 | Local Irreducibility of Tail-Biting Trellises | cs.IT math.IT math.OC | This paper investigates tail-biting trellis realizations for linear block
codes. Intrinsic trellis properties are used to characterize irreducibility on
given intervals of the time axis. It proves beneficial to always consider the
trellis and its dual simultaneously. A major role is played by trellis
properties that amount to observability and controllability for fragments of
the trellis of various lengths. For fragments of length less than the minimum
span length of the code it is shown that fragment observability and fragment
controllability are equivalent to irreducibility. For reducible trellises, a
constructive reduction procedure is presented. The considerations also lead to
a characterization for when the dual of a trellis allows a product
factorization into elementary ("atomic") trellises.
|
1212.3624 | Robust Adaptive Beamforming for General-Rank Signal Model with Positive
Semi-Definite Constraint via POTDC | cs.IT math.IT math.OC | The robust adaptive beamforming (RAB) problem for general-rank signal model
with an additional positive semi-definite constraint is considered. Using the
principle of worst-case performance optimization, this RAB problem leads to
a difference-of-convex-functions (DC) optimization problem. The existing
approaches for solving the resulting non-convex DC problem are based on
approximations and find only suboptimal solutions. Here, we solve the non-convex
DC problem rigorously and give arguments suggesting that the solution is
globally optimal. Particularly, we rewrite the problem as the minimization of a
one-dimensional optimal value function whose corresponding optimization problem
is non-convex. Then, the optimal value function is replaced with another
equivalent one, for which the corresponding optimization problem is convex. The
new one-dimensional optimal value function is minimized iteratively via
polynomial time DC (POTDC) algorithm. We show that our solution satisfies the
Karush-Kuhn-Tucker (KKT) optimality conditions and there is a strong evidence
that such solution is also globally optimal. Towards this conclusion, we
conjecture that the new optimal value function is a convex function. The new
RAB method shows superior performance compared to the other state-of-the-art
general-rank RAB methods.
|
1212.3631 | Learning efficient sparse and low rank models | cs.LG | Parsimony, including sparsity and low rank, has been shown to successfully
model data in numerous machine learning and signal processing tasks.
Traditionally, such modeling approaches rely on an iterative algorithm that
minimizes an objective function with parsimony-promoting terms. The inherently
sequential structure and data-dependent complexity and latency of iterative
optimization constitute a major limitation in many applications requiring
real-time performance or involving large-scale data. Another limitation
encountered by these modeling techniques is the difficulty of their inclusion
in discriminative learning scenarios. In this work, we propose to move the
emphasis from the model to the pursuit algorithm, and develop a process-centric
view of parsimonious modeling, in which a learned deterministic
fixed-complexity pursuit process is used in lieu of iterative optimization. We
show a principled way to construct learnable pursuit process architectures for
structured sparse and robust low rank models, derived from the iteration of
proximal descent algorithms. These architectures learn to approximate the exact
parsimonious representation at a fraction of the complexity of the standard
optimization methods. We also show that appropriate training regimes allow
parsimonious models to be naturally extended to discriminative settings.
State-of-the-art results are demonstrated on several challenging problems in
image and audio processing with several orders of magnitude speedup compared to
the exact optimization algorithms.
|
1212.3634 | A comparative study of root-based and stem-based approaches for
measuring the similarity between arabic words for arabic text mining
applications | cs.CL cs.IR | Representation of semantic information contained in the words is needed for
any Arabic Text Mining applications. More precisely, the purpose is to better
take into account the semantic dependencies between words expressed by the
co-occurrence frequencies of these words. There have been many proposals to
compute similarities between words based on their distributions in contexts. In
this paper, we compare and contrast the effect of two preprocessing techniques
applied to an Arabic corpus: the Root-based (stemming) and the Stem-based
(light stemming) approaches for measuring the similarity between Arabic words
under the well-known abstractive model, Latent Semantic Analysis (LSA), with a wide variety of
distance functions and similarity measures, such as the Euclidean Distance,
Cosine Similarity, Jaccard Coefficient, and the Pearson Correlation
Coefficient. The results show that, on the one hand, a more varied corpus
produces more accurate results; on the other hand, the Stem-based approach
outperforms the Root-based one, because the latter distorts word meanings.
|
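The four distance and similarity measures named above can be sketched on toy vectors; the word vectors below are hypothetical stand-ins, not taken from the paper's LSA corpus:

```python
import numpy as np

def euclidean(u, v):
    """Euclidean distance between two word vectors."""
    return float(np.linalg.norm(u - v))

def cosine_sim(u, v):
    """Cosine similarity: angle-based, ignores vector magnitude."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard(u, v):
    """Generalized (Ruzicka) Jaccard coefficient for non-negative vectors."""
    return float(np.minimum(u, v).sum() / np.maximum(u, v).sum())

def pearson(u, v):
    """Pearson correlation coefficient between two word vectors."""
    return float(np.corrcoef(u, v)[0, 1])

# Toy LSA-style word vectors (hypothetical, for illustration only).
w1 = np.array([3.0, 0.0, 2.0, 1.0])
w2 = np.array([2.0, 1.0, 2.0, 0.0])
print(round(cosine_sim(w1, w2), 3))   # → 0.891
```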
1212.3638 | Energy-Efficient Resource Allocation in Multiuser OFDM Systems with
Wireless Information and Power Transfer | cs.IT math.IT | In this paper, we study the resource allocation algorithm design for
multiuser orthogonal frequency division multiplexing (OFDM) downlink systems
with simultaneous wireless information and power transfer. The algorithm design
is formulated as a non-convex optimization problem for maximizing the energy
efficiency of data transmission (bit/Joule delivered to the users). In
particular, the problem formulation takes into account the minimum required
system data rate, heterogeneous minimum required power transfers to the users,
and the circuit power consumption. Subsequently, by exploiting the method of
time-sharing and the properties of nonlinear fractional programming, the
considered non-convex optimization problem is solved using an efficient
iterative resource allocation algorithm. For each iteration, the optimal power
allocation and user selection solution are derived based on Lagrange dual
decomposition. Simulation results illustrate that the proposed iterative
resource allocation algorithm achieves the maximum energy efficiency of the
system and reveal how energy efficiency, system capacity, and wireless power
transfer benefit from the presence of multiple users in the system.
|
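The nonlinear-fractional-programming step above is typically handled with a Dinkelbach-style iteration, which maximizes a ratio by solving a sequence of parameterized subproblems. A scalar toy sketch, assuming a single-link rate log2(1+p) and a circuit power pc; these are illustrative values, not the paper's multiuser OFDM system:

```python
import math

def dinkelbach_ee(pc=1.0, p_max=10.0, tol=1e-9):
    """Maximize EE(p) = log2(1 + p) / (p + pc) over 0 <= p <= p_max
    via Dinkelbach's iteration for fractional programs."""
    q = 0.0  # current energy-efficiency estimate (bit/Joule)
    for _ in range(100):
        # Subproblem: maximize log2(1 + p) - q * (p + pc) over p.
        # Stationarity: 1 / ((1 + p) ln 2) = q  ->  p = 1 / (q ln 2) - 1.
        if q <= 0:
            p = p_max
        else:
            p = min(max(1.0 / (q * math.log(2)) - 1.0, 0.0), p_max)
        f = math.log2(1 + p) - q * (p + pc)   # subproblem optimal value
        q = math.log2(1 + p) / (p + pc)       # updated efficiency estimate
        if abs(f) < tol:                      # f = 0 at the optimal ratio
            break
    return p, q

p_star, ee_star = dinkelbach_ee()
# For pc = 1, the optimum is p* = e - 1 with EE* = log2(e) / e.
```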
1212.3640 | On the Design of Artificial-Noise-Aided Secure Multi-Antenna
Transmission in Slow Fading Channels | cs.IT math.IT | In this paper, we investigate the design of artificial-noise-aided secure
multi-antenna transmission in slow fading channels. The primary design concerns
include the transmit power allocation and the rate parameters of the wiretap
code. We consider two scenarios with different complexity levels: i) the design
parameters are chosen to be fixed for all transmissions, ii) they are
adaptively adjusted based on the instantaneous channel feedback from the
intended receiver. In both scenarios, we provide explicit design solutions for
achieving the maximal throughput subject to a secrecy constraint, given by a
maximum allowable secrecy outage probability. We then derive accurate
approximations for the maximal throughput in both scenarios in the high
signal-to-noise ratio region, and give new insights into the additional power
cost for achieving a higher security level, whilst maintaining a specified
target throughput. In the end, the throughput gain of adaptive transmission
over non-adaptive transmission is also quantified and analyzed.
|
1212.3654 | Sum-Rate Maximization with Minimum Power Consumption for MIMO DF Two-Way
Relaying: Part II - Network Optimization | cs.IT math.IT | In Part II of this two-part paper, a sum-rate-maximizing power allocation
with minimum power consumption is found for multiple-input multiple-output
(MIMO) decode-and-forward (DF) two-way relaying (TWR) in a network optimization
scenario. In this scenario, the relay and the source nodes jointly optimize
their power allocation strategies to achieve network optimality. Unlike the
relay optimization scenario considered in part I which features low complexity
but does not achieve network optimality, the network-level optimal power
allocation can be achieved in the network optimization scenario at the cost of
higher complexity. The network optimization problem is considered in two cases
each with several subcases. It is shown that the considered problem, which is
originally nonconvex, can be transformed into different convex problems for all
but two subcases. For the remaining two subcases, one for each case, it is
proved that the optimal strategies for the source nodes and the relay must
satisfy certain properties. Based on these properties, an algorithm is proposed
for finding the optimal solution. The effect of asymmetry in the number of
antennas, power limits, and channel statistics is also considered. Such
asymmetry is shown to have a negative effect on both the achievable sum-rate
and the power allocation efficiency in MIMO DF TWR. Simulation results
demonstrate the performance of the proposed algorithm and the effect of
asymmetry in the system.
|
1212.3669 | A metric for software vulnerabilities classification | cs.SE cs.LG | Vulnerability discovery and exploits detection are two wide areas of study in
software engineering. This preliminary work combines existing methods with
machine learning techniques to define a metric-based classification of
vulnerable computer programs. First, a feature set is defined; then, two models
are tested against real-world vulnerabilities. A relation between the choice of
classifier and the features is also outlined.
|
1212.3689 | A Tight Upper Bound for the Third-Order Asymptotics for Most Discrete
Memoryless Channels | cs.IT math.IT | This paper shows that the logarithm of the epsilon-error capacity (average
error probability) for n uses of a discrete memoryless channel is upper bounded
by the normal approximation plus a third-order term that does not exceed 1/2
log n + O(1) if the epsilon-dispersion of the channel is positive. This matches
a lower bound by Y. Polyanskiy (2010) for discrete memoryless channels with
positive reverse dispersion. If the epsilon-dispersion vanishes, the logarithm
of the epsilon-error capacity is upper bounded by n times the capacity plus
a constant term, except for a small class of DMCs and epsilon >= 1/2.
|
1212.3690 | Capacity Bounds for Dirty Paper with Exponential Dirt | cs.IT math.IT | The additive exponential noise channel with additive exponential interference
(AENC-AEI) known non-causally at the transmitter is studied. This channel can
be considered as an exponential version of the discrete memoryless channel with
state known non-causally at the encoder considered by Gelfand and Pinsker. We
make use of the classic Gelfand-Pinsker capacity theorem to derive inner and
outer bounds on the capacity of this channel under a non-negative input
constraint as well as a constraint on the mean value of the input. First, we
obtain an outer bound for the AENC-AEI. Then, using the input distribution
that achieves the outer bound, we derive an inner bound that coincides with
the outer bound at high signal-to-noise ratios (SNRs) and therefore gives
the capacity of the AENC-AEI at high SNRs.
|
1212.3704 | Some Constacyclic Codes over Finite Chain Rings | cs.IT math.IT | For $\lambda$ an $n$-th power of a unit in a finite chain ring, we prove that
$\lambda$-constacyclic repeated-root codes over some finite chain rings are
equivalent to cyclic codes. This allows us to simplify the structure of some
constacyclic codes. We also study the $\alpha +p \beta$-constacyclic codes of
length $p^s$ over the Galois ring $GR(p^e,r)$.
|
1212.3747 | Cluster-based Transform Domain Communication Systems for High Spectrum
Efficiency | cs.NI cs.IT math.IT | This paper presents a cluster-based transform domain communication system
(TDCS) to improve spectrum efficiency. Unlike the use of clusters in
orthogonal frequency division multiplexing (OFDM) systems, the cluster-based TDCS
framework divides the entire set of unoccupied spectrum bins into $L$ clusters,
each carrying an independent data stream, to achieve $L$ times the spectrum
efficiency of the traditional scheme. Among various schemes of
spectrum bin spacing and allocation, the TDCS with random allocation scheme
appears to be an ideal candidate to significantly improve spectrum efficiency
without seriously degrading power efficiency. In a multipath fading channel,
the coded TDCS with the random allocation scheme achieves robust BER
performance owing to a large degree of frequency diversity. Furthermore, our
study shows that a smaller spectrum bin spacing should be configured for the
cluster-based TDCS to achieve higher spectrum efficiency and more robust BER
performance.
|
1212.3753 | Simultaneously Structured Models with Application to Sparse and Low-rank
Matrices | cs.IT math.IT math.OC | The topic of recovery of a structured model given a small number of linear
observations has been well-studied in recent years. Examples include recovering
sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and
low-rank matrices, among others. In various applications in signal processing
and machine learning, the model of interest is known to be structured in
several ways at the same time, for example, a matrix that is simultaneously
sparse and low-rank.
Often norms that promote each individual structure are known, and allow for
recovery using an order-wise optimal number of measurements (e.g., $\ell_1$
norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to
minimize a combination of such norms. We show that, surprisingly, if we use
multi-objective optimization with these norms, then we can do no better,
order-wise, than an algorithm that exploits only one of the present structures.
This result suggests that to fully exploit the multiple structures, we need an
entirely new convex relaxation, i.e. not one that is a function of the convex
relaxations used for each structure. We then specialize our results to the case
of sparse and low-rank matrices. We show that a nonconvex formulation of the
problem can recover the model from very few measurements, which is on the order
of the degrees of freedom of the matrix, whereas the convex problem obtained
from a combination of the $\ell_1$ and nuclear norms requires many more
measurements. This proves an order-wise gap between the performance of the
convex and nonconvex recovery problems in this case. Our framework applies to
arbitrary structure-inducing norms as well as to a wide range of measurement
ensembles. This allows us to give performance bounds for problems such as
sparse phase retrieval and low-rank tensor completion.
|
1212.3765 | Biologically Inspired Spiking Neurons : Piecewise Linear Models and
Digital Implementation | cs.LG cs.NE q-bio.NC | There has been a strong push recently to examine biological scale simulations
of neuromorphic algorithms to achieve stronger inference capabilities. This
paper presents a set of piecewise linear spiking neuron models, which can
reproduce different behaviors, similar to the biological neuron, both for a
single neuron as well as a network of neurons. The proposed models are
investigated, in terms of digital implementation feasibility and costs,
targeting large scale hardware implementation. Hardware synthesis and physical
implementations on FPGA show that the proposed models can produce precise
neural behaviors with higher performance and considerably lower implementation
costs compared with the original model. Accordingly, a compact structure of the
models that can be trained with supervised and unsupervised learning
algorithms has been developed. Using this structure and based on spike-rate
coding, a character recognition case study has been implemented and tested.
|
1212.3767 | Visual Objects Classification with Sliding Spatial Pyramid Matching | cs.CV | We present a method for visual object classification using only a single
feature, transformed color SIFT, with a variant of Spatial Pyramid Matching
(SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an
ensemble of linear regressors (provided by LINEAR) to obtain a
state-of-the-art result of 83.46% on Caltech-101. SSPM is a variant of SPM in
which, instead of dividing an image into K regions, a subwindow of fixed size
is slid across the image with a fixed step size. For each subwindow, a
histogram of visual words is generated. To obtain the visual vocabulary,
instead of performing K-means clustering, we randomly pick N exemplars from
the training set and encode them with a soft non-linear mapping method. We
then train 15 models with linear regression, each with a different
visual-word size. All 15 models are then averaged together to form a single
strong model.
|
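The sliding-subwindow step of SSPM (a fixed-size window moved with a fixed step, one visual-word histogram per position) can be sketched on a toy grid of quantized visual-word labels; the window and step sizes below are illustrative assumptions:

```python
import numpy as np

def sliding_histograms(word_map, vocab_size, win, step):
    """One visual-word histogram per fixed-size subwindow slid over the
    image with a fixed step (the SSPM variant of SPM described above)."""
    h, w = word_map.shape
    hists = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            patch = word_map[top:top + win, left:left + win]
            hists.append(np.bincount(patch.ravel(), minlength=vocab_size))
    return np.array(hists)

# Toy 6x6 map of visual-word indices drawn from a 4-word vocabulary.
rng = np.random.default_rng(1)
word_map = rng.integers(0, 4, size=(6, 6))
H = sliding_histograms(word_map, vocab_size=4, win=4, step=2)
# 2x2 window positions; each histogram sums to the window area (16 pixels).
assert H.shape == (4, 4) and (H.sum(axis=1) == 16).all()
```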
1212.3777 | The Arduino as a Hardware Random-Number Generator | cs.CR cs.IT math.IT | Cheap microcontrollers, such as the Arduino or other controllers based on
the Atmel AVR CPUs, are being deployed in a wide variety of projects, ranging
from sensor networks to robotic submarines. In this paper, we investigate the
feasibility of using the Arduino as a true random number generator (TRNG). The
Arduino Reference Manual recommends using it to seed a pseudo random number
generator (PRNG) due to its ability to read random atmospheric noise from its
analog pins. This is an enticing application since true bits of entropy are
hard to come by. Unfortunately, we show by statistical methods that the
atmospheric noise of an Arduino is largely predictable in a variety of
settings, and is thus a weak source of entropy. We explore various methods to
extract true randomness from the micro-controller and conclude that it should
not be used to produce randomness from its analog pins.
|
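The kind of statistical check used to expose a weak entropy source can be sketched with the NIST frequency (monobit) test on least-significant bits; the bit streams below are simulated stand-ins, not real analogRead() samples:

```python
import math

def monobit_pvalue(bits):
    """NIST-style frequency (monobit) test: p-value for the hypothesis
    that the bit stream is balanced between 0s and 1s."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

# Simulated stand-ins for Arduino analog-pin LSBs (not real measurements):
biased = [1] * 700 + [0] * 300       # a predictable, heavily biased source
fair = [i % 2 for i in range(1000)]  # perfectly balanced (but not random!)

assert monobit_pvalue(biased) < 0.01   # the biased source fails the test
assert monobit_pvalue(fair) > 0.99     # the balanced stream passes this test
```

Note that the perfectly alternating stream also passes this particular test despite being fully predictable; detecting that kind of structure is why batteries of tests are applied in practice.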
1212.3782 | Can Selfish Groups be Self-Enforcing? | cs.DM cs.GT cs.SI | Algorithmic graph theory has thoroughly analyzed how, given a network
describing constraints between various nodes, groups can be formed among these
so that the resulting configuration optimizes a \emph{global} metric. In
contrast, for various social and economic networks, groups are formed \emph{de
facto} by the choices of selfish players. A fundamental problem in this setting
is the existence and convergence to a \emph{self-enforcing} configuration:
assignment of players into groups such that no player has an incentive to move
into another group than hers. Motivated by information sharing on social
networks -- and the difficult tradeoff between its benefits and the associated
privacy risk -- we study the possible emergence of such stable configurations
in a general selfish group formation game.
Our paper considers this general game for the first time and completes
its analysis. We show that convergence critically depends on the level of
\emph{collusions} among the players -- which allow multiple players to move
simultaneously as long as \emph{all of them} benefit. Solving a previously open
problem, we show exactly when, depending on collusions, convergence occurs in
polynomial time, in non-polynomial time, or never occurs. We also
prove that previously known bounds on convergence time are all loose: by a
novel combinatorial analysis of the evolution of this game we are able to
provide the first \emph{asymptotically exact} formula on its convergence.
Moreover, we extend these results by providing a complete analysis when groups
may \emph{overlap}, and for general utility functions representing
\emph{multi-modal} interactions. Finally, we prove that collusions have a
significant and \emph{positive} effect on the \emph{efficiency} of the
equilibrium that is attained.
|
1212.3789 | Adjoint-Based Optimal Control of Time-Dependent Free Boundary Problems | math.OC cs.CE cs.NA | In this paper we show a simplified optimisation approach for free boundary
problems in arbitrary space dimensions. This approach is mainly based on an
extended operator splitting which allows a decoupling of the domain deformation
and solving the remaining partial differential equation. First we give a short
introduction to free boundary problems and the problems occurring in
optimisation. Then we introduce the extended operator splitting and apply it to
a general minimisation subject to a time-dependent scalar-valued partial
differential equation. This yields a time-discretised optimisation problem
that allows a straightforward application of adjoint-based optimisation
methods. Finally, we verify this approach numerically by the optimisation of a
flow problem (Navier-Stokes equation) and the final shape of a Stefan-type
problem.
|
1212.3799 | Compressed Sensing Based on Random Symmetric Bernoulli Matrix | cs.IT math.IT | The task of compressed sensing is to recover a sparse vector from a small
number of linear and non-adaptive measurements, and the problem of finding a
suitable measurement matrix is very important in this field. While most recent
works focused on random matrices with entries drawn independently from certain
probability distributions, in this paper we show that a partial random
symmetric Bernoulli matrix, whose entries are not independent, can be used to
recover signals from observations successfully with high probability. The
experimental results also show that the proposed matrix is a suitable
measurement matrix.
|
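A minimal sketch of the recovery setting, assuming a random symmetric ±1 Bernoulli matrix with a random subset of rows kept (the "partial" matrix) and orthogonal matching pursuit as a simple stand-in recovery algorithm; the paper's construction and guarantees are more specific:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 48, 2                  # signal length, measurements, sparsity

# Random symmetric +-1 Bernoulli matrix; keep m random rows ("partial").
B = rng.choice([-1.0, 1.0], size=(n, n))
B = np.triu(B) + np.triu(B, 1).T     # symmetrize (entries not independent)
A = B[rng.choice(n, size=m, replace=False), :] / np.sqrt(m)  # unit-norm cols

# k-sparse ground truth and its noiseless measurements.
x = np.zeros(n)
x[[7, 40]] = [2.0, -1.5]
y = A @ x

# Orthogonal matching pursuit: greedily pick the most correlated column,
# then re-fit the coefficients on the selected support by least squares.
support, resid = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x))     # recovery error
```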
1212.3817 | Probability Bracket Notation: Markov Sequence Projector of Visible and
Hidden Markov Models in Dynamic Bayesian Networks | cs.AI math.PR | With the symbolic framework of Probability Bracket Notation (PBN), the Markov
Sequence Projector (MSP) is introduced to expand the evolution formula of
Homogeneous Markov Chains (HMCs). The well-known weather example, a Visible
Markov Model (VMM), illustrates that the full joint probability of a VMM
corresponds to a specifically projected Markov state sequence in the expanded
evolution formula. In a Hidden Markov Model (HMM), the probability basis
(P-basis) of the hidden Markov state sequence and the P-basis of the
observation sequence exist in the sequential event space. The full joint
probability of an HMM is the product of the (unknown) projected hidden sequence
of Markov states and their transformations into the observation P-bases. The
Viterbi algorithm is applied to the famous Weather-Stone HMM example to
determine the most likely weather-state sequence given the observed stone-state
sequence. Our results are verified using the Elvira software package. Using the
PBN, we unify the evolution formulas for Markov models like VMMs, HMMs, and
factorial HMMs (with discrete time). We briefly investigate the extended HMM,
addressing the feedback issue, and the continuous-time VMM and HMM (with
discrete or continuous states). All these models are subclasses of Dynamic
Bayesian Networks (DBNs) essential for Machine Learning (ML) and Artificial
Intelligence (AI).
|
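The Viterbi step mentioned above (the most likely hidden state sequence given the observations) can be sketched with a tiny two-state weather HMM; the transition and emission probabilities below are illustrative, not the paper's Weather-Stone parameters:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # V[t][s]: probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: V[t - 1][r] * trans_p[r][s])
            V[t][s] = V[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Illustrative two-state weather HMM (made-up numbers, not the paper's).
states = ("sunny", "rainy")
start = {"sunny": 0.6, "rainy": 0.4}
trans = {"sunny": {"sunny": 0.7, "rainy": 0.3},
         "rainy": {"sunny": 0.4, "rainy": 0.6}}
emit = {"sunny": {"dry": 0.8, "wet": 0.2},
        "rainy": {"dry": 0.3, "wet": 0.7}}
print(viterbi(("dry", "wet", "wet"), states, start, trans, emit))
# → ['sunny', 'rainy', 'rainy']
```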
1212.3844 | Three-Receiver Broadcast Channel with Side Information | cs.IT math.IT | Three-receiver broadcast channels (BCs) are of interest due to their
information-theoretic differences from the two-receiver case. In this paper, we
derive achievable rate regions for two classes of 3-receiver BCs with side
information (SI), namely the Multilevel BC (MBC) and the 3-receiver less noisy
BC, using a combination of superposition coding, the Gelfand-Pinsker binning
scheme, and
Nair-El Gamal indirect decoding. Our rate region for the MBC subsumes the
Steinberg rate region for the 2-receiver degraded BC with SI as a special case. We
also show that the obtained achievable rate regions in the first two cases are
tight for several classes of non-deterministic, semi-deterministic, and
deterministic 3-receiver BC when SI is available both at the transmitter and at
the receivers. We also prove that, as long as a receiver is deterministic in
the three-receiver less noisy BC, the presence of side information at that
receiver does not affect the capacity region. The writing-on-dirty-paper (WDP)
property for the 3-receiver BC is also provided as an example. In the last
section, we provide simple bounds on the capacity region of the Additive
Exponential noise three-receiver broadcast channels with Additive Exponential
interference (AEN-3BC-EI).
|
1212.3850 | Belief Propagation for Continuous State Spaces: Stochastic
Message-Passing with Quantitative Guarantees | cs.IT cs.LG math.IT stat.ML | The sum-product or belief propagation (BP) algorithm is a widely used
message-passing technique for computing approximate marginals in graphical
models. We introduce a new technique, called stochastic orthogonal series
message-passing (SOSMP), for computing the BP fixed point in models with
continuous random variables. It is based on a deterministic approximation of
the messages via orthogonal series expansion, and a stochastic approximation
via Monte Carlo estimates of the integral updates of the basis coefficients. We
prove that the SOSMP iterates converge to a \delta-neighborhood of the unique
BP fixed point for any tree-structured graph, and for any graphs with cycles in
which the BP updates satisfy a contractivity condition. In addition, we
demonstrate how to choose the number of basis coefficients as a function of the
desired approximation accuracy \delta and smoothness of the compatibility
functions. We illustrate our theory with both simulated examples and in
application to optical flow estimation.
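The two ingredients of SOSMP can be illustrated in isolation with a toy sketch (our own illustration, not the paper's algorithm): a message, viewed here as a density on [0,1], is expanded in an orthonormal cosine basis, and the basis coefficients are estimated by Monte Carlo from samples.

```python
import numpy as np

def series_density_estimate(samples, n_basis, grid):
    """Toy illustration of the two SOSMP ingredients: expand a message
    (here a density f on [0,1]) in an orthonormal cosine basis, and
    estimate the coefficients c_k = E[phi_k(X)], X ~ f, by Monte Carlo."""
    def phi(k, x):
        # Orthonormal cosine basis on [0, 1].
        return np.ones_like(x) if k == 0 else np.sqrt(2) * np.cos(np.pi * k * x)
    coeffs = [phi(k, samples).mean() for k in range(n_basis)]  # MC estimates
    return sum(c * phi(k, grid) for k, c in enumerate(coeffs))

# Samples from the uniform density: the estimate should hover around 1.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101)
est = series_density_estimate(rng.uniform(0.0, 1.0, 20000), 6, grid)
```

In SOSMP proper, each BP message update is represented by such a coefficient vector, and the Monte Carlo step replaces the intractable integral updates of those coefficients.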
|
1212.3852 | The International-Migration Network | physics.soc-ph cs.SI | This paper studies international migration from a complex-network
perspective. We define the international-migration network (IMN) as the
weighted-directed graph where nodes are world countries and links account for
the stock of migrants originated in a given country and living in another
country at a given point in time. We characterize the binary and weighted
architecture of the network and its evolution over time in the period
1960-2000. We find that the IMN is organized around a modular structure
characterized by a small-world pattern displaying disassortativity and high
clustering, with power-law distributed weighted-network statistics. We also
show that a parsimonious gravity model of migration can account for most of
the observed IMN topological structure. Overall, our results suggest that
socio-economic, geographical and political factors are more important than
local-network properties in shaping the structure of the IMN.
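For readers unfamiliar with gravity models, a minimal sketch of this kind of log-linear fit (an illustrative formulation, not the authors' exact specification) might look like:

```python
import numpy as np

def fit_gravity(migrants, pop_o, pop_d, dist):
    """Least-squares fit of the log-linear gravity model
    log(m_ij) = b0 + b1*log(pop_i) + b2*log(pop_j) + b3*log(d_ij),
    over country pairs with positive flows."""
    X = np.column_stack([np.ones_like(dist),
                         np.log(pop_o), np.log(pop_d), np.log(dist)])
    beta, *_ = np.linalg.lstsq(X, np.log(migrants), rcond=None)
    return beta

# Synthetic check: data generated exactly from the model are recovered.
rng = np.random.default_rng(0)
pop_o = rng.uniform(1e5, 1e8, 50)
pop_d = rng.uniform(1e5, 1e8, 50)
dist = rng.uniform(100.0, 20000.0, 50)
m = np.exp(1.0 + 0.8 * np.log(pop_o) + 0.7 * np.log(pop_d) - 1.2 * np.log(dist))
beta = fit_gravity(m, pop_o, pop_d, dist)
```

A negative distance coefficient and positive population coefficients are the usual signature of gravity-like flows.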
|
1212.3853 | Incentives for P2P-Assisted Content Distribution: If You Can't Beat 'Em,
Join 'Em | cs.SI | The rapid growth of content distribution on the Internet has brought with it
proportional increases in the costs of distributing content. Adding to
distribution costs is the fact that digital content is easily duplicable, and
hence can be shared in an illicit peer-to-peer (P2P) manner that generates no
revenue for the content provider. In this paper, we study whether the content
provider can recover lost revenue through a more innovative approach to
distribution. In particular, we evaluate the benefits of a hybrid
revenue-sharing system that combines a legitimate P2P swarm and a centralized
client-server approach. We show how the revenue recovered by the content
provider using a server-supported legitimate P2P swarm can exceed that of the
monopolistic scheme by an order of magnitude. Our analytical results are
obtained in a fluid model, and supported by stochastic simulations.
|
1212.3859 | On Capacity Region of Wiretap Networks | cs.IT math.IT | In this paper we consider the problem of secure network coding where an
adversary has access to an unknown subset of links chosen from a known
collection of links subsets. We study the capacity region of such networks,
commonly called "wiretap networks", subject to weak and strong secrecy
constraints, and consider both zero-error and asymptotically zero-error
communication. We prove that in general discrete memoryless networks modeled by
discrete memoryless channels, the capacity region subject to strong secrecy
requirement and the capacity region subject to weak secrecy requirement are
equal. In particular, this result shows that requiring strong secrecy in a
wiretap network with asymptotically zero probability of error does not shrink
the capacity region compared to the case of weak secrecy requirement. We also
derive inner and outer bounds on the network coding capacity region of wiretap
networks subject to weak secrecy constraint, for both zero probability of error
and asymptotically zero probability of error, in terms of the entropic region.
|
1212.3866 | Agnostic insurability of model classes | math.ST cs.IT math.IT stat.TH | Motivated by problems in insurance, our task is to predict finite upper
bounds on a future draw from an unknown distribution $p$ over the set of
natural numbers. We can only use past observations generated independently and
identically distributed according to $p$. While $p$ is unknown, it is known to
belong to a given collection ${\cal P}$ of probability distributions on the
natural numbers.
The support of the distributions $p \in {\cal P}$ may be unbounded, and the
prediction game goes on for \emph{infinitely} many draws. We are allowed to
make observations without predicting upper bounds for some time. But we must,
with probability 1, start and then continue to predict upper bounds after a
finite time irrespective of which $p \in {\cal P}$ governs the data.
If it is possible, without knowledge of $p$ and for any prescribed confidence
however close to 1, to come up with a sequence of upper bounds that is never
violated over an infinite time window with confidence at least as big as
prescribed, we say the model class ${\cal P}$ is \emph{insurable}.
We completely characterize the insurability of any class ${\cal P}$ of
distributions over natural numbers by means of a condition on the
neighborhoods of distributions in ${\cal P}$ that is both
necessary and sufficient.
|
1212.3873 | Learning Markov Decision Processes for Model Checking | cs.LG cs.LO cs.SE | Constructing an accurate system model for formal model verification can be
both resource demanding and time-consuming. To alleviate this shortcoming,
algorithms have been proposed for automatically learning system models based on
observed system behaviors. In this paper we extend algorithms for learning
probabilistic automata to reactive systems, where the observed system behavior
is in the form of alternating sequences of inputs and outputs. We propose an
algorithm for automatically learning a deterministic labeled Markov decision
process model from the observed behavior of a reactive system. The proposed
learning algorithm is adapted from algorithms for learning deterministic
probabilistic finite automata, and extended to include both probabilistic and
nondeterministic transitions. The algorithm is empirically analyzed and
evaluated by learning system models of slot machines. The evaluation is
performed by analyzing the probabilistic linear temporal logic properties of
the system as well as by analyzing the schedulers, in particular the optimal
schedulers, induced by the learned models.
|
1212.3881 | Optimal forwarding ratio on dynamical networks with heterogeneous
mobility | physics.soc-ph cs.SI | Since the discovery of non-Poissonian statistics in human mobility trajectories,
more attention has been paid to understanding the role of these patterns in
different dynamics. In this study, we first introduce the heterogeneous
mobility of mobile agents into dynamical networks, and then investigate the
forwarding strategy on the heterogeneous dynamical networks. We find that
faster speeds and a higher proportion of high-speed agents can enhance the
network throughput and reduce the mean traveling time in the case of random
forwarding. A hierarchical structure in the dependence on the high-speed
proportion is observed: the network throughput remains unchanged for both small
and large values of this proportion. Interestingly, slightly preferential
forwarding to high-speed agents can maximize the network capacity. Through
theoretical analysis and numerical simulations, we show that the optimal
forwarding ratio stems from local structural heterogeneity of low-speed agents.
|
1212.3883 | Bayes Information-theoretic Radar Waveform Design and Delay-Doppler
Resolution for Extended Targets | cs.IT math.IT | In this paper, we consider the problem of information-theoretic waveform
design for active sensing systems such as radar for extended targets. Contrary
to the popular formulation of the problem in the estimation-theoretic context,
we are rather interested in a Bayes decision theoretic approach where a target
present in the environment belongs to two or more classes whose priors are
known. Optimal information theory based transmit waveforms are designed by
maximizing mutual information (MI) between the received signal and the target
impulse response, resulting in a novel iterative design equation. We also
derive signal-to-noise ratio (SNR) maximization-based waveforms. In an effort
to quantify the benefits of such a design approach, the delay-Doppler ambiguity
function of information-theoretic waveforms is presented and compared with that
of Barker codes of similar time-bandwidth product. We find that the ambiguity
function of information-theoretic waveforms has a very sharp main lobe in
general and excellent time-autocorrelation properties in particular.
|
1212.3886 | Amplitudes of mono-components and representation by generalized sampling
functions | cs.IT math.IT | A mono-component is a real-valued signal of finite energy that has
non-negative instantaneous frequencies, which may be defined as the derivative
of the phase function of the given real-valued signal through the approach of
canonical amplitude-phase modulation. We study in this article how the
amplitude is determined by its phase in a canonical amplitude-phase modulation.
Our finding is that such an amplitude can be perfectly reconstructed by a
sampling formula using the so-called generalized sampling functions and their
Hilbert transforms. Such an amplitude is shown to be at least continuous. We
also provide a new characterization of band-limited functions.
|
1212.3900 | A Tutorial on Probabilistic Latent Semantic Analysis | stat.ML cs.LG | In this tutorial, I discuss the details of how Probabilistic Latent
Semantic Analysis (PLSA) is formalized and how different learning algorithms
are proposed to learn the model.
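A compact EM sketch for the PLSA aspect model p(d, w) = sum_z p(z) p(d|z) p(w|z), the kind of learning algorithm such a tutorial covers, might look like the following (a minimal illustration, not code from the tutorial):

```python
import numpy as np

def plsa(N, K, iters=50, seed=0):
    """EM for PLSA on a document-word count matrix N (D x W), modeling
    p(d, w) = sum_z p(z) p(d|z) p(w|z); all updates are in closed form."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    pz = np.full(K, 1.0 / K)
    pd_z = rng.random((K, D)); pd_z /= pd_z.sum(axis=1, keepdims=True)
    pw_z = rng.random((K, W)); pw_z /= pw_z.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: posterior p(z|d,w), shape (K, D, W).
        joint = pz[:, None, None] * pd_z[:, :, None] * pw_z[:, None, :]
        post = joint / joint.sum(axis=0, keepdims=True)
        # M-step: reweight posteriors by counts and renormalize.
        weighted = post * N[None, :, :]
        pz = weighted.sum(axis=(1, 2)); pz /= pz.sum()
        pd_z = weighted.sum(axis=2); pd_z /= pd_z.sum(axis=1, keepdims=True)
        pw_z = weighted.sum(axis=1); pw_z /= pw_z.sum(axis=1, keepdims=True)
    return pz, pd_z, pw_z
```

The dense (K, D, W) posterior is fine for toy data; real implementations iterate only over nonzero counts.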
|
1212.3903 | Full-Rate, Full-Diversity, Finite Feedback Space-Time Schemes with
Minimum Feedback and Transmission Duration | cs.IT math.IT | In this paper a MIMO quasi static block fading channel with finite N-ary
delay-free, noise-free feedback is considered. The transmitter uses a set of N
Space-Time Block Codes (STBCs), one corresponding to each of the N possible
feedback values, to encode and transmit information. The feedback function used
at the receiver and the N component STBCs used at the transmitter together
constitute a Finite Feedback Scheme (FFS). Although a number of FFSs are
available in the literature that provably achieve full-diversity, there is no
known universal criterion to determine whether a given arbitrary FFS achieves
full-diversity or not. Further, all known full-diversity FFSs for T<N_t, where
N_t is the number of transmit antennas, have rate at most 1. In this paper
a universal necessary condition for any FFS to achieve full-diversity is given,
using which the notion of Feedback-Transmission duration optimal (FT-Optimal)
FFSs - schemes that use minimum amount of feedback N given the transmission
duration T, and minimum transmission duration given the amount of feedback to
achieve full-diversity - is introduced. When there is no feedback (N=1) an
FT-optimal scheme consists of a single STBC with T=N_t, and the universal
necessary condition reduces to the well known necessary and sufficient
condition for an STBC to achieve full-diversity: every non-zero codeword
difference matrix of the STBC must be of rank N_t. Also, a sufficient condition
for full-diversity is given for the FFSs in which the component STBC with the
largest minimum Euclidean distance is chosen. Using this sufficient condition
full-rate (rate N_t) full-diversity FT-Optimal schemes are constructed for all
(N_t,T,N) with NT=N_t. These are the first full-rate full-diversity FFSs
reported in the literature for T<N_t. Simulation results show that the new
schemes have the best error performance among all known FFSs.
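The classical rank criterion that the universal condition reduces to in the N=1 case can be checked numerically; below is a small sketch using the Alamouti code with BPSK symbols (our own illustrative check, not the paper's construction):

```python
import numpy as np
from itertools import product

def alamouti(s1, s2):
    """2x2 Alamouti codeword matrix (the T = N_t = 2, N = 1 case)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def full_diversity(codebook, n_t):
    """Rank criterion: every nonzero codeword difference matrix
    of the STBC must have rank N_t."""
    return all(np.linalg.matrix_rank(c1 - c2) == n_t
               for c1, c2 in product(codebook, repeat=2)
               if not np.allclose(c1, c2))

bpsk = [1.0, -1.0]
codebook = [alamouti(a, b) for a, b in product(bpsk, repeat=2)]
print(full_diversity(codebook, 2))  # -> True
```

For Alamouti differences the determinant is |d1|^2 + |d2|^2 > 0, so every nonzero difference has rank 2 and the criterion holds.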
|
1212.3906 | Simple Search Engine Model: Adaptive Properties | cs.IR | In this paper we study the relationship between queries and search engines by
exploring the adaptive properties of a simple search engine model. We use set
theory, defining singleton event spaces from words and terms in the model, and
then establish the inclusions between singletons.
|
1212.3913 | Group Component Analysis for Multiblock Data: Common and Individual
Feature Extraction | cs.CV cs.LG | Very often data we encounter in practice is a collection of matrices rather
than a single matrix. These multi-block data are naturally linked and hence
often share some common features and at the same time they have their own
individual features, due to the background in which they are measured and
collected. In this study we propose a new scheme of common and individual
feature analysis (CIFA) that processes multi-block data in a linked way, aiming
to discover and separate their common and individual features. According
to whether the number of common features is given or not, two efficient
algorithms are proposed to extract the common basis shared by all the
data. Feature extraction is then performed on the common and the individual
spaces separately, incorporating techniques such as dimensionality
reduction and blind source separation. We also discuss how the proposed CIFA
can significantly improve the performance of classification and clustering
tasks by exploiting common and individual features of samples respectively. Our
experimental results show some encouraging features of the proposed methods in
comparison to the state-of-the-art methods on synthetic and real data.
|
1212.3922 | Interroom radiative couplings through windows and large openings in
buildings: Proposal of a simplified model | cs.CE | A simplified model of indoor short wave radiation couplings adapted to
multi-zone simulations is proposed, thanks to a simplifying hypothesis and to
the introduction of an indoor short wave exchange matrix. The specific
properties of this matrix appear useful to quantify the thermal radiation
exchanges between the zones separated by windows or large openings. Integrated
in CODYRUN software, this module is detailed and compared to experimental
measurements carried out on a real scale tropical building.
|
1212.3924 | Building ventilation: A pressure airflow model computer generation and
elements of validation | cs.CE | The calculation of airflows is of great importance for detailed building
thermal simulation computer codes, these airflows most frequently constituting
an important thermal coupling between the building and the outside on one hand,
and the different thermal zones on the other. The driving effects of air
movement, which are the wind and the thermal buoyancy, are briefly outlined and
we look closely at their coupling in the case of buildings, by exploring the
difficulties associated with large openings. Some numerical problems tied to
solving the resulting non-linear system are also covered. As part of a
detailed simulation software (CODYRUN), the numerical implementation of this
airflow model is explained, with emphasis on the data organization and
processing that allow the calculation of the airflows. Comparisons are then
made between the model results and, on the one hand, analytical expressions
and, on the other, experimental measurements in the case of a collective dwelling.
|
1212.3925 | Elaboration of global quality standards for natural and low energy
cooling in French tropical island buildings | cs.CE | Electric load profiles of tropical islands in developed countries are
characterised by morning, midday and evening peaks arising from all year round
high power demand in the commercial and residential sectors, due mostly to air
conditioning appliances and poor thermal design of buildings. The work
presented in this paper has led to a global quality standard
obtained through optimized bioclimatic urban planning and architectural design,
the use of passive cooling architectural components, natural ventilation and
energy efficient systems such as solar water heaters. We evaluated, with the
aid of an airflow and thermal building simulation software (CODYRUN), the
impact of each technical solution on thermal comfort within the building. These
technical solutions have been implemented in 280 new pilot dwelling projects
through the year 1996.
|
1212.3928 | A validation methodology aid for improving a thermal building model:
Case of diffuse radiation accounting in a tropical climate | cs.CE | As part of our efforts to complete the software CODYRUN validation, we chose
as test building a block of flats constructed in Reunion Island, which has a
humid tropical climate. The sensitivity analysis allowed us to study the
effects of both diffuse and direct solar radiation on our model of this
building. With regard to the choice and location of sensors, this stage of the
study also led us to measure the solar radiation falling on the windows. The
comparison of measured and predicted radiation clearly showed that our
predictions over-estimated the incoming solar radiation, and we were able to
trace the problem to the algorithm which calculates diffuse solar radiation. By
calculating view factors between the windows and the associated shading
devices, changes to the original program allowed us to improve the predictions,
and so this article shows the importance of sensitivity analysis in this area
of research.
|
1212.3930 | Detailed weather data generator for building simulations | cs.CE | Thermal building simulation software needs meteorological files for thermal
comfort and energy evaluation studies. Few tools can make significant
meteorological data available, such as generated typical years, representative
days, or artificial meteorological databases. This paper presents a new
software tool, RUNEOLE, used to provide weather data for building applications
with a method adapted to all kinds of climates. RUNEOLE
associates three modules for the description, modelling and generation of
weather data. The statistical description of an existing meteorological
database makes typical representative days available and leads to the creation
of model libraries. The generation module produces non-existing
sequences. The software aims to be usable by researchers and designers, by
means of interactivity, ease of use and easy communication. The conceptual
basis of this tool is presented and we propose two examples of
|
1212.3964 | Advanced Bloom Filter Based Algorithms for Efficient Approximate Data
De-Duplication in Streams | cs.IR | Applications involving telecommunication call data records, web pages, online
transactions, medical records, stock markets, climate warning systems, etc.,
necessitate efficient management and processing of massive, exponentially
growing amounts of data from diverse sources. De-duplication, or intelligent
compression, in streaming scenarios for approximate identification and
elimination of duplicates from such unbounded data streams is a great
challenge given the real-time nature of data arrival. Stable Bloom Filters
(SBF) address this problem to a certain extent.
In this work, we present several novel algorithms for the problem of
approximate detection of duplicates in data streams. We propose the Reservoir
Sampling based Bloom Filter (RSBF) combining the working principle of reservoir
sampling and Bloom Filters. We also present variants of the novel Biased
Sampling based Bloom Filter (BSBF) based on biased sampling concepts. We also
propose a randomized load balanced variant of the sampling Bloom Filter
approach to efficiently tackle the duplicate detection. In this work, we thus
provide a generic framework for de-duplication using Bloom Filters. Using
detailed theoretical analysis we prove analytical bounds on the false positive
rate, false negative rate and convergence rate of the proposed structures. We
exhibit that our models clearly outperform the existing methods. We also
demonstrate empirical analysis of the structures using real-world datasets (3
million records) and also with synthetic datasets (1 billion records) capturing
various input distributions.
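As background, the basic Bloom-filter membership test underlying all of these variants can be sketched as follows (a plain filter only; the paper's RSBF and BSBF additionally layer reservoir or biased sampling on top, which is not shown here):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter used for approximate duplicate detection in a
    stream: m bits, k hash positions per item; false positives possible,
    false negatives impossible for a non-expiring filter."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def seen_then_add(self, item):
        """Return True if item is (probably) a duplicate, then record it."""
        dup = all(self.bits[p] for p in self._positions(item))
        for p in self._positions(item):
            self.bits[p] = 1
        return dup

bf = BloomFilter()
stream = ["a", "b", "a", "c", "b"]
print([bf.seen_then_add(x) for x in stream])  # -> [False, False, True, False, True]
```

Sampling-based variants bound memory on unbounded streams by probabilistically evicting or down-weighting stale bits, trading a controlled false-negative rate for stability.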
|
1212.3996 | Increasing Air Traffic: What is the Problem? | cs.AI cs.SY | Nowadays, huge efforts are made to modernize the air traffic management
systems to cope with uncertainty, complexity and sub-optimality. An answer is
to enhance the information sharing between the stakeholders. This paper
introduces a framework that bridges the gap between air traffic management and
air traffic control on the one hand, and bridges the gap between the ground,
the approach and the en-route centers on the other hand. An original system is
presented, that has three essential components: the trajectory models, the
optimization process, and the monitoring process. The uncertainty of the
trajectory is modeled with a Bayesian Network, where the nodes are associated
to two types of random variables: the time of overflight on metering points of
the airspace, and the traveling time of the routes linking these points. The
resulting Bayesian Network covers the complete airspace, and Monte Carlo
simulations are done to estimate the probabilities of sector congestion and
delays. On top of this trajectory model, an optimization process minimizes
these probabilities by tuning the parameters of the Bayesian trajectory model
related to overflight times on metering points. The last component is the
monitoring process, that continuously updates the situation of the airspace,
modifying the trajectories uncertainties according to actual positions of
aircraft. After each update, a new optimal set of overflight times is computed,
and can be communicated to the controllers as clearances for the aircraft
pilots. The paper presents a formal specification of this global optimization
problem, whose underlying rationale was derived with the help of air traffic
controllers at Thales Air Systems.
|
1212.3998 | Online Learning for Ground Trajectory Prediction | cs.AI cs.SY | This paper presents a model based on a hybrid system to numerically simulate
the climbing phase of an aircraft. This model is then used within a trajectory
prediction tool. Finally, the Covariance Matrix Adaptation Evolution Strategy
(CMA-ES) optimization algorithm is used to tune five selected parameters, and
thus improve the accuracy of the model. Incorporated within a trajectory
prediction tool, this model can be used to derive the order of magnitude of the
prediction error over time, and thus the domain of validity of the trajectory
prediction. A first validation experiment of the proposed model is based on the
errors along time for a one-time trajectory prediction at the take off of the
flight with respect to the default values of the theoretical BADA model. This
experiment, assuming complete information, also shows the limit of the model. A
second experiment part presents an on-line trajectory prediction, in which the
prediction is continuously updated based on the current aircraft position. This
approach raises several issues, for which improvements of the basic model are
proposed, and the resulting trajectory prediction tool shows statistically
significantly more accurate results than those of the default model.
|
1212.4029 | Compelled to do the right thing | physics.soc-ph cs.SI nlin.AO physics.comp-ph | We use a model of opinion formation to study the consequences of some
mechanisms attempting to enforce the right behaviour in a society. We start
from a model where the possible choices are not equivalent (such is the case
when the agents decide to comply or not with a law) and where an imitation
mechanism allows the agents to change their behaviour based on the influence of
a group of partners. In addition, we consider the existence of two social
constraints: a) an external authority, called monitor, that imposes the correct
behaviour with infinite persuasion and b) an educated group of agents that act
upon their fellows but never change their own opinion, i.e., they exhibit
infinite adamancy. We determine the minimum number of monitors to induce an
effective change in the behaviour of the social group, and the size of the
educated group that produces the same effect. Also, we compare the results for
the cases of random social interactions and agents placed on a network. We have
verified that a small number of monitors are enough to change the behaviour of
the society. This also happens with a relatively small educated group in the
case of random interactions.
|
1212.4034 | 5GNOW: Challenging the LTE Design Paradigms of Orthogonality and
Synchronicity | cs.IT cs.NI math.IT | LTE and LTE-Advanced have been optimized to deliver high bandwidth pipes to
wireless users. The transport mechanisms have been tailored to maximize single
cell performance by enforcing strict synchronism and orthogonality within a
single cell and within a single contiguous frequency band. Various emerging
trends reveal major shortcomings of those design criteria: 1) The fraction of
machine-type-communications (MTC) is growing fast. Transmissions of this kind
are suffering from the bulky procedures necessary to ensure strict synchronism.
2) Collaborative schemes have been introduced to boost capacity and coverage
(CoMP), and wireless networks are becoming more and more heterogeneous
following the non-uniform distribution of users. Tremendous efforts must be
spent to collect the gains and to manage such systems under the premise of
strict synchronism and orthogonality. 3) The advent of the Digital Agenda and
the introduction of carrier aggregation are forcing the transmission systems to
deal with fragmented spectrum. 5GNOW is a European research project supported
by the European Commission within FP7 ICT Call 8. It will question the design
targets of LTE and LTE-Advanced having these shortcomings in mind and the
obedience to strict synchronism and orthogonality will be challenged. It will
develop new PHY and MAC layer concepts being better suited to meet the upcoming
needs with respect to service variety and heterogeneous transmission setups.
Wireless transmission networks following the outcomes of 5GNOW will be better
suited to meet the manifoldness of services, device classes and transmission
setups present in envisioned future scenarios like smart cities. The
integration of systems relying heavily on MTC into the communication network
will be eased. The per-user experience will be more uniform and satisfying. To
ensure this 5GNOW will contribute to upcoming 5G standardization.
|
1212.4080 | A Hierarchical Exact Accelerated Stochastic Simulation Algorithm | q-bio.MN cs.CE cs.DS | A new algorithm, "HiER-leap", is derived which improves on the computational
properties of the ER-leap algorithm for exact accelerated simulation of
stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical
or divide-and-conquer organization of reaction channels into tightly coupled
"blocks" and is thereby able to speed up systems with many reaction channels.
Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the
reaction propensities to define a rejection sampling algorithm with inexpensive
early rejection and acceptance steps. But in HiER-leap, large portions of
intra-block sampling may be done in parallel. An accept/reject step is used to
synchronize across blocks. This method scales well when many reaction channels
are present and has desirable asymptotic properties. The algorithm is exact,
parallelizable and achieves a significant speedup over SSA and ER-leap on
certain problems. This algorithm offers a potentially important step towards
efficient in silico modeling of entire organisms.
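For reference, the exact SSA baseline that ER-leap and HiER-leap accelerate can be sketched in a few lines (a generic Gillespie direct-method sketch with a hypothetical toy decay system, not the paper's implementation):

```python
import random

def ssa(x, rates, stoich, propensity, t_max, rng=random.Random(0)):
    """Plain Gillespie SSA (direct method): sample the waiting time to the
    next reaction exactly, pick a channel proportional to its propensity,
    and apply that channel's stoichiometry."""
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_max:
        a = [propensity(i, x) * rates[i] for i in range(len(rates))]
        a0 = sum(a)
        if a0 == 0:
            break                              # no reaction can fire
        t += rng.expovariate(a0)               # time to next reaction
        r, acc, i = rng.random() * a0, 0.0, 0
        while acc + a[i] < r:                  # channel i with prob a[i]/a0
            acc += a[i]
            i += 1
        for s, d in enumerate(stoich[i]):      # apply stoichiometry
            x[s] += d
        traj.append((t, list(x)))
    return traj

# Hypothetical toy system: irreversible decay A -> 0 with propensity c*[A].
traj = ssa([100], rates=[0.5], stoich=[[-1]],
           propensity=lambda i, x: x[0], t_max=10.0)
print(traj[-1])
```

Leaping methods replace the one-reaction-at-a-time loop with bounded batches of reactions, using propensity bounds to accept or reject batches while preserving exactness.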
|
1212.4093 | Co-clustering separately exchangeable network data | math.ST cs.SI math.CO stat.ML stat.TH | This article establishes the performance of stochastic blockmodels in
addressing the co-clustering problem of partitioning a binary array into
subsets, assuming only that the data are generated by a nonparametric process
satisfying the condition of separate exchangeability. We provide oracle
inequalities with rate of convergence $\mathcal{O}_P(n^{-1/4})$ corresponding
to profile likelihood maximization and mean-square error minimization, and show
that the blockmodel can be interpreted in this setting as an optimal
piecewise-constant approximation to the generative nonparametric model. We also
show for large sample sizes that the detection of co-clusters in such data
indicates with high probability the existence of co-clusters of equal size and
asymptotically equivalent connectivity in the underlying generative process.
|
1212.4137 | Alternating Maximization: Unifying Framework for 8 Sparse PCA
Formulations and Efficient Parallel Codes | stat.ML cs.LG math.OC | Given a multivariate data set, sparse principal component analysis (SPCA)
aims to extract several linear combinations of the variables that together
explain the variance in the data as much as possible, while controlling the
number of nonzero loadings in these combinations. In this paper we consider 8
different optimization formulations for computing a single sparse loading
vector; these are obtained by combining the following factors: we employ two
norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1),
which are used in two different ways (constraint, penalty). Three of our
formulations, notably the one with L0 constraint and L1 variance, have not been
considered in the literature. We give a unifying reformulation which we propose
to solve via a natural alternating maximization (AM) method. We show that the AM
method is nontrivially equivalent to GPower (Journ\'{e}e et al; JMLR
11:517--553, 2010) for all our formulations. Besides this, we provide 24
efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster)
for each of the 8 problems. Parallelism in the methods is aimed at i) speeding
up computations (our GPU code can be 100 times faster than an efficient serial
code written in C++), ii) obtaining solutions explaining more variance and iii)
dealing with big data problems (our cluster code is able to solve a 357 GB
problem in about a minute).
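A minimal sketch of alternating maximization for one of the single-unit penalized formulations, maximizing x'Av - gamma*||v||_1 over unit vectors (our own simplified rendering, not the paper's parallel codes), might look like:

```python
import numpy as np

def sparse_pca_am(A, gamma, iters=100, seed=0):
    """Alternating maximization for a single sparse loading v:
    maximize x'Av - gamma*||v||_1 over unit x, v, where A is the
    n-by-p data matrix (assumed nonzero). The v-step is a soft
    threshold; the x-step is a normalization, GPower-style."""
    rng = np.random.default_rng(seed)
    x = A @ rng.standard_normal(A.shape[1])   # start in the column space
    x /= np.linalg.norm(x)
    for _ in range(iters):
        z = A.T @ x
        v = np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)
        if np.linalg.norm(v) == 0:
            return v                          # gamma too large: zero loading
        v /= np.linalg.norm(v)
        x = A @ v
        x /= np.linalg.norm(x)
    return v
```

Larger gamma drives more loadings exactly to zero; the parallel codes in the paper run many such subproblems (and blocks of data) concurrently.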
|
1212.4174 | Feature Clustering for Accelerating Parallel Coordinate Descent | stat.ML cs.DC cs.LG math.OC | Large-scale L1-regularized loss minimization problems arise in
high-dimensional applications such as compressed sensing and high-dimensional
supervised learning, including classification and regression problems.
High-performance algorithms and implementations are critical to efficiently
solving these problems. Building upon previous work on coordinate descent
algorithms for L1-regularized problems, we introduce a novel family of
algorithms called block-greedy coordinate descent that includes, as special
cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and
Thread-Greedy. We give a unified convergence analysis for the family of
block-greedy algorithms. The analysis suggests that block-greedy coordinate
descent can better exploit parallelism if features are clustered so that the
maximum inner product between features in different blocks is small. Our
theoretical convergence analysis is supported with experimental results using
data from diverse real-world applications. We hope that algorithmic approaches
and convergence analysis we provide will not only advance the field, but will
also encourage researchers to systematically explore the design space of
algorithms for solving large-scale L1-regularization problems.
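As background, plain cyclic coordinate descent for the L1-regularized least-squares problem, the serial special case that the block-greedy family parallelizes, can be sketched as:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1.
    Each coordinate update is a closed-form soft threshold; columns of X
    are assumed nonzero. Block-greedy CD updates the best coordinate in
    each of several feature blocks in parallel instead of cycling."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * w[j]            # remove j's contribution
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]            # add it back
    return w

X = np.eye(4)
y = np.array([3.0, 0.5, -2.0, 0.0])
w = lasso_cd(X, y, lam=1.0)               # soft-thresholds y: [2, 0, -1, 0]
```

The clustering idea in the paper corresponds to choosing blocks so that columns in different blocks are nearly orthogonal, which keeps parallel updates from conflicting.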
|
1212.4194 | Effect of Coupling on the Epidemic Threshold in Interconnected Complex
Networks: A Spectral Analysis | physics.soc-ph cs.SI math.DS physics.bio-ph | In epidemic modeling, the term infection strength indicates the ratio of
the infection rate to the cure rate. If the infection strength is higher than a
certain threshold -- which we define as the epidemic threshold -- then the
epidemic spreads through the population and persists in the long run. For a
single generic graph representing the contact network of the population under
consideration, the epidemic threshold turns out to be equal to the inverse of
the spectral radius of the contact graph. However, in a real world scenario it
is not possible to isolate a population completely: there is always some
interconnection with another network, which partially overlaps with the contact
network. Results for epidemic threshold in interconnected networks are limited
to homogeneous mixing populations and degree distribution arguments. In this
paper, we adopt a spectral approach. We show how the epidemic threshold in a
given network changes as a result of being coupled with another network with
fixed infection strength. In our model, the contact network and the
interconnections are generic. Using bifurcation theory and algebraic graph
theory, we rigorously derive the epidemic threshold in interconnected networks.
These results have implications for the broad field of epidemic modeling and
control. Our analytical results are supported by numerical simulations.
|
1212.4198 | Underlay Cognitive Radios with Capacity Guarantees for Primary Users | cs.IT cs.NI math.IT | To use the spectrum efficiently, cognitive radios leverage knowledge of the
channel state information (CSI) to optimize the performance of the secondary
users (SUs) while limiting the interference to the primary users (PUs). The
algorithms in this paper are designed to maximize the weighted ergodic
sum-capacity of SUs, which transmit orthogonally and adhere simultaneously to
constraints limiting: i) the long-term (ergodic) capacity loss caused to each
PU receiver; ii) the long-term interference power at each PU receiver; and iii)
the long-term power at each SU transmitter. Formulations accounting for
short-term counterparts of i) and ii) are also discussed. Although the
long-term capacity constraints are non-convex, the resultant optimization
problem exhibits zero-duality gap and can be efficiently solved in the dual
domain. The optimal allocation schemes (power and rate loadings, frequency
bands to be accessed, and SU links to be activated) are a function of the CSI
of the primary and secondary networks as well as the Lagrange multipliers
associated with the long-term constraints. The optimal resource allocation
algorithms are first designed under the assumption that the CSI is perfect,
then the modifications needed to accommodate different forms of imperfect CSI
(quantized, noisy, and outdated) are analyzed.
|
1212.4210 | From compression to compressed sensing | cs.IT math.IT | Can compression algorithms be employed for recovering signals from their
underdetermined set of linear measurements? Addressing this question is the
first step towards applying compression algorithms for compressed sensing (CS).
In this paper, we consider a family of compression algorithms $\mathcal{C}_r$,
parametrized by rate $r$, for a compact class of signals $\mathcal{Q} \subset
\mathds{R}^n$. The set of natural images and JPEG at different rates are
examples of $\mathcal{Q}$ and $\mathcal{C}_r$, respectively. We establish a
connection between the rate-distortion performance of $\mathcal{C}_r$, and the
number of linear measurements required for successful recovery in CS. We then
propose the compressible signal pursuit (CSP) algorithm and prove that, with high
probability, it accurately and robustly recovers signals from an
underdetermined set of linear measurements. We also explore the performance of
CSP in the recovery of infinite dimensional signals.
|
1212.4269 | Accelerated Time-of-Flight Mass Spectrometry | math.OC cs.CE stat.ML | We study a simple modification to the conventional time of flight mass
spectrometry (TOFMS) where a \emph{variable} and (pseudo)-\emph{random} pulsing
rate is used which allows for traces from different pulses to overlap. This
modification requires little alteration to the currently employed hardware.
However, it requires a reconstruction method to recover the spectrum from
highly aliased traces. We propose and demonstrate an efficient algorithm that
can process massive TOFMS data using computational resources that can be
considered modest by today's standards. This approach can be used to improve
duty cycle, speed, and mass resolving power of TOFMS at the same time. We
expect this to extend the applicability of TOFMS to new domains.
|
1212.4287 | Prediction of Parallel Speed-ups for Las Vegas Algorithms | cs.DC cs.AI | We propose a probabilistic model for the parallel execution of Las Vegas
algorithms, i.e., randomized algorithms whose runtime might vary from one
execution to another, even with the same input. This model aims at predicting
the parallel performance (i.e., speedups) by analyzing the runtime distribution
of the sequential runs of the algorithm. Then, we study in practice the case of
a particular Las Vegas algorithm for combinatorial optimization, on three
classical problems, and compare with an actual parallel implementation up to
256 cores. We show that the prediction can be quite accurate, matching the
actual speedups very well up to 100 parallel cores and then with a deviation of
about 20% up to 256 cores.
|
1212.4303 | On the notion of balance in social network analysis | cs.SI math.CO math.PR physics.soc-ph | The notion of "balance" is fundamental for sociologists who study social
networks. In formal mathematical terms, it concerns the distribution of triad
configurations in actual networks compared to random networks of the same edge
density. On reading Charles Kadushin's recent book "Understanding Social
Networks", we were struck by the amount of confusion in the presentation of
this concept in the early sections of the book. This confusion seems to lie
behind his flawed analysis of a classical empirical data set, namely the karate
club graph of Zachary. Our goal here is twofold. Firstly, we present the notion
of balance in terms which are logically consistent, but also consistent with
the way sociologists use the term. The main message is that the notion can only
be meaningfully applied to undirected graphs. Secondly, we correct the analysis
of triads in the karate club graph. This results in the interesting observation
that the graph is, in a precise sense, quite "unbalanced". We show that this
lack of balance is characteristic of a wide class of starlike-graphs, and
discuss possible sociological interpretations of this fact, which may be useful
in many other situations.
|
1212.4315 | Assessing Sentiment Strength in Words Prior Polarities | cs.CL | Many approaches to sentiment analysis rely on lexica where words are tagged
with their prior polarity - i.e. if a word out of context evokes something
positive or something negative. In particular, broad-coverage resources like
SentiWordNet provide polarities for (almost) every word. Since words can have
multiple senses, we address the problem of how to compute the prior polarity of
a word starting from the polarity of each sense and returning its polarity
strength as an index between -1 and 1. We compare 14 such formulae that appear
in the literature, and assess which one best approximates the human judgement
of prior polarities, with both regression and classification models.
|
1212.4347 | Bayesian Group Nonnegative Matrix Factorization for EEG Analysis | cs.LG stat.ML | We propose a generative model of a group EEG analysis, based on appropriate
kernel assumptions on EEG data. We derive the variational inference update rule
using various approximation techniques. The proposed model outperforms the
current state-of-the-art algorithms in terms of common pattern extraction. The
validity of the proposed model is tested on the BCI competition dataset.
|
1212.4373 | A trust-based security mechanism for nomadic users in pervasive systems | cs.CR cs.AI | The emergence of network technologies and the appearance of new varied
applications in terms of services and resources have created new security
problems for which existing solutions and mechanisms are inadequate, especially
problems of identification and authentication. In a highly distributed and
pervasive system, a uniform and centralized security management is not an
option. It then becomes necessary to give more autonomy to security systems by
providing them with mechanisms that allow dynamic and flexible cooperation
and collaboration between the actors in the system.
|
1212.4375 | Lumpings of Markov chains, entropy rate preservation, and higher-order
lumpability | cs.IT math.IT math.PR | A lumping of a Markov chain is a coordinate-wise projection of the chain. We
characterise the entropy rate preservation of a lumping of an aperiodic and
irreducible Markov chain on a finite state space by the random growth rate of
the cardinality of the realisable preimage of a finite-length trajectory of the
lumped chain and by the information needed to reconstruct original trajectories
from their lumped images. Both are purely combinatorial criteria, depending
only on the transition graph of the Markov chain and the lumping function. A
lumping is strongly k-lumpable iff the lumped process is a k-th order Markov
chain for each starting distribution of the original Markov chain. We
characterise strong k-lumpability via tightness of stationary entropic bounds.
In the sparse setting, we give sufficient conditions on the lumping to both
preserve the entropy rate and be strongly k-lumpable.
|
1212.4490 | Sketch-to-Design: Context-based Part Assembly | cs.GR cs.CV | Designing 3D objects from scratch is difficult, especially when the user
intent is fuzzy without a clear target form. In the spirit of
modeling-by-example, we facilitate design by providing reference and
inspiration from existing model contexts. We rethink model design as navigating
through different possible combinations of part assemblies based on a large
collection of pre-segmented 3D models. We propose an interactive
sketch-to-design system, where the user sketches prominent features of parts to
combine. The sketched strokes are analyzed individually and in context with the
other parts to generate relevant shape suggestions via a design gallery
interface. As the session progresses and more parts get selected, contextual
cues become increasingly dominant and the system quickly converges to a final
design. As a key enabler, we use pre-learned part-based contextual information
to allow the user to quickly explore different combinations of parts. Our
experiments demonstrate the effectiveness of our approach for efficiently
designing new variations from existing shapes.
|
1212.4507 | Variational Optimization | stat.ML cs.LG cs.NA | We discuss a general technique that can be used to form a differentiable
bound on the optima of non-differentiable or discrete objective functions. We
form a unified description of these methods and consider under which
circumstances the bound is concave. In particular we consider two concrete
applications of the method, namely sparse learning and support vector
classification.
|
1212.4522 | A Multi-View Embedding Space for Modeling Internet Images, Tags, and
their Semantics | cs.CV cs.IR cs.LG cs.MM | This paper investigates the problem of modeling Internet images and
associated text or tags for tasks such as image-to-image search, tag-to-image
search, and image-to-tag search (image annotation). We start with canonical
correlation analysis (CCA), a popular and successful approach for mapping
visual and textual features to the same latent space, and incorporate a third
view capturing high-level image semantics, represented either by a single
category or multiple non-mutually-exclusive concepts. We present two ways to
train the three-view embedding: supervised, with the third view coming from
ground-truth labels or search keywords; and unsupervised, with semantic themes
automatically obtained by clustering the tags. To ensure high accuracy for
retrieval tasks while keeping the learning process scalable, we combine
multiple strong visual features and use explicit nonlinear kernel mappings to
efficiently approximate kernel CCA. To perform retrieval, we use a specially
designed similarity function in the embedded space, which substantially
outperforms the Euclidean distance. The resulting system produces compelling
qualitative results and outperforms a number of two-view baselines on retrieval
tasks on three large-scale Internet image datasets.
|
1212.4527 | GMM-Based Hidden Markov Random Field for Color Image and 3D Volume
Segmentation | cs.CV | In this project, we first study the Gaussian-based hidden Markov random field
(HMRF) model and its expectation-maximization (EM) algorithm. Then we
generalize it to Gaussian mixture model-based hidden Markov random field. The
algorithm is implemented in MATLAB. We also apply this algorithm to color image
segmentation problems and 3D volume segmentation problems.
|
1212.4565 | Truthy: Enabling the Study of Online Social Networks | cs.SI cs.DL physics.soc-ph | The broad adoption of online social networking platforms has made it possible
to study communication networks at an unprecedented scale. Digital trace data
can be compiled into large data sets of online discourse. However, it is a
challenge to collect, store, filter, and analyze large amounts of data, even by
experts in the computational sciences. Here we describe our recent extensions
to Truthy, a system that collects Twitter data to analyze discourse in near
real-time. We introduce several interactive visualizations and analytical tools
with the goal of enabling citizens, journalists, and researchers to understand
and study online social networks at multiple scales.
|
1212.4608 | Perceptually Motivated Shape Context Which Uses Shape Interiors | cs.CV | In this paper, we identify some of the limitations of current-day shape
matching techniques. We provide examples of how contour-based shape matching
techniques cannot provide a good match for certain visually similar shapes. To
overcome this limitation, we propose a perceptually motivated variant of the
well-known shape context descriptor. We identify that the interior properties
of the shape play an important role in object recognition and develop a
descriptor that captures these interior properties. We show that our method can
easily be augmented with any other shape matching algorithm. We also show from
our experiments that the use of our descriptor can significantly improve the
retrieval rates.
|
1212.4617 | Improved Multiuser Detection in Asynchronous Flat-Fading Non-Gaussian
Channels | cs.SY cs.NI | In this paper, a new M-estimator based multiuser detection in asynchronous
flat-fading non-Gaussian CDMA channels is considered. A new closed-form
expression is derived for the characteristic function of the multiple-access
interference signals. Simulation results are provided to prove the
effectiveness of the derived bit-error probabilities obtained with this
expression in asynchronous flat-fading non-Gaussian CDMA channels.
|
1212.4626 | MAC with Action-Dependent State Information at One Encoder | cs.IT math.IT | Problems dealing with the ability to take an action that affects the states
of state-dependent communication channels are of timely interest and
importance. Therefore, we extend the study of action-dependent channels, which
until now focused on point-to-point models, to multiple-access channels (MAC).
In this paper, we consider a two-user, state-dependent MAC, in which one of the
encoders, called the informed encoder, is allowed to take an action that
affects the formation of the channel states. Two independent messages are to be
sent through the channel: a common message known to both encoders and a private
message known only to the informed encoder. In addition, the informed encoder
has access to the sequence of channel states in a non-causal manner. Our
framework generalizes previously evaluated settings of state dependent
point-to-point channels with actions and MACs with common messages. We derive a
single letter characterization of the capacity region for this setting. Using
this general result, we obtain and compute the capacity region for the Gaussian
action-dependent MAC. The unique methods used in solving the Gaussian case are
then applied to obtain the capacity of the Gaussian action-dependent
point-to-point channel, a problem that was left open until this work. Finally, we
establish some dualities between action-dependent channel coding and source
coding problems. Specifically, we obtain a duality between the considered MAC
setting and the rate distortion model known as "Successive Refinement with
Actions". This is done by developing a set of simple duality principles that
enable us to successfully evaluate the outcome of one problem given the other.
|
1212.4638 | Polynomial functions of degree 20 which are APN infinitely often | cs.IT cs.CR math.IT | We give all the polynomial functions of degree 20 which are APN over an
infinity of field extensions and show they are all CCZ-equivalent to the
function $x^5$, which is a new step in proving the conjecture of Aubry, McGuire
and Rodier.
|
1212.4648 | Algebraic modelling and performance evaluation of acyclic fork-join
queueing networks | math.OC cs.SY | Simple lower and upper bounds on mean cycle time in stochastic acyclic
fork-join queueing networks are derived using a (max,+)-algebra based
representation of network dynamics. The behaviour of the bounds under various
assumptions concerning the service times in the networks is discussed, and
related numerical examples are presented.
|
1212.4649 | Exponential error bounds on parameter modulation-estimation for discrete
memoryless channels | cs.IT math.IT | We consider the problem of modulation and estimation of a random parameter
$U$ to be conveyed across a discrete memoryless channel. Upper and lower bounds
are derived for the best achievable exponential decay rate of a general moment
of the estimation error, $\mathbb{E}|\hat{U}-U|^\rho$, $\rho\ge 0$, when both the
modulator and the estimator are subjected to optimization. These exponential
error bounds turn out to be intimately related to error exponents of channel
coding and to channel capacity. While in general, there is some gap between the
upper and the lower bound, they asymptotically coincide both for very small and
for very large values of the moment power $\rho$. This means that our
achievability scheme, which is based on simple quantization of $U$ followed by
channel coding, is nearly optimum in both limits. Some additional properties of
the bounds are discussed and demonstrated, and finally, an extension to the
case of a multidimensional parameter vector is outlined, with the principal
conclusion that our upper and lower bound asymptotically coincide also for a
high dimensionality.
|
1212.4653 | Convolutional Codes Derived From Group Character Codes | cs.IT math.IT quant-ph | New families of unit memory as well as multi-memory convolutional codes are
constructed algebraically in this paper. These convolutional codes are derived
from the class of group character codes. The proposed codes have basic
generator matrices; consequently, they are non-catastrophic. Additionally, the
new code parameters are better than the ones available in the literature.
|
1212.4654 | On Classical and Quantum MDS-Convolutional BCH Codes | quant-ph cs.IT math.IT | Several new families of multi-memory classical convolutional
Bose-Chaudhuri-Hocquenghem (BCH) codes as well as families of unit-memory
quantum convolutional codes are constructed in this paper. Our unit-memory
classical and quantum convolutional codes are optimal in the sense that they
attain the classical (quantum) generalized Singleton bound. The constructions
presented in this paper are performed algebraically and not by computational
search.
|
1212.4663 | Concentration of Measure Inequalities in Information Theory,
Communications and Coding (Second Edition) | cs.IT math.IT math.PR | During the last two decades, concentration inequalities have been the subject
of exciting developments in various areas, including convex geometry,
functional analysis, statistical physics, high-dimensional statistics, pure and
applied probability theory, information theory, theoretical computer science,
and learning theory. This monograph focuses on some of the key modern
mathematical tools that are used for the derivation of concentration
inequalities, on their links to information theory, and on their various
applications to communications and coding. In addition to being a survey, this
monograph also includes various new recent results derived by the authors. The
first part of the monograph introduces classical concentration inequalities for
martingales, as well as some recent refinements and extensions. The power and
versatility of the martingale approach is exemplified in the context of codes
defined on graphs and iterative decoding algorithms, as well as codes for
wireless communication. The second part of the monograph introduces the entropy
method, an information-theoretic technique for deriving concentration
inequalities. The basic ingredients of the entropy method are discussed first
in the context of logarithmic Sobolev inequalities, which underlie the
so-called functional approach to concentration of measure, and then from a
complementary information-theoretic viewpoint based on transportation-cost
inequalities and probability in metric spaces. Some representative results on
concentration for dependent random variables are briefly summarized, with
emphasis on their connections to the entropy method. Finally, we discuss
several applications of the entropy method to problems in communications and
coding, including strong converses, empirical distributions of good channel
codes, and an information-theoretic converse for concentration of measure.
|
1212.4674 | Natural Language Understanding Based on Semantic Relations between
Sentences | cs.CL | In this paper, we define event expression over sentences of natural language
and semantic relations between events. Based on this definition, we formally
consider the text understanding process with events as the basic unit.
|
1212.4675 | Analysis of Large-scale Traffic Dynamics using Non-negative Tensor
Factorization | cs.LG | In this paper, we present our work on clustering and prediction of temporal
dynamics of global congestion configurations in large-scale road networks.
Instead of looking into temporal traffic state variation of individual links,
or of small areas, we focus on spatial congestion configurations of the whole
network. In our work, we aim at describing the typical temporal dynamic
patterns of this network-level traffic state and achieving long-term prediction
of the large-scale traffic dynamics, in a unified data-mining framework. To
this end, we formulate this joint task using Non-negative Tensor Factorization
(NTF), which has been shown to be a useful decomposition tool for multivariate
data sequences. Clustering and prediction are performed based on the compact
tensor factorization results. Experiments on large-scale simulated data
illustrate the effectiveness of our method, with promising results for long-term
forecast of traffic evolution.
|
1212.4702 | Simple Search Engine Model: Adaptive Properties for Doubleton | cs.IR cs.DM | In this paper we study the relationship between query and search engine by
exploring the adaptive properties of the doubleton as an event space based on a
simple search engine. We employ set theory to define the doubleton and derive
some of its properties.
|
1212.4703 | Semi-explicit Parareal method based on convergence acceleration
technique | cs.SY math.CA math.NA | The Parareal algorithm is used to solve time-dependent problems considering
multiple solvers that may work in parallel. The key feature is an initial rough
approximation of the solution that is iteratively refined by the parallel
solvers. We report a derivation of the Parareal method that uses a convergence
acceleration technique to improve the accuracy of the solution. Our approach
first uses an explicit ODE solver to perform the parallel computations with
different time-steps and then, a decomposition of the solution into specific
convergent series, based on an extrapolation method, allows us to refine the
precision of the solution. Our proposed method exploits basic explicit
integration methods, such as for example the explicit Euler scheme, in order to
preserve the simplicity of the global parallel algorithm. The first part of the
paper outlines the proposed method applied to the simple explicit Euler scheme
and then the derivation of the classical Parareal algorithm is discussed and
illustrated with numerical examples.
|
1212.4717 | Quickest Detection with Discretely Controlled Observations | math.PR cs.IT cs.SY math.IT math.OC | We study a continuous time Bayesian quickest detection problem in which
observation times are a scarce resource. The agent, limited to making a finite
number of discrete observations, must adaptively decide his observation
strategy to minimize detection delay and the probability of false alarm. Under
two different models of observation rights, we establish the existence of
optimal strategies, and formulate an algorithmic approach to the problem via
jump operators. We describe algorithms for these problems, and illustrate them
with some numerical results. As the number of observation rights tends to
infinity, we also show convergence to the classical continuous observation
problem of Shiryaev.
|
1212.4751 | Opinion formation model for markets with a social temperature and fear | physics.soc-ph cond-mat.stat-mech cs.SI q-fin.GN | In the spirit of behavioral finance, we study the process of opinion
formation among investors using a variant of the 2D Voter Model with a tunable
social temperature. Further, a feedback acting on the temperature is
introduced, such that social temperature reacts to market imbalances and thus
becomes time dependent. In this toy market model, social temperature represents
nervousness of agents towards market imbalances representing speculative risk.
We use the knowledge about the discontinuous Generalized Voter Model phase
transition to determine critical fixed points. The system exhibits metastable
phases around these fixed points characterized by structured lattice states,
with intermittent excursions away from the fixed points. The statistical
mechanics of the model is characterized and its relation to dynamics of opinion
formation among investors in real markets is discussed.
|
1212.4775 | Role Mining with Probabilistic Models | cs.CR cs.LG stat.ML | Role mining tackles the problem of finding a role-based access control (RBAC)
configuration, given an access-control matrix assigning users to access
permissions as input. Most role mining approaches work by constructing a large
set of candidate roles and use a greedy selection strategy to iteratively pick
a small subset such that the differences between the resulting RBAC
configuration and the access control matrix are minimized. In this paper, we
advocate an alternative approach that recasts role mining as an inference
problem rather than a lossy compression problem. Instead of using combinatorial
algorithms to minimize the number of roles needed to represent the
access-control matrix, we derive probabilistic models to learn the RBAC
configuration that most likely underlies the given matrix.
Our models are generative in that they reflect the way that permissions are
assigned to users in a given RBAC configuration. We additionally model how
user-permission assignments that conflict with an RBAC configuration emerge and
we investigate the influence of constraints on role hierarchies and on the
number of assignments. In experiments with access-control matrices from
real-world enterprises, we compare our proposed models with other role mining
methods. Our results show that our probabilistic models infer roles that
generalize well to new system users for a wide variety of data, while other
models' generalization abilities depend on the dataset given.
|
1212.4777 | A Practical Algorithm for Topic Modeling with Provable Guarantees | cs.LG cs.DS stat.ML | Topic models provide a useful method for dimensionality reduction and
exploratory data analysis in large text corpora. Most approaches to topic model
inference have been based on a maximum likelihood objective. Efficient
algorithms exist that approximate this objective, but they have no provable
guarantees. Recently, algorithms have been introduced that provide provable
bounds, but these algorithms are not practical because they are inefficient and
not robust to violations of model assumptions. In this paper we present an
algorithm for topic model inference that is both provable and practical. The
algorithm produces results comparable to the best MCMC implementations while
running orders of magnitude faster.
|
1212.4779 | StaticGreedy: solving the scalability-accuracy dilemma in influence
maximization | cs.SI cs.DS physics.soc-ph | Influence maximization, defined as a problem of finding a set of seed nodes
to trigger a maximized spread of influence, is crucial to viral marketing on
social networks. For practical viral marketing on large scale social networks,
it is required that influence maximization algorithms should have both
guaranteed accuracy and high scalability. However, existing algorithms suffer a
scalability-accuracy dilemma: conventional greedy algorithms guarantee the
accuracy with expensive computation, while the scalable heuristic algorithms
suffer from unstable accuracy.
In this paper, we focus on solving this scalability-accuracy dilemma. We
point out that the essential reason of the dilemma is the surprising fact that
the submodularity, a key requirement of the objective function for a greedy
algorithm to approximate the optimum, is not guaranteed in all conventional
greedy algorithms in the literature of influence maximization. Therefore a
greedy algorithm has to perform a huge number of Monte Carlo simulations to
compensate for the lack of guaranteed submodularity. Motivated by this
critical finding, we propose a static greedy algorithm, named StaticGreedy, to
strictly guarantee the submodularity of influence spread function during the
seed selection process. The proposed algorithm dramatically reduces the
computational expense by two orders of magnitude without loss of accuracy.
Moreover, we propose a dynamical update strategy which can speed up the
StaticGreedy algorithm by 2-7 times on large scale social networks.
|
1212.4788 | easyGWAS: An integrated interspecies platform for performing genome-wide
association studies | q-bio.GN cs.CE cs.DL stat.AP | Motivation: The rapid growth in genome-wide association studies (GWAS) in
plants and animals has brought about the need for a central resource that
facilitates i) performing GWAS, ii) accessing data and results of other GWAS,
and iii) enabling all users regardless of their background to exploit the
latest statistical techniques without having to manage complex software and
computing resources.
Results: We present easyGWAS, a web platform that provides methods, tools and
dynamic visualizations to perform and analyze GWAS. In addition, easyGWAS makes
it simple to reproduce results of others, validate findings, and access larger
sample sizes through merging of public datasets.
Availability: Detailed method and data descriptions as well as tutorials are
available in the supplementary materials. easyGWAS is available at
http://easygwas.tuebingen.mpg.de/.
Contact: dominik.grimm@tuebingen.mpg.de
|
1212.4799 | Towards common-sense reasoning via conditional simulation: legacies of
Turing in Artificial Intelligence | cs.AI math.LO stat.ML | The problem of replicating the flexibility of human common-sense reasoning
has captured the imagination of computer scientists since the early days of
Alan Turing's foundational work on computation and the philosophy of artificial
intelligence. In the intervening years, the idea of cognition as computation
has emerged as a fundamental tenet of Artificial Intelligence (AI) and
cognitive science. But what kind of computation is cognition?
We describe a computational formalism centered around a probabilistic Turing
machine called QUERY, which captures the operation of probabilistic
conditioning via conditional simulation. Through several examples and analyses,
we demonstrate how the QUERY abstraction can be used to cast common-sense
reasoning as probabilistic inference in a statistical model of our observations
and the uncertain structure of the world that generated that experience. This
formulation is a recent synthesis of several research programs in AI and
cognitive science, but it also represents a surprising convergence of several
of Turing's pioneering insights in AI, the foundations of computation, and
statistics.
|
1212.4804 | Low Speed Automation, a French Initiative | cs.RO | Nowadays, vehicle safety is constantly increasing thanks to improvements
in passive and active safety. In daily car use, however, traffic jams
remain a problem. With limited space for road infrastructure, automating
the driving task in specific situations appears to be a possible solution.
The French project ABV, which stands for low-speed automation, aims to
demonstrate the feasibility of the concept and to establish its benefits.
In this article, we describe the scientific background of the project and
its expected outputs.
|
1212.4846 | Operational semantics for product-form solution | cs.PF cs.SY | In this paper we present product-form solutions from the point of view of
stochastic process algebra. In previous work we showed how to derive
product-form solutions for a formalism called Labelled Markov Automata
(LMA). LMA are very useful because their relation to Continuous Time
Markov Chains is very direct. The disadvantage of using LMA is that proofs
of properties are cumbersome; in fact, in LMA it is not possible to exploit
the inductive structure of the language in a proof. In this paper we
consider a simple stochastic process algebra that has the great advantage
of simplifying the proofs. This simple language is inspired by PEPA;
however, a detailed analysis of the semantics of cooperation will show the
differences between the two formalisms. It will also be shown that the
semantics of cooperation in process algebra influences the correctness of
the derivation of product-form solutions.
|
1212.4871 | Automatic post-picking using MAPPOS improves particle image detection
from Cryo-EM micrographs | stat.ML cs.CV | Cryo-electron microscopy (cryo-EM) studies using single particle
reconstruction are extensively used to reveal structural information on
macromolecular complexes. Aiming at the highest achievable resolution, state of
the art electron microscopes automatically acquire thousands of high-quality
micrographs. Particles are detected on and boxed out from each micrograph using
fully- or semi-automated approaches. However, the obtained particles still
require laborious manual post-picking classification, which is one major
bottleneck for single particle analysis of large datasets. We introduce
MAPPOS, a supervised post-picking strategy for the classification of boxed
particle images that complements the already efficient automated particle
picking routines. MAPPOS employs machine learning techniques to train
a robust classifier from a small number of characteristic image features. In
order to accurately quantify the performance of MAPPOS we used simulated
particle and non-particle images. In addition, we verified our method by
applying it to an experimental cryo-EM dataset and comparing the results to the
manual classification of the same dataset. Comparisons between MAPPOS and
manual post-picking classification by several human experts demonstrated that
merely a few hundred sample images are sufficient for MAPPOS to classify an
entire dataset with a human-like performance. MAPPOS was shown to greatly
accelerate the throughput of large datasets by reducing the manual workload by
orders of magnitude while maintaining a reliable identification of non-particle
images.
|
1212.4898 | Network Risk Limiting Dispatch: Optimal Control and Price of Uncertainty | math.OC cs.IT cs.SY math.IT | Increased uncertainty due to high penetration of renewables imposes
significant costs on system operators. The added costs depend on several
factors, including market design, the performance of renewable generation
forecasting, and the specific dispatch procedure. Quantifying these costs
has been limited to small-sample Monte Carlo approaches applied to specific
dispatch algorithms. The computational complexity and accuracy of these
approaches have limited the understanding of tradeoffs between different
factors. In this work we consider a two-stage stochastic economic dispatch
problem. Our goal is to provide an analytical quantification and an
intuitive understanding of the effects of uncertainties and network
congestion on the dispatch procedure and the optimal cost. We first
consider an uncongested network and calculate the risk limiting dispatch.
In addition, we derive the price of uncertainty, a number that
characterizes the intrinsic impact of uncertainty on the integration cost
of renewables. We then extend the results to a network where one link can
become congested. Under mild conditions, we calculate the price of
uncertainty even in this case. We show that the risk limiting dispatch is
given by a set of deterministic equilibrium equations. The dispatch
solution yields an important insight: congested links do not create
isolated nodes, even in a two-node network. In fact, the network can
support backflows in congested links, which are useful for reducing
uncertainty by averaging supply across the network. We demonstrate the
performance of our approach on standard IEEE benchmark networks.
|
1212.4899 | New inequalities of Mill's ratio and Its Application to The Inverse
Q-function Approximation | cs.IT math.IT math.ST stat.TH | In this paper, we investigate the Mill's ratio estimation problem and
obtain two new inequalities. Compared with the well-known results obtained
by Gordon, they are tighter. Furthermore, we also discuss the inverse
Q-function approximation problem and present some useful results on the
inverse solution. Numerical results confirm the validity of our theoretical
analysis. In addition, we also present a conjecture on the bounds of the
inverse solution of the Q-function.
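For reference, Gordon's classical bounds on the Mill's ratio, which the new
inequalities tighten, are easy to check numerically. A minimal sketch
(function names are ours):

```python
import math

def mills_ratio(x):
    # R(x) = Q(x) / phi(x): Gaussian tail probability over Gaussian density.
    q = 0.5 * math.erfc(x / math.sqrt(2.0))
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return q / phi

def gordon_bounds(x):
    # Gordon's inequalities: x/(x^2 + 1) <= R(x) <= 1/x for x > 0.
    return x / (x * x + 1.0), 1.0 / x

for x in (0.5, 1.0, 2.0, 5.0):
    lo, hi = gordon_bounds(x)
    assert lo < mills_ratio(x) < hi
```

The gap between `lo` and `hi` shrinks as x grows, which is why tighter
bounds matter mainly in the moderate-x regime.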
|
1212.4902 | On the Capacity Region and the Generalized Degrees of Freedom Region for
the MIMO Interference Channel with Feedback | cs.IT math.IT | In this paper, we study the effect of feedback on two-user MIMO interference
channels. The capacity region of MIMO interference channels with feedback is
characterized within a constant number of bits, where this constant is
independent of the channel matrices. Further, it is shown that the capacity
region of a MIMO interference channel with feedback and its reciprocal
interference channel are within a constant number of bits. Finally, the
generalized degrees of freedom region for the MIMO interference channel with
feedback is characterized.
|
1212.4906 | SMML estimators for 1-dimensional continuous data | cs.IT math.IT math.ST stat.ML stat.TH | A method is given for calculating the strict minimum message length (SMML)
estimator for 1-dimensional exponential families with continuous sufficient
statistics. A set of $n$ equations are found that the $n$ cut-points of the
SMML estimator must satisfy. These equations can be solved using Newton's
method and this approach is used to produce new results and to replicate
results that C. S. Wallace obtained using his boundary rules for the SMML
estimator. A rigorous proof is also given that, despite being composed of step
functions, the posterior probability corresponding to the SMML estimator is a
continuous function of the data.
|
1212.4912 | A Torelli-like Theorem for Smooth Plane Curves | math.AG cs.IT math.IT | The Information-Theoretic Schottky Problem treats the period matrix of a
compact Riemann Surface as a compressible signal. In this case, the period
matrix of a smooth plane curve is characterized by only 4 of its columns, a
significant compression.
|
1212.4914 | Growing Random Geometric Graph Models of Super-linear Scaling Law | physics.soc-ph cs.SI | Recent research on complex systems has highlighted the so-called
super-linear growth phenomenon. As the system size $P$, measured as
population in cities or active users in online communities, increases, the
total activity $X$, measured as GDP or the number of new patents or crimes
in cities generated by these people, also increases, but at a faster rate.
This accelerating growth phenomenon is well described by a super-linear
power law $X \propto P^{\gamma}$ ($\gamma>1$). However, an explanation of
this phenomenon is still lacking. In this paper, we propose a modeling
framework called growing random geometric models to explain the
super-linear relationship. A growing network is constructed on an abstract
geometric space. A newly arriving node survives only if it lands in an
appropriate place in the space where other nodes already exist; new edges
are then connected to the adjacent nodes, whose number is determined by the
density of existing nodes. Thus the total number of edges grows faster than
the number of nodes, exactly following the super-linear power law. The
models not only reproduce many phenomena observed in complex networks,
e.g., scale-free degree distributions and asymptotically size-invariant
clustering coefficients, but also resemble known patterns of cities, such
as fractal growth and area-population and diversity-population scaling
relations. Strikingly, only one important parameter, the dimension of the
geometric space, really influences the super-linear growth exponent
$\gamma$.
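The super-linear edge growth at the heart of the model can be illustrated
with a stripped-down random geometric graph: drop $n$ points in the unit
square and link every pair closer than a fixed radius, so the edge count
grows roughly like $n^2$. This is our own toy sketch, not the authors'
growth process with its survival rule.

```python
import math
import random

def rgg_edge_count(n, radius, seed=0):
    # Drop n points uniformly in the unit square and count pairs closer
    # than `radius`; with a fixed radius the expected edge count scales
    # like n^2 * pi * radius^2 / 2, i.e. super-linearly in n.
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.hypot(pts[i][0] - pts[j][0],
                          pts[i][1] - pts[j][1]) < radius:
                edges += 1
    return edges

# Doubling the number of nodes more than doubles the number of edges.
e1, e2 = rgg_edge_count(200, 0.1, seed=1), rgg_edge_count(400, 0.1, seed=1)
assert e2 / e1 > 2
```

In the paper's model the exponent is controlled by the dimension of the
geometric space; this flat 2-D toy fixes it at roughly 2.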
|