| id | title | categories | abstract |
|---|---|---|---|
1302.0272 | A Study of Influential Factors in the Adoption and Diffusion of B2C
E-Commerce | cs.CY cs.SI | This paper looks at the present standing of e-commerce in Saudi Arabia, as
well as the challenges and strengths of Business-to-Consumer (B2C) electronic
commerce. Many studies have been conducted around the world in order to gain a
better understanding of the demands, needs and effectiveness of online
commerce. A study was undertaken to review the literature identifying the
factors influencing the adoption and diffusion of B2C e-commerce. It found four
distinct categories: businesses, customers, environmental and governmental
support, which must all be considered when creating an e-commerce
infrastructure. A concept matrix was used to provide a comparison of important
factors in different parts of the world. The study found that e-commerce in
Saudi Arabia was lacking in governmental support as well as in adequate
involvement by both customers and retailers.
|
1302.0274 | Directedness of information flow in mobile phone communication networks | physics.soc-ph cs.SI | Without having direct access to the information that is being exchanged,
traces of information flow can be obtained by looking at temporal sequences of
user interactions. These sequences can be represented as causality trees whose
statistics result from a complex interplay between the topology of the
underlying (social) network and the time correlations among the communications.
Here, we study causality trees in mobile-phone data, which can be represented
as a dynamical directed network. This representation of the data reveals the
existence of super-spreaders and super-receivers. We show that the tree
statistics, and hence the information-spreading process, are extremely
sensitive to the in-out degree correlation exhibited by the users. We also
find that a given piece of information, e.g., a rumor, would require users to
retransmit it for more than 30 hours in order to cover a macroscopic fraction
of the system. Our analysis indicates that topological node-node correlations
of the underlying social network, while allowing the existence of information
loops, also promote information spreading. Temporal correlations, and
therefore causality effects, are only visible as local phenomena and during
short time scales. These results are obtained through a combination of theory
and data analysis techniques.
|
1302.0286 | Stochastic maximum principle for optimal control of SPDEs | math.OC cs.SY math.PR | We prove a version of the maximum principle, in the sense of Pontryagin, for
the optimal control of a stochastic partial differential equation driven by a
finite dimensional Wiener process. The equation is formulated in a
semi-abstract form that allows direct applications to a large class of
controlled stochastic parabolic equations. We allow for a diffusion coefficient
dependent on the control parameter, and the space of control actions is
general, so that in particular we need to introduce two adjoint processes. The
second adjoint process takes values in a suitable space of operators on $L^4$.
|
1302.0296 | Interference Networks with No CSIT: Impact of Topology | cs.IT math.IT | We consider partially-connected $K$-user interference networks, where the
transmitters have no knowledge about the channel gain values, but they are
aware of network topology (or connectivity). We introduce several linear
algebraic and graph theoretic concepts to derive new topology-based outer
bounds and inner bounds on the symmetric degrees-of-freedom (DoF) of these
networks. We evaluate our bounds for two classes of networks to demonstrate
their tightness for most networks in these classes, quantify the gain of our
inner bounds over benchmark interference management strategies, and illustrate
the effect of network topology on these gains.
|
1302.0309 | Highly Available Transactions: Virtues and Limitations (Extended
Version) | cs.DB | To minimize network latency and remain online during server failures and
network partitions, many modern distributed data storage systems eschew
transactional functionality, which provides strong semantic guarantees for
groups of multiple operations over multiple data items. In this work, we
consider the problem of providing Highly Available Transactions (HATs):
transactional guarantees that do not suffer unavailability during system
partitions or incur high network latency. We introduce a taxonomy of highly
available systems and analyze existing ACID isolation and distributed data
consistency guarantees to identify which can and cannot be achieved in HAT
systems. This unifies the literature on weak transactional isolation, replica
consistency, and highly available systems. We analytically and experimentally
quantify the availability and performance benefits of HATs--often two to three
orders of magnitude over wide-area networks--and discuss their necessary
semantic compromises.
|
1302.0315 | Sparse Multiple Kernel Learning with Geometric Convergence Rate | cs.LG stat.ML | In this paper, we study the problem of sparse multiple kernel learning (MKL),
where the goal is to efficiently learn a combination of a fixed small number of
kernels from a large pool that could lead to a kernel classifier with a small
prediction error. We develop an efficient algorithm based on the greedy
coordinate descent algorithm, that is able to achieve a geometric convergence
rate under appropriate conditions. The convergence rate is achieved by
measuring the size of functional gradients by an empirical $\ell_2$ norm that
depends on the empirical data distribution. This is in contrast to previous
algorithms that use a functional norm to measure the size of gradients, which
is independent of the data samples. We also establish a generalization error
bound of the learned sparse kernel classifier using the technique of local
Rademacher complexity.
|
1302.0317 | Distributed simulation of city inundation by coupled surface and
subsurface porous flow for urban flood decision support system | cs.CE | We present a decision support system for flood early warning and disaster
management. It includes models for data-driven meteorological predictions,
for simulation of atmospheric pressure, wind, long sea waves and seiches; a
module for optimization of flood barrier gates operation; models for stability
assessment of levees and embankments, for simulation of city inundation
dynamics and citizen evacuation scenarios. The novelty of this paper is a
coupled distributed simulation of surface and subsurface flows that can predict
inundation of low-lying inland zones far from the submerged waterfront areas,
as observed in St. Petersburg city during the floods. All the models are
wrapped as software services in the CLAVIRE platform for urgent computing,
which provides workflow management and resource orchestration.
|
1302.0321 | Signal reconstruction in linear mixing systems with different error
metrics | cs.IT math.IT | We consider the problem of reconstructing a signal from noisy measurements in
linear mixing systems. The reconstruction performance is usually quantified by
standard error metrics such as squared error, whereas we consider any additive
error metric. Under the assumption that relaxed belief propagation (BP) can
compute the posterior in the large system limit, we propose a simple, fast, and
highly general algorithm that reconstructs the signal by minimizing the
user-defined error metric. For two example metrics, we provide performance
analysis and convincing numerical results. Finally, our algorithm can be
adjusted to minimize the $\ell_\infty$ error, which is not additive.
Interestingly, $\ell_{\infty}$ minimization only requires applying a Wiener
filter to the output of relaxed BP.
|
1302.0324 | A New Constructive Method to Optimize Neural Network Architecture and
Generalization | cs.NE | In this paper, after analyzing the causes of poor generalization and
overfitting in neural networks, we treat noisy data as singular points of a
continuous function, i.e., jump discontinuity points. The continuous part can
be approximated by the simplest neural networks, which have good
generalization performance and optimal network architecture, using traditional
algorithms such as the constructive algorithm for feed-forward neural networks
with incremental training, the BP algorithm, the ELM algorithm, various other
constructive algorithms, RBF approximation and SVMs. At the same time, we
construct RBF neural networks to fit the singular points to within any given
error, and we prove that a function with jump discontinuity points can be
approximated by the simplest neural networks together with a decay RBF neural
network to within any given error, and that it can be constructively
approximated by a decay RBF neural network to within any given error. The
constructive part has no influence on the generalization of the whole machine
learning system, which optimizes the network architecture and generalization
performance and reduces overfitting by avoiding fitting the noisy data.
|
1302.0328 | Bayesian Entropy Estimation for Countable Discrete Distributions | cs.IT math.IT | We consider the problem of estimating Shannon's entropy $H$ from discrete
data, in cases where the number of possible symbols is unknown or even
countably infinite. The Pitman-Yor process, a generalization of the Dirichlet
process, provides a tractable prior distribution over the space of countably
infinite discrete distributions, and has found major applications in Bayesian
non-parametric statistics and machine learning. Here we show that it also
provides a natural family of priors for Bayesian entropy estimation, due to the
fact that moments of the induced posterior distribution over $H$ can be
computed analytically. We derive formulas for the posterior mean (Bayes' least
squares estimate) and variance under Dirichlet and Pitman-Yor process priors.
Moreover, we show that a fixed Dirichlet or Pitman-Yor process prior implies a
narrow prior distribution over $H$, meaning the prior strongly determines the
entropy estimate in the under-sampled regime. We derive a family of continuous
mixing measures such that the resulting mixture of Pitman-Yor processes
produces an approximately flat prior over $H$. We show that the resulting
Pitman-Yor Mixture (PYM) entropy estimator is consistent for a large class of
distributions. We explore the theoretical properties of the resulting
estimator, and show that it performs well both in simulation and in application
to real data.
|
1302.0334 | Class Algebra for Ontology Reasoning | cs.AI | Class algebra provides a natural framework for sharing of ISA hierarchies
between users that may be unaware of each other's definitions. This permits
data from relational databases, object-oriented databases, and tagged XML
documents to be unioned into one distributed ontology, sharable by all users
without the need for prior negotiation or the development of a "standard"
ontology for each field. Moreover, class algebra produces a functional
correspondence between a class's class algebraic definition (i.e. its "intent")
and the set of all instances which satisfy the expression (i.e. its "extent").
The framework thus provides assistance in quickly locating examples and
counterexamples of various definitions. This kind of information is very
valuable when developing models of the real world, and serves as an invaluable
tool assisting in the proof of theorems concerning these class algebra
expressions. Finally, the relative frequencies of objects in the ISA hierarchy
can produce a useful Boolean algebra of probabilities. The probabilities can be
used by traditional information-theoretic classification methodologies to
obtain optimal ways of classifying objects in the database.
|
1302.0336 | Sharp Inequalities for $f$-divergences | math.ST cs.IT math.IT math.OC math.PR stat.ML stat.TH | $f$-divergences are a general class of divergences between probability
measures which include as special cases many commonly used divergences in
probability, mathematical statistics and information theory such as
Kullback-Leibler divergence, chi-squared divergence, squared Hellinger
distance, total variation distance etc. In this paper, we study the problem of
maximizing or minimizing an $f$-divergence between two probability measures
subject to a finite number of constraints on other $f$-divergences. We show
that these infinite-dimensional optimization problems can all be reduced to
optimization problems over small finite dimensional spaces which are tractable.
Our results lead to a comprehensive and unified treatment of the problem of
obtaining sharp inequalities between $f$-divergences. We demonstrate that many
of the existing results on inequalities between $f$-divergences can be obtained
as special cases of our results and we also improve on some existing non-sharp
inequalities.
|
1302.0337 | Perancangan basisdata sistem informasi penggajian | cs.DB | The purpose of this research is to design the database schema of a payroll
information system at XYZ University. Using database design methods
(conceptual schema, logical schema, & physical schema) the writer designs the
payroll information system. The physical schema is compatible with the Borland
Delphi Database Engine schema to support the implementation of the information
system. After 3 (three) steps we get 7 (seven) tables and 6 (six) forms. By
using this schema, the system can produce several reports quickly, accurately,
efficiently, and effectively.
|
1302.0347 | An Efficient CCA2-Secure Variant of the McEliece Cryptosystem in the
Standard Model | cs.CR cs.IT math.IT | Recently, a few chosen-ciphertext secure (CCA2-secure) variants of the
McEliece public-key encryption (PKE) scheme in the standard model were
introduced. All the proposed schemes are based on the encryption repetition
paradigm and use a general transformation from a CPA-secure scheme to a
CCA2-secure one. Therefore, the resulting encryption scheme needs
\textit{separate} encryptions and has a \textit{large} key size compared to
the original scheme, which worsens the public key size problem of code-based
PKE schemes. Thus, the proposed schemes are not sufficiently efficient to be
used in practice.
In this work, we propose an efficient CCA2-secure variant of the McEliece PKE
scheme in the standard model. The main novelty is that, unlike previous
approaches, our approach is a generic conversion and can be applied to
\textit{any} one-way trapdoor function (OW-TDF), the lowest-level security
notion in the context of public-key cryptography, resolving a fundamental and
central problem that has remained unsolved for the past two decades.
|
1302.0351 | New Dimension Value Introduction for In-Memory What-If Analysis | cs.DB | OLAP systems operate on historical data and provide answers to analysts'
queries. Recent in-memory implementations provide significant performance
improvement for real time ad-hoc analysis. Philosophy and techniques of what-if
analysis on data warehouse and in-memory data store based OLAP systems have
been covered in great detail before but exploration of new dimension value
(attribute) introduction has been limited in the context of what-if analysis.
We extend the approach of Andrey Balmin et al. of using a select-modify
operator on a data graph to introduce new values for dimensions and measures
in a
read-only in-memory data store as scenarios. Our system constructs scenarios
without materializing the rows and stores the row information as queries. The
rows associated with the scenarios are constructed as and when required by an
ad-hoc query.
|
1302.0386 | Fast Damage Recovery in Robotics with the T-Resilience Algorithm | cs.RO cs.AI cs.LG | Damage recovery is critical for autonomous robots that need to operate for a
long time without assistance. Most current methods are complex and costly
because they require anticipating each potential damage in order to have a
contingency plan ready. As an alternative, we introduce the T-Resilience
algorithm, a new algorithm that allows robots to quickly and autonomously
discover compensatory behaviors in unanticipated situations. This algorithm
equips the robot with a self-model and discovers new behaviors by learning to
avoid those that perform differently in the self-model and in reality. Our
algorithm thus does not identify the damaged parts but it implicitly searches
for efficient behaviors that do not use them. We evaluate the T-Resilience
algorithm on a hexapod robot that needs to adapt to leg removal, broken legs
and motor failures; we compare it to stochastic local search, policy gradient
and the self-modeling algorithm proposed by Bongard et al. The behavior of the
robot is assessed on-board thanks to an RGB-D sensor and a SLAM algorithm. Using
only 25 tests on the robot and an overall running time of 20 minutes,
T-Resilience consistently leads to substantially better results than the other
approaches.
|
1302.0393 | Lambek vs. Lambek: Functorial Vector Space Semantics and String Diagrams
for Lambek Calculus | math.LO cs.CL math.CT | The Distributional Compositional Categorical (DisCoCat) model is a
mathematical framework that provides compositional semantics for meanings of
natural language sentences. It consists of a computational procedure for
constructing meanings of sentences, given their grammatical structure in terms
of compositional type-logic, and given the empirically derived meanings of
their words. For the particular case that the meaning of words is modelled
within a distributional vector space model, its experimental predictions,
derived from real large scale data, have outperformed other empirically
validated methods that could build vectors for a full sentence. This success
can be attributed to a conceptually motivated mathematical underpinning, by
integrating qualitative compositional type-logic and quantitative modelling of
meaning within a category-theoretic mathematical framework.
The type-logic used in the DisCoCat model is Lambek's pregroup grammar.
Pregroup types form a posetal compact closed category, which can be passed, in
a functorial manner, on to the compact closed structure of vector spaces,
linear maps and tensor product. The diagrammatic versions of the equational
reasoning in compact closed categories can be interpreted as the flow of word
meanings within sentences. Pregroups simplify Lambek's previous type-logic, the
Lambek calculus, which has been extensively used to formalise and reason about
various linguistic phenomena. The apparent reliance of the DisCoCat on
pregroups has been seen as a shortcoming. This paper addresses this concern, by
pointing out that one may as well realise a functorial passage from the
original type-logic of Lambek, a monoidal bi-closed category, to vector spaces,
or to any other model of meaning organised within a monoidal bi-closed
category. The corresponding string diagram calculus, due to Baez and Stay, now
depicts the flow of word meanings.
|
1302.0394 | The weight distributions of some cyclic codes with three or four
nonzeros over F3 | cs.IT math.IT | Because of efficient encoding and decoding algorithms, cyclic codes are an
important family of linear block codes, and have applications in communication
and storage systems. However, their weight distributions are known only for a
few cases, mainly for codes with one or two nonzeros. In this paper,
the weight distributions of two classes of cyclic codes with three or four
nonzeros are determined.
|
1302.0398 | Towards efficient decoding of classical-quantum polar codes | quant-ph cs.IT math.IT | Known strategies for sending bits at the capacity rate over a general channel
with classical input and quantum output (a cq channel) require the decoder to
implement impractically complicated collective measurements. Here, we show that
a fully collective strategy is not necessary in order to recover all of the
information bits. In fact, when coding for a large number N of uses of a cq
channel W, N I(W_acc) of the bits can be recovered by a non-collective strategy
which amounts to coherent quantum processing of the results of product
measurements, where I(W_acc) is the accessible information of the channel W. In
order to decode the other N (I(W) - I(W_acc)) bits, where I(W) is the Holevo
rate, our conclusion is that the receiver should employ collective
measurements. We also present two other results: 1) collective Fuchs-Caves
measurements (quantum likelihood ratio measurements) can be used at the
receiver to achieve the Holevo rate and 2) we give an explicit form of the
Helstrom measurements used in small-size polar codes. The main approach used to
demonstrate these results is a quantum extension of Arikan's polar codes.
|
1302.0406 | Generalization Guarantees for a Binary Classification Framework for
Two-Stage Multiple Kernel Learning | cs.LG stat.ML | We present generalization bounds for the TS-MKL framework for two-stage
multiple kernel learning. We also present bounds for sparse kernel learning
formulations within the TS-MKL framework.
|
1302.0413 | Learning to Rank for Expert Search in Digital Libraries of Academic
Publications | cs.IR cs.DL | The task of expert finding has been getting increasing attention in
information retrieval literature. However, the current state-of-the-art is
still lacking in principled approaches for combining different sources of
evidence in an optimal way. This paper explores the usage of learning to rank
methods as a principled approach for combining multiple estimators of
expertise, derived from the textual contents, from the graph-structure with the
citation patterns for the community of experts, and from profile information
about the experts. Experiments made over a dataset of academic publications,
for the area of Computer Science, attest to the adequacy of the proposed
approaches.
|
1302.0420 | Benchmarking some Portuguese S&T system research units: 2nd Edition | cs.DL cs.IR | The increasing use of productivity and impact metrics for evaluation and
comparison, not only of individual researchers but also of institutions,
universities and even countries, has prompted the development of bibliometrics.
Currently, metrics are becoming widely accepted as an easy and balanced way to
assist the peer review and evaluation of scientists and/or research units,
provided they have adequate precision and recall.
This paper presents a benchmarking study of a selected list of representative
Portuguese research units, based on a fairly complete set of parameters:
bibliometric parameters, number of competitive projects and number of PhDs
produced. The study aimed at collecting productivity and impact data from the
selected research units in comparable conditions, i.e., using objective metrics
based on public information, retrievable on-line and/or from official sources
and thus verifiable and repeatable. The study has thus focused on the activity
of the 2003-06 period, where such data was available from the latest official
evaluation.
The main advantage of our study was the application of automatic tools,
achieving relevant results at a reduced cost. Moreover, the results over the
selected units suggest that this kind of analysis will be very useful to
benchmark scientific productivity and impact, and assist peer review.
|
1302.0422 | Set-Membership Constrained Conjugate Gradient Beamforming Algorithms | cs.IT math.IT | In this work a constrained adaptive filtering strategy based on conjugate
gradient (CG) and set-membership (SM) techniques is presented for adaptive
beamforming. A constraint on the magnitude of the array output is imposed to
derive an adaptive algorithm that performs data-selective updates when
calculating the beamformer's parameters. We consider a linearly constrained
minimum variance (LCMV) optimization problem with the bounded constraint based
on this strategy and propose a CG type algorithm for implementation. The
proposed algorithm has data-selective updates, a variable forgetting factor and
performs one iteration per update to reduce the computational complexity. The
updated parameters construct a space of feasible solutions that enforce the
constraints. We also introduce two time-varying bounding schemes to measure the
quality of the parameters that could be included in the parameter space. A
comprehensive complexity and performance analysis of the proposed and
existing algorithms is provided. Simulations are performed to show the
enhanced convergence and tracking performance of the proposed algorithm as
compared to existing techniques.
|
1302.0435 | Parallel D2-Clustering: Large-Scale Clustering of Discrete Distributions | cs.LG cs.CV | The discrete distribution clustering algorithm, namely D2-clustering, has
demonstrated its usefulness in image classification and annotation where each
object is represented by a bag of weighted vectors. The high computational
complexity of the algorithm, however, limits its applications to large-scale
problems. We present a parallel D2-clustering algorithm with substantially
improved scalability. A hierarchical structure for parallel computing is
devised to achieve a balance between the individual-node computation and the
integration process of the algorithm. Additionally, it is shown that even with
a single CPU, the hierarchical structure results in significant speed-up.
Experiments on real-world large-scale image data, YouTube video data, and
protein sequence data demonstrate the efficiency and wide applicability of the
parallel D2-clustering algorithm. The loss in clustering accuracy is minor in
comparison with the original sequential algorithm.
|
1302.0439 | Correcting Camera Shake by Incremental Sparse Approximation | cs.CV cs.GR | The problem of deblurring an image when the blur kernel is unknown remains
challenging after decades of work. Recently there has been rapid progress on
correcting irregular blur patterns caused by camera shake, but there is still
much room for improvement. We propose a new blind deconvolution method using
incremental sparse edge approximation to recover images blurred by camera
shake. We estimate the blur kernel first from only the strongest edges in the
image, then gradually refine this estimate by allowing for weaker and weaker
edges. Our method competes with the benchmark deblurring performance of the
state-of-the-art while being significantly faster and easier to generalize.
|
1302.0446 | Sparse Camera Network for Visual Surveillance -- A Comprehensive Survey | cs.CV | Technological advances in sensor manufacture, communication, and computing
are stimulating the development of new applications that are transforming
traditional vision systems into pervasive intelligent camera networks. The
analysis of visual cues in multi-camera networks enables a wide range of
applications, from smart home and office automation to large area surveillance
and traffic surveillance. While dense camera networks - in which most cameras
have large overlapping fields of view - are well studied, we are mainly
concerned with sparse camera networks. A sparse camera network undertakes large
area surveillance using as few cameras as possible, and most cameras have
non-overlapping fields of view with one another. The task is challenging due to
the lack of knowledge about the topological structure of the network,
variations in the appearance and motion of specific tracking targets in
different views, and the difficulties of understanding composite events in the
network. In this review paper, we present a comprehensive survey of recent
research results to address the problems of intra-camera tracking, topological
structure learning, target appearance modeling, and global activity
understanding in sparse camera networks. A number of current open research
issues are discussed.
|
1302.0449 | Design of optimal sparse interconnection graphs for synchronization of
oscillator networks | math.OC cs.SY | We study the optimal design of a conductance network as a means for
synchronizing a given set of oscillators. Synchronization is achieved when all
oscillator voltages reach consensus, and performance is quantified by the
mean-square deviation from the consensus value. We formulate optimization
problems that address the trade-off between synchronization performance and the
number and strength of oscillator couplings. We promote the sparsity of the
coupling network by penalizing the number of interconnection links. For
identical oscillators, we establish convexity of the optimization problem and
demonstrate that the design problem can be formulated as a semidefinite
program. Finally, for special classes of oscillator networks we derive explicit
analytical expressions for the optimal conductance values.
|
1302.0450 | Algorithms for leader selection in stochastically forced consensus
networks | math.OC cs.RO cs.SY | We are interested in assigning a pre-specified number of nodes as leaders in
order to minimize the mean-square deviation from consensus in stochastically
forced networks. This problem arises in several applications including control
of vehicular formations and localization in sensor networks. For networks with
leaders subject to noise, we show that the Boolean constraints (a node is
either a leader or it is not) are the only source of nonconvexity. By relaxing
these constraints to their convex hull we obtain a lower bound on the global
optimal value. We also use a simple but efficient greedy algorithm to identify
leaders and to compute an upper bound. For networks with leaders that perfectly
follow their desired trajectories, we identify an additional source of
nonconvexity in the form of a rank constraint. Removal of the rank constraint
and relaxation of the Boolean constraints yields a semidefinite program for
which we develop a customized algorithm well-suited for large networks. Several
examples ranging from regular lattices to random graphs are provided to
illustrate the effectiveness of the developed algorithms.
|
1302.0459 | On the performance of 1-level LDPC lattices | cs.IT math.IT | The low-density parity-check (LDPC) lattices perform very well in high
dimensions under a generalized min-sum iterative decoding algorithm. In this work
we focus on 1-level LDPC lattices. We show that these lattices are the same as
lattices constructed based on Construction A and low-density lattice-code
(LDLC) lattices. In spite of having a slightly lower coding gain, 1-level
regular LDPC lattices have remarkable performance. The low-complexity nature
of the decoding algorithm for these types of lattices allows us to run it for
higher dimensions easily. Our simulation results show that a 1-level LDPC
lattice of size 10000 can work as close as 1.1 dB at a normalized error
probability (NEP) of $10^{-5}$. This can also be reported as 0.6 dB at a
symbol error rate (SER) of $10^{-5}$ with the sum-product algorithm.
|
1302.0463 | Modeling citation networks based on vigorousness and dormancy | physics.soc-ph cond-mat.stat-mech cs.DL cs.SI | In citation networks, the activity of papers usually decreases with age and
dormant papers may be discovered and become fashionable again. To model this
phenomenon, a competition mechanism is suggested which incorporates two
factors: vigorousness and dormancy. Based on this idea, a citation network
model is proposed, in which a node has two discrete states: vigorous and
dormant. Vigorous nodes can be deactivated and dormant nodes may be activated
and become vigorous. The evolution of the network couples addition of new nodes
and state transitions of old ones. Both analytical calculation and numerical
simulation show that the degree distribution of nodes in generated networks
displays a clearly right-skewed behavior. In particular, scale-free networks
are obtained when the deactivated vertex is target-selected, and exponential
networks are realized in the randomly selected case. Moreover, measurements of
four real-world citation networks achieve good agreement with the stochastic
model.
|
1302.0487 | On the dynamic compressibility of sets | cs.CC cs.IT math.IT | We define a new notion of compressibility of a set of numbers through the
dynamics of a polynomial function. We provide approaches to solve the problem
by reducing it to the multi-criteria traveling salesman problem through a
series of transformations. We then establish computational complexity results
by giving some NP-completeness proofs. We also discuss a notion of
$\epsilon$ K-compressibility of a set, with regard to lossy compression, and
deduce a necessary condition for a given set to be $\epsilon$
K-compressible. Finally, we conclude by providing a list of open problems
whose solutions could extend the applicability of our technique.
|
1302.0488 | A multi-lane traffic simulation model via continuous cellular automata | cs.MA nlin.CG | Traffic models based on cellular automata have high computational efficiency
because of the simplicity of their (admittedly unrealistic) description of
vehicular behavior and the suitability of cellular automata for parallel
implementation.
On the other hand, the other microscopic traffic models such as car-following
models are computationally more expensive, but they have more realistic driver
behaviors and detailed vehicle characteristics. We propose a new class between
these two categories, defining a traffic model based on continuous cellular
automata where we combine the efficiency of cellular automata models with the
accuracy of the other microscopic models. More precisely, we introduce a
stochastic cellular automata traffic model in which the space is not
coarse-grained but continuous. The continuity also allows us to embed a
multi-agent fuzzy system proposed to handle uncertainties in decision making on
road traffic. Therefore, we simulate different driver behaviors and study the
effect of various compositions of vehicles within the traffic stream from the
macroscopic point of view. The experimental results show that our model is able
to reproduce the typical traffic flow phenomena showing a variety of effects
due to the heterogeneity of traffic.
|
1302.0490 | Improved Bounds on RIP for Generalized Orthogonal Matching Pursuit | cs.IT math.IT | Generalized Orthogonal Matching Pursuit (gOMP) is a natural extension of the OMP
algorithm in which, unlike OMP, $N (\geq 1)$ atoms may be selected in each iteration.
In this paper, we demonstrate that gOMP can successfully reconstruct a
$K$-sparse signal from a compressed measurement $ {\bf y}={\bf \Phi x}$ by
the $K$-th iteration if the sensing matrix ${\bf \Phi}$ satisfies the restricted
isometry property (RIP) of order $NK$ where $\delta_{NK} < \frac
{\sqrt{N}}{\sqrt{K}+2\sqrt{N}}$. Our bound offers an improvement over the very
recent result shown in \cite{wang_2012b}. Moreover, we present another bound
for gOMP of order $NK+1$ with $\delta_{NK+1} < \frac
{\sqrt{N}}{\sqrt{K}+\sqrt{N}}$ which exactly relates to the near optimal bound
of $\delta_{K+1} < \frac {1}{\sqrt{K}+1}$ for OMP (N=1) as shown in
\cite{wang_2012a}.
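As a concrete illustration of the selection rule the abstract describes, here is a minimal gOMP sketch in Python/NumPy (our own illustrative implementation; the variable names and the fixed iteration count are assumptions, not taken from the paper):

```python
import numpy as np

def gomp(y, Phi, K, N=2):
    """Illustrative gOMP sketch: select N atoms per iteration, then
    re-estimate the coefficients by least squares on the support."""
    support = []
    r = y.copy()
    x_hat = np.zeros(0)
    for _ in range(K):  # at most K iterations, matching the abstract's bound
        corr = np.abs(Phi.T @ r)
        if support:
            corr[support] = -np.inf  # do not re-select chosen atoms
        support.extend(np.argsort(corr)[-N:].tolist())
        # least-squares fit on the enlarged support
        x_hat, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_hat
    x = np.zeros(Phi.shape[1])
    x[support] = x_hat
    return x
```

With N=1 this reduces to plain OMP, which is exactly the correspondence the abstract's bounds exploit.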
|
1302.0494 | Local Structure Matching Driven by Joint-Saliency-Structure Adaptive
Kernel Regression | cs.CV | For nonrigid image registration, matching the particular structures (or the
outliers) that have missing correspondence and/or local large deformations, can
be more difficult than matching the common structures with small deformations
in the two images. Most existing works depend heavily on the outlier
segmentation to remove the outlier effect in the registration. Moreover, these
works do not handle simultaneously the missing correspondences and local large
deformations. In this paper, we define nonrigid image registration as a
local adaptive kernel regression which locally reconstructs the moving image's
dense deformation vectors from the sparse deformation vectors in the
multi-resolution block matching. The kernel function of the kernel regression
adapts its shape and orientation to the reference image's structure to gather
more deformation vector samples of the same structure for the iterative
regression computation, whereby the moving image's local deformations could be
compliant with the reference image's local structures. To estimate the local
deformations around the outliers, we use joint saliency map that highlights the
corresponding saliency structures (called Joint Saliency Structures, JSSs) in
the two images to guide the dense deformation reconstruction by emphasizing
those JSSs' sparse deformation vectors in the kernel regression. The
experimental results demonstrate that by using local JSS adaptive kernel
regression, the proposed method achieves almost the best performance in
alignment of all the challenging image pairs with outlier structures compared
with five other state-of-the-art nonrigid registration algorithms.
|
1302.0522 | Minimum Distance Distribution of Irregular Generalized LDPC Code
Ensembles | cs.IT math.IT | In this paper, the minimum distance distribution of irregular generalized
LDPC (GLDPC) code ensembles is investigated. Two classes of GLDPC code
ensembles are analyzed; in one case, the Tanner graph is regular from the
variable node perspective, and in the other case the Tanner graph is completely
unstructured and irregular. In particular, for the former ensemble class we
determine exactly which ensembles have minimum distance growing linearly with
the block length with probability approaching unity with increasing block
length. This work extends previous results concerning LDPC and regular GLDPC
codes to the case where a hybrid mixture of check node types is used.
|
1302.0533 | Low-Complexity Reduced-Rank Beamforming Algorithms | cs.IT math.IT | A reduced-rank framework with set-membership filtering (SMF) techniques is
presented for adaptive beamforming problems encountered in radar systems. We
develop and analyze stochastic gradient (SG) and recursive least squares
(RLS)-type adaptive algorithms, which achieve an enhanced convergence and
tracking performance with low computational cost as compared to existing
techniques. Simulations show that the proposed algorithms have a superior
performance to prior methods, while the complexity is lower.
|
1302.0540 | A game-theoretic framework for classifier ensembles using weighted
majority voting with local accuracy estimates | cs.LG | In this paper, a novel approach for the optimal combination of binary
classifiers is proposed. The classifier combination problem is approached from
a Game Theory perspective. The proposed framework of adapted weighted majority
rules (WMR) is tested against common rank-based, Bayesian and simple majority
models, as well as two soft-output averaging rules. Experiments with ensembles
of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and
weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that
this new adaptive WMR model, employing local accuracy estimators and
analytically computed optimal weights, outperforms all the other simple
combination rules.
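The weighted majority voting that the adapted WMR framework builds on can be sketched as follows (a generic log-odds-weighted vote using the classical optimal weights for independent classifiers, assumed here for illustration; this is not the paper's game-theoretic derivation):

```python
import numpy as np

def weighted_majority_vote(votes, accuracies):
    """Combine binary votes in {-1, +1} with weights log(a / (1 - a)),
    the classical optimal weight for an independent classifier with
    accuracy a. A local accuracy estimate per classifier would be
    plugged in as `accuracies`."""
    a = np.asarray(accuracies, dtype=float)
    w = np.log(a / (1.0 - a))
    # votes has shape (n_classifiers, n_samples)
    score = w @ np.asarray(votes)
    return np.where(score >= 0, 1, -1)
```

Note how a highly accurate classifier (a close to 1) can outvote several mediocre ones, which is the behavior the weighting is designed to produce.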
|
1302.0558 | Evolutionary dynamics of time-resolved social interactions | physics.soc-ph cs.SI | Cooperation among unrelated individuals is frequently observed in social
groups when their members combine efforts and resources to obtain a shared
benefit that is unachievable by an individual alone. However, understanding why
cooperation arises despite the natural tendency of individuals towards selfish
behavior is still an open problem and represents one of the most fascinating
challenges in evolutionary dynamics. Recently, the structural characterization
of the networks in which social interactions take place has shed some light on
the mechanisms by which cooperative behavior emerges and eventually overcomes
the natural temptation to defect. In particular, it has been found that the
heterogeneity in the number of social ties and the presence of tightly knit
communities lead to a significant increase in cooperation as compared with the
unstructured and homogeneous connection patterns considered in classical
evolutionary dynamics. Here, we investigate the role of social-ties dynamics
for the emergence of cooperation in a family of social dilemmas. Social
interactions are in fact intrinsically dynamic, fluctuating, and intermittent
over time, and they can be represented by time-varying networks. By considering
two experimental data sets of human interactions with detailed time
information, we show that the temporal dynamics of social ties has a dramatic
impact on the evolution of cooperation: the dynamics of pairwise interactions
favors selfish behavior.
|
1302.0561 | Breaking the coherence barrier: A new theory for compressed sensing | cs.IT math.IT math.NA | This paper provides an extension of compressed sensing which bridges a
substantial gap between existing theory and its current use in real-world
applications. It introduces a mathematical framework that generalizes the three
standard pillars of compressed sensing - namely, sparsity, incoherence and
uniform random subsampling - to three new concepts: asymptotic sparsity,
asymptotic incoherence and multilevel random sampling. The new theorems show
that compressed sensing is still possible, and offers several advantages, under
these substantially relaxed conditions. The importance of this is threefold.
First, inverse problems to which compressed sensing is currently applied are
typically coherent. The new theory provides the first comprehensive
mathematical explanation for a range of empirical usages of compressed sensing
in real-world applications, such as medical imaging, microscopy, spectroscopy
and others. Second, in showing that compressed sensing does not require
incoherence, but instead that asymptotic incoherence is sufficient, the new
theory offers markedly greater flexibility in the design of sensing mechanisms.
Third, by using asymptotic incoherence and multi-level sampling to exploit not
just sparsity, but also structure, i.e. asymptotic sparsity, the new theory
shows that substantially improved reconstructions can be obtained from fewer
measurements.
|
1302.0569 | A Class of Three-Weight Cyclic Codes | cs.IT math.IT | Cyclic codes are a subclass of linear codes and have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. In this paper, a class of
three-weight cyclic codes over $\mathrm{GF}(p)$ whose duals have two zeros is
presented, where $p$ is an odd prime. The weight distribution of this class of
cyclic codes is settled. Some of the cyclic codes are optimal. The duals of a
subclass of the cyclic codes are also studied and proved to be optimal.
|
1302.0579 | A Universal Quantum Circuit Scheme For Finding Complex Eigenvalues | quant-ph cs.IT math.IT | We present a general quantum circuit design for finding eigenvalues of
non-unitary matrices on quantum computers using the iterative phase estimation
algorithm. In particular, we show how the method can be used for the simulation
of resonance states for quantum systems.
|
1302.0581 | SMML estimators for exponential families with continuous sufficient
statistics | cs.IT math.IT math.ST stat.ML stat.TH | The minimum message length principle is an information theoretic criterion
that links data compression with statistical inference. This paper studies the
strict minimum message length (SMML) estimator for $d$-dimensional exponential
families with continuous sufficient statistics, for all $d \ge 1$. The
partition of an SMML estimator is shown to consist of convex polytopes (i.e.
convex polygons when $d=2$) which can be described explicitly in terms of the
assertions and coding probabilities. While this result is known, we give a new
proof based on the calculus of variations, and this approach gives some
interesting new inequalities for SMML estimators. We also use this result to
construct an SMML estimator for a $2$-dimensional normal random variable with
known variance and a normal prior on its mean.
|
1302.0585 | Wireless Information and Power Transfer: A Dynamic Power Splitting
Approach | cs.IT math.IT | Scavenging energy from ambient radio signals, namely wireless energy
harvesting (WEH), has recently drawn significant attention. In this paper, we
consider a point-to-point wireless link over the flat-fading channel, where the
receiver replenishes energy via WEH from the signals sent by the transmitter.
We consider a SISO (single-input single-output) system where the single-antenna
receiver cannot decode information and harvest energy independently from the
same signal received. Under this practical constraint, we propose a dynamic
power splitting (DPS) scheme, where the received signal is split into two
streams with adjustable power levels for information decoding and energy
harvesting separately based on the instantaneous channel condition that is
assumed to be known at the receiver. We derive the optimal power splitting rule
at the receiver to achieve various trade-offs between the maximum ergodic
capacity for information transfer and the maximum average harvested energy for
power transfer. Moreover, for the case when the channel state information is
also known at the transmitter, we investigate the joint optimization of
transmitter power control and receiver power splitting. Finally, we extend the
result for DPS to the SIMO (single-input multiple-output) system and
investigate a low-complexity power splitting scheme termed antenna switching.
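A toy sketch of the power-splitting idea for a single channel realization (our own simplified model with assumed parameter names; the paper derives the optimal splitting rule per channel state, which is not reproduced here):

```python
import numpy as np

def rate_energy_tradeoff(h, P, sigma2, n_points=101):
    """Sweep the splitting ratio rho: a fraction rho of the received
    power feeds the information decoder, and 1 - rho feeds the energy
    harvester, tracing out a rate-energy trade-off curve."""
    rho = np.linspace(0.0, 1.0, n_points)
    rate = np.log2(1.0 + rho * P * h / sigma2)  # bits/s/Hz to the decoder
    energy = (1.0 - rho) * P * h                # power to the harvester
    return rho, rate, energy
```

Sweeping rho makes the trade-off explicit: rate is monotonically increasing and harvested energy monotonically decreasing in rho, so every operating point on the curve sacrifices one objective for the other.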
|
1302.0614 | Outage Capacity for the Optical MIMO Channel | cs.IT math.IT | MIMO processing techniques in fiber optical communications have been proposed
as a promising approach to meet increasing demand for information throughput.
In this context, the multiple channels correspond to the multiple modes and/or
multiple cores in the fiber. In this paper we characterize the distribution of
the mutual information with Gaussian input in a simple channel model for this
system. Assuming significant cross talk between cores, negligible
backscattering and near-lossless propagation in the fiber, we model the
transmission channel as a random complex unitary matrix. The loss in the
transmission may be parameterized by a number of unutilized channels in the
fiber. We analyze the system in a dual fashion. First, we evaluate a
closed-form expression for the outage probability, which is handy for small
matrices. We also apply the asymptotic approach, in particular the Coulomb gas
method from statistical mechanics, to obtain closed-form results for the
ergodic mutual information, its variance as well as the outage probability for
Gaussian input in the limit of large number of cores/modes. By comparing our
analytic results to simulations, we see that, although this method is
nominally valid for a large number of modes, it is quite accurate even for a
small to modest number of channels.
|
1302.0634 | Cross-Gramian-Based Combined State and Parameter Reduction for
Large-Scale Control Systems | math.OC cs.SY math.DS | This work introduces the empirical cross gramian for
multiple-input-multiple-output systems. The cross gramian is a tool for
reducing the state space of control systems, which conjoins controllability and
observability information into a single matrix and does not require balancing.
Its empirical gramian variant extends the application of the cross gramian to
nonlinear systems. Furthermore, for parametrized systems, the empirical
gramians can also be utilized for sensitivity analysis or parameter
identification and thus for parameter reduction. This work also introduces the
empirical joint gramian, which is derived from the empirical cross gramian. The
joint gramian allows not only a reduction of the parameter space but also a
combined reduction of the state and parameter spaces, which is tested on a linear and a
nonlinear control system. Controllability- and observability-based combined
reduction methods are also presented, which are benchmarked against the joint
gramian.
|
1302.0635 | Projection Design For Statistical Compressive Sensing: A Tight Frame
Based Approach | cs.IT math.IT | In this paper, we develop a framework to design sensing matrices for
compressive sensing applications that lead to good mean squared error (MSE)
performance subject to sensing cost constraints. We capitalize on the MSE of
the oracle estimator, whose performance is known to act as a benchmark for
standard sparse recovery algorithms. Specifically, we use the fact that a
Parseval tight frame is the closest design, in the Frobenius norm sense, to
the solution of a convex relaxation of the problem of minimizing the MSE of
the oracle estimator with respect to the equivalent sensing matrix, subject
to sensing energy constraints. Based on this
result, we then propose two sensing matrix designs that exhibit two key
properties: i) the designs are closed form rather than iterative; ii) the
designs exhibit superior performance in relation to other designs in the
literature, which is revealed by our numerical investigation in various
scenarios with different sparse recovery algorithms including basis pursuit
de-noise (BPDN), the Dantzig selector and orthogonal matching pursuit (OMP).
|
1302.0677 | Two types of well followed users in the followership networks of Twitter | cs.SI physics.soc-ph | In the Twitter blogosphere, the number of followers is probably the most
basic and succinct quantity for measuring popularity of users. However, the
number of followers can be manipulated in various ways; one can even buy
followers. Therefore, alternative popularity measures for Twitter users on the
basis of, for example, users' tweets and retweets, have been developed. In the
present work, we take a purely network approach to this fundamental question.
First, we find that two relatively distinct types of users possessing a large
number of followers exist, in particular for Japanese, Russian, and Korean
users among the seven language groups that we examined. A first type of user
follows a small number of other users. A second type of user follows
approximately the same number of other users as the number of follows that the
user receives. Then, we compare local (i.e., egocentric) followership networks
around the two types of users with many followers. We show that the second
type, which presumably comprises uninfluential users despite their large
number of followers, is characterized by high link reciprocity, a large number
of friends (i.e., those whom a user follows) for the followers, the followers'
high link reciprocity, a large clustering coefficient, a large fraction of the
second type of users among the followers, and a small PageRank. Our
network-based results support the view that the number of followers alone is a
misleading measure of a user's popularity. We propose that the number of
friends, which is simple to
measure, also helps us to assess the popularity of Twitter users.
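Link reciprocity, one of the quantities used above to characterize the two user types, can be computed directly from a directed edge list; a minimal sketch (our own helper, not code from the paper):

```python
def link_reciprocity(edges):
    """Fraction of directed edges whose reverse edge is also present
    (self-loops excluded for simplicity)."""
    es = {(u, v) for u, v in edges if u != v}
    return sum((v, u) in es for u, v in es) / len(es)
```

A value near 1 corresponds to the second, mutually-following user type; influential broadcasters of the first type sit near 0.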
|
1302.0689 | Multi-scale Visual Attention & Saliency Modelling with Decision Theory | cs.CV | Bottom-up saliency, an early stage of human visual processing, behaves like a
binary classification between an interest hypothesis and a null hypothesis.
Its discriminant power, the mutual information between image features and the
class distribution, is closely related to the saliency value by the well-known
centre-surround theory. As classification accuracy depends strongly on window
size, the discriminant saliency (power) varies with the sampling scale.
Estimating discriminant power in a multi-scale framework requires integration
with the wavelet transform and estimation of the statistical discrepancy
between two consecutive scales (centre-surround windows) by a Hidden Markov
Tree (HMT) model. Finally, multi-scale discriminant saliency (MDIS) maps are
combined by the maximum-information rule to synthesize a final saliency map.
All MDIS maps are evaluated with standard quantitative tools (NSS, LCC, AUC)
on N. Bruce's database, with eye-tracking locations as ground truth, and are
also assessed qualitatively by visual examination of individual cases. To
evaluate MDIS against the well-known AIM saliency method, simulations are
described in detail, and several interesting conclusions are drawn for
further research directions.
|
1302.0692 | Epidemiologically optimal static networks from temporal network data | physics.soc-ph cs.SI q-bio.PE | Network epidemiology's most important assumption is that the contact
structure over which infectious diseases propagate can be represented as a
static network. However, contacts are highly dynamic, changing at many time
scales. In this paper, we investigate conceptually simple methods to construct
static graphs for network epidemiology from temporal contact data. We evaluate
these methods on empirical and synthetic model data. For almost all our cases,
the network representation that captures most relevant information is a
so-called exponential-threshold network. In these, each contact contributes
with a weight decreasing exponentially with time, and there is an edge between
a pair of vertices if the weight between them exceeds a threshold. Networks of
aggregated contacts over an optimally chosen time window perform almost as
well as the exponential-threshold networks. On the other hand, networks of
accumulated contacts over the entire sampling time, and networks of concurrent
partnerships, perform worse. We discuss these observations in the context of
the temporal and topological structure of the data sets.
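A minimal sketch of the exponential-threshold construction as we read it from the abstract (parameter names tau, theta, and t_end are our own):

```python
import math
from collections import defaultdict

def exponential_threshold_network(contacts, tau, theta, t_end):
    """Each contact (t, u, v) contributes exp(-(t_end - t) / tau) to the
    pair's weight; an edge is kept when the accumulated weight exceeds
    the threshold theta."""
    weight = defaultdict(float)
    for t, u, v in contacts:
        pair = (min(u, v), max(u, v))
        weight[pair] += math.exp(-(t_end - t) / tau)
    return {pair for pair, w in weight.items() if w > theta}
```

Recent and repeated contacts thus dominate the static representation, which is why this construction preserves the epidemiologically relevant structure better than plain aggregation over the whole sampling time.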
|
1302.0710 | ThermInfo: Collecting, Retrieving, and Estimating Reliable
Thermochemical Data | cs.CE | Standard enthalpies of formation are used for assessing the efficiency and
safety of chemical processes in the chemical industry. However, the number of
compounds for which the enthalpies of formation are available is many orders of
magnitude smaller than the number of known compounds. Thermochemical data
prediction methods are therefore clearly needed. Several commercial and free
chemical databases are currently available, the NIST WebBook being the most
used free source. To overcome this problem, a cheminformatics system was
designed and built with two main objectives in mind: collecting and retrieving
critically evaluated thermochemical values, and estimating new data. In its
present version, by using cheminformatics techniques, ThermInfo allows the
retrieval of the value of a thermochemical property, such as a gas-phase
standard enthalpy of formation, by inputting, for example, the molecular
structure or the name of a compound. The same inputs can also be used to
estimate data (presently restricted to non-polycyclic hydrocarbons) by using
the Extended Laidler Bond Additivity (ELBA) method. The information system is
publicly available at http://www.therminfo.com or
http://therminfo.lasige.di.fc.ul.pt. ThermInfo's strength lies in the data
quality, availability (free access), search capabilities, and, in particular,
prediction ability, based on a user-friendly interface that accepts inputs in
several formats.
|
1302.0723 | Multi-Robot Informative Path Planning for Active Sensing of
Environmental Phenomena: A Tale of Two Algorithms | cs.LG cs.AI cs.MA cs.RO | A key problem of robotic environmental sensing and monitoring is that of
active sensing: How can a team of robots plan the most informative observation
paths to minimize the uncertainty in modeling and predicting an environmental
phenomenon? This paper presents two principled approaches to efficient
information-theoretic path planning based on entropy and mutual information
criteria for in situ active sensing of an important broad class of
widely-occurring environmental phenomena called anisotropic fields. Our
proposed algorithms are novel in addressing a trade-off between active sensing
performance and time efficiency. An important practical consequence is that our
algorithms can exploit the spatial correlation structure of Gaussian
process-based anisotropic fields to improve time efficiency while preserving
near-optimal active sensing performance. We analyze the time complexity of our
algorithms and prove analytically that they scale better than state-of-the-art
algorithms with increasing planning horizon length. We provide theoretical
guarantees on the active sensing performance of our algorithms for a class of
exploration tasks called transect sampling, which, in particular, can be
improved with longer planning time and/or lower spatial correlation along the
transect. Empirical evaluation on real-world anisotropic field data shows that
our algorithms can perform better or at least as well as the state-of-the-art
algorithms while often incurring a few orders of magnitude less computational
time, even when the field conditions are less favorable.
|
1302.0739 | Benchmarking community detection methods on social media data | cs.SI physics.soc-ph | Benchmarking the performance of community detection methods on empirical
social network data has been identified as critical for improving these
methods. In particular, while most current research focuses on detecting
communities in data that has been digitally extracted from large social media
and telecommunications services, most evaluation of this research is based on
small, hand-curated datasets. We argue that these two types of networks differ
so significantly that by evaluating algorithms solely on the former, we know
little about how well they perform on the latter. To address this problem, we
consider the difficulties that arise in constructing benchmarks based on
digitally extracted network data, and propose a task-based strategy which we
feel addresses these difficulties. To demonstrate that our scheme is effective,
we use it to carry out a substantial benchmark based on Facebook data. The
benchmark reveals that some of the most popular algorithms fail to detect
fine-grained community structure.
|
1302.0744 | Explicit MBR All-Symbol Locality Codes | cs.IT math.IT | Node failures are inevitable in distributed storage systems (DSS). To enable
efficient repair when faced with such failures, two main techniques are known:
Regenerating codes, i.e., codes that minimize the total repair bandwidth; and
codes with locality, which minimize the number of nodes participating in the
repair process. This paper focuses on regenerating codes with locality, using
pre-coding based on Gabidulin codes, and presents constructions that utilize
minimum bandwidth regenerating (MBR) local codes. The constructions achieve
maximum resilience (i.e., optimal minimum distance) and have maximum capacity
(i.e., maximum rate). Finally, the same pre-coding mechanism can be combined
with a subclass of fractional-repetition codes to enable maximum resilience and
repair-by-transfer simultaneously.
|
1302.0749 | Multi-Way Information Exchange Over Completely-Connected Interference
Networks with a Multi-Antenna Relay | cs.IT math.IT | This paper considers a fully-connected interference network with a relay in
which multiple users equipped with a single antenna want to exchange multiple
unicast messages with other users in the network by sharing the relay equipped
with multiple antennas. For such a network, the degrees of freedom (DoF) are
derived by considering various message exchange scenarios: a multi-user
fully-connected Y channel, a two-pair two-way interference channel with the
relay, and a two-pair two-way X channel with the relay. Further, considering
distributed relays employing a single antenna, achievable sum-DoF are also
derived for the two-way interference channel and the three-user
fully-connected Y channel. A major implication of the derived DoF results is
that a relay with multiple antennas or multiple relays employing a single
antenna increases the capacity scaling law of the multi-user interference
network when multiple directional information flows are considered, even if the
networks are fully-connected and all nodes operate in half-duplex. These
results reveal that the relay is useful in the multi-way interference network
with practical considerations.
|
1302.0753 | Rooted Trees with Probabilities Revisited | cs.IT math.IT | Rooted trees with probabilities are convenient to represent a class of random
processes with memory. They allow one to describe and analyze variable-length codes
for data compression and distribution matching. In this work, the Leaf-Average
Node-Sum Interchange Theorem (LANSIT) and the well-known applications to path
length and leaf entropy are re-stated. The LANSIT is then applied to
informational divergence. Next, the differential LANSIT is derived, which
allows one to write normalized functionals of leaf distributions as an average of
functionals of branching distributions. Joint distributions of random variables
and the corresponding conditional distributions are special cases of leaf
distributions and branching distributions. Using the differential LANSIT,
Pinsker's inequality is formulated for rooted trees with probabilities, with an
application to the approximation of product distributions. In particular, it is
shown that if the normalized informational divergence of a distribution and a
product distribution approaches zero, then the entropy rate approaches the
entropy rate of the product distribution.
|
1302.0756 | Decomposition by Partial Linearization: Parallel Optimization of
Multi-Agent Systems | cs.IT math.IT math.OC | We propose a novel decomposition framework for the distributed optimization
of general nonconvex sum-utility functions arising naturally in the system
design of wireless multiuser interfering systems. Our main contributions are:
i) the development of the first class of (inexact) Jacobi best-response
algorithms with provable convergence, where all the users simultaneously and
iteratively solve a suitably convexified version of the original sum-utility
optimization problem; ii) the derivation of a general dynamic pricing mechanism
that provides a unified view of existing pricing schemes that are based,
instead, on heuristics; and iii) a framework that can be easily particularized
to well-known applications, giving rise to very efficient practical (Jacobi or
Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for
very specific problems. Interestingly, our framework contains as special cases
well-known gradient algorithms for nonconvex sum-utility problems, and many
block-coordinate descent schemes for convex functions.
|
1302.0780 | Internal models for nonlinear output agreement and optimal flow control | cs.SY math.OC | This paper studies the problem of output agreement in networks of nonlinear
dynamical systems under time-varying disturbances. Necessary and sufficient
conditions for output agreement are derived for the class of incrementally
passive systems. Following this, it is shown that the optimal distribution
problem in dynamic inventory systems with time-varying supply and demand can be
cast as a special version of the output agreement problem. We show in
particular that the time-varying optimal distribution problem can be solved by
applying an internal model controller to the dual variables of a certain convex
network optimization problem.
|
1302.0785 | Beyond Markov Chains, Towards Adaptive Memristor Network-based Music
Generation | cs.ET cs.AI cs.NE cs.SD | We undertook a study of the use of a memristor network for music generation,
making use of the memristor's memory to go beyond the Markov hypothesis. Seed
transition matrices are created and populated using memristor equations, and
are shown to generate musical melodies that change in style over time as a
result of feedback into the transition matrix. The spiking properties of simple
memristor networks are demonstrated and discussed with reference to
applications of music making. The limitations of simulating composing memristor
networks on von Neumann hardware are discussed, and a hardware solution based on
physical memristor properties is presented.
|
1302.0797 | Comparison of Ant-Inspired Gatherer Allocation Approaches using
Memristor-Based Environmental Models | cs.NE | Memristors are used to compare three gathering techniques in an
already-mapped environment where resource locations are known. The All Site
model, which apportions gatherers based on the modeled memristance of that
path, proves to be good at increasing overall efficiency and decreasing time to
fully deplete an environment; however, it only works well when the resources are
of similar quality. The Leaf Cutter method, based on Leaf Cutter Ant behaviour,
assigns all gatherers first to the best resource, and once depleted, uses the
All Site model to spread them out amongst the rest. The Leaf Cutter model is
better at increasing resource influx in the short-term and vastly out-performs
the All Site model in more varied environments. It is demonstrated that
memristor based abstractions of gatherer models provide potential methods for
both the comparison and implementation of agent controls.
|
1302.0806 | On the Fundamental Feedback-vs-Performance Tradeoff over the MISO-BC
with Imperfect and Delayed CSIT | cs.IT math.IT | This work considers the multiuser multiple-input single-output (MISO)
broadcast channel (BC), where a transmitter with M antennas transmits
information to K single-antenna users, and where - as expected - the quality
and timeliness of channel state information at the transmitter (CSIT) is
imperfect. Motivated by the fundamental question of how much feedback is
necessary to achieve a certain performance, this work seeks to establish bounds
on the tradeoff between degrees-of-freedom (DoF) performance and CSIT feedback
quality. Specifically, this work provides a novel DoF region outer bound for
the general K-user MISO BC with partial current CSIT, which naturally bridges
the gap between the case of having no current CSIT (only delayed CSIT, or no
CSIT) and the case with full CSIT. The work then characterizes the minimum CSIT
feedback that is necessary for any point of the sum DoF, which is optimal for
the case with M >= K, and the case with M=2, K=3.
|
1302.0870 | Centrality-constrained graph embedding | stat.ML cs.CV math.OC | Visual rendering of graphs is a key task in the mapping of complex network
data. Although most graph drawing algorithms emphasize aesthetic appeal,
certain applications such as travel-time maps place more importance on
visualization of structural network properties. The present paper advocates a
graph embedding approach with centrality considerations to comply with node
hierarchy. The problem is formulated as one of constrained multi-dimensional
scaling (MDS), and it is solved via block coordinate descent iterations with
successive approximations and guaranteed convergence to a KKT point. In
addition, a regularization term enforcing graph smoothness is incorporated with
the goal of reducing edge crossings. Experimental results demonstrate that the
algorithm converges, and can be used to efficiently embed large graphs on the
order of thousands of nodes.
|
1302.0891 | Large-Scale Fading Behavior for a Cellular Network with Uniform Spatial
Distribution | cs.IT cs.NI math.IT | Large-scale fading (LSF) between interacting nodes is a fundamental element
in radio communications, responsible for weakening the propagation, and thus
worsening the service quality. Given the importance of channel-losses in
general, and the inevitability of random spatial geometry in real-life wireless
networks, it was then natural to merge these two paradigms in order to
obtain an improved stochastic model for the LSF indicator. Therefore, in
exact closed-form notation, we generically derived the LSF distribution between
a prepositioned reference base-station and an arbitrary node for a
multi-cellular random network model. In fact, we provided an explicit and
definitive formulation that considered at once: the lattice profile, the users'
random geometry, the effect of the far-field phenomenon, the path-loss
behavior, and the stochastic impact of channel scatterers. The veracity and
accuracy of the theoretical analysis were also confirmed through Monte Carlo
simulations.
|
1302.0895 | Exact Sparse Recovery with L0 Projections | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | Many applications concern sparse signals, for example, detecting anomalies
from the differences between consecutive images taken by surveillance cameras.
This paper focuses on the problem of recovering a K-sparse signal x in N
dimensions. In the mainstream framework of compressed sensing (CS), the vector
x is recovered from M non-adaptive linear measurements y = xS, where S (of size
N x M) is typically a Gaussian (or Gaussian-like) design matrix, through some
optimization procedure such as linear programming (LP).
In our proposed method, the design matrix S is generated from an
$\alpha$-stable distribution with $\alpha\approx 0$. Our decoding algorithm
mainly requires one linear scan of the coordinates, followed by a few
iterations on a small number of coordinates which are "undetermined" in the
previous iteration. Comparisons with two strong baselines, linear programming
(LP) and orthogonal matching pursuit (OMP), demonstrate that our algorithm can
be significantly faster in decoding speed and more accurate in recovery
quality, for the task of exact sparse recovery. Our procedure is robust against
measurement noise. Even when there are insufficient measurements, our
algorithm can still reliably recover a significant portion of the nonzero
coordinates.
To provide the intuition for understanding our method, we also analyze the
procedure by assuming an idealistic setting. Interestingly, when K=2, the
"idealized" algorithm achieves exact recovery with merely 3 measurements,
regardless of N. For general K, the required sample size of the "idealized"
algorithm is about 5K.
|
1302.0907 | Bootstrap Methods for the Empirical Study of Decision-Making and
Information Flows in Social Systems | cs.IT cs.SI math.IT physics.soc-ph stat.ME | We characterize the statistical bootstrap for the estimation of
information-theoretic quantities from data, with particular reference to its
use in the study of large-scale social phenomena. Our methods allow one to
preserve, approximately, the underlying axiomatic relationships of information
theory---in particular, consistency under arbitrary coarse-graining---that
motivate use of these quantities in the first place, while providing
reliability comparable to the state of the art for Bayesian estimators. We show
how information-theoretic quantities allow for rigorous empirical study of the
decision-making capacities of rational agents and the time-asymmetric flows of
information in distributed systems. We provide illustrative examples by
reference to ongoing collaborative work on the semantic structure of the
British Criminal Court system and the conflict dynamics of the contemporary
Afghanistan insurgency.
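To make the bootstrap idea concrete, the following is an illustrative sketch (not the authors' estimator) of a basic nonparametric bootstrap of the plug-in Shannon entropy for a categorical sample. The resample count, seed, and toy data are assumptions for demonstration.

```python
# Toy sketch: bootstrap standard error for a plug-in entropy estimate.
import math
import random
from collections import Counter

def plugin_entropy(sample):
    """Plug-in (maximum-likelihood) Shannon entropy, in bits."""
    n = len(sample)
    return -sum((c / n) * math.log2(c / n) for c in Counter(sample).values())

def bootstrap_entropy(sample, resamples=200, seed=0):
    """Return (point estimate, bootstrap standard error)."""
    rng = random.Random(seed)
    estimates = [
        plugin_entropy([rng.choice(sample) for _ in sample])
        for _ in range(resamples)
    ]
    mean = sum(estimates) / resamples
    var = sum((e - mean) ** 2 for e in estimates) / (resamples - 1)
    return plugin_entropy(sample), math.sqrt(var)

data = ["a"] * 50 + ["b"] * 50   # fair two-symbol source: entropy = 1 bit
est, se = bootstrap_entropy(data)
```

The abstract's stronger point, preserving information-theoretic identities under coarse-graining, requires care beyond this naive resampling, which is known to be biased for entropy at small sample sizes.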
|
1302.0908 | The Traffic Phases of Road Networks | math.OC cs.SY math.DS | We study the relation between the average traffic flow and the vehicle
density on road networks, which we call the 2D-traffic fundamental diagram. We
show that this diagram presents mainly four phases. We analyze different cases.
First, the case of a junction managed with a priority rule is presented: four
traffic phases are identified and described, and a good analytic approximation
of the fundamental diagram is obtained by computing a generalized eigenvalue of
the dynamics of the system. Then, the model is extended to the case of two
junctions, and finally to a regular city. The system still presents mainly four
phases. The role of a critical circuit of non-priority roads appears clearly in
the two junctions case. In Section 4, we use traffic light controls to improve
the traffic diagram. We present the improvements obtained by open-loop, local
feedback, and global feedback strategies. A comparison based on the response
times to reach the stationary regime is also given. Finally, we show the
importance of the design of the junction. It appears that if the junction is
large enough, the traffic is hardly slowed down by the junction.
|
1302.0914 | Beyond Worst-Case Analysis for Joins with Minesweeper | cs.DB | We describe a new algorithm, Minesweeper, that is able to satisfy stronger
runtime guarantees than previous join algorithms (colloquially, `beyond
worst-case guarantees') for data in indexed search trees. Our first
contribution is developing a framework to measure this stronger notion of
complexity, which we call {\it certificate complexity}, that extends notions of
Barbay et al. and Demaine et al.; a certificate is a set of propositional
formulae that certifies that the output is correct. This notion captures a
natural class of join algorithms. In addition, the certificate allows us to
define a strictly stronger notion of runtime complexity than traditional
worst-case guarantees. Our second contribution is to develop a dichotomy
theorem for the certificate-based notion of complexity. Roughly, we show that
Minesweeper evaluates $\beta$-acyclic queries in time linear in the certificate
plus the output size, while for any $\beta$-cyclic query there is some instance
that takes superlinear time in the certificate (and for which the output is no
larger than the certificate size). We also extend our certificate-complexity
analysis to queries with bounded treewidth and the triangle query.
|
1302.0951 | Channel Coding and Lossy Source Coding Using a Constrained Random Number
Generator | cs.IT math.IT | Stochastic encoders for channel coding and lossy source coding are introduced
with a rate close to the fundamental limits, where the only restriction is that
the channel input alphabet and the reproduction alphabet of the lossy source
code are finite. Random numbers, which satisfy a condition specified by a
function and its value, are used to construct stochastic encoders. The proof of
the theorems is based on the hash property of an ensemble of functions, where
the results are extended to general channels/sources and alternative formulas
are introduced for channel capacity and the rate-distortion region. Since an
ensemble of sparse matrices has a hash property, we can construct a code by
using sparse matrices, where the sum-product algorithm can be used for encoding
and decoding by assuming that channels/sources are memoryless.
|
1302.0952 | A Family of Five-Weight Cyclic Codes and Their Weight Enumerators | cs.IT math.IT | Cyclic codes are a subclass of linear codes and have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. In this paper, a family of $p$-ary
cyclic codes whose duals have three zeros are proposed. The weight distribution
of this family of cyclic codes is determined. It turns out that the proposed
cyclic codes have five nonzero weights.
|
1302.0962 | Improved Accuracy of PSO and DE using Normalization: an Application to
Stock Price Prediction | cs.NE cs.LG | Data mining has been actively applied to the stock market since the 1980s. It has
been used to predict stock prices, stock indexes, for portfolio management,
trend detection and for developing recommender systems. The various algorithms
which have been used for the same include ANN, SVM, ARIMA, GARCH etc. Different
hybrid models have been developed by combining these algorithms with other
algorithms like rough sets, fuzzy logic, GA, PSO, DE, ACO, etc. to improve
efficiency. This paper proposes the DE-SVM model (Differential Evolution-Support
Vector Machine) for stock price prediction. DE has been used to select the best
free parameters combination for SVM to improve results. The paper also compares
the results of prediction with the outputs of SVM alone and PSO-SVM model
(Particle Swarm Optimization). The effect of normalization of data on the
accuracy of prediction has also been studied.
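The DE step can be sketched in a few lines. The loop below is a minimal differential evolution in pure Python, shown only to illustrate the mutation/crossover/selection cycle used for parameter search; the objective is a toy stand-in with a known minimum, not an SVM cross-validation error, and all parameter values are illustrative assumptions.

```python
# Minimal 1-D differential evolution: mutate (a + F*(b - c)), cross over,
# keep the trial if it does not worsen the objective.
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = min(max(a + f * (b - c), lo), hi)       # mutate and clip
            trial = mutant if rng.random() < cr else pop[i]  # crossover
            if objective(trial) <= objective(pop[i]):        # greedy selection
                pop[i] = trial
    return min(pop, key=objective)

# Toy "validation error" whose minimum sits at parameter value 3.0.
best = differential_evolution(lambda x: (x - 3.0) ** 2, bounds=(0.0, 10.0))
```

In the DE-SVM setting described above, the objective would instead evaluate SVM prediction error for a candidate free-parameter combination.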
|
1302.0963 | RandomBoost: Simplified Multi-class Boosting through Randomization | cs.LG | We propose a novel boosting approach to multi-class classification problems,
in which multiple classes are distinguished by a set of random projection
matrices in essence. The approach uses random projections to alleviate the
proliferation of binary classifiers typically required to perform multi-class
classification. The result is a multi-class classifier with a single
vector-valued parameter, irrespective of the number of classes involved. Two
variants of this approach are proposed. The first method randomly projects the
original data into new spaces, while the second method randomly projects the
outputs of learned weak classifiers. These methods are not only conceptually
simple but also effective and easy to implement. A series of experiments on
synthetic, machine learning and visual recognition data sets demonstrate that
our proposed methods compare favorably to existing multi-class boosting
algorithms in terms of both the convergence rate and classification accuracy.
|
1302.0971 | Data validation using lookup objects in Borland Delphi 7.0 | cs.DB | Developing an application with multiple tables requires attention to input
validation (especially in the child table) in order to maximize the accuracy of
data input. This is achieved with a lookup (taking data from another dataset).
There are two ways to look up data from the parent table: 1) using objects
(DBLookupComboBox and DBLookupListBox), or 2) arranging the properties of the
fields' data types (shown using a DBGrid). This article uses Borland Delphi
software (an Inprise product). The method is presented in five practical steps:
1) relational database scheme, 2) form design, 3) object-database relationship
scheme, 4) property and field-type arrangement, and 5) procedures. The results
of this paper are: 1) relationships using lookup objects are valid, and 2)
Delphi lookup objects can be used for 1-1, 1-N, and M-N relationships.
|
1302.0974 | A Comparison of Relaxations of Multiset Canonical Correlation Analysis
and Applications | cs.LG | Canonical correlation analysis is a statistical technique that is used to
find relations between two sets of variables. An important extension in pattern
analysis is to consider more than two sets of variables. This problem can be
expressed as a quadratically constrained quadratic program (QCQP), commonly
referred to as Multi-set Canonical Correlation Analysis (MCCA). This is a
non-convex problem and so greedy algorithms converge to local optima without
any guarantees on global optimality. In this paper, we show that despite being
highly structured, finding the optimal solution is NP-Hard. This motivates our
relaxation of the QCQP to a semidefinite program (SDP). The SDP is convex, can
be solved reasonably efficiently and comes with both absolute and
output-sensitive approximation quality. In addition to theoretical guarantees,
we do an extensive comparison of the QCQP method and the SDP relaxation on a
variety of synthetic and real world data. Finally, we present two useful
extensions: we incorporate kernel methods and compute multiple sets of
canonical vectors.
|
1302.1007 | Image Denoising Using Interquartile Range Filter with Local Averaging | cs.CV | Image denoising is one of the fundamental problems in image processing. In
this paper, a novel approach to suppressing noise in images is presented by
applying the interquartile range (IQR), one of the statistical methods used to
detect outliers in a dataset. A window of size k x k was implemented to support
the IQR filter. Each pixel outside the IQR range of the k x k window is treated
as a noisy pixel. The estimates of the noisy pixels were obtained by local
averaging. The essential advantage of applying the IQR filter is that it better
preserves the edge sharpness of the original image. A variety of test images
have been used to evaluate the proposed filter, and the PSNR was calculated and
compared with that of the median filter. The experimental results on standard
test images demonstrate that this filter is simpler and performs better than
the median filter.
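A rough sketch of the IQR-filtering idea described above, assuming a grayscale image stored as a 2D list and a 3x3 window; the 1.5*IQR fence, boundary handling, and including the flagged pixel in the local average are simplifying assumptions, not details from the paper.

```python
# Sketch: flag pixels outside the window's IQR fence as noisy and replace
# them with the local window average.
import statistics

def iqr_denoise(image, k=3):
    h, w, r = len(image), len(image[0]), k // 2
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            window = [image[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            q1, _, q3 = statistics.quantiles(window, n=4)
            iqr = q3 - q1
            lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
            if not lo <= image[y][x] <= hi:      # outlier -> noisy pixel
                out[y][x] = sum(window) / len(window)
    return out

noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]  # one impulse pixel
clean = iqr_denoise(noisy)
```

On this toy input, the impulse at the center is flagged (the window IQR is zero) and averaged away, while uniform pixels are left untouched, which is the edge-preservation property the abstract emphasizes.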
|
1302.1008 | CSIT Sharing over Finite Capacity Backhaul for Spatial Interference
Alignment | cs.IT math.IT | Cellular systems that employ time division duplexing (TDD) transmission are
good candidates for implementation of interference alignment (IA) in the
downlink since channel reciprocity enables the estimation of the channel state
by the base stations (BS) in the uplink phase. However, the interfering BSs
need to share their channel estimates via backhaul links of finite capacity. A
quantization scheme is proposed which reduces the amount of information
exchange (compared to conventional methods) required to achieve IA in a TDD
system. The scaling (with the transmit power) of the number of bits to be
exchanged between the BSs that is sufficient to preserve the multiplexing gain
of IA is derived.
|
1302.1020 | Block-to-Block Distribution Matching | cs.IT math.IT | In this work, binary block-to-block distribution matching is considered. m
independent and uniformly distributed bits are mapped to n output bits
resembling a target product distribution. A rate R is said to be achieved by a
sequence of encoder-decoder pairs if, as m and n tend to infinity, (1) m/n approaches
R, (2) the informational divergence per bit of the output distribution and the
target distribution goes to zero, and (3) the probability of erroneous decoding
goes to zero. It is shown that the maximum achievable rate is equal to the
entropy of the target distribution. A practical encoder-decoder pair is
constructed that provably achieves the maximum rate in the limit. Numerical
results illustrate that the suggested system operates close to the limits with
reasonable complexity. The key idea is to internally use a fixed-to-variable
length matcher, to compensate for underflow by random mapping, and to declare an
error when overflow occurs.
|
1302.1035 | Leveraging Automorphisms of Quantum Codes for Fault-Tolerant Quantum
Computation | quant-ph cs.IT math.IT | Fault-tolerant quantum computation is a technique that is necessary to build
a scalable quantum computer from noisy physical building blocks. Key for the
implementation of fault-tolerant computations is the ability to perform a
universal set of quantum gates that act on the code space of an underlying
quantum code. To implement such a universal gate set fault-tolerantly is an
expensive task in terms of physical operations, and any possible shortcut to
save operations is potentially beneficial and might lead to a reduction in
overhead for fault-tolerant computations. We show how the automorphism group of
a quantum code can be used to implement some operators on the encoded quantum
states in a fault-tolerant way by merely permuting the physical qubits. We
derive conditions that a code has to satisfy in order to have a large group of
operations that can be implemented transversally when combining transversal
CNOT with automorphisms. We give several examples for quantum codes with large
groups, including codes with parameters [[8,3,3]], [[15,7,3]], [[22,8,4]], and
[[31,11,5]].
|
1302.1043 | The price of bandit information in multiclass online classification | cs.LG | We consider two scenarios of multiclass online learning of a hypothesis class
$H\subseteq Y^X$. In the {\em full information} scenario, the learner is
exposed to instances together with their labels. In the {\em bandit} scenario,
the true label is not exposed, but rather an indication whether the learner's
prediction is correct or not. We show that the ratio between the error rates in
the two scenarios is at most $8\cdot|Y|\cdot \log(|Y|)$ in the realizable case,
and $\tilde{O}(\sqrt{|Y|})$ in the agnostic case. The results are tight up to a
logarithmic factor and essentially answer an open question from (Daniely et
al. - Multiclass learnability and the ERM principle).
We apply these results to the class of $\gamma$-margin multiclass linear
classifiers in $\reals^d$. We show that the bandit error rate of this class is
$\tilde{\Theta}(\frac{|Y|}{\gamma^2})$ in the realizable case and
$\tilde{\Theta}(\frac{1}{\gamma}\sqrt{|Y|T})$ in the agnostic case. This
resolves an open question from (Kakade et al. - Efficient bandit algorithms
for online multiclass prediction).
|
1302.1079 | Cognitive Access Policies under a Primary ARQ process via
Forward-Backward Interference Cancellation | cs.IT cs.SY math.IT | This paper introduces a novel technique for access by a cognitive Secondary
User (SU) using best-effort transmission to a spectrum with an incumbent
Primary User (PU), which uses Type-I Hybrid ARQ. The technique leverages the
primary ARQ protocol to perform Interference Cancellation (IC) at the SU
receiver (SUrx). Two IC mechanisms that work in concert are introduced: Forward
IC, where SUrx, after decoding the PU message, cancels its interference in the
(possible) following PU retransmissions of the same message, to improve the SU
throughput; Backward IC, where SUrx performs IC on previous SU transmissions,
whose decoding failed due to severe PU interference. Secondary access policies
are designed that determine the secondary access probability in each state of
the network so as to maximize the average long-term SU throughput by
opportunistically leveraging IC, while causing bounded average long-term PU
throughput degradation and SU power expenditure. It is proved that the optimal
policy prescribes that the SU prioritizes its access in the states where SUrx
knows the PU message, thus enabling IC. An algorithm is provided to optimally
allocate additional secondary access opportunities in the states where the PU
message is unknown. Numerical results are shown to assess the throughput gain
provided by the proposed techniques.
|
1302.1094 | Analysis Based Blind Compressive Sensing | cs.IT math.IT | In this work we address the problem of blindly reconstructing compressively
sensed signals by exploiting the co-sparse analysis model. In the analysis
model it is assumed that a signal multiplied by an analysis operator results in
a sparse vector. We propose an algorithm that learns the operator adaptively
during the reconstruction process. The arising optimization problem is tackled
via a geometric conjugate gradient approach. Different types of sampling noise
are handled by simply exchanging the data fidelity term. Numerical experiments
are performed for measurements corrupted with Gaussian as well as impulsive
noise to show the effectiveness of our method.
|
1302.1105 | Open Access, library and publisher competition, and the evolution of
general commerce | cs.DL cs.CY cs.SI math.HO physics.soc-ph | Discussions of the economics of scholarly communication are usually devoted
to Open Access, rising journal prices, publisher profits, and boycotts. That
ignores what seems a much more important development in this market.
Publishers, through the oft-reviled "Big Deal" packages, are providing much
greater and more egalitarian access to the journal literature, an approximation
to true Open Access. In the process they are also marginalizing libraries, and
obtaining a greater share of the resources going into scholarly communication.
This is enabling a continuation of publisher profits as well as of what for
decades has been called "unsustainable journal price escalation". It is also
inhibiting the spread of Open Access, and potentially leading to an oligopoly
of publishers controlling distribution through large-scale licensing.
The "Big Deal" practices are worth studying for several general reasons. The
degree to which publishers succeed in diminishing the role of libraries may be
an indicator of the degree and speed at which universities transform
themselves. More importantly, these "Big Deals" appear to point the way to the
future of the whole economy, where progress is characterized by declining
privacy, increasing price discrimination, increasing opaqueness in pricing,
increasing reliance on low-paid or unpaid work of others for profits, and
business models that depend on customer inertia.
|
1302.1123 | Large Scale Distributed Acoustic Modeling With Back-off N-grams | cs.CL | The paper revives an older approach to acoustic modeling that borrows from
n-gram language modeling in an attempt to scale up both the amount of training
data and model size (as measured by the number of parameters in the model), to
approximately 100 times larger than current sizes used in automatic speech
recognition. In such a data-rich setting, we can expand the phonetic context
significantly beyond triphones, as well as increase the number of Gaussian
mixture components for the context-dependent states that allow it. We have
experimented with contexts that span seven or more context-independent phones,
and up to 620 mixture components per state. Dealing with unseen phonetic
contexts is accomplished using the familiar back-off technique from language
modeling, chosen for implementation simplicity. The back-off acoustic model is
estimated, stored and served using MapReduce distributed computing
infrastructure.
Speech recognition experiments are carried out in an N-best list rescoring
framework for Google Voice Search. Training big models on large amounts of data
proves to be an effective way to increase the accuracy of a state-of-the-art
automatic speech recognition system. We use 87,000 hours of training data
(speech along with transcription) obtained by filtering utterances in Voice
Search logs on automatic speech recognition confidence. Models ranging in size
between 20--40 million Gaussians are estimated using maximum likelihood
training. They achieve relative reductions in word-error-rate of 11% and 6%
when combined with first-pass models trained using maximum likelihood, and
boosted maximum mutual information, respectively. Increasing the context size
beyond five phones (quinphones) does not help.
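The back-off technique borrowed from n-gram language modeling can be sketched in its simplest form: if a context was never observed, fall back to a discounted lower-order estimate. The bigram setting, toy corpus, and fixed back-off weight below are illustrative assumptions; the paper applies the same idea to unseen phonetic contexts rather than words.

```python
# Toy back-off bigram model: maximum-likelihood bigram probability when the
# context was seen, otherwise a discounted unigram estimate.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = len(corpus)

def backoff_prob(prev, word, alpha=0.4):
    """P(word | prev), backing off to the unigram when (prev, word) is unseen."""
    if (prev, word) in bigrams:
        return bigrams[(prev, word)] / unigrams[prev]
    return alpha * unigrams[word] / total   # back off to lower-order estimate

p_seen = backoff_prob("the", "cat")     # bigram observed in the corpus
p_backoff = backoff_prob("mat", "ran")  # unseen bigram, falls back
```

A production back-off model would use proper discounting so probabilities normalize; the fixed alpha here only illustrates the mechanism.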
|
1302.1128 | On the Relation of Delay Equations to First-Order Hyperbolic Partial
Differential Equations | math.OC cs.SY math.AP math.DS | This paper establishes the equivalence between systems described by a single
first-order hyperbolic partial differential equation and systems described by
integral delay equations. System-theoretic results are provided for both
classes of systems (among them converse Lyapunov results). The proposed
framework can allow the study of discontinuous solutions for nonlinear systems
described by a single first-order hyperbolic partial differential equation
under the effect of measurable inputs acting on the boundary and/or on the
differential equation. An illustrative example shows that the conversion of a
system described by a single first-order hyperbolic partial differential
equation to an integral delay system can simplify considerably the solution of
the corresponding robust feedback stabilization problem.
|
1302.1143 | Evolvability Is Inevitable: Increasing Evolvability Without the Pressure
to Adapt | cs.NE q-bio.PE | Why evolvability appears to have increased over evolutionary time is an
important unresolved biological question. Unlike most candidate explanations,
this paper proposes that increasing evolvability can result without any
pressure to adapt. The insight is that if evolvability is heritable, then an
unbiased drifting process across genotypes can still create a distribution of
phenotypes biased towards evolvability, because evolvable organisms diffuse
more quickly through the space of possible phenotypes. Furthermore, because
phenotypic divergence often correlates with founding niches, niche founders may
on average be more evolvable, which through population growth provides a
genotypic bias towards evolvability. Interestingly, the combination of these
two mechanisms can lead to increasing evolvability without any pressure to
out-compete other organisms, as demonstrated through experiments with a series
of simulated models. Thus rather than from pressure to adapt, evolvability may
inevitably result from any drift through genotypic space combined with
evolution's passive tendency to accumulate niches.
|
1302.1155 | An Effective Procedure for Computing "Uncomputable" Functions | cs.AI | We give an effective procedure that produces a natural number in its output
from any natural number in its input, that is, it computes a total function.
The elementary operations of the procedure are Turing-computable. The procedure
has a second input which can contain the Goedel number of any Turing-computable
total function whose range is a subset of the set of the Goedel numbers of all
Turing-computable total functions. We prove that the second input cannot be set
to the Goedel number of any Turing-computable function that computes the output
from any natural number in its first input. In this sense, there is no Turing
program that computes the output from its first input. The procedure is used to
define creative procedures which compute functions that are not
Turing-computable. We argue that creative procedures model an aspect of
reasoning that cannot be modeled by Turing machines.
|
1302.1156 | A Non-Binary Associative Memory with Exponential Pattern Retrieval
Capacity and Iterative Learning: Extended Results | cs.NE | We consider the problem of neural association for a network of non-binary
neurons. Here, the task is to first memorize a set of patterns using a network
of neurons whose states assume values from a finite number of integer levels.
Later, the same network should be able to recall previously memorized patterns
from their noisy versions. Prior work in this area considers storing a finite
number of purely random patterns, and has shown that the pattern retrieval
capacities (maximum number of patterns that can be memorized) scale only
linearly with the number of neurons in the network.
In our formulation of the problem, we concentrate on exploiting redundancy
and internal structure of the patterns in order to improve the pattern
retrieval capacity. Our first result shows that if the given patterns have a
suitable linear-algebraic structure, i.e. comprise a sub-space of the set of
all possible patterns, then the pattern retrieval capacity is in fact
exponential in terms of the number of neurons. The second result extends the
previous finding to cases where the patterns have weak minor components, i.e.
the smallest eigenvalues of the correlation matrix tend toward zero. We will
use these minor components (or the basis vectors of the pattern null space) to
both increase the pattern retrieval capacity and error correction capabilities.
An iterative algorithm is proposed for the learning phase, and two simple
neural update algorithms are presented for the recall phase. Using analytical
results and simulations, we show that the proposed methods can tolerate a fair
amount of errors in the input while being able to memorize an exponentially
large number of patterns.
|
1302.1157 | Excess-Risk of Distributed Stochastic Learners | math.OC cs.DC cs.MA cs.SI | This work studies the learning ability of consensus and diffusion distributed
learners from continuous streams of data arising from different but related
statistical distributions. Four distinctive features for diffusion learners are
revealed in relation to other decentralized schemes even under left-stochastic
combination policies. First, closed-form expressions for the evolution of their
excess-risk are derived for strongly-convex risk functions under a diminishing
step-size rule. Second, using these results, it is shown that the diffusion
strategy improves the asymptotic convergence rate of the excess-risk relative
to non-cooperative schemes. Third, it is shown that when the in-network
cooperation rules are designed optimally, the performance of the diffusion
implementation can outperform that of naive centralized processing. Finally,
the arguments further show that diffusion outperforms consensus strategies
asymptotically, and that the asymptotic excess-risk expression is invariant to
the particular network topology. The framework adopted in this work studies
convergence in the stronger mean-square-error sense, rather than in
distribution, and develops tools that enable a close examination of the
differences between distributed strategies in terms of asymptotic behavior, as
well as in terms of convergence rates.
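A minimal adapt-then-combine (ATC) diffusion sketch illustrates the setting: each agent takes a stochastic-gradient step on its own data stream with a diminishing step size, then combines with its neighbors through a combination matrix. This is an illustrative least-mean-squares instance, not the paper's general strongly-convex analysis; the ring topology and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N, d, T = 5, 3, 2000              # agents, parameter dimension, iterations
w_true = rng.standard_normal(d)   # common model all agents try to learn

# Combination matrix over a ring (illustrative choice, rows sum to one).
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5
    A[i, (i + 1) % N] = 0.25
    A[i, (i - 1) % N] = 0.25

w = np.zeros((N, d))              # each agent's estimate
for t in range(1, T + 1):
    mu = 1.0 / t                  # diminishing step-size rule, as in the abstract
    # Adapt: each agent takes a stochastic-gradient step on its own stream.
    psi = np.empty_like(w)
    for i in range(N):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.1 * rng.standard_normal()
        psi[i] = w[i] + mu * (y - x @ w[i]) * x
    # Combine: mix the intermediate estimates with the neighbors'.
    w = A @ psi

err = np.linalg.norm(w - w_true, axis=1).mean()
print(err)   # small: the agents agree on (roughly) the true model
```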
|
1302.1170 | Computability of the entropy of one-tape Turing Machines | cs.FL cs.CC cs.IT math.DS math.IT | We prove that the maximum speed and the entropy of a one-tape Turing machine
are computable, in the sense that we can approximate them to any given
precision $\epsilon$. This is contrary to popular belief, as all dynamical
properties are usually undecidable for Turing machines. The result is quite
specific to one-tape Turing machines, as it is not true anymore for two-tape
Turing machines by the results of Blondel et al., and uses the approach of
crossing sequences introduced by Hennie.
|
1302.1178 | Overview of EIREX 2012: Social Media | cs.IR | The third Information Retrieval Education through EXperimentation track
(EIREX 2012) was run at the University Carlos III of Madrid, during the 2012
spring semester. EIREX 2012 is the third in a series of experiments designed to
foster new Information Retrieval (IR) education methodologies and resources,
with the specific goal of teaching undergraduate IR courses from an
experimental perspective. For an introduction to the motivation behind the
EIREX experiments, see the first sections of [Urbano et al., 2011a]. For
information on other editions of EIREX and related data, see the website at
http://ir.kr.inf.uc3m.es/eirex/. The EIREX series has the following goals: a)
to help students get a view of the Information Retrieval process as they would
find it in a real-world scenario, either industrial or academic; b) to make
students realize the importance of laboratory experiments in Computer Science
and have them initiated in their execution and analysis; c) to create a public
repository of resources to teach Information Retrieval courses; d) to seek the
collaboration and active participation of other Universities in this endeavor.
This overview paper summarizes the results of the EIREX 2012 track, focusing on
the creation of the test collection and the analysis to assess its reliability.
|
1302.1211 | Quantum Lyapunov Control Based on the Average Value of an Imaginary
Mechanical Quantity | cs.SY math-ph math.MP | The convergence of closed quantum systems in the degenerate cases to the
desired target state by using the quantum Lyapunov control based on the average
value of an imaginary mechanical quantity is studied. Building on existing
methods, which can only ensure that single-control Hamiltonian systems
converge toward a set, we design control laws that make multi-control
Hamiltonian systems converge to the desired target state. The convergence of
the control system is proved, and the convergence to the desired target state
is analyzed. Conditions under which convergence to the target state is
guaranteed are proved or analyzed. Finally, numerical simulations of a
three-level system in the degenerate case, transferring from an initial
eigenstate to a target superposition state, are presented to verify the
effectiveness of the proposed control method.
|
1302.1216 | Secure Communication Via an Untrusted Non-Regenerative Relay in Fading
Channels | cs.IT math.IT | We investigate a relay network where the source can potentially utilize an
untrusted non-regenerative relay to augment its direct transmission of a
confidential message to the destination. Since the relay is untrusted, it is
desirable to protect the confidential data from it while simultaneously making
use of it to increase the reliability of the transmission. We first examine the
secrecy outage probability (SOP) of the network assuming a single antenna
relay, and calculate the exact SOP for three different schemes: direct
transmission without using the relay, conventional non-regenerative relaying,
and cooperative jamming by the destination. Subsequently, we conduct an
asymptotic analysis of the SOPs to determine the optimal policies in different
operating regimes. We then generalize to the multi-antenna relay case and
investigate the impact of the number of relay antennas on the secrecy
performance. Finally, we study a scenario where the relay has only a single RF
chain which necessitates an antenna selection scheme, and we show that unlike
the case where all antennas are used, under certain conditions the cooperative
jamming scheme with antenna selection provides a diversity advantage for the
receiver. Numerical results are presented to verify the theoretical predictions
of the preferred transmission policies.
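The secrecy outage probability itself is easy to estimate by Monte Carlo. The sketch below uses the standard SOP definition for single Rayleigh-faded main and eavesdropper links; it is not the paper's relay schemes or closed-form analysis, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def secrecy_outage_prob(snr_d, snr_e, rate_s, trials=200_000):
    """Monte Carlo estimate of the secrecy outage probability
    P(Cs < rate_s), with Cs = [log2(1+snr_d*|hd|^2) - log2(1+snr_e*|he|^2)]^+,
    for Rayleigh fading (exponential channel gains). Illustrative only."""
    gd = rng.exponential(size=trials)   # |h_d|^2, destination link
    ge = rng.exponential(size=trials)   # |h_e|^2, eavesdropper link
    cs = np.maximum(np.log2(1 + snr_d * gd) - np.log2(1 + snr_e * ge), 0.0)
    return np.mean(cs < rate_s)

# A stronger main channel lowers the secrecy outage probability.
p_weak = secrecy_outage_prob(snr_d=10.0, snr_e=10.0, rate_s=1.0)
p_strong = secrecy_outage_prob(snr_d=100.0, snr_e=10.0, rate_s=1.0)
print(p_weak, p_strong)
```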
|
1302.1232 | When are the most informative components for inference also the
principal components? | math.ST cs.DS cs.IT cs.LG math.IT math.PR stat.TH | Which components of the singular value decomposition of a signal-plus-noise
data matrix are most informative for the inferential task of detecting or
estimating an embedded low-rank signal matrix? Principal component analysis
ascribes greater importance to the components that capture the greatest
variation, i.e., the singular vectors associated with the largest singular
values. This choice is often justified by invoking the Eckart-Young theorem
even though that work addresses the problem of how to best represent a
signal-plus-noise matrix using a low-rank approximation and not how to
best _infer_ the underlying low-rank signal component.
Here we take a first-principles approach in which we start with a
signal-plus-noise data matrix and show how the spectrum of the noise-only
component governs whether the principal or the middle components of the
singular value decomposition of the data matrix will be the informative
components for inference. Simply put, if the noise spectrum is supported on a
connected interval, in a sense we make precise, then the use of the principal
components is justified. When the noise spectrum is supported on multiple
intervals, then the middle components might be more informative than the
principal components.
The end result is a proper justification of the use of principal components
in the setting where the noise matrix is i.i.d. Gaussian and the identification
of scenarios, generically involving heterogeneous noise models such as mixtures
of Gaussians, where the middle components might be more informative than the
principal components so that they may be exploited to extract additional
processing gain. Our results show how the blind use of principal components can
lead to suboptimal or even faulty inference because of phase transitions that
separate a regime where the principal components are informative from a regime
where they are uninformative.
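The i.i.d. Gaussian case can be illustrated numerically: plant a rank-1 spike in a Gaussian noise matrix (whose spectrum is a single connected bulk) and check that the principal singular vector aligns with the planted signal when the signal strength is well above the phase-transition threshold. The sizes and strength below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n, theta = 400, 4.0               # matrix size and planted signal strength
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

noise = rng.standard_normal((n, n)) / np.sqrt(n)   # i.i.d. Gaussian noise
X = theta * np.outer(u, v) + noise                 # signal-plus-noise matrix

# With i.i.d. Gaussian noise the spectrum is one connected "bulk", so the
# *principal* singular vector is the informative one once theta exceeds the
# phase-transition threshold.
U, s, Vt = np.linalg.svd(X)
alignment = abs(U[:, 0] @ u)
print(alignment)   # close to 1 for theta well above the threshold
```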
|
1302.1236 | Sharp RIP Bound for Sparse Signal and Low-Rank Matrix Recovery | cs.IT math.IT | This paper establishes a sharp condition on the restricted isometry property
(RIP) for both the sparse signal recovery and low-rank matrix recovery. It is
shown that if the measurement matrix $A$ satisfies the RIP condition
$\delta_k^A<1/3$, then all $k$-sparse signals $\beta$ can be recovered exactly
via the constrained $\ell_1$ minimization based on $y=A\beta$. Similarly, if
the linear map $\cal M$ satisfies the RIP condition $\delta_r^{\cal M}<1/3$,
then all matrices $X$ of rank at most $r$ can be recovered exactly via the
constrained nuclear norm minimization based on $b={\cal M}(X)$. Furthermore, in
both cases it is not possible to do so in general when the condition does not
hold. In addition, noisy cases are considered and oracle inequalities are given
under the sharp RIP condition.
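The constrained l1 minimization in the statement can be sketched as a linear program (basis pursuit). This is a generic recovery demo, not a verification of the delta < 1/3 condition; the dimensions and sparsity pattern below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

n, m, k = 24, 14, 2               # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
beta = np.zeros(n)
beta[[3, 17]] = [1.5, -2.0]       # k-sparse ground truth
y = A @ beta

# Constrained l1 minimization (basis pursuit): min ||b||_1  s.t.  A b = y.
# Split b = u - v with u, v >= 0, so the objective is sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
beta_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(beta_hat - beta)))   # small when recovery is exact
```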
|
1302.1256 | Repairing Multiple Failures in the Suh-Ramchandran Regenerating Codes | cs.IT math.IT | Using the idea of interference alignment, Suh and Ramchandran constructed a
class of minimum-storage regenerating codes which can repair one systematic or
one parity-check node with optimal repair bandwidth. With the same code
structure, we show that in addition to single node failure, double node
failures can be repaired collaboratively with optimal repair bandwidth as well.
We give an example of how to repair double failures in the Suh-Ramchandran
regenerating code with six nodes, and give the proof for the general case.
|
1302.1258 | A Comparison of Superposition Coding Schemes | cs.IT math.IT | There are two variants of superposition coding schemes. Cover's original
superposition coding scheme has code clouds of the identical shape, while
Bergmans's superposition coding scheme has code clouds of independently
generated shapes. These two schemes yield identical achievable rate regions in
several scenarios, such as the capacity region for degraded broadcast channels.
This paper shows that under the optimal maximum likelihood decoding, these two
superposition coding schemes can result in different rate regions. In
particular, it is shown that for the two-receiver broadcast channel, Cover's
superposition coding scheme can achieve rates strictly larger than Bergmans's
scheme.
|
1302.1270 | Diffusion of Cooperative Behavior in Decentralized Cognitive Radio
Networks with Selfish Spectrum Sensors | cs.IT cs.GT math.IT | This work investigates the diffusion of cooperative behavior over time in a
decentralized cognitive radio network with selfish spectrum-sensing users. The
users can individually choose whether or not to participate in cooperative
spectrum sensing, in order to maximize their individual payoff defined in terms
of the sensing false-alarm rate and transmit energy expenditure. The system is
modeled as a partially connected network with a statistical distribution of the
degree of the users, who play their myopic best responses to the actions of
their neighbors at each iteration. Based on this model, we investigate the
existence and characterization of Bayesian Nash Equilibria for the diffusion
game. The impacts of network topology, channel fading statistics, sensing
protocol, and multiple antennas on the outcome of the diffusion process are
analyzed next. Simulation results that demonstrate how conducive different
network scenarios are to the diffusion of cooperation are presented for further
insight, and we conclude with a discussion on additional refinements and issues
worth pursuing.
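The myopic best-response dynamics can be sketched with a stylized payoff: a user cooperates when the marginal sensing benefit of joining exceeds its energy cost, given the neighbors' current choices. The benefit function, random-graph model, and parameters below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

N, p = 50, 0.1                     # users and edge probability (random graph)
adj = rng.random((N, N)) < p
adj = np.triu(adj, 1)
adj = adj | adj.T

q, cost = 0.5, 0.05                # per-sensor miss factor, sensing energy cost

def benefit(m):
    # Stylized sensing benefit: the gain saturates with the number of
    # cooperating sensors m (concave, illustrative form).
    return 1.0 - q ** m

coop = rng.random(N) < 0.5         # initial participation choices
for _ in range(100):               # iterate myopic best responses
    new = np.empty(N, dtype=bool)
    for i in range(N):
        m = np.count_nonzero(coop & adj[i])      # cooperating neighbors
        # Cooperate iff the marginal sensing benefit exceeds the energy cost.
        new[i] = benefit(m + 1) - benefit(m) > cost
    if np.array_equal(new, coop):
        break
    coop = new

print(coop.mean())   # fraction of users cooperating after the dynamics
```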
|
1302.1294 | Image Interpolation Using Kriging Technique for Spatial Data | cs.CV | Image interpolation has traditionally been performed with customary
interpolation techniques. Recently, the Kriging technique has been widely
implemented for prediction in simulation and geostatistics. In this article, the Kriging
technique was used instead of the classical interpolation methods to predict
the unknown points in the digital image array. The efficiency of the proposed
technique was proven using the PSNR and compared with the traditional
interpolation techniques. The results showed that the Kriging technique is
almost as accurate as cubic interpolation, and for some images Kriging has
higher accuracy. Miscellaneous test images were used to validate the proposed
technique.
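A minimal ordinary-kriging predictor shows the idea: the unknown pixel is a weighted sum of known neighbors, with weights obtained from a covariance model and constrained to sum to one. The Gaussian covariance model and the pixel layout are illustrative assumptions, not the article's exact setup.

```python
import numpy as np

def ordinary_kriging(coords, values, target, length=2.0):
    """Predict the value at `target` from known pixels at `coords` using
    ordinary kriging with a Gaussian covariance model (illustrative choice)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-(d / length) ** 2)                 # pairwise covariances
    c0 = np.exp(-(np.linalg.norm(coords - target, axis=1) / length) ** 2)
    # Kriging system with a Lagrange multiplier enforcing sum(weights) = 1.
    K = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.append(c0, 1.0)
    w = np.linalg.solve(K, rhs)[:n]
    return w @ values, w

# Known pixel positions/intensities around an unknown pixel at (1.5, 1.5).
coords = np.array([[1, 1], [1, 2], [2, 1], [2, 2]], dtype=float)
values = np.array([100.0, 110.0, 120.0, 130.0])
pred, w = ordinary_kriging(coords, values, np.array([1.5, 1.5]))
print(pred, w.sum())   # weights sum to 1 by construction
```

By symmetry the four weights are equal here, so the prediction is the mean of the neighbors.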
|
1302.1296 | Hybrid Image Segmentation using Discerner Cluster in FCM and Histogram
Thresholding | cs.CV | Image thresholding has played an important role in image segmentation. This
paper presents a hybrid approach for image segmentation based on the
thresholding by fuzzy c-means (THFCM) algorithm for image segmentation. The
goal of the proposed approach is to find a discerner cluster able to find an
automatic threshold. The algorithm is formulated by applying the standard FCM
clustering algorithm to the frequencies (y-values) on the smoothed histogram.
Hence, the frequencies of an image can be used instead of the whole image
data. The cluster with the highest peak, which represents the maximum
frequency in the image histogram, plays the key role in determining the
discerner cluster for the grey-level image. The pixels belonging to the
discerner cluster then represent an object in the grey-level histogram, while
the other clusters represent the background. Experimental results with
standard test images have been obtained through the proposed approach (THFCM).
|
1302.1300 | Kriging Interpolation Filter to Reduce High Density Salt and Pepper
Noise | cs.CV | Image denoising is a critical issue in the field of digital image processing.
This paper proposes a novel Salt & Pepper noise suppression by developing a
Kriging Interpolation Filter (KIF) for image denoising. Gray-level images
degraded with Salt & Pepper noise are considered. A sequential search for
noise detection is made using a k×k window to identify non-noisy pixels only.
The non-noisy pixels are then passed to the Kriging interpolation method to
predict their missing neighbors, i.e., the pixels flagged as noisy in the
first phase. The Kriging Interpolation Filter proves very effective at
suppressing high noise densities, achieving noise reduction without loss of
edges or detail. Comparisons with existing algorithms, using quality metrics
such as PSNR and MSE, assess the proposed filter.
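The detection phase and the quality metrics can be sketched as follows; the repair step here substitutes a naive neighborhood mean for the paper's Kriging prediction, purely for illustration, and all sizes and densities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

img = rng.integers(30, 220, size=(64, 64)).astype(float)   # clean test image

# Add salt & pepper noise at density p.
p = 0.3
noisy = img.copy()
flips = rng.random(img.shape)
noisy[flips < p / 2] = 0.0          # pepper
noisy[flips > 1 - p / 2] = 255.0    # salt

# First phase: flag pixels at the extremes as noisy; the rest are "non-noisy"
# and would feed the interpolation stage (Kriging in the paper).
noise_mask = (noisy == 0.0) | (noisy == 255.0)

# Quality metrics used in the comparison.
def mse(a, b):
    return np.mean((a - b) ** 2)

def psnr(a, b):
    return 10 * np.log10(255.0 ** 2 / mse(a, b))

# Naive repair for illustration: replace each noisy pixel with the mean of its
# non-noisy 3x3 neighbors (a stand-in for the Kriging prediction).
repaired = noisy.copy()
for i, j in zip(*np.nonzero(noise_mask)):
    patch = noisy[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    good = patch[(patch != 0.0) & (patch != 255.0)]
    if good.size:
        repaired[i, j] = good.mean()

print(psnr(img, noisy), psnr(img, repaired))   # repair raises the PSNR
```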
|
1302.1302 | Quasi-Static SIMO Fading Channels at Finite Blocklength | cs.IT math.IT | We investigate the maximal achievable rate for a given blocklength and error
probability over quasi-static single-input multiple-output (SIMO) fading
channels. Under mild conditions on the channel gains, it is shown that the
channel dispersion is zero regardless of whether the fading realizations are
available at the transmitter and/or the receiver. The result follows from
computationally and analytically tractable converse and achievability bounds.
Through numerical evaluation, we verify that, in some scenarios, zero
dispersion indeed entails fast convergence to outage capacity as the
blocklength increases. In the example of a particular 1×2 SIMO Rician channel,
the blocklength required to achieve 90% of capacity is about an order of
magnitude smaller compared to the blocklength required for an AWGN channel with
the same capacity.
|
1302.1326 | Cloud Computing framework for Computer Vision Research: An Introduction | cs.CV cs.DC | Cloud computing offers the potential to give scientists access to the massive
computing resources often required by machine learning applications such as
computer vision. This proposal aims to show which benefits the cloud can offer
to medical image analysis users (including scientists, clinicians, and
research institutes). Since the security and privacy of algorithms matter to
most algorithm inventors, these algorithms can be hidden in the cloud,
allowing users to run them as a package without any access to see or change
their internals. In other words, on the user side, users send their images to
the cloud and configure the algorithm via an interface; on the cloud side, the
algorithms are applied to these images and the results are returned to the
user. My proposal has two parts: (1)
investigate the potential of cloud computing for computer vision problems and
(2) study the components of a proposed cloud-based framework for medical image
analysis application and develop them (depending on the length of the
internship). The investigation part will involve a study on several aspects of
the problem including security, usability (for medical end users of the
service), appropriate programming abstractions for vision problems, scalability
and resource requirements. In the second part of this proposal I will
thoroughly study the proposed framework's components and their relations, and
develop them. The proposed cloud-based framework includes an integrated
environment that enables scientists and clinicians to access previous and
current medical image analysis algorithms through a convenient user interface,
without any access to the algorithm code and procedures.
|