| id | title | categories | abstract |
|---|---|---|---|
1012.2384
|
Clustering Drives Assortativity and Community Structure in Ensembles of
Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Clustering, assortativity, and communities are key features of complex
networks. We probe dependencies between these attributes and find that
ensembles with strong clustering display both high assortativity by degree and
prominent community structure, while ensembles with high assortativity are much
less biased towards clustering or community structure. Further, clustered
networks can amplify small homophilic bias for trait assortativity. This marked
asymmetry suggests that transitivity, rather than homophily, drives the
standard nonsocial/social network dichotomy.
|
1012.2405
|
Quantum walks on complex networks with connection instabilities and
community structure
|
quant-ph cond-mat.dis-nn cs.SI physics.data-an physics.soc-ph
|
A continuous-time quantum walk is investigated on complex networks with the
characteristic property of community structure, which is shared by most
real-world networks. Motivated by the prospect of viable quantum networks, I
focus on the effects of network instabilities in the form of broken links, and
examine the response of the quantum walk to such failures. It is shown that the
reconfiguration of the quantum walk is determined by the community structure of
the network. In this context, quantum walks based on the adjacency and
Laplacian matrices of the network are compared, and their responses to link
failures are analyzed.
|
1012.2462
|
Can Partisan Voting Lead to Truth?
|
physics.soc-ph cs.SI
|
We study an extension of the voter model in which each agent is endowed with
an innate preference for one of two states that we term as "truth" or
"falsehood". Due to interactions with neighbors, an agent that innately prefers
truth can be persuaded to adopt a false opinion (and thus be discordant with
its innate preference) or the agent can possess an internally concordant "true"
opinion. Parallel states exist for agents that inherently prefer falsehood. We
determine the conditions under which a population of such agents can ultimately
reach a consensus for the truth, a consensus for falsehood, or reach an impasse
where an agent tends to adopt the opinion that is in internal concordance with
its innate preference so that consensus is never achieved.
|
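The partisan voter dynamics described above can be sketched in a toy simulation. This is our own illustrative simplification, not the paper's exact model: the update rule, the complete-graph topology, and the parameter names (`bias`, `p_truth`) are assumptions.

```python
import random

def partisan_voter(n=200, bias=0.3, steps=20000, p_truth=0.6, seed=42):
    """Toy partisan voter dynamics on a complete graph.

    Each agent has a fixed innate preference (True = prefers 'truth').
    At each step an agent copies a random other agent's opinion, but a
    copy that disagrees with the agent's innate preference only succeeds
    with probability 1 - bias (discordant opinions are harder to adopt).
    """
    rng = random.Random(seed)
    prefer = [rng.random() < p_truth for _ in range(n)]   # innate preferences
    opinion = prefer[:]                                   # start concordant
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        new = opinion[j]
        if new == prefer[i] or rng.random() < 1 - bias:
            opinion[i] = new
    return sum(opinion) / n   # fraction of agents holding the 'true' opinion

frac_true = partisan_voter()
```

With zero bias this reduces to the ordinary voter model (consensus almost surely); increasing `bias` pulls agents toward internal concordance, which is the impasse regime the abstract describes.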
1012.2491
|
Affine Invariant, Model-Based Object Recognition Using Robust Metrics
and Bayesian Statistics
|
cs.CV
|
We revisit the problem of model-based object recognition for intensity images
and attempt to address some of the shortcomings of existing Bayesian methods,
such as unsuitable priors and the treatment of residuals with a non-robust
error norm. We do so by using a reformulation of the Huber metric and
carefully chosen prior distributions. Our proposed method is invariant to
2-dimensional affine transformations and, because it is relatively easy to
train and use, it is suited for general object matching problems.
|
1012.2496
|
On the Implementation of GNU Prolog
|
cs.PL cs.AI
|
GNU Prolog is a general-purpose implementation of the Prolog language, which
distinguishes itself from most other systems by being, above all else, a
native-code compiler that produces standalone executables which do not rely on
any byte-code emulator or meta-interpreter. Other aspects that stand out
include the explicit organization of the Prolog system as a multipass compiler,
where intermediate representations are materialized, in Unix compiler
tradition. GNU Prolog also includes an extensible and high-performance finite
domain constraint solver, integrated with the Prolog language but implemented
using independent lower-level mechanisms. This article discusses the main
issues involved in designing and implementing GNU Prolog: requirements, system
organization, performance and portability issues as well as its position with
respect to other Prolog system implementations and the ISO standardization
initiative.
|
1012.2514
|
Context Aware End-to-End Connectivity Management
|
cs.LG cs.NI
|
In a dynamic heterogeneous environment, such as pervasive and ubiquitous
computing, context-aware adaptation is a key concept to meet the varying
requirements of different users. Connectivity is an important context source
that can be utilized for optimal management of diverse networking resources.
Application QoS (Quality of service) is another important issue that should be
taken into consideration in the design of a context-aware system. This paper
presents connectivity from the viewpoint of context awareness, identifies
various relevant raw connectivity contexts, and discusses how high-level
context information can be abstracted from the raw context information.
Further, rich context information is utilized in various policy representations
with respect to user profile and preference, application characteristics,
device capability, and network QoS conditions. Finally, a context-aware
end-to-end evaluation algorithm is presented for adaptive connectivity
management in a multi-access wireless network. Unlike the currently existing
algorithms, the proposed algorithm takes into account user QoS parameters, and
therefore, it is more practical.
|
1012.2596
|
A Unified MGF-Based Capacity Analysis of Diversity Combiners over
Generalized Fading Channels
|
cs.IT math.IT math.ST stat.OT stat.TH
|
Unified exact average capacity results for $L$-branch coherent diversity
receivers including equal-gain combining (EGC) and maximal-ratio combining
(MRC) are not known. This paper develops a novel generic framework for the
capacity analysis of $L$-branch EGC/MRC over generalized fading channels. The
framework is used to derive new results for the Gamma-shadowed generalized
Nakagami-m fading model, which can suitably model the fading
environments encountered by high-frequency (60 GHz and above) communications.
The mathematical formalism is illustrated with some selected numerical and
simulation results confirming the correctness of our newly proposed framework.
|
1012.2598
|
Extended Generalized-K (EGK): A New Simple and General Model for
Composite Fading Channels
|
cs.IT math.IT math.PR math.ST stat.TH
|
In this paper, we introduce a generalized composite fading distribution
(termed extended generalized-K (EGK)) to model the envelope and the power of
the received signal in millimeter wave (60 GHz or above) and free-space optical
channels. We obtain the first and the second-order statistics of the received
signal envelope characterized by the EGK composite fading distribution. In
particular, expressions for probability density function, cumulative
distribution function, level crossing rate and average fade duration, and
fractional moments are derived. In addition, performance measures such as amount
of fading, average bit error probability, outage probability, average capacity,
and outage capacity are offered in closed-form. Selected numerical and computer
simulation examples validate the accuracy of the presented mathematical
analysis.
|
1012.2599
|
A Tutorial on Bayesian Optimization of Expensive Cost Functions, with
Application to Active User Modeling and Hierarchical Reinforcement Learning
|
cs.LG
|
We present a tutorial on Bayesian optimization, a method of finding the
maximum of expensive cost functions. Bayesian optimization employs the Bayesian
technique of setting a prior over the objective function and combining it with
evidence to get a posterior function. This permits a utility-based selection of
the next observation to make on the objective function, which must take into
account both exploration (sampling from areas of high uncertainty) and
exploitation (sampling areas likely to offer improvement over the current best
observation). We also present two detailed extensions of Bayesian optimization,
with experiments---active user modelling with preferences, and hierarchical
reinforcement learning---and a discussion of the pros and cons of Bayesian
optimization based on our experiences.
|
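The prior-plus-acquisition loop summarized in this abstract can be sketched in a few lines. This is a minimal illustration, not the tutorial's implementation: it assumes a Gaussian-process surrogate with a squared-exponential kernel and an upper-confidence-bound (UCB) acquisition, with made-up parameter names (`ls`, `beta`).

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between 1-D point sets a and b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=15, beta=2.0,
              noise=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=n_init)          # initial observations
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)               # candidate points
    for _ in range(n_iter):
        # GP posterior mean/variance at the grid, given data (X, y)
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        # UCB trades off exploitation (mu) and exploration (uncertainty)
        ucb = mu + beta * np.sqrt(np.clip(var, 0.0, None))
        x_next = grid[np.argmax(ucb)]              # utility-based selection
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    best = np.argmax(y)
    return X[best], y[best]

x_best, y_best = bayes_opt(lambda x: -(x - 0.7) ** 2)
```

Each iteration spends one evaluation of the expensive function `f`, which is the whole point: the surrogate, not `f` itself, is queried densely.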
1012.2603
|
Real-time Visual Tracking Using Sparse Representation
|
cs.CV
|
The $\ell_1$ tracker obtains robustness by seeking a sparse representation of
the tracking object via $\ell_1$ norm minimization \cite{Xue_ICCV_09_Track}.
However, the high computational complexity involved in the $\ell_1$ tracker
restricts its application in real-time processing scenarios. Hence we
propose a Real Time Compressed Sensing Tracking (RTCST) by exploiting the
signal recovery power of Compressed Sensing (CS). Dimensionality reduction and
a customized Orthogonal Matching Pursuit (OMP) algorithm are adopted to
accelerate the CS tracking. As a result, our algorithm achieves a real-time
speed that is up to $6,000$ times faster than that of the $\ell_1$ tracker.
Meanwhile, RTCST still produces competitive (sometimes even superior) tracking
accuracy compared to the existing $\ell_1$ tracker. Furthermore, for a
stationary camera, a further refined tracker is designed by integrating a
CS-based background model (CSBM). This CSBM-equipped tracker, coined RTCST-B,
outperforms most state-of-the-art trackers with respect to both accuracy and
robustness. Finally, our experimental results on various video sequences,
evaluated with a new metric---Tracking Success Probability (TSP)---demonstrate
the effectiveness of the proposed algorithms.
|
1012.2606
|
Modeling urban housing market dynamics: can the socio-spatial
segregation preserve some social diversity?
|
physics.soc-ph cs.SI
|
Addressing issues of social diversity, we introduce a model of housing
transactions between agents who are heterogeneous in their willingness to pay.
A key assumption is that agents' preferences for a location depend on both an
intrinsic attractiveness and on the social characteristics of the neighborhood.
The stationary space distribution of income is analytically and numerically
characterized. The main results are that socio-spatial segregation occurs if --
and only if -- the social influence is strong enough, but even so, some social
diversity is preserved at most locations. Comparison with data on the Paris
housing market shows that the results reproduce general trends of price
distribution and spatial income segregation.
|
1012.2609
|
Inverse-Category-Frequency based supervised term weighting scheme for
text categorization
|
cs.LG cs.AI
|
Term weighting schemes often dominate the performance of many classifiers,
such as kNN, centroid-based classifier and SVMs. The widely used term weighting
scheme in text categorization, i.e., tf.idf, originated in the information
retrieval (IR) field. The intuition behind idf seems less well founded for
text categorization than for IR. In this paper, we introduce inverse category
frequency (icf) into term weighting and propose two novel approaches, i.e.,
tf.icf and icf-based supervised term weighting schemes. The tf.icf scheme
adopts icf in place of the idf factor and favors terms occurring in fewer
categories, rather than in fewer documents. The icf-based approach combines
icf and relevance
frequency (rf) to weight terms in a supervised way. Our cross-classifier and
cross-corpus experiments have shown that our proposed approaches are superior
or comparable to six supervised term weighting schemes and three traditional
schemes in terms of macro-F1 and micro-F1.
|
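The tf.icf idea admits a compact sketch: weight each term by its in-document frequency times the log-inverse of the number of *categories* it occurs in, rather than the number of documents. This is our own minimal reading of the scheme; the toy corpus and labels are invented for illustration.

```python
import math
from collections import defaultdict

def tf_icf(docs, labels):
    """tf.icf weights: term frequency times inverse category frequency.

    icf(t) = log(|C| / cf(t)), where cf(t) is the number of categories
    in which term t occurs (contrast with idf, which counts documents).
    """
    categories = set(labels)
    cf = defaultdict(set)                     # term -> categories containing it
    for doc, lab in zip(docs, labels):
        for term in doc.split():
            cf[term].add(lab)
    icf = {t: math.log(len(categories) / len(cats)) for t, cats in cf.items()}
    weighted = []
    for doc in docs:
        tf = defaultdict(int)                 # raw term frequency per document
        for term in doc.split():
            tf[term] += 1
        weighted.append({t: n * icf[t] for t, n in tf.items()})
    return weighted

docs = ["cheap loan offer", "loan rates drop", "match score update"]
labels = ["spam", "finance", "sports"]
w = tf_icf(docs, labels)
```

Here "loan" appears in two of three categories, so its weight is depressed relative to category-specific terms like "cheap", which is exactly the discriminative behavior the abstract claims for icf.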
1012.2621
|
Throughput and Latency of Acyclic Erasure Networks with Feedback in a
Finite Buffer Regime
|
cs.IT math.IT
|
The exact Markov modeling analysis of erasure networks with finite buffers is
an extremely hard problem due to the large number of states in the system. In
such networks, packets are lost due to either link erasures or blocking by the
full buffers. In this paper, we propose a novel method that iteratively
estimates the performance parameters of the network and more importantly
reduces the computational complexity compared to the exact analysis. This is
the first work that analytically studies the effect of finite memory on the
throughput and latency in general wired acyclic networks with erasure links. As
a case study, a random packet routing scheme with ideal feedback on the links
is used. The proposed framework yields a fairly accurate estimate of the
probability distribution of buffer occupancies at the intermediate nodes, from
which we can not only identify the congested and starving nodes but also obtain
analytical expressions for throughput and average delay of a packet in the
network. The theoretical framework presented here can be applied to many wired
networks, from the Internet to more futuristic applications such as
networks-on-chip under various communication and network coding scenarios.
|
1012.2622
|
Study of Throughput and Latency in Finite-buffer Coded Networks
|
cs.IT math.IT
|
Exact queueing analysis of erasure networks with network coding in a finite
buffer regime is an extremely hard problem due to the large number of states in
the network. In such networks, packets are lost due to either link erasures or
due to blocking due to full buffers. In this paper, a block-by-block random
linear network coding scheme with feedback on the links is selected for
reliability and more importantly guaranteed decoding of each block. We propose
a novel method that iteratively estimates the performance parameters of the
network and more importantly reduces the computational complexity compared to
the exact analysis. The proposed framework yields an accurate estimate of the
distribution of buffer occupancies at the intermediate nodes, from which we
obtain analytical expressions for network throughput and delay distribution of
a block of packets.
|
1012.2628
|
Throughput and Latency in Finite-Buffer Line Networks
|
cs.IT math.IT
|
This work investigates the effect of finite buffer sizes on the throughput
capacity and packet delay of line networks with packet erasure links that have
perfect feedback. These performance measures are shown to be linked to the
stationary distribution of an underlying irreducible Markov chain that models
the system exactly. Using simple strategies, bounds on the throughput capacity
are derived. The work then presents two iterative schemes to approximate the
steady-state distribution of node occupancies by decoupling the chain into
smaller queueing blocks. These approximate solutions are used to understand the
effect of buffer sizes on throughput capacity and the distribution of packet
delay. Using the exact modeling for line networks, it is shown that the
throughput capacity is unaltered in the absence of hop-by-hop feedback provided
packet-level network coding is allowed. Finally, using simulations, it is
confirmed that the proposed framework yields accurate estimates of the
throughput capacity and delay distribution and captures the vital trends and
tradeoffs in these networks.
|
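The finite-buffer line network with erasure links and hop-by-hop feedback can be estimated by Monte Carlo simulation. This is an illustrative simplification, not the paper's Markov-chain analysis: the slot-by-slot scheduling, the infinite source backlog, and the parameter names (`hops`, `buf`, `eps`) are our assumptions.

```python
import random

def line_network_throughput(hops=3, buf=2, eps=0.2, T=200000, seed=7):
    """Monte-Carlo throughput estimate for a line network with packet
    erasure links, finite intermediate buffers, and perfect feedback.

    Node 0 is the source (always has a packet to send); each of the
    hops-1 intermediate nodes holds at most `buf` packets; a transmission
    succeeds with probability 1 - eps and is attempted only when the
    next buffer has room (hop-by-hop feedback prevents overflow losses).
    """
    rng = random.Random(seed)
    q = [0] * (hops - 1)          # occupancy of intermediate buffers
    delivered = 0
    for _ in range(T):
        # process links sink-first so a packet advances one hop per slot
        for link in range(hops - 1, -1, -1):
            has_pkt = True if link == 0 else q[link - 1] > 0
            room = True if link == hops - 1 else q[link] < buf
            if has_pkt and room and rng.random() > eps:
                if link > 0:
                    q[link - 1] -= 1      # packet leaves upstream buffer
                if link < hops - 1:
                    q[link] += 1          # ...and enters the next one
                else:
                    delivered += 1        # ...or reaches the sink
    return delivered / T

thr = line_network_throughput()
```

The estimate is necessarily below the min-cut rate `1 - eps`; growing `buf` recovers most of the gap, which is the buffer-size/throughput tradeoff the abstract studies.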
1012.2633
|
Personalized Data Set for Analysis
|
cs.DB cs.CR
|
The data management portfolio within an organization has seen an upsurge in
initiatives for compliance, security, repurposing and storage within and
outside the organization. When such initiatives are put into practice, care
must be taken while granting access to data repositories for analysis and
mining activities. Also, initiatives such as Master Data Management, cloud
computing and self-service business intelligence have raised concerns about
regulatory compliance and data privacy, especially when a large data
set of an organization is being outsourced for testing, consolidation and data
management. Here, an approach is presented in which a new service layer,
introduced by the data governance group into the data management architecture,
can be used to preserve the privacy of sensitive information.
|
1012.2648
|
Distributed XML Design
|
cs.DB cs.CC cs.DC
|
A distributed XML document is an XML document that spans several machines. We
assume that a distribution design of the document tree is given, consisting of
an XML kernel-document T[f1,...,fn] where some leaves are "docking points" for
external resources providing XML subtrees (f1,...,fn, standing, e.g., for Web
services or peers at remote locations). The top-down design problem consists
in, given a type (a schema document that may vary from a DTD to a tree
automaton) for the distributed document, "propagating" locally this type into a
collection of types, that we call typing, while preserving desirable
properties. We also consider the bottom-up design which consists in, given a
type for each external resource, exhibiting a global type that is enforced by
the local types, again with natural desirable properties. In the article, we
lay out the fundamentals of a theory of distributed XML design, analyze
problems concerning typing issues in this setting, and study their complexity.
|
1012.2661
|
Categorial Minimalist Grammar
|
cs.CL math.LO
|
We first recall some basic notions on minimalist grammars and on categorial
grammars. Next we briefly introduce partially commutative linear logic, and our
representation of minimalist grammars within this categorial system, the
so-called categorial minimalist grammars. Thereafter we briefly present
\lambda\mu-DRT (Discourse Representation Theory) an extension of \lambda-DRT
(compositional DRT) in the framework of \lambda\mu calculus: it avoids type
raising and derives different readings from a single semantic representation,
in a setting which follows discourse structure. We run a complete example which
illustrates the various structures and rules that are needed to derive a
semantic representation from the categorial view of a transformational
syntactic analysis.
|
1012.2662
|
Cusp points in the parameter space of RPR-2PRR parallel manipulator
|
cs.RO
|
This paper investigates the existence conditions of cusp points in the design
parameter space of the R\underline{P}R-2P\underline{R}R parallel manipulators.
Cusp points make possible non-singular assembly-mode changing motion, which can
possibly increase the size of the aspect, i.e. the maximum singularity free
workspace. The method used is based on the notion of discriminant varieties and
Cylindrical Algebraic Decomposition, and resorts to Gr\"obner bases for the
solutions of systems of equations.
|
1012.2666
|
Joint space and workspace analysis of a two-DOF closed-chain manipulator
|
cs.RO
|
The aim of this paper is to compute the generalized aspects, i.e. the
maximal singularity-free domains in the Cartesian product of the joint space
and workspace, for a planar parallel mechanism using a quadtree model and an
interval analysis based method. Parallel mechanisms can admit several
solutions to the inverse and direct kinematic models. The singular
configurations divide the joint space and the workspace into several
disconnected domains. To compute these domains, the quadtree model can be
built by discretizing the space. Unfortunately, with this method, some
singular configurations that reduce to a single point in the joint
space cannot be detected. The interval analysis based method allows us to
ensure that no singularities are missed and reduces the computing time. This
approach is tested on a simple planar parallel mechanism with two degrees of
freedom.
|
1012.2668
|
On the determination of cusp points of 3-R\underline{P}R parallel
manipulators
|
cs.RO
|
This paper investigates the cuspidal configurations of 3-RPR parallel
manipulators that may appear on their singular surfaces in the joint space.
Cusp points play an important role in the kinematic behavior of parallel
manipulators since they make possible a non-singular change of assembly mode.
In previous works, the cusp points were calculated in sections of the joint
space by solving a 24th-degree polynomial without any proof that this
polynomial was the only one that gives all solutions. The purpose of this study
is to propose a rigorous methodology to determine the cusp points of
3-R\underline{P}R manipulators and to certify that all cusp points are found.
This methodology uses the notion of discriminant varieties and resorts to
Gr\"obner bases for the solutions of systems of equations.
|
1012.2673
|
On the Role of Feedback in LT Codes
|
cs.IT math.IT
|
This paper concerns application of feedback in LT codes. The considered type
of feedback is acknowledgments, where information on which symbols have been
decoded is given to the transmitter. We identify an important adaptive
mechanism in standard LT codes, which is crucial to their ability to perform
well under any channel conditions. We show how precipitate application of
acknowledgments can interfere with this adaptive mechanism and lead to
significant performance degradation. Moreover, our analysis reveals that even
sensible use of acknowledgments has very low potential in standard LT codes.
Motivated by this, we analyze the impact of acknowledgments on multi-layer LT
codes, i.e. LT codes with unequal error protection. In this case, feedback
proves advantageous. We show that by using only a single feedback message, it
is possible to achieve a noticeable performance improvement compared to
standard LT codes.
|
1012.2699
|
Prognostic Watch of the Electric Power System
|
cs.SI
|
A prognostic watch of the electric power system (EPS) is constructed, which
detects threats to the EPS a day ahead according to the day-ahead
characteristic times and the day-ahead droop. To this end, a
prognostic analysis of the EPS development for the day ahead is carried out.
The day-ahead power grid, electricity market state, grid state and level of
threat for the power grid are also determined. The accuracy of the constructed
prognostic watch is evaluated.
|
1012.2713
|
Phase Transitions of Plan Modification in Conformant Planning
|
cs.AI cs.CC
|
We explore phase transitions of plan modification, which mainly focus on the
conformant planning problems. By analyzing features of plan modification in
conformant planning problems, quantitative results are obtained. If the number
of operators is less than, almost all conformant planning problems can't be
solved with plan modification. If the number of operators is more than, almost
all conformant planning problems can be solved with plan modification. The
results of the experiments also show that there exists an experimental
threshold of density (ratio of number of operators to number of propositions),
which separates the region where almost all conformant planning problems cannot
be solved with plan modification from the region where almost all conformant
planning problems can be solved with plan modification.
|
1012.2726
|
Role-based similarity in directed networks
|
physics.soc-ph cs.SI nlin.AO q-bio.MN
|
The widespread relevance of increasingly complex networks requires methods to
extract meaningful coarse-grained representations of such systems. For
undirected graphs, standard community detection methods use criteria largely
based on density of connections to provide such representations. We propose a
method for grouping nodes in directed networks based on the role of the nodes
in the network, understood in terms of patterns of incoming and outgoing flows.
The role groupings are obtained through the clustering of a similarity matrix,
formed by the distances between feature vectors that contain the number of in
and out paths of all lengths for each node. Hence nodes operating in a similar
flow environment are grouped together although they may not themselves be
densely connected. Our method, which includes a scale factor that reveals
robust groupings based on increasingly global structure, provides an
alternative criterion to uncover structure in networks where there is an
implicit flow transfer in the system. We illustrate its application to a
variety of data from ecology, world trade and cellular metabolism.
|
1012.2751
|
Universal communication part I: modulo additive channels
|
cs.IT math.IT
|
Which communication rates can be attained over a channel whose output is an
unknown (possibly stochastic) function of the input that may vary arbitrarily
in time with no a priori model? Following the spirit of the finite-state
compressibility of a sequence, defined by Lempel and Ziv, a "capacity" is
defined for such a channel as the highest rate achievable by a designer knowing
the particular relation that indeed exists between the input and output for all
times, yet is constrained to use a fixed finite-length block communication
scheme without feedback, i.e. use the same encoder and decoder over each block.
In the case of the modulo additive channel, where the output sequence is
obtained by modulo addition of an unknown individual sequence to the input
sequence, this capacity is upper bounded by a function of the finite state
compressibility of the noise sequence. A universal communication scheme with
feedback that attains this capacity universally, without prior knowledge of the
noise sequence, is presented.
|
1012.2774
|
Modeling Social Networks with Overlapping Communities Using Hypergraphs
and Their Line Graphs
|
cs.SI physics.soc-ph
|
We propose that hypergraphs can be used to model social networks with
overlapping communities. The nodes of the hypergraphs represent the
communities. The hyperlinks of the hypergraphs denote the individuals who may
participate in multiple communities. Hypergraphs are not easy to analyze;
however, the line graphs of hypergraphs are simple graphs or weighted graphs,
so that network theory can be applied. We define the overlapping depth $k$
of an individual by the number of communities that overlap in that individual,
and we prove that the minimum adjacency eigenvalue of the corresponding line
graph is not smaller than $-k_{max}$, which is the maximum overlapping depth of
the whole network. Based on hypergraphs with preferential attachment, we
establish a network model which incorporates overlapping communities with
tunable overlapping parameters $k$ and $w$. By comparing with the Hyves social
network, we show that our social network model possesses high clustering,
assortative mixing, power-law degree distribution and short average path
length.
|
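The eigenvalue bound stated in this abstract can be checked numerically on a random hypergraph. The construction below is our reading of the setup, not the paper's code: with incidence matrix $B$ (communities by individuals), the weighted line-graph adjacency is $B^{T}B$ with its diagonal removed, and since $B^{T}B$ is positive semidefinite its smallest eigenvalue is at least $-k_{max}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_comm, n_ind = 8, 20   # communities (hypergraph nodes), individuals (hyperlinks)

# incidence matrix B: B[c, i] = 1 if individual i participates in community c
B = (rng.random((n_comm, n_ind)) < 0.3).astype(float)
for i in range(n_ind):                 # ensure everyone joins >= 1 community
    if B[:, i].sum() == 0:
        B[rng.integers(n_comm), i] = 1.0

depth = B.sum(axis=0)                  # overlapping depth k of each individual
k_max = depth.max()                    # maximum overlapping depth k_max

# weighted line-graph adjacency: entry (i, j) counts shared communities,
# with the diagonal (an individual's own depth) subtracted off
A = B.T @ B - np.diag(depth)
lam_min = np.linalg.eigvalsh(A).min()
# lam_min >= -k_max because B^T B is positive semidefinite
```

For ordinary graphs ($k_{max} = 2$) this recovers the classical fact that line-graph spectra are bounded below by $-2$.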
1012.2782
|
Symmetry invariance for adapting biological systems
|
cs.SY math.OC physics.bio-ph q-bio.QM
|
We study in this paper certain properties of the responses of dynamical
systems to external inputs. The motivation arises from molecular systems
biology and, in particular, the recent discovery of an important transient
property, related to Weber's law in psychophysics: "fold-change detection"
(FCD) in adapting systems, the property that scale uncertainty does not affect
responses. FCD appears to play an important role in key signal transduction
mechanisms in eukaryotes, including the ERK and Wnt pathways, as well as in
E.coli and possibly other prokaryotic chemotaxis pathways. In this paper, we
provide further theoretical results regarding this property. Far more
generally, we develop a necessary and sufficient characterization of adapting
systems whose transient behaviors are invariant under the action of a set
(often, a group) of symmetries in their sensory field. A particular instance is
FCD, which amounts to invariance under the action of the multiplicative group
of positive real numbers. Our main result is framed in terms of a notion which
extends equivariant actions of compact Lie groups. Its proof relies upon
control theoretic tools, and in particular the uniqueness theorem for minimal
realizations.
|
1012.2787
|
Comparison of Planar Parallel Manipulator Architectures based on a
Multi-objective Design Optimization Approach
|
cs.RO
|
This paper deals with the comparison of planar parallel manipulator
architectures based on a multi-objective design optimization approach. The
manipulator architectures are compared with regard to their mass in motion and
their regular workspace size, i.e., the objective functions. The optimization
problem is subject to constraints on the manipulator dexterity and stiffness.
For a given external wrench, the displacements of the moving platform have to
be smaller than given values throughout the obtained maximum regular dexterous
workspace. The contributions of the paper are highlighted with the study of
3-RPR, 3-RPR and 3-RPR planar parallel manipulator architectures, which are
compared by means of their Pareto frontiers obtained with a genetic algorithm.
|
1012.2789
|
Experimental Comparison of Representation Methods and Distance Measures
for Time Series Data
|
cs.AI
|
The previous decade has brought a remarkable increase in interest in
applications that deal with querying and mining of time series data. Many of
the research efforts in this context have focused on introducing new
representation methods for dimensionality reduction or novel similarity
measures for the underlying data. In the vast majority of cases, each
individual work introducing a particular method has made specific claims and,
aside from the occasional theoretical justifications, provided quantitative
experimental observations. However, for the most part, the comparative aspects
of these experiments were too narrowly focused on demonstrating the benefits of
the proposed methods over some of the previously introduced ones. In order to
provide a comprehensive validation, we conducted an extensive experimental
study re-implementing eight different time series representations and nine
similarity measures and their variants, and testing their effectiveness on
thirty-eight time series data sets from a wide variety of application domains.
In this paper, we give an overview of these different techniques and present
our comparative experimental findings regarding their effectiveness. In
addition to providing a unified validation of some of the existing
achievements, our experiments also indicate that, in some cases, certain claims
in the literature may be unduly optimistic.
|
1012.2858
|
Relational transducers for declarative networking
|
cs.DB
|
Motivated by a recent conjecture concerning the expressiveness of declarative
networking, we propose a formal computation model for "eventually consistent"
distributed querying, based on relational transducers. A tight link has been
conjectured between coordination-freeness of computations, and monotonicity of
the queries expressed by such computations. Indeed, we propose a formal
definition of coordination-freeness and confirm that the class of monotone
queries is captured by coordination-free transducer networks.
Coordination-freeness is a semantic property, but the syntactic class of
"oblivious" transducers that we define also captures the same class of monotone
queries. Transducer networks that are not coordination-free are much more
powerful.
|
1012.3005
|
On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards
|
math.OC cs.LG cs.NI cs.SY math.PR
|
We consider a combinatorial generalization of the classical multi-armed
bandit problem that is defined as follows. There is a given bipartite graph of
$M$ users and $N \geq M$ resources. For each user-resource pair $(i,j)$, there
is an associated state that evolves as an aperiodic irreducible finite-state
Markov chain with unknown parameters, with transitions occurring each time the
particular user $i$ is allocated resource $j$. The user $i$ receives a reward
that depends on the corresponding state each time it is allocated the resource
$j$. The system objective is to learn the best matching of users to resources
so that the long-term sum of the rewards received by all users is maximized.
This corresponds to minimizing regret, defined here as the gap between the
expected total reward that can be obtained by the best-possible static matching
and the expected total reward that can be achieved by a given algorithm. We
present a polynomial-storage and polynomial-complexity-per-step
matching-learning algorithm for this problem. We show that this algorithm can
achieve a regret that is uniformly arbitrarily close to logarithmic in time and
polynomial in the number of users and resources. This formulation is broadly
applicable to scheduling and switching problems in networks and significantly
extends prior results in the area.
|
1012.3013
|
Aspects of Multi-Dimensional Modelling of Substellar Atmospheres
|
astro-ph.SR astro-ph.EP cs.CE physics.ao-ph physics.flu-dyn
|
Theoretical arguments and observations suggest that the atmospheres of Brown
Dwarfs and planets are very dynamic on chemical and on physical time scales.
The modelling of such substellar atmospheres has, hence, been much more
demanding than initially anticipated. This Splinter
(http://star-www.st-and.ac.uk/~ch80/CS16/MultiDSplinter_CS16.html) has combined
new developments in atmosphere modelling with novel observational techniques
and new challenges arising from planetary and space weather observations.
|
1012.3018
|
On the size of data structures used in symbolic model checking
|
cs.AI cs.CC cs.DS cs.LO
|
Temporal Logic Model Checking is a verification method in which we describe a
system, the model, and then we verify whether some properties, expressed in a
temporal logic formula, hold in the system. It has many industrial
applications. In order to improve performance, some tools allow preprocessing
of the model, verifying a set of properties on-line while reusing the same
compiled model. We prove that the complexity of the model checking problem is
the same whether no preprocessing is performed or the model or the formula is
preprocessed into a polynomial-size data structure. As a result, preprocessing
does not always exponentially improve performance.
Symbolic Model Checking algorithms work by manipulating sets of states, and
these sets are often represented by BDDs. It has been observed that the size of
BDDs may grow exponentially as the model and formula increase in size. As a
side result, we formally prove that a superpolynomial increase of the size of
these BDDs is unavoidable in the worst case. While this exponential growth has
been empirically observed, to the best of our knowledge it has never been
proved so far in general terms. This result not only holds for all types of
BDDs regardless of the variable ordering, but also for more powerful data
structures, such as BEDs, RBCs, MTBDDs, and ADDs.
|
1012.3023
|
Generating constrained random graphs using multiple edge switches
|
cs.SI physics.soc-ph
|
The generation of random graphs using edge swaps provides a reliable method
to draw uniformly random samples of sets of graphs respecting some simple
constraints, e.g. degree distributions. However, in general, it is not
necessarily possible to access all graphs obeying some given constraints
through a classical switching procedure calling on pairs of edges. We
therefore propose to get around this issue by generalizing this classical
approach through the use of higher-order edge switches. This method, which we
denote by "k-edge switching", makes it possible to progressively improve the
covered portion of a set of constrained graphs, thereby providing increasing,
asymptotically certain confidence in the statistical representativeness of the
obtained sample.
|
1012.3059
|
Confidence Sets in Time-Series Filtering
|
cs.IT math.IT math.ST stat.TH
|
The problem of filtering of finite-alphabet stationary ergodic time series is
considered. A method for constructing a confidence set for the (unknown) signal
is proposed, such that the resulting set has the following properties: First,
it includes the unknown signal with probability $\gamma$, where $\gamma$ is a
parameter supplied to the filter. Second, the size of the confidence sets grows
exponentially at a rate that is asymptotically equal to the conditional
entropy of the signal given the data. Moreover, it is shown that this rate is
optimal.
|
1012.3098
|
On the Impact of Mutation-Selection Balance on the Runtime of
Evolutionary Algorithms
|
cs.NE nlin.AO q-bio.PE
|
The interplay between mutation and selection plays a fundamental role in the
behaviour of evolutionary algorithms (EAs). However, this interplay is still
not completely understood. This paper presents a rigorous runtime analysis of a
non-elitist population-based EA that uses the linear ranking selection
mechanism. The analysis focuses on how the balance between parameter $\eta$,
controlling the selection pressure in linear ranking, and parameter $\chi$
controlling the bit-wise mutation rate, impacts the runtime of the algorithm.
The results point out situations where a correct balance between selection
pressure and mutation rate is essential for finding the optimal solution in
polynomial time. In particular, it is shown that there exist fitness functions
which can only be solved in polynomial time if the ratio between parameters
$\eta$ and $\chi$ is within a narrow critical interval, and where a small
change in this ratio can increase the runtime exponentially. Furthermore, it is
shown quantitatively how the appropriate parameter choice depends on the
characteristics of the fitness function. In addition to the original results
on the runtime of EAs, this paper also introduces a useful analytical tool,
multi-type branching processes, to the runtime analysis of non-elitist
population-based EAs.
|
1012.3148
|
To study the phenomenon of the Moravec's Paradox
|
cs.AI cs.RO
|
"Encoded in the large, highly evolved sensory and motor portions of the human
brain is a billion years of experience about the nature of the world and how to
survive in it. The deliberate process we call reasoning is, I believe, the
thinnest veneer of human thought, effective only because it is supported by
this much older and much more powerful, though usually unconscious,
sensorimotor knowledge. We are all prodigious Olympians in perceptual and
motor areas, so good that we make the difficult look easy. Abstract thought,
though, is a new trick, perhaps less than 100 thousand years old. We have not
yet mastered it. It is not all that intrinsically difficult; it just seems so
when we do it." - Hans Moravec
Moravec's paradox refers to the fact that seemingly easy day-to-day problems
are harder to implement in a machine than the seemingly complicated
logic-based problems of today. The results show that most artificially
intelligent machines are as adept as, if not more adept than, humans at
undertaking long calculations or even playing chess, but their logic brings
them nowhere when it comes to carrying out everyday tasks like walking, facial
gesture recognition or speech recognition.
|
1012.3189
|
Maximizing Expected Utility for Stochastic Combinatorial Optimization
Problems
|
cs.DS cs.DB
|
We study the stochastic versions of a broad class of combinatorial problems
where the weights of the elements in the input dataset are uncertain. The class
of problems that we study includes shortest paths, minimum weight spanning
trees, minimum weight matchings, and other combinatorial problems such as
knapsack. We observe that the expected value is inadequate in capturing
different types of {\em risk-averse} or {\em risk-prone} behaviors, and instead
we consider a more general objective which is to maximize the {\em expected
utility} of the solution for some given utility function, rather than the
expected weight (expected weight becomes a special case). Under the assumption
that there is a pseudopolynomial time algorithm for the {\em exact} version of
the problem (this is true for the problems mentioned above), we can obtain the
following approximation results for several important classes of utility
functions: (1) If the utility function $\uti$ is continuous, upper-bounded by a
constant and $\lim_{x\rightarrow+\infty}\uti(x)=0$, we show that we can obtain
a polynomial time approximation algorithm with an {\em additive error}
$\epsilon$ for any constant $\epsilon>0$. (2) If the utility function $\uti$ is
a concave increasing function, we can obtain a polynomial time approximation
scheme (PTAS). (3) If the utility function $\uti$ is increasing and has a
bounded derivative, we can obtain a polynomial time approximation scheme. Our
results recover or generalize several prior results on stochastic shortest
path, stochastic spanning tree, and stochastic knapsack. Our algorithm for
utility maximization makes use of the separability of exponential utility and a
technique to decompose a general utility function into exponential utility
functions, which may be useful in other stochastic optimization problems.
|
1012.3198
|
Network MIMO with Linear Zero-Forcing Beamforming: Large System
Analysis, Impact of Channel Estimation and Reduced-Complexity Scheduling
|
cs.IT math.IT
|
We consider the downlink of a multi-cell system with multi-antenna base
stations and single-antenna user terminals, arbitrary base station cooperation
clusters, distance-dependent propagation pathloss, and general "fairness"
requirements. Base stations in the same cooperation cluster employ joint
transmission with linear zero-forcing beamforming, subject to sum or per-base
station power constraints. Inter-cluster interference is treated as noise at
the user terminals. Analytic expressions for the system spectral efficiency are
found in the large-system limit where both the numbers of users and antennas
per base station tend to infinity with a given ratio. In particular, for the
per-base station power constraint, we find new results in random matrix theory,
yielding the squared Frobenius norm of submatrices of the Moore-Penrose
pseudo-inverse for the structured non-i.i.d. channel matrix resulting from the
cooperation cluster, user distribution, and path-loss coefficients. The
analysis is extended to the case of non-ideal Channel State Information at the
Transmitters (CSIT) obtained through explicit downlink channel training and
uplink feedback. Specifically, our results illuminate the trade-off between the
benefit of a larger number of cooperating antennas and the cost of estimating
higher-dimensional channel vectors. Furthermore, our analysis leads to a new
simplified downlink scheduling scheme that pre-selects the users according to
probabilities obtained from the large-system results, depending on the desired
fairness criterion. The proposed scheme performs close to the optimal
(finite-dimensional) opportunistic user selection while requiring significantly
less channel state feedback, since only a small fraction of pre-selected users
must feed back their channel state information.
|
1012.3201
|
Cyclic and Quasi-Cyclic LDPC Codes on Row and Column Constrained
Parity-Check Matrices and Their Trapping Sets
|
cs.IT math.IT
|
This paper is concerned with construction and structural analysis of both
cyclic and quasi-cyclic codes, particularly LDPC codes. It consists of three
parts. The first part shows that a cyclic code given by a parity-check matrix
in circulant form can be decomposed into descendant cyclic and quasi-cyclic
codes of various lengths and rates. Some fundamental structural properties of
these descendant codes are developed, including the characterizations of the
roots of the generator polynomial of a cyclic descendant code. The second part
of the paper shows that cyclic and quasi-cyclic descendant LDPC codes can be
derived from cyclic finite geometry LDPC codes using the results developed in
the first part of the paper. This enlarges the repertoire of cyclic LDPC codes. The
third part of the paper analyzes the trapping sets of regular LDPC codes whose
parity-check matrices satisfy a certain constraint on their rows and columns.
Several classes of finite geometry and finite field cyclic and quasi-cyclic
LDPC codes with large minimum weights are shown to have no harmful trapping
sets with size smaller than their minimum weights. Consequently, their
performance error-floors are dominated by their minimum weights.
|
1012.3216
|
TILT: Transform Invariant Low-rank Textures
|
cs.CV
|
In this paper, we show how to efficiently and effectively extract a class of
"low-rank textures" in a 3D scene from 2D images despite significant
corruptions and warping. The low-rank textures capture geometrically meaningful
structures in an image, which encompass conventional local features such as
edges and corners as well as all kinds of regular, symmetric patterns
ubiquitous in urban environments and man-made objects. Our approach to finding
these low-rank textures leverages the recent breakthroughs in convex
optimization that enable robust recovery of a high-dimensional low-rank matrix
despite gross sparse errors. In the case of planar regions with significant
affine or projective deformation, our method can accurately recover both the
intrinsic low-rank texture and the precise domain transformation, and hence the
3D geometry and appearance of the planar regions. Extensive experimental
results demonstrate that this new technique works effectively for many regular
and near-regular patterns or objects that are approximately low-rank, such as
symmetrical patterns, building facades, printed texts, and human faces.
|
1012.3252
|
A mathematical model for networks with structures in the mesoscale
|
physics.soc-ph cond-mat.stat-mech cs.SI math.CO
|
The new concept of multilevel network is introduced in order to embody some
topological properties of complex systems with structures in the mesoscale
which are not completely captured by the classical models. This new model,
which generalizes the hyper-network and hyper-structure models, fits perfectly
with several real-life complex systems, including social and public
transportation networks. We present an analysis of the structural properties of
the multilevel network, including the clustering and the metric structures.
Some analytical relationships between the efficiency and clustering
coefficient of this new model and the corresponding parameters of the
underlying network are obtained. Finally, some random models for multilevel
networks are given to
illustrate how different multilevel structures can produce similar underlying
networks and therefore that the mesoscale structure should be taken into
account in many applications.
|
1012.3278
|
Collaborative Knowledge Creation and Management in Information Retrieval
|
cs.IR
|
The final goal of Information Retrieval (IR) is knowledge production.
However, it has been argued that knowledge production is not an individual
effort but a collaborative effort. Collaboration in information retrieval is
geared towards knowledge sharing and creation of new knowledge by users. This
paper discusses Collaborative Information Retrieval (CIR) and how it
culminates in knowledge creation. It explains how created knowledge is
organized and structured. It describes a functional architecture for the
development of a CIR prototype called MECOCIR. Some of the features of the
prototype are presented, as well as how they facilitate collaborative
knowledge exploitation. Knowledge creation is explained through the knowledge
conversion/transformation processes proposed by Nonaka, and the CIR activities
that facilitate these processes are highlighted and discussed.
|
1012.3280
|
A new Recommender system based on target tracking: a Kalman Filter
approach
|
cs.AI
|
In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman
filter to track users and predict their future position in the recommendation
space.
|
1012.3310
|
The asymptotical error of broadcast gossip averaging algorithms
|
math.OC cs.SY math.PR
|
In problems of estimation and control which involve a network, efficient
distributed computation of averages is a key issue. This paper presents
theoretical and simulation results about the accumulation of errors during the
computation of averages by means of iterative "broadcast gossip" algorithms.
Using martingale theory, we prove that the expectation of the accumulated error
can be bounded from above by a quantity which depends only on the mixing
parameter of the algorithm and on a few properties of the network: its size, its
maximum degree and its spectral gap. Both analytical results and computer
simulations show that in several network topologies of applicative interest the
accumulated error goes to zero as the size of the network grows large.
|
1012.3311
|
Validating XML Documents in the Streaming Model with External Memory
|
cs.DS cs.DB
|
We study the problem of validating XML documents of size $N$ against general
DTDs in the context of streaming algorithms. The starting point of this work is
a well-known space lower bound: there are XML documents and DTDs for which
$p$-pass streaming algorithms require $\Omega(N/p)$ space.
We show that when allowing access to external memory, there is a
deterministic streaming algorithm that solves this problem with memory space
$O(\log^2 N)$, a constant number of auxiliary read/write streams, and $O(\log
N)$ total number of passes on the XML document and auxiliary streams.
An important intermediate step of this algorithm is the computation of the
First-Child-Next-Sibling (FCNS) encoding of the initial XML document in a
streaming fashion. We study this problem independently, and we also provide
memory efficient streaming algorithms for decoding an XML document given in its
FCNS encoding.
Furthermore, validating XML documents encoding binary trees in the usual
streaming model without external memory can be done with sublinear memory.
There is a one-pass algorithm using $O(\sqrt{N \log N})$ space, and a
bidirectional two-pass algorithm using $O(\log^2 N)$ space performing this
task.
|
1012.3312
|
Dynamic Capitalization and Visualization Strategy in Collaborative
Knowledge Management System for EI Process
|
cs.AI
|
Knowledge is attributed to humans, whose problem-solving behavior is subjective
and complex. In today's knowledge economy, the need to manage knowledge
produced by a community of actors cannot be overemphasized. This is due to the
fact that actors possess some level of tacit knowledge which is generally
difficult to articulate. Problem-solving requires searching and sharing of
knowledge among a group of actors in a particular context. Knowledge expressed
within the context of a problem resolution must be capitalized for future
reuse. In this paper, an approach that permits dynamic capitalization of
relevant and reliable actors' knowledge in solving decision problem following
Economic Intelligence process is proposed. Knowledge annotation method and
temporal attributes are used for handling the complexity in the communication
among actors and in contextualizing expressed knowledge. A prototype is built
to demonstrate the functionalities of a collaborative Knowledge Management
system based on this approach. It is tested with sample cases, and the results
show that dynamic capitalization leads to knowledge validation, hence
increasing the reliability of captured knowledge for reuse. The system can be
adapted to various domains.
|
1012.3320
|
Data Conflict Resolution Using Trust Mappings
|
cs.DB cs.AI
|
In massively collaborative projects such as scientific or community
databases, users often need to agree or disagree on the content of individual
data items. On the other hand, trust relationships often exist between users,
allowing them to accept or reject other users' beliefs by default. As those
trust relationships become complex, however, it becomes difficult to define and
compute a consistent snapshot of the conflicting information. Previous
solutions to a related problem, the update reconciliation problem, are
dependent on the order in which the updates are processed and, therefore, do
not guarantee a globally consistent snapshot. This paper proposes the first
principled solution to the automatic conflict resolution problem in a community
database. Our semantics is based on the certain tuples of all stable models of
a logic program. While evaluating stable models in general is well known to be
hard, even for very simple logic programs, we show that the conflict resolution
problem admits a PTIME solution. To the best of our knowledge, ours is the
first PTIME algorithm that allows conflict resolution in a principled way. We
further discuss extensions to negative beliefs and prove that some of these
extensions are hard. This work is done in the context of the BeliefDB project
at the University of Washington, which focuses on the efficient management of
conflicts in community databases.
|
1012.3323
|
On the transfer matrix of a MIMO system
|
cs.IT math-ph math.IT math.MP
|
We develop a deterministic ab-initio model for the input-output relationship
of a multiple-input multiple-output (MIMO) wireless channel, starting from the
Maxwell equations combined with Ohm's Law. The main technical tools are
scattering and geometric perturbation theories. The derived relationship can
lead us to a deep understanding of how the propagation conditions and the
coupling effects between the elements of multiple-element arrays affect the
properties of a MIMO channel, e.g. its capacity and its number of degrees of
freedom.
|
1012.3336
|
Dynamic Knowledge Capitalization through Annotation among Economic
Intelligence Actors in a Collaborative Environment
|
cs.AI
|
The shift from the industrial economy to the knowledge economy in today's
world has revolutionized strategic planning in organizations as well as their
problem solving approaches. The point of focus today is knowledge and service
production, with more emphasis being laid on knowledge capital. Many
organizations are investing in tools that facilitate knowledge sharing among
their employees, and they are also promoting and encouraging collaboration
among their staff in order to build the organization's knowledge capital, with
the ultimate goal of creating a lasting competitive advantage for their
organizations. One of the current leading approaches for solving
organizations' decision problems is the Economic Intelligence (EI) approach
which involves interactions among various actors called EI actors. These actors
collaborate to ensure the overall success of the decision problem solving
process. In the course of the collaboration, the actors express knowledge which
could be capitalized for future reuse. In this paper, we propose in the first
place, an annotation model for knowledge elicitation among EI actors. Because
of the need to build a knowledge capital, we also propose a dynamic knowledge
capitalisation approach for managing knowledge produced by the actors.
Finally, the need to manage the interactions and interdependencies among
collaborating EI actors led to our third proposition, which constitutes an
awareness mechanism for group work management.
|
1012.3359
|
Curve Reconstruction in Riemannian Manifolds: Ordering Motion Frames
|
cs.CG cs.GR cs.RO math.DG
|
In this article we extend the computational geometric curve reconstruction
approach to curves in Riemannian manifolds. We prove that the minimal spanning
tree, given a sufficiently dense sample, correctly reconstructs smooth arcs
and, further, simple closed curves in Riemannian manifolds. The proof is based
on the behaviour of the curve segment inside the tubular neighbourhood of the
curve. To take care of the local topological changes of the manifold, the
tubular neighbourhood is constructed taking into account the injectivity
radius of the underlying Riemannian manifold. We also present examples of
successfully reconstructed curves and show an application of curve
reconstruction to ordering motion frames.
|
1012.3409
|
Uncovering space-independent communities in spatial networks
|
physics.soc-ph cs.SI
|
Many complex systems are organized in the form of a network embedded in
space. Important examples include the physical Internet infrastructure, road
networks, flight connections, brain functional networks and social networks.
The effect of space on network topology has recently come under the spotlight
because of the emergence of pervasive technologies based on geo-localization,
which constantly fill databases with people's movements and thus reveal their
trajectories and spatial behaviour. Extracting patterns and regularities from
the resulting massive amount of human mobility data requires the development of
appropriate tools for uncovering information in spatially-embedded networks. In
contrast with most works that tend to apply standard network metrics to any
type of network, we argue in this paper for a careful treatment of the
constraints imposed by space on network topology. In particular, we focus on
the problem of community detection and propose a modularity function adapted to
spatial networks. We show that it is possible to factor out the effect of space
in order to reveal more clearly hidden structural similarities between the
nodes. Methods are tested on a large mobile phone network and
computer-generated benchmarks where the effect of space has been incorporated.
|
1012.3410
|
Descriptive-complexity based distance for fuzzy sets
|
cs.AI
|
A new distance function dist(A,B) for fuzzy sets A and B is introduced. It is
based on the descriptive complexity, i.e., the number of bits (on average) that
are needed to describe an element in the symmetric difference of the two sets.
The distance gives the amount of additional information needed to describe any
one of the two sets given the other. We prove its mathematical properties and
perform pattern clustering on data based on this distance.
|
1012.3439
|
List-decoding of binary Goppa codes up to the binary Johnson bound
|
cs.IT math.IT
|
We study the list-decoding problem of alternant codes, with the notable case
of classical Goppa codes. The major consideration here is to take into account
the size of the alphabet, which shows great influence on the list-decoding
radius. This amounts to comparing the \emph{generic} Johnson bound to the
\emph{$q$-ary} Johnson bound. This difference is important when $q$ is very
small. Essentially, the most favourable case is $q=2$, for which the decoding
radius is greatly improved, notably when the relative minimum distance gets
close to 1/2. Even though the announced result, which is the list-decoding
radius of binary Goppa codes, is new, it can rather easily be assembled from
previous sources (V. Guruswami, R. M. Roth and I. Tal, R. M. Roth), which may
be little known, and in which the case of binary Goppa codes has apparently
not been considered. Only D. J. Bernstein treats the case of binary Goppa
codes in a preprint. References are given in the introduction. We propose
an autonomous treatment and also a complexity analysis of the studied
algorithm, which is quadratic in the blocklength $n$, when decoding at some
distance of the relative maximum decoding radius, and in $O(n^7)$ when reaching
the maximum radius.
|
1012.3476
|
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning
of RBMs
|
stat.ML cs.NE
|
Restricted Boltzmann Machines (RBMs) have attracted a lot of attention of
late, as one of the principal building blocks of deep networks. Training RBMs
remains problematic, however, because of the intractability of their partition
function. The maximum likelihood gradient requires a very robust sampler which
can accurately sample from the model despite the loss of ergodicity often
incurred during learning. While using Parallel Tempering in the negative phase
of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a
trade-off between computational complexity and high ergodicity, and requires
careful hand-tuning of the temperatures. In this paper, we show that this
trade-off is unnecessary. The choice of optimal temperatures can be automated
by minimizing average return time (a concept first proposed by [Katzgraber et
al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing
the computational overhead. We show on a synthetic dataset, that this results
in better likelihood scores.
|
1012.3502
|
Rules of Thumb for Information Acquisition from Large and Redundant Data
|
cs.IR cs.DB physics.data-an
|
We develop an abstract model of information acquisition from redundant data.
We assume a random sampling process from data which provide information with
bias and are interested in the fraction of information we expect to learn as
function of (i) the sampled fraction (recall) and (ii) varying bias of
information (redundancy distributions). We develop two rules of thumb with
varying robustness. We first show that, when information bias follows a Zipf
distribution, the 80-20 rule or Pareto principle surprisingly does not hold,
and we rather expect to learn less than 40% of the information when randomly
sampling 20% of the overall data. We then analytically prove that for large
data sets, randomized sampling from power-law distributions leads to "truncated
distributions" with the same power-law exponent. This second rule is very
robust and also holds for distributions that deviate substantially from a
strict power law. We further give one particular family of power-law functions
that remain completely invariant under sampling. Finally, we validate our model
with two large Web data sets: link distributions to domains and tag
distributions on delicious.com.
|
1012.3506
|
Local-Testability and Self-Correctability of q-ary Sparse Linear Codes
|
cs.CC cs.IT math.IT
|
We prove that q-ary sparse codes with small bias are self-correctable and
locally testable. We generalize a result of Kaufman and Sudan that proves the
local testability and correctability of binary sparse codes with small bias. We
use properties of q-ary Krawtchouk polynomials and the MacWilliams identity
(which relates the weight distribution of a code to the weight distribution of
its dual) to derive bounds on the error probability of the randomized tester
and self-corrector we are analyzing.
|
1012.3583
|
A fast no-rejection algorithm for the Category Game
|
physics.comp-ph cs.SI physics.soc-ph
|
The Category Game is a multi-agent model that accounts for the emergence of
shared categorization patterns in a population of interacting individuals. In
the framework of the model, linguistic categories appear as long lived
consensus states that are constantly reshaped and re-negotiated by the
communicating individuals. It is therefore crucial to investigate the long time
behavior to gain a clear understanding of the dynamics. However, it turns out
that the evolution of the emerging category system is so slow, already for
small populations, that such an analysis has remained so far impossible. Here,
we introduce a fast no-rejection algorithm for the Category Game that
disentangles the physical simulation time from the CPU time, thus opening the
way for thorough analysis of the model. We verify that the new algorithm is
equivalent to the old one in terms of the emerging phenomenology and we
quantify the CPU performances of the two algorithms, pointing out the neat
advantages offered by the no-rejection one. This technical advance has already
opened the way to new investigations of the model, thus helping to shed light
on the fundamental issue of categorization.
|
1012.3607
|
Accurate prediction of gene expression by integration of DNA sequence
statistics with detailed modeling of transcription regulation
|
q-bio.MN cond-mat.stat-mech cs.CE physics.bio-ph q-bio.SC
|
Gene regulation involves a hierarchy of events that extend from specific
protein-DNA interactions to the combinatorial assembly of nucleoprotein
complexes. The effects of DNA sequence on these processes have typically been
studied based either on its quantitative connection with single-domain binding
free energies or on empirical rules that combine different DNA motifs to
predict gene expression trends on a genomic scale. The middle-point approach
that quantitatively bridges these two extremes, however, remains largely
unexplored. Here, we provide an integrated approach to accurately predict gene
expression from statistical sequence information in combination with detailed
biophysical modeling of transcription regulation by multidomain binding on
multiple DNA sites. For the regulation of the prototypical lac operon, this
approach predicts within 0.3-fold accuracy transcriptional activity over a
10,000-fold range from DNA sequence statistics for different intracellular
conditions.
|
1012.3638
|
Determination of the Integrated Sidelobe Level of Sets of Rotated
Legendre Sequences
|
cs.IT math.IT
|
Sequence sets with low aperiodic auto- and cross-correlations play an
important role in many applications, such as communications, radar and other
active sensing. The use of antipodal sequences reduces hardware requirements
while increasing the difficulty of signal design. In
this paper we present a method for the computation of the Integrated Sidelobe
Level (ISL), and we use it to calculate the asymptotic expression for the ISL
of a set of sequences formed by different rotations of a Legendre sequence.
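As an illustration of the quantities involved, the sketch below (our own construction, not the paper's method) builds a Legendre sequence for a small prime and computes the ISL of its aperiodic autocorrelation; a "rotation" is simply a cyclic shift `s[r:] + s[:r]`.

```python
def legendre_sequence(p):
    """Antipodal Legendre sequence of prime length p: +1 on quadratic
    residues mod p, -1 otherwise.  The value at index 0 is a convention;
    we use +1 here."""
    residues = {(x * x) % p for x in range(1, p)}
    return [1 if (n == 0 or n in residues) else -1 for n in range(p)]

def isl(seq):
    """Integrated Sidelobe Level: the sum of the squared aperiodic
    autocorrelation values over all nonzero lags."""
    N = len(seq)
    return sum(
        sum(seq[n] * seq[n + k] for n in range(N - k)) ** 2
        for k in range(1, N)
    )

s = legendre_sequence(11)
print(len(s), isl(s), isl(s[3:] + s[:3]))  # last value: ISL of one rotation
```

The asymptotic expression derived in the paper concerns the ISL summed over a whole set of such rotations.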
|
1012.3646
|
Minimum-Time Frictionless Atom Cooling in Harmonic Traps
|
math.OC cs.SY quant-ph
|
Frictionless atom cooling in harmonic traps is formulated as a time-optimal
control problem and a synthesis of optimal controlled trajectories is obtained.
|
1012.3651
|
Cascades on a class of clustered random networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We present an analytical approach to determining the expected cascade size in
a broad range of dynamical models on the class of random networks with
arbitrary degree distribution and nonzero clustering introduced in [M.E.J.
Newman, Phys. Rev. Lett. 103, 058701 (2009)]. A condition for the existence of
global cascades is derived as well as a general criterion which determines
whether increasing the level of clustering will increase, or decrease, the
expected cascade size. Applications, examples of which are provided, include
site percolation, bond percolation, and Watts' threshold model; in all cases
analytical results give excellent agreement with numerical simulations.
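As a concrete reference point, here is a minimal simulation of Watts' threshold model, one of the applications mentioned above, run on a plain random graph rather than the clustered ensemble the paper analyzes; the threshold `phi` and the seed set are illustrative choices.

```python
import random

def watts_cascade(adj, phi, seeds):
    """Watts' threshold model: an inactive node activates once the
    fraction of its active neighbours reaches its threshold phi;
    iterate until no node changes state."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v in active or not nbrs:
                continue
            if sum(n in active for n in nbrs) / len(nbrs) >= phi:
                active.add(v)
                changed = True
    return active

# Small illustrative random graph (not the clustered ensemble of Newman 2009).
rng = random.Random(0)
n, p = 50, 0.08
adj = {v: set() for v in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)

cascade = watts_cascade(adj, phi=0.2, seeds={0})
print(len(cascade))
```

The expected cascade size computed analytically in the paper is the average of `len(cascade)` over the random-graph ensemble and the seed choice.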
|
1012.3656
|
Adaptive Cluster Expansion (ACE): A Multilayer Network for Estimating
Probability Density Functions
|
cs.NE cs.CV
|
We derive an adaptive hierarchical method of estimating high dimensional
probability density functions. We call this method of density estimation the
"adaptive cluster expansion" or ACE for short. We present an application of
this approach, based on a multilayer topographic mapping network, that
adaptively estimates the joint probability density function of the pixel values
of an image, and presents this result as a "probability image". We apply this
to the problem of identifying statistically anomalous regions in otherwise
statistically homogeneous images.
|
1012.3697
|
Analysis of Agglomerative Clustering
|
cs.DS cs.CG cs.LG
|
The diameter $k$-clustering problem is the problem of partitioning a finite
subset of $\mathbb{R}^d$ into $k$ subsets called clusters such that the maximum
diameter of the clusters is minimized. One early clustering algorithm that
computes a hierarchy of approximate solutions to this problem (for all values
of $k$) is the agglomerative clustering algorithm with the complete linkage
strategy. For decades, this algorithm has been widely used by practitioners.
However, it is not well studied theoretically. In this paper, we analyze the
agglomerative complete linkage clustering algorithm. Assuming that the
dimension $d$ is a constant, we show that for any $k$ the solution computed by
this algorithm is an $O(\log k)$-approximation to the diameter $k$-clustering
problem. Our analysis holds not only for the Euclidean distance but for any
metric that is based on a norm. Furthermore, we analyze the closely related
$k$-center and discrete $k$-center problem. For the corresponding agglomerative
algorithms, we deduce an approximation factor of $O(\log k)$ as well.
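A minimal sketch of the algorithm analyzed above: complete-linkage agglomerative clustering repeatedly merges the two clusters with the smallest maximum pairwise (cross) distance until k clusters remain. This naive version is unoptimized and for illustration only.

```python
def complete_linkage(points, k):
    """Agglomerative clustering with the complete linkage strategy:
    repeatedly merge the two clusters whose maximum pairwise cross
    distance is smallest, until k clusters remain."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
out = complete_linkage(pts, 3)
print(sorted(len(c) for c in out))  # → [1, 2, 2]
```

The paper's result says the maximum cluster diameter of the solution at level k is within an O(log k) factor of the optimum, for constant dimension d.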
|
1012.3705
|
Stochastic Vector Quantisers
|
cs.NE cs.CV
|
In this paper a stochastic generalisation of the standard Linde-Buzo-Gray
(LBG) approach to vector quantiser (VQ) design is presented. The encoder is
implemented as the sampling of a vector of code indices from a probability
distribution derived from the input vector, and the decoder is implemented as
a superposition of reconstruction vectors. The stochastic VQ is optimised
using a minimum mean Euclidean reconstruction distortion criterion, as in the
LBG case. Numerical simulations are used to demonstrate
how this leads to self-organisation of the stochastic VQ, where different
stochastically sampled code indices become associated with different input
subspaces. This property may be used to automate the process of splitting
high-dimensional input vectors into low-dimensional blocks before encoding
them.
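A toy sketch of the encode/decode loop described above. The Boltzmann form of the sampling distribution and the inverse temperature `beta` are our illustrative assumptions; the paper derives its own optimised distribution.

```python
import math
import random

def stochastic_encode(x, codebook, beta, rng):
    """Sample a code index from a Boltzmann distribution over squared
    reconstruction distances (an illustrative choice of probability
    model, not the paper's derived form)."""
    d2 = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in codebook]
    w = [math.exp(-beta * d) for d in d2]
    r = rng.random() * sum(w)
    for i, wi in enumerate(w):
        r -= wi
        if r <= 0:
            return i
    return len(w) - 1

def decode(indices, codebook):
    """Decoder as a superposition (mean) of reconstruction vectors."""
    vecs = [codebook[i] for i in indices]
    return [sum(v) / len(vecs) for v in zip(*vecs)]

rng = random.Random(1)
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
idx = [stochastic_encode((0.1, 0.9), codebook, beta=5.0, rng=rng)
       for _ in range(200)]
recon = decode(idx, codebook)
print(recon)
```

With a sharp enough `beta`, most sampled indices pick the nearest codeword and the superposed reconstruction lands close to the input.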
|
1012.3722
|
Energy stable and momentum conserving hybrid finite element method for
the incompressible Navier-Stokes equations
|
cs.CE math.NA
|
A hybrid method for the incompressible Navier--Stokes equations is presented.
The method inherits the attractive stabilizing mechanism of upwinded
discontinuous Galerkin methods when momentum advection becomes significant,
equal-order interpolations can be used for the velocity and pressure fields,
and mass can be conserved locally. Using continuous Lagrange multiplier spaces
to enforce flux continuity across cell facets, the number of global degrees of
freedom is the same as for a continuous Galerkin method on the same mesh.
Different from our earlier investigations on the approach for the
Navier--Stokes equations, the pressure field in this work is discontinuous
across cell boundaries. It is shown that this leads to very good local mass
conservation and, for an appropriate choice of finite element spaces, momentum
conservation. Also, a new form of the momentum transport terms for the method
is constructed such that global energy stability is guaranteed, even in the
absence of a point-wise solenoidal velocity field. Mass conservation, momentum
conservation and global energy stability are proved for the time-continuous
case, and for a fully discrete scheme. The presented analysis results are
supported by a range of numerical simulations.
|
1012.3724
|
The Development of Dominance Stripes and Orientation Maps in a
Self-Organising Visual Cortex Network (VICON)
|
cs.NE cs.CV
|
A self-organising neural network is presented that is based on a rigorous
Bayesian analysis of the information contained in individual neural firing
events. This leads to a visual cortex network (VICON) that has many of the
properties emerge when a mammalian visual cortex is exposed to data arriving
from two imaging sensors (i.e. the two retinae), such as dominance stripes and
orientation maps.
|
1012.3788
|
A New Formula for the BER of Binary Modulations with Dual-Branch
Selection over Generalized-K Composite Fading Channels
|
cs.IT math.IT math.PR math.ST stat.TH
|
Error performance is one of the main performance measures and derivation of
its closed-form expression has proved to be quite involved for certain systems.
In this letter, a unified closed-form expression, applicable to different
binary modulation schemes, for the bit error rate of dual-branch selection
diversity based systems undergoing independent but not necessarily identically
distributed generalized-K fading is derived in terms of the extended
generalized bivariate Meijer G-function.
|
1012.3790
|
Improving PPM Algorithm Using Dictionaries
|
cs.IT math.IT
|
We propose a method to improve traditional character-based PPM text
compression algorithms. Considering a text file as a sequence of alternating
words and non-words, the basic idea of our algorithm is to encode non-words and
prefixes of words using character-based context models and encode suffixes of
words using dictionary models. By using dictionary models, the algorithm can
encode multiple characters as a whole, and thus enhance the compression
efficiency. The advantages of the proposed algorithm are: 1) it does not
require any text preprocessing; 2) it does not need any explicit codeword to
identify switch between context and dictionary models; 3) it can be applied to
any character-based PPM algorithms without incurring much additional
computational cost. Test results show that significant improvements can be
obtained over character-based PPM, especially in low order cases.
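The word/non-word alternation the method relies on can be obtained with a simple tokenizer. The sketch below takes the simplified view that a "word" is a run of letters (our assumption; the abstract does not fix the definition); prefixes of word tokens would then go to the character-context models and their suffixes to the dictionary models.

```python
import re

def split_words_nonwords(text):
    """Split text into an alternating sequence of word and non-word
    tokens, the unit structure a dictionary-assisted PPM coder
    would work on."""
    return re.findall(r"[A-Za-z]+|[^A-Za-z]+", text)

toks = split_words_nonwords("Hello, world! 42")
print(toks)  # → ['Hello', ', ', 'world', '! 42']
```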
|
1012.3793
|
A robust ranking algorithm to spamming
|
cs.IR physics.data-an
|
The ranking problem of web-based rating systems has attracted much attention.
A good ranking algorithm should be robust against spammer attacks. Here we
propose a correlation-based reputation algorithm to solve the ranking problem
of such rating systems, in which users rate objects. In this algorithm, a
user's reputation is iteratively determined by the correlation coefficient
between his/her rating vector and the corresponding objects' weighted-average
rating vector. Compared with the iterative refinement (IR) and mean-score
algorithms, results for both artificial and real data indicate that the
present algorithm shows higher robustness against spammer attacks.
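A compact sketch of the iterative scheme as described: each round recomputes reputation-weighted average ratings, then sets each user's reputation to the Pearson correlation of their ratings with those averages. Clipping negative correlations to zero and the uniform initial reputations are our assumptions.

```python
def correlation_reputation(ratings, iters=50):
    """ratings: dict user -> dict object -> rating."""
    users = list(ratings)
    objects = sorted({o for r in ratings.values() for o in r})
    rep = {u: 1.0 for u in users}
    for _ in range(iters):
        # Reputation-weighted average rating per object.
        avg = {}
        for o in objects:
            raters = [u for u in users if o in ratings[u]]
            den = sum(rep[u] for u in raters)
            avg[o] = (sum(rep[u] * ratings[u][o] for u in raters) / den
                      if den else 0.0)
        # Reputation = Pearson correlation with the weighted averages,
        # clipped at zero (our assumption).
        for u in users:
            xs = [ratings[u][o] for o in objects if o in ratings[u]]
            ys = [avg[o] for o in objects if o in ratings[u]]
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            rep[u] = max(cov / (sx * sy), 0.0) if sx and sy else 0.0
    return rep

votes = {"u1": {"a": 5, "b": 1, "c": 3},
         "u2": {"a": 4, "b": 2, "c": 3},
         "spam": {"a": 1, "b": 5, "c": 1}}
rep = correlation_reputation(votes)
print(rep["u1"] > rep["spam"])  # → True
```

The anti-correlated "spam" user ends up with zero reputation, so its votes stop influencing the weighted averages.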
|
1012.3802
|
Detecting Image Forgeries using Geometric Cues
|
cs.CV
|
This chapter presents a framework for detecting fake regions using various
methods, including watermarking and blind approaches. In particular, we
describe the current categories of blind approaches, which can be divided into
five: pixel-based, format-based, camera-based, physically-based and
geometric-based techniques. We then take a closer look at the geometric-based
techniques and categorize them in further detail. In the following section,
the state-of-the-art methods involved in the geometric techniques are
elaborated.
|
1012.3805
|
Element Retrieval using Namespace Based on keyword search over XML
Documents
|
cs.IR
|
Querying over XML elements using keyword search is steadily gaining
popularity. The traditional similarity measure is widely employed in order to
effectively retrieve various XML documents. A number of authors have already
proposed different similarity-measure methods that take advantage of the
structure and content of XML documents. They do not, however, consider the
similarity between latent semantic information of element texts and that of
keywords in a query. Although many algorithms on XML element search are
available, some of them have high computational complexity due to searching
a huge number of elements. In this paper, we propose a new algorithm that makes
use of the semantic similarity between elements instead of between entire XML
documents, considering not only the structure and content of an XML document,
but also semantic information of namespaces in elements. We compare our
algorithm with three other algorithms on real datasets. The
experiments have demonstrated that our proposed method is able to improve the
query accuracy, as well as to reduce the running time.
|
1012.3853
|
On the CNF encoding of cardinality constraints and beyond
|
cs.AI cs.LO
|
In this report, we propose a quick survey of the currently known techniques
for encoding a Boolean cardinality constraint into a CNF formula, and we
discuss the relevance of these encodings. We also propose models to
facilitate analysis and design of CNF encodings for Boolean constraints.
|
1012.3875
|
Optimal and Robust Transmit Designs for MISO Channel Secrecy by
Semidefinite Programming
|
cs.IT math.IT
|
In recent years there has been growing interest in the study of multi-antenna
transmit designs for providing secure communication over the physical layer.
This paper considers the scenario of an intended multi-input single-output
channel overheard by multiple multi-antenna eavesdroppers. Specifically, we
address the transmit covariance optimization for secrecy-rate maximization
(SRM) of that scenario. The challenge of this problem is that it is a nonconvex
optimization problem. This paper shows that the SRM problem can actually be
solved in a convex and tractable fashion, by recasting the SRM problem as a
semidefinite program (SDP). The SRM problem we solve is under the premise of
perfect channel state information (CSI). This paper also deals with the
imperfect CSI case. We consider a worst-case robust SRM formulation under
spherical CSI uncertainties, and we develop an optimal solution to it, again
via SDP. Moreover, our analysis reveals that transmit beamforming is generally
the optimal transmit strategy for SRM of the considered scenario, for both the
perfect and imperfect CSI cases. Simulation results are provided to illustrate
the secrecy-rate performance gains of the proposed SDP solutions compared to
some suboptimal transmit designs.
|
1012.3877
|
Queue-Aware Dynamic Clustering and Power Allocation for Network MIMO
Systems via Distributive Stochastic Learning
|
cs.LG
|
In this paper, we propose a two-timescale delay-optimal dynamic clustering
and power allocation design for downlink network MIMO systems. The dynamic
clustering control is adaptive to the global queue state information (GQSI)
only and computed at the base station controller (BSC) over a longer time
scale. On the other hand, the power allocations of all the BSs in one cluster
are adaptive to both intra-cluster channel state information (CCSI) and
intra-cluster queue state information (CQSI), and computed at the cluster
manager (CM) over a shorter time scale. We show that the two-timescale
delay-optimal control can be formulated as an infinite-horizon average cost
Constrained Partially Observed Markov Decision Process (CPOMDP). By exploiting
the special problem structure, we shall derive an equivalent Bellman equation
in terms of Pattern Selection Q-factor to solve the CPOMDP. To address the
distributive requirement and the issue of exponential memory requirement and
computational complexity, we approximate the Pattern Selection Q-factor by the
sum of Per-cluster Potential functions and propose a novel distributive online
learning algorithm to estimate the Per-cluster Potential functions (at each CM)
as well as the Lagrange multipliers (LM) (at each BS). We show that the
proposed distributive online learning algorithm converges almost surely (with
probability 1). By exploiting the birth-death structure of the queue dynamics,
we further decompose the Per-cluster Potential function into sum of Per-cluster
Per-user Potential functions and formulate the instantaneous power allocation
as a Per-stage QSI-aware Interference Game played among all the CMs. We also
propose a QSI-aware Simultaneous Iterative Water-filling Algorithm (QSIWFA) and
show that it can achieve the Nash Equilibrium (NE).
|
1012.3947
|
Interpolation in Equilibrium Logic and Answer Set Programming: the
Propositional Case
|
cs.LO cs.AI
|
Interpolation is an important property of classical and many non-classical
logics that has been shown to have interesting applications in computer science
and AI. Here we study the Interpolation Property for the propositional version
of the non-monotonic system of equilibrium logic, establishing weaker or
stronger forms of interpolation depending on the precise interpretation of the
inference relation. These results also yield a form of interpolation for ground
logic programs under the answer sets semantics. For disjunctive logic programs
we also study the property of uniform interpolation that is closely related to
the concept of variable forgetting.
|
1012.3951
|
Diffusion-geometric maximally stable component detection in deformable
shapes
|
cs.CV
|
Maximally stable component detection is a very popular method for feature
analysis in images, mainly due to its low computation cost and high
repeatability. With the recent advance of feature-based methods in geometric
shape analysis, there is significant interest in finding analogous approaches
in the 3D world. In this paper, we formulate a diffusion-geometric framework
for stable component detection in non-rigid 3D shapes, which can be used for
geometric feature detection and description. A quantitative evaluation of our
method on the SHREC'10 feature detection benchmark shows its potential as a
source of high-quality features.
|
1012.3953
|
PhyloGrid: a development for a workflow in Phylogeny
|
cs.CE q-bio.OT
|
In this work we present the development of a Taverna-based workflow to be used
for phylogenetic calculations by means of the MrBayes tool. It has a friendly
interface developed with the Gridsphere framework. The user is able to define
the parameters of the Bayesian calculation, determine the model of evolution,
check the accuracy of the results at the intermediate stages, and perform a
multiple alignment of the sequences prior to the final result. To do this, no
knowledge of the underlying computational procedure is required on the user's
side.
|
1012.3956
|
Advances in the Biomedical Applications of the EELA Project
|
cs.CE q-bio.OT
|
In recent years an increasing demand for Grid Infrastructures has resulted
in several international collaborations. This is the case of the EELA Project,
which has brought together collaborating groups of Latin America and Europe.
One year ago we presented this e-infrastructure used, among others, by the
Biomedical groups for the studies of oncological analysis, neglected diseases,
sequence alignments and computational phylogenetics. After this period, the
achieved advances are summarised in this paper.
|
1012.4046
|
Artificial Intelligence in Reverse Supply Chain Management: The State of
the Art
|
cs.AI
|
Product take-back legislation forces manufacturers to bear the costs of
collection and disposal of products that have reached the end of their useful
lives. In order to reduce these costs, manufacturers can consider reuse,
remanufacturing and/or recycling of components as an alternative to disposal.
The implementation of such alternatives usually requires an appropriate reverse
supply chain management. As the concept of the reverse supply chain gains
popularity in practice, the use of artificial intelligence approaches in this
area is also growing. As a result, the purpose of this paper is to
give an overview of the recent publications concerning the application of
artificial intelligence techniques to reverse supply chain with emphasis on
certain types of product returns.
|
1012.4050
|
Motif Analysis in the Amazon Product Co-Purchasing Network
|
cs.SI physics.soc-ph
|
Online stores like Amazon and eBay are growing by the day. Fewer people visit
department stores, preferring the convenience of purchasing online. These
stores may employ a number of techniques to advertise and recommend the
appropriate product to the appropriate buyer profile. This article evaluates
various 3-node and 4-node motifs occurring in such networks. Community
structures are evaluated as well. These results may provide interesting
insights into user behavior and a better understanding of marketing techniques.
|
1012.4051
|
Survey & Experiment: Towards the Learning Accuracy
|
cs.LG
|
The pursuit of the best learning accuracy is fraught with difficulties and
frustrations. Though one can optimize the empirical objective using a given
set of samples, its generalization ability to the entire sample distribution
remains questionable. Even if a fair generalization guarantee is offered, one
still wants to know what happens if the regularizer is removed, and/or how
well an artificial loss (like the hinge loss) relates to the accuracy.
  For these reasons, this report surveys four different trials towards the
learning accuracy, embracing the major advances in supervised learning theory
in the past four years. Starting from the generic setting of learning, the
first two trials introduce the best optimization and generalization bounds for
convex learning, and the third trial gets rid of the regularizer. As an
innovative attempt, the fourth trial studies the optimization when the
objective is exactly the accuracy, in the special case of binary
classification. This report also analyzes the last trial through experiments.
|
1012.4072
|
Stochastic Control of Event-Driven Feedback in Multi-Antenna
Interference Channels
|
cs.IT math.IT
|
Spatial interference avoidance is a simple and effective way of mitigating
interference in multi-antenna wireless networks. The deployment of this
technique requires channel-state information (CSI) feedback from each receiver
to all interferers, resulting in substantial network overhead. To address this
issue, this paper proposes the method of distributive control that
intelligently allocates CSI bits over multiple feedback links and adapts
feedback to channel dynamics. For symmetric channel distributions, it is
optimal for each receiver to equally allocate the average sum-feedback rate for
different feedback links, thereby decoupling their control. Using the criterion
of minimum sum-interference power, the optimal feedback-control policy is shown
using stochastic-optimization theory to exhibit opportunism. Specifically, a
specific feedback link is turned on only when the corresponding transmit-CSI
error is significant or the interference-channel gain is large, and the optimal number
of feedback bits increases with this gain. For high mobility and considering
the sphere-cap-quantized-CSI model, the optimal feedback-control policy is
shown to perform water-filling in time, where the number of feedback bits
increases logarithmically with the corresponding interference-channel gain.
Furthermore, we consider asymmetric channel distributions with heterogeneous
path losses and high mobility, and prove the existence of a unique optimal
policy for jointly controlling multiple feedback links. Given the
sphere-cap-quantized-CSI model, this policy is shown to perform water-filling
over feedback links. Finally, simulation demonstrates that feedback-control
yields significant throughput gains compared with the conventional
differential-feedback method.
|
1012.4074
|
A fast divide-and-conquer algorithm for indexing human genome sequences
|
cs.DB
|
Since the release of the human genome sequences, one of the most important
research issues has been indexing the genome sequences, and the suffix tree is
the structure most widely adopted for that purpose. The traditional suffix tree construction
algorithms have severe performance degradation due to the memory bottleneck
problem. The recent disk-based algorithms also have limited performance
improvement due to random disk accesses. Moreover, they do not fully utilize
the recent CPUs with multiple cores. In this paper, we propose a fast algorithm
based on 'divide-and-conquer' strategy for indexing the human genome sequences.
Our algorithm almost eliminates random disk accesses by accessing the disk in
the unit of contiguous chunks. In addition, our algorithm fully utilizes the
multi-core CPUs by dividing the genome sequences into multiple partitions and
then assigning each partition to a different core for parallel processing.
Experimental results show that our algorithm outperforms the previous fastest
DIGEST algorithm by up to 3.5 times.
|
1012.4088
|
Fractal Analysis on Human Behaviors Dynamics
|
physics.soc-ph cs.SI
|
The study of human dynamics has recently attracted much interest from many
fields. In this paper, the fractal character of human behaviors is
investigated from the perspective of time series constructed from the number
of library loans. The Hurst exponents and the lengths of non-periodic cycles
calculated through Rescaled Range Analysis indicate that the time series of
human behaviors are fractal with long-range correlation. The time series are
then converted to complex networks by the visibility graph algorithm. The
topological properties of the networks, such as the scale-free property,
small-world effect and hierarchical structure, imply that close relationships
exist between the amounts of repetitious actions performed by people during
certain periods of time, especially on some important days. Finally, the
networks obtained are shown, using the box-counting method, not to be fractal
or self-similar. Our work reveals the intrinsic regularity in human collective
repetitious behaviors.
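For reference, the natural visibility graph construction used above links two time points whenever every intermediate observation lies strictly below their line of sight; a minimal sketch on a toy series:

```python
def visibility_graph(series):
    """Natural visibility graph: time points a < b are linked if every
    intermediate bar c stays strictly below the line of sight between
    (a, y_a) and (b, y_b)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[b]
                + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

ts = [3, 1, 4, 1, 5, 2]
g = sorted(visibility_graph(ts))
print(g)  # → [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4), (4, 5)]
```

Adjacent points are always mutually visible, so the resulting graph is connected; its degree distribution is what carries the fractal/long-range information of the series.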
|
1012.4116
|
lp-Recovery of the Most Significant Subspace among Multiple Subspaces
with Outliers
|
stat.ML cs.CV math.FA
|
We assume data sampled from a mixture of d-dimensional linear subspaces with
spherically symmetric distributions within each subspace and an additional
outlier component with spherically symmetric distribution within the ambient
space (for simplicity we may assume that all distributions are uniform on their
corresponding unit spheres). We also assume mixture weights for the different
components. We say that one of the underlying subspaces of the model is most
significant if its mixture weight is higher than the sum of the mixture weights
of all other subspaces. We study the recovery of the most significant subspace
by minimizing the lp-averaged distances of data points from d-dimensional
subspaces, where p>0. Unlike other lp minimization problems, this minimization
is non-convex for all p>0 and thus requires different methods for its analysis.
We show that if 0<p<=1, then for any fraction of outliers the most significant
subspace can be recovered by lp minimization with overwhelming probability
(which depends on the generating distribution and its parameters). We show that
when adding small noise around the underlying subspaces the most significant
subspace can be nearly recovered by lp minimization for any 0<p<=1 with an
error proportional to the noise level. On the other hand, if p>1 and there is
more than one underlying subspace, then with overwhelming probability the most
significant subspace cannot be recovered or nearly recovered. This last result
does not require spherically symmetric outliers.
|
1012.4126
|
Self-Organising Stochastic Encoders
|
cs.NE cs.CV
|
The processing of mega-dimensional data, such as images, scales linearly with
image size only if fixed-size processing windows are used. It would be very
useful to be able to automate the process of sizing and interconnecting the
processing windows. A stochastic encoder that is an extension of the standard
Linde-Buzo-Gray vector quantiser, called a stochastic vector quantiser (SVQ),
includes this required behaviour amongst its emergent properties, because it
automatically splits the input space into statistically independent subspaces,
which it then separately encodes. Various optimal SVQs have been obtained, both
analytically and numerically. Analytic solutions which demonstrate how the
input space is split into independent subspaces may be obtained when an SVQ is
used to encode data that lives on a 2-torus (e.g. the superposition of a pair
of uncorrelated sinusoids). Many numerical solutions have also been obtained,
using both SVQs and chains of linked SVQs: (1) images of multiple independent
targets (encoders for single targets emerge), (2) images of multiple correlated
targets (various types of encoder for single and multiple targets emerge), (3)
superpositions of various waveforms (encoders for the separate waveforms emerge
- this is a type of independent component analysis (ICA)), (4) maternal and
foetal ECGs (another example of ICA), (5) images of textures (orientation maps
and dominance stripes emerge). Overall, SVQs exhibit a rich variety of
self-organising behaviour, which effectively discovers the internal structure
of the training data. This should have an immediate impact on "intelligent"
computation, because it reduces the need for expert human intervention in the
design of data processing algorithms.
|
1012.4161
|
Lattice Code Design for the Rayleigh Fading Wiretap Channel
|
cs.IT math.IT
|
It has been shown recently that coding for the Gaussian Wiretap Channel can be
done with nested lattices. A fine lattice intended for the legitimate user
must be designed as a usual lattice code for the Gaussian Channel, while a
coarse lattice, whose theta series must be minimized, is added to introduce
confusion at the eavesdropper. We present a design criterion for both the fine
and the coarse lattice to obtain wiretap lattice codes for the Rayleigh fading
Wiretap Channel.
|
1012.4173
|
A Self-Organising Neural Network for Processing Data from Multiple
Sensors
|
cs.NE cs.CV
|
This paper shows how a folded Markov chain network can be applied to the
problem of processing data from multiple sensors, with an emphasis on the
special case of 2 sensors. It is necessary to design the network so that it can
transform a high dimensional input vector into a posterior probability, for
which purpose the partitioned mixture distribution network is ideally suited.
The underlying theory is presented in detail, and a simple numerical simulation
is given that shows the emergence of ocular dominance stripes.
|
1012.4194
|
Equation-Free Multiscale Computational Analysis of Individual-Based
Epidemic Dynamics on Networks
|
math.NA cs.SI nlin.AO physics.soc-ph
|
The surveillance, analysis and ultimately the efficient long-term prediction
and control of epidemic dynamics appear to be one of the major challenges
nowadays. Detailed atomistic mathematical models play an important role towards
this aim. In this work it is shown how one can exploit the Equation Free
approach and optimization methods such as Simulated Annealing to bridge
detailed individual-based epidemic simulation with coarse-grained,
systems-level, analysis. The methodology provides a systematic approach for
analyzing the parametric behavior of complex/ multi-scale epidemic simulators
much more efficiently than simply simulating forward in time. It is shown how
steady state and (if required) time-dependent computations, stability
computations, as well as continuation and numerical bifurcation analysis can be
performed in a straightforward manner. The approach is illustrated through a
simple individual-based epidemic model deployed on a random regular connected
graph. Using the individual-based microscopic simulator as a black box
coarse-grained timestepper and with the aid of Simulated Annealing I compute
the coarse-grained equilibrium bifurcation diagram and analyze the stability of
the stationary states sidestepping the necessity of obtaining explicit closures
at the macroscopic level under a pairwise representation perspective.
|
1012.4225
|
Delay and Redundancy in Lossless Source Coding
|
cs.IT math.IT
|
The penalty incurred by imposing a finite delay constraint in lossless source
coding of a memoryless source is investigated. It is well known that for the
so-called block-to-variable and variable-to-variable codes, the redundancy
decays at best polynomially with the delay, where in this case the delay is
identified with the source block length or maximal source phrase length,
respectively. In stark contrast, it is shown that for sequential codes (e.g., a
delay-limited arithmetic code) the redundancy can be made to decay
exponentially with the delay constraint. The corresponding redundancy-delay
exponent is shown to be at least as good as the R\'enyi entropy of order 2 of
the source, but (for almost all sources) not better than a quantity depending
on the minimal source symbol probability and the alphabet size.
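For reference, the Rényi entropy of order 2 that appears in the exponent bound is, in bits, H_2(P) = -log2(sum_i p_i^2); a small sketch:

```python
import math

def renyi_entropy(probs, alpha=2.0):
    """Renyi entropy of order alpha (in bits):
    H_alpha(P) = (1 / (1 - alpha)) * log2(sum_i p_i ** alpha)."""
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

# For a uniform source all Renyi entropies coincide with log2(K):
h_uniform = renyi_entropy([0.25] * 4)
h_skewed = renyi_entropy([0.5, 0.25, 0.25])
print(h_uniform, round(h_skewed, 4))  # → 2.0 1.415
```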
|
1012.4241
|
A New Technique for Text Data Compression
|
cs.CR cs.IT math.IT
|
In this paper we use the ternary representation of numbers for compressing
text data. We use a binary map for ternary digits and introduce a way to use
the binary 11-pair, which has never been used for coding data before, and we
further use a 4-digit ternary representation of the alphabet, covering
lowercase and uppercase letters along with some extra symbols that are most
commonly used in day-to-day life. We find a way to minimize the length of the
bit string, which is only possible in the ternary representation, thus
drastically reducing the length of the code. We also find some connection
between this coding technique and Fibonacci numbers.
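The counting behind a 4-digit ternary alphabet can be sketched as follows: 3^4 = 81 code points are enough for 26 lowercase plus 26 uppercase letters and extra symbols, and each ternary digit maps to a two-bit pair. The specific pair map below (with "11" reserved) is an assumption for illustration, not necessarily the paper's exact scheme:

```python
def to_ternary4(n):
    """4-digit ternary representation of n, for 0 <= n < 3**4 = 81."""
    assert 0 <= n < 81
    digits = []
    for _ in range(4):
        digits.append(n % 3)
        n //= 3
    return digits[::-1]  # most significant digit first

# Hypothetical binary map for ternary digits; '11' is kept free for other uses.
PAIR = {0: "00", 1: "01", 2: "10"}

def encode(n):
    """Encode a symbol index as an 8-bit string via its ternary digits."""
    return "".join(PAIR[d] for d in to_ternary4(n))

print(encode(5))  # → 00000110  (5 = [0,0,1,2] in 4-digit ternary)
```

With this map every symbol costs a fixed 8 bits, so any real compression gain would have to come from exploiting the reserved "11" pair, as the abstract suggests.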
|
1012.4249
|
Travel Time Estimation Using Floating Car Data
|
cs.LG
|
This report explores the use of machine learning techniques to accurately
predict travel times in city streets and highways using floating car data
(location information of user vehicles on a road network). The aim of this
report is twofold: first, we present a general architecture for solving this
problem; second, we present and evaluate a few techniques on real floating car
data gathered over a month on a 5 km highway in New Delhi.
|
1012.4250
|
Differential Privacy versus Quantitative Information Flow
|
cs.IT cs.CR cs.DB math.IT
|
Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two different entries originate a certain answer is bounded by e^\epsilon.
In the fields of anonymity and information flow there is a similar concern for
controlling information leakage, i.e. limiting the possibility of inferring the
secret information from the observables. In recent years, researchers have
proposed to quantify the leakage in terms of the information-theoretic notion
of mutual information. There are two main approaches that fall in this
category: One based on Shannon entropy, and one based on R\'enyi's min entropy.
The latter has connection with the so-called Bayes risk, which expresses the
probability of guessing the secret. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
the notion of differential privacy is strictly stronger, in the sense that it
implies a bound on the mutual information, but not vice versa.
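The e^\epsilon ratio bound above can be checked concretely for the standard Laplace mechanism on a counting query (this is a textbook example of a differentially private mechanism, not taken from the paper itself):

```python
import math

def laplace_pdf(x, mu, scale):
    """Density of the Laplace distribution centred at mu with given scale."""
    return math.exp(-abs(x - mu) / scale) / (2 * scale)

epsilon = 0.5
scale = 1.0 / epsilon  # counting queries have sensitivity 1

# Neighbouring databases yield true counts 10 and 11; for every possible
# noisy output y, the density ratio must lie within [e^-eps, e^eps].
for y in [5.0, 10.5, 20.0]:
    ratio = laplace_pdf(y, 10, scale) / laplace_pdf(y, 11, scale)
    assert math.exp(-epsilon) - 1e-12 <= ratio <= math.exp(epsilon) + 1e-12

print("Laplace mechanism satisfies the e^epsilon ratio bound")
```

The ratio equals exp((|y - 11| - |y - 10|) / scale), and since the two absolute values differ by at most the sensitivity 1, the bound holds for every output y.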
|
1012.4290
|
Bit recycling for scaling random number generators
|
cs.IT math.IT math.NA math.PR
|
Many Random Number Generators (RNGs) are available nowadays; they are divided
into two categories: hardware RNGs, which provide "true" random numbers, and
algorithmic RNGs, which generate pseudo-random numbers (PRNGs). Both types
usually generate random numbers $(X_n)$ as independent uniform samples in a
range $0,\cdots,2^{b-1}$, with $b = 8, 16, 32$ or $b = 64$. In applications, it
is instead sometimes desirable to draw random numbers as independent uniform
samples $(Y_n)$ in a range $1, \cdots, M$, where moreover $M$ may change
between drawings. Transforming the sequence $(X_n)$ into $(Y_n)$ is sometimes
known as scaling. We discuss different methods for scaling the RNG, both in
terms of mathematical efficiency and of computational speed.
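A standard baseline for the scaling problem described above is rejection sampling, which maps b-bit samples to {1, ..., M} without modulo bias; the sketch below is one common approach, not necessarily the bit-recycling method the paper proposes:

```python
import random

def scale_uniform(M, b=32, rng=random.getrandbits):
    """Draw Y uniform on {1, ..., M} from b-bit uniform samples X.

    Rejection sampling: samples falling in the 'remainder' region near 2^b
    are discarded so that the modulo map stays exactly uniform.
    """
    limit = (1 << b) - ((1 << b) % M)  # largest multiple of M that is <= 2^b
    while True:
        x = rng(b)
        if x < limit:
            return x % M + 1
        # otherwise reject x and draw again (wastes all b bits of x)

random.seed(0)
samples = [scale_uniform(6) for _ in range(1000)]
assert all(1 <= s <= 6 for s in samples)
```

Each rejection throws away all b bits of the rejected sample; the "bit recycling" of the title refers to reusing that discarded entropy, which this baseline does not attempt.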
|
1012.4327
|
Using virtual human for an interactive customer-oriented constrained
environment design
|
cs.RO
|
For industrial product design, it is very important to take into account
assembly/disassembly and maintenance operations during the conceptual and
prototype design stage. For these operations or other similar operations in a
constrained environment, trajectory planning is always a critical and difficult
issue for evaluating the design or for the users' convenience. In this paper, a
customer-oriented approach is proposed to partially solve ergonomic issues
encountered during the design stage of a constrained environment. A
single-objective optimization based method is taken from the literature to
generate the trajectory in a constrained environment automatically. A motion
capture based method helps guide the trajectory planning interactively if a
local minimum is encountered within the single-objective optimization. Finally,
a multi-objective evaluation method is proposed to evaluate the operations
generated by the algorithm.
|
1012.4374
|
Regularization and Optimization for the Seismic Imaging of Transmission Tower
Foundations
|
cs.CE
|
This research report summarizes the progress of work carried out jointly by
the IRCCyN and the \'Ecole Polytechnique de Montr\'eal about the resolution of
the inverse problem for the seismic imaging of transmission overhead line
structure foundations. Several methods aimed at mapping the underground medium
are considered. More particularly, we focus on methods based on a bilinear
formulation of the forward problem on one hand (CSI, modified gradient, etc.)
and on methods based on a "primal" formulation on the other hand. The
performances of these methods are compared using synthetic data. This work was
partially funded by RTE (R\'eseau de Transport d'\'Electricit\'e), which has
initiated the project, and was carried out in collaboration with EDF R&D
(\'Electricit\'e de France - Recherche et D\'eveloppement).
|
1012.4396
|
Selection in Scientific Networks
|
cs.SI cs.CY cs.DL nlin.AO physics.soc-ph
|
One of the most interesting scientific challenges nowadays deals with the
analysis and the understanding of complex networks' dynamics. A major issue is
the definition of new frameworks for the exploration of the dynamics at play in
real dynamic networks. Here, we focus on scientific communities by analyzing
the "social part" of Science through a descriptive approach that aims at
identifying the social determinants (e.g. goals and potential interactions
among individuals) behind the emergence and the resilience of scientific
communities. We consider that scientific communities are at the same time
communities of practice (through co-authorship) and that they exist also as
representations in scientists' minds, since a reference to another scientist's
work is not merely an objective link to a relevant work but reveals social
objects that one manipulates and refers to. In this paper we identify patterns
in the evolution of a scientific field by analyzing a portion of
the arXiv repository covering a period of 10 years of publications in physics.
As a citation represents a deliberative selection related to the relevance of a
work in its scientific domain, our analysis approaches the co-existence between
co-authorship and citation behaviors in a community by focusing on the
interaction patterns of the most proficient and most cited authors. We focus,
in turn, on how these patterns are affected by the selection process of
citations. Such a selection a) produces self-organization, because it is
carried out by a group of individuals who act, compete and collaborate in a
common environment in order
to advance Science and b) determines the success (emergence) of both topics and
scientists working on them. The dataset is analyzed a) at a global level, e.g.
the network evolution, b) at the meso-level, e.g. communities emergence, and c)
at a micro-level, e.g. nodes' aggregation patterns.
|
1012.4401
|
A Note on a Characterization of R\'enyi Measures and its Relation to
Composite Hypothesis Testing
|
cs.IT math.IT
|
The R\'enyi information measures are characterized in terms of their Shannon
counterparts, and properties of the former are recovered from first principle
via the associated properties of the latter. Motivated by this
characterization, a two-sensor composite hypothesis testing problem is
presented, and the optimal worst case miss-detection exponent is obtained in
terms of a R\'enyi divergence.
|