| id | title | categories | abstract |
|---|---|---|---|
0911.1678
|
Industrial-Strength Formally Certified SAT Solving
|
cs.LO cs.AI
|
Boolean Satisfiability (SAT) solvers are now routinely used in the
verification of large industrial problems. However, their application in
safety-critical domains such as the railways, avionics, and automotive
industries requires some form of assurance for the results, as the solvers can
(and sometimes do) have bugs. Unfortunately, the complexity of modern, highly
optimized SAT solvers renders impractical the development of direct formal
proofs of their correctness. This paper presents an alternative approach where
an untrusted, industrial-strength, SAT solver is plugged into a trusted,
formally certified, SAT proof checker to provide industrial-strength certified
SAT solving. The key novelties and characteristics of our approach are (i) that
the checker is automatically extracted from the formal development, (ii) that
the combined system can be used as a standalone executable program independent
of any supporting theorem prover, and (iii) that the checker certifies any SAT
solver respecting the agreed format for satisfiability and unsatisfiability
claims. The core of the system is a certified checker for unsatisfiability
claims that is formally designed and verified in Coq. We present its formal
design and outline the correctness proofs. The actual standalone checker is
automatically extracted from the Coq development. An evaluation of the
certified checker on a representative set of industrial benchmarks from the SAT
Race Competition shows that, although it is slower than uncertified SAT checkers,
it is significantly faster than certified checkers implemented on top of an
interactive theorem prover.
|
0911.1685
|
Multi-Objective Optimisation Method for Posture Prediction and Analysis
with Consideration of Fatigue Effect and its Application Case
|
cs.RO
|
Automation techniques have been widely used in the manufacturing industry, but
there are still manual handling operations required in assembly and maintenance
work in industry. Inappropriate posture and physical fatigue might result in
musculoskeletal disorders (MSDs) in such physical jobs. In ergonomics and
occupational biomechanics, virtual human modelling techniques have been
employed to design and optimize the manual operations in design stage so as to
avoid or decrease potential MSD risks. In these methods, physical fatigue is
addressed only by minimizing muscle or joint stress, while the accumulation of
fatigue over the duration of a posture receives little attention. In this study,
based on the existing methods and multiple objective optimisation method (MOO),
a new posture prediction and analysis method is proposed for predicting the
optimal posture and evaluating the physical fatigue in the manual handling
operation. The posture prediction and analysis problem is mathematically
described and a special application case is demonstrated for analyzing a
drilling assembly operation in European Aeronautic Defence & Space Company
(EADS) in this paper.
|
0911.1691
|
Vertical partitioning of relational OLTP databases using integer
programming
|
cs.DB cs.PF
|
A way to optimize performance of relational row store databases is to reduce
the row widths by vertically partitioning tables into table fractions in order
to minimize the number of irrelevant columns/attributes read by each
transaction. This paper considers vertical partitioning algorithms for
relational row-store OLTP databases with an H-store-like architecture, meaning
that we would like to maximize the number of single-sited transactions. We
present a model for the vertical partitioning problem that, given a schema
together with a vertical partitioning and a workload, estimates the costs
(bytes read/written by storage layer access methods and bytes transferred
between sites) of evaluating the workload on the given partitioning. The cost
model allows for arbitrarily prioritizing load balancing of sites vs. total
cost minimization. We show that finding a minimum-cost vertical partitioning in
this model is NP-hard and present two algorithms returning solutions in which
single-sitedness of read queries is preserved while allowing column replication
(which may allow a drastically reduced cost compared to disjoint partitioning).
The first algorithm is a quadratic integer program that finds optimal
minimum-cost solutions with respect to the model, and the second algorithm is a
more scalable heuristic based on simulated annealing. Experiments show that the
algorithms can reduce the cost of the model objective by 37% when applied to
the TPC-C benchmark and the heuristic is shown to obtain solutions with cost
close to the ones found using the quadratic program.
|
0911.1707
|
A Dynamic Vulnerability Map to Assess the Risk of Road Network Traffic
Utilization
|
cs.AI physics.soc-ph
|
The Le Havre agglomeration (CODAH) includes 16 establishments classified as
upper-tier Seveso sites. In the literature, vulnerability maps are constructed
to help decision makers assess risk. Such approaches remain static and do not
take into account population displacement when estimating vulnerability. We
propose a decision making tool based on a dynamic vulnerability map to evaluate
the difficulty of evacuation in the different sectors of CODAH. We use a
Geographic Information System (GIS) to visualize the map, which evolves with
the road traffic state through a community-detection algorithm for large
graphs.
|
0911.1708
|
Different goals in multiscale simulations and how to reach them
|
cs.AI nlin.AO
|
In this paper we summarize our work on multiscale programs, mainly simulations.
We first describe what multiscaling is about and how it helps, for example, in
distinguishing signal from background noise in a flow of data, whether for
direct perception by a user or for further use by another program. We then
give three examples of multiscale techniques we have used in the past:
maintaining a summary, using an environmental marker that introduces a history
into the data, and finally using knowledge of the behavior of the different
scales to handle them simultaneously.
|
0911.1713
|
Isometries and Construction of Permutation Arrays
|
math.CO cs.IT math.IT
|
An (n,d)-permutation code is a subset C of Sym(n) such that the Hamming
distance d_H between any two distinct elements of C is at least equal to d. In
this paper, we use the characterisation of the isometry group of the metric
space (Sym(n),d_H) in order to develop generating algorithms with rejection of
isomorphic objects. To classify the (n,d)-permutation codes up to isometry, we
construct invariants and study their efficiency. We give the numbers of
non-isometric (4,3)- and (5,4)-permutation codes. Maximal and balanced
(n,d)-permutation codes are enumerated in a constructive way.
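The distance condition in the definition is straightforward to verify computationally. A minimal sketch, assuming a hypothetical set of codewords (not taken from the paper's classification):

```python
from itertools import combinations

def hamming(p, q):
    """Number of positions in which two permutations differ."""
    return sum(a != b for a, b in zip(p, q))

def is_permutation_code(C, d):
    """Check that all pairs of distinct codewords are at distance >= d."""
    return all(hamming(p, q) >= d for p, q in combinations(C, 2))

# A small subset of Sym(4) with pairwise Hamming distance 4,
# hence in particular an (n,d) = (4,3)-permutation code.
C = [(1, 2, 3, 4), (2, 1, 4, 3), (3, 4, 1, 2), (4, 3, 2, 1)]
assert is_permutation_code(C, 3)
```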
|
0911.1743
|
Analysis of peeling decoder for MET ensembles
|
cs.IT math.IT
|
The peeling decoder introduced by Luby et al. allows analysis of LDPC
decoding for the binary erasure channel (BEC). For irregular ensembles, they
analyze the decoder state as a Markov process and present a solution to the
differential equations describing the process mean. Multi-edge type (MET)
ensembles allow greater precision through specifying graph connectivity. We
generalize the peeling decoder for MET ensembles and derive analogous
differential equations. We offer a new change of variables and solution to the
node fraction evolutions in the general (MET) case. This result is preparatory
to investigating finite-length ensemble behavior.
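On the BEC, the peeling decoder's update rule is simple: any check equation with exactly one erased neighbor determines that variable. A minimal sketch on a hypothetical toy parity-check structure (not an MET ensemble from the paper):

```python
def peel(checks, erased):
    """Peeling decoder for the binary erasure channel: repeatedly resolve
    any check with exactly one erased variable until none remains or the
    decoder stalls on a stopping set."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for check in checks:
            unknown = [v for v in check if v in erased]
            if len(unknown) == 1:
                erased.discard(unknown[0])  # value fixed by the parity equation
                progress = True
    return erased  # variables that remain unresolved

# Each check lists the variable nodes it constrains (toy example).
checks = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
assert peel(checks, {1, 3}) == set()         # both erasures recovered
assert peel(checks, {0, 2, 4}) == {0, 2, 4}  # stopping set: decoding stalls
```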
|
0911.1745
|
Sequence Folding, Lattice Tiling, and Multidimensional Coding
|
cs.IT math.CO math.IT
|
Folding a sequence $S$ into a multidimensional box is a well-known method
which is used as a multidimensional coding technique. The operation of folding
is generalized in a way that the sequence $S$ can be folded into various shapes
and not just a box. The new definition of folding is based on a lattice tiling
for the given shape $\cS$ and a direction in the $D$-dimensional integer grid.
Necessary and sufficient conditions that a lattice tiling for $\cS$ combined
with a direction define a folding of a sequence into $\cS$ are derived. The
immediate and most impressive application is a set of new lower bounds on the
number of dots in two-dimensional synchronization patterns. This can also be
generalized to multidimensional synchronization patterns. The technique and
its application to two-dimensional synchronization patterns raise some
interesting problems in discrete geometry, which we also discuss.
It is also shown how folding can be used to construct multidimensional
error-correcting codes. Finally, by using the new definition of folding,
multidimensional pseudo-random arrays with various shapes are generated.
|
0911.1763
|
The Replicator Equation as an Inference Dynamic
|
math.DS cs.IT math.IT
|
The replicator equation is interpreted as a continuous inference equation and
a formal similarity between the discrete replicator equation and Bayesian
inference is described. Further connections between inference and the
replicator equation are given including a discussion of information divergences
and exponential families as solutions for the replicator dynamic, using Fisher
information and information geometry.
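The formal similarity described above can be made concrete in a few lines: the discrete replicator update x_i' = x_i f_i / Σ_j x_j f_j has exactly the shape of Bayes' rule, with the fitnesses f_i playing the role of likelihoods. A sketch with illustrative numbers:

```python
def replicator_step(x, f):
    """Discrete replicator update: x_i' = x_i * f_i / (mean fitness)."""
    mean_fitness = sum(xi * fi for xi, fi in zip(x, f))
    return [xi * fi / mean_fitness for xi, fi in zip(x, f)]

def bayes_update(prior, likelihood):
    """Bayes' rule: P(H_i | D) = P(H_i) * P(D | H_i) / P(D)."""
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    return [p * l / evidence for p, l in zip(prior, likelihood)]

# With fitnesses read as likelihoods, the two updates coincide term by term.
x = [0.5, 0.3, 0.2]  # population shares / prior over hypotheses
f = [1.0, 2.0, 4.0]  # fitnesses / likelihoods
assert replicator_step(x, f) == bayes_update(x, f)
```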
|
0911.1764
|
Escort Evolutionary Game Theory
|
math.DS cs.IT math.DG math.IT
|
A family of replicator-like dynamics, called the escort replicator equation,
is constructed using information-geometric concepts and generalized information
entropies and divergences from statistical thermodynamics. Lyapunov functions
and escort generalizations of basic concepts and constructions in evolutionary
game theory are given, such as an escort version of Fisher's Fundamental Theorem and
generalizations of the Shahshahani geometry.
|
0911.1813
|
Interactive Privacy via the Median Mechanism
|
cs.CR cs.CC cs.DB cs.DS
|
We define a new interactive differentially private mechanism -- the median
mechanism -- for answering arbitrary predicate queries that arrive online.
Relative to fixed accuracy and privacy constraints, this mechanism can answer
exponentially more queries than the previously best known interactive privacy
mechanism (the Laplace mechanism, which independently perturbs each query
result). Our guarantee is almost the best possible, even for non-interactive
privacy mechanisms. Conceptually, the median mechanism is the first privacy
mechanism capable of identifying and exploiting correlations among queries in
an interactive setting.
We also give an efficient implementation of the median mechanism, with
running time polynomial in the number of queries, the database size, and the
domain size. This efficient implementation guarantees privacy for all input
databases, and accurate query results for almost all input databases. The
dependence of the privacy on the number of queries in this mechanism improves
over that of the best previously known efficient mechanism by a
super-polynomial factor, even in the non-interactive setting.
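The median mechanism itself is intricate, but the Laplace mechanism it is compared against is easy to sketch: each query answer is perturbed independently with Laplace noise of scale sensitivity/epsilon. The database and parameter values below are made up for illustration:

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sample, as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Independently perturb one query answer for epsilon-differential privacy."""
    return true_answer + laplace_noise(sensitivity / epsilon)

# A counting query (sensitivity 1) over a toy bit-valued database.
database = [1, 0, 1, 1, 0, 1]
true_count = sum(database)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Because each of k queries consumes its own share of the privacy budget, the noise grows with k under this independent-perturbation scheme, which is exactly the limitation the median mechanism's correlation-exploiting approach improves upon.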
|
0911.1826
|
Arithmetic completely regular codes
|
math.CO cs.IT math.IT
|
In this paper, we explore completely regular codes in the Hamming graphs and
related graphs. Experimental evidence suggests that many completely regular
codes have the property that the eigenvalues of the code are in arithmetic
progression. In order to better understand these "arithmetic completely regular
codes", we focus on cartesian products of completely regular codes and products
of their corresponding coset graphs in the additive case. Employing earlier
results, we are then able to prove a theorem which nearly classifies these
codes in the case where the graph admits a completely regular partition into
such codes (e.g., the cosets of some additive completely regular code).
Connections to the theory of distance-regular graphs are explored and several
open questions are posed.
|
0911.1842
|
Standards for Language Resources
|
cs.CL
|
The goal of this paper is two-fold: to present an abstract data model for
linguistic annotations and its implementation using XML, RDF and related
standards; and to outline the work of a newly formed committee of the
International Standards Organization (ISO), ISO/TC 37/SC 4 Language Resource
Management, which will use this work as its starting point.
|
0911.1849
|
The Feasibility of Interference Alignment over Measured MIMO-OFDM
Channels
|
cs.IT math.IT
|
Interference alignment (IA) has been shown to achieve the maximum degrees of
freedom in the interference channel. This results in sum rate
scaling linearly with the number of users in the high signal-to-noise-ratio
(SNR) regime. Linear scaling is achieved by precoding transmitted signals to
align interference subspaces at the receivers, given channel knowledge of all
transmit-receive pairs, effectively reducing the number of discernible
interferers. The theory of IA was derived under assumptions about the richness
of scattering in the propagation channel; practical channels do not guarantee
such ideal characteristics. This paper presents the first experimental study of
IA in measured multiple-input multiple-output orthogonal frequency-division
multiplexing (MIMO-OFDM) interference channels. Our measurement campaign
includes a variety of indoor and outdoor measurement scenarios at The
University of Texas at Austin. We show that IA achieves the claimed scaling
factors, or degrees of freedom, in several measured channel settings for a
3-user, 2-antenna-per-node setup. In addition to verifying the claimed
performance, we characterize the effect of Kronecker spatial correlation on sum
rate and present two other correlation measures, which we show are more tightly
related to the achieved sum rate.
|
0911.1934
|
On a Gel'fand-Yaglom-Peres theorem for f-divergences
|
cs.IT math.IT math.ST stat.TH
|
It is shown that the $f$-divergence between two probability measures $P$ and
$R$ equals the supremum of the same $f$-divergence computed over all finite
measurable partitions of the original space, thus generalizing results
previously proved by Gel'fand and Yaglom and by Peres for the Information
Divergence, and more recently by Dukkipati, Bhatnagar and Murty for the Tsallis
and Rényi divergences.
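In the Information Divergence case, the supremum characterization implies monotonicity under coarsening: merging cells of a partition can only decrease the divergence. A small numeric check with made-up distributions:

```python
from math import log

def kl(p, r):
    """Information Divergence (Kullback-Leibler) between discrete distributions."""
    return sum(pi * log(pi / ri) for pi, ri in zip(p, r) if pi > 0)

# A four-cell partition of the space...
P = [0.1, 0.2, 0.3, 0.4]
R = [0.25, 0.25, 0.25, 0.25]

# ...and a coarser two-cell partition obtained by merging {0,1} and {2,3}.
P2 = [P[0] + P[1], P[2] + P[3]]
R2 = [R[0] + R[1], R[2] + R[3]]

# The divergence over the coarser partition cannot exceed the finer one,
# consistent with the supremum being taken over all finite partitions.
assert kl(P2, R2) <= kl(P, R)
```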
|
0911.1965
|
Active Learning for Mention Detection: A Comparison of Sentence
Selection Strategies
|
cs.CL cs.AI
|
We propose and compare various sentence selection strategies for active
learning for the task of detecting mentions of entities. The best strategy
employs the sum of confidences of two statistical classifiers trained on
different views of the data. Our experimental results show that, compared to
the random selection strategy, this strategy reduces the amount of required
labeled training data by over 50% while achieving the same performance. The
effect is even more significant when only named mentions are considered: the
system achieves the same performance by using only 42% of the training data
required by the random selection strategy.
|
0911.2022
|
Interference Channels With Arbitrarily Correlated Sources
|
cs.IT math.IT
|
Communicating arbitrarily correlated sources over interference channels is
considered in this paper. A sufficient condition is found for the lossless
transmission of a pair of correlated sources over a discrete memoryless
interference channel. With independent sources, the sufficient condition
reduces to the Han-Kobayashi achievable rate region for the interference
channel. For a special correlation structure (in the sense of Slepian-Wolf,
1973), the proposed region reduces to the known achievable region for
interference channels with common information. A simple example is given to
show that the separation approach, with Slepian-Wolf encoding followed by
optimal channel coding, is strictly suboptimal.
|
0911.2023
|
Opportunistic capacity and error exponent regions for compound channel
with feedback
|
cs.IT math.IT
|
Variable length communication over a compound channel with feedback is
considered. Traditionally, capacity of a compound channel without feedback is
defined as the maximum rate that is determined before the start of
communication such that communication is reliable. This traditional definition
is pessimistic. In the presence of feedback, an opportunistic definition is
given. Capacity is defined as the maximum rate that is determined at the end of
communication such that communication is reliable. Thus, the transmission rate
can adapt to the channel chosen by nature. Under this definition, feedback
communication over a compound channel is conceptually similar to multi-terminal
communication. Transmission rate is a vector rather than a scalar; channel
capacity is a region rather than a scalar; error exponent is a region rather
than a scalar. In this paper, variable length communication over a compound
channel with feedback is formulated, its opportunistic capacity region is
characterized, and lower bounds for its error exponent region are provided.
|
0911.2053
|
Interference Mitigation Through Limited Receiver Cooperation
|
cs.IT math.IT
|
Interference is a major issue limiting the performance in wireless networks.
Cooperation among receivers can help mitigate interference by forming
distributed MIMO systems. The rate at which receivers cooperate, however, is
limited in most scenarios. How much interference can one bit of receiver
cooperation mitigate? In this paper, we study the two-user Gaussian
interference channel with conferencing decoders to answer this question in a
simple setting. We identify two regions regarding the gain from receiver
cooperation: linear and saturation regions. In the linear region receiver
cooperation is efficient and provides a degrees-of-freedom gain: either one
cooperation bit buys one more bit of rate, or two cooperation bits buy one
more bit, until saturation. In the saturation region receiver cooperation is
inefficient and provides only a power gain, which is at most a constant
regardless of the rate at which receivers cooperate. The conclusion is drawn
from a characterization of the capacity region to within two bits. The proposed strategy
consists of two parts: (1) the transmission scheme, where superposition
encoding with a simple power split is employed, and (2) the cooperative
protocol, where one receiver quantize-bin-and-forwards its received signal,
and the other, after receiving the side information, decode-bin-and-forwards
its received signal.
|
0911.2197
|
On the relation between plausibility logic and the maximum-entropy
principle: a numerical study
|
math.PR cs.IT math.IT physics.data-an
|
What is the relationship between plausibility logic and the principle of
maximum entropy? When does the principle give unreasonable or wrong results?
When is it appropriate to use the rule `expectation = average'? Can
plausibility logic give the same answers as the principle, and better answers
if those of the principle are unreasonable? To try to answer these questions,
this study offers a numerical collection of plausibility distributions given by
the maximum-entropy principle and by plausibility logic for a set of fifteen
simple problems: throwing dice.
|
0911.2258
|
Discrete Hamilton-Jacobi Theory
|
math.OC cs.SY
|
We develop a discrete analogue of Hamilton-Jacobi theory in the framework of
discrete Hamiltonian mechanics. The resulting discrete Hamilton-Jacobi equation
is discrete only in time. We describe a discrete analogue of Jacobi's solution
and also prove a discrete version of the geometric Hamilton-Jacobi theorem. The
theory applied to discrete linear Hamiltonian systems yields the discrete
Riccati equation as a special case of the discrete Hamilton-Jacobi equation. We
also apply the theory to discrete optimal control problems, and recover some
well-known results, such as the Bellman equation (discrete-time HJB equation)
of dynamic programming and its relation to the costate variable in the
Pontryagin maximum principle. This relationship between the discrete
Hamilton-Jacobi equation and Bellman equation is exploited to derive a
generalized form of the Bellman equation that has controls at internal stages.
|
0911.2280
|
PageRank Optimization by Edge Selection
|
cs.DS cs.CC cs.SI
|
The importance of a node in a directed graph can be measured by its PageRank.
The PageRank of a node is used in a number of application contexts - including
ranking websites - and can be interpreted as the average fraction of time spent
at the node by an infinite random walk. We consider the problem of maximizing
the PageRank of a node by selecting some of the edges from a set of edges that
are under our control. By applying results from Markov decision theory, we show
that an optimal solution to this problem can be found in polynomial time. Our
core solution results in a linear programming formulation, but we also provide
an alternative greedy algorithm, a variant of policy iteration, which runs in
polynomial time, as well. Finally, we show that, under the slight modification
for which we are given mutually exclusive pairs of edges, the problem of
PageRank optimization becomes NP-hard.
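The random-walk interpretation of PageRank can be sketched with plain power iteration; the toy graph and "controlled" edge below are illustrative, and this is not the paper's Markov-decision-theory or linear-programming formulation:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration for PageRank on a directed graph given as adjacency lists."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Activating an extra edge c -> b that is under our control raises b's PageRank.
base = {"a": ["b"], "b": ["a"], "c": ["a"]}
with_edge = {"a": ["b"], "b": ["a"], "c": ["a", "b"]}
assert pagerank(with_edge)["b"] > pagerank(base)["b"]
```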
|
0911.2284
|
A New Look at the Classical Entropy of Written English
|
cs.CL
|
A simple method for finding the entropy and redundancy of a reasonably long
sample of English text by direct computer processing and from first principles
according to Shannon theory is presented. As an example, results on the entropy
of the English language have been obtained based on a total of 20.3 million
characters of written English, considering symbols from one to five hundred
characters in length. Besides a more realistic value of the entropy of English,
a new perspective on some classic entropy-related concepts is presented. This
method can also be extended to other Latin languages. Some implications for
practical applications such as plagiarism-detection software, and the minimum
number of words that should be used in social Internet network messaging, are
discussed.
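The first-principles computation described above can be sketched as a block-entropy estimate over character n-grams; the short repeated sample below stands in for the 20.3-million-character corpus:

```python
from collections import Counter
from math import log2

def block_entropy(text, n):
    """Per-character entropy estimate (bits) from n-character block frequencies."""
    blocks = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * log2(c / total) for c in counts.values()) / n

sample = "the quick brown fox jumps over the lazy dog " * 50
# Longer blocks capture more of the language's structure, so the
# per-character estimate decreases with n, approaching the entropy rate.
assert block_entropy(sample, 2) <= block_entropy(sample, 1)
```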
|
0911.2323
|
A Type System for Required/Excluded Elements in CLS
|
cs.LO cs.CE
|
The calculus of looping sequences is a formalism for describing the evolution
of biological systems by means of term rewriting rules. We enrich this calculus
with a type discipline to guarantee the soundness of reduction rules with
respect to some biological properties deriving from the requirement of certain
elements, and the repellency of others. As an example, we model a toy system
where the repellency of a certain element is captured by our type system and
forbids another element to exit a compartment.
|
0911.2324
|
Deterministic Autopoietic Automata
|
cs.NE cs.FL
|
This paper studies two issues related to the paper on Computing by
Self-reproduction: Autopoietic Automata by Jiri Wiedermann. It is shown that
all results presented there extend to deterministic computations. In
particular, nondeterminism is not needed for a lineage to generate all
autopoietic automata.
|
0911.2327
|
An Intuitive Automated Modelling Interface for Systems Biology
|
cs.PL cs.CE cs.LO q-bio.QM
|
We introduce a natural language interface for building stochastic pi calculus
models of biological systems. In this language, complex constructs describing
biochemical events are built from basic primitives of association, dissociation
and transformation. This language thus allows us to model biochemical systems
modularly by describing their dynamics in a narrative-style language, while
making amendments, refinements and extensions on the models easy. We
demonstrate the language on a model of Fc-gamma receptor phosphorylation during
phagocytosis. We provide a tool implementation of the translation into a
stochastic pi calculus language, Microsoft Research's SPiM.
|
0911.2330
|
Diffusion Controlled Reactions, Fluctuation Dominated Kinetics, and
Living Cell Biochemistry
|
cs.CE cs.OH q-bio.QM
|
In recent years a considerable portion of the computer science community has
focused its attention on understanding living cell biochemistry, and efforts to
understand such a complicated reaction environment have spread over a wide
front, ranging from systems biology approaches, through network analysis (motif
identification), towards developing languages and simulators for low-level
biochemical processes. Apart from simulation work, much of the effort is
directed to using mean field equations (equivalent to the equations of
classical chemical kinetics) to address various problems (stability,
robustness, sensitivity analysis, etc.). Rarely is the use of mean field
equations questioned. This review provides a brief overview of the situations
in which mean field equations fail and should not be used. These equations can
be derived from the theory of diffusion controlled reactions, and emerge when
the assumption of perfect mixing is made.
|
0911.2346
|
Asymmetric Multilevel Diversity Coding and Asymmetric Gaussian Multiple
Descriptions
|
cs.IT math.IT
|
We consider the asymmetric multilevel diversity (A-MLD) coding problem, where
a set of $2^K-1$ information sources, ordered in a decreasing level of
importance, is encoded into $K$ messages (or descriptions). There are $2^K-1$
decoders, each of which has access to a non-empty subset of the encoded
messages. Each decoder is required to reproduce the information sources up to a
certain importance level depending on the combination of descriptions available
to it. We obtain a single letter characterization of the achievable rate region
for the 3-description problem. In contrast to symmetric multilevel diversity
coding, source-separation coding is not sufficient in the asymmetric case, and
ideas akin to network coding need to be used strategically. Based on the
intuitions gained in treating the A-MLD problem, we derive inner and outer
bounds for the rate region of the asymmetric Gaussian multiple description (MD)
problem with three descriptions. Both the inner and outer bounds have a similar
geometric structure to the rate region template of the A-MLD coding problem,
and moreover, we show that the gap between them is small, which results in an
approximate characterization of the asymmetric Gaussian three description rate
region.
|
0911.2381
|
Analytical Determination of Fractal Structure in Stochastic Time Series
|
physics.data-an cond-mat.stat-mech cs.LG nlin.CD stat.ME
|
Current methods for determining whether a time series exhibits fractal
structure (FS) rely on subjective assessments of estimators of the Hurst
exponent (H). Here, I introduce the Bayesian Assessment of Scaling, an
analytical framework for drawing objective and accurate inferences on the FS of
time series. The technique exploits the scaling property of the diffusion
associated with a time series. The resulting criterion is simple to compute and
represents an accurate characterization of the evidence supporting different
hypotheses on the scaling regime of a time series. Additionally, a closed-form
Maximum Likelihood estimator of H is derived from the criterion, and this
estimator outperforms the best available estimators.
|
0911.2390
|
How Creative Should Creators Be To Optimize the Evolution of Ideas? A
Computational Model
|
cs.AI cs.NE physics.soc-ph
|
There are both benefits and drawbacks to creativity. In a social group it is
not necessary for all members to be creative to benefit from creativity; some
merely imitate or enjoy the fruits of others' creative efforts. What proportion
should be creative? This paper contains a very preliminary investigation of
this question carried out using a computer model of cultural evolution referred
to as EVOC (for EVOlution of Culture). EVOC is composed of neural network based
agents that evolve fitter ideas for actions by (1) inventing new ideas through
modification of existing ones, and (2) imitating neighbors' ideas. The ideal
proportion with respect to fitness of ideas occurs when thirty to forty percent
of the individuals are creative. When creators invent on 50% of iterations
or less, the mean fitness of actions in the society is a positive function of
the ratio of creators to imitators; otherwise the mean fitness of actions
starts to drop when the ratio of creators to imitators exceeds approximately
30%. For all levels of creativity, the diversity of ideas in a population is positively
correlated with the ratio of creative agents.
|
0911.2405
|
Emotion: Appraisal-coping model for the "Cascades" problem
|
cs.AI
|
Modelling emotion has become a significant challenge, and several models
have been produced to express human emotional activity. However, only
a few of them are currently able to express the close relationship existing
between emotion and cognition. An appraisal-coping model is presented here,
with the aim of simulating the emotional impact caused by the evaluation of a
particular situation (appraisal), along with the consequent cognitive reaction
intended to face the situation (coping). This model is applied to the
"Cascades" problem, a small arithmetical exercise designed for ten-year-old
pupils. The goal is to create a model corresponding to a child's behaviour when
solving the problem using his own strategies.
|
0911.2501
|
Emotion: An appraisal-coping model for the "Cascades" problem
|
cs.AI
|
Modeling emotion has become a significant challenge, and several models
have been produced to express human emotional activity. However, only
a few of them are currently able to express the close relationship existing
between emotion and cognition. An appraisal-coping model is presented here,
with the aim of simulating the emotional impact caused by the evaluation of a
particular situation (appraisal), along with the consequent cognitive reaction
intended to face the situation (coping). This model is applied to the
"Cascades" problem, a small arithmetical exercise designed for ten-year-old
pupils. The goal is to create a model corresponding to a child's behavior when
solving the problem using his own strategies.
|
0911.2551
|
Minimax Robust Quickest Change Detection
|
cs.IT math.IT math.ST stat.TH
|
The popular criteria of optimality for quickest change detection procedures
are the Lorden criterion, the Shiryaev-Roberts-Pollak criterion, and the
Bayesian criterion. In this paper a robust version of these quickest change
detection problems is considered when the pre-change and post-change
distributions are not known exactly but belong to known uncertainty classes of
distributions. For uncertainty classes that satisfy a specific condition, it is
shown that one can identify least favorable distributions (LFDs) from the
uncertainty classes, such that the detection rule designed for the LFDs is
optimal for the robust problem in a minimax sense. The condition is similar to
that required for the identification of LFDs for the robust hypothesis testing
problem originally studied by Huber. An upper bound on the delay incurred by
the robust test is also obtained in the asymptotic setting under the Lorden
criterion of optimality. This bound quantifies the delay penalty incurred to
guarantee robustness. When the LFDs can be identified, the proposed test is
easier to implement than the CUSUM test based on the Generalized Likelihood
Ratio (GLR) statistic which is a popular approach for such robust change
detection problems. The proposed test is also shown to give better performance
than the GLR test in simulations for some parameter values.
|
0911.2564
|
Distributed Coalition Formation Games for Secure Wireless Transmission
|
cs.IT math.IT
|
Cooperation among wireless nodes has been recently proposed for improving the
physical layer (PHY) security of wireless transmission in the presence of
multiple eavesdroppers. While the existing PHY security literature has answered
the question "what are the link-level secrecy rate gains from cooperation?",
this paper attempts to answer the question of "how can those gains be achieved
in a practical decentralized wireless network and in the presence of a cost for
information exchange?". For this purpose, we model the PHY security
cooperation problem as a coalitional game with non-transferable utility and
propose a distributed algorithm for coalition formation. Through the proposed
algorithm, the wireless users can cooperate and self-organize into disjoint
independent coalitions, while maximizing their secrecy rate taking into account
the security costs during information exchange. We analyze the resulting
coalitional structures for both decode-and-forward and amplify-and-forward
cooperation and study how the users can adapt the network topology to
environmental changes such as mobility. Through simulations, we assess the
performance of the proposed algorithm and show that, by coalition formation
using decode-and-forward, the average secrecy rate per user increases by up to
25.3% and 24.4% (for a network with 45 users) relative to the non-cooperative
and amplify-and-forward cases, respectively.
|
0911.2632
|
Measuring contextual citation impact of scientific journals
|
cs.DL cs.IR
|
This paper explores a new indicator of journal citation impact, denoted as
source normalized impact per paper (SNIP). It measures a journal's contextual
citation impact, taking into account characteristics of its properly defined
subject field, especially the frequency at which authors cite other papers in
their reference lists, the rapidity of maturing of citation impact, and the
extent to which a database used for the assessment covers the field's
literature. It further develops Eugene Garfield's notions of a field's
'citation potential', defined as the average length of reference lists in a
field and determining the probability of being cited, and of the need in fair
performance assessments to correct for differences between subject fields. A
journal's subject field is defined as the set of papers citing that journal.
SNIP is defined as the ratio of the journal's citation count per paper and the
citation potential in its subject field. It aims to allow direct comparison of
sources in different subject fields. Citation potential is shown to vary not
only between journal subject categories - groupings of journals sharing a
research field - or disciplines (e.g., journals in mathematics, engineering and
social sciences tend to have lower values than titles in life sciences), but
also between journals within the same subject category. For instance, basic
journals tend to show higher citation potentials than applied or clinical
journals, and journals covering emerging topics higher than periodicals in
classical subjects or more general journals. SNIP corrects for such
differences. Its strengths and limitations are critically discussed, and
suggestions are made for further research. All empirical results are derived
from Elsevier's Scopus.
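The core ratio is simple to illustrate. The toy sketch below divides a journal's citations per paper by a citation potential taken as the mean reference-list length of its citing papers, optionally normalized by a database-wide median. The actual SNIP definition involves specific citation windows and counting rules, so the parameters here are assumptions for illustration only.

```python
def snip(journal_citations, journal_papers, field_reference_list_lengths,
         database_median_potential=1.0):
    """Sketch of a SNIP-style indicator: raw impact per paper divided
    by the field's (relative) citation potential, where the field is
    the set of papers citing the journal."""
    raw_impact = journal_citations / journal_papers
    # citation potential: mean reference-list length in the field
    citation_potential = (sum(field_reference_list_lengths)
                          / len(field_reference_list_lengths))
    relative_potential = citation_potential / database_median_potential
    return raw_impact / relative_potential
```

A journal cited mostly by papers with long reference lists (a high-potential field) is thus discounted relative to one cited from a low-potential field.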
|
0911.2746
|
Model Selection: Two Fundamental Measures of Coherence and Their
Algorithmic Significance
|
cs.IT math.IT math.ST stat.TH
|
The problem of model selection arises in a number of contexts, such as
compressed sensing, subset selection in linear regression, estimation of
structures in graphical models, and signal denoising. This paper generalizes
the notion of \emph{incoherence} in the existing literature on model selection
and introduces two fundamental measures of coherence---termed as the worst-case
coherence and the average coherence---among the columns of a design matrix. In
particular, it utilizes these two measures of coherence to provide an in-depth
analysis of a simple one-step thresholding (OST) algorithm for model selection.
One of the key insights offered by the ensuing analysis is that OST is feasible
for model selection as long as the design matrix obeys an easily verifiable
property. In addition, the paper also characterizes the model-selection
performance of OST in terms of the worst-case coherence, \mu, and establishes
that OST performs near-optimally in the low signal-to-noise ratio regime for N
x C design matrices with \mu = O(N^{-1/2}). Finally, in contrast to some of the
existing literature on model selection, the analysis in the paper is
nonasymptotic in nature, it does not require knowledge of the true model order,
it is applicable to generic (random or deterministic) design matrices, and it
neither requires submatrices of the design matrix to have full rank, nor does
it assume a statistical prior on the values of the nonzero entries of the data
vector.
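The one-step thresholding rule itself amounts to a single matrix-vector product followed by a comparison; a minimal sketch (variable names are ours):

```python
import numpy as np

def ost_support(X, y, threshold):
    """One-step thresholding (OST) for model selection: keep the
    columns of the design matrix X whose absolute correlation with
    the observation vector y exceeds the threshold."""
    correlations = np.abs(X.T @ y)
    return np.flatnonzero(correlations > threshold)
```

The paper's analysis concerns how to choose the threshold and when this simple rule recovers the true model; the sketch only shows the mechanics of the estimator.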
|
0911.2784
|
On Bregman Distances and Divergences of Probability Measures
|
cs.IT math.IT math.PR math.ST stat.TH
|
The paper introduces scaled Bregman distances of probability distributions
which admit non-uniform contributions of observed events. They are introduced
in a general form covering not only the distances of discrete and continuous
stochastic observations, but also the distances of random processes and
signals. It is shown that the scaled Bregman distances extend not only the
classical ones studied in the previous literature, but also the information
divergence and the related wider class of convex divergences of probability
measures. An information processing theorem is established too, but only in the
sense of invariance w.r.t. statistically sufficient transformations and not in
the sense of universal monotonicity. Pathological situations where coding can
increase the classical Bregman distance are illustrated by a concrete example.
In addition to the classical areas of application of the Bregman distances and
convex divergences such as recognition, classification, learning and evaluation
of proximity of various features and signals, the paper mentions a new
application in 3D-exploratory data analysis. Explicit expressions for the
scaled Bregman distances are obtained in general exponential families, with
concrete applications in the binomial, Poisson and Rayleigh families, and in
the families of exponential processes such as the Poisson and diffusion
processes including the classical examples of the Wiener process and geometric
Brownian motion.
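For reference, the classical Bregman distance generated by a convex function $\phi$, and one common form of the scaled variant with scaling measure density $m$ (notation is ours; measurability and integrability details are omitted):

```latex
B_\phi(p, q) \;=\; \int \Big[\, \phi(p) - \phi(q) - \phi'(q)\,(p - q) \,\Big]\, d\mu ,
\qquad
B_\phi(P, Q \mid M) \;=\; \int m \Big[\, \phi\!\Big(\tfrac{p}{m}\Big)
  - \phi\!\Big(\tfrac{q}{m}\Big)
  - \phi'\!\Big(\tfrac{q}{m}\Big)\Big(\tfrac{p}{m} - \tfrac{q}{m}\Big) \Big]\, d\mu
```

Taking $m = q$ in the scaled form recovers, for suitable $\phi$, the convex divergences mentioned above, which is the sense in which the scaled distances extend both families.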
|
0911.2829
|
Proceedings Fifth Workshop on Developments in Computational
Models--Computational Models From Nature
|
cs.CE cs.AI cs.CC cs.FL cs.LO cs.NE cs.PL
|
The special theme of DCM 2009, co-located with ICALP 2009, concerned
Computational Models From Nature, with a particular emphasis on computational
models derived from physics and biology. The intention was to bring together
different approaches - in a community with a strong foundational background as
proffered by the ICALP attendees - to create inspirational cross-boundary
exchanges, and to lead to innovative further research. Specifically DCM 2009
sought contributions in quantum computation and information, probabilistic
models, chemical, biological and bio-inspired ones, including spatial models,
growth models and models of self-assembly. Contributions putting to the test
logical or algorithmic aspects of computing (e.g., continuous computing with
dynamical systems, or solid state computing models) were also very much
welcomed.
|
0911.2847
|
Cooperative Precoding/Resource Allocation Games under Spectral Mask and
Total Power Constraints
|
cs.IT math.IT
|
The use of orthogonal signaling schemes such as time-, frequency-, or
code-division multiplexing (T-, F-, CDM) in multi-user systems allows for
power-efficient simple receivers. It is shown in this paper that by using
orthogonal signaling on frequency selective fading channels, the cooperative
Nash bargaining (NB)-based precoding games for multi-user systems, which aim at
maximizing the information rates of all users, are simplified to the
corresponding cooperative resource allocation games. The latter provides
additional practically desired simplifications to transmitter design and
significantly reduces the overhead during user cooperation. The complexity of
the corresponding precoding/resource allocation games, however, depends on the
constraints imposed on the users. If only spectral mask constraints are
present, the corresponding cooperative NB problem can be formulated as a convex
optimization problem and solved efficiently in a distributed manner using a
dual-decomposition-based algorithm. However, the NB problem is non-convex if
total power constraints are also imposed on the users. In this case, the
complexity associated with finding the NB solution is unacceptably high.
Therefore, the multi-user systems are categorized as bandwidth- or
power-dominant based on the bottleneck resource, and different manners of
cooperation are developed for each type of system in the two-user case. This
classification guarantees
that the solution obtained in each case is Pareto-optimal and actually can be
identical to the optimal solution, while the complexity is significantly
reduced. Simulation results demonstrate the efficiency of the proposed
cooperative precoding/resource allocation strategies and the reduced complexity
of the proposed algorithms.
|
0911.2865
|
Neural Networks for Dynamic Shortest Path Routing Problems - A Survey
|
cs.NE cs.AI
|
This paper presents an overview of the dynamic shortest path routing problem
and of the various neural networks used to solve it. Different shortest path
optimization problems can be solved by various neural network algorithms.
Routing in packet-switched multi-hop networks can be described as a classical
combinatorial optimization problem, i.e., a shortest path routing problem in
graphs. The survey shows that neural networks are the best candidates for the
optimization of dynamic shortest path routing problems due to their
computational speed compared with other soft-computing and metaheuristic
algorithms.
|
0911.2873
|
Relating Granger causality to directed information theory for networks
of stochastic processes
|
cs.IT math.IT
|
This paper addresses the problem of inferring circulation of information
between multiple stochastic processes. We discuss two possible frameworks in
which the problem can be studied: directed information theory and Granger
causality. The main goal of the paper is to study the connection between these
two frameworks. In the case of directed information theory, we stress the
importance of Kramer's causal conditioning. This type of conditioning is
necessary not only in the definition of the directed information but also for
handling causal side information. We also show how directed information
decomposes into the sum of two measures, the first one related to Schreiber's
transfer entropy quantifies the dynamical aspects of causality, whereas the
second one, termed instantaneous information exchange, quantifies the
instantaneous aspect of causality. After having recalled the definition of
Granger causality, we establish its connection with directed information
theory. The connection is particularly studied in the Gaussian case, showing
that Geweke's measures of Granger causality correspond to the transfer entropy
and the instantaneous information exchange. This allows us to propose an
information-theoretic formulation of Granger causality.
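The decomposition described above follows from the chain rule for mutual information: Massey's directed information splits term by term into a transfer-entropy part and an instantaneous part (notation is ours; the causal-conditioning refinements for side information are not shown):

```latex
I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I(X^i;\, Y_i \mid Y^{i-1})
\;=\; \underbrace{\sum_{i=1}^{n} I(X^{i-1};\, Y_i \mid Y^{i-1})}_{\text{transfer entropy}}
\;+\; \underbrace{\sum_{i=1}^{n} I(X_i;\, Y_i \mid X^{i-1}, Y^{i-1})}_{\text{instantaneous exchange}}
```

Each summand is split with $I(X^i; Y_i \mid Y^{i-1}) = I(X^{i-1}; Y_i \mid Y^{i-1}) + I(X_i; Y_i \mid X^{i-1}, Y^{i-1})$, which is exactly the chain rule applied to $(X^{i-1}, X_i)$.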
|
0911.2889
|
Global communications in multiprocessor simulations of flames
|
cs.DC cs.CE cs.MS cs.PF
|
In this paper we investigate performance of global communications in a
particular parallel code. The code simulates dynamics of expansion of premixed
spherical flames using an asymptotic model of Sivashinsky type and a spectral
numerical algorithm. As a result, the code heavily relies on global all-to-all
interprocessor communications implementing transposition of the distributed
data array in which numerical solution to the problem is stored. This global
data interdependence makes interprocessor connectivity of the HPC system as
important as the floating-point power of the processors of which the system is
built. Our experiments show that efficient numerical simulation of this
particular model, with global data interdependence, on modern HPC systems is
possible. Prospects of performance of more sophisticated models of flame
dynamics are analysed as well.
|
0911.2900
|
Computation Speed of the F.A.S.T. Model
|
cs.MA physics.soc-ph
|
The F.A.S.T. model for microscopic simulation of pedestrians was formulated
with the idea of parallelizability and small computation times in general in
mind, but so far it had never been demonstrated whether it can in fact be
implemented efficiently for execution on a multi-core or multi-CPU system. In
this contribution, results are given on computation times for the F.A.S.T. model on
an eight-core PC.
|
0911.2902
|
Simulation of Pedestrians Crossing a Street
|
cs.MA
|
The simulation of vehicular traffic as well as pedestrian dynamics meanwhile
both have a decades long history. The success of this conference series, PED
and others show that the interest in these topics is still strongly increasing.
This contribution deals with a combination of both systems: pedestrians
crossing a street. In a VISSIM simulation, jam sizes of vehicles as well as of
pedestrians and the travel times of the pedestrians are measured and compared
for varying demand. The study is primarily a study of VISSIM's conflict-area
functionality as such, as there is no empirical data available for calibration.
Above a vehicle demand threshold, the results show a non-monotonic dependence
of pedestrians' travel time on pedestrian demand.
|
0911.2904
|
Sequential anomaly detection in the presence of noise and limited
feedback
|
cs.LG
|
This paper describes a methodology for detecting anomalies from sequentially
observed and potentially noisy data. The proposed approach consists of two main
elements: (1) {\em filtering}, or assigning a belief or likelihood to each
successive measurement based upon our ability to predict it from previous noisy
observations, and (2) {\em hedging}, or flagging potential anomalies by
comparing the current belief against a time-varying and data-adaptive
threshold. The threshold is adjusted based on the available feedback from an
end user. Our algorithms, which combine universal prediction with recent work
on online convex programming, do not require computing posterior distributions
given all current observations and involve simple primal-dual parameter
updates. At the heart of the proposed approach lie exponential-family models
which can be used in a wide variety of contexts and applications, and which
yield methods that achieve sublinear per-round regret against both static and
slowly varying product distributions with marginals drawn from the same
exponential family. Moreover, the regret against static distributions coincides
with the minimax value of the corresponding online strongly convex game. We
also prove bounds on the number of mistakes made during the hedging step
relative to the best offline choice of the threshold with access to all
estimated beliefs and feedback signals. We validate the theory on synthetic
data drawn from a time-varying distribution over binary vectors of high
dimensionality, as well as on the Enron email dataset.
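A stripped-down rendition of the filter-and-hedge loop, with the threshold nudged by user feedback (+1 for a confirmed anomaly, -1 for a false alarm). The update rule and constants are illustrative assumptions, not the paper's primal-dual scheme:

```python
def hedge(beliefs, feedback, tau0=0.5, eta=0.1):
    """Flag a potential anomaly whenever the current belief falls
    below a data-adaptive threshold; after each flag, move the
    threshold using the end user's feedback label."""
    tau, flags = tau0, []
    for belief, label in zip(beliefs, feedback):
        flagged = belief < tau
        flags.append(flagged)
        if flagged:
            # raise the threshold after a confirmed anomaly (+1),
            # lower it after a false alarm (-1)
            tau += eta * label
    return flags
```

The paper's algorithm replaces the beliefs with universal-prediction likelihoods and the threshold update with an online convex programming step, for which the regret and mistake bounds are proved.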
|
0911.2922
|
Sparse Eigenvectors of the Discrete Fourier Transform
|
cs.IT math.IT
|
We construct a basis of sparse eigenvectors for the N-dimensional discrete
Fourier transform. The sparsity differs from the optimal by at most a factor of
four. When N is a perfect square, the basis is orthogonal.
|
0911.2942
|
Breaching Euclidean Distance-Preserving Data Perturbation Using Few
Known Inputs
|
cs.DB cs.CR
|
We examine Euclidean distance-preserving data perturbation as a tool for
privacy-preserving data mining. Such perturbations allow many important data
mining algorithms (e.g., hierarchical and k-means clustering), with only minor
modification, to be applied to the perturbed data and produce exactly the same
results as if applied to the original data. However, the issue of how well the
privacy of the original data is preserved needs careful study. We engage in
this study by assuming the role of an attacker armed with a small set of known
original data tuples (inputs). Little work has been done examining this kind of
attack when the number of known original tuples is less than the number of data
dimensions. We focus on this important case, develop and rigorously analyze an
attack that utilizes any number of known original tuples. The approach allows
the attacker to estimate the original data tuple associated with each perturbed
tuple and calculate the probability that the estimation results in a privacy
breach. On a real 16-dimensional dataset, we show that the attacker, with 4
known original tuples, can estimate an original unknown tuple with less than 7%
error with probability exceeding 0.8.
|
0911.2948
|
Spatial Analysis of Opportunistic Downlink Relaying in a Two-Hop
Cellular System
|
cs.IT cs.NI math.IT stat.ME
|
We consider a two-hop cellular system in which the mobile nodes help the base
station by relaying information to the dead spots. While two-hop cellular
schemes have been analyzed previously, the distribution of the node locations
has not been explicitly taken into account. In this paper, we model the node
locations of the base stations and the mobile stations as a point process on
the plane and then analyze the performance of two different two-hop schemes in
the downlink. In one scheme the node nearest to the destination that has
decoded information from the base station in the first hop is used as the
relay. In the second scheme the node with the best channel to the relay that
received information in the first hop acts as a relay. In both these schemes we
obtain the success probability of the two hop scheme, accounting for the
interference from all other cells. We use tools from stochastic geometry and
point process theory to analyze the two-hop schemes. Besides the specific
results obtained, a main contribution of the paper is a mathematical framework
that can be used to analyze arbitrary relaying schemes, in particular the
analytical techniques introduced for including the spatial locations of the
nodes in the mathematical analysis.
|
0911.2952
|
Cooperative Feedback for Multi-Antenna Cognitive Radio Networks
|
cs.IT math.IT
|
Cognitive beamforming (CB) is a multi-antenna technique for efficient
spectrum sharing between primary users (PUs) and secondary users (SUs) in a
cognitive radio network. Specifically, a multi-antenna SU transmitter applies
CB to suppress the interference to the PU receivers as well as enhance the
corresponding SU-link performance. In this paper, for a
multiple-input-single-output (MISO) SU channel coexisting with a
single-input-single-output (SISO) PU channel, we propose a new and practical
paradigm for designing CB based on the finite-rate cooperative feedback from
the PU receiver to the SU transmitter. Specifically, the PU receiver
communicates to the SU transmitter the quantized SU-to-PU channel direction
information (CDI) for computing the SU transmit beamformer, and the
interference power control (IPC) signal that regulates the SU transmission
power according to the tolerable interference margin at the PU receiver. Two CB
algorithms based on cooperative feedback are proposed: one restricts the SU
transmit beamformer to be orthogonal to the quantized SU-to-PU channel
direction and the other relaxes such a constraint. In addition, cooperative
feedforward of the SU CDI from the SU transmitter to the PU receiver is
exploited to allow more efficient cooperative feedback. The outage
probabilities of the SU link for different CB and cooperative
feedback/feedforward algorithms are analyzed, from which the optimal
bit-allocation tradeoff between the CDI and IPC feedback is characterized.
|
0911.2974
|
A Dynamic Near-Optimal Algorithm for Online Linear Programming
|
cs.DS cs.LG
|
A natural optimization model that formulates many online resource allocation
and revenue management problems is the online linear program (LP) in which the
constraint matrix is revealed column by column along with the corresponding
objective coefficient. In such a model, a decision variable has to be set each
time a column is revealed without observing the future inputs and the goal is
to maximize the overall objective function. In this paper, we provide a
near-optimal algorithm for this general class of online problems under the
assumption of random order of arrival and some mild conditions on the size of
the LP right-hand-side input. Specifically, our learning-based algorithm works
by dynamically updating a threshold price vector at geometric time intervals,
where the dual prices learned from the revealed columns in the previous period
are used to determine the sequential decisions in the current period. Thanks
to this dynamic learning, the competitiveness of our algorithm improves on
past studies of the same problem. We also present a worst-case example
showing that the performance of our algorithm is near-optimal.
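A toy one-dimensional rendition of the threshold-price idea: the paper's algorithm handles general LPs and re-learns dual prices at geometric time intervals, whereas this sketch learns a single price once from an initial sample of (profit, consumption) columns. All names and the single-resource simplification are our assumptions.

```python
def threshold_price(sample, share):
    """Dual price from observed columns: rank by profit-to-consumption
    ratio and return the first ratio that would overflow the sample's
    proportional budget share (0.0 if the whole sample fits)."""
    ranked = sorted(sample, key=lambda c: c[0] / c[1], reverse=True)
    used = 0.0
    for profit, amount in ranked:
        if used + amount > share:
            return profit / amount
        used += amount
    return 0.0

def online_lp(columns, budget, learn_frac=0.2):
    """Observe the first learn_frac of the columns without accepting,
    learn a price, then greedily accept later columns that beat it
    while the budget lasts."""
    n = len(columns)
    m = max(1, int(learn_frac * n))
    price = threshold_price(columns[:m], budget * m / n)
    remaining, total = budget, 0.0
    for profit, amount in columns[m:]:
        if profit / amount > price and amount <= remaining:
            remaining -= amount
            total += profit
    return total
```

Re-estimating the price at geometric intervals, as in the paper, is what lets the bound improve as more columns are revealed; the single-update version above only illustrates the price-threshold decision rule.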
|
0911.3108
|
On game psychology: an experiment on the chess board/screen, should you
always "do your best", and why the programs with prescribed weaknesses cannot
be our good friends?
|
cs.AI cs.GT math.HO
|
It is noted that some unusual moves against a strong chess program greatly
weaken its ability to see the serious targets of the game, and its whole level
of play... It is suggested to create programs with different weaknesses in
order to analyze similar human behavior. Finally, a new version of chess,
"Chess Corrida" is suggested.
|
0911.3125
|
A computational model of the bottlenose dolphin sonar:
Feature-extracting method
|
cs.CE
|
The data describing a process of echo-image formation in bottlenose dolphin
sonar perception were accumulated in our experimental explorations. These data
were formalized mathematically and used in the computational model,
comparative testing of which in echo-discrimination tasks revealed
capabilities no less than those of bottlenose dolphins.
|
0911.3209
|
Apply Ant Colony Algorithm to Search All Extreme Points of Function
|
cs.AI cs.NE
|
To find all extreme points of a multimodal function is called the extremum
problem, a well-known difficult issue in optimization. Applying ant colony
optimization (ACO) to solve this problem has rarely been reported. A method of
applying ACO to the extremum problem is explored in this paper. Experiments
show that the solution error of the method presented here is less than 10^-8.
keywords: Extremum Problem; Ant Colony Optimization (ACO)
|
0911.3213
|
Optimum estimation via gradients of partition functions and information
measures: a statistical-mechanical perspective
|
cs.IT math.IT
|
In continuation to a recent work on the statistical--mechanical analysis of
minimum mean square error (MMSE) estimation in Gaussian noise via its relation
to the mutual information (the I-MMSE relation), here we propose a simple and
more direct relationship between optimum estimation and certain information
measures (e.g., the information density and the Fisher information), which can
be viewed as partition functions and hence are amenable to analysis using
statistical--mechanical techniques. The proposed approach has several
advantages, most notably, its applicability to general sources and channels, as
opposed to the I-MMSE relation and its variants which hold only for certain
classes of channels (e.g., additive white Gaussian noise channels). We then
demonstrate the derivation of the conditional mean estimator and the MMSE in a
few examples. Two of these examples turn out to be generalizable to a fairly
wide class of sources and channels. For this class, the proposed approach is
shown to yield an approximate conditional mean estimator and an MMSE formula
that has the flavor of a single-letter expression. We also show how our
approach can easily be generalized to situations of mismatched estimation.
|
0911.3241
|
Optimal Control in Two-Hop Relay Routing
|
cs.NI cs.IT math.IT
|
We study the optimal control of propagation of packets in delay tolerant
mobile ad-hoc networks. We consider a two-hop forwarding policy under which the
expected number of nodes carrying copies of the packets obeys a linear
dynamics. We exploit this property to formulate the problem in the framework of
linear quadratic optimal control which allows us to obtain closed-form
expressions for the optimal control and to study numerically the tradeoffs by
varying various parameters that define the cost.
|
0911.3256
|
Enumerative Coding for Grassmannian Space
|
cs.IT math.IT
|
The Grassmannian space $\Gr$ is the set of all $k-$dimensional subspaces of
the vector space~\smash{$\F_q^n$}. Recently, codes in the Grassmannian have
found an application in network coding. The main goal of this paper is to
present efficient enumerative encoding and decoding techniques for the
Grassmannian. These coding techniques are based on two different orders for the
Grassmannian induced by different representations of $k$-dimensional subspaces
of $\F_q^n$. One enumerative coding method is based on a Ferrers diagram
representation and on an order for $\Gr$ based on this representation. The
complexity of this enumerative coding is $O(k^{5/2} (n-k)^{5/2})$ digit
operations. Another order of the Grassmannian is based on a combination of an
identifying vector and a reduced row echelon form representation of subspaces.
The complexity of the enumerative coding, based on this order, is
$O(nk(n-k)\log n\log\log n)$ digit operations. A combination of the two
methods reduces the complexity on average by a constant factor.
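Any enumerative scheme for the Grassmannian ranks subspaces from 0 up to the number of $k$-dimensional subspaces of $\F_q^n$ minus one; that count is the Gaussian binomial coefficient, which the following sketch computes (this is background arithmetic, not the paper's encoder):

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n (the size of the
    Grassmannian), i.e. the Gaussian binomial coefficient [n, k]_q
    = prod_{i=0}^{k-1} (q^(n-i) - 1) / (q^(i+1) - 1)."""
    num, den = 1, 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # the quotient is always an integer
```

For example, the number of 2-dimensional subspaces of $\F_2^4$ is 35, so an enumerative code for that Grassmannian needs indices 0 through 34.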
|
0911.3262
|
Moderate-Density Parity-Check Codes
|
cs.IT math.IT
|
We propose a new type of short to moderate block-length, linear
error-correcting codes, called moderate-density parity-check (MDPC) codes. The
number of ones in the parity-check matrix of the codes presented is typically
higher than that of low-density parity-check (LDPC) codes, but still lower
than that of classical block codes. The proposed MDPC codes are cyclic and are
designed by constructing idempotents using cyclotomic cosets. The construction
is simple and allows finding short block-length, high-rate codes with good
minimum distance. Inspired by some recent iterative soft-input soft-output
(SISO) decoders used in the context of classical block codes, we propose a
low-complexity, efficient, iterative decoder called the Auto-Diversity (AD)
decoder. The AD decoder is based on the belief propagation (BP) decoder and
takes advantage of a fundamental property of the automorphism group of the
constructed cyclic code.
|
0911.3280
|
Automated languages phylogeny from Levenshtein distance
|
cs.CL q-bio.PE q-bio.QM
|
Languages evolve over time in a process in which reproduction, mutation and
extinction are all possible, similar to what happens to living organisms. Using
this similarity it is possible, in principle, to build family trees which show
the degree of relatedness between languages.
The method used by modern glottochronology, developed by Swadesh in the
1950s, measures distances from the percentage of words with a common historical
origin. The weak point of this method is that subjective judgment plays a
relevant role.
Recently we proposed an automated method that avoids this subjectivity, whose
results can be replicated by studies that use the same database, and that does
not require specific linguistic knowledge. Moreover, the method allows a quick
comparison of a large number of languages.
We applied our method to the Indo-European and Austronesian families,
considering fifty different languages in both cases.
similar to those of previous studies, but with some important differences in
the position of few languages and subgroups. We believe that these differences
carry new information on the structure of the tree and on the phylogenetic
relationships within families.
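The automated distance described above reduces to a few lines: the standard edit-distance recursion, normalized by the longer word's length and averaged over an aligned Swadesh-style word list. The normalization follows the authors' description; the function names are ours.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def language_distance(list_a, list_b):
    """Average normalized Levenshtein distance between two aligned
    word lists (one word per meaning)."""
    d = [levenshtein(a, b) / max(len(a), len(b))
         for a, b in zip(list_a, list_b)]
    return sum(d) / len(d)
```

The resulting pairwise distance matrix over fifty languages is what the tree-building step consumes.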
|
0911.3292
|
Automated words stability and languages phylogeny
|
cs.CL physics.soc-ph q-bio.PE
|
The idea of measuring distance between languages seems to have its roots in
the work of the French explorer Dumont D'Urville (D'Urville 1832). He collected
comparative words lists of various languages during his voyages aboard the
Astrolabe from 1826 to 1829 and, in his work on the geographical division of
the Pacific, he proposed a method to measure the degree of relation among
languages. The method used by modern glottochronology, developed by Morris
Swadesh in the 1950s (Swadesh 1952), measures distances from the percentage of
shared cognates, which are words with a common historical origin. Recently, we
proposed a new automated method which uses normalized Levenshtein distance
among words with the same meaning and averages on the words contained in a
list. Another classical problem in glottochronology is the study of the
stability of words corresponding to different meanings. Words, in fact, evolve
because of lexical changes, borrowings and replacement at a rate which is not
the same for all of them. The speed of lexical evolution is different for
different meanings and it is probably related to the frequency of use of the
associated words (Pagel et al. 2007). This problem is tackled here by an
automated methodology only based on normalized Levenshtein distance.
|
0911.3298
|
Understanding the Principles of Recursive Neural networks: A Generative
Approach to Tackle Model Complexity
|
cs.NE cs.LG
|
Recursive Neural Networks are non-linear adaptive models that are able to
learn deep structured information. However, these models have not yet been
broadly accepted, mainly because of their inherent complexity: not only are
they extremely complex information-processing models, but their learning phase
is also computationally expensive. The most popular
training method for these models is back-propagation through the structure.
This algorithm has been revealed not to be the most appropriate for structured
processing due to problems of convergence, while more sophisticated training
methods enhance the speed of convergence at the expense of increasing
significantly the computational cost. In this paper, we firstly perform an
analysis of the underlying principles behind these models aimed at
understanding their computational power. Secondly, we propose an approximate
second order stochastic learning algorithm. The proposed algorithm dynamically
adapts the learning rate throughout the training phase of the network without
incurring excessively expensive computational effort. The algorithm operates in
both on-line and batch modes. Furthermore, the resulting learning scheme is
robust against the vanishing gradients problem. The advantages of the proposed
algorithm are demonstrated with a real-world application example.
|
0911.3304
|
Keystroke Dynamics Authentication For Collaborative Systems
|
cs.LG
|
We present in this paper a study on the ability and the benefits of using a
keystroke dynamics authentication method for collaborative systems.
Authentication is a challenging issue in order to guarantee the security of use
of collaborative systems during the access control step. Many solutions exist
in the state of the art such as the use of one time passwords or smart-cards.
We focus in this paper on biometric based solutions that do not necessitate any
additional sensor. Keystroke dynamics is an interesting solution as it uses
only the keyboard and is invisible for users. Many methods have been published
in this field. We make a comparative study of many of them considering the
operational constraints of use for collaborative systems.
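A minimal template-and-verification sketch of the statistical kind such comparative studies cover: enrollment stores per-feature means and standard deviations of keystroke-timing vectors, and verification thresholds the average z-score. The features, threshold, and method choice are our assumptions, not a scheme from the paper.

```python
from statistics import mean, stdev

def enroll(samples):
    """Build a template (per-feature mean and std) from several
    keystroke-timing vectors captured during enrollment."""
    features = list(zip(*samples))
    return [mean(f) for f in features], [stdev(f) for f in features]

def verify(template, probe, threshold=2.0):
    """Accept the probe if its average per-feature z-score against
    the template is below the threshold."""
    means, stds = template
    score = mean(abs(p - m) / s for p, m, s in zip(probe, means, stds))
    return score < threshold
```

Because only the keyboard is needed, such a check can run silently during normal login to a collaborative system.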
|
0911.3318
|
Re-Pair Compression of Inverted Lists
|
cs.IR cs.DS
|
Compression of inverted lists with methods that support fast intersection
operations is an active research topic. Most compression schemes rely on
encoding differences between consecutive positions with techniques that favor
small numbers. In this paper we explore a completely different alternative: We
use Re-Pair compression of those differences. While Re-Pair by itself offers
fast decompression at arbitrary positions in main and secondary memory, we
introduce variants that in addition speed up the operations required for
inverted list intersection. We compare the resulting data structures with
several recent proposals under various list intersection algorithms, to
conclude that our Re-Pair variants offer an interesting time/space tradeoff for
this problem, yet further improvements are required for them to improve upon the
state of the art.
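As a sketch of the underlying idea (textbook Re-Pair, not the paper's intersection-aware variants), the following compresses the gap sequence of a toy posting list; the symbol numbering and greedy replacement pass are assumptions of this sketch.

```python
from collections import Counter

def repair(seq):
    """Textbook Re-Pair: repeatedly replace the most frequent adjacent
    pair with a fresh symbol until no pair occurs twice."""
    seq, rules = list(seq), {}
    next_sym = max(seq) + 1
    while len(seq) > 1:
        (a, b), freq = Counter(zip(seq, seq[1:])).most_common(1)[0]
        if freq < 2:
            break
        rules[next_sym] = (a, b)
        out, i = [], 0
        while i < len(seq):                    # greedy left-to-right pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_sym = out, next_sym + 1
    return seq, rules

# Gap-encode a sorted inverted list, then compress the repetitive gaps.
postings = [3, 5, 8, 10, 13, 15, 18, 20]
gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
compressed, rules = repair(gaps)
```

Because the rules form a grammar, any position can be decompressed locally by expanding symbols on demand, which is what makes Re-Pair attractive for random access into inverted lists.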
|
0911.3347
|
Optimal strategies for computing symmetric Boolean functions in
collocated networks
|
cs.IT math.IT
|
We address the problem of finding optimal strategies for computing Boolean
symmetric functions. We consider a collocated network, where each node's
transmissions can be heard by every other node. Each node has a Boolean
measurement and we wish to compute a given Boolean function of these
measurements with zero error. We allow for block computation to enhance data
fusion efficiency, and determine the minimum worst-case total bits to be
communicated to perform the desired computation. We restrict attention to the
class of symmetric Boolean functions, which only depend on the number of 1s
among the n measurements.
We define three classes of functions, namely threshold functions, delta
functions and interval functions. We provide exactly optimal strategies for the
first two classes, and an order-optimal strategy with optimal preconstant for
interval functions. Using these results, we can characterize the complexity of
computing percentile type functions, which is of great interest. In our
analysis, we use lower bounds from communication complexity theory, and provide
an achievable scheme using information theoretic tools.
|
0911.3349
|
Seeing Science
|
astro-ph.IM cs.CV cs.GR stat.AP
|
The ability to represent scientific data and concepts visually is becoming
increasingly important due to the unprecedented exponential growth of
computational power during the present digital age. The data sets and
simulations scientists in all fields can now create are literally thousands of
times as large as those created just 20 years ago. Historically successful
methods for data visualization can, and should, be applied to today's huge data
sets, but new approaches, also enabled by technology, are needed as well.
Increasingly, "modular craftsmanship" will be applied, as relevant
functionality from the graphically and technically best tools for a job are
combined as-needed, without low-level programming.
|
0911.3357
|
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks
and Computation
|
cs.NI cs.IT math.IT
|
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues- connectivity, capacity, clocks and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks as well as their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally we turn to the issue of gathering
the relevant information that sensor networks are designed to collect. One needs to
study optimal strategies for in-network aggregation of data, in order to
reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address the issue of how such computation can be
performed efficiently in a sensor network and the algorithms for doing so, for
some classes of functions.
|
0911.3411
|
Measuring the Meaning of Words in Contexts: An automated analysis of
controversies about Monarch butterflies, Frankenfoods, and stem cells
|
cs.CL cs.IR physics.soc-ph
|
Co-words have been considered as carriers of meaning across different domains
in studies of science, technology, and society. Words and co-words, however,
obtain meaning in sentences, and sentences obtain meaning in their contexts of
use. At the science/society interface, words can be expected to have different
meanings: the codes of communication that provide meaning to words differ on
the varying sides of the interface. Furthermore, meanings and interfaces may
change over time. Given this structuring of meaning across interfaces and over
time, we distinguish between metaphors and diaphors as reflexive mechanisms
that facilitate the translation between contexts. Our empirical focus is on
three recent scientific controversies: Monarch butterflies, Frankenfoods, and
stem-cell therapies. This study explores new avenues that relate the study of
co-word analysis in context with the sociological quest for the analysis and
processing of meaning.
|
0911.3415
|
Can Scientific Journals be Classified in terms of Aggregated
Journal-Journal Citation Relations using the Journal Citation Reports?
|
cs.DL cs.IR physics.soc-ph
|
The aggregated citation relations among journals included in the Science
Citation Index provide us with a huge matrix which can be analyzed in various
ways. Using principal component analysis or factor analysis, the factor scores
can be used as indicators of the position of the cited journals in the citing
dimensions of the database. Unrotated factor scores are exact, and the
extraction of principal components can be made stepwise since the principal
components are independent. Rotation may be needed for the designation, but in
the rotated solution a model is assumed. This assumption can be legitimated on
pragmatic or theoretical grounds. Since the resulting outcomes remain sensitive
to the assumptions in the model, an unambiguous classification is no longer
possible in this case. However, the factor-analytic solutions allow us to test
classifications against the structures contained in the database. This will be
demonstrated for the delineation of a set of biochemistry journals.
|
0911.3416
|
Classification and Powerlaws: The Logarithmic Transformation
|
cs.IR cs.DL physics.soc-ph
|
Logarithmic transformation of the data has been recommended by the literature
in the case of highly skewed distributions such as those commonly found in
information science. The purpose of the transformation is to make the data
conform to the lognormal law of error for inferential purposes. How does this
transformation affect the analysis? We factor analyze and visualize the
citation environment of the Journal of the American Chemical Society (JACS)
before and after a logarithmic transformation. The transformation strongly
reduces the variance necessary for classificatory purposes and therefore is
counterproductive to the purposes of the descriptive statistics. We recommend
against the logarithmic transformation when sets cannot be defined
unambiguously. The intellectual organization of the sciences is reflected in
the curvilinear parts of the citation distributions, while negative powerlaws
fit excellently to the tails of the distributions.
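The variance-reduction effect can be illustrated on synthetic power-law data (purely illustrative Zipf draws, not the JACS citation environment analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavily skewed "citation counts" drawn from a Zipf law;
# an assumed stand-in for the kind of data the paper discusses.
counts = rng.zipf(2.0, size=2000).astype(float)
logged = np.log(counts)

# The log transform tames the heavy tail: it helps lognormal-based
# inference, but removes spread that classification would exploit.
raw_var, log_var = counts.var(), logged.var()
```

The transformed variance is orders of magnitude smaller than the raw variance, which is exactly the property the abstract flags as counterproductive for classificatory purposes.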
|
0911.3422
|
Co-occurrence Matrices and their Applications in Information Science:
Extending ACA to the Web Environment
|
cs.IR cs.DL physics.soc-ph
|
Co-occurrence matrices, such as co-citation, co-word, and co-link matrices,
have been used widely in the information sciences. However, confusion and
controversy have hindered the proper statistical analysis of this data. The
underlying problem, in our opinion, involved understanding the nature of
various types of matrices. This paper discusses the difference between a
symmetrical co-citation matrix and an asymmetrical citation matrix as well as
the appropriate statistical techniques that can be applied to each of these
matrices, respectively. Similarity measures (like the Pearson correlation
coefficient or the cosine) should not be applied to the symmetrical co-citation
matrix, but can be applied to the asymmetrical citation matrix to derive the
proximity matrix. The argument is illustrated with examples. The study then
extends the application of co-occurrence matrices to the Web environment where
the nature of the available data and thus data collection methods are different
from those of traditional databases such as the Science Citation Index. A set
of data collected with the Google Scholar search engine is analyzed using both
the traditional methods of multivariate analysis and the new visualization
software Pajek that is based on social network analysis and graph theory.
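The recommended usage can be made concrete: compute cosine similarities between columns of the asymmetrical citation matrix to derive a proximity matrix, rather than applying similarity measures to a symmetrical co-citation matrix. The toy matrix below is assumed for illustration.

```python
import numpy as np

# Toy asymmetrical citation matrix: rows = citing documents,
# columns = cited authors (values assumed for illustration).
C = np.array([[3., 1., 0.],
              [2., 2., 1.],
              [0., 1., 4.]])

def cosine(u, v):
    """Cosine similarity between two citation profiles."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Proximity matrix between cited-author profiles (matrix columns).
n = C.shape[1]
proximity = np.array([[cosine(C[:, i], C[:, j]) for j in range(n)]
                      for i in range(n)])
```

The resulting proximity matrix is symmetric with a unit diagonal, and can then be fed to multivariate analysis or to visualization tools such as Pajek.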
|
0911.3482
|
Complexity of Networks (reprise)
|
cs.IT math.IT nlin.AO q-bio.PE
|
Network or graph structures are ubiquitous in the study of complex systems.
Often, we are interested in complexity trends of these systems as they evolve
under some dynamic. An example might be looking at the complexity of a food web
as species enter an ecosystem via migration or speciation, and leave via
extinction.
In a previous paper, a complexity measure of networks was proposed based on
the {\em complexity is information content} paradigm. To apply this paradigm to
any object, one must fix two things: a representation language, in which
strings of symbols from some alphabet describe, or stand for the objects being
considered; and a means of determining when two such descriptions refer to the
same object. With these two things set, the information content of an object
can be computed in principle from the number of equivalent descriptions
describing a particular object.
The previously proposed representation language had the deficiency that the
fully connected and empty networks were the most complex for a given number of
nodes. A variation of this measure, called zcomplexity, applied a compression
algorithm to the resulting bitstring representation, to solve this problem.
Unfortunately, zcomplexity proved too computationally expensive to be
practical.
In this paper, I propose a new representation language that encodes the
number of links along with the number of nodes and a representation of the
linklist. This, like zcomplexity, exhibits minimal complexity for fully
connected and empty networks, but is as tractable as the original measure.
...
|
0911.3514
|
Sampling and reconstructing signals from a union of linear subspaces
|
cs.IT math.IT
|
In this note we study the problem of sampling and reconstructing signals
which are assumed to lie on or close to one of several subspaces of a Hilbert
space. Importantly, we here consider a very general setting in which we allow
infinitely many subspaces in infinite dimensional Hilbert spaces. This general
approach allows us to unify many results derived recently in areas such as
compressed sensing, affine rank minimisation and analog compressed sensing.
Our main contribution is to show that a conceptually simple iterative
projection algorithm is able to recover signals from a union of subspaces
whenever the sampling operator satisfies a bi-Lipschitz embedding condition.
Importantly, this result holds for all Hilbert spaces and unions of subspaces,
as long as the sampling procedure satisfies the condition for the set of
subspaces considered. In addition to recent results for finite unions of finite
dimensional subspaces and infinite unions of subspaces in finite dimensional
spaces, we also show that this bi-Lipschitz property can hold in an analog
compressed sensing setting in which we have an infinite union of infinite
dimensional subspaces living in infinite dimensional space.
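In finite dimensions, the iterative projection idea can be sketched as a projected-Landweber iteration: a gradient step on the data-fit term followed by projection onto the nearest subspace in the union. The toy union of two coordinate planes, the random sampling operator, and the step-size choice below are assumptions of this sketch, not the note's general Hilbert-space setting.

```python
import numpy as np

def project_union(x, bases):
    """Project x onto the nearest subspace in a finite union; each
    subspace is given by an orthonormal basis (columns of B)."""
    projs = [B @ (B.T @ x) for B in bases]
    return min(projs, key=lambda p: np.linalg.norm(x - p))

def recover(A, y, bases, iters=500):
    """Gradient step on ||Ax - y||^2 followed by projection onto the
    union: a finite-dimensional sketch of the iterative scheme."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    for _ in range(iters):
        x = project_union(x + mu * A.T @ (y - A @ x), bases)
    return x

# Union of two coordinate planes in R^4; 3 random samples suffice here.
I = np.eye(4)
bases = [I[:, :2], I[:, 2:]]
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x_true = np.array([1.0, 2.0, 0.0, 0.0])
x_hat = recover(A, A @ x_true, bases)
```

Recovery succeeds here because the random operator restricted to each two-dimensional subspace is well conditioned, a finite-dimensional instance of the bi-Lipschitz embedding condition.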
|
0911.3581
|
X-Learn: An XML-Based, Multi-agent System for Supporting "User-Device"
Adaptive E-learning
|
cs.CY cs.MA
|
In this paper we present X-Learn, an XML-based, multi-agent system for
supporting "user-device" adaptive e-learning. X-Learn is characterized by the
following features: (i) it is highly subjective, since it handles quite a rich
and detailed user profile that plays a key role during the learning activities;
(ii) it is dynamic and flexible, i.e., it is capable of reacting to variations
of exigencies and objectives; (iii) it is device-adaptive, since it decides the
learning objects to present to the user on the basis of the device she/he is
currently exploiting; (iv) it is generic, i.e., it is capable of operating in a
large variety of learning contexts; (v) it is XML based, since it exploits many
facilities of XML technology for handling and exchanging information connected
to e-learning activities. The paper also reports various experimental results
as well as a comparison between X-Learn and other related e-learning management
systems already presented in the literature.
|
0911.3600
|
"Almost automatic" and semantic integration of XML Schemas at various
"severity" levels
|
cs.DB
|
This paper presents a novel approach for the integration of a set of XML
Schemas. The proposed approach is specialized for XML, is almost automatic,
semantic and "light". As a further, original, peculiarity, it is parametric
w.r.t. a "severity" level against which the integration task is performed. The
paper describes the approach in all details, illustrates various theoretical
results, presents the experiments we have performed for testing it and,
finally, compares it with various related approaches already proposed in the
literature.
|
0911.3633
|
A Geometric Approach to Sample Compression
|
cs.LG math.CO math.GT stat.ML
|
The Sample Compression Conjecture of Littlestone & Warmuth has remained
unsolved for over two decades. This paper presents a systematic geometric
investigation of the compression of finite maximum concept classes. Simple
arrangements of hyperplanes in Hyperbolic space, and Piecewise-Linear
hyperplane arrangements, are shown to represent maximum classes, generalizing
the corresponding Euclidean result. A main result is that PL arrangements can
be swept by a moving hyperplane to unlabeled d-compress any finite maximum
class, forming a peeling scheme as conjectured by Kuzmin & Warmuth. A corollary
is that some d-maximal classes cannot be embedded into any maximum class of VC
dimension d+k, for any constant k. The construction of the PL sweeping involves
Pachner moves on the one-inclusion graph, corresponding to moves of a
hyperplane across the intersection of d other hyperplanes. This extends the
well known Pachner moves for triangulations to cubical complexes.
|
0911.3643
|
Multiple Presents: How Search Engines Re-write the Past
|
cs.IR physics.soc-ph
|
Internet search engines function in a present which changes continuously. The
search engines update their indices regularly, overwriting Web pages with newer
ones, adding new pages to the index, and losing older ones. Some search engines
can be used to search for information on the Internet for specific periods of
time. However, these 'date stamps' are not determined by the first occurrence
of the pages in the Web, but by the last date at which a page was updated or a
new page was added, and the search engine's crawler updated this change in the
database. This has major implications for the use of search engines in
scholarly research as well as theoretical implications for the conceptions of
time and temporality. We examine the interplay between the different updating
frequencies by using AltaVista and Google for searches at different moments of
time. Both the retrieval of the results and the structure of the retrieved
information erode over time.
|
0911.3668
|
Signal acquisition via polarization modulation in single photon sources
|
quant-ph cs.IT math.IT
|
A simple model system is introduced for demonstrating how a single photon
source might be used to transduce classical analog information. The theoretical
scheme results in measurements of analog source samples that are (i) quantized
in the sense of analog-to-digital conversion and (ii) corrupted by random noise
that is solely due to the quantum uncertainty in detecting the polarization
state of each photon. This noise is unavoidable if more than one bit per sample
is to be transmitted, and we show how it may be exploited in a manner inspired
by suprathreshold stochastic resonance. The system is analyzed information
theoretically, as it can be modeled as a noisy optical communication channel,
although unlike classical Poisson channels, the detector's photon statistics
are binomial. Previous results on binomial channels are adapted to demonstrate
numerically that the classical information capacity, and thus the accuracy of
the transduction, increases logarithmically with the square root of the number
of photons, N. Although the capacity is shown to be reduced when an additional
detector nonideality is present, the logarithmic increase with N remains.
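A toy simulation of the quantization and binomial detection noise described above; the particular mapping from analog sample to polarization angle is an assumption of this sketch.

```python
import math
import random

def transduce(sample, n_photons, rng):
    """Encode an analog sample in [0, 1] as a polarization angle chosen
    so that each photon is detected with probability equal to the sample
    (Malus' law), then estimate the sample from the binomial count."""
    theta = math.acos(math.sqrt(sample))       # cos^2(theta) = sample
    p_detect = math.cos(theta) ** 2
    k = sum(rng.random() < p_detect for _ in range(n_photons))
    return k / n_photons                       # quantized, noisy estimate

rng = random.Random(0)
estimate = transduce(0.3, n_photons=10_000, rng=rng)
```

The estimate takes one of n_photons + 1 quantized values, and its binomial standard deviation shrinks like 1/sqrt(N), consistent with the capacity growing logarithmically in sqrt(N).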
|
0911.3676
|
Pipelined Encoding for Deterministic and Noisy Relay Networks
|
cs.IT math.IT
|
Recent coding strategies for deterministic and noisy relay networks are
related to the pipelining of block Markov encoding. For deterministic networks,
it is shown that pipelined encoding improves encoding delay, as opposed to
end-to-end delay. For noisy networks, it is observed that decode-and-forward
exhibits good rate scaling when the signal-to-noise ratio (SNR) increases.
|
0911.3708
|
Manipulability of Single Transferable Vote
|
cs.AI cs.CC cs.GT cs.MA
|
For many voting rules, it is NP-hard to compute a successful manipulation.
However, NP-hardness only bounds the worst-case complexity. Recent theoretical
results suggest that manipulation may often be easy in practice. We study
empirically the cost of manipulating the single transferable vote (STV) rule.
This was one of the first rules shown to be NP-hard to manipulate. It also
appears to be one of the harder rules to manipulate since it involves multiple
rounds and since, unlike many other rules, it is NP-hard for a single agent to
manipulate without weights on the votes or uncertainty about how the other
agents have voted. In almost every election in our experiments, it was easy to
compute how a single agent could manipulate the election or to prove that
manipulation by a single agent was impossible. It remains an interesting open
question if manipulation by a coalition of agents is hard to compute in
practice.
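For reference, a minimal single-winner STV (instant-runoff) count, the rule whose manipulation cost the experiments measure; the deterministic alphabetical tie-breaking is an assumption of this sketch.

```python
from collections import Counter

def stv_winner(ballots):
    """Single-seat STV (instant-runoff): repeatedly eliminate the
    candidate with the fewest first preferences and transfer those
    ballots, until some candidate holds a strict majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader = max(sorted(tallies), key=lambda c: tallies[c])
        if 2 * tallies[leader] > total:
            return leader
        loser = min(sorted(tallies), key=lambda c: tallies[c])
        ballots = [[c for c in b if c != loser] for b in ballots]

# The plurality leader (A) loses once C's ballots transfer to B.
election = (8 * [["A", "B", "C"]] + 6 * [["B", "C", "A"]]
            + 5 * [["C", "B", "A"]])
winner = stv_winner(election)
```

The multi-round transfers are what make reasoning about a manipulator's best ballot harder than for single-round rules.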
|
0911.3717
|
Artificial Neural Network-based error compensation procedure for
low-cost encoders
|
cs.NE astro-ph.IM physics.comp-ph
|
An Artificial Neural Network-based error compensation method is proposed for
improving the accuracy of resolver-based 16-bit encoders by compensating for
their respective systematic error profiles. The error compensation procedure,
for a particular encoder, involves obtaining its error profile by calibrating
it on a precision rotary table, training the neural network by using a part of
this data and then determining the corrected encoder angle by subtracting the
ANN-predicted error from the measured value of the encoder angle. Since it is
not guaranteed that all the resolvers will have exactly similar error profiles
because of the inherent differences in their construction on a micro scale, the
ANN has been trained on one error profile at a time and the corresponding
weight file is then used only for compensating the systematic error of this
particular encoder. The systematic nature of the error profile for each of the
encoders has also been validated by repeated calibration of the encoders over a
period of time and it was found that the error profiles of a particular encoder
recorded at different epochs show near reproducible behavior. The ANN-based
error compensation procedure has been implemented for 4 encoders by training
the ANN with their respective error profiles and the results indicate that the
accuracy of encoders can be improved by nearly an order of magnitude from
quoted values of ~6 arc-min to ~0.65 arc-min when their corresponding
ANN-generated weight files are used for determining the corrected encoder
angle.
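The calibrate-train-subtract workflow can be sketched as follows. The paper trains an ANN on each encoder's calibrated error profile; this illustration substitutes a harmonic least-squares fit as the learned error model and uses a synthetic profile, so the amplitudes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic systematic error profile (arc-min) versus encoder angle;
# purely illustrative, not the calibration data from the paper.
angle = np.linspace(0, 2 * np.pi, 360, endpoint=False)
true_err = 4.0 * np.sin(angle) + 1.5 * np.sin(2 * angle + 0.3)
measured = true_err + 0.2 * rng.standard_normal(angle.size)

# The paper trains an ANN on part of the calibrated profile; here a
# harmonic least-squares fit stands in as the learned error model.
X = np.column_stack([np.sin(angle), np.cos(angle),
                     np.sin(2 * angle), np.cos(2 * angle)])
train = slice(0, angle.size, 2)           # every other calibration point
coef, *_ = np.linalg.lstsq(X[train], measured[train], rcond=None)

# Corrected reading: subtract the model-predicted systematic error.
corrected = measured - X @ coef
```

Because the error is systematic and reproducible, subtracting the model prediction leaves only the random measurement noise, mirroring the order-of-magnitude accuracy gain reported for the real encoders.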
|
0911.3723
|
Applications of the Dynamic Distance Potential Field Method
|
cs.MA
|
Recently the dynamic distance potential field (DDPF) was introduced as a
computationally efficient method to make agents in a simulation of pedestrians
move on the quickest path rather than on the shortest. It can be considered to be
an estimated-remaining-journey-time-based one-shot dynamic assignment method
for pedestrian route choice on the operational level of dynamics. In this
contribution the method is shortly introduced and the effect of the method on
RiMEA's test case 11 is investigated.
|
0911.3753
|
Evolutionary estimation of a Coupled Markov Chain credit risk model
|
cs.NE cs.CE
|
There exists a range of different models for estimating and simulating credit
risk transitions to optimally manage credit risk portfolios and products. In
this chapter we present a Coupled Markov Chain approach to model rating
transitions and thereby default probabilities of companies. As the likelihood
of the model turns out to be a non-convex function of the parameters to be
estimated, we apply heuristics to find the ML estimators. To this extent, we
outline the model and its likelihood function, and present both a Particle
Swarm Optimization algorithm, as well as an Evolutionary Optimization algorithm
to maximize the likelihood function. Numerical results are shown which suggest
a further application of evolutionary optimization techniques for credit risk
management.
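A minimal particle swarm on a toy concave surrogate objective; the swarm coefficients and the stand-in objective are assumptions of this sketch (the chapter maximizes the non-convex coupled-Markov-chain likelihood instead).

```python
import random

def pso(f, dim, n=20, iters=200, seed=0):
    """Minimal particle swarm maximizing f over [0, 1]^dim."""
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]               # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = max(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]        # inertia (assumed)
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy concave "likelihood" with its optimum at (0.3, 0.7).
best, val = pso(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2), dim=2)
```

Swarm methods like this need no gradients or convexity, which is why they suit the non-convex likelihood surface described above.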
|
0911.3823
|
Google matrix and Ulam networks of intermittency maps
|
cs.IR cond-mat.dis-nn nlin.AO nlin.CD physics.soc-ph
|
We study the properties of the Google matrix of an Ulam network generated by
intermittency maps. This network is created by the Ulam method which gives a
matrix approximant for the Perron-Frobenius operator of the dynamical map. The
spectral properties of eigenvalues and eigenvectors of this matrix are
analyzed. We show that the PageRank of the system is characterized by a power
law decay with the exponent $\beta$ dependent on map parameters and the Google
damping factor $\alpha$. Under certain conditions the PageRank is completely
delocalized so that the Google search in such a situation becomes inefficient.
|
0911.3842
|
Musical Genres: Beating to the Rhythms of Different Drums
|
physics.data-an cs.IR cs.SD physics.soc-ph
|
Online music databases have increased significantly as a consequence of the
rapid growth of the Internet and digital audio, requiring the development of
faster and more efficient tools for music content analysis. Musical genres are
widely used to organize music collections. In this paper, the problem of
automatic music genre classification is addressed by exploring rhythm-based
features obtained from a respective complex network representation. A Markov
model is built in order to analyse the temporal sequence of rhythmic notation
events. Feature analysis is performed by using two multivariate statistical
approaches: principal component analysis (unsupervised) and linear discriminant
analysis (supervised). Similarly, two classifiers are applied in order to
identify the category of rhythms: a parametric Bayesian classifier under Gaussian
hypothesis (supervised), and agglomerative hierarchical clustering
(unsupervised). Qualitative results obtained by the Kappa coefficient and the
obtained clusters corroborated the effectiveness of the proposed method.
|
0911.3872
|
Equivalence perspectives in communication, source-channel connections
and universal source-channel separation
|
cs.IT math.IT
|
An operational perspective is used to understand the relationship between
source and channel coding. This is based on a direct reduction of one problem
to another that uses random coding (and hence common randomness) but unlike all
prior work, does not involve any functional computations, in particular, no
mutual-information computations. This result is then used to prove a universal
source-channel separation theorem in the rate-distortion context where
universality is in the sense of a compound "general channel."
|
0911.3921
|
Error Rates of the Maximum-Likelihood Detector for Arbitrary
Constellations: Convex/Concave Behavior and Applications
|
cs.IT math.IT
|
Motivated by a recent surge of interest in convex optimization techniques,
convexity/concavity properties of error rates of the maximum likelihood
detector operating in the AWGN channel are studied and extended to
frequency-flat slow-fading channels. Generic conditions are identified under
which the symbol error rate (SER) is convex/concave for arbitrary
multi-dimensional constellations. In particular, the SER is convex in SNR for
any one- and two-dimensional constellation, and also in higher dimensions at
high SNR. Pairwise error probability and bit error rate are shown to be convex
at high SNR, for arbitrary constellations and bit mapping. Universal bounds for
the SER 1st and 2nd derivatives are obtained, which hold for arbitrary
constellations and are tight for some of them. Applications of the results are
discussed, which include optimum power allocation in spatial multiplexing
systems, optimum power/time sharing to decrease or increase (jamming problem)
error rate, an implication for fading channels ("fading is never good in low
dimensions") and optimization of a unitary-precoded OFDM system. For example,
the error rate bounds of a unitary-precoded OFDM system with QPSK modulation,
which reveal the best and worst precoding, are extended to arbitrary
constellations, which may also include coding. The reported results also apply
to the interference channel under Gaussian approximation, to the bit error rate
when it can be expressed or approximated as a non-negative linear combination
of individual symbol error rates, and to coded systems.
|
0911.3944
|
Likelihood-based semi-supervised model selection with applications to
speech processing
|
stat.ML cs.CL cs.LG stat.AP
|
In conventional supervised pattern recognition tasks, model selection is
typically accomplished by minimizing the classification error rate on a set of
so-called development data, subject to ground-truth labeling by human experts
or some other means. In the context of speech processing systems and other
large-scale practical applications, however, such labeled development data are
typically costly and difficult to obtain. This article proposes an alternative
semi-supervised framework for likelihood-based model selection that leverages
unlabeled data by using trained classifiers representing each model to
automatically generate putative labels. The errors that result from this
automatic labeling are shown to be amenable to results from robust statistics,
which in turn provide for minimax-optimal censored likelihood ratio tests that
recover the nonparametric sign test as a limiting case. This approach is then
validated experimentally using a state-of-the-art automatic speech recognition
system to select between candidate word pronunciations using unlabeled speech
data that only potentially contain instances of the words under test. Results
provide supporting evidence for the utility of this approach, and suggest that
it may also find use in other applications of machine learning.
|
0911.3979
|
Making the road by searching - A search engine based on Swarm
Information Foraging
|
cs.IR cs.HC
|
Search engines are nowadays one of the most important entry points for
Internet users and a central tool to solve most of their information needs.
Still, a substantial number of users' searches obtain
unsatisfactory results. Needless to say, several lines of research aim to
increase the relevancy of the results users retrieve. In this paper the authors
frame this problem within the much broader (and older) one of information
overload. They argue that users' dissatisfaction with search engines is a
currently common manifestation of such a problem, and propose a different angle
from which to tackle it. As will be discussed, their approach shares
goals with a current hot research topic (namely, learning to rank for
information retrieval) but, unlike the techniques commonly applied in that
field, their technique cannot be exactly considered machine learning and,
additionally, it can be used to change the search engine's response in
real-time, driven by the users' behavior. Their proposal adapts concepts from
Swarm Intelligence (in particular, Ant Algorithms) from an Information Foraging
point of view. It will be shown that the technique is not only feasible, but
also an elegant solution to the stated problem; what's more, it achieves
promising results, both increasing the performance of a major search engine for
informational queries, and substantially reducing the time users require to
answer complex information needs.
|
0911.3992
|
Storage Coding for Wear Leveling in Flash Memories
|
cs.IT math.IT
|
Flash memory is a non-volatile computer memory comprised of blocks of cells,
wherein each cell is implemented as either a NAND or a NOR floating gate. NAND
flash is currently the most widely used type of flash memory. In a NAND flash
memory, every block of cells consists of numerous pages; rewriting even a
single page requires the whole block to be erased and reprogrammed. Block
erasures determine both the longevity and the efficiency of a flash memory.
Therefore, when data in a NAND flash memory are reorganized, minimizing the
total number of block erasures required to achieve the desired data movement is
an important goal. This leads to the flash data movement problem studied in
this paper. We show that coding can significantly reduce the number of block
erasures required for data movement, and present several optimal or nearly
optimal data-movement algorithms based upon ideas from coding theory and
combinatorics. In particular, we show that the sorting-based (non-coding)
schemes require at least O(n log n) erasures to move data among n blocks, whereas
coding-based schemes require only O(n) erasures. Furthermore, coding-based
schemes use only one auxiliary block, which is the best possible, and achieve a
good balance between the number of erasures in each of the n+1 blocks.
|
0911.4046
|
Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for
Sparsity Regularized Estimation
|
stat.ML cs.LG stat.ME
|
We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large scale $\ell_1$-regularized
logistic regression problem and extensively compare the efficiency of the DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
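The proximal machinery behind the analysis can be illustrated with the l1 prox (soft-thresholding). The sketch below runs plain primal proximal-gradient (ISTA) on a toy sparse problem; it is not DAL's augmented-Lagrangian dual iteration, and the problem sizes are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=3000):
    """Proximal-gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1,
    with step size 1/L where L bounds the gradient's Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Toy sparse recovery: 3-sparse signal, 40 noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:3] = [2.0, -1.5, 1.0]
x_hat = ista(A, A @ x_true, lam=0.1)
```

DAL applies the same soft-thresholding inside an inner minimization on the dual, which is what yields the super-linear rather than sublinear convergence analyzed in the paper.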
|
0911.4167
|
Wyner-Ziv Coding over Broadcast Channels: Digital Schemes
|
cs.IT math.IT
|
This paper addresses lossy transmission of a common source over a broadcast
channel when there is correlated side information at the receivers, with
emphasis on the quadratic Gaussian and binary Hamming cases. A digital scheme
that combines ideas from the lossless version of the problem, i.e.,
Slepian-Wolf coding over broadcast channels, and dirty paper coding, is
presented and analyzed. This scheme uses layered coding where the common layer
information is intended for both receivers and the refinement information is
destined only for one receiver. For the quadratic Gaussian case, a quantity
characterizing the overall quality of each receiver is identified in terms of
channel and side information parameters. It is shown that it is more
advantageous to send the refinement information to the receiver with "better"
overall quality. In the case where all receivers have the same overall quality,
the presented scheme becomes optimal. Unlike its lossless counterpart, however,
the problem eludes a complete characterization.
|
0911.4178
|
Folksonomic Tag Clouds as an Aid to Content Indexing
|
cs.IR cs.HC
|
Social tagging systems have recently developed as a popular method of data
organisation on the Internet. These systems allow users to organise their
content in a way that makes sense to them, rather than forcing them to use a
pre-determined and rigid set of categorisations. These folksonomies provide
well populated sources of unstructured tags describing web resources which
could potentially be used as semantic index terms for these resources. However,
getting people to agree on which tags best describe a resource is a difficult
problem; therefore, any feature that increases the consistency and stability of
the terms chosen would be extremely beneficial. We investigate how the provision of
a tag cloud, a weighted list of terms commonly used to assist in browsing a
folksonomy, during the tagging process itself influences the tags produced and
how difficult users perceive the task to be. We show that illustrating the
most popular tags to users assists in the tagging process and encourages a
stable and consistent folksonomy to form.
|
0911.4207
|
An information theoretic approach to statistical dependence: copula
information
|
q-fin.ST cs.IT math.IT physics.data-an stat.AP
|
We discuss the connection between information and copula theories by showing
that a copula can be employed to decompose the information content of a
multivariate distribution into marginal and dependence components, with the
latter quantified by the mutual information. We define the information excess
as a measure of deviation from a maximum entropy distribution. The idea of
marginal invariant dependence measures is also discussed and used to show that
empirical linear correlation underestimates the amplitude of the actual
correlation in the case of non-Gaussian marginals. The mutual information is
shown to provide an upper bound for the asymptotic empirical log-likelihood of
a copula. An analytical expression for the information excess of T-copulas is
provided, allowing for simple model identification within this family. We
illustrate the framework in a financial data set.
|
0911.4219
|
Message Passing Algorithms for Compressed Sensing: I. Motivation and
Construction
|
cs.IT math.IT
|
In a recent paper, the authors proposed a new class of low-complexity
iterative thresholding algorithms for reconstructing sparse signals from a
small set of linear measurements \cite{DMM}. The new algorithms are broadly
referred to as AMP, for approximate message passing. This is the first of two
conference papers describing the derivation of these algorithms, connection
with the related literature, extensions of the original framework, and new
empirical evidence.
In particular, the present paper outlines the derivation of AMP from standard
sum-product belief propagation, and its extension in several directions. We
also discuss relations with formal calculations based on statistical mechanics
methods.
|
0911.4222
|
Message Passing Algorithms for Compressed Sensing: II. Analysis and
Validation
|
cs.IT math.IT
|
In a recent paper, the authors proposed a new class of low-complexity
iterative thresholding algorithms for reconstructing sparse signals from a
small set of linear measurements \cite{DMM}. The new algorithms are broadly
referred to as AMP, for approximate message passing. This is the second of two
conference papers describing the derivation of these algorithms, connection
with related literature, extensions of the original framework, and new empirical
evidence.
This paper describes the state evolution formalism for analyzing these
algorithms, and some of the conclusions that can be drawn from this formalism.
We carried out extensive numerical simulations to confirm these predictions. We
present here a few representative results.
|
0911.4230
|
Introduction to Bioinformatics
|
cs.CE
|
Bioinformatics is a new discipline that addresses the need to manage and
interpret the data massively generated by genomic research over the past
research. This discipline represents the convergence of genomics, biotechnology
and information technology, and encompasses analysis and interpretation of
data, modeling of biological phenomena, and development of algorithms and
statistics. This article presents an introduction to bioinformatics.
|
0911.4262
|
Towards Industrialized Conception and Production of Serious Games
|
cs.LG cs.HC
|
Serious Games (SGs) have experienced a tremendous outburst these last years.
Video game companies have been producing fun, user-friendly SGs, but their
educational value has yet to be proven. Meanwhile, cognition research scientists
have been developing SGs in ways that guarantee an educational gain, but the
fun and attractive features offered often fail to meet the public's
expectations. The ideal SG must combine these two aspects while still
being economically viable. In this article, we propose a production chain model
to efficiently conceive and produce SGs that are certified for their
educational gain and fun qualities. Each step of this chain will be described
along with the human actors, the tools and the documents that intervene.
|
0911.4292
|
Similarity Measures, Author Cocitation Analysis, and Information Theory
|
cs.IR physics.soc-ph
|
The use of Pearson's correlation coefficient in Author Cocitation Analysis
was compared with Salton's cosine measure in a number of recent contributions.
Unlike the Pearson correlation, the cosine is insensitive to the number of
zeros. However, one has the option of applying a logarithmic transformation in
correlation analysis. Information calculus is based on the logarithmic
transformation and provides non-parametric statistics. Using this methodology
one can cluster a document set in a precise way and express the differences in
terms of bits of information. The algorithm is explained and used on the data
set which was made the subject of this discussion.
|
0911.4302
|
An Indicator of Research Front Activity: Measuring Intellectual
Organization as Uncertainty Reduction in Document Sets
|
cs.DL cs.IR physics.soc-ph
|
When using scientific literature to model scholarly discourse, a research
specialty can be operationalized as an evolving set of related documents. Each
publication can be expected to contribute to the further development of the
specialty at the research front. The specific combinations of title words and
cited references in a paper can then be considered as a signature of the
knowledge claim in the paper: new words and combinations of words can be
expected to represent variation, while each paper is at the same time
selectively positioned into the intellectual organization of a field using
context-relevant references. Can the mutual information among these three
dimensions--title words, cited references, and sequence numbers--be used as an
indicator of the extent to which intellectual organization structures the
uncertainty prevailing at a research front? The effect of the discovery of
nanotubes (1991) on the previously existing field of fullerenes is used as a
test case. Thereafter, this method is applied to science studies with a focus
on scientometrics using various sample delineations. An emerging research front
about citation analysis can be indicated.
|
0911.4329
|
Structural Consistency: Enabling XML Keyword Search to Eliminate
Spurious Results Consistently
|
cs.DB
|
XML keyword search is a user-friendly way to query XML data using only
keywords. In XML keyword search, to achieve high precision without sacrificing
recall, it is important to remove spurious results not intended by the user.
Efforts to eliminate spurious results have enjoyed some success by using the
concepts of LCA or its variants, SLCA and MLCA. However, existing methods still
could find many spurious results. The fundamental cause for the occurrence of
spurious results is that the existing methods try to eliminate spurious results
locally without global examination of all the query results and, accordingly,
some spurious results are not consistently eliminated. In this paper, we
propose a novel keyword search method that removes spurious results
consistently by exploiting the new concept of structural consistency.
|
0911.4385
|
Bio-inspired speed detection and discrimination
|
cs.CV cs.NE
|
In the field of computer vision, a crucial task is the detection of motion
(also called optical flow extraction). This operation allows analysis such as
3D reconstruction, feature tracking, time-to-collision and novelty detection
among others. Most of the optical flow extraction techniques work within a
finite range of speeds. Usually, the range of detection is extended towards
higher speeds by combining some multiscale information in a serial
architecture. This serial multi-scale approach suffers from the problem of
error propagation related to the number of scales used in the algorithm. On the
other hand, biological experiments show that human motion perception seems to
follow a parallel multiscale scheme. In this work we present a bio-inspired
parallel architecture to perform detection of motion, providing a wide range of
operation and avoiding error propagation associated with the serial
architecture. To test our algorithm, we perform relative error comparisons
between both classical and proposed techniques, showing that the parallel
architecture is able to achieve motion detection with results similar to the
serial approach.
|