id
|
title
|
categories
|
abstract
|
|---|---|---|---|
1005.2815
|
Evolving Genes to Balance a Pole
|
cs.AI
|
We discuss how to use a Genetic Regulatory Network as an evolutionary
representation to solve a typical GP reinforcement learning problem: pole
balancing. The network is a modified version of an Artificial Regulatory
Network proposed a few years ago, and the task could be solved only by finding a proper way of
connecting inputs and outputs to the network. We show that the representation
is able to generalize well over the problem domain, and discuss the performance
of different models of this kind.
|
1005.2819
|
SABRE: A Tool for Stochastic Analysis of Biochemical Reaction Networks
|
cs.CE cs.MS q-bio.MN
|
The importance of stochasticity within biological systems has been shown
repeatedly during the last years and has raised the need for efficient
stochastic tools. We present SABRE, a tool for stochastic analysis of
biochemical reaction networks. SABRE implements fast adaptive uniformization
(FAU), a direct numerical approximation algorithm for computing transient
solutions of biochemical reaction networks. Biochemical reaction networks
represent biological systems studied at the molecular level, and these reactions
can be modeled as transitions of a Markov chain. SABRE accepts as input the
formalism of guarded commands, which it interprets either as continuous-time or
as discrete-time Markov chains. Besides operating in a stochastic mode, SABRE
may also perform a deterministic analysis by directly computing a mean-field
approximation of the system under study. We illustrate the different
functionalities of SABRE by means of biological case studies.
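SABRE's fast adaptive uniformization algorithm is not detailed in the abstract; as a minimal illustration of how guarded-command reactions define continuous-time Markov chain transitions, a Gillespie-style simulation sketch (with a hypothetical two-reaction network and made-up rate constants) could look like:

```python
import random

# Hypothetical reaction network (made-up species and rate constants).
# Each reaction: (propensity guard, state update, rate constant), mass-action kinetics.
reactions = [
    # A + B -> C
    (lambda s: s["A"] * s["B"],
     lambda s: {"A": s["A"] - 1, "B": s["B"] - 1, "C": s["C"] + 1},
     0.001),
    # C -> A + B
    (lambda s: s["C"],
     lambda s: {"A": s["A"] + 1, "B": s["B"] + 1, "C": s["C"] - 1},
     0.1),
]

def gillespie(state, t_end, rng=None):
    """Simulate the continuous-time Markov chain defined by `reactions`."""
    rng = rng or random.Random(0)
    t = 0.0
    while True:
        props = [k * guard(state) for guard, _, k in reactions]
        total = sum(props)
        if total == 0:                    # no reaction can fire: absorbing state
            return state
        t += rng.expovariate(total)       # exponentially distributed waiting time
        if t > t_end:
            return state
        # choose the next reaction with probability proportional to its propensity
        r, acc = rng.uniform(0, total), 0.0
        for p, (_, update, _) in zip(props, reactions):
            acc += p
            if r <= acc:
                state = update(state)
                break

final = gillespie({"A": 100, "B": 100, "C": 0}, t_end=10.0)
print(final)  # molecule counts after 10 time units
```

SABRE's FAU method computes transient distributions numerically rather than by sampling; this sketch only shows the underlying Markov-chain semantics of guarded commands.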
|
1005.2839
|
Construction of Codes for Network Coding
|
cs.IT math.IT
|
Based on ideas of K\"otter and Kschischang we use constant dimension
subspaces as codewords in a network. We show a connection to the theory of
q-analogues of combinatorial designs, which were studied by Braun, Kerber,
and Laue as purely combinatorial objects. For the construction of network
codes we successfully modified methods (construction with prescribed
automorphisms) originally developed for the q-analogues of combinatorial
designs. We then give a special case of that method which allows the
construction of network codes with a very large ambient space and we also show
how to decode such codes with a very small number of operations.
|
1005.2880
|
General Classes of Lower Bounds on the Probability of Error in Multiple
Hypothesis Testing
|
cs.IT math.IT
|
In this paper, two new classes of lower bounds on the probability of error
for $m$-ary hypothesis testing are proposed. Computation of the minimum
probability of error which is attained by the maximum a-posteriori probability
(MAP) criterion is usually not tractable. The new classes are derived using
H\"older's inequality and the reverse H\"older inequality. The bounds in these
classes provide good prediction of the minimum probability of error in multiple
hypothesis testing. The new classes generalize and extend existing bounds and
their relation to some existing upper bounds is presented. It is shown that the
tightest bounds in these classes asymptotically coincide with the optimum
probability of error provided by the MAP criterion for binary and multiple
hypothesis testing problems. These bounds are compared with other existing lower
bounds in several typical detection and classification problems in terms of
tightness and computational complexity.
|
1005.2898
|
Saturation Throughput - Delay Analysis of IEEE 802.11 DCF in Fading
Channel
|
cs.NI cs.IT cs.PF math.IT
|
In this paper, we analytically study the impact of an error-prone channel
on all performance measures in a traffic-saturated IEEE 802.11 WLAN. We
calculate the station's transmission probability using a modified Markov
chain model of the backoff window size that accounts for the frame-error rates
and the maximal allowable number of retransmission attempts. The frame error
rate has a significant impact on theoretical throughput, mean frame delay, and
discard probability. The peak throughput of a WLAN is insensitive to the maximal number
of retransmissions. Discard probabilities are insensitive to the station access
method, Basic or RTS/CTS.
|
1005.2949
|
Filter Bank Fusion Frames
|
cs.IT math.IT math.RT
|
In this paper we characterize and construct novel oversampled filter banks
implementing fusion frames. A fusion frame is a sequence of orthogonal
projection operators whose sum can be inverted in a numerically stable way.
When properly designed, fusion frames can provide redundant encodings of
signals which are optimally robust against certain types of noise and erasures.
However, up to this point, few implementable constructions of such frames were
known; we show how to construct them using oversampled filter banks. In this
work, we first provide polyphase domain characterizations of filter bank fusion
frames. We then use these characterizations to construct filter bank fusion
frame versions of discrete wavelet and Gabor transforms, emphasizing those
specific finite impulse response filters whose frequency responses are
well-behaved.
|
1005.2967
|
Controlled Hopwise Averaging: Bandwidth/Energy-Efficient Asynchronous
Distributed Averaging for Wireless Networks
|
math.OC cs.DC cs.SY
|
This paper addresses the problem of averaging numbers across a wireless
network from an important, but largely neglected, viewpoint: bandwidth/energy
efficiency. We show that existing distributed averaging schemes have several
drawbacks and are inefficient, producing networked dynamical systems that
evolve with wasteful communications. Motivated by this, we develop Controlled
Hopwise Averaging (CHA), a distributed asynchronous algorithm that attempts to
"make the most" out of each iteration by fully exploiting the broadcast nature
of the wireless medium and enabling control of when to initiate an iteration. We
show that CHA admits a common quadratic Lyapunov function for analysis, derive
bounds on its exponential convergence rate, and show that they outperform the
convergence rate of Pairwise Averaging for some common graphs. We also
introduce a new way to apply Lyapunov stability theory, using the Lyapunov
function to perform greedy, decentralized, feedback iteration control. Finally,
through extensive simulation on random geometric graphs, we show that CHA is
substantially more efficient than several existing schemes, requiring far fewer
transmissions to complete an averaging task.
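CHA itself is only described at a high level here, but the Pairwise Averaging baseline it is compared against can be sketched as follows (the ring network and initial values are hypothetical): at each iteration one randomly chosen edge's two endpoints replace their values with their mean, preserving the network average while the spread shrinks.

```python
import random

def pairwise_averaging(values, edges, iterations, seed=1):
    """Baseline gossip scheme: each iteration picks one random edge (i, j)
    and replaces x_i, x_j by their mean; the global average is invariant."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(iterations):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Hypothetical 4-node ring network with made-up initial measurements
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = pairwise_averaging([4.0, 0.0, 2.0, 6.0], edges, iterations=200)
print(x)  # every entry approaches the network average 3.0
```

Each pairwise exchange costs one round-trip of messages, which is exactly the per-iteration communication that CHA's broadcast-based iterations are designed to spend more productively.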
|
1005.3004
|
Observable dynamics and coordinate systems for automotive target
tracking
|
cs.RO
|
We investigate several coordinate systems and dynamical vector fields for
target tracking to be used in driver assistance systems. We show how to express
the discrete dynamics of maneuvering target vehicles in arbitrary coordinates
starting from the target's and the own (ego) vehicle's assumed dynamical model
in global coordinates. We clarify the notion of "ego compensation" and show how
non-inertial effects are to be included when using a body-fixed coordinate
system for target tracking. We finally compare the tracking error of different
combinations of target tracking coordinates and dynamical vector fields for
simulated data.
|
1005.3093
|
A remark about orthogonal matching pursuit algorithm
|
cs.IT math.IT math.NA
|
In this note, we investigate the theoretical properties of Orthogonal
Matching Pursuit (OMP), a class of decoders for recovering sparse signals in
compressed sensing. In particular, we show that the OMP decoder can give
$(p,q)$ instance optimality for a large class of encoders with $1\leq p\leq q
\leq 2$ and $(p,q)\neq (2,2)$. We also show that, if the encoding matrix is
drawn from an appropriate distribution, then the OMP decoder is $(2,2)$
instance optimal in probability.
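The standard OMP decoder referred to above can be sketched as follows (a sketch of the classical algorithm, not of the paper's instance-optimality analysis; the Gaussian encoder and sparsity level are illustrative assumptions):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A,
    then least-squares fit y on the selected support."""
    residual, support = y.copy(), []
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-solve least squares on the enlarged support, update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Illustrative setup (assumed, not from the paper): Gaussian encoder, 2-sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60)) / np.sqrt(40)
x_true = np.zeros(60)
x_true[[5, 17]] = [1.0, -2.0]
x_hat = omp(A, A @ x_true, k=2)
print(np.nonzero(x_hat)[0])  # recovered support; equals [5, 17] when OMP succeeds
```

A random Gaussian encoder of this kind is one example of the "appropriate distribution" under which the note claims $(2,2)$ instance optimality in probability.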
|
1005.3124
|
An improved HeatS+ProbS hybrid recommendation algorithm based on
heterogeneous initial resource configurations
|
physics.soc-ph cs.IR
|
Network-based recommendation algorithms for user-object link predictions have
achieved significant developments in recent years. For bipartite graphs, the
reallocation of resource in such algorithms is analogous to heat spreading
(HeatS) or probability spreading (ProbS) processes. The best algorithm to date
is a hybrid of the HeatS and ProbS techniques with homogeneous initial resource
configurations, which simultaneously achieves high accuracy and large
diversity. We investigate the effect of heterogeneity in initial configurations
on the HeatS+ProbS hybrid algorithm and find that both recommendation accuracy
and diversity can be further improved in this new setting. Numerical
experiments show that the improvement is robust.
|
1005.3184
|
Key Distribution Protocols Based on Extractors Under the Condition of
Noisy Channels in the Presence of an Active Adversary
|
cs.IT math.IT
|
We consider in this paper the information-theoretic secure key distribution
problem over main and wire-tap noisy channels with public discussion in the
presence of an active adversary. In contrast to our earlier solution to a
similar problem, which used hashing for privacy amplification, in the current
paper we use the technique of extractors.
We propose modified key distribution protocols for which we prove explicit
estimates of key rates, without the use of estimates with uncertain
coefficients in the notations $O,\Omega,\Theta$.
This leads to the new conclusion that the use of extractors is superior to
the use of hash functions only for very large key lengths $\ell$ (of order
$\ell>10^5$ bits).
We suggest hybrid key distribution protocols consisting of two consecutively
executed stages. At the first stage, a short authentication key is generated
using a hash function, whereas at the second stage the final key is generated
with the use of extractors. We show that in fact the use of the extraction
procedure is effective only at the second stage. We also obtain some
constructive estimates of the key rates for such protocols.
|
1005.3185
|
Dynamical issues in interactive representation of physical objects
|
cs.GR cs.HC cs.RO
|
The quality of a simulator equipped with a haptic interface is given by the
dynamical properties of its components: haptic interface, simulator, and
control system. Some application areas of this kind of simulator, such as
musical synthesis, animation or, more generally, instrumental arts, have
specific requirements for the "haptic rendering" of small movements that go
beyond what usual haptic interfaces allow. Variability of object properties and
different situations of object combination are important aspects of such
applications, so that the user may be interested as much in the restitution of
certain global properties of an entire object domain as in the restitution of
properties that are specific to an isolated object. In traditional approaches,
the usual criteria are founded on the paradigm of transparency and are related
to the impedance error introduced by the technical aspects of the system. As a
general aim, rather than minimizing these effects, we seek to characterize them
through physical metaphors, conferring on the haptic medium the role of a tool.
This positioning leads us first to analyze natural human-object interaction as
a simplified evolutive system, and then to consider its synthesis in the case
of interactive physical simulation. By means of a frequency-domain method, this
approach is presented for some elementary configurations of the simulator.
|
1005.3238
|
Power Control and Performance Analysis of Outage-Limited Cellular
Network with MUD-SIC and Macro-Diversity
|
cs.IT math.IT
|
In this paper, we analyze the uplink goodput (bits/sec/Hz successfully
decoded) and per-user packet outage in a cellular network using multi-user
detection with successive interference cancellation (MUD-SIC). We consider
non-ergodic fading channels where microscopic fading channel information is not
available at the transmitters. As a result, packet outage occurs whenever the
data rate of packet transmissions exceeds the instantaneous mutual information
even if powerful channel coding is applied for protection. We are interested in
studying the role of macro-diversity (MDiv) between multiple base stations in
the MUD-SIC performance, where the effect of potential error propagation during
the SIC processing is taken into account. While jointly optimizing the power
and decoding order in MUD-SIC is an NP-hard problem, we derive a simple on/off
power control and an asymptotically optimal decoding order with respect to the
transmit power. Based on the information-theoretic framework, we derive
closed-form expressions for the total system goodput as well as the per-user
packet outage probability. We show that the system goodput does not scale with
SNR due to mutual interference in the SIC process, and that macro-diversity
(MDiv) can alleviate the problem and benefit the system goodput.
|
1005.3290
|
Minimax state estimation for linear continuous differential-algebraic
equations
|
math.OC cs.SY
|
This paper describes a minimax state estimation approach for linear
Differential-Algebraic Equations (DAE) with uncertain parameters. The approach
addresses continuous-time DAE with non-stationary rectangular matrices and
uncertain bounded deterministic input. The observation noise is supposed to be
random with zero mean and unknown bounded correlation function. The main
results are a Generalized Kalman Duality (GKD) principle and a sub-optimal
minimax state estimation algorithm. GKD is derived by means of the
Young-Fenchel duality theorem. GKD proves that the minimax estimate coincides
with a solution to a Dual Control Problem (DCP) with DAE constraints. The
latter is ill-posed and, therefore, the DCP is solved by means of a Tikhonov
regularization approach, resulting in a sub-optimal state estimation algorithm
in the form of a filter. We illustrate the approach with a synthetic example
and discuss connections with impulse observability.
|
1005.3338
|
Feedback Capacity of the Gaussian Interference Channel to within 2 Bits
|
cs.IT math.IT
|
We characterize the capacity region to within 2 bits/s/Hz and the symmetric
capacity to within 1 bit/s/Hz for the two-user Gaussian interference channel
(IC) with feedback. We develop achievable schemes and derive a new outer bound
to arrive at this conclusion. One consequence of the result is that feedback
provides multiplicative gain, i.e., the gain becomes arbitrarily large for
certain channel parameters. It is a surprising result because feedback has been
so far known to provide no gain in memoryless point-to-point channels and only
bounded additive gain in multiple access channels. The gain comes from using
feedback to maximize resource utilization, thereby enabling more efficient
resource sharing between the interfering users. The result makes use of a
deterministic model to provide insights into the Gaussian channel. This
deterministic model is a special case of El Gamal-Costa deterministic model and
as a side-generalization, we establish the exact feedback capacity region of
this general class of deterministic ICs.
|
1005.3350
|
Minimum Variance Multi-Frequency Distortionless Restriction for Digital
Wideband Beamformer
|
cs.IT math.IT
|
This paper proposes a digital amplitude-phase weighting array based on a
minimum variance multi-frequency distortionless restriction (MVMFDR) to avoid
both the frequency-band signal distortion in digital beamformers and the
excessively short time delay line (TDL) requirement in analog wideband TDL arrays.
|
1005.3358
|
The Role of Provenance Management in Accelerating the Rate of
Astronomical Research
|
astro-ph.IM cs.IR
|
The availability of vast quantities of data through electronic archives has
transformed astronomical research. It has also enabled the creation of new
products, models and simulations, often from distributed input data and models,
that are themselves made electronically available. These products will only
provide maximal long-term value to astronomers when accompanied by records of
their provenance; that is, records of the data and processes used in the
creation of such products. We use the creation of image mosaics with the
Montage grid-enabled mosaic engine to emphasize the necessity of provenance
management and to understand the science requirements that higher-level
products impose on provenance management technologies. We describe experiments
with one technology, the "Provenance Aware Service Oriented Architecture"
(PASOA), that stores provenance information at each step in the computation of
a mosaic. The results inform the technical specifications of provenance
management systems, including the need for extensible systems built on common
standards. Finally, we describe examples of provenance management technology
emerging from the fields of geophysics and oceanography that have applicability
to astronomy applications.
|
1005.3390
|
Critical control of a genetic algorithm
|
cs.NE cond-mat.stat-mech
|
Based on speculations coming from statistical mechanics and the conjectured
existence of critical states, I propose a simple heuristic in order to control
the mutation probability and the population size of a genetic algorithm.
|
1005.3439
|
Small World Property of a Rock Joint(Complexity of Frictional
Interfaces: A Complex Network Perspective)
|
physics.geo-ph cond-mat.dis-nn cs.CE nlin.AO
|
The shear strength and stick-slip behavior of a rough rock joint are analyzed
using the complex network approach. We develop a network approach based on
correlation patterns of the void spaces of an evolvable rough fracture (crack
type II). The correlation of network properties with the hydro-mechanical
attributes (obtained from experimental tests) of the fracture before and after
slip is the direct result of the revealed non-contact networks. The joint
distribution of locally and globally filtered correlations gives a close
relation to the attachment-detachment sequences of the contact zones through
the evolution of the shear strength of the rock joint. In particular, the
spread of the node-degree rate relative to the spread of the
clustering-coefficient rate yields possible stick and slip sequences during the
displacements. Our method can be extended to investigate the complexity of the
stick-slip behavior of faults, as well as energy/stress localization on
crumpled shells/sheets in which ridge networks control the energy distribution.
|
1005.3486
|
Exploration of AWGNC and BSC Pseudocodeword Redundancy
|
cs.IT math.IT
|
The AWGNC, BSC, and max-fractional pseudocodeword redundancies of a code are
defined as the smallest number of rows in a parity-check matrix such that the
corresponding minimum pseudoweight is equal to the minimum Hamming distance of
the code. This paper provides new results on the AWGNC, BSC, and max-fractional
pseudocodeword redundancies of codes. The pseudocodeword redundancies for all
codes of small length (at most 9) are computed. Also, comprehensive results are
provided on the cases of cyclic codes of length at most 250 for which the
eigenvalue bound of Vontobel and Koetter is sharp.
|
1005.3502
|
Using machine learning to make constraint solver implementation
decisions
|
cs.AI
|
Programs to solve so-called constraint problems are complex pieces of
software which require many design decisions to be made more or less
arbitrarily by the implementer. These decisions affect the performance of the
finished solver significantly. Once a design decision has been made, it cannot
easily be reversed, although a different decision may be more appropriate for a
particular problem.
We investigate using machine learning to make these decisions automatically
depending on the problem to solve with the alldifferent constraint as an
example. Our system is capable of making non-trivial, multi-level decisions
that improve over always making a default choice.
|
1005.3529
|
Network Synchronization in a Noisy Environment with Time Delays:
Fundamental Limits and Trade-Offs
|
cond-mat.stat-mech cond-mat.dis-nn cs.MA
|
We study the effects of nonzero time delays in stochastic synchronization
problems with linear couplings in an arbitrary network. Using the known exact
threshold value from the theory of differential equations with delays, we
provide the synchronizability threshold for an arbitrary network. Further, by
constructing the scaling theory of the underlying fluctuations, we establish
the absolute limit of synchronization efficiency in a noisy environment with
uniform time delays, i.e., the minimum attainable value of the width of the
synchronization landscape. Our results also have strong implications for
optimization and trade-offs in network synchronization with delays.
|
1005.3561
|
Two-Way Writing on Dirty Paper
|
cs.IT math.IT
|
In this paper, the Two-Way Channel (TWC) with Channel State Information (CSI)
is investigated. First, an achievable rate region is derived for the discrete
memoryless channel. Then, by extending the result to the Gaussian TWC with
additive interference, it is shown that the capacity region of the latter
channel is the same as the capacity when there is no interference, i.e., a
two-way version of Costa's writing-on-dirty-paper problem is established.
|
1005.3566
|
Evolution with Drifting Targets
|
cs.LG
|
We consider the question of the stability of evolutionary algorithms to
gradual changes, or drift, in the target concept. We define an algorithm to be
resistant to drift if, for some inverse polynomial drift rate in the target
function, it converges to accuracy $1-\epsilon$, with polynomial resources,
and then stays within that accuracy indefinitely, except with probability
$\epsilon$, at any one time. We show that every evolution algorithm, in the
sense of Valiant (2007; 2009), can be converted using the Correlational Query
technique of Feldman (2008), into such a drift resistant algorithm. For certain
evolutionary algorithms, such as for Boolean conjunctions, we give bounds on
the rates of drift that they can resist. We develop some new evolution
algorithms that are resistant to significant drift. In particular, we give an
algorithm for evolving linear separators over the spherically symmetric
distribution that is resistant to a drift rate of $O(\epsilon/n)$, and another
algorithm over the more general product normal distributions that resists a
smaller drift rate.
The above translation result can be also interpreted as one on the robustness
of the notion of evolvability itself under changes of definition. As a second
result in that direction we show that every evolution algorithm can be
converted to a quasi-monotonic one that can evolve from any starting point
without the performance ever dipping significantly below that of the starting
point. This permits the somewhat unnatural feature of arbitrary performance
degradations to be removed from several known robustness translations.
|
1005.3579
|
Graph-Structured Multi-task Regression and an Efficient Optimization
Method for General Fused Lasso
|
stat.ML cs.LG math.OC
|
We consider the problem of learning a structured multi-task regression, where
the output consists of multiple responses that are related by a graph and the
correlated response variables are dependent on the common inputs in a sparse
but synergistic manner. Previous methods such as l1/l2-regularized multi-task
regression assume that all of the output variables are equally related to the
inputs, although in many real-world problems, outputs are related in a complex
manner. In this paper, we propose graph-guided fused lasso (GFlasso) for
structured multi-task regression that exploits the graph structure over the
output variables. We introduce a novel penalty function based on fusion penalty
to encourage highly correlated outputs to share a common set of relevant
inputs. In addition, we propose a simple yet efficient proximal-gradient method
for optimizing GFlasso that can also be applied to any optimization problem
with a convex smooth loss and the general class of fusion penalties defined on
arbitrary graph structures. By exploiting the structure of the non-smooth
"fusion penalty", our method achieves a faster convergence rate than the
standard first-order method, sub-gradient method, and is significantly more
scalable than the widely adopted second-order cone-programming and
quadratic-programming formulations. In addition, we provide an analysis of the
consistency property of the GFlasso model. Experimental results not only
demonstrate the superiority of GFlasso over the standard lasso but also show
the efficiency and scalability of our proximal-gradient method.
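The abstract does not spell out the penalty; a commonly used form of the graph-guided fusion penalty for multi-task regression (a sketch under assumed notation: $E$ the output-graph edge set, $r_{st}$ the correlation between outputs $s$ and $t$, $w_{st}$ edge weights) is:

```latex
\min_{B}\; \sum_{t=1}^{T} \bigl\| y_t - X \beta_t \bigr\|_2^2
\;+\; \lambda \sum_{t=1}^{T} \| \beta_t \|_1
\;+\; \gamma \sum_{(s,t) \in E} w_{st} \sum_{j}
      \bigl| \beta_{js} - \operatorname{sign}(r_{st})\, \beta_{jt} \bigr|
```

The third term is non-smooth but convex, which is exactly what a proximal-gradient method must handle beyond the standard lasso penalty.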
|
1005.3620
|
Threshold effects in parameter estimation as phase transitions in
statistical mechanics
|
cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT
|
Threshold effects in the estimation of parameters of non-linearly modulated,
continuous-time, wide-band waveforms are examined from a statistical physics
perspective. These threshold effects are shown to be analogous to phase
transitions of certain disordered physical systems in thermal equilibrium. The
main message of this work is in demonstrating that this physical point of
view may be insightful for understanding the interactions between two or more
parameters to be estimated, from the perspective of the threshold effect.
|
1005.3681
|
Learning Kernel-Based Halfspaces with the Zero-One Loss
|
cs.LG
|
We describe and analyze a new algorithm for agnostically learning
kernel-based halfspaces with respect to the \emph{zero-one} loss function.
Unlike most previous formulations which rely on surrogate convex loss functions
(e.g. hinge-loss in SVM and log-loss in logistic regression), we provide finite
time/sample guarantees with respect to the more natural zero-one loss function.
The proposed algorithm can learn kernel-based halfspaces in worst-case time
$\poly(\exp(L\log(L/\epsilon)))$, for \emph{any} distribution, where $L$ is a
Lipschitz constant (which can be thought of as the reciprocal of the margin),
and the learned classifier is worse than the optimal halfspace by at most
$\epsilon$. We also prove a hardness result, showing that under a certain
cryptographic assumption, no algorithm can learn kernel-based halfspaces in
time polynomial in $L$.
|
1005.3729
|
Compressive Sensing over the Grassmann Manifold: a Unified Geometric
Framework
|
cs.IT cs.DM math.IT
|
$\ell_1$ minimization is often used for finding the sparse solutions of an
under-determined linear system. In this paper we focus on finding sharp
performance bounds on recovering approximately sparse signals using $\ell_1$
minimization, possibly under noisy measurements. While the restricted isometry
property is powerful for the analysis of recovering approximately sparse
signals with noisy measurements, the known bounds on the achievable sparsity
(The "sparsity" in this paper means the size of the set of nonzero or
significant elements in a signal vector.) level can be quite loose. The
neighborly polytope analysis which yields sharp bounds for ideally sparse
signals cannot be readily generalized to approximately sparse signals. Starting
from a necessary and sufficient condition, the "balancedness" property of
linear subspaces, for achieving a certain signal recovery accuracy, we give a
unified \emph{null space Grassmann angle}-based geometric framework for
analyzing the performance of $\ell_1$ minimization. By investigating the
"balancedness" property, this unified framework characterizes sharp
quantitative tradeoffs between the considered sparsity and the recovery
accuracy of the $\ell_{1}$ optimization. As a consequence, this generalizes the
neighborly polytope result for ideally sparse signals. Besides the robustness
in the "strong" sense for \emph{all} sparse signals, we also discuss the
notions of "weak" and "sectional" robustness. Our results concern fundamental
properties of linear subspaces and so may be of independent mathematical
interest.
|
1005.3773
|
Behavioral Simulations in MapReduce
|
cs.DB cs.DC
|
In many scientific domains, researchers are turning to large-scale behavioral
simulations to better understand important real-world phenomena. While there
has been a great deal of work on simulation tools from the high-performance
computing community, behavioral simulations remain challenging to program and
automatically scale in parallel environments. In this paper we present BRACE
(Big Red Agent-based Computation Engine), which extends the MapReduce framework
to process these simulations efficiently across a cluster. We can leverage
spatial locality to treat behavioral simulations as iterated spatial joins and
greatly reduce the communication between nodes. In our experiments we achieve
nearly linear scale-up on several realistic simulations.
Though processing behavioral simulations in parallel as iterated spatial
joins can be very efficient, it can be much simpler for the domain scientists
to program the behavior of a single agent. Furthermore, many simulations
include a considerable amount of complex computation and message passing
between agents, which makes it important to optimize the performance of a
single node and the communication across nodes. To address both of these
challenges, BRACE includes a high-level language called BRASIL (the Big Red
Agent SImulation Language). BRASIL has object oriented features for programming
simulations, but can be compiled to a data-flow representation for automatic
parallelization and optimization. We show that by using various optimization
techniques, we can achieve both scalability and single-node performance similar
to that of a hand-coded simulation.
|
1005.3818
|
Public and private resource trade-offs for a quantum channel
|
quant-ph cs.IT math.IT
|
Collins and Popescu realized a powerful analogy between several resources in
classical and quantum information theory. The Collins-Popescu analogy states
that public classical communication, private classical communication, and
secret key interact with one another somewhat similarly to the way that
classical communication, quantum communication, and entanglement interact. This
paper discusses the information-theoretic treatment of this analogy for the
case of noisy quantum channels. We determine a capacity region for a quantum
channel interacting with the noiseless resources of public classical
communication, private classical communication, and secret key. We then compare
this region with the classical-quantum-entanglement region from our prior
efforts and explicitly observe the information-theoretic consequences of the
strong correlations in entanglement and the lack of a super-dense coding
protocol in the public-private-secret-key setting. The region simplifies for
several realistic, physically-motivated channels such as entanglement-breaking
channels, Hadamard channels, and quantum erasure channels, and we are able to
compute and plot the region for several examples of these channels.
|
1005.3873
|
Improved OMP Approach to Sparse Multi-path Channel Estimation via
Adaptive Inter-atom Interference Mitigation
|
cs.IT math.IT
|
Since most components of a sparse multi-path channel (SMPC) are zero, the
impulse response of an SMPC can be recovered from a short training sequence.
Though the ordinary orthogonal matching pursuit (OMP) algorithm provides a very
fast implementation of SMPC estimation, it suffers from inter-atom interference
(IAI), especially for an SMPC with a large delay spread and a short training
sequence. In this paper, an adaptive IAI mitigation method is proposed to
improve the performance of SMPC estimation based on a general OMP algorithm.
Unlike the ordinary OMP algorithm, a sensing dictionary is designed adaptively
and posterior information is utilized efficiently to prevent false atoms from
being selected due to serious IAI. Numerical experiments illustrate that the
proposed general OMP algorithm based on adaptive IAI mitigation outperforms both
the ordinary OMP algorithm and the general OMP algorithm based on non-adaptive
IAI mitigation.
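The ordinary OMP baseline referred to above can be sketched as follows (a generic, illustrative implementation on a made-up toy sparse channel, not the authors' adaptive IAI-mitigating variant):

```python
import numpy as np

def omp(A, y, k):
    """Ordinary orthogonal matching pursuit: recover a k-sparse x with y ~= A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy sparse channel: 3 nonzero taps out of 64, 32 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.4]
x_hat = omp(A, A @ x_true, k=3)
```

With noiseless measurements and a well-conditioned random dictionary, the greedy support selection recovers the taps exactly; IAI in the paper's sense corresponds to correlated atoms steering this greedy step toward false taps.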
|
1005.3889
|
Capacity and Modulations with Peak Power Constraint
|
cs.IT math.IT
|
A practical communication channel often suffers from constraints on input
other than the average power, such as the peak power constraint. In order to
compare achievable rates with different constellations as well as the channel
capacity under such constraints, it is crucial to take these constraints into
consideration properly. In this paper, we propose a direct approach to compare
the achievable rates of practical input constellations and the capacity under
such constraints. As an example, we study the discrete-time complex-valued
additive white Gaussian noise (AWGN) channel and compare the capacity under the
peak power constraint with the achievable rates of phase shift keying (PSK) and
quadrature amplitude modulation (QAM) input constellations.
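As a rough illustration of comparing constellation-constrained rates (not taken from the paper; the function and parameters are made up), the mutual information of M-PSK over the complex AWGN channel can be estimated by Monte Carlo:

```python
import numpy as np

def psk_rate(M, snr_db, n=20000, seed=1):
    """Monte Carlo estimate of the achievable rate (bits/channel use) of M-PSK
    over the complex AWGN channel. PSK symbols have unit modulus, so the peak
    power equals the average power."""
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)          # noise variance for E|x|^2 = 1
    const = np.exp(2j * np.pi * np.arange(M) / M)
    x = const[rng.integers(M, size=n)]
    y = x + np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    # I(X;Y) = log2(M) - E[ log2 sum_m exp((|y-x|^2 - |y-x_m|^2) / sigma2) ]
    llr = (np.abs(y - x)[:, None] ** 2 - np.abs(y[:, None] - const[None, :]) ** 2) / sigma2
    mx = llr.max(axis=1)                       # stabilised log-sum-exp
    log2sum = (mx + np.log(np.exp(llr - mx[:, None]).sum(axis=1))) / np.log(2)
    return np.log2(M) - log2sum.mean()
```

For example, `psk_rate(4, 10)` approaches the 2 bits/use ceiling of QPSK, while at low SNR the rate drops well below log2 M.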
|
1005.3902
|
Morphonette: a morphological network of French
|
cs.CL
|
This paper describes in detail the first version of Morphonette, a new
French morphological resource, and a new radically lexeme-based method of
morphological analysis. This research is grounded in a paradigmatic conception
of derivational morphology where the morphological structure is a structure of
the entire lexicon and not one of the individual words it contains. The
discovery of this structure relies on a measure of morphological similarity
between words, on formal analogy and on the properties of two morphological
paradigms:
|
1005.3968
|
A Scheme of Concatenated Quantum Code to Protect against both
Computational Error and an Erasure
|
cs.IT math.IT quant-ph
|
We present a description of encoding/decoding for a concatenated quantum code
that protects both against quantum computational errors and against the
occurrence of one quantum erasure. To this end, we show how encoding and
decoding are performed for quantum graph codes, which provide the protection
against computational errors (the external code). As the internal code, an
encoding/decoding scheme based on GHZ states is used for protection against
the occurrence of one quantum erasure.
|
1005.4005
|
Optical phase extraction algorithm based on the continuous wavelet and
the Hilbert transforms
|
cs.CE
|
In this paper we present an algorithm for optical phase evaluation based on
the wavelet transform technique. The main advantage of this method is that it
requires only one fringe pattern. The algorithm relies on a second, {\pi}/2
phase-shifted fringe pattern, which is computed via the Hilbert transform. To
test its validity, the algorithm was used to demodulate a simulated fringe
pattern, recovering the phase distribution with good accuracy.
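A minimal sketch of the Hilbert-transform step on a 1-D simulated fringe, assuming the background has already been removed (illustrative only; the paper's wavelet-transform stage is not shown):

```python
import numpy as np

def fft_hilbert(signal):
    """Hilbert transform via the FFT (imaginary part of the analytic signal)."""
    n = len(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(signal) * h).imag

# 1-D fringe I(x) = cos(phi(x)) with a smooth phase; background assumed removed.
x = np.linspace(0.0, 1.0, 1024)
phase_true = 2 * np.pi * (20 * x + 5 * x ** 2)
fringe = np.cos(phase_true)
quadrature = fft_hilbert(fringe)       # ~= sin(phi), the pi/2-shifted pattern
phase = np.unwrap(np.arctan2(quadrature, fringe))
```

Away from the edges (where the FFT's periodicity assumption bites), the unwrapped `phase` closely tracks `phase_true`.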
|
1005.4020
|
Image Segmentation by Using Threshold Techniques
|
cs.CV
|
This paper studies image segmentation techniques based on five thresholding
methods: the mean method, the P-tile method, the histogram dependent technique
(HDT), the edge maximization technique (EMT), and the visual technique. The
methods are compared with one another so as to choose the best technique for
threshold-based image segmentation. The techniques are applied to three
satellite images to obtain baseline estimates for threshold segmentation.
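For illustration, the simplest of the five methods, the mean method, can be sketched as follows (on a made-up toy image, not the satellite data used in the paper):

```python
import numpy as np

def mean_threshold(image):
    """Mean method: use the average grey level of the image as the global threshold."""
    t = image.mean()
    return (image > t).astype(np.uint8), t

# Toy image: dark background (grey level 40) with one brighter region (200).
img = np.full((64, 64), 40.0)
img[20:40, 20:40] = 200.0
mask, t = mean_threshold(img)
```

The mean lands between the two grey levels, so the binary `mask` isolates the bright region; on real imagery with unbalanced class sizes, the mean is pulled toward the dominant class, which is the weakness the other four methods address.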
|
1005.4025
|
A Soft Computing Model for Physicians' Decision Process
|
cs.AI
|
In this paper the author presents a kind of soft computing technique, mainly
an application of the fuzzy set theory of Prof. Zadeh [16], to a problem in
medical expert systems. The chosen problem is the design of a physician's
decision model which can take crisp as well as fuzzy data as input, unlike the
traditional models. The author presents a mathematical model based on fuzzy set
theory for physician-aided evaluation of a complete representation of
information emanating from the initial interview, including patient past
history, present symptoms, signs observed upon physical examination, and
results of clinical and diagnostic tests.
|
1005.4032
|
Combining Multiple Feature Extraction Techniques for Handwritten
Devnagari Character Recognition
|
cs.CV cs.AI
|
In this paper we present an OCR system for handwritten Devnagari characters.
Basic symbols are recognized by a neural classifier. We have used four feature
extraction techniques, namely intersection, shadow features, chain code
histogram, and straight line fitting features. Shadow features are computed
globally for the character image, while intersection features, chain code
histogram features, and line fitting features are computed by dividing the
character image into different segments. A weighted majority voting technique
is used for combining the classification decisions obtained from four Multi
Layer Perceptron (MLP) based classifiers. On experimentation with a dataset of
4900 samples, the overall recognition rate observed is 92.80% when the top
five choices are considered. This method is compared with other recent methods
for handwritten Devnagari character recognition and has been observed to have
a better success rate.
|
1005.4034
|
Face Synthesis (FASY) System for Generation of a Face Image from Human
Description
|
cs.CV
|
This paper aims at generating a new face based on a human-like description
using a new concept. The FASY (FAce SYnthesis) System is a face database
retrieval and new face generation system that is under development. One of its
main features is the generation of the requested face when it is not found in
the existing database, which also allows continuous growth of the database.
|
1005.4035
|
Classification of Polar-Thermal Eigenfaces using Multilayer Perceptron
for Human Face Recognition
|
cs.CV
|
This paper presents a novel approach to handle the challenges of face
recognition. In this work thermal face images are considered, which minimizes
the effect of illumination changes and of occlusion due to moustaches, beards,
adornments, etc. The proposed approach registers the training and testing
thermal face images in polar coordinates, which is capable of handling the
complications introduced by scaling and rotation. Polar images are projected
into eigenspace and finally classified using a multi-layer perceptron. In the
experiments we have used thermal face images from the Object Tracking and
Classification Beyond Visible Spectrum (OTCBVS) benchmark database.
Experimental results show that the proposed approach significantly improves
verification and identification performance, with a success rate of 97.05%.
|
1005.4044
|
Reduction of Feature Vectors Using Rough Set Theory for Human Face
Recognition
|
cs.CV
|
In this paper we describe a procedure to reduce the size of the input feature
vector. A complex pattern recognition problem like face recognition involves a
huge input feature dimension. To reduce that dimension we have used eigenspace
projection (also called Principal Component Analysis), which is basically a
transformation of the space. To reduce it further we have applied a feature
selection method to select the indispensable features, which remain in the
final feature vector. Features that are not selected are removed from the
final feature vector, being considered redundant or superfluous. For the
selection of features we have used the concepts of reduct and core from rough
set theory. This method has shown very good performance. It is worth
mentioning that in some cases the recognition rate increases as the feature
vector dimension decreases.
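The eigenspace-projection step can be sketched as follows (a generic PCA via the SVD on synthetic data; the rough-set reduct/core selection step is not shown):

```python
import numpy as np

def pca_project(X, k):
    """Eigenspace projection: centre X (n_samples x n_features) and keep the
    top-k principal components, computed via the SVD."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(2)
# 100 synthetic "face" vectors that live on a 5-dimensional subspace plus noise.
latent = rng.standard_normal((100, 5))
X = latent @ rng.standard_normal((5, 400)) + 0.01 * rng.standard_normal((100, 400))
Z, components = pca_project(X, 5)   # Z: 100 x 5 reduced feature vectors
```

The 400-dimensional vectors compress to 5 components with negligible reconstruction error, after which a reduct/core analysis would prune the remaining components.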
|
1005.4103
|
LACBoost and FisherBoost: Optimally Building Cascade Classifiers
|
cs.CV
|
Object detection is one of the key tasks in computer vision. The cascade
framework of Viola and Jones has become the de facto standard. A classifier in
each node of the cascade is required to achieve extremely high detection rates,
instead of low overall classification error. Although there are a few reported
methods addressing this requirement in the context of object detection, there
is no principled feature selection method that explicitly takes into account
this asymmetric node learning objective. We provide such a boosting algorithm
in this work. It is inspired by the linear asymmetric classifier (LAC) of Wu et
al. in that our boosting algorithm optimizes a similar cost function. The new
totally-corrective boosting algorithm is implemented by the column generation
technique in convex optimization. Experimental results on face detection
suggest that our proposed boosting algorithms can improve the state-of-the-art
methods in detection performance.
|
1005.4115
|
Bucklin Voting is Broadly Resistant to Control
|
cs.CC cs.MA
|
Electoral control models ways of changing the outcome of an election via such
actions as adding/deleting/partitioning either candidates or voters. These
actions modify an election's participation structure and aim at either making a
favorite candidate win ("constructive control") or preventing a despised candidate
from winning ("destructive control"), which yields a total of 22 standard
control scenarios. To protect elections from such control attempts,
computational complexity has been used to show that electoral control, though
not impossible, is computationally prohibitive. Among natural voting systems
with a polynomial-time winner problem, the two systems with the highest number
of proven resistances to control types (namely 19 out of 22) are
"sincere-strategy preference-based approval voting" (SP-AV, a modification of a
system proposed by Brams and Sanver) and fallback voting. Both are hybrid
systems; e.g., fallback voting combines approval with Bucklin voting. In this
paper, we study the control complexity of Bucklin voting itself and show that
it behaves equally well in terms of control resistance for the 20 cases
investigated so far. As Bucklin voting is a special case of fallback voting,
all resistances shown for Bucklin voting in this paper strengthen the
corresponding resistance for fallback voting.
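For reference, a plain Bucklin winner computation can be sketched as follows (an illustrative implementation on a made-up profile; tie-handling conventions vary across the literature):

```python
from collections import Counter

def bucklin_winner(profile):
    """Bucklin voting: widen the counted prefix of each ranking until some
    candidate is approved by a strict majority of the voters; return a
    candidate with the highest such count."""
    n = len(profile)
    for r in range(1, len(profile[0]) + 1):
        counts = Counter(c for vote in profile for c in vote[:r])
        best, score = max(counts.items(), key=lambda kv: kv[1])
        if score > n // 2:
            return best
    return None

# Five voters over candidates a, b, c; no level-1 majority, b wins at level 2.
votes = [("a", "b", "c"), ("b", "a", "c"), ("c", "b", "a"),
         ("a", "c", "b"), ("b", "c", "a")]
winner = bucklin_winner(votes)
```

At level 1 the counts are a:2, b:2, c:1 (no strict majority of the 5 voters); at level 2 candidate b is in the top two of four voters and wins.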
|
1005.4118
|
Incremental Training of a Detector Using Online Sparse
Eigen-decomposition
|
cs.CV
|
The ability to efficiently and accurately detect objects plays a very crucial
role for many computer vision tasks. Recently, offline object detectors have
shown a tremendous success. However, one major drawback of offline techniques
is that a complete set of training data has to be collected beforehand. In
addition, once learned, an offline detector cannot make use of newly arriving
data. To alleviate these drawbacks, online learning has been adopted with the
following objectives: (1) the technique should be computationally and storage
efficient; (2) the updated classifier must maintain its high classification
accuracy. In this paper, we propose an effective and efficient framework for
learning an adaptive online greedy sparse linear discriminant analysis (GSLDA)
model. Unlike many existing online boosting detectors, which usually apply
exponential or logistic loss, our online algorithm makes use of LDA's learning
criterion that not only aims to maximize the class-separation criterion but
also incorporates the asymmetrical property of training data distributions. We
provide a better alternative for online boosting algorithms in the context of
training a visual object detector. We demonstrate the robustness and efficiency
of our methods on handwritten digit and face data sets. Our results confirm
that object detection tasks benefit significantly when trained in an online
manner.
|
1005.4159
|
The Complexity of Manipulating $k$-Approval Elections
|
cs.AI
|
An important problem in computational social choice theory is the complexity
of undesirable behavior among agents, such as control, manipulation, and
bribery in election systems. These kinds of voting strategies are often
tempting at the individual level but disastrous for the agents as a whole.
Creating election systems where the determination of such strategies is
difficult is thus an important goal.
An interesting set of elections is that of scoring protocols. Previous work
in this area has demonstrated the complexity of misuse in cases involving a
fixed number of candidates, and of specific election systems, such as Borda,
on an unbounded number of candidates. In contrast, we take the first step in
generalizing the results of computational complexity of election misuse to
cases of infinitely many scoring protocols on an unbounded number of
candidates. Interesting families of systems include $k$-approval and $k$-veto
elections, in which voters distinguish $k$ candidates from the candidate set.
Our main result is to partition the problems of these families based on their
complexity. We do so by showing they are polynomial-time computable, NP-hard,
or polynomial-time equivalent to another problem of interest. We also
demonstrate a surprising connection between manipulation in election systems
and some graph theory problems.
|
1005.4178
|
Optimal Exact-Regenerating Codes for Distributed Storage at the MSR and
MBR Points via a Product-Matrix Construction
|
cs.IT cs.DC cs.NI math.IT
|
Regenerating codes are a class of distributed storage codes that optimally
trade the bandwidth needed for repair of a failed node with the amount of data
stored per node of the network. Minimum Storage Regenerating (MSR) codes
minimize first, the amount of data stored per node, and then the repair
bandwidth, while Minimum Bandwidth Regenerating (MBR) codes carry out the
minimization in the reverse order. An [n, k, d] regenerating code permits the
data to be recovered by connecting to any k of the n nodes in the network,
while requiring that repair of a failed node be made possible by connecting
(using links of lesser capacity) to any d nodes. Previous explicit and general
constructions of exact-regenerating codes have been confined to the case n=d+1.
In this paper, we present optimal, explicit constructions of MBR codes for all
feasible values of [n, k, d] and MSR codes for all [n, k, d >= 2k-2], using a
product-matrix framework. The particular product-matrix nature of the
constructions is shown to significantly simplify system operation. To the best
of our knowledge, these are the first constructions of exact-regenerating codes
that allow the number n of nodes in the distributed storage network, to be
chosen independent of the other parameters. The paper also contains a simpler
description, in the product-matrix framework, of a previously constructed MSR
code in which the parameter d satisfies [n=d+1, k, d >= 2k-1].
|
1005.4200
|
A Robust Beamformer Based on Weighted Sparse Constraint
|
cs.IT math.IT
|
Applying a sparse constraint on the beam pattern has been suggested to
suppress the sidelobe level of a minimum variance distortionless response
(MVDR) beamformer. In this letter, we introduce a weighted sparse constraint in
the beamformer design to provide a lower sidelobe level and deeper nulls for
interference avoidance, as compared with a conventional MVDR beamformer. The
proposed beamformer also shows improved robustness against the mismatch between
the steering angle and the direction of arrival (DOA) of the desired signal,
caused by imperfect estimation of DOA.
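The conventional MVDR baseline can be sketched as follows (a generic textbook implementation with a known covariance matrix, not the proposed weighted-sparse design; the array geometry and numbers are illustrative):

```python
import numpy as np

def steering(theta_deg, n):
    """Steering vector of an n-element uniform linear array, half-wavelength spacing."""
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(n))

def mvdr_weights(R, a):
    """MVDR: minimise w^H R w subject to the distortionless constraint w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

n = 8
a_sig = steering(0.0, n)            # desired signal from broadside
a_int = steering(30.0, n)           # strong interferer at 30 degrees
# Covariance: unit-power white noise plus a strong interferer.
R = np.eye(n) + 100.0 * np.outer(a_int, a_int.conj())
w = mvdr_weights(R, a_sig)
```

The resulting `w` keeps unit gain toward the desired direction while placing a deep null on the interferer; the paper's weighted sparse constraint further shapes the sidelobes and nulls beyond what this baseline provides.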
|
1005.4216
|
Classification of LULC Change Detection using Remotely Sensed Data for
Coimbatore City, Tamilnadu, India
|
cs.CV
|
Maps are used to describe far-off places and serve as an aid for navigation
and military strategy. Mapping of land is important, and mapping work is based
on (i) natural resource management and development, (ii) information
technology, (iii) environmental development, (iv) facility management, and (v)
e-governance. The land use / land cover (LULC) system adopted by almost all
organisations, scientists, engineers, and the remote sensing community
involved in mapping earth-surface features is derived from the United States
Geological Survey (USGS) LULC classification system. The application of RS and
GIS involves the determination of homogeneous zones, trend analysis of land
use, integration of new area changes, change detection, etc. The National
Remote Sensing Agency (NRSA), Govt. of India, has devised a generalized LULC
classification system for Indian conditions based on the various categories of
earth-surface features, the resolution of available satellite data, the
capabilities of sensors, and present and future applications. High-resolution
satellite images offer profuse information about the earth's surface for
remote sensing applications. Change detection methodologies are used to
extract the target changes in these areas from high-resolution images and to
rapidly update geodatabase information. Traditionally, classification
approaches have focused on per-pixel technologies: pixels within areas assumed
to be homogeneous are analyzed independently.
|
1005.4263
|
Facial Recognition Technology: An analysis with scope in India
|
cs.MA
|
A facial recognition system is a computer application for automatically
identifying or verifying a person from a digital image or a video frame from a
video source. One way to do this is by comparing selected facial features from
the image with a facial database. It is typically used in security systems and
can be compared to other biometrics such as fingerprint or eye iris
recognition systems. In this paper we focus on 3-D facial recognition systems
and biometric facial recognition systems. We critique facial recognition
systems, discussing their effectiveness and weaknesses. This paper also
discusses the scope of such recognition systems in India.
|
1005.4267
|
Content Base Image Retrieval Using Phong Shading
|
cs.MM cs.IR
|
The digital image data is rapidly expanding in quantity and heterogeneity.
Traditional information retrieval techniques do not meet users' demands, so
there is a need to develop an efficient system for content based image
retrieval. Content based image retrieval means the retrieval of images from a
database on the basis of visual features of the image, such as color and
texture. In our proposed method, features are extracted after applying Phong
shading to the input image; Phong shading flattens out the dull surfaces of
the image. The features are extracted using color, texture, and edge density
methods. The extracted feature values are used to find the similarity between
the input query image and the database images, measured by the Euclidean
distance formula. The experimental results show that the proposed approach
yields better retrieval results with Phong shading.
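The Euclidean-distance matching step can be sketched as follows (an illustrative toy example; the actual color/texture/edge-density features are not reproduced):

```python
import numpy as np

def retrieve(query_features, db_features, top=3):
    """Rank database images by the Euclidean distance between feature vectors."""
    d = np.linalg.norm(db_features - query_features, axis=1)
    order = np.argsort(d)[:top]
    return order, d[order]

# Toy 2-D feature vectors for four database images and one query.
db = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
q = np.array([1.0, 0.0])
idx, dist = retrieve(q, db, top=2)   # image 0 is an exact match
```

In the full system `db` would hold the Phong-shaded color/texture/edge features of every database image, and `q` those of the query image.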
|
1005.4270
|
Clustering Time Series Data Stream - A Literature Survey
|
cs.IR
|
Mining time series data has seen a tremendous growth of interest in today's
world. To provide an overview, various implementations are studied and
summarized to identify the different problems in existing applications.
Clustering time series is a problem that has applications in a wide
assortment of fields and has recently attracted a large amount of research.
Time series data are frequently large and may contain outliers. In addition,
time series are a special type of data set in which elements have a temporal
ordering. Therefore, clustering of such data streams is an important issue in
the data mining process. Numerous techniques and clustering algorithms have
been proposed to assist in clustering time series data streams. The clustering
algorithms and their effectiveness on various applications are compared with a
view to developing a new method that solves the existing problems. This paper
presents a survey on various clustering algorithms available for time series
datasets. Moreover, the distinctive features and limitations of previous
research are discussed and several achievable topics for future study are
identified. Furthermore, the areas that utilize time series clustering are
also summarized.
|
1005.4272
|
Inaccuracy Minimization by Partitioning Fuzzy Data Sets - Validation of
Analytical Methodology
|
cs.AI
|
In the last two decades, a number of methods have been proposed for
forecasting based on fuzzy time series. Most of the fuzzy time series methods
are presented for forecasting of car road accidents. However, the forecasting
accuracy rates of the existing methods are not good enough. In this paper, we
compare our proposed new method of fuzzy time series forecasting with existing
methods. Our method is based on means-based partitioning of the historical
data of car road accidents. The proposed method belongs to the kth-order and
time-variant methods, and achieves a better forecasting accuracy rate for car
road accidents than the existing methods.
|
1005.4285
|
Local Minima of a Quadratic Binary Functional with Quasi-Hebbian
Connection Matrix
|
cond-mat.dis-nn cs.NE
|
The local minima of a quadratic functional depending on binary variables are
discussed. An arbitrary connection matrix can be presented in the form of a
quasi-Hebbian expansion where each pattern is supplied with its own individual
weight. For such matrices, statistical physics methods allow one to derive an
equation describing the local minima of the functional. A model where only one
weight differs from the others is discussed in detail. In this case the
above-mentioned equation can be solved analytically. The obtained results are
confirmed by computer simulations.
|
1005.4292
|
Application Of Fuzzy System In Segmentation Of MRI Brain Tumor
|
cs.CV
|
Segmentation of images holds an important position in the area of image
processing. It becomes more important while dealing with medical images, where
pre-surgery and post-surgery decisions are required for the purpose
of initiating and speeding up the recovery process. Segmentation of 3-D tumor
structures from magnetic resonance images (MRI) is a very challenging problem
due to the variability of tumor geometry and intensity patterns. Level set
evolution combining global smoothness with the flexibility of topology changes
offers significant advantages over the conventional statistical classification
followed by mathematical morphology. Level set evolution with constant
propagation needs to be initialized either completely inside or outside the
tumor and can leak through weak or missing boundary parts. Replacing the
constant propagation term by a statistical force overcomes these limitations
and results in a convergence to a stable solution. Using MR images presenting
tumors, probabilities for background and tumor regions are calculated from a
pre- and post-contrast difference image and mixture modeling fit of the
histogram. The whole image is used for initialization of the level set
evolution to segment the tumor boundaries.
|
1005.4298
|
Distantly Labeling Data for Large Scale Cross-Document Coreference
|
cs.AI cs.IR cs.LG
|
Cross-document coreference, the problem of resolving entity mentions across
multi-document collections, is crucial to automated knowledge base construction
and data mining tasks. However, the scarcity of large labeled data sets has
hindered supervised machine learning research for this task. In this paper we
develop and demonstrate an approach based on ``distantly-labeling'' a data set
from which we can train a discriminative cross-document coreference model. In
particular we build a dataset of more than a million people mentions extracted
from 3.5 years of New York Times articles, leverage Wikipedia for distant
labeling with a generative model (and measure the reliability of such
labeling); then we train and evaluate a conditional random field coreference
model that has factors on cross-document entities as well as mention-pairs.
This coreference model obtains high accuracy in resolving mentions and entities
that are not present in the training data, indicating applicability to
non-Wikipedia data. Given the large amount of data, our work is also an
exercise demonstrating the scalability of our approach.
|
1005.4316
|
Bayesian Cram\'{e}r-Rao Bound for Noisy Non-Blind and Blind Compressed
Sensing
|
cs.IT math.IT
|
In this paper, we address the theoretical limitations in reconstructing
sparse signals (in a known complete basis) using compressed sensing framework.
We also divide the CS to non-blind and blind cases. Then, we compute the
Bayesian Cramer-Rao bound for estimating the sparse coefficients while the
measurement matrix elements are independent zero mean random variables.
Simulation results show a large gap between the lower bound and the performance
of the practical algorithms when the number of measurements is low.
|
1005.4344
|
Max-stable sketches: estimation of Lp-norms, dominance norms and point
queries for non-negative signals
|
cs.DS cs.DB
|
Max-stable random sketches can be computed efficiently on fast streaming
positive data sets by using only sequential access to the data. They can be
used to answer point and Lp-norm queries for the signal. There is an intriguing
connection between the so-called p-stable (or sum-stable) and the max-stable
sketches. Rigorous performance guarantees through error-probability estimates
are derived and the algorithmic implementation is discussed.
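A minimal sketch of a max-stable sketch and the resulting Lp-norm estimate (here L1), under the standard construction with α-Fréchet multipliers (illustrative; the sketch size and signal are made up):

```python
import numpy as np

rng = np.random.default_rng(3)

def max_stable_sketch(f, K, alpha=1.0):
    """K-coordinate max-stable sketch of a non-negative signal f:
    S_j = max_i f_i * Z_ij, where Z_ij = E_ij^(-1/alpha) is standard
    alpha-Frechet (E_ij ~ Exp(1))."""
    Z = rng.exponential(size=(K, len(f))) ** (-1.0 / alpha)
    return (f * Z).max(axis=1)

def lp_norm_estimate(S, alpha=1.0):
    """Each S_j is alpha-Frechet with scale ||f||_alpha, so S_j^(-alpha) is
    exponential with mean ||f||_alpha^(-alpha)."""
    return np.mean(S ** (-alpha)) ** (-1.0 / alpha)

f = rng.uniform(0.0, 1.0, size=500)          # a non-negative "signal"
S = max_stable_sketch(f, K=4000, alpha=1.0)
est = lp_norm_estimate(S, alpha=1.0)         # estimates the L1 norm f.sum()
```

The streaming property comes from max-stability: each new item (i, f_i) only requires updating the K running maxima, never revisiting old data.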
|
1005.4376
|
Characterizing the community structure of complex networks
|
physics.soc-ph cs.IR
|
Community structure is one of the key properties of complex networks and
plays a crucial role in their topology and function. While an impressive amount
of work has been done on the issue of community detection, very little
attention has been so far devoted to the investigation of communities in real
networks. We present a systematic empirical analysis of the statistical
properties of communities in large information, communication, technological,
biological, and social networks. We find that the mesoscopic organization of
networks of the same category is remarkably similar. This is reflected in
several characteristics of community structure, which can be used as
``fingerprints'' of specific network categories. While community size
distributions are always broad, certain categories of networks consist mainly
of tree-like communities, while others have denser modules. Average path
lengths within communities initially grow logarithmically with community size,
but the growth saturates or slows down for communities larger than a
characteristic size. This behaviour is related to the presence of hubs within
communities, whose roles differ across categories. Also the community
embeddedness of nodes, measured in terms of the fraction of links within their
communities, has a characteristic distribution for each category. Our findings
are verified by the use of two fundamentally different community detection
methods.
|
1005.4446
|
Genetic algorithms and the art of Zen
|
cs.NE cs.AI
|
In this paper we present a novel genetic algorithm (GA) solution to a simple
yet challenging commercial puzzle game known as the Zen Puzzle Garden (ZPG). We
describe the game in detail, before presenting a suitable encoding scheme and
fitness function for candidate solutions. We then compare the performance of
the genetic algorithm with that of the A* algorithm. Our results show that the
GA is competitive with informed search in terms of solution quality, and
significantly out-performs it in terms of computational resource requirements.
We conclude with a brief discussion of the implications of our findings for
game solving and other "real world" problems.
|
1005.4447
|
Evidence Algorithm and System for Automated Deduction: A Retrospective
View
|
cs.AI cs.LO
|
A research project aimed at the development of an automated theorem proving
system was started in Kiev (Ukraine) in early 1960s. The mastermind of the
project, Academician V.Glushkov, baptized it "Evidence Algorithm", EA. The work
on the project lasted, off and on, more than 40 years. In the framework of the
project, the Russian and English versions of the System for Automated
Deduction, SAD, were constructed. They may already be seen as powerful
theorem-proving assistants.
|
1005.4457
|
Pipeline-Centric Provenance Model
|
astro-ph.IM cs.IR
|
In this paper we propose a new provenance model which is tailored to a class
of workflow-based applications. We motivate the approach with use cases from
the astronomy community. We generalize the class of applications the approach
is relevant to and propose a pipeline-centric provenance model. Finally, we
evaluate the benefits in terms of storage needed by the approach when applied
to an astronomy application.
|
1005.4461
|
On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion
Approach
|
cs.IT math.IT
|
One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is
based on using multiple trials of a simple RS decoding algorithm in combination
with erasing or flipping a set of symbols or bits in each trial. This paper
presents a framework based on rate-distortion (RD) theory to analyze these
multiple-decoding algorithms. By defining an appropriate distortion measure
between an error pattern and an erasure pattern, the successful decoding
condition, for a single errors-and-erasures decoding trial, becomes equivalent
to distortion being less than a fixed threshold. Finding the best set of
erasure patterns also turns into a covering problem which can be solved
asymptotically by rate-distortion theory. Thus, the proposed approach can be
used to understand the asymptotic performance-versus-complexity trade-off of
multiple errors-and-erasures decoding of RS codes.
This initial result is also extended in a few directions. The rate-distortion
exponent (RDE) is computed to give more precise results for moderate
blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are
analyzed using this framework. Analytical and numerical computations of the RD
and RDE functions are also presented. Finally, simulation results show that
sets of erasure patterns designed using the proposed methods outperform other
algorithms with the same number of decoding trials.
|
1005.4472
|
Distributive Power Control Algorithm for Multicarrier Interference
Network over Time-Varying Fading Channels - Tracking Performance Analysis and
Optimization
|
cs.IT math.IT
|
Distributed power control over interference limited network has received an
increasing intensity of interest over the past few years. Distributed solutions
(like the iterative water-filling, gradient projection, etc.) have been
intensively investigated under \emph{quasi-static} channels. However, as such
distributed solutions involve iterative updating and explicit message passing,
it is unrealistic to assume that the wireless channel remains unchanged during
the iterations. Unfortunately, the behavior of those distributed solutions
under \emph{time-varying} channels is in general unknown. In this paper, we
shall investigate the distributed scaled gradient projection algorithm (DSGPA)
in a $K$ pairs multicarrier interference network under a finite-state Markov
channel (FSMC) model. We shall analyze the \emph{convergence property} as well
as \emph{tracking performance} of the proposed DSGPA. Our analysis shows that
the proposed DSGPA converges to a limit region rather than a single point under
the FSMC model. We also show that the order of growth of the tracking errors is
given by $\mathcal{O}(1/\bar{N})$, where $\bar{N}$ is the \emph{average
sojourn time} of the FSMC. Based on the analysis, we shall derive the
\emph{tracking error optimal scaling matrices} via Markov decision process
modeling. We shall show that the tracking error optimal scaling matrices can be
implemented distributively at each transmitter. The numerical results show the
superior performance of the proposed DSGPA over three baseline schemes, including
the gradient projection algorithm with a constant stepsize.
|
1005.4496
|
Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection
|
cs.AI
|
In this paper, a new learning algorithm for adaptive network intrusion
detection using a naive Bayesian classifier and a decision tree is
presented. The algorithm achieves balanced detection and keeps false
positives at an acceptable level for different types of network attacks, and
it eliminates redundant attributes as well as contradictory examples from
the training data that make the detection model complex. The proposed
algorithm also addresses some difficulties of data mining, such as handling
continuous attributes, dealing with missing attribute values, and reducing
noise in training data. Due to the large volumes of security audit data and
the complex, dynamic properties of intrusion behaviours, several
data-mining-based intrusion detection techniques have been applied to
network-based traffic data and host-based data in the last decades. However,
various issues remain to be examined in current intrusion detection systems
(IDS). We compared the performance of our proposed algorithm with existing
learning algorithms on the KDD99 benchmark intrusion detection dataset. The
experimental results show that the proposed algorithm achieves high
detection rates (DR) and significantly reduces false positives (FP) for
different types of network intrusions using limited computational
resources.
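As an illustration, the data-cleaning step and the naive Bayesian stage
described above can be sketched as follows (an illustrative sketch only: the
attribute values, the constant-attribute redundancy criterion, and the
omitted decision-tree stage are assumptions, not the paper's exact hybrid):

```python
from collections import Counter, defaultdict

def clean(rows, labels):
    """Drop contradictory examples (identical attribute vectors with
    conflicting labels) and redundant attributes (constant over the data)."""
    seen = defaultdict(set)
    for row, y in zip(rows, labels):
        seen[tuple(row)].add(y)
    kept = [(r, y) for r, y in zip(rows, labels) if len(seen[tuple(r)]) == 1]
    rows, labels = [r for r, _ in kept], [y for _, y in kept]
    redundant = {i for i in range(len(rows[0]))
                 if len({r[i] for r in rows}) == 1}
    rows = [[v for i, v in enumerate(r) if i not in redundant] for r in rows]
    return rows, labels

def train_nb(rows, labels):
    """Fit a categorical naive Bayes model: class priors plus per-attribute
    value counts conditioned on the class."""
    classes = Counter(labels)
    counts = defaultdict(Counter)
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1
    return classes, counts

def predict_nb(model, row):
    """Classify with Laplace-smoothed conditional probabilities."""
    classes, counts = model
    n = sum(classes.values())
    best, best_p = None, -1.0
    for y, cy in classes.items():
        p = cy / n
        for i, v in enumerate(row):
            c = counts[(i, y)]
            p *= (c[v] + 1) / (cy + len(c) + 1)  # Laplace smoothing
        if p > best_p:
            best, best_p = y, p
    return best
```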
|
1005.4592
|
Automated Reasoning and Presentation Support for Formalizing Mathematics
in Mizar
|
cs.AI
|
This paper presents a combination of several automated reasoning and proof
presentation tools with the Mizar system for formalization of mathematics. The
combination forms an online service called MizAR, similar to the SystemOnTPTP
service for first-order automated reasoning. The main differences to
SystemOnTPTP are the use of the Mizar language that is oriented towards human
mathematicians (rather than the pure first-order logic used in SystemOnTPTP),
and setting the service in the context of the large Mizar Mathematical Library
of previous theorems, definitions, and proofs (rather than the isolated
problems that are solved in SystemOnTPTP). These differences pose new
challenges and
new opportunities for automated reasoning and for proof presentation tools.
This paper describes the overall structure of MizAR, and presents the automated
reasoning systems and proof presentation tools that are combined to make MizAR
a useful mathematical service.
|
1005.4695
|
Providing Scalable Data Services in Ubiquitous Networks
|
cs.DB cs.NI
|
Topology is a fundamental part of a network that governs connectivity between
nodes, the amount of data flow and the efficiency of data flow between nodes.
In traditional networks, due to physical limitations, topology remains static
for the course of the network operation. Ubiquitous data networks (UDNs),
alternatively, are more adaptive and can be configured for changes in their
topology. This flexibility in controlling their topology makes them very
appealing and an attractive medium for supporting "anywhere, any place"
communication. However, it raises the problem of designing a dynamic topology.
The dynamic topology design problem is of particular interest to application
service providers who need to provide cost-effective data services on a
ubiquitous network. In this paper we describe algorithms that decide when and
how the topology should be reconfigured in response to a change in the data
communication requirements of the network. In particular, we describe and
compare a greedy algorithm, which is often used for topology reconfiguration,
with a non-greedy algorithm based on metrical task systems. Experiments show
that the algorithm based on metrical task systems achieves performance
comparable to the greedy algorithm at a much lower reconfiguration cost.
|
1005.4697
|
The Lambek-Grishin calculus is NP-complete
|
cs.CL
|
The Lambek-Grishin calculus LG is the symmetric extension of the
non-associative Lambek calculus NL. In this paper we prove that the
derivability problem for LG is NP-complete.
|
1005.4714
|
Defining and Mining Functional Dependencies in Probabilistic Databases
|
cs.DB
|
Functional dependencies -- traditional, approximate, and conditional -- are
of critical importance in relational databases, as they inform us about the
relationships between attributes. They are useful in schema normalization,
data rectification, and source selection. Most of these were, however,
developed in the context of deterministic data. Although uncertain databases
have started receiving attention, these dependencies have not been defined
for them, nor are fast algorithms available to evaluate their confidences.
This paper defines the
logical extensions of various forms of functional dependencies for
probabilistic databases and explores the connections between them. We propose a
pruning-based exact algorithm to evaluate the confidence of functional
dependencies, a Monte-Carlo based algorithm to evaluate the confidence of
approximate functional dependencies and algorithms for their conditional
counterparts in probabilistic databases. Experiments are performed on both
synthetic and real data evaluating the performance of these algorithms in
assessing the confidence of dependencies and mining them from data. We believe
that having these dependencies and algorithms available for probabilistic
databases will drive adoption of probabilistic data storage in the industry.
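The Monte Carlo confidence estimate can be sketched under a simple
tuple-independent model (an assumption of this sketch; the paper's
algorithms, including the pruning-based exact method, are more involved):
sample possible worlds by including each tuple independently with its
probability, then report the fraction of worlds that satisfy the dependency.

```python
import random

def fd_holds(world, lhs, rhs):
    """Check the functional dependency lhs -> rhs in a deterministic
    relation (a list of attribute-name -> value dicts)."""
    seen = {}
    for t in world:
        key = tuple(t[a] for a in lhs)
        val = tuple(t[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

def fd_confidence(tuples, lhs, rhs, n_samples=2000, seed=0):
    """Monte Carlo estimate of P(world satisfies lhs -> rhs) for a
    tuple-independent probabilistic database: each (tuple, prob) pair is
    included in a sampled world independently with its probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        world = [t for t, p in tuples if rng.random() < p]
        if fd_holds(world, lhs, rhs):
            hits += 1
    return hits / n_samples
```

In this toy relation, the dependency A -> B fails only when both conflicting
tuples appear together, so the exact confidence is 1 - 0.5 * 0.5 = 0.75.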
|
1005.4717
|
Smoothing proximal gradient method for general structured sparse
regression
|
stat.ML cs.LG math.OC stat.AP stat.CO
|
We study the problem of estimating high-dimensional regression models
regularized by a structured sparsity-inducing penalty that encodes prior
structural information on either the input or output variables. We consider two
widely adopted types of penalties of this kind as motivating examples: (1) the
general overlapping-group-lasso penalty, generalized from the group-lasso
penalty; and (2) the graph-guided-fused-lasso penalty, generalized from the
fused-lasso penalty. For both types of penalties, due to their nonseparability
and nonsmoothness, developing an efficient optimization method remains a
challenging problem. In this paper we propose a general optimization approach,
the smoothing proximal gradient (SPG) method, which can solve structured sparse
regression problems with any smooth convex loss under a wide spectrum of
structured sparsity-inducing penalties. Our approach combines a smoothing
technique with an effective proximal gradient method. It achieves a
convergence rate significantly faster than standard first-order methods such
as subgradient methods, and it is much more scalable than the widely used
interior-point methods. The efficiency and scalability of our method are
demonstrated on both
simulation experiments and real genetic data sets.
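The smoothing idea can be sketched for a group-lasso penalty: each group
norm is replaced by its Nesterov-smoothed (Huber-like) surrogate, whose
gradient is cheap to compute even when groups overlap. The sketch below uses
plain gradient descent on the smoothed objective, omitting the proximal step
and acceleration of the actual SPG method; the step size and stopping rule
are illustrative assumptions.

```python
import numpy as np

def smoothed_group_grad(beta, groups, mu):
    """Gradient of the Nesterov-smoothed sum of group l2-norms: per group g,
    the smoothed (sub)gradient is beta_g / max(mu, ||beta_g||_2).
    Overlapping groups simply add their contributions."""
    g = np.zeros_like(beta)
    for idx in groups:
        v = beta[idx]
        g[idx] += v / max(mu, np.linalg.norm(v))
    return g

def smoothed_group_lasso(X, y, groups, lam=0.1, mu=1e-2, steps=500):
    """Least squares plus a (possibly overlapping) group-lasso penalty,
    minimized by gradient descent on the mu-smoothed objective. The step
    size uses a conservative Lipschitz bound for the smoothed gradient."""
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam * len(groups) / mu)
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) + lam * smoothed_group_grad(beta, groups, mu)
        beta -= lr * grad
    return beta
```

With an identity design, the solution matches the closed-form group soft
threshold: an active group is shrunk toward zero by lam / ||y_g||, and an
all-zero group stays exactly at zero.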
|
1005.4752
|
A database approach to information retrieval: The remarkable
relationship between language models and region models
|
cs.IR cs.DB
|
In this report, we unify two quite distinct approaches to information
retrieval: region models and language models. Region models were developed for
structured document retrieval. They provide a well-defined behaviour as well as
a simple query language that allows application developers to rapidly develop
applications. Language models are particularly useful to reason about the
ranking of search results, and for developing new ranking approaches. The
unified model allows application developers to define complex language modeling
approaches as logical queries on a textual database. We show a remarkable
one-to-one relationship between region queries and the language models they
represent for a wide variety of applications: simple ad-hoc search,
cross-language retrieval, video retrieval, and web search.
|
1005.4769
|
A Network Coding Approach to Loss Tomography
|
cs.IT cs.NI math.IT
|
Network tomography aims at inferring internal network characteristics based
on measurements at the edge of the network. In loss tomography, in particular,
the characteristic of interest is the loss rate of individual links and
multicast and/or unicast end-to-end probes are typically used. Independently,
recent advances in network coding have shown that there are advantages to
allowing intermediate nodes to process and combine packets, in addition to
just forwarding them. In this paper, we study the problem of loss tomography in
networks with network coding capabilities. We design a framework for estimating
link loss rates, which leverages network coding capabilities, and we show that
it improves several aspects of tomography including the identifiability of
links, the trade-off between estimation accuracy and bandwidth efficiency, and
the complexity of probe path selection. We discuss the cases of inferring link
loss rates in a tree topology and in a general topology. In the latter case,
the benefits of our approach are even more pronounced compared to standard
techniques, but we also face novel challenges, such as dealing with cycles and
multiple paths between sources and receivers. Overall, this work makes the
connection between active network tomography and network coding.
|
1005.4774
|
Fairness in Combinatorial Auctions
|
cs.GT cs.MA
|
The market economy deals with many interacting agents such as buyers and
sellers who are autonomous intelligent agents pursuing their own interests. One
such multi-agent system (MAS) that plays an important role in auctions is the
combinatorial auctioning system (CAS). We use this framework to define our
concept of fairness in terms of what we call "basic fairness" and "extended
fairness". The assumptions of quasilinear preferences and dominant strategies
are taken into consideration while explaining fairness. We give an algorithm to
ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an
algorithm of Sandholm to achieve optimality. Basic and extended fairness are
then analyzed according to the dominant strategy solution concept.
|
1005.4815
|
A Formal Specification of Dynamic Protocols for Open Agent Systems
|
cs.MA
|
Multi-agent systems where the agents are developed by parties with competing
interests, and where there is no access to an agent's internal state, are often
classified as `open'. The member agents of such systems may inadvertently fail
to, or even deliberately choose not to, conform to the system specification.
Consequently, it is necessary to specify the normative relations that may exist
between the agents, such as permission, obligation, and institutional power.
The specification of open agent systems of this sort is largely seen as a
design-time activity. Moreover, there is no support for run-time specification
modification. Due to environmental, social, or other conditions, however, it is
often required to revise the specification during the system execution. To
address this requirement, we present an infrastructure for `dynamic'
specifications, that is, specifications that may be modified at run-time by the
agents. The infrastructure consists of well-defined procedures for proposing a
modification of the `rules of the game', as well as decision-making over and
enactment of proposed modifications. We evaluate proposals for rule
modification by modelling a dynamic specification as a metric space, and by
considering the effects of accepting a proposal on system utility. Furthermore,
we constrain the enactment of proposals that do not meet the evaluation
criteria. We employ the action language C+ to formalise dynamic specifications,
and the `Causal Calculator' implementation of C+ to execute the specifications.
We illustrate our infrastructure by presenting a dynamic specification of a
resource-sharing protocol.
|
1005.4834
|
Spectral Shape of Check-Hybrid GLDPC Codes
|
cs.IT math.IT
|
This paper analyzes the asymptotic exponent of both the weight spectrum and
the stopping set size spectrum for a class of generalized low-density
parity-check (GLDPC) codes. Specifically, all variable nodes (VNs) are assumed
to have the same degree (regular VN set), while the check node (CN) set is
assumed to be composed of a mixture of different linear block codes (hybrid CN
set). A simple expression for the exponent (which is also referred to as the
growth rate or the spectral shape) is developed. This expression is consistent
with previous results, including the case where the normalized weight or
stopping set size tends to zero. Furthermore, it is shown how certain symmetry
properties of the local weight distribution at the CNs induce a symmetry in the
overall weight spectral shape function.
|
1005.4853
|
Analog Matching of Colored Sources to Colored Channels
|
cs.IT math.IT
|
Analog (uncoded) transmission provides a simple and robust scheme for
communicating a Gaussian source over a Gaussian channel under the mean squared
error (MSE) distortion measure. Unfortunately, its performance is usually
inferior to the all-digital, separation-based source-channel coding solution,
which requires exact knowledge of the channel at the encoder. The loss comes
from the fact that except for very special cases, e.g. white source and channel
of matching bandwidth (BW), it is impossible to achieve perfect matching of
source to channel and channel to source by linear means. We show that by
combining prediction and modulo-lattice operations, it is possible to match any
colored Gaussian source to any colored Gaussian noise channel (of possibly
different BW), hence achieve Shannon's optimum attainable performance $R(D)=C$.
Furthermore, when the source and channel BWs are equal (but otherwise their
spectra are arbitrary), this scheme is asymptotically robust in the sense that
for high signal-to-noise ratio a single encoder (independent of the noise
variance) achieves the optimum performance. The derivation is based upon a
recent modulo-lattice modulation scheme for transmitting a Wyner-Ziv source
over a dirty-paper channel.
|
1005.4877
|
Set-Monotonicity Implies Kelly-Strategyproofness
|
cs.MA
|
This paper studies the strategic manipulation of set-valued social choice
functions according to Kelly's preference extension, which prescribes that one
set of alternatives is preferred to another if and only if all elements of the
former are preferred to all elements of the latter. It is shown that
set-monotonicity---a new variant of Maskin-monotonicity---implies
Kelly-strategyproofness in comprehensive subdomains of the linear domain.
Interestingly, there are a handful of appealing Condorcet extensions---such as
the top cycle, the minimal covering set, and the bipartisan set---that satisfy
set-monotonicity even in the unrestricted linear domain, thereby answering
questions raised independently by Barber\`a (1977) and Kelly (1977).
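Kelly's extension, as defined above, compares sets purely element-wise; a
minimal encoding over a linear order (with alternatives mapped to positions,
0 = most preferred) might look like:

```python
def kelly_prefers(A, B, rank):
    """Kelly's preference extension: set A is preferred to set B iff every
    element of A is ranked strictly above every element of B.
    `rank` maps each alternative to its position in a linear order
    (0 = best)."""
    return all(rank[a] < rank[b] for a in A for b in B)
```

Note that most pairs of sets are incomparable under this extension (for
instance, any two sets sharing an element), which is part of what makes
Kelly-strategyproofness a comparatively weak requirement.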
|
1005.4895
|
Parallel QR decomposition in LTE-A systems
|
cs.OH cs.IT math.IT
|
The QR Decomposition (QRD) of communication channel matrices is a fundamental
prerequisite to several detection schemes in Multiple-Input Multiple-Output
(MIMO) communication systems. The main feature of the QRD is to transform
the non-causal system into a causal one, which makes efficient detection
algorithms based on Successive Interference Cancellation (SIC) or the Sphere
Decoder (SD) possible. QRD can also be used as a lightweight but efficient
antenna selection scheme. In this paper, we study QRD methods and compare
their efficiency in terms of computational complexity and error rate
performance. Moreover, particular
attention is paid to the parallelism of the QRD algorithms since it reduces the
latency of the matrix factorization.
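How the QRD enables SIC detection can be sketched in a noiseless toy example
(the channel matrix and QPSK constellation below are illustrative
assumptions): factoring H = QR and rotating the received vector by Q^H
leaves an upper-triangular effective channel, so streams can be detected
from the last one upward, cancelling each sliced symbol in turn.

```python
import numpy as np

def qrd_sic_detect(H, y, constellation):
    """Detect spatially multiplexed streams via QRD + SIC: rotate the
    received vector into the triangular (causal) system z = R s + noise,
    then back-substitute, slicing each estimate to the nearest
    constellation point before cancelling its interference."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    s_hat = np.zeros(n, dtype=complex)
    for k in range(n - 1, -1, -1):
        est = (z[k] - R[k, k + 1:] @ s_hat[k + 1:]) / R[k, k]
        s_hat[k] = min(constellation, key=lambda c: abs(c - est))
    return s_hat
```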
|
1005.4951
|
The diversity-multiplexing tradeoff of the symmetric MIMO half-duplex
relay channel
|
cs.IT math.IT
|
This paper has been withdrawn due to an error in one of the equations in the
extended portion.
|
1005.4963
|
Integrating Structured Metadata with Relational Affinity Propagation
|
cs.AI
|
Structured and semi-structured data describing entities, taxonomies and
ontologies appears in many domains. There is a huge interest in integrating
structured information from multiple sources; however, integrating structured
data to infer complex common structures is a difficult task because the
integration must aggregate similar structures while avoiding structural
inconsistencies that may appear when the data is combined. In this work, we
study the integration of structured social metadata: shallow personal
hierarchies specified by many individual users on the Social Web, and focus
on
inferring a collection of integrated, consistent taxonomies. We frame this task
as an optimization problem with structural constraints. We propose a new
inference algorithm, which we refer to as Relational Affinity Propagation
(RAP), that extends affinity propagation (Frey and Dueck 2007) by introducing
structural constraints. We validate the approach on a real-world social media
dataset collected from the photo-sharing website Flickr. Our empirical results
show that our proposed approach is able to construct deeper and denser
structures compared to an approach using only the standard affinity propagation
algorithm.
|
1005.4985
|
User Scheduling for Cooperative Base Station Transmission Exploiting
Channel Asymmetry
|
cs.IT math.IT
|
We study low-signalling overhead scheduling for downlink coordinated
multi-point (CoMP) transmission with multi-antenna base stations (BSs) and
single-antenna users. By exploiting the asymmetric channel feature, i.e., the
pathloss differences towards different BSs, we derive a metric to judge
orthogonality among users only using their average channel gains, based on
which we propose a semi-orthogonal scheduler that can be applied in a two-stage
transmission strategy. Simulation results demonstrate that the proposed
scheduler performs close to the semi-orthogonal scheduler with full channel
information, especially when each BS has more antennas and the cell-edge
region is large. Compared with other overhead-reduction strategies, the
proposed scheduler requires much less training overhead to achieve the same
cell-average data rate.
|
1005.4989
|
A Formalization of the Turing Test
|
cs.AI
|
The paper offers a mathematical formalization of the Turing test. This
formalization makes it possible to establish the conditions under which some
Turing machine will pass the Turing test and the conditions under which every
Turing machine (or every Turing machine of the special class) will fail the
Turing test.
|
1005.4997
|
Network analysis of a corpus of undeciphered Indus civilization
inscriptions indicates syntactic organization
|
cs.CL physics.data-an physics.soc-ph
|
Archaeological excavations in the sites of the Indus Valley civilization
(2500-1900 BCE) in Pakistan and northwestern India have unearthed a large
number of artifacts with inscriptions made up of hundreds of distinct signs. To
date there is no generally accepted decipherment of these sign sequences and
there have been suggestions that the signs could be non-linguistic. Here we
apply complex network analysis techniques to a database of available Indus
inscriptions, with the aim of detecting patterns indicative of syntactic
organization. Our results show the presence of patterns, e.g., recursive
structures in the segmentation trees of the sequences, that suggest the
existence of a grammar underlying these inscriptions.
|
1005.5035
|
Dynamic Motion Modelling for Legged Robots
|
cs.RO
|
An accurate motion model is an important component in modern-day robotic
systems, but building such a model for a complex system often requires an
appreciable amount of manual effort. In this paper we present a motion model
representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the
need to manually design the form of a motion model, and provides a direct means
of incorporating auxiliary sensory data into the model. This representation and
its accompanying algorithms are validated experimentally using an 8-legged
kinematically complex robot, as well as a standard benchmark dataset. The
presented method not only learns the robot's motion model, but also improves
the model's accuracy by incorporating information about the terrain surrounding
the robot.
|
1005.5054
|
Coordinated transmit and receive processing with adaptive multi-stream
selection
|
cs.IT math.IT
|
In this paper, we propose an adaptive coordinated Tx-Rx beamforming scheme
for inter-user interference cancellation, when a base station (BS) communicates
with multiple users that each has multiple receive antennas. The conventional
coordinated Tx-Rx beamforming scheme transmits a fixed number of data streams
for each user regardless of the instantaneous channel state; that is, all
users, whether their channels are ill-conditioned or well-conditioned, have
the same number of data streams. In the proposed adaptive coordinated Tx-Rx
beamforming scheme, by contrast, we adaptively select the number of streams
per user to overcome this inefficiency of the conventional scheme. As a
result, the BER performance is improved. Simulation results show that the
proposed algorithm outperforms the conventional coordinated Tx-Rx
beamforming algorithm by 2.5 dB at a target BER of 10^-2.
|
1005.5063
|
Keys through ARQ: Theory and Practice
|
cs.IT cs.CR math.IT
|
This paper develops a novel framework for sharing secret keys using the
Automatic Repeat reQuest (ARQ) protocol. We first characterize the underlying
information theoretic limits, under different assumptions on the channel
spatial and temporal correlation function. Our analysis reveals a novel role of
"dumb antennas" in overcoming the negative impact of spatial correlation on the
achievable secrecy rates. We further develop an adaptive rate allocation
policy, which achieves higher secrecy rates in temporally correlated channels,
and explicit constructions for ARQ secrecy coding that enjoy low implementation
complexity. Building on this theoretical foundation, we propose a unified
framework for ARQ-based secrecy in Wi-Fi networks. By exploiting the existing
ARQ mechanism in the IEEE 802.11 standard, we develop security overlays that
offer strong security guarantees at the expense of only minor modifications in
the medium access layer. Our numerical results establish the achievability of
non-zero secrecy rates even when the eavesdropper channel is less noisy, on the
average, than the legitimate channel, while our Linux-based prototype
demonstrates the efficiency of our ARQ overlays in mitigating all known,
passive and active, Wi-Fi attacks at the expense of a minimal increase in the
link setup time and a small loss in throughput.
|
1005.5065
|
Upper-lower bounded-complexity QRD-M for spatial multiplexing MIMO-OFDM
systems
|
cs.IT math.IT
|
Multiple-input multiple-output (MIMO) technology applied with orthogonal
frequency division multiplexing (OFDM) is considered the ultimate solution
to increase channel capacity without any additional spectral resources. At the
receiver side, the challenge resides in designing low complexity detection
algorithms capable of separating independent streams sent simultaneously from
different antennas. In this paper, we introduce an upper-lower
bounded-complexity QRD-M algorithm (ULBC QRD-M). In the proposed algorithm,
we address the extremely high worst-case complexity of conventional sphere
decoding by fixing the upper-bound complexity to that of the conventional
QRD-M. On the other hand, ULBC QRD-M intelligently discards all unnecessary
hypotheses to achieve very low computational requirements. Analyses and
simulation results show that the proposed algorithm achieves the performance of
conventional QRD-M with only 26% of the required computations.
|
1005.5114
|
Growing a Tree in the Forest: Constructing Folksonomies by Integrating
Structured Metadata
|
cs.AI
|
Many social Web sites allow users to annotate the content with descriptive
metadata, such as tags, and more recently to organize content hierarchically.
These types of structured metadata provide valuable evidence for learning how a
community organizes knowledge. For instance, we can aggregate many personal
hierarchies into a common taxonomy, also known as a folksonomy, that will aid
users in visualizing and browsing social content, and also to help them in
organizing their own content. However, learning from social metadata presents
several challenges, since it is sparse, shallow, ambiguous, noisy, and
inconsistent. We describe an approach to folksonomy learning based on
relational clustering, which exploits structured metadata contained in personal
hierarchies. Our approach clusters similar hierarchies using their structure
and tag statistics, then incrementally weaves them into a deeper, bushier tree.
We study folksonomy learning using social metadata extracted from the
photo-sharing site Flickr, and demonstrate that the proposed approach addresses
the challenges. Moreover, compared to previous work, the approach produces
larger, more accurate folksonomies and, in addition, scales better.
|
1005.5115
|
Improving GPS/INS Integration through Neural Networks
|
cs.NE
|
Global Positioning System (GPS) and Inertial Navigation System (INS)
technology has attracted considerable attention recently because of the
large number of solutions it offers for both military and civilian
applications. This paper aims to develop a more efficient and, especially, a
faster method for processing the GPS signal in case of INS signal loss,
without losing data accuracy. The conventional method consists of processing
data through a neural network to obtain accurate positioning output data.
The improved method adds selective filtering at the low-band, mid-band, and
high-band frequencies before processing the GPS data through the neural
network, so that the processing time decreases significantly while the
accuracy remains the same.
|
1005.5124
|
Proofs, proofs, proofs, and proofs
|
cs.AI cs.LO math.HO
|
In logic there is a clear concept of what constitutes a proof and what not. A
proof is essentially defined as a finite sequence of formulae which are either
axioms or derived by proof rules from formulae earlier in the sequence.
Sociologically, however, it is more difficult to say what should constitute a
proof and what not. In this paper we will look at different forms of proofs and
try to clarify the concept of proof in the wider meaning of the term. This
has implications for how proofs should be represented formally.
|
1005.5141
|
On Recursive Edit Distance Kernels with Application to Time Series
Classification
|
cs.LG cs.IR
|
This paper proposes some extensions to work on kernels dedicated to string
or time series global alignment based on the aggregation of scores obtained
by local alignments. The extensions we propose allow one to construct, from
the classical recursive definition of elastic distances, recursive edit
distance (or time-warp) kernels that are positive definite if some
sufficient conditions are satisfied. The sufficient conditions we end up
with are original and weaker than those proposed in earlier works, although
a recursive regularizing term is required to obtain the proof of positive
definiteness as a direct consequence of Haussler's convolution theorem. The
classification experiment we conducted on three classical time-warp
distances (two of which are metrics), using a Support Vector Machine
classifier, leads to the conclusion that, when the pairwise distance matrix
obtained from the training data is \textit{far} from definiteness, the
positive definite recursive elastic kernels generally outperform the
distance-substituting kernels for the classical elastic distances we tested.
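As a toy illustration of this style of recursive construction (this follows
the generic global-alignment recursion in the spirit of Cuturi's
global-alignment kernel, not the paper's exact regularized kernels), a
dynamic program can sum Gaussian local-match scores over all monotone
alignments of two sequences:

```python
import math

def ga_kernel(x, y, sigma=1.0):
    """Global-alignment-style kernel: aggregate local Gaussian match scores
    over every monotone alignment of x and y via dynamic programming.
    Each cell sums the three predecessor alignments (insert, match, delete),
    weighted by the local similarity exp(-d^2 / (2 sigma^2))."""
    n, m = len(x), len(y)
    K = [[0.0] * (m + 1) for _ in range(n + 1)]
    K[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = math.exp(-((x[i - 1] - y[j - 1]) ** 2) / (2 * sigma ** 2))
            K[i][j] = local * (K[i - 1][j] + K[i - 1][j - 1] + K[i][j - 1])
    return K[n][m]
```

Similar sequences accumulate high-weight alignment paths, so they receive a
larger kernel value than dissimilar ones.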
|
1005.5170
|
Wirtinger's Calculus in general Hilbert Spaces
|
cs.LG math.CV
|
The present report has been inspired by the need of the author and his
colleagues to understand the underlying theory of Wirtinger's calculus and
to extend it further to include the kernel case. The aim of the present
manuscript is twofold: a) it endeavors to provide a more rigorous
presentation of the related material, focusing on aspects that the author
finds more insightful, and b) it extends the notions of Wirtinger's calculus
to general Hilbert spaces (such as Reproducing Kernel Hilbert Spaces).
|
1005.5181
|
Compression Rate Method for Empirical Science and Application to
Computer Vision
|
cs.CV
|
This philosophical paper proposes a modified version of the scientific
method, in which large databases are used instead of experimental observations
as the necessary empirical ingredient. This change in the source of the
empirical data allows the scientific method to be applied to several aspects of
physical reality that previously resisted systematic interrogation. Under the
new method, scientific theories are compared by instantiating them as
compression programs, and examining the codelengths they achieve on a database
of measurements related to a phenomenon of interest. Because of the
impossibility of compressing random data, "real world" data can only be
compressed by discovering and exploiting the empirical structure it exhibits.
The method also provides a new way of thinking about two longstanding issues in
the philosophy of science: the problem of induction and the problem of
demarcation.
The second part of the paper proposes to reformulate computer vision as an
empirical science of visual reality, by applying the new method to large
databases of natural images. The immediate goal of the proposed reformulation
is to repair the chronic difficulties in evaluation experienced by the field of
computer vision. The reformulation should bring a wide range of benefits,
including a substantially increased degree of methodological rigor, the ability
to justify complex theories without overfitting, a scalable evaluation
paradigm, and the potential to make systematic progress. A crucial argument is
that the change is not especially drastic, because most computer vision tasks
can be reformulated as specialized image compression techniques. Finally, a
concrete proposal is discussed in which a database is produced by recording
from a roadside video camera, and compression is achieved by developing a
computational understanding of the appearance of moving cars.
|
1005.5197
|
Ranked bandits in metric spaces: learning optimally diverse rankings
over large document collections
|
cs.LG cs.DS
|
Most learning to rank research has assumed that the utility of different
documents is independent, which results in learned ranking functions that
return redundant results. The few approaches that avoid this either lack
theoretical foundations or do not scale. We present a
learning-to-rank formulation that optimizes the fraction of satisfied users,
with several scalable algorithms that explicitly take document similarity and
ranking context into account. Our formulation is a non-trivial common
generalization of two multi-armed bandit models from the literature: "ranked
bandits" (Radlinski et al., ICML 2008) and "Lipschitz bandits" (Kleinberg et
al., STOC 2008). We present theoretical justifications for this approach, as
well as a near-optimal algorithm. Our evaluation adds optimizations that
improve empirical performance, and shows that our algorithms learn orders of
magnitude more quickly than previous approaches.
|
1005.5253
|
Using Soft Constraints To Learn Semantic Models Of Descriptions Of
Shapes
|
cs.CL cs.AI cs.HC cs.LG
|
The contribution of this paper is to provide a semantic model (using soft
constraints) of the words used by web-users to describe objects in a language
game; a game in which one user describes a selected object of those composing
the scene, and another user has to guess which object has been described. The
given description needs to be non ambiguous and accurate enough to allow other
users to guess the described shape correctly.
To build these semantic models the descriptions need to be analyzed to
extract the syntax and words' classes used. We have modeled the meaning of
these descriptions using soft constraints as a way for grounding the meaning.
The descriptions generated by the system took into account the context of the
object to avoid ambiguous descriptions, and allowed users to guess the
described object correctly 72% of the time.
|
1005.5268
|
An Empirical Study of the Manipulability of Single Transferable Voting
|
cs.AI cs.GT cs.MA
|
Voting is a simple mechanism to combine together the preferences of multiple
agents. Agents may try to manipulate the result of voting by mis-reporting
their preferences. One barrier that might exist to such manipulation is
computational complexity. In particular, it has been shown that it is NP-hard
to compute how to manipulate a number of different voting rules. However,
NP-hardness only bounds the worst-case complexity. Recent theoretical results
suggest that manipulation may often be easy in practice. In this paper, we
study empirically the manipulability of single transferable voting (STV) to
determine if computational complexity is really a barrier to manipulation. STV
was one of the first voting rules shown to be NP-hard. It also appears to be
one of the harder voting rules to manipulate. We sample a number of
distributions of
votes including uniform and real world elections. In almost every election in
our experiments, it was easy to compute how a single agent could manipulate the
election or to prove that manipulation by a single agent was impossible.
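The single-agent question studied here can be brute-forced on small elections: try every preference order the manipulator could report and run STV on each. A minimal sketch under illustrative assumptions (candidate names, the fixed-order tie-breaking rule, and the elimination-only STV variant are ours, not the paper's experimental setup):

```python
from itertools import permutations
from collections import Counter

def stv_winner(profile, candidates):
    """Single-winner STV: repeatedly eliminate the candidate with the
    fewest first-place votes (ties broken by fixed candidate order)."""
    remaining = set(candidates)
    while len(remaining) > 1:
        tally = Counter({c: 0 for c in remaining})
        for pref in profile:
            for c in pref:
                if c in remaining:
                    tally[c] += 1
                    break
        loser = min(remaining, key=lambda c: (tally[c], candidates.index(c)))
        remaining.discard(loser)
    return remaining.pop()

def manipulation(fixed_profile, candidates, target):
    """Return a vote that makes `target` win, or None if a single agent
    cannot manipulate -- brute force over all n! possible reports."""
    for vote in permutations(candidates):
        if stv_winner(fixed_profile + [list(vote)], candidates) == target:
            return vote
    return None
```

The brute force is exponential in the number of candidates, which is exactly why empirical tractability on sampled elections is an interesting finding.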
|
1005.5270
|
Symmetries of Symmetry Breaking Constraints
|
cs.AI
|
Symmetry is an important feature of many constraint programs. We show that
any problem symmetry acting on a set of symmetry breaking constraints can be
used to break symmetry. Different symmetries pick out different solutions in
each symmetry class. This simple but powerful idea can be used in a number of
different ways. We describe one application within model restarts, a search
technique designed to reduce the conflict between symmetry breaking and the
branching heuristic. In model restarts, we restart search periodically with a
random symmetry of the symmetry breaking constraints. Experimental results show
that this symmetry breaking technique is effective in practice on some standard
benchmark problems.
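The core claim, that a problem symmetry applied to a set of symmetry-breaking constraints still breaks symmetry but selects a different representative from each symmetry class, can be illustrated on a toy problem (our construction, not the paper's benchmarks): binary 3-tuples under all position permutations, with lex-leader symmetry breaking.

```python
from itertools import permutations, product

group = list(permutations(range(3)))  # all position permutations of a 3-tuple

def act(g, t):
    """Apply the position permutation g to the tuple t."""
    return tuple(t[g[i]] for i in range(len(t)))

def orbit(t):
    return {act(g, t) for g in group}

def satisfies(t, g):
    """t satisfies the lex-leader symmetry-breaking constraints transformed
    by the symmetry g: g(t) must be the lex-smallest element of t's orbit."""
    return act(g, t) == min(orbit(t))

orbits = {min(orbit(t)) for t in product((0, 1), repeat=3)}  # 4 classes

for g in group:
    sols = [t for t in product((0, 1), repeat=3) if satisfies(t, g)]
    # every transformed constraint set keeps exactly one tuple per orbit
    assert len(sols) == len(orbits)
```

Each symmetry g yields a constraint set keeping exactly one solution per class, and different choices of g keep different representatives, which is what model restarts exploit by resampling g.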
|
1005.5271
|
A Restful Approach for Managing Citizen profiles Using A Semantic
Support
|
cs.IR
|
Several steps are missing in the current high-speed race towards holistic
support of citizen needs in the domain of eGovernment. This paper focuses on
how to provide support for the citizen profile. This profile, in a wide sense,
includes personal information as well as documents in the citizen's
possession. It also involves providing the mechanisms required to publish,
access and submit the relevant information to a Public Administration in the
course of a transactional service provided by the latter. The main features of
the system relate to interoperability and the possibility of including it in a
cost-effective manner in already developed platforms. To make that possible,
this approach takes full advantage of semantic technologies and the RESTful
paradigm to design the entire system. The paper presents the overall system
with some notes on the deployment of the solution for its further reuse in
similar contexts.
|
1005.5337
|
Using a Kernel Adatron for Object Classification with RCS Data
|
cs.LG stat.ML
|
Rapid identification of objects from radar cross section (RCS) signals is
important for many space and military applications. This identification is a
pattern recognition problem for which either neural networks or support vector
machines should prove to be fast. Bayesian networks would also provide value
but require significant preprocessing of the signals. In this paper, we
describe the use of a support vector machine for object identification from
synthesized RCS data. Our best results are from data fusion of X-band and
S-band signals, where we obtained 99.4%, 95.3%, 100% and 95.6% correct
identification for cylinders, frusta, spheres, and polygons, respectively. We
also compare our results with a Bayesian approach and show that the SVM is
three orders of magnitude faster, as measured by the number of floating point
operations.
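The kernel adatron of the title (a simple gradient-ascent scheme for the SVM dual, due to Friess, Cristianini and Campbell) can be sketched in a few lines. The RBF kernel, learning rate, and toy data below are illustrative assumptions, not the paper's RCS setup:

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two feature tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_adatron(X, y, kernel=rbf, eta=0.5, epochs=100):
    """Kernel adatron: coordinate-wise gradient ascent on the SVM dual
    multipliers, clipped at zero to keep them feasible."""
    n = len(X)
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * sum(alpha[j] * y[j] * kernel(X[i], X[j])
                                for j in range(n))
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))

    def predict(x):
        score = sum(alpha[j] * y[j] * kernel(x, X[j]) for j in range(n))
        return 1 if score >= 0 else -1
    return predict
```

The per-step cost is a handful of multiply-adds per training point, which is consistent with the large speed advantage over Bayesian inference reported above.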
|
1005.5348
|
Error Analysis of Approximated PCRLBs for Nonlinear Dynamics
|
stat.AP cs.IT math.IT math.NA
|
In practical nonlinear filtering, the assessment of achievable filtering
performance is important. In this paper, we focus on the problem of efficiently
approximating the posterior Cramer-Rao lower bound (CRLB) in a recursive
manner. Under Gaussian assumptions, two types of approximations for
calculating the CRLB are derived: an exact model using the state estimate, and
a Taylor-series-expanded model using both the state estimate and its error
covariance. Moreover, the difference between the two approximated CRLBs is
formulated analytically. By employing the particle filter (PF) and the
unscented Kalman filter (UKF) for computation, simulation results reveal that
the approximated CRLB using the mean-covariance-based model outperforms the
one using the mean-based exact model. It is also shown that the theoretical
difference between the estimated CRLBs can be reduced through an improved
filtering method.
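For reference, the recursive posterior CRLB that such approximations target is the standard Fisher-information recursion of Tichavsky, Muravchik and Nehorai (notation follows that paper, not necessarily this one):

```latex
J_{k+1} = D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12},
\qquad
\mathrm{CRLB}_k = J_k^{-1},
```

where, for state transition density $p(x_{k+1} \mid x_k)$ and measurement likelihood $p(z_{k+1} \mid x_{k+1})$,

```latex
D_k^{11} = \mathbb{E}\!\left[-\Delta_{x_k}^{x_k} \log p(x_{k+1} \mid x_k)\right],
\qquad
D_k^{12} = \left(D_k^{21}\right)^{\top}
         = \mathbb{E}\!\left[-\Delta_{x_k}^{x_{k+1}} \log p(x_{k+1} \mid x_k)\right],
```
```latex
D_k^{22} = \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}} \log p(x_{k+1} \mid x_k)\right]
         + \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}} \log p(z_{k+1} \mid x_{k+1})\right].
```

The expectations above are what is hard to evaluate for nonlinear dynamics, which is where the mean-based and mean-covariance-based Gaussian approximations come in.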
|
1005.5361
|
VHDL Implementation of different Turbo Encoder using Log-MAP Decoder
|
cs.IT math.IT
|
Turbo codes are a great achievement in the field of communication systems. A
turbo codec is created by connecting a turbo encoder and a decoder serially. A
turbo encoder is built with a parallel concatenation of two simple
convolutional codes. By varying the number of memory elements (encoder
configuration), the code rate (1/2 or 1/3), the block size of the data and the
number of iterations, we can achieve better BER performance. A turbo code also
contains an interleaver unit, and its BER performance also depends on the
interleaver size. A turbo decoder can be implemented using different
algorithms, but the Log-MAP decoding algorithm is less computationally complex
than the MAP (maximum a posteriori) algorithm, without compromising its BER
performance, which is near the Shannon limit. A register transfer level (RTL)
turbo encoder is designed and simulated using VHDL (Very High Speed Integrated
Circuit Hardware Description Language). In this paper, VHDL models of
different turbo encoders are implemented using a Log-MAP decoder, and their
performance is compared and verified against corresponding MATLAB simulation
results.
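The computational saving of Log-MAP over MAP comes from the Jacobian logarithm (max-star): log-domain sums of exponentials are replaced by a max plus a small bounded correction term, which Max-Log-MAP drops entirely. A minimal sketch of the operation (illustrative, not tied to the paper's RTL design):

```python
import math

def max_star(a, b):
    """Jacobian logarithm: exact log(exp(a) + exp(b)), computed as a max
    plus a bounded correction term (a small lookup table in hardware)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drop the correction term entirely."""
    return max(a, b)
```

Since the correction term lies in (0, log 2], it is cheap to tabulate, which is why Log-MAP keeps near-MAP BER performance at far lower complexity.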
|