| id | title | categories | abstract |
|---|---|---|---|
0904.1193
|
Coherence Analysis of Iterative Thresholding Algorithms
|
cs.IT math.IT
|
There is a recent surge of interest in developing algorithms for finding
sparse solutions of underdetermined systems of linear equations $y = \Phi x$.
In many applications, extremely large problem sizes are envisioned, with at
least tens of thousands of equations and hundreds of thousands of unknowns. For
such problem sizes, low computational complexity is paramount. The best studied
$\ell_1$ minimization algorithm is not fast enough to fulfill this need.
Iterative thresholding algorithms have been proposed to address this problem.
In this paper we want to analyze two of these algorithms theoretically, and
give sufficient conditions under which they recover the sparsest solution.
|
0904.1227
|
Learning convex bodies is hard
|
cs.LG cs.CG
|
We show that learning a convex body in $\mathbb{R}^d$, given random samples from the
body, requires $2^{\Omega(\sqrt{d/\epsilon})}$ samples. By learning a convex body
we mean finding a set having at most $\epsilon$ relative symmetric difference with
the input body. To prove the lower bound we construct a hard-to-learn family of
convex bodies. Our construction of this family is very simple and based on
error-correcting codes.
|
0904.1229
|
Finding an Unknown Acyclic Orientation of a Given Graph
|
math.CO cs.IT math.IT
|
Let c(G) be the smallest number of edges we have to test in order to
determine an unknown acyclic orientation of the given graph G in the worst
case. For example, if G is the complete graph on n vertices, then c(G) is the
smallest number of comparisons needed to sort n numbers.
We prove that $c(G) \le (1/4+o(1))n^2$ for any graph $G$ on $n$ vertices, answering
in the affirmative a question of Aigner, Triesch, and Tuza [Discrete
Mathematics, 144 (1995) 3-10]. Also, we show that, for every $\epsilon>0$, it is NP-hard
to approximate the parameter $c(G)$ within a multiplicative factor $74/73-\epsilon$.
|
0904.1234
|
Mapping the evolution of scientific fields
|
physics.soc-ph cs.DL cs.IR
|
Despite the apparent cross-disciplinary interactions among scientific fields,
a formal description of their evolution is lacking. Here we describe a novel
approach to study the dynamics and evolution of scientific fields using a
network-based analysis. We build an idea network consisting of American
Physical Society Physics and Astronomy Classification Scheme (PACS) numbers as
nodes representing scientific concepts. Two PACS numbers are linked if there
exist publications that reference them simultaneously. We locate scientific
fields using a community finding algorithm, and describe the time evolution of
these fields over the course of 1985-2006. The communities we identify map to
known scientific fields, and their age depends on their size and activity. We
expect our approach to quantifying the evolution of ideas to be relevant for
making predictions about the future of science and thus help to guide its
development.
|
0904.1258
|
An Investigation Report on Auction Mechanism Design
|
cs.AI cs.MA
|
Auctions are markets with strict regulations governing the information
available to traders in the market and the possible actions they can take.
Since well designed auctions achieve desirable economic outcomes, they have
been widely used in solving real-world optimization problems, and in
structuring stock or futures exchanges. Auctions also provide a very valuable
testing-ground for economic theory, and they play an important role in
computer-based control systems.
Auction mechanism design aims to manipulate the rules of an auction in order
to achieve specific goals. Economists traditionally use mathematical methods,
mainly game theory, to analyze auctions and design new auction forms. However,
due to the high complexity of auctions, the mathematical models are typically
simplified to obtain results, and this makes it difficult to apply results
derived from such models to market environments in the real world. As a result,
researchers are turning to empirical approaches.
This report aims to survey the theoretical and empirical approaches to
designing auction mechanisms and trading strategies, with more weight on the
empirical ones, and to build a foundation for further research in the field.
|
0904.1281
|
Asymptotically Optimal Joint Source-Channel Coding with Minimal Delay
|
cs.IT math.IT
|
We present and analyze a joint source-channel coding strategy for the
transmission of a Gaussian source across a Gaussian channel in n channel uses
per source symbol. Among all such strategies, our scheme has the following
properties: i) the resulting mean-squared error scales optimally with the
signal-to-noise ratio, and ii) the scheme is easy to implement and the incurred
delay is minimal, in the sense that a single source symbol is encoded at a
time.
|
0904.1289
|
Language Diversity across the Consonant Inventories: A Study in the
Framework of Complex Networks
|
cs.CL physics.comp-ph physics.soc-ph
|
In this paper, we attempt to explain the emergence of the linguistic diversity
that exists across the consonant inventories of some of the major language
families of the world through a complex network based growth model. The model
has only a single parameter, which is meant to introduce a small amount of
randomness into the otherwise preferential attachment based growth process. The
experiments with this model parameter indicate that the choice of consonants
among the languages within a family is far more preferential than it is across
the families. The implications of this result are twofold -- (a) there is an
innate preference of the speakers towards acquiring certain linguistic
structures over others, and (b) shared ancestry propels a stronger preferential
connection between the languages within a family than across them. Furthermore,
our observations indicate that this parameter might bear a correlation with the
period of existence of the language families under investigation.
|
0904.1299
|
On the Communication of Scientific Results: The Full-Metadata Format
|
cs.DL cs.IR physics.comp-ph physics.ins-det
|
In this paper, we introduce a scientific format for text-based data files,
which facilitates storing and communicating tabular data sets. The so-called
Full-Metadata Format builds on the widely used INI-standard and is based on
four principles: readable self-documentation, flexible structure, fail-safe
compatibility, and searchability. As a consequence, all metadata required to
interpret the tabular data are stored in the same file, allowing for the
automated generation of publication-ready tables and graphs and the semantic
searchability of data file collections. The Full-Metadata Format is introduced
on the basis of three comprehensive examples. The complete format and syntax is
given in the appendix.
|
0904.1313
|
A Class of Novel STAP Algorithms Using Sparse Recovery Technique
|
cs.IT math.IT
|
A class of novel STAP algorithms based on sparse recovery techniques is
presented. The intrinsic sparsity of the distribution of clutter and target
energy on the spatial-frequency plane is exploited from the viewpoint of
compressed sensing. The original sample data and the distribution of target and
clutter energy are connected by an ill-posed linear algebraic equation, and the
popular $L_1$ optimization method can be used to search for its sparse
solution. Several new filtering algorithms acting on this solution are designed
to effectively remove the clutter component on the spatial-frequency plane so
that invisible targets buried in clutter can be detected. We call this family
of methods CS-STAP. CS-STAP has two advantages over conventional STAP
techniques such as SMI. Firstly, the resolution of CS-STAP in estimating the
distribution of clutter and target energy is so high that clutter energy can be
annihilated almost completely by a carefully tuned filter; the output SCR of
CS-STAP algorithms far exceeds the requirement for detection. Secondly,
CS-STAP requires a much smaller training sample support than the SMI method:
even with only one snapshot (from the target range cell), CS-STAP is able to
reveal the existence of a target clearly. CS-STAP thus displays great potential
for use in heterogeneous situations. Experimental results on a dataset from the
Mountaintop program provide evidence for our assertions about CS-STAP.
|
0904.1331
|
Primitive Polynomials, Singer Cycles, and Word-Oriented Linear Feedback
Shift Registers
|
math.CO cs.IT math.IT
|
Using the structure of Singer cycles in general linear groups, we prove that
a conjecture of Zeng, Han and He (2007) holds in the affirmative in a special
case, and outline a plausible approach to prove it in the general case. This
conjecture is about the number of primitive $\sigma$-LFSRs of a given order
over a finite field, and it generalizes a known formula for the number of
primitive LFSRs, which, in turn, is the number of primitive polynomials of a
given degree over a finite field. Moreover, this conjecture is intimately
related to an open question of Niederreiter (1995) on the enumeration of
splitting subspaces of a given dimension.
|
0904.1366
|
A Unified Approach to Ranking in Probabilistic Databases
|
cs.DB cs.DS
|
The dramatic growth in the number of application domains that naturally
generate probabilistic, uncertain data has resulted in a need for efficiently
supporting complex querying and decision-making over such data. In this paper,
we present a unified approach to ranking and top-k query processing in
probabilistic databases by viewing it as a multi-criteria optimization problem,
and by deriving a set of features that capture the key properties of a
probabilistic dataset that dictate the ranked result. We contend that a single,
specific ranking function may not suffice for probabilistic databases, and we
instead propose two parameterized ranking functions, called PRF-w and PRF-e,
that generalize or can approximate many of the previously proposed ranking
functions. We present novel generating functions-based algorithms for
efficiently ranking large datasets according to these ranking functions, even
if the datasets exhibit complex correlations modeled using probabilistic
and/xor trees or Markov networks. We further propose that the parameters of the
ranking function be learned from user preferences, and we develop an approach
to learn those parameters. Finally, we present a comprehensive experimental
study that illustrates the effectiveness of our parameterized ranking
functions, especially PRF-e, at approximating other ranking functions and the
scalability of our proposed algorithms for exact or approximate ranking.
|
0904.1369
|
Cooperative Transmission for Wireless Relay Networks Using Limited
Feedback
|
cs.IT math.IT
|
To achieve the available performance gains in half-duplex wireless relay
networks, several cooperative schemes have been earlier proposed using either
distributed space-time coding or distributed beamforming for the transmitter
without and with channel state information (CSI), respectively. However, these
schemes typically have rather high implementation and/or decoding complexities,
especially when the number of relays is high. In this paper, we propose a
simple low-rate feedback-based approach to achieve maximum diversity with a low
decoding and implementation complexity. To further improve the performance of
the proposed scheme, the knowledge of the second-order channel statistics is
exploited to design long-term power loading through maximizing the receiver
signal-to-noise ratio (SNR) with appropriate constraints. This maximization
problem is approximated by a convex feasibility problem whose solution is shown
to be close to the optimal one in terms of the error probability. Subsequently,
to provide robustness against feedback errors and further decrease the feedback
rate, an extended version of the distributed Alamouti code is proposed. It is
also shown that our scheme can be generalized to the differential transmission
case, where it can be applied to wireless relay networks with no CSI available
at the receiver.
|
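The extended distributed Alamouti code mentioned in the abstract above builds on the classical two-antenna Alamouti scheme. As a hedged illustration of the underlying orthogonality (this is the standard Alamouti code with linear combining, not the paper's distributed extension; channel and symbol values are arbitrary), consider:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # two complex channel gains
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)               # two QPSK symbols

# Alamouti code matrix: rows are time slots, columns are antennas/relays
X = np.array([[s[0], s[1]],
              [-np.conj(s[1]), np.conj(s[0])]])
y = X @ h                                                   # noiseless received signal

# linear combining separates the symbols, scaled by |h1|^2 + |h2|^2
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s0_hat = (np.conj(h[0]) * y[0] + h[1] * np.conj(y[1])) / g
s1_hat = (np.conj(h[1]) * y[0] - h[0] * np.conj(y[1])) / g
```

The cross terms cancel exactly because the code matrix has orthogonal columns, which is why Alamouti decoding stays symbol-by-symbol and low-complexity.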
0904.1409
|
MIMO Downlink Scheduling with Non-Perfect Channel State Knowledge
|
cs.IT math.IT
|
Downlink scheduling schemes are well-known and widely investigated under the
assumption that the channel state is perfectly known to the scheduler. In the
multiuser MIMO (broadcast) case, downlink scheduling in the presence of
non-perfect channel state information (CSI) is only scantly treated. In this
paper we provide a general framework that addresses the problem systematically.
Also, we illuminate the key role played by the channel state prediction error:
our scheme treats in a fundamentally different way users with small channel
prediction error ("predictable" users) and users with large channel prediction
error ("non-predictable" users), and can be interpreted as a near-optimal
opportunistic time-sharing strategy between MIMO downlink beamforming to
predictable users and space-time coding to non-predictable users. Our results,
based on a realistic MIMO channel model used in 3GPP standardization, show that
the proposed algorithms can significantly outperform a conventional
"mismatched" scheduling scheme that treats the available CSI as if it were
perfect.
|
0904.1444
|
Spatial and Temporal Correlation of the Interference in ALOHA Ad Hoc
Networks
|
cs.IT cs.NI math.IT math.PR
|
Interference is a main limiting factor of the performance of a wireless ad
hoc network. The temporal and the spatial correlation of the interference makes
the outages correlated temporally (important for retransmissions) and spatially
correlated (important for routing). In this letter we quantify the temporal and
spatial correlation of the interference in a wireless ad hoc network whose
nodes are distributed as a Poisson point process on the plane when ALOHA is
used as the multiple-access scheme.
|
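A toy Monte Carlo sketch (my own illustration, not the letter's analysis) makes the temporal correlation concrete: with i.i.d. ALOHA decisions and no fading, the shared node positions alone make the slot-to-slot correlation coefficient of the interference equal to the transmit probability p. The bounded path-loss law and all numerical values below are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, alpha, R = 1.0, 0.5, 4.0, 10.0     # density, ALOHA prob., path-loss exp., disc radius
n_real = 10000
I1 = np.empty(n_real)
I2 = np.empty(n_real)
for i in range(n_real):
    n = rng.poisson(lam * np.pi * R ** 2)  # Poisson number of nodes in the disc
    r = R * np.sqrt(rng.random(n))         # radii of uniformly placed nodes
    ell = 1.0 / (1.0 + r ** alpha)         # bounded path loss (keeps variance finite)
    b1 = rng.random(n) < p                 # ALOHA transmit decisions, slot 1
    b2 = rng.random(n) < p                 # slot 2, independent given positions
    I1[i] = ell[b1].sum()                  # interference at the origin, slot 1
    I2[i] = ell[b2].sum()                  # interference at the origin, slot 2
corr = np.corrcoef(I1, I2)[0, 1]           # empirical temporal correlation, approx. p
```

Campbell's theorem gives Cov(I1, I2) = lam * p^2 * integral(l^2) and Var(I) = lam * p * integral(l^2), so the ratio is exactly p in this no-fading setup.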
0904.1446
|
Concavity of entropy under thinning
|
cs.IT math.IT
|
Building on the recent work of Johnson (2007) and Yu (2008), we prove that
entropy is a concave function with respect to the thinning operation $T_a$. That
is, if $X$ and $Y$ are independent random variables on $\mathbb{Z}_+$ with
ultra-log-concave probability mass functions, then $H(T_a X+T_{1-a} Y) \ge
a H(X)+(1-a)H(Y)$, $0 \le a \le 1$, where $H$ denotes the discrete entropy.
This is a discrete analogue of the inequality ($h$ denotes the differential
entropy) $h(\sqrt{a} X + \sqrt{1-a} Y) \ge a h(X)+(1-a) h(Y)$, $0 \le a \le 1$,
which holds for continuous $X$ and $Y$ with finite variances and is equivalent
to Shannon's entropy power inequality. As a consequence we establish a special
case of a conjecture of Shepp and Olkin (1981).
|
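The inequality above can be checked numerically for a concrete ultra-log-concave case. Poisson variables are ultra-log-concave, thinning a Poisson(lambda) variable gives Poisson(a*lambda), and independent Poissons add, so both sides are entropies of explicit Poisson laws (the parameter values are illustrative):

```python
import numpy as np
from scipy.stats import poisson

def H(dist, kmax=200):
    """Discrete entropy (in nats) of a distribution on {0, 1, ..., kmax-1}."""
    p = dist.pmf(np.arange(kmax))
    p = p[p > 0]
    return -(p * np.log(p)).sum()

a, lamX, lamY = 0.3, 2.0, 5.0
# T_a X + T_{1-a} Y for independent Poissons is Poisson(a*lamX + (1-a)*lamY)
lhs = H(poisson(a * lamX + (1 - a) * lamY))
rhs = a * H(poisson(lamX)) + (1 - a) * H(poisson(lamY))
# concavity of entropy under thinning predicts lhs >= rhs
```

Truncation at kmax = 200 is harmless here since the Poisson tails beyond that point carry negligible mass.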
0904.1538
|
Shannon-Kotel'nikov Mappings for Analog Point-to-Point Communications
|
cs.IT math.IT
|
In this paper an approach to joint source-channel coding (JSCC) named
Shannon-Kotel'nikov mappings (S-K mappings) is presented. S-K mappings are
continuous, or piecewise continuous, direct source-to-channel mappings
operating directly on amplitude-continuous, discrete-time signals. Such
mappings include several existing JSCC schemes as special cases. Many existing
approaches to analog or hybrid discrete-analog JSCC provide both excellent
performance and robustness to variations in noise level, at low delay and
relatively low complexity. However, a theory explaining their performance and
behaviour on a general basis, as well as guidelines on how to construct
close-to-optimal mappings in general, does not currently exist. Therefore, such
mappings are often found through educated guesses inspired by configurations
known in advance to produce good solutions, combinations of existing mappings,
numerical optimization, or machine learning methods. The objective of this
paper is to introduce a theoretical framework for the analysis of analog or
hybrid discrete-analog S-K mappings. This framework enables the calculation of
distortion when applying such schemes on point-to-point links, reveals more
about their fundamental nature, and provides guidelines on how they should be
constructed in order to perform well at both low and arbitrary complexity and
delay. Such guidelines will likely help constrain solutions of numerical
approaches and help explain why machine learning approaches find the solutions
they do. This task is difficult, and we do not provide a complete framework at
this stage: we focus on high SNR and memoryless sources with an arbitrary
continuous unimodal density function, and on memoryless Gaussian channels. We
also provide examples of mappings based on surfaces that are chosen based on
the provided theory.
|
0904.1579
|
Online prediction of ovarian cancer
|
cs.AI cs.LG
|
In this paper we apply computer learning methods to diagnosing ovarian cancer
using the level of the standard biomarker CA125 in conjunction with information
provided by mass-spectrometry. We are working with a new data set collected
over a period of 7 years. Using the level of CA125 and mass-spectrometry peaks,
our algorithm gives probability predictions for the disease. To estimate
classification accuracy we convert probability predictions into strict
predictions. Our algorithm makes fewer errors than almost any linear
combination of the CA125 level and one peak's intensity (taken on the log
scale). To check the power of our algorithm we use it to test the hypothesis
that CA125 and the peaks do not contain useful information for the prediction
of the disease at a particular time before the diagnosis. Our algorithm
produces $p$-values that are better than those produced by the algorithm that
has been previously applied to this data set. Our conclusion is that the
proposed algorithm is more reliable for prediction on new data.
|
0904.1613
|
On the closed-form solution of the rotation matrix arising in computer
vision problems
|
cs.CV
|
We show the closed-form solution to the maximization of trace(A'R), where A
is given and R is an unknown rotation matrix. This problem occurs in many computer
vision tasks involving optimal rotation matrix estimation. The solution has
been continuously reinvented in different fields as part of specific problems.
We summarize the historical evolution of the problem and present the general
proof of the solution. We contribute to the proof by considering the degenerate
cases of A and discuss the uniqueness of R.
|
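The closed-form solution referenced above is the SVD-based orthogonal Procrustes construction: for square A with SVD A = U S V', the maximizing proper rotation is R = U diag(1, ..., 1, det(UV')) V'. A minimal sketch (the demo matrix is an arbitrary illustrative choice):

```python
import numpy as np

def best_rotation(A):
    """Proper rotation R (det = +1) maximizing trace(A.T @ R), for square A."""
    U, _, Vt = np.linalg.svd(A)
    d = np.sign(np.linalg.det(U @ Vt))   # flip one axis if U V^T is a reflection
    D = np.eye(A.shape[0])
    D[-1, -1] = d
    return U @ D @ Vt

# demo: if A is itself a rotation Q, trace(Q.T R) is maximized at R = Q
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                        # force a proper rotation
R = best_rotation(Q)
```

The det correction in the middle is exactly the degenerate-case handling the abstract alludes to: without it, U @ Vt can be a reflection rather than a rotation.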
0904.1629
|
Fuzzy inference based mentality estimation for eye robot agent
|
cs.RO cs.AI cs.HC
|
Household robots need to communicate with human beings in a friendly fashion.
To achieve better understanding of displayed information, the importance and
certainty of the information should be communicated together with the main
information. The proposed intent expression system aims to convey this
additional information using an eye robot. The eye motions are represented as
states in a pleasure-arousal space model. Change of the model state is
calculated by fuzzy inference according to the importance and certainty of the
displayed information. This change influences the arousal-sleep coordinate in
the space which corresponds to activeness in communication. The eye robot
provides a basic interface for the mascot robot system, which is an
easy-to-understand information terminal for home environments in a humatronics society.
|
0904.1631
|
Intent expression using eye robot for mascot robot system
|
cs.RO cs.AI cs.HC
|
An intent expression system using eye robots is proposed for a mascot robot
system from a viewpoint of humatronics. The eye robot aims at providing a basic
interface method for an information terminal robot system. To achieve better
understanding of the displayed information, the importance and the degree of
certainty of the information should be communicated along with the main
content. The proposed intent expression system aims at conveying this
additional information using the eye robot system. Eye motions are represented
as the states in a pleasure-arousal space model. Changes in the model state are
calculated by fuzzy inference according to the importance and degree of
certainty of the displayed information. These changes influence the
arousal-sleep coordinates in the space that corresponds to levels of liveliness
during communication. The eye robot provides a basic interface for the mascot
robot system that is easy to understand as an information terminal for home
environments in a humatronics society.
|
0904.1672
|
CP-logic: A Language of Causal Probabilistic Events and Its Relation to
Logic Programming
|
cs.AI cs.LO
|
This paper develops a logical language for representing probabilistic causal
laws. Our interest in such a language is twofold. First, it can be motivated as
a fundamental study of the representation of causal knowledge. Causality has an
inherent dynamic aspect, which has been studied at the semantical level by
Shafer in his framework of probability trees. In such a dynamic context, where
the evolution of a domain over time is considered, the idea of a causal law as
something which guides this evolution is quite natural. In our formalization, a
set of probabilistic causal laws can be used to represent a class of
probability trees in a concise, flexible and modular way. In this way, our work
extends Shafer's by offering a convenient logical representation for his
semantical objects.
Second, this language also has relevance for the area of probabilistic logic
programming. In particular, we prove that the formal semantics of a theory in
our language can be equivalently defined as a probability distribution over the
well-founded models of certain logic programs, rendering it formally quite
similar to existing languages such as ICL or PRISM. Because we can motivate and
explain our language in a completely self-contained way as a representation of
probabilistic causal laws, this provides a new way of explaining the intuitions
behind such probabilistic logic programs: we can say precisely which knowledge
such a program expresses, in terms that are equally understandable by a
non-logician. Moreover, we also obtain an additional piece of knowledge
representation methodology for probabilistic logic programs, by showing how
they can express probabilistic causal laws.
|
0904.1692
|
Error Bounds for Repeat-Accumulate Codes Decoded via Linear Programming
|
cs.IT math.IT
|
We examine regular and irregular repeat-accumulate (RA) codes with repetition
degrees which are all even. For these codes and with a particular choice of an
interleaver, we give an upper bound on the decoding error probability of a
linear-programming based decoder which is an inverse polynomial in the block
length. Our bound is valid for any memoryless, binary-input, output-symmetric
(MBIOS) channel. This result generalizes the bound derived by Feldman et al.,
which was for regular RA(2) codes.
|
0904.1700
|
Recovering the state sequence of hidden Markov models using mean-field
approximations
|
cond-mat.dis-nn cond-mat.stat-mech cs.LG
|
Inferring the sequence of states from observations is one of the most
fundamental problems in Hidden Markov Models. In statistical physics language,
this problem is equivalent to computing the marginals of a one-dimensional
model with a random external field. While this task can be accomplished through
transfer matrix methods, it becomes quickly intractable when the underlying
state space is large.
This paper develops several low-complexity approximate algorithms to address
this inference problem when the state space becomes large. The new algorithms
are based on various mean-field approximations of the transfer matrix. Their
performances are studied in detail on a simple realistic model for DNA
pyrosequencing.
|
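The exact transfer-matrix computation that the abstract says becomes intractable for large state spaces is the standard forward-backward recursion. A toy sketch for a 2-state chain (all numbers are illustrative, not from the paper); its cost is O(T q^2) in the number of states q, which is what blows up when q is large:

```python
import numpy as np

q, T_len = 2, 5
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])                     # transition probabilities
lik = np.array([[0.7, 0.3],
                [0.2, 0.8],
                [0.7, 0.3],
                [0.7, 0.3],
                [0.2, 0.8]])                   # p(observation_t | state)

alpha = np.zeros((T_len, q))
beta = np.ones((T_len, q))
alpha[0] = 0.5 * lik[0]                        # uniform initial distribution
for t in range(1, T_len):
    alpha[t] = (alpha[t - 1] @ A) * lik[t]     # forward pass
for t in range(T_len - 2, -1, -1):
    beta[t] = A @ (lik[t + 1] * beta[t + 1])   # backward pass

marg = alpha * beta
marg /= marg.sum(axis=1, keepdims=True)        # posterior state marginals
```

The paper's mean-field approximations replace the exact q x q transfer-matrix products above with cheaper factorized updates; this sketch only shows the exact baseline.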
0904.1712
|
Turbo Packet Combining for Broadband Space-Time BICM Hybrid-ARQ Systems
with Co-Channel Interference
|
cs.IT math.IT
|
In this paper, efficient turbo packet combining for single carrier (SC)
broadband multiple-input--multiple-output (MIMO) hybrid--automatic repeat
request (ARQ) transmission with unknown co-channel interference (CCI) is
studied. We propose a new frequency domain soft minimum mean square error
(MMSE)-based signal level combining technique where received signals and
channel frequency responses (CFRs) corresponding to all retransmissions are
used to decode the data packet. We provide a recursive implementation algorithm
for the introduced scheme, and show that both its computational complexity and
memory requirements are quite insensitive to the ARQ delay, i.e., maximum
number of ARQ rounds. Furthermore, we analyze the asymptotic performance, and
show that under a sum-rank condition on the CCI MIMO ARQ channel, the proposed
packet combining scheme is not interference-limited. Simulation results are
provided to demonstrate the gains offered by the proposed technique.
|
0904.1730
|
Feedback-based online network coding
|
cs.NI cs.IT math.IT
|
Current approaches to the practical implementation of network coding are
batch-based, and often do not use feedback, except possibly to signal
completion of a file download. In this paper, the various benefits of using
feedback in a network coded system are studied. It is shown that network coding
can be performed in a completely online manner, without the need for batches or
generations, and that such online operation does not affect the throughput.
Although these ideas are presented in a single-hop packet erasure broadcast
setting, they naturally extend to more general lossy networks which employ
network coding in the presence of feedback. The impact of feedback on queue
size at the sender and decoding delay at the receivers is studied. Strategies
for adaptive coding based on feedback are presented, with the goal of
minimizing the queue size and delay. The asymptotic behavior of these metrics
is characterized, in the limit of the traffic load approaching capacity.
Different notions of decoding delay are considered, including an
order-sensitive notion which assumes that packets are useful only when
delivered in order. Our work may be viewed as a natural extension of Automatic
Repeat reQuest (ARQ) schemes to coded networks.
|
0904.1812
|
Two Designs of Space-Time Block Codes Achieving Full Diversity with
Partial Interference Cancellation Group Decoding
|
cs.IT math.IT
|
A partial interference cancellation (PIC) group decoding based space-time
block code (STBC) design criterion was recently proposed by Guo and Xia, where
the trade-off between decoding complexity and code rate is addressed under the
constraint of full diversity. In this paper, two designs of STBC are proposed for any
number of transmit antennas that can obtain full diversity when PIC group
decoding (with a particular grouping scheme) is applied at the receiver. With the
PIC group decoding and an appropriate grouping scheme for the decoding, the
proposed STBC are shown to obtain the same diversity gain as the ML decoding,
but have a low decoding complexity. The first proposed STBC is designed with
multiple diagonal layers and it can obtain the full diversity for two-layer
design with the PIC group decoding and the rate is up to 2 symbols per channel
use. But with PIC-SIC group decoding, the first proposed STBC can obtain full
diversity for any number of layers and the rate can be full. The second
proposed STBC can obtain full diversity and a rate up to 9/4 with the PIC group
decoding. Some code design examples are given and simulation results show that
the newly proposed STBC can well address the rate-performance-complexity
tradeoff of the MIMO systems.
|
0904.1840
|
Higher Dimensional Consensus: Learning in Large-Scale Networks
|
cs.IT cs.DC math.IT math.OC
|
The paper presents higher dimension consensus (HDC) for large-scale networks.
HDC generalizes the well-known average-consensus algorithm. It divides the
nodes of the large-scale network into anchors and sensors. Anchors are nodes
whose states are fixed over the HDC iterations, whereas sensors are nodes that
update their states as a linear combination of the neighboring states. Under
appropriate conditions, we show that the sensor states converge to a linear
combination of the anchor states. Through the concept of anchors, HDC captures
in a unified framework several interesting network tasks, including distributed
sensor localization, leader-follower, distributed Jacobi to solve linear
systems of algebraic equations, and, of course, average-consensus. In many
network applications, it is of interest to learn the weights of the distributed
linear algorithm so that the sensors converge to a desired state. We term this
inverse problem the HDC learning problem. We pose learning in HDC as a
constrained non-convex optimization problem, which we cast in the framework of
multi-objective optimization (MOP) and to which we apply Pareto optimality. We
prove analytically relevant properties of the MOP solutions and of the Pareto
front from which we derive the solution to learning in HDC. Finally, the paper
shows how the MOP approach resolves interesting tradeoffs (speed of convergence
versus quality of the final state) arising in learning in HDC in resource
constrained networks.
|
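The anchor/sensor iteration described above can be sketched on a toy instance (my own construction, not the paper's learning algorithm): a path of 6 nodes where the two endpoints are anchors with fixed states and the interior sensors repeatedly average their neighbors. The sensor states converge to a linear combination of the anchor states, here the harmonic interpolation:

```python
import numpy as np

# nodes 0 and 5 are anchors (states fixed); nodes 1-4 are sensors
x = np.array([0.0, 0.2, 0.9, 0.1, 0.4, 1.0])  # arbitrary initial sensor states
for _ in range(500):
    new = x.copy()
    for i in range(1, 5):                     # update sensors only
        new[i] = 0.5 * (x[i - 1] + x[i + 1])  # linear combination of neighbors
    x = new
# fixed point: linear interpolation between the anchor values 0.0 and 1.0
```

The iteration is a contraction on the sensor states, so 500 rounds reach the fixed point to machine precision; in the HDC learning problem the combination weights themselves are designed so the sensors converge to a desired state.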
0904.1888
|
On Fodor on Darwin on Evolution
|
cs.NE cs.LG
|
Jerry Fodor argues that Darwin was wrong about "natural selection" because
(1) it is only a tautology rather than a scientific law that can support
counterfactuals ("If X had happened, Y would have happened") and because (2)
only minds can select. Hence Darwin's analogy with "artificial selection" by
animal breeders was misleading and evolutionary explanation is nothing but
post-hoc historical narrative. I argue that Darwin was right on all counts.
|
0904.1892
|
Lattice Strategies for the Dirty Multiple Access Channel
|
cs.IT math.IT
|
A generalization of the Gaussian dirty-paper problem to a multiple access
setup is considered. There are two additive interference signals, one known to
each transmitter but none to the receiver. The rates achievable using Costa's
strategies (i.e. by a random binning scheme induced by Costa's auxiliary random
variables) vanish in the limit when the interference signals are strong. In
contrast, it is shown that lattice strategies ("lattice precoding") can achieve
positive rates independent of the interferences, and in fact in some cases -
which depend on the noise variance and power constraints - they are optimal. In
particular, lattice strategies are optimal in the limit of high SNR. It is also
shown that the gap between the achievable rate region and the capacity region
is at most 0.167 bit. Thus, the dirty MAC is another instance of a network
setup, like the Korner-Marton modulo-two sum problem, where linear coding is
potentially better than random binning. Lattice transmission schemes and
conditions for optimality for the asymmetric case, where there is only one
interference which is known to one of the users (who serves as a "helper" to
the other user), and for the "common interference" case are also derived. In
the former case the gap between the helper achievable rate and its capacity is
at most 0.085 bit.
|
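The advantage of lattice precoding over random binning in the strong-interference limit can be seen in a one-dimensional toy sketch (the unit-integer lattice, a noiseless channel, and the specific numbers are my own illustrative assumptions, not the paper's construction): the transmitter reduces its signal modulo the lattice so its power stays bounded, and the receiver's modulo operation removes the interference no matter how strong it is:

```python
import numpy as np

def mod1(v):
    """Reduce to the fundamental cell [-0.5, 0.5) of the integer lattice."""
    return v - np.round(v)

m = 0.37            # message point inside the unit cell
s = 1234.56         # strong interference, known to the transmitter only
x = mod1(m - s)     # lattice precoding: transmit power bounded by the cell
y = x + s           # noiseless channel adds the interference back
m_hat = mod1(y)     # receiver reduces modulo the lattice; recovers m
```

A Costa-style random-binning scheme would need its parameters tuned to the interference power, whereas the modulo reduction here is insensitive to the magnitude of s.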
0904.1897
|
Refined Coding Bounds and Code Constructions for Coherent Network Error
Correction
|
cs.IT math.IT
|
Coherent network error correction is the error-control problem in network
coding with the knowledge of the network codes at the source and sink nodes.
With respect to a given set of local encoding kernels defining a linear network
code, we obtain refined versions of the Hamming bound, the Singleton bound and
the Gilbert-Varshamov bound for coherent network error correction. Similar to
its classical counterpart, this refined Singleton bound is tight for linear
network codes. The tightness of this refined bound is shown by two construction
algorithms of linear network codes achieving this bound. These two algorithms
illustrate different design methods: one makes use of existing network coding
algorithms for error-free transmission and the other makes use of classical
error-correcting codes. The implication of the tightness of the refined
Singleton bound is that the sink nodes with higher maximum flow values can have
higher error correction capabilities.
|
0904.1907
|
Average Entropy Functions
|
cs.IT cs.RO math.IT
|
The closure of the set of entropy functions associated with n discrete
variables, Gamma*_n, is a convex cone in (2^n - 1)-dimensional space, but its
full characterization remains an open problem. In this paper, we map Gamma*_n
to an n-dimensional region Phi*_n by averaging the joint entropies with the same
number of variables, and show that the simpler Phi*_n can be characterized
solely by the Shannon-type information inequalities.
|
0904.1910
|
Compressive Sampling with Known Spectral Energy Density
|
cs.IT cs.CE math.FA math.IT
|
A method to improve the l1-reconstruction performance of CS (Compressive
Sampling) for A-scan SFCW-GPR (Stepped Frequency Continuous Wave-Ground
Penetrating Radar) signals with known spectral energy density is proposed.
Instead of random sampling, the proposed method selects the location of samples
to follow the distribution of the spectral energy. Samples collected from three
different measurement methods - uniform sampling, random sampling, and energy
equipartition sampling - are used to reconstruct a given monocycle signal whose
spectral energy density is known. Objective performance evaluation in terms of
PSNR (Peak Signal to Noise Ratio) indicates empirically that the CS
reconstruction from random sampling outperforms that from uniform sampling,
while energy equipartition sampling outperforms both. These results suggest
that a similar performance improvement can be achieved for compressive SFCW
(Stepped Frequency Continuous Wave) radar, allowing even higher acquisition
speed.
|
0904.1931
|
KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression
Analysis
|
cs.DB cs.AI q-bio.GN
|
Subspace clustering has gained increasing popularity in the analysis of gene
expression data. Among subspace cluster models, the recently introduced
order-preserving sub-matrix (OPSM) has demonstrated high promise. An OPSM,
essentially a pattern-based subspace cluster, is a subset of rows and columns
in a data matrix for which all the rows induce the same linear ordering of
columns. Existing OPSM discovery methods do not scale well to increasingly
large expression datasets. In particular, twig clusters having few genes and
many experiments incur explosive computational costs and are completely pruned
off by existing methods. However, it is of particular interest to determine
small groups of genes that are tightly coregulated across many conditions. In
this paper, we present KiWi, an OPSM subspace clustering algorithm that is
scalable to massive datasets, capable of discovering twig clusters and
identifying negative as well as positive correlations. We extensively validate
KiWi using relevant biological datasets and show that KiWi correctly assigns
redundant probes to the same cluster, groups experiments with common clinical
annotations, differentiates real promoter sequences from negative control
sequences, and shows good association with cis-regulatory motif predictions.
|
0904.1956
|
Ergodic Layered Erasure One-Sided Interference Channels
|
cs.IT math.IT
|
The sum capacity of a class of layered erasure one-sided interference
channels is developed under the assumption of no channel state information at
the transmitters. Outer bounds are presented for this model and are shown to be
tight for the following sub-classes: i) weak, ii) strong (mix of strong but not
very strong (SnVS) and very strong (VS)), iii) ergodic very strong (mix of
strong and weak), and iv) a sub-class of mixed interference (mix of SnVS and
weak). Each sub-class is uniquely defined by the fading statistics.
|
0904.1989
|
Personalized Recommendation via Integrated Diffusion on User-Item-Tag
Tripartite Graphs
|
cs.IR
|
Personalized recommender systems are confronting great challenges of
accuracy, diversification and novelty, especially when the data set is sparse
and lacks accessorial information, such as user profiles, item attributes and
explicit ratings. Collaborative tags contain rich information about
personalized preferences and item contents, and therefore have the potential to
help provide better recommendations. In this paper, we propose a recommendation
algorithm based on an integrated diffusion on user-item-tag tripartite graphs.
We use three benchmark data sets, Del.icio.us, MovieLens and BibSonomy, to
evaluate our algorithm. Experimental results demonstrate that the usage of tag
information can significantly improve accuracy, diversification and novelty of
recommendations.
|
0904.2012
|
Simplicial Databases
|
cs.DB cs.IR
|
In this paper, we define a category DB, called the category of simplicial
databases, whose objects are databases and whose morphisms are data-preserving
maps. Along the way we give a precise formulation of the category of relational
databases, and prove that it is a full subcategory of DB. We also prove that
limits and colimits always exist in DB and that they correspond to queries such
as select, join, union, etc.
One feature of our construction is that the schema of a simplicial database
has a natural geometric structure: an underlying simplicial set. The geometry
of a schema is a way of keeping track of relationships between distinct tables,
and can be thought of as a system of foreign keys. The shape of a schema is
generally intuitive (e.g. the schema for round-trip flights is a circle
consisting of an edge from $A$ to $B$ and an edge from $B$ to $A$), and as
such, may be useful for analyzing data.
We give several applications of our approach, as well as possible advantages
it has over the relational model. We also indicate some directions for further
research.
|
0904.2022
|
Absdet-Pseudo-Codewords and Perm-Pseudo-Codewords: Definitions and
Properties
|
cs.IT cs.DM math.IT
|
The linear-programming decoding performance of a binary linear code crucially
depends on the structure of the fundamental cone of the parity-check matrix
that describes the code. Towards a better understanding of fundamental cones
and the vectors therein, we introduce the notion of absdet-pseudo-codewords and
perm-pseudo-codewords: we give the definitions, we discuss some simple
examples, and we list some of their properties.
|
0904.2037
|
Boosting through Optimization of Margin Distributions
|
cs.LG cs.CV
|
Boosting has attracted much research attention in the past decade. The
success of boosting algorithms may be interpreted in terms of the margin
theory. Recently it has been shown that generalization error of classifiers can
be obtained by explicitly taking the margin distribution of the training data
into account. Most current boosting algorithms in practice optimize a convex
loss function and do not make use of the margin distribution. In this work we
design a new boosting algorithm, termed
margin-distribution boosting (MDBoost), which directly maximizes the average
margin and minimizes the margin variance simultaneously. This way the margin
distribution is optimized. A totally-corrective optimization algorithm based on
column generation is proposed to implement MDBoost. Experiments on UCI datasets
show that MDBoost outperforms AdaBoost and LPBoost in most cases.
|
0904.2051
|
Joint-sparse recovery from multiple measurements
|
cs.IT math.IT
|
The joint-sparse recovery problem aims to recover, from sets of compressed
measurements, unknown sparse matrices with nonzero entries restricted to a
subset of rows. This is an extension of the single-measurement-vector (SMV)
problem widely studied in compressed sensing. We analyze the recovery
properties for two types of recovery algorithms. First, we show that recovery
using sum-of-norm minimization cannot exceed the uniform recovery rate of
sequential SMV using $\ell_1$ minimization, and that there are problems that
can be solved with one approach but not with the other. Second, we analyze the
performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig.
Proc., 56 (2008)] in combination with $\ell_1$ minimization, and show how
recovery improves as more measurements are taken. From this analysis it follows
that having more measurements than number of nonzero rows does not improve the
potential theoretical recovery rate.
|
0904.2096
|
A Distributed Software Architecture for Collaborative Teleoperation
based on a VR Platform and Web Application Interoperability
|
cs.HC cs.GR cs.MM cs.RO
|
Augmented Reality and Virtual Reality can provide to a Human Operator (HO) a
real help to complete complex tasks, such as robot teleoperation and
cooperative teleassistance. Using appropriate augmentations, the HO can
interact faster, safer and easier with the remote real world. In this paper, we
present an extension of an existing distributed software and network
architecture for collaborative teleoperation based on networked human-scaled
mixed reality and mobile platform. The first teleoperation system was composed
of a VR application and a Web application. However, the two systems could not
be used together, making it impossible to control a distant robot from both
simultaneously. Our goal is to update the teleoperation system to permit
heterogeneous collaborative teleoperation between the two platforms. An
important feature of this interface is the use of different mobile platforms to
control one or more robots.
|
0904.2160
|
Inferring Dynamic Bayesian Networks using Frequent Episode Mining
|
cs.LG
|
Motivation: Several different threads of research have been proposed for
modeling and mining temporal data. On the one hand, approaches such as dynamic
Bayesian networks (DBNs) provide a formal probabilistic basis to model
relationships between time-indexed random variables but these models are
intractable to learn in the general case. On the other, algorithms such as
frequent episode mining are scalable to large datasets but do not exhibit the
rigorous probabilistic interpretations that are the mainstay of the graphical
models literature.
Results: We present a unification of these two seemingly diverse threads of
research, by demonstrating how dynamic (discrete) Bayesian networks can be
inferred from the results of frequent episode mining. This helps bridge the
modeling emphasis of the former with the counting emphasis of the latter.
First, we show how, under reasonable assumptions on data characteristics and on
influences of random variables, the optimal DBN structure can be computed using
a greedy, local, algorithm. Next, we connect the optimality of the DBN
structure with the notion of fixed-delay episodes and their counts of distinct
occurrences. Finally, to demonstrate the practical feasibility of our approach,
we focus on a specific (but broadly applicable) class of networks, called
excitatory networks, and show how the search for the optimal DBN structure can
be conducted using just information from frequent episodes. Application on
datasets gathered from mathematical models of spiking neurons as well as real
neuroscience datasets are presented.
Availability: Algorithmic implementations, simulator codebases, and datasets
are available from our website at http://neural-code.cs.vt.edu/dbn
|
0904.2237
|
On Binary Cyclic Codes with Five Nonzero Weights
|
cs.IT cs.DM math.CO math.IT
|
Let $q=2^n$, $0\leq k\leq n-1$, $n/\gcd(n,k)$ be odd and $k\neq n/3, 2n/3$.
In this paper the value distribution of the following exponential sums
\[\sum\limits_{x\in \mathbb{F}_q}(-1)^{\mathrm{Tr}_1^n(\alpha x^{2^{2k}+1}+\beta
x^{2^k+1}+\gamma x)}\quad(\alpha,\beta,\gamma\in \mathbb{F}_{q})\] is determined. As an
application, the weight distribution of the binary cyclic code $\mathcal{C}$, with
parity-check polynomial $h_1(x)h_2(x)h_3(x)$, where $h_1(x)$, $h_2(x)$ and
$h_3(x)$ are the minimal polynomials of $\pi^{-1}$, $\pi^{-(2^k+1)}$ and
$\pi^{-(2^{2k}+1)}$ respectively for a primitive element $\pi$ of $\mathbb{F}_q$, is
also determined.
|
0904.2302
|
A Fundamental Characterization of Stability in Broadcast Queueing
Systems
|
cs.NI cs.IT math.IT
|
Stability with respect to a given scheduling policy has become an important
issue for wireless communication systems, but it is hard to prove in particular
scenarios. In this paper two simple conditions for stability in broadcast
channels are derived, which are easy to check. Heuristically, the conditions
imply that if the queue length in the system becomes large, the rate allocation
is always the solution of a weighted sum rate maximization problem.
Furthermore, the change of the weight factors between two time slots becomes
smaller and the weight factors of the users, whose queues are bounded while the
other queues expand, tend to zero. Then it is shown that for any mean arrival
rate vector inside the ergodic achievable rate region the system is stable in
the strong sense when the given scheduling policy complies with the conditions.
In this case the policy is so-called throughput-optimal. Subsequently, some
results on the necessity of the presented conditions are provided. Finally, in
several application examples it is shown that the results in the paper provide
a convenient way to verify the throughput-optimal policies.
|
0904.2311
|
Source Coding with a Side Information "Vending Machine"
|
cs.IT math.IT
|
We study source coding in the presence of side information, when the system
can take actions that affect the availability, quality, or nature of the side
information. We begin by extending the Wyner-Ziv problem of source coding with
decoder side information to the case where the decoder is allowed to choose
actions affecting the side information. We then consider the setting where
actions are taken by the encoder, based on its observation of the source.
Actions may have costs that are commensurate with the quality of the side
information they yield, and an overall per-symbol cost constraint may be
imposed. We characterize the achievable tradeoffs between rate, distortion, and
cost in some of these problem settings. Among our findings is the fact that
even in the absence of a cost constraint, greedily choosing the action
associated with the `best' side information is, in general, sub-optimal. A few
examples are worked out.
|
0904.2320
|
Why Global Performance is a Poor Metric for Verifying Convergence of
Multi-agent Learning
|
cs.MA cs.LG
|
Experimental verification has been the method of choice for verifying the
stability of a multi-agent reinforcement learning (MARL) algorithm as the
number of agents grows and theoretical analysis becomes prohibitively complex.
For cooperative agents, where the ultimate goal is to optimize some global
metric, the stability is usually verified by observing the evolution of the
global performance metric over time. If the global metric improves and
eventually stabilizes, it is considered a reasonable verification of the
system's stability.
The main contribution of this note is establishing the need for better
experimental frameworks and measures to assess the stability of large-scale
adaptive cooperative systems. We show an experimental case study where the
stability of the global performance metric can be rather deceiving, hiding an
underlying instability in the system that later leads to a significant drop in
performance. We then propose an alternative metric that relies on agents' local
policies and show, experimentally, that our proposed metric is more effective
(than the traditional global performance metric) in exposing the instability of
MARL algorithms.
|
0904.2375
|
The Zeta Function of a Periodic-Finite-Type Shift
|
cs.IT math.IT
|
The class of periodic-finite-type shifts (PFT's) is a class of sofic shifts
that strictly includes the class of shifts of finite type (SFT's), and the zeta
function of a PFT is a generating function for the number of periodic sequences
in the shift. In this paper, we derive a useful formula for the zeta function
of a PFT. This formula allows the zeta function of a PFT to be computed more
efficiently than the specialization of a formula known for a generic sofic
shift.
|
0904.2401
|
A Combinatorial Study of Linear Deterministic Relay Networks
|
cs.IT math.IT
|
In the last few years the so-called "linear deterministic" model of relay
channels has gained popularity as a means of studying the flow of information
over wireless communication networks, and this approach generalizes the model
of wireline networks which is standard in network optimization. There is recent
work extending the celebrated max-flow/min-cut theorem to the capacity of a
unicast session over a linear deterministic relay network which is modeled by a
layered directed graph. This result was first proved by a random coding scheme
over large blocks of transmitted signals. We demonstrate the same result with a
simple, deterministic, polynomial-time algorithm which takes as input a single
transmitted signal instead of a long block of signals. Our capacity-achieving
transmission scheme for a two-layer network requires the extension of a
one-dimensional Rado-Hall transversal theorem on the independent subsets of
rows of a row-partitioned matrix into a two-dimensional variation for block
matrices. To generalize our approach to larger networks we use the
submodularity of the capacity of a cut for our model and show that our complete
transmission scheme can be obtained by solving a linear program over the
intersection of two polymatroids. We prove that our transmission scheme can
achieve the max-flow/min-cut capacity by applying a theorem of Edmonds about
such linear programs. We use standard submodular function minimization
techniques as part of our polynomial-time algorithm to construct our
capacity-achieving transmission scheme.
|
0904.2441
|
Reliable Identification of RFID Tags Using Multiple Independent Reader
Sessions
|
cs.IT math.IT
|
Radio Frequency Identification (RFID) systems are gaining momentum in various
applications of logistics, inventory, etc. A generic problem in such systems is
to ensure that the RFID readers can reliably read a set of RFID tags, such that
the probability of missing tags stays below an acceptable value. A tag may be
missing (left unread) due to errors in the communication link towards the
reader, e.g. due to obstacles in the radio path. The present paper proposes
techniques that use multiple reader sessions, during which the system of
readers obtains a running estimate of the probability to have at least one tag
missing. Based on such an estimate, it is decided whether an additional reader
session is required. Two methods are proposed; both rely on the statistical
independence of the tag reading errors across different reader sessions, which
is a plausible assumption when e.g. each reader session is executed on
different readers. The first method uses statistical relationships that are
valid when the reader sessions are independent. The second method is obtained
by modifying an existing capture-recapture estimator. The results show that,
when the reader sessions are independent, the proposed mechanisms provide a
good approximation to the probability of missing tags, such that the number of
reader sessions made meets the target specification. If the assumption of
independence is violated, the estimators are still useful, but they should be
corrected by a margin of additional reader sessions to ensure that the target
probability of missing tags is met.
|
0904.2448
|
All that Glisters is not Galled
|
cs.DM cs.CE q-bio.PE
|
Galled trees, evolutionary networks with isolated reticulation cycles, have
appeared under several slightly different definitions in the literature. In
this paper we establish the actual relationships between the main four such
alternative definitions: namely, the original galled trees, level-1 networks,
nested networks with nesting depth 1, and evolutionary networks with
arc-disjoint reticulation cycles.
|
0904.2477
|
Joint Range of R\'enyi Entropies
|
cs.IT math.IT math.PR
|
The exact range of the joint values of several R\'{e}nyi entropies is
determined. The method is based on topology, with special emphasis on the
orientation of the objects studied. As in the case when only two orders of
R\'{e}nyi entropies are studied, one can parametrize upper and lower bounds, but
an explicit formula for a tight upper or lower bound cannot be given.
|
0904.2482
|
Good Concatenated Code Ensembles for the Binary Erasure Channel
|
cs.IT math.IT
|
In this work, we give good concatenated code ensembles for the binary erasure
channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code
ensembles formed by the serial concatenation of a repetition code with multiple
accumulators, and the hybrid concatenated code (HCC) ensembles recently
introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics,
Lausanne, Switzerland) consisting of an outer multiple parallel concatenated
code serially concatenated with an inner accumulator. We introduce stopping
sets for iterative constituent code oriented decoding using maximum a
posteriori erasure correction in the constituent codes. We then analyze the
asymptotic stopping set distribution for RMA and HCC ensembles and show that
their stopping distance hmin, defined as the size of the smallest nonempty
stopping set, asymptotically grows linearly with the block length. Thus, these
code ensembles are good for the BEC. It is shown that for RMA code ensembles,
contrary to the asymptotic minimum distance dmin, whose growth rate coefficient
increases with the number of accumulate codes, the hmin growth rate coefficient
diminishes with the number of accumulators. We also consider random puncturing
of RMA code ensembles and show that for sufficiently high code rates, the
asymptotic hmin does not grow linearly with the block length, contrary to the
asymptotic dmin, whose growth rate coefficient approaches the Gilbert-Varshamov
bound as the rate increases. Finally, we give iterative decoding thresholds for
the different code ensembles to compare the convergence properties.
|
0904.2585
|
Interference Relay Channels - Part I: Transmission Rates
|
cs.IT math.IT
|
We analyze the performance of a system composed of two interfering
point-to-point links where the transmitters can exploit a common relay to
improve their individual transmission rate. When the relay uses the
amplify-and-forward protocol we prove that it is not always optimal (in some
sense defined later on) to exploit all the relay transmit power and derive the
corresponding optimal amplification factor. For the case of the
decode-and-forward protocol, already investigated in [1], we show that this
protocol, through the cooperation degree between each transmitter and the
relay, is the only one that naturally introduces a game between the
transmitters. For the estimate-and-forward protocol, we derive two rate regions
for the general case of discrete interference relay channels (IRCs) and
specialize these results to obtain the Gaussian case; these regions correspond
to two compression schemes at the relay, having different resolution levels.
These schemes are compared analytically in some special cases. All the results
mentioned are illustrated by simulations, given in this part, and exploited to
study power allocation games in multi-band IRCs in the second part of this
two-part paper.
|
0904.2587
|
Interference Relay Channels - Part II: Power Allocation Games
|
cs.IT math.IT
|
In the first part of this paper we have derived achievable transmission rates
for the (single-band) interference relay channel (IRC) when the relay
implements either the amplify-and-forward, decode-and-forward or
estimate-and-forward protocol. Here, we consider wireless networks that can be
modeled by a multi-band IRC. We tackle the existence issue of Nash equilibria
(NE) in these networks where each information source is assumed to selfishly
allocate its power between the available bands in order to maximize its
individual transmission rate. Interestingly, it is possible to show that the
three power allocation (PA) games (corresponding to the three protocols
assumed) under investigation are concave, which guarantees the existence of a
pure NE by Rosen's theorem [3]. Then, as the relay can also optimize several
parameters e.g., its position and transmit power, it is further considered as
the leader of a Stackelberg game where the information sources are the
followers. Our theoretical analysis is illustrated by simulations giving more
insights on the addressed issues.
|
0904.2595
|
A Methodology for Learning Players' Styles from Game Records
|
cs.AI cs.LG
|
We describe a preliminary investigation into learning a Chess player's style
from game records. The method is based on attempting to learn features of a
player's individual evaluation function using the method of temporal
differences, with the aid of a conventional Chess engine architecture. Some
encouraging results were obtained in learning the styles of two recent Chess
world champions, and we report on our attempt to use the learnt styles to
discriminate between the players from game records by trying to detect who was
playing white and who was playing black. We also discuss some limitations of
our approach and propose possible directions for future research. The method we
have presented may also be applicable to other strategic games, and may even be
generalisable to other domains where sequences of agents' actions are recorded.
|
0904.2623
|
Exponential Family Graph Matching and Ranking
|
cs.LG cs.AI
|
We present a method for learning max-weight matching predictors in bipartite
graphs. The method consists of performing maximum a posteriori estimation in
exponential families with sufficient statistics that encode permutations and
data features. Although inference is in general hard, we show that for one very
relevant application - web page ranking - exact inference is efficient. For
general model instances, an appropriate sampler is readily available. Contrary
to existing max-margin matching models, our approach is statistically
consistent and, in addition, experiments with increasing sample sizes indicate
superior improvement over such models. We apply the method to graph matching in
computer vision as well as to a standard benchmark dataset for learning web
page ranking, in which we obtain state-of-the-art results, in particular
improving on max-margin variants. The drawback of this method with respect to
max-margin alternatives is its runtime for large graphs, which is comparatively
high.
|
0904.2695
|
Compressive Diffraction Tomography for Weakly Scattering
|
cs.CE cs.IT math.IT
|
Well-known diffraction tomography (DT) faces an appealing requirement:
successful reconstruction from few-view and limited-angle data. Inspired by the
well-known compressive sensing (CS), accurate super-resolution reconstruction
from highly sparse data for weak scatterers is investigated in this paper. To
realize compressive data measurement, and in particular to obtain
super-resolution reconstruction from highly sparse data, a compressive system
realized by surrounding the probed obstacles with random media is proposed and
empirically studied. Several interesting conclusions are drawn: (a) if the
desired resolution is within the range from to, the K-sparse N-unknown imaging
can be obtained exactly by measurements, which is comparable to the number of
measurements required by the Gaussian random matrix in the compressive sensing
literature. (b) By incorporating the random media, which enforce the multi-path
effect of wave propagation, the resulting measurement matrix is incoherent with
the wavelet matrix; in other words, when the probed obstacles are sparse in the
wavelet framework, the required number of measurements for successful
reconstruction is similar to the above. (c) If the expected resolution is lower
than, the required number of measurements of the proposed compressive system is
almost identical to that in free space. (d) There is also a tradeoff to be made
between the imaging resolution and the number of measurements. In addition, by
introducing complex Gaussian variables, a fast sparse Bayesian algorithm is
slightly modified to deal with complex-valued optimization under sparsity
constraints.
|
0904.2827
|
Principle of development
|
cs.AI
|
Today, science has a powerful tool for the description of reality: numbers.
However, the concept of number did not appear immediately; let us try to trace
its evolution. Numbers emerged from the need for accurate estimates of quantity
in order to permit a comparison of objects. Yet if one observes how many times
a day a person uses numbers versus comparisons, it becomes evident that
comparison is used much more frequently. A comparison, however, is not possible
without two opposite basic standards. Thus, to introduce the concept of
comparison one must have two opposing standards; in turn, the operation of
comparison is necessary to introduce the concept of number. Arguably, the
scientific description of reality is impossible without the concept of
opposites.
This paper analyzes the concept of opposites as the basis for the
introduction of the principle of development.
|
0904.2861
|
A simple algorithm for decoding both errors and erasures of Reed-Solomon
codes
|
cs.IT math.IT
|
A simple algorithm for decoding both errors and erasures of Reed-Solomon
codes is described.
|
0904.2863
|
Error Scaling Laws for Linear Optimal Estimation from Relative
Measurements
|
cs.IT math.IT
|
We study the problem of estimating vector-valued variables from noisy
"relative" measurements. This problem arises in several sensor network
applications. The measurement model can be expressed in terms of a graph, whose
nodes correspond to the variables and edges to noisy measurements of the
difference between two variables. We take an arbitrary variable as the
reference and consider the optimal (minimum variance) linear unbiased estimate
of the remaining variables.
We investigate how the error in the optimal linear unbiased estimate of a
node variable grows with the distance of the node to the reference node. We
establish a classification of graphs, namely, dense or sparse in R^d, 1 <= d <= 3,
that determines how the linear unbiased optimal estimation error of a node
grows with its distance from the reference node. In particular, if a graph is
dense in 1, 2, or 3D, then a node variable's estimation error is upper bounded
by a linear, logarithmic, or bounded function of distance from the reference,
respectively. Corresponding lower bounds are obtained if the graph is sparse in
1, 2, and 3D.
Our results also show that naive measures of graph density, such as node
degree, are inadequate predictors of the estimation error. Being true for the
optimal linear unbiased estimate, these scaling laws determine
algorithm-independent limits on the estimation accuracy achievable in large
graphs.
|
0904.2921
|
Inter-Session Network Coding with Strategic Users: A Game-Theoretic
Analysis of Network Coding
|
cs.IT math.IT
|
A common assumption in the existing network coding literature is that the
users are cooperative and non-selfish. However, this assumption can be violated
in practice. In this paper, we analyze inter-session network coding in a wired
network using game theory. We assume selfish users acting strategically to
maximize their own utility, leading to a resource allocation game among users.
In particular, we study the well-known butterfly network topology where a
bottleneck link is shared by several network coding and routing flows. We prove
the existence of a Nash equilibrium for a wide range of utility functions. We
show that the number of Nash equilibria can be large (even infinite) for
certain choices of system parameters. This is in sharp contrast to a similar
game setting with traditional packet forwarding where the Nash equilibrium is
always unique. We then characterize the worst-case efficiency bounds, i.e., the
Price-of-Anarchy (PoA), compared to an optimal and cooperative network design.
We show that by using a novel discriminatory pricing scheme which charges
encoded and forwarded packets differently, we can improve the PoA. However,
regardless of the discriminatory pricing scheme being used, the PoA is still
worse than for the case when network coding is not applied. This implies that,
although inter-session network coding can improve performance compared to
ordinary routing, it is significantly more sensitive to users' strategic
behaviour. For example, in a butterfly network where the side links have zero
cost, the efficiency can be as low as 25%. If the side links have non-zero
cost, then the efficiency can further reduce to only 20%. These results
generalize the well-known result of guaranteed 67% worst-case efficiency for
traditional packet forwarding networks.
|
0904.2953
|
Towards an Intelligent System for Risk Prevention and Management
|
cs.AI cs.MA
|
Making a decision in a changeable and dynamic environment is an arduous task
owing to the lack of information, its uncertainty, and the unawareness of
planners about the future evolution of incidents. The use of a decision support
system is an efficient solution to this issue. Such a system can help emergency
planners and responders to detect possible emergencies, as well as to suggest
and evaluate possible courses of action to deal with the emergency. In our
work, we are interested in modeling a preventive monitoring and emergency
management system, with emphasis on its generic aspect. In this paper we
propose an agent-based architecture of this system and we describe a first step
of our approach which is the modeling of information and their representation
using a multiagent system.
|
0904.2954
|
Agent-Based Decision Support System to Prevent and Manage Risk
Situations
|
cs.AI cs.MA
|
The topic of risk prevention and emergency response has become a key social
and political concern. One approach to address this challenge is to develop
Decision Support Systems (DSS) that can help emergency planners and responders
to detect emergencies, as well as to suggest possible courses of action to deal
with the emergency. Our research work comes in this framework and aims to
develop a DSS that is as generic as possible and independent of the
case study.
|
0904.3060
|
An efficient quantum search engine on unsorted database
|
cs.DB cs.DS
|
We consider the problem of finding one or more desired items out of an
unsorted database. Patel has shown that if the database permits quantum
queries, then mere digitization is sufficient for efficient search for one
desired item. His algorithm, called the factorized quantum search algorithm,
can locate the desired item in an unsorted database using
$O(\log_4 N)$ queries to factorized oracles. But the algorithm requires that
all the property values must be distinct from each other. In this paper, we
discuss how to make a database satisfy the requirements, and present a quantum
search engine based on the algorithm. Our goal is achieved by introducing
auxiliary files for the property values that are not distinct, and converting
every complex query request into a sequence of calls to factorized quantum
search algorithm. The query complexity of our algorithm is
$O(P \cdot Q \cdot M \cdot \log_4 N)$, where $P$ is the number of potential
simple query requests in the complex query request, $Q$ is the maximum number
of calls to the factorized quantum search algorithm among the simple queries,
and $M$ is the number of auxiliary files for the property on which our
algorithm is searching for desired items. This implies that managing an
unsorted database on an actual quantum computer is possible and efficient.
|
0904.3063
|
Using Dissortative Mating Genetic Algorithms to Track the Extrema of
Dynamic Deceptive Functions
|
cs.NE
|
Traditional Genetic Algorithms (GAs) mating schemes select individuals for
crossover independently of their genotypic or phenotypic similarities. In
Nature, this behaviour is known as random mating. However, non-random schemes -
in which individuals mate according to their kinship or likeness - are more
common in natural systems. Previous studies indicate that, when applied to GAs,
negative assortative mating (a specific type of non-random mating, also known
as dissortative mating) may improve their performance (on both speed and
reliability) in a wide range of problems. Dissortative mating maintains the
genetic diversity at a higher level during the run, and that fact is frequently
observed as an explanation for dissortative GAs' ability to escape local optima
traps. Dynamic problems, due to their specificities, demand special care when
tuning a GA, because diversity plays an even more crucial role than it does
when tackling static ones. This paper investigates the behaviour of
dissortative mating GAs, namely the recently proposed Adaptive Dissortative
Mating GA (ADMGA), on dynamic trap functions. ADMGA selects parents according
to their Hamming distance, via a self-adjustable threshold value. The method,
by keeping population diversity during the run, provides an effective means to
deal with dynamic problems. Tests conducted with deceptive and nearly deceptive
trap functions indicate that ADMGA is able to outperform other GAs, some
specifically designed for tracking moving extrema, on a wide range of tests,
being particularly effective when the speed of change is not very high. When
comparing the algorithm to a previously proposed dissortative GA, results show
that performance is equivalent on the majority of the experiments, but ADMGA
performs better when solving the hardest instances of the test set.
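The Hamming-distance-based parent selection described above can be illustrated with a minimal sketch. ADMGA's self-adjusting threshold is omitted here, and all names (`dissortative_pair`, the fixed `threshold`) are illustrative assumptions rather than the authors' implementation:

```python
import random

def hamming(a, b):
    # Number of positions at which two equal-length bit strings differ.
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold, rng=random):
    """Pick a first parent at random, then accept the first candidate whose
    Hamming distance to it is at least `threshold` (dissortative mating
    favours dissimilar parents). Falls back to the most distant candidate
    if none qualifies."""
    p1 = rng.choice(population)
    candidates = [c for c in population if c is not p1]
    for c in rng.sample(candidates, len(candidates)):
        if hamming(p1, c) >= threshold:
            return p1, c
    return p1, max(candidates, key=lambda c: hamming(p1, c))

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(8)]
p1, p2 = dissortative_pair(pop, threshold=6, rng=rng)
print(hamming(p1, p2))  # distance between the mated parents
```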
|
0904.3148
|
CRT-Based High Speed Parallel Architecture for Long BCH Encoding
|
cs.AR cs.IT math.IT
|
BCH (Bose-Chaudhuri-Hocquenghem) error correcting codes ([1]-[2]) are now
widely used in communication systems and digital technology. Direct LFSR (linear
feedback shift register)-based encoding of a long BCH code suffers from
the serial-in and serial-out limitation and the large fanout effect of some XOR
gates. As a result, LFSR-based encoders of long BCH codes cannot keep up with
the data transmission speed in some applications. Several parallel
encoders for long cyclic codes have been proposed in [3]-[8]. The technique for
eliminating the large fanout effect by the J-unfolding method and some algebraic
manipulation was presented in [7] and [8]. In this paper we propose a
CRT(Chinese Remainder Theorem)-based parallel architecture for long BCH
encoding. Our novel technique can be used to eliminate the fanout bottleneck.
The only restriction on the speed of long BCH encoding of our CRT-based
architecture is $\log_2 N$, where $N$ is the length of the BCH code.
|
0904.3151
|
Efficient Construction of Neighborhood Graphs by the Multiple Sorting
Method
|
cs.DS cs.LG
|
Neighborhood graphs are gaining popularity as a concise data representation
in machine learning. However, naive graph construction by pairwise distance
calculation takes $O(n^2)$ runtime for $n$ data points and this is
prohibitively slow for millions of data points. For strings of equal length,
the multiple sorting method (Uno, 2008) can construct an $\epsilon$-neighbor
graph in $O(n+m)$ time, where $m$ is the number of $\epsilon$-neighbor pairs in
the data. To introduce this remarkably efficient algorithm to continuous
domains such as images, signals and texts, we employ a random projection method
to convert vectors to strings. Theoretical results are presented to elucidate
the trade-off between approximation quality and computation time. Empirical
results show the efficiency of our method in comparison to fast nearest
neighbor alternatives.
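As an illustration of the two ingredients the abstract combines (random projection plus sorting), the following sketch builds an approximate $\epsilon$-neighbor pair set. It is not Uno's multiple sorting method itself, and all names and parameters (`n_proj`, `window`) are illustrative assumptions:

```python
import random, math

def eps_neighbor_pairs(points, eps, n_proj=4, window=3, seed=0):
    """Approximate eps-neighbor pairs: project points onto a few random
    directions, sort by each projection, and test only pairs that fall
    within a small window of each other in some sorted order."""
    rng = random.Random(seed)
    d = len(points[0])
    candidates = set()
    for _ in range(n_proj):
        direction = [rng.gauss(0, 1) for _ in range(d)]
        keys = sorted(range(len(points)),
                      key=lambda i: sum(p * q for p, q in zip(points[i], direction)))
        for pos, i in enumerate(keys):
            for j in keys[pos + 1:pos + 1 + window]:
                candidates.add((min(i, j), max(i, j)))
    # Verify the candidate pairs with exact distances.
    return {(i, j) for i, j in candidates
            if math.dist(points[i], points[j]) <= eps}

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.05, 5.0)]
print(sorted(eps_neighbor_pairs(pts, eps=0.2)))  # → [(0, 1), (2, 3)]
```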
|
0904.3165
|
Fading Broadcast Channels with State Information at the Receivers
|
cs.IT math.IT
|
Despite considerable progress on the information-theoretic broadcast channel,
the capacity region of fading broadcast channels with channel state known at
the receivers but unknown at the transmitter remains unresolved. We address
this subject by introducing a layered erasure broadcast channel model in which
each component channel has a state that specifies the received signal levels in
an instance of a deterministic binary expansion channel. We find the capacity
region of this class of broadcast channels. The capacity achieving strategy
assigns each signal level to the user that derives the maximum expected rate
from that level. The outer bound is based on a channel enhancement that creates
a degraded broadcast channel for which the capacity region is known. This same
approach is then used to find inner and outer bounds to the capacity region of
fading Gaussian broadcast channels. The achievability scheme employs a
superposition of binary inputs. For intermittent AWGN channels and for Rayleigh
fading channels, the achievable rates are observed to be within 1-2 bits of the
outer bound at high SNR. We also prove that the achievable rate region is
within 6.386 bits/s/Hz of the capacity region for all fading AWGN broadcast
channels.
|
0904.3310
|
FastLMFI: An Efficient Approach for Local Maximal Patterns Propagation
and Maximal Patterns Superset Checking
|
cs.DB cs.AI cs.DS
|
Maximal frequent patterns superset checking plays an important role in the
efficient mining of complete Maximal Frequent Itemsets (MFI) and maximal search
space pruning. In this paper we present a new indexing approach, FastLMFI for
local maximal frequent patterns (itemset) propagation and maximal patterns
superset checking. Experimental results on different sparse and dense datasets
show that our work is better than the well-known progressive focusing
technique. We have also integrated our superset checking approach with an
existing state of the art maximal itemsets algorithm Mafia, and compare our
results with current best maximal itemsets algorithms afopt-max and FP
(zhu)-max. Our results outperform afopt-max and FP (zhu)-max on dense (chess
and mushroom) datasets on almost all support thresholds, which shows the
effectiveness of our approach.
|
0904.3312
|
HybridMiner: Mining Maximal Frequent Itemsets Using Hybrid Database
Representation Approach
|
cs.DB cs.AI cs.DS
|
In this paper we present a novel hybrid (array-based layout and vertical
bitmap layout) database representation approach for mining complete Maximal
Frequent Itemset (MFI) on sparse and large datasets. Our work is novel in terms
of scalability, item search order and two horizontal and vertical projection
techniques. We also present a maximal algorithm using this hybrid database
representation approach. Different experimental results on real and sparse
benchmark datasets show that our approach is better than previous
state-of-the-art maximal algorithms.
|
0904.3316
|
Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection
Technique
|
cs.DB cs.AI cs.DS
|
Mining frequent itemsets using a bit-vector representation approach is very
efficient for dense datasets, but highly inefficient for sparse datasets
due to the lack of an efficient bit-vector projection technique. In this paper we
present a novel efficient bit-vector projection technique for sparse and dense
datasets. To check the efficiency of our bit-vector projection technique, we
present a new frequent itemset mining algorithm Ramp (Real Algorithm for Mining
Patterns) built upon our bit-vector projection technique. The performance of
Ramp is compared with the current best (all, maximal and closed) frequent
itemset mining algorithms on benchmark datasets. Different experimental results
on sparse and dense datasets show that mining frequent itemsets using Ramp is
faster than the current best algorithms, which shows the effectiveness of our
bit-vector projection idea. We also present a new local maximal frequent
itemset propagation and maximal itemset superset checking approach, FastLMFI,
built upon our PBR bit-vector projection technique. Our computational
experiments suggest that itemset maximality checking using FastLMFI is faster
and more efficient than the previously known progressive focusing approach.
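The core of any bit-vector representation is that itemset support counting becomes a bitwise AND followed by a popcount. A minimal sketch of that idea (not the PBR projection technique itself, whose details the abstract does not give; all names are illustrative):

```python
def make_bitmaps(transactions, items):
    """One bitmap (a Python int) per item: bit t is set iff the item
    occurs in transaction t."""
    bitmaps = {item: 0 for item in items}
    for t, trans in enumerate(transactions):
        for item in trans:
            if item in bitmaps:
                bitmaps[item] |= 1 << t
    return bitmaps

def support(itemset, bitmaps, n_trans):
    # Intersect the item bitmaps; the popcount of the result is the
    # number of transactions containing every item of the itemset.
    bits = (1 << n_trans) - 1
    for item in itemset:
        bits &= bitmaps[item]
    return bin(bits).count("1")

transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
bitmaps = make_bitmaps(transactions, {"a", "b", "c"})
print(support({"a", "b"}, bitmaps, len(transactions)))  # → 2
```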
|
0904.3319
|
Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum
Support
|
cs.DB cs.AI cs.DS
|
Real-world datasets are sparse, dirty, and contain hundreds of items. In such
situations, discovering interesting rules (results) using the traditional
frequent itemset mining approach by specifying a user-defined input support
threshold is not appropriate, since without any domain knowledge, setting the
support threshold too small or too large can yield nothing or a large number of
redundant, uninteresting results. Recently a novel approach of mining only
N-most/Top-K interesting frequent itemsets has been proposed, which discovers
the top N interesting results without specifying any user-defined support
threshold. However, mining interesting frequent itemsets without a minimum
support threshold is more costly in terms of itemset search space exploration
and processing cost. Thus, the efficiency of their mining depends upon three
main factors: (1) the database representation approach used for itemset
frequency counting, (2) the projection of relevant transactions to lower-level
nodes of the search space, and (3) the algorithm implementation technique.
Therefore, to improve the efficiency of the mining
process, in this paper we present two novel algorithms, N-MostMiner and
Top-K-Miner, using the bit-vector representation approach, which is very
efficient in terms of itemset frequency counting and transaction projection.
In addition, several efficient implementation techniques for N-MostMiner and
Top-K-Miner are also presented, based on our implementation experience.
Our experimental results on benchmark datasets suggest that N-MostMiner and
Top-K-Miner are very efficient in terms of processing time as compared to
current best algorithms BOMO and TFP.
|
0904.3320
|
Using Association Rules for Better Treatment of Missing Values
|
cs.DB cs.AI cs.DS
|
The quality of training data for knowledge discovery in databases (KDD) and
data mining depends upon many factors, but handling missing values is
considered to be a crucial factor in overall data quality. Today, real-world
datasets contain missing values due to human error, operational error, hardware
malfunction, and many other factors. The quality of knowledge extracted,
learning and decision problems depend directly upon the quality of training
data. By considering the importance of handling missing values in KDD and data
mining tasks, in this paper we propose a novel Hybrid Missing values Imputation
Technique (HMiT) using association rule mining and a hybrid combination with
the k-nearest neighbor approach. To check the effectiveness of our HMiT missing
values imputation technique, we also present detailed experimental results on
real-world datasets. Our results suggest that the HMiT technique is not only
better in terms of accuracy but also takes less processing time compared to the
current best missing values imputation technique based on the k-nearest neighbor
approach, which shows the effectiveness of our missing values imputation
technique.
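Only the k-nearest-neighbor half of the hybrid described above is easy to show compactly; the association-rule stage of HMiT is not reproduced here, and all names and the choice of Euclidean distance are illustrative assumptions:

```python
import math

def knn_impute(rows, target_row, missing_col, k=2):
    """Fill rows[target_row][missing_col] with the average of that column
    over the k complete rows nearest in the remaining attributes."""
    def dist(a, b):
        # Compare on all attributes except the missing column.
        return math.sqrt(sum((a[i] - b[i]) ** 2
                             for i in range(len(a)) if i != missing_col))
    query = rows[target_row]
    donors = [r for j, r in enumerate(rows)
              if j != target_row and r[missing_col] is not None]
    donors.sort(key=lambda r: dist(query, r))
    return sum(r[missing_col] for r in donors[:k]) / min(k, len(donors))

rows = [[1.0, 2.0, 10.0],
        [1.1, 2.1, 11.0],
        [5.0, 5.0, 50.0],
        [1.05, 2.05, None]]
print(knn_impute(rows, target_row=3, missing_col=2, k=2))  # → 10.5
```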
|
0904.3321
|
Introducing Partial Matching Approach in Association Rules for Better
Treatment of Missing Values
|
cs.DB cs.AI cs.DS
|
Handling missing values in training datasets for constructing learning models
or extracting useful information is considered to be an important research task
in data mining and knowledge discovery in databases. In recent years, many
techniques have been proposed for imputing missing values by considering
attribute relationships between the missing value observation and other
observations of the training dataset. The main deficiency of such techniques is
that they depend upon a single approach and do not combine multiple approaches,
which is why they are less accurate. To improve the accuracy of missing
values imputation, in this paper
we introduce a novel partial matching concept in association rules mining,
which shows better results compared to the full matching concept that we
described in our previous work. Our imputation technique combines the partial
matching concept in association rules with the k-nearest neighbor approach.
Since this is a hybrid technique, its accuracy is much better than that of
techniques which depend upon a single approach. To check the efficiency of our
technique, we also provide detailed experimental results on a number of
benchmark datasets, which show better results compared to previous
approaches.
|
0904.3340
|
Lossy Compression in Near-Linear Time via Efficient Random Codebooks and
Databases
|
cs.IT math.IT
|
The compression-complexity trade-off of lossy compression algorithms that are
based on a random codebook or a random database is examined. Motivated, in
part, by recent results of Gupta-Verd\'{u}-Weissman (GVW) and their underlying
connections with the pattern-matching scheme of Kontoyiannis' lossy Lempel-Ziv
algorithm, we introduce a non-universal version of the lossy Lempel-Ziv method
(termed LLZ). The optimality of LLZ for memoryless sources is established, and
its performance is compared to that of the GVW divide-and-conquer approach.
Experimental results indicate that the GVW approach often yields better
compression than LLZ, but at the price of much higher memory requirements. To
combine the advantages of both, we introduce a hybrid algorithm (HYB) that
utilizes both the divide-and-conquer idea of GVW and the single-database
structure of LLZ. It is proved that HYB shares with GVW the exact same
rate-distortion performance and implementation complexity, while, like LLZ,
requiring less memory, by a factor which may become unbounded, depending on the
choice of the relevant design parameters. Experimental results are also
presented, illustrating the performance of all three methods on data generated
by simple discrete memoryless sources. In particular, the HYB algorithm is
shown to outperform existing schemes for the compression of some simple
discrete sources with respect to the Hamming distortion criterion.
|
0904.3351
|
A Subsequence-Histogram Method for Generic Vocabulary Recognition over
Deletion Channels
|
cs.IT cs.DS math.IT stat.AP
|
We consider the problem of recognizing a vocabulary--a collection of words
(sequences) over a finite alphabet--from a potential subsequence of one of its
words. We assume the given subsequence is received through a deletion channel
as a result of transmission of a random word from one of the two generic
underlying vocabularies. An exact maximum a posteriori (MAP) solution for this
problem counts the number of ways a given subsequence can be derived from
particular subsets of candidate vocabularies, requiring exponential time or
space.
We present a polynomial-time approximation algorithm for this problem. The
algorithm makes no prior assumption about the rules and patterns governing the
structure of vocabularies. Instead, through off-line processing of
vocabularies, it extracts data regarding regularity patterns in the
subsequences of each vocabulary. In the recognition phase, the algorithm just
uses this data, called subsequence-histogram, to decide in favor of one of the
vocabularies. We provide examples to demonstrate the performance of the
algorithm and show that it can achieve the same performance as MAP in some
situations.
Potential applications include bioinformatics, storage systems, and search
engines.
|
0904.3352
|
Optimistic Initialization and Greediness Lead to Polynomial Time
Learning in Factored MDPs - Extended Version
|
cs.AI cs.LG
|
In this paper we propose an algorithm for polynomial-time reinforcement
learning in factored Markov decision processes (FMDPs). The factored optimistic
initial model (FOIM) algorithm maintains an empirical model of the FMDP in a
conventional way and always follows a greedy policy with respect to its model.
The only trick of the algorithm is that the model is initialized
optimistically. We prove that with suitable initialization (i) FOIM converges
to the fixed point of approximate value iteration (AVI); (ii) the number of
steps when the agent makes non-near-optimal decisions (with respect to the
solution of AVI) is polynomial in all relevant quantities; (iii) the per-step
costs of the algorithm are also polynomial. To the best of our knowledge, FOIM is the
first algorithm with these properties. This extended version contains the
rigorous proof of the main theorem. A version of this paper appeared in
ICML'09.
|
0904.3356
|
A method for Hedging in continuous time
|
cs.IT cs.AI math.IT math.PR
|
We present a method for hedging in continuous time.
|
0904.3444
|
Comment to "Coverage by Randomly Deployed Wireless Sensor Networks"
|
cs.IT math.IT
|
This is a correction to "P.J. Wan and C.W. Yi, "Coverage by Randomly
Deployed Wireless Sensor Networks", IEEE Transactions on Information Theory,
vol. 52, no. 6, June 2006."
In the above paper, Lemma (4) on page 2659 plays the key role in deriving
the main results. The statement as well as the proof of Lemma (4) on page 2659
is not correct. We give the correct version of the lemma. This change
leads to drastic changes in all the results derived in the above paper.
|
0904.3469
|
Toggling operators in computability logic
|
cs.LO cs.AI math.LO
|
Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html ) is a
research program for redeveloping logic as a formal theory of computability, as
opposed to the formal theory of truth which it has more traditionally been.
Formulas in CL stand for interactive computational problems, seen as games
between a machine and its environment; logical operators represent operations
on such entities; and "truth" is understood as existence of an effective
solution. The formalism of CL is open-ended, and may undergo a series of
extensions as the studies of the subject advance. So far three -- parallel,
sequential and choice -- sorts of conjunction and disjunction have been
studied. The present paper adds one more natural kind to this collection,
termed toggling. The toggling operations can be characterized as lenient
versions of choice operations where choices are retractable, being allowed to
be reconsidered any finite number of times. This way, they model
trial-and-error style decision steps in interactive computation. The main
technical result of this paper is constructing a sound and complete
axiomatization for the propositional fragment of computability logic whose
vocabulary, together with negation, includes all four -- parallel, toggling,
sequential and choice -- kinds of conjunction and disjunction. Along with
toggling conjunction and disjunction, the paper also introduces the toggling
versions of quantifiers and recurrence operations.
|
0904.3501
|
Incentive Compatible Budget Elicitation in Multi-unit Auctions
|
cs.GT cs.MA
|
In this paper, we consider the problem of designing incentive compatible
auctions for multiple (homogeneous) units of a good, when bidders have private
valuations and private budget constraints. When only the valuations are private
and the budgets are public, Dobzinski {\em et al} show that the {\em adaptive
clinching} auction is the unique incentive-compatible auction achieving
Pareto-optimality. They further show that there is no deterministic
Pareto-optimal auction with private budgets. Our main contribution is to show
the following Budget Monotonicity property of this auction: When there is only
one infinitely divisible good, a bidder cannot improve her utility by reporting
a budget smaller than the truth. This implies that a randomized modification to
the adaptive clinching auction is incentive compatible and Pareto-optimal with
private budgets.
The Budget Monotonicity property also implies other improved results in this
context. For revenue maximization, the same auction improves the best-known
competitive ratio due to Abrams by a factor of 4, and asymptotically approaches
the performance of the optimal single-price auction.
Finally, we consider the problem of revenue maximization (or social welfare)
in a Bayesian setting. We allow the bidders to have public size constraints (on
the amount of good they are willing to buy) in addition to private budget
constraints. We show a simple poly-time computable 5.83-approximation to the
optimal Bayesian incentive compatible mechanism, that is implementable in
dominant strategies. Our technique again crucially needs the ability to prevent
bidders from over-reporting budgets via randomization.
|
0904.3612
|
Variations of the Turing Test in the Age of Internet and Virtual Reality
|
cs.AI cs.HC
|
Inspired by Hofstadter's Coffee-House Conversation (1982) and by the science
fiction short story SAM by Schattschneider (1988), we propose and discuss
criteria for non-mechanical intelligence. Firstly, we emphasize the practical
need for such tests in view of massively multiuser online role-playing games
(MMORPGs) and virtual reality systems like Second Life. Secondly, we
demonstrate Second Life as a useful framework for implementing (some iterations
of) that test.
|
0904.3642
|
Direction-of-Arrival Estimation for Temporally Correlated Narrowband
Signals
|
cs.IT math.IT
|
Signal direction-of-arrival estimation using an array of sensors has been the
subject of intensive research and development during the last two decades.
Efforts have been directed both to better solutions for the general data model
and to the development of more realistic models. So far, many authors have assumed the
data to be iid samples of a multivariate statistical model. Although this
assumption reduces the complexity of the model, it may not be true in certain
situations where signals show temporal correlation. Some results are available
on the temporally correlated signal model in the literature. The temporally
correlated stochastic Cramer-Rao bound (CRB) has been calculated and an
instrumental variable-based method called IV-SSF is introduced. Also, it has
been shown that temporally correlated CRB is lower bounded by the deterministic
CRB. In this paper, we show that temporally correlated CRB is also upper
bounded by the stochastic iid CRB. We investigate the effect of temporal
correlation of the signals on the best achievable performance. We also show
that the IV-SSF method is not efficient and, based on an analysis of the CRB,
propose a variation of the method which boosts its performance. Simulation
results show the improved performance of the proposed method in terms of lower
bias and error variance.
|
0904.3650
|
The use of invariant moments in hand-written character recognition
|
cs.NE
|
The goal of this paper is to present the implementation of a Radial Basis
Function neural network with built-in knowledge to recognize hand-written
characters. The neural network includes in its architecture gates controlled by
an attraction/repulsion system of coefficients. These coefficients are derived
from a preprocessing stage which groups the characters according to their
ascendant, central, or descendent components. The neural network is trained
using data from invariant moment functions. Results are compared with those
obtained using a K nearest neighbor method on the same moment data.
|
0904.3664
|
Introduction to Machine Learning: Class Notes 67577
|
cs.LG
|
Introduction to Machine learning covering Statistical Inference (Bayes, EM,
ML/MaxEnt duality), algebraic and spectral methods (PCA, LDA, CCA, Clustering),
and PAC learning (the Formal model, VC dimension, Double Sampling theorem).
|
0904.3667
|
Considerations upon the Machine Learning Technologies
|
cs.LG cs.AI
|
Artificial intelligence offers superior techniques and methods by which
problems from diverse domains may find an optimal solution. The Machine
Learning technologies refer to the domain of artificial intelligence aiming to
develop the techniques allowing the computers to "learn". Some systems based on
Machine Learning technologies tend to eliminate the necessity of the human
intelligence while the others adopt a man-machine collaborative approach.
|
0904.3669
|
Collaborative systems and multiagent systems
|
cs.MA
|
This paper presents some basic elements regarding the domain of collaborative
systems, a highly topical domain, and also multiagent systems, developed as a
result of a thorough study of one-agent systems.
|
0904.3701
|
Semantic Social Network Analysis
|
cs.AI
|
Social Network Analysis (SNA) tries to understand and exploit the key
features of social networks in order to manage their life cycle and predict
their evolution. Increasingly popular web 2.0 sites are forming huge social
networks. Classical methods from social network analysis (SNA) have been applied
to such online networks. In this paper, we propose leveraging semantic web
technologies to merge and exploit the best features of each domain. We present
how to facilitate and enhance the analysis of online social networks,
exploiting the power of semantic social network analysis.
|
0904.3778
|
Word-Valued Sources: an Ergodic Theorem, an AEP and the Conservation of
Entropy
|
cs.IT math.IT
|
A word-valued source $\mathbf{Y} = Y_1,Y_2,...$ is a discrete random process
that is formed by sequentially encoding the symbols of a random process
$\mathbf{X} = X_1,X_2,...$ with codewords from a codebook $\mathscr{C}$. These
processes appear frequently in information theory (in particular, in the
analysis of source-coding algorithms), so it is of interest to give conditions
on $\mathbf{X}$ and $\mathscr{C}$ for which $\mathbf{Y}$ will satisfy an
ergodic theorem and possess an Asymptotic Equipartition Property (AEP). In this
correspondence, we prove the following: (1) if $\mathbf{X}$ is asymptotically
mean stationary, then $\mathbf{Y}$ will satisfy a pointwise ergodic theorem and
possess an AEP; and, (2) if the codebook $\mathscr{C}$ is prefix-free, then the
entropy rate of $\mathbf{Y}$ is equal to the entropy rate of $\mathbf{X}$
normalized by the average codeword length.
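Result (2) can be checked numerically on a toy source: with a prefix-free codebook, the entropy rate of $\mathbf{Y}$ equals $H(\mathbf{X})$ divided by the mean codeword length. The specific source and codebook below are illustrative choices, not from the paper:

```python
import math

def entropy(probs):
    # Shannon entropy in bits of a finite distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# i.i.d. source X with P(a)=0.5, P(b)=0.25, P(c)=0.25, encoded with the
# prefix-free codebook a -> 0, b -> 10, c -> 11.
probs = {"a": 0.5, "b": 0.25, "c": 0.25}
code = {"a": "0", "b": "10", "c": "11"}

avg_len = sum(probs[s] * len(code[s]) for s in probs)  # E[codeword length]
h_x = entropy(probs.values())                          # entropy rate of X
h_y = h_x / avg_len                                    # entropy rate of Y per result (2)
print(h_x, avg_len, h_y)  # → 1.5 1.5 1.0
```

Here the code happens to be optimal for the source, so the bit process $\mathbf{Y}$ has the maximal rate of 1 bit per symbol.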
|
0904.3780
|
Noisy Signal Recovery via Iterative Reweighted L1-Minimization
|
math.NA cs.IT math.IT
|
Compressed sensing has shown that it is possible to reconstruct sparse high
dimensional signals from few linear measurements. In many cases, the solution
can be obtained by solving an L1-minimization problem, and this method is
accurate even in the presence of noise. Recently, a modified version of this
method, reweighted L1-minimization, has been suggested. Although no provable
results have yet been attained, empirical studies have suggested the reweighted
version outperforms the standard method. Here we analyze the reweighted
L1-minimization method in the noisy case, and provide provable results showing
an improvement in the error bound over the standard bounds.
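The reweighting scheme analyzed above can be sketched as an outer loop around any weighted $\ell_1$ solver; here a plain ISTA solver stands in, and the common weight update $w_i = 1/(|x_i| + \epsilon)$ is used. All parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def weighted_lasso_ista(A, y, w, lam, iters=1000):
    """Minimize 0.5||Ax - y||^2 + lam * sum_i w_i |x_i| by iterative
    soft-thresholding (a simple stand-in for the weighted L1 solver)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

def reweighted_l1(A, y, outer=4, eps=0.1, lam=0.01):
    """Reweighted L1-minimization: after each solve, small coefficients
    get larger weights w_i = 1/(|x_i| + eps), sharpening sparsity."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_lasso_ista(A, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Noisy recovery of a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = reweighted_l1(A, y)
print(np.round(np.linalg.norm(x_hat - x_true), 3))  # small recovery error
```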
|
0904.3808
|
Automated Epilepsy Diagnosis Using Interictal Scalp EEG
|
cs.AI cs.CV
|
Over 50 million people worldwide suffer from epilepsy.
Traditional diagnosis of epilepsy relies on tedious visual screening by highly
trained clinicians of lengthy EEG recordings that contain seizure (ictal)
activity. Nowadays, there are many automatic systems that can
recognize seizure-related EEG signals to help the diagnosis. However, it is
very costly and inconvenient to obtain long-term EEG data with seizure
activities, especially in areas short of medical resources. We demonstrate in
this paper that we can use the interictal scalp EEG data, which is much easier
to collect than the ictal data, to automatically diagnose whether a person is
epileptic. In our automated EEG recognition system, we extract three classes of
features from the EEG data and build Probabilistic Neural Networks (PNNs) fed
with these features. We optimize the feature extraction parameters and combine
these PNNs through a voting mechanism. As a result, our system achieves an
impressive 94.07% accuracy, which is very close to reported human recognition
accuracy by experienced medical professionals.
|
0904.3894
|
On Capacity Computation for the Two-User Binary Multiple-Access Channel
|
cs.IT math.IT
|
This paper deals with the problem of computing the boundary of the capacity
region for the memoryless two-user binary-input binary-output multiple-access
channel ((2,2;2)-MAC), or equivalently, the computation of input probability
distributions maximizing weighted sum-rate. This is equivalent to solving a
difficult nonconvex optimization problem. For a restricted class of
(2,2;2)-MACs and weight vectors, it is shown that, depending on an ordering
property of the channel matrix, either the optimal solution is located on the
boundary, or the objective function has at most one stationary point in the
interior of the domain. For this, the problem is reduced to a pseudoconcave
one-dimensional optimization and the single-user problem.
|
0904.3944
|
Better Global Polynomial Approximation for Image Rectification
|
cs.CV cs.RO
|
When using images to locate objects, there is the problem of correcting for
distortion and misalignment in the images. An elegant way of solving this
problem is to generate an error correcting function that maps points in an
image to their corrected locations. We generate such a function by fitting a
polynomial to a set of sample points. The objective is to identify a polynomial
that passes "sufficiently close" to these points with "good" approximation of
intermediate points. In the past, it has been difficult to achieve good global
polynomial approximation using only sample points. We report on the development
of a global polynomial approximation algorithm for solving this problem. Key
Words: Polynomial approximation, interpolation, image rectification.
|
0904.3953
|
Guarded resolution for answer set programming
|
cs.AI
|
We describe a variant of the resolution rule of proof and show that it is
complete for the stable semantics of logic programs. We show applications of
this result.
|
0904.4006
|
Joint Source-Channel Coding on a Multiple Access Channel with Side
Information
|
cs.IT math.IT
|
We consider the problem of transmission of several distributed correlated
sources over a multiple access channel (MAC) with side information at the
sources and the decoder. Source-channel separation does not hold for this
channel. Sufficient conditions are provided for transmission of sources with a
given distortion. The source and/or the channel could have continuous alphabets
(thus Gaussian sources and Gaussian MACs are special cases). Various previous
results are obtained as special cases. We also provide several good joint
source-channel coding schemes for discrete sources and discrete/continuous
alphabet channel.
|
0904.4041
|
Content-Based Sub-Image Retrieval with Relevance Feedback
|
cs.DB cs.IR
|
The typical content-based image retrieval problem is to find images within a
database that are similar to a given query image. This paper presents a
solution to a different problem, namely content-based sub-image retrieval
(CBsIR), i.e., finding images in a database that contain a given query image.
Note that this is different from finding a region in a (segmented) image that
is similar to another image region given as a query. We present a technique for
CBsIR that explores relevance feedback, i.e., the user's input on intermediary
results, in order to improve retrieval efficiency. Upon modeling images as a
set of overlapping and recursive tiles, we use a tile re-weighting scheme that
assigns penalties to each tile of the database images and updates the tile
penalties for all relevant images retrieved at each iteration using both the
relevant and irrelevant images identified by the user. Each tile is modeled by
means of its color content using a compact but very efficient method which can,
indirectly, capture some notion of texture as well, despite the fact that only
color information is maintained. Performance evaluation on a largely
heterogeneous dataset of over 10,000 images shows that the system can achieve a
stable average recall value of 70% within the top 20 retrieved (and presented)
images after only 5 iterations, with each such iteration taking about 2 seconds
on an off-the-shelf desktop computer.
|
0904.4057
|
Decentralized Coding Algorithms for Distributed Storage in Wireless
Sensor Networks
|
cs.IT cs.DS cs.NI math.IT
|
We consider large-scale wireless sensor networks with $n$ nodes, out of which
k are in possession of (i.e., have sensed or otherwise collected) k
information packets. In scenarios in which network nodes are vulnerable
because of, for example, limited energy or a hostile environment, it is
desirable to disseminate the acquired information throughout the network so
that each of the n nodes stores one (possibly coded) packet so that the
original k source packets can be recovered, locally and in a computationally
simple way from any k(1 + \epsilon) nodes for some small \epsilon > 0. We
develop decentralized Fountain codes based algorithms to solve this problem.
Unlike all previously developed schemes, our algorithms are truly distributed,
that is, nodes do not know n, k or connectivity in the network, except in their
own neighborhoods, and they do not maintain any routing tables.
|
0904.4094
|
On the Upper Bounds of MDS Codes
|
math.CO cs.IT math.IT
|
Let $M_{q}(k)$ be the maximum length of MDS codes with parameters $q,k$. In
this paper, the properties of $M_{q}(k)$ are studied, and some new upper bounds
of $M_{q}(k)$ are obtained. In particular, we obtain that $M_{q}(q-1)\leq
q+2$ if $q\equiv 4 \pmod 6$, $M_{q}(q-2)\leq q+1$ if $q\equiv 4 \pmod 6$, and
$M_{q}(k)\leq q+k-3$ if $q=36(5s+1)$, $s\in \mathbb{N}$, and $k=6,7$.
|
0904.4174
|
Denial of service attack in the Internet: agent-based intrusion
detection and reaction
|
cs.NI cs.MA
|
This paper deals with denial-of-service attacks in the Internet. An overview
of existing attacks and defense methods is given, and a classification scheme
for the different denial-of-service attacks is presented. An agent-based
intrusion detection system architecture is then considered, together with the
main components and working principles of such systems.
|
0904.4283
|
Opportunistic Spatial Orthogonalization and Its Application in Fading
Cognitive Radio Networks
|
cs.IT math.IT
|
Opportunistic Spatial Orthogonalization (OSO) is a cognitive radio scheme
that allows the existence of secondary users and hence increases the system
throughput, even if the primary user occupies all the frequency bands all the
time. Notably, this throughput advantage is obtained without sacrificing the
performance of the primary user, if the interference margin is carefully
chosen. The key idea is to exploit the spatial dimensions to orthogonalize
users and hence minimize interference. However, unlike the time and frequency
dimensions, there is no universal basis for the set of all multi-dimensional
spatial channels, which motivated the development of OSO. On one hand, OSO can
be viewed as a multi-user diversity scheme that exploits the channel randomness
and independence. On the other hand, OSO can be interpreted as an opportunistic
interference alignment scheme, where the interference from multiple secondary
users is opportunistically aligned at the direction that is orthogonal to the
primary user's signal space. In the case of multiple-input multiple-output
(MIMO) channels, the OSO scheme can be interpreted as "riding the peaks" over
the eigen-channels, and an ill-conditioned MIMO channel, which is
traditionally viewed as detrimental, is shown to be beneficial with respect
to the sum
throughput. Throughput advantages are thoroughly studied, both analytically and
numerically.
|
0904.4343
|
On the Achievability of Interference Alignment in the K-User Constant
MIMO Interference Channel
|
cs.IT math.IT
|
Interference alignment in the K-user MIMO interference channel with constant
channel coefficients is considered. A novel constructive method for finding the
interference alignment solution is proposed for the case where the number of
transmit antennas equals the number of receive antennas (NT = NR = N), the
number of transmitter-receiver pairs equals K = N + 1, and all interference
alignment multiplexing gains are one. The core of the method consists of
solving an eigenvalue problem that incorporates the channel matrices of all
interfering links. This procedure provides insight into the feasibility of
signal vector spaces alignment schemes in finite dimensional MIMO interference
channels.
|