| id | title | categories | abstract |
|---|---|---|---|
1009.3888
|
A General Proof of Convergence for Adaptive Distributed Beamforming
Schemes
|
cs.SY
|
This work focuses on the convergence analysis of adaptive distributed
beamforming schemes that can be recast as local random search algorithms within
a random search framework. Once so reformulated, the schemes are proved to
converge both in probability and in mean under two sufficient conditions: a)
the objective function of the algorithm is continuous and all its local maxima
are global maxima, and b) the origin is an interior point of the range of the
considered transformation of the random perturbation. This proof of convergence
is general: it applies to randomized adaptive distributed beamforming schemes
with any objective function and probability measure, as long as both sufficient
conditions are satisfied. Further, the framework can be generalized to analyze
an asynchronous scheme in which distributed transmitters can only update their
beamforming coefficients asynchronously. Simulation results are also provided
to validate our analyses.
|
1009.3891
|
Secure Lossy Source Coding with Side Information at the Decoders
|
cs.IT math.IT
|
This paper investigates the problem of secure lossy source coding in the
presence of an eavesdropper, with arbitrarily correlated side information at
the legitimate decoder (referred to as Bob) and the eavesdropper (referred to
as Eve). The scenario consists of an encoder that wishes to compress a source
while satisfying requirements on: (i) the distortion level at Bob and (ii) the
equivocation rate at Eve. It is assumed that the decoders have access to
correlated sources as side information. This problem can thus be seen as a
generalization of the well-known Wyner-Ziv problem that takes the security
requirements into account. A complete characterization of the
rate-distortion-equivocation region for the case of arbitrarily correlated side
information at the decoders is derived. Several special cases of interest, as
well as an application example to secure lossy source coding of binary sources
in the presence of binary and ternary side information, are also considered. It
is shown that the statistical differences between the side information at the
decoders and the presence of non-zero distortion at the legitimate decoder can
be useful in terms of secrecy. Applications of these results arise in a variety
of distributed sensor network scenarios.
|
1009.3896
|
Optimistic Rates for Learning with a Smooth Loss
|
cs.LG
|
We establish an excess risk bound of O(H R_n^2 + R_n \sqrt{H L*}) for
empirical risk minimization with an H-smooth loss function and a hypothesis
class with Rademacher complexity R_n, where L* is the best risk achievable by
the hypothesis class. For typical hypothesis classes where R_n = \sqrt{R/n},
this translates to a learning rate of O(RH/n) in the separable (L*=0) case and
O(RH/n + \sqrt{L^* RH/n}) more generally. We also provide similar guarantees
for online and stochastic convex optimization with a smooth non-negative
objective.
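The two regimes of the bound can be illustrated numerically. A minimal sketch, with constants suppressed, so the values are only up-to-constant-factor indications:

```python
import math

def excess_risk_rate(R, H, n, L_star):
    # Up-to-constants excess risk from the bound: RH/n + sqrt(L* R H / n)
    return R * H / n + math.sqrt(L_star * R * H / n)

# Separable case (L* = 0): the fast O(1/n) rate
fast = excess_risk_rate(R=1.0, H=1.0, n=10_000, L_star=0.0)
# General case (L* > 0): the sqrt(1/n) term dominates for large n
slow = excess_risk_rate(R=1.0, H=1.0, n=10_000, L_star=0.1)
```
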
|
1009.3916
|
Finite-SNR Diversity-Multiplexing Tradeoff via Asymptotic Analysis of
Large MIMO Systems
|
cs.IT math.IT
|
Diversity-multiplexing tradeoff (DMT) was characterized asymptotically (SNR->
infinity) for i.i.d. Rayleigh fading channel by Zheng and Tse [1]. The
SNR-asymptotic DMT overestimates the finite-SNR one [2]. This paper outlines a
number of additional limitations and difficulties of the DMT framework and
discusses their implications. Using the recent results on the size-asymptotic
(in the number of antennas) outage capacity distribution, the finite-SNR,
size-asymptotic DMT is derived for a broad class of fading distributions. The
SNR range over which the finite-SNR DMT is accurately approximated by the
SNR-asymptotic one is characterized. The definition of the multiplexing gain is
shown to critically affect this range and should therefore be chosen carefully,
so that the SNR-asymptotic DMT is an accurate approximation at realistic SNR
values and thus has operational significance as a design criterion. The
finite-SNR diversity gain is shown to decrease with correlation and power
imbalance in
a broad class of fading channels, and such an effect is described in a compact,
closed form. Complete characterization of the outage probability (or outage
capacity) requires not only the finite-SNR DMT, but also the SNR offset, which
is introduced and investigated as well. This offset, which is not accounted for
in the DMT framework, is shown to have a significant impact on the outage
probability for a broad class of fading channels, especially when the
multiplexing gain is small. The analytical results and conclusions are
validated via extensive Monte-Carlo simulations. Overall, the size-asymptotic
DMT represents a valuable alternative to the SNR-asymptotic one.
|
1009.3951
|
Quantifying Information Leakage in Finite Order Deterministic Programs
|
cs.CR cs.IT math.IT
|
Information flow analysis is a powerful technique for reasoning about the
sensitive information exposed by a program during its execution. While past
work has proposed information theoretic metrics (e.g., Shannon entropy,
min-entropy, guessing entropy, etc.) to quantify such information leakage, we
argue that some of these measures not only yield counter-intuitive assessments
of leakage but are also inherently prone to conflicts when comparing two
programs P1 and P2 -- say, Shannon entropy predicts higher leakage for program
P1, while guessing entropy predicts higher leakage for program P2. This paper
presents the first attempt towards addressing such conflicts and derives
solutions for conflict-free comparison of finite order deterministic programs.
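The kind of conflict described above can be reproduced on toy deterministic programs. In the sketch below, the programs P1 and P2 are hypothetical and min-entropy stands in for guessing entropy; a uniform 4-bit secret is mapped to outputs, and Shannon leakage ranks P1 higher while min-entropy leakage ranks P2 higher:

```python
import math
from collections import Counter

def shannon_leakage(outputs):
    # For a deterministic program with a uniformly distributed secret,
    # Shannon leakage equals the entropy of the induced output distribution.
    n = len(outputs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(outputs).values())

def min_entropy_leakage(outputs):
    # Min-entropy leakage of a deterministic program under a uniform prior
    # is log2 of the number of distinct outputs.
    return math.log2(len(set(outputs)))

# Hypothetical programs over a uniform 4-bit secret s in {0, ..., 15}:
P1 = [s // 4 for s in range(16)]   # four equally likely outputs
P2 = [0] * 12 + [1, 2, 3, 4]       # five outputs, heavily skewed

# Shannon leakage ranks P1 as leakier; min-entropy leakage ranks P2 as leakier.
```
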
|
1009.3955
|
Random Sequential Renormalization of Networks I: Application to Critical
Trees
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We introduce the concept of Random Sequential Renormalization (RSR) for
arbitrary networks. RSR is a graph renormalization procedure that locally
aggregates nodes to produce a coarse-grained network. It is analogous to the
(quasi-)parallel renormalization schemes introduced by C. Song {\it et al.}
(Nature {\bf 433}, 392 (2005)) and studied more recently by F. Radicchi {\it et
al.} (Phys. Rev. Lett. {\bf 101}, 148701 (2008)), but much simpler and easier
to implement. In this first paper we apply RSR to critical trees and derive
analytical results consistent with numerical simulations. Critical trees
exhibit three regimes in their evolution under RSR: (i) An initial regime
$N_0^{\nu}\lesssim N<N_0$, where $N$ is the number of nodes at some step in the
renormalization and $N_0$ is the initial size. RSR in this regime is described
by a mean field theory and fluctuations from one realization to another are
small. The exponent $\nu=1/2$ is derived using random walk arguments. The
degree distribution becomes broader under successive renormalization --
reaching a power law, $p_k\sim 1/k^{\gamma}$ with $\gamma=2$ and a variance
that diverges as $N_0^{1/2}$ at the end of this regime. Both of these results
are derived based on a scaling theory. (ii) An intermediate regime for
$N_0^{1/4}\lesssim N \lesssim N_0^{1/2}$, in which hubs develop, and
fluctuations between different realizations of the RSR are large. Crossover
functions exhibiting finite size scaling, in the critical region $N\sim
N_0^{1/2} \to \infty$, connect the behaviors in the first two regimes. (iii)
The last regime, for $1 \ll N\lesssim N_0^{1/4}$, is characterized by the
appearance of star configurations with a central hub surrounded by many leaves.
The distribution of sizes where stars first form is found numerically to be a
power law up to a cutoff that scales as $N_0^{\nu_{star}}$ with
$\nu_{star}\approx 1/4$.
|
1009.3958
|
Approximate Inference and Stochastic Optimal Control
|
cs.LG stat.ML
|
We propose a novel reformulation of the stochastic optimal control problem as
an approximate inference problem, demonstrating that such an interpretation
leads to new practical methods for the original problem. In particular, we
characterise a novel class of iterative solutions to the stochastic optimal
control problem based on a natural relaxation of the exact dual formulation.
These theoretical insights are applied to the Reinforcement Learning problem,
where they lead to new model-free, off-policy methods for discrete and
continuous problems.
|
1009.3961
|
Optimization of ARQ Protocols in Interference Networks with QoS
Constraints
|
cs.SY cs.NI
|
We study optimal transmission strategies in interfering wireless networks,
under Quality of Service constraints. A buffered, dynamic network with multiple
sources is considered, and sources use a retransmission strategy in order to
improve packet delivery probability. The optimization problem is formulated as
a Markov Decision Process, where constraints and objective functions are ratios
of time-averaged cost functions. The optimal strategy is found as the solution
of a Linear Fractional Program, where the optimization variables are the
steady-state probability of state-action pairs. Numerical results illustrate
the dependence of optimal transmission/interference strategies on the
constraints imposed on the network.
|
1009.3984
|
A memory-efficient data structure representing exact-match overlap
graphs with application for next generation DNA assembly
|
cs.DS cs.CE
|
An exact-match overlap graph of $n$ given strings of length $\ell$ is an
edge-weighted graph in which each vertex is associated with a string and there
is an edge $(x,y)$ of weight $\omega = \ell - |ov_{max}(x,y)|$ if and only if
$\omega \leq \lambda$, where $|ov_{max}(x,y)|$ is the length of $ov_{max}(x,y)$
and $\lambda$ is a given threshold. In this paper, we show that the exact-match
overlap graphs can be represented by a compact data structure that can be
stored using at most $(2\lambda -1 )(2\lceil\log n\rceil +
\lceil\log\lambda\rceil)n$ bits with a guarantee that the basic operation of
accessing an edge takes $O(\log \lambda)$ time.
Exact-match overlap graphs have been broadly used in the context of DNA
assembly and the \emph{shortest superstring problem}, where the number of
strings $n$ ranges from a few thousand to a few billion and the length $\ell$
of the strings runs from 25 to 1000, depending on the DNA sequencing
technology. However, many DNA assemblers that use overlap graphs face a major
problem in constructing and storing them. In particular, it is impossible for
these DNA assemblers to handle the huge amount of data produced by
next-generation sequencing technologies, where the number of strings $n$ is
usually very large, ranging from hundreds of millions to a few billion. In
fact, to the best of our knowledge there is no DNA assembler that can handle
such a large number of strings. Fortunately, with our compact data structure,
the major problem of constructing and storing overlap graphs is practically
solved, since it requires only linear time and linear memory. As a result, it
opens the door to building a DNA assembler that can handle large-scale datasets
efficiently.
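The storage bound above is easy to evaluate. A small sketch, assuming base-2 logarithms (the natural reading for a bound stated in bits):

```python
import math

def overlap_graph_bits(n, lam):
    # Storage bound from the abstract:
    # (2*lambda - 1) * (2*ceil(log2 n) + ceil(log2 lambda)) * n bits
    return (2 * lam - 1) * (2 * math.ceil(math.log2(n)) + math.ceil(math.log2(lam))) * n

# e.g. 100 million reads with threshold lambda = 4:
gigabytes = overlap_graph_bits(100_000_000, 4) / 8 / 1e9
```
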
|
1009.4004
|
A family of statistical symmetric divergences based on Jensen's
inequality
|
cs.CV cs.IT math.IT
|
We introduce a novel parametric family of symmetric information-theoretic
distances based on Jensen's inequality for a convex functional generator. In
particular, this family unifies the celebrated Jeffreys divergence with the
Jensen-Shannon divergence when the Shannon entropy generator is chosen. We then
design a generic algorithm to compute the unique centroid defined as the
minimum average divergence. This yields a smooth family of centroids linking
the Jeffreys to the Jensen-Shannon centroid. Finally, we report on our
experimental results.
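The two endpoints that the family unifies can be sketched directly. The parametric family itself and the choice of generator are not given in the abstract, so only the Jeffreys and Jensen-Shannon endpoints are shown:

```python
import numpy as np

def jeffreys(p, q):
    # Jeffreys divergence: symmetrized KL, i.e. KL(p||q) + KL(q||p)
    return float(np.sum((p - q) * np.log(p / q)))

def jensen_shannon(p, q):
    # Jensen-Shannon divergence: H((p+q)/2) - (H(p) + H(q)) / 2,
    # the Jensen gap of the Shannon entropy at the midpoint distribution
    def h(x):
        return -float(np.sum(x * np.log(x)))
    m = 0.5 * (p + q)
    return h(m) - 0.5 * (h(p) + h(q))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])
```
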
|
1009.4013
|
An Analysis of Transaction and Joint-patent Application Networks
|
cs.SI cs.CY
|
Many firms these days opt to specialize rather than generalize as a way of
maintaining their competitiveness. Consequently, they cannot rely solely on
themselves but must cooperate by combining their advantages. To capture the
actual conditions of this cooperation, a multi-layered network based on two
different types of data was investigated. The first type was transaction data
from Japanese firms; the network created from these data included 961,363 firms
and 7,808,760 links. The second type was joint-patent application data from
Japan; the joint-patent application network included 54,197 nodes and 154,205
links. These two networks were merged into one network.
The first analysis was based on input-output tables, and three different tables
were compared. The correlation coefficients between tables revealed that
transactions were more strongly tied to joint-patent applications than the
total amount of money was. The total amount of money and transactions have few
relationships, and these are probably connected to joint-patent applications
through different mechanisms. The second analysis was conducted based on the p*
model. Choice, multiplicity, reciprocity, multi-reciprocity and transitivity
configurations were evaluated. Multiplicity and reciprocity configurations were
significant in all the analyzed industries. The results for multiplicity mean
that transaction and joint-patent application links are closely related.
Multi-reciprocity and transitivity configurations were significant in some of
the analyzed industries; it was difficult to find any common characteristics
among those industries. Bayesian networks were used in the third analysis. The
learned structure revealed that if a transaction link between two firms is
known, the categories of the firms' industries do not affect the existence of a
patent link.
|
1009.4046
|
Channel-coded Collision Resolution by Exploiting Symbol Misalignment
|
cs.IT math.IT
|
In random-access networks, such as the IEEE 802.11 network, different users
may transmit their packets simultaneously, resulting in packet collisions.
Traditionally, the collided packets are simply discarded. To improve
performance, advanced signal processing techniques can be applied to extract
the individual packets from the collided signals. Prior work of ours has shown
that the symbol misalignment among the collided packets can be exploited to
improve the likelihood of successfully extracting the individual packets.
However, the failure rate is still unacceptably high. This paper investigates
how channel coding can be used to reduce the failure rate. We propose and
investigate a decoding scheme that incorporates the exploitation of the
aforementioned symbol misalignment into the channel decoding process. This is a
fine-grained integration at the symbol level. In particular, collision
resolution and channel decoding are applied in an integrated manner. Simulation
results indicate that our method outperforms other schemes, including the
straightforward method in which collision resolution and channel coding are
applied separately.
|
1009.4128
|
Asymptotic Spectral Efficiency of Multi-antenna Links in Wireless
Networks with Limited Tx CSI
|
cs.IT math.IT
|
An asymptotic technique is presented for finding the spectral efficiency of
multi-antenna links in wireless networks where transmitters have
Channel-State-Information (CSI) corresponding to their target receiver.
Transmitters are assumed to transmit independent data streams on a limited
number of channel modes which limits the rank of transmit covariance matrices.
This technique is applied to spatially distributed networks to derive an
approximation for the asymptotic spectral efficiency in the
interference-limited regime as a function of link-length, interferer density,
number of antennas per receiver and transmitter, number of transmit streams and
path-loss exponent. It is found that targeted-receiver CSI, which can be
acquired with low overhead in duplex systems with reciprocity, can increase
spectral efficiency several fold, particularly when link lengths are large,
node density is high, or both. Additionally, the per-link spectral efficiency
is found to depend on the ratio of node density to the number of receiver
antennas, and it can often be improved if nodes transmit using fewer streams.
These results are validated for finite-sized systems by Monte-Carlo
simulation and are asymptotic in the regime where the number of users and
antennas per receiver approach infinity.
|
1009.4188
|
Robust Coin Flipping
|
cs.CC cs.CR cs.IT math.IT math.PR
|
Alice seeks an information-theoretically secure source of private random
data. Unfortunately, she lacks a personal source and must use remote sources
controlled by other parties. Alice wants to simulate a coin flip of specified
bias $\alpha$, as a function of data she receives from $p$ sources; she seeks
privacy from any coalition of $r$ of them. We show: If $p/2 \leq r < p$, the
bias can be any rational number and nothing else; if $0 < r < p/2$, the bias
can be any algebraic number and nothing else. The proof uses projective
varieties, convex geometry, and the probabilistic method. Our results improve
on those laid out by Yao, who asserts one direction of the $r=1$ case in his
seminal paper [Yao82]. We also provide an application to secure multiparty
computation.
|
1009.4219
|
Safe Feature Elimination for the LASSO and Sparse Supervised Learning
Problems
|
cs.LG cs.SY math.OC
|
We describe a fast method to eliminate features (variables) in l1-penalized
least-squares regression (LASSO) problems. The elimination of features leads
to a potentially substantial reduction in running time, especially for large
values of the penalty parameter. Our method is not heuristic: it only
eliminates features that are guaranteed to be absent after solving the LASSO
problem. The feature elimination step is easy to parallelize and can test each
feature for elimination independently. Moreover, the computational effort of
our method is negligible compared to that of solving the LASSO problem --
roughly the same as a single gradient step. Our method extends the scope of
existing LASSO algorithms to larger data sets previously out of their reach. We
show how our method can be extended to general l1-penalized convex problems and
present preliminary results for the Sparse Support Vector Machine and Logistic
Regression problems.
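A minimal sketch of the flavor of test involved, using one commonly quoted form of the basic SAFE bound for the lasso min_w (1/2)||Xw - y||^2 + lam*||w||_1; the exact test in the paper may differ, so treat this as illustrative:

```python
import numpy as np

def safe_screen(X, y, lam):
    # Candidate SAFE test (sketch): feature j is provably inactive at the lasso
    # optimum if |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
    # where lam_max = max_j |x_j^T y| is the smallest penalty giving w = 0.
    scores = np.abs(X.T @ y)
    lam_max = scores.max()
    thresh = (lam - np.linalg.norm(X, axis=0) * np.linalg.norm(y)
              * (lam_max - lam) / lam_max)
    return scores < thresh   # boolean mask: True = safe to eliminate

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam_max = np.abs(X.T @ y).max()
mask = safe_screen(X, y, 0.9 * lam_max)   # many features drop at high penalty
```
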
|
1009.4268
|
Rank-Constrained Schur-Convex Optimization with Multiple Trace/Log-Det
Constraints
|
cs.IT math.IT
|
Rank-constrained optimization problems have received increasing interest
recently, because many optimization problems in communications and signal
processing applications can be cast as rank-constrained optimization problems.
However, due to the non-convex nature of rank constraints, a systematic
solution to general rank-constrained problems has remained open for a long
time. In this paper, we focus on a rank-constrained optimization problem with a
Schur-convex/concave objective function and multiple trace/log-determinant
constraints. We first derive a structural result on the optimal solution of the
rank-constrained problem using majorization theory. Based on this solution
structure, we transform the rank-constrained problem into an equivalent problem
with a unitary constraint. We then derive an iterative projected steepest
descent algorithm that converges to a locally optimal solution. Furthermore, we
show that in some special cases a closed-form globally optimal solution can be
derived. Numerical results show the superior performance of our proposed
technique over the baseline schemes.
|
1009.4269
|
Distributed Interference Cancellation in Multiple Access Channel with
Transmitter Cooperation
|
cs.IT math.IT
|
We consider a two-user Gaussian multiple access channel with two independent
additive white Gaussian interferences. Each interference is known to exactly
one transmitter non-causally. Transmitters are allowed to cooperate through
finite-capacity links. The capacity region is characterized to within 3 and 1.5
bits for the stronger user and the weaker user respectively, regardless of
channel parameters. As a by-product, we characterize the capacity region of the
case without cooperation to within 1 and 0.5 bits for the stronger user and the
weaker user respectively. These results are based on a layered modulo-lattice
transmission architecture which realizes distributed interference cancellation.
|
1009.4287
|
Tree-Structure Expectation Propagation for LDPC Decoding in Erasure
Channels
|
cs.IT math.IT
|
In this paper we present a new algorithm, denoted TEP, to decode low-density
parity-check (LDPC) codes over the Binary Erasure Channel (BEC). The TEP
decoder is derived by applying the expectation propagation (EP) algorithm with
a tree-structured approximation. Expectation propagation is a generalization of
belief propagation (BP) in two ways. First, it can be used with any
exponential-family distribution over the cliques in the graph. Second, it can
impose additional constraints on the marginal distributions. We use this second
property to impose pairwise marginal constraints on some check nodes of the
LDPC code's Tanner graph. The algorithm has the same computational complexity
as BP, but it can decode a higher fraction of errors when applied over the BEC.
In this paper, we focus on the asymptotic performance of the TEP decoder as the
block size tends to infinity. We describe the TEP decoder by a set of
differential equations that represents the evolution of the residual graph
during the decoding process. The solution of these equations yields the
capacity of this decoder for a given LDPC ensemble over the BEC. We show that
the capacity achieved with the TEP is higher than the BP capacity, at the same
computational complexity.
|
1009.4300
|
Robust Transceiver Design for K-Pairs Quasi-Static MIMO Interference
Channels via Semi-Definite Relaxation
|
cs.IT math.IT
|
In this paper, we propose a robust transceiver design for the K-pair
quasi-static MIMO interference channel. Each transmitter is equipped with M
antennas, each receiver is equipped with N antennas, and the k-th transmitter
sends L_k independent data streams to the desired receiver. In the literature,
there exist a variety of theoretically promising transceiver designs for the
interference channel such as interference alignment-based schemes, which have
feasibility and practical limitations. In order to address practical system
issues and requirements, we consider a transceiver design that enforces
robustness against imperfect channel state information (CSI) as well as fair
performance among the users in the interference channel. Specifically, we
formulate the transceiver design as an optimization problem to maximize the
worst-case signal-to-interference-plus-noise ratio among all users. We devise a
low-complexity iterative algorithm based on alternating optimization and
semi-definite relaxation techniques. Numerical results verify the advantages of
incorporating important practical issues, such as CSI uncertainty and fairness,
into the transceiver design for the interference channel.
|
1009.4318
|
Performance Analysis of Estimation of Distribution Algorithm and Genetic
Algorithm in Zone Routing Protocol
|
cs.NE
|
In this paper, an Estimation of Distribution Algorithm (EDA) is used for the
Zone Routing Protocol (ZRP) in Mobile Ad-hoc Networks (MANETs) instead of a
Genetic Algorithm (GA). EDA is an evolutionary approach, used when the network
size grows and the search space increases. When the destination is outside the
zone, EDA is applied to find the route with minimum cost and time. The
implementation of the proposed method is compared with Genetic ZRP (GZRP), and
the results demonstrate better performance for the proposed method. Since the
method provides a set of paths to the destination, it balances load across the
network. As both EDA and GA use random search to reach the optimal point, the
search cost is reduced significantly, especially when the amount of data is
large.
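As a toy illustration of the EDA idea (fit a distribution over good solutions, then resample), here is a univariate-marginal EDA on a OneMax objective; this is a generic sketch, not the routing-specific formulation in the paper:

```python
import random

def umda_onemax(n_bits=20, pop=60, keep=20, gens=40, seed=0):
    # Minimal univariate EDA (UMDA) maximizing OneMax: each generation,
    # fit a per-bit marginal to the best individuals, then resample from it.
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # initial marginal probabilities
    best = None
    for _ in range(gens):
        popl = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                for _ in range(pop)]
        popl.sort(key=sum, reverse=True)     # select the fittest individuals
        elite = popl[:keep]
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / keep))
             for i in range(n_bits)]         # re-estimate (clamped) marginals
        if best is None or sum(popl[0]) > sum(best):
            best = popl[0]
    return best

best = umda_onemax()   # on this easy objective, typically near all-ones
```
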
|
1009.4352
|
An Iterative Joint Linear-Programming Decoding of LDPC Codes and
Finite-State Channels
|
cs.IT math.IT
|
In this paper, we introduce an efficient iterative solver for the joint
linear-programming (LP) decoding of low-density parity-check (LDPC) codes and
finite-state channels (FSCs). In particular, we extend the approach of
iterative approximate LP decoding, proposed by Vontobel and Koetter and
explored by Burshtein, to this problem. By taking advantage of the dual-domain
structure of the joint decoding LP, we obtain a convergent iterative algorithm
for joint LP decoding whose structure is similar to BCJR-based turbo
equalization (TE). The result is a joint iterative decoder whose complexity is
similar to TE but whose performance is similar to joint LP decoding. The main
advantage of this decoder is that it appears to provide the predictability of
joint LP decoding and superior performance with the computational complexity of
TE.
|
1009.4383
|
Expansion and Search in Networks
|
cs.SI cs.NI physics.data-an physics.soc-ph
|
Borrowing from concepts in expander graphs, we study the expansion properties
of real-world, complex networks (e.g. social networks, unstructured
peer-to-peer or P2P networks) and the extent to which these properties can be
exploited to understand and address the problem of decentralized search. We
first produce samples that concisely capture the overall expansion properties
of an entire network, which we collectively refer to as the expansion
signature. Using these signatures, we find a correspondence between the
magnitude of maximum expansion and the extent to which a network can be
efficiently searched. We further find evidence that standard graph-theoretic
measures, such as average path length, fail to fully explain the level of
"searchability" or ease of information diffusion and dissemination in a
network. Finally, we demonstrate that this high expansion can be leveraged to
facilitate decentralized search in networks and show that an expansion-based
search strategy outperforms typical search methods.
|
1009.4409
|
Efficient delay-tolerant particle filtering
|
stat.AP cs.MA
|
This paper proposes a novel framework for delay-tolerant particle filtering
that is computationally efficient and has limited memory requirements. Within
this framework the informativeness of a delayed (out-of-sequence) measurement
(OOSM) is estimated using a lightweight procedure and uninformative
measurements are immediately discarded. The framework requires the
identification of a threshold that separates informative from uninformative;
this threshold selection task is formulated as a constrained optimization
problem, where the goal is to minimize tracking error whilst controlling the
computational requirements. We develop an algorithm that provides an
approximate solution for the optimization problem. Simulation experiments
provide an example where the proposed framework processes less than 40% of all
OOSMs with only a small reduction in tracking accuracy.
|
1009.4489
|
Complex Networks and Symmetry II: Reciprocity and Evolution of World
Trade
|
q-fin.GN cs.SI nlin.AO physics.soc-ph
|
We exploit the symmetry concepts developed in the companion review of this
article to introduce a stochastic version of link reversal symmetry, which
leads to an improved understanding of the reciprocity of directed networks. We
apply our formalism to the international trade network and show that a strong
embedding in economic space determines particular symmetries of the network,
while the observed evolution of reciprocity is consistent with a symmetry
breaking taking place in production space. Our results show that networks can
be strongly affected by symmetry-breaking phenomena occurring in embedding
spaces, and that stochastic network symmetries can successfully suggest, or
rule out, possible underlying mechanisms.
|
1009.4495
|
Unary Coding for Neural Network Learning
|
cs.NE
|
This paper presents some properties of unary coding of significance for
biological learning and instantaneously trained neural networks.
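The abstract does not spell out the code itself. For reference, the standard unary code represents a non-negative integer n as n ones followed by a terminating zero (a sketch; the paper's exact conventions may differ):

```python
def unary_encode(n):
    # Standard unary code: n ones terminated by a single zero
    return '1' * n + '0'

def unary_decode(code):
    # Inverse: count the ones before the terminator
    assert code.endswith('0') and set(code[:-1]) <= {'1'}
    return len(code) - 1
```
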
|
1009.4503
|
On Repetition Protocols and Power Control for Multiple Access
Block-Fading Channels
|
cs.IT math.IT
|
In this paper we study the long-term throughput performance of repetition
protocols coupled with power control for multiple access block-fading channels.
We propose to use the feedback bits to inform the transmitter about the
decoding status and the instantaneous channel quality. We determine the
throughput of simple and practically inspired protocols; we show remarkable
throughput improvements, especially at low and moderate SNR, when compared to
protocols where the feedback bits are used for acknowledgment only or for power
control only; and we show that the throughput is very close to the ultimate
ergodic multi-user water-filling capacity for a small number of feedback bits
and/or retransmissions. For symmetric Rayleigh fading channels, numerical
results show that the throughput improvement is mainly due to the ability to
perform power control, rather than to retransmit.
|
1009.4509
|
Complex networks derived from cellular automata
|
nlin.CG cs.SI math-ph math.MP
|
We propose a method for deriving networks from one-dimensional binary
cellular automata. The derived networks are usually directed and have
structural properties corresponding to the dynamical behaviors of their
cellular automata. Network parameters, particularly the efficiency and the
degree distribution, show that the dependence of efficiency on the grid size is
characteristic and can be used to classify cellular automata and that derived
networks exhibit various degree distributions. In particular, a class IV rule
of Wolfram's classification produces a network having a scale-free
distribution.
|
1009.4556
|
A new closed-loop output error method for parameter identification of
robot dynamics
|
cs.RO
|
Off-line robot dynamic identification methods are mostly based on the use of
the inverse dynamic model, which is linear with respect to the dynamic
parameters. This model is sampled while the robot is tracking reference
trajectories that excite the system dynamics. This allows using linear
least-squares techniques to estimate the parameters. The efficiency of this
method has been proved through the experimental identification of many
prototypes and industrial robots. However, this method requires the joint
force/torque and position measurements and the estimate of the joint velocity
and acceleration, through the bandpass filtering of the joint position at high
sampling rates. The proposed new method requires only the joint force/torque
measurement. It is a closed-loop output error method where the usual joint
position output is replaced by the joint force/torque. It is based on a
closed-loop simulation of the robot using the direct dynamic model, the same
structure of the control law, and the same reference trajectory for both the
actual and the simulated robot. The optimal parameters minimize the 2-norm of
the error between the actual force/torque and the simulated force/torque. This
is a non-linear least-squares problem which is dramatically simplified using
the inverse dynamic model to obtain an analytical expression of the simulated
force/torque, linear in the parameters. A validation experiment on a 2
degree-of-freedom direct drive robot shows that the new method is efficient.
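The classical linear-in-parameters identification the abstract builds on can be sketched on a toy 1-DOF model (the model, trajectory, and parameter values here are illustrative assumptions): torque is linear in the dynamic parameters, so ordinary least squares recovers them.

```python
import numpy as np

# Toy 1-DOF model: tau = I*qdd + Fv*qd + Fc*sign(qd)
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)
q = np.sin(3 * t)                    # exciting reference-like trajectory
qd = 3 * np.cos(3 * t)               # joint velocity
qdd = -9 * np.sin(3 * t)             # joint acceleration

true_theta = np.array([0.4, 0.1, 0.3])           # inertia, viscous, Coulomb
Phi = np.column_stack([qdd, qd, np.sign(qd)])    # regressor, linear in theta
tau = Phi @ true_theta + 0.01 * rng.standard_normal(len(t))  # noisy torque

# Linear least-squares estimate of the dynamic parameters
theta_hat, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
```
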
|
1009.4564
|
A Constructive Algorithm for Feedforward Neural Networks for Medical
Diagnostic Reasoning
|
cs.NE
|
This research searches for alternatives in the resolution of complex medical
diagnoses, where human knowledge should be apprehended in a general fashion.
Successful application examples show that human diagnostic capabilities are
significantly worse than those of the neural diagnostic system. Our research
describes a constructive neural network algorithm with backpropagation; it
offers an approach for the incremental construction of near-minimal neural
network architectures for pattern classification. The algorithm starts with a
minimal number of hidden units in the single hidden layer; additional units are
added one at a time to improve the accuracy of the network and to reach an
optimal network size. Our algorithm was tested on several benchmark
classification problems, including Cancer1, Heart, and Diabetes, with good
generalization ability.
|
1009.4566
|
An Algorithm to Extract Rules from Artificial Neural Networks for
Medical Diagnosis Problems
|
cs.NE
|
Artificial neural networks (ANNs) have been successfully applied to solve a
variety of classification and function approximation problems. Although ANNs
can generally predict better than decision trees for pattern classification
problems, ANNs are often regarded as black boxes since their predictions cannot
be explained clearly like those of decision trees. This paper presents a new
algorithm, called rule extraction from ANNs (REANN), to extract rules from
trained ANNs for medical diagnosis problems. A standard three-layer feedforward
ANN with four-phase training is the basis of the proposed algorithm. In the
first phase, the number of hidden nodes in ANNs is determined automatically by
a constructive algorithm. In the second phase, irrelevant connections and input
nodes are removed from trained ANNs without sacrificing the predictive accuracy
of ANNs. The continuous activation values of the hidden nodes are discretized
by using an efficient heuristic clustering algorithm in the third phase.
Finally, rules are extracted from compact ANNs by examining the discretized
activation values of the hidden nodes. Extensive experimental studies on three
benchmark classification problems, i.e. breast cancer, diabetes and lenses,
demonstrate that REANN can generate high quality rules from ANNs, which are
comparable with other methods in terms of number of rules, average number of
conditions for a rule, and predictive accuracy.
|
1009.4569
|
Fastest Distributed Consensus on Star-Mesh Hybrid Sensor Networks
|
cs.IT cs.DM math.IT
|
Solving the Fastest Distributed Consensus (FDC) averaging problem over sensor
networks with different topologies has received some attention recently, and
one of the well-known topologies in this context is the star-mesh hybrid
topology. In this work we present an analytical solution for the FDC problem,
by means of stratification and semidefinite programming, for the Star-Mesh
Hybrid network with K-partite core (SMHK), which has rich symmetry properties.
The variations of the asymptotic and per-step convergence rates of the SMHK
network with its topological parameters have also been studied numerically.
|
1009.4570
|
Extraction of Symbolic Rules from Artificial Neural Networks
|
cs.NE
|
Although backpropagation ANNs generally predict better than decision trees do
for pattern classification problems, they are often regarded as black boxes,
i.e., their predictions cannot be explained in the way those of decision trees
can. In many applications, it is desirable to extract knowledge from trained
ANNs so that users can gain a better understanding of how the networks solve
the problems. A new rule extraction algorithm, called rule extraction from
artificial neural networks (REANN), is proposed and implemented to extract
symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis
of the algorithm. A four-phase training algorithm is proposed for
backpropagation learning. The explicitness of the extracted rules is supported
by comparing them to the symbolic rules generated by other methods. The
extracted rules are comparable with those of other methods in terms of the
number of rules, the average number of conditions per rule, and predictive
accuracy. Extensive experimental studies on several benchmark classification
problems, such as breast cancer, iris, diabetes, and season classification,
demonstrate the effectiveness of the proposed approach with good
generalization ability.
|
1009.4572
|
Medical diagnosis using neural network
|
cs.NE
|
This research searches for alternatives to the resolution of complex medical
diagnosis, where human knowledge should be apprehended in a general fashion.
Successful application examples show that human diagnostic capabilities are
significantly worse than those of the neural diagnostic system. This paper
describes a modified feedforward neural network constructive algorithm
(MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm
with backpropagation offers an approach for the incremental construction of
near-minimal neural network architectures for pattern classification. The
algorithm starts with a minimal number of hidden units in the single hidden
layer; additional units are added to the hidden layer one at a time to improve
the accuracy of the network and to obtain an optimal size of the neural
network. The MFNNCA was tested on several benchmark classification problems,
including cancer, heart disease, and diabetes. Experimental results show that
the MFNNCA can produce an optimal neural network architecture with good
generalization ability.
|
1009.4574
|
A hybrid learning algorithm for text classification
|
cs.NE cs.IR cs.LG
|
Text classification is the process of classifying documents into predefined
categories based on their content. Existing supervised learning algorithms
that automatically classify text need sufficient documents to learn
accurately. This paper presents a new algorithm for text classification that
requires fewer documents for training. Instead of using words, word relations,
i.e., association rules derived from these words, are used to build the
feature set from pre-classified text documents. The concept of the naive Bayes
classifier is then applied to the derived features, and finally a single
concept of the genetic algorithm is added for the final classification.
Experimental results show that a classifier built this way is more accurate
than existing text classification systems.
|
1009.4581
|
3D-Mesh denoising using an improved vertex based anisotropic diffusion
|
cs.CV
|
This paper deals with an improvement of vertex-based nonlinear diffusion for
mesh denoising. This method directly filters the positions of the vertices
using the Laplace, reduced centered Gaussian, and Rayleigh probability density
functions as diffusivities. The use of these PDFs improves the performance of
a vertex-based diffusion method that is adapted to the underlying mesh
structure. We also compare the proposed method to other mesh denoising methods
such as Laplacian flow, mean, median, min, and adaptive MMSE filtering. To
evaluate these filtering methods, we use two error metrics: the first is based
on the vertices and the second on the normals. Experimental results
demonstrate the effectiveness of our proposed method in comparison with the
existing methods.
|
1009.4582
|
Text Classification using the Concept of Association Rule of Data Mining
|
cs.LG cs.DB cs.IR
|
As the amount of online text increases, the demand for text classification to
aid the analysis and management of text is increasing. Text is cheap, but
information, in the form of knowing what classes a text belongs to, is
expensive. Automatic classification of text can provide this information at low
cost, but the classifiers themselves must be built with expensive human effort,
or trained from texts which have themselves been manually classified. In this
paper we will discuss a procedure of classifying text using the concept of
association rule of data mining. Association rule mining technique has been
used to derive feature set from pre-classified text documents. Naive Bayes
classifier is then used on derived features for final classification.
|
1009.4586
|
Optimal Bangla Keyboard Layout using Association Rule of Data Mining
|
cs.AI
|
In this paper we present an optimal Bangla Keyboard Layout, which distributes
the typing load equally on both hands, thereby maximizing ease and minimizing
effort. The Bangla alphabet has a large number of letters, which makes it
difficult to type quickly on a Bangla keyboard. Our proposed keyboard
maximizes the speed of the operator, as both hands can be used to type in
parallel. We use the association rule of data mining to distribute the Bangla
characters over the keyboard. First, we analyze the frequencies of data
consisting of monographs, digraphs, and trigraphs, derived from a data
warehouse, and then use the association rule of data mining to distribute the
Bangla characters in the layout. Finally, we propose a Bangla Keyboard Layout.
Experimental results on several keyboard layouts show the effectiveness of the
proposed approach with better performance.
|
1009.4595
|
Diversity Spectra of Spatial Multipath Fading Processes
|
cs.IT math.IT
|
We analyse the spatial diversity of a multipath fading process for a finite
region or curve in the plane. By means of the Karhunen-Lo\`eve (KL) expansion,
this diversity can be characterised by the eigenvalue spectrum of the spatial
autocorrelation kernel, which justifies the use of the term diversity spectrum
for it. We show how the diversity spectrum can be calculated for any such
geometrical object and any fading statistics represented by the power azimuth
spectrum (PAS). We give rigorous estimates for the accuracy of the numerically
calculated eigenvalues. The numerically calculated diversity spectra provide
useful hints for the optimisation of the geometry of an antenna array.
Furthermore, for a channel-coded system, they allow one to evaluate the time
interleaving depth that is necessary to exploit the diversity gain of the code.
|
1009.4610
|
Performance Analysis and Design of Two Edge Type LDPC Codes for the BEC
Wiretap Channel
|
cs.IT math.IT
|
We consider transmission over a wiretap channel where both the main channel
and the wiretapper's channel are Binary Erasure Channels (BEC). We propose a
code construction method using two edge type LDPC codes based on the coset
encoding scheme. Using a standard LDPC ensemble with a given threshold over the
BEC, we give a construction for a two edge type LDPC ensemble with the same
threshold. If the given standard LDPC ensemble has degree two variable nodes,
our construction gives rise to degree one variable nodes in the code used over
the main channel. This results in zero threshold over the main channel. In
order to circumvent this problem, we numerically optimize the degree
distribution of the two edge type LDPC ensemble. We find that the resulting
ensembles are able to perform close to the boundary of the rate-equivocation
region of the wiretap channel.
There are two performance criteria for a coding scheme used over a wiretap
channel: reliability and secrecy. The reliability measure corresponds to the
probability of decoding error for the intended receiver. This can be easily
measured using density evolution recursion. However, it is more challenging to
characterize secrecy, corresponding to the equivocation of the message for the
wiretapper. M\'easson, Montanari, and Urbanke have shown how the equivocation
can be measured for a broad range of standard LDPC ensembles for transmission
over the BEC under the point-to-point setup. By generalizing the method of
M\'easson, Montanari, and Urbanke to two edge type LDPC ensembles, we show how
the equivocation for the wiretapper can be computed. We find that relatively
simple constructions give very good secrecy performance and are close to the
secrecy capacity. However, finding explicit sequences of two edge type LDPC
ensembles which achieve secrecy capacity is a more difficult problem. We pose
it as an interesting open problem.
|
1009.4638
|
Novel Codes Family for Modified Spectral-Amplitude-Coding OCDMA Systems
and Performance Analysis
|
cs.IT math.IT
|
In this paper a novel family of codes for modified spectral-amplitude-coding
optical code division multiple access (SAC-OCDMA) is introduced. The proposed
codes exist for a greater number of processing gains compared to previously
reported codes. In a network using these codes, the number of users can be
extended without any essential changes to the existing transmitters. In this
study, we propose a construction method for these codes and compare their
performance with previously reported codes.
|
1009.4683
|
Efficient Computation of Optimal Trading Strategies
|
cs.CE q-fin.CP
|
Given the return series for a set of instruments, a \emph{trading strategy}
is a switching function that transfers wealth from one instrument to another at
specified times. We present efficient algorithms for constructing (ex-post)
trading strategies that are optimal with respect to the total return, the
Sterling ratio and the Sharpe ratio. Such ex-post optimal strategies are useful
analysis tools. They can be used to analyze the "profitability of a market" in
terms of optimal trading; to develop benchmarks against which real trading can
be compared; and, within an inductive framework, the optimal trades can be used
to teach learning systems (predictors), which are then used to identify
future trading opportunities.
|
1009.4693
|
Uniqueness transition in noisy phase retrieval
|
physics.data-an cs.IT math.IT physics.optics
|
Previous criteria for the feasibility of reconstructing phase information
from intensity measurements, both in x-ray crystallography and more recently in
coherent x-ray imaging, have been based on the Maxwell constraint counting
principle. We propose a new criterion, based on Shannon's mutual information,
that is better suited for noisy data or contrast that has strong priors not
well modeled by continuous variables. A natural application is magnetic domain
imaging, where the criterion for uniqueness in the reconstruction takes the
form that the number of photons, per pixel of contrast in the image, exceeds a
certain minimum. Detailed studies of a simple model show that the uniqueness
transition is of the type exhibited by spin glasses.
|
1009.4719
|
A Fast Audio Clustering Using Vector Quantization and Second Order
Statistics
|
cs.SD cs.LG
|
This paper describes an effective unsupervised speaker indexing approach. We
suggest a two-stage algorithm to speed up the state-of-the-art algorithm based
on the Bayesian Information Criterion (BIC). In the first stage of the merging
process, a computationally cheap method based on vector quantization (VQ) is
used. In the second stage, a more computationally expensive technique based on
the BIC is applied. The speaker indexing task relies on a tuning parameter, or
threshold; we suggest an on-line procedure to set the value of this tuning
parameter without using development data. The results are evaluated using 10
hours of audio data.
|
1009.4739
|
Balancing clusters to reduce response time variability in large scale
image search
|
cs.CV
|
Many algorithms for approximate nearest neighbor search in high-dimensional
spaces partition the data into clusters. At query time, in order to avoid
exhaustive search, an index selects the few (or a single) clusters nearest to
the query point. Clusters are often produced by the well-known $k$-means
approach since it has several desirable properties. On the downside, it tends
to produce clusters having quite different cardinalities. Imbalanced clusters
negatively impact both the variance and the expectation of query response
times. This paper proposes to modify $k$-means centroids to produce clusters
with more comparable sizes without sacrificing the desirable properties.
Experiments with a large scale collection of image descriptors show that our
algorithm significantly reduces the variance of response times without
seriously impacting the search quality.
|
1009.4757
|
Modeling Instantaneous Changes In Natural Scenes
|
cs.CV
|
This project aims to create a 3D model of the natural world and to model
changes in it instantaneously. A framework for modeling instantaneous changes
in natural scenes in real time, using a Lagrangian particle framework and a
fluid-particle grid approach, is presented. The project is presented as a
proof-based system: we show that the design is very much possible, but
currently we only have selective scripts that accomplish the given job; a
complete software package is still under development. This research can be
divided into three distinct sections. The first discusses a multi-camera rig
that can measure ego-motion accurately up to 88%, how this device becomes the
backbone of our framework, and some improvements devised to optimize a known
framework for depth maps and 3D structure estimation from a single still image
called make3d. The second part discusses the fluid-particle framework used to
model natural scenes, presents some algorithms that we use to accomplish this
task, and shows how an application of our framework can extend make3d to model
natural scenes in real time. This part of the research constructs a bridge
between computer vision and computer graphics, so that ideas, answers, and
intuitions that arose in the domain of computer graphics can now be applied to
computer vision and natural modeling. The final part of this research improves
upon what might become the first general-purpose vision system using deep
belief architectures, and provides another framework to improve the lower
bound on training images for boosting by using a variation of restricted
Boltzmann machines (RBMs). We also discuss other applications that might arise
from our work in these areas.
|
1009.4766
|
Efficient L1/Lq Norm Regularization
|
cs.LG
|
Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the L1/Lq norm with q > 1 is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the structure of the L1/Lq -regularization.
Existing work deals with the special cases q = 2 and q = infinity, which
cannot be easily extended to the general case. In this paper, we propose an
efficient algorithm based on the accelerated gradient method for solving the
L1/Lq -regularized problem, which is applicable for all values of q larger than
1, thus significantly extending existing work. One key building block of the
proposed algorithm is the L1/Lq -regularized Euclidean projection (EP1q). Our
theoretical analysis reveals the key properties of EP1q and illustrates why
EP1q for the general q is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for EP1q by solving two zero finding problems. Experimental results
demonstrate the efficiency of the proposed algorithm.
|
1009.4773
|
NCSA: A New Protocol for Random Multiple Access Based on Physical Layer
Network Coding
|
cs.IT cs.NI math.IT
|
This paper introduces a random multiple access method for satellite
communications, named Network Coding-based Slotted Aloha (NCSA). The goal is to
improve diversity of data bursts on a slotted-ALOHA-like channel thanks to
error correcting codes and Physical-layer Network Coding (PNC). This scheme can
be considered as a generalization of the Contention Resolution Diversity
Slotted Aloha (CRDSA) where the different replicas of this system are replaced
by the different parts of a single word of an error correcting code. The
performance of this scheme is first studied through a density evolution
approach. Then, simulations confirm the CRDSA results by showing that, for a
time frame of $400$ slots, the achievable total throughput is greater than
$0.7\times C$, where $C$ is the maximal throughput achieved by a centralized
scheme. This paper is a first analysis of the proposed scheme, which opens
several perspectives. The most promising approach is to integrate collided
bursts into the decoding process in order to improve the obtained performance.
|
1009.4780
|
Spectrum Sharing between Cooperative Relay and Ad-hoc Networks: Dynamic
Transmissions under Computation and Signaling Limitations
|
cs.IT math.IT math.OC
|
This paper studies a spectrum sharing scenario between a cooperative relay
network (CRN) and a nearby ad-hoc network. In particular, we consider a dynamic
spectrum access and resource allocation problem of the CRN. Based on sensing
and predicting the ad-hoc transmission behaviors, the ergodic traffic collision
time between the CRN and ad-hoc network is minimized subject to an ergodic
uplink throughput requirement for the CRN. We focus on real-time implementation
of spectrum sharing policy under practical computation and signaling
limitations. In our spectrum sharing policy, most computation tasks are
accomplished off-line. Hence, little real-time calculation is required which
fits the requirement of practical applications. Moreover, the signaling
procedure and computation process are designed carefully to reduce the time
delay between spectrum sensing and data transmission, which is crucial for
enhancing the accuracy of traffic prediction and improving the performance of
interference mitigation. The benefits of spectrum sensing and cooperative relay
techniques are demonstrated by our numerical experiments.
|
1009.4787
|
Improving the Quality of Non-Holonomic Motion by Hybridizing C-PRM Paths
|
cs.RO
|
Sampling-based motion planners are an effective means for generating
collision-free motion paths. However, the quality of these motion paths, with
respect to different quality measures such as path length, clearance,
smoothness or energy, is often notoriously low. This problem is accentuated in
the case of non-holonomic sampling-based motion planning, in which the space of
feasible motion trajectories is restricted. In this study, we combine the C-PRM
algorithm by Song and Amato with our recently introduced path-hybridization
approach, for creating high quality non-holonomic motion paths, with
combinations of several different quality measures such as path length,
smoothness or clearance, as well as the number of reverse car motions. Our
implementation includes a variety of code optimizations that result in nearly
real-time performance, and which we believe can be extended with further
optimizations to a real-time tool for the planning of high-quality car-like
motion.
|
1009.4791
|
Multi-parametric Solution-path Algorithm for Instance-weighted Support
Vector Machines
|
cs.LG
|
An instance-weighted variant of the support vector machine (SVM) has
attracted considerable attention recently since it is useful in various
machine learning tasks such as non-stationary data analysis, heteroscedastic
data modeling, transfer learning, learning to rank, and transduction. An
important challenge in these scenarios is to overcome the computational
bottleneck---instance weights often change dynamically or adaptively, and thus
the weighted SVM solutions must be repeatedly computed. In this paper, we
develop an algorithm that can efficiently and exactly update the weighted SVM
solutions for arbitrary change of instance weights. Technically, this
contribution can be regarded as an extension of the conventional solution-path
algorithm for a single regularization parameter to multiple instance-weight
parameters. However, this extension gives rise to a significant problem that
breakpoints (at which the solution path turns) have to be identified in
high-dimensional space. To facilitate this, we introduce a parametric
representation of instance weights. We also provide a geometric interpretation
in weight space using the notion of a critical region: a polyhedron in which
the current affine solution remains optimal. Then we find breakpoints at
intersections of the solution path and boundaries of polyhedrons. Through
extensive experiments on various practical applications, we demonstrate the
usefulness of the proposed algorithm.
|
1009.4797
|
Extracting directed information flow networks: an application to
genetics and semantics
|
physics.data-an cs.SI physics.soc-ph
|
We introduce a general method to infer the directional information flow
between populations whose elements are described by n-dimensional vectors of
symbolic attributes. The method is based on the Jensen-Shannon divergence and
on the Shannon entropy and has a wide range of applications. We show here the
results of two applications: first, extracting the network of genetic flow
between the meadows of the seagrass Poseidonia Oceanica, where the meadow
elements are specified by sets of microsatellite markers; second, extracting
the semantic flow network from a set of Wikipedia pages, showing the semantic
channels between different areas of knowledge.
|
1009.4798
|
Role of feedback and broadcasting in the naming game
|
physics.soc-ph cond-mat.stat-mech cs.GT cs.MA cs.NI q-bio.PE
|
The naming game (NG) describes the agreement dynamics of a population of
agents that interact locally in a pairwise fashion, and in recent years
statistical physics tools and techniques have greatly contributed to shed light
on its rich phenomenology. Here we investigate in detail the role played by
the way in which the two agents update their states after an interaction. We
show that slightly modifying the NG rules in terms of which agent performs the
update in given circumstances (i.e. after a success) can either alter
dramatically the overall dynamics or leave it qualitatively unchanged. We
understand analytically the first case by casting the model in the broader
framework of a generalized NG. As for the second case, on the other hand, we
note that the modified rule reproducing the main features of the usual NG
corresponds in fact to a simplification of it consisting in the elimination of
feedback between the agents. This allows us to introduce and study a very
natural broadcasting scheme on networks that can be potentially relevant for
different applications, such as the design and implementation of autonomous
sensor networks, as pointed out in the recent literature.
|
1009.4823
|
Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques
|
cs.CV
|
We propose a mid-level image segmentation framework that combines multiple
figure-ground hypotheses (FG), constrained at different locations and scales,
into interpretations that tile the entire image. The problem is cast as
optimization over sets of maximal cliques sampled from the graph connecting
non-overlapping, putative figure-ground segment hypotheses. Potential functions
over cliques combine unary Gestalt-based figure quality scores and pairwise
compatibilities among spatially neighboring segments, constrained by
T-junctions and the boundary interface statistics resulting from projections of
real 3d scenes. Learning the model parameters is formulated as rank
optimization, alternating between sampling image tilings and optimizing their
potential function parameters. State of the art results are reported on both
the Berkeley and the VOC2009 segmentation dataset, where a 28% improvement was
achieved.
|
1009.4877
|
Towards Quality of Service and Resource Aware Robotic Systems through
Model-Driven Software Development
|
cs.RO
|
Engineering the software development process in robotics is one of the basic
necessities towards industrial-strength service robotic systems. A major
challenge is to make the step from code-driven to model-driven systems. This is
essential to replace hand-crafted single-unit systems by systems composed of
components with explicitly stated properties. Furthermore, this fosters
reuse by separating robotics knowledge from short-cycled implementation
technologies. Altogether, this is an important step towards "able" robots.
This paper reports on a model-driven development process for robotic systems.
The process consists of a robotics metamodel with first explications of
non-functional properties. A model-driven toolchain based on Eclipse provides
the model transformation and code generation steps. It also provides
design-time analysis of resource parameters (e.g. schedulability analysis of
real-time tasks) as a first step towards overall resource awareness in the
development of
integrated robotic systems. The overall approach is underpinned by several real
world scenarios.
|
1009.4954
|
Delay-Guaranteed Cross-Layer Scheduling in Multi-Hop Wireless Networks
|
cs.IT math.IT
|
In this paper, we propose a cross-layer scheduling algorithm that achieves a
throughput "epsilon-close" to the optimal throughput in multi-hop wireless
networks with a tradeoff of O(1/epsilon) in delay guarantees. The algorithm
aims to solve a joint congestion control, routing, and scheduling problem in a
multi-hop wireless network while satisfying per-flow average end-to-end delay
guarantees and minimum data rate requirements. This problem has been solved for
both backlogged as well as arbitrary arrival rate systems. Moreover, we discuss
the design of a class of low-complexity suboptimal algorithms, the effects of
delayed feedback on the optimal algorithm, and the extensions of the proposed
algorithm to different interference models with arbitrary link capacities.
|
1009.4962
|
RGANN: An Efficient Algorithm to Extract Rules from ANNs
|
cs.NE
|
This paper describes an efficient rule generation algorithm, called rule
generation from artificial neural networks (RGANN), to generate symbolic rules
from ANNs. Classification rules are sought in many areas, from automatic
knowledge acquisition to data mining and ANN rule extraction, because
classification rules possess some attractive features: they are explicit,
understandable, and verifiable by domain experts, and can be modified,
extended, and passed on as modular knowledge. A standard three-layer
feedforward ANN is the basis of the algorithm. A four-phase training algorithm
is proposed for backpropagation learning. The explicitness of the generated
rules is supported by comparing them to symbolic rules generated by other
methods. The generated rules are comparable with those of other methods in
terms of the number of rules, the average number of conditions per rule, and
predictive accuracy. Extensive experimental studies on several benchmark
classification problems, including breast cancer, wine, season, golf-playing,
and lenses, demonstrate the effectiveness of the proposed approach with good
generalization ability.
|
1009.4964
|
Text Classification using Artificial Intelligence
|
cs.IR
|
Text classification is the process of classifying documents into predefined
categories based on their content. It is the automated assignment of natural
language texts to predefined categories. Text classification is the primary
requirement of text retrieval systems, which retrieve texts in response to a
user query, and text understanding systems, which transform text in some way
such as producing summaries, answering questions or extracting data. Existing
supervised learning algorithms for classifying text need sufficient documents
to learn accurately. This paper presents a new algorithm for text
classification using an artificial intelligence technique that requires fewer
documents for training. Instead of using words, word relations, i.e.,
association rules derived from these words, are used to build the feature set
from pre-classified text documents. The concept of the na\"ive Bayes
classifier is then applied to the derived features, and finally a single
concept of the genetic algorithm is added for the final classification. A
system based on the proposed algorithm has been
implemented and tested. The experimental results show that the proposed system
works as a successful text classifier.
|
1009.4966
|
The minimum distance of parameterized codes on projective tori
|
math.AC cs.IT math.AG math.IT
|
Let X be a subset of a projective space, over a finite field K, which is
parameterized by the monomials arising from the edges of a clutter. Let I(X) be
the vanishing ideal of X. It is shown that I(X) is a complete intersection if
and only if X is a projective torus. In this case we determine the minimum
distance of any parameterized linear code arising from X.
|
1009.4969
|
Extended Range Profiling in Stepped-Frequency Radar with Sparse Recovery
|
cs.IT math.IT
|
The newly emerging theory of compressed sensing (CS) enables the recovery of a
sparse signal from an inadequate number of linear projections. Based on compressed
sensing theory, a new algorithm of high-resolution range profiling for
stepped-frequency (SF) radar suffering from missing pulses is proposed. The new
algorithm recovers target range profile over multiple coarse-range-bins,
providing a wide range-profiling capability. MATLAB simulation results are
presented to verify the proposed method. Furthermore, we use data collected
from a real SF radar to generate extended-target high-resolution range (HRR)
profiles. The results are compared with a `stretch'-based least-squares method
to demonstrate the method's applicability.
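One way the sparse-recovery step could look, assuming a partial-Fourier measurement model for the surviving SF pulses and orthogonal matching pursuit as the solver (the paper's actual algorithm and radar data are not reproduced here):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    r, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 48, 4                                  # range bins, kept pulses, scatterers
freqs = np.sort(rng.choice(n, m, replace=False))      # surviving SF pulse indices
Phi = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n) / np.sqrt(m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = 1 + rng.random(k)
x_hat = omp(Phi, Phi @ x_true, k)
print(round(float(np.linalg.norm(x_hat - x_true)), 6))
```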
|
1009.4971
|
Fastest Distributed Consensus on Petal Networks
|
cs.IT cs.DM math.IT
|
Providing an analytical solution to the problem of finding the Fastest
Distributed Consensus (FDC) is one of the challenging problems in the field of
sensor networks. In this work we present an analytical solution to the fastest
distributed consensus averaging problem, by means of stratification and
semidefinite programming, for two particular types of petal networks, namely
symmetric and Complete Cored Symmetric (CCS) petal networks. Our method is
based on the convexity of the fastest distributed consensus averaging problem
and on inductive comparison of the characteristic polynomials initiated by the
slackness conditions, in order to find the optimal weights. Certain types of
leaves are also introduced along with their optimal weights, which are not
obtainable by the method used in this work when these leaves are considered
individually.
|
1009.4972
|
Speaker Identification using MFCC-Domain Support Vector Machine
|
cs.LG cs.SD
|
Speech recognition and speaker identification are important for
authentication and verification in security applications, but they are
difficult to achieve. Speaker identification methods can be divided into
text-independent and text-dependent approaches. This paper presents a
technique for text-dependent speaker identification using an MFCC-domain
support vector machine (SVM). In this work, mel-frequency cepstral
coefficients (MFCCs) and their statistical distribution properties are used as
features, which serve as inputs to the classifier. The sequential minimal
optimization (SMO) learning technique is applied to train the SVM, improving
performance over traditional training techniques such as chunking and Osuna's
algorithm. The cepstral coefficients representing the speaker characteristics
of a speech segment are computed by nonlinear filter-bank analysis and the
discrete cosine transform. The speaker identification ability and convergence
speed of the SVMs are investigated for different combinations of features.
Extensive experimental results on several samples show the effectiveness of
the proposed approach.
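The filter-bank-plus-DCT cepstral computation can be sketched for a single frame as follows (the filter count, frame length, and sample rate are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mfcc_frame(frame, sr=16000, n_filt=12, n_ceps=6):
    """Toy MFCC for one frame: power spectrum -> mel filter bank -> log -> DCT."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    # mel-spaced triangular filter bank (the nonlinear filter-bank analysis)
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    imel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = imel(np.linspace(mel(0), mel(sr / 2), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filt, len(spec)))
    for i in range(n_filt):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logE = np.log(fbank @ spec + 1e-10)
    # DCT-II decorrelates the log filter-bank energies into cepstral coefficients
    n = np.arange(n_filt)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filt)
    return dct @ logE

t = np.arange(512) / 16000
ceps = mfcc_frame(np.sin(2 * np.pi * 440 * t))
print(ceps.shape)   # (6,)
```

Per-speaker statistics of such vectors would then feed the SVM.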
|
1009.4973
|
Performance Analysis of Pulse Shaping Technique for OFDM PAPR Reduction
|
cs.IT math.IT
|
Orthogonal Frequency Division Multiplexing (OFDM) is an attractive modulation
and multiple-access technique for channels with a non-flat frequency response,
as it removes the need for complex equalizers. It can offer high-quality
performance in terms of bandwidth efficiency, robustness against multipath
fading and cost-effective implementation. However, its main disadvantage is the
high peak-to-average power ratio (PAPR) of the output signal. As a result,
linear behavior of the system over a large dynamic range is needed, and the
efficiency of the output amplifier is therefore reduced. In this paper, we
investigate the effect of several pulse-shaping time waveforms on the OFDM
system performance in terms of Bit Error Rate (BER). We evaluate the system
performance in AWGN channels. The obtained results indicate that the reduction
in PAPR achieved by the investigated methods is accompanied by a considerable
improvement in the BER performance of the system in multipath channels, as
compared to conventional OFDM. These promising results indicate that pulse
shaping with reduced PAPR is an attractive solution for an OFDM system.
|
1009.4974
|
Rotation Invariant Face Detection Using Wavelet, PCA and Radial Basis
Function Networks
|
cs.CV
|
This paper introduces a novel method for detecting human faces and their
orientation using wavelets, principal component analysis (PCA) and radial
basis function networks. The input image is analyzed by a two-dimensional
wavelet and a two-dimensional stationary wavelet. The common goals are image
cleaning and simplification, which are parts of de-noising or compression. We
apply an effective procedure to reduce the dimension of the input vectors
using PCA. A Radial Basis Function (RBF) neural network is then used as a
function-approximation network to detect whether the input image contains a
face and, if a face exists, to determine its orientation. We show how the RBF
network can perform better than the back-propagation algorithm and give some
solutions for better regularization of the RBF (GRNN) network. Compared with
traditional RBF networks, the proposed network demonstrates better capability
of approximating the underlying functions, faster learning speed, smaller
network size, and high robustness to outliers.
|
1009.4975
|
Dynamic Adaptive Mesh Refinement for Topology Optimization
|
math.NA cs.CE
|
We present an improved method for topology optimization with both adaptive
mesh refinement and derefinement. Since the total volume fraction in topology
optimization is usually modest, after a few initial iterations the domain of
computation is largely void. Hence, it is inefficient to have many small
elements in such regions, which contribute significantly to the overall
computational cost but little to the accuracy of the computation and the
design. At the same time, we want high spatial resolution for accurate
three-dimensional designs to avoid postprocessing or interpretation as much as
possible. Dynamic adaptive mesh refinement (AMR) offers the possibility to
balance these two requirements. We discuss requirements on AMR for topology
optimization and the algorithmic features to implement them. The numerical
design problems demonstrate (1) that our AMR strategy for topology optimization
leads to designs that are equivalent to optimal designs on uniform meshes, (2)
how AMR strategies that do not satisfy the postulated requirements may lead to
suboptimal designs, and (3) that our AMR strategy significantly reduces the
time to compute optimal designs.
|
1009.4976
|
Text Classification using Association Rule with a Hybrid Concept of
Naive Bayes Classifier and Genetic Algorithm
|
cs.IR cs.DB cs.LG
|
Text classification is the automated assignment of natural language texts to
predefined categories based on their content. Text classification is the
primary requirement of text retrieval systems, which retrieve texts in response
to a user query, and text understanding systems, which transform text in some
way such as producing summaries, answering questions or extracting data.
Nowadays the demand for text classification is increasing tremendously. With
this demand in mind, new and updated techniques are being developed for
automated text classification. This paper presents a new algorithm for text
classification. Instead of using individual words, word relations, i.e.,
association rules, are used to derive the feature set from pre-classified text
documents. A Naive Bayes classifier is then applied to the derived features,
and finally a genetic algorithm step is added for the final classification. A
system based on the proposed algorithm has been implemented
and tested. The experimental results show that the proposed system works as a
successful text classifier.
|
1009.4978
|
Extracting Symbolic Rules for Medical Diagnosis Problem
|
cs.NE
|
Neural networks (NNs) have been successfully applied to solve a variety of
application problems involving classification and function approximation.
Although backpropagation NNs generally predict better than decision trees do
for pattern classification problems, they are often regarded as black boxes,
i.e., their predictions cannot be explained as those of decision trees. In many
applications, it is desirable to extract knowledge from trained NNs for the
users to gain a better understanding of how the networks solve the problems. An
algorithm is proposed and implemented to extract symbolic rules for medical
diagnosis problems. An empirical study on three benchmark classification
problems, namely breast cancer, diabetes, and lenses, demonstrates that the
proposed algorithm generates high-quality rules from NNs, comparable with
other methods in terms of the number of rules, the average number of
conditions per rule, and predictive accuracy.
|
1009.4981
|
An Efficient Technique for Text Compression
|
cs.IT cs.IR math.IT
|
Storing a word or a whole text segment requires considerable storage space;
typically, a character requires one byte of memory. Compressing this memory
footprint is important for data management, and for text data the compression
must be lossless. We propose a lossless compression method for text data based
on a two-level approach: first reduction and then compression. The reduction
is performed using a word lookup table, not a traditional indexing system, and
the compression is then carried out by currently available compression
methods. The word lookup table is a part of the operating system, and the
reduction is performed by the operating system. In this method, each word is
replaced by an address value, which can quite effectively reduce the size of
the persistent memory required for text data. At the end of the first-level
reduction using the word lookup table, a binary file containing the addresses
is generated. Since the proposed method does not apply any compression
algorithm at the first level, this file can be compressed using popular
compression algorithms, finally providing a great deal of data compression on
purely English text data.
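A minimal user-space sketch of the two-level idea, assuming 2-byte word addresses and zlib as the second-level compressor (the paper itself places the lookup table inside the operating system):

```python
import struct
import zlib

def build_table(words):
    """Word lookup table: each distinct word gets a fixed-width address."""
    return {w: i for i, w in enumerate(sorted(set(words)))}

def reduce_text(text, table):
    """Level 1 (reduction): replace each word by a 2-byte address
    (assumes fewer than 65536 distinct words)."""
    return b"".join(struct.pack(">H", table[w]) for w in text.split())

def compress_text(text, table):
    """Level 2 (compression): a standard compressor on the address stream."""
    return zlib.compress(reduce_text(text, table))

def decompress_text(blob, table):
    inverse = {i: w for w, i in table.items()}
    raw = zlib.decompress(blob)
    ids = struct.unpack(">%dH" % (len(raw) // 2), raw)
    return " ".join(inverse[i] for i in ids)

text = "the cat sat on the mat the cat"
table = build_table(text.split())
blob = compress_text(text, table)
assert decompress_text(blob, table) == text   # lossless round trip
```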
|
1009.4982
|
Optimal Bangla Keyboard Layout using Data Mining Technique
|
cs.AI
|
This paper presents an optimal Bangla keyboard layout that distributes the
typing load equally between both hands, maximizing ease and minimizing effort.
The Bangla alphabet has a large number of letters, which makes fast typing on
a Bangla keyboard difficult. Our proposed keyboard maximizes the operator's
speed, since both hands can type in parallel. We use the association rule of
data mining to distribute the Bangla characters over the keyboard. First, we
analyze the frequencies of monographs, digraphs and trigraphs derived from a
data warehouse, and then use association rules to place the Bangla characters
in the layout. Experimental results on several data sets show the
effectiveness of the proposed approach and its better performance.
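A toy sketch of the frequency-analysis step; here a greedy alternating-hand placement stands in for the paper's association-rule method, and the English corpus is purely illustrative:

```python
from collections import Counter

def digraph_freqs(corpus):
    """Frequencies of adjacent character pairs (digraphs)."""
    return Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))

def assign_hands(corpus):
    """Greedy toy placement: the most frequent letters alternate between the
    left (L) and right (R) hand, so frequent text tends to alternate hands.
    This stands in for the association-rule placement used in the paper."""
    letters = [c for c, _ in Counter(corpus).most_common() if not c.isspace()]
    return {c: ("L" if i % 2 == 0 else "R") for i, c in enumerate(letters)}

corpus = "the quick brown fox jumps over the lazy dog " * 3
print(digraph_freqs(corpus).most_common(2))
layout = assign_hands(corpus)
```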
|
1009.4983
|
Pattern Classification using Simplified Neural Networks
|
cs.NE
|
In recent years, many neural network models have been proposed for pattern
classification, function approximation and regression problems. This paper
presents an approach for classifying patterns using simplified NNs. Although
the predictive accuracy of ANNs is often higher than that of other methods or
human experts, they are often regarded as practically "black boxes" due to the
complexity of the networks. In this paper, we attempt to open up these black
boxes by reducing the complexity of the network. The factor that makes this
possible is the pruning algorithm. By eliminating redundant weights, redundant
input and hidden units are identified and removed from the network. Using the
pruning algorithm, we have been able to prune networks such that only a few
input units, hidden units and connections are left, yielding a simplified
network. Experimental results on several benchmark problems show the
effectiveness of the proposed approach with good generalization ability.
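A minimal sketch of the pruning idea for a one-hidden-layer network (the magnitude threshold and the rule for dropping dead hidden units are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def prune_network(weights, threshold=0.1):
    """Magnitude pruning sketch: zero out small weights, then drop hidden
    units left with no incoming or no outgoing connections."""
    w1, w2 = (np.where(np.abs(w) < threshold, 0.0, w) for w in weights)
    alive = (np.abs(w1).sum(axis=0) > 0) & (np.abs(w2).sum(axis=1) > 0)
    return w1[:, alive], w2[alive, :], alive

rng = np.random.default_rng(0)
w1 = rng.normal(0.0, 0.2, (4, 6))   # input -> hidden weights
w2 = rng.normal(0.0, 0.2, (6, 2))   # hidden -> output weights
p1, p2, alive = prune_network((w1, w2), threshold=0.15)
print(int(alive.sum()), "of 6 hidden units kept")
```

A redundant input unit would be removed analogously, by checking the rows of the pruned first-layer matrix.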
|
1009.4984
|
Rule Extraction using Artificial Neural Networks
|
cs.NE
|
Artificial neural networks have been successfully applied to a variety of
business application problems involving classification and regression. Although
backpropagation neural networks generally predict better than decision trees do
for pattern classification problems, they are often regarded as black boxes,
i.e., their predictions are not as interpretable as those of decision trees. In
many applications, it is desirable to extract knowledge from trained neural
networks so that the users can gain a better understanding of the solution.
This paper presents an efficient algorithm to extract rules from artificial
neural networks. We use two-phase training algorithm for backpropagation
learning. In the first phase, the number of hidden nodes of the network is
determined automatically in a constructive fashion by adding nodes one after
another based on the performance of the network on training data. In the second
phase, the number of relevant input units of the network is determined using a
pruning algorithm. The pruning process attempts to eliminate as many
connections as possible from the network. Relevant and irrelevant attributes
of the data are distinguished during the training process: those that are
relevant are kept and the others are automatically discarded. From the
simplified networks, which have a small number of connections and nodes,
symbolic rules can easily be extracted using the proposed algorithm. Extensive
experimental results on several benchmark problems demonstrate the
effectiveness of the proposed approach with good generalization ability.
|
1009.4987
|
Text Classification using Data Mining
|
cs.IR cs.DB
|
Text classification is the process of classifying documents into predefined
categories based on their content. It is the automated assignment of natural
language texts to predefined categories. Text classification is the primary
requirement of text retrieval systems, which retrieve texts in response to a
user query, and text understanding systems, which transform text in some way
such as producing summaries, answering questions or extracting data. Existing
supervised learning algorithms for automatically classifying text need
sufficient documents to learn accurately. This paper presents a new algorithm
for text classification using data mining that requires fewer documents for
training. Instead of using individual words, word relations, i.e., association
rules derived from those words, are used to build the feature set from
pre-classified text documents. A Naive Bayes classifier is then applied to the
derived features, and finally a genetic algorithm step is added for the final
classification. A
system based on the proposed algorithm has been implemented and tested. The
experimental results show that the proposed system works as a successful text
classifier.
|
1009.4988
|
REx: An Efficient Rule Generator
|
cs.NE
|
This paper describes an efficient algorithm REx for generating symbolic rules
from artificial neural network (ANN). Classification rules are sought in many
areas from automatic knowledge acquisition to data mining and ANN rule
extraction. This is because classification rules possess some attractive
features. They are explicit, understandable and verifiable by domain experts,
and can be modified, extended and passed on as modular knowledge. REx exploits
the first order information in the data and finds shortest sufficient
conditions for a rule of a class that can differentiate it from patterns of
other classes. It can generate concise and perfect rules in the sense that the
error rate of the rules is not worse than the inconsistency rate found in the
original data. An important feature of the rule extraction algorithm REx is
its recursive nature. The generated rules are concise, comprehensible, order
insensitive, and do not involve any weight values. Extensive experimental
studies on several
benchmark classification problems, such as breast cancer, iris, season, and
golf-playing, demonstrate the effectiveness of the proposed approach with good
generalization ability.
|
1009.4991
|
Web Page Categorization Using Artificial Neural Networks
|
cs.NE cs.IR
|
Web page categorization is one of the challenging tasks in the world of
ever-increasing web technologies. There are many ways of categorizing web
pages based on different approaches and features. This paper proposes a new
approach to web page categorization using an artificial neural network (ANN)
that extracts the features automatically. Eight major categories of web pages
have been selected for categorization: business & economy, education,
government, entertainment, sports, news & media, job search, and science. The
proposed system operates in three successive stages. In the first stage, the
features are automatically extracted by analyzing the source of the web pages.
The second stage fixes the input values of the neural network; all values lie
between 0 and 1, and variations in these values affect the output. Finally,
the third stage determines the class of a given web page out of the eight
predefined classes, using the back-propagation algorithm of the artificial
neural network. The proposed concept will facilitate web mining, retrieval of
information from the web, and search engines.
|
1009.4994
|
Text Categorization using Association Rule and Naive Bayes Classifier
|
cs.IR cs.DB
|
As the amount of online text increases, the demand for text categorization to
aid the analysis and management of text is increasing. Text is cheap, but
information, in the form of knowing what classes a text belongs to, is
expensive. Automatic categorization of text can provide this information at low
cost, but the classifiers themselves must be built with expensive human effort,
or trained from texts which have themselves been manually classified. Text
categorization using an Association Rule and a Na\"ive Bayes Classifier is
proposed here. Instead of using individual words, word relations, i.e.,
association rules derived from those words, are used to obtain the feature set
from pre-classified text documents. A Naive Bayes Classifier is then applied
to the derived features for the final categorization.
|
1009.5003
|
Demonstrating a Service-Enhanced Retrieval System
|
cs.IR cs.DL
|
This paper is a short description of an information retrieval system enhanced
by three model driven retrieval services: (1) co-word analysis based query
expansion, and re-ranking via (2) Bradfordizing and (3) author centrality.
Each service favors quite different, but still relevant, documents than purely
term-frequency-based rankings do. The services can be interactively combined
with one another to allow iterative retrieval refinement.
|
1009.5004
|
On reverse-engineering the KUKA Robot Language
|
cs.RO
|
Most commercial manufacturers of industrial robots require their robots to be
programmed in a proprietary language tailored to the domain - a typical
domain-specific language (DSL). However, these languages oftentimes suffer from
shortcomings such as controller-specific design, limited expressiveness and a
lack of extensibility. For that reason, we developed the extensible Robotics
API for programming industrial robots on top of a general-purpose language.
Although being a very flexible approach to programming industrial robots, a
fully-fledged language can be too complex for simple tasks. Additionally,
legacy support for code written in the original DSL has to be maintained. For
these reasons, we present a lightweight implementation of a typical robotic
DSL, the KUKA Robot Language (KRL), on top of our Robotics API. This work deals
with the challenges in reverse-engineering the language and mapping its
specifics to the Robotics API. We introduce two different approaches of
interpreting and executing KRL programs: tree-based and bytecode-based
interpretation.
|
1009.5026
|
On the Fictitious Play and Channel Selection Games
|
cs.GT cs.IT math.IT
|
Considering the interaction through mutual interference of the different
radio devices, the channel selection (CS) problem in decentralized parallel
multiple access channels can be modeled by strategic-form games. Here, we show
that the CS problem is a potential game (PG) and thus the fictitious play (FP)
converges to a Nash equilibrium (NE) either in pure or mixed strategies. Using
a 2-player 2-channel game, it is shown that convergence in mixed strategies
might lead to cycles of action profiles that yield individual spectral
efficiencies (SE) worse than the SE at the worst NE in either mixed or pure
strategies. Finally, exploiting the fact that the CS problem is a PG and an
aggregation game, we present a method to implement FP with local information
and minimum feedback.
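The cycling behavior described above can be reproduced with a minimal fictitious-play sketch on a 2-player, 2-channel game (the payoff matrix, where sharing a channel halves each user's rate, is an illustrative assumption): simultaneous best responses cycle between the two miscoordinated profiles while the empirical frequencies converge to the mixed NE.

```python
import numpy as np

def fictitious_play(payoff, steps=2000):
    """Each player best-responds to the opponent's empirical action
    frequencies. payoff[a1, a2] = (u1, u2)."""
    counts = [np.ones(payoff.shape[0]), np.ones(payoff.shape[1])]
    for _ in range(steps):
        f1 = counts[0] / counts[0].sum()
        f2 = counts[1] / counts[1].sum()
        a1 = int(np.argmax(payoff[:, :, 0] @ f2))   # best response of player 1
        a2 = int(np.argmax(f1 @ payoff[:, :, 1]))   # best response of player 2
        counts[0][a1] += 1
        counts[1][a2] += 1
    return counts[0] / counts[0].sum(), counts[1] / counts[1].sum()

# 2 players, 2 channels: sharing a channel halves the rate (a potential game).
P = np.array([[[0.5, 0.5], [1.0, 1.0]],
              [[1.0, 1.0], [0.5, 0.5]]])
f1, f2 = fictitious_play(P)
print(f1, f2)   # empirical frequencies approach the mixed NE (1/2, 1/2)
```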
|
1009.5029
|
On the complexity of the multiple stack TSP, kSTSP
|
cs.CC cs.RO
|
The multiple Stack Travelling Salesman Problem, STSP, deals with the
collection and delivery of n commodities in two distinct cities. The two
cities are represented by means of two edge-valued graphs (G1,d1) and (G2,d2).
During the pick-up tour, the commodities are stored in a container whose rows
are subject to LIFO constraints. As a generalisation of the standard TSP, the
problem obviously is NP-hard; nevertheless, one could wonder which
combinatorial structure of STSP has the greatest impact on its complexity: the
arrangement of the commodities in the container, or the tours themselves? The
answer is not clear. First, given a pair (T1,T2) of pick-up and delivery
tours, it is polynomial to decide whether or not these tours are compatible.
Second, for a given arrangement of the commodities into the k rows of the
container, the optimum pick-up and delivery tours w.r.t. this arrangement can
be computed in time polynomial in n, but exponential in k. Finally, we provide
instances on which a tour that is optimum for one of the three distances d1,
d2 or d1+d2 leads to solutions of STSP that are arbitrarily far from the
optimum STSP.
|
1009.5030
|
Approximability of the Multiple Stack TSP
|
cs.CC cs.RO
|
STSP seeks a pair of pickup and delivery tours in two distinct networks, where
the two tours are related by LIFO constraints. We address here the
approximability of the problem. We notably establish that asymmetric MaxSTSP
and MinSTSP12 are in APX, and propose a heuristic that yields 1/2, 3/4 and 3/2
standard approximations for Max2STSP, Max2STSP12 and Min2STSP12, respectively.
|
1009.5031
|
A Genetic Algorithm for the Multi-Pickup and Delivery Problem with time
windows
|
cs.NE
|
In this paper we present a genetic algorithm for the multi-pickup and delivery
problem with time windows (m-PDPTW). The m-PDPTW is a vehicle routing
optimization problem that must satisfy requests for transport between
suppliers and customers under precedence, capacity and time constraints. This
paper gives a brief literature review of the PDPTW and presents our approach,
based on genetic algorithms, to minimizing the total travel distance and
thereafter the total travel cost, showing how an encoding represents the
parameters of each individual.
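A heavily simplified sketch of such a genetic algorithm, keeping only the precedence constraint (capacity and time windows omitted; the instance, the repair rule and the crossover are illustrative assumptions):

```python
import math
import random

# Toy instance: 4 requests; pickup i must precede delivery i.
random.seed(3)
pts = {("p", i): (random.random(), random.random()) for i in range(4)}
pts.update({("d", i): (random.random(), random.random()) for i in range(4)})

def dist(a, b):
    (x1, y1), (x2, y2) = pts[a], pts[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_len(tour):
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def repair(tour):
    """Enforce precedence: move each pickup before its delivery."""
    out = []
    for stop in tour:
        out.append(stop)
        kind, i = stop
        if kind == "d" and ("p", i) not in out:
            out.insert(len(out) - 1, ("p", i))
    return [s for k, s in enumerate(out) if s not in out[:k]]  # dedup

def crossover(a, b):
    """One-point crossover on permutations, then precedence repair."""
    cut = random.randrange(len(a))
    return repair(a[:cut] + [s for s in b if s not in a[:cut]])

def ga(pop_size=30, gens=200):
    stops = list(pts)
    pop = [repair(random.sample(stops, len(stops))) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=tour_len)
        elite = pop[: pop_size // 2]
        pop = elite + [crossover(*random.sample(elite, 2)) for _ in elite]
    return min(pop, key=tour_len)

best = ga()
print(round(tour_len(best), 3))
```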
|
1009.5048
|
The Most Advantageous Bangla Keyboard Layout Using Data Mining Technique
|
cs.AI
|
The Bangla alphabet has a large number of letters, which makes fast typing on
a Bangla keyboard complicated. The proposed keyboard maximizes the operator's
speed, since both hands can type in parallel. The association rule of data
mining is used here to distribute the Bangla characters over the keyboard. The
frequencies of monographs, digraphs and trigraphs, derived from a data
warehouse, are analyzed, and association rules are then used to place the
Bangla characters in the layout. Experimental results on several data sets
show the effectiveness of the proposed approach and its better performance. In
summary, this paper presents an optimal Bangla keyboard layout that
distributes the typing load equally between both hands, maximizing ease and
minimizing effort.
|
1009.5055
|
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted
Low-Rank Matrices
|
math.OC cs.NA cs.SY
|
This paper proposes scalable and fast algorithms for solving the Robust PCA
problem, namely recovering a low-rank matrix with an unknown fraction of its
entries being arbitrarily corrupted. This problem arises in many applications,
such as image processing, web data ranking, and bioinformatic data analysis. It
was recently shown that under surprisingly broad conditions, the Robust PCA
problem can be exactly solved via convex optimization that minimizes a
combination of the nuclear norm and the $\ell^1$-norm. In this paper, we apply
the method of augmented Lagrange multipliers (ALM) to solve this convex
program. As the objective function is non-smooth, we show how to extend the
classical analysis of ALM to such new objective functions, prove the
optimality of the proposed algorithms, and characterize their convergence
rates.
Empirically, the proposed new algorithms can be more than five times faster
than the previous state-of-the-art algorithms for Robust PCA, such as the
accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms
achieve higher precision while being less storage/memory demanding. We also
show that the ALM technique can be used to solve the (related but somewhat
simpler) matrix completion problem and obtain rather promising results there
too. We further prove the necessary and sufficient condition for the inexact
ALM to converge globally. Matlab code for all the algorithms discussed is
available at http://perception.csl.illinois.edu/matrix-rank/home.html
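The inexact ALM iteration can be sketched as follows, assuming the standard choices lam = 1/sqrt(max(m, n)) and a geometrically increasing, capped mu (hyper-parameters and stopping rule are simplified relative to the paper):

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding (the proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding (the proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_ialm(D, lam=None, iters=100):
    """Inexact ALM sketch for: min ||A||_* + lam ||E||_1  s.t.  D = A + E."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    sigma1 = np.linalg.norm(D, 2)
    mu = 1.25 / sigma1
    Y = D / max(sigma1, np.abs(D).max() / lam)   # standard dual initialization
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(iters):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)   # update low-rank part
        E = shrink(D - A + Y / mu, lam / mu)       # update sparse part
        Y = Y + mu * (D - A - E)                   # dual ascent step
        mu = min(mu * 1.5, 1e7)                    # geometric growth, capped
    return A, E

rng = np.random.default_rng(1)
L = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))   # rank-2 ground truth
S = np.zeros((30, 30))
mask = rng.random((30, 30)) < 0.05
S[mask] = rng.normal(scale=10.0, size=mask.sum())          # gross sparse errors
A, E = rpca_ialm(L + S)
print(round(float(np.linalg.norm(A - L) / np.linalg.norm(L)), 4))
```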
|
1009.5094
|
Minimal-time bioremediation of natural water resources
|
math.OC cs.SY math.DS
|
We study minimal time strategies for the treatment of pollution of large
volumes, such as lakes or natural reservoirs, with the help of an autonomous
bioreactor. The control consists in feeding the bioreactor from the resource,
the clean output returning to the resource with the same flow rate. We first
characterize the optimal policies among constant and feedback controls, under
the assumption of a uniform concentration in the resource. In a second part, we
study the influence of an inhomogeneity in the resource, considering two
measurements points. With the help of the Maximum Principle, we show that the
optimal control law is non-monotonic and terminates with a constant phase,
contrary to the homogeneous case for which the optimal flow rate is decreasing
with time. This study allows the decision makers to identify situations for
which the benefit of using non-constant flow rates is significant.
|
1009.5121
|
Noncoherent Interference Alignment: Trade Signal Power for Diversity
Towards Multiplexing
|
cs.IT math.IT
|
This paper proposes the first known universal interference alignment scheme
for general $(1\times{}1)^K$ interference networks, either Gaussian or
deterministic, with only 2 symbol extension. While interference alignment is
theoretically powerful to increase the total network throughput tremendously,
no existing scheme can achieve the degree of freedom upper bound exactly with
finite complexity. This paper starts with detailed analysis of the diagonality
problem of naive symbol extension in small $(1\times1)^3$ networks, a technique
widely regarded as necessary to achieve interference alignment with
insufficient diversity. Then, a joint bandpass noncoherent demodulation and
interference alignment scheme is proposed to solve the diagonality problem by
trading signal power for increased system diversity, which is further traded
for multiplexing improvement. Finally, the proposed noncoherent interference
alignment scheme is extended to general $(1\times{}1)^K$ cases and is proven to
achieve the degree of freedom upper bound exactly. Simulation results verify
the correctness and effectiveness of the proposed scheme and show a
significant degree-of-freedom improvement compared to the conventional
orthogonal transmission scheme.
|
1009.5145
|
Relay Selection with Network Coding in Two-Way Relay Channels
|
cs.IT math.IT
|
In this paper, we consider the design of joint network coding (NC) and relay
selection (RS) in two-way relay channels. In the proposed schemes, two users
first sequentially broadcast their respective information to all the relays. We
propose two RS schemes, a single relay selection with NC and a dual relay
selection with NC. For both schemes, the selected relay(s) perform NC on the
received signals sent from the two users and forward them to both users. The
proposed schemes are analyzed and the exact bit error rate (BER) expressions
are derived and verified through Monte Carlo simulations. It is shown that the
dual relay selection with NC outperforms other considered relay selection
schemes in two-way relay channels. The results also reveal that the proposed NC
relay selection schemes provide a selection gain compared to a NC scheme with
no relay selection, and a network coding gain relative to a conventional relay
selection scheme with no NC.
|
1009.5146
|
Robust Linear Precoder Design for Multi-cell Downlink Transmission
|
cs.IT math.IT
|
Coordinated information processing by the base stations of multi-cell
wireless networks enhances the overall quality of communication in the network.
Such coordinations for optimizing any desired network-wide quality of service
(QoS) necessitate the base stations to acquire and share some channel state
information (CSI). With perfect knowledge of channel states, the base stations
can adjust their transmissions for achieving a network-wise QoS optimality. In
practice, however, the CSI can be obtained only imperfectly. As a result, due
to the uncertainties involved, the network is not guaranteed to benefit from a
globally optimal QoS. Nevertheless, if the channel estimation perturbations are
confined within bounded regions, the QoS measure will also lie within a bounded
region. Therefore, by exploiting the notion of robustness in the worst-case
sense some worst-case QoS guarantees for the network can be asserted. We adopt
a popular model for noisy channel estimates that assumes that estimation noise
terms lie within known hyper-spheres. We aim to design linear transceivers that
optimize a worst-case QoS measure in downlink transmissions. In particular, we
focus on maximizing the worst-case weighted sum-rate of the network and the
minimum worst-case rate of the network. For obtaining such transceiver designs,
we offer several centralized (fully cooperative) and distributed (limited
cooperation) algorithms which entail different levels of complexity and
information exchange among the base stations.
|
1009.5149
|
Towards an incremental maintenance of cyclic association rules
|
cs.DB
|
Recently, the cyclic association rules have been introduced in order to
discover rules from items characterized by their regular variation over time.
In real life situations, temporal databases are often appended or updated.
Rescanning the whole database every time is highly expensive while existing
incremental mining techniques can efficiently solve such a problem. In this
paper, we propose an incremental algorithm for the maintenance of cyclic
association rules. The experiments carried out on our proposal demonstrate its
efficiency and performance.
|
1009.5158
|
Information Capacity of Energy Harvesting Sensor Nodes
|
cs.IT math.IT
|
Sensor nodes with energy harvesting sources are gaining popularity due to
their ability to improve the network lifetime and are becoming a preferred
choice supporting `green communication'. We study such a sensor node with an
energy harvesting source and compare various architectures by which the
harvested energy is used. We find its Shannon capacity when it is transmitting
its observations over an AWGN channel and show that the capacity-achieving
energy management policies are related to the throughput-optimal policies. We
also obtain the capacity when energy conserving sleep-wake modes are supported
and an achievable rate for the system with inefficiencies in energy storage.
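As a rough numeric anchor (an assumption for illustration, not a claim made in
the abstract): if the mean harvested energy per slot were to play the role of
the average-power constraint, the node's rate would be bounded by the standard
AWGN capacity. A minimal sketch in Python:

```python
import math

def awgn_capacity(avg_power, noise_var=1.0):
    # Shannon capacity of an AWGN channel under an average-power
    # constraint: C = 0.5 * log2(1 + P / sigma^2) bits per channel use.
    return 0.5 * math.log2(1.0 + avg_power / noise_var)

# Hypothetical numbers: a mean harvest rate of 3 energy units per slot
# over unit-variance noise.
print(awgn_capacity(3.0))  # 1.0 bit per channel use
```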
|
1009.5161
|
Information Physics: The New Frontier
|
math-ph cond-mat.stat-mech cs.IT math.IT math.MP
|
At this point in time, two major areas of physics, statistical mechanics and
quantum mechanics, rest on the foundations of probability and entropy. The last
century saw several significant fundamental advances in our understanding of
the process of inference, which make it clear that these are inferential
theories. That is, rather than being a description of the behavior of the
universe, these theories describe how observers can make optimal predictions
about the universe. In such a picture, information plays a critical role. What
is more, little clues, such as the fact that black holes have entropy, continue
to suggest that information is fundamental to physics in general.
In the last decade, our fundamental understanding of probability theory has
led to a Bayesian revolution. In addition, we have come to recognize that the
foundations go far deeper and that Cox's approach of generalizing a Boolean
algebra to a probability calculus is the first specific example of the more
fundamental idea of assigning valuations to partially-ordered sets. By
considering this as a natural way to introduce quantification to the more
fundamental notion of ordering, one obtains an entirely new way of deriving
physical laws. I will introduce this new way of thinking by demonstrating how
one can quantify partially-ordered sets and, in the process, derive physical
laws. The implication is that physical law does not reflect the order in the
universe; instead, it is derived from the order imposed by our description of
the universe. Information physics, which is based on understanding the ways in
which we both quantify and process information about the world around us, is a
fundamentally new approach to science.
|
1009.5208
|
Exploiting isochrony in self-triggered control
|
math.OC cs.SY math.DS
|
Event-triggered control and self-triggered control have been recently
proposed as new implementation paradigms that reduce resource usage for control
systems. In self-triggered control, the controller is augmented with the
computation of the next time instant at which the feedback control law is to be
recomputed. Since these execution instants are obtained as a function of the
plant state, we effectively close the loop only when it is required to maintain
the desired performance, thereby greatly reducing the resources required for
control. In this paper we present a new technique for the computation of the
execution instants by exploiting the concept of isochronous manifolds, also
introduced in this paper. While our previous results showed how homogeneity can
be used to compute the execution instants along some directions in the state
space, the concept of isochrony allows us to compute the execution instants
along every direction in the state space. Moreover, we also show in this paper
how to homogenize smooth control systems, thus making our results applicable to
any smooth control system. The benefits of the proposed approach with respect
to existing techniques are analyzed in two examples.
|
1009.5233
|
A Simple Abstraction for Data Modeling
|
cs.DB cs.DL
|
The problems that scientists face in creating well designed databases
intersect with the concerns of data curation. Entity-relationship modeling and
its variants have been the basis of most relational data modeling for decades.
However, these abstractions and the relational model itself are intricate and
have proved not to be very accessible to scientists with limited resources
for data management. This paper explores one aspect of relational data models,
the meaning of foreign key relationships. We observe that a foreign key
produces a table relationship that generally references either an entity or
repeating attributes. This paper proposes constructing foreign keys based on
these two cases, and suggests that the method promotes intuitive data modeling
and normalization.
|
1009.5249
|
Defining and Generating Axial Lines from Street Center Lines for better
Understanding of Urban Morphologies
|
cs.CV physics.data-an
|
Axial lines are defined as the longest visibility lines for representing
individual linear spaces in urban environments. The least number of axial lines
that cover the free space of an urban environment or the space between
buildings constitute what is often called an axial map. This is a fundamental
tool in space syntax, a theory developed by Bill Hillier and his colleagues for
characterizing the underlying urban morphologies. For a long time, generating
axial lines with the help of graphics software has been a tedious manual
process that is criticized for being time-consuming, subjective, or even
arbitrary. In this paper, we redefine axial lines as the least number of
individual straight line segments mutually intersected along natural streets
that are generated from street center lines using the Gestalt principle of good
continuity. Based on this new definition, we develop an automatic solution to
generating the newly defined axial lines from street center lines. We apply
this solution to six typical street networks (three from North America and
three from Europe), and generate a new set of axial lines for analyzing the
urban morphologies. Through a comparison study between the new axial lines and
the conventional or old axial lines, and between the new axial lines and
natural streets, we demonstrate with empirical evidence that the newly defined
axial lines are a better alternative in capturing the underlying urban
structure.
Keywords: Space syntax, street networks, topological analysis, traffic,
head/tail division rule
|
1009.5257
|
Approximation of DAC Codeword Distribution for Equiprobable Binary
Sources along Proper Decoding Paths
|
cs.IT math.IT
|
Distributed Arithmetic Coding (DAC) is an effective implementation of
Slepian-Wolf coding, especially for short data blocks. To research its
properties, the concept of DAC codeword distribution along proper and wrong
decoding paths has been introduced. For DAC codeword distribution of
equiprobable binary sources along proper decoding paths, the problem was
formulated as solving a system of functional equations. However, up to now,
only one closed form has been obtained, at rate 0.5, while in general cases
finding the closed form of the DAC codeword distribution remains a very
difficult task.
This paper proposes three kinds of approximation methods for DAC codeword
distribution of equiprobable binary sources along proper decoding paths:
numeric approximation, polynomial approximation, and Gaussian approximation.
Firstly, as a general approach, a numeric method is iterated to find the
approximation to DAC codeword distribution. Secondly, at rates lower than 0.5,
DAC codeword distribution can be well approximated by a polynomial. Thirdly, at
very low rates, a Gaussian function centered at 0.5 is proved to be a good and
simple approximation to DAC codeword distribution. A simple way to estimate the
variance of the Gaussian function is also proposed. Extensive simulation
results are given to verify the theoretical analyses.
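The low-rate Gaussian approximation described above has the standard form of a
normal density centered at 1/2 (the variance parameter sigma^2 is the quantity
the paper estimates; its exact dependence on the rate is not given in the
abstract):

```latex
f(u) \;\approx\; \frac{1}{\sqrt{2\pi\sigma^{2}}}
  \exp\!\left(-\frac{(u - 1/2)^{2}}{2\sigma^{2}}\right),
\qquad u \in [0, 1],
```

which is symmetric about u = 1/2, matching the symmetry of equiprobable binary
sources.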
|
1009.5268
|
General Scaled Support Vector Machines
|
cs.AI
|
Support Vector Machines (SVMs) are popular tools for data mining tasks such
as classification, regression, and density estimation. However, the original
SVM (C-SVM) considers only local information from data points on or over the
margin, and therefore loses robustness. To solve this problem, one approach is
to translate (i.e., to move without rotation or change of shape) the hyperplane
according to the distribution of the entire data set. But existing work can
only be applied to the 1-D case. In this paper, we propose a simple and
efficient method called General Scaled SVM (GS-SVM) to extend the existing
approach to the multi-dimensional case. Our method translates the hyperplane
according to the
distribution of data projected on the normal vector of the hyperplane. Compared
with C-SVM, GS-SVM has better performance on several data sets.
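The translation idea can be sketched in a few lines: project the data onto the
hyperplane's normal vector and shift the offset accordingly. The rule below
(placing the boundary at the midpoint of the two classes' mean projections) is
one plausible instantiation; the paper's exact translation rule may differ.

```python
def translate_hyperplane(w, X, y):
    # Shift the offset b of the hyperplane w . x + b = 0 so the boundary
    # passes through the midpoint of the two classes' mean projections
    # onto the normal vector w.
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    proj_pos = [dot(w, x) for x, label in zip(X, y) if label > 0]
    proj_neg = [dot(w, x) for x, label in zip(X, y) if label < 0]
    midpoint = 0.5 * (sum(proj_pos) / len(proj_pos)
                      + sum(proj_neg) / len(proj_neg))
    return -midpoint  # new offset b' places the boundary at the midpoint

# Toy data separable along the first axis; normal vector w = (1, 0):
b = translate_hyperplane([1.0, 0.0],
                         [[0.0, 0.0], [2.0, 0.0], [4.0, 1.0], [6.0, 1.0]],
                         [-1, -1, 1, 1])
print(b)  # -3.0
```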
|
1009.5290
|
Measuring Similarity of Graphs and their Nodes by Neighbor Matching
|
cs.AI
|
The problem of measuring similarity of graphs and their nodes is important in
a range of practical problems. There are a number of proposed measures, some of
them being based on iterative calculation of similarity between two graphs and
the principle that two nodes are as similar as their neighbors are. In our
work, we propose one novel method of that sort, with a refined concept of
similarity of two nodes that involves matching of their neighbors. We prove
convergence of the proposed method and show that it has some additional
desirable properties that, to our knowledge, the existing methods lack. We
illustrate the method on two specific problems and empirically compare it to
other methods.
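The "two nodes are as similar as the best matching of their neighbors" idea can
be sketched as the iteration below. This is a minimal illustration using plain
adjacency lists and a greedy (rather than optimal) matching of neighbors; the
paper's exact update rule and termination criterion may differ.

```python
def node_similarity(adj_a, adj_b, iterations=10):
    # Iteratively refine pairwise node similarities between two graphs:
    # each pair (i, j) is scored by greedily matching their neighbors
    # and averaging over the larger neighbor set.
    n, m = len(adj_a), len(adj_b)
    sim = [[1.0] * m for _ in range(n)]
    for _ in range(iterations):
        new = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                pairs = sorted(
                    ((sim[u][v], u, v) for u in adj_a[i] for v in adj_b[j]),
                    reverse=True)
                used_a, used_b, total = set(), set(), 0.0
                for s, u, v in pairs:  # greedy one-to-one matching
                    if u not in used_a and v not in used_b:
                        used_a.add(u); used_b.add(v); total += s
                new[i][j] = total / max(len(adj_a[i]), len(adj_b[j]), 1)
        sim = new
    return sim

# Two identical path graphs 0-1-2 are maximally similar:
sim = node_similarity([[1], [0, 2], [1]], [[1], [0, 2], [1]])
print(sim[1][1])  # 1.0
```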
|
1009.5316
|
Jamming in complex networks with degree correlation
|
physics.soc-ph cs.SI physics.comp-ph
|
We study the effects of the degree-degree correlations on the pressure
congestion J when we apply a dynamical process on scale-free complex networks
using the gradient network approach. We find that the pressure congestion for
disassortative (assortative) networks is lower (higher) than that for
uncorrelated networks, which allows us to affirm that disassortative networks
enhance transport through them. This result agrees with the fact that many
real-world transportation networks naturally evolve to this kind of
correlation. We explain our results by showing that for the disassortative case
the clusters in the gradient network turn out to be as elongated as possible,
reducing the pressure congestion J, while the opposite behavior is observed for
the assortative case. Finally, we apply our model to real-world networks, and
the results agree
with our theoretical model.
|
1009.5346
|
A Novel Approach for Cardiac Disease Prediction and Classification Using
Intelligent Agents
|
cs.MA cs.AI
|
The goal is to develop a novel approach for cardiac disease prediction and
diagnosis using intelligent agents. Initially, the symptoms are preprocessed
using filter- and wrapper-based agents. The filter removes missing or
irrelevant symptoms, while the wrapper extracts data from the data set
according to threshold limits. The dependency of each symptom is identified
using a dependency-checker agent. The classification is based on the prior and
posterior probabilities of the symptoms together with the evidence value.
Finally, the symptoms are classified into five classes, namely absence,
starting, mild, moderate, and serious. Using this cooperative approach, the
cardiac problem is solved and verified.
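The prior/posterior scoring step can be sketched as a naive Bayes rule over the
five classes. The likelihood tables and the conditional-independence assumption
below are illustrative placeholders; the paper's agents may combine prior,
posterior, and evidence values differently.

```python
CLASSES = ["absence", "starting", "mild", "moderate", "serious"]

def classify(symptoms, priors, likelihoods):
    # Score each class as prior * product of per-symptom likelihoods
    # (a naive conditional-independence assumption), and return the
    # class with the highest posterior score.
    best, best_score = None, -1.0
    for c in CLASSES:
        score = priors[c]
        for s in symptoms:
            score *= likelihoods[c].get(s, 1e-6)  # small floor for unseen symptoms
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical likelihood tables for two of the five classes:
priors = {c: 0.2 for c in CLASSES}
likelihoods = {c: {} for c in CLASSES}
likelihoods["serious"] = {"chest_pain": 0.9, "fatigue": 0.8}
likelihoods["absence"] = {"chest_pain": 0.05, "fatigue": 0.3}
print(classify(["chest_pain", "fatigue"], priors, likelihoods))  # serious
```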
|
1009.5352
|
Establishing a Multi-Thesauri-Scenario based on SKOS and
Cross-Concordances
|
cs.DL cs.IR
|
This case study proposes a scenario with three topic-related thesauri, which
have been connected with bilateral cross-concordances as part of a major
terminology mapping initiative in the project KoMoHe (Mayr & Petras, 2008). The
thesauri have already been or will be converted to SKOS, and in order not to
omit the relevant crosswalks, the mapping properties of SKOS will be used to
model them adequately.
|
1009.5398
|
A Scenario-Based Mobile Application for Robot-Assisted Smart Digital
Homes
|
cs.RO
|
Smart homes are becoming more popular, as every day a new home appliance can
be digitally controlled. Smart Digital Homes use a server to bring interaction
with all the possible devices into one place, on a computer or webpage. In this
paper we designed and implemented a mobile application on the Windows Mobile
platform that can connect to the controlling server of a Smart Home and grants
access to the Smart Home devices and robots wherever possible. UML diagrams are
presented to illustrate the application design process. Robots are also
considered as devices that are able to interact with other objects and devices.
Scenarios are defined as sets of sequential actions to help manage different
tasks all in one place. The mobile application can connect to the server using
GPRS mobile internet and the Short Message Service (SMS). An interactive home
map is also designed for easier status checking and interaction with the
devices from mobile phones.
|
1009.5419
|
Portfolio Allocation for Bayesian Optimization
|
cs.LG
|
Bayesian optimization with Gaussian processes has become an increasingly
popular tool in the machine learning community. It is efficient and can be used
when very little is known about the objective function, making it popular in
expensive black-box optimization scenarios. It uses Bayesian methods to sample
the objective efficiently using an acquisition function which incorporates the
model's estimate of the objective and the uncertainty at any given point.
However, there are several different parameterized acquisition functions in the
literature, and it is often unclear which one to use. Instead of using a single
acquisition function, we adopt a portfolio of acquisition functions governed by
an online multi-armed bandit strategy. We propose several portfolio strategies,
the best of which we call GP-Hedge, and show that this method outperforms the
best individual acquisition function. We also provide a theoretical bound on
the algorithm's performance.
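The bandit layer of this approach follows the classic Hedge (multiplicative
weights) scheme: sample an acquisition function in proportion to its weight,
then reweight by an exponential of its reward. The sketch below is a generic
Hedge implementation, not the paper's exact GP-Hedge variant; the reward signal
(e.g. the GP posterior mean at each nominated point) and the learning rate eta
are placeholders.

```python
import math
import random

def hedge_select(weights):
    # Sample an arm (acquisition function) with probability
    # proportional to its weight -- the Hedge sampling rule.
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if w > 0 and r <= 0:
            return i
    return len(weights) - 1

def hedge_update(weights, rewards, eta=0.1):
    # Multiplicative-weights update: arms whose nominated points
    # looked good gain probability mass in later rounds.
    return [w * math.exp(eta * r) for w, r in zip(weights, rewards)]

# Three hypothetical acquisition functions (say EI, PI, UCB) start equal;
# one round of rewards shifts mass toward the best performer.
w = hedge_update([1.0, 1.0, 1.0], [1.0, 0.0, 0.5])
```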
|