id | title | categories | abstract
|---|---|---|---|
0906.3149
|
Semi-Myopic Sensing Plans for Value Optimization
|
cs.AI
|
We consider the following sequential decision problem. Given a set of items
of unknown utility, we need to select one of as high a utility as possible
(``the selection problem''). Measurements (possibly noisy) of item values prior
to selection are allowed, at a known cost. The goal is to optimize the overall
sequential decision process of measurements and selection.
Value of information (VOI) is a well-known scheme for selecting measurements,
but the intractability of the problem typically leads to using myopic VOI
estimates. In the selection problem, myopic VOI frequently badly underestimates
the value of information, leading to inferior sensing plans. We relax the
strict myopic assumption into a scheme we term semi-myopic, providing a
spectrum of methods that can improve the performance of sensing plans. In
particular, we propose the efficiently computable method of ``blinkered'' VOI,
and examine theoretical bounds for special cases. Empirical evaluation of
``blinkered'' VOI in the selection problem with normally distributed item
values shows that it performs much better than pure myopic VOI.
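For normally distributed item values, the myopic VOI of a single noise-free measurement has a closed form. The sketch below is our illustration, not code from the paper (the function names are ours): it computes the expected gain from perfectly measuring an item with prior N(mu, sigma^2) against the best competing prior mean c, using E[max(X, c)] = mu*Phi(d) + c*Phi(-d) + sigma*phi(d) with d = (mu - c)/sigma.

```python
import math

def _phi(x):  # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def myopic_voi(mu, sigma, best_other):
    """Expected gain from one perfect measurement of an item with prior
    N(mu, sigma^2), when the best competing item has mean `best_other`.
    Uses E[max(X, c)] = mu*Phi(d) + c*Phi(-d) + sigma*phi(d),
    d = (mu - c) / sigma."""
    d = (mu - best_other) / sigma
    e_max = mu * _Phi(d) + best_other * _Phi(-d) + sigma * _phi(d)
    return e_max - max(mu, best_other)
```

When the measurement cannot change which item is selected (mu far above or below the competition), this myopic estimate goes to zero, which is exactly how it can underestimate the value of measurement sequences.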
|
0906.3173
|
Compressed Sensing of Block-Sparse Signals: Uncertainty Relations and
Efficient Recovery
|
cs.IT math.IT
|
We consider compressed sensing of block-sparse signals, i.e., sparse signals
that have nonzero coefficients occurring in clusters. An uncertainty relation
for block-sparse signals is derived, based on a block-coherence measure, which
we introduce. We then show that a block-version of the orthogonal matching
pursuit algorithm recovers block $k$-sparse signals in no more than $k$ steps
if the block-coherence is sufficiently small. The same condition on
block-coherence is shown to guarantee successful recovery through a mixed
$\ell_2/\ell_1$-optimization approach. This complements previous recovery
results for the block-sparse case which relied on small block-restricted
isometry constants. The significance of the results presented in this paper
lies in the fact that making explicit use of block-sparsity can provably yield
better reconstruction properties than treating the signal as being sparse in
the conventional sense, thereby ignoring the additional structure in the
problem.
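A minimal sketch of a block version of orthogonal matching pursuit of the kind described above, for equally sized blocks (our illustration; names are ours and noise handling is omitted): each step selects the block of columns with the largest correlation energy against the residual, then re-fits by least squares over all selected blocks.

```python
import numpy as np

def block_omp(A, y, block_size, k):
    """Recover a block k-sparse x from y = A @ x by greedily selecting
    one block of columns per step, then least-squares re-fitting."""
    n_blocks = A.shape[1] // block_size
    chosen, residual = [], y.astype(float)
    for _ in range(k):
        # correlation energy ||A_b^T r||_2 of each block with the residual
        energies = [
            np.linalg.norm(A[:, b * block_size:(b + 1) * block_size].T @ residual)
            for b in range(n_blocks)
        ]
        b = int(np.argmax(energies))
        if b not in chosen:
            chosen.append(b)
        support = np.concatenate(
            [np.arange(c * block_size, (c + 1) * block_size) for c in chosen]
        )
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With small block-coherence (e.g. near-orthonormal columns), the correct blocks are identified in exactly k steps, mirroring the recovery guarantee stated above.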
|
0906.3183
|
Approximate Characterizations for the Gaussian Source Broadcast
Distortion Region
|
cs.IT math.IT
|
We consider the joint source-channel coding problem of sending a Gaussian
source on a K-user Gaussian broadcast channel with bandwidth mismatch. A new
outer bound to the achievable distortion region is derived using the technique
of introducing more than one additional auxiliary random variable, which was
previously used to derive sum-rate lower bound for the symmetric Gaussian
multiple description problem. By combining this outer bound with the
achievability result based on source-channel separation, we provide approximate
characterizations of the achievable distortion region within constant
multiplicative factors. Furthermore, we show that the results can be extended
to general broadcast channels, and the performance of the source-channel
separation based approach is also within the same constant multiplicative
factors of the optimum.
|
0906.3192
|
Secured Communication over Frequency-Selective Fading Channels: a
practical Vandermonde precoding
|
cs.IT math.IT
|
In this paper, we study the frequency-selective broadcast channel with
confidential messages (BCC) in which the transmitter sends a confidential
message to receiver 1 and a common message to receivers 1 and 2. In the case of
a block transmission of N symbols followed by a guard interval of L symbols,
the frequency-selective channel can be modeled as an N x (N+L) Toeplitz matrix.
For this special type of multiple-input multiple-output (MIMO) channels, we
propose a practical Vandermonde precoding that consists of projecting the
confidential messages in the null space of the channel seen by receiver 2 while
superposing the common message. For this scheme, we provide the achievable rate
region, i.e. the rate-tuple of the common and confidential messages, and
characterize the optimal covariance inputs for some special cases of interest.
It is proved that the proposed scheme achieves the optimal degree of freedom
(d.o.f.) region. More specifically, it enables sending l <= L confidential
messages and N-l common messages simultaneously over a block of N+L symbols.
Interestingly, the proposed scheme can be applied to secured multiuser
scenarios such as the K+1-user frequency-selective BCC with K confidential
messages and the two-user frequency-selective BCC with two confidential
messages. For each scenario, we provide the achievable secrecy degree of
freedom (s.d.o.f.) region of the corresponding frequency-selective BCC and
prove the optimality of the Vandermonde precoding. One of the appealing
features of the proposed scheme is that it does not require any specific
secrecy encoding technique but can be applied on top of any existing powerful
encoding schemes.
|
0906.3200
|
On the Compound MIMO Broadcast Channels with Confidential Messages
|
cs.IT math.IT
|
We study the compound multi-input multi-output (MIMO) broadcast channel with
confidential messages (BCC), where one transmitter sends a common message to
two receivers and two confidential messages respectively to each receiver. The
channel state may take one of a finite set of states, and the transmitter knows
the state set but does not know the realization of the state. We study
achievable rates with perfect secrecy in the high SNR regime by characterizing
an achievable secrecy degree of freedom (s.d.o.f.) region for two models, the
Gaussian MIMO-BCC and the ergodic fading multi-input single-output (MISO)-BCC
without a common message. We show that by exploiting an additional temporal
dimension due to state variation in the ergodic fading model, the achievable
s.d.o.f. region can be significantly improved compared to the Gaussian model
with a constant state, although at the price of a larger delay.
|
0906.3234
|
Asymptotic Analysis of MAP Estimation via the Replica Method and
Applications to Compressed Sensing
|
cs.IT math.IT
|
The replica method is a non-rigorous but well-known technique from
statistical physics used in the asymptotic analysis of large, random, nonlinear
problems. This paper applies the replica method, under the assumption of
replica symmetry, to study estimators that are maximum a posteriori (MAP) under
a postulated prior distribution. It is shown that with random linear
measurements and Gaussian noise, the replica-symmetric prediction of the
asymptotic behavior of the postulated MAP estimate of an n-dimensional vector
"decouples" as n scalar postulated MAP estimators. The result is based on
applying a hardening argument to the replica analysis of postulated posterior
mean estimators of Tanaka and of Guo and Verdu.
The replica-symmetric postulated MAP analysis can be readily applied to many
estimators used in compressed sensing, including basis pursuit, lasso, linear
estimation with thresholding, and zero norm-regularized estimation. In the case
of lasso estimation the scalar estimator reduces to a soft-thresholding
operator, and for zero norm-regularized estimation it reduces to a
hard-threshold. Among other benefits, the replica method provides a
computationally-tractable method for precisely predicting various performance
metrics including mean-squared error and sparsity pattern recovery probability.
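The decoupled scalar estimators mentioned above are simple to state; a short sketch of the two standard operators (our code, standard definitions):

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar lasso (l1-regularized) estimator:
    argmin_x 0.5 * (z - x)**2 + t * |x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard_threshold(z, t):
    """Scalar zero-norm-regularized estimator: keep z only if |z| > t."""
    return np.where(np.abs(z) > t, z, 0.0)
```

Soft thresholding shrinks every surviving coefficient by t, while hard thresholding leaves survivors untouched; this is the qualitative difference between the lasso and zero norm-regularized scalar estimators.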
|
0906.3235
|
Simplicity via Provability for Universal Prefix-free Turing Machines
|
cs.IT cs.LO math.IT
|
Universality is one of the most important ideas in computability theory.
There are various criteria of simplicity for universal Turing machines.
Probably the most popular one is to count the number of states/symbols. This
criterion is more complex than it may appear at first glance. In this note we
review recent results in Algorithmic Information Theory and propose three new
criteria of simplicity for universal prefix-free Turing machines. These
criteria refer to the possibility of proving various natural properties of such
a machine (its universality, for example) in a formal theory, PA or ZFC. In all
cases some, but not all, machines are simple.
|
0906.3282
|
Maximum Error Modeling for Fault-Tolerant Computation using Maximum a
posteriori (MAP) Hypothesis
|
cs.IT math.IT
|
The application of current generation computing machines in safety-centric
applications like implantable biomedical chips and automobile safety has
immensely increased the need for reviewing the worst-case error behavior of
computing devices for fault-tolerant computation. In this work, we propose an
exact probabilistic error model that can compute the maximum error over all
possible input space in a circuit specific manner and can handle various types
of structural dependencies in the circuit. We also provide the worst-case input
vector, which has the highest probability to generate an erroneous output, for
any given logic circuit. We also present a study of circuit-specific error
bounds for fault-tolerant computation in heterogeneous circuits using the
maximum error computed for each circuit. We model the error estimation problem
as a maximum a posteriori (MAP) estimate, over the joint error probability
function of the entire circuit, calculated efficiently through an intelligent
search of the entire input space using probabilistic traversal of a binary join
tree with the Shenoy-Shafer algorithm. We demonstrate this model using MCNC and
ISCAS benchmark circuits and validate it using an equivalent HSpice model. Both
results yield the same worst-case input vectors and the highest % difference of
our error model over HSpice is just 1.23%. We observe that the maximum error
probabilities are significantly larger than the average error probabilities,
and provide much tighter error bounds for fault-tolerant computation. We
also find that the error estimates depend on the specific circuit structure and
the maximum error probabilities are sensitive to the individual gate failure
probabilities.
|
0906.3313
|
Efficient And Portable SDR Waveform Development: The Nucleus Concept
|
cs.IT cs.NI math.IT
|
Future wireless communication systems should be flexible to support different
waveforms (WFs) and be cognitive to sense the environment and tune themselves.
This has led to tremendous interest in software defined radios (SDRs).
Constraints like throughput, latency and low energy demand high implementation
efficiency. The tradeoff of going for a highly efficient implementation is the
increase of porting effort to a new hardware (HW) platform. In this paper, we
propose a novel concept for WF development, the Nucleus concept, that exploits
the common structure in various wireless signal processing algorithms and
provides a way for efficient and portable implementation. Tool assisted WF
mapping and exploration is done efficiently by propagating the implementation
and interface properties of Nuclei. The Nucleus concept aims at providing
software flexibility with high level programmability, but at the same time
limiting HW flexibility to maximize area and energy efficiency.
|
0906.3323
|
Adaptive Regularization of Ill-Posed Problems: Application to Non-rigid
Image Registration
|
cs.CV
|
We introduce an adaptive regularization approach. In contrast to conventional
Tikhonov regularization, which specifies a fixed regularization operator, we
estimate it simultaneously with parameters. From a Bayesian perspective we
estimate the prior distribution on parameters assuming that it is close to some
given model distribution. We constrain the prior distribution to be a
Gauss-Markov random field (GMRF), which allows us to solve for the prior
distribution analytically and provides a fast optimization algorithm. We apply
our approach to non-rigid image registration to estimate the spatial
transformation between two images. Our evaluation shows that the adaptive
regularization approach significantly outperforms standard variational methods.
|
0906.3352
|
Spreading Code and Widely-Linear Receiver Design: Non-Cooperative Games
for Wireless CDMA Networks
|
cs.IT cs.GT math.IT
|
The issue of non-cooperative transceiver optimization in the uplink of a
multiuser wireless code division multiple access data network with
widely-linear detection at the receiver is considered. While previous work in
this area has focused on a simple real signal model, in this paper a baseband
complex representation of the data is used, so as to properly take into account
the I and Q components of the received signal. For the case in which the
received signal is improper, a widely-linear reception structure, processing
separately the data and their complex conjugates, is considered. Several
non-cooperative resource allocation games are considered for this new scenario,
and the performance gains granted by the use of widely-linear detection are
assessed through theoretical analysis. Numerical results confirm the validity
of the theoretical findings, and show that exploiting the improper nature of
the data in non-cooperative resource allocation brings remarkable performance
improvements in multiuser wireless systems.
|
0906.3410
|
Quasi-cyclic LDPC codes with high girth
|
cs.IT math.IT
|
We study a class of quasi-cyclic LDPC codes. We provide precise conditions
guaranteeing high girth in their Tanner graph. Experimentally, the codes we
propose perform no worse than random LDPC codes with the same parameters,
which is a significant achievement for algebraic codes.
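As an illustration of the kind of condition involved (our sketch, not the paper's construction): a Tanner graph has girth greater than 4 exactly when no two rows of the parity-check matrix H share ones in more than one column, which is easy to test for an H assembled from circulant blocks.

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix with columns cyclically shifted."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_H(shifts, size):
    """Quasi-cyclic parity-check matrix from a grid of shift exponents."""
    return np.block([[circulant(size, s) for s in row] for row in shifts])

def girth_at_least_6(H):
    """True iff the Tanner graph of H has no 4-cycles, i.e. no pair of
    rows of H shares ones in two or more columns."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool(overlap.max() <= 1)
```

Choosing the shift exponents so that this test passes is the simplest instance of the girth conditions the abstract refers to; higher-girth conditions constrain longer alternating sums of shifts.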
|
0906.3421
|
Q-system Cluster Algebras, Paths and Total Positivity
|
q-fin.EC cs.SI physics.soc-ph
|
We review the solution of the $A_r$ Q-systems in terms of the partition
function of paths on a weighted graph, and show that it is possible to modify
the graphs and transfer matrices so as to provide an explicit connection to the
theory of planar networks introduced in the context of totally positive
matrices by Fomin and Zelevinsky.
|
0906.3461
|
AIS for Misbehavior Detection in Wireless Sensor Networks: Performance
and Design Principles
|
cs.NI cs.AI cs.CR cs.PF
|
A sensor network is a collection of wireless devices that are able to monitor
physical or environmental conditions. These devices (nodes) are expected to
operate autonomously, be battery powered and have very limited computational
capabilities. This makes the task of protecting a sensor network against
misbehavior or possible malfunction a challenging problem. In this document we
discuss the performance of artificial immune systems (AIS) when used as the
mechanism for detecting misbehavior.
We show that (i) mechanisms of the AIS have to be carefully applied in order
to avoid security weaknesses, (ii) the choice of genes and their interaction
have a profound influence on the performance of the AIS, (iii) randomly created
detectors do not comply with limitations imposed by communications protocols
and (iv) the data traffic pattern does not seem to significantly impact the
overall performance.
We identified a specific MAC-layer-based gene that proved especially useful
for detection; genes measure a network's performance from a node's
viewpoint. Furthermore, we identified an interesting complementarity property
of genes; this property exploits the local nature of sensor networks and moves
the burden of excessive communication from normally behaving nodes to
misbehaving nodes. These results have a direct impact on the design of AIS for
sensor networks and on engineering of sensor networks.
|
0906.3499
|
Convergence of fixed-point continuation algorithms for matrix rank
minimization
|
math.OC cs.IT math.IT
|
The matrix rank minimization problem has applications in many fields such as
system identification, optimal control, low-dimensional embedding, etc. As this
problem is NP-hard in general, its convex relaxation, the nuclear norm
minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen
proposed a fixed-point continuation algorithm for solving the nuclear norm
minimization problem. By incorporating an approximate singular value
decomposition technique in this algorithm, the solution to the matrix rank
minimization problem is usually obtained. In this paper, we study the
convergence/recoverability properties of the fixed-point continuation algorithm
and its variants for matrix rank minimization. Heuristics for determining the
rank of the matrix when its true rank is not known are also proposed. Some of
these algorithms are closely related to greedy algorithms in compressed
sensing. Numerical results for these algorithms for solving affinely
constrained matrix rank minimization problems are reported.
|
0906.3554
|
On the Algorithmic Nature of the World
|
cs.CC cs.IT math.IT
|
We propose a test based on the theory of algorithmic complexity and an
experimental evaluation of Levin's universal distribution to identify evidence
in support of or in contravention of the claim that the world is algorithmic in
nature. To this end we have undertaken a statistical comparison of the
frequency distributions of data from physical sources on the one
hand--repositories of information such as images, data stored in a hard drive,
computer programs and DNA sequences--and the frequency distributions generated
by purely algorithmic means on the other--by running abstract computing devices
such as Turing machines, cellular automata and Post Tag systems. Statistical
correlations were found and their significance measured.
|
0906.3585
|
Finding Significant Subregions in Large Image Databases
|
cs.DB cs.CV cs.IR
|
Images have become an important data source in many scientific and commercial
domains. Analysis and exploration of image collections often requires the
retrieval of the best subregions matching a given query. The support of such
content-based retrieval requires not only the formulation of an appropriate
scoring function for defining relevant subregions but also the design of new
access methods that can scale to large databases. In this paper, we propose a
solution to this problem of querying significant image subregions. We design a
scoring scheme to measure the similarity of subregions. Our similarity measure
extends to any image descriptor. All the images are tiled and each alignment of
the query and a database image produces a tile score matrix. We show that the
problem of finding the best connected subregion from this matrix is NP-hard and
develop a dynamic programming heuristic. With this heuristic, we develop two
index based scalable search strategies, TARS and SPARS, to query patterns in a
large image repository. These strategies are general enough to work with other
scoring schemes and heuristics. Experimental results on real image datasets
show that TARS saves more than 87% query time on small queries, and SPARS saves
up to 52% query time on large queries as compared to linear search. Qualitative
tests on synthetic and real datasets achieve precision of more than 80%.
|
0906.3667
|
A Deterministic Equivalent for the Analysis of Correlated MIMO Multiple
Access Channels
|
cs.IT math.IT
|
In this article, novel deterministic equivalents for the Stieltjes transform
and the Shannon transform of a class of large dimensional random matrices are
provided. These results are used to characterise the ergodic rate region of
multiple antenna multiple access channels, when each point-to-point propagation
channel is modelled according to the Kronecker model. Specifically, an
approximation of all rates achieved within the ergodic rate region is derived
and an approximation of the linear precoders that achieve the boundary of the
rate region as well as an iterative water-filling algorithm to obtain these
precoders are provided. An original feature of this work is that the proposed
deterministic equivalents are proved valid even for strong correlation patterns
at both communication sides. The above results are validated by Monte Carlo
simulations.
|
0906.3682
|
Large System Analysis of Linear Precoding in Correlated MISO Broadcast
Channels under Limited Feedback
|
cs.IT math.IT
|
In this paper, we study the sum rate performance of zero-forcing (ZF) and
regularized ZF (RZF) precoding in large MISO broadcast systems under the
assumptions of imperfect channel state information at the transmitter and
per-user channel transmit correlation. Our analysis assumes that the number of
transmit antennas $M$ and the number of single-antenna users $K$ are large
while their ratio remains bounded. We derive deterministic approximations of
the empirical signal-to-interference plus noise ratio (SINR) at the receivers,
which are tight as $M,K\to\infty$. In the course of this derivation, the
per-user channel correlation model requires the development of a novel
deterministic equivalent of the empirical Stieltjes transform of large
dimensional random matrices with generalized variance profile. The
deterministic SINR approximations enable us to solve various practical
optimization problems. Under sum rate maximization, we derive (i) for RZF the
optimal regularization parameter, (ii) for ZF the optimal number of users,
(iii) for ZF and RZF the optimal power allocation scheme and (iv) the optimal
amount of feedback in large FDD/TDD multi-user systems. Numerical simulations
suggest that the deterministic approximations are accurate even for small
$M,K$.
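The RZF precoder under study has a simple closed form; a minimal sketch (our code, with an ad-hoc total-power normalization, leaving aside the optimized power allocation the abstract derives):

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularized zero-forcing precoder for a K x M channel H:
    P = (H^H H + alpha * I)^{-1} H^H, up to power normalization.
    alpha -> 0 recovers plain ZF; alpha > 0 trades interference
    suppression for robustness to imperfect CSI."""
    M = H.shape[1]
    P = np.linalg.solve(H.conj().T @ H + alpha * np.eye(M), H.conj().T)
    return P / np.linalg.norm(P)  # unit total transmit power
```

The paper's contribution is precisely how to choose alpha (and the number of users, power, and feedback) from a deterministic SINR approximation rather than by simulation.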
|
0906.3722
|
Two-Dimensional ARMA Modeling for Breast Cancer Detection and
Classification
|
cs.AI cs.CV physics.med-ph
|
We propose a new model-based computer-aided diagnosis (CAD) system for tumor
detection and classification (cancerous vs. benign) in breast images.
Specifically, we show that breast images (x-ray, ultrasound, and MRI) can be
accurately modeled by two-dimensional autoregressive moving-average (ARMA)
random fields. We derive two-stage Yule-Walker least-squares estimates of the
model parameters, which are subsequently used as the basis for statistical inference
and biophysical interpretation of the breast image. We use a k-means classifier
to segment the breast image into three regions: healthy tissue, benign tumor,
and cancerous tumor. Our simulation results on ultrasound breast images
illustrate the power of the proposed approach.
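The segmentation step can be illustrated with a minimal scalar k-means (our sketch; a real system would cluster vectors of ARMA-derived features rather than raw scalars, and the initialization here is a deterministic placeholder):

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Minimal k-means on scalar features, e.g. a per-pixel statistic
    derived from the fitted ARMA model, yielding k tissue classes."""
    values = np.asarray(values, dtype=float)
    centers = np.linspace(values.min(), values.max(), k)  # deterministic init
    for _ in range(iters):
        # assign each value to its nearest center, then recompute centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

With k = 3 the clusters play the roles of healthy tissue, benign tumor, and cancerous tumor in the pipeline described above.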
|
0906.3736
|
Weight Optimization for Consensus Algorithms with Correlated Switching
Topology
|
cs.IT math.IT
|
We design the weights in consensus algorithms with spatially correlated
random topologies. These arise with: 1) networks with spatially correlated
random link failures and 2) networks with randomized averaging protocols. We
show that the weight optimization problem is convex for both symmetric and
asymmetric random graphs. With symmetric random networks, we choose the
consensus mean squared error (MSE) convergence rate as optimization criterion
and explicitly express this rate as a function of the link formation
probabilities, the link formation spatial correlations, and the consensus
weights. We prove that the MSE convergence rate is a convex, nonsmooth function
of the weights, enabling global optimization of the weights for arbitrary link
formation probabilities and link correlation structures. We extend our results
to the case of asymmetric random links. We adopt as optimization criterion the
mean squared deviation (MSdev) of the nodes' states from the current average
state. We prove that MSdev is a convex function of the weights. Simulations
show that significant performance gain is achieved with our weight design
method when compared with methods available in the literature.
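As a point of reference for what is being optimized (our sketch): the common Metropolis weight baseline on a fixed topology, against which optimized weight designs like the one above are typically compared.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weight matrix for an undirected graph: symmetric and
    doubly stochastic, so iteration preserves (and converges to) the
    average on any connected graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def consensus(W, x0, iters=200):
    """Run the linear consensus iteration x <- W x."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x
```

The weight design problem in the abstract replaces these heuristic weights with ones minimizing the MSE convergence rate under random, spatially correlated link failures.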
|
0906.3737
|
On the Beamforming Design for Efficient Interference Alignment
|
cs.IT math.IT
|
An efficient interference alignment (IA) scheme is developed for $K$-user
single-input single-output frequency selective fading interference channels.
The main idea is to steer the transmit beamforming matrices such that at each
receiver the subspace dimensions occupied by interference-free desired streams
are asymptotically the same as those occupied by all interferences. Our
proposed scheme achieves a higher multiplexing gain at any given number of
channel realizations in comparison with the original IA scheme, which is known
to achieve the optimal multiplexing gain asymptotically.
|
0906.3741
|
How opinions are received by online communities: A case study on
Amazon.com helpfulness votes
|
cs.CL cs.IR physics.data-an physics.soc-ph
|
There are many on-line settings in which users publicly express opinions. A
number of these offer mechanisms for other users to evaluate these opinions; a
canonical example is Amazon.com, where reviews come with annotations like "26
of 32 people found the following review helpful." Opinion evaluation appears in
many off-line settings as well, including market research and political
campaigns. Reasoning about the evaluation of an opinion is fundamentally
different from reasoning about the opinion itself: rather than asking, "What
did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here
we develop a framework for analyzing and modeling opinion evaluation, using a
large-scale collection of Amazon book reviews as a dataset. We find that the
perceived helpfulness of a review depends not just on its content but also, in
subtle ways, on how the expressed evaluation relates to other
evaluations of the same product. As part of our approach, we develop novel
methods that take advantage of the phenomenon of review "plagiarism" to control
for the effects of text in opinion evaluation, and we provide a simple and
natural mathematical model consistent with our findings. Our analysis also
allows us to distinguish among the predictions of competing theories from
sociology and social psychology, and to discover unexpected differences in the
collective opinion-evaluation behavior of user populations from different
countries.
|
0906.3770
|
Automatic Defect Detection and Classification Technique from Image: A
Special Case Using Ceramic Tiles
|
cs.CV
|
Quality control is an important issue in the ceramic tile industry, as is
maintaining the rate of production. Moreover, the price of ceramic tiles
depends on the purity of their texture, the accuracy of their color, their
shape, and so on. Considering these criteria, this report proposes an
automated defect detection and classification technique that can ensure
better tile quality in the manufacturing process without sacrificing
production rate. Our proposed method helps the ceramic tile industry detect
defects and control the quality of its tiles. This automated classification
method makes it possible to learn the pattern of a defect within a very short
period of time and to decide on a recovery process, so that defective tiles
are not mixed with fresh ones.
|
0906.3778
|
Modified Euclidean Algorithms for Decoding Reed-Solomon Codes
|
cs.IT math.IT
|
The extended Euclidean algorithm (EEA) for polynomial greatest common
divisors is commonly used in solving the key equation in the decoding of
Reed-Solomon (RS) codes, and more generally in BCH decoding. For this
particular application, the iterations in the EEA are stopped when the degree
of the remainder polynomial falls below a threshold. While determining the
degree of a polynomial is a simple task for human beings, hardware
implementation of this stopping rule is more complicated. This paper describes
a modified version of the EEA that is specifically adapted to the RS decoding
problem. This modified algorithm requires no degree computation or comparison
to a threshold, and it uses a fixed number of iterations. Another advantage of
this modified version is in its application to the errors-and-erasures decoding
problem for RS codes where significant hardware savings can be achieved via
seamless computation.
|
0906.3782
|
On some sufficient conditions for distributed Quality-of-Service support
in wireless networks
|
cs.IT math.IT
|
Given a wireless network where some pairs of communication links interfere
with each other, we study sufficient conditions for determining whether a given
set of minimum bandwidth Quality of Service (QoS) requirements can be
satisfied. We are especially interested in algorithms which have low
communication overhead and low processing complexity. The interference in the
network is modeled using a conflict graph whose vertices are the communication
links in the network. Two links are adjacent in this graph if and only if they
interfere with each other due to being in the same vicinity and hence cannot be
simultaneously active. The problem of scheduling the transmission of the
various links is then essentially a fractional, weighted vertex coloring
problem, for which upper bounds on the fractional chromatic number are sought
using only localized information. We present some distributed algorithms for
this problem, and discuss their worst-case performance. These algorithms are
seen to be within a bounded factor away from optimal for some well known
classes of networks and interference models.
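One simple localized sufficient condition of the flavor discussed above can be checked with a single matrix-vector product (our illustration, not the paper's exact algorithm): if every link's own bandwidth demand plus the demands of all links it conflicts with totals at most 1, the demands can be scheduled.

```python
import numpy as np

def locally_schedulable(adj, demand):
    """Sufficient (not necessary) test: for each vertex of the conflict
    graph, its demand plus its neighbors' demands must not exceed 1."""
    adj = np.asarray(adj)
    demand = np.asarray(demand, dtype=float)
    return bool(np.all(demand + adj @ demand <= 1.0))
```

Each link only needs its neighbors' demands, so the test is fully distributed with one-hop communication, at the cost of conservatism relative to the true fractional chromatic bound.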
|
0906.3815
|
Hybrid Rules with Well-Founded Semantics
|
cs.LO cs.AI cs.PL
|
A general framework is proposed for integration of rules and external first
order theories. It is based on the well-founded semantics of normal logic
programs and inspired by ideas of Constraint Logic Programming (CLP) and
constructive negation for logic programs. Hybrid rules are normal clauses
extended with constraints in the bodies; constraints are certain formulae in
the language of the external theory. A hybrid program is a pair of a set of
hybrid rules and an external theory. Instances of the framework are obtained by
specifying the class of external theories, and the class of constraints. An
example instance is integration of (non-disjunctive) Datalog with ontologies
formalized as description logics.
The paper defines a declarative semantics of hybrid programs and a
goal-driven formal operational semantics. The latter can be seen as a
generalization of SLS-resolution. It provides a basis for hybrid
implementations combining Prolog with constraint solvers. Soundness of the
operational semantics is proven. Sufficient conditions for decidability of the
declarative semantics, and for completeness of the operational semantics are
given.
|
0906.3816
|
A Monte-Carlo Implementation of the SAGE Algorithm for Joint Soft
Multiuser and Channel Parameter Estimation
|
cs.IT math.IT
|
An efficient, joint transmission delay and channel parameter estimation
algorithm is proposed for uplink asynchronous direct-sequence code-division
multiple access (DS-CDMA) systems based on the space-alternating generalized
expectation maximization (SAGE) framework. The marginal likelihood of the
unknown parameters, averaged over the data sequence, as well as the expectation
and maximization steps of the SAGE algorithm are derived analytically. To
implement the proposed algorithm, a Markov Chain Monte Carlo (MCMC) technique,
called Gibbs sampling, is employed to compute the {\em a posteriori}
probabilities of data symbols in a computationally efficient way. Computer
simulations show that the proposed algorithm has excellent estimation
performance. This so-called MCMC-SAGE receiver is guaranteed to converge in
likelihood.
|
0906.3821
|
Relaying Simultaneous Multicast Messages
|
cs.IT math.IT
|
The problem of multicasting multiple messages with the help of a relay, which
may also have an independent message of its own to multicast, is considered. As
a first step to address this general model, referred to as the compound
multiple access channel with a relay (cMACr), the capacity region of the
multiple access channel with a "cognitive" relay is characterized, including
the cases of partial and rate-limited cognition. Achievable rate regions for
the cMACr model are then presented based on decode-and-forward (DF) and
compress-and-forward (CF) relaying strategies. Moreover, an outer bound is
derived for the special case in which each transmitter has a direct link to one
of the receivers while the connection to the other receiver is enabled only
through the relay terminal. Numerical results for the Gaussian channel are also
provided.
|
0906.3849
|
Squeezing the Arimoto-Blahut algorithm for faster convergence
|
cs.IT math.IT stat.CO
|
The Arimoto--Blahut algorithm for computing the capacity of a discrete
memoryless channel is revisited. A so-called ``squeezing'' strategy is used to
design algorithms that preserve its simplicity and monotonic convergence
properties, but have provably better rates of convergence.
|
0906.3864
|
The Two-Tap Input-Erasure Gaussian Channel and its Application to
Cellular Communications
|
cs.IT math.IT
|
This paper considers the input-erasure Gaussian channel. In contrast to the
output-erasure channel where erasures are applied to the output of a linear
time-invariant (LTI) system, here erasures, known to the receiver, are applied
to the inputs of the LTI system. Focusing on the case where the input symbols
are independent and identically distributed (i.i.d.), it is shown that the two
channels (input- and output-erasure) are equivalent. Furthermore, assuming that
the LTI system consists of a two-tap finite impulse response (FIR) filter, and
using simple properties of tri-diagonal matrices, an achievable rate expression
is presented in the form of an infinite sum. The results are then used to study
the benefits of joint multicell processing (MCP) over single-cell processing
(SCP) in a simple linear cellular uplink, where each mobile terminal is
received by only the two nearby base-stations (BSs). Specifically, the analysis
accounts for ergodic shadowing that simultaneously blocks the mobile terminal
(MT) signal from being received by the two BSs. It is shown that the resulting
ergodic per-cell capacity with optimal MCP is equivalent to that of the two-tap
input-erasure channel. Finally, the same cellular uplink is addressed by
accounting for dynamic user activity, which is modelled by assuming that each
MT is randomly selected to be active or to remain silent throughout the whole
transmission block. For this alternative model, a similar equivalence result to
the input-erasure channel is reported.
|
0906.3883
|
Diversity Analysis of Peaky FSK Signaling in Fading Channels
|
cs.IT math.IT
|
Error performance of noncoherent detection of on-off frequency shift keying
(OOFSK) modulation over fading channels is analyzed when the receiver is
equipped with multiple antennas. The analysis is conducted for two cases: 1)
the case in which the receiver has the channel distribution knowledge only; and
2) the case in which the receiver perfectly knows the fading magnitudes. For
both cases, the maximum a posteriori probability (MAP) detection rule is
derived and analytical probability of error expressions are obtained. Numerical
and simulation results indicate that for sufficiently low duty cycle values,
lower error probabilities with respect to FSK signaling are achieved.
Equivalently, when compared to FSK modulation, OOFSK with low duty cycle
requires less energy to achieve the same probability of error, which renders
this modulation a more energy-efficient transmission technique. Also, through
numerical results, the impact of the number of antennas, antenna correlation,
duty cycle values, and unknown channel fading on the performance is
investigated.
|
0906.3887
|
Energy-Efficient Modulation Design for Reliable Communication in
Wireless Networks
|
cs.IT math.IT
|
In this paper, we have considered the optimization of the $M$-ary quadrature
amplitude modulation (MQAM) constellation size to minimize the bit energy
consumption under average bit error rate (BER) constraints. In the computation
of the energy expenditure, the circuit, transmission, and retransmission
energies are taken into account. A combined log-normal shadowing and Rayleigh
fading model is employed to model the wireless channel. The link reliabilities
and retransmission probabilities are determined through the outage
probabilities under log-normal shadowing effects. Both single-hop and multi-hop
transmissions are considered. Through numerical results, the optimal
constellation sizes are identified. Several interesting observations with
practical implications are made. It is seen that while large constellations are
preferred at small transmission distances, constellation size should be
decreased as the distance increases. Similar trends are observed in both fixed
and variable transmit power scenarios. We have noted that variable power
schemes can attain higher energy-efficiencies. The analysis of energy-efficient
modulation design is also conducted in multi-hop linear networks. In this case,
the modulation size and routing paths are jointly optimized, and the analysis
of both the bit energy and delay experienced in the linear network is provided.
|
0906.3888
|
Effective Capacity Analysis of Cognitive Radio Channels for Quality of
Service Provisioning
|
cs.IT math.IT
|
In this paper, cognitive transmission under quality of service (QoS)
constraints is studied. In the cognitive radio channel model, it is assumed
that the secondary transmitter sends the data at two different average power
levels, depending on the activity of the primary users, which is determined by
channel sensing performed by the secondary users. A state-transition model is
constructed for this cognitive transmission channel. Statistical limitations on
the buffer lengths are imposed to take into account the QoS constraints. The
maximum throughput under these statistical QoS constraints is identified by
finding the effective capacity of the cognitive radio channel. This analysis is
conducted for fixed-power/fixed-rate, fixed-power/variable-rate, and
variable-power/variable-rate transmission schemes under different assumptions
on the availability of channel side information (CSI) at the transmitter. The
impact upon the effective capacity of several system parameters, including
channel sensing duration, detection threshold, detection and false alarm
probabilities, QoS parameters, and transmission rates, is investigated. The
performances of fixed-rate and variable-rate transmission methods are compared
in the presence of QoS limitations. It is shown that variable-rate schemes
outperform fixed-rate transmission techniques if the detection probabilities
are high. Performance gains through adapting the power and rate are quantified
and it is shown that these gains diminish as the QoS limitations become more
stringent.
|
0906.3889
|
Energy Efficiency in the Low-SNR Regime under Queueing Constraints and
Channel Uncertainty
|
cs.IT math.IT
|
Energy efficiency of fixed-rate transmissions is studied in the presence of
queueing constraints and channel uncertainty. It is assumed that neither the
transmitter nor the receiver has channel side information prior to
transmission. The channel coefficients are estimated at the receiver via
minimum mean-square-error (MMSE) estimation with the aid of training symbols.
It is further assumed that the system operates under statistical queueing
constraints in the form of limitations on buffer violation probabilities. The
optimal fraction of power allocated to training is identified. Spectral
efficiency--bit energy tradeoff is analyzed in the low-power and wideband
regimes by employing the effective capacity formulation. In particular, it is
shown that the bit energy increases without bound in the low-power regime as
the average power vanishes. A similar conclusion is reached in the wideband
regime if the number of noninteracting subchannels grows without bound with
increasing bandwidth. On the other hand, it is proven that if the number of
resolvable independent paths and hence the number of noninteracting subchannels
remain bounded as the available bandwidth increases, the bit energy diminishes
to its minimum value in the wideband regime. For this case, expressions for the
minimum bit energy and wideband slope are derived. Overall, energy costs of
channel uncertainty and queueing constraints are identified, and the impact of
multipath richness and sparsity is determined.
|
0906.3923
|
Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model
|
cs.NI cs.LG
|
Forecasting traffic from past observed traffic data with small computational
complexity is an important problem for the planning of servers and networks.
Focusing on World Wide Web (WWW) traffic as a fundamental investigation, this
paper deals with Bayesian forecasting of network traffic on the time-varying
Poisson model from the viewpoint of statistical decision theory. Under this
model, we show that the forecasting estimate is obtained by simple arithmetic
calculation and expresses real WWW traffic well from both theoretical and
empirical points of view.
|
0906.3926
|
Soft Constraints for Quality Aspects in Service Oriented Architectures
|
cs.AI cs.PL
|
We propose the use of Soft Constraints as a natural way to model Service
Oriented Architecture. In the framework, constraints are used to model
components and connectors and constraint aggregation is used to represent their
interactions. The "quality of a service" is measured and considered when
performing queries to service providers. Examples include the levels of cost,
performance, and availability required by clients. In our framework, the
QoS scores are represented by the softness level of the constraint and the
measure of complex (web) services is computed by combining the levels of the
components.
|
0906.3988
|
Theoretical Limits on Time Delay Estimation for Ultra-Wideband Cognitive
Radios
|
cs.IT math.IT
|
In this paper, theoretical limits on time delay estimation are studied for
ultra-wideband (UWB) cognitive radio systems. For a generic UWB spectrum with
dispersed bands, the Cramer-Rao lower bound (CRLB) is derived for unknown
channel coefficients and carrier-frequency offsets (CFOs). Then, the effects of
unknown channel coefficients and CFOs are investigated for linearly and
non-linearly modulated training signals by obtaining specific CRLB expressions.
It is shown that for linear modulations with a constant envelope, the effects
of the unknown parameters can be mitigated. Finally, numerical results, which
support the theoretical analysis, are presented.
|
0906.4008
|
Two generalizations on the minimum Hamming distance of repeated-root
constacyclic codes
|
cs.IT math.IT
|
We study constacyclic codes, of length $np^s$ and $2np^s$, that are generated
by the polynomials $(x^n + \gamma)^{\ell}$ and $(x^n - \xi)^i(x^n + \xi)^j$,
respectively, where $x^n + \gamma$, $x^n - \xi$ and $x^n + \xi$ are irreducible
over the alphabet $\F_{p^a}$. We generalize the results of [5], [6] and [7] by
computing the minimum Hamming distance of these codes. As a particular case, we
determine the minimum Hamming distance of cyclic and negacyclic codes, of
length $2p^s$, over a finite field of characteristic $p$.
|
0906.4012
|
Reduced-Feedback Opportunistic Scheduling and Beamforming with GMD for
MIMO-OFDMA
|
cs.IT math.IT
|
Opportunistic scheduling and beamforming schemes have been proposed
previously by the authors for reduced-feedback MIMO-OFDMA downlink systems
where the MIMO channel of each subcarrier is decomposed into layered spatial
subchannels. It has been demonstrated that significant feedback reduction can
be achieved by returning information about only one beamforming matrix (BFM)
for all subcarriers from each MT, compared to one BFM for each subcarrier in
the conventional schemes. However, since the previously proposed channel
decomposition was derived based on singular value decomposition, the resulting
system performance is impaired by the subchannels associated with the smallest
singular values. To circumvent this obstacle, this work proposes improved
opportunistic scheduling and beamforming schemes based on geometric mean
decomposition-based channel decomposition. In addition to the inherent
advantage in reduced feedback, the proposed schemes can achieve improved system
performance by decomposing the MIMO channels into spatial subchannels with more
evenly distributed channel gains. Numerical results confirm the effectiveness
of the proposed opportunistic scheduling and beamforming schemes.
|
0906.4026
|
A Quantum-based Model for Interactive Information Retrieval (extended
version)
|
cs.IR cs.DL
|
Even the best information retrieval model cannot always identify the most
useful answers to a user query. This is in particular the case with web search
systems, where it is known that users tend to minimise their effort to access
relevant information. It is, however, believed that the interaction between
users and a retrieval system, such as a web search engine, can be exploited to
provide better answers to users. Interactive Information Retrieval (IR)
systems, in which users access information through a series of interactions
with the search system, are concerned with building models for IR, where
interaction plays a central role. There are many possible interactions between
a user and a search system, ranging from query (re)formulation to relevance
feedback. However, capturing them within a single framework is difficult and
previously proposed approaches have mostly focused on relevance feedback. In
this paper, we propose a general framework for interactive IR that is able to
capture the full interaction process in a principled way. Our approach relies
upon a generalisation of the probability framework of quantum physics, whose
strong geometric component can be a key towards a successful interactive IR
model.
|
0906.4032
|
Bayesian two-sample tests
|
cs.LG
|
In this paper, we present two classes of Bayesian approaches to the
two-sample problem. Our first class of methods extends the Bayesian t-test to
include all parametric models in the exponential family and their conjugate
priors. Our second class of methods uses Dirichlet process mixtures (DPM) of
such conjugate-exponential distributions as flexible nonparametric priors over
the unknown distributions.
|
0906.4036
|
Physical Modeling Techniques in Active Contours for Image Segmentation
|
cs.CV cs.GR
|
A physical modeling method, based on simulating and visualizing physical
principles, is introduced for shape extraction with Active Contours. The
objective of adopting this concept is to address several major difficulties in
the application of Active Contours. Primarily, a
technique is developed to realize the topological changes of Parametric Active
Contours (Snakes). The key strategy is to imitate the process of a balloon
expanding and filling in a closed space with several objects. After removing
the touched balloon surfaces, the objects can be identified by surrounded
remaining balloon surfaces. A burned region swept by the Snakes is utilized to
trace the contour and to give a criterion for stopping the movement of the
Snake curve. Once the Snakes have terminated their evolution, ignoring this
criterion and evolving the Snakes again while continuing the region burning
forms a connected area. The contours extracted from the boundaries of the
burned area then represent the child snake of each object.
Secondly, a novel scheme is designed to solve the problems of contour leakage
through large gaps and of segmentation error in Geometric Active Contours
(GAC). It divides the segmentation procedure into two processing stages. By
simulating wave propagation in an isotropic medium at the final stage, it can
significantly enhance the effect of the image force in level-set-based GAC and
give satisfactory solutions to the two problems.
Thirdly, to support the physical models for active contours above, we introduce
a general image force field created on a template plane over the image plane.
This force is more adaptable to noisy images with complicated geometric shapes.
|
0906.4044
|
Recommender Systems for the Conference Paper Assignment Problem
|
cs.IR cs.AI
|
Conference paper assignment, i.e., the task of assigning paper submissions to
reviewers, presents multi-faceted issues for recommender systems research.
Besides the traditional goal of predicting `who likes what?', a conference
management system must take into account aspects such as: reviewer capacity
constraints, adequate numbers of reviews for papers, expertise modeling,
conflicts of interest, and an overall distribution of assignments that balances
reviewer preferences with conference objectives. Among these, issues of
modeling preferences and tastes in reviewing have traditionally been studied
separately from the optimization of paper-reviewer assignment. In this paper,
we present an integrated study of both these aspects. First, due to the paucity
of data per reviewer or per paper (relative to other recommender systems
applications) we show how we can integrate multiple sources of information to
learn paper-reviewer preference models. Second, our models are evaluated not
just in terms of prediction accuracy but in terms of the end-assignment
quality. Using a linear programming-based assignment optimization formulation,
we show how our approach better explores the space of unsupplied assignments to
maximize the overall affinities of papers assigned to reviewers. We demonstrate
our results on real reviewer preference data from the IEEE ICDM 2007
conference.
|
0906.4096
|
An Event Based Approach To Situational Representation
|
cs.DB cs.AI
|
Many application domains require representing interrelated real-world
activities and/or evolving physical phenomena. In the crisis response domain,
for instance, one may be interested in representing the state of the unfolding
crisis (e.g., forest fire), the progress of the response activities such as
evacuation and traffic control, and the state of the crisis site(s). Such a
situation representation can then be used to support a multitude of
applications including situation monitoring, analysis, and planning. In this
paper, we make a case for an event based representation of situations where
events are defined to be domain-specific significant occurrences in space and
time. We argue that events offer a unifying and powerful abstraction to
building situational awareness applications. We identify challenges in building
an Event Management System (EMS) for which traditional data and knowledge
management systems prove to be limited and suggest possible directions and
technologies to address the challenges.
|
0906.4131
|
Automatic Spatially-Adaptive Balancing of Energy Terms for Image
Segmentation
|
cs.CV
|
Image segmentation techniques are predominately based on parameter-laden
optimization. The objective function typically involves weights for balancing
competing image fidelity and segmentation regularization cost terms. Setting
these weights suitably has been a painstaking, empirical process. Even if such
ideal weights are found for a novel image, most current approaches fix the
weight across the whole image domain, ignoring the spatially-varying properties
of object shape and image appearance. We propose a novel technique that
autonomously balances these terms in a spatially-adaptive manner through the
incorporation of image reliability in a graph-based segmentation framework. We
validate on synthetic data achieving a reduction in mean error of 47% (p-value
<< 0.05) when compared to the best fixed parameter segmentation. We also
present results on medical images (including segmentations of the corpus
callosum and brain tissue in MRI data) and on natural images.
|
0906.4154
|
Distributed Fault Detection in Sensor Networks using a Recurrent Neural
Network
|
cs.NE cs.DC
|
In long-term deployments of sensor networks, monitoring the quality of
gathered data is a critical issue. Over the time of deployment, sensors are
exposed to harsh conditions, causing some of them to fail or to deliver less
accurate data. If such a degradation remains undetected, the usefulness of a
sensor network can be greatly reduced. We present an approach that learns
spatio-temporal correlations between different sensors, and makes use of the
learned model to detect misbehaving sensors by using distributed computation
and only local communication between nodes. We introduce SODESN, a distributed
recurrent neural network architecture, and a learning method to train SODESN
for fault detection in a distributed scenario. Our approach is evaluated using
data from different types of sensors and is able to work well even with
less-than-perfect link qualities and more than 50% of failed nodes.
|
0906.4162
|
A Divergence Formula for Randomness and Dimension (Short Version)
|
cs.CC cs.IT math.IT
|
If $S$ is an infinite sequence over a finite alphabet $\Sigma$ and $\beta$ is
a probability measure on $\Sigma$, then the {\it dimension} of $ S$ with
respect to $\beta$, written $\dim^\beta(S)$, is a constructive version of
Billingsley dimension that coincides with the (constructive Hausdorff)
dimension $\dim(S)$ when $\beta$ is the uniform probability measure. This paper
shows that $\dim^\beta(S)$ and its dual $\Dim^\beta(S)$, the {\it strong
dimension} of $S$ with respect to $\beta$, can be used in conjunction with
randomness to measure the similarity of two probability measures $\alpha$ and
$\beta$ on $\Sigma$. Specifically, we prove that the {\it divergence formula}
$$\dim^\beta(R) = \Dim^\beta(R) =\CH(\alpha) / (\CH(\alpha) + \D(\alpha ||
\beta))$$ holds whenever $\alpha$ and $\beta$ are computable, positive
probability measures on $\Sigma$ and $R \in \Sigma^\infty$ is random with
respect to $\alpha$. In this formula, $\CH(\alpha)$ is the Shannon entropy of
$\alpha$, and $\D(\alpha||\beta)$ is the Kullback-Leibler divergence between
$\alpha$ and $\beta$.
|
0906.4172
|
Rough Set Model for Discovering Hybrid Association Rules
|
cs.DB cs.LG
|
In this paper, the mining of hybrid association rules with a rough set approach
is investigated in the form of the algorithm RSHAR. The RSHAR algorithm
consists mainly of two steps. First, the participating tables are joined into a
general table in order to generate rules expressing the relationship between
two or more domains that belong to several different tables in a database. A
mapping code is then applied to the selected dimension, which can be added
directly into the information system as an ordinary attribute. To find the
association rules, frequent itemsets are generated in the second step, where
candidate itemsets are generated through equivalence classes while also
transforming the mapping code back into real dimensions. The search method for
candidate itemsets is similar to the Apriori algorithm. An analysis of the
performance of the algorithm has been carried out.
|
0906.4228
|
On Chase Termination Beyond Stratification
|
cs.DB cs.AI
|
We study the termination problem of the chase algorithm, a central tool in
various database problems such as the constraint implication problem,
Conjunctive Query optimization, rewriting queries using views, data exchange,
and data integration. The basic idea of the chase is, given a database instance
and a set of constraints as input, to fix constraint violations in the database
instance. It is well-known that, for an arbitrary set of constraints, the chase
does not necessarily terminate (in general, it is even undecidable if it does
or not). Addressing this issue, we review the limitations of existing
sufficient termination conditions for the chase and develop new techniques that
allow us to establish weaker sufficient conditions. In particular, we introduce
two novel termination conditions called safety and inductive restriction, and
use them to define the so-called T-hierarchy of termination conditions. We then
study the interrelations of our termination conditions with previous conditions
and the complexity of checking our conditions. This analysis leads to an
algorithm that checks membership in a level of the T-hierarchy and accounts for
the complexity of termination conditions. As another contribution, we study the
problem of data-dependent chase termination and present sufficient termination
conditions w.r.t. fixed instances. They might guarantee termination although
the chase does not terminate in the general case. As an application of our
techniques beyond those already mentioned, we transfer our results into the
field of query answering over knowledge bases where the chase on the underlying
database may not terminate, making existing algorithms applicable to broader
classes of constraints.
|
0906.4316
|
Constructive Decision Theory
|
cs.GT cs.AI econ.TH
|
In most contemporary approaches to decision making, a decision problem is
described by a set of states and a set of outcomes, and a rich set of acts,
which are functions from states to outcomes over which the decision maker (DM)
has preferences. Most interesting decision problems, however, do not come with
a state space and an outcome space. Indeed, in complex problems it is often far
from clear what the state and outcome spaces would be. We present an
alternative foundation for decision making, in which the primitive objects of
choice are syntactic programs. A representation theorem is proved in the spirit
of standard representation theorems, showing that if the DM's preference
relation on objects of choice satisfies appropriate axioms, then there exist a
set S of states, a set O of outcomes, a way of interpreting the objects of
choice as functions from S to O, a probability on S, and a utility function on
O, such that the DM prefers choice a to choice b if and only if the expected
utility of a is higher than that of b. Thus, the state space and outcome space
are subjective, just like the probability and utility; they are not part of the
description of the problem. In principle, a modeler can test for SEU behavior
without having access to states or outcomes. We illustrate the power of our
approach by showing that it can capture decision makers who are subject to
framing effects.
|
0906.4321
|
Reasoning About Knowledge of Unawareness Revisited
|
cs.AI cs.GT cs.LO
|
In earlier work, we proposed a logic that extends the Logic of General
Awareness of Fagin and Halpern [1988] by allowing quantification over primitive
propositions. This makes it possible to express the fact that an agent knows
that there are some facts of which he is unaware. In that logic, it is not
possible to model an agent who is uncertain about whether he is aware of all
formulas. To overcome this problem, we keep the syntax of the earlier paper,
but allow models where, with each world, a possibly different language is
associated. We provide a sound and complete axiomatization for this logic and
show that, under natural assumptions, the quantifier-free fragment of the logic
is characterized by exactly the same axioms as the logic of Heifetz, Meier, and
Schipper [2008].
|
0906.4326
|
A Logical Characterization of Iterated Admissibility
|
cs.AI cs.GT cs.LO
|
Brandenburger, Friedenberg, and Keisler provide an epistemic characterization
of iterated admissibility (i.e., iterated deletion of weakly dominated
strategies) where uncertainty is represented using LPSs (lexicographic
probability sequences). Their characterization holds in a rich structure called
a complete structure, where all types are possible. Here, a logical
characterization of iterated admissibility is given that involves only standard
probability and holds in all structures, not just complete structures. A
stronger notion of strong admissibility is then defined. Roughly speaking,
strong admissibility is meant to capture the intuition that "all the agent
knows" is that the other agents satisfy the appropriate rationality
assumptions. Strong admissibility makes it possible to relate admissibility,
canonical structures (as typically considered in completeness proofs in modal
logic), complete structures, and the notion of ``all I know''.
|
0906.4327
|
A Rough Sets Partitioning Model for Mining Sequential Patterns with Time
Constraint
|
cs.DB
|
Nowadays, data mining and knowledge discovery methods are applied to a
variety of enterprise and engineering disciplines to uncover interesting
patterns from databases. The study of Sequential patterns is an important data
mining problem due to its wide applications to real world time dependent
databases. Sequential patterns are inter-event patterns ordered over a
time-period associated with specific objects under study. Analysis and
discovery of frequent sequential patterns over a predetermined time-period are
interesting data mining results, and can aid in decision support in many
enterprise applications. The problem of sequential pattern mining poses
computational challenges as a long frequent sequence contains enormous number
of frequent subsequences. Also useful results depend on the right choice of
event window. In this paper, we have studied the problem of sequential pattern
mining through two perspectives, one the computational aspect of the problem
and the other is incorporation and adjustability of time constraint. We have
used Indiscernibility relation from theory of rough sets to partition the
search space of sequential patterns and have proposed a novel algorithm that
allows previsualization of patterns and adjustment of the time constraint prior
to execution of the mining task. The algorithm, Rough Set Partitioning, is at
least ten times faster than the naive time-constraint-based sequential pattern
mining algorithm GSP. Besides this, additional knowledge of the time intervals
of sequential patterns is also determined with the method.
|
0906.4332
|
Updating Sets of Probabilities
|
cs.AI
|
There are several well-known justifications for conditioning as the
appropriate method for updating a single probability measure, given an
observation. However, there is a significant body of work arguing for sets of
probability measures, rather than single measures, as a more realistic model of
uncertainty. Conditioning still makes sense in this context--we can simply
condition each measure in the set individually, then combine the results--and,
indeed, it seems to be the preferred updating procedure in the literature. But
how justified is conditioning in this richer setting? Here we show, by
considering an axiomatic account of conditioning given by van Fraassen, that
the single-measure and sets-of-measures cases are very different. We show that
van Fraassen's axiomatization for the former case is nowhere near sufficient
for updating sets of measures. We give a considerably longer (and not as
compelling) list of axioms that together force conditioning in this setting,
and describe other update methods that are allowed once any of these axioms is
dropped.
|
0906.4415
|
Robust Watermarking in Multiresolution Walsh-Hadamard Transform
|
cs.CR cs.IT cs.MM math.IT
|
In this paper, a newer version of Walsh-Hadamard Transform namely
multiresolution Walsh-Hadamard Transform (MR-WHT) is proposed for images.
Further, a robust watermarking scheme is proposed for copyright protection
using MR-WHT and singular value decomposition. The core idea of the proposed
scheme is to decompose an image using MR-WHT and then middle singular values of
high frequency sub-band at the coarsest and the finest level are modified with
the singular values of the watermark. Finally, a reliable watermark extraction
scheme is developed for the extraction of the watermark from the distorted
image. The experimental results show better visual imperceptibility and
resiliency of the proposed scheme against a variety of intentional or
unintentional attacks.
|
0906.4454
|
Activatability for simulation tractability of NP problems: Application
to Ecology
|
q-bio.QM cs.CE
|
The dynamics of biological-ecological systems depend strongly on spatial
dimensions. Most powerful simulators in ecology account for system spatiality
and thus embed stochastic processes. Due to the difficulty of searching for
particular trajectories, biologists and computer scientists aim at predicting
the most probable trajectories of the systems under study, which considerably
reduces computation times. However, because of the largeness of the space, the
execution time usually remains polynomial. In order to reduce execution times,
we propose an activatability-based search cycle through the process space. This
cycle eliminates redundant processes on a statistical basis (a Generalized
Linear Model) and converges to the minimal number of processes required to
match the simulation objectives.
|
0906.4539
|
Learning with Spectral Kernels and Heavy-Tailed Data
|
cs.LG cs.DS
|
Two ubiquitous aspects of large-scale data analysis are that the data often
have heavy-tailed properties and that diffusion-based or spectral-based methods
are often used to identify and extract structure of interest. Perhaps
surprisingly, popular distribution-independent methods such as those based on
the VC dimension fail to provide nontrivial results for even simple learning
problems such as binary classification in these two settings. In this paper, we
develop distribution-dependent learning methods that can be used to provide
dimension-independent sample complexity bounds for the binary classification
problem in these two popular settings. In particular, we provide bounds on the
sample complexity of maximum margin classifiers when the magnitude of the
entries in the feature vector decays according to a power law and also when
learning is performed with the so-called Diffusion Maps kernel. Both of these
results rely on bounding the annealed entropy of gap-tolerant classifiers in a
Hilbert space. We provide such a bound, and we demonstrate that our proof
technique generalizes to the case when the margin is measured with respect to
more general Banach space norms. The latter result is of potential interest in
cases where modeling the relationship between data elements as a dot product in
a Hilbert space is too restrictive.
|
0906.4560
|
Coordinated Weighted Sampling for Estimating Aggregates Over Multiple
Weight Assignments
|
cs.DB cs.NI
|
Many data sources are naturally modeled by multiple weight assignments over a
set of keys: snapshots of an evolving database at multiple points in time,
measurements collected over multiple time periods, requests for resources
served at multiple locations, and records with multiple numeric attributes.
Over such vector-weighted data we are interested in aggregates with respect to
one set of weights, such as weighted sums, and aggregates over multiple sets of
weights such as the $L_1$ difference.
Sample-based summarization is highly effective for data sets that are too
large to be stored or manipulated. The summary facilitates approximate
processing of queries that may be specified after the summary has been
generated.
Current designs, however, are geared for data sets where a single {\em
scalar} weight is associated with each key.
We develop a sampling framework based on {\em coordinated weighted samples}
that is suited for multiple weight assignments and obtain estimators that are
{\em orders of magnitude tighter} than previously possible.
We demonstrate the power of our methods through an extensive empirical
evaluation on diverse data sets ranging from IP network to stock quotes data.
|
0906.4582
|
On landmark selection and sampling in high-dimensional data analysis
|
stat.ML cs.CV cs.LG
|
In recent years, the spectral analysis of appropriately defined kernel
matrices has emerged as a principled way to extract the low-dimensional
structure often prevalent in high-dimensional data. Here we provide an
introduction to spectral methods for linear and nonlinear dimension reduction,
emphasizing ways to overcome the computational limitations currently faced by
practitioners with massive datasets. In particular, a data subsampling or
landmark selection process is often employed to construct a kernel based on
partial information, followed by an approximate spectral analysis termed the
Nystrom extension. We provide a quantitative framework to analyse this
procedure, and use it to demonstrate algorithmic performance bounds on a range
of practical approaches designed to optimize the landmark selection process. We
compare the practical implications of these bounds by way of real-world
examples drawn from the field of computer vision, whereby low-dimensional
manifold structure is shown to emerge from high-dimensional video data streams.
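As an illustrative sketch of the Nystrom extension the abstract refers to (our simplification, not the authors' code): an eigenpair of the small landmark kernel matrix is computed, then extended to all points via the kernel. The `kernel`, `points`, and landmark choice below are hypothetical; a rank-one kernel is used so the extension is exact.

```python
# Sketch of the Nystrom extension (illustrative, not the authors' implementation):
# approximate the top eigenvector of a large kernel matrix using only the
# rows/columns associated with a small set of landmark points.

def top_eigenpair(mat, iters=500):
    """Top eigenvalue/eigenvector of a small symmetric matrix via power iteration."""
    n = len(mat)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    norm = sum(x * x for x in v) ** 0.5
    return lam, [x / norm for x in v]

def nystrom_extend(kernel, points, landmarks):
    """Extend the landmark eigenvector: u_i = (1/lam) * sum_j K(p_i, l_j) v_j."""
    k_mm = [[kernel(a, b) for b in landmarks] for a in landmarks]
    lam, v = top_eigenpair(k_mm)
    return [sum(kernel(p, landmarks[j]) * v[j] for j in range(len(landmarks))) / lam
            for p in points]

# Rank-one toy kernel K(x, y) = x*y: here the Nystrom extension is exact.
kernel = lambda x, y: float(x * y)
points = [1.0, 2.0, 3.0, 4.0]
approx = nystrom_extend(kernel, points, landmarks=[1.0, 3.0])
```

For this rank-one kernel the extended vector is exactly proportional to the points, which is the sanity check that makes the landmark subsampling idea concrete.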
|
0906.4589
|
Further Analysis on Resource Allocation in Wireless Communications Under
Imperfect Channel State Information
|
cs.IT math.IT
|
This paper has been withdrawn by the author due to some errors.
|
0906.4597
|
Large deviations sum-queue optimality of a radial sum-rate monotone
opportunistic scheduler
|
cs.IT math.IT
|
A centralized wireless system is considered that is serving a fixed set of
users with time varying channel capacities. An opportunistic scheduling rule in
this context selects a user (or users) to serve based on the current channel
state and user queues. Unless the user traffic is symmetric and/or the
underlying capacity region is a polymatroid, little is known concerning how
performance optimal schedulers should tradeoff "maximizing current service
rate" (being opportunistic) versus "balancing unequal queues" (enhancing
user-diversity to enable future high service rate opportunities). By contrast
with currently proposed opportunistic schedulers, e.g., MaxWeight and Exp Rule,
a radial sum-rate monotone (RSM) scheduler de-emphasizes queue-balancing in
favor of greedily maximizing the system service rate as the queue-lengths are
scaled up linearly. In this paper it is shown that an RSM opportunistic
scheduler, p-Log Rule, is not only throughput-optimal, but also maximizes the
asymptotic exponential decay rate of the sum-queue distribution for a two-queue
system. The result complements existing optimality results for opportunistic
scheduling and points to RSM schedulers as a good design choice given the need
for robustness in wireless systems with both heterogeneity and high degree of
uncertainty.
|
0906.4602
|
Minimal Gr\"obner bases and the predictable leading monomial property
|
cs.IT math.IT
|
We focus on Gr\"obner bases for modules of univariate polynomial vectors over
a ring. We identify a useful property, the "predictable leading monomial (PLM)
property" that is shared by minimal Gr\"{o}bner bases of modules in F[x]^q, no
matter what positional term order is used. The PLM property is useful in a
range of applications and can be seen as a strengthening of the well-known
predictable degree property (= row reducedness), a terminology introduced by
Forney in the 70's. Because of the presence of zero divisors, minimal
Gr\"{o}bner bases over a finite ring of the type Z_p^r (where p is a prime
integer and r is an integer >1) do not necessarily have the PLM property. In
this paper we show how to derive, from an ordered minimal Gr\"{o}bner basis, a
so-called "minimal Gr\"{o}bner p-basis" that does have a PLM property. We
demonstrate that minimal Gr\"obner p-bases lend themselves particularly well
to deriving minimal realization parametrizations over Z_p^r. Applications are
in coding and sequences over Z_p^r.
|
0906.4615
|
Diversity-Multiplexing Tradeoff for the Multiple-Antenna Wire-tap
Channel
|
cs.IT math.IT
|
In this paper the fading multiple antenna (MIMO) wire-tap channel is
investigated under short term power constraints. The secret diversity gain and
the secret multiplexing gain are defined. Using these definitions, the secret
diversity-multiplexing tradeoff (DMT) is calculated analytically for no
transmitter side channel state information (CSI) and for full CSI. When there
is no CSI at the transmitter, under the assumption of Gaussian codebooks, it is
shown that the eavesdropper steals both transmitter and receiver antennas, and
the secret DMT depends on the remaining degrees of freedom. When CSI is
available at the transmitter (CSIT), the eavesdropper steals only transmitter
antennas. This dependence on the availability of CSI is unlike the DMT results
without secrecy constraints, where the DMT remains the same for no CSI and full
CSI at the transmitter under short term power constraints. A zero-forcing type
scheme is shown to achieve the secret DMT when CSIT is available.
|
0906.4643
|
The Poisson Channel with Side Information
|
cs.IT math.IT
|
The continuous-time, peak-limited, infinite-bandwidth Poisson channel with
spurious counts is considered. It is shown that if the times at which the
spurious counts occur are known noncausally to the transmitter but not to the
receiver, then the capacity is equal to that of the Poisson channel with no
spurious counts. Knowing the times at which the spurious counts occur only
causally at the transmitter does not increase capacity.
|
0906.4663
|
Acquiring Knowledge for Evaluation of Teachers Performance in Higher
Education using a Questionnaire
|
cs.LG
|
In this paper, we present a step-by-step knowledge acquisition process,
choosing a structured method that uses a questionnaire as the knowledge
acquisition tool. The problem domain is how to evaluate teachers' performance
in higher education through the use of expert system technology. The problem is
how to acquire the specific knowledge for a selected problem efficiently and
effectively from human experts and encode it in a suitable computer format.
Acquiring knowledge from human experts is one of the most commonly cited
problems in expert systems development. The questionnaire was sent to 87 domain
experts at public and private universities in Pakistan, of whom 25 sent their
valuable opinions. Most of the domain experts were highly qualified, well
experienced, and highly responsible persons. The whole questionnaire was
divided into 15 main groups of factors, which were further divided into 99
individual questions. These facts were analyzed further to give the
questionnaire its final shape. This knowledge acquisition technique may be used
as a learning tool for further research work.
|
0906.4675
|
Competition for Popularity in Bipartite Networks
|
physics.soc-ph cs.SI physics.data-an
|
We present a dynamical model for rewiring and attachment in bipartite
networks in which edges are added between nodes that belong to catalogs that
can either be fixed in size or growing in size. The model is motivated by an
empirical study of data from the video rental service Netflix, which invites
its users to give ratings to the videos available in its catalog. We find that
the distribution of the number of ratings given by users and that of the number
of ratings received by videos both follow a power law with an exponential
cutoff. We also examine the activity patterns of Netflix users and find bursts
of intense video-rating activity followed by long periods of inactivity. We
derive ordinary differential equations to model the acquisition of edges by the
nodes over time and obtain the corresponding time-dependent degree
distributions. We then compare our results with the Netflix data and find good
agreement. We conclude with a discussion of how catalog models can be used to
study systems in which agents are forced to choose, rate, or prioritize their
interactions from a very large set of options.
|
0906.4690
|
Fuzzy Logic Based Method for Improving Text Summarization
|
cs.IR
|
Text summarization can be classified into two approaches: extraction and
abstraction. This paper focuses on extraction approach. The goal of text
summarization based on extraction approach is sentence selection. One of the
methods to obtain the suitable sentences is to assign some numerical measure of
a sentence for the summary called sentence weighting and then select the best
ones. The first step in summarization by extraction is the identification of
important features. In our experiment, we used 125 test documents from the
DUC2002 data set. Each document is prepared by a preprocessing pipeline:
sentence segmentation, tokenization, stop-word removal, and word stemming.
Then, we use 8 important features and calculate their scores for each sentence.
We propose text summarization based on fuzzy logic to improve the quality of
the summary created by the general statistical method. We compare our results
with a baseline summarizer and the Microsoft Word 2007 summarizer. The results
show that the best average precision, recall, and F-measure for the summaries
were obtained by the fuzzy method.
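The sentence-weighting idea described above can be sketched schematically (this is our toy, not the paper's system: the three features and fixed weights below are placeholders for the paper's eight features and its fuzzy inference step):

```python
# Schematic extraction-based summarizer: score each sentence from simple
# features, keep the top-weighted ones in original order. Features and
# weights are illustrative placeholders.

def summarize(sentences, keywords, n=2):
    def score(idx, sent):
        words = sent.lower().split()
        f_keywords = sum(w.strip(".,") in keywords for w in words) / len(words)
        f_position = 1.0 if idx == 0 else 1.0 / (idx + 1)   # early sentences favored
        f_length = min(len(words) / 10.0, 1.0)
        return 0.5 * f_keywords + 0.3 * f_position + 0.2 * f_length
    ranked = sorted(range(len(sentences)), key=lambda i: -score(i, sentences[i]))
    chosen = sorted(ranked[:n])                              # keep document order
    return [sentences[i] for i in chosen]

docs = ["The cat studies summarization daily.",
        "Unrelated filler sentence here today.",
        "Summarization methods score each sentence."]
summary = summarize(docs, keywords={"summarization", "sentence"}, n=2)
```

A fuzzy variant would replace the fixed linear combination with membership functions and inference rules over the same feature scores.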
|
0906.4692
|
On optimally partitioning a text to improve its compression
|
cs.DS cs.IT math.IT
|
In this paper we investigate the problem of partitioning an input string T in
such a way that compressing individually its parts via a base-compressor C gets
a compressed output that is shorter than applying C over the entire T at once.
This problem was introduced in the context of table compression, and then
further elaborated and extended to strings and trees. Unfortunately, the
literature offers poor solutions: namely, we know either a cubic-time algorithm
for computing the optimal partition based on dynamic programming, or a few
heuristics that do not guarantee any bounds on the efficacy of their computed
partition, or algorithms that are efficient but work in some specific scenarios
(such as the Burrows-Wheeler Transform) and achieve compression performance
that might be worse than the optimal-partitioning by a $\Omega(\sqrt{\log n})$
factor. Therefore, efficiently computing the optimal solution is still open. In
this paper we provide the first algorithm which is guaranteed to compute in
$O(n \log_{1+\epsilon} n)$ time a partition of T whose compressed output is
guaranteed to be no worse than $(1+\epsilon)$ times the optimal one, where
$\epsilon$ may be any positive constant.
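The cubic-time dynamic program mentioned above (the baseline the paper improves on) can be sketched as follows; this is our toy rendering, with zlib standing in for the base compressor C:

```python
# Dynamic program for optimal text partitioning: choose cut points so that
# the sum of individually compressed parts is minimized. zlib plays the role
# of the base compressor C. Illustrative only; quadratic in the number of
# cut-point pairs, with a compression call per candidate part.
import zlib

def optimal_partition(text):
    n = len(text)
    cost = lambda s: len(zlib.compress(s.encode()))
    best = [0] + [float("inf")] * n     # best[j] = cheapest encoding of text[:j]
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(text[i:j])
            if c < best[j]:
                best[j], cut[j] = c, i
    parts, j = [], n
    while j > 0:                        # recover the chosen cut points
        parts.append(text[cut[j]:j])
        j = cut[j]
    return best[n], parts[::-1]

total, parts = optimal_partition("aaaaaaaabbbbbbbb" * 4)
```

Since the unpartitioned string is always one candidate, the result can never be worse than compressing T in one piece.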
|
0906.4764
|
A Novel Bid Optimizer for Sponsored Search Auctions based on Cooperative
Game Theory
|
cs.GT cs.MA
|
In this paper, we propose a bid optimizer for sponsored keyword search
auctions which leads to better retention of advertisers by yielding attractive
utilities to the advertisers without decreasing the revenue to the search
engine. The bid optimizer is positioned as a key value added tool the search
engine provides to the advertisers. The proposed bid optimizer algorithm
transforms the reported values of the advertisers for a keyword into a
correlated bid profile using many ideas from cooperative game theory. The
algorithm is based on a characteristic form game involving the search engine
and the advertisers. Ideas from Nash bargaining theory are used in formulating
the characteristic form game to provide for a fair share of surplus among the
players involved. The algorithm then computes the nucleolus of the
characteristic form game since we find that the nucleolus is an apt way of
allocating the gains of cooperation among the search engine and the
advertisers. The algorithm next transforms the nucleolus into a correlated bid
profile using a linear programming formulation. This bid profile is input to a
standard generalized second price mechanism (GSP) for determining the
allocation of sponsored slots and the prices to be paid by the winners. The
correlated bid profile that we determine is a locally envy-free equilibrium and
also a correlated equilibrium of the underlying game. Through detailed
simulation experiments, we show that the proposed bid optimizer retains more
customers than a plain GSP mechanism and also yields better long-run utilities
to the search engine and the advertisers.
|
0906.4779
|
Minimum Probability Flow Learning
|
cs.LG physics.data-an stat.ML
|
Fitting probabilistic models to data is often difficult, due to the general
intractability of the partition function and its derivatives. Here we propose a
new parameter estimation technique that does not require computing an
intractable normalization factor or sampling from the equilibrium distribution
of the model. This is achieved by establishing dynamics that would transform
the observed data distribution into the model distribution, and then setting as
the objective the minimization of the KL divergence between the data
distribution and the distribution produced by running the dynamics for an
infinitesimal time. Score matching, minimum velocity learning, and certain
forms of contrastive divergence are shown to be special cases of this learning
technique. We demonstrate parameter estimation in Ising models, deep belief
networks and an independent component analysis model of natural scenes. In the
Ising model case, current state of the art techniques are outperformed by at
least an order of magnitude in learning time, with lower error in recovered
coupling parameters.
|
0906.4789
|
Efficient IRIS Recognition through Improvement of Feature Extraction and
subset Selection
|
cs.CV
|
The selection of the optimal feature subset and the classification has become
an important issue in the field of iris recognition. In this paper we propose
several methods for iris feature subset selection and vector creation. The
deterministic feature sequence is extracted from the iris image by using the
contourlet transform technique. The contourlet transform captures the intrinsic
geometrical structures of the iris image, decomposing it into a set of
directional sub-bands with texture details captured in different orientations
at various scales. To reduce the feature vector dimensions, we use a method
that extracts only the significant bits of information from normalized iris
images, ignoring fragile bits. Finally, we use an SVM (Support Vector Machine)
classifier to estimate identification accuracy in our proposed system.
Experimental results show that the proposed methods reduce processing time and
increase classification accuracy, and that the resulting iris feature vector is
much shorter than those of other methods.
|
0906.4805
|
A Trivial Observation related to Sparse Recovery
|
cs.IT math.IT
|
We make a trivial modification to the elegant analysis of Garg and Khandekar
(\emph{Gradient Descent with Sparsification} ICML 2009) that replaces the
standard Restricted Isometry Property (RIP), with another RIP-type property
(which could be simpler than the RIP, but we are not sure; it could be as hard
as the RIP to check, thereby rendering this little writeup totally worthless).
|
0906.4827
|
Physical Layer Security: Coalitional Games for Distributed Cooperation
|
cs.IT cs.GT math.IT
|
Cooperation between wireless network nodes is a promising technique for
improving the physical layer security of wireless transmission, in terms of
secrecy capacity, in the presence of multiple eavesdroppers. While existing
physical layer security literature answered the question "what are the
link-level secrecy capacity gains from cooperation?", this paper attempts to
answer the question of "how to achieve those gains in a practical decentralized
wireless network and in the presence of a secrecy capacity cost for information
exchange?". For this purpose, we model the physical layer security cooperation
problem as a coalitional game with non-transferable utility and propose a
distributed algorithm for coalition formation. Through the proposed algorithm,
the wireless users can autonomously cooperate and self-organize into disjoint
independent coalitions, while maximizing their secrecy capacity taking into
account the security costs during information exchange. We analyze the
resulting coalitional structures, discuss their properties, and study how the
users can self-adapt the network topology to environmental changes such as
mobility. Simulation results show that the proposed algorithm allows the users
to cooperate and self-organize while improving the average secrecy capacity per
user up to 25.32% relative to the non-cooperative case.
|
0906.4838
|
Forecasting Model for Crude Oil Price Using Artificial Neural Networks
and Commodity Futures Prices
|
cs.NE q-fin.PM
|
This paper presents a model based on a multilayer feedforward neural network
to forecast crude oil spot price direction in the short term, up to three days
ahead. A great deal of attention was paid to finding the optimal ANN model
structure. In addition, several methods of data pre-processing were tested. Our
approach is to create a benchmark based on lagged values of the pre-processed
spot price, then add pre-processed futures prices for one, two, three, and four
months to maturity, one by one and also altogether. The results on the
benchmark suggest that a dynamic model of 13 lags is optimal for forecasting
spot price direction in the short term. Further, the forecast accuracy of the
direction of the market was 78%, 66%, and 53% for one, two, and three days
ahead, respectively. For all the experiments that include futures data as an
input, the results show that in the short term, futures prices do hold new
information on the spot price direction. The results obtained will generate a
comprehensive understanding of crude oil dynamics, which will help investors
and individuals with risk management.
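The data preparation this abstract describes, lagged inputs plus a next-day direction target, can be sketched as follows (our illustration; the series and lag count are made up, not the paper's 13-lag setup):

```python
# Turn a price series into lagged feature rows and an up/down direction
# label for the next day, the standard input shape for a direction forecaster.

def make_lagged_dataset(prices, lags=3):
    X, y = [], []
    for t in range(lags, len(prices) - 1):
        X.append(prices[t - lags:t])                     # the last `lags` prices
        y.append(1 if prices[t + 1] > prices[t] else 0)  # next-day direction
    return X, y

X, y = make_lagged_dataset([10, 11, 12, 11, 13, 12], lags=3)
```

Futures prices at different maturities would simply be appended to each feature row before training.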
|
0906.4846
|
A genetic algorithm for structure-activity relationships: software
implementation
|
cs.NE
|
The design and the implementation of a genetic algorithm are described. The
applicability domain is structure-activity relationships expressed as multiple
linear regressions, with predictor variables drawn from families of
structure-based molecular descriptors. An experiment to compare different
selection and survival strategies was designed and carried out. The genetic
algorithm was run, using the designed experiment, on a set of 206
polychlorinated biphenyls, searching for structure-activity relationships given
the measured octanol-water partition coefficients and a family of molecular
descriptors. The experiment shows that different selection and survival
strategies create different partitions of the entire population of all possible
genotypes.
|
0906.4913
|
Explicit Construction of Optimal Exact Regenerating Codes for
Distributed Storage
|
cs.IT math.IT
|
Erasure coding techniques are used to increase the reliability of distributed
storage systems while minimizing storage overhead. Also of interest is
minimization of the bandwidth required to repair the system following a node
failure. In a recent paper, Wu et al. characterize the tradeoff between the
repair bandwidth and the amount of data stored per node. They also prove the
existence of regenerating codes that achieve this tradeoff.
In this paper, we introduce Exact Regenerating Codes, which are regenerating
codes possessing the additional property of being able to duplicate the data
stored at a failed node. Such codes require low processing and communication
overheads, making the system practical and easy to maintain. Explicit
construction of exact regenerating codes is provided for the minimum bandwidth
point on the storage-repair bandwidth tradeoff, relevant to
distributed-mail-server applications. A subspace-based approach is provided
and shown to yield necessary and sufficient conditions for a linear code to
possess the exact regeneration property, as well as to prove the uniqueness of
our construction.
Also included in the paper, is an explicit construction of regenerating codes
for the minimum storage point for parameters relevant to storage in
peer-to-peer systems. This construction supports a variable number of nodes and
can handle multiple, simultaneous node failures. All constructions given in the
paper are of low complexity, requiring low field size in particular.
|
0906.4927
|
Fast Probabilistic Ranking under x-Relation Model
|
cs.DB
|
The probabilistic top-k queries based on the interplay of score and
probability, under the possible worlds semantic, become an important research
issue that considers both score and uncertainty on the same basis. In the
literature, many different probabilistic top-k queries have been proposed.
Almost all of them need to compute the probability of a tuple t_i being ranked
at the j-th position across the entire set of possible worlds. This computation
is the dominant cost and is known to be O(kn^2), where n is the size of the
dataset. In this paper, we propose a novel algorithm that computes such
probabilities in O(kn).
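Assuming, as is common in this line of work, independent tuples sorted by descending score with existence probabilities, the baseline computation the abstract refers to can be sketched with a Poisson-binomial dynamic program (our illustration of the O(kn^2)-style approach, not the paper's O(kn) algorithm):

```python
# For independent tuples sorted by descending score with existence
# probabilities p[0..n-1], Pr(tuple i is at rank j) equals
# p[i] * Pr(exactly j-1 of the higher-scored tuples exist).

def rank_probabilities(p, k):
    n = len(p)
    table = [[0.0] * k for _ in range(n)]   # table[i][j]: tuple i at rank j+1
    for i in range(n):
        # dp[m] = Pr(exactly m of tuples 0..i-1 exist), truncated at k
        dp = [1.0] + [0.0] * (k - 1)
        for q in p[:i]:
            dp = [dp[m] * (1 - q) + (dp[m - 1] * q if m else 0.0)
                  for m in range(k)]
        for j in range(k):
            table[i][j] = p[i] * dp[j]
    return table

probs = rank_probabilities([0.9, 0.5, 1.0], k=2)
```

Because the inner dp is rebuilt from scratch for each tuple, the cost is quadratic in n; sharing it across tuples is where the speedup comes from.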
|
0906.4973
|
Vision Based Navigation for a Mobile Robot with Different Field of Views
|
cs.RO
|
The basic idea behind evolutionary robotics is to evolve a set of neural
controllers for a particular task at hand. It involves use of various input
parameters such as infrared sensors, light sensors and vision based methods.
This paper aims to explore the evolution of vision based navigation in a mobile
robot. It discusses in detail the effect of different fields of view for a
mobile robot. The individuals have been evolved using different FOV values, and
the results have been recorded and analyzed. The optimum values for FOV have
been proposed after evaluating more than 100 different values. It has been
observed that the optimum FOV value requires fewer generations for
evolution, and a mobile robot trained with that particular value is able to
navigate well in the environment.
|
0906.4982
|
Concept-based Recommendations for Internet Advertisement
|
cs.AI cs.CY cs.IR stat.ML
|
The problem of detecting terms that can be interesting to the advertiser is
considered. If a company has already bought some advertising terms which
describe certain services, it is reasonable to find out the terms bought by
competing companies. Some of these terms can be recommended as future
advertising terms to the company. The goal of this work is to propose better
interpretable recommendations based on formal concept analysis (FCA) and
association rules.
|
0906.5007
|
Spread of Misinformation in Social Networks
|
cs.IT cs.DC math.IT math.PR
|
We provide a model to investigate the tension between information aggregation
and spread of misinformation in large societies (conceptualized as networks of
agents communicating with each other). Each individual holds a belief
represented by a scalar. Individuals meet pairwise and exchange information,
which is modeled as both individuals adopting the average of their pre-meeting
beliefs. When all individuals engage in this type of information exchange, the
society will be able to effectively aggregate the initial information held by
all individuals. There is also the possibility of misinformation, however,
because some of the individuals are "forceful," meaning that they influence the
beliefs of (some) of the other individuals they meet, but do not change their
own opinion. The paper characterizes how the presence of forceful agents
interferes with information aggregation. Under the assumption that even
forceful agents obtain some information (however infrequent) from some others
(and additional weak regularity conditions), we first show that beliefs in this
class of societies converge to a consensus among all individuals. This
consensus value is a random variable, however, and we characterize its
behavior. Our main results quantify the extent of misinformation in the society
by either providing bounds or exact results (in some special cases) on how far
the consensus value can be from the benchmark without forceful agents (where
there is efficient information aggregation). The worst outcomes obtain when
there are several forceful agents and forceful agents themselves update their
beliefs only on the basis of information they obtain from individuals most
likely to have received their own information previously.
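The meeting dynamics described above can be sketched in a toy simulation (ours, not the paper's formal model: the epsilon-pull used for forceful agents is an assumed parameterization):

```python
# Pairwise belief exchange: regular agents adopt the average of their
# pre-meeting beliefs; a "forceful" agent pulls its partner toward its own
# belief and does not move itself.
import random

def simulate(beliefs, forceful, steps=20000, epsilon=0.5, seed=0):
    rng = random.Random(seed)
    b = list(beliefs)
    n = len(b)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        if i in forceful:
            b[j] += epsilon * (b[i] - b[j])    # j is influenced, i keeps its belief
        elif j in forceful:
            b[i] += epsilon * (b[j] - b[i])
        else:
            b[i] = b[j] = (b[i] + b[j]) / 2    # regular pairwise averaging
    return b

# With no forceful agents, beliefs converge to the initial average (benchmark).
final = simulate([0.0, 1.0, 2.0, 3.0], forceful=set())
```

Adding a forceful agent makes the consensus a random variable whose distance from the initial average is exactly the "extent of misinformation" the paper bounds.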
|
0906.5017
|
Collaborative filtering with diffusion-based similarity on tripartite
graphs
|
cs.IR
|
Collaborative tags are playing an increasingly important role in the
organization of information systems. In this paper, we study a personalized
recommendation model making use of the ternary relations among users, objects,
and tags. We propose a measure of user similarity based on each user's
preferences and tagging information. Two kinds of similarities between users
are calculated by
using a diffusion-based process, which are then integrated for recommendation.
We test the proposed method in a standard collaborative filtering framework
with three metrics: ranking score, Recall and Precision, and demonstrate that
it performs better than the commonly used cosine similarity.
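A diffusion-style user similarity in the mass-diffusion flavor can be sketched as follows (our illustration; the paper's construction combines two such similarities, over preferences and over tags, which this toy omits):

```python
# Resource held by user v spreads evenly to the objects v collected, then
# evenly back to the users who collected those objects; the share reaching
# user u serves as a (generally asymmetric) similarity score.

def diffusion_similarity(ratings, u, v):
    """ratings: dict user -> set of objects. Returns resource u receives from v."""
    degree_obj = {}
    for objs in ratings.values():
        for o in objs:
            degree_obj[o] = degree_obj.get(o, 0) + 1
    k_v = len(ratings[v])
    return sum(1.0 / (k_v * degree_obj[o]) for o in ratings[v] if o in ratings[u])

ratings = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"b"}}
sim = diffusion_similarity(ratings, "u1", "u2")
```

Unlike cosine similarity, the diffusion score penalizes popular objects through the object-degree term, which is the intuition behind its better precision here.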
|
0906.5022
|
Chemical Power for Microscopic Robots in Capillaries
|
cs.RO physics.bio-ph
|
The power available to microscopic robots (nanorobots) that oxidize
bloodstream glucose while aggregated in circumferential rings on capillary
walls is evaluated with a numerical model using axial symmetry and
time-averaged release of oxygen from passing red blood cells. Robots about one
micron in size can produce up to several tens of picowatts, in steady-state, if
they fully use oxygen reaching their surface from the blood plasma. Robots with
pumps and tanks for onboard oxygen storage could collect oxygen to support
burst power demands two to three orders of magnitude larger. We evaluate
effects of oxygen depletion and local heating on surrounding tissue. These
results give the power constraints when robots rely entirely on ambient
available oxygen and identify aspects of the robot design significantly
affecting available power. More generally, our numerical model provides an
approach to evaluating robot design choices for nanomedicine treatments in and
near capillaries.
|
0906.5023
|
An Upper Bound on the Minimum Weight of Type II $\ZZ_{2k}$-Codes
|
math.CO cs.IT math.IT
|
In this paper, we give a new upper bound on the minimum Euclidean weight of
Type II $\ZZ_{2k}$-codes and the concept of extremality for the Euclidean
weights when $k=3,4,5,6$. Together with the known result, we demonstrate that
there is an extremal Type II $\ZZ_{2k}$-code of length $8m$ $(m \le 8)$ when
$k=3,4,5,6$.
|
0906.5034
|
Effective Focused Crawling Based on Content and Link Structure Analysis
|
cs.IR
|
A focused crawler traverses the web, selecting pages relevant to a
predefined topic and neglecting those out of its concern. While traversing the
web, it is difficult to deal with irrelevant pages and to predict which links
lead to quality pages. In this paper, a technique for effective focused
crawling is implemented to improve the quality of web navigation. A similarity
function is used to check the similarity of web pages with respect to the topic
keywords, and the priorities of extracted out-links are also calculated based
on metadata and the resultant pages generated by the focused crawler. The
proposed work also uses a method for traversing the irrelevant pages met during
crawling to improve the coverage of the specific topic.
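The core loop of a focused crawler can be sketched as follows (illustrative only, not the paper's implementation: the `fetch` interface, the cosine similarity function, and the child-inherits-parent-relevance priority are all our assumptions):

```python
# Focused-crawler skeleton: fetch pages in order of a topic-similarity
# priority; out-links inherit the relevance score of their parent page.
import heapq, math
from collections import Counter

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0

def crawl(seed, fetch, topic, limit=10):
    topic_vec = Counter(topic)
    frontier, seen, visited = [(-1.0, seed)], {seed}, []
    while frontier and len(visited) < limit:
        _, url = heapq.heappop(frontier)           # most promising page first
        text, links = fetch(url)
        score = cosine(topic_vec, Counter(text.split()))
        visited.append((url, round(score, 3)))
        for link in links:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return visited

# Tiny in-memory "web" standing in for real HTTP fetches.
web = {
    "s": ("sports news football", ["p1", "p2"]),
    "p1": ("football match report", []),
    "p2": ("cooking recipes", []),
}
order = crawl("s", lambda u: web[u], topic=["football", "sports"], limit=3)
```

A real system would refine the out-link priorities with anchor text and metadata, as the abstract describes.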
|
0906.5038
|
A Novel Two-Stage Dynamic Decision Support based Optimal Threat
Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne
threats
|
cs.AI
|
This paper presents a novel two-stage flexible dynamic decision support based
optimal threat evaluation and defensive resource scheduling algorithm for
multi-target air-borne threats. The algorithm provides flexibility and
optimality by swapping between two objective functions, i.e. the preferential
and subtractive defense strategies as and when required. To further enhance the
solution quality, it outlines and divides the critical parameters used in
Threat Evaluation and Weapon Assignment (TEWA) into three broad categories
(Triggering, Scheduling and Ranking parameters). Proposed algorithm uses a
variant of many-to-many Stable Marriage Algorithm (SMA) to solve Threat
Evaluation (TE) and Weapon Assignment (WA) problem. In TE stage, Threat Ranking
and Threat-Asset pairing is done. Stage two is based on a new flexible dynamic
weapon scheduling algorithm, allowing multiple engagements using
shoot-look-shoot strategy, to compute near-optimal solution for a range of
scenarios. The analysis part of this paper presents the strengths and
weaknesses of the proposed algorithm relative to an alternative greedy
algorithm, as applied to different offline scenarios.
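The many-to-many Stable Marriage variant mentioned above can be sketched as a capacity-constrained deferred-acceptance procedure: threats propose to weapons in preference order, and each weapon keeps its highest-scoring threats up to capacity. The data structures and scoring below are illustrative assumptions, not the paper's exact TEWA formulation.

```python
import heapq

def assign_threats(threat_prefs, weapon_scores, capacity):
    """Capacity-constrained (many-to-many) deferred-acceptance sketch.

    threat_prefs:  {threat: [weapons in descending preference]}
    weapon_scores: {weapon: {threat: score}}; a weapon holds the
                   highest-scoring threats up to its capacity.
    """
    held = {w: [] for w in weapon_scores}   # min-heaps of (score, threat)
    free = list(threat_prefs)
    next_choice = {t: 0 for t in threat_prefs}
    matches = {}
    while free:
        t = free.pop()
        prefs = threat_prefs[t]
        if next_choice[t] >= len(prefs):
            continue                        # preference list exhausted
        w = prefs[next_choice[t]]
        next_choice[t] += 1
        heapq.heappush(held[w], (weapon_scores[w][t], t))
        matches[t] = w
        if len(held[w]) > capacity[w]:
            _, bumped = heapq.heappop(held[w])   # evict lowest-scoring threat
            del matches[bumped]
            free.append(bumped)
    return matches
```

As in classical deferred acceptance, an evicted threat re-enters the pool and proposes to its next preferred weapon, so the procedure terminates with a stable capacity-respecting assignment.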
|
0906.5039
|
A new approach for digit recognition based on hand gesture analysis
|
cs.CV
|
We present in this paper a new approach for hand gesture analysis that allows
digit recognition. The analysis is based on extracting a set of features from a
hand image and then combining them by using an induction graph. The most
important features we extract from each image are the finger locations, their
heights, and the distance between each pair of fingers. Our approach consists
of three steps: (i) hand detection and localization, (ii) finger extraction,
and (iii) feature identification and combination for digit recognition. Each
input image is assumed to contain only one person, so we apply a fuzzy
classifier to identify skin pixels. In the finger extraction step, we attempt
to remove all hand components except the fingers; this process is based on
hand anatomy properties. The final step consists of computing a histogram of
the detected fingers in order to extract features that will be used for digit
recognition. The approach is invariant to scale, rotation and translation of
the hand. Some experiments have been undertaken to show the effectiveness of
the proposed approach.
|
0906.5040
|
Towards the Patterns of Hard CSPs with Association Rule Mining
|
cs.DB cs.AI
|
The hardness of finite-domain Constraint Satisfaction Problems (CSPs) is a
very important research area in the Constraint Programming (CP) community.
However, this problem has not yet attracted much attention from researchers
in the association rule mining community. As a popular data mining technique,
association rule mining has an extremely wide application area and has
already been successfully applied to many interdisciplinary fields. In this
paper, we study association rule mining techniques and propose a cascaded
approach to extract interesting patterns of hard CSPs. To our knowledge, this
is the first time this problem has been investigated with data mining
techniques. Specifically, we generate random CSPs and collect their
characteristics by solving all the CSP instances, and then apply data mining
techniques to the resulting data set to discover interesting patterns in the
hardness of the randomly generated CSPs.
|
0906.5110
|
Statistical Analysis of Privacy and Anonymity Guarantees in Randomized
Security Protocol Implementations
|
cs.CR cs.LG
|
Security protocols often use randomization to achieve probabilistic
non-determinism. This non-determinism, in turn, is used in obfuscating the
dependence of observable values on secret data. Since the correctness of
security protocols is very important, formal analysis of security protocols
has been widely studied in the literature. Randomized security protocols have
also been
analyzed using formal techniques such as process-calculi and probabilistic
model checking. In this paper, we consider the problem of validating
implementations of randomized protocols. Unlike previous approaches which treat
the protocol as a white-box, our approach tries to verify an implementation
provided as a black box. Our goal is to infer the secrecy guarantees provided
by a security protocol through statistical techniques. We learn the
probabilistic dependency of the observable outputs on the secret inputs using
a Bayesian network. This is then used to approximate the leakage of the
secret. In order to evaluate the accuracy of our statistical approach, we
compare our technique with the probabilistic model checking technique on two
examples: the Crowds protocol and the dining cryptographers' protocol.
|
0906.5114
|
Non-Parametric Bayesian Areal Linguistics
|
cs.CL
|
We describe a statistical model over linguistic areas and phylogeny.
Our model recovers known areas and identifies a plausible hierarchy of areal
features. The use of areas improves genetic reconstruction of languages both
qualitatively and quantitatively according to a variety of metrics. We model
linguistic areas by a Pitman-Yor process and linguistic phylogeny by Kingman's
coalescent.
|
0906.5119
|
General combination rules for qualitative and quantitative beliefs
|
cs.AI
|
Martin and Osswald \cite{Martin07} have recently proposed many
generalizations of combination rules on quantitative beliefs in order to manage
the conflict and to consider the specificity of the responses of the experts.
Since the experts express themselves usually in natural language with
linguistic labels, Smarandache and Dezert \cite{Li07} have introduced a
mathematical framework for dealing directly with qualitative beliefs as well.
In this paper we recall some elements of our previous works and propose new
combination rules developed for the fusion of both qualitative and
quantitative beliefs.
|
0906.5120
|
Comments on "A new combination of evidence based on compromise" by K.
Yamada
|
cs.CV cs.AI
|
Comments on ``A new combination of evidence based on compromise'' by K.
Yamada
|
0906.5131
|
A Comment on Nonextensive Statistical Mechanics
|
cs.IT math.IT
|
There is a conception that Boltzmann-Gibbs statistics cannot yield the long
tail distribution. This is the justification for the intensive research of
nonextensive entropies (i.e., Tsallis entropy and others). Here the error
that caused this misconception is explained, and it is shown that a long-tail
distribution has existed in equilibrium thermodynamics for more than a
century.
|
0906.5148
|
Explicit probabilistic models for databases and networks
|
cs.AI cs.DB cs.IT math.IT
|
Recent work in data mining and related areas has highlighted the importance
of the statistical assessment of data mining results. Crucial to this endeavour
is the choice of a non-trivial null model for the data, to which the found
patterns can be contrasted. The most influential null models proposed so far
are defined in terms of invariants of the null distribution. Such null models
can be used by computation-intensive randomization approaches in estimating the
statistical significance of data mining results.
Here, we introduce a methodology to construct non-trivial probabilistic
models based on the maximum entropy (MaxEnt) principle. We show how MaxEnt
models allow for the natural incorporation of prior information. Furthermore,
they satisfy a number of desirable properties of previously introduced
randomization approaches. Lastly, they also have the benefit that they can be
represented explicitly. We argue that our approach can be used for a variety of
data types. However, for concreteness, we have chosen to demonstrate it in
particular for databases and networks.
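The MaxEnt principle applied above can be illustrated on a toy instance: the maximum-entropy distribution over a finite value set subject to a prescribed mean has the Gibbs form p(v) ∝ exp(λv), and the multiplier λ can be found by bisection on the resulting mean. All names here are illustrative; the paper's models for databases and networks handle far richer constraint sets.

```python
import math

def maxent_given_mean(values, target_mean, iters=200):
    """MaxEnt distribution over a finite set of values with a given mean.

    The solution has Gibbs form p(v) ∝ exp(lam * v); we solve for the
    Lagrange multiplier lam by bisection, since the mean is monotone in lam.
    Bisection bounds assume small-magnitude values (to avoid overflow).
    """
    lo, hi = -50.0, 50.0

    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):
        lam = (lo + hi) / 2
        if mean_for(lam) < target_mean:
            lo = lam
        else:
            hi = lam
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the target mean equals the uniform mean, λ converges to zero and the MaxEnt distribution is uniform, exactly the "least committed" model the principle prescribes.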
|
0906.5151
|
Unsupervised Search-based Structured Prediction
|
cs.LG
|
We describe an adaptation and application of a search-based structured
prediction algorithm "Searn" to unsupervised learning problems. We show that it
is possible to reduce unsupervised learning to supervised learning and
demonstrate a high-quality unsupervised shift-reduce parsing model. We
additionally show a close connection between unsupervised Searn and expectation
maximization. Finally, we demonstrate the efficacy of a semi-supervised
extension. The key idea that enables this is an application of the predict-self
idea for unsupervised learning.
|
0906.5233
|
Restricted Global Grammar Constraints
|
cs.AI cs.FL
|
We investigate the global GRAMMAR constraint over restricted classes of
context-free grammars, such as deterministic and unambiguous context-free
grammars. We show that detecting disentailment for the GRAMMAR constraint in
these cases is as hard as parsing an unrestricted context-free grammar. We
also consider the class of linear grammars and give a propagator that runs in
quadratic time. Finally, to demonstrate the use of linear grammars, we show
that a weighted linear GRAMMAR constraint can efficiently encode the
EDITDISTANCE constraint, and a conjunction of the EDITDISTANCE constraint and
the REGULAR constraint.
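For reference, the EDITDISTANCE relation that such a weighted linear GRAMMAR constraint can encode is the standard Levenshtein dynamic program. This is a textbook sketch of the relation itself, not the paper's propagator.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]
```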
|
0906.5278
|
Spectrum of Fractal Interpolation Functions
|
cs.IT math.IT
|
In this paper we compute the Fourier spectrum of the Fractal Interpolation
Functions (FIFs) introduced by Michael Barnsley. We show that there is an
analytical way to compute it. We also attempt to solve the inverse problem of
FIFs by using the spectrum.
|
0906.5286
|
Putting Recommendations on the Map -- Visualizing Clusters and Relations
|
cs.IR
|
For users, recommendations can sometimes seem odd or counterintuitive.
Visualizing recommendations can remove some of this mystery, showing how a
recommendation is grouped with other choices. A drawing can also lead a user's
eye to other options. Traditional 2D-embeddings of points can be used to create
a basic layout, but these methods, by themselves, do not illustrate clusters
and neighborhoods very well. In this paper, we propose the use of geographic
maps to enhance the definition of clusters and neighborhoods, and consider the
effectiveness of this approach in visualizing similarities and recommendations
arising from TV shows and music selections. All the maps referenced in this
paper can be found at http://www.research.att.com/~volinsky/maps
|
0906.5289
|
Green Cellular - Optimizing the Cellular Network for Minimal Emission
from Mobile Stations
|
cs.IT math.IT
|
Wireless systems, which include cellular phones, have become an essential
part of modern life. However, mounting evidence that cellular radiation might
adversely affect the health of its users has led to growing concern among
authorities and the general public. Radiating antennas in the proximity of
the user, such as the antennas of mobile phones, are of special interest in
this matter. In this paper we suggest a new architecture for wireless
networks,
aiming at minimal emission from mobile stations, without any additional
radiation sources. The new architecture, dubbed Green Cellular, abandons the
classical transceiver base station design and suggests the augmentation of
transceiver base stations with receive only devices. These devices, dubbed
Green Antennas, are not aiming at coverage extension but rather at minimizing
the emission from mobile stations. We discuss the implications of the Green
Cellular architecture on 3G and 4G cellular technologies. We conclude by
showing that employing the Green Cellular approach may lead to a significant
decrease in the emission from mobile stations, especially in indoor scenarios.
This is achieved without exposing the user to any additional radiation source.
|
0906.5325
|
Online Reinforcement Learning for Dynamic Multimedia Systems
|
cs.LG cs.MM
|
In our previous work, we proposed a systematic cross-layer framework for
dynamic multimedia systems, which allows each layer to make autonomous and
foresighted decisions that maximize the system's long-term performance, while
meeting the application's real-time delay constraints. The proposed solution
solved the cross-layer optimization offline, under the assumption that the
multimedia system's probabilistic dynamics were known a priori. In practice,
however, these dynamics are unknown a priori and therefore must be learned
online. In this paper, we address this problem by allowing the multimedia
system layers to learn, through repeated interactions with each other, to
autonomously optimize the system's long-term performance at run-time. We
propose two reinforcement learning algorithms for optimizing the system under
different design constraints: the first algorithm solves the cross-layer
optimization in a centralized manner, and the second solves it in a
decentralized manner. We analyze both algorithms in terms of their required
computation, memory, and inter-layer communication overheads. After noting that
the proposed reinforcement learning algorithms learn too slowly, we introduce a
complementary accelerated learning algorithm that exploits partial knowledge
about the system's dynamics in order to dramatically improve the system's
performance. In our experiments, we demonstrate that decentralized learning can
perform as well as centralized learning, while enabling the layers to act
autonomously. Additionally, we show that existing application-independent
reinforcement learning algorithms, and existing myopic learning algorithms
deployed in multimedia systems, perform significantly worse than our proposed
application-aware and foresighted learning methods.
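The online learning described above can be sketched with generic tabular Q-learning, where the system's unknown dynamics are sampled through interaction rather than assumed known a priori. This is illustrative only; the paper's cross-layer, application-aware formulation is not reproduced here.

```python
import random

def q_learning(transition, reward, states, actions, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1, horizon=20, seed=0):
    """Tabular Q-learning sketch with epsilon-greedy exploration.

    transition(s, a, rng) -> next state; reward(s, a) -> immediate reward.
    The dynamics are only queried through sampling, never enumerated.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(actions)          # explore
            else:
                a = max(actions, key=lambda x: q[(s, x)])  # exploit
            s2 = transition(s, a, rng)
            best_next = max(q[(s2, x)] for x in actions)
            q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

On a two-state chain where only "staying" in state 1 pays off, the learned values correctly prefer moving toward and then remaining in the rewarding state.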
|