| id | title | categories | abstract |
|---|---|---|---|
1003.0095
|
Multiuser MIMO Downlink Beamforming Design Based on Group Maximum SINR
Filtering
|
cs.IT math.IT
|
In this paper we aim to solve the multiuser multi-input multi-output (MIMO)
downlink beamforming problem where one multi-antenna base station broadcasts
data to many users. Each user is assigned multiple data streams and has
multiple antennas at its receiver. Efficient solutions to the joint
transmit-receive beamforming and power allocation problem based on iterative
methods are proposed. We adopt the group maximum
signal-to-interference-plus-noise-ratio (SINR) filter bank (GSINR-FB) as our
beamformer which exploits receiver diversity through cooperation between the
data streams of a user. The data streams for each user are subject to an
average SINR constraint, which has many important applications in wireless
communication systems and serves as a good metric to measure the quality of
service (QoS). The GSINR-FB also optimizes the average SINR of its output.
Based on the GSINR-FB beamformer, we find an SINR balancing structure for
optimal power allocation which simplifies the complicated power allocation
problem to a linear one. Simulation results verify the superiority of the
proposed algorithms over previous works with approximately the same complexity.
|
1003.0120
|
Learning from Logged Implicit Exploration Data
|
cs.LG cs.AI
|
We provide a sound and consistent foundation for the use of \emph{nonrandom}
exploration data in "contextual bandit" or "partially labeled" settings where
only the value of a chosen action is learned.
The primary challenge in a variety of settings is that the exploration
policy, by which "offline" data is logged, is not explicitly known. Prior
solutions here require either control of the actions during the learning
process, recorded random exploration, or actions chosen obliviously in a
repeated manner. The techniques reported here lift these restrictions, allowing
the learning of a policy for choosing actions given features from historical
data where no randomization occurred or was logged.
We empirically verify our solution on two reasonably sized sets of real-world
data obtained from Yahoo!.
|
1003.0146
|
A Contextual-Bandit Approach to Personalized News Article Recommendation
|
cs.LG cs.AI cs.IR
|
Personalized web services strive to adapt their services (advertisements,
news articles, etc.) to individual users by making use of both content and user
information. Despite a few recent advances, this problem remains challenging
for at least two reasons. First, web services feature dynamically
changing pools of content, rendering traditional collaborative filtering
methods inapplicable. Second, the scale of most web services of practical
interest calls for solutions that are fast in both learning and computation.
In this work, we model personalized recommendation of news articles as a
contextual bandit problem, a principled approach in which a learning algorithm
sequentially selects articles to serve users based on contextual information
about the users and articles, while simultaneously adapting its
article-selection strategy based on user-click feedback to maximize total user
clicks.
The contributions of this work are three-fold. First, we propose a new,
general contextual bandit algorithm that is computationally efficient and well
motivated from learning theory. Second, we argue that any bandit algorithm can
be reliably evaluated offline using previously recorded random traffic.
Finally, using this offline evaluation method, we successfully applied our new
algorithm to a Yahoo! Front Page Today Module dataset containing over 33
million events. Results showed a 12.5% click lift compared to a standard
context-free bandit algorithm, and the advantage becomes even greater when data
gets more scarce.
|
1003.0205
|
Detecting Weak but Hierarchically-Structured Patterns in Networks
|
cs.IT cs.LG math.IT math.ST stat.TH
|
The ability to detect weak distributed activation patterns in networks is
critical to several applications, such as identifying the onset of anomalous
activity or incipient congestion in the Internet, or faint traces of a
biochemical spread by a sensor network. This is a challenging problem since
weak distributed patterns can be invisible in per node statistics as well as a
global network-wide aggregate. Most prior work considers situations in which
the activation/non-activation of each node is statistically independent, but
this is unrealistic in many problems. In this paper, we consider structured
patterns arising from statistical dependencies in the activation process. Our
contributions are three-fold. First, we propose a sparsifying transform that
succinctly represents structured activation patterns that conform to a
hierarchical dependency graph. Second, we establish that the proposed transform
facilitates detection of very weak activation patterns that cannot be detected
with existing methods. Third, we show that the structure of the hierarchical
dependency graph governing the activation process, and hence the network
transform, can be learnt from very few (logarithmic in network size)
independent snapshots of network activity.
|
1003.0206
|
Why has (reasonably accurate) Automatic Speech Recognition been so hard
to achieve?
|
cs.CL
|
Hidden Markov models (HMMs) have been successfully applied to automatic
speech recognition for more than 35 years in spite of the fact that a key HMM
assumption -- the statistical independence of frames -- is obviously violated
by speech data. In fact, this data/model mismatch has inspired many attempts to
modify or replace HMMs with alternative models that are better able to take
into account the statistical dependence of frames. However, it is fair to say
that in 2010 the HMM is the consensus model of choice for speech recognition
and that HMMs are at the heart of both commercially available products and
contemporary research systems. In this paper we present a preliminary
exploration aimed at understanding how speech data depart from HMMs and what
effect this departure has on the accuracy of HMM-based speech recognition. Our
analysis uses standard diagnostic tools from the field of statistics --
hypothesis testing, simulation and resampling -- which are rarely used in the
field of speech recognition. Our main result, obtained by novel manipulations
of real and resampled data, demonstrates that real data have statistical
dependency and that this dependency is responsible for significant numbers of
recognition errors. We also demonstrate, using simulation and resampling, that
if we `remove' the statistical dependency from data, then the resulting
recognition error rates become negligible. Taken together, these results
suggest that a better understanding of the structure of the statistical
dependency in speech data is a crucial first step towards improving HMM-based
speech recognition.
|
1003.0219
|
Sequential Compressed Sensing
|
cs.IT math.IT
|
Compressed sensing allows perfect recovery of sparse signals (or signals
sparse in some basis) using only a small number of random measurements.
Existing results in compressed sensing literature have focused on
characterizing the achievable performance by bounding the number of samples
required for a given level of signal sparsity. However, using these bounds to
minimize the number of samples requires a priori knowledge of the sparsity of
the unknown signal, or the decay structure for near-sparse signals.
Furthermore, there are some popular recovery methods for which no such bounds
are known.
In this paper, we investigate an alternative scenario where observations are
available in sequence. For any recovery method, this means that there is now a
sequence of candidate reconstructions. We propose a method to estimate the
reconstruction error directly from the samples themselves, for every candidate
in this sequence. This estimate is universal in the sense that it is based only
on the measurement ensemble, and not on the recovery method or any assumed
level of sparsity of the unknown signal. With these estimates, one can now stop
observations as soon as there is reasonable certainty of either exact or
sufficiently accurate reconstruction. They also provide a way to obtain
"run-time" guarantees for recovery methods that otherwise lack a priori
performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli)
random measurement ensembles, both for exactly sparse and general near-sparse
signals, and with both noisy and noiseless measurements.
|
1003.0242
|
Peak to Average Power Ratio Reduction for Space-Time Codes That Achieve
Diversity-Multiplexing Gain Tradeoff
|
cs.IT math.IT
|
Zheng and Tse have shown that over a quasi-static channel, there exists a
fundamental tradeoff, known as the diversity-multiplexing gain (D-MG) tradeoff.
In a realistic system, to avoid inefficiently operating the power amplifier,
one should consider the situation where constraints are imposed on the peak to
average power ratio (PAPR) of the transmitted signal. In this paper, the D-MG
tradeoff of multi-antenna systems with PAPR constraints is analyzed. For
Rayleigh fading channels, we show that the D-MG tradeoff remains unchanged with
any PAPR constraints larger than one. This result implies that, instead of
designing codes on a case-by-case basis, as done by most existing works, there
possibly exist general methodologies for designing space-time codes with low
PAPR that achieve the optimal D-MG tradeoff. As an example of such
methodologies, we propose a PAPR reduction method based on constellation
shaping that can be applied to existing optimal space-time codes without
affecting their optimality in the D-MG tradeoff. Unlike most PAPR reduction
methods, the proposed method does not introduce redundancy or require side
information being transmitted to the decoder. Two realizations of the proposed
method are considered. The first is similar to the method proposed by Kwok
except that we employ the Hermite Normal Form (HNF) decomposition instead of
the Smith Normal Form (SNF) to reduce complexity. The second takes the idea of
integer reversible mapping which avoids the difficulty in matrix decomposition
when the number of antennas becomes large. Sphere decoding is performed to
verify that the proposed PAPR reduction method does not affect the performance
of optimal space-time codes.
|
1003.0248
|
Outage Probability of General Ad Hoc Networks in the High-Reliability
Regime
|
cs.IT cs.NI math.IT math.ST stat.TH
|
Outage probabilities in wireless networks depend on various factors: the node
distribution, the MAC scheme, and the models for path loss, fading and
transmission success. In prior work on outage characterization for networks
with randomly placed nodes, most of the emphasis was put on networks whose
nodes are Poisson distributed and where ALOHA is used as the MAC protocol. In
this paper we provide a general framework for the analysis of outage
probabilities in the high-reliability regime. The outage probability
characterization is based on two parameters: the intrinsic spatial contention
$\gamma$ of the network, introduced in [1], and the coordination level achieved
by the MAC as measured by the interference scaling exponent $\kappa$ introduced
in this paper. We study outage probabilities under the signal-to-interference
ratio (SIR) model, Rayleigh fading, and power-law path loss, and explain how
the two parameters depend on the network model. The main result is that the
outage probability approaches $\gamma\eta^{\kappa}$ as the density of
interferers $\eta$ goes to zero, and that $\kappa$ assumes values in the range
$1\leq \kappa\leq \alpha/2$ for all practical MAC protocols, where $\alpha$ is
the path loss exponent. This asymptotic expression is valid for all
motion-invariant point processes. We suggest a novel and complete taxonomy of
MAC protocols based mainly on the value of $\kappa$. Finally, our findings
suggest a conjecture that tightly bounds the outage probability for all
interferer densities.
|
1003.0319
|
Further Exploration of the Dendritic Cell Algorithm: Antigen Multiplier
and Time Windows
|
cs.AI cs.CR cs.NE
|
As an immune-inspired algorithm, the Dendritic Cell Algorithm (DCA) produces
promising performance in the field of anomaly detection. This paper presents
the application of the DCA to a standard data set, the KDD 99 data set. The
results of different implementation versions of the DCA, including the antigen
multiplier and moving time windows are reported. The real-valued Negative
Selection Algorithm (NSA) using constant-sized detectors and the C4.5 decision
tree algorithm are used to conduct a baseline comparison. The results suggest
that the DCA is applicable to the KDD 99 data set, and the antigen multiplier and
moving time windows have the same effect on the DCA for this particular data
set. The real-valued NSA with constant-sized detectors is not applicable to the
data set, and the C4.5 decision tree algorithm provides a benchmark of the
classification performance for this data set.
|
1003.0332
|
On the Optimal Number of Cooperative Base Stations in Network MIMO
Systems
|
cs.IT math.IT
|
We consider a multi-cell, frequency-selective fading, uplink channel (network
MIMO) where K user terminals (UTs) communicate simultaneously with B
cooperative base stations (BSs). Although the potential benefit of multi-cell
cooperation grows with B, the overhead related to the acquisition of channel
state information (CSI) will rapidly dominate the uplink resource. Thus, there
exists a non-trivial tradeoff between the performance gains of network MIMO and
the related overhead in channel estimation for a finite coherence time. Using a
close approximation of the net ergodic achievable rate based on recent results
from random matrix theory, we study this tradeoff by taking some realistic
aspects into account such as unreliable backhaul links and different path
losses between the UTs and BSs. We determine the optimal training length, the
optimal number of cooperative BSs and the optimal number of sub-carriers to be
used for an extended version of the circular Wyner model where each UT can
communicate with B BSs. Our results provide some insight into practical
limitations as well as realistic dimensions of network MIMO systems.
|
1003.0337
|
Change of word types to word tokens ratio in the course of translation
(based on Russian translations of K. Vonnegut novels)
|
cs.CL
|
The article provides a lexical statistical analysis of two of K. Vonnegut's
novels and their Russian translations. It is found that the rate of change of
the word types to word tokens ratio differs between the source and target
texts. The author hypothesizes that these changes are typical of
English-Russian translations and, moreover, that they exemplify Baker's
translation feature of levelling out.
|
1003.0339
|
libtissue - implementing innate immunity
|
cs.AI cs.NE
|
In a previous paper the authors argued the case for incorporating ideas from
innate immunity into artificial immune systems (AISs) and presented an outline
for a conceptual framework for such systems. A number of key general properties
observed in the biological innate and adaptive immune systems were highlighted,
and how such properties might be instantiated in artificial systems was
discussed in detail. The next logical step is to take these ideas and build a
software system with which AISs with these properties can be implemented and
experimentally evaluated. This paper reports on the results of that step - the
libtissue system.
|
1003.0358
|
Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition
|
cs.NE cs.AI
|
Good old on-line back-propagation for plain multi-layer perceptrons yields a
very low 0.35% error rate on the famous MNIST handwritten digits benchmark. All
we need to achieve this best result so far are many hidden layers, many neurons
per layer, numerous deformed training images, and graphics cards to greatly
speed up learning.
|
1003.0367
|
Stopping Set Distributions of Some Linear Codes
|
cs.IT math.IT
|
Stopping sets and the stopping set distribution of a low-density parity-check
code are used to determine the performance of this code under iterative
decoding over a binary erasure channel (BEC). Let $C$ be a binary $[n,k]$
linear code with parity-check matrix $H$, where the rows of $H$ may be
dependent. A stopping set $S$ of $C$ with parity-check matrix $H$ is a subset
of column indices of $H$ such that the restriction of $H$ to $S$ does not
contain a row of weight one. The stopping set distribution $\{T_i(H)\}_{i=0}^n$
enumerates the number of stopping sets with size $i$ of $C$ with parity-check
matrix $H$. Note that stopping sets and stopping set distribution are related
to the parity-check matrix $H$ of $C$. Let $H^{*}$ be the parity-check matrix
of $C$ which is formed by all the non-zero codewords of its dual code
$C^{\perp}$. A parity-check matrix $H$ is called BEC-optimal if
$T_i(H)=T_i(H^*)$ for $i=0,1,\ldots,n$, and $H$ has the smallest number of rows. On the
BEC, the iterative decoder of $C$ with a BEC-optimal parity-check matrix is an
optimal decoder with much lower decoding complexity than the exhaustive
decoder. In this paper, we study stopping sets, stopping set distributions and
BEC-optimal parity-check matrices of binary linear codes. Using finite geometry
in combinatorics, we obtain BEC-optimal parity-check matrices and then
determine the stopping set distributions for the Simplex codes, the Hamming
codes, the first order Reed-Muller codes and the extended Hamming codes.
|
1003.0381
|
Modelling and Verification of Multiple UAV Mission Using SMV
|
cs.LO cs.MA cs.RO
|
Model checking has been used to verify the correctness of digital circuits,
security protocols, and communication protocols, as these can be modelled as
finite state transition systems. However, modelling the behaviour of hybrid
systems like UAVs in a Kripke model is challenging. This work is aimed at
capturing the behaviour of a UAV performing a cooperative search mission into a
Kripke model, so as to verify it against the temporal properties expressed in
Computation Tree Logic (CTL). The SMV model checker is used for this purpose.
|
1003.0396
|
Developing Experimental Models for NASA Missions with ASSL
|
cs.SE cs.RO
|
NASA's new age of space exploration augurs great promise for deep space
exploration missions whereby spacecraft should be independent, autonomous, and
smart. Nowadays NASA increasingly relies on the concepts of autonomic
computing, exploiting these to increase the survivability of remote missions,
particularly when human tending is not feasible. Autonomic computing has been
recognized as a promising approach to the development of self-managing
spacecraft systems that employ onboard intelligence and rely less on control
links. The Autonomic System Specification Language (ASSL) is a framework for
formally specifying and generating autonomic systems. As part of long-term
research targeted at the development of models for space exploration missions
that rely on principles of autonomic computing, we have employed ASSL to
develop formal models and generate functional prototypes for NASA missions.
This helps to validate features and perform experiments through simulation.
Here, we discuss our work on developing such missions with ASSL.
|
1003.0400
|
Collaborative Hierarchical Sparse Modeling
|
cs.IT math.IT
|
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is done by solving an l_1-regularized
linear regression problem, usually called Lasso. In this work we first combine
the sparsity-inducing property of the Lasso model, at the individual feature
level, with the block-sparsity property of the group Lasso model, where sparse
groups of features are jointly encoded, obtaining a sparsity pattern
hierarchically structured. This results in the hierarchical Lasso, which shows
important practical modeling advantages. We then extend this approach to the
collaborative case, where a set of simultaneously coded signals share the same
sparsity pattern at the higher (group) level but not necessarily at the lower
one. Signals then share the same active groups, or classes, but not necessarily
the same active set. This is very well suited for applications such as source
separation. An efficient optimization procedure, which guarantees convergence
to the global optimum, is developed for these new models. The underlying
presentation of the new framework and optimization approach is complemented
with experimental examples and preliminary theoretical results.
|
1003.0404
|
Exploration Of The Dendritic Cell Algorithm Using The Duration Calculus
|
cs.AI cs.LO
|
As one of the newest members in Artificial Immune Systems (AIS), the
Dendritic Cell Algorithm (DCA) has been applied to a range of problems. These
applications mainly belong to the field of anomaly detection. However,
real-time detection, a new challenge to anomaly detection, requires improvement
on the real-time capability of the DCA. To assess such capability, formal
methods from the field of real-time systems can be employed. The findings of
the assessment can provide guidelines for the future development of the
algorithm. Therefore, in this paper we use an interval logic based method,
named the Duration Calculus (DC), to specify a simplified single-cell model of
the DCA. Based on the DC specifications with further induction, we find that
each individual cell in the DCA can perform its function as a detector in
real-time. Since the DCA can be seen as many such cells operating in parallel,
it is potentially capable of performing real-time detection. However, the
analysis process of the standard DCA constricts its real-time capability. As a
result, we conclude that the analysis process of the standard DCA should be
replaced by a real-time analysis component, which can perform periodic analysis
for the purpose of real-time detection.
|
1003.0415
|
The Sparsity Gap: Uncertainty Principles Proportional to Dimension
|
cs.IT math.IT
|
In an incoherent dictionary, most signals that admit a sparse representation
admit a unique sparse representation. In other words, there is no way to
express the signal without using strictly more atoms. This work demonstrates
that sparse signals typically enjoy a higher privilege: each nonoptimal
representation of the signal requires far more atoms than the sparsest
representation-unless it contains many of the same atoms as the sparsest
representation. One impact of this finding is to confer a certain degree of
legitimacy on the particular atoms that appear in a sparse representation. This
result can also be viewed as an uncertainty principle for random sparse signals
over an incoherent dictionary.
|
1003.0445
|
On The Design of Signature Codes in Decentralized Wireless Networks
|
cs.IT math.IT
|
This paper addresses a unified approach towards communication in
decentralized wireless networks of separate transmitter-receiver pairs. In
general, users are unaware of each other's codebooks and there is no central
controller to assign the resources in the network to the users. A randomized
signaling scheme is introduced in which each user locally spreads its Gaussian
signal along a randomly generated spreading code comprised of a sequence of
nonzero elements over a certain alphabet. Along with spreading, each
transmitter also masks its output independently from transmission to
transmission. Using a conditional version of entropy power inequality and a key
lemma on the differential entropy of mixed Gaussian random vectors, achievable
rates are developed for the users. It is seen that as the number of users
increases, the achievable Sum Multiplexing Gain of the network approaches that
of a centralized orthogonal scheme where multiuser interference is completely
avoided. An interesting observation is that in general the elements of a
spreading code are not equiprobable over the underlying alphabet. Finally,
using the recently developed extremal inequality of Liu-Viswanath, we present
an optimality result showing that transmission of Gaussian signals via
spreading and masking yields higher achievable rates than the maximum
achievable rate attained by applying masking only.
|
1003.0470
|
Unsupervised Supervised Learning II: Training Margin Based Classifiers
without Labels
|
cs.LG
|
Many popular linear classifiers, such as logistic regression, boosting, or
SVM, are trained by optimizing a margin-based risk function. Traditionally,
these risk functions are computed based on a labeled dataset. We develop a
novel technique for estimating such risks using only unlabeled data and the
marginal label distribution. We prove that the proposed risk estimator is
consistent on high-dimensional datasets and demonstrate it on synthetic and
real-world data. In particular, we show how the estimate is used for evaluating
classifiers in transfer learning, and for training classifiers with no labeled
data whatsoever.
|
1003.0487
|
Scalable Large-Margin Mahalanobis Distance Metric Learning
|
cs.CV
|
For many machine learning algorithms such as $k$-Nearest Neighbor ($k$-NN)
classifiers and $k$-means clustering, often their success heavily depends on
the metric used to calculate distances between different data points.
An effective solution for defining such a metric is to learn it from a set of
labeled training samples. In this work, we propose a fast and scalable
algorithm to learn a Mahalanobis distance metric. By employing the principle of
margin maximization to achieve better generalization performances, this
algorithm formulates metric learning as a convex optimization problem in which
a positive semidefinite (psd) matrix is the unknown variable. A specialized
gradient descent method is proposed to solve it. Our algorithm is much more
efficient and scales better than existing methods.
Experiments on benchmark data sets suggest that, compared with state-of-the-art
metric learning algorithms, our algorithm can achieve a comparable
classification accuracy with reduced computational complexity.
|
1003.0488
|
On Secure Distributed Data Storage Under Repair Dynamics
|
cs.IT cs.CR math.IT
|
We address the problem of securing distributed storage systems against
passive eavesdroppers that can observe a limited number of storage nodes. An
important aspect of these systems is node failures over time, which demand a
repair mechanism aimed at maintaining a targeted high level of system
reliability. If an eavesdropper observes a node that is added to the system to
replace a failed node, it will have access to all the data downloaded during
repair, which can potentially compromise the entire information in the system.
We are interested in determining the secrecy capacity of distributed storage
systems under repair dynamics, i.e., the maximum amount of data that can be
securely stored and made available to a legitimate user without revealing any
information to any eavesdropper. We derive a general upper bound on the secrecy
capacity and show that this bound is tight for the bandwidth-limited regime
which is of importance in scenarios such as peer-to-peer distributed storage
systems. We also provide a simple explicit code construction that achieves the
capacity for this regime.
|
1003.0514
|
The finite-dimensional Witsenhausen counterexample
|
cs.IT cs.CC math.IT math.OC
|
Recently, a vector version of Witsenhausen's counterexample was considered
and it was shown that in that limit of infinite vector length, certain
quantization-based control strategies are provably within a constant factor of
the optimal cost for all possible problem parameters. In this paper, finite
vector lengths are considered with the dimension being viewed as an additional
problem parameter. By applying a large-deviation "sphere-packing" philosophy, a
lower bound to the optimal cost for the finite dimensional case is derived that
uses appropriate shadows of the infinite-length bound. Using the new lower
bound, we show that good lattice-based control strategies achieve within a
constant factor of the optimal cost uniformly over all possible problem
parameters, including the vector length. For Witsenhausen's original problem --
the scalar case -- the gap between regular lattice-based strategies and the
lower bound is numerically never more than a factor of 8.
|
1003.0516
|
Model Selection with the Loss Rank Principle
|
cs.LG
|
A key issue in statistics and machine learning is to automatically select the
"right" model complexity, e.g., the number of neighbors to be averaged over in
k nearest neighbor (kNN) regression or the polynomial degree in regression with
polynomials. We suggest a novel principle - the Loss Rank Principle (LoRP) -
for model selection in regression and classification. It is based on the loss
rank, which counts how many other (fictitious) data would be fitted better.
LoRP selects the model that has minimal loss rank. Unlike most penalized
maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the
regression functions and the loss function. It works without a stochastic noise
model, and is directly applicable to any non-parametric regressor, like kNN.
|
1003.0520
|
Information embedding meets distributed control
|
cs.IT math.IT
|
We consider the problem of information embedding where the encoder modifies a
white Gaussian host signal in a power-constrained manner to encode the message,
and the decoder recovers both the embedded message and the modified host
signal. This extends the recent work of Sumszyk and Steinberg to the
continuous-alphabet Gaussian setting. We show that a dirty-paper-coding based
strategy achieves the optimal rate for perfect recovery of the modified host
and the message. We also provide bounds for the extension wherein the modified
host signal is recovered only to within a specified distortion. When
specialized to the zero-rate case, our results provide the tightest known lower
bounds on the asymptotic costs for the vector version of a famous open problem
in distributed control -- the Witsenhausen counterexample. Using this bound, we
characterize the asymptotically optimal costs for the vector Witsenhausen
problem numerically to within a factor of 1.3 for all problem parameters,
improving on the earlier best known bound of 2.
|
1003.0529
|
A Unified Algorithmic Framework for Multi-Dimensional Scaling
|
cs.LG cs.CG cs.CV
|
In this paper, we propose a unified algorithmic framework for solving many
known variants of multi-dimensional scaling (MDS). Our algorithm is a simple iterative scheme with
guaranteed convergence, and is \emph{modular}; by changing the internals of a
single subroutine in the algorithm, we can switch cost functions and target
spaces easily. In addition to the formal guarantees of convergence, our
algorithms are accurate; in most cases, they converge to better quality
solutions than existing methods, in comparable time. We expect that this
framework will be useful for a number of MDS variants that have not yet been
studied.
Our framework extends to embedding high-dimensional points lying on a sphere
to points on a lower dimensional sphere, preserving geodesic distances. As a
complement to this result, we also extend the Johnson-Lindenstrauss Lemma to
this spherical setting, where projecting to a random $O((1/\epsilon^2) \log
n)$-dimensional sphere causes $\epsilon$-distortion.
|
1003.0590
|
A new model for solution of complex distributed constrained problems
|
cs.AI
|
In this paper we describe an original computational model for solving
different types of Distributed Constraint Satisfaction Problems (DCSP). The
proposed model is called Controller-Agents for Constraints Solving (CACS). It
is intended for DCSP, a field that emerged from the integration of two
paradigms of different nature: Multi-Agent Systems (MAS) and the Constraint
Satisfaction Problem (CSP) paradigm, in which all constraints are treated
centrally as a black box. This model allows grouping
constraints to form a subset that will be treated together as a local problem
inside the controller. The model also handles non-binary constraints easily
and directly, so that no translation of constraints into binary ones is
needed. This paper presents the implementation outline of a prototype DCSP
solver, its usage methodology, and an overview of the application of CACS to
timetabling problems.
|
1003.0617
|
Agent Based Approaches to Engineering Autonomous Space Software
|
cs.MA cs.AI
|
Current approaches to the engineering of space software such as satellite
control systems are based around the development of feedback controllers using
packages such as MatLab's Simulink toolbox. These provide powerful tools for
engineering real time systems that adapt to changes in the environment but are
limited when the controller itself needs to be adapted.
We are investigating ways in which ideas from temporal logics and agent
programming can be integrated with the use of such control systems to provide a
more powerful layer of autonomous decision making. This paper will discuss our
initial approaches to the engineering of such systems.
|
1003.0628
|
Linguistic Geometries for Unsupervised Dimensionality Reduction
|
cs.CL
|
Text documents are complex high dimensional objects. To effectively visualize
such data it is important to reduce its dimensionality and visualize the low
dimensional embedding as a 2-D or 3-D scatter plot. In this paper we explore
dimensionality reduction methods that draw upon domain knowledge in order to
achieve a better low dimensional embedding and visualization of documents. We
consider the use of geometries specified manually by an expert, geometries
derived automatically from corpus statistics, and geometries computed from
linguistic resources.
|
1003.0642
|
Text Region Extraction from Business Card Images for Mobile Devices
|
cs.CV
|
Designing a Business Card Reader (BCR) for mobile devices is a challenge to
the researchers because of huge deformation in acquired images, multiplicity in
nature of the business cards and most importantly the computational constraints
of the mobile devices. This paper presents a text extraction method designed in
our work towards developing a BCR for mobile devices. At first, the background
of a camera captured image is eliminated at a coarse level. Then, various rule
based techniques are applied on the Connected Components (CC) to filter out the
noises and picture regions. The CCs identified as text are then binarized using
an adaptive but light-weight binarization technique. Experiments show that the
text extraction accuracy is around 98% for a wide range of resolutions with
varying computation time and memory requirements. The optimum performance is
achieved for the images of resolution 1024x768 pixels with text extraction
accuracy of 98.54% and, space and time requirements as 1.1 MB and 0.16 seconds
respectively.
|
1003.0645
|
Binarizing Business Card Images for Mobile Devices
|
cs.CV
|
Business card images are of multiple natures as these often contain graphics,
pictures and texts of various fonts and sizes both in background and
foreground. So, the conventional binarization techniques designed for document
images can not be directly applied on mobile devices. In this paper, we have
presented a fast binarization technique for camera captured business card
images. A card image is split into small blocks. Some of these blocks are
classified as part of the background based on intensity variance. Then the
non-text regions are eliminated and the text ones are skew corrected and
binarized using a simple yet adaptive technique. Experiment shows that the
technique is fast, efficient, and applicable to mobile devices.
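The block-wise procedure can be sketched as follows (a hypothetical simplification, assuming a numpy grayscale image; the variance threshold and the mean-based local threshold are illustrative stand-ins for the paper's actual classification and binarization rules):

```python
import numpy as np

def binarize_card(img, block=16, var_thresh=100.0):
    """Block-wise binarization sketch: low-variance blocks are treated as
    background; the remaining blocks are thresholded against their local mean."""
    h, w = img.shape
    out = np.full((h, w), 255, dtype=np.uint8)  # start from a white background
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = img[y:y+block, x:x+block].astype(float)
            if blk.var() < var_thresh:
                continue  # flat block: classified as background, keep white
            t = blk.mean()  # simple local threshold (stand-in for the paper's rule)
            out[y:y+block, x:x+block] = np.where(blk < t, 0, 255)
    return out

# Toy example: a dark "text" stroke on a light background
img = np.full((32, 32), 200, dtype=np.uint8)
img[10:14, 4:28] = 30
result = binarize_card(img)
```

Because flat blocks are skipped entirely, most of the image requires no per-pixel thresholding, which is what makes this family of techniques cheap enough for mobile devices.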
|
1003.0659
|
Particle Filtering on the Audio Localization Manifold
|
cs.AI cs.SD
|
We present a novel particle filtering algorithm for tracking a moving sound
source using a microphone array. If there are N microphones in the array, we
track all $N \choose 2$ delays with a single particle filter over time. Since
it is known that tracking in high dimensions is rife with difficulties, we
instead integrate into our particle filter a model of the low dimensional
manifold that these delays lie on. Our manifold model is based on work on
modeling low dimensional manifolds via random projection trees [1]. In
addition, we also introduce a new weighting scheme to our particle filtering
algorithm based on recent advancements in online learning. We show that our
novel TDOA tracking algorithm that integrates a manifold model can greatly
outperform standard particle filters on this audio tracking task.
|
1003.0691
|
Statistical and Computational Tradeoffs in Stochastic Composite
Likelihood
|
cs.LG
|
Maximum likelihood estimators are often of limited practical use due to the
intensive computation they require. We propose a family of alternative
estimators that maximize a stochastic variation of the composite likelihood
function. Each of the estimators resolve the computation-accuracy tradeoff
differently, and taken together they span a continuous spectrum of
computation-accuracy tradeoff resolutions. We prove the consistency of the
estimators, provide formulas for their asymptotic variance, statistical
robustness, and computational complexity. We discuss experimental results in
the context of Boltzmann machines and conditional random fields. The
theoretical and experimental studies demonstrate the effectiveness of the
estimators when the computational resources are insufficient. They also
demonstrate that in some cases reduced computational complexity is associated
with robustness thereby increasing statistical accuracy.
|
1003.0696
|
Exponential Family Hybrid Semi-Supervised Learning
|
cs.LG
|
We present an approach to semi-supervised learning based on an exponential
family characterization. Our approach generalizes previous work on coupled
priors for hybrid generative/discriminative models. Our model is more flexible
and natural than previous approaches. Experimental results on several data sets
show that our approach also performs better in practice.
|
1003.0723
|
Securing Interactive Sessions Using Mobile Device through Visual Channel
and Visual Inspection
|
cs.CR cs.CV
|
Communication channel established from a display to a device's camera is
known as visual channel, and it is helpful in securing key exchange protocol.
In this paper, we study how visual channel can be exploited by a network
terminal and mobile device to jointly verify information in an interactive
session, and how such information can be jointly presented in a user-friendly
manner, taking into account that the mobile device can only capture and display
a small region, and the user may only want to authenticate selective
regions-of-interests. Motivated by applications in Kiosk computing and
multi-factor authentication, we consider three security models: (1) the mobile
device is trusted, (2) at most one of the terminal or the mobile device is
dishonest, and (3) both the terminal and device are dishonest but they do not
collude or communicate. We give two protocols and investigate them under the
abovementioned models. We point out a form of replay attack that renders some
other straightforward implementations cumbersome to use. To enhance
user-friendliness, we propose a solution using visual cues embedded into the 2D
barcodes and incorporate the framework of "augmented reality" for easy
verifications through visual inspection. We give a proof-of-concept
implementation to show that our scheme is feasible in practice.
|
1003.0727
|
On the comparison of volumes of quantum states
|
quant-ph cs.IT math-ph math.FA math.IT math.MP
|
This paper aims to study the $\a$-volume of $\cK$, an arbitrary subset of the
set of $N\times N$ density matrices. The $\a$-volume is a generalization of the
Hilbert-Schmidt volume and the volume induced by partial trace. We obtain
two-side estimates for the $\a$-volume of $\cK$ in terms of its Hilbert-Schmidt
volume. The analogous estimates between the Bures volume and the $\a$-volume
are also established. We employ our results to obtain bounds for the
$\a$-volume of the sets of separable quantum states and of states with positive
partial transpose (PPT). Hence, our asymptotic results provide answers for
questions listed on page 9 in \cite{K. Zyczkowski1998} for large $N$ in the
sense of $\a$-volume.
\vskip 3mm PACS numbers: 02.40.Ft, 03.65.Db, 03.65.Ud, 03.67.Mn
|
1003.0729
|
On the Secure Degrees-of-Freedom of the Multiple-Access-Channel
|
cs.IT math.IT
|
A $K$-user secure Gaussian Multiple-Access-Channel (MAC) with an external
eavesdropper is considered in this paper. An achievable rate region is
established for the secure discrete memoryless MAC. The secrecy sum capacity of
the degraded Gaussian MIMO MAC is proven using Gaussian codebooks. For the
non-degraded Gaussian MIMO MAC, an algorithm inspired by interference alignment
technique is proposed to achieve the largest possible total
Secure-Degrees-of-Freedom (S-DoF). When all the terminals are equipped with a
single antenna, Gaussian codebooks have been shown to be inefficient in providing a
positive S-DoF. Instead, a novel secure coding scheme is proposed to achieve a
positive S-DoF in the single antenna MAC. This scheme converts the
single-antenna system into a multiple-dimension system with fractional
dimensions. The achievability scheme is based on the alignment of signals into
a small sub-space at the eavesdropper, and the simultaneous separation of the
signals at the intended receiver. Tools from the field of Diophantine
Approximation in number theory are used to analyze the probability of error in
the coding scheme. It is proven that the total S-DoF of $\frac{K-1}{K}$ can be
achieved for almost all channel gains. For the other channel gains, a
multi-layer coding scheme is proposed to achieve a positive S-DoF. As a
function of the channel gains, therefore, the achievable S-DoF is discontinuous.
|
1003.0735
|
Compress-and-Forward Performance in Low-SNR Relay Channels
|
cs.IT math.IT
|
In this paper, we study the Gaussian relay channels in the low
signal-to-noise ratio (SNR) regime with the time-sharing compress-and-forward
(CF) scheme, where at each time slot all the nodes keep silent at the first
fraction of time and then transmit with CF at a higher peak power in the second
fraction. Such a silent vs. active two-phase relay scheme is preferable in the
low-SNR regime. With this setup, the upper and lower bounds on the minimum
energy per bit required over the relay channel are established under both
full-duplex and half-duplex relaying modes. In particular, the lower bound is
derived by applying the max-flow min-cut capacity theorem; the upper bound is
established with the aforementioned time-sharing CF scheme, and is further
minimized by letting the active phase fraction decrease to zero at the same
rate as the SNR value. Numerical results are presented to validate the
theoretical results.
|
1003.0746
|
Automatically Discovering Hidden Transformation Chaining Constraints
|
cs.AI
|
Model transformations operate on models conforming to precisely defined
metamodels. Consequently, it often seems relatively easy to chain them: the
output of a transformation may be given as input to a second one if metamodels
match. However, this simple rule has some obvious limitations. For instance, a
transformation may only use a subset of a metamodel. Therefore, chaining
transformations appropriately requires more information. We present here an
approach that automatically discovers more detailed information about actual
chaining constraints by statically analyzing transformations. The objective is
to provide developers who decide to chain transformations with more data on
which to base their choices. This approach has been successfully applied to the
case of a library of endogenous transformations. They all have the same source
and target metamodel but have some hidden chaining constraints. In such a case,
the simple metamodel matching rule given above does not provide any useful
information.
|
1003.0776
|
Properties of the Discrete Pulse Transform for Multi-Dimensional Arrays
|
cs.CV
|
This report presents properties of the Discrete Pulse Transform on
multi-dimensional arrays introduced by the authors two or so years ago. The
main result given here in Lemma 2.1 is also formulated in a paper to appear in
IEEE Transactions on Image Processing. However, the proof, being too technical,
was omitted there and hence it appears in full in this publication.
|
1003.0789
|
Information Fusion for Anomaly Detection with the Dendritic Cell
Algorithm
|
cs.AI cs.CR cs.NE
|
Dendritic cells are antigen presenting cells that provide a vital link
between the innate and adaptive immune system, providing the initial detection
of pathogenic invaders. Research into this family of cells has revealed that
they perform information fusion which directs immune responses. We have derived
a Dendritic Cell Algorithm based on the functionality of these cells, by
modelling the biological signals and differentiation pathways to build a
control mechanism for an artificial immune system. We present algorithmic
details in addition to experimental results, when the algorithm was applied to
anomaly detection for the detection of port scans. The results show the
Dendritic Cell Algorithm is successful at detecting port scans.
|
1003.0888
|
Support Recovery of Sparse Signals
|
cs.IT math.IT
|
We consider the problem of exact support recovery of sparse signals via noisy
measurements. The main focus is the sufficient and necessary conditions on the
number of measurements for support recovery to be reliable. By drawing an
analogy between the problem of support recovery and the problem of channel
coding over the Gaussian multiple access channel, and exploiting mathematical
tools developed for the latter problem, we obtain an information theoretic
framework for analyzing the performance limits of support recovery. Sharp
sufficient and necessary conditions on the number of measurements in terms of
the signal sparsity level and the measurement noise level are derived.
Specifically, when the number of nonzero entries is held fixed, the exact
asymptotics on the number of measurements for support recovery is developed.
When the number of nonzero entries increases in certain manners, we obtain
sufficient conditions tighter than existing results. In addition, we show that
the proposed methodology can deal with a variety of models of sparse signal
recovery, hence demonstrating its potential as an effective analytical tool.
|
1003.0931
|
A student's guide to searching the literature using online databases
|
physics.ed-ph cs.DL cs.IR
|
A method is described to empower students to efficiently perform general and
literature searches using online resources. The method was tested on
undergraduate and graduate students with varying backgrounds with scientific
literature. Students involved in this study showed marked improvement in their
awareness of how and where to find accurate scientific information.
|
1003.0953
|
Information Flow in One-Dimensional Vehicular Ad Hoc Networks
|
cs.IT math.IT
|
We consider content distribution in vehicular ad hoc networks. We assume that
a file is encoded using a fountain code, and the encoded message is cached at
infostations. Vehicles are allowed to download data packets from infostations,
which are placed along a highway. In addition, two vehicles can exchange
packets with each other when they are in proximity. As long as a vehicle has
received enough packets from infostations or from other vehicles, the original
file can be recovered. In this work, we show that system throughput increases
linearly with the number of users, meaning that the system exhibits linear
scalability. Furthermore, we analyze the effect of mobility on system
throughput by considering both discrete and continuous velocity distributions
for the vehicles. In both cases, system throughput is shown to decrease when
the average speed of all vehicles increases. In other words, higher overall
mobility reduces system throughput.
|
1003.1010
|
Verifying Recursive Active Documents with Positive Data Tree Rewriting
|
cs.DB cs.OH
|
This paper proposes a data tree-rewriting framework for modeling evolving
documents. The framework is close to Guarded Active XML, a platform used for
handling XML repositories evolving through web services. We focus on automatic
verification of properties of evolving documents that can contain data from an
infinite domain. We establish the boundaries of decidability, and show that
verification of a {\em positive} fragment that can handle recursive service
calls is decidable. We also consider bounded model-checking in our data
tree-rewriting framework and show that it is $\nexptime$-complete.
|
1003.1018
|
Zipf's law and log-normal distributions in measures of scientific output
across fields and institutions: 40 years of Slovenia's research as an example
|
physics.data-an cs.DB stat.AP
|
Slovenia's Current Research Information System (SICRIS) currently hosts
86,443 publications with citation data from 8,359 researchers working on the
whole plethora of social and natural sciences from 1970 till present. Using
these data, we show that the citation distributions derived from individual
publications have Zipfian properties in that they can be fitted by a power law
$P(x) \sim x^{-\alpha}$, with $\alpha$ between 2.4 and 3.1 depending on the
institution and field of research. Distributions of indexes that quantify the
success of researchers rather than individual publications, on the other hand,
cannot be associated with a power law. We find that for Egghe's g-index and
Hirsch's h-index the log-normal form $P(x) \sim \exp[-a\ln x -b(\ln x)^2]$
applies best, with $a$ and $b$ depending moderately on the underlying set of
researchers. In special cases, particularly for institutions with a strongly
hierarchical constitution and research fields with high self-citation rates,
exponential distributions can be observed as well. Both indexes yield
distributions with equivalent statistical properties, which is a strong
indicator for their consistency and logical connectedness. At the same time,
differences in the assessment of citation histories of individual researchers
strengthen their importance for properly evaluating the quality and impact of
scientific output.
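The power-law tail fit described above can be illustrated with the standard maximum-likelihood (Hill) estimator for a continuous power law, here applied to synthetic data rather than the SICRIS records:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "citation counts" drawn from a continuous power law P(x) ~ x^-alpha
alpha_true, xmin, n = 2.7, 1.0, 50_000
u = rng.random(n)
x = xmin * (1 - u) ** (-1 / (alpha_true - 1))  # inverse-CDF sampling

# Maximum-likelihood (Hill) estimator for the exponent
alpha_hat = 1 + n / np.log(x / xmin).sum()
print(round(alpha_hat, 2))
```

The estimator's standard error scales as $(\alpha - 1)/\sqrt{n}$, so with tens of thousands of publications the exponent can be pinned down to the second decimal, consistent with the reported range of 2.4 to 3.1.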
|
1003.1020
|
Learning by random walks in the weight space of the Ising perceptron
|
cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC
|
Several variants of a stochastic local search process for constructing the
synaptic weights of an Ising perceptron are studied. In this process, binary
patterns are sequentially presented to the Ising perceptron and are then
learned as the synaptic weight configuration is modified through a chain of
single- or double-weight flips within the compatible weight configuration space
of the earlier learned patterns. This process is able to reach a storage
capacity of $\alpha \approx 0.63$ for pattern length N = 101 and $\alpha
\approx 0.41$ for N = 1001. If in addition a relearning process is exploited,
the learning performance is further improved to a storage capacity of $\alpha
\approx 0.80$ for N = 101 and $\alpha \approx 0.42$ for N=1001. We found that,
for a given learning task, the solutions constructed by the random walk
learning process are separated by a typical Hamming distance, which decreases
with the constraint density $\alpha$ of the learning task; at a fixed value of
$\alpha$, the width of the Hamming distance distributions decreases with $N$.
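A minimal sketch of the single-flip random-walk learning process described above (the problem sizes and the accept/reject rule are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 101, 10  # weight length and pattern count (well below capacity)

patterns = rng.choice([-1, 1], size=(P, N))
targets = rng.choice([-1, 1], size=P)
w = rng.choice([-1, 1], size=N)  # binary synaptic weights

def satisfied(w, k):
    # N is odd, so the dot product of +/-1 vectors is never zero
    return np.sign(patterns[k] @ w) == targets[k]

for k in range(P):  # present patterns sequentially
    steps = 0
    while not satisfied(w, k):
        i = rng.integers(N)
        w[i] = -w[i]  # propose a single-weight flip
        # stay within the compatible space of the earlier learned patterns
        if not all(satisfied(w, j) for j in range(k)):
            w[i] = -w[i]  # undo the flip
        steps += 1
        if steps > 200_000:
            raise RuntimeError("random walk failed to learn pattern")
```

At this low constraint density the compatible space is well connected under single flips; near capacity, the paper's double-weight flips and relearning become important.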
|
1003.1048
|
Tag Clusters as Information Retrieval Interfaces
|
cs.IR
|
The paper presents our design of a next generation information retrieval
system based on tag co-occurrences and subsequent clustering. We help users
getting access to digital data through information visualization in the form of
tag clusters. Current problems, such as the absence of interactivity and
semantics between tags and the difficulty of adding additional search
arguments, are solved. In an evaluation based upon SERVQUAL and IT systems
quality indicators, we found that tag clusters are perceived as more useful
than tag clouds, are much more trustworthy, and are more enjoyable to use.
|
1003.1072
|
An Offline Technique for Localization of License Plates for Indian
Commercial Vehicles
|
cs.CV
|
Automatic License Plate Recognition (ALPR) is a challenging area of research
due to its importance to variety of commercial applications. The overall
problem may be subdivided into two key modules, firstly, localization of
license plates from vehicle images, and secondly, optical character recognition
of extracted license plates. In the current work, we have concentrated on the
first part of the problem, i.e., localization of license plate regions from
Indian commercial vehicles as a significant step towards development of a
complete ALPR system for Indian vehicles. The technique is based on color based
segmentation of vehicle images and identification of potential license plate
regions. True license plates are finally localized based on four spatial and
horizontal contrast features. The technique successfully localizes the actual
license plates in 73.4% images.
|
1003.1141
|
From Frequency to Meaning: Vector Space Models of Semantics
|
cs.CL cs.IR cs.LG
|
Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
|
1003.1179
|
View Synthesis from Schema Mappings
|
cs.DB
|
In data management, and in particular in data integration, data exchange,
query optimization, and data privacy, the notion of view plays a central role.
In several contexts, such as data integration, data mashups, and data
warehousing, the need arises of designing views starting from a set of known
correspondences between queries over different schemas. In this paper we deal
with the issue of automating such a design process. We call this novel problem
"view synthesis from schema mappings": given a set of schema mappings, each
relating a query over a source schema to a query over a target schema,
automatically synthesize for each source a view over the target schema in such
a way that for each mapping, the query over the source is a rewriting of the
query over the target wrt the synthesized views. We study view synthesis from
schema mappings both in the relational setting, where queries and views are
(unions of) conjunctive queries, and in the semistructured data setting, where
queries and views are (two-way) regular path queries, as well as unions of
conjunctions thereof. We provide techniques and complexity upper bounds for
each of these cases.
|
1003.1251
|
Minimum Spanning Tree on Spatio-Temporal Networks
|
cs.DS cs.DB
|
Given a spatio-temporal network (ST network) where edge properties vary with
time, a time-sub-interval minimum spanning tree (TSMST) is a collection of
minimum spanning trees of the ST network, where each tree is associated with a
time interval. During this time interval, the total cost of tree is least among
all the spanning trees. The TSMST problem aims to identify a collection of
distinct minimum spanning trees and their respective time-sub-intervals under
the constraint that the edge weight functions are piecewise linear. This is an
important problem in ST network application domains such as wireless sensor
networks (e.g., energy efficient routing). Computing TSMST is challenging
because the ranking of candidate spanning trees is non-stationary over a given
time interval. Existing methods such as dynamic graph algorithms and kinetic
data structures assume separable edge weight functions. In contrast, we propose
novel algorithms to find TSMST for large ST networks by accounting for both
separable and non-separable piecewise linear edge weight functions. The
algorithms are based on the ordering of edges in edge-order-intervals and
intersection points of edge weight functions.
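As a rough illustration (not the edge-order-interval algorithm itself), the TSMST concept can be seen by evaluating the MST of a toy ST network at sampled time points, with Kruskal's algorithm applied to the piecewise-linear weight functions evaluated at each time:

```python
def mst_cost(n, edges, t):
    """Kruskal's algorithm on a snapshot of the ST network at time t.
    edges: list of (u, v, f) where f(t) gives the edge weight at time t."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost = 0.0
    for w, u, v in sorted((f(t), u, v) for u, v, f in edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            cost += w
    return cost

# Toy ST network: three nodes, linear edge-weight functions of time t
edges = [
    (0, 1, lambda t: 1 + 2 * t),
    (1, 2, lambda t: 4 - t),
    (0, 2, lambda t: 2.5),
]
for t in (0.0, 1.0, 2.0):
    print(t, mst_cost(3, edges, t))
```

The tree that is optimal at t = 0 (edges 0-1 and 0-2) differs from the one at t = 2 (edges 1-2 and 0-2), which is exactly the non-stationary ranking that makes computing the full collection of trees and their time-sub-intervals challenging.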
|
1003.1256
|
Integrating Innate and Adaptive Immunity for Intrusion Detection
|
cs.AI cs.CR cs.NE
|
Network Intrusion Detection Systems (NIDS) monitor a network with the aim of
discerning malicious from benign activity on that network. While a wide range
of approaches have met varying levels of success, most IDSs rely on having
access to a database of known attack signatures which are written by security
experts. Nowadays, in order to reduce false positive alerts,
correlation algorithms are used to add additional structure to sequences of IDS
alerts. However, such techniques are of no help in discovering novel attacks or
variations of known attacks, something the human immune system (HIS) is capable
of doing in its own specialised domain. This paper presents a novel immune
algorithm for application to an intrusion detection problem. The goal is to
discover packets containing novel variations of attacks covered by an existing
signature base.
|
1003.1257
|
On the symbol error probability of regular polytopes
|
cs.IT math.IT
|
An exact expression for the symbol error probability of the four-dimensional
24-cell in Gaussian noise is derived. Corresponding expressions for other
regular convex polytopes are summarized. Numerically stable versions of these
error probabilities are also obtained.
|
1003.1266
|
Hitting and commute times in large graphs are often misleading
|
cs.DS cs.LG math.PR
|
Next to the shortest path distance, the second most popular distance function
between vertices in a graph is the commute distance (resistance distance). For
two vertices u and v, the hitting time H_{uv} is the expected time it takes a
random walk to travel from u to v. The commute time is its symmetrized version
C_{uv} = H_{uv} + H_{vu}. In our paper we study the behavior of hitting times
and commute distances when the number n of vertices in the graph is very large.
We prove that as n converges to infinity, hitting times and commute distances
converge to expressions that do not take into account the global structure of
the graph at all. Namely, the hitting time H_{uv} converges to 1/d_v and the
commute time to 1/d_u + 1/d_v where d_u and d_v denote the degrees of vertices
u and v. In these cases, the hitting and commute times are misleading in the
sense that they do not provide information about the structure of the graph. We
focus on two major classes of random graphs: random geometric graphs (k-nearest
neighbor graphs, epsilon-graphs, Gaussian similarity graphs) and random graphs
with given expected degrees (in particular, Erdos-Renyi graphs with and without
planted partitions).
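The limit result can be checked numerically on a dense Erdos-Renyi graph, where exact hitting times (obtained by solving a linear system) are already close to the degree-based expression; here we take the convergence to be stated up to the volume normalization, i.e. H_{uv} approximately vol(G)/d_v (a small-scale sketch; the graph parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Dense Erdos-Renyi graph, where the degree-based approximation is accurate
n, p = 400, 0.2
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(1)
P = A / deg[:, None]  # random-walk transition matrix
vol = deg.sum()

# Hitting times to target v solve (I - P restricted to u != v) h = 1
v = 0
idx = np.arange(1, n)
M = np.eye(n - 1) - P[np.ix_(idx, idx)]
h = np.linalg.solve(M, np.ones(n - 1))  # h[0] is H_{1,v}

approx = vol / deg[v]  # degree-based limit expression
print(h[0], approx)
```

The two numbers agree to within a few percent, illustrating that the exact hitting time carries essentially no information beyond the degree of the target vertex.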
|
1003.1343
|
What does Newcomb's paradox teach us?
|
cs.GT cs.AI math.OC math.PR
|
In Newcomb's paradox you choose to receive either the contents of a
particular closed box, or the contents of both that closed box and another one.
Before you choose, a prediction algorithm deduces your choice, and fills the
two boxes based on that deduction. Newcomb's paradox is that game theory
appears to provide two conflicting recommendations for what choice you should
make in this scenario. We analyze Newcomb's paradox using a recent extension of
game theory in which the players set conditional probability distributions in a
Bayes net. We show that the two game theory recommendations in Newcomb's
scenario have different presumptions for what Bayes net relates your choice and
the algorithm's prediction. We resolve the paradox by proving that these two
Bayes nets are incompatible. We also show that the accuracy of the algorithm's
prediction, the focus of much previous work, is irrelevant. In addition we show
that Newcomb's scenario only provides a contradiction between game theory's
expected utility and dominance principles if one is sloppy in specifying the
underlying Bayes net. We also show that Newcomb's paradox is time-reversal
invariant; both the paradox and its resolution are unchanged if the algorithm
makes its `prediction' after you make your choice rather than before.
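The two conflicting recommendations can be reproduced with a few lines of expected-utility arithmetic under the two incompatible dependency structures (the payoff values and accuracy are the standard illustrative ones for Newcomb's problem, not taken from the paper):

```python
# Payoffs: box A always holds $1,000; box B holds $1,000,000 iff the
# algorithm predicted you would take only box B. q is the predictor's accuracy.
q = 0.99

# Net 1: the prediction is statistically coupled to your choice.
# Expected-utility reasoning then favors one-boxing.
eu_one_box_dep = q * 1_000_000
eu_two_box_dep = 1_000 + (1 - q) * 1_000_000

# Net 2: the prediction is fixed before (and independent of) your choice,
# with some prior probability r that box B was filled.
# Dominance reasoning then favors two-boxing, whatever r is.
r = 0.5
eu_one_box_ind = r * 1_000_000
eu_two_box_ind = 1_000 + r * 1_000_000

print(eu_one_box_dep > eu_two_box_dep)  # expected-utility argument: one-box
print(eu_two_box_ind > eu_one_box_ind)  # dominance argument: two-box
```

Each recommendation is correct within its own Bayes net; the paradox dissolves once one notices that the two nets encode mutually exclusive assumptions about what the prediction depends on.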
|
1003.1354
|
Faster Rates for training Max-Margin Markov Networks
|
cs.LG cs.CC
|
Structured output prediction is an important machine learning problem both in
theory and practice, and the max-margin Markov network (\mcn) is an effective
approach. All state-of-the-art algorithms for optimizing \mcn\ objectives take
at least $O(1/\epsilon)$ iterations to find an $\epsilon$-accurate
solution. Recent results in structured optimization suggest that faster rates
are possible by exploiting the structure of the objective function. Towards
this end \citet{Nesterov05} proposed an excessive gap reduction technique based
on Euclidean projections which converges in $O(1/\sqrt{\epsilon})$ iterations
on strongly convex functions. Unfortunately when applied to \mcn s, this
approach does not admit graphical model factorization which, as in many
existing algorithms, is crucial for keeping the cost per iteration tractable.
In this paper, we present a new excessive gap reduction technique based on
Bregman projections which admits graphical model factorization naturally, and
converges in $O(1/\sqrt{\epsilon})$ iterations. Compared with existing
algorithms, the convergence rate of our method has better dependence on
$\epsilon$ and other parameters of the problem, and can be easily kernelized.
|
1003.1399
|
Automatic derivation of domain terms and concept location based on the
analysis of the identifiers
|
cs.CL
|
Developers express the meaning of the domain ideas in specifically selected
identifiers and comments that form the target implemented code. Software
maintenance requires knowledge and understanding of the encoded ideas. This
paper presents a way to create a domain vocabulary automatically. Knowledge
of domain vocabulary supports the comprehension of a specific domain for later
code maintenance or evolution. We present experiments conducted in two selected
domains: application servers and web frameworks. Knowledge of domain terms
enables easy localization of chunks of code that belong to a certain term. We
consider these chunks of code as "concepts" and their placement in the code as
"concept location". Application developers may also benefit from the obtained
domain terms. These terms are parts of speech that characterize a certain
concept. Concepts are encoded in "classes" (OO paradigm) and the obtained
vocabulary of terms supports the selection and the comprehension of the class'
appropriate identifiers. We measured the following software products with our
tool: JBoss, JOnAS, GlassFish, Tapestry, Google Web Toolkit and Echo2.
|
1003.1410
|
Local Space-Time Smoothing for Version Controlled Documents
|
cs.GR cs.CL cs.LG
|
Unlike static documents, version controlled documents are continuously edited
by one or more authors. Such a collaborative revision process makes traditional
modeling and visualization techniques inappropriate. In this paper we propose a
new representation based on local space-time smoothing that captures important
revision patterns. We demonstrate the applicability of our framework using
experiments on synthetic and real-world data.
|
1003.1422
|
Polar Coding for Secure Transmission and Key Agreement
|
cs.IT cs.CR math.IT
|
Wyner's work on wiretap channels and the recent works on information
theoretic security are based on random codes. Achieving information-theoretic
security with practical coding schemes is of definite interest. In this note,
we address this elusive task by employing the polar coding technique of
Ar{\i}kan. It is shown that polar codes achieve non-trivial
perfect secrecy rates for binary-input degraded wiretap channels while enjoying
their low encoding-decoding complexity. In the special case of symmetric main
and eavesdropper channels, this coding technique achieves the secrecy capacity.
Next, fading erasure wiretap channels are considered and a secret key agreement
scheme is proposed, which requires only the statistical knowledge of the
eavesdropper channel state information (CSI). The enabling factor is the
creation of advantage over Eve, by blindly using the proposed scheme over each
fading block, which is then exploited with privacy amplification techniques to
generate secret keys.
|
1003.1450
|
A New Clustering Approach based on Page's Path Similarity for Navigation
Patterns Mining
|
cs.LG
|
In recent years, predicting the user's next request in web navigation has
received much attention. One information source for dealing with this problem
is the information left behind by previous web users, stored in the web
access logs of web servers. Systems proposed for this problem are based on
the idea that if a large number of web users request specific pages of a
website in a given session, these pages are satisfying similar information
needs and are therefore conceptually related. In this study, a new clustering
approach is introduced that employs the logical storage paths of a website's
pages as an additional similarity parameter capturing the conceptual relation
between web pages. Simulation results show that the proposed approach is more
precise than others in determining the clusters.
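One plausible reading of the logical-path similarity this abstract describes
can be sketched as a small function (an illustrative assumption, not the
authors' actual measure): pages stored under a longer shared directory prefix
are treated as more related.

```python
def path_similarity(path_a: str, path_b: str) -> float:
    """Similarity of two page paths from their shared directory prefix.

    Pages stored close together in a site's directory tree are assumed
    to be conceptually related, as the abstract suggests.
    """
    a = [p for p in path_a.strip("/").split("/") if p]
    b = [p for p in path_b.strip("/").split("/") if p]
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    longest = max(len(a), len(b))
    return shared / longest if longest else 1.0
```

For example, `/news/sports/a.html` and `/news/sports/b.html` share two of
three path segments, giving a similarity of 2/3.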
|
1003.1455
|
A Computational Algorithm based on Empirical Analysis, that Composes
Sanskrit Poetry
|
cs.CL
|
Poetry-writing in Sanskrit is riddled with problems for even those who know
the language well. This is so because the rules that govern Sanskrit prosody
are numerous and stringent. We propose a computational algorithm that converts
prose given as E-text into poetry in accordance with the metrical rules of
Sanskrit prosody, simultaneously taking care to ensure that sandhi or euphonic
conjunction, which is compulsory in verse, is handled. The algorithm is
considerably sped up by a novel method of reducing the target search
database. The algorithm further gives suggestions to the poet in case what
he/she has given as the input prose is impossible to fit into any allowed
metrical format. There is also an interactive component of the algorithm by
which the algorithm interacts with the poet to resolve ambiguities. In
addition, this unique work, which provides a solution to a problem that has
never been addressed before, provides a simple yet effective speech recognition
interface that would help the visually impaired dictate words in E-text, which
is in turn versified by our Poetry Composer Engine.
|
1003.1457
|
The Comparison of Methods Artificial Neural Network with Linear
Regression Using Specific Variables for Prediction Stock Price in Tehran
Stock Exchange
|
cs.NE
|
In this paper, we estimate the stock prices of companies active on the Tehran
(Iran) stock exchange, using and comparing Linear Regression and Artificial
Neural Network methods. For the Artificial Neural Network, the General
Regression Neural Network (GRNN) architecture is used. We first considered 10
macroeconomic variables and 30 financial variables, and then, using
Independent Component Analysis (ICA), obtained seven final variables,
comprising 3 macroeconomic variables and 4 financial variables, to estimate
the stock price. We present an equation for each of the two methods and
compare their results, which show that the artificial neural network method
is more efficient than the linear regression method.
|
1003.1458
|
Secured Cryptographic Key Generation From Multimodal Biometrics: Feature
Level Fusion of Fingerprint and Iris
|
cs.CR cs.CV
|
Human users have a tough time remembering long cryptographic keys. Hence,
researchers, for so long, have been examining ways to utilize biometric
features of the user instead of a memorable password or passphrase, in an
effort to generate strong and repeatable cryptographic keys. Our objective is
to incorporate the volatility of the user's biometric features into the
generated key, so as to make the key unguessable to an attacker lacking
significant knowledge of the user's biometrics. We go one step further by
incorporating multiple biometric modalities into cryptographic key generation
so as to provide better security. In this article, we propose an efficient
approach based on multimodal biometrics (iris and fingerprint) for the
generation of a secure cryptographic key. The proposed approach is composed of three modules
namely, 1) Feature extraction, 2) Multimodal biometric template generation and
3) Cryptographic key generation. Initially, the features, minutiae points and
texture properties are extracted from the fingerprint and iris images
respectively. Subsequently, the extracted features are fused together at the
feature level to construct the multi-biometric template. Finally, a 256-bit
secure cryptographic key is generated from the multi-biometric template. For
experimentation, we have employed the fingerprint images obtained from publicly
available sources and the iris images from CASIA Iris Database. The
experimental results demonstrate the effectiveness of the proposed approach.
|
1003.1460
|
Ontology Based Query Expansion Using Word Sense Disambiguation
|
cs.IR
|
The existing information retrieval techniques do not consider the context of
the keywords present in the user's queries. Therefore, the search engines
sometimes do not provide sufficient information to the users. New methods based
on the semantics of user keywords must be developed to search in the vast web
space without incurring loss of information. The semantic based information
retrieval techniques need to understand the meaning of the concepts in the user
queries. This will improve the precision-recall of the search results.
Therefore, this approach focuses on concept-based semantic information
retrieval. This work is based on word sense disambiguation, the WordNet
thesaurus, and an ontology of the given domain for retrieving information, in
order to capture the context of particular concepts and discover semantic
relationships between them.
|
1003.1493
|
Integration of Rule Based Expert Systems and Case Based Reasoning in an
Acute Bacterial Meningitis Clinical Decision Support System
|
cs.AI
|
This article presents the results of the research carried out on the
development of a medical diagnostic system applied to the Acute Bacterial
Meningitis, using the Case Based Reasoning methodology. The research was
focused on the implementation of the adaptation stage, from the integration of
Case Based Reasoning and Rule Based Expert Systems. In this adaptation stage
we use a higher-level CBR that stores and allows the reuse of adaptation
experiences, combined with a classic rule-based inference engine. In order to take into
account the most evident clinical situation, a pre-diagnosis stage is
implemented using a rule engine that, given an evident situation, emits the
corresponding diagnosis and avoids the complete process.
|
1003.1494
|
Formal Concept Analysis for Information Retrieval
|
cs.IR
|
In this paper we describe a mechanism to improve Information Retrieval (IR)
on the web. The method is based on Formal Concept Analysis (FCA), which
establishes semantic relations during queries and allows the answers provided
by a search engine to be reorganized into a lattice of concepts. For IR, we
propose an incremental algorithm based on the Galois lattice. This algorithm
allows a formal clustering of the data sources, and the results it returns
are ranked by relevance. Relevance control is exploited in the clustering; we
improve the results by using an ontology from the field of image processing
and by reformulating the user queries, which makes it possible to return more
relevant documents.
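The core FCA step behind a Galois-lattice clustering of search results can be
sketched as the standard derivation operators (a textbook illustration under
hypothetical names, not the paper's incremental algorithm): an attribute set
is closed into a formal concept, i.e. a pair (extent, intent).

```python
def formal_concept(context, attrs):
    """Close an attribute set into a formal concept (extent, intent).

    `context` maps each object (e.g. a document) to its attribute set
    (e.g. query terms). The extent is every object having all of `attrs`;
    the intent is every attribute common to that extent.
    """
    extent = {g for g, a in context.items() if attrs <= a}
    if extent:
        intent = set.intersection(*(context[g] for g in extent))
    else:
        # Empty extent: the intent is all attributes of the context.
        intent = set().union(*context.values()) if context else set()
    return extent, intent
```

With `context = {"d1": {"t1", "t2"}, "d2": {"t1"}, "d3": {"t2", "t3"}}`,
closing `{"t1"}` yields the concept `({"d1", "d2"}, {"t1"})` — the cluster of
documents answering the query term t1.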
|
1003.1499
|
Evaluation of E-Learners Behaviour using Different Fuzzy Clustering
Models: A Comparative Study
|
cs.CY cs.LG
|
This paper introduces evaluation methodologies for e-learners' behaviour that
provide feedback to the decision makers in an e-learning system. The
learner's profile plays a crucial role in the evaluation process for
improving the performance of the e-learning process. The work focuses on
clustering the e-learners, based on their behaviour, into specific categories
that represent the learners' profiles. The learners' classes are named
regular, workers, casual, bad, and absent. The work may answer the question
of how to turn bad students back into regular ones. The work presents the use
of different fuzzy clustering techniques, namely fuzzy c-means and kernelized
fuzzy c-means, to find the learners' categories and predict their profiles.
The paper presents the main phases of data description, preparation, feature
selection, and experiment design using the different fuzzy clustering models.
Analysis of the obtained results and comparison with the real-world behaviour
of those learners showed a match of 78%. Fuzzy clustering reflects the
learners' behaviour better than crisp clustering. A comparison between FCM
and KFCM showed that KFCM is much better than FCM at predicting the
learners' behaviour.
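The fuzzy c-means membership update at the heart of FCM (and, with a kernel
distance, of KFCM) can be sketched as follows; this is the standard textbook
formula, not the authors' implementation.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update:
    u_ck = 1 / sum_j (d_ck / d_jk)^(2/(m-1)).

    X: (n, d) data points; centers: (c, d) cluster centers; m > 1 is the
    fuzzifier. Returns a (c, n) membership matrix whose columns sum to 1.
    """
    # Pairwise distances d_ck between every center c and point k.
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
    d = np.fmax(d, 1e-12)  # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)
```

A point lying on a center gets a membership near 1 for that cluster; points
between centers receive graded memberships, which is exactly what lets fuzzy
clustering reflect mixed learner behaviour better than crisp assignments.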
|
1003.1500
|
Hierarchical Approach for Online Mining--Emphasis towards Software
Metrics
|
cs.DB
|
Several multi-pass algorithms have been proposed for Association Rule Mining
from static repositories. However, such algorithms are incapable of online
processing of transaction streams. In this paper we introduce an efficient
single-pass algorithm for mining association rules, given a hierarchical
classification amongst items. Processing efficiency is achieved by utilizing
two optimizations, hierarchy-aware counting and transaction reduction, which
become possible in the context of hierarchical classification. This paper
considers the problem of integrating constraints, expressed as Boolean
expressions over the presence or absence of items, into the association
discovery algorithm. We present three integrated algorithms for mining
association rules with item constraints and discuss their tradeoffs. It is
concluded that the variation of complexity depends on the measures of DIT
(Depth of Inheritance Tree) and NOC (Number of Children) in the context of
hierarchical classification.
|
1003.1504
|
Indexer Based Dynamic Web Services Discovery
|
cs.AI
|
Recent advancements in web services play an important role in
business-to-business and business-to-consumer interaction. The discovery
mechanism is used not only to find a suitable service but also to provide
collaboration between service providers and consumers using standard
protocols. A static web service discovery mechanism is not only time
consuming but also requires continuous human interaction. This paper proposes
an efficient dynamic web services discovery mechanism that can locate
relevant and updated web services from service registries and repositories,
with timestamps based on indexing values and categorization, for faster and
more efficient service discovery. The proposed prototype focuses on
quality-of-service issues and introduces the concepts of a local cache,
categorization of services, an indexing mechanism, a CSP (Constraint
Satisfaction Problem) solver, aging, and the use of a translator. The
performance of the proposed framework is evaluated by implementing the
algorithm, and the correctness of our method is shown. The results show
greater performance and accuracy in the dynamic discovery of web services,
resolving the existing issues of flexibility and scalability based on quality
of service, and discovering updated and most relevant services with ease of
use.
|
1003.1510
|
Hierarchical Web Page Classification Based on a Topic Model and
Neighboring Pages Integration
|
cs.LG
|
Most Web page classification models typically apply the bag of words (BOW)
model to represent the feature space. The original BOW representation, however,
is unable to recognize semantic relationships between terms. One possible
solution is to apply the topic model approach based on the Latent Dirichlet
Allocation algorithm to cluster the term features into a set of latent topics.
Terms assigned into the same topic are semantically related. In this paper, we
propose a novel hierarchical classification method based on a topic model and
by integrating additional term features from neighboring pages. Our
hierarchical classification method consists of two phases: (1) feature
representation by using a topic model and integrating neighboring pages, and
(2) hierarchical Support Vector Machines (SVM) classification model constructed
from a confusion matrix. From the experimental results, the approach of using
the proposed hierarchical SVM model, integrating the current page with
neighboring pages via the topic model, yielded the best performance, with
accuracy equal to 90.33% and an F1 measure of 90.14%; an improvement of 5.12%
and 5.13% over the original SVM model, respectively.
|
1003.1511
|
Clinical gait data analysis based on Spatio-Temporal features
|
cs.CV
|
Analysing human gait has found considerable interest in recent computer
vision research. So far, however, contributions to this topic exclusively dealt
with the tasks of person identification or activity recognition. In this paper,
we consider a different application for gait analysis and examine its use as a
means of deducing the physical well-being of people. The proposed method is
based on transforming the joint motion trajectories using wavelets to extract
spatio-temporal features which are then fed as input to a vector quantiser; a
self-organising map for classification of walking patterns of individuals with
and without pathology. We show that our proposed algorithm is successful in
extracting features that successfully discriminate between individuals with and
without locomotion impairment.
|
1003.1588
|
On the Failure of the Finite Model Property in some Fuzzy Description
Logics
|
cs.AI
|
Fuzzy Description Logics (DLs) are a family of logics which allow the
representation of (and the reasoning with) structured knowledge affected by
vagueness. Although most of the not very expressive crisp DLs, such as ALC,
enjoy the Finite Model Property (FMP), this is not the case once we move into
the fuzzy case. In this paper we show that if we allow arbitrary knowledge
bases, then the fuzzy DLs ALC under Lukasiewicz and Product fuzzy logics do not
verify the FMP even if we restrict to witnessed models; in other words, finite
satisfiability and witnessed satisfiability are different for arbitrary
knowledge bases. The aim of this paper is to point out the failure of FMP
because it affects several algorithms published in the literature for reasoning
under fuzzy ALC.
|
1003.1598
|
Information Fusion in the Immune System
|
cs.AI cs.NE
|
Biologically-inspired methods such as evolutionary algorithms and neural
networks are proving useful in the field of information fusion. Artificial
Immune Systems (AISs) are a biologically-inspired approach which take
inspiration from the biological immune system. Interestingly, recent research
has shown how AISs that use multi-level information sources as input data can
be used to build effective algorithms for real-time computer intrusion
detection. This research is based on the biological information fusion
mechanisms used by the human immune system, and as such might be of interest
to the information fusion community. The aim of this paper is to present a
summary of some of the biological information fusion mechanisms seen in the
human immune system, and of how these mechanisms have been implemented as
AISs.
|
1003.1655
|
Inner and Outer Bounds for the Public Information Embedding Capacity
Region Under Multiple Access Attacks
|
cs.IT math.IT
|
We consider a public multi-user information embedding (watermarking) system
in which two messages (watermarks) are independently embedded into two
correlated covertexts and are transmitted through a multiple-access attack
channel. The tradeoff between the achievable embedding rates and the average
distortions for the two embedders is studied. For given distortion levels,
inner and outer bounds for the embedding capacity region are obtained in
single-letter form. Tighter bounds are also given for independent covertexts.
|
1003.1658
|
A multivalued knowledge-base model
|
cs.AI
|
The basic aim of our study is to give a possible model for handling uncertain
information. This model is worked out in the framework of DATALOG. First, the
concept of fuzzy Datalog is summarized; then its extensions to
intuitionistic- and interval-valued fuzzy logic are given, and the concept of
bipolar fuzzy Datalog is introduced. Based on these ideas, the concept of a
multivalued knowledge-base is defined as a quadruple of background
knowledge, a deduction mechanism, a connecting algorithm, and a function set
of the program, which helps us determine the uncertainty levels of the
results. Finally, a possible evaluation strategy is given.
|
1003.1738
|
MISO Capacity with Per-Antenna Power Constraint
|
cs.IT math.IT
|
We establish in closed-form the capacity and the optimal signaling scheme for
a MISO channel with per-antenna power constraint. Two cases of channel state
information are considered: constant channel known at both the transmitter and
receiver, and Rayleigh fading channel known only at the receiver. For the first
case, the optimal signaling scheme is beamforming with the phases of the beam
weights matched to the phases of the channel coefficients, but the amplitudes
independent of the channel coefficients and dependent only on the constrained
powers. For the second case, the optimal scheme is to send independent signals
from the antennas with the constrained powers. In both cases, the capacity with
per-antenna power constraint is usually less than that with sum power
constraint.
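The optimal scheme stated above for the constant-channel case can be written
out directly (a minimal sketch of the abstract's closed-form result, with
hypothetical variable names): each antenna transmits at its full constrained
power, with phase matched to its channel coefficient.

```python
import numpy as np

def per_antenna_beamformer(h, p):
    """Beamforming weights for a constant MISO channel with per-antenna
    power constraints p_i: amplitude sqrt(p_i), phase cancelling the phase
    of the corresponding channel coefficient, as the abstract states.
    """
    return np.sqrt(p) * np.exp(-1j * np.angle(h))

def rate(h, w, noise_var=1.0):
    """Achieved rate log2(1 + |h^T w|^2 / sigma^2) in bits per channel use."""
    return np.log2(1.0 + np.abs(h @ w) ** 2 / noise_var)
```

With phase matching, the antenna contributions add coherently, so the
achieved rate is at least that of any unmatched real-amplitude weighting
under the same per-antenna powers.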
|
1003.1787
|
Vulnerability of MRD-Code-based Universal Secure Network Coding against
Stronger Eavesdroppers
|
cs.IT math.IT
|
Silva et al. proposed a universal secure network coding scheme based on MRD
codes, which can be applied to any underlying network code. This paper
considers a stronger eavesdropping model where the eavesdroppers possess the
ability to re-select the tapping links during the transmission. We give a proof
for the impossibility of attaining universal security against such adversaries
using Silva et al.'s code for all choices of code parameters, even with a
restricted number of tapped links. We also consider cases with restricted
tapping duration and derive some conditions for this code to be secure.
|
1003.1792
|
A Hybrid System based on Multi-Agent System in the Data Preprocessing
Stage
|
cs.MA
|
We describe the use of a multi-agent system in the data preprocessing stage
of an ongoing project called e-Wedding. The aim of this project is to utilize
MAS and various approaches, such as web services, ontologies, and data mining
techniques, in e-Business, to improve the responsiveness and efficiency of
systems and to extract customer behavior models for wedding businesses. In
this paper, however, we propose and implement the multi-agent system, based
on JADE, to cope only with the data preprocessing stage, specifically
techniques for handling missing values. JADE is quite easy to learn and use.
Moreover, it supports many agent approaches such as agent communication,
protocols, behaviors, and ontologies. This framework has been experimented
with and evaluated in a simple but realistic setting. The results, though
still preliminary, are promising.
|
1003.1795
|
A Survey of Na\"ive Bayes Machine Learning approach in Text Document
Classification
|
cs.LG cs.IR
|
Text document classification aims at associating one or more predefined
categories with a document, based on the likelihood suggested by a training
set of labeled documents. Many machine learning algorithms play a vital role
in training the system with predefined categories; among them, Na\"ive Bayes
has some intriguing properties: it is simple, easy to implement, and achieves
good accuracy on large datasets in spite of its na\"ive independence
assumption. Given the importance of the Na\"ive Bayes machine learning
approach, this study surveys its use for text document classification and the
statistical event models available. The various feature selection methods are
discussed and compared, along with the metrics related to text document
classification.
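The multinomial event model the survey discusses can be sketched in a few
lines (a textbook illustration with Laplace smoothing, not code from the
survey itself):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes from (label, tokens) pairs."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, tokens in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab, sum(class_docs.values())

def predict_nb(model, tokens):
    """Pick the class maximizing log P(c) + sum_t log P(t|c),
    with add-one (Laplace) smoothing of the term probabilities."""
    class_docs, word_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for c in class_docs:
        lp = math.log(class_docs[c] / n)
        total = sum(word_counts[c].values())
        for t in tokens:
            lp += math.log((word_counts[c][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Training on a handful of labeled token lists and scoring an unseen document
is enough to see the "simple, easy to implement" character the abstract
describes.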
|
1003.1803
|
Nonlinear Filter Based Image Denoising Using AMF Approach
|
cs.CV
|
This paper proposes a new technique based on a nonlinear Adaptive Median
Filter (AMF) for image restoration. Image denoising is a common procedure in
digital image processing aiming at the removal of noise, which may corrupt an
image during its acquisition or transmission, while retaining its quality.
This procedure is traditionally performed by filtering in the spatial or
frequency domain. The aim of image enhancement is to reconstruct the true
image from the corrupted image, since the process of image acquisition
frequently leads to degradation and the quality of the digitized image
becomes inferior to the original. In linear filtering, the value of an output
pixel is a linear combination of neighborhood values, which can blur the
image; thus a variety of nonlinear smoothing techniques have been developed.
The median filter is one of the most popular nonlinear filters. It is highly
efficient over small neighborhoods, but for large windows and high noise
levels it introduces more blurring. The Centre Weighted Median (CWM) filter
has a better average performance than the median filter [8]; however,
original pixels may remain corrupted under high-noise conditions, so this
technique also has a blurring effect on the image. To illustrate the
superiority of the proposed approach in overcoming these problems, the
proposed Adaptive Median Filter (AMF) scheme has been simulated along with
the standard filters, and various performance measures have been compared.
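A standard adaptive median filter of the kind named here can be sketched as
follows (the classical textbook algorithm, which may differ in detail from
the paper's scheme): the window grows at each pixel until the local median is
not itself an impulse, and only impulse-valued pixels are replaced.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Adaptive median filter sketch. At each pixel, grow the window until
    min < median < max (the median is not an impulse); then keep the pixel
    if it is itself not an impulse, otherwise replace it with the median.
    """
    out = img.astype(float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = 3
            while win <= max_win:
                r = win // 2
                patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                lo, med, hi = patch.min(), np.median(patch), patch.max()
                if lo < med < hi:
                    out[i, j] = img[i, j] if lo < img[i, j] < hi else med
                    break
                win += 2
            else:
                # Window reached max size: fall back to the median.
                out[i, j] = med
    return out
```

Because only pixels flagged as impulses are replaced, fine detail in
uncorrupted regions is preserved, which is the usual advantage of adaptive
over plain median filtering.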
|
1003.1810
|
Reconfigurable Parallel Data Flow Architecture
|
cs.MA
|
This paper presents a reconfigurable parallel data flow architecture. This
architecture uses the concepts of multi-agent paradigm in reconfigurable
hardware systems. The utilization of this new paradigm has the potential to
greatly increase the flexibility, efficiency, expandability of data flow
systems and to provide an attractive alternative to the current set of disjoint
approaches that are currently applied to this problem domain. The ability of
the methodology to implement data-flow-type processing with different models
is presented in this paper.
|
1003.1814
|
An Analytical Approach to Document Clustering Based on Internal
Criterion Function
|
cs.IR
|
Fast, high-quality document clustering is an important task in organizing
information, in presenting search engine results obtained from user queries,
and in enhancing web crawling and information retrieval. With the large
amount of data available, and with the goal of creating good-quality
clusters, a variety of algorithms have been developed with quality-complexity
trade-offs. Among these, some algorithms seek to minimize computational
complexity using criterion functions defined over the whole clustering
solution. In this paper, we propose a novel document clustering algorithm
based on an internal criterion function. The most commonly used partitioning
clustering algorithms (e.g. k-means) have some drawbacks: they suffer from
local optima and may produce empty clusters. The proposed algorithm usually
does not suffer from these problems and converges toward a global optimum,
and its performance improves as the number of clusters increases. We have
evaluated our algorithm on three different datasets for four different values
of k (the required number of clusters).
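A typical internal criterion of the kind optimized here can be sketched as
the total cosine similarity of documents to their cluster centroids (an
illustrative example; the paper's actual criterion function is not specified
in the abstract):

```python
import numpy as np

def internal_criterion(X, labels, k):
    """Sum, over clusters, of each document's cosine similarity to its
    cluster centroid (higher is better). Rows of X are assumed to be
    L2-normalized document vectors; `labels` assigns each row a cluster id.
    """
    score = 0.0
    for c in range(k):
        members = X[labels == c]
        if len(members) == 0:
            continue  # empty cluster contributes nothing
        centroid = members.mean(axis=0)
        norm = np.linalg.norm(centroid)
        if norm > 0:
            score += float((members @ (centroid / norm)).sum())
    return score
```

A coherent partition (similar documents grouped together) scores strictly
higher than one that mixes the groups, which is what makes such a function
usable as an objective for partitional clustering.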
|
1003.1816
|
Role of Data Mining in E-Payment systems
|
cs.DB
|
Data mining deals with extracting hidden knowledge, unexpected patterns, and
new rules from large databases. Various customized data mining tools have
been developed for domain-specific applications such as biomedicine, DNA
analysis, and telecommunications. Trends in data mining include further
efforts toward exploring new application areas and methods for handling
complex data types, algorithm scalability, constraint-based data mining, and
visualization methods. Data mining has matured as a field of basic and
applied research in computer science. In this paper, we present
domain-specific secure multiparty computation techniques and applications,
and survey some of the recent approaches and architectures where data mining
has been applied to e-payment systems. We limit our discussion to data mining
in the context of e-payment systems, and mention a few directions for further
work in this domain, based on the survey.
|
1003.1819
|
Facial Gesture Recognition Using Correlation And Mahalanobis Distance
|
cs.CV
|
Augmenting human computer interaction with automated analysis and synthesis
of facial expressions is a goal towards which much research effort has been
devoted recently. Facial gesture recognition is one of the important
components of natural human-machine interfaces; it may also be used in
behavioural science, security systems, and clinical practice. Although humans
recognise facial expressions virtually without effort or delay, reliable
expression recognition by machine is still a challenge. The face expression
recognition problem is challenging because different individuals display the
same expression differently. This paper presents an overview of gesture
recognition in real time using the concepts of correlation and Mahalanobis
distance. We consider the six universal emotional categories, namely joy,
anger, fear, disgust, sadness, and surprise.
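The Mahalanobis distance used as a matching measure can be sketched directly
(the standard definition; the feature pipeline around it is the paper's own):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance sqrt((x - mu)^T S^{-1} (x - mu)) of a feature
    vector x from a class with mean mu and covariance S. Unlike Euclidean
    distance, it accounts for the spread and correlation of the features.
    """
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```

With the identity covariance it reduces to the Euclidean distance; with a
larger per-feature variance the same displacement counts for less, which is
why it suits expression classes whose features vary by different amounts.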
|
1003.1826
|
A GA based Window Selection Methodology to Enhance Window based Multi
wavelet transformation and thresholding aided CT image denoising technique
|
cs.CV
|
Image denoising is gaining significance, especially in Computed Tomography
(CT), an important and very common modality in medical imaging, mainly
because the effectiveness of clinical diagnosis using CT images depends on
image quality. A denoising technique for CT images using window-based
multi-wavelet transformation and thresholding has shown effectiveness in
denoising; however, it has a drawback in selecting the closest windows during
the window-based multi-wavelet transformation and thresholding process.
Generally, the windows of the duplicate noisy image that are closest to each
window of the original noisy image are obtained by checking them
sequentially. This risks missing very close windows, so the aforesaid step of
the denoising technique requires enhancement. In this paper, we propose a
GA-based window selection methodology to incorporate into the denoising
technique. With the aid of the GA-based window selection methodology, the
windows of the duplicate noisy image that are closest to every window of the
original noisy image are extracted in an effective manner, and the denoising
of the CT image is performed effectively. Finally, a comparison is made
between the denoising technique with and without the proposed GA-based window
selection methodology.
|
1003.1827
|
Investigation and Assessment of Disorder of Ultrasound B-mode Images
|
cs.CV
|
Digital images play a vital role in the early detection of cancers, such as
prostate, breast, lung, and cervical cancer. Ultrasound imaging is also
suitable for the early detection of fetal abnormality, and accurate detection
of the region of interest in an ultrasound image is crucial. Since the image
is the result of reflection, refraction, and deflection of ultrasound waves
from different types of tissues with different acoustic impedance, the
contrast in an ultrasound image is usually very low, and weak edges make it
difficult to identify the fetus region in the image. The analysis of
ultrasound images is therefore challenging. We try to develop a new
algorithmic approach to solve the problem of non-clarity and to find disorder
in the image. Generally, there is no common enhancement approach for noise
reduction. This paper proposes different filtering techniques based on
statistical methods for the removal of various kinds of noise. The quality of
the enhanced images is measured by the statistical quantity measures:
Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean
Square Error (RMSE).
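Two of the quality measures named above have standard definitions that can be
sketched directly (8-bit peak value assumed):

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and an enhanced image."""
    diff = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(peak / RMSE)."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else float(20.0 * np.log10(peak / e))
```

Lower RMSE (equivalently, higher PSNR) indicates an enhanced image closer to
the reference; identical images give RMSE 0 and infinite PSNR.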
|
1003.1888
|
Biology-Derived Algorithms in Engineering Optimization
|
math.OC cs.CE cs.NE q-bio.QM
|
Biology-derived algorithms are an important part of computational sciences,
which are essential to many scientific disciplines and engineering
applications. Many computational methods are derived from or based on the
analogy to natural evolution and biological activities, and these biologically
inspired computations include genetic algorithms, neural networks, cellular
automata, and other algorithms.
|
1003.1891
|
Handwritten Arabic Numeral Recognition using a Multi Layer Perceptron
|
cs.CV
|
Handwritten numeral recognition is in general a benchmark problem of Pattern
Recognition and Artificial Intelligence. Compared to the problem of printed
numeral recognition, the problem of handwritten numeral recognition is
compounded due to variations in shapes and sizes of handwritten characters.
Considering all these, the problem of handwritten numeral recognition is
addressed under the present work with respect to handwritten Arabic numerals.
Arabic is spoken throughout the Arab World and is the fifth most popular
language in the world, slightly ahead of Portuguese and Bengali. For the
present work, we have developed a feature set of 88 features designed to
represent samples of handwritten Arabic numerals; it includes 72 shadow and
16 octant features. A Multi Layer Perceptron (MLP) based classifier is used
here for recognizing handwritten Arabic digits represented with the said
feature set. On
experimentation with a database of 3000 samples, the technique yields an
average recognition rate of 94.93% evaluated after three-fold cross validation
of results. It is useful for applications related to OCR of handwritten Arabic
Digit and can also be extended to include OCR of handwritten characters of
Arabic alphabet.
|
1003.1894
|
A comparative study of different feature sets for recognition of
handwritten Arabic numerals using a Multi Layer Perceptron
|
cs.CV
|
The work presents a comparative assessment of seven different feature sets
for recognition of handwritten Arabic numerals using a Multi Layer Perceptron
(MLP) based classifier. The seven feature sets employed here consist of shadow
features, octant centroids, longest runs, angular distances, effective spans,
dynamic centers of gravity, and some of their combinations. On experimentation
with a database of 3000 samples, the maximum recognition rate of 95.80% is
observed with both of two separate combinations of features. One of these
combinations consists of shadow and centroid features, i.e. 88 features in
all, and the other of shadow, centroid and longest run features, i.e. 124
features in all. Out of these two, the former combination having a smaller
number of features is finally considered effective for applications related to
Optical Character Recognition (OCR) of handwritten Arabic numerals. The work
can also be extended to include OCR of handwritten characters of Arabic
alphabet.
|
1003.1931
|
Hypergraph model of social tagging networks
|
physics.soc-ph cs.IR
|
The past few years have witnessed the great success of a new family of
paradigms, so-called folksonomy, which allows users to freely associate tags to
resources and efficiently manage them. In order to uncover the underlying
structures and user behaviors in folksonomy, in this paper, we propose an
evolutionary hypergraph model to explain the emerging statistical properties.
The present model introduces a novel mechanism by which one can not only assign
tags to resources, but also retrieve resources via collaborative tags. We then
compare the model with a real-world dataset: \emph{Del.icio.us}. Indeed, the
present model shows considerable agreement with the empirical data in the
following aspects: power-law hyperdegree distributions, negative correlation between
clustering coefficients and hyperdegrees, and small average distances.
Furthermore, the model indicates that most tagging behaviors are motivated by
labeling tags to resources, and tags play a significant role in effectively
retrieving interesting resources and making acquaintance with congenial
friends. The proposed model may shed some light on the in-depth understanding
of the structure and function of folksonomy.
|
1003.1954
|
Estimation of R\'enyi Entropy and Mutual Information Based on
Generalized Nearest-Neighbor Graphs
|
stat.ML cs.AI
|
We present simple and computationally efficient nonparametric estimators of
R\'enyi entropy and mutual information based on an i.i.d. sample drawn from an
unknown, absolutely continuous distribution over $\mathbb{R}^d$. The estimators are
calculated as the sum of $p$-th powers of the Euclidean lengths of the edges of
the `generalized nearest-neighbor' graph of the sample and the empirical copula
of the sample respectively. For the first time, we prove the almost sure
consistency of these estimators and upper bounds on their rates of convergence,
the latter of which under the assumption that the density underlying the sample
is Lipschitz continuous. Experiments demonstrate their usefulness in
independent subspace analysis.
|
1003.2005
|
Control of Complex Maneuvers for a Quadrotor UAV using Geometric Methods
on SE(3)
|
math.OC cs.SY
|
This paper provides new results for control of complex flight maneuvers for a
quadrotor unmanned aerial vehicle (UAV). The flight maneuvers are defined by a
concatenation of flight modes or primitives, each of which is achieved by a
nonlinear controller that solves an output tracking problem. A mathematical
model of the quadrotor UAV rigid body dynamics, defined on the configuration
space $SE(3)$, is introduced as a basis for the analysis. The quadrotor UAV has
four input degrees of freedom, namely the magnitudes of the four rotor thrusts;
each flight mode is defined by solving an asymptotic optimal tracking problem.
Although many flight modes can be studied, we focus on three output tracking
problems, namely (1) outputs given by the vehicle attitude, (2) outputs given
by the three position variables for the vehicle center of mass, and (3) output
given by the three velocity variables for the vehicle center of mass. A
nonlinear tracking controller is developed on the special Euclidean group $SE(3)$
for each flight mode, and the closed loop is shown to have desirable closed
loop properties that are almost global in each case. Several numerical
examples, including one example in which the quadrotor recovers from being
initially upside down and another example that includes switching and
transitions between different flight modes, illustrate the versatility and
generality of the proposed approach.
|
1003.2022
|
Fast space-variant elliptical filtering using box splines
|
cs.CV cs.CE cs.IT cs.NA math.IT
|
The efficient realization of linear space-variant (non-convolution) filters
is a challenging computational problem in image processing. In this paper, we
demonstrate that it is possible to filter an image with a Gaussian-like
elliptic window of varying size, elongation and orientation using a fixed
number of computations per pixel. The associated algorithm, which is based on a
family of smooth compactly supported piecewise polynomials, the
radially-uniform box splines, is realized using pre-integration and local
finite-differences. The radially-uniform box splines are constructed through
the repeated convolution of a fixed number of box distributions, which have
been suitably scaled and distributed radially in a uniform fashion. The
attractive features of these box splines are their asymptotic behavior, their
simple covariance structure, and their quasi-separability. They converge to
Gaussians with the increase of their order, and are used to approximate
anisotropic Gaussians of varying covariance simply by controlling the scales of
the constituent box distributions. Based on the second feature, we develop a
technique for continuously controlling the size, elongation and orientation of
these Gaussian-like functions. Finally, the quasi-separable structure, along
with a certain scaling property of box distributions, is used to efficiently
realize the associated space-variant elliptical filtering, which requires O(1)
computations per pixel irrespective of the shape and size of the filter.
|
1003.2138
|
Need-based Communication for Smart Grid: When to Inquire Power Price?
|
cs.IT math.IT
|
In smart grid, a home appliance can adjust its power consumption level
according to the real-time power price obtained from communication channels.
Most studies on smart grid do not consider the cost of communications which
cannot be ignored in many situations. Therefore, the total cost in smart grid
should be jointly optimized with the communication cost. In this paper, a
probabilistic mechanism of locational margin price (LMP) is applied and a model
for the stochastic evolution of the underlying load which determines the power
price is proposed. Based on this framework of power price, the problem of
determining when to inquire the power price is formulated as a Markov decision
process and the corresponding elements, namely the action space, system state
and reward function, are defined. Dynamic programming is then applied to obtain
the optimal strategy. A simpler myopic approach is proposed by comparing the
cost of communications and the penalty incurred by using the old value of power
price. Numerical results show the significant performance gain of the optimal
strategy of price inquiry, as well as the near-optimality of the myopic
approach.
|
1003.2142
|
QoS Routing in Smart Grid
|
cs.IT math.IT
|
Smart grid is an emerging technology which is able to control the power load
via price signaling. The communication between the power supplier and power
customers is a key issue in smart grid. Performance degradation like delay or
outage may cause significant impact on the stability of the pricing based
control and thus the reward of smart grid. Therefore, a QoS mechanism is
proposed for the communication system in smart grid, which incorporates the
derivation of QoS requirement and applies QoS routing in the communication
network. For deriving the QoS requirement, the dynamics of power load and the
load-price mapping are studied. The corresponding impacts of different QoS
metrics like delay are analyzed. Then, the QoS is derived via an optimization
problem that maximizes the total revenue. Based on the derived QoS requirement,
a simple greedy QoS routing algorithm is proposed for the requirement of high
speed routing in smart grid. It is also proven that the proposed greedy
algorithm is a $K$-approximation. Numerical simulation shows that the proposed
mechanism and algorithm can effectively derive and secure the communication QoS
in smart grid.
|
1003.2218
|
Supermartingales in Prediction with Expert Advice
|
cs.LG
|
We apply the method of defensive forecasting, based on the use of
game-theoretic supermartingales, to prediction with expert advice. In the
traditional setting of a countable number of experts and a finite number of
outcomes, the Defensive Forecasting Algorithm is very close to the well-known
Aggregating Algorithm. Not only the performance guarantees but also the
predictions are the same for these two methods, despite their fundamentally
different natures. We also discuss a new setting where the experts can give
advice conditional on the learner's future decision. Both algorithms can be
adapted to the new setting and give the same performance guarantees as in the
traditional setting. Finally, we outline an application of defensive
forecasting to a setting with several loss functions.
|
1003.2226
|
Interference Focusing for Mitigating Cross-Phase Modulation in a
Simplified Optical Fiber Model
|
cs.IT math.IT
|
A memoryless interference network model is introduced that is based on
non-linear phenomena observed when transmitting information over optical fiber
using wavelength division multiplexing. The main characteristic of the model is
that amplitude variations on one carrier wave are converted to phase variations
on another carrier wave, i.e., the carriers interfere with each other through
amplitude-to-phase conversion. For the case of two carriers, a new technique
called interference focusing is proposed where each carrier achieves the
capacity pre-log 1, thereby doubling the pre-log of 1/2 achieved by using
conventional methods. The technique requires neither channel time variations
nor global channel state information. Generalizations to more than two carriers
are outlined.
|
1003.2257
|
Bit Allocation Law for Multi-Antenna Channel Feedback Quantization:
Single-User Case
|
cs.IT math.IT
|
This paper studies the design and optimization of a limited feedback
single-user system with multiple-antenna transmitter and single-antenna
receiver. The design problem is cast in the form of minimizing the average
transmission power at the base station subject to the user's outage probability
constraint. The optimization is over the user's channel quantization codebook
and the transmission power control function at the base station. Our approach
is based on fixing the outage scenarios in advance and transforming the design
problem into a robust system design problem. We start by showing that uniformly
quantizing the channel magnitude in dB scale is asymptotically optimal,
regardless of the magnitude distribution function. We derive the optimal
uniform (in dB) channel magnitude codebook and combine it with a spatially
uniform channel direction codebook to arrive at a product channel quantization
codebook. We then optimize such a product structure in the asymptotic regime of
$B\rightarrow \infty$, where $B$ is the total number of quantization feedback
bits. The paper shows that for channels in the real space, the asymptotically
optimal number of direction quantization bits should be ${(M{-}1)}/{2}$ times
the number of magnitude quantization bits, where $M$ is the number of base
station antennas. We also show that the performance of the designed system
approaches the performance of the perfect channel state information system as
$2^{-\frac{2B}{M+1}}$. For complex channels, the number of magnitude and
direction quantization bits are related by a factor of $(M{-}1)$ and the system
performance scales as $2^{-\frac{B}{M}}$ as $B\rightarrow\infty$.
|