| id | title | categories | abstract |
|---|---|---|---|
1002.0485
|
Morphological study of Albanian words, and processing with NooJ
|
cs.CL
|
We are developing electronic dictionaries and transducers for the automatic
processing of the Albanian language. We analyze the words inside a linear
segment of text, and we also study the relationship between units of sense and
units of form. The composition of words takes different forms in Albanian: we
have found that morphemes are frequently concatenated, simply juxtaposed, or
contracted. The inflectional grammar of NooJ allows us to construct dictionaries
of inflected forms (declensions or conjugations). The diversity of word
structures requires tools to identify words created by simple concatenation and
to treat contractions. The morphological tools of NooJ allow us to create
grammatical tools to represent and treat these phenomena, but certain problems
exceed morphological analysis and must be represented by syntactic grammars.
|
1002.0577
|
Searching for spatio-temporal relations: a method based on the analysis of
textual corpora
|
cs.IR
|
This paper presents a work package realized for the G\'eOnto project. A new
method is proposed for enriching a geographical ontology developed beforehand.
The method relies on text analysis using lexico-syntactic patterns: starting
from the retrieval of n-ary relations, it automatically detects those involved
in a spatial and/or temporal relation in the context of journey descriptions.
|
1002.0672
|
The Gelfand widths of $\ell_p$-balls for $0<p\leq 1$
|
math.FA cs.IT math.IT
|
We provide sharp lower and upper bounds for the Gelfand widths of
$\ell_p$-balls in the $N$-dimensional $\ell_q^N$-space for $0<p\leq 1$ and $p<q
\leq 2$. Such estimates are highly relevant to the novel theory of compressive
sensing, and our proofs rely on methods from this area.
|
1002.0680
|
Some Relations between Divergence Derivatives and Estimation in Gaussian
channels
|
cs.IT math.IT
|
The minimum mean square error (MMSE) of the estimation of a non-Gaussian signal
observed at the output of an additive white Gaussian noise channel is
analyzed. First, a quite general time-continuous channel model is assumed, for
which the behavior of the non-Gaussianness of the channel's output for small
signal-to-noise ratio q is proved. Then, it is assumed that the channel
input signal is composed of a (normalized) sum of N narrowband, mutually
independent waves. It is shown that if N goes to infinity, then for any fixed q
(no matter how big) both the CMMSE and the MMSE converge to the signal energy
at a rate proportional to the inverse of N. Finally, a known result for the
MMSE in the one-dimensional case, for small q, is used to show that all of the
first four terms in the Taylor expansion of the non-Gaussianness of the
channel's output are equal to zero.
|
1002.0696
|
Detecting Danger: Applying a Novel Immunological Concept to Intrusion
Detection Systems
|
cs.AI cs.CR cs.NE
|
In recent years computer systems have become increasingly complex and
consequently the challenge of protecting these systems has become increasingly
difficult. Various techniques have been implemented to counteract the misuse of
computer systems in the form of firewalls, anti-virus software and intrusion
detection systems. The complexity of networks and dynamic nature of computer
systems leaves current methods with significant room for improvement. Computer
scientists have recently drawn inspiration from mechanisms found in biological
systems and, in the context of computer security, have focused on the human
immune system (HIS). The human immune system provides a high level of
protection from constant attacks. By examining the precise mechanisms of the
human immune system, it is hoped the paradigm will improve the performance of
real intrusion detection systems. This paper presents an introduction to recent
developments in the field of immunology. It discusses the incorporation of a
novel immunological paradigm, Danger Theory, and how this concept is inspiring
artificial immune systems (AIS). Applications within the context of computer
security are outlined drawing direct reference to the underlying principles of
Danger Theory and finally, the current state of intrusion detection systems is
discussed and improvements suggested.
|
1002.0709
|
Aggregating Algorithm competing with Banach lattices
|
cs.LG
|
The paper deals with on-line regression settings with signals belonging to a
Banach lattice. Our algorithms work in a semi-online setting where all the
inputs are known in advance and outcomes are unknown and given step by step. We
apply the Aggregating Algorithm to construct a prediction method whose
cumulative loss over all the input vectors is comparable with the cumulative
loss of any linear functional on the Banach lattice. As a by-product we get an
algorithm that takes signals from an arbitrary domain. Its cumulative loss is
comparable with the cumulative loss of any predictor function from Besov and
Triebel-Lizorkin spaces. We describe several applications of our setting.
|
1002.0722
|
Fastest Distributed Consensus on Path Network
|
cs.IT cs.DC math.CO math.IT
|
Providing an analytical solution for the problem of finding the Fastest
Distributed Consensus (FDC) is one of the challenging problems in the field of
sensor networks. Most of the methods proposed so far address the FDC averaging
problem by numerical convex optimization, and in general no closed-form
solution for finding the FDC has been offered to date, except in [3], where the
conjectured answer for the path network was proved. In this work we present an
analytical solution to the Fastest Distributed Consensus problem for the path
network using semidefinite programming, in particular by solving the slackness
conditions; the optimal weights are obtained by inductively comparing the
characteristic polynomials arising from the slackness conditions.
|
1002.0745
|
Using CODEQ to Train Feed-forward Neural Networks
|
cs.NE cs.AI
|
CODEQ is a new, population-based meta-heuristic algorithm that is a hybrid of
concepts from chaotic search, opposition-based learning, differential evolution
and quantum mechanics. CODEQ has successfully been used to solve different
types of problems (e.g. constrained, integer-programming, engineering) with
excellent results. In this paper, CODEQ is used to train feed-forward neural
networks. The proposed method is compared with particle swarm optimization and
differential evolution algorithms on three data sets with encouraging results.
|
1002.0747
|
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
|
stat.AP cs.LG stat.ML
|
We consider a group of Bayesian agents who try to estimate a state of the
world $\theta$ through interaction on a social network. Each agent $v$
initially receives a private measurement of $\theta$: a number $S_v$ picked
from a Gaussian distribution with mean $\theta$ and standard deviation one.
Then, in each discrete time iteration, each reveals its estimate of $\theta$ to
its neighbors, and, observing its neighbors' actions, updates its belief using
Bayes' Law.
This process aggregates information efficiently, in the sense that all the
agents converge to the belief that they would have, had they access to all the
private measurements. We show that this process is computationally efficient,
so that each agent's calculation can be easily carried out. We also show that
on any graph the process converges after at most $2N \cdot D$ steps, where $N$
is the number of agents and $D$ is the diameter of the network. Finally, we
show that on trees and on distance-transitive graphs the process converges
after $D$ steps, and that it preserves privacy, so that agents learn very
little about the private signal of most other agents, despite the efficient
aggregation of information. Our results extend those in an unpublished
manuscript of the first and last authors.
|
1002.0757
|
Prequential Plug-In Codes that Achieve Optimal Redundancy Rates even if
the Model is Wrong
|
cs.IT cs.LG math.IT math.ST stat.TH
|
We analyse the prequential plug-in codes relative to one-parameter
exponential families M. We show that if data are sampled i.i.d. from some
distribution outside M, then the redundancy of any plug-in prequential code
grows at a rate larger than 1/2 ln(n) in the worst case. This means that plug-in
codes, such as the Rissanen-Dawid ML code, may be inferior to other
important universal codes such as the 2-part MDL, Shtarkov and Bayes codes, for
which the redundancy is always 1/2 ln(n) + O(1). However, we also show that a
slight modification of the ML plug-in code, "almost" in the model, does achieve
the optimal redundancy even if the true distribution is outside M.
|
1002.0773
|
Approximations to the MMI criterion and their effect on lattice-based
MMI
|
cs.CL
|
Maximum mutual information (MMI) is a model selection criterion used for
hidden Markov model (HMM) parameter estimation that was developed more than
twenty years ago as a discriminative alternative to the maximum likelihood
criterion for HMM-based speech recognition. It has been shown in the speech
recognition literature that parameter estimation using the current MMI
paradigm, lattice-based MMI, consistently outperforms maximum likelihood
estimation, but this is at the expense of undesirable convergence properties.
In particular, recognition performance is sensitive to the number of times that
the iterative MMI estimation algorithm, extended Baum-Welch, is performed. In
fact, too many iterations of extended Baum-Welch will lead to degraded
performance, despite the fact that the MMI criterion improves at each
iteration. This phenomenon is at variance with the analogous behavior of
maximum likelihood estimation -- at least for the HMMs used in speech
recognition -- and it has previously been attributed to `over fitting'. In this
paper, we present an analysis of lattice-based MMI that demonstrates, first of
all, that the asymptotic behavior of lattice-based MMI is much worse than was
previously understood, i.e. it does not appear to converge at all, and, second
of all, that this is not due to `over fitting'. Instead, we demonstrate that
the `over fitting' phenomenon is the result of standard methodology that
exacerbates the poor behavior of two key approximations in the lattice-based
MMI machinery. We also demonstrate that if we modify the standard methodology
to improve the validity of these approximations, then the convergence
properties of lattice-based MMI become benign without sacrificing improvements
to recognition accuracy.
|
1002.0777
|
Polar Codes for the m-User MAC
|
cs.IT cs.DM math.CO math.IT
|
In this paper, polar codes for the $m$-user multiple access channel (MAC)
with binary inputs are constructed. It is shown that Ar{\i}kan's polarization
technique applied individually to each user transforms independent uses of an
$m$-user binary input MAC into successive uses of extremal MACs. This
transformation has a number of desirable properties: (i) the `uniform sum rate'
of the original MAC is preserved, (ii) the extremal MACs have uniform rate
regions that are not only polymatroids but matroids and thus (iii) their
uniform sum rate can be reached by each user transmitting either uncoded or
fixed bits; in this sense they are easy to communicate over. A polar code can
then be constructed with an encoding and decoding complexity of $O(n \log n)$
(where $n$ is the block length), a block error probability of
$o(\exp(-n^{1/2-\epsilon}))$, and is capable of achieving the uniform sum rate
of any binary input MAC with arbitrarily many users. An application of this
polar code construction to
communicating on the AWGN channel is also discussed.
|
1002.0852
|
High-Dimensional Matched Subspace Detection When Data are Missing
|
cs.IT math.IT
|
We consider the problem of deciding whether a highly incomplete signal lies
within a given subspace. This problem, Matched Subspace Detection, is a
classical, well-studied problem when the signal is completely observed.
High-dimensional testing problems in which it may be prohibitive or impossible to
obtain a complete observation motivate this work. The signal is represented as
a vector in R^n, but we only observe m << n of its elements. We show that
reliable detection is possible, under mild incoherence conditions, as long as m
is slightly greater than the dimension of the subspace in question.
|
1002.0904
|
On Event Structure in the Torn Dress
|
cs.CL
|
Using Pustejovsky's "The Syntax of Event Structure" and Fong's "On Mending a
Torn Dress", we give a glimpse of a Pustejovsky-style analysis of some example
sentences in Fong. We attempt to provide a semantic framework for the noun
phrases and adverbs, as appropriate, as well as lexical entries for all words
in the examples, and we critique both papers in light of our findings and
difficulties.
|
1002.0908
|
Homomorphisms between fuzzy information systems revisited
|
cs.AI
|
Recently, Wang et al. discussed the properties of fuzzy information systems
under homomorphisms in the paper [C. Wang, D. Chen, L. Zhu, Homomorphisms
between fuzzy information systems, Applied Mathematics Letters 22 (2009)
1045-1050], where homomorphisms are based upon the concepts of consistent
functions and fuzzy relation mappings. In this paper, we classify consistent
functions as predecessor-consistent and successor-consistent, and then proceed
to present more properties of consistent functions. In addition, we improve
some characterizations of fuzzy relation mappings provided by Wang et al.
|
1002.0963
|
Discovery of Convoys in Trajectory Databases
|
cs.DB cs.CG
|
As mobile devices with positioning capabilities continue to proliferate, data
management for so-called trajectory databases that capture the historical
movements of populations of moving objects becomes important. This paper
considers the querying of such databases for convoys, a convoy being a group of
objects that have traveled together for some time. More specifically, this
paper formalizes the concept of a convoy query using density-based notions, in
order to capture groups of arbitrary extents and shapes. Convoy discovery is
relevant for real-life applications in throughput planning of trucks and
carpooling of vehicles. Although there has been extensive research on
trajectories in the literature, none of it can be applied to correctly retrieve
exact convoy result sets. Motivated by this, we develop three
efficient algorithms for convoy discovery that adopt the well-known
filter-refinement framework. In the filter step, we apply line-simplification
techniques on the trajectories and establish distance bounds between the
simplified trajectories. This permits efficient convoy discovery over the
simplified trajectories without missing any actual convoys. In the refinement
step, the candidate convoys are further processed to obtain the actual convoys.
Our comprehensive empirical study offers insight into the properties of the
paper's proposals and demonstrates that the proposals are effective and
efficient on real-world trajectory data.
|
1002.0968
|
Quantale Modules and their Operators, with Applications
|
math.LO cs.IT math.IT
|
The central topic of this work is the categories of modules over unital
quantales. The main categorical properties are established and a special class
of operators, called Q-module transforms, is defined. Such operators - that
turn out to be precisely the homomorphisms between free objects in those
categories - find concrete applications in two different branches of image
processing, namely fuzzy image compression and mathematical morphology.
|
1002.0971
|
The WebStand Project
|
cs.DB
|
In this paper we present the state of advancement of the French ANR WebStand
project. The objective of this project is to construct a customizable XML-based
warehouse platform to acquire, transform, analyze, store, query and export data
from the web, in particular mailing lists, with the final intention of using
this data to perform sociological studies focused on social groups of the World
Wide Web, with a specific emphasis on the temporal aspects of this data. We are
currently using this system to analyze the standardization process of the W3C,
through its social network of standard setters.
|
1002.0982
|
A Unified Algebraic Framework for Fuzzy Image Compression and
Mathematical Morphology
|
cs.IT math.IT
|
In this paper we show how certain techniques of image processing, having
different scopes, can be joined together under a common "algebraic roof".
|
1002.1060
|
Statistics for Ranking Program Committees and Editorial Boards
|
cs.IT cs.IR math.IT physics.soc-ph
|
Ranking groups of researchers is important in several contexts and can serve
many purposes, such as the fair distribution of grants based on scientists'
publication output, the awarding of research projects, the classification of
journal editorial boards, and many other applications in a social context. In this
paper, we propose a method for measuring the performance of groups of
researchers. The proposed method is called alpha-index and it is based on two
parameters: (i) the homogeneity of the h-indexes of the researchers in the
group; and (ii) the h-group, which is an extension of the h-index for groups.
Our method integrates the concepts of homogeneity and absolute value of the
h-index into a single measure which is appropriate for the evaluation of
groups. We report on experiments that assess computer science conferences based
on the h-indexes of their program committee members. Our results are similar to
a manual classification scheme adopted by a research agency.
|
1002.1095
|
Towards a Heuristic Categorization of Prepositional Phrases in English
with WordNet
|
cs.CL
|
This document discusses an approach, and its rudimentary realization, towards
the automatic classification of PPs, a topic that has not received as much
attention in NLP as NPs and VPs. The approach is a rule-based heuristic
outlined in several levels of our research. We consider 7 semantic categories
of PPs in this document, which we are able to classify from an annotated
corpus.
|
1002.1099
|
The "Hot Potato" Case: Challenges in Multiplayer Pervasive Games Based
on Ad hoc Mobile Sensor Networks and the Experimental Evaluation of a
Prototype Game
|
cs.HC cs.DC cs.MA cs.NI cs.PF
|
In this work, we discuss multiplayer pervasive games that rely on the use of
ad hoc mobile sensor networks. The unique feature in such games is that players
interact with each other and their surrounding environment by using movement
and presence as a means of performing game-related actions, utilizing sensor
devices. We discuss the fundamental issues and challenges related to these
types of games and the scenarios associated with them. We also present and
evaluate an example of such a game, called "Hot Potato", developed using the
Sun SPOT hardware platform. We provide a set of experimental results, both to
evaluate our implementation and to identify issues that arise in pervasive
games utilizing sensor network nodes; the results show that there is great
potential in this type of game.
|
1002.1104
|
An Efficient Rigorous Approach for Identifying Statistically Significant
Frequent Itemsets
|
cs.DB cs.DS
|
As advances in technology allow for the collection, storage, and analysis of
vast amounts of data, the task of screening and assessing the significance of
discovered patterns is becoming a major challenge in data mining applications.
In this work, we address significance in the context of frequent itemset
mining. Specifically, we develop a novel methodology to identify a meaningful
support threshold s* for a dataset, such that the number of itemsets with
support at least s* represents a substantial deviation from what would be
expected in a random dataset with the same number of transactions and the same
individual item frequencies. These itemsets can then be flagged as
statistically significant with a small false discovery rate. We present
extensive experimental results to substantiate the effectiveness of our
methodology.
|
1002.1143
|
A Logical Temporal Relational Data Model
|
cs.DB
|
Time is one of the most difficult aspects to handle in real world
applications such as database systems. Relational database management systems
proposed by Codd offer very little built-in query language support for temporal
data management. The model itself incorporates neither the concept of time nor
any theory of temporal semantics. Many temporal extensions of the relational
model have been proposed and some of them are also implemented. This paper
offers a brief introduction to temporal database research. We propose a
conceptual model for handling time varying attributes in the relational
database model with minimal temporal attributes.
|
1002.1144
|
A CHAID Based Performance Prediction Model in Educational Data Mining
|
cs.LG
|
The performance in higher secondary school education in India is a turning
point in the academic lives of all students. As this academic performance is
influenced by many factors, it is essential to develop predictive data mining
model for students' performance so as to identify the slow learners and study
the influence of the dominant factors on their academic performance. In the
present investigation, a survey cum experimental methodology was adopted to
generate a database and it was constructed from a primary and a secondary
source. While the primary data was collected from the regular students, the
secondary data was gathered from the school and office of the Chief Educational
Officer (CEO). A total of 1000 datasets of the year 2006 from five different
schools in three different districts of Tamilnadu were collected. The raw data
was preprocessed in terms of filling up missing values, transforming values in
one form into another, and relevant attribute/variable selection. As a result,
we had 772 student records, which were used for CHAID prediction model
construction. A set of prediction rules was extracted from the CHAID prediction
model and the efficiency of the generated model was evaluated. The accuracy of
the present model was compared with that of other models and was found to be
satisfactory.
|
1002.1148
|
A Comparative Study of Removal Noise from Remote Sensing Image
|
cs.CV
|
This paper undertakes a study of three types of noise: Salt and Pepper Noise
(SPN), Random Variation Impulse Noise (RVIN), and Speckle Noise (SPKN). Noise
at densities ranging from 10% to 60% has been removed using five types of
filters: the Mean Filter (MF), Adaptive Wiener Filter (AWF), Gaussian Filter
(GF), Standard Median Filter (SMF) and Adaptive Median Filter (AMF). Each
filter is applied to the Saturn remote sensing image, and the results are
compared with one another. The comparative study is conducted with the help of
the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), so as to
choose the best method for removing noise from remote sensing images.
|
1002.1150
|
Finding Sequential Patterns from Large Sequence Data
|
cs.DB
|
Data mining is the task of discovering interesting patterns from large
amounts of data. There are many data mining tasks, such as classification,
clustering, association rule mining, and sequential pattern mining. Sequential
pattern mining finds sets of data items that occur together frequently in some
sequences. Sequential pattern mining, which extracts frequent subsequences from
a sequence database, has attracted a great deal of interest during the recent
data mining research because it is the basis of many applications, such as: web
user analysis, stock trend prediction, DNA sequence analysis, finding language
or linguistic patterns from natural language texts, and using the history of
symptoms to predict certain kinds of disease. Given the diversity of these
applications, it may not be possible to apply a single sequential pattern model
to all of them; each application may require a unique model and solution. A
number of research projects have been established in recent years to develop
meaningful sequential pattern models and efficient algorithms for mining these
patterns. In this paper, we provide a brief theoretical overview of three types
of sequential pattern models.
|
1002.1156
|
Dimensionality Reduction: An Empirical Study on the Usability of IFE-CF
(Independent Feature Elimination- by C-Correlation and F-Correlation)
Measures
|
cs.LG
|
The recent increase in dimensionality of data has thrown a great challenge to
the existing dimensionality reduction methods in terms of their effectiveness.
Dimensionality reduction has emerged as one of the significant preprocessing
steps in machine learning applications and has been effective in removing
inappropriate data, increasing learning accuracy, and improving
comprehensibility. Feature redundancy exercises a great influence on the
performance of the classification process. Towards better classification
performance, this paper addresses the usefulness of truncating highly
correlated and redundant attributes. Here, an effort has been made to verify
the utility of dimensionality reduction by applying the LVQ (Learning Vector
Quantization) method on two benchmark datasets, 'Pima Indian Diabetic
patients' and 'Lung cancer patients'.
|
1002.1157
|
Establishment of Relationships between Material Design and Product
Design Domains by Hybrid FEM-ANN Technique
|
cs.AI
|
In this paper, we investigate an AI-based modeling technique to optimize the
development of new alloys with the required improvements in properties and
chemical composition over existing alloys, as dictated by the functional
requirements of the product. The novelty of this work lies in using AI
predictions to establish the association between material and product domains.
Advanced computational simulation techniques such as CFD and FEA are used to
validate product dynamics against experimental investigations. Accordingly, the
current research focuses on binding relationships between the material design
and product design domains. The input to the feed-forward back-propagation
prediction network model consists of material design features, while parameters
relevant to product design strategies are furnished as target outputs. The
outcomes of the ANN show a good degree of correlation between the material and
product design domains. The study opens a new path for taking material factors
into account at the time of new product development.
|
1002.1159
|
Mining The Successful Binary Combinations: Methodology and A Simple Case
Study
|
cs.DB
|
The importance of finding the characteristics leading to either a success or
a failure is one of the driving forces of data mining. The application areas of
finding success/failure factors cover a wide variety of domains, such as credit
risk evaluation and the granting of loans, microarray analysis, health factors
and health risk factors, and parameter combinations leading to a product's
success. This paper presents a new approach for making inferences about
dichotomous data. The objective is to determine rules that lead to a certain
result. The method consists of four phases: in the first phase, the data is
processed into the binary format of a truth table; in the second phase, rules
are found by an algorithm that minimizes Boolean functions; in the third phase,
the rules are checked and filtered; and in the fourth phase, simple rules that
involve one or two features are revealed.
|
1002.1164
|
Existence and Global Logarithmic Stability of Impulsive Neural Networks
with Time Delay
|
cs.NE
|
Stability and convergence are fundamental characteristics of Hopfield-type
neural networks. Since time delay is ubiquitous in most physical and biological
systems, increasing attention is being paid to delayed neural networks. The
inclusion of time delay in a neural model is natural due to the finite
transmission time of the interactions. The stability analysis of a neural
network depends on a Lyapunov function, which must be constructed for the given
system. In this paper we attempt to establish the logarithmic stability of
impulsive delayed neural networks by constructing a suitable Lyapunov function.
|
1002.1176
|
Phase-Only Planar Antenna Array Synthesis with Fuzzy Genetic Algorithms
|
cs.NE
|
This paper describes a new method for the synthesis of planar antenna arrays
using fuzzy genetic algorithms (FGAs), optimizing the phase excitation
coefficients to best meet a desired radiation pattern. The optimizing algorithm
is obtained by adjusting the control parameters of a standard genetic algorithm
(SGA) with a fuzzy logic controller (FLC), based on the best individual fitness
and population diversity measurements (PDM). The presented optimization
algorithms were previously checked on specific mathematical test functions and
showed superior capabilities with respect to the standard version (SGA). A
planar array with rectangular cells using a probe feed is considered. The
included example demonstrates that the FGA yields better agreement between the
desired and calculated radiation patterns than the SGA.
|
1002.1184
|
Implementation of an Innovative Bio Inspired GA and PSO Algorithm for
Controller design considering Steam GT Dynamics
|
cs.NE
|
The application of bio-inspired algorithms to complicated power system
stability problems has recently attracted researchers in the field of
Artificial Intelligence. Low-frequency oscillations after a disturbance in a
power system, if not sufficiently damped, can drive the system unstable. This
paper provides a systematic procedure for damping low-frequency oscillations
based on the bio-inspired Genetic Algorithm (GA) and Particle Swarm
Optimization (PSO). The proposed controller design formulates an optimization
criterion based on enhancing the system damping ratio, which is used to compute
the optimal controller parameters for better stability. The novel and
contrasting feature of this work is the mathematical modeling and simulation of
the synchronous generator model including the steam Governor Turbine (GT)
dynamics. To show the robustness of the proposed controller, nonlinear
time-domain simulations have been carried out under various system operating
conditions. A detailed comparative study has also been done to show the
superiority of the bio-inspired algorithm based controllers over the
conventional lead-lag controller.
|
1002.1185
|
Significant Interval and Frequent Pattern Discovery in Web Log Data
|
cs.DB
|
There is a considerable body of work on sequence mining of web log data. We
use the One-Pass Frequent Episode Discovery (FED) algorithm, which takes a
different approach than the traditional Apriori class of pattern detection
algorithms. In this approach, significant intervals for each website are first
computed independently, and these intervals are used for detecting frequent
patterns/episodes; the analysis is then performed on the significant intervals
and frequent patterns. This can be used to forecast user behavior from previous
trends, for example to predict website interest for advertising purposes. In
this approach, time-series data are folded over a periodicity (day, week,
etc.), which is used to form the intervals. Significant intervals are
discovered from those time points that satisfy the criteria of minimum
confidence and maximum interval length specified by the user.
|
1002.1191
|
Unidirectional Error Correcting Codes for Memory Systems: A Comparative
Study
|
cs.IT math.IT
|
In order to achieve fault tolerance, highly reliable systems often require the
ability to detect errors as soon as they occur and to prevent the spread of
erroneous information throughout the system. Thus, codes capable of detecting
and correcting byte errors are extremely important, since many memory systems
use a b-bit-per-chip organization. Redundancy must be put on the chip to make
a fault-tolerant design possible. This paper examines several methods for
computer memory systems, and then proposes a technique for choosing a suitable
method depending on the organization of the memory system. The constructed
codes require a minimum number of check bits with respect to previously used
codes, and are optimized to fit the organization of memory systems according to
the requirements on data and byte lengths.
|
1002.1200
|
Detecting Bots Based on Keylogging Activities
|
cs.CR cs.AI cs.NE
|
A bot is a piece of software that is usually installed on an infected machine
without the user's knowledge. A bot is controlled remotely by the attacker
under a Command and Control structure. Recent statistics show that bots
represent one of the fastest growing threats to our network by performing
malicious activities such as email spamming or keylogging. However, few bot
detection techniques have been developed to date. In this paper, we investigate
a behavioural algorithm to detect a single bot that uses keylogging activity.
Our approach involves the use of function calls analysis for the detection of
the bot with a keylogging component. Correlation of the frequency of a
specified time-window is performed to enhance the detection scheme. We perform a
range of experiments with the spybot. Our results show that there is a high
correlation between some function calls executed by this bot which indicates
abnormal activity in our system.
|
1002.1285
|
The Influence of Intensity Standardization on Medical Image Registration
|
cs.CV
|
Acquisition-to-acquisition signal intensity variations (non-standardness) are
inherent in MR images. Standardization is a post processing method for
correcting inter-subject intensity variations through transforming all images
from the given image gray scale into a standard gray scale wherein similar
intensities achieve similar tissue meanings. The lack of a standard image
intensity scale in MRI leads to many difficulties in tissue characterizability,
image display, and analysis, including image segmentation. This phenomenon has
been documented well; however, effects of standardization on medical image
registration have not been studied yet. In this paper, we investigate the
influence of intensity standardization in registration tasks with systematic
and analytic evaluations involving clinical MR images. We conducted nearly
20,000 clinical MR image registration experiments and evaluated the quality of
registrations both quantitatively and qualitatively. The evaluations show that
intensity variations between images degrade the accuracy of registration
performance. The results imply that the accuracy of image registration not only
depends on spatial and geometric similarity but also on the similarity of the
intensity values for the same tissues in different images.
|
1002.1288
|
Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical
Images
|
cs.CV
|
This paper investigates, using prior shape models and the concept of ball
scale (b-scale), ways of automatically recognizing objects in 3D images without
performing elaborate searches or optimization. That is, the goal is to place
the model in a single shot close to the right pose (position, orientation, and
scale) in a given image so that the model boundaries fall in the close vicinity
of object boundaries in the image. This is achieved via the following set of
key ideas: (a) A semi-automatic way of constructing a multi-object shape model
assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship
between objects in the training images and their intensity patterns captured in
b-scale images. (c) A hierarchical mechanism of positioning the model, in a
one-shot way, in a given image from a knowledge of the learnt pose relationship
and the b-scale image of the given image to be segmented. The evaluation
results on a set of 20 routine clinical abdominal female and male CT data sets
indicate the following: (1) Incorporating a large number of objects improves
the recognition accuracy dramatically. (2) The recognition algorithm can be
thought of as a hierarchical framework in which quick placement of the model
assembly is defined as coarse recognition and delineation itself as the
finest recognition. (3) Scale yields useful information about the relationship
between the model assembly and any given image such that the recognition
results in a placement of the model close to the actual pose without doing any
elaborate searches or optimization. (4) Effective object recognition can make
delineation most accurate.
|
1002.1290
|
Bounds on Threshold of Regular Random $k$-SAT
|
cs.IT cs.CC math.CO math.IT
|
We consider the regular model of formula generation in conjunctive normal
form (CNF) introduced by Boufkhad et al. We derive an upper bound on the
satisfiability threshold and NAE-satisfiability threshold for regular random
$k$-SAT for any $k \geq 3$. We show that these bounds match the
corresponding bounds for the uniform model of formula generation.
  We derive a lower bound on the threshold by applying the second moment method
to the number of satisfying assignments. For large $k$, we note that the
obtained lower bounds on the threshold of a regular random formula converge to
the lower bound obtained for the uniform model. Thus, we answer the question
posed in \cite{AcM06} regarding the performance of the second moment method for
regular random formulas.
|
1002.1300
|
Architecture for communication with a fidelity criterion in unknown
networks
|
cs.IT math.IT
|
We prove that in order to communicate independent sources (this is the
unicast problem) between various users over an unknown medium to within various
distortion levels, it is sufficient to consider source-channel separation based
architectures: architectures which first compress the sources to within the
corresponding distortion levels followed by reliable communication over the
unknown medium. We thus reduce the problem of universal rate-distortion
communication of independent sources over a network to the problem of
universal reliable communication over networks. This is a reductionist view:
we do not solve the reliable communication problem in networks itself.
|
1002.1313
|
Half-Duplex Active Eavesdropping in Fast Fading Channels: A Block-Markov
Wyner Secrecy Encoding Scheme
|
cs.IT cs.CR math.IT
|
In this paper we study the problem of half-duplex active eavesdropping in
fast fading channels. The active eavesdropper is a more powerful adversary than
the classical eavesdropper. It can choose between two functional modes:
eavesdropping the transmission between the legitimate parties (Ex mode), and
jamming it (Jx mode) -- the active eavesdropper cannot function in full duplex
mode. We consider a conservative scenario, when the active eavesdropper can
choose its strategy based on the legitimate transmitter-receiver pair's
strategy -- and thus the transmitter and legitimate receiver have to plan for
the worst. We show that conventional physical-layer secrecy approaches perform
poorly (if at all), and we introduce a novel encoding scheme, based on very
limited and unsecured feedback -- the Block-Markov Wyner (BMW) encoding scheme
-- which outperforms any schemes currently available.
|
1002.1337
|
Capacity Scaling of Wireless Ad Hoc Networks: Shannon Meets Maxwell
|
cs.IT math.IT
|
In this paper, we characterize the information-theoretic capacity scaling of
wireless ad hoc networks with $n$ randomly distributed nodes. By using an exact
channel model from Maxwell's equations, we successfully resolve the conflict in
the literature between the linear capacity scaling by \"{O}zg\"{u}r et al. and
the degrees of freedom limit given as the ratio of the network diameter and the
wavelength $\lambda$ by Franceschetti et al. In dense networks where the
network area is fixed, the capacity scaling is given as the minimum of $n$ and
the degrees of freedom limit $\lambda^{-1}$ to within an arbitrarily small
exponent. In extended networks where the network area is linear in $n$, the
capacity scaling is given as the minimum of $n$ and the degrees of freedom
limit $\sqrt{n}\lambda^{-1}$ to within an arbitrarily small exponent. Hence, we
recover the linear capacity scaling by \"{O}zg\"{u}r et al. if
$\lambda=O(n^{-1})$ in dense networks and if $\lambda=O(n^{-1/2})$ in extended
networks. Otherwise, the capacity scaling is given as the degrees of freedom
limit characterized by Franceschetti et al. For achievability, a modified
hierarchical cooperation is proposed based on a lower bound on the capacity of
multiple-input multiple-output channel between two node clusters using our
channel model.
|
1002.1347
|
Utility and Privacy of Data Sources: Can Shannon Help Conceal and Reveal
Information?
|
cs.IT math.IT
|
The problem of private information "leakage" (inadvertently or by malicious
design) from the myriad large centralized searchable data repositories drives
the need for an analytical framework that quantifies unequivocally how safe
private data can be (privacy) while still providing useful benefit (utility) to
multiple legitimate information consumers. Rate distortion theory is shown to
be a natural choice to develop such a framework which includes the following:
modeling of data sources, developing application independent utility and
privacy metrics, quantifying utility-privacy tradeoffs irrespective of the type
of data sources or the methods of providing privacy, developing a
side-information model for dealing with questions of external knowledge, and
studying a successive disclosure problem for multiple query data sources.
|
1002.1406
|
Collecting Coded Coupons over Generations
|
cs.IT math.IT
|
To reduce computational complexity and delay in randomized network coded
content distribution (and for some other practical reasons), coding is not
performed simultaneously over all content blocks but over much smaller subsets
known as generations. The penalty is a reduction in throughput. We model coding over
generations as the coupon collector's brotherhood problem. This model enables
us to theoretically compute the expected number of coded packets needed for
successful decoding of the entire content, as well as a bound on the
probability of decoding failure, and further, to quantify the tradeoff between
computational complexity and throughput. Interestingly, with a moderate
increase in the generation size, throughput quickly approaches link capacity.
As an additional contribution, we derive new results for the generalized
collector's brotherhood problem which can also be used for further study of
many other aspects of coding over generations.
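The brotherhood model lends itself to a quick Monte Carlo check. The sketch below (an illustration of the model, not the paper's analysis) treats each received coded packet as a coupon from a uniformly random generation and counts packets until every generation of size g holds at least g of them, which is when random linear coding over a large field decodes with high probability:

```python
import random

def packets_until_decodable(num_generations, generation_size, trials=1000,
                            seed=0):
    """Monte Carlo estimate of the expected number of received coded
    packets until every generation holds at least generation_size of
    them.  This is the coupon collector's brotherhood problem: collect
    generation_size copies of each of num_generations coupon types."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [0] * num_generations
        received = 0
        while min(counts) < generation_size:
            counts[rng.randrange(num_generations)] += 1
            received += 1
        total += received
    return total / trials

# Splitting 64 blocks into more generations inflates the packet count
# (the throughput penalty the abstract refers to):
print(packets_until_decodable(1, 64))    # exactly 64: no overhead
print(packets_until_decodable(16, 4))    # noticeably more than 64
```

With a single generation the collector needs exactly the content size; the gap that opens up as generations multiply is the complexity-throughput tradeoff being quantified.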
|
1002.1407
|
Collecting Coded Coupons over Overlapping Generations
|
cs.IT math.IT
|
Coding over subsets (known as generations) rather than over all content
blocks in P2P distribution networks and other applications is necessary for a
number of practical reasons such as computational complexity. A penalty for
coding only within generations is an overall throughput reduction. It has been
previously shown that allowing contiguous generations to overlap in a
head-to-toe manner improves the throughput. We here propose and study a scheme,
referred to as the {\it random annex code}, that creates shared packets between
any two generations at random rather than only the neighboring ones. By
optimizing very few design parameters, we obtain a simple scheme that
outperforms both the non-overlapping and the head-to-toe overlapping schemes of
comparable computational complexity, both in the expected throughput and in the
rate of convergence of the probability of decoding failure to zero. We provide
a practical algorithm for accurate analysis of the expected throughput of the
random annex code for finite-length information. This algorithm enables us to
quantify the throughput vs. computational complexity tradeoff, which is
necessary for optimal selection of the scheme parameters.
|
1002.1436
|
Constant-Weight Gray Codes for Local Rank Modulation
|
cs.IT math.IT
|
We consider the local rank-modulation scheme in which a sliding window going
over a sequence of real-valued variables induces a sequence of permutations.
The local rank-modulation, as a generalization of the rank-modulation scheme,
has been recently suggested as a way of storing information in flash memory.
We study constant-weight Gray codes for the local rank-modulation scheme in
order to simulate conventional multi-level flash cells while retaining the
benefits of rank modulation. We provide necessary conditions for the existence
of cyclic and cyclic optimal Gray codes. We then specifically study codes of
weight 2 and upper bound their efficiency, thus proving that there are no such
asymptotically-optimal cyclic codes. In contrast, we study codes of weight 3
and efficiently construct codes which are asymptotically-optimal.
|
1002.1446
|
On directed information theory and Granger causality graphs
|
cs.IT math.IT
|
Directed information theory deals with communication channels with feedback.
When applied to networks, a natural extension based on causal conditioning is
needed. We show here that measures built from directed information theory in
networks can be used to assess Granger causality graphs of stochastic
processes. We show that directed information theory includes measures such as
the transfer entropy, and that it is the adequate information theoretic
framework needed for neuroscience applications, such as connectivity inference
problems.
|
1002.1447
|
PAPR reduction of space-time and space-frequency coded OFDM systems
using active constellation extension
|
cs.IT math.IT
|
Active Constellation Extension (ACE) is one of the techniques introduced for
Peak-to-Average Power Ratio (PAPR) reduction in OFDM systems. In this technique,
the constellation points are extended such that the PAPR is minimized but the
minimum distance of the constellation points does not decrease. In this paper,
an iterative ACE method is extended to spatially encoded OFDM systems. The
proposed methods are such that the PAPR is reduced simultaneously at all
antennas, while the spatial encoding relationships still hold. It will be shown
that the original ACE method can be employed before Space Time Block Coding
(STBC). But in the case of Space Frequency Block Coding (SFBC), two modified
techniques have been proposed. In the first method, the OFDM frame is separated
into several subframes and the ACE method is applied to these subframes
independently to reduce their corresponding PAPRs. Then the low PAPR subframes
are recombined based on SFBC relationships to yield the transmitted signals
from different antennas. In the second method, for each iteration, the ACE is
applied to the antenna with the maximum PAPR, and the signals of the other
antennas are generated from that of this antenna. Simulation results show that
both algorithms converge, but the second method outperforms the first one when
the number of antennas is increased.
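As background, one common way to realize iterative ACE is clipping followed by an outward-only projection per subcarrier. The sketch below is a generic single-antenna QPSK illustration of that idea, not the paper's multi-antenna STBC/SFBC methods; the clipping level and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                            # subcarriers
X0 = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ace_clip(X, clip_db=4.0, iters=10):
    """Iterative clip-and-project ACE for a QPSK OFDM symbol: clip the
    time-domain peaks, then keep, per subcarrier, only the part of the
    change that pushes the point outward within its own quadrant, so
    the minimum constellation distance never shrinks."""
    Xc = X.copy()
    for _ in range(iters):
        x = np.fft.ifft(Xc)
        a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_db / 20)
        peaks = np.abs(x) > a
        x[peaks] *= a / np.abs(x[peaks])           # soft-clip the peaks
        C = np.fft.fft(x)
        # Outward-only projection relative to the original symbol X:
        re = np.where(np.sign(C.real) == np.sign(X.real),
                      np.sign(X.real) * np.maximum(np.abs(C.real), np.abs(X.real)),
                      X.real)
        im = np.where(np.sign(C.imag) == np.sign(X.imag),
                      np.sign(X.imag) * np.maximum(np.abs(C.imag), np.abs(X.imag)),
                      X.imag)
        Xc = re + 1j * im
    return Xc

print(round(papr_db(np.fft.ifft(X0)), 2), "->",
      round(papr_db(np.fft.ifft(ace_clip(X0))), 2))
```

By construction every subcarrier keeps its quadrant and at least its original coordinate magnitudes, which is the ACE constraint that preserves the minimum distance.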
|
1002.1465
|
On Coding for Cooperative Data Exchange
|
cs.IT math.IT
|
We consider the problem of data exchange by a group of closely-located
wireless nodes. In this problem each node holds a set of packets and needs to
obtain all the packets held by other nodes. Each of the nodes can broadcast the
packets in its possession (or a combination thereof) via a noiseless broadcast
channel of capacity one packet per channel use. The goal is to minimize the
total number of transmissions needed to satisfy the demands of all the nodes,
assuming that they can cooperate with each other and are fully aware of the
packet sets available to other nodes. This problem arises in several practical
settings, such as peer-to-peer systems and wireless data broadcast. In this
paper, we establish upper and lower bounds on the optimal number of
transmissions and present an efficient algorithm with provable performance
guarantees. The effectiveness of our algorithms is established through
numerical simulations.
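A minimal three-node example (ours, not from the paper) shows why coded broadcasts help in this setting: when each node misses exactly one distinct packet, plain forwarding needs three broadcasts, but a single XOR-coded broadcast satisfies everyone.

```python
def xor(a, b):
    """Bitwise XOR of two equal-length packets (byte strings)."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
# Node 1 holds {p1, p2}, node 2 holds {p2, p3}, node 3 holds {p1, p3}:
# each node misses one distinct packet, so uncoded forwarding needs
# three broadcasts, while one coded broadcast satisfies all demands.
coded = xor(xor(p1, p2), p3)
assert xor(xor(coded, p1), p2) == p3   # node 1 cancels p1, p2
assert xor(xor(coded, p2), p3) == p1   # node 2 cancels p2, p3
assert xor(xor(coded, p1), p3) == p2   # node 3 cancels p1, p3
print("1 coded transmission replaces 3 uncoded ones")
```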
|
1002.1480
|
A Minimum Relative Entropy Controller for Undiscounted Markov Decision
Processes
|
cs.AI cs.LG cs.RO
|
Adaptive control problems are notoriously difficult to solve even in the
presence of plant-specific controllers. One way to by-pass the intractable
computation of the optimal policy is to restate the adaptive control as the
minimization of the relative entropy of a controller that ignores the true
plant dynamics from an informed controller. The solution is given by the
Bayesian control rule, a set of equations characterizing a stochastic adaptive
controller for the class of possible plant dynamics. Here, the Bayesian control
rule is applied to derive BCR-MDP, a controller to solve undiscounted Markov
decision processes with finite state and action spaces and unknown dynamics. In
particular, we derive a non-parametric conjugate prior distribution over the
policy space that encapsulates the agent's whole relevant history and we
present a Gibbs sampler to draw random policies from this distribution.
Preliminary results show that BCR-MDP successfully avoids sub-optimal limit
cycles due to its built-in mechanism to balance exploration versus
exploitation.
|
1002.1530
|
The Degrees of Freedom Region of the MIMO Cognitive Interference Channel
with No CSIT
|
cs.IT math.IT
|
This paper has been withdrawn by the author(s) for revision.
|
1002.1531
|
A Large-System Analysis of the Imperfect-CSIT Gaussian Broadcast Channel
with a DPC-based Transmission Strategy
|
cs.IT math.IT
|
The Gaussian broadcast channel (GBC) with $K$ transmit antennas and $K$
single-antenna users is considered for the case in which the channel state
information is obtained at the transmitter via a finite-rate feedback link of
capacity $r$ bits per user. The throughput (i.e., the sum-rate normalized by
$K$) of the GBC is analyzed in the limit as $K \to \infty$ with $\frac{r}{K}
\to \bar{r}$. Considering the transmission strategy of zeroforcing dirty paper
coding (ZFDPC), a closed-form expression for the asymptotic throughput is
derived. It is observed that, even under the finite-rate feedback setting,
ZFDPC achieves a significantly higher throughput than zeroforcing beamforming.
Using the asymptotic throughput expression, the problem of obtaining the number
of users to be selected in order to maximize the throughput is solved.
|
1002.1532
|
On the scaling of feedback bits to achieve the full multiplexing gain
over the Gaussian broadcast channel using DPC
|
cs.IT math.IT
|
This paper has been withdrawn by the author(s) for revision.
|
1002.1559
|
Computational limits to nonparametric estimation for ergodic processes
|
cs.IT math.IT
|
A new negative result for nonparametric estimation of binary ergodic
processes is shown. First, the problem of estimating the distribution to any
degree of accuracy is studied. It is then shown that for any countable class
of estimators there is a zero-entropy binary ergodic process that is
inconsistent with the class of estimators. Our result is different from other
negative results for universal forecasting schemes for ergodic processes.
|
1002.1584
|
Power Control for Maximum Throughput in Spectrum Underlay Cognitive
Radio Networks
|
cs.IT cs.NI math.IT math.OC
|
We investigate power allocation for users in a spectrum underlay cognitive
network. Our objective is to find a power control scheme that allocates
transmit power for both primary and secondary users so that the overall network
throughput is maximized while maintaining the quality of service (QoS) of the
primary users greater than a certain minimum limit. Since an optimum solution
to our problem is computationally intractable, as the optimization problem is
non-convex, we propose an iterative algorithm based on sequential geometric
programming, which is proved to converge to at least a local optimum solution.
We use the proposed algorithm to show how a spectrum underlay network achieves
higher throughput with secondary users operating than with primary
users operating alone. Also, we show via simulations that the loss in primary
throughput due to the admission of the secondary users is accompanied by a
reduction in the total primary transmit power.
|
1002.1727
|
An Improved DC Recovery Method from AC Coefficients of DCT-Transformed
Images
|
cs.MM cs.CV
|
Motivated by the work of Uehara et al. [1], an improved method to recover DC
coefficients from AC coefficients of DCT-transformed images is investigated in
this work, which finds applications in cryptanalysis of selective multimedia
encryption. The proposed under/over-flow rate minimization (FRM) method employs
an optimization process to get a statistically more accurate estimation of
unknown DC coefficients, thus achieving a better recovery performance. It was
shown by experimental results based on 200 test images that the proposed DC
recovery method significantly improves the quality of most recovered images in
terms of the PSNR values and several state-of-the-art objective image quality
assessment (IQA) metrics such as SSIM and MS-SSIM.
|
1002.1744
|
On some invariants in numerical semigroups and estimations of the order
bound
|
cs.IT cs.DM math.AC math.IT
|
We study suitable parameters and relations in a numerical semigroup S. When S
is the Weierstrass semigroup at a rational point P of a projective curve C, we
evaluate the Feng-Rao order bound of the associated family of Goppa codes.
Further we conjecture that the order bound is always greater than a fixed value
easily deduced from the parameters of the semigroup: we also prove this
inequality in several cases.
|
1002.1773
|
Cuspidal and Noncuspidal Robot Manipulators
|
cs.RO
|
This article synthesizes the most important results on the kinematics of
cuspidal manipulators, i.e., nonredundant manipulators that can change posture
without meeting a singularity. The characteristic surfaces, the uniqueness
domains and the regions of feasible paths in the workspace are defined. Then,
several sufficient geometric conditions for a manipulator to be noncuspidal are
enumerated and a general necessary and sufficient condition for a manipulator
to be cuspidal is provided. An explicit DH-parameter-based condition for an
orthogonal manipulator to be cuspidal is derived. The full classification of 3R
orthogonal manipulators is provided and all types of cuspidal and noncuspidal
orthogonal manipulators are enumerated. Finally, some facts about cuspidal and
noncuspidal 6R manipulators are reported.
|
1002.1774
|
Position Analysis of the RRP-3(SS) Multi-Loop Spatial Structure
|
cs.RO
|
The paper presents the position analysis of a spatial structure composed of
two platforms mutually connected by one RRP and three SS serial kinematic
chains, where R, P, and S stand for revolute, prismatic, and spherical
kinematic pair respectively. A set of three compatibility equations is laid
down that, following algebraic elimination, results in a 28th-order univariate
algebraic equation, which in turn provides the addressed problem with 28
solutions in the complex domain. Among the applications of the results
presented in this paper is the solution to the forward kinematics of the
Tricept, a well-known in-parallel-actuated spatial manipulator. Numerical
examples show adoption of the proposed method in dealing with two case studies.
|
1002.1781
|
Linear Sum Capacity for Gaussian Multiple Access Channels with Feedback
|
cs.IT math.IT
|
The capacity region of the N-sender Gaussian multiple access channel with
feedback is not known in general. This paper studies the class of
linear-feedback codes that includes (nonlinear) nonfeedback codes at one
extreme and the linear-feedback codes by Schalkwijk and Kailath, Ozarow, and
Kramer at the other extreme. The linear-feedback sum-capacity C_L(N,P) under
symmetric power constraints P is characterized, i.e., the maximum sum-rate
achieved by linear-feedback codes when each sender has the same block power constraint
P. In particular, it is shown that Kramer's code achieves this linear-feedback
sum-capacity. The proof involves the dependence balance condition introduced by
Hekstra and Willems and extended by Kramer and Gastpar, and the analysis of the
resulting nonconvex optimization problem via a Lagrange dual formulation.
Finally, an observation is presented based on the properties of the conditional
maximal correlation---an extension of the Hirschfeld--Gebelein--Renyi maximal
correlation---which reinforces the conjecture that Kramer's code achieves not
only the linear-feedback sum-capacity, but also the sum-capacity itself (the
maximum sum-rate achieved by arbitrary feedback codes).
|
1002.1782
|
Online Distributed Sensor Selection
|
cs.LG
|
A key problem in sensor networks is to decide which sensors to query when, in
order to obtain the most useful information (e.g., for performing accurate
prediction), subject to constraints (e.g., on power and bandwidth). In many
applications the utility function is not known a priori, must be learned from
data, and can even change over time. Furthermore for large sensor networks
solving a centralized optimization problem to select sensors is not feasible,
and thus we seek a fully distributed solution. In this paper, we present
Distributed Online Greedy (DOG), an efficient, distributed algorithm for
repeatedly selecting sensors online, only receiving feedback about the utility
of the selected sensors. We prove very strong theoretical no-regret guarantees
that apply whenever the (unknown) utility function satisfies a natural
diminishing returns property called submodularity. Our algorithm has extremely
low communication requirements, and scales well to large sensor deployments. We
extend DOG to allow observation-dependent sensor selection. We empirically
demonstrate the effectiveness of our algorithm on several real-world sensing
tasks.
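For context, the offline benchmark that no-regret guarantees for submodular utilities are usually measured against is the simple greedy rule, which is within a (1 - 1/e) factor of optimal. A tiny sketch with hypothetical coverage sets follows; this is the centralized greedy baseline, not the distributed DOG algorithm itself.

```python
def greedy_select(sensors, coverage, k):
    """Offline greedy for submodular (coverage-type) utilities: at each
    step add the sensor with the largest marginal gain.  For submodular
    functions this is within a (1 - 1/e) factor of optimal."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in sensors if s not in chosen),
                   key=lambda s: len(coverage[s] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical coverage sets: the regions each sensor can observe.
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 7}}
chosen, covered = greedy_select(list(coverage), coverage, k=2)
print(chosen, covered)   # s3 first (4 new regions), then s1 (3 more)
```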
|
1002.1896
|
When group level is different from the population level: an adaptive
network with the Deffuant model
|
physics.soc-ph cs.MA
|
We propose a model coupling the classical opinion dynamics of the bounded
confidence model, proposed by Deffuant et al., with an adaptive network
forming a community or group structure. At each step, an individual decides
either to change groups or to interact on its opinion with one of its internal
or external neighbours. If it looks at the group level, it leaves its group
when its opinion is further than a threshold from the group's average opinion;
in that case it joins the group whose average opinion is proportionally
closest to its own. If it decides to interact with one of its neighbours, it
moves closer in opinion to that neighbour whenever their opinions differ by
less than the threshold. From the study of this coupled model, we discover
some surprising behaviours compared to the known behaviour of the Deffuant
bounded confidence (BC) model: the coupled model reaches total consensus at a
lower threshold value than the BC model; the distribution of group sizes
changes, with some groups becoming larger while others shrink, sometimes down
to a single individual; and, from the point of view of the groups, consensus
holds over a large set of threshold values while, at the population level,
many opinion clusters remain.
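For reference, the classical (uncoupled) Deffuant BC dynamics that this model extends can be simulated in a few lines; the parameter values below are illustrative, not taken from the paper.

```python
import random

def deffuant(n=300, threshold=0.2, mu=0.5, steps=80_000, seed=1):
    """Classical Deffuant bounded-confidence dynamics: two random agents
    compromise (each moves a fraction mu toward the other) only when
    their opinions differ by less than the confidence threshold."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < threshold:
            d = mu * (x[j] - x[i])
            x[i], x[j] = x[i] + d, x[j] - d
    return x

def cluster_count(opinions, eps=0.05):
    """Count opinion clusters as runs separated by gaps larger than eps."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > eps)

# A low threshold typically leaves several opinion clusters, while a
# high threshold (e.g. 0.5) typically drives the population to consensus.
print(cluster_count(deffuant(threshold=0.2)))
print(cluster_count(deffuant(threshold=0.5)))
```

Because each update is a convex combination, opinions always stay in [0, 1]; the coupled model of the abstract adds the group-switching move on top of this pairwise rule.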
|
1002.1916
|
Assisted Common Information with Applications to Secure Two-Party
Computation
|
cs.IT cs.CR math.IT
|
Secure multi-party computation is a central problem in modern cryptography.
An important sub-class of this are problems of the following form: Alice and
Bob desire to produce sample(s) of a pair of jointly distributed random
variables. Each party must learn nothing more about the other party's output
than what its own output reveals. To aid in this, they have available a set-up
- correlated random variables whose distribution is different from the desired
distribution - as well as unlimited noiseless communication. In this paper we
present an upper bound on how efficiently a given set-up can be used to produce
samples from a desired distribution.
The key tool we develop is a generalization of the concept of common
information of two dependent random variables [Gacs-Korner, 1973]. Our
generalization - a three-dimensional region - remedies some of the limitations
of the original definition which captured only a limited form of dependence. It
also includes as a special case Wyner's common information [Wyner, 1975]. To
derive the cryptographic bounds, we rely on a monotonicity property of this
region: the region of the "views" of Alice and Bob engaged in any protocol can
only monotonically expand and not shrink. Thus, by comparing the regions for
the target random variables and the given random variables, we obtain our
upper bound.
|
1002.1919
|
Thai Rhetorical Structure Analysis
|
cs.CL
|
Rhetorical structure analysis (RSA) explores discourse relations among
elementary discourse units (EDUs) in a text. It is very useful in many text
processing tasks employing relationships among EDUs such as text understanding,
summarization, and question-answering. Thai language with its distinctive
linguistic characteristics requires a unique technique. This article proposes
an approach for Thai rhetorical structure analysis. First, EDUs are segmented
by two hidden Markov models derived from syntactic rules. Next, a rhetorical
structure tree is constructed using a clustering technique whose similarity
measure is derived from Thai semantic rules. Then, a decision tree whose
features are derived from the semantic rules is used to determine discourse
relations.
|
1002.1951
|
Image Retrieval Techniques based on Image Features, A State of Art
approach for CBIR
|
cs.MM cs.IR
|
The purpose of this paper is to describe our research on different feature
extraction and matching techniques in designing a Content Based Image Retrieval
(CBIR) system. Due to the enormous increase in image database sizes, as well as
their vast deployment in various applications, the need for CBIR development
arose. Firstly, this paper outlines a description of the primitive feature
extraction techniques: texture, colour, and shape. Once these features are
extracted and used as the basis for a similarity check between images, the
various matching techniques are discussed. Furthermore, the results of its
performance are illustrated by a detailed example.
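As a concrete illustration of the colour feature and the matching step, a global RGB histogram compared by histogram intersection is a standard CBIR primitive. This is a generic sketch of that baseline, not one of the specific techniques surveyed in the paper; the toy "images" and file names are ours.

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Primitive colour feature: joint RGB histogram with `bins` levels
    per channel, L1-normalised so differently sized images compare."""
    q = img.reshape(-1, 3).astype(int) // (256 // bins)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

# Toy database of flat-colour "images"; any HxWx3 uint8 array works.
red = np.zeros((32, 32, 3), np.uint8); red[..., 0] = 200
blue = np.zeros((32, 32, 3), np.uint8); blue[..., 2] = 200
db = {"red.png": red, "blue.png": blue}
query_h = colour_histogram(red)
scores = {name: histogram_intersection(query_h, colour_histogram(im))
          for name, im in db.items()}
print(max(scores, key=scores.get))  # the red image matches best
```

Texture and shape features slot into the same retrieval loop; only the feature vector and distance change.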
|
1002.2012
|
Implementing Genetic Algorithms on Arduino Micro-Controllers
|
cs.NE
|
Since their conception in 1975, Genetic Algorithms have been an extremely
popular approach to find exact or approximate solutions to optimization and
search problems. Over the last years there has been an enhanced interest in the
field with related techniques, such as grammatical evolution, being developed.
Unfortunately, work on developing genetic optimizations for low-end embedded
architectures hasn't embraced the same enthusiasm. This short paper tackles
that situation by demonstrating how genetic algorithms can be implemented in
Arduino Duemilanove, a 16 MHz open-source micro-controller, with limited
computation power and storage resources. As part of this short paper, the
libraries used in this implementation are released into the public domain under
a GPL license.
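The released libraries are Arduino C/C++; purely as an illustration of the loop such a library implements (selection, crossover, mutation), here is a minimal generational GA in Python solving OneMax. All function and parameter names are ours, not the library's API.

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=100,
           p_mut=0.02, seed=3):
    """Minimal generational GA over bit-string genomes: tournament
    selection (size 2), single-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                      # binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]  # single-point crossover
            nxt.append([g ^ (rng.random() < p_mut) for g in child])
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximise the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(sum(best))    # typically close to 16 after 100 generations
```

Everything here fits in a few hundred bytes of state, which is what makes the approach viable on a micro-controller with kilobytes of RAM.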
|
1002.2034
|
Dire n'est pas concevoir
|
cs.AI cs.CL
|
A conceptual model built from text is rarely an ontology. As a matter
of fact, such a conceptualization is corpus-dependent and does not offer the
main properties we expect from an ontology. Furthermore, an ontology extracted
from text in general does not match an ontology defined by an expert using a
formal language. This is not surprising, since an ontology is an
extra-linguistic conceptualization whereas knowledge extracted from text is
the concern of textual linguistics. The incompleteness of text and the use of
rhetorical figures, such as ellipsis, modify the perception we may have of
the conceptualization. Ontological knowledge, which is necessary for text
understanding, is in general not embedded in documents.
|
1002.2044
|
On the Stability of Empirical Risk Minimization in the Presence of
Multiple Risk Minimizers
|
cs.LG
|
Recently Kutin and Niyogi investigated several notions of algorithmic
stability--a property of a learning map conceptually similar to
continuity--showing that training-stability is sufficient for consistency of
Empirical Risk Minimization while distribution-free CV-stability is necessary
and sufficient for having finite VC-dimension. This paper concerns a phase
transition in the training stability of ERM, conjectured by the same authors.
Kutin and Niyogi proved that ERM on finite hypothesis spaces containing a
unique risk minimizer has training stability that scales exponentially with
sample size, and conjectured that the existence of multiple risk minimizers
prevents even super-quadratic convergence. We prove this result for the
strictly weaker notion of CV-stability, positively resolving the conjecture.
|
1002.2050
|
Intrinsic dimension estimation of data by principal component analysis
|
cs.CV cs.LG
|
Estimating intrinsic dimensionality of data is a classic problem in pattern
recognition and statistics. Principal Component Analysis (PCA) is a powerful
tool in discovering dimensionality of data sets with a linear structure; it,
however, becomes ineffective when data have a nonlinear structure. In this
paper, we propose a new PCA-based method to estimate intrinsic dimension of
data with nonlinear structures. Our method works by first finding a minimal
cover of the data set, then performing PCA locally on each subset in the cover
and finally giving the estimate by examining the data variance on all small
neighborhood regions. The proposed method utilizes the whole data set to
estimate its intrinsic dimension and is convenient for incremental learning.
In addition, our new PCA procedure can filter out noise in the data and
converges to a stable estimate as the neighborhood region size increases.
Experiments on synthetic and real-world data sets show the effectiveness of the
proposed
method.
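As a rough illustration of local PCA dimension estimation (a generic sketch assuming a 90% variance threshold and a simple neighborhood; not the authors' exact minimal-cover procedure):

```python
import numpy as np

def local_pca_dim(points, var_threshold=0.90):
    """Estimate the intrinsic dimension of a neighborhood as the number of
    principal components needed to explain `var_threshold` of its variance."""
    centered = points - points.mean(axis=0)
    # Eigenvalues of the local covariance matrix, sorted largest first.
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))[::-1]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)

# A noisy 1-D curve embedded in 3-D looks 1-dimensional on small neighborhoods.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
curve = np.c_[t, t**2, t**3] + 1e-4 * rng.standard_normal((200, 3))
print(local_pca_dim(curve[:20]))   # a small neighborhood at one end of the curve
```

On a full-dimensional Gaussian cloud the same routine reports the ambient dimension, which is the nonlinear-vs-linear contrast the abstract describes.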
|
1002.2164
|
Efficient LLR Calculation for Non-Binary Modulations over Fading
Channels
|
cs.IT math.IT
|
Log-likelihood ratio (LLR) computation for non-binary modulations over fading
channels is complicated. A measure of LLR accuracy on asymmetric binary
channels is introduced to facilitate good LLR approximations for non-binary
modulations. Considering piecewise linear LLR approximations, we prove
convexity of optimizing the coefficients according to this measure. For the
optimized approximate LLRs, we report negligible performance losses compared to
true LLRs.
|
1002.2171
|
Reverse Engineering Financial Markets with Majority and Minority Games
using Genetic Algorithms
|
q-fin.TR cs.LG cs.MA
|
Using virtual stock markets with artificial interacting software investors,
aka agent-based models (ABMs), we present a method to reverse engineer
real-world financial time series. We model financial markets as made of a large
number of interacting boundedly rational agents. By optimizing the similarity
between the actual data and that generated by the reconstructed virtual stock
market, we obtain parameters and strategies, which reveal some of the inner
workings of the target stock market. We validate our approach by out-of-sample
predictions of directional moves of the Nasdaq Composite Index.
|
1002.2182
|
Detection of Microcalcification in Mammograms Using Wavelet Transform
and Fuzzy Shell Clustering
|
cs.CV
|
Microcalcifications in mammograms are regarded as a reliable early sign of
breast cancer, and their early detection is vital to improve its prognosis.
Since they are very small and may be easily overlooked by the examining
radiologist, computer-based detection output can assist the radiologist in
improving diagnostic accuracy. In this paper, we propose an algorithm for
detecting microcalcifications in mammograms. The proposed detection algorithm
involves mammogram quality enhancement using multiresolution analysis based on
the dyadic wavelet transform, followed by microcalcification detection by
fuzzy shell clustering. It may be possible to
detect nodular components such as microcalcification accurately by introducing
shape information. The effectiveness of the proposed algorithm for
microcalcification detection is confirmed by experimental results.
|
1002.2184
|
The Fast Haar Wavelet Transform for Signal & Image Processing
|
cs.MM cs.CV
|
A method for the design of a Fast Haar wavelet for signal and image
processing is proposed. In the proposed work, the analysis and synthesis banks
of the Haar wavelet are modified using a polyphase structure. The resulting
Fast Haar wavelet satisfies the alias-free and perfect-reconstruction
conditions, and both computational time and computational complexity are
reduced in the Fast Haar wavelet transform.
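The polyphase idea behind such a design can be illustrated with a one-level Haar analysis/synthesis pair written directly on the even/odd samples (a generic sketch of the standard Haar transform, not the authors' implementation):

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar transform computed on the polyphase
    (even/odd) components of the signal, so no explicit filtering loop."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # lowpass branch
    detail = (even - odd) / np.sqrt(2)  # highpass branch
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse transform: rebuild the even/odd samples and interleave them."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 6.0])
approx, detail = haar_analysis(signal)
assert np.allclose(haar_synthesis(approx, detail), signal)  # perfect reconstruction
```

Because the Haar filters are orthonormal, the pair is alias-free and energy-preserving, which is what the perfect-reconstruction check verifies.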
|
1002.2191
|
Vision Based Game Development Using Human Computer Interaction
|
cs.HC cs.CV cs.MM
|
A Human Computer Interface (HCI) system for playing games is designed here
for more natural communication with machines. The system presented is a
vision-based system for the detection of long voluntary eye blinks and the
interpretation of blink patterns for communication between man and machine.
This system replaces the mouse with the human face as a new way to interact
with the computer. Facial features (the nose tip and eyes) are detected and
tracked in real time, and their actions are used as mouse events. The
coordinates and movement
of the nose tip in the live video feed are translated to become the coordinates
and movement of the mouse pointer on the application. The left or right eye
blinks fire left or right mouse click events. The system works with inexpensive
USB cameras and runs at a frame rate of 30 frames per second.
|
1002.2193
|
Using Statistical Moment Invariants and Entropy in Image Retrieval
|
cs.MM cs.IR
|
Although content-based image retrieval (CBIR) is not a new subject, it keeps
attracting more and more attention as the number of images grows tremendously
due to the internet, inexpensive hardware, and the automation of image
acquisition. One of the applications of CBIR is fetching images from a
database. This paper presents a new method for automatic image retrieval using
moment invariants and image entropy. Our technique can be used to find semi or
perfect matches in a query-by-example manner. Experimental results demonstrate
that the proposed technique is scalable and efficient.
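For a flavour of the two descriptors named above (a self-contained sketch using the first Hu moment invariant and histogram entropy; the paper's actual feature set and matching scheme may differ):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy of the grayscale histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def hu_phi1(img):
    """First Hu moment invariant phi1 = eta20 + eta02
    (invariant to translation and, via the m00**2 normalisation, to scale)."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    return (mu20 + mu02) / m00**2

# A bright square on a dark background; shifting it leaves phi1 unchanged.
img = np.zeros((32, 32)); img[8:24, 8:24] = 200.0
shifted = np.zeros((32, 32)); shifted[2:18, 10:26] = 200.0
assert abs(hu_phi1(img) - hu_phi1(shifted)) < 1e-12
print(shannon_entropy(img), hu_phi1(img))
```

A retrieval system could rank database images by the distance between such (entropy, phi1) signatures of the query and each stored image.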
|
1002.2195
|
Multi Product Inventory Optimization using Uniform Crossover Genetic
Algorithm
|
cs.NE
|
Inventory management is considered to be an important field in Supply Chain
Management because the cost of inventories in a supply chain accounts for about
30 percent of the value of the product. The service provided to the customer
eventually gets enhanced once the efficient and effective management of
inventory is carried out all through the supply chain. The precise estimation
of optimal inventory is essential, since a shortage of inventory leads to lost
sales, while excess inventory may result in pointless storage costs. Thus
the determination of the inventory to be held at various levels in a supply
chain becomes inevitable so as to ensure minimal cost for the supply chain. The
minimization of the total supply chain cost can only be achieved when
optimization of the base stock level is carried out at each member of the
supply chain. This paper deals with the problem of determination of base stock
levels in a ten member serial supply chain with multiple products produced by
factories using Uniform Crossover Genetic Algorithms. The complexity of the
problem increases when more distribution centers, agents, and multiple
products are involved. These considerations lead to a very complex inventory
management process, which is resolved in this work.
|
1002.2196
|
Efficient Inventory Optimization of Multi Product, Multiple Suppliers
with Lead Time using PSO
|
cs.NE
|
With information revolution, increased globalization and competition, supply
chain has become longer and more complicated than ever before. These
developments bring supply chain management to the forefront of management's
attention. Inventories are very important in a supply chain. The total
investment in inventories is enormous, and the management of inventory is
crucial to avoid shortages or delivery delays for the customers and a serious
drain on a company's financial resources. The supply chain cost increases
because of the influence of lead times for supplying the stocks as well as the
raw materials. Practically, the lead times will not be the same throughout all
periods. Maintaining abundant stocks in order to avoid the impact of high lead
times increases the holding cost. Similarly, maintaining fewer stocks based on
a ballpark lead time may lead to a shortage of stocks. The same applies to the
lead time involved in supplying raw materials. A better optimization
methodology that utilizes the Particle Swarm Optimization algorithm, one of the
best optimization algorithms, is proposed to overcome the impasse in
maintaining the optimal stock levels in each member of the supply chain. Taking
into account the stock levels thus obtained from the proposed methodology, an
appropriate stock levels to be maintained in the approaching periods that will
minimize the supply chain inventory cost can be arrived at.
|
1002.2202
|
Modeling of Human Criminal Behavior using Probabilistic Networks
|
cs.AI
|
Currently, a criminal's profile (CP) is obtained from investigators' or
forensic psychologists' interpretations, linking crime scene characteristics
and an offender's behavior to his or her characteristics and psychological
profile.
This paper seeks an efficient and systematic discovery of nonobvious and
valuable patterns between variables from a large database of solved cases via a
probabilistic network (PN) modeling approach. The PN structure can be used to
extract behavioral patterns and to gain insight into what factors influence
these behaviors. Thus, when a new case is being investigated and the profile
variables are unknown because the offender has yet to be identified, the
observed crime scene variables are used to infer the unknown variables based on
their connections in the structure and the corresponding numerical
(probabilistic) weights. The objective is to produce a more systematic and
empirical approach to profiling, and to use the resulting PN model as a
decision tool.
|
1002.2240
|
A Generalization of the Chow-Liu Algorithm and its Application to
Statistical Learning
|
cs.IT cs.AI cs.LG math.IT
|
We extend the Chow-Liu algorithm to general random variables, whereas
previous versions considered only finite cases. In particular, this paper
applies the generalization to Suzuki's learning algorithm, which generates
forests rather than trees from data based on the minimum description length
principle, balancing the fitness of the data to the forest against the
simplicity of the forest. As a result, we obtain an algorithm that applies
when both Gaussian and finite random variables are present.
|
1002.2244
|
Improved Constructions for Non-adaptive Threshold Group Testing
|
cs.DM cs.IT math.IT
|
The basic goal in combinatorial group testing is to identify a set of up to
$d$ defective items within a large population of size $n \gg d$ using a pooling
strategy. Namely, the items can be grouped together in pools, and a single
measurement would reveal whether there are one or more defectives in the pool.
The threshold model is a generalization of this idea where a measurement
returns positive if the number of defectives in the pool reaches a fixed
threshold $u > 0$, negative if this number is no more than a fixed lower
threshold $\ell < u$, and may behave arbitrarily otherwise. We study
non-adaptive threshold group testing (in a possibly noisy setting) and show
that, for this problem, $O(d^{g+2} (\log d) \log(n/d))$ measurements (where $g
:= u-\ell-1$ and $u$ is any fixed constant) suffice to identify the defectives,
and also present almost matching lower bounds. This significantly improves the
previously known (non-constructive) upper bound $O(d^{u+1} \log(n/d))$.
Moreover, we obtain a framework for explicit construction of measurement
schemes using lossless condensers. The number of measurements resulting from
this scheme is ideally bounded by $O(d^{g+3} (\log d) \log n)$. Using
state-of-the-art constructions of lossless condensers, however, we obtain
explicit testing schemes with $O(d^{g+3} (\log d) quasipoly(\log n))$ and
$O(d^{g+3+\beta} poly(\log n))$ measurements, for arbitrary constant $\beta >
0$.
|
1002.2271
|
A Coordinate System for Gaussian Networks
|
cs.IT math.IT
|
This paper studies network information theory problems where the external
noise is Gaussian distributed. In particular, the Gaussian broadcast channel
with coherent fading and the Gaussian interference channel are investigated. It
is shown that in these problems, non-Gaussian code ensembles can achieve higher
rates than the Gaussian ones. It is also shown that the strong Shamai-Laroia
conjecture on the Gaussian ISI channel does not hold. In order to analyze
non-Gaussian code ensembles over Gaussian networks, a geometrical tool using
the Hermite polynomials is proposed. This tool provides a coordinate system to
analyze a class of non-Gaussian input distributions that are invariant over
Gaussian networks.
|
1002.2283
|
Gossip Algorithms for Convex Consensus Optimization over Networks
|
math.OC cs.DC cs.SY
|
In many applications, nodes in a network desire not only a consensus, but an
optimal one. To date, a family of subgradient algorithms have been proposed to
solve this problem under general convexity assumptions. This paper shows that,
for the scalar case and by assuming a bit more, novel non-gradient-based
algorithms with appealing features can be constructed. Specifically, we develop
Pairwise Equalizing (PE) and Pairwise Bisectioning (PB), two gossip algorithms
that solve unconstrained, separable, convex consensus optimization problems
over undirected networks with time-varying topologies, where each local
function is strictly convex, continuously differentiable, and has a minimizer.
We show that PE and PB are easy to implement, bypass limitations of the
subgradient algorithms, and produce switched, nonlinear, networked dynamical
systems that admit a common Lyapunov function and asymptotically converge.
Moreover, PE generalizes the well-known Pairwise Averaging and Randomized
Gossip Algorithm, while PB relaxes a requirement of PE, allowing nodes to never
share their local functions.
|
1002.2293
|
On Linear Operator Channels over Finite Fields
|
cs.IT math.IT
|
Motivated by linear network coding, communication channels that perform
linear operations over finite fields, namely linear operator channels (LOCs),
are studied in this paper. For such a channel, the output vector is a linear
transform of the input vector, and the transformation matrix is randomly and
independently generated. The transformation matrix is assumed to remain
constant for every T input vectors and to be unknown to both the transmitter
and the receiver. There are no constraints on the distribution of the
transformation matrix or on the field size.
Specifically, the optimality of subspace coding over LOCs is investigated. A
lower bound on the maximum achievable rate of subspace coding is obtained and
it is shown to be tight in some cases. The maximum achievable rate of
constant-dimensional subspace coding is characterized, and the loss of rate
incurred by using constant-dimensional subspace coding is shown to be
insignificant.
The maximum achievable rate of channel training is close to the lower bound
on the maximum achievable rate of subspace coding. Two coding approaches based
on channel training are proposed and their performances are evaluated. Our
first approach makes use of rank-metric codes and its optimality depends on the
existence of maximum rank distance codes. Our second approach applies linear
coding and it can achieve the maximum achievable rate of channel training. Our
code designs require only the knowledge of the expectation of the rank of the
transformation matrix. The second scheme can also be realized ratelessly
without a priori knowledge of the channel statistics.
|
1002.2321
|
Exploiting Grids for applications in Condensed Matter Physics
|
cond-mat.mes-hall cond-mat.other cs.CE physics.comp-ph
|
Grids - collections of heterogeneous computers spread across the globe -
present a new paradigm for large-scale problems in a variety of fields. We
discuss two representative cases in the area of condensed matter physics,
outlining the widespread applications of Grids. Both problems involve
calculations based on the commonly used Density Functional Theory and hence
can be considered to be of general interest. We demonstrate the suitability of
Grids
for the problems discussed and provide a general algorithm to implement and
manage such large scale problems.
|
1002.2408
|
Automatic diagnosis of retinal diseases from color retinal images
|
cs.CV
|
Teleophthalmology holds a great potential to improve the quality, access, and
affordability in health care. For patients, it can reduce the need for travel
and provide the access to a superspecialist. Ophthalmology lends itself easily
to telemedicine as it is a largely image based diagnosis. The main goal of the
proposed system is to diagnose the type of disease in the retina and to
automatically detect and segment retinal diseases without human supervision or
interaction. The proposed system will diagnose the disease present in the
retina using a neural network based classifier. The extent of the disease spread
in the retina can be identified by extracting the textural features of the
retina. This system will diagnose the following type of diseases: Diabetic
Retinopathy and Drusen.
|
1002.2412
|
A Probabilistic Model For Sequence Analysis
|
q-bio.QM cs.CE
|
This paper presents a probabilistic approach for DNA sequence analysis. A DNA
sequence consists of an arrangement of the four nucleotides A, C, T and G and
different representation schemes are presented according to a probability
measure associated with them. There are different ways that probability can be
associated with the DNA sequence: one way is when the probability of an
occurrence of a letter does not depend on the previous one (termed as
unsuccessive probability) and in another scheme the probability of occurrence
of a letter depends on its previous letter (termed as successive probability).
Further, based on these probability measures graphical representations of the
schemes are also presented. Using the diagram probability measure, one can
easily calculate an associated probability measure, which can serve as a
parameter to check how close a new sequence is to already existing ones.
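The "successive probability" scheme reads as a first-order Markov model over the four nucleotides; a minimal sketch (the toy reference sequence and the normalisation are our assumptions, not the paper's):

```python
from collections import Counter, defaultdict

def markov_model(seq):
    """Estimate letter (initial) and letter-to-letter (successive)
    probabilities from a reference DNA sequence."""
    n = len(seq)
    p_init = {s: c / n for s, c in Counter(seq).items()}
    trans = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        trans[a][b] += 1
    p_trans = {a: {b: c / sum(row.values()) for b, c in row.items()}
               for a, row in trans.items()}
    return p_init, p_trans

def sequence_probability(seq, p_init, p_trans):
    """Successive-probability score: P(s1) * prod_i P(s_{i+1} | s_i);
    transitions never seen in the reference score zero."""
    p = p_init.get(seq[0], 0.0)
    for a, b in zip(seq, seq[1:]):
        p *= p_trans.get(a, {}).get(b, 0.0)
    return p

ref = "ACGTACGTAACCGGTT"            # toy reference sequence
p0, pt = markov_model(ref)
print(sequence_probability("ACGT", p0, pt))   # closeness of ACGT to the reference
```

A higher score means the query sequence follows the reference's transition structure more closely, which is the "closeness" parameter the abstract describes.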
|
1002.2418
|
Medical Image Compression using Wavelet Decomposition for Prediction
Method
|
cs.CV
|
This paper offers a simple, lossless compression method for medical images.
The method is based on wavelet decomposition of the medical images followed by
correlation analysis of the coefficients. The correlation analyses form the
basis of the prediction equation for each sub-band. Predictor variable
selection is performed through a coefficient graphic method to avoid the
multicollinearity problem and to achieve high prediction accuracy and
compression rate. The method is applied to MRI and CT images. Results show
that the proposed approach gives a high compression rate for MRI and CT images
compared with state-of-the-art methods.
|
1002.2425
|
Application of k Means Clustering algorithm for prediction of Students
Academic Performance
|
cs.LG cs.CY
|
The ability to monitor the progress of students' academic performance is a
critical issue for the academic community of higher learning. A system for
analyzing students' results based on cluster analysis, which uses standard
statistical algorithms to arrange their score data according to the level of
their performance, is described. In this paper, we also implemented the
k-means clustering algorithm for analyzing students' result data. The model
was combined with a deterministic model to analyze the students' results of a
private institution in Nigeria, which provides a good benchmark to monitor the
progression of students' academic performance in higher institutions for the
purpose of making effective decisions by the academic planners.
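A minimal k-means pass over one-dimensional score data (a plain Lloyd's-algorithm sketch with invented scores; the deterministic model mentioned above is not reproduced here):

```python
import numpy as np

def kmeans_1d(scores, k, iters=50):
    """Plain Lloyd's algorithm on 1-D data: assign each score to its
    nearest centroid, then move each centroid to its cluster mean."""
    scores = np.asarray(scores, dtype=float)
    # Spread the initial centroids across the observed score range.
    centroids = np.linspace(scores.min(), scores.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(scores[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):          # avoid emptying a cluster
                centroids[j] = scores[labels == j].mean()
    return centroids, labels

scores = [35, 38, 41, 55, 58, 60, 82, 85, 90]   # low / average / high performers
centroids, labels = kmeans_1d(scores, k=3)
print(centroids, labels)
```

The resulting cluster labels group students into performance levels, which is the arrangement step the abstract describes.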
|
1002.2439
|
Using Web Page Titles to Rediscover Lost Web Pages
|
cs.IR
|
Titles are denoted by the TITLE element within a web page. We queried each
title against the Yahoo search engine to determine the page's status (found,
not found). We conducted several tests based on elements of the title; these
tests were used to discern whether we could predict a page's status based on
its title. Our results increase our ability to determine bad titles but not
our ability to determine good titles.
|
1002.2450
|
Modeling the Probability of Failure on LDAP Binding Operations in
Iplanet Web Proxy 3.6 Server
|
cs.PF cs.DB
|
This paper is devoted to the theoretical analysis of a problem derived from
interaction between two Iplanet products: Web Proxy Server and the Directory
Server. In particular, a probabilistic and stochastic-approximation model is
proposed to minimize the occurrence of LDAP connection failures in Iplanet Web
Proxy 3.6 Server. The proposed model serves not only to provide a
parameterization of the aforementioned phenomena, but also to provide
meaningful insights illustrating and supporting these theoretical results. In
addition, we shall also address practical considerations when estimating the
parameters of the proposed model from experimental data. Finally, we shall
provide some interesting results from real-world data collected from our
customers.
|
1002.2456
|
The Permutation Groups and the Equivalence of Cyclic and Quasi-Cyclic
Codes
|
cs.IT math.GR math.IT
|
We give the class of finite groups which arise as the permutation groups of
cyclic codes over finite fields. Furthermore, we extend the results of Brand
and Huffman et al. and we find the properties of the set of permutations by
which two cyclic codes of length p^r can be equivalent. We also find the set of
permutations by which two quasi-cyclic codes can be equivalent.
|
1002.2488
|
Entanglement-assisted zero-error capacity is upper bounded by the Lovasz
theta function
|
quant-ph cs.IT math.IT
|
The zero-error capacity of a classical channel is expressed in terms of the
independence number of some graph and its tensor powers. This quantity is hard
to compute even for small graphs such as the cycle of length seven, so upper
bounds such as the Lovasz theta function play an important role in zero-error
communication. In this paper, we show that the Lovasz theta function is an
upper bound on the zero-error capacity even in the presence of entanglement
between the sender and receiver.
|
1002.2514
|
Zero-error communication via quantum channels, non-commutative graphs
and a quantum Lovasz theta function
|
quant-ph cs.IT math.IT math.OA
|
We study the quantum channel version of Shannon's zero-error capacity
problem. Motivated by recent progress on this question, we propose to consider
a certain operator space as the quantum generalisation of the adjacency matrix,
in terms of which the plain, quantum and entanglement-assisted capacity can be
formulated, and for which we show some new basic properties.
Most importantly, we define a quantum version of Lovasz' famous theta
function, as the norm-completion (or stabilisation) of a "naive" generalisation
of theta. We go on to show that this function upper bounds the number of
entanglement-assisted zero-error messages, that it is given by a semidefinite
programme, whose dual we write down explicitly, and that it is multiplicative
with respect to the natural (strong) graph product.
We explore various other properties of the new quantity, which reduces to
Lovasz' original theta in the classical case, give several applications, and
propose to study the operator spaces associated to channels as "non-commutative
graphs", using the language of Hilbert modules.
|
1002.2523
|
Feature Level Fusion of Face and Fingerprint Biometrics
|
cs.CV cs.AI
|
The aim of this paper is to study the fusion at feature extraction level for
face and fingerprint biometrics. The proposed approach is based on the fusion
of the two traits by extracting independent feature pointsets from the two
modalities, and making the two pointsets compatible for concatenation.
Moreover, to handle the curse-of-dimensionality problem, the feature
pointsets are properly reduced in dimension. Different feature reduction
techniques are implemented, prior and after the feature pointsets fusion, and
the results are duly recorded. The fused feature pointset for the database and
the query face and fingerprint images are matched using techniques based on
either the point pattern matching, or the Delaunay triangulation. Comparative
experiments are conducted on chimeric and real databases, to assess the actual
advantage of the fusion performed at the feature extraction level, in
comparison to the matching score level.
|
1002.2586
|
Blind Compressed Sensing
|
cs.IT math.IT
|
The fundamental principle underlying compressed sensing is that a signal,
which is sparse under some basis representation, can be recovered from a small
number of linear measurements. However, prior knowledge of the sparsity basis
is essential for the recovery process. This work introduces the concept of
blind compressed sensing, which avoids the need to know the sparsity basis in
both the sampling and the recovery process. We suggest three possible
constraints on the sparsity basis that can be added to the problem in order to
make its solution unique. For each constraint we prove conditions for
uniqueness, and suggest a simple method to retrieve the solution. Under the
uniqueness conditions, and as long as the signals are sparse enough, we
demonstrate through simulations that without knowing the sparsity basis our
methods can achieve results similar to those of standard compressed sensing,
which rely on prior knowledge of the sparsity basis. This offers a general
sampling and reconstruction system that fits all sparse signals, regardless of
the sparsity basis, under the conditions and constraints presented in this
work.
|
1002.2654
|
Assessment Of The Wind Farm Impact On The Radar
|
cs.CV cs.MS
|
This study shows how to evaluate the impact of wind farms on radar. It
proposes a set of tools that can be used to achieve this objective. A large
part of the report covers the study of the complex pattern propagation factor,
the critical issue of the Advanced Propagation Model (APM). Finally, the
reader can find here an implementation of this algorithm in a real scenario at
Inverness airport (United Kingdom), where the ATC radar STAR 2000, developed
by Thales Air Systems, operates in the presence of several wind farms. The
project was carried out within the department "Strategy Technology &
Innovation". The report also shows how the radar industry can deal with the
problems engendered by wind farms. Current strategies in this area are
presented, such as wind turbine production, improvements to air traffic
handling procedures, and collaboration between developers of radars and wind
turbines. A possible strategy for Thales, as a main pioneer, is given as well.
|
1002.2655
|
Multicast Outage Probability and Transmission Capacity of Multihop
Wireless Networks
|
cs.IT math.IT
|
Multicast transmission, wherein the same packet must be delivered to multiple
receivers, is an important aspect of sensor and tactical networks and has
several distinctive traits as opposed to more commonly studied unicast
networks. Specifically, these include: (i) identical packets must be delivered
successfully to several nodes, (ii) outage at any receiver requires the packet
to be retransmitted at least to that receiver, and (iii) the multicast rate is
dominated by the receiver with the weakest link in order to minimize outage and
retransmission. A first contribution of this paper is the development of a
tractable multicast model and throughput metric that captures each of these key
traits in a multicast wireless network. We utilize a Poisson cluster process
(PCP) consisting of a distinct Poisson point process (PPP) for the transmitters
and receivers, and then define the multicast transmission capacity (MTC) as the
maximum achievable multicast rate per transmission attempt times the maximum
intensity of multicast clusters under decoding delay and multicast outage
constraints. A multicast cluster is a contiguous area over which a packet is
multicasted, and to reduce outage it can be tessellated into $v$ smaller
regions of multicast. The second contribution of the paper is the analysis of
several key aspects of this model, for which we develop the following main
result. Assuming $\tau/v$ transmission attempts are allowed for each
tessellated region in a multicast cluster, we show that the MTC is $\Theta(\rho
k^{x}\log(k)v^{y})$ where $\rho$, $x$ and $y$ are functions of $\tau$ and $v$
depending on the network size and intensity, and $k$ is the average number of
the intended receivers in a cluster. We derive $\{\rho, x, y\}$ for a number of
regimes of interest, and also show that an appropriate number of
retransmissions can significantly enhance the MTC.
|
1002.2677
|
Compressed Sensing for Sparse Underwater Channel Estimation: Some
Practical Considerations
|
stat.AP cs.IT math.IT
|
We examine the use of a structured thresholding algorithm for sparse
underwater channel estimation using compressed sensing. This method shows some
improvements over standard algorithms for sparse channel estimation such as
matching pursuit, iterative detection and least squares.
|
1002.2720
|
Aiming Perfectly in the Dark - Blind Interference Alignment through
Staggered Antenna Switching
|
cs.IT math.IT
|
We propose a blind interference alignment scheme for the vector broadcast
channel where the transmitter is equipped with M antennas and there are K
receivers, each equipped with a reconfigurable antenna capable of switching
among M preset modes. Without any knowledge of the channel coefficient values
at the transmitters and with only mild assumptions on the channel coherence
structure, we show that MK/(M+K-1) degrees of freedom are achievable. The key to
the blind interference alignment scheme is the ability of the receivers to
switch between reconfigurable antenna modes to create short term channel
fluctuation patterns that are exploited by the transmitter. The achievable
scheme does not require cooperation between transmit antennas and is therefore
applicable to the MxK X network as well. Only finite symbol extensions are
used, and no channel knowledge at the receivers is required to null the
interference.
|
1002.2755
|
Multibiometrics Belief Fusion
|
cs.CV cs.AI
|
This paper proposes a multimodal biometric system through Gaussian Mixture
Model (GMM) for face and ear biometrics with belief fusion of the estimated
scores characterized by Gabor responses and the proposed fusion is accomplished
by Dempster-Shafer (DS) decision theory. Face and ear images are convolved
with Gabor wavelet filters to extract spatially enhanced Gabor facial features
and Gabor ear features. Further, GMM is applied to the high-dimensional Gabor
face and Gabor ear responses separately for quantitative measurements. The
Expectation Maximization (EM) algorithm is used to estimate the density
parameters in the GMM. This
produces two sets of feature vectors which are then fused using Dempster-Shafer
theory. Experiments are conducted on a multimodal database containing face and
ear images of 400 individuals. It is found that the use of Gabor wavelet
filters along with GMM and DS theory can provide a robust and efficient
multimodal fusion
strategy.
|