| id | title | categories | abstract |
|---|---|---|---|
1006.5090
|
PAC learnability of a concept class under non-atomic measures: a problem
by Vidyasagar
|
cs.LG
|
In response to a 1997 problem of M. Vidyasagar, we state a necessary and
sufficient condition for distribution-free PAC learnability of a concept class
$\mathscr C$ under the family of all non-atomic (diffuse) measures on the
domain $\Omega$. Clearly, finiteness of the classical Vapnik-Chervonenkis
dimension of $\mathscr C$ is a sufficient, but no longer necessary, condition.
Moreover, learnability of $\mathscr C$ under non-atomic measures does not imply
the uniform Glivenko-Cantelli property with respect to non-atomic measures. Our
learnability criterion is stated in terms of a combinatorial parameter
$\VC({\mathscr C}\,{\mathrm{mod}}\,\omega_1)$ which we call the VC dimension of
$\mathscr C$ modulo countable sets. The new parameter is obtained by
``thickening up'' single points in the definition of VC dimension to
uncountable ``clusters''. Equivalently, $\VC(\mathscr C\modd\omega_1)\leq d$ if
and only if every countable subclass of $\mathscr C$ has VC dimension $\leq d$
outside a countable subset of $\Omega$. The new parameter can also be expressed
as the classical VC dimension of $\mathscr C$ calculated on a suitable subset
of a compactification of $\Omega$. We do not make any measurability assumptions
on $\mathscr C$, assuming instead the validity of Martin's Axiom (MA).
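The classical VC dimension that the abstract contrasts with its new parameter can be computed by brute force for a finite concept class over a finite domain. The following is an illustrative sketch only (the toy threshold and interval classes are our own examples, not from the paper):

```python
from itertools import combinations

def shatters(concept_class, subset):
    """A class shatters a subset if all 2^|subset| labelings of the subset
    are realized by some concept (concepts are modeled as sets of points)."""
    realized = {frozenset(c & subset) for c in concept_class}
    return len(realized) == 2 ** len(subset)

def vc_dimension(concept_class, domain):
    """Largest d such that some d-point subset of the domain is shattered."""
    best = 0
    for d in range(1, len(domain) + 1):
        if any(shatters(concept_class, set(s))
               for s in combinations(domain, d)):
            best = d
        else:
            break
    return best

# Toy "threshold" class over a 4-point domain: all prefix concepts.
domain = [1, 2, 3, 4]
threshold_class = [frozenset(range(1, t)) for t in range(1, 6)]
print(vc_dimension(threshold_class, domain))  # thresholds shatter only singletons -> 1
```

Intervals over the same domain shatter pairs but no triple, giving VC dimension 2, which the same routine confirms.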
|
1006.5099
|
Stochastic Calculus of Wrapped Compartments
|
cs.CE cs.FL cs.LO q-bio.QM
|
The Calculus of Wrapped Compartments (CWC) is a variant of the Calculus of
Looping Sequences (CLS). While keeping the same expressiveness, CWC strongly
simplifies the development of automatic tools for the analysis of biological
systems. The main simplification consists in the removal of the sequencing
operator, thus lightening the formal treatment of the patterns to be matched in
a term (whose complexity in CLS is strongly affected by the variables matching
in the sequences).
We define a stochastic semantics for this new calculus. As applications, we
model the interaction between macrophages and apoptotic neutrophils, and a
mechanism of gene regulation in E. coli.
|
1006.5166
|
On Marton's Inner Bound for the General Broadcast Channel
|
cs.IT math.IT
|
We establish several new results on Marton's coding scheme and its
corresponding inner bound on the capacity region of the general broadcast
channel. We show that unlike the Gaussian case, Marton's coding scheme without
superposition coding is not optimal in general even for a degraded broadcast
channel with no common message. We then establish properties of Marton's inner
bound that help restrict the search space for computing the sum-rate. Next, we
show that the inner bound is optimal along certain directions. Finally, we
propose a coding scheme that may lead to a larger inner bound.
|
1006.5188
|
Feature Construction for Relational Sequence Learning
|
cs.AI cs.LG
|
We tackle the problem of multi-class relational sequence learning using
relevant patterns discovered from a set of labelled sequences. First, each
relational sequence is mapped into a feature vector using the result of a
feature construction method. Since the efficacy of sequence learning algorithms
strongly depends on the features used to represent the sequences, the second
step is to find an optimal subset of the constructed features leading to high
classification accuracy. This feature selection task is solved by a wrapper
approach that uses a stochastic local search algorithm embedding a naive Bayes
classifier. Applied to a real-world dataset, the proposed method shows an
improvement over other established methods, such as hidden Markov models,
Fisher kernels and conditional random fields for relational sequences.
|
1006.5226
|
A Framework for Interactive Work Design based on Digital Work Analysis
and Simulation
|
cs.RO
|
Due to the flexibility and adaptability of humans, manual handling work is
still very important in industry, especially for assembly and maintenance work.
Well-designed work operations can improve work efficiency and quality, enhance
safety, and lower cost. Most traditional methods for work system analysis need
a physical mock-up and are time consuming. Digital mock-up (DMU) and digital
human modeling (DHM) techniques have been developed to assist ergonomic design
and evaluation for a specific worker population (e.g., the 95th percentile);
however, the operation adaptability and adjustability for a specific individual
are not sufficiently considered. In this study, a new framework based on motion
tracking and digital human simulation techniques is proposed for motion-time
analysis of manual operations. A motion tracking system is used to track a
worker's operation while he/she is conducting a manual handling task. The
motion data is transferred to a simulation computer for real-time digital human
simulation. The data is also used for motion type recognition and analysis,
either online or offline, for objective work efficiency evaluation and
subjective work task evaluation. Methods for automatic motion recognition and
analysis are presented. Constraints and limitations of the proposed method are
discussed.
|
1006.5261
|
Data Stream Clustering: Challenges and Issues
|
cs.DB cs.LG
|
Very large databases are required to store the massive amounts of data that
are continuously inserted and queried. Analyzing huge data sets and extracting
valuable patterns from them is of interest in many applications. Two main
groups of techniques address the mining of huge databases: one treats the data
as a stream and applies stream mining techniques, while the other attempts to
solve the problem directly with efficient algorithms over the entire database.
Recently, many researchers have focused on data streams as an efficient
strategy for mining huge databases instead of mining the entire database at
once. The main difficulty in data stream mining is that evolving data is harder
to detect with these techniques, so unsupervised methods should be applied;
clustering techniques, in particular, can lead us to discover hidden
information. In this survey, we try to clarify: first, the different problem
definitions related to data stream clustering in general; second, the specific
difficulties encountered in this field of research; third, the varying
assumptions, heuristics, and intuitions forming the basis of different
approaches; and finally, how several prominent solutions tackle different
problems. Index Terms: Data Stream, Clustering, K-Means, Concept Drift
|
1006.5263
|
Design specifications of the Human Robotic interface for the biomimetic
underwater robot "yellow submarine project"
|
cs.MA cs.RO
|
This paper describes a web-based multi-agent design for a collision-avoidance,
auto-navigation biomimetic submarine for submarine hydroelectricity. The paper
describes the nature of the map-topology interface for river bodies and the
design of interactive agents for the control of the robotic submarine. The
agents are migratory on the web and are designed with an XML/HTML interface,
with both interactive capabilities and visibility on a map. The paper describes
mathematically the user interface and the map definition languages used for the
multi-agent description.
|
1006.5265
|
Control-theoretic Approach to Communication with Feedback: Fundamental
Limits and Code Design
|
cs.IT math.DS math.IT math.OC
|
Feedback communication is studied from a control-theoretic perspective,
mapping the communication problem to a control problem in which the control
signal is received through the same noisy channel as in the communication
problem, and the (nonlinear and time-varying) dynamics of the system determine
a subclass of encoders available at the transmitter. The MMSE capacity is
defined to be the supremum exponential decay rate of the mean square decoding
error. This is upper bounded by the information-theoretic feedback capacity,
which is the supremum of the achievable rates. A sufficient condition is
provided under which the upper bound holds with equality. For the special class
of stationary Gaussian channels, a simple application of Bode's integral
formula shows that the feedback capacity, recently characterized by Kim, is
equal to the maximum instability that can be tolerated by the controller under
a given power constraint. Finally, the control mapping is generalized to the
N-sender AWGN multiple access channel. It is shown that Kramer's code for this
channel, which is known to be sum rate optimal in the class of generalized
linear feedback codes, can be obtained by solving a linear quadratic Gaussian
control problem.
|
1006.5271
|
Construction of Slepian-Wolf Source Code and Broadcast Channel Code
Based on Hash Property
|
cs.IT math.IT
|
The aim of this paper is to prove theorems for the Slepian-Wolf source coding
and the broadcast channel coding (independent messages and no common message)
based on the notion of a stronger version of the hash property for an
ensemble of functions. Since an ensemble of sparse matrices has a strong hash
property, codes using sparse matrices can realize the achievable rate region.
Furthermore, extensions to the multiple source coding and multiple output
broadcast channel coding are investigated.
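As a loose illustration of the binning idea behind such coding theorems (not the paper's construction, and with a dense random matrix standing in for a sparse one), a source word is assigned to a bin by a linear hash, i.e., its syndrome:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_binning(x, m):
    """Bin a length-n binary source word via a random linear hash H x (mod 2).
    In Slepian-Wolf coding the decoder recovers x from its bin index
    (syndrome) plus side information; here we only illustrate the encoder."""
    n = len(x)
    H = rng.integers(0, 2, size=(m, n))  # dense stand-in for a sparse matrix
    return H @ x % 2

x = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(random_binning(x, m=3))  # the 3-bit bin index (syndrome) of x
```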
|
1006.5273
|
Linear Detrending Subsequence Matching in Time-Series Databases
|
cs.DB
|
Each time-series has its own linear trend, i.e., its directionality, and
removing this linear trend is crucial to obtaining more intuitive matching
results. Supporting linear detrending in subsequence matching is a challenging
problem due to the huge number of possible subsequences. In this paper we
define this problem as linear detrending subsequence matching and propose an
efficient index-based solution for it. To this end, we first present the notion
of LD-windows (LD stands for linear detrending), which are obtained as follows:
we eliminate the linear trend from a subsequence, rather than from each window
itself, and obtain the LD-windows by dividing the subsequence into windows.
Using the LD-windows we then present a lower bounding theorem for the
index-based matching solution and formally prove its correctness. Based on the
lower bounding theorem, we next propose index building and subsequence matching
algorithms for linear detrending subsequence matching. We finally show the
superiority of our index-based solution through extensive experiments.
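The detrending step itself is a least-squares line fit and subtraction. A minimal sketch (illustrative only; it does not reproduce the paper's index structure) shows that two subsequences with the same shape but different linear trends match after detrending:

```python
import numpy as np

def detrend_linear(x):
    """Remove the least-squares linear trend from a 1-D sequence."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)   # fit x ~ slope*t + intercept
    return x - (slope * t + intercept)

def ld_distance(query, subsequence):
    """Euclidean distance after detrending both sequences (same length)."""
    return float(np.linalg.norm(detrend_linear(query) - detrend_linear(subsequence)))

# Same underlying shape, different linear trends and offsets.
base = np.sin(np.linspace(0, 2 * np.pi, 50))
a = base + 0.3 * np.arange(50)        # rising trend
b = base - 0.1 * np.arange(50) + 5.0  # falling trend, offset
print(round(ld_distance(a, b), 6))    # ~0.0 after detrending
```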
|
1006.5278
|
A Survey Paper on Recommender Systems
|
cs.IR cs.LG
|
Recommender systems apply data mining techniques and prediction algorithms to
predict users' interest in information, products and services among the
tremendous amount of available items. The vast growth of information on the
Internet, as well as the number of visitors to websites, adds key challenges
for recommender systems: producing accurate recommendations, handling many
recommendations efficiently, and coping with the vast growth in the number of
participants in the system. Therefore, new recommender system technologies are
needed that can quickly produce high quality recommendations even for huge data
sets.
To address these issues we have explored several collaborative filtering
techniques, such as the item-based approach, which identifies relationships
between items and indirectly computes recommendations for users based on these
relationships. The user-based approach was also studied; it identifies
relationships between users of similar tastes and computes recommendations
based on these relationships.
In this paper, we introduce the topic of recommender systems and provide ways
to evaluate the efficiency, scalability and accuracy of recommender systems.
The paper also analyzes different algorithms of user-based and item-based
techniques for recommendation generation. Moreover, a simple experiment was
conducted using a data mining application (Weka) to apply data mining
algorithms to a recommender system. We conclude by proposing our approach,
which might enhance the quality of recommender systems.
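A minimal sketch of the item-based approach described above: cosine similarity between item rating columns, then a similarity-weighted prediction from the user's own ratings. The rating matrix is a made-up toy example, not data from the paper:

```python
import numpy as np

def item_similarity(R):
    """Cosine similarity between item columns of a user-item rating matrix R
    (zeros mean 'unrated')."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0                 # avoid division by zero
    return (R.T @ R) / np.outer(norms, norms)

def predict(R, user, item, k=2):
    """Predict R[user, item] from the user's ratings of the k most similar items."""
    S = item_similarity(R)
    rated = np.flatnonzero(R[user])         # items this user has rated
    neighbours = rated[np.argsort(-S[item, rated])][:k]
    weights = S[item, neighbours]
    if weights.sum() == 0:
        return 0.0
    return float(R[user, neighbours] @ weights / weights.sum())

# rows: users, columns: items; 0 = unrated
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 2, 4, 5]], dtype=float)
print(round(predict(R, user=0, item=2), 2))  # ~1.72: item 2 resembles item 3, rated 1
```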
|
1006.5305
|
An Agent-Based Model of Collective Emotions in Online Communities
|
physics.soc-ph cs.MA nlin.AO
|
We develop an agent-based framework to model the emergence of collective
emotions, which is applied to online communities. Agents' individual emotions
are described by their valence and arousal. Using the concept of Brownian
agents, these variables change according to stochastic dynamics, which also
consider the feedback from online communication. Agents generate emotional
information, which is stored and distributed in a field modeling the online
medium. This field affects the emotional states of agents in a non-linear
manner. We derive conditions for the emergence of collective emotions,
observable in a bimodal valence distribution. Depending on a saturated or a
superlinear feedback between the information field and the agents' arousal, we
further identify scenarios where collective emotions appear only once or in a
repeated manner. The analytical results are illustrated by agent-based computer
simulations. Our framework provides testable hypotheses about the emergence of
collective emotions, which can be verified by data from online communities.
|
1006.5354
|
Optimal Trade-Off for Succinct String Indexes
|
cs.DS cs.IT math.IT
|
Let s be a string whose symbols are solely available through access(i), a
read-only operation that probes s and returns the symbol at position i in s.
Many compressed data structures for strings, trees, and graphs, require two
kinds of queries on s: select(c, j), returning the position in s containing the
jth occurrence of c, and rank(c, p), counting how many occurrences of c are
found in the first p positions of s. We give matching upper and lower bounds
for this problem, improving the lower bounds given by Golynski [Theor. Comput.
Sci. 387 (2007)] [PhD thesis] and the upper bounds of Barbay et al. [SODA
2007]. We also present new results in another model, improving on Barbay et al.
[SODA 2007] and matching a lower bound of Golynski [SODA 2009]. The main
contribution of this paper is to introduce a general technique for proving
lower bounds on succinct data structures, that is based on the access patterns
of the supported operations, abstracting from the particular operations at
hand. Because of this, it may find application to other interesting problems on
succinct data structures.
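The rank and select queries defined above are easy to state as naive O(n) reference implementations (illustrative only; the paper's point is supporting them succinctly, in far less space and time):

```python
def rank(s, c, p):
    """Count occurrences of symbol c among the first p positions of s."""
    return s[:p].count(c)

def select(s, c, j):
    """0-based position of the j-th (1-based) occurrence of c in s, or -1."""
    count = 0
    for i, ch in enumerate(s):
        if ch == c:
            count += 1
            if count == j:
                return i
    return -1

s = "abracadabra"
print(rank(s, "a", 5))    # 'a' occurs twice in "abrac" -> 2
print(select(s, "a", 3))  # the third 'a' sits at position 5
```

Note the inverse relation between the two queries: `rank(s, c, select(s, c, j) + 1) == j` whenever the j-th occurrence exists.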
|
1006.5367
|
The Link Prediction Problem in Bipartite Networks
|
cs.LG physics.soc-ph
|
We define and study the link prediction problem in bipartite networks,
specializing general link prediction algorithms to the bipartite case. In a
graph, a link prediction function of two vertices denotes the similarity or
proximity of the vertices. Common link prediction functions for general graphs
are defined using paths of length two between two nodes. Since in a bipartite
graph adjacent vertices can only be connected by paths of odd length, these
functions do not apply to bipartite graphs. Instead, a certain class of graph
kernels (spectral transformation kernels) can be generalized to bipartite
graphs when the positive-semidefinite kernel constraint is relaxed. This
generalization is realized by the odd component of the underlying spectral
transformation. This construction leads to several new link prediction
pseudokernels such as the matrix hyperbolic sine, which we examine for rating
graphs, authorship graphs, folksonomies, document--feature networks and other
types of bipartite networks.
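A small sketch of the matrix-hyperbolic-sine idea on a toy bipartite graph (our own example; a simplified stand-in for the paper's pseudokernels). Since sinh is odd, applying it to the spectrum of the full adjacency matrix produces scores only between the two vertex sets:

```python
import numpy as np

def sinh_scores(B):
    """Link prediction scores between the two sides of a bipartite graph.

    B is the biadjacency matrix (rows: one side, columns: the other). The
    full adjacency A = [[0, B], [B^T, 0]] is symmetric; the odd function
    sinh applied to its spectrum yields only cross-side (odd-path) terms,
    returned here as the off-diagonal block."""
    m, n = B.shape
    A = np.block([[np.zeros((m, m)), B],
                  [B.T, np.zeros((n, n))]])
    w, U = np.linalg.eigh(A)
    S = (U * np.sinh(w)) @ U.T    # matrix hyperbolic sine of A
    return S[:m, m:]              # candidate link scores

# Toy bipartite graph: 3 users x 3 items
B = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]], dtype=float)
scores = sinh_scores(B)
# User 1 shares item 0 with user 0, who likes item 1; so the unobserved
# pair (user 1, item 1) should outscore (user 1, item 2).
print(scores[1, 1] > scores[1, 2])  # True
```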
|
1006.5445
|
Technical Report: MIMO B-MAC Interference Network Optimization under
Rate Constraints by Polite Water-filling and Duality
|
cs.IT math.IT
|
We take two new approaches to designing efficient algorithms for transmitter
optimization under rate constraints, to guarantee quality of service in
general MIMO interference networks, named B-MAC networks, which combine
multiple interfering broadcast channels (BC) and multiaccess channels (MAC).
Two related optimization problems, maximizing the minimum of
weighted rates under a sum-power constraint and minimizing the sum-power under
rate constraints, are considered. The first approach takes advantage of
existing efficient algorithms for SINR problems by building a bridge between
rate and SINR through the design of optimal mappings between them so that the
problems can be converted to SINR constraint problems. The approach can be
applied to other optimization problems as well. The second approach employs
polite water-filling, which is the optimal network version of water-filling
that we recently found. It replaces almost all generic optimization algorithms
currently used for networks and reduces the complexity while demonstrating
superior performance even in non-convex cases. Both centralized and distributed
algorithms are designed, and their performance is analyzed and illustrated with
numerical examples.
|
1006.5511
|
Soft Approximations and uni-int Decision Making
|
cs.AI
|
Notions of the core, support and inversion of a soft set are defined and
studied. Soft approximations are soft sets developed through core and support,
and are used for granulating the soft space. The membership structure of a soft
set is probed and many interesting properties are presented. The mathematical
apparatus developed so far in this paper yields a detailed analysis of two
works, viz. [N. Cagman, S. Enginoglu, Soft set theory and uni-int decision
making, European Jr. of Operational Research (article in press, available
online 12 May 2010)] and [N. Cagman, S. Enginoglu, Soft matrix theory and its
decision making, Computers and Mathematics with Applications 59 (2010) 3308 -
3314.]. We prove (Theorem 8.1) that the uni-int method of Cagman is equivalent
to a core-support expression which is computationally far less expensive than
uni-int. This also highlights some shortcomings in Cagman's uni-int method and
thus motivates us to improve it. We first suggest an improvement to the uni-int
method and then present a new conjecture to solve the optimum choice problem
given by Cagman and Enginoglu. Our Example 8.6 presents a case where the
optimum choice is intuitively clear, yet both uni-int methods (Cagman's and our
improved one) give the wrong answer while the new conjecture solves the problem
correctly.
|
1006.5657
|
Reasoning Support for Risk Prediction and Prevention in Independent
Living
|
cs.AI
|
In recent years there has been growing interest in solutions for the delivery
of clinical care for the elderly, due to the large increase in the aging
population. Monitoring a patient in his home environment is necessary to ensure
continuity of care in home settings, but, to be useful, this activity must not
be too invasive for patients nor a burden for caregivers. We prototyped a
system called SINDI (Secure and INDependent lIving), focused on i) collecting a
limited amount of data about the person and the environment through Wireless
Sensor Networks (WSN), and ii) inferring from these data enough information to
support caregivers in understanding patients' well-being and in predicting
possible evolutions of their health. Our hierarchical logic-based model of
health combines data from different sources (sensor data, test results,
common-sense knowledge and the patient's clinical profile) at the lower level,
and correlation rules between health conditions across upper levels. The
logical formalization and the reasoning process are based on Answer Set
Programming. The expressive power of this logic programming paradigm makes it
possible to reason about health evolution even when the available information
is incomplete and potentially incoherent, while its declarativity simplifies
rule specification by caregivers and allows automatic encoding of knowledge.
This paper describes how these issues have been targeted in the application
scenario of the SINDI system.
|
1006.5677
|
Shape of Traveling Densities with Extremum Statistical Complexity
|
nlin.PS cs.IT math.IT
|
In this paper, we analyze the behavior of statistical complexity in several
systems where two identical densities traveling in opposite directions cross
each other. Besides the crossing between two Gaussian, rectangular and
triangular densities studied in a previous work, we also investigate in detail
the crossing between two exponential and two gamma distributions. For all these
cases, the shape of the total density presenting an extremum in complexity is
found.
|
1006.5686
|
Geometric Approximations of Some Aloha-like Stability Regions
|
cs.IT math.IT
|
Most bounds on the stability region of Aloha give necessary and sufficient
conditions for the stability of an arrival rate vector under a specific
contention probability (control) vector. But such results do not yield
easy-to-check bounds on the overall Aloha stability region, because they
potentially require checking membership in an uncountably infinite number of
sets parameterized by each possible control vector. In this paper we consider
an important specific inner bound on the Aloha stability region whose
membership is likewise difficult to check. We provide ellipsoids (for which
membership is easy to check) that we conjecture are inner and outer bounds on
this set. We also study the set of controls that stabilize a fixed arrival rate
vector; this set is shown to be convex.
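Why ellipsoid membership is "easy to check": it is a single quadratic form evaluation. A generic sketch (the matrix below is an arbitrary example, not one of the paper's conjectured bounds):

```python
import numpy as np

def in_ellipsoid(x, A, c):
    """Membership of x in the ellipsoid {y : (y - c)^T A (y - c) <= 1},
    with A symmetric positive definite -- one quadratic form evaluation."""
    d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
    return float(d @ A @ d) <= 1.0

A = np.diag([4.0, 1.0])  # axis-aligned ellipse with semi-axes 1/2 and 1
print(in_ellipsoid([0.4, 0.5], A, c=[0.0, 0.0]))  # 4*0.16 + 0.25 = 0.89 -> True
print(in_ellipsoid([0.6, 0.0], A, c=[0.0, 0.0]))  # 4*0.36 = 1.44 -> False
```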
|
1006.5739
|
Polyharmonic Daubechies type wavelets in Image Processing and Astronomy,
II
|
math.NA cs.CV
|
We consider the application of the polyharmonic subdivision wavelets (of
Daubechies type) to Image Processing, in particular to Astronomical Images. The
results show an essential advantage over some standard multivariate wavelets
and a potential for better compression.
|
1006.5745
|
Evolutionary Computation Algorithms for Cryptanalysis: A Study
|
cs.CR cs.NE
|
The cryptanalysis of various cipher problems can be formulated as NP-hard
combinatorial problems, whose time and/or memory requirements grow with the
size of the problem. Techniques for solving combinatorial problems fall into
two broad groups: exact algorithms and Evolutionary Computation algorithms. An
exact algorithm guarantees that the optimal solution to the problem will be
found, but exact algorithms such as branch and bound, the simplex method and
brute force are very inefficient for combinatorial problems because of their
prohibitive complexity (time and memory requirements). Evolutionary Computation
algorithms are instead employed in an attempt to find an adequate solution to
the problem. Evolutionary Computation algorithms such as genetic algorithms,
simulated annealing and tabu search have been developed to provide a robust and
efficient methodology for cryptanalysis. These techniques aim to find a
sufficiently "good" solution efficiently by exploiting the characteristics of
the problem, instead of seeking the global optimum, and thus also provide an
attractive alternative for large-scale applications. This paper focuses on the
methodology of Evolutionary Computation algorithms.
|
1006.5762
|
Construction and Applications of CRT Sequences
|
cs.IT math.IT
|
Protocol sequences are used for channel access in the collision channel
without feedback. Each user accesses the channel according to a deterministic
zero-one pattern, called the protocol sequence. In order to minimize
fluctuation of throughput due to delay offsets, we want to construct protocol
sequences whose pairwise Hamming cross-correlation is as close to a constant as
possible. In this paper, we present a construction of protocol sequences which
is based on the bijective mapping between one-dimensional sequences and
two-dimensional arrays given by the Chinese Remainder Theorem (CRT). In the
application to the collision channel without feedback, a worst-case lower bound
on system throughput is derived.
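The CRT bijection underlying the construction is simple to state: for coprime p and q, an index in Z_pq corresponds uniquely to its residue pair in Z_p x Z_q. A small sketch of the map and its inverse (illustrative only, not the sequence construction itself):

```python
def crt_to_2d(x, p, q):
    """Map x in {0, ..., p*q - 1} to its residue pair (x mod p, x mod q).
    Bijective whenever gcd(p, q) = 1, by the Chinese Remainder Theorem."""
    return (x % p, x % q)

def crt_to_1d(a, b, p, q):
    """Inverse map: recover x from (x mod p, x mod q), gcd(p, q) = 1."""
    # pow(q, -1, p) is the modular inverse of q mod p (Python 3.8+)
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % (p * q)

p, q = 3, 5
pairs = [crt_to_2d(x, p, q) for x in range(p * q)]
assert len(set(pairs)) == p * q  # the map really is a bijection
print(crt_to_2d(7, p, q))        # (1, 2): 7 mod 3 = 1, 7 mod 5 = 2
```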
|
1006.5787
|
Fatigue evaluation in maintenance and assembly operations by digital
human simulation
|
cs.RO
|
Virtual human techniques have been used extensively in industrial design in
order to consider human factors and ergonomics as early as possible. However,
physical status (the physical capacity of the virtual human) has mostly been
treated as invariable in currently available human simulation tools, while in
reality physical capacity varies over time within an operation, and its change
also depends on the work history. Virtual Human Status is proposed in this
paper in order to assess the difficulty of manual handling operations,
especially from the physical perspective. The decrease of physical capacity
before and after an operation is used as an index to indicate the work
difficulty. The reduction of physical strength is simulated in a theoretical
approach on the basis of a fatigue model in which the fatigue resistances of
different muscle groups were regressed from 24 existing maximum endurance time
(MET) models. A framework based on digital human modeling techniques is
established to realize the comparison of physical status. An airplane assembly
case is simulated and analyzed under the framework. The endurance time and the
decrease of the joint moment strengths are simulated. Experimental results from
simulated operations under laboratory conditions confirm the feasibility of the
theoretical approach.
|
1006.5794
|
Report on the XBase Project
|
cs.DB
|
This project addressed the conceptual fundamentals of data storage,
investigating techniques for provision of highly generic storage facilities
that can be tailored to produce various individually customised storage
infrastructures, compliant to the needs of particular applications. This
requires the separation of mechanism and policy wherever possible. Aspirations
include: actors, whether users or individual processes, should be able to bind
to, update and manipulate data and programs transparently with respect to their
respective locations; programs should be expressed independently of the storage
and network technology involved in their execution; storage facilities should
be structure-neutral so that actors can impose multiple interpretations over
information, simultaneously and safely; information should not be discarded so
that arbitrary historical views are supported; raw stored information should be
open to all; where security restrictions on its use are required this should be
achieved using cryptographic techniques. The key advances of the research were:
1) the identification of a candidate set of minimal storage system building
blocks, which are sufficiently simple to avoid encapsulating policy where it
cannot be customised by applications, and composable to build highly flexible
storage architectures; and 2) insight into the nature of append-only storage
components, and the issues arising from their application to common storage
use-cases.
|
1006.5802
|
On Graphs and Codes Preserved by Edge Local Complementation
|
math.CO cs.IT math.IT
|
Orbits of graphs under local complementation (LC) and edge local
complementation (ELC) have been studied in several different contexts. For
instance, there are connections between orbits of graphs and error-correcting
codes. We define a new graph class, ELC-preserved graphs, comprising all graphs
that have an ELC orbit of size one. Through an exhaustive search, we find all
ELC-preserved graphs of order up to 12 and all ELC-preserved bipartite graphs
of order up to 16. We provide general recursive constructions for infinite
families of ELC-preserved graphs, and show that all known ELC-preserved graphs
arise from these constructions or can be obtained from Hamming codes. We also
prove that certain pairs of ELC-preserved graphs are LC equivalent. We define
ELC-preserved codes as binary linear codes corresponding to bipartite
ELC-preserved graphs, and study the parameters of such codes.
|
1006.5827
|
Approximate Robotic Mapping from sonar data by modeling Perceptions with
Antonyms
|
cs.RO cs.CL
|
This work, inspired by the idea of "Computing with Words and Perceptions"
proposed by Zadeh in 2001, focuses on how to transform measurements into
perceptions for the problem of map building by Autonomous Mobile Robots. We
propose to model the perceptions obtained from sonar-sensors as two grid maps:
one for obstacles and another for empty spaces. The rules used to build and
integrate these maps are expressed by linguistic descriptions and modeled by
fuzzy rules. The main difference between this approach and other studies
reported in the literature is that the method presented here is based on the
hypothesis that the concepts "occupied" and "empty" are antonyms, rather than
complementary (as in probabilistic approaches) or independent (as in previous
fuzzy models).
Controlled experimentation with a real robot in three representative indoor
environments has been performed and the results are presented. We offer a
qualitative and quantitative comparison of the estimated maps obtained by the
probabilistic approach, the previous fuzzy method and the new antonyms-based
fuzzy approach. It is shown that the maps obtained with the antonyms-based
approach are better defined, better capture the shape of the walls and of the
empty spaces, and contain fewer errors due to rebounds and short echoes.
Furthermore, in spite of the noise and low resolution inherent in the sonar
sensors used, the maps obtained are accurate and tolerant to imprecision.
|
1006.5829
|
Online Event Segmentation in Active Perception using Adaptive Strong
Anticipation
|
cs.RO
|
Most cognitive architectures rely on discrete representation, both in space
(e.g., objects) and in time (e.g., events). However, a robot interaction with
the world is inherently continuous, both in space and in time. The segmentation
of the stream of perceptual inputs a robot receives into discrete and
meaningful events poses a challenge in bridging the gap between internal
cognitive representations and the external world. Event Segmentation Theory,
recently proposed in the context of cognitive systems research, holds that
humans segment time into events based on matching perceptual input with
predictions. In this work we propose a framework for online event segmentation,
targeting robots endowed with active perception. Moreover, sensory processing
systems have an intrinsic latency, resulting from many factors such as sampling
rate and computational processing, which is seldom accounted for. This
framework is founded on the theory of dynamical systems synchronization, where
the system considered includes both the robot and the world coupled (strong
anticipation). An adaptation rule is used to perform simultaneous system
identification and synchronization, and anticipating synchronization is
employed to predict the short-term system evolution. This prediction allows for
an appropriate control of the robot actuation. Event boundaries are detected
once synchronization is lost (sudden increase of the prediction error). An
experimental proof of concept of the proposed framework is presented, together
with some preliminary results corroborating the approach.
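The core detection idea (flag an event boundary when prediction error suddenly grows) can be shown with a toy stand-in: here a simple velocity-extrapolation predictor replaces the paper's anticipating-synchronization observer, and the signal, thresholds and smoothing factor are all our own illustrative choices:

```python
import numpy as np

def detect_boundaries(signal, alpha=0.5, threshold=0.3):
    """Flag event boundaries where the one-step prediction error jumps.
    Predictor: exponentially smoothed velocity extrapolation (a crude
    stand-in for an anticipating-synchronization predictor)."""
    boundaries = []
    pred, vel = signal[0], 0.0
    for t in range(1, len(signal)):
        if abs(signal[t] - pred) > threshold:   # synchronization "lost"
            boundaries.append(t)
        vel = alpha * vel + (1 - alpha) * (signal[t] - signal[t - 1])
        pred = signal[t] + vel                  # anticipate the next sample
    return boundaries

# A slow ramp, then an abrupt regime change at t = 50.
sig = np.concatenate([np.linspace(0, 1, 50), np.linspace(3, 4, 50)])
print(detect_boundaries(sig))  # boundaries flagged around the jump at t = 50
```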
|
1006.5877
|
RoboCast: Asynchronous Communication in Robot Networks
|
cs.DC cs.RO
|
This paper introduces the \emph{RoboCast} communication abstraction. RoboCast
allows a swarm of non-oblivious, anonymous robots that are only endowed with
visibility sensors and do not share a common coordinate system to
asynchronously exchange information. We propose a generic framework that covers
a large class of asynchronous communication algorithms and show how our
framework can be used to implement fundamental building blocks in robot
networks, such as gathering or stigmergy. In more detail, we propose a RoboCast
algorithm that allows robots to broadcast their local coordinate systems to
each other. Our algorithm is further refined with a local collision avoidance
scheme. Then, using the RoboCast primitive, we propose algorithms for
deterministic asynchronous gathering and binary information exchange.
|
1006.5879
|
Secure Transmission with Multiple Antennas II: The MIMOME Wiretap
Channel
|
cs.IT cs.CR math.IT
|
The capacity of the Gaussian wiretap channel model is analyzed when there are
multiple antennas at the sender, intended receiver and eavesdropper. The
associated channel matrices are fixed and known to all the terminals. A
computable characterization of the secrecy capacity is established as the
saddle point solution to a minimax problem. The converse is based on a
Sato-type argument used in other broadcast settings, and the coding theorem is
based on Gaussian wiretap codebooks.
At high signal-to-noise ratio (SNR), the secrecy capacity is shown to be
attained by simultaneously diagonalizing the channel matrices via the
generalized singular value decomposition, and independently coding across the
resulting parallel channels. The associated capacity is expressed in terms of
the corresponding generalized singular values. It is shown that a semi-blind
"masked" multi-input multi-output (MIMO) transmission strategy that sends
information along directions in which there is gain to the intended receiver,
and synthetic noise along directions in which there is not, can be arbitrarily
far from capacity in this regime.
Necessary and sufficient conditions for the secrecy capacity to be zero are
provided, which simplify in the limit of many antennas when the entries of the
channel matrices are independent and identically distributed. The resulting
scaling laws establish that to prevent secure communication, the eavesdropper
needs 3 times as many antennas as the sender and intended receiver have
jointly, and that the optimum division of antennas between sender and
intended receiver is in the ratio of 2:1.
|
1006.5880
|
Testing SDRT's Right Frontier
|
cs.CL
|
The Right Frontier Constraint (RFC), as a constraint on the attachment of new
constituents to an existing discourse structure, has important implications for
the interpretation of anaphoric elements in discourse and for Machine Learning
(ML) approaches to learning discourse structures. In this paper we provide
strong empirical support for SDRT's version of RFC. The analysis of about 100
doubly annotated documents by five different naive annotators shows that SDRT's
RFC is respected about 95% of the time. The qualitative analysis of presumed
violations that we have performed shows that they are either click-errors or
structural misconceptions.
|
1006.5894
|
A possible intrinsic weakness of AES and other cryptosystems
|
cs.IT cs.CR math.IT
|
It has been suggested that the algebraic structure of AES (and other similar
block ciphers) could lead to a weakness exploitable in new attacks. In this
paper, we use the algebraic structure of AES-like ciphers to construct a cipher
embedding where the ciphers may lose their non-linearity. We show some examples
and we discuss the limitations of our approach.
|
1006.5896
|
Counterexample Guided Abstraction Refinement Algorithm for Propositional
Circumscription
|
cs.AI cs.LO
|
Circumscription is a representative example of a nonmonotonic reasoning
inference technique. Circumscription has often been studied for first order
theories, but its propositional version has also been the subject of extensive
research, having been shown equivalent to extended closed world assumption
(ECWA). Moreover, entailment in propositional circumscription is a well-known
example of a decision problem in the second level of the polynomial hierarchy.
This paper proposes a new Boolean Satisfiability (SAT)-based algorithm for
entailment in propositional circumscription that explores the relationship of
propositional circumscription to minimal models. The new algorithm is inspired
by ideas commonly used in SAT-based model checking, namely counterexample
guided abstraction refinement. In addition, the new algorithm is refined to
compute the theory closure for the generalized closed world assumption (GCWA).
Experimental results show that the new algorithm can solve problem instances
that other solutions are unable to solve.
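For intuition about the semantics the algorithm targets, the minimal models that propositional circumscription reasons over can be enumerated by brute force on toy formulas; this sketch is only a reference for the semantics (with clauses as signed-integer lists, an assumed encoding), not the SAT-based CEGAR algorithm itself:

```python
from itertools import product

def minimal_models(clauses, n_vars):
    """Enumerate models of a CNF formula that are minimal with
    respect to the set of true variables (brute force)."""
    def sat(m):
        return all(any(m[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
    models = [m for m in product([False, True], repeat=n_vars) if sat(m)]
    def leq(a, b):  # a's true atoms are a subset of b's true atoms
        return all((not x) or y for x, y in zip(a, b))
    return [m for m in models
            if not any(leq(m2, m) and m2 != m for m2 in models)]
```

For (x1 or x2), the minimal models are exactly the two in which a single variable is true; real instances, of course, require the CEGAR machinery rather than enumeration.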
|
1006.5901
|
Secret key agreement on wiretap channels with transmitter side
information
|
cs.IT math.IT
|
Secret-key agreement protocols over wiretap channels controlled by a state
parameter are studied. The entire state sequence is known (non-causally) to the
sender but not to the receiver and the eavesdropper. Upper and lower bounds on
the secret-key capacity are established both with and without public
discussion. The proposed coding scheme involves constructing a codebook to
create common reconstruction of the state sequence at the sender and the
receiver and another secret-key codebook constructed by random binning. For the
special case of Gaussian channels with no public discussion - the secret-key
generation with dirty paper problem - the gap between our bounds is at most 1/2
bit, and the bounds coincide in the high signal-to-noise ratio and high
interference-to-noise ratio regimes. In the presence of public discussion our
bounds coincide, yielding the capacity, when the channels of the receiver
and the eavesdropper satisfy an independent noise condition.
|
1006.5902
|
Performance Comparison of SVM and ANN for Handwritten Devnagari
Character Recognition
|
cs.CV
|
Classification methods based on learning from examples have been widely
applied to character recognition from the 1990s and have brought forth
significant improvements of recognition accuracies. This class of methods
includes statistical methods, artificial neural networks, support vector
machines (SVM), multiple classifier combination, etc. In this paper, we discuss
the characteristics of some classification methods that have been successfully
applied to handwritten Devnagari character recognition, and present the results
of SVM and ANN classifiers applied to handwritten Devnagari characters. After
preprocessing the character image, we extracted shadow features, chain code
histogram features, view-based features and longest-run features. These
features are then fed to a neural classifier and to a support vector machine
for classification. In the neural classifier, we explored three ways of
combining the decisions of four MLPs designed for four different features.
|
1006.5908
|
Recognition of Non-Compound Handwritten Devnagari Characters using a
Combination of MLP and Minimum Edit Distance
|
cs.CV
|
This paper deals with a new method for recognition of offline Handwritten
non-compound Devnagari Characters in two stages. It uses two well known and
established pattern recognition techniques: one using neural networks and the
other one using minimum edit distance. Each of these techniques is applied on
different sets of characters for recognition. In the first stage, two sets of
features are computed and two classifiers are applied to get higher recognition
accuracy. Two MLPs are used separately to recognize the characters. For one of
the MLPs the characters are represented by their shadow features, and for the
other the chain code histogram feature is used. The decisions of both MLPs are
combined using a weighted majority scheme. The top three results produced by
the combined MLPs in the first stage are used to calculate relative difference
values.
In the second stage, based on these relative differences character set is
divided into two. First set consists of the characters with distinct shapes and
second set consists of confused characters, which appear very similar in
shapes. Characters of distinct shapes of first set are classified using MLP.
Confused characters in second set are classified using minimum edit distance
method. Method of minimum edit distance makes use of corner detected in a
character image using modified Harris corner detection technique. Experiment on
this method is carried out on a database of 7154 samples. The overall
recognition is found to be 90.74%.
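The second-stage matcher relies on the classic minimum edit (Levenshtein) distance. A minimal sketch, where plain strings stand in for the corner-based sequences and the template codes are purely hypothetical:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two symbol sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            d[i][j] = min(d[i-1][j] + 1,       # deletion
                          d[i][j-1] + 1,       # insertion
                          d[i-1][j-1] + cost)  # substitution
    return d[m][n]

# classify a "confused" character by its nearest template
templates = {"ka": "0123", "pha": "0122"}  # hypothetical corner-code strings
def classify(codes):
    return min(templates, key=lambda t: edit_distance(codes, templates[t]))
```

In the paper the compared sequences would be built from corners detected by the modified Harris detector; here they are just stand-in strings.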
|
1006.5911
|
Application of Statistical Features in Handwritten Devnagari Character
Recognition
|
cs.CV
|
In this paper a scheme for offline Handwritten Devnagari Character
Recognition is proposed, which uses different feature extraction methodologies
and recognition algorithms. The proposed system assumes no constraints in
writing style or size. First the character is preprocessed, and features,
namely chain code histogram and moment invariant features, are extracted and
fed to multilayer perceptrons as a preliminary recognition step. Finally the
results of both MLPs are combined using a weighted majority scheme. The
proposed system is tested on a database of 1500 handwritten Devnagari
characters collected from different people. It is observed that the proposed
system achieves recognition rates of 98.03% for the top 5 results and 89.46%
for the top 1 result.
|
1006.5913
|
Multiple Classifier Combination for Off-line Handwritten Devnagari
Character Recognition
|
cs.CV
|
This work presents the application of a weighted majority voting technique
for combining the classification decisions obtained from three Multi-Layer
Perceptron (MLP) based classifiers for recognition of handwritten Devnagari
characters using three different feature sets. The features used are
intersection, shadow and chain code histogram features. Shadow features are
computed globally for the character image, while intersection features and
chain code histogram features are computed by dividing the character image
into different segments. In experiments on a dataset of 4900 samples, the
overall recognition rate observed is 92.16% considering the top five choices.
This method is compared with other recent methods for handwritten Devnagari
character recognition, and it has been observed that this approach has a
better success rate than the other methods.
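The combination step described above can be sketched as a weighted vote over per-classifier decisions; the classifier names and weights below are illustrative assumptions (e.g. validation accuracies), not values from the paper:

```python
from collections import defaultdict

def weighted_majority(decisions, weights):
    """Combine classifier decisions by weighted majority voting.
    `decisions` maps classifier name -> predicted label;
    `weights` maps classifier name -> vote weight."""
    scores = defaultdict(float)
    for clf, label in decisions.items():
        scores[label] += weights[clf]
    return max(scores, key=scores.get)
```

With three MLPs voting, a label backed by two moderately weighted classifiers outvotes a single dissenting one.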
|
1006.5920
|
A Two Stage Classification Approach for Handwritten Devanagari
Characters
|
cs.CV
|
The paper presents a two-stage classification approach for handwritten
Devanagari characters. The first stage uses structural properties such as the
shirorekha and spine of a character, and the second stage exploits some
intersection features of characters, which are fed to a feedforward neural
network. A simple histogram-based method does not work for finding the
shirorekha and vertical bar (spine) in handwritten Devanagari characters, so
we designed a differential distance based technique to find a near-straight
line for the shirorekha and spine. This approach has been tested on 50000
samples, and we obtained a success rate of 89.12%.
|
1006.5924
|
A novel approach for handwritten Devnagari character recognition
|
cs.CV
|
In this paper a method for recognition of handwritten Devanagari characters
is described. Here, the feature vector is constituted by accumulated
directional gradient changes in different segments, the number of intersection
points in the character, the type of spine present and the type of shirorekha
present in the character. One multi-layer perceptron with conjugate-gradient
training is used to classify these feature vectors. This method is applied to
a database of 1000 sample characters, and the recognition rate obtained is
88.12%.
|
1006.5927
|
Classification Of Gradient Change Features Using MLP For Handwritten
Character Recognition
|
cs.CV
|
A novel, generic scheme for off-line handwritten English alphabet character
images is proposed. The advantage of the technique is that it can be applied
in a generic manner to different applications and is expected to perform
better in uncertain and noisy environments. The recognition scheme uses a
multilayer perceptron (MLP) neural network. The system was trained and tested
on a database of 300 samples of handwritten characters. For improved
generalization and to avoid overtraining, the whole available dataset has been
divided into two subsets: a training set and a test set. We achieved 99.10%
and 94.15% correct recognition rates on the training and test sets
respectively. The proposed scheme is robust with respect to various writing
styles and sizes as well as the presence of considerable noise.
|
1006.5938
|
Secure Transmission with Artificial Noise over Fading Channels:
Achievable Rate and Optimal Power Allocation
|
cs.IT math.IT
|
We consider the problem of secure communication with multi-antenna
transmission in fading channels. The transmitter simultaneously transmits an
information bearing signal to the intended receiver and artificial noise to the
eavesdroppers. We obtain an analytical closed-form expression of an achievable
secrecy rate, and use it as the objective function to optimize the transmit
power allocation between the information signal and the artificial noise. Our
analytical and numerical results show that equal power allocation is a simple
yet near optimal strategy for the case of non-colluding eavesdroppers. When the
number of colluding eavesdroppers increases, more power should be used to
generate the artificial noise. We also provide an upper bound on the
signal-to-noise ratio (SNR) above which the achievable secrecy rate is positive
and show that the bound is tight at low SNR. Furthermore, we consider the
impact of imperfect channel state information (CSI) at both the transmitter and
the receiver and find that it is wise to create more artificial noise to
confuse the eavesdroppers than to increase the signal strength for the intended
receiver if the CSI is not accurately obtained.
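The power-allocation trade-off described above can be illustrated with a simplified scalar model: a fraction phi of the power carries information, the rest is artificial noise assumed to be nulled at the intended receiver but jamming the eavesdropper. This is only a sketch under those assumptions, not the paper's closed-form achievable rate:

```python
import numpy as np

def secrecy_rate(phi, P, gb, ge):
    """Achievable secrecy rate [C_b - C_e]^+ in a toy scalar model:
    gb, ge are channel gains to receiver and eavesdropper."""
    snr_b = phi * P * gb
    sinr_e = phi * P * ge / (1.0 + (1.0 - phi) * P * ge)  # jammed eavesdropper
    return max(np.log2(1 + snr_b) - np.log2(1 + sinr_e), 0.0)

# sweep the power split to find the best allocation numerically
phis = np.linspace(0.01, 0.99, 99)
rates = [secrecy_rate(phi, P=10.0, gb=1.0, ge=1.0) for phi in phis]
best_phi = phis[int(np.argmax(rates))]
```

Sweeping phi shows an interior optimum, consistent with the abstract's observation that equal power allocation is simple yet near optimal in the non-colluding case.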
|
1006.5942
|
FPGA Based Assembling of Facial Components for Human Face Construction
|
cs.CV
|
This paper aims at VLSI realization for generation of a new face from textual
description. The FASY (FAce SYnthesis) System is a Face Database Retrieval and
new Face generation System that is under development. One of its main features
is the generation of the requested face when it is not found in the existing
database. The new face generation system works in three steps - searching
phase, assembling phase and tuning phase. In this paper the tuning phase using
hardware description language and its implementation in a Field Programmable
Gate Array (FPGA) device is presented.
|
1006.5945
|
Fuzzy Classification of Facial Component Parameters
|
cs.CV
|
This paper presents a novel type-2 fuzzy logic system to define the shape of
a facial component with a crisp output. This work is part of our main
research effort to design a system (called FASY) which offers a novel face
construction approach based on the textual description and also extracts and
analyzes the facial components from a face image by an efficient technique. The
fuzzy model, designed in this paper, takes crisp values of the width and height
of a facial component and produces a crisp value of shape for different facial
components. This method is designed using Matlab 6.5 and Visual Basic 6.0 and
tested with the facial components extracted from 200 male and female face
images of different ages from different face databases.
|
1007.0085
|
Survey of Nearest Neighbor Techniques
|
cs.CV
|
The nearest neighbor (NN) technique is very simple, highly efficient and
effective in fields such as pattern recognition, text categorization and
object recognition. Its simplicity is its main advantage, but its
disadvantages cannot be ignored: the memory requirement and computational
complexity also matter. Many techniques have been developed to overcome these
limitations. NN techniques are broadly classified into structure-less and
structure-based techniques. In this paper, we present a survey of such
techniques. Weighted kNN, model-based kNN, condensed NN, reduced NN and
generalized NN are structure-less techniques, whereas the k-d tree, ball tree,
principal axis tree, nearest feature line, tunable NN and orthogonal search
tree are structure-based algorithms developed on the basis of kNN.
Structure-less methods overcome the memory limitation, while structure-based
techniques reduce the computational complexity.
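Of the structure-less variants surveyed, weighted kNN is the simplest to sketch: neighbours vote with weight inversely proportional to their distance, so the closest points dominate. A minimal version (ties and zero distances handled naively):

```python
import numpy as np

def weighted_knn(train_X, train_y, query, k=3):
    """Classify `query` by inverse-distance-weighted vote of its
    k nearest neighbours in `train_X`."""
    d = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        w = 1.0 / (d[i] + 1e-9)  # inverse-distance weight
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + w
    return max(votes, key=votes.get)
```

The structure-based methods in the list (k-d tree, ball tree, etc.) change only how the k nearest neighbours are found, not this voting step.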
|
1007.0097
|
On Pairs of $f$-divergences and their Joint Range
|
cs.IT math.IT math.ST stat.TH
|
We compare two f-divergences and prove that their joint range is the convex
hull of the joint range for distributions supported on only two points. Some
applications of this result are given.
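The result says the joint range of two f-divergences is determined by distributions on two points, so it can be traced numerically. A sketch computing (total variation, KL) pairs over binary distributions; the choice of these two divergences and the grid resolution are illustrative assumptions:

```python
import math

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_x q(x) * f(p(x)/q(x)) for finite distributions
    given as probability lists (terms with q(x) = 0 are skipped)."""
    return sum(qx * f(px / qx) for px, qx in zip(p, q) if qx > 0)

tv = lambda t: 0.5 * abs(t - 1)                    # total variation
kl = lambda t: t * math.log(t) if t > 0 else 0.0   # Kullback-Leibler

# trace the joint (TV, KL) range over two-point distributions
pairs = []
for i in range(1, 100):
    for j in range(1, 100):
        p = [i / 100, 1 - i / 100]
        q = [j / 100, 1 - j / 100]
        pairs.append((f_divergence(p, q, tv), f_divergence(p, q, kl)))
```

Taking the convex hull of `pairs` would approximate the full joint range claimed by the theorem.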
|
1007.0199
|
Optimal execution strategy in the presence of permanent price impact and
fixed transaction cost
|
q-fin.TR cs.SY math.OC math.PR
|
We study a single risky financial asset model subject to price impact and
transaction cost over an infinite horizon. An investor needs to execute a long
position in the asset, affecting the price of the asset and possibly incurring
a fixed transaction cost. The objective is to maximize the discounted revenue
obtained by this transaction. This problem is formulated first as an impulse
control problem and we characterize the value function using the viscosity
solutions framework. We also analyze the case where there is no transaction
cost and how this formulation relates with a singular control problem. A
viscosity solution characterization is provided in this case as well. We also
establish a connection between both formulations with zero fixed transaction
cost. Numerical examples with different types of price impact conclude the
discussion.
|
1007.0210
|
Uncertainty of visual measurement and efficient allocation of sensory
resources
|
q-bio.NC cs.CV cs.IT math.IT
|
We review the reasoning underlying two approaches to combination of sensory
uncertainties. The first approach is noncommittal, making no assumptions about
the properties of uncertainty or the parameters of stimulation. Then we explain the
relationship between this approach and the one commonly used in modeling
"higher level" aspects of sensory systems, such as in visual cue integration,
where assumptions are made about properties of stimulation. The two approaches
follow similar logic, except in one case maximal uncertainty is minimized, and
in the other minimal certainty is maximized. Then we demonstrate how optimal
solutions are found to the problem of resource allocation under uncertainty.
|
1007.0267
|
Interference Channel with an Out-of-Band Relay
|
cs.IT math.IT
|
A Gaussian interference channel (IC) with a relay is considered. The relay is
assumed to operate over an orthogonal band with respect to the underlying IC,
and the overall system is referred to as IC with an out-of-band relay (IC-OBR).
The system can be seen as operating over two parallel interference-limited
channels: The first is a standard Gaussian IC and the second is a Gaussian
relay channel characterized by two sources and destinations communicating
through the relay without direct links. We refer to the second parallel channel
as OBR Channel (OBRC). The main aim of this work is to identify conditions
under which optimal operation, in terms of the capacity region of the IC-OBR,
entails either signal relaying and/or interference forwarding by the relay,
with either a separable or non-separable use of the two parallel channels, IC
and OBRC. Here "separable" refers to transmission of independent information
over the two constituent channels. For a basic model in which the OBRC consists
of four orthogonal channels from sources to relay and from relay to
destinations (IC-OBR Type-I), a condition is identified under which signal
relaying and separable operation is optimal. When this condition is not
satisfied, various scenarios are identified in which interference forwarding
and non-separable operation are necessary to achieve optimal performance. In
these scenarios, the system exploits the "excess capacity" on the OBRC via
interference forwarding to drive the IC-OBR system in specific interference
regimes (strong or mixed). The analysis is then turned to a more complex
IC-OBR, in which the OBRC consists of only two orthogonal channels, one from
sources to relay and one from relay to destinations (IC-OBR Type-II). For this
channel, some capacity results are derived that parallel the conclusions for
IC-OBR Type-I.
|
1007.0273
|
New Common Proper-Motion Pairs From the PPMX Catalog
|
astro-ph.SR cs.DB
|
We use data mining techniques to find 82 previously unreported common
proper-motion pairs from the PPM-Extended catalogue. Special-purpose software
automating the different phases of the process has been developed. The software
simplifies the detection of the new pairs by integrating a set of basic
operations over catalogues. The operations can be combined by the user in
scripts representing different filtering criteria. This procedure facilitates
testing the software and employing the same scripts for different projects.
|
1007.0296
|
A Bayesian View of the Poisson-Dirichlet Process
|
math.ST cs.LG math.PR stat.TH
|
The two parameter Poisson-Dirichlet Process (PDP), a generalisation of the
Dirichlet Process, is increasingly being used for probabilistic modelling in
discrete areas such as language technology, bioinformatics, and image analysis.
There is a rich literature about the PDP and its derivative distributions such
as the Chinese Restaurant Process (CRP). This article reviews some of the basic
theory and then the major results needed for Bayesian modelling of discrete
problems including details of priors, posteriors and computation.
The PDP allows one to build distributions over countable partitions. The PDP
has two other remarkable properties: first it is partially conjugate to itself,
which allows one to build hierarchies of PDPs, and second, using a marginalised
relative, the CRP, one gets fragmentation and clustering properties that let
one layer partitions to build trees. This article presents the basic theory for
understanding the notion of partitions and distributions over them, the PDP and
the CRP, and the important properties of conjugacy, fragmentation and
clustering, as well as some key related properties such as consistency and
convergence. This article also presents a Bayesian interpretation of the
Poisson-Dirichlet process based on an improper and infinite dimensional
Dirichlet distribution. This means we can understand the process as just
another Dirichlet and thus all its sampling properties emerge naturally.
The theory of PDPs is usually presented for continuous distributions (more
generally referred to as non-atomic distributions), however, when applied to
discrete distributions its remarkable conjugacy property emerges. This context
and basic results are also presented, as well as techniques for computing the
second order Stirling numbers that occur in the posteriors for discrete
distributions.
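The CRP seating scheme described in the review is easy to simulate: customer i joins an occupied table k with probability proportional to (n_k - discount) and opens a new table with probability proportional to (concentration + discount * number of tables). A minimal sampler, with parameter values chosen purely for illustration:

```python
import random

def crp_sample(n, discount=0.5, concentration=1.0, seed=0):
    """Sample table sizes (a partition of n items) from the
    two-parameter (Pitman-Yor / PDP) Chinese Restaurant Process."""
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers at table k
    for _ in range(n):
        weights = [nk - discount for nk in tables]
        weights.append(concentration + discount * len(tables))  # new table
        r = rng.uniform(0, sum(weights))
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
    return tables
```

The returned table sizes form a random partition of n; with discount = 0 this reduces to the one-parameter Dirichlet-process CRP.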
|
1007.0313
|
Repairing People Trajectories Based on Point Clustering
|
cs.CV
|
This paper presents a method for improving any object tracking algorithm
based on machine learning. During the training phase, important trajectory
features are extracted, which are then used to calculate a confidence value for
each trajectory. The positions at which objects are usually lost and found are
clustered in order to construct the set of 'lost zones' and 'found zones' in
the scene. Using these zones, we construct a triplet set of zones i.e. three
zones: In/Out zone (zone where an object can enter or exit the scene), 'lost
zone' and 'found zone'. Thanks to these triplets, during the testing phase, we
can repair the erroneous trajectories according to which triplet they are most
likely to belong to. The advantage of our approach over the existing state of
the art approaches is that (i) this method does not depend on a predefined
contextual scene, (ii) we exploit the semantics of the scene and (iii) we have
proposed a method to filter out noisy trajectories based on their confidence
value.
|
1007.0357
|
Transfer Entropy on Rank Vectors
|
nlin.CD cs.IT math.IT physics.data-an stat.ME
|
Transfer entropy (TE) is a popular measure of information flow found to
perform consistently well in different settings. Symbolic transfer entropy
(STE) is defined similarly to TE but on the ranks of the components of the
reconstructed vectors rather than the reconstructed vectors themselves. First,
we correct STE by forming the ranks for the future samples of the response
system with regard to the current reconstructed vector. We give the grounds for
this modified version of STE, which we call Transfer Entropy on Rank Vectors
(TERV). Then we propose to use more than one step ahead in the formation of the
future of the response in order to capture the information flow from the
driving system over a longer time horizon. To assess the performance of STE, TE
and TERV in detecting correctly the information flow we use receiver operating
characteristic (ROC) curves formed by the measure values in the two coupling
directions computed on a number of realizations of known weakly coupled
systems. We also consider different settings of state space reconstruction,
time series length and observational noise. The results show that TERV indeed
improves STE and in some cases performs better than TE, particularly in the
presence of noise, but overall TE gives more consistent results. The use of
multiple steps ahead improves the accuracy of TE and TERV.
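The rank-vector representation that STE and TERV operate on can be sketched as follows: each reconstructed vector is replaced by the ranks of its components. The embedding dimension and delay below are illustrative choices, and this shows only the representation, not the TERV estimator itself:

```python
import numpy as np

def rank_vectors(x, dim=3, delay=1):
    """Turn a scalar time series into rank (ordinal-pattern) vectors:
    each delay-embedded window is replaced by the ranks 0..dim-1
    of its components."""
    n = len(x) - (dim - 1) * delay
    out = np.empty((n, dim), dtype=int)
    for i in range(n):
        window = x[i : i + dim * delay : delay]
        out[i] = np.argsort(np.argsort(window))  # double argsort gives ranks
    return out
```

Transfer entropy on these rank sequences (rather than on the raw embedded vectors) is what distinguishes STE/TERV from plain TE.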
|
1007.0376
|
The Transfer of Evolved Artificial Immune System Behaviours between
Small and Large Scale Robotic Platforms
|
cs.NE cs.RO
|
This paper demonstrates that a set of behaviours evolved in simulation on a
miniature robot (epuck) can be transferred to a much larger scale platform (a
virtual Pioneer P3-DX) that also differs in shape, sensor type, sensor
configuration and programming interface. The chosen architecture uses a
reinforcement learning-assisted genetic algorithm to evolve the epuck
behaviours, which are encoded as a genetic sequence. This sequence is then used
by the Pioneers as part of an adaptive, idiotypic artificial immune system
(AIS) control architecture. Testing in three different simulated worlds shows
that the Pioneer can use these behaviours to navigate and solve object-tracking
tasks successfully, as long as its adaptive AIS mechanism is in place.
|
1007.0379
|
Reliability Distributions of Truncated Max-log-map (MLM) Detectors
Applied to ISI Channels
|
cs.IT math.IT
|
The max-log-map (MLM) receiver is an approximated version of the well-known,
Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. The MLM algorithm is attractive due
to its implementation simplicity. In practice, sliding-window implementations
are preferred; these practical implementations consider truncated signaling
neighborhoods around each transmission time instant. In this paper, we consider
sliding-window MLM receivers, where for any integer m, the MLM detector is
truncated to a length-m signaling neighborhood. For any number n of chosen
time instants, we derive exact expressions for both i) the joint distribution
of the MLM symbol reliabilities, and ii) the joint probability of the erroneous
MLM symbol detections. We show that the obtained expressions can be efficiently
evaluated using Monte-Carlo techniques. Our proposed method is efficient; the
most computationally expensive operation (in each Monte-Carlo trial) is an
eigenvalue decomposition of a size 2mn by 2mn matrix. Practical truncation
lengths can be easily handled. Finally, our proposed method is extremely
general, and various scenarios such as correlated noise distributions,
modulation coding, etc. may be easily accommodated.
|
1007.0380
|
Additive Non-negative Matrix Factorization for Missing Data
|
cs.NA cs.LG
|
Non-negative matrix factorization (NMF) has previously been shown to be a
useful decomposition for multivariate data. We interpret the factorization in a
new way and use it to generate missing attributes from test data. We provide a
joint optimization scheme for the missing attributes as well as the NMF
factors. We prove the monotonic convergence of our algorithms. We present
classification results for cases with missing attributes.
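A standard way to handle missing entries in NMF is to mask them out of the multiplicative updates; the sketch below is the masked variant of the Lee-Seung updates, given only as a reference point, since the paper's joint optimization over factors and missing attributes may differ:

```python
import numpy as np

def nmf_missing(V, mask, rank=2, iters=200, seed=0):
    """Masked multiplicative-update NMF: entries with mask == 0 are
    ignored when fitting V ~ W @ H (all factors non-negative)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    Vm = V * mask
    for _ in range(iters):
        WH = (W @ H) * mask
        W *= (Vm @ H.T) / (WH @ H.T + 1e-9)
        WH = (W @ H) * mask
        H *= (W.T @ Vm) / (W.T @ WH + 1e-9)
    return W, H
```

A missing attribute can then be generated as the corresponding entry of `W @ H`.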
|
1007.0394
|
Non-uniform state space reconstruction and coupling detection
|
nlin.CD cs.IT math.IT physics.data-an q-bio.NC stat.ME
|
We investigate the state space reconstruction from multiple time series
derived from continuous and discrete systems and propose a method for building
embedding vectors progressively using information measure criteria regarding
past, current and future states. The embedding scheme can be adapted for
different purposes, such as mixed modelling, cross-prediction and Granger
causality. In particular we apply this method in order to detect and evaluate
information transfer in coupled systems. As a practical application, we
investigate in records of scalp epileptic EEG the information flow across brain
areas.
|
1007.0404
|
Quasi-Cyclic Asymptotically Regular LDPC Codes
|
cs.IT math.IT
|
Families of "asymptotically regular" LDPC block code ensembles can be formed
by terminating (J,K)-regular protograph-based LDPC convolutional codes. By
varying the termination length, we obtain a large selection of LDPC block code
ensembles with varying code rates, minimum distance that grows linearly with
block length, and capacity approaching iterative decoding thresholds, despite
the fact that the terminated ensembles are almost regular. In this paper, we
investigate the properties of the quasi-cyclic (QC) members of such an
ensemble. We show that an upper bound on the minimum Hamming distance of
members of the QC sub-ensemble can be improved by careful choice of the
component protographs used in the code construction. Further, we show that the
upper bound on the minimum distance can be improved by using arrays of
circulants in a graph cover of the protograph.
|
1007.0408
|
Privacy in geo-social networks: proximity notification with untrusted
service providers and curious buddies
|
cs.DB cs.CR
|
A major feature of the emerging geo-social networks is the ability to notify
a user when one of his friends (also called buddies) happens to be
geographically in proximity with the user. This proximity service is usually
offered by the network itself or by a third party service provider (SP) using
location data acquired from the users. This paper provides a rigorous
theoretical and experimental analysis of the existing solutions for the
location privacy problem in proximity services. This is a serious problem for
users who do not trust the SP to handle their location data, and would only
like to release their location information in a generalized form to
participating buddies. The paper presents two new protocols providing complete
privacy with respect to the SP, and controllable privacy with respect to the
buddies. The analytical and experimental analysis of the protocols takes into
account privacy, service precision, and computation and communication costs,
showing the superiority of the new protocols compared to those that have
appeared in the literature to date. The proposed protocols have also been
tested in a full
system implementation of the proximity service.
|
1007.0412
|
Improving Iris Recognition Accuracy By Score Based Fusion Method
|
cs.AI
|
Iris recognition technology, used to identify individuals by photographing
the iris of their eye, has become popular in security applications because of
its ease of use, accuracy, and safety in controlling access to high-security
areas. Fusion of multiple algorithms for biometric verification performance
improvement has received considerable attention. The proposed method combines
zero-crossing 1-D wavelet, Euler number, and genetic algorithm based methods
for feature extraction. The outputs of these three algorithms are normalized
and their scores are fused to decide whether the user is genuine or an
imposter. This new strategy is discussed in this paper in order to compute a
multimodal combined score.
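Score-level fusion of this kind usually normalizes each matcher's score to a common range and then applies a (weighted) sum rule against a decision threshold. A minimal sketch; the matcher names, score ranges and threshold are illustrative assumptions, not values from the paper:

```python
def fuse_scores(scores, ranges, weights=None, threshold=0.5):
    """Min-max normalize each matcher's score to [0, 1], fuse by a
    weighted sum rule, and decide genuine vs. imposter."""
    if weights is None:
        weights = {name: 1.0 / len(scores) for name in scores}
    fused = 0.0
    for name, s in scores.items():
        lo, hi = ranges[name]
        fused += weights[name] * (s - lo) / (hi - lo)
    return fused, ("genuine" if fused >= threshold else "imposter")
```

Equal weights implement the plain sum rule; matcher-specific weights (e.g. from validation error rates) give the weighted variant.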
|
1007.0417
|
Delta Learning Rule for the Active Sites Model
|
cs.NE
|
This paper reports the results on methods of comparing the memory retrieval
capacity of the Hebbian neural network which implements the B-Matrix approach,
by using the Widrow-Hoff rule of learning. We then extend the recently
proposed Active Sites model by developing a delta rule to increase memory
capacity. Also, this paper extends the binary neural network to a multi-level
(non-binary) neural network.
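The Widrow-Hoff (delta / LMS) rule referred to above updates weights in proportion to the output error, w <- w + lr * (t - y) * x. A generic single-layer sketch of the rule, not the paper's B-Matrix / Active Sites formulation:

```python
import numpy as np

def delta_rule_train(X, T, lr=0.1, epochs=100, seed=0):
    """Train a single linear layer with the per-pattern
    Widrow-Hoff (delta) rule: w <- w + lr * outer(x, t - y)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=(X.shape[1], T.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = x @ w                     # linear output
            w += lr * np.outer(x, t - y)  # error-proportional update
    return w
```

On orthogonal input patterns the rule converges geometrically to the target mapping, which is the memory-storage behaviour the capacity comparison is about.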
|
1007.0436
|
Transmit Energy Focusing for DOA Estimation in MIMO Radar with Colocated
Antennas
|
cs.IT math.IT
|
In this paper, we propose a transmit beamspace energy focusing technique for
multiple-input multiple-output (MIMO) radar with application to direction
finding for multiple targets. The general angular directions of the targets are
assumed to be located within a certain spatial sector. We focus the energy of
multiple (two or more) transmitted orthogonal waveforms within that spatial
sector using transmit beamformers which are designed to improve the
signal-to-noise ratio (SNR) gain at each receive antenna. The subspace
decomposition-based techniques such as MUSIC can then be used for direction
finding for multiple targets. Moreover, the transmit beamformers can be
designed so that matched-filtering the received data to the waveforms yields
multiple (two or more) data sets with rotational invariance property that
allows applying search-free direction finding techniques such as ESPRIT for two
data sets or parallel factor analysis (PARAFAC) for more than two data sets.
Unlike previously reported MIMO radar ESPRIT/PARAFAC-based direction finding
techniques, our method achieves the rotational invariance property in a
different manner combined also with the transmit energy focusing. As a result,
it achieves better estimation performance at lower computational cost.
In particular, the proposed technique leads to a lower Cramer-Rao bound than the
existing techniques due to the transmit energy focusing capability. Simulation
results also show the superiority of the proposed technique over the existing
techniques.
|
1007.0449
|
Unimodular Lattices for the Gaussian Wiretap Channel
|
cs.IT cs.CR math.IT
|
In a recent paper, the authors introduced a lattice invariant called "Secrecy
Gain" which measures the confusion experienced by a passive eavesdropper on the
Gaussian Wiretap Channel. We study, here, the behavior of this invariant for
unimodular lattices by using tools from Modular Forms and show that, for some
families of unimodular lattices, indexed by the dimension, the secrecy gain
goes to infinity exponentially with the dimension.
|
1007.0465
|
On the Solvability of 2-pair Unicast Networks --- A Cut-based
Characterization
|
cs.IT math.IT
|
In this paper, we propose a subnetwork decomposition/combination approach to
investigate the single rate $2$-pair unicast problem. It is shown that the
solvability of a $2$-pair unicast problem is completely determined by four
specific link subsets, namely, $\mathcal A_{1,1}$, $\mathcal A_{2,2}$,
$\mathcal A_{1,2}$ and $\mathcal A_{2,1}$ of its underlying network. As a
result, an efficient cut-based algorithm to determine the solvability of a
$2$-pair unicast problem is presented.
|
1007.0481
|
IMP: A Message-Passing Algorithm for Matrix Completion
|
cs.IT cs.LG math.IT
|
A new message-passing (MP) method is considered for the matrix completion
problem associated with recommender systems. We attack the problem using a
(generative) factor graph model that is related to a probabilistic low-rank
matrix factorization. Based on the model, we propose a new algorithm, termed
IMP, for the recovery of a data matrix from incomplete observations. The
algorithm is based on a clustering followed by inference via MP (IMP). The
algorithm is compared with a number of other matrix completion algorithms on
real collaborative filtering (e.g., Netflix) data matrices. Our results show
that, while many methods perform similarly with a large number of revealed
entries, the IMP algorithm outperforms all others when the fraction of observed
entries is small. This is helpful because it reduces the well-known cold-start
problem associated with collaborative filtering (CF) systems in practice.
|
1007.0484
|
Query Strategies for Evading Convex-Inducing Classifiers
|
cs.LG cs.CR cs.GT
|
Classifiers are often used to detect miscreant activities. We study how an
adversary can systematically query a classifier to elicit information that
allows the adversary to evade detection while incurring a near-minimal cost of
modifying their intended malfeasance. We generalize the theory of Lowd and Meek
(2005) to the family of convex-inducing classifiers that partition input space
into two sets, one of which is convex. We present query algorithms for this
family that construct undetected instances of approximately minimal cost using
only polynomially-many queries in the dimension of the space and in the level
of approximation. Our results demonstrate that near-optimal evasion can be
accomplished without reverse-engineering the classifier's decision boundary. We
also consider general lp costs and show that near-optimal evasion on the family
of convex-inducing classifiers is generally efficient for both positive and
negative convexity for all levels of approximation if p=1.
|
1007.0496
|
Perturbed Hankel Determinants: Applications to the Information Theory of
MIMO Wireless Communications
|
cs.IT math.IT
|
In this paper we compute two important information-theoretic quantities which
arise in the application of multiple-input multiple-output (MIMO) antenna
wireless communication systems: the distribution of the mutual information of
multi-antenna Gaussian channels, and the Gallager random coding upper bound on
the error probability achievable by finite-length channel codes. It turns out
that the mathematical problem underpinning both quantities is the computation
of certain Hankel determinants generated by deformed versions of classical
weight functions. For single-user MIMO systems, it is a deformed Laguerre
weight, whereas for multi-user MIMO systems it is a deformed Jacobi weight. We
apply two different methods to characterize each of these Hankel determinants.
First, we employ the ladder operators of the corresponding monic orthogonal
polynomials to give an exact characterization of the Hankel determinants in
terms of Painlev\'{e} differential equations. This turns out to be a
Painlev\'{e} V for the single-user MIMO scenario and a Painlev\'{e} VI for the
multi-user scenario. We then employ Coulomb fluid methods to derive new
closed-form approximations for the Hankel determinants which, although formally
valid for large matrix dimensions, are shown to give accurate results for both
the MIMO mutual information distribution and the error exponent even when the
matrix dimensions are small. Focusing on the single-user mutual information
distribution, we then employ both the exact Painlev\'{e} representation and the
Coulomb fluid approximation to yield deeper insights into the scaling behavior
in terms of the number of antennas and signal-to-noise ratio. Among other
things, these results allow us to study the asymptotic Gaussianity of the
distribution as the number of antennas increases, and to explicitly compute the
correction terms to the mean, variance, and higher order cumulants.
|
1007.0512
|
User Partitioning for Less Overhead in MIMO Interference Channels
|
cs.IT math.IT
|
This paper presents a study on multiple-antenna interference channels,
accounting for general overhead as a function of the number of users and
antennas in the network. The model includes both perfect and imperfect channel
state information based on channel estimation in the presence of noise. Three
low complexity methods are proposed for reducing the impact of overhead in the
sum network throughput by partitioning users into orthogonal groups. The first
method allocates spectrum to the groups equally, creating an imbalance in the
sum rate of each group. The second proposed method allocates spectrum unequally
among the groups to provide rate fairness. Finally, geographic grouping is
proposed for cases where some receivers do not observe significant interference
from other transmitters. For each partitioning method, the optimal solution not
only requires a brute force search over all possible partitions, but also
requires full channel state information, thereby defeating the purpose of
partitioning. We therefore propose greedy methods to solve the problems,
requiring no instantaneous channel knowledge. Simulations show that the
proposed greedy methods switch from time-division to interference alignment as
the coherence time of the channel increases, and have a small loss relative to
optimal partitioning only at moderate coherence times.
|
1007.0522
|
Diversity Embedded Streaming Erasure Codes (DE-SCo): Constructions and
Optimality
|
cs.IT cs.NI math.IT
|
Streaming erasure codes guarantee that each source packet is recovered within
a fixed delay at the receiver over a burst-erasure channel. This paper
introduces a new class of streaming codes: Diversity Embedded Streaming Erasure
Codes (DE-SCo), that provide a flexible tradeoff between the channel quality
and receiver delay. When the channel conditions are good, the source stream is
recovered with a low delay, whereas when the channel conditions are poor the
source stream is still recovered, albeit with a larger delay. Information
theoretic analysis of the underlying burst-erasure broadcast channel reveals
that DE-SCo achieve the minimum possible delay for the weaker user, without
sacrificing the single-user optimal performance of the stronger user. Our
constructions are explicit, incur polynomial time encoding and decoding
complexity and outperform random linear codes over burst-erasure channels.
|
1007.0528
|
Binary Independent Component Analysis with OR Mixtures
|
cs.IT cs.NI math.IT
|
Independent component analysis (ICA) is a computational method for separating
a multivariate signal into subcomponents assuming the mutual statistical
independence of the non-Gaussian source signals. The classical ICA framework
usually assumes linear combinations of independent sources over the field of
real numbers R. In this paper, we
investigate binary ICA for OR mixtures (bICA), which can find applications in
many domains including medical diagnosis, multi-cluster assignment, Internet
tomography and network resource management. We prove that bICA is uniquely
identifiable under the disjunctive generation model, and propose a
deterministic iterative algorithm to determine the distribution of the latent
random variables and the mixing matrix. The inverse problem of inferring the
values of the latent variables from noisy measurements is also considered. We
conduct an extensive simulation study to verify the effectiveness of the
proposed algorithm and present examples of real-world
applications where bICA can be applied.
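The disjunctive generation model underlying bICA replaces the linear mixture with a Boolean OR of ANDs. A sketch of the forward model only (the identification algorithm itself is not reproduced here); for 0/1 inputs an ordinary matrix product followed by thresholding realises the OR-of-ANDs:

```python
import numpy as np

def or_mix(A, Y):
    # disjunctive generation model: x_jt = OR_i (A_ji AND Y_it),
    # with A an (m, n) binary mixing matrix and Y an (n, T) binary source matrix
    return (A.astype(int) @ Y.astype(int) > 0).astype(int)
```

An observation component is 1 exactly when at least one active source is connected to it through the mixing matrix, which is the structure exploited for identifiability in the paper.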
|
1007.0546
|
Computational Model of Music Sight Reading: A Reinforcement Learning
Approach
|
cs.AI cs.LG cs.NE math.OC
|
Although the music sight reading process has been studied from cognitive
psychology viewpoints, computational learning methods such as reinforcement
learning have not yet been used to model such processes. In this paper, with
regard to the essential properties of our specific problem, we consider the
value function concept and show that the optimum policy can be obtained by the
method we offer without computing complex value functions. We also offer a
normative behavioral model for the interaction of the agent with the musical
pitch environment and, using a slightly different version of partially
observable Markov decision processes, show that our method enables faster
learning of state-action pairs in our implemented agents.
|
1007.0547
|
A Fast Decision Technique for Hierarchical Hough Transform for Line
Detection
|
cs.CV
|
Many techniques have been proposed to speedup the performance of classic
Hough Transform. These techniques are primarily based on converting the voting
procedure to a hierarchy-based voting method. These methods use an approximate
decision-making process. In this paper, we propose a fast decision-making
process that enhances the speed and reduces the space requirements.
Experimental results demonstrate that the proposed algorithm is much faster
than a similar Fast Hough Transform.
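For reference, the classic rho-theta Hough voting procedure that these hierarchical methods accelerate can be sketched as follows; the accumulator resolution and angle count are illustrative choices:

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    # classic Hough voting: every edge point (y, x) votes for all lines
    # rho = x*cos(theta) + y*sin(theta) passing through it
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per angle bin
    return acc, thetas, diag
```

Peaks in the accumulator correspond to lines supported by many edge points; the hierarchical variants replace this flat vote with coarse-to-fine decisions over the (rho, theta) plane.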
|
1007.0548
|
A Reinforcement Learning Model Using Neural Networks for Music Sight
Reading Learning Problem
|
cs.LG cs.NE
|
Music sight reading is a complex process in which, when it occurs in the
brain, certain learning attributes emerge. Besides giving a model based on the
actor-critic method in reinforcement learning, the agent is considered to have
a neural network structure. We study where the sight reading process takes
place and also a serious problem: how the synaptic weights are adjusted through
the learning process. The model we offer here is a computational model
accompanied by an update equation for adjusting the weights.
|
1007.0549
|
Minimax Manifold Estimation
|
stat.ML cs.LG math.ST stat.TH
|
We find the minimax rate of convergence in Hausdorff distance for estimating
a manifold M of dimension d embedded in R^D given a noisy sample from the
manifold. We assume that the manifold satisfies a smoothness condition and that
the noise distribution has compact support. We show that the optimal rate of
convergence is n^{-2/(2+d)}. Thus, the minimax rate depends only on the
dimension of the manifold, not on the dimension of the space in which M is
embedded.
|
1007.0563
|
Graphical Models as Block-Tree Graphs
|
stat.ML cs.IT math.IT math.PR
|
We introduce block-tree graphs as a framework for deriving efficient
algorithms on graphical models. We define block-tree graphs as a
tree-structured graph where each node is a cluster of nodes such that the
clusters in the graph are disjoint. This differs from junction-trees, where two
clusters connected by an edge always have at least one common node. When
compared to junction-trees, we show that constructing block-tree graphs is
faster, and finding optimal block-tree graphs has a much smaller search space.
Applying our block-tree graph framework to graphical models, we show that, for
some graphs, e.g., grid graphs, using block-tree graphs for inference is
computationally more efficient than using junction-trees. For graphical models
with boundary conditions, the block-tree graph framework transforms the
boundary valued problem into an initial value problem. For Gaussian graphical
models, the block-tree graph framework leads to a linear state-space
representation. Since exact inference in graphical models can be
computationally intractable, we propose to use spanning block-trees to derive
approximate inference algorithms. Experimental results show the improved
performance in using spanning block-trees versus using spanning trees for
approximate estimation over Gaussian graphical models.
|
1007.0566
|
Organisation of signal flow in directed networks
|
physics.data-an cond-mat.dis-nn cs.SI physics.bio-ph physics.soc-ph stat.OT
|
The topic of our work is whether, and how, the coherent operation of network
elements is determined by the network structure. We map the structure of
signal flow in directed networks by analysing the
degree of edge convergence and the overlap between the in- and output sets of
an edge. Definitions of convergence degree and overlap are based on the
shortest paths, thus they encapsulate global network properties. Using the
defining notions of convergence degree and overlapping set we clarify the
meaning of network causality and demonstrate the crucial role of chordless
circles. In real-world networks the flow representation distinguishes nodes
according to their signal transmitting, processing and control properties. The
analysis of real-world networks in terms of flow representation was in
accordance with the known functional properties of the network nodes. It is
shown that nodes with different signal processing, transmitting and control
properties are randomly connected at the global scale, while local connectivity
patterns depart from randomness. Grouping network nodes according to their
signal flow properties was unrelated to the network's community structure. We
present evidence that signal flow properties of small-world-like, real-world
networks cannot be reconstructed by algorithms used to generate small-world
networks. Convergence degree values were calculated for regular oriented trees,
and its probability density function for networks grown with the preferential
attachment mechanism. For Erd\H{o}s-R\'enyi graphs we calculated both the
probability density function of convergence degrees and of overlaps.
|
1007.0571
|
Quickest Detection with Social Learning: Interaction of local and global
decision makers
|
cs.GT cs.IT math.IT stat.ME
|
We consider how local and global decision policies interact in stopping time
problems such as quickest time change detection. Individual agents make myopic
local decisions via social learning, that is, each agent records a private
observation of a noisy underlying state process, selfishly optimizes its local
utility and then broadcasts its local decision. Given these local decisions,
how can a global decision maker achieve quickest time change detection when the
underlying state changes according to a phase-type distribution? The paper
presents four results. First, using Blackwell dominance of measures, it is
shown that the optimal cost incurred in social learning based quickest
detection is always larger than that of classical quickest detection. Second,
it is shown that in general the optimal decision policy for social learning
based quickest detection is characterized by multiple thresholds within the
space of Bayesian distributions. Third, using lattice programming and
stochastic dominance, sufficient conditions are given for the optimal decision
policy to consist of a single linear hyperplane, or, more generally, a
threshold curve. Estimation of the optimal linear approximation to this
threshold curve is formulated as a simulation-based stochastic optimization
problem. Finally, the paper shows that in multi-agent sensor management with
quickest detection, where each agent views the world according to its prior,
the optimal policy has a similar structure to social learning.
|
1007.0602
|
On The Complexity and Completeness of Static Constraints for Breaking
Row and Column Symmetry
|
cs.AI cs.CC
|
We consider a common type of symmetry where we have a matrix of decision
variables with interchangeable rows and columns. A simple and efficient method
to deal with such row and column symmetry is to post symmetry breaking
constraints like DOUBLELEX and SNAKELEX. We provide a number of positive and
negative results on posting such symmetry breaking constraints. On the positive
side, we prove that we can compute in polynomial time a unique representative
of an equivalence class in a matrix model with row and column symmetry if the
number of rows (or of columns) is bounded and in a number of other special
cases. On the negative side, we show that whilst DOUBLELEX and SNAKELEX are
often effective in practice, they can leave a large number of symmetric
solutions in the worst case. In addition, we prove that propagating DOUBLELEX
completely is NP-hard. Finally we consider how to break row, column and value
symmetry, correcting a result in the literature about the safeness of combining
different symmetry breaking constraints. We end with the first experimental
study on how much symmetry is left by DOUBLELEX and SNAKELEX on some benchmark
problems.
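The DOUBLELEX constraint discussed above can be stated concretely: the sequence of rows and the sequence of columns must each be lexicographically non-decreasing. A minimal checker (a solution test only, not a propagator; as the abstract notes, propagating DOUBLELEX completely is NP-hard):

```python
def double_lex(matrix):
    # rows lexicographically ordered top-to-bottom,
    # columns lexicographically ordered left-to-right
    rows = [tuple(r) for r in matrix]
    cols = [tuple(c) for c in zip(*matrix)]
    return all(a <= b for a, b in zip(rows, rows[1:])) and \
           all(a <= b for a, b in zip(cols, cols[1:]))
```

Posting this constraint removes many, but in the worst case not all, of the symmetric solutions obtained by permuting interchangeable rows and columns.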
|
1007.0603
|
Decomposition of the NVALUE constraint
|
cs.AI
|
We study decompositions of the global NVALUE constraint. Our main
contribution is theoretical: we show that there are propagators for global
constraints like NVALUE which decompositions can simulate with the same time
complexity but with much greater space complexity. This suggests that the
benefit of a global propagator may often not be in saving time but in saving
space. Our other theoretical contribution is to show for the first time that
range consistency can be enforced on NVALUE with the same worst-case time
complexity as bound consistency. Finally, the decompositions we study are
readily encoded as linear inequalities. We are therefore able to use them in
integer linear programs.
|
1007.0604
|
Symmetry within and between solutions
|
cs.AI
|
Symmetry can be used to help solve many problems. For instance, Einstein's
famous 1905 paper ("On the Electrodynamics of Moving Bodies") uses symmetry to
help derive the laws of special relativity. In artificial intelligence,
symmetry has played an important role in both problem representation and
reasoning. I describe recent work on using symmetry to help solve constraint
satisfaction problems. Symmetries occur within individual solutions of problems
as well as between different solutions of the same problem. Symmetry can also
be applied to the constraints in a problem to give new symmetric constraints.
Reasoning about symmetry can speed up problem solving, and has led to the
discovery of new results in both graph and number theory.
|
1007.0614
|
Online Cake Cutting
|
cs.AI
|
We propose an online form of the cake cutting problem. This models situations
where players arrive and depart during the process of dividing a resource. We
show that well known fair division procedures like cut-and-choose and the
Dubins-Spanier moving knife procedure can be adapted to apply to such online
problems. We propose some desirable properties that online cake cutting
procedures might possess like online forms of proportionality and
envy-freeness, and identify which properties are in fact possessed by the
different online cake cutting procedures.
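The classical (offline) Dubins-Spanier moving-knife procedure that the paper adapts can be sketched on a discretised cake. The atom granularity and tolerance below are illustrative assumptions; this is the offline baseline, not the paper's online construction:

```python
def dubins_spanier(cake, values, eps=1e-9):
    # cake: list of atom indices 0..n-1; values: {player: per-atom values}.
    # The knife sweeps left to right; the first player whose fair share of
    # the remaining cake is reached takes the prefix and leaves.
    players = list(values)
    alloc = {}
    start = 0
    while len(players) > 1:
        marks = {}
        for p in players:
            share = sum(values[p][start:]) / len(players)
            acc = 0.0
            for i in range(start, len(cake)):
                acc += values[p][i]
                if acc >= share - eps:
                    marks[p] = i      # earliest point where p would call "cut"
                    break
        winner = min(marks, key=marks.get)
        alloc[winner] = cake[start:marks[winner] + 1]
        start = marks[winner] + 1
        players.remove(winner)
    alloc[players[0]] = cake[start:]
    return alloc
```

Each departing player receives a piece worth at least 1/n of the remaining cake by their own valuation, which is the proportionality guarantee the online variants aim to preserve as players arrive and depart.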
|
1007.0618
|
Face Synthesis (FASY) System for Determining the Characteristics of a
Face Image
|
cs.CV
|
This paper aims at determining the characteristics of a face image by
extracting its components. The FASY (FAce SYnthesis) System is a Face Database
Retrieval and new Face generation System that is under development. One of its
main features is the generation of the requested face when it is not found in
the existing database, which also allows continuous growth of the database.
To generate the new face image, we need to store the face components in the
database, so we have designed a new technique to extract them.
After extraction of the facial feature points we have
analyzed the components to determine their characteristics. After extraction
and analysis we have stored the components along with their characteristics
into the face database for later use during the face construction.
|
1007.0620
|
Quotient Based Multiresolution Image Fusion of Thermal and Visual Images
Using Daubechies Wavelet Transform for Human Face Recognition
|
cs.CV
|
This paper investigates the multiresolution level-1 and level-2 Quotient
based Fusion of thermal and visual images. In the proposed system, the method-1
namely "Decompose then Quotient Fuse Level-1" and the method-2 namely
"Decompose-Reconstruct then Quotient Fuse Level-2" both work on wavelet
transformations of the visual and thermal face images. The wavelet transform is
well-suited to managing different image resolutions and allows decomposition
of the image into different kinds of coefficients, while preserving the image
information without any loss. This approach is based on the definition of an
illumination invariant signature image which enables an analytic generation of
the image space with varying illumination. The quotient fused images are passed
through Principal Component Analysis (PCA) for dimension reduction and then
those images are classified using a multi-layer perceptron (MLP). The
performances of both the methods have been evaluated using OTCBVS and IRIS
databases. All the different classes have been tested separately, among them
the maximum recognition result is 100%.
|
1007.0621
|
Fusion of Daubechies Wavelet Coefficients for Human Face Recognition
|
cs.CV
|
In this paper fusion of visual and thermal images in wavelet transformed
domain has been presented. Here, Daubechies wavelet transform (D2) coefficients
from visual images and corresponding coefficients computed in the same manner
from thermal images are combined to get fused coefficients. After
decomposition up to the fifth level (Level 5), fusion of the coefficients is done.
Inverse Daubechies wavelet transform of those coefficients gives us fused face
images. The main advantage of the wavelet transform is that it is well-suited
to managing different image resolutions and allows decomposition of the image
into different kinds of coefficients, while preserving the image information. Fused
images thus found are passed through Principal Component Analysis (PCA) for
reduction of dimensions and then those reduced fused images are classified
using a multi-layer perceptron. For experiments IRIS Thermal/Visual Face
Database was used. Experimental results show that the performance of the
approach presented here achieves a maximum success rate of 100% in many cases.
|
1007.0626
|
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for
Human Face Recognition - A Comparative Study
|
cs.CV
|
In this paper we present a comparative study on fusion of visual and thermal
images using different wavelet transformations. Here, coefficients of discrete
wavelet transforms from both visual and thermal images are computed separately
and combined. Next, inverse discrete wavelet transformation is taken in order
to obtain fused face image. Both Haar and Daubechies (db2) wavelet transforms
have been used to compare recognition results. For experiments IRIS
Thermal/Visual Face Database was used. Experimental results using Haar and
Daubechies wavelets show that the performance of the approach presented here
achieves a maximum success rate of 100% in many cases.
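The fusion pipeline described in these abstracts (decompose both images, combine corresponding coefficients, invert the transform) can be sketched with a hand-rolled one-level 2-D Haar transform. The papers use library Haar and db2 transforms at deeper levels and do not specify the combination rule, so the coefficient averaging below is an assumption:

```python
import numpy as np

def haar2(img):
    # one-level 2-D Haar DWT (even-sized image): approximation + 3 detail bands
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    # exact inverse of haar2
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(visual, thermal):
    # combine corresponding sub-bands (averaging is an assumed rule),
    # then reconstruct the fused face image
    bands_v, bands_t = haar2(visual), haar2(thermal)
    fused = [(bv + bt) / 2.0 for bv, bt in zip(bands_v, bands_t)]
    return ihaar2(*fused)
```

With a richer rule (e.g. taking the larger-magnitude detail coefficient per band) the fused image can retain thermal robustness to illumination while keeping visual texture detail.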
|
1007.0627
|
A Parallel Framework for Multilayer Perceptron for Human Face
Recognition
|
cs.CV
|
Artificial neural networks have already shown their success in face
recognition and similar complex pattern recognition tasks. However, a major
disadvantage of the technique is that it is extremely slow during training for
larger classes and hence not suitable for real-time complex problems such as
pattern recognition. This is an attempt to develop a parallel framework for the
training algorithm of a perceptron. In this paper, two general architectures
for a Multilayer Perceptron (MLP) have been demonstrated. The first
architecture is All-Class-in-One-Network (ACON) where all the classes are
placed in a single network and the second one is One-Class-in-One-Network
(OCON) where an individual single network is responsible for each and every
class. Capabilities of these two architectures were compared and verified in
solving human face recognition, which is a complex pattern recognition task
where several factors affect the recognition performance like pose variations,
facial expression changes, occlusions, and most importantly illumination
changes. Both the structures were implemented and tested for face recognition
purpose and experimental results show that the OCON structure performs better
than the generally used ACON one in terms of training convergence speed of the
network. Unlike the conventional sequential approach of training the neural
networks, the OCON technique may be implemented by training all the classes of
the face images simultaneously.
|
1007.0628
|
Image Pixel Fusion for Human Face Recognition
|
cs.CV
|
In this paper we present a technique for fusion of optical and thermal face
images based on image pixel fusion approach. Out of several factors, which
affect face recognition performance in case of visual images, illumination
changes are a significant factor that needs to be addressed. Thermal images are
better in handling illumination conditions but not very consistent in capturing
texture details of the faces. Other factors like sunglasses, beards, and
moustaches also play an active role in complicating the recognition process.
Fusion of thermal and visual images is a solution to overcome the drawbacks
present in the individual thermal and visual face images. Here fused images are
projected into an eigenspace and the projected images are classified using a
radial basis function (RBF) neural network and also by a multi-layer perceptron
(MLP). In the experiments Object Tracking and Classification Beyond Visible
Spectrum (OTCBVS) database benchmark for thermal and visual face images have
been used. Comparison of experimental results shows that the proposed approach
performs significantly well in recognizing face images, with success rates of
96% and 95.07% for the RBF neural network and the MLP, respectively.
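The "fuse pixels, then project into eigenspace" pipeline shared by these abstracts can be sketched as follows. The function names, the fusion weight `alpha`, and the SVD-based eigenface computation are illustrative assumptions; the papers do not specify the pixel weighting:

```python
import numpy as np

def pixel_fuse(visual, thermal, alpha=0.5):
    # simple weighted pixel-level fusion (alpha is an assumed weighting)
    return alpha * visual + (1 - alpha) * thermal

def eigenspace(train, k):
    # train: (n_images, n_pixels) of flattened fused faces;
    # returns the mean face and the top-k eigenfaces (principal directions)
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, Vt[:k]

def project(img, mean, eigenfaces):
    # coordinates of a face in the eigenspace, fed to the RBF/MLP classifier
    return eigenfaces @ (img - mean)
```

The k-dimensional projections, rather than raw pixels, are what the RBF network and MLP classify, which is the dimensionality reduction step the abstracts describe.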
|
1007.0631
|
Classification of Fused Images using Radial Basis Function Neural
Network for Human Face Recognition
|
cs.CV
|
Here an efficient fusion technique for automatic face recognition has been
presented. Fusion of visual and thermal images has been done to take the
advantages of thermal images as well as visual images. By employing fusion a
new image can be obtained, which provides the most detailed, reliable, and
discriminating information. In this method fused images are generated using
visual and thermal face images in the first step. In the second step, fused
images are projected into eigenspace and finally classified using a radial
basis function neural network. In the experiments Object Tracking and
Classification Beyond Visible Spectrum (OTCBVS) database benchmark for thermal
and visual face images have been used. Experimental results show that the
proposed approach performs well in recognizing unknown individuals with a
maximum success rate of 96%.
|
1007.0633
|
Classification of fused face images using multilayer perceptron neural
network
|
cs.CV
|
This paper presents a concept of image pixel fusion of visual and thermal
faces, which can significantly improve the overall performance of a face
recognition system. Several factors affect face recognition performance
including pose variations, facial expression changes, occlusions, and most
importantly illumination changes. So, image pixel fusion of thermal and visual
images is a solution to overcome the drawbacks present in the individual
thermal and visual face images. Fused images are projected into eigenspace and
finally classified using a multi-layer perceptron. In the experiments we have
used Object Tracking and Classification Beyond Visible Spectrum (OTCBVS)
database benchmark thermal and visual face images. Experimental results show
that the proposed approach significantly improves the verification and
identification performance and the success rate is 95.07%. The main objective
of employing fusion is to produce a fused image that provides the most detailed
and reliable information. Fusion of multiple images together produces a more
efficient representation of the image.
|
1007.0636
|
Classification of Log-Polar-Visual Eigenfaces using Multilayer
Perceptron
|
cs.CV
|
In this paper we present a simple novel approach to tackle the challenges of
scaling and rotation of face images in face recognition. The proposed approach
registers the training and testing visual face images by log-polar
transformation, which is capable to handle complicacies introduced by scaling
and rotation. Log-polar images are projected into eigenspace and finally
classified using an improved multi-layer perceptron. In the experiments we have
used ORL face database and Object Tracking and Classification Beyond Visible
Spectrum (OTCBVS) database for visual face images. Experimental results show
that the proposed approach significantly improves the recognition performance
from visual to log-polar-visual face images. In the case of the ORL face database,
recognition rate for visual face images is 89.5% and that is increased to 97.5%
for log-polar-visual face images whereas for OTCBVS face database recognition
rate for visual images is 87.84% and 96.36% for log-polar-visual face images.
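The log-polar registration step can be sketched as a nearest-neighbour resampling onto a log-polar grid, under which rotation about the centre becomes a shift along one axis and scaling a shift along the other. The output resolution and centring are illustrative assumptions:

```python
import numpy as np

def log_polar(img, out_shape=(64, 64)):
    # resample img onto a log-polar grid centred at the image centre;
    # rotation -> shift along theta, scaling -> shift along log-radius
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    n_r, n_theta = out_shape
    rho = np.exp(np.linspace(0, np.log(r_max), n_r))   # log-spaced radii
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing='ij')
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                  # nearest-neighbour
```

Because shifts are far easier to absorb than rotations or scalings, the eigenspace projection and MLP downstream see a more invariant representation of the face.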
|
1007.0637
|
Local search for stable marriage problems with ties and incomplete lists
|
cs.AI
|
The stable marriage problem has a wide variety of practical applications,
ranging from matching resident doctors to hospitals, to matching students to
schools, or more generally to any two-sided market. We consider a useful
variation of the stable marriage problem, where the men and women express their
preferences using a preference list with ties over a subset of the members of
the other sex. Matchings are permitted only with people who appear in these
preference lists. In this setting, we study the problem of finding a stable
matching that marries as many people as possible. Stability is an envy-free
notion: no man and woman who are not married to each other would both prefer
each other to their partners or to being single. This problem is NP-hard. We
tackle this problem using local search, exploiting properties of the problem to
reduce the size of the neighborhood and to make local moves efficiently.
Experimental results show that this approach is able to solve large problems,
quickly returning stable matchings of large and often optimal size.
|
1007.0638
|
Human Face Recognition using Line Features
|
cs.CV
|
In this work we investigate a novel approach to handle the challenges of face
recognition, which include rotation, scaling, occlusion, illumination changes,
etc. Here we have used thermal face images, as they minimize the effect of
illumination changes and of occlusion due to moustaches, beards, adornments,
etc. The proposed approach registers the training and testing thermal face
images in polar coordinates, which handles the complications introduced by
scaling and rotation. Line features are extracted from the thermal polar
images and feature vectors are constructed from these lines. The feature
vectors thus obtained are passed through principal component analysis (PCA)
for dimensionality reduction. Finally, the images projected into eigenspace
are classified using a multi-layer perceptron. In the experiments we have used
the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS)
database. Experimental results show that the proposed approach significantly
improves verification and identification performance, with a success rate of
99.25%.
|
1007.0660
|
The Latent Bernoulli-Gauss Model for Data Analysis
|
cs.LG
|
We present a new latent-variable model employing a Gaussian mixture
integrated with a feature selection procedure (the Bernoulli part of the model)
which together form a "Latent Bernoulli-Gauss" distribution. The model is
applied to MAP estimation, clustering, feature selection and collaborative
filtering, and compares favorably with state-of-the-art latent-variable models.
|
1007.0690
|
A unified view of Automata-based algorithms for Frequent Episode
Discovery
|
cs.AI
|
The frequent episode discovery framework is a popular framework in temporal
data mining with many applications. Over the years, many different notions of
episode frequency have been proposed, along with different algorithms for
episode discovery. In this paper we present a unified view of all such
frequency counting algorithms: a generic algorithm of which all current
algorithms are special cases. This unified view allows one to gain insight
into the different frequencies, and we present quantitative relationships
among them. The unified view also helps in obtaining correctness proofs for
various algorithms, as we show here, and in generalizing the algorithms so
that they can discover episodes with general partial orders.
|
1007.0728
|
Artificial Learning in Artificial Memories
|
cs.AI q-bio.NC
|
Memory refinements are designed below to detect those sequences of actions
that have been repeated a given number of times n. Such sequences are
subsequently permitted to run without CPU involvement. This mimics human
learning: actions are rehearsed and, once learned, are performed automatically
without conscious involvement.
|
1007.0776
|
Is Computational Complexity a Barrier to Manipulation?
|
cs.AI cs.CC cs.GT
|
When agents are acting together, they may need a simple mechanism to decide
on joint actions. One possibility is to have the agents express their
preferences in the form of a ballot and use a voting rule to decide the winning
action(s). Unfortunately, agents may try to manipulate such an election by
misreporting their preferences. Fortunately, it has been shown that it is
NP-hard to compute how to manipulate a number of different voting rules.
However, NP-hardness only bounds the worst-case complexity. Recent theoretical
results suggest that manipulation may often be easy in practice. To address
this issue, I suggest studying empirically if computational complexity is in
practice a barrier to manipulation. The basic tool used in my investigations is
the identification of computational "phase transitions". Such an approach has
been fruitful in identifying hard instances of propositional satisfiability and
other NP-hard problems. I show that phase transition behaviour gives insight
into the hardness of manipulating voting rules, increasing concern that
computational complexity is not, in practice, a barrier to manipulation.
Finally, I look at the problem of computing manipulations of other, related
problems like stable marriage and tournament problems.
|
1007.0799
|
Fountain Codes with Multiplicatively Repeated Non-Binary LDPC Codes
|
cs.IT math.IT
|
We study fountain codes transmitted over a binary-input symmetric-output
channel. For channels with small capacity, receivers need to collect many
channel outputs to recover the information bits. Since each collected channel
output yields a check node in the decoding Tanner graph, a channel with small
capacity leads to large decoding complexity. In this paper, we introduce a
novel fountain coding scheme with non-binary LDPC codes, whose decoding
complexity does not depend on the channel. Numerical experiments show that
the proposed codes outperform conventional fountain codes, especially for a
small number of information bits.
|
1007.0803
|
Soft Control on Collective Behavior of a Group of Autonomous Agents by a
Shill Agent
|
cs.MA
|
This paper asks a new question: how can we control the collective behavior of
self-organized multi-agent systems? We try to answer the question by proposing
a new notion called 'Soft Control', which keeps the local rule of the existing
agents in the system. We show the feasibility of soft control by a case study.
Consider the simple but typical distributed multi-agent model proposed by
Vicsek et al. for flocking of birds: each agent moves with the same speed but
with different headings which are updated using a local rule based on the
average of its own heading and the headings of its neighbors. Most studies of
this model are about the self-organized collective behavior, such as
synchronization of headings. We want to intervene in the collective behavior
(headings) of the group by soft control. One specific method is to add a
special agent, called a 'shill', which can be controlled by us but is treated
as an ordinary agent by the other agents. We construct a control law for the shill so
that it can synchronize the whole group to an objective heading. This control
law is proved to be effective analytically and numerically. Note that soft
control is different from the approach of distributed control. It is a natural
way to intervene in the distributed systems. It may bring out many interesting
issues and challenges on the control of complex systems.
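The Vicsek-style update described in the abstract (each agent adopts the average heading of all agents within its neighbourhood, including itself) can be sketched as follows; the speed, interaction radius and box size are illustrative defaults, and noise and the shill agent are omitted:

```python
import numpy as np

def vicsek_step(pos, theta, v=0.03, radius=1.0, box=5.0):
    """One noise-free Vicsek update: each agent adopts the circular mean
    heading of all agents within `radius` (itself included), then moves
    at speed v on a periodic box."""
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        neigh = np.hypot(d[:, 0], d[:, 1]) <= radius  # includes agent i
        # circular mean of neighbour headings
        new_theta[i] = np.arctan2(np.sin(theta[neigh]).mean(),
                                  np.cos(theta[neigh]).mean())
    pos = (pos + v * np.column_stack([np.cos(new_theta),
                                      np.sin(new_theta)])) % box
    return pos, new_theta
```

A shill would be an extra row of `pos`/`theta` whose heading is set by the controller rather than by this rule, while the other agents still average over it.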
|
1007.0824
|
Large Margin Filtering for Kernel-Based Sequential Labeling of Signals
|
cs.LG
|
We address in this paper the problem of multi-channel signal sequence
labeling. In particular, we consider the problem where the signals are
contaminated by noise or may present some dephasing with respect to their
labels. To that end, we propose to jointly learn an SVM sample classifier and a
temporal filtering of the channels. This leads to a large margin filtering
that is adapted to the specificity of each channel (noise and time-lag). We
derive algorithms to solve the optimization problem and we discuss different
filter regularizations for automated scaling or selection of channels. Our
approach is tested on a non-linear toy example and on a BCI dataset. Results
show that the classification performance on these problems can be improved by
learning a large margin filtering.
|
1007.0859
|
Local search for stable marriage problems
|
cs.AI
|
The stable marriage (SM) problem has a wide variety of practical
applications, ranging from matching resident doctors to hospitals, to matching
students to schools, or more generally to any two-sided market. In the
classical formulation, n men and n women express their preferences (via a
strict total order) over the members of the other sex. Solving a SM problem
means finding a stable marriage where stability is an envy-free notion: no man
and woman who are not married to each other would both prefer each other to
their partners or to being single. We consider both the classical stable
marriage problem and one of its useful variations (denoted SMTI) where the men
and women express their preferences in the form of an incomplete preference
list with ties over a subset of the members of the other sex. Matchings are
permitted only with people who appear in these lists, and we try to find a
stable matching that marries as many people as possible. Whilst the SM problem
is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both
problems via a local search approach, which exploits properties of the problems
to reduce the size of the neighborhood and to make local moves efficiently. We
evaluate our algorithm for SM problems empirically, measuring its runtime
behaviour and its ability to sample the lattice of all possible stable
marriages. We evaluate our algorithm for SMTI problems in terms of both its
runtime behaviour and its ability to find a maximum cardinality stable
marriage. For SM problems, the number of steps of our algorithm grows only as
O(n log n), and it samples the set of all stable marriages very well; it is
thus a fair and efficient approach to generating stable marriages.
Furthermore, our approach for SMTI problems is able to solve large problems,
quickly returning stable matchings of large and often optimal size despite the
NP-hardness of this problem.
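The local search described above works by repeatedly removing blocking pairs. A minimal stability check for a complete matching could look like this (the dictionary-based preference encoding is an assumption of this sketch, not the paper's representation):

```python
def blocking_pairs(match, men_pref, women_pref):
    """Return the blocking pairs of a matching: pairs (m, w) not matched
    to each other such that m prefers w to his partner and w prefers m to
    hers. match[m] = w; each preference dict maps a person to a list
    ordered from most to least preferred. Assumes everyone is matched."""
    rank_w = {w: {m: i for i, m in enumerate(p)}
              for w, p in women_pref.items()}
    wife = match
    husband = {w: m for m, w in match.items()}
    pairs = []
    for m, prefs in men_pref.items():
        for w in prefs:
            if wife[m] == w:
                break  # remaining women are less preferred than m's wife
            # does w also prefer m to her current husband?
            if rank_w[w].get(m, len(rank_w[w])) < rank_w[w][husband[w]]:
                pairs.append((m, w))
    return pairs
```

A matching is stable exactly when this returns an empty list; a local move would marry one blocking pair and re-match the displaced partners.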
|
1007.0875
|
On the Capacity Achieving Covariance Matrix for Frequency Selective MIMO
Channels Using the Asymptotic Approach
|
cs.IT math.IT
|
In this contribution, an algorithm for evaluating the capacity-achieving
input covariance matrices for frequency selective Rayleigh MIMO channels is
proposed. In contrast with the flat fading Rayleigh case, no closed-form
expressions for the eigenvectors of the optimum input covariance matrix are
available. Classically, both the eigenvectors and eigenvalues are computed
numerically and the corresponding optimization algorithms remain
computationally very demanding. In this paper, it is proposed to optimize
(w.r.t. the input covariance matrix) a large system approximation of the
average mutual information derived by Moustakas and Simon. The validity of this
asymptotic approximation is clarified thanks to Gaussian large random matrices
methods. It is shown that the approximation is a strictly concave function of
the input covariance matrix and that the average mutual information evaluated
at the argmax of the approximation is equal to the capacity of the channel up
to a O(1/t) term, where t is the number of transmit antennas. An algorithm
based on an iterative waterfilling scheme is proposed to maximize the average
mutual information approximation, and its convergence is studied. Numerical
simulation results show that, even for a moderate number of transmit and
receive antennas, the new approach provides the same results as direct
maximization approaches of the average mutual information.
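The paper's algorithm iterates a waterfilling step on the large-system approximation of the mutual information; the classical scalar water-filling subproblem such schemes build on can be sketched as follows (the function name and interface are illustrative):

```python
import numpy as np

def waterfill(gains, P):
    """Classical water-filling: allocate total power P across parallel
    channels with gains g_k so as to maximize sum_k log(1 + g_k * p_k).
    The optimum is p_k = max(mu - 1/g_k, 0) for a water level mu chosen
    so that the powers sum to P."""
    g = np.asarray(gains, dtype=float)
    inv = np.sort(1.0 / g)           # noise-to-gain floors, lowest first
    k = len(g)
    while k > 0:
        mu = (P + inv[:k].sum()) / k  # candidate water level using k channels
        if mu > inv[k - 1]:
            break                     # all k channels are above water
        k -= 1
    p = np.maximum(mu - 1.0 / g, 0.0)
    return p
```

Weak channels whose floor 1/g_k lies above the water level receive zero power; in the paper's setting this scalar step is applied iteratively to the eigenvalues of the approximated problem.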
|