| id | title | categories | abstract |
|---|---|---|---|
0805.3217
|
Statistical region-based active contours with exponential family
observations
|
cs.CV
|
In this paper, we focus on statistical region-based active contour models
where image features (e.g. intensity) are random variables whose distribution
belongs to some parametric family (e.g. exponential) rather than confining
ourselves to the special Gaussian case. Using shape derivation tools, our
effort focuses on constructing a general expression for the derivative of the
energy (with respect to a domain) and on deriving the corresponding evolution
speed. A general result is stated within the framework of the multi-parameter
exponential family. In particular, when using Maximum Likelihood estimators, the
evolution speed has a closed-form expression that depends simply on the
probability density function, while complicating additive terms appear when
using other estimators, e.g. moments method. Experimental results on both
synthesized and real images demonstrate the applicability of our approach.
|
0805.3218
|
Region-based active contour with noise and shape priors
|
cs.CV
|
In this paper, we propose to formally combine noise and shape priors in
region-based active contours. On the one hand, we use the general framework of
exponential family as a prior model for noise. On the other hand, translation
and scale invariant Legendre moments are considered to incorporate the shape
prior (e.g. fidelity to a reference shape). The combination of the two prior
terms in the active contour functional yields the final evolution equation
whose evolution speed is rigorously derived using shape derivative tools.
Experimental results on both synthetic images and real life cardiac echography
data clearly demonstrate the robustness to initialization and noise,
flexibility and large potential applicability of our segmentation algorithm.
|
0805.3267
|
Compressing Binary Decision Diagrams
|
cs.AI cs.DC
|
The paper introduces a new technique for compressing Binary Decision Diagrams
in those cases where random access is not required. Using this technique,
compression and decompression can be done in linear time in the size of the BDD
and compression will in many cases reduce the size of the BDD to 1-2 bits per
node. Empirical results for our compression technique are presented, including
comparisons with previously introduced techniques, showing that the new
technique dominates on all tested instances.
|
0805.3339
|
Sorting the Fact Table and Word-Aligned Compression of Bitmap Indexes
|
cs.DB
|
Bitmap indexes are frequently used to index multidimensional data. They rely
mostly on sequential input/output. Bitmaps can be compressed to reduce
input/output costs and minimize CPU usage. The most efficient compression
techniques are based on run-length encoding (RLE), such as Word-Aligned Hybrid
(WAH) compression. This type of compression accelerates logical operations
(AND, OR) over the bitmaps. However, run-length encoding is sensitive to the
order of the facts. Thus, we propose to sort the fact tables. We review
lexicographic, Gray-code, and block-wise sorting. We found that a lexicographic
sort improves compression--sometimes halving the index size--and makes indexes
several times faster. While sorting takes time, this is partially
offset by the fact that it is faster to index a sorted table. Column order is
significant: it is generally preferable to put the columns having more distinct
values at the beginning. A block-wise sort is much less efficient than a full
sort. Moreover, we found that Gray-code sorting is not better than
lexicographic sorting when using word-aligned compression.
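The word-aligned run-length idea can be sketched in a few lines (an illustration under assumed conventions such as 31-bit payload groups, not the exact WAH format): groups that are all zeros or all ones collapse into counted fill words, while mixed groups stay literal, which is why the row order of the fact table matters so much for compression.

```python
def wah_compress(bits):
    """Word-aligned RLE sketch: split the bitmap into 31-bit groups
    (one group per 32-bit word payload); uniform groups become counted
    fill words, mixed groups become literal words."""
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    if groups and len(groups[-1]) < 31:
        groups[-1] = groups[-1] + [0] * (31 - len(groups[-1]))  # pad last group
    words = []
    for g in groups:
        if all(b == 0 for b in g) or all(b == 1 for b in g):
            fill_bit = g[0]
            # Extend the previous fill word if it carries the same fill bit.
            if words and words[-1][0] == 'fill' and words[-1][1] == fill_bit:
                words[-1] = ('fill', fill_bit, words[-1][2] + 1)
            else:
                words.append(('fill', fill_bit, 1))
        else:
            words.append(('literal', g))
    return words
```

On a sorted table, long runs of identical bits produce few fill words; a lexicographic sort tends to lengthen exactly these runs.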
|
0805.3366
|
Computational Representation of Linguistic Structures using
Domain-Specific Languages
|
cs.CL
|
We describe a modular system for generating sentences from formal definitions
of underlying linguistic structures using domain-specific languages. The system
uses Java in general, Prolog for lexical entries and custom domain-specific
languages based on Functional Grammar and Functional Discourse Grammar
notation, implemented using the ANTLR parser generator. We show how linguistic
and technological parts can be brought together in a natural language
processing system and how domain-specific languages can be used as a tool for
consistent formal notation in linguistic description.
|
0805.3390
|
Design of Attitude Stability System for Prolate Dual-spin Satellite in
Its Inclined Elliptical Orbit
|
cs.RO
|
Most communication satellites are designed to operate in geostationary orbit,
and many of them use a prolate dual-spin configuration. As prolate dual-spin
vehicles, they have to be stabilized against their internal energy dissipation.
Several countries located in the southern hemisphere have shown interest in
using communication satellites, and because of their southern latitude the idea
emerged of placing such a satellite (in its prolate dual-spin configuration) in
an inclined elliptical orbit. This work focuses on designing an attitude
stability system for a prolate dual-spin satellite under the gravity
perturbations arising from the inclination of its elliptical orbit. DANDE
(De-spin Active Nutation Damping Electronics) provides the primary
stabilization method for the satellite in its orbit. A classical control
approach is used to iterate the DANDE parameters, and the control performance
is evaluated using time-response analysis.
|
0805.3406
|
Performance Analysis of Signal Detection using Quantized Received
Signals of Linear Vector Channel
|
cs.IT math.IT
|
Performance analysis of optimal signal detection using quantized received
signals of a linear vector channel, which is an extension of code-division
multiple-access (CDMA) or multiple-input multiple-output (MIMO) channels, in
the large system limit, is presented in this paper. Here the dimensions of the
channel input and output are both taken to infinity while their ratio remains
fixed. An optimal detector is one that uses the true channel model, the true
distribution of input signals, and perfect knowledge of the quantization.
Applying the replica method developed in statistical mechanics, we show that, in
the case of a noiseless channel, the optimal detector has perfect detection
ability under certain conditions, and that for a noisy channel its detection
ability decreases monotonically as the quantization step size increases.
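As a toy illustration of the setting (an assumed scalar BPSK example, not the paper's vector-channel or replica analysis), the detector only ever sees a quantized version of the received value:

```python
import math

def quantize(y, step):
    """Mid-riser uniform quantizer: map y to the center of its cell."""
    return step * (math.floor(y / step) + 0.5)

def ml_detect(q, symbols=(-1.0, 1.0)):
    """Detect on the quantized observation with a nearest-symbol rule,
    a simple stand-in for the true quantized-ML rule (which would
    integrate the noise density over the quantization cell)."""
    return min(symbols, key=lambda s: (q - s) ** 2)
```

Coarser steps merge more received values into the same cell, which is the intuition behind detection ability degrading as the step size grows.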
|
0805.3410
|
Exploring a type-theoretic approach to accessibility constraint
modelling
|
cs.CL
|
The type-theoretic modelling of DRT that [degroote06] proposed features
continuations for the management of the context in which a clause has to be
interpreted. This approach, while keeping the standard definitions of
quantifier scope, translates the rules of the accessibility constraints of
discourse referents inside the semantic recipes. In this paper, we deal with
additional rules for these accessibility constraints, in particular for
discourse referents introduced by proper nouns, which negation does not block,
and for the rhetorical relations that structure discourses. We show how this
continuation-based approach applies to those accessibility constraints and how
the parallel management of various principles can be handled.
|
0805.3484
|
A MacWilliams Identity for Convolutional Codes: The General Case
|
cs.IT math.IT math.OC
|
A MacWilliams Identity for convolutional codes will be established. It makes
use of the weight adjacency matrices of the code and its dual, based on state
space realizations (the controller canonical form) of the codes in question.
The MacWilliams Identity applies to various notions of duality appearing in the
literature on convolutional coding theory.
|
0805.3518
|
Logic programming with social features
|
cs.AI
|
In everyday life it happens that a person has to reason about what other
people think and how they behave, in order to achieve his goals. In other
words, an individual may be required to adapt his behaviour by reasoning about
the others' mental state. In this paper we focus on a knowledge representation
language derived from logic programming which both supports the representation
of the mental states of individuals in a community and provides each individual
with the capability of reasoning about others' mental states and acting accordingly. The
proposed semantics is shown to be translatable into stable model semantics of
logic programs with aggregates.
|
0805.3521
|
Towards applied theories based on computability logic
|
cs.LO cs.AI math.LO math.NT
|
Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a
recently launched program for redeveloping logic as a formal theory of
computability, as opposed to the formal theory of truth that logic has more
traditionally been. Formulas in it represent computational problems, "truth"
means existence of an algorithmic solution, and proofs encode such solutions.
Within the line of research devoted to finding axiomatizations for ever more
expressive fragments of CL, the present paper introduces a new deductive system
CL12 and proves its soundness and completeness with respect to the semantics of
CL. Conservatively extending classical predicate calculus and offering
considerable additional expressive and deductive power, CL12 presents a
reasonable, computationally meaningful, constructive alternative to classical
logic as a basis for applied theories. To obtain a model example of such
theories, this paper rebuilds the traditional, classical-logic-based Peano
arithmetic into a computability-logic-based counterpart. Among the purposes of
the present contribution is to provide a starting point for what, as the author
wishes to hope, might become a new line of research with a potential of
interesting findings -- an exploration of the presumably quite unusual
metatheory of CL-based arithmetic and other CL-based applied systems.
|
0805.3528
|
Coding Theory and Projective Spaces
|
cs.IT math.IT
|
The projective space of order $n$ over a finite field $\mathbb{F}_q$ is the set
of all subspaces of the vector space $\mathbb{F}_q^{n}$. In this work, we consider
error-correcting codes in the projective space, focusing mainly on constant
dimension codes. We start with the different representations of subspaces in
the projective space. These representations involve matrices in reduced row
echelon form, associated binary vectors, and Ferrers diagrams. Based on these
representations, we provide a new formula for the computation of the distance
between any two subspaces in the projective space. We examine lifted maximum
rank distance (MRD) codes, which are nearly optimal constant dimension codes.
We prove that a lifted MRD code can be represented in such a way that it forms
a block design known as a transversal design. The incidence matrix of the
transversal design derived from a lifted MRD code can be viewed as a
parity-check matrix of a linear code in the Hamming space. We find the
properties of these codes which can be viewed also as LDPC codes. We present
new bounds and constructions for constant dimension codes. First, we present a
multilevel construction for constant dimension codes, which can be viewed as a
generalization of the lifted MRD codes construction. This construction is based
on a new type of rank-metric codes, called Ferrers diagram rank-metric codes.
Then we derive upper bounds on the size of constant dimension codes which
contain the lifted MRD code, and provide a construction for two families of
codes that attain these upper bounds. We generalize the well-known concept of
a punctured code for a code in the projective space to obtain large codes which
are not constant dimension. We present efficient enumerative encoding and
decoding techniques for the Grassmannian. Finally we describe a search method
for constant dimension lexicodes.
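The distance computation mentioned above can be illustrated with the standard subspace-distance identity $d_S(U,V)=\dim(U+V)-\dim(U\cap V)=2\dim(U+V)-\dim U-\dim V$. A small GF(2) sketch (illustrative, not the work's new formula based on reduced-row-echelon representations):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix given as lists of 0/1 rows,
    via elimination on rows packed into Python integers."""
    rows = [int(''.join(map(str, r)), 2) for r in rows if any(r)]
    rank = 0
    while rows:
        pivot = max(rows)              # row with the highest leading bit
        rows.remove(pivot)
        rank += 1
        high = pivot.bit_length()
        # Clear that leading bit from every row sharing it.
        rows = [r ^ pivot if r.bit_length() == high else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def subspace_distance(U, V):
    """d_S(U, V) = 2*dim(U+V) - dim U - dim V, with U and V given as
    generator matrices over GF(2); dim(U+V) is the rank of the stack."""
    du, dv = gf2_rank(U), gf2_rank(V)
    return 2 * gf2_rank(U + V) - du - dv
```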
|
0805.3569
|
Joint Cooperation and Multi-Hopping Increase the Capacity of Wireless
Networks
|
cs.NI cs.IT math.IT
|
The problem of communication among nodes in an \emph{extended network} is
considered, where radio power decay and interference are limiting factors. It
has been shown previously that, with simple multi-hopping, the achievable total
communication rate in such a network is at most $\Theta(\sqrt{N})$. In this
work, we study the benefit of node cooperation in conjunction with
multi-hopping on the network capacity. We propose a multi-phase communication
scheme, combining distributed MIMO transmission with multi-hop forwarding among
clusters of nodes. We derive the network throughput of this communication
scheme and determine the optimal cluster size. This provides a constructive
lower bound on the network capacity. We first show that in \textit{regular
networks} a rate of $\omega(N^{2/3})$ can be achieved with transmission power
scaling of $\Theta(N^{\alpha/6-1/3})$, where $\alpha>2$ is the signal
path-loss exponent. We further extend this result to \textit{random networks},
where we show a rate of $\omega (N^{2/3}(\log{N})^{(2-\alpha)/6})$ can be
achieved with transmission power scaling of
$\Theta(N^{\alpha/6-1/3}(\log{N})^{-(\alpha-2)^2/6})$ in a random network with
unit node density. In particular, as $\alpha$ approaches 2, only constant
transmission power is required. Finally, we study a random network with density
$\lambda=\Omega(\log{N})$ and show that a rate of $\omega((\lambda N)^{2/3})$
is achieved and the required power scales as
$\Theta(N^{\alpha/6-1/3}/\lambda^{\alpha/3-2/3})$.
|
0805.3605
|
Success Exponent of Wiretapper: A Tradeoff between Secrecy and
Reliability
|
cs.IT cs.CR math.IT math.PR
|
Equivocation rate has been widely used as an information-theoretic measure of
security since Shannon [10]. It simplifies problems by removing the effect of
atypical behavior from the system. In [9], however, Merhav and Arikan
considered the alternative of using the guessing exponent to analyze Shannon's
cipher system. Because the guessing exponent captures atypical behavior, the
strongest expressible notion of secrecy requires the more stringent condition
that the size of the key, rather than its entropy rate, be equal to the size
of the message. The relationship between equivocation and guessing exponent is
also investigated in [6], [7], but it is unclear which is the better measure,
and whether there is a unifying measure of security.
Instead of using the equivocation rate or the guessing exponent, we study the
wiretap channel of [2] using the success exponent, defined as the exponent of
the probability that a wiretapper successfully learns the secret after making
an exponential number of guesses to a sequential verifier that gives a yes/no
answer to each guess. By extending the coding scheme in [2], [5] and the
converse proof in [4] with the new Overlap Lemma 5.2, we obtain a tradeoff
between secrecy and reliability, expressed in terms of lower bounds on the
error and success exponents of authorized and, respectively, unauthorized
decoding of the transmitted messages.
From this, we obtain an inner bound to the region of strongly achievable
public, private and guessing rate triples for which the exponents are strictly
positive. The closure of this region is equivalent to the closure of the region
in Theorem 1 of [2] when we treat equivocation rate as the guessing rate.
However, it is unclear if the inner bound is tight.
|
0805.3638
|
Sequential detection of Markov targets with trajectory estimation
|
cs.IT math.IT
|
The problem of detecting, and possibly estimating, a signal generated by a
dynamic system when a variable number of noisy measurements can be taken is
considered here. Assuming a Markov evolution of the system (in particular, the
pair signal-observation forms a hidden Markov model), a sequential procedure is
proposed, wherein the detection part is a sequential probability ratio test
(SPRT) and the estimation part relies upon a maximum-a-posteriori (MAP)
criterion, gated by the detection stage (the parameter to be estimated is the
trajectory of the state evolution of the system itself). A thorough analysis of
the asymptotic behaviour of the test in this new scenario is given, and
sufficient conditions for its asymptotic optimality are stated, i.e. for almost
sure minimization of the stopping time and for (first-order) minimization of
any moment of its distribution. An application to radar surveillance problems
is also examined.
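The detection stage is a sequential probability ratio test; a generic Wald SPRT can be sketched as below (an illustration only: in the paper's hidden-Markov setting the per-sample statistic would come from a forward likelihood recursion rather than an i.i.d. log-likelihood ratio).

```python
import math

def sprt(samples, llr, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate the log-likelihood ratio and stop at the
    first threshold crossing. llr(x) = log p1(x)/p0(x) per observation;
    alpha/beta are the target false-alarm and miss probabilities."""
    upper = math.log((1 - beta) / alpha)    # cross -> accept H1
    lower = math.log(beta / (1 - alpha))    # cross -> accept H0
    s = 0.0
    for n, x in enumerate(samples, 1):
        s += llr(x)
        if s >= upper:
            return 'H1', n
        if s <= lower:
            return 'H0', n
    return 'undecided', len(samples)
```

The appeal, echoed in the abstract's optimality claims, is that the stopping time adapts to the data: strong evidence stops the test early.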
|
0805.3643
|
Overlay Cognitive Radio in Wireless Mesh Networks
|
cs.IT cs.NI math.IT
|
In this paper we apply the concept of overlay cognitive radio to the
communication between nodes in a wireless mesh network. Based on the overlay
cognitive radio model, it is possible to have two concurrent transmissions in a
given interference region, where usually only one communication takes place at
a given time. We analyze the cases of wireless mesh networks with regular and
random topologies. Numerical results show that considerable network capacity
gains can be achieved.
|
0805.3747
|
Constructing Folksonomies from User-specified Relations on Flickr
|
cs.AI
|
Many social Web sites allow users to publish content and annotate it with
descriptive metadata. In addition to flat tags, some social Web sites have
recently begun to allow users to organize their content and metadata
hierarchically. The social photo-sharing site Flickr, for example, allows users
to group related photos in sets, and related sets in collections. The social
bookmarking site Del.icio.us similarly lets users group related tags into
bundles. Although the sites themselves don't impose any constraints on how
these hierarchies are used, individuals generally use them to capture
relationships between concepts, most commonly the broader/narrower relations.
Collective annotation of content with hierarchical relations may lead to an
emergent classification system, called a folksonomy. While some researchers
have explored using tags as evidence for learning folksonomies, we believe that
hierarchical relations described above offer a high-quality source of evidence
for this task.
We propose a simple approach to aggregate shallow hierarchies created by many
distinct Flickr users into a common folksonomy. Our approach uses statistics to
determine if a particular relation should be retained or discarded. The
relations are then woven together into larger hierarchies. Although we have not
carried out a detailed quantitative evaluation of the approach, it looks very
promising since it generates very reasonable, non-trivial hierarchies.
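The aggregation step can be sketched as a frequency filter over per-user relations (a minimal stand-in: the simple count threshold below substitutes for the paper's actual statistics):

```python
from collections import Counter

def build_folksonomy(user_relations, min_users=2):
    """Aggregate per-user (broader, narrower) pairs into one hierarchy,
    keeping an edge only if enough distinct users asserted it; the
    count threshold stands in for the paper's retention statistics."""
    support = Counter()
    for relations in user_relations:        # one list of pairs per user
        for edge in set(relations):         # count each user at most once
            support[edge] += 1
    return {edge for edge, count in support.items() if count >= min_users}
```

The retained edges can then be woven into larger hierarchies; idiosyncratic or reversed relations asserted by a single user fall below the threshold.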
|
0805.3799
|
The Structure of Narrative: the Case of Film Scripts
|
cs.AI
|
We analyze the style and structure of story narrative using the case of film
scripts. The practical importance of this is noted, especially the need to have
support tools for television movie writing. We use the Casablanca film script,
and scripts from six episodes of CSI (Crime Scene Investigation). For analysis
of style and structure, we quantify various central perspectives discussed in
McKee's book, "Story: Substance, Structure, Style, and the Principles of
Screenwriting". Film scripts offer a useful point of departure for exploration
of the analysis of more general narratives. Our methodology, using
Correspondence Analysis, and hierarchical clustering, is innovative in a range
of areas that we discuss. In particular this work is groundbreaking in taking
the qualitative analysis of McKee and grounding this analysis in a quantitative
and algorithmic framework.
|
0805.3800
|
An Evolutionary-Based Approach to Learning Multiple Decision Models from
Underrepresented Data
|
cs.AI cs.NE
|
The use of multiple Decision Models (DMs) makes it possible to enhance decision
accuracy and at the same time allows users to evaluate the confidence in
decision making. In this paper we explore the ability of multiple DMs to learn
from a small amount of verified data. This becomes important when data samples
are difficult to collect and verify. We propose an evolutionary-based approach
to solving this problem. The proposed technique is examined on a few clinical
problems represented by a small amount of data.
|
0805.3802
|
Feature Selection for Bayesian Evaluation of Trauma Death Risk
|
cs.AI
|
In the last year more than 70,000 people were brought to UK hospitals with
serious injuries. Each time, a clinician has to urgently take a patient through
a screening procedure to make a reliable decision on the trauma treatment.
Typically, such a procedure comprises around 20 tests; however, the condition
of a trauma patient remains very difficult to test properly. What happens if
these tests are interpreted ambiguously and the information about the severity
of the injury is misleading? A mistake in a decision can be fatal: a treatment
that is too mild can put a patient at risk of dying from posttraumatic shock,
while overtreatment can also cause death. How can we reduce the risk of death
caused by unreliable decisions? It has been
shown that probabilistic reasoning, based on the Bayesian methodology of
averaging over decision models, allows clinicians to evaluate the uncertainty
in decision making. Based on this methodology, in this paper we aim at
selecting the most important screening tests while keeping high performance. We
assume that the probabilistic reasoning within the Bayesian methodology allows
us to discover new relationships between the screening tests and uncertainty in
decisions. In practice, selection of the most informative tests can also reduce
the cost of a screening procedure in trauma care centers. In our experiments we
use the UK Trauma data to compare the efficiency of the proposed technique in
terms of the performance. We also compare the uncertainty in decisions in terms
of entropy.
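The uncertainty comparison in the last sentence uses the standard Shannon entropy of the decision distribution; a one-line sketch:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a decision distribution: higher
    values mean more uncertainty spread across the possible decisions."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A confident model concentrates its decision distribution (entropy near 0), while an uncertain one spreads it (entropy near log2 of the number of options).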
|
0805.3824
|
On Metrics for Error Correction in Network Coding
|
cs.IT math.IT
|
The problem of error correction in both coherent and noncoherent network
coding is considered under an adversarial model. For coherent network coding,
where knowledge of the network topology and network code is assumed at the
source and destination nodes, the error correction capability of an (outer)
code is succinctly described by the rank metric; as a consequence, it is shown
that universal network error correcting codes achieving the Singleton bound can
be easily constructed and efficiently decoded. For noncoherent network coding,
where knowledge of the network topology and network code is not assumed, the
error correction capability of a (subspace) code is given exactly by a new
metric, called the injection metric, which is closely related to, but different
from, the subspace metric of K\"otter and Kschischang. In particular, in the
case of a non-constant-dimension code, the decoder associated with the
injection metric is shown to correct more errors than a
minimum-subspace-distance decoder. All of these results are based on a general
approach to adversarial error correction, which could be useful for other
adversarial channels beyond network coding.
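For reference, the two metrics can be written side by side (standard definitions from the subspace-coding literature, with $U$, $V$ subspaces of the ambient space):

```latex
d_S(U,V) = \dim U + \dim V - 2\dim(U \cap V), \qquad
d_I(U,V) = \max(\dim U, \dim V) - \dim(U \cap V).
```

When $\dim U = \dim V$ the two agree up to a factor of two ($d_S = 2\,d_I$), so they can only rank error patterns differently for non-constant-dimension codes, which is exactly the case the abstract highlights.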
|
0805.3935
|
Fusion for Evaluation of Image Classification in Uncertain Environments
|
cs.AI
|
We present in this article a new evaluation method for classification and
segmentation of textured images in uncertain environments. In uncertain
environments, real classes and boundaries are known with only a partial
certainty given by the experts. In most published papers, only classification
or only segmentation is considered and evaluated. Here, we
propose to take into account both the classification and segmentation results
according to the certainty given by the experts. We present the results of this
method on a fusion of classifiers of sonar images for seabed characterization.
|
0805.3939
|
Decision Support with Belief Functions Theory for Seabed
Characterization
|
cs.AI cs.IT math.IT
|
Seabed characterization from sonar images is a very hard task, even for a
human expert, because of the nature of the data produced and the unknown
environment. In this work we propose an original approach to combining binary
classifiers arising from different kinds of strategies, such as one-versus-one
or one-versus-rest, usually used in SVM classification. The decision functions
coming from these binary classifiers are interpreted in terms of belief
functions, in order to combine them with one of the numerous operators of
belief functions theory. Moreover, this interpretation of the decision
functions allows us to propose a decision process that also takes into account
observations rejected as too far removed from the learning data, as well as
imprecise decisions given as unions of classes. This new approach is
illustrated and evaluated with an SVM for classifying the different kinds of
sediment in sonar images.
|
0805.3964
|
DimReduction - Interactive Graphic Environment for Dimensionality
Reduction
|
cs.CV
|
Feature selection is a pattern recognition approach to choose important
variables according to some criteria to distinguish or explain certain
phenomena. There are many genomic and proteomic applications which rely on
feature selection to answer questions such as: selecting signature genes which
are informative about some biological state, e.g. normal tissues and several
types of cancer; or defining a network of prediction or inference among
elements such as genes, proteins, external stimuli and other elements of
interest. In these applications, a recurrent problem is the lack of samples to
perform an adequate estimate of the joint probabilities between element states.
A myriad of feature selection algorithms and criterion functions have been
proposed, although it is difficult to point to the best solution in general.
The intent of this work is to provide an open-source, multiplatform graphical
environment to apply, test and compare many feature selection approaches
suitable for use in bioinformatics problems.
|
0805.3972
|
Intuitive visualization of the intelligence for the run-down of
terrorist wire-pullers
|
cs.AI
|
The investigation of a terrorist attack is a time-critical task. The
investigators have a limited time window to diagnose the organizational
background of the terrorists, to run down and arrest the wire-pullers, and to
take action to prevent or eradicate the attack. An intuitive interface for
visualizing the intelligence data set stimulates the investigators' experience
and knowledge, and aids their decision-making toward an immediately effective
action. This paper presents a computational method to analyze the intelligence
data set on the collective actions of the perpetrators of the attack, and to
visualize it as a social network diagram which predicts the positions where
the wire-pullers conceal themselves.
|
0805.4023
|
Robust Joint Source-Channel Coding for Delay-Limited Applications
|
cs.IT math.IT
|
In this paper, we consider the problem of robust joint source-channel coding
over an additive white Gaussian noise channel. We propose a new scheme which
achieves the optimal slope of the signal-to-distortion ratio (SDR) curve
(unlike previously known coding schemes). We also propose a family of robust
codes which together maintain a bounded gap with the optimum SDR curve (in
terms of dB). To show the importance of this result, we derive some theoretical
bounds on the asymptotic performance of delay-limited hybrid digital-analog
(HDA) coding schemes. We show that, unlike the delay-unlimited case, for any
family of delay-limited HDA codes, the asymptotic performance loss is unbounded
(in terms of dB).
|
0805.4053
|
Source Coding for a Simple Network with Receiver Side Information
|
cs.IT math.IT
|
We consider the problem of source coding with receiver side information for
the simple network proposed by R. Gray and A. Wyner in 1974. In this network, a
transmitter must reliably transport the output of two correlated information
sources to two receivers using three noiseless channels: a public channel which
connects the transmitter to both receivers, and two private channels which
connect the transmitter directly to each receiver. We extend Gray and Wyner's
original problem by permitting side information to be present at each receiver.
We derive inner and outer bounds for the achievable rate region and, for three
special cases, we show that the outer bound is tight.
|
0805.4059
|
Menger's Paths with Minimum Mergings
|
cs.IT math.CO math.IT
|
For an acyclic directed graph with multiple sources and multiple sinks, we
prove that one can choose the Menger's paths between the sources and the sinks
such that the number of mergings between these paths is upper bounded by a
constant depending only on the min-cuts between the sources and the sinks,
regardless of the size and topology of the graph. We also give bounds on the
minimum number of mergings between these paths, and discuss how it depends on
the min-cuts.
|
0805.4081
|
Dynamics of thematic information flows
|
cs.IT math.IT
|
We study the dynamics of topical flows of new information within the framework
of a logistic model. A condition of topic balance is presented, under which the
number of publications on all topics is proportional to the information space
and time. A general time dependence of the intensity of Internet publications
devoted to particular topics is observed; unlike an exponential model, it has a
saturation region. Some limitations of the logistic model are identified,
opening the way for further research.
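The saturation behavior that distinguishes the logistic model from the exponential one can be sketched with a discrete logistic recurrence (an illustration with assumed parameters, not the paper's fitted model):

```python
def logistic_counts(K, r, n0, steps):
    """Discrete logistic growth of a cumulative publication count:
    n grows almost exponentially at first, then saturates toward the
    carrying capacity K as the topic's information space fills up."""
    n = float(n0)
    out = [n]
    for _ in range(steps):
        n += r * n * (1 - n / K)   # growth slows as n approaches K
        out.append(n)
    return out
```

A pure exponential model (`n += r * n`) has no such ceiling, which is the qualitative difference the abstract reports for real publication intensities.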
|
0805.4085
|
Peculiarities of the Correlation between Local and Global News
Popularity of Electronic Mass Media
|
cs.IT math.IT
|
One approach to the navigation problem in current information flows is to rank
documents according to their popularity level. We define local and global news
popularity based on the number of similar-in-content documents published within
local and global time intervals, respectively. The mutual behavior of the local
and global popularity levels of documents is studied, and an algorithm is
suggested for detecting documents that gained great popularity before new
topics appeared.
|
0805.4101
|
Goal-oriented Dialog as a Collaborative Subordinated Activity involving
Collective Acceptance
|
cs.AI cs.CL
|
Modeling dialog as a collaborative activity consists notably in specifying
the content of the Conversational Common Ground and the kind of social mental
state involved. In previous work (Saget, 2006), we claimed that Collective
Acceptance is the proper social attitude for modeling Conversational Common
Ground in the particular case of goal-oriented dialog. In this paper, a
formalization of Collective Acceptance is presented; elements for integrating
this attitude into a rational model of dialog are provided; and finally, a
model of referential acts as part of a collaborative activity is presented.
The particular case of reference has been chosen to exemplify our claims.
|
0805.4107
|
Spiral Walk on Triangular Meshes: Adaptive Replication in Data P2P
Networks
|
cs.NI cs.DB
|
We introduce a decentralized replication strategy for peer-to-peer file
exchange based on exhaustive exploration of the neighborhood of any node in the
network. The replication scheme lets the replicas evenly populate the network
mesh while regulating the total number of replicas. This is achieved by
self-adaptation to nodes entering or leaving. Exhaustive
exploration is achieved by a spiral walk algorithm that generates a number of
messages linearly proportional to the number of visited nodes. It requires a
dedicated topology (a triangular mesh on a closed surface). We introduce
protocols for node connection and departure that maintain the triangular mesh
at low computational and bandwidth cost. Search efficiency is increased using a
mechanism based on dynamically allocated super peers. We conclude with a
discussion on experimental validation results.
|
0805.4112
|
On the entropy and log-concavity of compound Poisson measures
|
cs.IT math.IT math.PR
|
Motivated, in part, by the desire to develop an information-theoretic
foundation for compound Poisson approximation limit theorems (analogous to the
corresponding developments for the central limit theorem and for simple Poisson
approximation), this work examines sufficient conditions under which the
compound Poisson distribution has maximal entropy within a natural class of
probability measures on the nonnegative integers. We show that the natural
analog of the Poisson maximum entropy property remains valid if the measures
under consideration are log-concave, but that it fails in general. A parallel
maximum entropy result is established for the family of compound binomial
measures. The proofs are largely based on ideas related to the semigroup
approach introduced in recent work by Johnson for the Poisson family.
Sufficient conditions are given for compound distributions to be log-concave,
and specific examples are presented illustrating all the above results.
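The compound Poisson law in question can be computed with the Panjer recursion, and log-concavity checked numerically. A minimal sketch (the parameter values and claim-size distributions below are illustrative, not from the paper):

```python
import math

def compound_poisson_pmf(lam, f, n_max):
    """P(S = n) for S = X_1 + ... + X_N, N ~ Poisson(lam), X_i i.i.d. with
    pmf f on {1, 2, ...} (f[j] = P(X = j), f[0] = 0), via Panjer's recursion:
    P(S = n) = (lam / n) * sum_j j * f[j] * P(S = n - j)."""
    p = [math.exp(-lam)] + [0.0] * n_max
    for n in range(1, n_max + 1):
        p[n] = (lam / n) * sum(j * f[j] * p[n - j]
                               for j in range(1, min(n, len(f) - 1) + 1))
    return p

def is_log_concave(p, tol=1e-12):
    """Check p[n]^2 >= p[n-1] * p[n+1] along the whole sequence."""
    return all(p[n] ** 2 + tol >= p[n - 1] * p[n + 1]
               for n in range(1, len(p) - 1))

# Degenerate claim size X = 1 recovers the plain Poisson law, which is log-concave
print(is_log_concave(compound_poisson_pmf(2.0, [0.0, 1.0], 40)))             # True

# Claim sizes concentrated on {1, 3}: the compound law fails to be log-concave
print(is_log_concave(compound_poisson_pmf(2.0, [0.0, 0.5, 0.0, 0.5], 40)))   # False
```

The second example illustrates the abstract's point that the maximum entropy property can fail when log-concavity is lost.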
|
0805.4134
|
Design and Implementation Aspects of a novel Java P2P Simulator with GUI
|
cs.NI cs.DB cs.DC
|
Peer-to-peer networks consist of thousands or millions of nodes that may
join and leave arbitrarily. The evaluation of new protocols in real
environments is often practically impossible, especially at the design and
testing stages. The purpose of this paper is to describe the implementation
aspects of a new Java-based P2P simulator that has been developed to support
scalability in the evaluation of such dynamic P2P environments. Extending the
functionality of previous solutions, we provide a friendly graphical user
interface through which the high-level theoretic researcher/designer of a
P2P system can easily construct an overlay with the desired number of nodes
and evaluate its operations using a number of key distributions. Furthermore,
the simulator has a built-in ability to produce statistics about the
distributed structure. Emphasis was given to the parametric configuration of
the simulator. As a result, the developed tool can be utilized in the
simulation and evaluation of a variety of different protocols, with only a few
changes in the Java code.
|
0805.4247
|
Neural network learning of optimal Kalman prediction and control
|
cs.NE
|
Although there are many neural network (NN) algorithms for prediction and for
control, and although methods for optimal estimation (including filtering and
prediction) and for optimal control in linear systems were provided by Kalman
in 1960 (with nonlinear extensions since then), there has been, to my
knowledge, no NN algorithm that learns either Kalman prediction or Kalman
control (apart from the special case of stationary control). Here we show how
optimal Kalman prediction and control (KPC), as well as system identification,
can be learned and executed by a recurrent neural network composed of
linear-response nodes, using as input only a stream of noisy measurement data.
The requirements of KPC appear to impose significant constraints on the
allowed NN circuitry and signal flows. The NN architecture implied by these
constraints bears certain resemblances to the local-circuit architecture of
mammalian cerebral cortex. We discuss these resemblances, as well as caveats
that limit our current ability to draw inferences for biological function. It
has been suggested that the local cortical circuit (LCC) architecture may
perform core functions (as yet unknown) that underlie sensory, motor, and other
cortical processing. It is reasonable to conjecture that such functions may
include prediction, the estimation or inference of missing or noisy sensory
data, and the goal-driven generation of control signals. The resemblances found
between the KPC NN architecture and that of the LCC are consistent with this
conjecture.
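For reference, the classical scalar Kalman one-step predictor that such a network would learn to emulate can be sketched as follows (the system parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar system: x_{k+1} = a*x_k + w_k,  y_k = x_k + v_k
a, q, r = 0.9, 0.1, 0.5            # dynamics; process / measurement noise variances

xs, ys, x = [], [], 0.0
for _ in range(2000):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    xs.append(x)
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# One-step Kalman predictor vs. a naive predictor that trusts the raw measurement
xhat, p = 0.0, 1.0                 # predictive mean and variance
kal, naive, prev_y = [], [], 0.0
for x_true, y in zip(xs, ys):
    kal.append((xhat - x_true) ** 2)          # error of xhat_{k|k-1}
    naive.append((a * prev_y - x_true) ** 2)  # propagate last noisy measurement
    k_gain = p / (p + r)                      # Kalman gain
    xhat = a * (xhat + k_gain * (y - xhat))   # measurement update, then predict
    p = a * a * (1.0 - k_gain) * p + q        # Riccati recursion
    prev_y = y

print(f"Kalman MSE: {np.mean(kal):.3f}, naive MSE: {np.mean(naive):.3f}")
```

The Kalman predictor's mean squared error approaches the steady-state Riccati solution and is substantially below that of the naive measurement-propagation predictor.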
|
0805.4248
|
On the Capacity of Wireless Multicast Networks
|
cs.IT math.IT
|
The problem of maximizing the average rate in a multicast network subject to
a coverage constraint (minimum quality of service) is studied. Assuming the
channel state information is available only at the receiver side and single
antenna nodes, the highest expected rate achievable by a random user in the
network, called expected typical rate, is derived in two scenarios: hard
coverage constraint and soft coverage constraint. In the first case, the
coverage is expressed in terms of the outage probability, while in the second
case, the expected rate must satisfy a certain minimum requirement. It is shown
that the optimum solution in both cases (achieving the highest expected typical
rate for given coverage requirements) is achieved by an infinite layer
superposition code for which the optimum power allocation among the different
layers is derived. For the MISO case, a suboptimal coding scheme is proposed,
which is shown to be asymptotically optimal, when the number of transmit
antennas grows at least logarithmically with the number of users in the
network.
|
0805.4249
|
Coalition Games with Cooperative Transmission: A Cure for the Curse of
Boundary Nodes in Selfish Packet-Forwarding Wireless Networks
|
cs.IT cs.GT math.IT
|
In wireless packet-forwarding networks with selfish nodes, application of a
repeated game can induce the nodes to forward each others' packets, so that the
network performance can be improved. However, the nodes on the boundary of such
networks cannot benefit from this strategy, as the other nodes do not depend on
them. This problem is sometimes known as {\em the curse of the boundary nodes}.
To overcome this problem, an approach based on coalition games is proposed, in
which the boundary nodes can use cooperative transmission to help the backbone
nodes in the middle of the network. In return, the backbone nodes are willing
to forward the boundary nodes' packets. Here, the concept of the core is used
to study the stability of the coalitions in such games. Then three types of
fairness are investigated, namely, min-max fairness using the nucleolus, average
fairness using the Shapley function, and a newly proposed market fairness.
Based on the specific problem addressed in this paper, market fairness is a new
fairness concept involving fairness between multiple backbone nodes and
multiple boundary nodes. Finally, a protocol is designed using both repeated
games and coalition games. Simulation results show how boundary nodes and
backbone nodes form coalitions according to different fairness criteria. The
proposed protocol can improve the network connectivity by about 50%, compared
with pure repeated game schemes.
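The Shapley-value notion of average fairness used above can be illustrated on a toy characteristic-function game (the game below is hypothetical, not the paper's network model):

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal contribution over all join orderings."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Toy game: one backbone node "B" and two boundary nodes that only create
# value (e.g., a cooperative-relay gain) together with B.
def v(S):
    if "B" in S:
        return 1.0 + 0.5 * len(S & {"b1", "b2"})
    return 0.0

phi = shapley(["B", "b1", "b2"], v)
print(phi)   # B gets 1.5; b1 and b2 get 0.25 each (efficiency: they sum to v(N) = 2)
```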
|
0805.4290
|
From Data Topology to a Modular Classifier
|
cs.LG
|
This article describes an approach to designing a distributed and modular
neural classifier. This approach introduces a new hierarchical clustering that
enables one to determine reliable regions in the representation space by
exploiting supervised information. A multilayer perceptron is then associated
with each of these detected clusters and charged with recognizing elements of
the associated cluster while rejecting all others. The obtained global
classifier is comprised of a set of cooperating neural networks and completed
by a K-nearest neighbor classifier charged with treating elements rejected by
all the neural networks. Experimental results for the handwritten digit
recognition problem and comparison with neural and statistical nonmodular
classifiers are given.
|
0805.4338
|
Quantization of Prior Probabilities for Hypothesis Testing
|
cs.IT math.IT math.ST stat.TH
|
Bayesian hypothesis testing is investigated when the prior probabilities of
the hypotheses, taken as a random vector, are quantized. Nearest neighbor and
centroid conditions are derived using mean Bayes risk error as a distortion
measure for quantization. A high-resolution approximation to the
distortion-rate function is also obtained. Human decision making in segregated
populations is studied assuming Bayesian hypothesis testing with quantized
priors.
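A Lloyd-type quantizer under the mean Bayes risk error distortion can be sketched for a toy binary test. The observation models and the uniform prior on the prior are illustrative; note that for a finite observation alphabet the risk is affine in the prior p, so the centroid step reduces to the cell mean:

```python
import numpy as np

# Hypothetical observation models under the two hypotheses
P0 = np.array([0.6, 0.3, 0.1])
P1 = np.array([0.1, 0.3, 0.6])

def risk(p, q):
    """Bayes risk at true prior p = P(H0) of the likelihood-ratio test
    designed for the (possibly quantized) prior q."""
    decide_h0 = q * P0 >= (1 - q) * P1
    return p * P0[~decide_h0].sum() + (1 - p) * P1[decide_h0].sum()

grid = np.arange(1, 1000) / 1000.0      # p ~ Uniform(0, 1), discretized
reps = np.array([0.1, 0.9])             # two quantization points

for _ in range(100):                    # Lloyd iteration
    # nearest-neighbor condition: assign each p to the point of smallest risk
    cost = np.array([[risk(p, q) for q in reps] for p in grid])
    assign = cost.argmin(axis=1)
    # centroid condition: here the cell mean, since risk is affine in p
    new = np.array([grid[assign == i].mean() for i in range(len(reps))])
    if np.allclose(new, reps):
        break
    reps = new

print(reps)   # symmetric models give points near 0.25 and 0.75
```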
|
0805.4369
|
A semantic space for modeling children's semantic memory
|
cs.CL
|
The goal of this paper is to present a model of children's semantic memory,
which is based on a corpus reproducing the kinds of texts children are exposed
to. After reviewing the literature on the development of semantic memory,
a preliminary French corpus of 3.2 million words is described. Similarities in
the resulting semantic space are compared to human data on four tests:
association norms, vocabulary test, semantic judgments and memory tasks. A
second corpus is described, which is composed of subcorpora corresponding to
various ages. This stratified corpus is intended as a basis for developmental
studies. Finally, two applications of these models of semantic memory are
presented: the first one aims at tracing the development of semantic
similarities paragraph by paragraph; the second one describes an implementation
of a model of text comprehension derived from the Construction-integration
model (Kintsch, 1988, 1998) and based on such models of semantic memory.
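The kind of semantic space the paper builds can be sketched in miniature with an LSA-style truncated SVD of a term-document matrix (the toy English corpus below is illustrative, not the paper's French child-oriented corpus):

```python
import numpy as np

docs = ["cat drinks milk", "dog drinks water", "cat dog play",
        "children read story book", "story book shelf"]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Semantic space: keep the top-k singular directions of the term-document matrix
u, s, vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = u[:, :k] * s[:k]            # word coordinates in the reduced space

def sim(w1, w2):
    """Cosine similarity of two words in the semantic space."""
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(sim("cat", "dog"))    # high: shared document contexts
print(sim("cat", "book"))   # near zero: disjoint contexts
```

Similarities computed this way are what the paper compares against association norms, vocabulary tests, semantic judgments, and memory tasks.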
|
0805.4374
|
Capacity Bounds for Broadcast Channels with Confidential Messages
|
cs.IT math.IT
|
In this paper, we study capacity bounds for discrete memoryless broadcast
channels with confidential messages. Two private messages as well as a common
message are transmitted; the common message is to be decoded by both receivers,
while each private message is only for its intended receiver. In addition, each
private message is to be kept secret from the unintended receiver where secrecy
is measured by equivocation. We propose both inner and outer bounds to the rate
equivocation region for broadcast channels with confidential messages. The
proposed inner bound generalizes Csisz\'{a}r and K\"{o}rner's rate equivocation
region for broadcast channels with a single confidential message, Liu {\em et
al}'s achievable rate region for broadcast channels with perfect secrecy,
Marton's and Gel'fand and Pinsker's achievable rate region for general
broadcast channels. Our proposed outer bounds, together with the inner bound,
helps establish the rate equivocation region of several classes of discrete
memoryless broadcast channels with confidential messages, including less noisy,
deterministic, and semi-deterministic channels. Furthermore, specializing to
the general broadcast channel by removing the confidentiality constraint, our
proposed outer bounds reduce to new capacity outer bounds for the discrete
memoryless broadcast channel.
|
0805.4425
|
Low-Complexity Structured Precoding for Spatially Correlated MIMO
Channels
|
cs.IT math.IT
|
The focus of this paper is on spatial precoding in correlated multi-antenna
channels, where the number of independent data-streams is adapted to trade-off
the data-rate with the transmitter complexity. Towards the goal of a
low-complexity implementation, a structured precoder is proposed, where the
precoder matrix evolves fairly slowly at a rate comparable with the statistical
evolution of the channel. Here, the eigenvectors of the precoder matrix
correspond to the dominant eigenvectors of the transmit covariance matrix,
whereas the power allocation across the modes is fixed, known at both ends,
and is of low-complexity. A particular case of the proposed scheme (semiunitary
precoding), where the spatial modes are excited with equal power, is shown to
be near-optimal in matched channels. A matched channel is one where the
dominant eigenvalues of the transmit covariance matrix are well-conditioned and
their number equals the number of independent data-streams, and the receive
covariance matrix is also well-conditioned. In mismatched channels, where the
above conditions are not met, it is shown that the loss in performance with
semiunitary precoding when compared with a perfect channel information
benchmark is substantial. This loss needs to be mitigated via limited feedback
techniques that provide partial channel information to the transmitter. More
importantly, we develop matching metrics that capture the degree of matching of
a channel to the precoder structure continuously, and allow ordering two matrix
channels in terms of their mutual information or error probability performance.
|
0805.4440
|
Optimal Coding for the Erasure Channel with Arbitrary Alphabet Size
|
cs.IT math.IT
|
An erasure channel with a fixed alphabet size $q$, where $q \gg 1$, is
studied. It is proved that over any erasure channel (with or without memory),
Maximum Distance Separable (MDS) codes achieve the minimum probability of error
(assuming maximum likelihood decoding). Assuming a memoryless erasure channel,
the error exponent of MDS codes is compared with that of random codes and
linear random codes. It is shown that the envelopes of all these exponents are
identical for rates above the critical rate. Noting the optimality of MDS
codes, it is concluded that both random codes and linear random codes are
exponentially optimal, whether the block size is larger or smaller than the
alphabet size.
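The MDS property at the heart of this result (any k received symbols suffice to decode) can be illustrated with a real-valued Vandermonde code; practical MDS codes such as Reed-Solomon work over finite fields instead:

```python
import numpy as np

k, n = 4, 7                                    # message length, code length
msg = np.array([3.0, 1.0, 4.0, 1.5])

# Vandermonde generator over distinct evaluation points: any k of its n rows
# form an invertible matrix, which is exactly the MDS property.
points = np.arange(1, n + 1, dtype=float)
G = np.vander(points, k, increasing=True)      # n x k
codeword = G @ msg

# Erase any n - k symbols; decode from the k survivors by solving a k x k system
received = [0, 2, 5, 6]
decoded = np.linalg.solve(G[received], codeword[received])
print(np.allclose(decoded, msg))               # True
```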
|
0805.4471
|
Exact Matrix Completion via Convex Optimization
|
cs.IT math.IT
|
We consider a problem of considerable practical interest: the recovery of a
data matrix from a sampling of its entries. Suppose that we observe m entries
selected uniformly at random from a matrix M. Can we complete the matrix and
recover the entries that we have not seen?
We show that one can perfectly recover most low-rank matrices from what
appears to be an incomplete set of entries. We prove that if the number m of
sampled entries obeys m >= C n^{1.2} r log n for some positive numerical
constant C, then with very high probability, most n by n matrices of rank r can
be perfectly recovered by solving a simple convex optimization program. This
program finds the matrix with minimum nuclear norm that fits the data. The
condition above assumes that the rank is not too large. However, if one
replaces the 1.2 exponent with 1.25, then the result holds for all values of
the rank. Similar results hold for arbitrary rectangular matrices as well. Our
results are connected with the recent literature on compressed sensing, and
show that objects other than signals and images can be perfectly reconstructed
from very limited information.
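A toy numerical illustration of the recovery phenomenon; the hard rank-projection heuristic below stands in for the nuclear-norm program (it is not the authors' algorithm), and the sizes and sampling rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 40, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # random rank-r matrix

mask = rng.random((n, n)) < 0.6                          # observe ~60% of entries

# Alternate between enforcing the observed entries and projecting onto the
# rank-r manifold (a simple stand-in for nuclear-norm minimization)
X = np.zeros((n, n))
for _ in range(300):
    X[mask] = M[mask]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.2e}")
```

With this easy regime (low rank, ample samples) the unseen entries are recovered essentially exactly, in line with the abstract's claim.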
|
0805.4502
|
Golden Space-Time Block Coded Modulation
|
cs.IT math.IT
|
In this paper we present a block coded modulation scheme for a 2 x 2 MIMO
system over slow fading channels, where the inner code is the Golden Code. The
scheme is based on a set partitioning of the Golden Code using two-sided ideals
whose norm is a power of two. In this case, a lower bound for the minimum
determinant is given by the minimum Hamming distance. The description of the
ring structure of the quotients suggests further optimization in order to
improve the overall distribution of determinants. Performance simulations show
that the GC-RS schemes achieve a significant gain over the uncoded Golden Code.
|
0805.4508
|
Modeling Loosely Annotated Images with Imagined Annotations
|
cs.IR cs.AI
|
In this paper, we present an approach to learning latent semantic analysis
models from loosely annotated images for automatic image annotation and
indexing. The given annotation in training images is loose for two reasons:
(1) ambiguous correspondences between visual features and annotated keywords;
(2) incomplete lists of annotated keywords. The second reason motivates us to
enrich the incomplete annotation in a simple way before learning topic models.
In particular, some imagined keywords are injected into the incomplete
annotation by measuring similarity between keywords. Then, both given and
imagined annotations are used to learn probabilistic topic models for
automatically annotating new images. We conduct experiments on a typical Corel
dataset of images and loose annotations, and compare the proposed method with
state-of-the-art discrete annotation methods (using a set of discrete blobs to
represent an image). The proposed method improves word-driven probabilistic
Latent Semantic Analysis (PLSA-words) to a performance comparable with the
best discrete annotation method, while a merit of PLSA-words is still kept,
i.e., a wider semantic range.
|
0805.4521
|
Recognizing Textual Entailment by a Theorem Proving Approach
|
cs.CL
|
In this paper we present two original methods for recognizing textual
inference. The first is a modified resolution method in which some linguistic
considerations are introduced into the unification of two atoms. The approach
is made possible by recent methods for transforming texts into logic formulas.
The second is based on semantic relations in text, as presented in WordNet.
Some similarities between the two methods are noted.
|
0805.4560
|
Rock mechanics modeling based on soft granulation theory
|
cs.AI
|
This paper describes the application of information granulation theory to the
design of rock engineering flowcharts. First, an overall flowchart based on
information granulation theory is presented. Information granulation theory,
in crisp (non-fuzzy) or fuzzy form, can take engineering experience
(especially fuzzy, incomplete, or superfluous information) and engineering
judgment into account at each step of the design procedure, provided suitable
modeling instruments are employed. In this manner, and as an extension of soft
modeling instruments, crisp and fuzzy granules are obtained from monitored
data sets using three combinations of Self-Organizing Maps (SOM), Neuro-Fuzzy
Inference Systems (NFIS), and Rough Set Theory (RST). The core of our
algorithms is the balancing of crisp (rough, non-fuzzy) granules and fuzzy
sub-granules within the non-fuzzy information (initial granulation) over
open-close iterations. The best granules (information pockets) are obtained
using different balancing criteria. Our proposed methods are validated on a
data set of in-situ permeability measurements in the rock masses of the
Shivashan dam, Iran.
|
0805.4583
|
Channels that Heat Up
|
cs.IT math.IT
|
This work considers an additive noise channel where the time-k noise variance
is a weighted sum of the channel input powers prior to time k. This channel is
motivated by point-to-point communication between two terminals that are
embedded in the same chip. Transmission heats up the entire chip and hence
increases the thermal noise at the receiver. The capacity of this channel (both
with and without feedback) is studied at low transmit powers and at high
transmit powers.
At low transmit powers, the slope of the capacity-vs-power curve at zero is
computed and it is shown that the heating-up effect is beneficial. At high
transmit powers, conditions are determined under which the capacity is bounded,
i.e., under which the capacity does not grow to infinity as the allowed average
power tends to infinity.
|
0805.4620
|
Uplink Macro Diversity of Limited Backhaul Cellular Network
|
cs.IT math.IT
|
In this work new achievable rates are derived, for the uplink channel of a
cellular network with joint multicell processing, where, unlike previous
results, the backhaul network has finite capacity per cell. Namely, the
cell sites are linked to the central joint processor via lossless links with
finite capacity. The cellular network is abstracted by symmetric models, which
render analytical treatment plausible. For this idealistic model family,
achievable rates are presented for cell-sites that use compress-and-forward
schemes combined with local decoding, for both Gaussian and fading channels.
The rates are given in closed form for the classical Wyner model and the
soft-handover model. These rates are then demonstrated to be rather close to
the optimal unlimited backhaul joint processing rates, already for modest
backhaul capacities, supporting the potential gain offered by the joint
multicell processing approach. Particular attention is also given to the
low-SNR characterization of these rates through which the effect of the limited
backhaul network is explicitly revealed. In addition, the rate at which the
backhaul capacity should scale in order to maintain the original high-SNR
characterization of an unlimited backhaul capacity system is found.
|
0805.4722
|
The reliability of information on the web
|
cs.IR cs.CL cs.CY
|
Online IR tools have to take into account new phenomena linked to the
appearance of blogs, wikis, and other collaborative publications. Among these
collaborative sites, Wikipedia represents a crucial source of information.
However, the quality of this information has recently been questioned. A
better knowledge of the contributors' behaviors should help users navigate
through information whose quality may vary from one source to another. In order
to explore this idea, we present an analysis of the role of different types of
contributors in the control of the publication of conflictual articles.
|
0805.4748
|
New Construction of 2-Generator Quasi-Twisted Codes
|
cs.IT math.IT
|
Quasi-twisted (QT) codes are a generalization of quasi-cyclic (QC) codes.
Based on constacyclic simplex codes, a new explicit construction of a family
of 2-generator quasi-twisted (QT) two-weight codes is presented. It is also
shown that many codes in the family meet the Griesmer bound and therefore are
length-optimal. New distance-optimal binary QC [195, 8, 96], [210, 8, 104] and
[240, 8, 120] codes, and good ternary QC [208, 6, 135] and [221, 6, 144] codes
are also obtained by the construction.
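The Griesmer bound referenced above is easy to evaluate for the binary parameters reported (a quick sketch):

```python
import math

def griesmer(k, d, q=2):
    """Griesmer lower bound on the length n of a linear [n, k, d] code
    over GF(q): n >= sum_{i=0}^{k-1} ceil(d / q**i)."""
    return sum(math.ceil(d / q ** i) for i in range(k))

for n, k, d in [(195, 8, 96), (210, 8, 104), (240, 8, 120)]:
    print(f"[{n}, {k}, {d}]: Griesmer bound n >= {griesmer(k, d)}")
```

For these parameters the bound evaluates to 192, 209, and 240 respectively, so the [240, 8, 120] code actually attains it, consistent with the abstract's claim that some codes in the family meet the Griesmer bound.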
|
0805.4754
|
Managing conflicts between users in Wikipedia
|
cs.IR cs.CL cs.CY cs.HC
|
Wikipedia is nowadays a widely used encyclopedia, and one of the most visible
sites on the Internet. Its strong principle of collaborative work and free
editing sometimes generates disputes due to disagreements between users. In
this article we study how the Wikipedian community resolves conflicts and
which roles Wikipedians choose in this process. We observed user behavior both
in the article talk pages and in the Arbitration Committee pages specifically
dedicated to serious disputes. We first set up a user typology according to
users' involvement in conflicts and their publishing and management activity
in the encyclopedia. We then used those user types to describe how users
contribute to articles that are tagged by the Wikipedian community as being in
conflict with the official guidelines of Wikipedia, or conversely as being
featured articles.
|
0806.0036
|
On the Design of Universal LDPC Codes
|
cs.IT math.IT
|
Low-density parity-check (LDPC) coding for a multitude of equal-capacity
channels is studied. First, based on numerous observations, a conjecture is
stated that when the belief propagation decoder converges on a set of
equal-capacity channels, it would also converge on any convex combination of
those channels. Then, it is proved that when the stability condition is
satisfied for a number of channels, it is also satisfied for any channel in
their convex hull. For the purpose of code design, a method is proposed which
can decompose every symmetric channel with capacity C into a set of
identical-capacity basis channels. We expect codes that work on the basis
channels to be suitable for any channel with capacity C. Such codes are found
and in comparison with existing LDPC codes that are designed for specific
channels, our codes obtain considerable coding gains when used across a
multitude of channels.
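The decoder-convergence behavior in question can be made concrete with density evolution on the binary erasure channel; a minimal sketch for a regular ensemble (the (3,6) ensemble and the erasure rates are illustrative):

```python
def poly(coeffs, x):
    """Edge-perspective degree polynomial: sum_d coeffs[d] * x**(d - 1)."""
    return sum(c * x ** (d - 1) for d, c in coeffs.items())

def de_bec(eps, lam, rho, iters=1000):
    """BEC density evolution: erasure fraction x_{l+1} = eps*lam(1 - rho(1 - x_l));
    returns the final erasure fraction."""
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
    return x

lam, rho = {3: 1.0}, {6: 1.0}          # regular (3,6) ensemble; BEC threshold ~0.429
print(de_bec(0.40, lam, rho) < 1e-9)   # below threshold: decoder converges
print(de_bec(0.45, lam, rho) > 1e-2)   # above threshold: erasures persist
```

The stability condition mentioned in the abstract appears here as the requirement that the recursion be contracting near x = 0 (for the BEC, eps * lam'(0) * rho'(1) < 1).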
|
0806.0075
|
An Experimental Investigation of XML Compression Tools
|
cs.DB
|
This paper presents an extensive experimental study of the state-of-the-art
of XML compression tools. The study reports the behavior of nine XML
compressors using a large corpus of XML documents which covers the different
natures and scales of XML documents. In addition to assessing and comparing the
performance characteristics of the evaluated XML compression tools, the study
tries to assess the effectiveness and practicality of using these tools in the
real world. Finally, we provide some guidelines and recommendations which are
useful in helping developers and users make an effective decision when
selecting the most suitable XML compression tool for their needs.
|
0806.0080
|
Outer Bounds for Multiple Access Channels with Feedback using Dependence
Balance
|
cs.IT math.IT
|
We use the idea of dependence balance to obtain a new outer bound for the
capacity region of the discrete memoryless multiple access channel with
noiseless feedback (MAC-FB). We consider a binary additive noisy MAC-FB whose
feedback capacity is not known. The binary additive noisy MAC considered in
this paper can be viewed as the discrete counterpart of the Gaussian MAC-FB.
Ozarow established that the capacity region of the two-user Gaussian MAC-FB is
given by the cut-set bound. Our result shows that for the discrete version of
the channel considered by Ozarow, this is not the case. Direct evaluation of
our outer bound is intractable due to an involved auxiliary random variable
whose large cardinality prohibits an exhaustive search. We overcome this
difficulty by using functional analysis to explicitly evaluate our outer bound.
Our outer bound is strictly less than the cut-set bound at all points on the
capacity region where feedback increases capacity. In addition, we explicitly
evaluate the Cover-Leung achievable rate region for the binary additive noisy
MAC-FB in consideration. Furthermore, using the tools developed for the
evaluation of our outer bound, we also explicitly characterize the boundary of
the feedback capacity region of the binary erasure MAC, for which the
Cover-Leung achievable rate region is known to be tight. This last result
confirms that the feedback strategies developed by Kramer for the binary
erasure MAC are capacity achieving.
|
0806.0142
|
Regularization of Inverse Problem for M-Ary Channel
|
cs.IT math.IT
|
The problem of computing the parameters of an m-ary channel is considered. It
is demonstrated that although the problem is ill-posed, it is possible to tune
the parameters of the system and transform the problem into a well-posed one.
|
0806.0250
|
Checking the Quality of Clinical Guidelines using Automated Reasoning
Tools
|
cs.AI cs.LO cs.SC
|
Requirements about the quality of clinical guidelines can be represented by
schemata borrowed from the theory of abductive diagnosis, using temporal logic
to model the time-oriented aspects expressed in a guideline. Previously, we
have shown that these requirements can be verified using interactive theorem
proving techniques. In this paper, we investigate how this approach can be
mapped to the facilities of a resolution-based theorem prover, Otter, and a
complementary program that searches for finite models of first-order
statements, Mace. It is shown that the reasoning required for checking the
quality of a guideline can be mapped to such fully automated theorem-proving
facilities. The medical quality of an actual guideline concerning diabetes
mellitus 2 is investigated in this way.
|
0806.0283
|
Model of information diffusion
|
cs.IT math.IT
|
A system of cellular automata that expresses the process of dissemination
and publication of news among separate information resources is described. A
bell-shaped dependence of news diffusion on internet sources (web sites)
agrees well with the real behavior of thematic data flows, and over local time
spans with well-known models, e.g., exponential and logistic ones.
|
0806.0473
|
Kinematic Analysis of the vertebra of an eel like robot
|
cs.RO physics.class-ph
|
The kinematic analysis of a spherical wrist with parallel architecture is the
object of this article. This study is part of a larger French project, which
aims to design and build an eel-like robot that imitates eel swimming. To
implement direct and inverse kinematics on the control law of the prototype, we
need to evaluate the workspace without any collisions between the different
bodies. The tilt and torsion parameters are used to represent the workspace.
|
0806.0526
|
An Ontology-based Knowledge Management System for Industry Clusters
|
cs.AI
|
A knowledge-based economy forces companies to group together as clusters in
order to maintain their competitiveness in the world market. Cluster
development relies on two key success factors: knowledge sharing and
collaboration between the actors in the cluster. Thus, our study proposes a
knowledge management system to support knowledge management activities within
the cluster. To achieve the objectives of this study, ontology plays a very
important role in the knowledge management process in various ways, such as
building reusable and faster knowledge bases and representing knowledge
explicitly. However, creating and representing ontologies is difficult for
organizations due to the ambiguity and unstructured nature of the knowledge
sources. Therefore, the objective of this paper is to propose a methodology
for creating and representing ontologies for organizational development using
a knowledge engineering approach. The handicraft cluster in Thailand is used
as a case study to illustrate our proposed methodology.
|
0806.0535
|
Sincere-Strategy Preference-Based Approval Voting Fully Resists
Constructive Control and Broadly Resists Destructive Control
|
cs.GT cs.CC cs.MA
|
We study sincere-strategy preference-based approval voting (SP-AV), a system
proposed by Brams and Sanver [Electoral Studies, 25(2):287-305, 2006], and here
adjusted so as to coerce admissibility of the votes (rather than excluding
inadmissible votes a priori), with respect to procedural control. In such
control scenarios, an external agent seeks to change the outcome of an election
via actions such as adding/deleting/partitioning either candidates or voters.
SP-AV combines the voters' preference rankings with their approvals of
candidates, where in elections with at least two candidates the voters'
approval strategies are adjusted--if needed--to approve of their most-preferred
candidate and to disapprove of their least-preferred candidate. This rule
coerces admissibility of the votes even in the presence of control actions, and
hybridizes, in effect, approval with plurality voting.
We prove that this system is computationally resistant (i.e., the
corresponding control problems are NP-hard) to 19 out of 22 types of
constructive and destructive control. Thus, SP-AV has more resistances to
control than is currently known for any other natural voting system with a
polynomial-time winner problem. In particular, SP-AV is (after Copeland voting,
see Faliszewski et al. [AAIM-2008, Springer LNCS 5034, pp. 165-176, 2008]) the
second natural voting system with an easy winner-determination procedure that
is known to have full resistance to constructive control, and unlike Copeland
voting it in addition displays broad resistance to destructive control.
|
0806.0562
|
Universal Coding on Infinite Alphabets: Exponentially Decreasing
Envelopes
|
cs.IT math.IT
|
This paper deals with the problem of universal lossless coding on a countable
infinite alphabet. It focuses on some classes of sources defined by an envelope
condition on the marginal distribution, namely exponentially decreasing
envelope classes with exponent $\alpha$. The minimax redundancy of
exponentially decreasing envelope classes is proved to be equivalent to
$\frac{1}{4 \alpha \log e} \log^2 n$. Then a coding strategy is proposed, with
a Bayes redundancy equivalent to the maximin redundancy. Finally, an adaptive
algorithm is provided, whose redundancy is equivalent to the minimax
redundancy.
|
0806.0576
|
Steganographic Routing in Multi Agent System Environment
|
cs.CR cs.MA
|
In this paper we present an idea of trusted communication platform for
Multi-Agent Systems (MAS) called TrustMAS. Based on analysis of routing
protocols suitable for MAS we have designed a new proactive hidden routing.
Proposed steg-agents discovery procedure, as well as further routes updates and
hidden communication, are cryptographically independent. Steganographic
exchange can cover heterogeneous and geographically outlying environments using
available cross-layer covert channels. Finally we have specified rules that
agents have to follow to benefit the TrustMAS distributed router platform.
|
0806.0579
|
Multirate Synchronous Sampling of Sparse Multiband Signals
|
cs.IT math.IT
|
Recent advances in optical systems make them ideal for undersampling
multiband signals that have high bandwidths. In this paper we propose a new
scheme for reconstructing multiband sparse signals using a small number of
sampling channels. The scheme, which we call synchronous multirate sampling
(SMRS), entails gathering samples synchronously at a few different rates whose
sum is significantly lower than the Nyquist sampling rate. The signals are
reconstructed by solving a system of linear equations. We have demonstrated an
accurate and robust reconstruction of signals using a small number of sampling
channels that operate at relatively high rates. Sampling at higher rates
increases the signal to noise ratio in samples. The SMRS scheme enables a
significant reduction in the number of channels required when the sampling rate
increases. We have demonstrated, using only three sampling channels, an
accurate sampling and reconstruction of 4 real signals (8 bands). The matrices
that are used to reconstruct the signals in the SMRS scheme also have low
condition numbers. This indicates that the SMRS scheme is robust to noise in
signals. The success of the SMRS scheme relies on the assumption that the
sampled signals are sparse. As a result most of the sampled spectrum may be
unaliased in at least one of the sampling channels. This is in contrast to
multicoset sampling schemes in which an alias in one channel is equivalent to
an alias in all channels. We have demonstrated that the SMRS scheme, using 3
sampling channels and a total sampling rate of 8 times the Landau rate, obtains
performance similar to that of a multicoset sampling scheme that uses 6
sampling channels with a total sampling rate of 13 times the Landau rate.
|
0806.0604
|
Information-theoretic limits on sparse signal recovery: Dense versus
sparse measurement matrices
|
math.ST cs.IT math.IT stat.TH
|
We study the information-theoretic limits of exactly recovering the support
of a sparse signal using noisy projections defined by various classes of
measurement matrices. Our analysis is high-dimensional in nature, in which the
number of observations $n$, the ambient signal dimension $p$, and the signal
sparsity $k$ are all allowed to tend to infinity in a general manner. This
paper makes two novel contributions. First, we provide sharper necessary
conditions for exact support recovery using general (non-Gaussian) dense
measurement matrices. Combined with previously known sufficient conditions,
this result yields sharp characterizations of when the optimal decoder can
recover a signal for various scalings of the sparsity $k$ and sample size $n$,
including the important special case of linear sparsity ($k = \Theta(p)$) using
a linear scaling of observations ($n = \Theta(p)$). Our second contribution is
to prove necessary conditions on the number of observations $n$ required for
asymptotically reliable recovery using a class of $\gamma$-sparsified
measurement matrices, where the measurement sparsity $\gamma(n, p, k) \in
(0,1]$ corresponds to the fraction of non-zero entries per row. Our analysis
allows general scaling of the quadruplet $(n, p, k, \gamma)$, and reveals three
different regimes, corresponding to whether measurement sparsity has no effect,
a minor effect, or a dramatic effect on the information-theoretic limits of the
subset recovery problem.
|
0806.0676
|
On Peak versus Average Interference Power Constraints for Protecting
Primary Users in Cognitive Radio Networks
|
cs.IT math.IT
|
This paper considers spectrum sharing for wireless communication between a
cognitive radio (CR) link and a primary radio (PR) link. It is assumed that the
CR protects the PR transmission by applying the so-called
interference-temperature constraint, whereby the CR is allowed to transmit
regardless of the PR's on/off status provided that the resultant interference
power level at the PR receiver is kept below some predefined threshold. For the
fading PR and CR channels, the interference-power constraint at the PR receiver
is usually one of two types: one regulates the average interference power (AIP)
over all the fading states, while the other limits the peak interference power
(PIP) at each fading state. From the CR's perspective, given the same average
and peak power thresholds, the AIP constraint is more favorable than the PIP
counterpart because of its greater flexibility in dynamically allocating
transmit powers over the fading states. On the contrary, from the perspective
of protecting the PR, the more restrictive PIP constraint appears at first
glance to be a better option than the AIP. Somewhat surprisingly, this paper
shows that in terms of various capacity limits achievable for the PR fading
channel, e.g., the ergodic and outage capacities, the AIP constraint is also
superior to the PIP. This result is based upon an interesting interference
diversity phenomenon: randomized interference powers over the fading states in
the AIP case are more advantageous than deterministic ones in the PIP case for
minimizing the resultant PR capacity losses. Therefore, the AIP constraint
results in larger fading channel capacities than the PIP for both the CR and
PR transmissions.
|
0806.0689
|
Directional Cross Diamond Search Algorithm for Fast Block Motion
Estimation
|
cs.CV
|
In block-matching motion estimation (BMME), the search patterns have a
significant impact on the algorithm's performance, both the search speed and
the search quality. The search pattern should be designed to fit the motion
vector probability (MVP) distribution characteristics of the real-world
sequences. In this paper, we build a directional model of MVP distribution to
describe the directional-center-biased characteristic of the MVP distribution
and the directional characteristics of the conditional MVP distribution more
exactly based on the detailed statistical data of motion vectors of eighteen
popular sequences. Three directional search patterns are first designed by
exploiting these directional characteristics; they are the smallest search
patterns among the popular ones. A new algorithm is proposed using the
horizontal cross search pattern as the initial step and the horizontal/vertical
diamond search pattern as the subsequent step for the fast BMME, which is
called the directional cross diamond search (DCDS) algorithm. The DCDS
algorithm obtains the motion vector with fewer search points than CDS, DS or
HEXBS while maintaining similar or even better search quality. The speedup of
DCDS over CDS or DS can be up to 54.9%. The simulation results show that DCDS
is efficient, effective and robust, and it consistently achieves faster search
speeds on different sequences than other fast block-matching algorithms in
common use.
|
0806.0740
|
Modeling And Simulation Of Prolate Dual-Spin Satellite Dynamics In An
Inclined Elliptical Orbit: Case Study Of Palapa B2R Satellite
|
cs.RO
|
In response to the interest to re-use Palapa B2R satellite nearing its End of
Life (EOL) time, an idea to incline the satellite orbit in order to cover a new
region has emerged in the recent years. As a prolate dual-spin vehicle, Palapa
B2R has to be stabilized against its internal energy dissipation effect. This
work is focused on analyzing the dynamics of the reusable satellite in its
inclined orbit. The study discusses in particular the stability of the prolate
dual-spin satellite under the effect of perturbed field of gravitation due to
the inclination of its elliptical orbit. Palapa B2R's physical data were
substituted into the dual-spin equations of motion. The zonal harmonic
coefficient J2 was introduced into the gravity-gradient moment term that
affects the satellite attitude. The satellite's motion and attitude were then
simulated in the gravitational field perturbed by J2, with varying orbital
eccentricity and inclination. The analysis of the satellite dynamics and its
stability was conducted for designing a control system for the vehicle in its
new inclined orbit.
|
0806.0743
|
Onboard Multivariable Controller Design for a Small Scale Helicopter
Using Coefficient Diagram Method
|
cs.RO
|
A mini scale helicopter exhibits not only increased sensitivity to control
inputs and disturbances, but also higher bandwidth of its dynamics. These
properties make model helicopters, as a flying robot, more difficult to
control. The dynamics model accuracy will determine the performance of the
designed controller. It is attractive in this regard to have a controller that
can accommodate unmodeled dynamics or parameter changes and perform well in
such situations. Coefficient Diagram Method (CDM) is chosen as the candidate to
synthesize such a controller due to its simplicity and convenience in
demonstrating integrated performance measures including equivalent time
constant, stability indices and robustness. In this study, CDM is applied to
the design of a multivariable controller for a small scale helicopter during
hover and cruise flight. In the synthesis of MIMO CDM, good design common sense
based on hands-on experience is necessary. The low level controller algorithm
is designed as part of a hybrid supervisory control architecture to be
implemented on an onboard computer system. Its feasibility and performance are
evaluated based on its robustness, desired time-domain system responses and
compliance with hard real-time requirements.
|
0806.0784
|
Collaborative model of interaction and Unmanned Vehicle Systems'
interface
|
cs.AI cs.HC cs.MA
|
The interface for the next generation of Unmanned Vehicle Systems should be
an interface with multi-modal displays and input controls. The role of the
interface will then not be restricted to supporting the interactions between
the ground operator and the vehicles; the interface must take part in
interaction management as well. In this paper, we show that recent works in
pragmatics and philosophy provide a suitable theoretical framework for the next
generation of UV System's interface. We concentrate on two main aspects of the
collaborative model of interaction based on acceptance: multi-strategy approach
for communicative act generation and interpretation and communicative
alignment.
|
0806.0837
|
Upper and Lower Bounds on Black-Box Steganography
|
cs.CR cs.CC cs.IT math.IT
|
We study the limitations of steganography when the sender is not using any
properties of the underlying channel beyond its entropy and the ability to
sample from it. On the negative side, we show that the number of samples the
sender must obtain from the channel is exponential in the rate of the
stegosystem. On the positive side, we present the first secret-key stegosystem
that essentially matches this lower bound regardless of the entropy of the
underlying channel. Furthermore, for high-entropy channels, we present the
first secret-key stegosystem that matches this lower bound statelessly (i.e.,
without requiring synchronized state between sender and receiver).
|
0806.0838
|
Performance Analysis of Multiple Antenna Multi-User Detection
|
cs.IT math.IT
|
We derive the diversity order of some multiple antenna multi-user
cancellation and detection schemes. The common property of these detection
methods is the usage of Alamouti and quasi-orthogonal space-time block codes.
For detecting $J$ users each having $N$ transmit antennas, these schemes
require only $J$ antennas at the receiver. Our analysis shows that when having
$M$ receive antennas, the array-processing schemes provide the diversity order
of $N(M-J+1)$. In addition, our results prove that regardless of the number of
users or receive antennas, when using maximum-likelihood decoding we get the
full transmit and receive diversities, i.e. $NM$, similar to the
no-interference scenario.
|
0806.0870
|
The Euler-Poincare theory of Metamorphosis
|
cs.CV nlin.CD
|
In the pattern matching approach to imaging science, the process of
``metamorphosis'' is template matching with dynamical templates. Here, we
recast the metamorphosis equations of into the Euler-Poincare variational
framework of and show that the metamorphosis equations contain the equations
for a perfect complex fluid \cite{Ho2002}. This result connects the ideas
underlying the process of metamorphosis in image matching to the physical
concept of order parameter in the theory of complex fluids. After developing
the general theory, we reinterpret various examples, including point set, image
and density metamorphosis. We finally discuss the issue of matching measures
with metamorphosis, for which we provide existence theorems for the initial and
boundary value problems.
|
0806.0899
|
A Nonparametric Approach to 3D Shape Analysis from Digital Camera Images
- I. in Memory of W.P. Dayawansa
|
stat.ME cs.CV math.ST stat.TH
|
In this article, for the first time, one develops a nonparametric methodology
for an analysis of shapes of configurations of landmarks on real 3D objects
from regular camera photographs, thus making 3D shape analysis very accessible.
A fundamental result in computer vision by Faugeras (1992), Hartley, Gupta and
Chang (1992) is that generically, a finite 3D configuration of points can be
retrieved up to a projective transformation, from corresponding configurations
in a pair of camera images. Consequently, the projective shape of a 3D
configuration can be retrieved from two of its planar views. Given the inherent
registration errors, the 3D projective shape can be estimated from a sample of
photos of the scene containing that configuration. Projective shapes are here
regarded as points on projective shape manifolds. Using large sample and
nonparametric bootstrap methodology for extrinsic means on manifolds, one gives
confidence regions and tests for the mean projective shape of a 3D
configuration from its 2D camera images.
|
0806.0903
|
On the Capacity of Gaussian Relay Channels
|
cs.IT math.IT
|
This paper has been withdrawn because the same conclusion has been proposed
before.
|
0806.0905
|
Channel Capacity Limits of Cognitive Radio in Asymmetric Fading
Environments
|
cs.IT math.IT
|
Cognitive radio technology is an innovative radio design concept which aims
to increase spectrum utilization by exploiting unused spectrum in dynamically
changing environments. By extending previous results, we investigate the
capacity gains achievable with this dynamic spectrum approach in asymmetric
fading channels. More specifically, we allow the secondary-to-primary and
secondary-to-secondary user channels to undergo Rayleigh or Rician fading, with
arbitrary link power. In order to compute the capacity, we derive the
distributions of ratios of Rayleigh and Rician variables. Compared to the
symmetric fading scenario, our results indicate several interesting features of
the capacity behaviour under both average and peak received power constraints.
Finally, the impact of multiple primary users on the capacity under asymmetric
fading has also been studied.
|
0806.0909
|
Outage and Local Throughput and Capacity of Random Wireless Networks
|
cs.IT cs.NI math.IT
|
Outage probabilities and single-hop throughput are two important performance
metrics that have been evaluated for certain specific types of wireless
networks. However, there is a lack of comprehensive results for larger classes
of networks, and there is no systematic approach that permits the convenient
comparison of the performance of networks with different geometries and levels
of randomness.
The uncertainty cube is introduced to categorize the uncertainty present in a
network. The three axes of the cube represent the three main potential sources
of uncertainty in interference-limited networks: the node distribution, the
channel gains (fading), and the channel access (set of transmitting nodes). For
the performance analysis, a new parameter, the so-called {\em spatial
contention}, is defined. It measures the slope of the outage probability in an
ALOHA network as a function of the transmit probability $p$ at $p=0$. Outage is
defined as the event that the signal-to-interference ratio (SIR) is below a
certain threshold in a given time slot. It is shown that the spatial contention
is sufficient to characterize outage and throughput in large classes of
wireless networks, corresponding to different positions on the uncertainty
cube. Existing results are placed in this framework, and new ones are derived.
Further, interpreting the outage probability as the SIR distribution, the
ergodic capacity of unit-distance links is determined and compared to the
throughput achievable for fixed (yet optimized) transmission rates.
|
0806.1006
|
The VO-Neural project: recent developments and some applications
|
astro-ph cs.CE
|
VO-Neural is the natural evolution of the Astroneural project which was
started in 1994 with the aim to implement a suite of neural tools for data
mining in astronomical massive data sets. At a difference with its ancestor,
which was implemented under Matlab, VO-Neural is written in C++, object
oriented, and it is specifically tailored to work in distributed computing
architectures. We discuss the current status of implementation of VO-Neural,
present an application to the classification of Active Galactic Nuclei, and
outline the ongoing work to improve the functionalities of the package.
|
0806.1062
|
Capacity of Block-Memoryless Channels with Causal Channel Side
Information
|
cs.IT math.IT
|
The capacity of a time-varying block-memoryless channel in which the
transmitter and the receiver have access to (possibly different) noisy causal
channel side information (CSI) is obtained. It is shown that the capacity
formula obtained in this correspondence reduces to the capacity formula
reported in \cite{Gold07} for the special case where the transmitter CSI is a
deterministic function of the receiver CSI.
|
0806.1071
|
Histograms and Wavelets on Probabilistic Data
|
cs.DB
|
There is a growing realization that uncertain information is a first-class
citizen in modern database management. As such, we need techniques to correctly
and efficiently process uncertain data in database systems. In particular, data
reduction techniques that can produce concise, accurate synopses of large
probabilistic relations are crucial. Similar to their deterministic relation
counterparts, such compact probabilistic data synopses can form the foundation
for human understanding and interactive data exploration, probabilistic query
planning and optimization, and fast approximate query processing in
probabilistic database systems.
In this paper, we introduce definitions and algorithms for building
histogram- and wavelet-based synopses on probabilistic data. The core problem
is to choose a set of histogram bucket boundaries or wavelet coefficients to
optimize the accuracy of the approximate representation of a collection of
probabilistic tuples under a given error metric. For a variety of different
error metrics, we devise efficient algorithms that construct optimal or near
optimal B-term histogram and wavelet synopses. This requires careful analysis
of the structure of the probability distributions, and novel extensions of
known dynamic-programming-based techniques for the deterministic domain. Our
experiments show that this approach clearly outperforms simple ideas, such as
building summaries for samples drawn from the data distribution, while taking
equal or less time.
|
0806.1144
|
GRID-Launcher v.1.0
|
astro-ph cs.CE
|
GRID-launcher-1.0 was built within the VO-Tech framework as a software
interface between UK-ASTROGRID and generic GRID infrastructures, in order to
allow any ASTROGRID user to launch computing-intensive tasks on the GRID from
the ASTROGRID Workbench or Desktop. Even though it is of general
applicability, so far the Grid-Launcher has been tested on a few selected
software packages (VONeural-MLP, VONeural-SVM, Sextractor and SWARP) and on
the SCOPE-GRID.
|
0806.1156
|
Utilisation des grammaires probabilistes dans les t\^aches de
segmentation et d'annotation prosodique
|
cs.LG
|
Methodologically oriented, the present work sketches an approach for prosodic
information retrieval and speech segmentation, based on both symbolic and
probabilistic information. We have recourse to probabilistic grammars, within
which we implement a minimal hierarchical structure. Both the stages of
probabilistic grammar building and its testing in prediction are explored and
quantitatively and qualitatively evaluated.
|
0806.1199
|
Belief Propagation and Beyond for Particle Tracking
|
cs.IT cond-mat.stat-mech cs.AI cs.LG math.IT physics.flu-dyn
|
We describe a novel approach to statistical learning from particles tracked
while moving in a random environment. The problem consists in inferring
properties of the environment from recorded snapshots. We consider here the
case of a fluid seeded with identical passive particles that diffuse and are
advected by a flow. Our approach rests on efficient algorithms to estimate the
weighted number of possible matchings among particles in two consecutive
snapshots, the partition function of the underlying graphical model. The
partition function is then maximized over the model parameters, namely
diffusivity and velocity gradient. A Belief Propagation (BP) scheme is the
backbone of our algorithm, providing accurate results for the flow parameters
we want to learn. The BP estimate is additionally improved by incorporating
Loop Series (LS) contributions. For the weighted matching problem, LS is
compactly expressed as a Cauchy integral, accurately estimated by a saddle
point approximation. Numerical experiments show that the quality of our
improved BP algorithm is comparable to that of a fully polynomial randomized
approximation scheme, based on the Markov Chain Monte Carlo (MCMC) method,
while the BP-based scheme is substantially faster than the MCMC scheme.
|
0806.1215
|
Performance of LDPC Codes Under Faulty Iterative Decoding
|
cs.IT math.IT
|
Departing from traditional communication theory where decoding algorithms are
assumed to perform without error, a system where noise perturbs both
computational devices and communication channels is considered here. This paper
studies limits in processing noisy signals with noisy circuits by investigating
the effect of noise on standard iterative decoders for low-density parity-check
codes. Concentration of decoding performance around its average is shown to
hold when noise is introduced into message-passing and local computation.
Density evolution equations for simple faulty iterative decoders are derived.
In one model, computing nonlinear estimation thresholds shows that performance
degrades smoothly as decoder noise increases, but arbitrarily small probability
of error is not achievable. Probability of error may be driven to zero in
another system model; the decoding threshold again decreases smoothly with
decoder noise. As an application of the methods developed, an achievability
result for reliable memory systems constructed from unreliable components is
provided.
|
0806.1231
|
Improving Classical Authentication with Quantum Communication
|
cs.IT cs.CR math.IT
|
We propose a quantum-enhanced protocol to authenticate classical messages,
with improved security with respect to the classical scheme introduced by
Brassard in 1983. In that protocol, the shared key is the seed of a
pseudo-random generator (PRG) and a hash function is used to create the
authentication tag of a public message. We show that a quantum encoding of
secret bits offers more security than the classical XOR function introduced by
Brassard. Furthermore, we establish the relationship between the bias of a PRG
and the amount of information about the key that the attacker can retrieve from
a block of authenticated messages. Finally, we prove that quantum resources can
improve both the secrecy of the key generated by the PRG and the secrecy of the
tag obtained with a hidden hash function.
|
0806.1280
|
The Role of Artificial Intelligence Technologies in Crisis Response
|
cs.AI
|
Crisis response poses some of the most difficult information technology
challenges in crisis management. It requires information- and
communication-intensive efforts,
utilized for reducing uncertainty, calculating and comparing costs and
benefits, and managing resources in a fashion beyond those regularly available
to handle routine problems. In this paper, we explore the benefits of
artificial intelligence technologies in crisis response, discussing in
particular the roles of robotics, ontologies and the semantic web, and
multi-agent systems.
|
0806.1316
|
The end of Sleeping Beauty's nightmare
|
cs.AI math.PR
|
The way a rational agent changes her belief in certain
propositions/hypotheses in the light of new evidence lies at the heart of
Bayesian inference. The basic natural assumption, as summarized in van
Fraassen's Reflection Principle ([1984]), would be that in the absence of new
evidence the belief should not change. Yet, there are examples that are claimed
to violate this assumption. The apparent paradox presented by such examples, if
not settled, would demonstrate the inconsistency and/or incompleteness of the
Bayesian approach and without eliminating this inconsistency, the approach
cannot be regarded as scientific.
The Sleeping Beauty Problem is just such an example. The existing attempts to
solve the problem fall into three categories. The first two share the view that
new evidence is absent, but differ about the conclusion of whether Sleeping
Beauty should change her belief or not, and why. The third category is
characterized by the view that, after all, new evidence (although hidden from
the initial view) is involved.
My solution is radically different and does not fall into any of these
categories. I deflate the paradox by arguing that the two different degrees of
belief presented in the Sleeping Beauty Problem are in fact beliefs in two
different propositions, i.e. there is no need to explain the (un)change of
belief.
|
0806.1343
|
Temporized Equilibria
|
cs.GT cs.AI
|
This paper has been withdrawn by the author due to a crucial error in the
submission action.
|
0806.1372
|
Robust Cognitive Beamforming With Partial Channel State Information
|
cs.IT math.IT
|
This paper considers a spectrum sharing based cognitive radio (CR)
communication system, which consists of a secondary user (SU) having multiple
transmit antennas and a single receive antenna and a primary user (PU) having a
single receive antenna. The channel state information (CSI) on the link of the
SU is assumed to be perfectly known at the SU transmitter (SU-Tx). However, due
to loose cooperation between the SU and the PU, only partial CSI of the link
between the SU-Tx and the PU is available at the SU-Tx. With the partial CSI
and a prescribed transmit power constraint, our design objective is to
determine the transmit signal covariance matrix that maximizes the rate of the
SU while keeping the interference power to the PU below a threshold for all
possible channel realizations within an uncertainty set. This problem, termed
the robust cognitive beamforming problem, can be naturally formulated as a
semi-infinite programming (SIP) problem with infinitely many constraints. This
problem is first transformed into the second order cone programming (SOCP)
problem and then solved via a standard interior point algorithm. Then, an
analytical solution with much reduced complexity is developed from a geometric
perspective. It is shown that both algorithms obtain the same optimal solution.
Simulation examples are presented to validate the effectiveness of the proposed
algorithms.
|
0806.1397
|
The Improvement of the Bound on Hash Family
|
cs.IT math.IT
|
In this paper, we study the bounds on three kinds of hash families using the
Singleton bound. For the $\epsilon$-$U(N; n, m)$ hash family, in the case of
$n>m^2>1$ and $1\geq\epsilon\geq \epsilon_1(n, m)$, we show that the new bound
is better. For the $\epsilon$-$\bigtriangleup U(N; n, m)$ hash family, in the
case of $n>m>1$ and $1\geq\epsilon\geq\epsilon_3(n,m)$, the new bound is
better. For the $\epsilon$-$SU(N; n, m)$ hash family, in the case of $n>2^m>2$
and $1\geq\epsilon\geq \epsilon_4(n, m)$, we show that the new bound is better.
|
0806.1439
|
Dynamic Network of Concepts from Web-Publications
|
cs.IT math.IT
|
The network, the nodes of which are concepts (people's names, companies'
names, etc.), extracted from web-publications, is considered. A working
algorithm for extracting such concepts is presented. Edges of the network
under consideration carry the reference frequency, which reflects how many
times the concepts corresponding to the nodes are mentioned in the same
documents. Web-documents published within a period of time together form an
information flow, which defines the dynamics of the network studied. The
stability of the network's structure as the number of web-publications
constituting its formation basis increases is discussed.
|
0806.1446
|
Fast Wavelet-Based Visual Classification
|
cs.CV
|
We investigate a biologically motivated approach to fast visual
classification, directly inspired by the recent work of Serre et al.
Specifically, trading-off biological accuracy for computational efficiency, we
explore using wavelet and grouplet-like transforms to parallel the tuning of
visual cortex V1 and V2 cells, alternated with max operations to achieve scale
and translation invariance. A feature selection procedure is applied during
learning to accelerate recognition. We introduce a simple attention-like
feedback mechanism, significantly improving recognition and robustness in
multiple-object scenes. In experiments, the proposed algorithm achieves or
exceeds state-of-the-art success rates on object recognition, texture and
satellite image classification, language identification and sound
classification.
|
0806.1549
|
Bits through ARQs
|
cs.IT math.IT
|
A fundamental problem in dynamic frequency reuse is that the cognitive radio
is ignorant of the amount of interference it inflicts on the primary license
holder. A model for such a situation is proposed and analyzed. The primary
sends packets across an erasure channel and employs simple ACK/NAK feedback
(ARQs) to retransmit erased packets. Furthermore, its erasure probabilities are
influenced by the cognitive radio's activity. While the cognitive radio does
not know these interference characteristics, it can eavesdrop on the primary's
ARQs. The model leads to strategies in which the cognitive radio adaptively
adjusts its input based on the primary's ARQs, thereby guaranteeing that the
primary exceeds a target packet rate. A relatively simple strategy whereby the
cognitive radio transmits only when the primary's empirical packet rate exceeds
a threshold is shown to have interesting universal properties in the sense that
for unknown time-varying interference characteristics, the primary is
guaranteed to meet its target rate. Furthermore, a more intricate version of
this strategy is shown to be capacity-achieving for the cognitive radio when
the interference characteristics are time-invariant.
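A minimal sketch of the threshold strategy described above, under simplifying assumptions (slotted time, the cognitive radio overhears a 0/1 NAK/ACK bit per slot, and estimates the primary's empirical packet rate over a sliding window; names and window logic are hypothetical, not the paper's exact protocol):

```python
def secondary_schedule(acks, target_rate, window=50):
    """Decide, slot by slot, whether the cognitive radio transmits:
    transmit only while the primary's empirical packet (ACK) rate over
    the recent window stays above the target rate.

    acks: list of 0/1 bits (NAK/ACK) overheard from the primary."""
    decisions = []
    for t in range(len(acks)):
        recent = acks[max(0, t - window):t]  # history up to, not including, slot t
        rate = sum(recent) / len(recent) if recent else 0.0
        decisions.append(rate > target_rate)  # stay silent when history is empty
    return decisions
```

Because the rule backs off whenever the observed rate dips below the target, the primary's rate guarantee holds without the cognitive radio knowing the interference characteristics, which is the universality property the abstract refers to.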
|
0806.1565
|
Competitive Design of Multiuser MIMO Systems based on Game Theory: A
Unified View
|
cs.IT cs.GT math.IT math.OC
|
This paper considers the noncooperative maximization of mutual information in
the Gaussian interference channel in a fully distributed fashion via game
theory. This problem has been studied in a number of papers during the past
decade for the case of frequency-selective channels. A variety of conditions
guaranteeing the uniqueness of the Nash Equilibrium (NE) and convergence of
many different distributed algorithms have been derived. In this paper we
provide a unified view of the state-of-the-art results, showing that most of
the techniques proposed in the literature to study the game, even though
apparently different, can be unified using our recent interpretation of the
waterfilling operator as a projection onto a proper polyhedral set. Based on
this interpretation, we then provide a mathematical framework, useful to derive
a unified set of sufficient conditions guaranteeing the uniqueness of the NE
and the global convergence of waterfilling based asynchronous distributed
algorithms.
The proposed mathematical framework is also instrumental to study the
extension of the game to the more general MIMO case, for which only few results
are available in the current literature. The resulting algorithm is, similarly
to the frequency-selective case, an iterative asynchronous MIMO waterfilling
algorithm. The proof of convergence hinges again on the interpretation of the
MIMO waterfilling as a matrix projection, which is the natural generalization
of our results obtained for the waterfilling mapping in the frequency-selective
case.
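The waterfilling operator at the heart of these algorithms can be sketched as follows. This computes the classical allocation p_k = max(0, mu - n_k) with sum_k p_k = P by bisection on the water level mu, rather than through the projection reformulation used in the paper:

```python
import numpy as np

def waterfill(noise, total_power, iters=60):
    """Waterfilling power allocation: p_k = max(0, mu - noise_k),
    with the water level mu chosen so that sum_k p_k = total_power.
    mu is found by bisection, since the total allocated power is
    continuous and nondecreasing in mu."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > total_power:
            hi = mu  # water level too high: shrink it
        else:
            lo = mu  # too low: raise it
    return np.maximum(mu - noise, 0.0)
```

In the game-theoretic setting of the paper, each user applies such a map to its own interference-plus-noise levels, and convergence of the resulting asynchronous iteration is what the projection interpretation is used to prove.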
|
0806.1577
|
Co-ordinate Interleaved Distributed Space-Time Coding for
Two-Antenna-Relays Networks
|
cs.IT math.IT
|
Distributed space-time coding for wireless relay networks in which the source,
the destination and the relays have multiple antennas has been studied by Jing
and Hassibi. In this set-up, the transmit and the receive signals at different
antennas of the same relay are processed and designed independently, even
though the antennas are colocated. In this paper, a wireless relay network with
single antenna at the source and the destination and two antennas at each of
the R relays is considered. A new class of distributed space time block codes
called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) are
introduced where, in the first phase, the source transmits a T-length complex
vector to all the relays and in the second phase, at each relay, the in-phase
and quadrature component vectors of the received complex vectors at the two
antennas are interleaved and processed before forwarding them to the
destination. Compared to the scheme proposed by Jing-Hassibi, for $T \geq 4R$,
while providing the same asymptotic diversity order of 2R, the CIDSTC scheme is
shown to provide an asymptotic coding gain at the cost of a negligible increase
in the processing complexity at the relays. Moreover, for moderate and large
values of P, the CIDSTC scheme is shown to provide more diversity than the
scheme proposed by Jing-Hassibi. CIDSTCs are shown to be fully diverse provided the
information symbols take value from an appropriate multi-dimensional signal
set.
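The interleaving step at the relay can be sketched as follows, under the simplifying assumption that the relay simply swaps the quadrature components between the vectors received on its two antennas (the scheme's actual relay processing matrices are omitted; the function name is hypothetical):

```python
def coordinate_interleave(z1, z2):
    """Coordinate interleaving across the relay's two antennas:
    from received vectors z1, z2, forward (Re z1 + j Im z2) on one
    antenna and (Re z2 + j Im z1) on the other. A minimal sketch of
    the CIDSTC interleaving idea."""
    y1 = [complex(a.real, b.imag) for a, b in zip(z1, z2)]
    y2 = [complex(b.real, a.imag) for a, b in zip(z1, z2)]
    return y1, y2
```

The point of coupling the in-phase and quadrature components of the two antennas is that the signals are no longer processed independently per antenna, which is what yields the coding gain over the Jing-Hassibi construction.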
|
0806.1636
|
Data-Complexity of the Two-Variable Fragment with Counting Quantifiers
|
cs.LO cs.AI cs.CC
|
The data-complexity of both satisfiability and finite satisfiability for the
two-variable fragment with counting is NP-complete; the data-complexity of both
query-answering and finite query-answering for the two-variable guarded
fragment with counting is co-NP-complete.
|
0806.1640
|
Toward a combination rule to deal with partial conflict and specificity
in belief functions theory
|
cs.AI
|
We present and discuss a mixed conjunctive and disjunctive rule, a
generalization of conflict repartition rules, and a combination of these two
rules. In the belief functions theory, one of the major problems is the
conflict repartition highlighted by the famous Zadeh example. To date, many
combination rules have been proposed in order to solve this problem. Moreover,
it can be important to consider the specificity of the experts' responses. In
recent years, several unification rules have been proposed.
We have shown in our previous works the interest of the proportional conflict
redistribution rule. We propose here a mixed combination rule following the
proportional conflict redistribution rule modified by a discounting procedure.
This rule generalizes many combination rules.
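For concreteness, here is a compact sketch of the proportional conflict redistribution rule (PCR5) for two basic belief assignments, on which the proposed mixed rule builds. Focal elements are represented as frozensets over the frame; the partial conflict of each incompatible pair is redistributed to its two sources in proportion to their masses:

```python
from itertools import product

def pcr5(m1, m2):
    """Conjunctive combination of two basic belief assignments with
    proportional conflict redistribution (PCR5). m1, m2 map frozenset
    focal elements to masses."""
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            out[C] = out.get(C, 0.0) + a * b
        else:
            # redistribute the partial conflict a*b back to A and B,
            # proportionally to their masses
            out[A] = out.get(A, 0.0) + a * a * b / (a + b)
            out[B] = out.get(B, 0.0) + b * b * a / (a + b)
    return out
```

On Zadeh's example (m1 almost certain of x, m2 almost certain of y, both giving small mass to z), this avoids the counter-intuitive outcome of Dempster's rule, which would assign all mass to z.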
|
0806.1659
|
Bounds on the Sum Capacity of Synchronous Binary CDMA Channels
|
cs.IT math.IT
|
In this paper, we obtain a family of lower bounds for the sum capacity of
Code Division Multiple Access (CDMA) channels assuming binary inputs and binary
signature codes in the presence of additive noise with an arbitrary
distribution. The envelope of this family gives a relatively tight lower bound
in terms of the number of users, spreading gain and the noise distribution. The
derivation methods for the noiseless and the noisy channels are different but
when the noise variance goes to zero, the noisy channel bound approaches the
noiseless case. The behavior of the lower bound shows that for small noise
power, the number of users can be much more than the spreading gain without any
significant loss of information (overloaded CDMA). A conjectured upper bound is
also derived under the usual assumption that the users send out equally likely
binary bits in the presence of additive noise with an arbitrary distribution.
As the noise level increases, and/or, the ratio of the number of users and the
spreading gain increases, the conjectured upper bound approaches the lower
bound. We have also derived asymptotic limits of our bounds that can be
compared to a formula that Tanaka obtained using techniques from statistical
physics; his bound is close to that of our conjectured upper bound for large
scale systems.
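In the noiseless case, the sum capacity with uniform binary inputs is simply the output entropy H(Sx), which can be computed by brute-force enumeration for small systems. The sketch below assumes a real signature matrix S and ±1 inputs; it is an illustration of the quantity being bounded, not the paper's bounding technique:

```python
import numpy as np
from collections import Counter
from math import log2

def noiseless_sum_rate(S):
    """Noiseless sum capacity H(S x) in bits, for uniform binary (+/-1)
    inputs x and signature matrix S, by exhaustive enumeration.
    Feasible only for a small number of users n = S.shape[1]."""
    n = S.shape[1]
    counts = Counter()
    for i in range(2 ** n):
        x = np.array([1 if (i >> k) & 1 else -1 for k in range(n)])
        counts[tuple(S @ x)] += 1  # tally each distinct output vector
    total = 2 ** n
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

When the map x -> Sx is injective on the binary cube, this equals n bits, i.e. no information is lost; the overloaded-CDMA observation in the abstract is that this can hold (or nearly hold) even when the number of users exceeds the spreading gain.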
|
0806.1796
|
Evaluation for Uncertain Image Classification and Segmentation
|
cs.CV cs.AI
|
Each year, numerous segmentation and classification algorithms are invented
or reused to solve problems where machine vision is needed. Generally, the
efficiency of these algorithms is compared against the results given by one or
many human experts. However, in many situations, the location of the real
boundaries of the objects as well as their classes are not known with certainty
by the human experts. Furthermore, only one aspect of the segmentation and
classification problem is generally evaluated. In this paper we present a new
evaluation method for the classification and segmentation of images, where we take
into account both the classification and segmentation results as well as the
level of certainty given by the experts. As a concrete example of our method,
we evaluate an automatic seabed characterization algorithm based on sonar
images.
|