| id | title | categories | abstract |
|---|---|---|---|
1003.4830
|
Limits of Commutativity on Abstract Data Types
|
cs.DB
|
We present some formal properties of (symmetrical) commutativity, the major
criterion used in transactional systems, which allow us to fully understand its
advantages and disadvantages. The main result is that commutativity is subject
to the same limitation as compatibility for arbitrary objects. However,
commutativity also has a number of attractive properties, one of which is
related to recovery and, to our knowledge, has not been exploited in the
literature. Advantages and disadvantages are illustrated on abstract data types
of interest. We also show how limits of commutativity have been circumvented,
which gives guidelines for doing so (or not!).
|
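The commutativity criterion above can be illustrated on a toy abstract data type. The following is a hypothetical sketch (the counter ADT and the `commute` helper are illustrative, not taken from the paper): two operations commute when applying them in either order yields both the same final state and the same return values, which is why a read conflicts with an increment while increment and decrement do not.

```python
# Illustrative sketch: commutativity-based conflict detection on a
# counter ADT. Two operations commute (symmetrically) if applying them
# in either order yields the same state AND the same return values.

def apply(op, state):
    """Apply an operation to a counter state; return (new_state, result)."""
    if op == "inc":
        return state + 1, None
    if op == "dec":
        return state - 1, None
    if op == "read":
        return state, state
    raise ValueError(op)

def commute(op1, op2, state):
    """Check symmetrical commutativity of op1/op2 from a given state."""
    s_a, r1a = apply(op1, state)
    s_a, r2a = apply(op2, s_a)
    s_b, r2b = apply(op2, state)
    s_b, r1b = apply(op1, s_b)
    return s_a == s_b and r1a == r1b and r2a == r2b

# inc/dec commute (same state, no results); read/inc do not, because
# the value returned by read depends on the order.
print(commute("inc", "dec", 5))   # → True
print(commute("read", "inc", 5))  # → False
```

A scheduler using this relation can admit concurrent `inc`/`dec` operations on the same counter while serializing any pair involving `read`.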
1003.4831
|
Ball on a beam: stabilization under saturated input control with large
basin of attraction
|
cs.RO cs.SY physics.med-ph
|
This article is devoted to the stabilization of two underactuated planar
systems, the well-known straight beam-and-ball system and an original circular
beam-and-ball system. The feedback control for each system is designed, using
the Jordan form of its model, linearized near the unstable equilibrium. The
limits on the voltage, fed to the motor, are taken into account explicitly. The
straight beam-and-ball system has one unstable mode in the motion near the
equilibrium point. The proposed control law ensures that the basin of
attraction coincides with the controllability domain. The circular
beam-and-ball system has two unstable modes near the equilibrium point.
Therefore, this device, never considered in the past, is much more difficult to
control than the straight beam-and-ball system. The main contribution is to
propose a simple new control law which, by adjusting its gain parameters,
ensures that the basin of attraction can approach the controllability domain
arbitrarily closely in the linear case. For both nonlinear systems,
simulation results are presented to illustrate the efficiency of the designed
nonlinear control laws and to determine the basin of attraction.
|
1003.4836
|
Automating Fine Concurrency Control in Object-Oriented Databases
|
cs.DB
|
Several proposals have been made to provide concurrency control adapted to
object-oriented databases. However, most of these proposals miss the fact that
considering solely read and write access modes on instances may lead to less
parallelism than in relational databases! This paper copes with that issue, and
its advantages are numerous: (1) commutativity of methods is determined a priori
and automatically by the compiler, without measurable overhead, (2) run-time
checking of commutativity is as efficient as for compatibility, (3) inverse
operations need not be specified for recovery, (4) this scheme does not
preclude more sophisticated approaches, and, last but not least, (5) relational
and object-oriented concurrency control schemes with read and write access
modes are subsumed under this proposition.
|
1003.4852
|
Product Perfect Z2Z4-linear codes in Steganography
|
cs.IT math.IT
|
Product perfect codes have been proven to enhance the performance of the F5
steganographic method, whereas perfect Z2Z4-linear codes have been recently
introduced as an efficient way to embed data, conforming to the
+/-1-steganography. In this paper, we present two steganographic methods. On
the one hand, a generalization of product perfect codes is made. On the other
hand, this generalization is applied to perfect Z2Z4-linear codes. Finally, the
performance of the proposed methods is evaluated and compared with those of the
aforementioned schemes.
|
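The embedding mechanism behind F5-style methods can be sketched with binary matrix embedding using the [7,4] Hamming code. This is an illustration only, not the paper's product or Z2Z4-linear construction: three message bits are hidden in seven cover bits while changing at most one of them.

```python
import numpy as np

# Sketch of binary matrix embedding with the [7,4] Hamming code, the
# coding-theoretic idea underlying F5-type steganography. Not the
# paper's Z2Z4-linear scheme; purely illustrative.

# Parity-check matrix: column i (1-indexed) is the binary expansion of i.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def embed(cover, msg):
    """Flip at most one bit of `cover` so its syndrome equals `msg`."""
    x = cover.copy()
    d = ((H @ x) + msg) % 2                          # syndrome difference
    if d.any():
        pos = int("".join(map(str, d[::-1])), 2) - 1  # column matching d
        x[pos] ^= 1
    return x

def extract(stego):
    """The receiver reads the message as the stego vector's syndrome."""
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
print(extract(stego))                # → [1 0 1], the embedded message
print(int(np.sum(cover != stego)))   # → 1, a single changed cover bit
```

This achieves an embedding rate of 3 bits per 7 cover symbols with at most one modification, the efficiency gain that motivates using better codes.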
1003.4879
|
Large Constant Dimension Codes and Lexicodes
|
cs.IT math.IT
|
Constant dimension codes, with a prescribed minimum distance, have recently
found application in network coding. All the codewords in such a code are
subspaces of $\F_q^n$ with a given dimension. A computer search for large
constant dimension codes is usually inefficient since the search space domain
is extremely large. Even so, we found that some constant dimension lexicodes
are larger than other known codes. We show how to make the computer search more
efficient. In this context we present a formula for the computation of the
distance between two subspaces, not necessarily of the same dimension.
|
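The distance referred to above is the standard subspace distance, d(U, V) = dim(U) + dim(V) - 2·dim(U ∩ V); the paper's own formula is not reproduced here. A minimal sketch over GF(2) computes it via dim(U ∩ V) = dim(U) + dim(V) - dim(U + V), using an XOR basis for the rank.

```python
# Sketch: subspace distance d(U, V) = dim U + dim V - 2*dim(U ∩ V),
# computed over GF(2) using dim(U ∩ V) = dim U + dim V - dim(U + V).

def rank_gf2(M):
    """Rank over GF(2) of a list of 0/1 rows, via an XOR basis."""
    basis = {}                        # leading-bit position -> basis row
    for row in M:
        v = int("".join(map(str, row)), 2)
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v          # new pivot found
                break
            v ^= basis[h]             # reduce by existing pivot
    return len(basis)

def subspace_distance(U, V):
    """U, V: generating sets (rows) of subspaces of F_2^n."""
    dU, dV, dUV = rank_gf2(U), rank_gf2(V), rank_gf2(U + V)
    return 2 * dUV - dU - dV          # = dU + dV - 2*dim(U ∩ V)

# Two 2-dimensional subspaces of F_2^4 meeting in a line -> distance 2.
U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(U, V))  # → 2
```

Note that the formula needs no assumption that U and V have the same dimension, matching the abstract's remark.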
1003.4894
|
La repr\'esentation formelle des concepts spatiaux dans la langue
|
cs.CL
|
In this chapter, we assume that systematically studying spatial markers
semantics in language provides a means to reveal fundamental properties and
concepts characterizing conceptual representations of space. We propose a
formal system accounting for the properties highlighted by the linguistic
analysis, and we use these tools for representing the semantic content of
several spatial relations of French. The first part presents a semantic
analysis of the expression of space in French aiming at describing the
constraints that formal representations have to take into account. In the
second part, after presenting the structure of our formal system, we set out
its components. A commonsense geometry is sketched out and several functional
and pragmatic spatial concepts are formalized. We pay special attention to
showing that these concepts are well suited to representing the semantic
content of several prepositions of French ('sur' (on), 'dans' (in), 'devant'
(in front of), 'au-dessus' (above)), and in illustrating the inferential
adequacy of these representations.
|
1003.4898
|
Les entit\'es spatiales dans la langue : \'etude descriptive, formelle
et exp\'erimentale de la cat\'egorisation
|
cs.CL
|
While previous linguistic and psycholinguistic research on space has mainly
analyzed spatial relations, the studies reported in this paper focus on how
language distinguishes among spatial entities. Descriptive and experimental
studies first propose a classification of entities, which accounts for both
static and dynamic space, has some cross-linguistic validity, and underlies
adults' cognitive processing. Formal and computational analyses then introduce
theoretical elements aiming at modelling these categories, while fulfilling
various properties of formal ontologies (generality, parsimony, coherence...).
This formal framework accounts, in particular, for functional dependences among
entities underlying some part-whole descriptions. Finally, developmental
research shows that language-specific properties have a clear impact on how
children talk about space. The results suggest some cross-linguistic
variability in children's spatial representations from an early age onwards,
bringing into question models in which general cognitive capacities are the
only determinants of spatial cognition during the course of development.
|
1003.4944
|
Incorporating Side Information in Probabilistic Matrix Factorization
with Gaussian Processes
|
stat.ML cs.LG
|
Probabilistic matrix factorization (PMF) is a powerful method for modeling
data associated with pairwise relationships, finding use in collaborative
filtering, computational biology, and document analysis, among other areas. In
many domains, there is additional information that can assist in prediction.
For example, when modeling movie ratings, we might know when the rating
occurred, where the user lives, or what actors appear in the movie. It is
difficult, however, to incorporate this side information into the PMF model. We
propose a framework for incorporating side information by coupling together
multiple PMF problems via Gaussian process priors. We replace scalar latent
features with functions that vary over the space of side information. The GP
priors on these functions require them to vary smoothly and share information.
We successfully use this new method to predict the scores of professional
basketball games, where side information about the venue and date of the game
are relevant for the outcome.
|
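The base model being extended can be sketched in a few lines. The following is plain probabilistic matrix factorization fit by gradient descent on a synthetic low-rank matrix, *without* the paper's Gaussian-process side-information coupling; the sizes, learning rate, and regularization weight are illustrative choices.

```python
import numpy as np

# Minimal PMF sketch (MAP estimate by gradient descent), without the
# GP side-information coupling proposed in the paper: fit R ≈ U V^T on
# the observed entries of a synthetic low-rank rating matrix.

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 2
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_items))
mask = rng.random(R.shape) < 0.8           # which entries are observed

U = 0.1 * rng.normal(size=(n_users, k))    # latent user features
V = 0.1 * rng.normal(size=(n_items, k))    # latent item features
lr, lam = 0.02, 1e-3                       # step size, Gaussian-prior weight
for _ in range(5000):
    E = mask * (R - U @ V.T)               # residual on observed cells only
    U += lr * (E @ V - lam * U)
    V += lr * (E.T @ U - lam * V)

rmse = float(np.sqrt((mask * (R - U @ V.T) ** 2).sum() / mask.sum()))
print(round(rmse, 4))                      # near zero on this synthetic data
```

The paper's extension replaces each scalar latent feature with a function of the side information, tied together by a GP prior, rather than the fixed vectors U and V above.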
1003.4972
|
Quickest Time Herding and Detection for Optimal Social Learning
|
cs.IT math.IT math.OC physics.soc-ph
|
This paper considers social learning amongst rational agents (for example,
sensors in a network). We consider three models of social learning in
increasing order of sophistication. In the first model, based on its private
observation of a noisy underlying state process, each agent selfishly optimizes
its local utility and broadcasts its action. This protocol leads to a herding
behavior where the agents eventually choose the same action irrespective of
their observations. We then formulate a second more general model where each
agent is benevolent and chooses its sensor-mode to optimize a social welfare
function to facilitate social learning. Using lattice programming and
stochastic orders, it is shown that the optimal decision each agent makes is
characterized by a switching curve on the space of Bayesian distributions. We
then present a third more general model where social learning takes place to
achieve quickest time change detection. Both geometric and phase-type change
time distributions are considered. It is proved that the optimal decision is
again characterized by a switching curve. We present a stochastic approximation
(adaptive filtering) algorithm to estimate this switching curve. Finally, we
present extensions of the social learning model in a changing world (Markovian
target) where agents learn in multiple iterations. By using Blackwell
stochastic dominance, we give conditions under which myopic decisions are
optimal. We also analyze the effect of target dynamics on the social welfare
cost.
|
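The herding behavior of the first model can be illustrated with a classic sequential-learning sketch. This is a hypothetical toy (not the paper's POMDP formulation): agents with noisy private signals about a binary state act in sequence, each combining the public belief with its own signal; once the public log-likelihood ratio is strong enough, actions stop revealing signals and every later agent herds.

```python
import numpy as np

# Toy sketch of herding in sequential social learning. Agents act on
# public belief + private signal; only actions are broadcast. Once the
# public log-likelihood ratio exceeds one signal's worth of evidence,
# actions become uninformative and the belief freezes: a cascade.

rng = np.random.default_rng(3)
state = 1                                  # true underlying state
p = 0.7                                    # private signal accuracy
step = np.log(p / (1 - p))                 # log-likelihood step per signal

public = 0.0                               # public log-likelihood ratio
actions = []
for _ in range(60):
    signal = state if rng.random() < p else 1 - state
    private = step if signal == 1 else -step
    s = public + private
    action = 1 if s > 0 else 0 if s < 0 else signal  # tie: follow own signal
    actions.append(action)
    # While |public| <= step the action still reveals the signal, so
    # observers can update; beyond that the belief no longer moves.
    if abs(public) <= step:
        public += step if action == 1 else -step

print(actions[-10:])   # constant tail: the herd ignores private signals
```

All agents eventually choose the same action regardless of their observations, which is exactly the pathology the paper's benevolent (social-welfare-optimizing) agents are designed to avoid.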
1003.4994
|
Weak Decoupling Duality and Quantum Identification
|
quant-ph cs.IT math.IT
|
If a quantum system is subject to noise, it is possible to perform quantum
error correction reversing the action of the noise if and only if no
information about the system's quantum state leaks to the environment. In this
article, we develop an analogous duality in the case that the environment
approximately forgets the identity of the quantum state, a weaker condition
satisfied by epsilon-randomizing maps and approximate unitary designs.
Specifically, we show that the environment approximately forgets quantum states
if and only if the original channel approximately preserves pairwise fidelities
of pure inputs, an observation we call weak decoupling duality. Using this
tool, we then go on to study the task of using the output of a channel to
simulate restricted classes of measurements on a space of input states. The
case of simulating measurements that test whether the input state is an
arbitrary pure state is known as equality testing or quantum identification. An
immediate consequence of weak decoupling duality is that the ability to perform
quantum identification cannot be cloned. We furthermore establish that the
optimal amortized rate at which quantum states can be identified through a
noisy quantum channel is equal to the entanglement-assisted classical capacity
of the channel, despite the fact that the task is quantum, not classical, and
entanglement-assistance is not allowed. In particular, this rate is strictly
positive for every non-constant quantum channel, including classical channels.
|
1003.5042
|
Local Popularity based Page Link Analysis
|
cs.IR
|
In this paper we introduce the concept of dynamic link pages. A web site/page
contains a number of links to other pages, but not all links are equally
important: some are visited frequently, others only rarely. In this scenario,
identifying the frequently used links and placing them in the top left corner
of the page will increase the user's satisfaction. This will reduce the time a
visitor spends on the page since, most of the time, the popular links are
presented in the visible part of the screen itself. Also, a site can be indexed
based on the popular links on that page, which will increase the efficiency of
the retrieval system. We present a model to display the popular links, and also
propose a method to increase the quality of the retrieval system.
|
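The local-popularity idea above reduces, at its core, to counting how often each outgoing link is actually followed and rendering the most popular links first. A minimal sketch, with made-up URLs and click data:

```python
from collections import Counter

# Sketch of local-popularity link ordering: count how often each
# outgoing link is followed, then render the most popular links first
# (i.e., in the top-left, visible part of the page). Illustrative data.

click_log = ["/news", "/about", "/news", "/products", "/news", "/products"]
links_on_page = ["/about", "/contact", "/news", "/products"]

counts = Counter(click_log)                        # local popularity per link
ordered = sorted(links_on_page, key=lambda l: -counts[l])
print(ordered)  # → ['/news', '/products', '/about', '/contact']
```

The same counts could also feed the page's index entry, so retrieval favors pages whose popular links match the query.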
1003.5056
|
Cubes convexes
|
cs.DB
|
In various approaches, data cubes are pre-computed in order to answer OLAP
queries efficiently. The notion of data cube has been adapted in various ways:
iceberg cubes, range cubes or differential cubes. In this paper, we introduce
the concept of convex cube, which captures all the tuples of a datacube
satisfying a constraint combination. It can be represented in a very compact
way in order to optimize both computation time and required storage space. The
convex cube is not an additional structure appended to the list of cube
variants; rather, we propose it as a unifying structure that we use to
characterize, in a simple, sound and homogeneous way, the other cited types of
cubes. Finally, we introduce the concept of emerging cube, which captures
significant trend inversions.
|
1003.5080
|
Transparent Anonymization: Thwarting Adversaries Who Know the Algorithm
|
cs.DB
|
Numerous generalization techniques have been proposed for privacy preserving
data publishing. Most existing techniques, however, implicitly assume that the
adversary knows little about the anonymization algorithm adopted by the data
publisher. Consequently, they cannot guard against privacy attacks that exploit
various characteristics of the anonymization mechanism. This paper provides a
practical solution to the above problem. First, we propose an analytical model
for evaluating disclosure risks, when an adversary knows everything in the
anonymization process, except the sensitive values. Based on this model, we
develop a privacy principle, transparent l-diversity, which ensures privacy
protection against such powerful adversaries. We identify three algorithms that
achieve transparent l-diversity, and verify their effectiveness and efficiency
through extensive experiments with real data.
|
1003.5097
|
Power Loading in Parallel Diversity Channels Based on Statistical
Channel Information
|
cs.IT math.IT
|
In this paper, we show that there exist arbitrarily many power allocation
schemes that achieve capacity in systems operating in parallel
channels comprised of single-input multiple-output (SIMO) Nakagami-m fading
subchannels when the number of degrees of freedom L (e.g., the number of
receive antennas) tends to infinity. Statistical waterfilling -- i.e.,
waterfilling using channel statistics rather than instantaneous channel
knowledge -- is one such scheme. We further prove that the convergence of
statistical waterfilling to the optimal power loading scheme is at least O(1/(L
log(L))), whereas convergence of other schemes is at worst O(1/log(L)). To
validate and demonstrate the practical use of our findings, we evaluate the
mutual information of example SIMO parallel channels using simulations as well
as new measured ultrawideband channel data.
|
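The baseline being generalized is classical waterfilling over parallel subchannels; the paper's "statistical waterfilling" applies the same rule to channel statistics rather than to the instantaneous gains used in this sketch. Powers satisfy p_i = max(0, mu - 1/g_i), with the water level mu chosen by bisection so the total power budget P is met exactly.

```python
import numpy as np

# Sketch of classical waterfilling across parallel subchannels with
# gains g: p_i = max(0, mu - 1/g_i), where the water level mu is found
# by bisection so that sum(p_i) = P. The paper's statistical variant
# would use channel statistics in place of g.

def waterfill(g, P, iters=60):
    lo, hi = 0.0, P + 1.0 / float(min(g))
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > P:
            hi = mu                     # too much power used: lower the level
        else:
            lo = mu                     # budget not exhausted: raise the level
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

g = np.array([2.0, 1.0, 0.25])     # subchannel power gains
p = waterfill(g, P=3.0)
print(np.round(p, 3))              # strongest channel gets the most power
print(round(float(p.sum()), 3))    # → 3.0, the budget is used exactly
```

On this example the weakest subchannel (gain 0.25) falls below the water level and is allocated no power at all.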
1003.5173
|
LEXSYS: Architecture and Implication for Intelligent Agent systems
|
cs.AI
|
LEXSYS (Legume Expert System) was a project conceived at IITA (International
Institute of Tropical Agriculture), Ibadan, Nigeria. It was initiated by COMBS
(Collaborative Group on Maize-Based Systems Research) in the 1990s. It was
meant to provide a general framework for characterizing on-farm testing for
technology design for sustainable cereal-based cropping systems. LEXSYS is not
a true expert system as the name would imply, but simply a user-friendly
information system. This work is an attempt to give a formal representation of
the existing system and then present areas where intelligent agents can be
applied.
|
1003.5212
|
Diversity-Multiplexing Tradeoff of Cooperative Communication with Linear
Network Coded Relays
|
cs.IT math.IT
|
Network coding and cooperative communication have received considerable
attention from the research community recently in order to mitigate the adverse
effects of fading in wireless transmissions and at the same time to achieve
high throughput and better spectral efficiency. In this work, we analyze a
network coding scheme for a cooperative communication setup with multiple
sources and destinations. The proposed protocol achieves the full diversity
order at the expense of a slightly reduced multiplexing rate compared to
existing schemes in the literature. We show that our scheme outperforms
conventional cooperation in terms of the diversity-multiplexing tradeoff.
|
1003.5249
|
Active Testing for Face Detection and Localization
|
cs.CV
|
We provide a novel search technique, which uses a hierarchical model and a
mutual information gain heuristic to efficiently prune the search space when
localizing faces in images. We show exponential gains in computation over
traditional sliding window approaches, while keeping similar performance
levels.
|
1003.5305
|
Rational Value of Information Estimation for Measurement Selection
|
cs.AI
|
Computing value of information (VOI) is a crucial task in various aspects of
decision-making under uncertainty, such as in meta-reasoning for search; in
selecting measurements to make, prior to choosing a course of action; and in
managing the exploration vs. exploitation tradeoff. Since such applications
typically require numerous VOI computations during a single run, it is
essential that VOI be computed efficiently. We examine the issue of anytime
estimation of VOI, as frequently it suffices to get a crude estimate of the
VOI, thus saving considerable computational resources. As a case study, we
examine VOI estimation in the measurement selection problem. Empirical
evaluation of the proposed scheme in this domain shows that computational
resources can indeed be significantly reduced, at little cost in expected
rewards achieved in the overall decision problem.
|
1003.5309
|
Gossip Algorithms for Distributed Signal Processing
|
cs.DC cs.IT cs.NI math.IT
|
Gossip algorithms are attractive for in-network processing in sensor networks
because they do not require any specialized routing, there is no bottleneck or
single point of failure, and they are robust to unreliable wireless network
conditions. Recently, there has been a surge of activity in the computer
science, control, signal processing, and information theory communities,
developing faster and more robust gossip algorithms and deriving theoretical
performance guarantees. This article presents an overview of recent work in the
area. We describe convergence rate results, which are related to the number of
transmitted messages and thus the amount of energy consumed in the network for
gossiping. We discuss issues related to gossiping over wireless links,
including the effects of quantization and noise, and we illustrate the use of
gossip algorithms for canonical signal processing tasks including distributed
estimation, source localization, and compression.
|
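The canonical task surveyed above, distributed averaging, can be simulated in a few lines: at each step a random pair of nodes replaces both of their values by the pairwise mean. The global average is invariant under every such exchange, and all values converge to it with no routing and no central coordinator.

```python
import numpy as np

# Sketch of randomized pairwise gossip for distributed averaging: each
# step, two random nodes average their values. The network mean is
# preserved exactly, and the spread of values contracts to consensus.

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=20)        # initial sensor measurements
target = float(x.mean())               # invariant under every gossip step

for _ in range(2000):
    i, j = rng.choice(len(x), size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])

print(abs(float(x.mean()) - target) < 1e-9)   # average unchanged
print(float(x.std()) < 1e-3)                  # consensus reached
```

The number of steps needed for a given accuracy is exactly the convergence-rate question the surveyed theoretical results address, since each step costs one transmitted message.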
1003.5320
|
The Video Genome
|
cs.CV
|
Fast evolution of Internet technologies has led to an explosive growth of
video data available in the public domain and created unprecedented challenges
in the analysis, organization, management, and control of such content. The
problems encountered in video analysis such as identifying a video in a large
database (e.g. detecting pirated content in YouTube), putting together video
fragments, finding similarities and common ancestry between different versions
of a video, have analogous counterpart problems in genetic research and
analysis of DNA and protein sequences. In this paper, we exploit the analogy
between genetic sequences and videos and propose an approach to video analysis
motivated by genomic research. Representing video information as video DNA
sequences and applying bioinformatic algorithms makes it possible to search,
match, and compare videos in large-scale databases. We show an application for
content-based metadata mapping between versions of annotated video.
|
1003.5325
|
What's in a Session: Tracking Individual Behavior on the Web
|
cs.HC cs.MA physics.soc-ph
|
We examine the properties of all HTTP requests generated by a thousand
undergraduates over a span of two months. Preserving user identity in the data
set allows us to discover novel properties of Web traffic that directly affect
models of hypertext navigation. We find that the popularity of Web sites -- the
number of users who contribute to their traffic -- lacks any intrinsic mean and
may be unbounded. Further, many aspects of the browsing behavior of individual
users can be approximated by log-normal distributions even though their
aggregate behavior is scale-free. Finally, we show that users' click streams
cannot be cleanly segmented into sessions using timeouts, affecting any attempt
to model hypertext navigation using statistics of individual sessions. We
propose a strictly logical definition of sessions based on browsing activity as
revealed by referrer URLs; a user may have several active sessions in their
click stream at any one time. We demonstrate that applying a timeout to these
logical sessions affects their statistics to a lesser extent than a purely
timeout-based mechanism.
|
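The referrer-based definition of logical sessions can be sketched directly: each click joins the session that contains its referrer, and a click with no known referrer starts a new session, so several sessions can be active at once. The click data below is made up for illustration.

```python
# Sketch of referrer-based "logical sessions": a click joins the session
# containing its referrer; a click without a known referrer opens a new
# session. Interleaved sessions coexist in one click stream.

clicks = [
    ("t1", "a.com",  None),      # (time, url, referrer)
    ("t2", "b.com",  "a.com"),
    ("t3", "mail",   None),      # a second, interleaved session begins
    ("t4", "c.com",  "b.com"),
    ("t5", "mail/2", "mail"),
]

sessions = []                    # each session: the set of URLs seen so far
for _, url, ref in clicks:
    for s in sessions:
        if ref in s:
            s.add(url)
            break
    else:                        # no session contains the referrer
        sessions.append({url})

print(len(sessions))             # → 2
print(sorted(sessions[0]))       # → ['a.com', 'b.com', 'c.com']
```

Unlike a timeout rule, this segmentation keeps the two interleaved browsing threads separate even though their clicks alternate in time.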
1003.5327
|
Agents, Bookmarks and Clicks: A topical model of Web traffic
|
cs.NI cs.IR cs.MA physics.soc-ph
|
Analysis of aggregate and individual Web traffic has shown that PageRank is a
poor model of how people navigate the Web. Using the empirical traffic patterns
generated by a thousand users, we characterize several properties of Web
traffic that cannot be reproduced by Markovian models. We examine both
aggregate statistics capturing collective behavior, such as page and link
traffic, and individual statistics, such as entropy and session size. No model
currently explains all of these empirical observations simultaneously. We show
that all of these traffic patterns can be explained by an agent-based model
that takes into account several realistic browsing behaviors. First, agents
maintain individual lists of bookmarks (a non-Markovian memory mechanism) that
are used as teleportation targets. Second, agents can retreat along visited
links, a branching mechanism that also allows us to reproduce behaviors such as
the use of a back button and tabbed browsing. Finally, agents are sustained by
visiting novel pages of topical interest, with adjacent pages being more
topically related to each other than distant ones. This modulates the
probability that an agent continues to browse or starts a new session, allowing
us to recreate heterogeneous session lengths. The resulting model is capable of
reproducing the collective and individual behaviors we observe in the empirical
data, reconciling the narrowly focused browsing patterns of individual users
with the extreme heterogeneity of aggregate traffic measurements. This result
allows us to identify a few salient features that are necessary and sufficient
to interpret the browsing patterns observed in our data. In addition to the
descriptive and explanatory power of such a model, our results may lead the way
to more sophisticated, realistic, and effective ranking and crawling
algorithms.
|
1003.5345
|
Bounds for the Sum Capacity of Binary CDMA Systems in Presence of
Near-Far Effect
|
cs.IT math.IT
|
In this paper we estimate the sum capacity of a binary CDMA system in the
presence of the near-far effect. We model the near-far effect as a random
variable that is multiplied by the users' binary data before entering the noisy
channel. We find a lower bound and a conjectured upper bound for the sum
capacity in this situation. All the derivations are in the asymptotic case.
Simulations show that the lower bound in particular is very tight for typical
values of Eb/N0 and near-far effect. Also, we exploit our idea in conjunction
with Tanaka's formula [6], which also estimates the sum capacity of binary CDMA
systems with perfect power control.
|
1003.5350
|
An Improved Algorithm for Generating Database Transactions from
Relational Algebra Specifications
|
cs.DB cs.LO cs.PL
|
Alloy is a lightweight modeling formalism based on relational algebra. In
prior work with Fisler, Giannakopoulos, Krishnamurthi, and Yoo, we have
presented a tool, Alchemy, that compiles Alloy specifications into
implementations that execute against persistent databases. The foundation of
Alchemy is an algorithm for rewriting relational algebra formulas into code for
database transactions. In this paper we report on recent progress in improving
the robustness and efficiency of this transformation.
|
1003.5372
|
Learning Recursive Segments for Discourse Parsing
|
cs.CL
|
Automatically detecting discourse segments is an important preliminary step
towards full discourse parsing. Previous research on discourse segmentation
has relied on the assumption that elementary discourse units (EDUs) in a
document always form a linear sequence (i.e., they can never be nested).
Unfortunately, this assumption turns out to be too strong, since some theories
of discourse, such as SDRT, allow for nested discourse units. In this paper, we
present a simple approach to discourse segmentation that is able to produce
nested EDUs. Our approach builds on standard multi-class classification
techniques combined with a simple repairing heuristic that enforces global
coherence. Our system was developed and evaluated on the first round of
annotations provided by the French Annodis project (an ongoing effort to create
a discourse bank for French). Cross-validated on only 47 documents (1,445
EDUs), our system achieves encouraging performance results with an F-score of
73% for finding EDUs.
|
1003.5435
|
Image Compression and Watermarking scheme using Scalar Quantization
|
cs.CV cs.MM
|
This paper presents a new compression technique and image watermarking
algorithm based on Contourlet Transform (CT). For image compression, an energy
based quantization is used. Scalar quantization is explored for image
watermarking. Double filter bank structure is used in CT. The Laplacian Pyramid
(LP) is used to capture the point discontinuities, and then followed by a
Directional Filter Bank (DFB) to link point discontinuities. The coefficients
of down sampled low pass version of LP decomposed image are re-ordered in a
pre-determined manner and prediction algorithm is used to reduce entropy
(bits/pixel). In addition, the coefficients of CT are quantized based on the
energy in the particular band. The superiority of the proposed algorithm over JPEG is
observed in terms of reduced blocking artifacts. The results are also compared
with wavelet transform (WT). Superiority of CT to WT is observed when the image
contains more contours. The watermark image is embedded in the low pass image
of contourlet decomposition. The watermark can be extracted with minimum error.
In terms of PSNR, the visual quality of the watermarked image is exceptional.
The proposed algorithm is robust to many image attacks and suitable for
copyright protection applications.
|
1003.5455
|
Towards physical laws for software architecture
|
cs.SE cs.IR physics.data-an physics.soc-ph
|
Starting from the pioneering works on software architecture, valuable
guidelines have emerged to indicate how computer programs should be organized.
For example, the "separation of concerns" suggests splitting a program into
modules that overlap in functionality as little as possible. However, these
recommendations are mainly conceptual and thus hard to express in a
quantitative form. Hence software architecture relies on the individual
experience and skill of the designers rather than on quantitative laws. In this
article I apply the methods developed for the classification of information on
the World-Wide-Web to study the organization of Open Source programs in an
attempt to establish the statistical laws governing software architecture.
|
1003.5623
|
Spoken Language Identification Using Hybrid Feature Extraction Methods
|
cs.SD cs.LG
|
This paper introduces and motivates the use of a hybrid robust feature
extraction technique for spoken language identification (LID) systems. The
speech recognizers use a parametric form of a signal to get the most important
distinguishable features of speech signal for recognition task. In this paper
Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction
coefficients (PLP) along with two hybrid features are used for language
Identification. Two hybrid features, Bark Frequency Cepstral Coefficients
(BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) were
obtained from combination of MFCC and PLP. Two different classifiers, Vector
Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model
(GMM) were used for classification. The experiment shows better identification
rate using hybrid feature extraction techniques compared to conventional
feature extraction methods. BFCC has shown better performance than MFCC with
both classifiers. RPLP along with GMM has shown best identification performance
among all feature extraction techniques.
|
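The DTW step used alongside vector quantization in the classifier above is simple enough to sketch in full: it computes the cheapest monotone alignment cost between two feature sequences, so a time-stretched copy of a template matches cheaply while a different sequence does not. One-dimensional features are used here for clarity; in an LID system each frame would be an MFCC/PLP vector.

```python
# Sketch of Dynamic Time Warping (DTW): minimal monotone alignment cost
# between two sequences, via the standard O(n*m) dynamic program.

def dtw(a, b):
    """DTW distance between 1-D sequences a and b (absolute-value cost)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: skip in a, skip in b, or match both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-stretched copy aligns at zero cost; a reversed sequence does not.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
print(dtw([1, 2, 3], [3, 2, 1]))
```

Classification then amounts to assigning a test utterance to the language whose VQ templates give the smallest DTW distance.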
1003.5627
|
Wavelet-Based Mel-Frequency Cepstral Coefficients for Speaker
Identification using Hidden Markov Models
|
cs.SD cs.LG
|
To improve the performance of speaker identification systems, an effective
and robust method is proposed to extract speech features, capable of operating
in noisy environment. Based on the time-frequency multi-resolution property of
wavelet transform, the input speech signal is decomposed into various frequency
channels. For capturing the characteristic of the signal, the Mel-Frequency
Cepstral Coefficients (MFCCs) of the wavelet channels are calculated. Hidden
Markov Models (HMMs) were used for the recognition stage as they give better
recognition for the speaker's features than Dynamic Time Warping (DTW).
Comparison of the proposed approach with the MFCCs conventional feature
extraction method shows that the proposed method not only effectively reduces
the influence of noise, but also improves recognition. A recognition rate of
99.3% was obtained using the proposed feature extraction technique compared to
98.7% using the MFCCs. When the test patterns were corrupted by additive white
Gaussian noise with 20 dB S/N ratio, the recognition rate was 97.3% using the
proposed method compared to 93.3% using the MFCCs.
|
1003.5648
|
The Error-Pattern-Correcting Turbo Equalizer
|
cs.IT math.IT
|
The error-pattern correcting code (EPCC) is incorporated in the design of a
turbo equalizer (TE) with the aim of correcting the dominant error events of the
inter-symbol interference (ISI) channel at the output of its matching Viterbi
detector. By targeting the low Hamming-weight interleaved errors of the outer
convolutional code, which are responsible for low Euclidean-weight errors in
the Viterbi trellis, the turbo equalizer with an error-pattern correcting code
(TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the
conventional non-precoded TE, especially for high rate applications. A
maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for
a generalized two-tap ISI channel, in order to study TE-EPCC's signal-to-noise
ratio (SNR) gain for various channel conditions and design parameters. In
addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is
compared to demonstrate the present TE's superiority for short interleaver
lengths and high coding rates.
|
1003.5693
|
An Iteratively Decodable Tensor Product Code with Application to Data
Storage
|
cs.IT math.IT
|
The error pattern correcting code (EPCC) can be constructed to provide a
syndrome decoding table targeting the dominant error events of an inter-symbol
interference channel at the output of the Viterbi detector. For the size of the
syndrome table to be manageable and the list of possible error events to be
reasonable in size, the codeword length of EPCC needs to be short enough.
However, the rate of such a short length code will be too low for hard drive
applications. To accommodate the required large redundancy, it is possible to
record only a highly compressed function of the parity bits of EPCC's tensor
product with a symbol correcting code. In this paper, we show that the proposed
tensor error-pattern correcting code (T-EPCC) is linear-time encodable and also
devise a low-complexity soft iterative decoding algorithm for EPCC's tensor
product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that
T-EPCC-qLDPC achieves performance comparable to that of single-level qLDPC with a
1/2 KB sector at 50% reduction in decoding complexity. Moreover, 1 KB
T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same
decoder complexity.
|
1003.5749
|
Etiqueter un corpus oral par apprentissage automatique \`a l'aide de
connaissances linguistiques
|
cs.LG cs.CL
|
Thanks to the Eslo1 ("Enqu\^ete sociolinguistique d'Orl\'eans", i.e.
"Sociolinguistic Inquiry of Orl\'eans") campaign, a large oral corpus has been
gathered and transcribed in textual format. The purpose of the work presented
here is to associate a morpho-syntactic label to each unit of this corpus. To
this aim, we have first studied the specificities of the necessary labels, and
their various possible levels of description. This study has led to a new,
original hierarchical structuring of labels. Then, considering that our new
set of labels differed from those used by every available software package,
and that these packages are usually not well suited to oral data, we built a
new labeling tool using a machine-learning approach, from data labeled by
Cordial and corrected by hand. We applied linear CRFs (Conditional Random
Fields), trying to take the best possible advantage of the linguistic
knowledge that was used to define the set of labels. We obtain an accuracy
between 85% and 90%, depending on the parameters used.
|
1003.5771
|
Analysis of a CSMA-Based Wireless Network: Feasible Throughput Region
and Power Consumption
|
cs.GT cs.IT math.IT
|
We analytically study a carrier sense multiple access (CSMA)-based network.
In the network, the nodes have their own average throughput demands for
transmission to a common base station. The CSMA is based on the request-to-send
(RTS)/clear-to-send (CTS) handshake mechanism. Each node individually chooses
its probability of transmitting an RTS packet, which specifies the length of
its requested data transmission period. The RTS packets transmitted by
different nodes in the same time slot interfere with one another, and compete
to be received by the base station. If a node's RTS has a received
signal-to-interference-plus-noise ratio (SINR) higher than the capture ratio,
it will be successfully received. The node will then be granted the data transmission
period. The transmission probabilities of RTS packets of all nodes will
determine the average throughput and power consumption of each node. The set of
all possible throughput demands of nodes that can be supported by the network
is called the feasible throughput region. We characterize the feasible
throughput region and provide an upper bound on the total power consumption for
any throughput demands in the feasible throughput region. The upper bound
corresponds to one of three points in the feasible throughput region depending
on the fraction of time occupied by the RTS packets.
|
1003.5821
|
Tuning CLD Maps
|
cs.CV
|
The Coherence Length Diagram and the related maps have been shown to
represent a useful tool for image analysis. Setting threshold parameters is one
of the most important issues when dealing with such applications, as they
affect both the computability, which is outlined by the support map, and the
appearance of the coherence length diagram itself and of defect maps. A coupled
optimization analysis, returning a range for the basic (saturation) threshold,
and a histogram based method, yielding suitable values for a desired map
appearance, are proposed for an effective control of the analysis process.
|
1003.5861
|
Robust multi-camera view face recognition
|
cs.CV
|
This paper presents multi-appearance fusion of Principal Component Analysis
(PCA) and generalization of Linear Discriminant Analysis (LDA) for multi-camera
view offline face recognition (verification) system. The generalization of LDA
has been extended to establish correlations between the face classes in the
transformed representation; this is called the canonical covariate. The proposed
system uses Gabor filter banks to characterize facial features by spatial
frequency, spatial locality, and orientation, compensating for the variations
across face instances caused by illumination, pose, and facial expression
changes. Convolving the Gabor filter bank with face images produces
Gabor face representations with high dimensional feature vectors. PCA and
canonical covariate are then applied on the Gabor face representations to
reduce the high dimensional feature spaces into low dimensional Gabor
eigenfaces and Gabor canonical faces. Reduced eigenface vector and canonical
face vector are fused together using a weighted mean fusion rule. Finally,
support vector machines (SVMs) are trained with the augmented fused feature
set and perform the recognition task. The system has been evaluated on the
UMIST face database, which consists of multiview faces. The experimental results demonstrate
the efficiency and robustness of the proposed system for multi-view face images
with high recognition rates. Complexity analysis of the proposed system is also
presented at the end of the experimental results.
|
1003.5865
|
Offline Signature Identification by Fusion of Multiple Classifiers using
Statistical Learning Theory
|
cs.CV cs.LG
|
This paper uses Support Vector Machines (SVM) to fuse multiple classifiers
for an offline signature system. From the signature images, global and local
features are extracted and the signatures are verified with the help of
Gaussian empirical rule, Euclidean and Mahalanobis distance based classifiers.
SVM is used to fuse the matching scores of these matchers. Finally, query
signatures are recognized by comparing them with all signatures in the
database. The proposed system is tested on a signature database containing
5400 offline signatures of 600 individuals, and the results are found to be promising.
|
1003.5886
|
Development of a multi-user handwriting recognition system using
Tesseract open source OCR engine
|
cs.CV
|
The objective of the paper is to recognize handwritten samples of lower case
Roman script using Tesseract open source Optical Character Recognition (OCR)
engine under Apache License 2.0. Handwritten data samples containing isolated
and free-flow text were collected from different users. Tesseract is trained
with user-specific data samples of both the categories of document pages to
generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated and free-flow handwritten test samples
collected from the designated user. On a three user model, the system is
trained with 1844, 1535 and 1113 isolated handwritten character samples
collected from three different users, and the performance is tested on 1133,
1186 and 1204 character samples, collected from the test sets of the three
users respectively. The user-specific character-level accuracies were obtained
as 87.92%, 81.53% and 65.71% respectively. The overall character-level accuracy
of the system is observed as 78.39%. The system fails to segment 10.96% of
characters and erroneously classifies 10.65% of characters on the overall dataset.
|
1003.5891
|
Recognition of Handwritten Roman Script Using Tesseract Open source OCR
Engine
|
cs.CV
|
In the present work, we have used Tesseract 2.01 open source Optical
Character Recognition (OCR) Engine under Apache License 2.0 for recognition of
handwriting samples of lower case Roman script. Handwritten isolated and
free-flow text samples were collected from multiple users. Tesseract is trained
to recognize user-specific handwriting samples of both the categories of
document pages. On a single user model, the system is trained with 1844
isolated handwritten characters and the performance is tested on 1133
characters, taken from the test set. The overall character-level accuracy of
the system is observed as 83.5%. The system fails to segment 5.56% of
characters and erroneously classifies 10.94% of characters.
|
1003.5893
|
Recognition of Handwritten Textual Annotations using Tesseract Open
Source OCR Engine for information Just In Time (iJIT)
|
cs.CV
|
The objective of the current work is to develop an Optical Character Recognition
(OCR) engine for information Just In Time (iJIT) system that can be used for
recognition of handwritten textual annotations of lower case Roman script.
Tesseract open source OCR engine under Apache License 2.0 is used to develop
user-specific handwriting recognition models, viz., the language sets, for the
said system, where each user is identified by a unique identification tag
associated with the digital pen. To generate the language set for any user,
Tesseract is trained with labeled handwritten data samples of isolated and
free-flow texts of Roman script, collected exclusively from that user. The
designed system is tested on five different language sets with free-flow
handwritten annotations as test samples. The system could successfully segment
and subsequently recognize 87.92%, 81.53%, 92.88%, 86.75% and 90.80% of the
handwritten characters in the test samples of five different users.
|
1003.5897
|
Development of a Multi-User Recognition Engine for Handwritten Bangla
Basic Characters and Digits
|
cs.CV
|
The objective of the paper is to recognize handwritten samples of basic
Bangla characters using Tesseract open source Optical Character Recognition
(OCR) engine under Apache License 2.0. Handwritten data samples containing
isolated Bangla basic characters and digits were collected from different
users. Tesseract is trained with user-specific data samples of document pages
to generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated basic Bangla handwritten test samples
collected from the designated users. On a three user model, the system is
trained with 919, 928 and 648 isolated handwritten character and digit samples
and the performance is tested on 1527, 14116 and 1279 character and digit
samples, collected from the test datasets of the three users respectively. The
user-specific character/digit recognition accuracies were obtained as 90.66%,
91.66% and 96.87% respectively. The overall basic character-level and
digit-level accuracies of the system are observed as 92.15% and 97.37%. The
system fails to segment 12.33% of characters and 15.96% of digits, and also
erroneously classifies 7.85% of characters and 2.63% of digits on the overall dataset.
|
1003.5898
|
Recognition of handwritten Roman Numerals using Tesseract open source
OCR engine
|
cs.CV
|
The objective of the paper is to recognize handwritten samples of Roman
numerals using Tesseract open source Optical Character Recognition (OCR)
engine. Tesseract is trained with data samples of different persons to generate
one user-independent language model, representing the handwritten Roman
digit-set. The system is trained with 1226 digit samples collected from the
different users. The performance is tested on two different datasets, one
consisting of samples collected from the known users (those who prepared the
training data samples) and the other consisting of handwritten data samples of
unknown users. The overall recognition accuracy is obtained as 92.1% and 86.59%
on these test datasets respectively.
|
1003.5899
|
Geometric Algebra Model of Distributed Representations
|
cs.AI
|
The formalism based on geometric algebra (GA) is an alternative to distributed representation models
developed so far --- Smolensky's tensor product, Holographic Reduced
Representations (HRR) and Binary Spatter Code (BSC). Convolutions are replaced
by geometric products, interpretable in terms of geometry which seems to be the
most natural language for visualization of higher concepts. This paper recalls
the main ideas behind the GA model and investigates recognition test results
using both inner product and a clipped version of matrix representation. The
influence of accidental blade equality on recognition is also studied. Finally,
the efficiency of the GA model is compared to that of previously developed
models.
|
1003.5956
|
Unbiased Offline Evaluation of Contextual-bandit-based News Article
Recommendation Algorithms
|
cs.LG cs.AI cs.RO stat.ML
|
Contextual bandit algorithms have become popular for online recommendation
systems such as Digg, Yahoo! Buzz, and news recommendation in general.
\emph{Offline} evaluation of the effectiveness of new algorithms in these
applications is critical for protecting online user experiences but very
challenging due to their "partial-label" nature. Common practice is to create a
simulator which simulates the online environment for the problem at hand and
then run an algorithm against this simulator. However, creating the simulator
itself is often difficult, and modeling bias is usually unavoidably introduced.
In this paper, we introduce a \emph{replay} methodology for contextual bandit
algorithm evaluation. Different from simulator-based approaches, our method is
completely data-driven and very easy to adapt to different applications. More
importantly, our method can provide provably unbiased evaluations. Our
empirical results on a large-scale news article recommendation dataset
collected from Yahoo! Front Page conform well with our theoretical results.
Furthermore, comparisons between our offline replay and online bucket
evaluation of several contextual bandit algorithms show the accuracy and
effectiveness of our offline evaluation method.
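The replay idea described above can be sketched in a few lines; the function name, the event-tuple layout, and the uniformly-random-logging assumption here are illustrative, not the paper's exact interface:

```python
# Minimal replay-style evaluator: stream logged (context, logged_arm, reward)
# events and credit the candidate policy only on events where it picks the
# same arm the logging policy chose. Under uniformly random logging, the
# average credited reward is an unbiased estimate of the online reward.
def replay_evaluate(policy, logged_events):
    total, matched = 0.0, 0
    for context, logged_arm, reward in logged_events:
        if policy(context) == logged_arm:
            total += reward
            matched += 1
    return total / matched if matched else 0.0

# Toy log: arm 0 always pays 1.0, arm 1 pays 0.0.
log = [({}, 0, 1.0), ({}, 1, 0.0), ({}, 0, 1.0), ({}, 1, 0.0)]
estimate = replay_evaluate(lambda ctx: 0, log)  # credited only on arm-0 events
```

Because only matching events are kept, the method is entirely data-driven: no model of the environment is fitted, which is what removes the simulator's modeling bias.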
|
1003.5966
|
Integer-Forcing Linear Receivers
|
cs.IT math.IT
|
Linear receivers are often used to reduce the implementation complexity of
multiple-antenna systems. In a traditional linear receiver architecture, the
receive antennas are used to separate out the codewords sent by each transmit
antenna, which can then be decoded individually. Although easy to implement,
this approach can be highly suboptimal when the channel matrix is near
singular. This paper develops a new linear receiver architecture that uses the
receive antennas to create an effective channel matrix with integer-valued
entries. Rather than attempting to recover transmitted codewords directly, the
decoder recovers integer combinations of the codewords according to the entries
of the effective channel matrix. The codewords are all generated using the same
linear code which guarantees that these integer combinations are themselves
codewords. Provided that the effective channel is full rank, these integer
combinations can then be digitally solved for the original codewords. This
paper focuses on the special case where there is no coding across transmit
antennas and no channel state information at the transmitter(s), which
corresponds either to a multi-user uplink scenario or to single-user V-BLAST
encoding. In this setting, the proposed integer-forcing linear receiver
significantly outperforms conventional linear architectures such as the
zero-forcing and linear MMSE receiver. In the high SNR regime, the proposed
receiver attains the optimal diversity-multiplexing tradeoff for the standard
MIMO channel with no coding across transmit antennas. It is further shown that
in an extended MIMO model with interference, the integer-forcing linear
receiver achieves the optimal generalized degrees-of-freedom.
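A noiseless toy sketch of the integer-forcing step follows; the channel values and the hand-picked integer matrix A are illustrative assumptions (a real receiver searches for an A that minimizes the effective noise and uses MMSE-style filtering):

```python
import numpy as np

# Near-singular 2x2 channel: plain zero-forcing (inverting H) would amplify
# noise badly along the small singular value.
H = np.array([[1.0, 0.9],
              [0.9, 1.0]])
x = np.array([3.0, -2.0])        # transmitted (integer codeword) symbols
y = H @ x                        # noiseless receive, for clarity

# Hand-picked full-rank integer target matrix A (illustrative choice only).
A = np.array([[1, 1],
              [0, 1]])

B = A @ np.linalg.inv(H)         # equalizer creating an integer-valued channel
u = np.round(B @ y)              # estimates of the integer combinations A @ x
x_hat = np.linalg.solve(A, u)    # digitally invert the full-rank integer map
```

The point of the architecture is visible even in this sketch: the decoder never inverts H directly; it first recovers integer combinations (which are themselves codewords under a common linear code) and only then solves the integer system.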
|
1003.5993
|
A Triple-Error-Correcting Cyclic Code from the Gold and Kasami-Welch APN
Power Functions
|
cs.DM cs.IT math.IT
|
Based on a sufficient condition proposed by Hollmann and Xiang for
constructing triple-error-correcting codes, the minimum distance of a binary
cyclic code $\mathcal{C}_{1,3,13}$ with three zeros $\alpha$, $\alpha^3$, and
$\alpha^{13}$ of length $2^m-1$ and the weight divisibility of its dual code
are studied, where $m\geq 5$ is odd and $\alpha$ is a primitive element of the
finite field $\mathbb{F}_{2^m}$. The code $\mathcal{C}_{1,3,13}$ is proven to
have the same weight distribution as the binary triple-error-correcting
primitive BCH code $\mathcal{C}_{1,3,5}$ of the same length.
|
1003.5998
|
A New Mechanism for Maintaining Diversity of Pareto Archive in
Multiobjective Optimization
|
math.OC cs.NE
|
The article introduces a new mechanism for selecting individuals to a Pareto
archive. It was combined with a micro-genetic algorithm and tested on several
problems. The ability of this approach to produce individuals uniformly
distributed along the Pareto set, without a negative impact on convergence, is
demonstrated by the presented results. The new concept was compared against
the NSGA-II, SPEA2, and IBEA algorithms from the PISA package. Another effect
studied is the population size versus the number of generations for small populations.
|
1003.6052
|
Development of an automated Red Light Violation Detection System (RLVDS)
for Indian vehicles
|
cs.CV
|
Integrated Traffic Management Systems (ITMS) are now implemented in different
cities in India to primarily address the concerns of road-safety and security.
An automated Red Light Violation Detection System (RLVDS) is an integral part
of the ITMS. In our present work we have designed and developed a complete
system for generating the list of all stop-line violating vehicle images
automatically from video snapshots of road-side surveillance cameras. The
system first generates adaptive background images for each camera view,
subtracts captured images from the corresponding background images and analyses
potential occlusions over the stop-line in a traffic signal. Considering
round-the-clock operations in a real-life test environment, the developed
system could successfully track 92% of the images of vehicles violating the
stop-line during a "Red" traffic signal.
|
1003.6059
|
A novel scheme for binarization of vehicle images using hierarchical
histogram equalization technique
|
cs.CV
|
Automatic License Plate Recognition is a challenging area of research
nowadays, and binarization is an integral and most important part of it. In a
real-life scenario, most existing methods fail to properly binarize the image
of a vehicle on a congested road captured through a CCD camera. In the current
work, we have applied the histogram equalization technique over the complete
image and also over different hierarchies of image partitions. A novel scheme
is formulated for assigning a membership value to
each pixel for each hierarchy of histogram equalization. Then the image is
binarized depending on the net membership value of each pixel. The technique is
exhaustively evaluated on the vehicle image dataset as well as the license
plate dataset, giving satisfactory performance.
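The pipeline (per-hierarchy equalization, per-pixel membership, final thresholding) might be sketched as below; the quad-tree-style grid partitioning, the simple averaging of membership values, and the 0.5 threshold are my assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def hist_equalize(img):
    """Histogram-equalize a uint8 grayscale image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def hierarchical_binarize(img, levels=2, thresh=0.5):
    """Equalize the image over successively finer partitions, average the
    per-level responses as membership values, and threshold the result."""
    h, w = img.shape
    membership = np.zeros((h, w), dtype=np.float64)
    for level in range(levels):
        n = 2 ** level                       # n x n grid of sub-images
        eq = np.zeros_like(img)
        for i in range(n):
            for j in range(n):
                ys = slice(i * h // n, (i + 1) * h // n)
                xs = slice(j * w // n, (j + 1) * w // n)
                eq[ys, xs] = hist_equalize(img[ys, xs])
        membership += eq / 255.0
    membership /= levels
    return (membership > thresh).astype(np.uint8) * 255

binary = hierarchical_binarize((np.arange(64, dtype=np.uint8) * 3).reshape(8, 8))
```

Equalizing locally as well as globally is what lets a plate region with poor local contrast still receive a decisive membership value.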
|
1003.6082
|
Coding Schemes and Asymptotic Capacity of the Gaussian Broadcast and
Interference Channels with Feedback
|
cs.IT math.IT
|
A coding scheme is proposed for the memoryless Gaussian broadcast channel
with correlated noises and feedback. For all noise correlations other than -1,
the gap between the sum-rate the scheme achieves and the full-cooperation bound
vanishes as the signal-to-noise ratio tends to infinity. When the correlation
coefficient is -1, the gains afforded by feedback are unbounded and the prelog
is doubled. When the correlation coefficient is +1 we demonstrate a dichotomy:
If the noise variances are equal, then feedback is useless, and otherwise,
feedback affords unbounded rate gains and doubles the prelog. The unbounded
feedback gains, however, require perfect (noiseless) feedback. When the
feedback links are noisy the feedback gains are bounded, unless the feedback
noise decays to zero sufficiently fast with the signal-to-noise ratio.
Extensions to more receivers are also discussed as is the memoryless Gaussian
interference channel with feedback.
|
1003.6091
|
Calculation of Mutual Information for Partially Coherent Gaussian
Channels with Applications to Fiber Optics
|
cs.IT math.IT
|
The mutual information between a complex-valued channel input and its
complex-valued output is decomposed into four parts based on polar coordinates:
an amplitude term, a phase term, and two mixed terms. Numerical results for the
additive white Gaussian noise (AWGN) channel with various inputs show that, at
high signal-to-noise ratio (SNR), the amplitude and phase terms dominate the
mixed terms. For the AWGN channel with a Gaussian input, analytical expressions
are derived for high SNR. The decomposition method is applied to partially
coherent channels and a property of such channels called "spectral loss" is
developed. Spectral loss occurs in nonlinear fiber-optic channels and it may be
one effect that needs to be taken into account to explain the behavior of the
capacity of nonlinear fiber-optic channels presented in recent studies.
|
1004.0027
|
Interference in Lattice Networks
|
cs.IT math.IT
|
Lattices are important as models for the node locations in wireless networks
for two main reasons: (1) When network designers have control over the
placement of the nodes, they often prefer a regular arrangement in a lattice
for coverage and interference reasons. (2) If nodes are randomly distributed or
mobile, good channel access schemes ensure that concurrent transmitters are
regularly spaced, hence the locations of the transmitting nodes are well
approximated by a lattice. In this paper, we introduce general interference
bounding techniques that permit the derivation of tight closed-form upper and
lower bounds for all lattice networks, and we present and analyze optimum or
near-optimum channel access schemes for one-dimensional, square, and triangular
lattices.
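As one concrete instance of the kind of closed form involved (my illustrative example, not necessarily the paper's exact model): in a one-dimensional lattice with an interferer at every nonzero integer, unit transmit power, and power-law path loss r^(-alpha), the interference at the origin is 2*zeta(alpha), which a truncated sum recovers quickly:

```python
# Total interference at the origin of a 1-D integer lattice of unit-power
# interferers with path-loss exponent alpha: 2 * sum_{r>=1} r^(-alpha),
# i.e. 2*zeta(alpha). The rapid convergence of the tail is what makes
# tight closed-form upper and lower bounds possible.
def lattice_interference(alpha, terms=100000):
    return 2 * sum(r ** (-alpha) for r in range(1, terms + 1))

I4 = lattice_interference(4.0)   # approaches 2*zeta(4) = pi^4 / 45
```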
|
1004.0048
|
Anonimos: An LP based Approach for Anonymizing Weighted Social Network
Graphs
|
cs.DB
|
The increasing popularity of social networks has initiated a fertile research
area in information extraction and data mining. Anonymization of these social
graphs is important to facilitate publishing these data sets for analysis by
external entities. Prior work has concentrated mostly on node identity
anonymization and structural anonymization. But with the growing interest in
analyzing social networks as a weighted network, edge weight anonymization is
also gaining importance. We present An\'onimos, a Linear Programming based
technique for anonymization of edge weights that preserves linear properties of
graphs. Such properties form the foundation of many important graph-theoretic
algorithms such as shortest paths problem, k-nearest neighbors, minimum cost
spanning tree, and maximizing information spread. As a proof of concept, we
apply An\'onimos to the shortest paths problem and its extensions, prove the
correctness, analyze complexity, and experimentally evaluate it using real
social network data sets. Our experiments demonstrate that An\'onimos
anonymizes the weights, improves k-anonymity of the weights, and also scrambles
the relative ordering of the edges sorted by weights, thereby providing robust
and effective anonymization of the sensitive edge-weights. Additionally, we
demonstrate the composability of different models generated using An\'onimos, a
property that allows a single anonymized graph to preserve multiple linear
properties.
|
1004.0085
|
A stochastic model of human visual attention with a dynamic Bayesian
network
|
cs.CV cs.MM cs.NE stat.ML
|
Recent studies in the field of human vision science suggest that the human
responses to the stimuli on a visual display are non-deterministic. People may
attend to different locations on the same visual input at the same time. Based
on this knowledge, we propose a new stochastic model of visual attention by
introducing a dynamic Bayesian network to predict the likelihood of where
humans typically focus on a video scene. The proposed model is composed of a
dynamic Bayesian network with 4 layers. Our model provides a framework that
simulates and combines the visual saliency response and the cognitive state of
a person to estimate the most probable attended regions. Sample-based inference
with Markov chain Monte-Carlo based particle filter and stream processing with
multi-core processors enable us to estimate human visual attention in near real
time. Experimental results have demonstrated that our model performs
significantly better in predicting human visual attention compared to the
previous deterministic models.
|
1004.0092
|
Maximal Intersection Queries in Randomized Input Models
|
cs.IR
|
Consider a family of sets and a single set, called the query set. How can one
quickly find a member of the family which has a maximal intersection with the
query set? Time constraints on the query and on a possible preprocessing of the
set family make this problem challenging. Such maximal intersection queries
arise in a wide range of applications, including web search, recommendation
systems, and distributing on-line advertisements. In general, maximal
intersection queries are computationally expensive. We investigate two
well-motivated distributions over all families of sets and propose an algorithm
for each of them. We show that with very high probability an almost optimal
solution is found in time which is logarithmic in the size of the family.
Moreover, we point out a threshold phenomenon on the probabilities of
intersecting sets in each of our two input models which leads to the efficient
algorithms mentioned above.
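For contrast with the logarithmic-time algorithms, the naive baseline is a linear scan of the family; this sketch (names mine) is the brute-force approach the paper's time constraints rule out:

```python
# Brute-force maximal-intersection query: scan every member of the family
# and return the one with the largest overlap with the query set. Cost is
# linear in the family size times the set sizes.
def max_intersection(family, query):
    q = set(query)
    return max(family, key=lambda member: len(q & set(member)))

family = [{1, 2, 3}, {2, 4, 6}, {5, 6, 7, 8}]
best = max_intersection(family, {5, 6, 7})
```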
|
1004.0180
|
Precoded Turbo Equalizer for Power Line Communication Systems
|
cs.IT math.IT
|
Power line communication continues to draw increasing interest by promising a
wide range of applications including cost-free last-mile communication
solution. However, the signal transmitted through power lines deteriorates
badly due to the presence of severe inter-symbol interference (ISI) and harsh
random pulse noise. This work proposes a new precoded turbo equalization scheme
specifically designed for the PLC channels. By introducing useful precoding to
reshape ISI, optimizing maximum {\it a posteriori} (MAP) detection to address
the non-Gaussian pulse noise, and performing soft iterative decision
refinement, the new equalizer demonstrates a gain significantly exceeding that
of existing turbo equalizers.
|
1004.0208
|
Delay-rate tradeoff in ergodic interference alignment
|
cs.IT math.IT math.PR
|
Ergodic interference alignment, as introduced by Nazer et al. (NGJV), is a
technique that allows high-rate communication in n-user interference networks
with fast fading. It works by splitting communication across a pair of fading
matrices. However, it comes with the overhead of a long time delay until
matchable matrices occur: the delay is q^(n^2) for field size q.
In this paper, we outline two new families of schemes, called JAP and JAP-B,
that reduce the expected delay, sometimes at the cost of a reduction in rate
from the NGJV scheme. In particular, we give examples of good schemes for
networks with few users, and show that in large n-user networks, the delay
scales like q^T, where T is quadratic in n for a constant per-user rate and T
is constant for a constant sum-rate. We also show that half the single-user
rate can be achieved while reducing NGJV's delay from q^(n^2) to q^((n-1)(n-2)).
This extended version includes complete proofs and more details of good
schemes for small n.
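The exponent gap quoted above can be checked with simple arithmetic; the values q = 3 and n = 4 below are illustrative, not taken from the paper:

```python
# Delay figures from the abstract, for field size q and n users: the NGJV
# scheme waits on the order of q^(n^2) slots for a matchable fading matrix,
# while the half-single-user-rate variant needs only q^((n-1)(n-2)).
def ngjv_delay(q, n):
    return q ** (n * n)

def half_rate_delay(q, n):
    return q ** ((n - 1) * (n - 2))

q, n = 3, 4
speedup = ngjv_delay(q, n) // half_rate_delay(q, n)  # = q^(n^2-(n-1)(n-2)) = q^(3n-2)
```

Even for these small parameters the delay reduction is a factor of q^10, which illustrates why the exponent, not the constant, dominates the tradeoff.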
|
1004.0258
|
Trends and Techniques in Visual Gaze Analysis
|
cs.HC cs.CV cs.GR cs.MM
|
Visualizing gaze data is an effective way for the quick interpretation of eye
tracking results. This paper presents a study investigating the benefits and
limitations of visual gaze analysis among eye tracking professionals and
researchers. The results were used to create a tool for visual gaze analysis
within a Master's project.
|
1004.0269
|
The Degraded Poisson Wiretap Channel
|
cs.IT math.IT
|
Providing security guarantees for wireless communication is critically
important for today's applications. While previous work in this area has
concentrated on radio frequency (RF) channels, providing security guarantees
for RF channels is inherently difficult because they are prone to rapid
variations due small scale fading. Wireless optical communication, on the other
hand, is inherently more secure than RF communication due to the intrinsic
aspects of the signal propagation in the optical and near-optical frequency
range. In this paper, secure communication over wireless optical links is
examined by studying the secrecy capacity of a direct detection system. For the
degraded Poisson wiretap channel, a closed-form expression of the secrecy
capacity is given. A complete characterization of the general rate-equivocation
region is also presented. For achievability, an optimal code is explicitly
constructed by using the structured code designed by Wyner for the Poisson
channel. The converse is proved in two different ways: the first method relies
only on simple properties of the conditional expectation and basic information
theoretical inequalities, whereas the second method hinges on the recent link
established between minimum mean square estimation and mutual information in
Poisson channels.
|
1004.0346
|
Network Code Design for Orthogonal Two-hop Network with Broadcasting
Relay: A Joint Source-Channel-Network Coding Approach
|
cs.IT math.IT
|
This paper addresses network code design for robust transmission of sources
over an orthogonal two-hop wireless network with a broadcasting relay. The
network consists of multiple sources and destinations in which each
destination, benefiting from the relay signal, intends to decode a subset of the
sources. Two special instances of this network are orthogonal broadcast relay
channel and the orthogonal multiple access relay channel. The focus is on
complexity constrained scenarios, e.g., for wireless sensor networks, where
channel coding is practically imperfect. Taking a source-channel and network
coding approach, we design the network code (mapping) at the relay such that
the average reconstruction distortion at the destinations is minimized. To this
end, by decomposing the distortion into its components, an efficient design
algorithm is proposed. The resulting network code is nonlinear and
substantially outperforms the best performing linear network code. A motivating
formulation of a family of structured nonlinear network codes is also
presented. Numerical results and comparison with linear network coding at the
relay and the corresponding distortion-power bound demonstrate the
effectiveness of the proposed schemes and a promising research direction.
|
1004.0366
|
Dense Error-Correcting Codes in the Lee Metric
|
cs.IT math.IT
|
Several new applications and a number of new mathematical techniques have
increased the research on error-correcting codes in the Lee metric in the last
decade. In this work we consider several coding problems and constructions of
error-correcting codes in the Lee metric. First, we consider constructions of
dense error-correcting codes in relatively small dimensions over small
alphabets. The second problem we solve is construction of diametric perfect
codes with minimum distance four. We will construct such codes over various
lengths and alphabet sizes. The third problem is to transfer an n-dimensional
Lee sphere with large radius into a shape, with the same volume, located in a
relatively small box. Hadamard matrices play an essential role in the solutions
for all three problems. A construction of codes based on Hadamard matrices will
start our discussion. These codes approach the sphere packing bound for very
high rate range and appear to be the best known codes over some sets of
parameters.
|
1004.0378
|
Facial Expression Representation and Recognition Using 2DHLDA, Gabor
Wavelets, and Ensemble Learning
|
cs.CV cs.LG
|
In this paper, a novel method for representation and recognition of the
facial expressions in two-dimensional image sequences is presented. We apply a
variation of two-dimensional heteroscedastic linear discriminant analysis
(2DHLDA) algorithm, as an efficient dimensionality reduction technique, to
Gabor representation of the input sequence. 2DHLDA is an extension of the
two-dimensional linear discriminant analysis (2DLDA) approach that removes
the assumption of equal within-class covariance. By applying 2DHLDA in two directions, we
eliminate the correlations between both image columns and image rows. Then, we
perform a one-dimensional LDA on the new features. This combined method can
alleviate the small sample size problem and instability encountered by HLDA.
Also, employing both geometric and appearance features and using an ensemble
learning scheme based on data fusion, we create a classifier which can
efficiently classify the facial expressions. The proposed method is robust to
illumination changes and it can properly represent temporal information as well
as subtle changes in facial muscles. We provide experiments on the Cohn-Kanade
database that show the superiority of the proposed method. KEYWORDS:
two-dimensional heteroscedastic linear discriminant analysis (2DHLDA), subspace
learning, facial expression analysis, Gabor wavelets, ensemble learning.
|
1004.0381
|
Gossip and Distributed Kalman Filtering: Weak Consensus under Weak
Detectability
|
cs.IT math.DS math.IT math.OC math.PR
|
The paper presents the gossip interactive Kalman filter (GIKF) for
distributed Kalman filtering for networked systems and sensor networks, where
inter-sensor communication and observations occur at the same time-scale. The
communication among sensors is random; each sensor occasionally exchanges its
filtering state information with a neighbor depending on the availability of
the appropriate network link. We show that under a weak distributed
detectability condition:
1. the GIKF error process remains stochastically bounded, irrespective of the
instability properties of the random process dynamics; and
2. the network achieves \emph{weak consensus}, i.e., the conditional
estimation error covariance at a (uniformly) randomly selected sensor converges
in distribution to a unique invariant measure on the space of positive
semi-definite matrices (independent of the initial state).
To prove these results, we interpret the filtered states (estimates and error
covariances) at each node in the GIKF as stochastic particles with local
interactions. We analyze the asymptotic properties of the error process by
studying as a random dynamical system the associated switched (random) Riccati
equation, the switching being dictated by a non-stationary Markov chain on the
network graph.
|
1004.0382
|
Multigrid preconditioning of linear systems for interior point methods
applied to a class of box-constrained optimal control problems
|
math.NA cs.SY math.OC
|
In this article we construct and analyze multigrid preconditioners for
discretizations of operators of the form D+K* K, where D is the multiplication
with a relatively smooth positive function and K is a compact linear operator.
These systems arise when applying interior point methods to the minimization
problem min_u (||K u-f||^2 +b||u||^2) with box-constraints on the controls u.
The presented preconditioning technique is closely related to the one developed
by Draganescu and Dupont in [11] for the associated unconstrained problem, and
is intended for large-scale problems. As in [11], the quality of the resulting
preconditioners is shown to increase with increasing resolution but to decrease
as the diagonal of D becomes less smooth. We test this algorithm first on a
Tikhonov-regularized backward parabolic equation with box-constraints on the
control, and then on a standard elliptic-constrained optimization problem. In
both cases it is shown that the number of linear iterations per optimization
step, as well as the total number of fine-scale matrix-vector multiplications
is decreasing with increasing resolution, thus showing the method to be
potentially very efficient for truly large-scale problems.
|
1004.0383
|
Multiuser Diversity Gain in Cognitive Networks
|
cs.IT math.IT
|
Dynamic allocation of resources to the \emph{best} link in large multiuser
networks offers considerable improvement in spectral efficiency. This gain,
often referred to as \emph{multiuser diversity gain}, can be cast as
double-logarithmic growth of the network throughput with the number of users.
In this paper we consider large cognitive networks granted concurrent spectrum
access with license-holding users. The primary network affords to share its
under-utilized spectrum bands with the secondary users. We assess the optimal
multiuser diversity gain in the cognitive networks by quantifying how the
sum-rate throughput of the network scales with the number of secondary users.
For this purpose we look at the optimal pairing of spectrum bands and secondary
users, which is supervised by a central entity fully aware of the instantaneous
channel conditions, and show that the throughput of the cognitive network
scales double-logarithmically with the number of secondary users ($N$) and
linearly with the number of available spectrum bands ($M$), i.e., $M\log\log
N$. We then propose a \emph{distributed} spectrum allocation scheme, which does
not necessitate a central controller or any information exchange between
different secondary users and still obeys the optimal throughput scaling law.
This scheme requires that \emph{some} secondary transmitter-receiver pairs
exchange $\log M$ information bits among themselves. We also show that the
aggregate amount of information exchange between secondary transmitter-receiver
pairs is {\em asymptotically} equal to $M\log M$. Finally, we show that our
distributed scheme guarantees fairness among the secondary users, meaning that
they are equally likely to get access to an available spectrum band.
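The double-logarithmic multiuser diversity gain described above can be illustrated with a small Monte Carlo sketch. Rayleigh fading (i.e., exponentially distributed channel power gains) and all parameter values below are illustrative assumptions, not taken from the paper: the best of N users attains an average rate log2(1 + max gain) that grows roughly like log log N.

```python
# Hedged Monte Carlo sketch of multiuser diversity under an assumed
# Rayleigh-fading model (exponential channel power gains): the strongest
# of N users sees a rate that grows roughly like log log N.
import math
import random

random.seed(0)

def best_user_rate(N, trials=2000):
    """Average rate of the strongest of N users over random channel draws."""
    total = 0.0
    for _ in range(trials):
        best = max(random.expovariate(1.0) for _ in range(N))
        total += math.log2(1.0 + best)
    return total / trials

rates = {N: best_user_rate(N) for N in (10, 100, 1000)}
for N, r in rates.items():
    print(N, round(r, 2))
```

The slow growth of the printed rates as N increases tenfold each time reflects the log log N scaling, while the aggregate throughput scales linearly in the number of bands M.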
|
1004.0393
|
Object-image correspondence for curves under finite and affine cameras
|
cs.CV math.AG
|
We provide criteria for deciding whether a given planar curve is an image of
a given spatial curve, obtained by a central or a parallel projection with
unknown parameters. These criteria reduce the projection problem to a certain
modification of the equivalence problem of planar curves under affine and
projective transformations. The latter problem can be addressed using Cartan's
moving frame method. This leads to a novel algorithmic solution of the
projection problem for curves. The computational advantage of the algorithms
presented here, in comparison to algorithms based on a straightforward
solution, lies in a significant reduction of a number of real parameters that
has to be eliminated in order to establish existence or non-existence of a
projection that maps a given spatial curve to a given planar curve. The same
approach can be used to decide whether a given finite set of ordered points on
a plane is an image of a given finite set of ordered points in R^3. The
motivation comes from the problem of establishing a correspondence between an
object and an image, taken by a camera with unknown position and parameters.
|
1004.0400
|
A new bound for the capacity of the deletion channel with high deletion
probabilities
|
cs.IT math.IT
|
Let $C(d)$ be the capacity of the binary deletion channel with deletion
probability $d$. It was proved by Drinea and Mitzenmacher that, for all $d$,
$C(d)/(1-d)\geq 0.1185 $. Fertonani and Duman recently showed that
$\limsup_{d\to 1}C(d)/(1-d)\leq 0.49$. In this paper, it is proved that
$\lim_{d\to 1}C(d)/(1-d)$ exists and is equal to $\inf_{d}C(d)/(1-d)$. This
result suggests the conjecture that the curve $C(d)$ may be convex in the
interval $d\in [0,1]$. Furthermore, using currently known bounds for $C(d)$, it
leads to the upper bound $\lim_{d\to 1}C(d)/(1-d)\leq 0.4143$.
|
1004.0402
|
Improved Sparse Recovery Thresholds with Two-Step Reweighted $\ell_1$
Minimization
|
cs.IT math.IT
|
It is well known that $\ell_1$ minimization can be used to recover
sufficiently sparse unknown signals from compressed linear measurements. In
fact, exact thresholds on the sparsity, as a function of the ratio between the
system dimensions, so that with high probability almost all sparse signals can
be recovered from iid Gaussian measurements, have been computed and are
referred to as "weak thresholds" \cite{D}. In this paper, we introduce a
reweighted $\ell_1$ recovery algorithm composed of two steps: a standard
$\ell_1$ minimization step to identify a set of entries where the signal is
likely to reside, and a weighted $\ell_1$ minimization step where entries
outside this set are penalized. For signals where the non-sparse component has
iid Gaussian entries, we prove a "strict" improvement in the weak recovery
threshold. Simulations suggest that the improvement can be quite
impressive: over 20% in the example we consider.
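The two-step procedure described above can be sketched with a standard linear-programming formulation of weighted $\ell_1$ minimization. This is a hedged illustration: the threshold (1e-3) and the penalty weight (10) are illustrative assumptions, not the paper's tuned values, and the problem sizes are toy.

```python
# Sketch of two-step reweighted l1 recovery: (1) plain l1 minimization to
# find a likely support set, (2) weighted l1 penalizing entries outside it.
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b, w=None):
    """min sum_i w_i |x_i| s.t. A x = b, via the LP split x = xp - xn."""
    m, n = A.shape
    w = np.ones(n) if w is None else w
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
m, n, k = 40, 100, 6                    # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Step 1: standard l1 minimization to locate a likely support set.
x1 = l1_min(A, b)
likely = np.abs(x1) > 1e-3              # threshold is an illustrative assumption

# Step 2: weighted l1, penalizing entries outside the likely set.
w = np.where(likely, 1.0, 10.0)         # weight 10 is an illustrative assumption
x2 = l1_min(A, b, w)

print(np.linalg.norm(x2 - x_true))
```

At these toy dimensions plain $\ell_1$ already succeeds with high probability; the paper's contribution is the strict improvement of the weak threshold, visible only at sparsity levels near the threshold.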
|
1004.0456
|
Exploratory Analysis of Functional Data via Clustering and Optimal
Segmentation
|
stat.ML cs.LG
|
We propose in this paper an exploratory analysis algorithm for functional
data. The method partitions a set of functions into $K$ clusters and represents
each cluster by a simple prototype (e.g., piecewise constant). The total number
of segments in the prototypes, $P$, is chosen by the user and optimally
distributed among the clusters via two dynamic programming algorithms. The
practical relevance of the method is shown on two real-world datasets.
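One building block of the method above is the dynamic-programming computation of an optimal piecewise-constant prototype. As a hedged sketch, the code below solves only the single-prototype subproblem (the paper additionally distributes the P segments across K clusters); the data and function names are illustrative.

```python
# Dynamic programming for the least-squares optimal piecewise-constant
# approximation of one sampled function with exactly P segments.
import numpy as np

def optimal_segmentation(y, P):
    """Return segment boundaries and total squared error for P segments."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    csum2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def seg_cost(i, j):  # squared error of fitting y[i:j] by its mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    E = np.full((P + 1, n + 1), np.inf)  # E[p, j]: best error for y[:j], p segments
    E[0, 0] = 0.0
    back = np.zeros((P + 1, n + 1), dtype=int)
    for p in range(1, P + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = E[p - 1, i] + seg_cost(i, j)
                if c < E[p, j]:
                    E[p, j], back[p, j] = c, i
    bounds, j = [n], n                   # recover boundaries by backtracking
    for p in range(P, 0, -1):
        j = back[p, j]
        bounds.append(j)
    return [int(b) for b in bounds[::-1]], float(E[P, n])

y = np.concatenate([np.zeros(20), np.ones(20), 3 * np.ones(20)])
bounds, err = optimal_segmentation(y, 3)
print(bounds, err)
```

The O(P n^2) cost of this inner step is what makes a second dynamic program, allocating the total budget P among the K cluster prototypes, practical.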
|
1004.0458
|
The quantum dynamic capacity formula of a quantum channel
|
quant-ph cs.IT math.IT
|
The dynamic capacity theorem characterizes the reliable communication rates
of a quantum channel when combined with the noiseless resources of classical
communication, quantum communication, and entanglement. In prior work, we
proved the converse part of this theorem by making contact with many previous
results in the quantum Shannon theory literature. In this work, we prove the
theorem with an "ab initio" approach, using only the most basic tools in the
quantum information theorist's toolkit: the Alicki-Fannes' inequality, the
chain rule for quantum mutual information, elementary properties of quantum
entropy, and the quantum data processing inequality. The result is a simplified
proof of the theorem that should be more accessible to those unfamiliar with
the quantum Shannon theory literature. We also demonstrate that the "quantum
dynamic capacity formula" characterizes the Pareto optimal trade-off surface
for the full dynamic capacity region. Additivity of this formula simplifies the
computation of the trade-off surface, and we prove that its additivity holds
for the quantum Hadamard channels and the quantum erasure channel. We then
determine exact expressions for and plot the dynamic capacity region of the
quantum dephasing channel, an example from the Hadamard class, and the quantum
erasure channel.
|
1004.0477
|
Decentralized event-triggered control over wireless sensor/actuator
networks
|
math.OC cs.SY
|
In recent years we have witnessed a move of the major industrial automation
providers into the wireless domain. While most of these companies already offer
wireless products for measurement and monitoring purposes, the ultimate goal is
to be able to close feedback loops over wireless networks interconnecting
sensors, computation devices, and actuators. In this paper we present a
decentralized event-triggered implementation, over sensor/actuator networks, of
centralized nonlinear controllers. Event-triggered control has been recently
proposed as an alternative to the more traditional periodic execution of
control tasks. In a typical event-triggered implementation, the control signals
are kept constant until the violation of a condition on the state of the plant
triggers the re-computation of the control signals. The possibility of reducing
the number of re-computations, and thus of transmissions, while guaranteeing
desired levels of performance makes event-triggered control very appealing in
the context of sensor/actuator networks. In these systems the communication
network is a shared resource and event-triggered implementations of control
laws offer a flexible way to reduce network utilization. Moreover reducing the
number of times that a feedback control law is executed implies a reduction in
transmissions and thus a reduction in energy expenditures of battery powered
wireless sensor nodes.
|
1004.0512
|
Analysis, Interpretation, and Recognition of Facial Action Units and
Expressions Using Neuro-Fuzzy Modeling
|
cs.CV
|
In this paper an accurate real-time sequence-based system for representation,
recognition, interpretation, and analysis of the facial action units (AUs) and
expressions is presented. Our system has the following characteristics: 1)
employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal
information, we developed a classification scheme based on neuro-fuzzy modeling
of the AU intensity, which is robust to intensity variations, 2) using both
geometric and appearance-based features, and applying efficient dimension
reduction techniques, our system is robust to illumination changes and it can
represent the subtle changes as well as the temporal information involved in
the formation of the facial expressions, and 3) by using continuous intensity
values and employing top-down hierarchical rule-based classifiers, we can
develop accurate, human-interpretable AU-to-expression converters. Extensive
experiments on the Cohn-Kanade database show the superiority of the proposed method, in
comparison with support vector machines, hidden Markov models, and neural
network classifiers. Keywords: biased discriminant analysis (BDA), classifier
design and evaluation, facial action units (AUs), hybrid learning, neuro-fuzzy
modeling.
|
1004.0514
|
Superior Exploration-Exploitation Balance with Quantum-Inspired Hadamard
Walks
|
cs.NE
|
This paper extends the analogies employed in the development of
quantum-inspired evolutionary algorithms by proposing quantum-inspired Hadamard
walks, called QHW. A novel quantum-inspired evolutionary algorithm, called
HQEA, for solving combinatorial optimization problems, is also proposed. The
novelty of HQEA lies in its incorporation of QHW Remote Search and QHW Local
Search, the quantum equivalents of classical mutation and local search, which
this paper defines. The intuitive reasoning behind this approach, and the
exploration-exploitation balance thus occurring is explained. From the results
of the experiments carried out on the 0,1-knapsack problem, HQEA performs
significantly better than a conventional genetic algorithm, CGA, and two
quantum-inspired evolutionary algorithms - QEA and NQEA, in terms of
convergence speed and accuracy.
|
1004.0515
|
Recognizing Combinations of Facial Action Units with Different Intensity
Using a Mixture of Hidden Markov Models and Neural Network
|
cs.CV cs.LG
|
Facial Action Coding System consists of 44 action units (AUs) and more than
7000 combinations. Hidden Markov models (HMMs) classifier has been used
successfully to recognize facial action units (AUs) and expressions due to its
ability to deal with AU dynamics. However, a separate HMM is necessary for each
single AU and each AU combination. Since AU combinations number in the
thousands, a more efficient method is needed. In this paper an accurate
real-time sequence-based system for representation and recognition of facial
AUs is presented. Our system has the following characteristics: 1) employing a
mixture of HMMs and a neural network, we develop a novel, accurate classifier
which can deal with AU dynamics, recognize subtle changes, and is also robust
to intensity variations, 2) although we use an HMM for each single AU only, by
employing a neural network we can recognize each single AU and each AU
combination, and 3) using both geometric and appearance-based features, and
applying efficient dimension reduction techniques, our system is robust to
illumination changes and can represent the temporal information involved in
the formation of the facial expressions. Extensive experiments on the
Cohn-Kanade database show the superiority of the proposed method, in comparison with other
classifiers. Keywords: classifier design and evaluation, data fusion, facial
action units (AUs), hidden Markov models (HMMs), neural network (NN).
|
1004.0517
|
Multilinear Biased Discriminant Analysis: A Novel Method for Facial
Action Unit Representation
|
cs.CV cs.LG
|
In this paper a novel efficient method for representation of facial action
units by encoding an image sequence as a fourth-order tensor is presented. The
multilinear tensor-based extension of the biased discriminant analysis (BDA)
algorithm, called multilinear biased discriminant analysis (MBDA), is first
proposed. Then, we apply the MBDA and two-dimensional BDA (2DBDA) algorithms,
as the dimensionality reduction techniques, to Gabor representations and the
geometric features of the input image sequence respectively. The proposed
scheme can deal with the asymmetry between positive and negative samples as
well as the curse-of-dimensionality dilemma. Extensive experiments on the
Cohn-Kanade database show the superiority of the proposed method for
representing the subtle changes and the temporal information involved in the
formation of the facial expressions. As an accurate tool, this representation
can be applied to many
areas such as recognition of spontaneous and deliberate facial expressions,
multi modal/media human computer interaction and lie detection efforts.
|
1004.0534
|
Impact of Connection Admission Process on the Direct Retry Load
Balancing Algorithm in Cellular Network
|
cs.NI cs.IT cs.PF math.IT
|
We present an analytical framework for modeling a priority-based load
balancing scheme in cellular networks based on a new algorithm called direct
retry with truncated offloading channel resource pool (DR$_{K}$). The model,
developed for a baseline case of two cell network, differs in many respects
from previous works on load balancing. Foremost, it incorporates the call
admission process through random access. Specifically, the proposed model
implements the Physical Random Access Channel used in 3GPP network standards.
Furthermore, the proposed model allows the differentiation of users based on
their priorities. The quantitative results illustrate that, for example,
cellular network operators can control the manner in which traffic is offloaded
between neighboring cells by simply adjusting the length of the random access
phase. Our analysis also allows for the quantitative determination of the
blocking probability that individual users will experience for a specific
length of the random access phase. Furthermore, we observe that the improvement in
blocking probability per shared channel for load balanced users using DR$_{K}$
is maximized at an intermediate number of shared channels, as opposed to the
maximum number of these shared resources. This occurs because a balance is
achieved between the number of users requesting connections and those that are
already admitted to the network. We also present an extension of our analytical
model to a multi-cell network (by means of an approximation) and an application
of the proposed load balancing scheme in the context of opportunistic spectrum
access.
|
1004.0542
|
Cognitive Interference Management in Retransmission-Based Wireless
Networks
|
cs.IT math.IT
|
Cognitive radio methodologies have the potential to dramatically increase the
throughput of wireless systems. Herein, control strategies which enable the
superposition in time and frequency of primary and secondary user transmissions
are explored in contrast to more traditional sensing approaches which only
allow the secondary user to transmit when the primary user is idle. In this
work, the optimal transmission policy for the secondary user when the primary
user adopts a retransmission based error control scheme is investigated. The
policy aims to maximize the secondary users' throughput, with a constraint on
the throughput loss and failure probability of the primary user. Due to the
constraint, the optimal policy is randomized, and determines how often the
secondary user transmits according to the retransmission state of the packet
being served by the primary user. The resulting optimal strategy of the
secondary user is proven to have a unique structure. In particular, the optimal
throughput is achieved by the secondary user by concentrating its transmission,
and thus its interference to the primary user, in the first transmissions of a
primary user packet. The rather simple framework considered in this paper
highlights two fundamental aspects of cognitive networks that have not been
covered so far: (i) the networking mechanisms implemented by the primary users
(error control by means of retransmissions in the considered model) react to
secondary users' activity; (ii) if networking mechanisms are considered, then
their state must be taken into account when optimizing secondary users'
strategy, i.e., a strategy based on a binary active/idle perception of the
primary users' state is suboptimal.
|
1004.0557
|
Applications of Lindeberg Principle in Communications and Statistical
Learning
|
cs.IT math.IT
|
We use a generalization of the Lindeberg principle developed by Sourav
Chatterjee to prove universality properties for various problems in
communications, statistical learning and random matrix theory. We also show
that these systems can be viewed as the limiting case of a properly defined
sparse system. The latter result is useful when the sparse systems are easier
to analyze than their dense counterparts. The list of problems we consider is
by no means exhaustive. We believe that the ideas can be used in many other
problems relevant for information theory.
|
1004.0567
|
Using Rough Set and Support Vector Machine for Network Intrusion
Detection
|
cs.LG cs.CR cs.NI
|
The main function of an IDS (Intrusion Detection System) is to protect the
system by analyzing and predicting the behaviors of users, which are then
classified as attacks or normal behavior. Although IDSs have been developed
for many years, the large number of returned alert messages makes it
inefficient for managers to maintain the system. In this paper, we use RST
(Rough Set Theory) and SVM (Support Vector Machine) to detect intrusions.
First, RST is used to preprocess the data and reduce its dimensionality. Next,
the features selected by RST are sent to an SVM model for training and
testing, respectively. The method effectively decreases the dimensionality of
the data. Experiments compare the results with other methods and show that
the RST and SVM scheme can improve the false positive rate and accuracy.
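A minimal sketch of the two-stage reduce-then-classify pipeline described above. Rough set attribute reduction is not available in scikit-learn, so a univariate feature selector is used here as a plainly labeled stand-in for the RST reduct step, and the dataset is synthetic rather than intrusion-detection traffic.

```python
# Stand-in pipeline: feature reduction (substituting for the RST reduct)
# followed by an SVM classifier, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # stand-in for RST dimensionality reduction
    SVC(kernel="rbf"),
)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

The design point illustrated is that the classifier trains and tests on the reduced feature set only, which is what decreases the data volume the SVM must handle.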
|
1004.0574
|
A Comparison between Memetic algorithm and Genetic algorithm for the
cryptanalysis of Simplified Data Encryption Standard algorithm
|
cs.CR cs.NE
|
Genetic algorithms are population-based metaheuristics that have been
successfully applied to many optimization problems. However, premature
convergence is an inherent characteristic of classical genetic algorithms
that makes them incapable of searching numerous solutions of the problem
domain. A memetic algorithm is an extension of the traditional genetic
algorithm that uses a local search technique to reduce the likelihood of
premature convergence. The cryptanalysis of the Simplified Data Encryption
Standard (SDES) can be formulated as an NP-hard combinatorial problem. In this
paper, a memetic algorithm and a genetic algorithm are compared in order to
investigate their performance in the cryptanalysis of SDES. The methods were
tested, and various experimental results show that the memetic algorithm
performs better than the genetic algorithm on this type of NP-hard
combinatorial problem. This paper represents our first effort toward an
efficient memetic algorithm for the cryptanalysis of SDES.
|
1004.0658
|
A new representation of Chaitin \Omega number based on compressible
strings
|
cs.IT cs.CC math.IT
|
In 1975 Chaitin introduced his \Omega number as a concrete example of a random
real. The real \Omega is defined based on the set of all halting inputs for an
optimal prefix-free machine U, which is a universal decoding algorithm used to
define the notion of program-size complexity. Chaitin showed \Omega to be
random by discovering the property that the first n bits of the base-two
expansion of \Omega solve the halting problem of U for all binary inputs of
length at most n. In this paper, we introduce a new representation \Theta of
Chaitin \Omega number. The real \Theta is defined based on the set of all
compressible strings. We investigate the properties of \Theta and show that
\Theta is random. In addition, we generalize \Theta in two directions, \Theta(T)
and \bar{\Theta}(T) with a real T>0. We then study their properties. In
particular, we show that the computability of the real \Theta(T) gives a
sufficient condition for a real T in (0,1) to be a fixed point on partial
randomness, i.e., to satisfy the condition that the compression rate of T
equals T.
|
1004.0727
|
Scalar-linear Solvability of Matroidal Networks Associated with
Representable Matroids
|
cs.IT math.IT
|
We study matroidal networks introduced by Dougherty et al. We prove the
converse of the following theorem: If a network is scalar-linearly solvable
over some finite field, then the network is a matroidal network associated with
a representable matroid over a finite field. It follows that a network is
scalar-linearly solvable if and only if the network is a matroidal network
associated with a representable matroid over a finite field. We note that this
result combined with the construction method due to Dougherty et al. gives a
method for generating scalar-linearly solvable networks. Using the converse
implicitly, we demonstrate scalar-linear solvability of two classes of
matroidal networks: networks constructed from uniform matroids and those
constructed from graphic matroids.
|
1004.0755
|
Extended Two-Dimensional PCA for Efficient Face Representation and
Recognition
|
cs.CV cs.LG
|
In this paper a novel method called Extended Two-Dimensional PCA (E2DPCA) is
proposed which is an extension to the original 2DPCA. We state that the
covariance matrix of 2DPCA is equivalent to the average of the main diagonal of
the covariance matrix of PCA. This implies that 2DPCA eliminates some
covariance information that can be useful for recognition. E2DPCA, instead of
just using the main diagonal, considers a radius of r diagonals around it and
expands the averaging so as to include the covariance information within those
diagonals. The parameter r unifies PCA and 2DPCA: r = 1 produces the covariance
of 2DPCA, and r = n that of PCA. Hence, by controlling r it is possible to control
the trade-offs between recognition accuracy and energy compression (fewer
coefficients), and between training and recognition complexity. Experiments on
ORL face database show improvement in both recognition accuracy and recognition
time over the original 2DPCA.
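The stated relation between the 2DPCA covariance and the diagonal of the PCA covariance can be checked numerically. This sketch uses random data and assumes row-major vectorization of the images; the 2DPCA image covariance G turns out to equal the sum of the m diagonal n-by-n blocks of the full PCA covariance, i.e., m times their average (a scale factor that does not change the eigenvectors).

```python
# Numeric check: 2DPCA covariance G vs. the diagonal n x n blocks of the
# full PCA covariance of the (row-major) vectorized images.
import numpy as np

rng = np.random.default_rng(1)
M, m, n = 50, 6, 5                       # M images of size m x n
imgs = rng.standard_normal((M, m, n))
mean = imgs.mean(axis=0)

# 2DPCA image covariance (n x n)
G = sum((A - mean).T @ (A - mean) for A in imgs) / M

# Full PCA covariance of the vectorized images (mn x mn)
X = (imgs - mean).reshape(M, m * n)
C = X.T @ X / M

# Diagonal n x n blocks of C, one per image row
blocks = [C[k * n:(k + 1) * n, k * n:(k + 1) * n] for k in range(m)]
avg_block = sum(blocks) / m

print(np.allclose(G, m * avg_block))
```

E2DPCA, as described above, would additionally fold in the blocks lying within r diagonals of the main one, recovering the full PCA covariance at r = n.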
|
1004.0763
|
Symbolic Approximate Time-Optimal Control
|
math.OC cs.SY
|
There is an increasing demand for controller design techniques capable of
addressing the complex requirements of today's embedded applications. This
demand has sparked the interest in symbolic control where lower complexity
models of control systems are used to cater for complex specifications given by
temporal logics, regular languages, or automata. These specification mechanisms
can be regarded as qualitative since they divide the trajectories of the plant
into bad trajectories (those that need to be avoided) and good trajectories.
However, many applications also require the optimization of quantitative
measures of the trajectories retained by the controller, as specified by a cost
or utility function. As a first step towards the synthesis of controllers
reconciling both qualitative and quantitative specifications, we investigate in
this paper the use of symbolic models for time-optimal controller synthesis. We
consider systems related by approximate (alternating) simulation relations and
show how such relations enable the transfer of time-optimality information
between the systems. We then use this insight to synthesize approximately
time-optimal controllers for a control system by working with a lower
complexity symbolic model. The resulting approximately time-optimal controllers
are equipped with upper and lower bounds for the time to reach a target,
describing the quality of the controller. The results described in this paper
were implemented in the Matlab toolbox Pessoa, which we used to work out
several illustrative examples reported in this paper.
|
1004.0785
|
Cost-Bandwidth Tradeoff In Distributed Storage Systems
|
cs.IT cs.NI math.IT
|
Distributed storage systems are mainly justified by the limited storage
capacity of individual nodes and by the reliability gained from distributing
data over multiple storage nodes. On the other hand, the data may be stored on
unreliable nodes, while the end user desires reliable access to the stored
data. So, in the event that a node is damaged, to prevent the system
reliability from degrading, it is necessary to regenerate a new node with the
same amount of stored data as the damaged node, thereby retaining the number
of storage nodes and hence the previous reliability. This requires the new
node to connect to some of the existing nodes and download the required
information, thereby occupying some bandwidth, called the repair bandwidth. On
the other hand, the cost of downloading is likely to vary across different
nodes. This paper investigates the theoretical cost-bandwidth tradeoff and,
more importantly, demonstrates that any point on this curve can be achieved
through the use of so-called generalized regenerating codes, which are an
enhancement of the regenerating codes introduced by Dimakis et al. in [1].
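The cost-bandwidth tradeoff above builds on the storage/repair-bandwidth tradeoff of the regenerating codes of Dimakis et al. [1]. As a hedged sketch of that underlying tradeoff (not the paper's cost generalization), the cut-set feasibility condition B <= sum_{i=0}^{k-1} min(alpha, (d-i)*beta) is used below to find, by bisection, the minimum per-helper download beta for a given per-node storage alpha; gamma = d*beta is the repair bandwidth. Parameter values are illustrative.

```python
# Cut-set bound sketch for regenerating codes: file size B, k nodes suffice
# for reconstruction, a new node downloads beta from each of d helpers.
def feasible(B, k, d, alpha, beta):
    return sum(min(alpha, (d - i) * beta) for i in range(k)) >= B

def min_repair_bandwidth(B, k, d, alpha, iters=80):
    """Minimum gamma = d * beta satisfying the cut-set bound, via bisection."""
    if k * alpha < B:
        return float("inf")              # not enough total storage
    lo, hi = 0.0, B
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if feasible(B, k, d, alpha, mid):
            hi = mid
        else:
            lo = mid
    return d * hi

B, k, d = 1.0, 5, 9
gamma_msr = min_repair_bandwidth(B, k, d, B / k)   # minimum-storage point
print(gamma_msr)                                   # close to B*d/(k*(d-k+1))
```

At the minimum-storage point alpha = B/k, the bisection reproduces the known closed form gamma = B*d/(k*(d-k+1)); the paper's generalized codes let each helper's download be weighted by a per-node cost instead of a uniform beta.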
|
1004.0798
|
Generalized Secure Distributed Source Coding with Side Information
|
cs.IT math.IT
|
In this paper, new inner and outer bounds on the achievable
compression-equivocation rate region for generalized secure data compression
with side information are given that do not match in general. In this setup,
two senders, Alice and Charlie intend to transmit information to Bob via
channels with limited capacity so that he can reliably reconstruct their
observations. The eavesdropper, Eve, has access to one of the channels at each
instant and is interested in the source of the same channel at that time. Bob
and Eve also have their own observations which are correlated with Alice's and
Charlie's observations. In this model, two equivocation and compression rates
are defined with respect to the sources of Alice and Charlie. Furthermore,
different special cases are discussed where the inner and outer bounds match.
Our model covers the previously obtained results as well.
|
1004.0799
|
Rate Regions of Secret Key Sharing in a New Source Model
|
cs.IT math.IT
|
A source model for secret key generation between terminals is considered. Two
users, namely users 1 and 2, at one side communicate with another user, namely
user 3, at the other side via a public channel where three users can observe
i.i.d. outputs of correlated sources. Each of users 1 and 2 intends to share a
secret key with user 3 where user 1 acts as a wiretapper for user 2 and vice
versa. In this model, two situations are considered: communication from users 1
and 2 to user 3 (the forward key strategy) and from user 3 to users 1 and 2
(the backward key strategy). In both situations, the goal is sharing a secret
key between user 1 and user 3 while leaking no effective information about that
key to user 2, and simultaneously, sharing another secret key between user 2
and user 3 while leaking no effective information about the latter key to user
1. This model is motivated by wireless communications when considering user 3
as a base station and users 1 and 2 as network users. In this paper, for both
the forward and backward key strategies, inner and outer bounds of secret key
capacity regions are derived. In special situations where one of users 1 and 2
is only interested in wiretapping and not key sharing, our results agree with
those of Ahlswede and Csiszar. Also, we investigate some special cases in which
the inner bound coincides with the outer bound and the secret key capacity
region is deduced.
|
1004.0816
|
Nepotistic Relationships in Twitter and their Impact on Rank Prestige
Algorithms
|
cs.IR
|
Micro-blogging services such as Twitter allow anyone to publish anything,
anytime. Needless to say, many of the available contents can be dismissed as
babble or spam. However, given the number and diversity of users, some valuable
pieces of information should arise from the stream of tweets. Thus, such
services can develop into valuable sources of up-to-date information (the
so-called real-time web) provided a way to find the most
relevant/trustworthy/authoritative users is available. Hence, finding such
users is a highly pertinent question, one for which graph centrality methods
can provide an answer. In this paper, the author offers a comprehensive survey of feasible
algorithms for ranking users in social networks, he examines their
vulnerabilities to linking malpractice in such networks, and suggests an
objective criterion against which to compare such algorithms. Additionally, he
suggests a first step towards "desensitizing" prestige algorithms against
cheating by spammers and other abusive users.
|
1004.0891
|
Secure Communication over Fading Channels with Statistical QoS
Constraints
|
cs.IT math.IT
|
In this paper, the secure transmission of information over an ergodic fading
channel is investigated in the presence of statistical quality of service (QoS)
constraints. We employ effective capacity, which provides the maximum constant
arrival rate that a given process can support while satisfying statistical
delay constraints, to measure the secure throughput of the system, i.e.,
effective secure throughput. We assume that the channel side information (CSI)
of the main channel is available at the transmitter side. Depending on the
availability of the CSI of the eavesdropper channel, we obtain the
corresponding optimal power control policies that maximize the effective secure
throughput. In particular, when the CSI of the eavesdropper channel is
available at the transmitter, the introduction of QoS constraints means that
the transmitter can no longer afford to wait and transmit only when the main
channel is much better than the eavesdropper channel. Moreover, the CSI of the
eavesdropper channel becomes useless as QoS constraints become stringent.
|
1004.0892
|
Secure Broadcasting over Fading Channels with Statistical QoS
Constraints
|
cs.IT math.IT
|
In this paper, the fading broadcast channel with confidential messages is
studied in the presence of statistical quality of service (QoS) constraints in
the form of limitations on the buffer length. We employ the effective capacity
formulation to measure the throughput of the confidential and common messages.
We assume that the channel side information (CSI) is available at both the
transmitter and the receivers. Assuming average power constraints at the
transmitter side, we first define the effective secure throughput region, and
prove that the throughput region is convex. Then, we obtain the optimal power
control policies that achieve the boundary points of the effective secure
throughput region.
|
1004.0897
|
Energy Efficiency Analysis in Amplify-and-Forward and Decode-and-Forward
Cooperative Networks
|
cs.IT math.IT
|
In this paper, we have studied the energy efficiency of cooperative networks
operating in either the fixed Amplify-and-Forward (AF) or the selective
Decode-and-Forward (DF) mode. We consider the optimization of the M-ary
quadrature amplitude modulation (MQAM) constellation size to minimize the bit
energy consumption under given bit error rate (BER) constraints. In the
computation of the energy expenditure, the circuit, transmission, and
retransmission energies are taken into account. The link reliabilities and
retransmission probabilities are determined through the outage probabilities
under the Rayleigh fading assumption. Several interesting observations with
practical implications are made. It is seen that while large constellations are
preferred at small transmission distances, constellation size should be
decreased as the distance increases. Moreover, the cooperative gain is
computed to compare direct and cooperative transmission.
|
1004.0899
|
Relay Beamforming Strategies for Physical-Layer Security
|
cs.IT math.IT
|
In this paper, collaborative use of relays to form a beamforming system and
provide physical-layer security is investigated. In particular,
amplify-and-forward (AF) relay beamforming designs under total and individual
relay power constraints are studied with the goal of maximizing the secrecy
rates when perfect channel state information (CSI) is available. In the AF
scheme, not having analytical solutions for the optimal beamforming design
under both total and individual power constraints, an iterative algorithm is
proposed to numerically obtain the optimal beamforming structure and maximize
the secrecy rates. Robust beamforming designs in the presence of imperfect CSI
are investigated for decode-and-forward (DF) based relay beamforming, and
optimization frameworks are provided.
|
1004.0902
|
On building minimal automaton for subset matching queries
|
cs.FL cs.DS cs.IR
|
We address the problem of building an index for a set $D$ of $n$ strings,
where each string location is a subset of some finite integer alphabet of size
$\sigma$, so that we can answer efficiently if a given simple query string
(where each string location is a single symbol) $p$ occurs in the set. That is,
we need to efficiently find a string $d \in D$ such that $p[i] \in d[i]$ for
every $i$. We show how to build such an index in
$O(n^{\log_{\sigma/\Delta}(\sigma)}\log(n))$ average time, where $\Delta$ is
the average size of the subsets. Our methods have applications, e.g., in
computational biology (haplotype inference) and music information retrieval.
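As a purely illustrative aside (not the paper's automaton), the query semantics can be sketched with a naive linear scan; the function names `matches` and `search` and the toy DNA-like alphabet are hypothetical, and the paper's contribution is precisely an index that avoids this O(n|p|) scan.

```python
# Each string in D has a *set* of admissible symbols at every position
# (a "degenerate" string); a simple query string p matches d when
# p[i] is contained in d[i] for all positions i.
def matches(p, d):
    return len(p) == len(d) and all(c in s for c, s in zip(p, d))

def search(D, p):
    """Naive O(n * |p|) scan over the set; an index (e.g. a minimal
    automaton) answers the same queries far more efficiently."""
    return [d for d in D if matches(p, d)]

D = [
    [{'a', 'c'}, {'g'}, {'t', 'a'}],  # degenerate string, e.g. a haplotype
    [{'a'}, {'c', 'g'}, {'c'}],
]
assert search(D, "agt") == [D[0]]
assert search(D, "acc") == [D[1]]
assert search(D, "ttt") == []
```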
|
1004.0907
|
QoS Analysis of Cognitive Radio Channels with Perfect CSI at both
Receiver and Transmitter
|
cs.IT math.IT
|
In this paper, cognitive transmission under quality of service (QoS)
constraints is studied. In the cognitive radio channel model, it is assumed
that both the secondary receiver and the secondary transmitter know the channel
fading coefficients perfectly and optimize the power adaptation policy under
given constraints, depending on the channel activity of the primary users,
which is determined by channel sensing performed by the secondary users. The
transmission rates are equal to the instantaneous channel capacity values. A
state transition model with four states is constructed to model this cognitive
transmission channel. Statistical limitations on the buffer lengths are imposed
to take into account the QoS constraints. The maximum throughput under these
statistical QoS constraints is identified by finding the effective capacity of
the cognitive radio channel. The impact upon the effective capacity of several
system parameters, including the channel sensing duration, detection threshold,
detection and false alarm probabilities, and QoS parameters, is investigated.
|
1004.0914
|
Collaborative Relay Beamforming for Secure Broadcasting
|
cs.IT math.IT
|
In this paper, collaborative use of relays to form a beamforming system with
the aid of perfect channel state information (CSI) and to provide communication
in physical-layer security between a transmitter and two receivers is
investigated. In particular, we describe decode-and-forward based null space
beamforming schemes and optimize the relay weights jointly to obtain the
largest secrecy rate region. Furthermore, the optimality of the proposed
schemes is investigated by comparing them with the outer bound secrecy rate
region.
|
1004.1001
|
The Graph Traversal Pattern
|
cs.DS cs.DB
|
A graph is a structure composed of a set of vertices (i.e., nodes, dots)
connected to one another by a set of edges (i.e., links, lines). The concept of a
graph has been around since the late 19$^\text{th}$ century; however, only in
recent decades has there been a strong resurgence in both theoretical and
applied graph research in mathematics, physics, and computer science. In
applied computing, since the late 1960s, the interlinked table structure of the
relational database has been the predominant information storage and retrieval
model. With the growth of graph/network-based data and the need to efficiently
process such data, new data management systems have been developed. In contrast
to the index-intensive, set-theoretic operations of relational databases, graph
databases make use of index-free, local traversals. This article discusses the
graph traversal pattern and its use in computing.
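To make the contrast concrete, here is a minimal, hypothetical sketch (not from the article) of index-free adjacency: each vertex stores direct references to its neighbors, and a traversal simply chases those references instead of performing set-theoretic joins over indexed tables.

```python
# Minimal sketch of index-free adjacency: vertices hold direct
# references to adjacent vertices, so traversal is pointer chasing.
class Vertex:
    def __init__(self, name):
        self.name = name
        self.out = []  # direct references to adjacent vertices

    def link(self, other):
        self.out.append(other)
        return other

def traverse(start, depth):
    """Return names of vertices reachable in exactly `depth` hops."""
    frontier = [start]
    for _ in range(depth):
        frontier = [w for v in frontier for w in v.out]
    return [v.name for v in frontier]

a, b, c = Vertex("a"), Vertex("b"), Vertex("c")
a.link(b)
a.link(c)
b.link(c)
assert traverse(a, 1) == ["b", "c"]
assert traverse(a, 2) == ["c"]  # a -> b -> c is the only 2-hop path
```

In a relational store, each hop would instead be a join against an edge table; the traversal pattern replaces that global index lookup with a local pointer dereference.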
|
1004.1003
|
Message-Passing Inference on a Factor Graph for Collaborative Filtering
|
cs.IT cs.LG math.IT
|
This paper introduces a novel message-passing (MP) framework for the
collaborative filtering (CF) problem associated with recommender systems. We
model the movie-rating prediction problem popularized by the Netflix Prize,
using a probabilistic factor graph model and study the model by deriving
generalization error bounds in terms of the training error. Based on the model,
we develop a new MP algorithm, termed IMP, for learning the model. To show
the superiority of the IMP algorithm, we compare it with the closely related
expectation-maximization (EM) based algorithm and a number of other matrix
completion algorithms. Our simulation results on Netflix data show that, while
the methods perform similarly with large amounts of data, the IMP algorithm is
superior for small amounts of data. This alleviates the cold-start problem of the
CF systems in practice. Another advantage of the IMP algorithm is that it can
be analyzed using the technique of density evolution (DE) that was originally
developed for MP decoding of error-correcting codes.
|
1004.1045
|
Double-Directional Information Azimuth Spectrum and Relay Network
Tomography for a Decentralized Wireless Relay Network
|
cs.IT math.IT
|
A novel channel representation for a two-hop decentralized wireless relay
network (DWRN) is proposed, where the relays operate in a completely
distributive fashion. The modeling paradigm applies an analogous approach to
the description method for a double-directional multipath propagation channel,
and takes into account the finite system spatial resolution and the extended
relay listening/transmitting time. Specifically, the double-directional
information azimuth spectrum (IAS) is formulated to provide a compact
representation of information flows in a DWRN. The proposed channel
representation is then analyzed from a geometrically-based statistical modeling
perspective. Finally, we look into the problem of relay network tomography
(RNT), which solves an inverse problem to infer the internal structure of a
DWRN by using the instantaneous double-directional IAS recorded at multiple
measuring nodes exterior to the relay region.
|
1004.1061
|
On Tsallis Entropy Bias and Generalized Maximum Entropy Models
|
cs.LG cond-mat.stat-mech cs.AI cs.IT math.IT
|
In the density estimation task, the maximum entropy model (Maxent) can effectively
use reliable prior information via certain constraints, i.e., linear
constraints without empirical parameters. However, reliable prior information
is often insufficient, and the selection of uncertain constraints becomes
necessary but poses considerable implementation complexity. Improper setting of
uncertain constraints can result in overfitting or underfitting. To solve this
problem, a generalization of Maxent, under Tsallis entropy framework, is
proposed. The proposed method introduces a convex quadratic constraint for the
correction of (expected) Tsallis entropy bias (TEB). Specifically, we
demonstrate that the expected Tsallis entropy of sampling distributions is
smaller than the Tsallis entropy of the underlying real distribution. This
expected entropy reduction is exactly the (expected) TEB, which can be
expressed by a closed-form formula and act as a consistent and unbiased
correction. TEB indicates that the entropy of a specific sampling distribution
should be increased accordingly. This entails a quantitative re-interpretation
of the Maxent principle. By compensating for TEB while forcing the resulting
distribution to be close to the sampling distribution, our generalized TEBC
Maxent can be expected to alleviate both overfitting and underfitting. We also
present a connection between TEB and the Lidstone estimator.
As a result, the TEB-Lidstone estimator is developed by analytically identifying
the rate of probability correction in Lidstone. Extensive empirical evaluation
shows promising performance of both TEBC Maxent and TEB-Lidstone in comparison
with various state-of-the-art density estimation methods.
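The direction of the bias the abstract describes is easy to reproduce empirically. The sketch below (an illustrative toy, not the paper's closed-form TEB correction) uses the standard Tsallis entropy S_q(p) = (1 - sum_i p_i^q)/(q - 1); the distribution `true_p`, the order `q`, the sample size `n`, and the trial count are arbitrary choices.

```python
import random
from collections import Counter

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1);
    Shannon entropy is recovered in the limit q -> 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Empirical illustration of the bias direction corrected by TEB: the
# expected Tsallis entropy of empirical (sampling) distributions falls
# below the Tsallis entropy of the underlying true distribution.
random.seed(1)
true_p = [0.5, 0.3, 0.2]
q, n, trials = 2.0, 20, 5000
acc = 0.0
for _ in range(trials):
    counts = Counter(random.choices(range(3), weights=true_p, k=n))
    acc += tsallis_entropy([counts[i] / n for i in range(3)], q)
expected_sampling_entropy = acc / trials
assert expected_sampling_entropy < tsallis_entropy(true_p, q)
```

For q = 2 the gap equals sum_i p_i(1 - p_i)/n, which is why finite-sample empirical distributions systematically look "less entropic" than the distribution that generated them.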
|
1004.1086
|
Grassmannian Fusion Frames
|
cs.IT math.IT
|
Transmitted data may be corrupted by both noise and data loss. Grassmannian
frames are in some sense optimal representations of data transmitted over a
noisy channel that may lose some of the transmitted coefficients. Fusion frame
(or frame of subspaces) theory is a new area that has potential to be applied
to problems in such fields as distributed sensing and parallel processing.
Grassmannian fusion frames combine elements from both theories. A simple, novel
construction of Grassmannian fusion frames with an extension to Grassmannian
fusion frames with local frames shall be presented. Some connections to sparse
representations shall also be discussed.
|
1004.1155
|
Optimal sequential transmission over broadcast channel with nested
feedback
|
cs.IT math.IT math.OC
|
We consider the optimal design of sequential transmission over a broadcast
channel with nested feedback. Nested feedback means that the channel output of
the outer channel is also available at the decoder of the inner channel. We
model the communication system as a decentralized team with three decision
makers---the encoder and the two decoders. Structure of encoding and decoding
strategies that minimize a total distortion measure over a finite horizon are
determined. The results are applicable for real-time communication as well as
for the information theoretic setup.
|