| id | title | categories | abstract |
|---|---|---|---|
cs/0701160
|
Supporting Finite Element Analysis with a Relational Database Backend,
Part II: Database Design and Access
|
cs.DB cs.CE
|
This is Part II of a three-article series on using databases for Finite
Element Analysis (FEA). It discusses (1) database design, (2) data loading,
(3) typical use cases during grid building, (4) typical use cases during
simulation (get and put), (5) typical use cases during analysis (also done in
Part III), and some performance measures of these cases. It argues that using
a database is simpler to implement than custom data schemas, has better
performance because it can use data parallelism, and better supports FEA
modularity and tool evolution because of database schema evolution, data
independence, and self-defining data.
|
cs/0701161
|
Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive
|
cs.DB cs.PF
|
A $2k computer can execute about 8k transactions per second. This is 80x the
1970s traffic of one of the largest US banks - it approximates the total US
1970s financial transaction volume. Very modest modern computers can easily
solve yesterday's problems.
|
cs/0701162
|
A Measure of Transaction Processing 20 Years Later
|
cs.DB cs.PF
|
This provides a retrospective of the paper "A Measure of Transaction
Processing" published in 1985. It shows that transaction processing peak
performance and price-performance have each improved about 100,000x and that
sort/sequential performance has approximately doubled each year (a
million-fold improvement) even though processor performance plateaued in 1995.
|
cs/0701163
|
Using Table Valued Functions in SQL Server 2005 To Implement a Spatial
Data Library
|
cs.DB cs.CE
|
This article explains how to add spatial search functions (point-near-point
and point-in-polygon) to Microsoft SQL Server 2005 using C# and table-valued
functions. It is possible to use this library to add spatial search to your
application without writing any special code. The library implements the
public-domain C# Hierarchical Triangular Mesh (HTM) algorithms from Johns
Hopkins University. That C# library is connected to SQL Server 2005 via a set
of scalar-valued and table-valued functions. These functions act as a spatial
index.
|
cs/0701164
|
Indexing the Sphere with the Hierarchical Triangular Mesh
|
cs.DB cs.DS
|
We describe a method to subdivide the surface of a sphere into spherical
triangles of similar, but not identical, shapes and sizes. The Hierarchical
Triangular Mesh (HTM) is a quad-tree that is particularly good at supporting
searches at different resolutions, from arc seconds to hemispheres. The
subdivision scheme is universal, providing the basis for addressing and for
fast lookups. The HTM provides the basis for an efficient geospatial indexing
scheme in relational databases where the data have an inherent location on
either the celestial sphere or the Earth. The HTM index is superior to
cartographical methods using coordinates with singularities at the poles. We
also describe a way to specify surface regions that efficiently represent
spherical query areas. This article presents the algorithms used to identify
the HTM triangles covering such regions.
|
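An illustrative property of the quad-tree structure described in the abstract (this sketch is ours, not code from the paper): each of the 8 base spherical triangles splits into 4 children per level, so the trixel count grows as 8·4^d.

```python
def trixel_count(depth):
    """Number of HTM trixels at a given subdivision depth.

    The HTM starts from 8 base spherical triangles; each level splits
    every triangle into 4, so depth d yields 8 * 4**depth trixels.
    This sketches the quad-tree's growth only, not HTM indexing itself.
    """
    return 8 * 4 ** depth

# Deeper levels quickly reach arc-second resolution, matching the
# abstract's range "from arc seconds to hemispheres".
print(trixel_count(0), trixel_count(3))  # 8 at the root, 512 at depth 3
```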
cs/0701165
|
Petascale Computational Systems
|
cs.DB cs.AR
|
Computational science is becoming data intensive. Supercomputers must be
balanced systems: not just CPU farms but also petascale IO and networking
arrays. Anyone building CyberInfrastructure should allocate resources to
support a balanced Tier-1 through Tier-3 design.
|
cs/0701166
|
Empirical Measurements of Disk Failure Rates and Error Rates
|
cs.DB cs.AR
|
The SATA advertised bit error rate of one error in 10 terabytes is
frightening. We moved 2 PB through low-cost hardware and saw five disk read
error events, several controller failures, and many system reboots caused by
security patches. We conclude that SATA uncorrectable read errors are not yet a
dominant system-fault source - they happen, but are rare compared to other
problems. We also conclude that UER (uncorrectable error rate) is not the
relevant metric for our needs. When an uncorrectable read error happens,
there are typically several damaged storage blocks (and many uncorrectable
read errors). Also, some uncorrectable read errors may be masked by the
operating system. The more meaningful metric for data architects is Mean Time
To Data Loss (MTTDL).
|
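A back-of-the-envelope check of why the advertised rate is "frightening" yet did not dominate in practice (the numbers come from the abstract; the calculation is ours):

```python
# The abstract quotes an advertised SATA rate of roughly one
# uncorrectable error per 10 TB read, and an experiment that moved
# 2 PB and saw five disk read error events.
TB = 10 ** 12
PB = 10 ** 15

data_moved = 2 * PB
expected_errors = data_moved // (10 * TB)  # errors the spec sheet predicts
observed_error_events = 5                  # what the authors actually saw

print(expected_errors, observed_error_events)  # 200 vs 5
```

The forty-fold gap between the spec-sheet prediction and observation is consistent with the abstract's conclusion that uncorrectable read errors are real but rare compared with controller failures and reboots.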
cs/0701167
|
Large-Scale Query and XMatch, Entering the Parallel Zone
|
cs.DB cs.CE
|
Current and future astronomical surveys are producing catalogs with millions
and billions of objects. On-line access to such big datasets for data mining
and cross-correlation is usually as highly desired as it is infeasible.
Providing these capabilities is becoming critical for the Virtual Observatory
framework. In this paper we present various performance tests that show how,
using Relational Database Management Systems (RDBMS) and a zoning algorithm to
partition and parallelize the computation, we can facilitate large-scale query
and cross-match.
|
cs/0701168
|
To BLOB or Not To BLOB: Large Object Storage in a Database or a
Filesystem?
|
cs.DB
|
Application designers often face the question of whether to store large
objects in a filesystem or in a database. Often this decision is made for
application design simplicity. Sometimes, performance measurements are also
used. This paper looks at the question of fragmentation - one of the
operational issues that can affect the performance and/or manageability of the
system as deployed long term. As expected from the common wisdom, objects
smaller than 256KB are best stored in a database while objects larger than
1MB are best stored in the filesystem. Between 256KB and 1MB, the read:write
ratio and the rate of object overwrite or replacement are important factors.
We used the notion of "storage age", the number of object overwrites, as a
way of normalizing wall-clock time. Storage age allows our results, or
similar such results, to be applied across a number of read:write ratios and
object replacement rates.
|
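The size thresholds in the abstract can be sketched as a placement policy. This is a minimal illustration of the rule of thumb, assuming a simple read-heavy-favors-database tiebreak in the middle band; the function name and parameters are ours, not the paper's:

```python
def choose_store(size_bytes, read_write_ratio=1.0):
    """Pick a store per the abstract's rule of thumb:
    objects under 256KB go to the database, objects over 1MB go to
    the filesystem, and in between the read:write ratio (and, in the
    paper, the overwrite rate) decides. The tiebreak used here -
    read-heavy favors the database - is an illustrative assumption.
    """
    KB, MB = 1024, 1024 * 1024
    if size_bytes < 256 * KB:
        return "database"
    if size_bytes > 1 * MB:
        return "filesystem"
    return "database" if read_write_ratio >= 1.0 else "filesystem"

print(choose_store(100 * 1024))        # database
print(choose_store(2 * 1024 * 1024))   # filesystem
```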
cs/0701169
|
A Framework for Designing MIMO systems with Decision Feedback
Equalization or Tomlinson-Harashima Precoding
|
cs.IT math.IT
|
We consider joint transceiver design for general Multiple-Input
Multiple-Output communication systems that implement interference
(pre-)subtraction, such as those based on Decision Feedback Equalization (DFE)
or Tomlinson-Harashima precoding (THP). We develop a unified framework for
joint transceiver design by considering design criteria that are expressed as
functions of the Mean Square Error (MSE) of the individual data streams. By
deriving two inequalities that involve the logarithms of the individual MSEs,
we obtain optimal designs for two classes of communication objectives, namely
those that are Schur-convex and Schur-concave functions of these logarithms.
For Schur-convex objectives, the optimal design results in data streams with
equal MSEs. This design simultaneously minimizes the total MSE and maximizes
the mutual information for the DFE-based model. For Schur-concave objectives,
the optimal DFE design results in linear equalization and the optimal THP
design results in linear precoding. The proposed framework embraces a wide
range of design objectives and can be regarded as a counterpart of the existing
framework of linear transceiver design.
|
cs/0701170
|
Life Under Your Feet: An End-to-End Soil Ecology Sensor Network,
Database, Web Server, and Analysis Service
|
cs.DB cs.CE
|
Wireless sensor networks can revolutionize soil ecology by providing
measurements at temporal and spatial granularities previously impossible. This
paper presents a soil monitoring system we developed and deployed at an urban
forest in Baltimore as a first step towards realizing this vision. Motes in
this network measure and save soil moisture and temperature in situ every
minute. Raw measurements are periodically retrieved by a sensor gateway and
stored in a central database where calibrated versions are derived and stored.
The measurement database is published through Web Services interfaces. In
addition, analysis tools let scientists analyze current and historical data and
help manage the sensor network. The article describes the system design, what
we learned from the deployment, and initial results obtained from the sensors.
The system measures soil factors with unprecedented temporal precision.
However, the deployment required device-level programming, sensor calibration
across space and time, and cross-referencing measurements with external
sources. The database, web server, and data analysis design required
considerable innovation and expertise, so the ratio of computer scientists to
ecologists was 3:1. Before sensor networks can fulfill their potential as
instruments that can be easily deployed by scientists, these technical
problems must be addressed so that the ratio is one nerd per ten ecologists.
|
cs/0701171
|
The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching
Spatial Datasets
|
cs.DB cs.DS
|
Zones index an N-dimensional Euclidian or metric space to efficiently support
points-near-a-point queries either within a dataset or between two datasets.
The approach uses relational algebra and the B-Tree mechanism found in almost
all relational database systems. Hence, the Zones Algorithm gives a
portable-relational implementation of points-near-point, spatial cross-match,
and self-match queries. This article corrects some mistakes in an earlier
article we wrote on the Zones Algorithm and describes some algorithmic
improvements. The Appendix includes an implementation of point-near-point,
self-match, and cross-match using the USGS city and stream gauge database.
|
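A minimal sketch of the zoning idea in Python. The paper gives a relational/SQL formulation atop B-trees; the names, the flat Euclidean distance test, and the choice of zone height here are our simplifications, adequate only for small radii away from the poles:

```python
from collections import defaultdict
import math

def zone_of(dec, zone_height):
    # A zone is a horizontal stripe of the sphere, indexed by declination.
    return int(math.floor((dec + 90.0) / zone_height))

def crossmatch(points_a, points_b, radius):
    """Points-near-a-point between two datasets: bucket B by zone,
    then compare each A point only against B points in its own zone
    and the two adjacent zones. Making zones at least as tall as the
    search radius guarantees no match is missed."""
    zone_height = radius
    buckets = defaultdict(list)
    for ra, dec in points_b:
        buckets[zone_of(dec, zone_height)].append((ra, dec))
    matches = []
    for ra, dec in points_a:
        z = zone_of(dec, zone_height)
        for dz in (-1, 0, 1):
            for ra2, dec2 in buckets[z + dz]:
                if (ra - ra2) ** 2 + (dec - dec2) ** 2 <= radius ** 2:
                    matches.append(((ra, dec), (ra2, dec2)))
    return matches
```

In a relational system the same zone-plus-neighbors restriction becomes a join predicate on the zone column, which the B-tree index turns into a range scan; that is what makes the approach "portable-relational".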
cs/0701172
|
Cross-Matching Multiple Spatial Observations and Dealing with Missing
Data
|
cs.DB cs.CE
|
Cross-match spatially clusters and organizes several astronomical
point-source measurements from one or more surveys. Ideally, each object would
be found in each survey. Unfortunately, the observation conditions and the
objects themselves change continually. Even some stationary objects are missing
in some observations; sometimes objects have a variable light flux and
sometimes the seeing is worse. In most cases we are faced with a substantial
number of differences in object detections between surveys and between
observations taken at different times within the same survey or instrument.
Dealing with such missing observations is a difficult problem. The first step
is to classify misses as ephemeral - when the object moved or simply
disappeared, masked - when noise hid or corrupted the object observation, or
edge - when the object was near the edge of the observational field. This
classification and a spatial library to represent and manipulate observational
footprints help construct a Match table recording both hits and misses.
Transitive closure clusters friends-of-friends into object bundles. The bundle
summary statistics are recorded in a Bundle table. This design is an evolution
of the Sloan Digital Sky Survey cross-match design that compared overlapping
observations taken at different times.
|
cs/0701173
|
SkyServer Traffic Report - The First Five Years
|
cs.DB cs.CE
|
The SkyServer is an Internet portal to the Sloan Digital Sky Survey Catalog
Archive Server. From 2001 to 2006, there were a million visitors in 3 million
sessions generating 170 million Web hits, 16 million ad-hoc SQL queries, and 62
million page views. The site currently averages 35 thousand visitors and 400
thousand sessions per month. The Web and SQL logs are public. We analyzed
traffic and sessions by duration, usage pattern, data product, and client type
(mortal or bot) over time. The analysis shows (1) the site's popularity, (2)
the educational website that delivered nearly fifty thousand hours of
interactive instruction, (3) the relative use of interactive, programmatic, and
batch-local access, (4) the success of offering ad-hoc SQL, personal database,
and batch job access to scientists as part of the data publication, (5) the
continuing interest in "old" datasets, (6) the usage of SQL constructs, and (7)
a novel approach of using the corpus of correct SQL queries to suggest similar
but correct statements when a user presents an incorrect SQL statement.
|
cs/0701174
|
A Prototype for Educational Planning Using Course Constraints to
Simulate Student Populations
|
cs.AI cs.CY cs.DS cs.SC
|
Distance learning universities usually afford their students the flexibility
to advance their studies at their own pace. This can lead to a considerable
fluctuation of student populations within a program's courses, possibly
affecting the academic viability of a program as well as the related required
resources. Providing a method that estimates this population could be of
substantial help to university management and academic personnel. We describe
how to use course precedence constraints to calculate alternative tuition paths
and then use Markov models to estimate future populations. In doing so, we
identify key issues of a potential large-scale deployment.
|
cs/0701178
|
Distributed Detection in Sensor Networks with Limited Range Sensors
|
cs.IT math.IT
|
We consider a multi-object detection problem over a sensor network (SNET)
with limited range sensors. This problem complements the widely considered
decentralized detection problem where all sensors observe the same object.
While the necessity for global collaboration is clear in the decentralized
detection problem, the benefits of collaboration with limited range sensors
are unclear and have not been widely explored. In this paper we develop a
distributed detection approach based on recent developments in the false
discovery rate (FDR). We first extend the FDR procedure and develop a
transformation that exploits complete or partial knowledge of either the
observed distributions at each sensor or the ensemble (mixture) distribution
across all sensors. We then show that this transformation applies to
multi-dimensional observations, thus extending FDR to multi-dimensional
settings. We also extend FDR theory to cases where distributions under both
null and positive hypotheses are uncertain. We then propose a robust
distributed algorithm to perform detection. We further demonstrate scalability
to large SNETs by showing that the upper bound on the communication complexity
scales linearly with the number of sensors that are in the vicinity of objects
and is independent of the total number of sensors. Finally, we deal with
situations where the sensing model may be uncertain and establish robustness of
our techniques to such uncertainties.
|
cs/0701180
|
Ontology from Local Hierarchical Structure in Text
|
cs.IR
|
We study the notion of hierarchy in the context of visualizing textual data
and navigating text collections. A formal framework for ``hierarchy'' is given
by an ultrametric topology. This provides us with a theoretical foundation for
concept hierarchy creation. A major objective is {\em scalable} annotation or
labeling of concept maps. Serendipitously we pursue other objectives such as
deriving common word pair (and triplet) phrases, i.e., word 2- and 3-grams. We
evaluate our approach using (i) a collection of texts, (ii) a single text
subdivided into successive parts (for which we provide an interactive
demonstrator), and (iii) a text subdivided at the sentence or line level. While
detailing a generic framework, a distinguishing feature of our work is that we
focus on {\em locality} of hierarchic structure in order to extract semantic
information.
|
cs/0701181
|
A Note on Local Ultrametricity in Text
|
cs.CL
|
High dimensional, sparsely populated data spaces have been characterized in
terms of ultrametric topology. This implies that there are natural, not
necessarily unique, tree or hierarchy structures defined by the ultrametric
topology. In this note we study the extent of local ultrametric topology in
texts, with the aim of finding unique ``fingerprints'' for a text or corpus,
discriminating between texts from different domains, and opening up the
possibility of exploiting hierarchical structures in the data. We use coherent
and meaningful collections of over 1000 texts, comprising over 1.3 million
words.
|
cs/0701182
|
Supplement to: Code Spectrum and Reliability Function: Binary Symmetric
Channel
|
cs.IT math.IT
|
A much simpler proof of Theorem 1 from M.Burnashev "Code spectrum and
reliability function: Binary symmetric channel" is presented.
|
cs/0701184
|
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in
SAT-Based Planning
|
cs.AI
|
In Verification and in (optimal) AI Planning, a successful method is to
formulate the application as boolean satisfiability (SAT), and solve it with
state-of-the-art DPLL-based procedures. There is a lack of understanding of why
this works so well. Focussing on the Planning context, we identify a form of
problem structure concerned with the symmetrical or asymmetrical nature of the
cost of achieving the individual planning goals. We quantify this sort of
structure with a simple numeric parameter called AsymRatio, ranging between 0
and 1. We run experiments in 10 benchmark domains from the International
Planning Competitions since 2000; we show that AsymRatio is a good indicator of
SAT solver performance in 8 of these domains. We then examine carefully crafted
synthetic planning domains that allow control of the amount of structure, and
that are clean enough for a rigorous analysis of the combinatorial search
space. The domains are parameterized by size, and by the amount of structure.
The CNFs we examine are unsatisfiable, encoding one planning step less than the
length of the optimal plan. We prove upper and lower bounds on the size of the
best possible DPLL refutations, under different settings of the amount of
structure, as a function of size. We also identify the best possible sets of
branching variables (backdoors). With minimum AsymRatio, we prove exponential
lower bounds, and identify minimal backdoors of size linear in the number of
variables. With maximum AsymRatio, we identify logarithmic DPLL refutations
(and backdoors), showing a doubly exponential gap between the two structural
extreme cases. The reasons for this behavior -- the proof arguments --
illuminate the prototypical patterns of structure causing the empirical
behavior observed in the competition benchmarks.
|
cs/0701194
|
Menzerath-Altmann Law for Syntactic Structures in Ukrainian
|
cs.CL
|
In this paper, a definition of the clause suitable for automated processing
of Ukrainian texts is proposed. The Menzerath-Altmann law is verified at the
sentence level, and the parameters for the dependences of the clause length
counted in words and syllables on the sentence length counted in clauses are
calculated for "Perekhresni Stezhky" ("The Cross-Paths"), a novel by Ivan
Franko.
|
cs/0701196
|
One-bit Distributed Sensing and Coding for Field Estimation in Sensor
Networks
|
cs.IT math.IT
|
This paper formulates and studies a general distributed field reconstruction
problem using a dense network of noisy one-bit randomized scalar quantizers in
the presence of additive observation noise of unknown distribution. A
constructive quantization, coding, and field reconstruction scheme is developed
and an upper-bound to the associated mean squared error (MSE) at any point and
any snapshot is derived in terms of the local spatio-temporal smoothness
properties of the underlying field. It is shown that when the noise, sensor
placement pattern, and the sensor schedule satisfy certain weak technical
requirements, it is possible to drive the MSE to zero with increasing sensor
density at points of field continuity while ensuring that the per-sensor
bitrate and sensing-related network overhead rate simultaneously go to zero.
The proposed scheme achieves the order-optimal MSE versus sensor density
scaling behavior for the class of spatially constant spatio-temporal fields.
|
cs/0701197
|
On Delayed Sequential Coding of Correlated Sources
|
cs.IT math.IT
|
Motivated by video coding applications, the problem of sequential coding of
correlated sources with encoding and/or decoding frame-delays is studied. The
fundamental tradeoffs between individual frame rates, individual frame
distortions, and encoding/decoding frame-delays are derived in terms of a
single-letter information-theoretic characterization of the rate-distortion
region for general inter-frame source correlations and certain types of
potentially frame specific and coupled single-letter fidelity criteria. The
sum-rate-distortion region is characterized in terms of generalized directed
information measures highlighting their role in delayed sequential source
coding problems. For video sources which are spatially stationary memoryless
and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint,
our results expose the optimality of idealized differential predictive coding
among all causal sequential coders, when the encoder uses a positive rate to
describe each frame. Somewhat surprisingly, causal sequential encoding with
one-frame-delayed noncausal sequential decoding can exactly match the
sum-rate-MSE performance of joint coding for all nontrivial MSE-tuples
satisfying certain positive semi-definiteness conditions. Thus, even a single
frame-delay holds potential for yielding significant performance improvements.
Generalizations to higher order Markov sources are also presented and
discussed. A rate-distortion performance equivalence between causal
sequential encoding with delayed noncausal sequential decoding and delayed
noncausal sequential encoding with causal sequential decoding is also
established.
|
cs/0702007
|
Power Optimal Scheduling for Guaranteed Throughput in Multi-access
Fading Channels
|
cs.IT math.IT
|
A power optimal scheduling algorithm that guarantees desired throughput and
bounded delay to each user is developed for fading multi-access multi-band
systems. The optimization is over the joint space of all rate allocation and
coding strategies. The proposed scheduling assigns rates on each band based
only on the current system state, and subsequently uses optimal multi-user
signaling to achieve these rates. The scheduling is computationally simple, and
hence scalable. Due to uplink-downlink duality, all the results extend in a
straightforward fashion to the broadcast channels.
|
cs/0702008
|
MMSE Optimal Algebraic Space-Time Codes
|
cs.IT math.IT
|
The design of Space-Time Block Codes (STBCs) for Maximum Likelihood (ML)
reception has been the predominant focus of researchers. However, the ML
decoding complexity of STBCs becomes prohibitively large as the number of
transmit and receive antennas increases. Hence it is natural to resort to a
suboptimal reception technique like the linear Minimum Mean Squared Error
(MMSE) receiver. Barbarossa et al. and Liu et al. have independently derived
necessary and sufficient conditions for a full-rate linear STBC to be MMSE
optimal, i.e., to achieve the least Symbol Error Rate (SER). Motivated by
this problem, certain existing high-rate STBC constructions from
crossed-product algebras are identified to be MMSE optimal. Also, it is shown
that a certain class of codes from cyclic division algebras, which are
special cases of crossed-product algebras, are MMSE optimal. Hence, these
STBCs achieve the least SER under MMSE reception and are fully diverse under
ML reception.
|
cs/0702009
|
On Evaluating the Rate-Distortion Function of Sources with Feed-Forward
and the Capacity of Channels with Feedback
|
cs.IT math.IT
|
We study the problem of computing the rate-distortion function for sources
with feed-forward and the capacity for channels with feedback. The formulas
(involving directed information) for the optimal rate-distortion function with
feed-forward and channel capacity with feedback are multi-letter expressions
and cannot be computed easily in general. In this work, we derive conditions
under which these can be computed for a large class of sources/channels with
memory and distortion/cost measures. Illustrative examples are also provided.
|
cs/0702011
|
Dealing With Logical Omniscience: Expressiveness and Pragmatics
|
cs.LO cs.AI
|
We examine four approaches for dealing with the logical omniscience problem
and their potential applicability: the syntactic approach, awareness,
algorithmic knowledge, and impossible possible worlds. Although in some
settings these approaches are equi-expressive and can capture all epistemic
states, in other settings of interest (especially with probability in the
picture), we show that they are not equi-expressive. We then consider the
pragmatics of dealing with logical omniscience: how to choose an approach and
construct an appropriate model.
|
cs/0702012
|
Plagiarism Detection in arXiv
|
cs.DB cs.DL cs.IR
|
We describe a large-scale application of methods for finding plagiarism in
research document collections. The methods are applied to a collection of
284,834 documents collected by arXiv.org over a 14-year period, covering a few
different research disciplines. The methodology efficiently detects a variety
of problematic author behaviors, and heuristics are developed to reduce the
number of false positives. The methods are also efficient enough to implement
as a real-time submission screen for a collection many times larger.
|
cs/0702014
|
Probabilistic Analysis of Linear Programming Decoding
|
cs.IT cs.DM math.IT
|
We initiate the probabilistic analysis of linear programming (LP) decoding of
low-density parity-check (LDPC) codes. Specifically, we show that for a random
LDPC code ensemble, the linear programming decoder of Feldman et al. succeeds
in correcting a constant fraction of errors with high probability. The fraction
of correctable errors guaranteed by our analysis surpasses previous
non-asymptotic results for LDPC codes, and in particular exceeds the best
previous finite-length result on LP decoding by a factor greater than ten. This
improvement stems in part from our analysis of probabilistic bit-flipping
channels, as opposed to adversarial channels. At the core of our analysis is a
novel combinatorial characterization of LP decoding success, based on the
notion of a generalized matching. An interesting by-product of our analysis is
to establish the existence of ``probabilistic expansion'' in random bipartite
graphs, in which one requires only that almost every (as opposed to every) set
of a certain size expands, for sets much larger than in the classical
worst-case setting.
|
cs/0702015
|
Network Coding for Distributed Storage Systems
|
cs.IT cs.NI math.IT
|
Peer-to-peer distributed storage systems provide reliable access to data
through redundancy spread over nodes across the Internet. A key goal is to
minimize the amount of bandwidth used to maintain that redundancy. Storing a
file using an erasure code, in fragments spread across nodes, promises to
require less redundancy and hence less maintenance bandwidth than simple
replication to provide the same level of reliability. However, since fragments
must be periodically replaced as nodes fail, a key question is how to generate
a new fragment in a distributed way while transferring as little data as
possible across the network.
In this paper, we introduce a general technique to analyze storage
architectures that combine any form of coding and replication, and we present
two new schemes for maintaining redundancy using erasure codes.
First, we show how to optimally generate MDS fragments directly from existing
fragments in the system. Second, we introduce a new scheme called Regenerating
Codes which use slightly larger fragments than MDS but have lower overall
bandwidth use. We also show through simulation that in realistic environments,
Regenerating Codes can reduce maintenance bandwidth use by 25 percent or more
compared with the best previous design--a hybrid of replication and erasure
codes--while simplifying system architecture.
|
cs/0702017
|
Comment on Improved Analysis of List Decoding and Its Application to
Convolutional Codes and Turbo Codes
|
cs.IT math.IT
|
In a recent paper [1], an improved analysis of list decoding was presented.
The event that the correct codeword is excluded from the list is central. For
the additive white Gaussian noise (AWGN) channel, an important quantity is
what [1] calls the effective Euclidean distance. This quantity was considered
earlier in [2] under the name Vector Euclidean Distance, where a simple
mathematical expression for it was also derived for any list size. In [1], a
geometrical analysis gives this quantity when the list size is 1, 2, or 3.
|
cs/0702018
|
Estimation of the Rate-Distortion Function
|
cs.IT math.IT math.ST stat.TH
|
Motivated by questions in lossy data compression and by theoretical
considerations, we examine the problem of estimating the rate-distortion
function of an unknown (not necessarily discrete-valued) source from empirical
data. Our focus is the behavior of the so-called "plug-in" estimator, which is
simply the rate-distortion function of the empirical distribution of the
observed data. Sufficient conditions are given for its consistency, and
examples are provided to demonstrate that in certain cases it fails to converge
to the true rate-distortion function. The analysis of its performance is
complicated by the fact that the rate-distortion function is not continuous in
the source distribution; the underlying mathematical problem is closely related
to the classical problem of establishing the consistency of maximum likelihood
estimators. General consistency results are given for the plug-in estimator
applied to a broad class of sources, including all stationary and ergodic ones.
A more general class of estimation problems is also considered, arising in the
context of lossy data compression when the allowed class of coding
distributions is restricted; analogous results are developed for the plug-in
estimator in that case. Finally, consistency theorems are formulated for
modified (e.g., penalized) versions of the plug-in, and for estimating the
optimal reproduction distribution.
|
cs/0702020
|
Construction of Minimal Tail-Biting Trellises for Codes over Finite
Abelian Groups
|
cs.IT math.IT
|
A definition of atomic codeword for a group code is presented. Some
properties of atomic codewords of group codes are investigated. Using these
properties, it is shown that every minimal tail-biting trellis for a group code
over a finite abelian group can be constructed from its characteristic
generators, which extends the work of Koetter and Vardy who treated the case of
a linear code over a field. We also present an efficient algorithm for
constructing the minimal tail-biting trellis of a group code over a finite
abelian group, given a generator matrix.
|
cs/0702023
|
High-rate, Multi-Symbol-Decodable STBCs from Clifford Algebras
|
cs.IT math.IT
|
It is well known that Space-Time Block Codes (STBCs) obtained from Orthogonal
Designs (ODs) are single-symbol decodable (SSD) and that those from
Quasi-Orthogonal Designs (QODs) are double-symbol decodable (DSD). However,
there are SSD codes that are not obtainable from ODs and DSD codes that are
not obtainable from QODs. In this paper, a method of constructing
$g$-symbol-decodable ($g$-SD) STBCs using representations of Clifford
algebras is presented which, when specialized to $g=1,2$, gives SSD and DSD
codes respectively. For $2^a$ transmit antennas, the rate (in complex symbols
per channel use) of the $g$-SD codes presented in this paper is
$\frac{a+1-g}{2^{a-g}}$. The maximum rate of the DSD STBCs from QODs reported
in the literature is $\frac{a}{2^{a-1}}$, which is smaller than the rate
$\frac{a-1}{2^{a-2}}$ of the DSD codes of this paper for $2^a$ transmit
antennas. In particular, the DSD codes of this paper for 8 and 16 transmit
antennas offer rates 1 and 3/4 respectively, whereas the known STBCs from
QODs offer only 3/4 and 1/2. The construction of this paper is applicable for
any number of transmit antennas.
|
cs/0702024
|
Searching for low weight pseudo-codewords
|
cs.IT math.IT
|
Belief Propagation (BP) and Linear Programming (LP) decodings of Low Density
Parity Check (LDPC) codes are discussed. We summarize results of
instanton/pseudo-codeword approach developed for analysis of the error-floor
domain of the codes. Instantons are special, code and decoding specific,
configurations of the channel noise contributing most to the Frame-Error-Rate
(FER). Instantons are decoded into pseudo-codewords. The
instanton/pseudo-codeword with the lowest weight describes the largest
Signal-to-Noise-Ratio (SNR) asymptotic of the FER, while the whole spectrum of
low-weight instantons describes the FER vs SNR profile in the extended
error-floor domain. First, we describe a general optimization method that
allows one to find the instantons for any coding/decoding scheme. Second, we
introduce an LP-specific pseudo-codeword search algorithm that allows
efficient calculation of the
pseudo-codeword spectra. Finally, we discuss results of combined BP/LP
error-floor exploration experiments for two model codes.
|
cs/0702025
|
Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for
DCTs and DSTs
|
cs.IT cs.DS math.IT
|
This paper presents a systematic methodology based on the algebraic theory of
signal processing to classify and derive fast algorithms for linear transforms.
Instead of manipulating the entries of transform matrices, our approach derives
the algorithms by stepwise decomposition of the associated signal models, or
polynomial algebras. This decomposition is based on two generic methods or
algebraic principles that generalize the well-known Cooley-Tukey FFT and make
the algorithms' derivations concise and transparent. Application to the 16
discrete cosine and sine transforms yields a large class of fast algorithms,
many of which have not been found before.
|
cs/0702028
|
Uniform and Partially Uniform Redistribution Rules
|
cs.AI
|
This short paper introduces two new fusion rules for combining quantitative
basic belief assignments. These rules, although very simple, have not been
proposed in the literature so far and could serve as useful alternatives because of
their low computation cost with respect to the recent advanced Proportional
Conflict Redistribution rules developed in the DSmT framework.
|
cs/0702030
|
Optimizing the SINR operating point of spatial networks
|
cs.IT math.IT
|
This paper addresses the following question, which is of interest in the
design and deployment of a multiuser decentralized network. Given a total
system bandwidth of W Hz and a fixed data rate constraint of R bps for each
transmission, how many frequency slots N of size W/N should the band be
partitioned into to maximize the number of simultaneous transmissions in the
network? In an interference-limited ad-hoc network, dividing the available
spectrum results in two competing effects: on the positive side, it reduces the
number of users on each band and therefore decreases the interference level
which leads to an increased SINR, while on the negative side the SINR
requirement for each transmission is increased because the same information
rate must be achieved over a smaller bandwidth. Exploring this tradeoff between
bandwidth and SINR and determining the optimum value of N in terms of the
system parameters is the focus of the paper. Using stochastic geometry, we
analytically derive the optimal SINR threshold (which directly corresponds to
the optimal spectral efficiency) on this tradeoff curve and show that it is a
function of only the path loss exponent. Furthermore, the optimal SINR point
lies between the low-SINR (power-limited) and high-SINR (bandwidth-limited)
regimes. In order to operate at this optimal point, the number of frequency
bands (i.e., the reuse factor) should be increased until the threshold SINR,
which is an increasing function of the reuse factor, is equal to the optimal
value.
|
cs/0702031
|
Quantized vs. Analog Feedback for the MIMO Downlink: A Comparison
between Zero-Forcing Based Achievable Rates
|
cs.IT math.IT
|
We consider a MIMO fading broadcast channel and compare the achievable
ergodic rates when the channel state information at the transmitter is provided
by analog noisy feedback or by quantized (digital) feedback. The superiority of
digital feedback is shown, with perfect or imperfect CSIR, whenever the number
of feedback channel uses per channel coefficient is larger than 1. Also, we
show that by proper design of the digital feedback link, errors in the feedback
have a minor effect even when using very simple uncoded modulation. Finally, we
show that analog feedback achieves a fraction 1 - 2F of the optimal
multiplexing gain even in the presence of a feedback delay, when the fading
belongs to the class of Doppler processes with normalized maximum Doppler
frequency shift 0 <= F <= 1/2.
|
cs/0702033
|
Bounds on ordered codes and orthogonal arrays
|
cs.IT math.CO math.IT
|
We derive new estimates of the size of codes and orthogonal arrays in the
ordered Hamming space (the Niederreiter-Rosenbloom-Tsfasman space). We also
show that the eigenvalues of the ordered Hamming scheme, the association scheme
that describes the combinatorics of the space, are given by the multivariable
Krawtchouk polynomials, and establish some of their properties.
|
cs/0702035
|
New Models for the Correlation in Sensor Data
|
cs.IT math.IT
|
In this paper, we propose two new models of spatial correlations in sensor
data in a data-gathering sensor network. A particular property of these models
is that if a sensor node knows in \textit{how many} bits it needs to transmit
its data, then it also knows \textit{which} bits of its data it needs to
transmit.
|
cs/0702037
|
Evolutionary Approaches to Minimizing Network Coding Resources
|
cs.NI cs.IT math.IT
|
We wish to minimize the resources used for network coding while achieving the
desired throughput in a multicast scenario. We employ evolutionary approaches,
based on a genetic algorithm, that avoid the computational complexity that
makes the problem NP-hard. Our experiments show great improvements over the
sub-optimal solutions of prior methods. Our new algorithms improve over our
previously proposed algorithm in three ways. First, whereas the previous
algorithm can be applied only to acyclic networks, our new method works also
with networks with cycles. Second, we enrich the set of components used in the
genetic algorithm, which improves the performance. Third, we develop a novel
distributed framework. Combining distributed random network coding with our
distributed optimization yields a network coding protocol where the resources
used for coding are optimized in the setup phase by running our evolutionary
algorithm at each node of the network. We demonstrate the effectiveness of our
approach by carrying out simulations on a number of different sets of network
topologies.
|
cs/0702038
|
Genetic Representations for Evolutionary Minimization of Network Coding
Resources
|
cs.NE cs.NI
|
We demonstrate how a genetic algorithm solves the problem of minimizing the
resources used for network coding, subject to a throughput constraint, in a
multicast scenario. A genetic algorithm avoids the computational complexity
that makes the problem NP-hard and, for our experiments, greatly improves on
sub-optimal solutions of established methods. We compare two different genotype
encodings, which trade off search-space size against the fitness landscape, as well as
the associated genetic operators. Our finding favors a smaller encoding despite
its fewer intermediate solutions and demonstrates the impact of the modularity
enforced by genetic operators on the performance of the algorithm.
|
cs/0702044
|
Transmission Capacity of Ad Hoc Networks with Spatial Diversity
|
cs.IT math.IT
|
This paper derives the outage probability and transmission capacity of ad hoc
wireless networks with nodes employing multiple antenna diversity techniques,
for a general class of signal distributions. This analysis allows system
performance to be quantified for fading or non-fading environments. The
transmission capacity is given for interference-limited uniformly random
networks on the entire plane with path loss exponent $\alpha>2$ in which nodes
use: (1) static beamforming through $M$ sectorized antennas, for which the
increase in transmission capacity is shown to be $\Theta(M^2)$ if the antennas
are without sidelobes, but less in the event of a nonzero sidelobe level; (2)
dynamic eigen-beamforming (maximal ratio transmission/combining), in which the
increase is shown to be $\Theta(M^{\frac{2}{\alpha}})$; (3) various transmit
antenna selection and receive antenna selection combining schemes, which give
appreciable but rapidly diminishing gains; and (4) orthogonal space-time block
coding, for which there is only a small gain due to channel hardening,
equivalent to Nakagami-$m$ fading for increasing $m$. It is concluded that in
ad hoc networks, static and dynamic beamforming perform best, selection
combining performs well but with rapidly diminishing returns with added
antennas, and that space-time block coding offers only marginal gains.
|
cs/0702045
|
Gaussian Interference Channel Capacity to Within One Bit
|
cs.IT math.IT
|
The capacity of the two-user Gaussian interference channel has been open for
thirty years. Understanding of this problem has been limited. The best
known achievable region is due to Han-Kobayashi but its characterization is
very complicated. It is also not known how tight the existing outer bounds are.
In this work, we show that the existing outer bounds can in fact be arbitrarily
loose in some parameter ranges, and by deriving new outer bounds, we show that
a simplified Han-Kobayashi type scheme can achieve to within a single bit the
capacity for all values of the channel parameters. We also show that the scheme
is asymptotically optimal at certain high SNR regimes. Using our results, we
provide a natural generalization of the point-to-point classical notion of
degrees of freedom to interference-limited scenarios.
|
cs/0702050
|
Permutation Decoding and the Stopping Redundancy Hierarchy of Linear
Block Codes
|
cs.IT math.IT
|
We investigate the stopping redundancy hierarchy of linear block codes and
its connection to permutation decoding techniques. An element in the ordered
list of stopping redundancy values represents the smallest number of possibly
linearly dependent rows in any parity-check matrix of a code that avoids
stopping sets of a given size. Redundant parity-check equations can be shown to
have a similar effect on decoding performance as permuting the coordinates of
the received codeword according to a selected set of automorphisms of the code.
Based on this finding we develop new decoding strategies for data transmission
over the binary erasure channel that combine iterative message passing and
permutation decoding in order to avoid errors confined to stopping sets. We
also introduce the notion of s-SAD sets, containing the smallest number of
automorphisms of a code with the property that they move any set of not more
than s erasures into positions that do not correspond to stopping sets within a
judiciously chosen parity-check matrix.
|
cs/0702051
|
The Gaussian multiple access wire-tap channel: wireless secrecy and
cooperative jamming
|
cs.IT cs.CR math.IT
|
We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT).
In this scenario, multiple users communicate with an intended receiver in the
presence of an intelligent and informed eavesdropper. We define two suitable
secrecy measures, termed individual and collective, to reflect the confidence
in the system for this multi-access environment. We determine achievable rates
such that secrecy to some pre-determined degree can be maintained, using
Gaussian codebooks. We also find outer bounds for the case when the
eavesdropper receives a degraded version of the intended receiver's signal. In
the degraded case, Gaussian codewords are shown to achieve the sum capacity for
collective constraints. In addition, a TDMA scheme is shown to also achieve sum
capacity for both sets of constraints. Numerical results showing the new rate
region are presented and compared with the capacity region of the Gaussian
Multiple-Access Channel (GMAC) with no secrecy constraints. We then find the
secrecy sum-rate maximizing power allocations for the transmitters, and show
that a cooperative jamming scheme can be used to increase achievable rates in
this scenario.
|
cs/0702052
|
On Random Network Coding for Multicast
|
cs.IT math.IT
|
Random linear network coding is a particularly decentralized approach to the
multicast problem. However, the use of random network codes introduces a
non-zero probability that some sinks will not be able to successfully decode
the required sources. One of the main theoretical motivations for random
network codes stems from the lower bound on the probability of successful
decoding reported by Ho et al. (2003). This result demonstrates that all sinks in a
linearly solvable network can successfully decode all sources provided that the
random code field size is large enough. This paper develops a new bound on the
probability of successful decoding.
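The field-size effect behind the Ho et al. lower bound (not the new bound of this paper) can be illustrated with a small Monte Carlo sketch: a sink decodes successfully when its random transfer matrix over GF(q) is invertible, and the fraction of invertible matrices grows with q. The rank routine, the restriction to prime q, and the square-matrix model are assumptions of this illustration.

```python
import random

def rank_gf(matrix, q):
    """Row-reduce a matrix over the prime field GF(q) and return its rank."""
    m = [row[:] for row in matrix]
    rank, cols = 0, len(m[0]) if m else 0
    for col in range(cols):
        # Find a row with a nonzero entry in this column to use as the pivot.
        pivot = next((r for r in range(rank, len(m)) if m[r][col] % q), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], q - 2, q)  # Fermat inverse; requires q prime
        m[rank] = [(x * inv) % q for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % q:
                f = m[r][col]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def decode_success_rate(n, q, trials=2000, seed=1):
    """Fraction of random n x n transfer matrices over GF(q) that are invertible."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        t = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
        ok += rank_gf(t, q) == n
    return ok / trials
```

Running `decode_success_rate` for increasing q shows the success probability approaching one, in line with the "field size large enough" condition.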
|
cs/0702055
|
On the possibility of making the complete computer model of a human
brain
|
cs.NE
|
The development of an algorithm for building a neural network from the
corresponding parts of a DNA code is discussed.
|
cs/0702059
|
Redundancy-Related Bounds on Generalized Huffman Codes
|
cs.IT math.IT
|
This paper presents new lower and upper bounds for the compression rate of
binary prefix codes optimized over memoryless sources according to various
nonlinear codeword length objectives. Like the most well-known redundancy
bounds for minimum average redundancy coding - Huffman coding - these are in
terms of a form of entropy and/or the probability of an input symbol, often the
most probable one. The bounds here, some of which are tight, improve on known
bounds of the form L in [H,H+1), where H is some form of entropy in bits (or,
in the case of redundancy objectives, 0) and L is the length objective, also in
bits. The objectives explored here include exponential-average length, maximum
pointwise redundancy, and exponential-average pointwise redundancy (also called
dth exponential redundancy). The first of these relates to various problems
involving queueing, uncertainty, and lossless communications; the second
relates to problems involving Shannon coding and universal modeling. For these
two objectives we also explore the related problem of the necessary and
sufficient conditions for the shortest codeword of a code being a specific
length.
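The classical baseline that this paper sharpens, L in [H, H+1) for minimum-average-length (Huffman) coding, can be checked numerically. The sketch below builds Huffman codeword lengths for an assumed toy distribution; the paper's nonlinear objectives (exponential-average length, maximal pointwise redundancy) are not implemented here.

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of an optimal (minimum average length) binary prefix code."""
    if len(probs) == 1:
        return [1]
    # Heap entries are (total probability, list of symbol indices in the subtree).
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:        # every symbol in a merged subtree gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

# Toy memoryless source (an assumption for illustration).
probs = [0.4, 0.2, 0.2, 0.1, 0.1]
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * math.log2(p) for p in probs)
```

For this source the average length L falls in [H, H+1), the interval-style bound that the paper's results tighten for its generalized objectives.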
|
cs/0702063
|
Entropy vectors and network codes
|
cs.IT cs.NI math.IT
|
We consider a network multicast example that relates the solvability of the
multicast problem with the existence of an entropy function. As a result, we
provide an alternative approach to proving the insufficiency of linear
(and abelian) network codes and demonstrate the utility of non-Shannon
inequalities to tighten outer bounds on network coding capacity regions.
|
cs/0702064
|
Group characterizable entropy functions
|
cs.IT math.IT
|
This paper studies properties of entropy functions that are induced by groups
and subgroups. We show that many information-theoretic properties of these
group-induced entropy functions also have corresponding group-theoretic
interpretations. We then propose an extension method to find outer bounds for
these group-induced entropy functions.
|
cs/0702067
|
The Haar Wavelet Transform of a Dendrogram: Additional Notes
|
cs.IR
|
We consider the wavelet transform of a finite, rooted, node-ranked, $p$-way
tree, focusing on the case of binary ($p = 2$) trees. We study a Haar wavelet
transform on this tree. Wavelet transforms allow for multiresolution analysis
through translation and dilation of a wavelet function. We explore how this
works in our tree context.
|
cs/0702068
|
Distributed Decision Through Self-Synchronizing Sensor Networks in the
Presence of Propagation Delays and Nonreciprocal Channels
|
cs.IT cs.MA math.IT
|
In this paper we propose and analyze a distributed algorithm for achieving
globally optimal decisions, either estimation or detection, through a
self-synchronization mechanism among linearly coupled integrators initialized
with local measurements. We model the interaction among the nodes as a directed
graph with weights dependent on the radio interface, and we pay special
attention to the effect of the propagation delays occurring in the exchange of
data among sensors, as a function of the network geometry. We derive necessary
and sufficient conditions for the proposed system to reach a consensus on
globally optimal decision statistics. One of the major results proved in this
work is that a consensus is achieved for any bounded delay condition if and
only if the directed graph is quasi-strongly connected. We also provide a
closed form expression for the global consensus, showing that the effect of
delays is, in general, to introduce a bias in the final decision. The closed
form expression is also useful to modify the consensus mechanism in order to
get rid of the bias with minimum extra complexity.
|
cs/0702070
|
A Practical Approach to Lossy Joint Source-Channel Coding
|
cs.IT math.IT
|
This work is devoted to practical joint source channel coding. Although the
proposed approach has more general scope, for the sake of clarity we focus on a
specific application example, namely, the transmission of digital images over
noisy binary-input output-symmetric channels. The basic building blocks of most
state-of-the-art source coders are: 1) a linear transformation; 2) scalar
quantization of the transform coefficients; 3) probability modeling of the
sequence of quantization indices; 4) an entropy coding stage. We identify the
weakness of the conventional separated source-channel coding approach in the
catastrophic behavior of the entropy coding stage. Hence, we replace this stage
with linear coding, which directly maps the sequence of redundant quantizer
output symbols into a channel codeword. We show that this approach does not
entail any loss of optimality in the asymptotic regime of large block length.
However, in the practical regime of finite block length and low decoding
complexity, our approach yields very significant improvements. Furthermore,
our scheme allows us to retain the transform, quantization, and probability
modeling of current state-of-the-art source coders, which are carefully matched to the
features of specific classes of sources. In our working example, we make use
of the ``bit-plane'' and ``context'' models defined by the JPEG2000 standard
and we re-interpret the underlying probability model as a sequence of
conditionally Markov sources. The Markov structure allows us to derive a simple successive
coding and decoding scheme, where the latter is based on iterative Belief
Propagation. We provide a construction example of the proposed scheme based on
punctured Turbo Codes and we demonstrate the gain over a conventional separated
scheme by running extensive numerical experiments on test images.
|
cs/0702071
|
What is needed to exploit knowledge of primary transmissions?
|
cs.IT math.IT
|
Recently, Tarokh and others have raised the possibility that a cognitive
radio might know the interference signal being transmitted by a strong primary
user in a non-causal way, and use this knowledge to increase its data rates.
However, there is a subtle difference between knowing the signal transmitted by
the primary and the actual interference at our receiver since there is a
wireless channel between these two points. We show that even an unknown phase
results in a substantial decrease in the data rates that can be achieved, and
thus there is a need to feedback interference channel estimates to the
cognitive transmitter. We then consider the case of fading channels. We derive
an upper bound on the rate for given outage error probability for faded dirt.
We give a scheme that uses appropriate "training" to obtain such estimates and
quantify this scheme's required overhead as a function of the relevant
coherence time and interference power.
|
cs/0702072
|
Logic Programming with Satisfiability
|
cs.PL cs.AI
|
This paper presents a Prolog interface to the MiniSat satisfiability solver.
Logic programming with satisfiability combines the strengths of the two
paradigms: logic programming for encoding search problems into satisfiability
on the one hand, and efficient SAT solving on the other. This synergy exposes
a programming paradigm which we propose here as a logic programming pearl. To
illustrate logic programming with SAT solving we give an
example Prolog program which solves instances of Partial MAXSAT.
|
cs/0702073
|
Tradeoff between decoding complexity and rate for codes on graphs
|
cs.IT cs.CC math.IT
|
We consider transmission over a general memoryless channel, with bounded
decoding complexity per bit under message passing decoding. We show that the
achievable rate is bounded below capacity if decoding must succeed within a
specified finite number of operations per bit at the decoder, for some codes
on graphs. These codes include LDPC and LDGM codes. Good performance with
low decoding complexity suggests strong local structures in the graphs of these
codes, which are detrimental to the code rate asymptotically. The proof method
leads to an interesting necessary condition on the code structures which could
achieve capacity with bounded decoding complexity. We also show that if a code
sequence achieves a rate epsilon close to the channel capacity, the decoding
complexity scales at least as O(log(1/epsilon)).
|
cs/0702075
|
Firebird Database Backup by Serialized Database Table Dump
|
cs.DB
|
This paper presents a simple data dump and load utility for Firebird
databases which mimics mysqldump in MySQL. This utility, fb_dump and fb_load,
for dumping and loading respectively, retrieves each database table using
kinterbasdb and serializes the data using the marshal module. This utility has two
advantages over the standard Firebird database backup utility, gbak. Firstly,
it is able to backup and restore single database tables which might help to
recover corrupted databases. Secondly, the output is in text-coded format
(from the marshal module), making it more resilient than a compressed text backup, as in
the case of using gbak.
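The per-table dump/load pattern described above can be sketched as follows. The paper's utility targets Firebird through kinterbasdb; for a self-contained illustration this sketch substitutes sqlite3 as the DB-API driver, and the function names, table name, and file path are hypothetical.

```python
import marshal
import sqlite3

def dump_table(cursor, table, path):
    """Serialize all rows of one table to a file with the marshal module."""
    # Table name interpolation is acceptable here only because it is a
    # trusted, hard-coded identifier in this sketch.
    cursor.execute(f"SELECT * FROM {table}")
    rows = [tuple(r) for r in cursor.fetchall()]
    with open(path, "wb") as f:
        marshal.dump(rows, f)

def load_table(cursor, table, path, ncols):
    """Restore rows previously written by dump_table into an (empty) table."""
    with open(path, "rb") as f:
        rows = marshal.load(f)
    placeholders = ",".join("?" * ncols)
    cursor.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
```

Because each table is dumped to its own file, a single corrupted table can be restored in isolation, which is the recovery advantage claimed over a monolithic gbak backup.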
|
cs/0702077
|
Properties of Rank Metric Codes
|
cs.IT math.IT
|
This paper investigates general properties of codes with the rank metric. We
first investigate asymptotic packing properties of rank metric codes. Then, we
study sphere covering properties of rank metric codes, derive bounds on their
parameters, and investigate their asymptotic covering properties. Finally, we
establish several identities that relate the rank weight distribution of a
linear code to that of its dual code. One of our identities is the counterpart
of the MacWilliams identity for the Hamming metric, and it has a different form
from the identity by Delsarte.
|
cs/0702081
|
Random Sentences from a Generalized Phrase-Structure Grammar Interpreter
|
cs.CL
|
In numerous domains in cognitive science it is often useful to have a source
for randomly generated corpora. These corpora may serve as a foundation for
artificial stimuli in a learning experiment (e.g., Ellefson & Christiansen,
2000), or as input into computational models (e.g., Christiansen & Dale, 2001).
The following compact and general C program interprets a phrase-structure
grammar specified in a text file. It follows parameters set at a Unix or
Unix-based command-line and generates a corpus of random sentences from that
grammar.
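The generation scheme is straightforward to sketch. The program described above is written in C; the following Python equivalent (with a hypothetical toy grammar) expands each non-terminal by choosing one of its productions uniformly at random.

```python
import random

def generate(grammar, symbol="S", rng=random):
    """Expand a symbol into a list of terminal tokens.

    grammar maps each non-terminal to a list of right-hand sides,
    each right-hand side being a list of symbols.
    """
    if symbol not in grammar:          # a symbol with no productions is terminal
        return [symbol]
    rhs = rng.choice(grammar[symbol])  # pick one production uniformly
    out = []
    for s in rhs:
        out.extend(generate(grammar, s, rng))
    return out

# Hypothetical toy grammar, standing in for the text-file grammar the program reads.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}
```

Repeatedly calling `generate(grammar)` yields a corpus of random sentences, the kind of artificial stimulus set the abstract mentions.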
|
cs/0702082
|
Invariant template matching in systems with spatiotemporal coding: a
vote for instability
|
cs.CV cs.AI
|
We consider the design of a pattern recognition system that matches templates to
images, both of which are spatially sampled and encoded as temporal sequences.
The image is subject to a combination of various perturbations. These include
ones that can be modeled as parameterized uncertainties such as image blur,
luminance, translation, and rotation as well as unmodeled ones. Biological and
neural systems require that these perturbations be processed through a minimal
number of channels by simple adaptation mechanisms. We found that the most
suitable mathematical framework to meet this requirement is that of weakly
attracting sets. This framework provides us with a normative and unifying
solution to the pattern recognition problem. We analyze the consequences of its
explicit implementation in neural systems. Several properties inherent to the
systems designed in accordance with our normative mathematical argument
coincide with known empirical facts. This is illustrated in mental rotation,
visual search and blur/intensity adaptation. We demonstrate how our results can
be applied to a range of practical problems in template matching and pattern
recognition.
|
cs/0702084
|
Performance of Ultra-Wideband Impulse Radio in Presence of Impulsive
Interference
|
cs.IT math.IT
|
We analyze the performance of a coherent impulse-radio (IR) ultra-wideband
(UWB) channel in the presence of the interference generated by concurrent
transmissions of systems using the same impulse radio. We derive a novel
algorithm, using a Monte Carlo method, to calculate a lower bound on the rate
that can be achieved using the maximum-likelihood estimator. Using this bound
we show that such a channel is very robust to interference, in contrast to the
nearest-neighbor detector.
|
cs/0702085
|
Social Behaviours Applied to P2P Systems: An efficient Algorithm for
Resource Organisation
|
cs.DC cs.IR
|
P2P systems are a great solution to the problem of distributing resources.
The main issue of P2P networks is that searching and retrieving resources
shared by peers is usually expensive and does not take into account
similarities among peers. In this paper we present preliminary simulations of
PROSA, a novel algorithm for P2P network structuring, inspired by social
behaviours. Peers in PROSA self--organise in social groups of similar peers,
called ``semantic--groups'', depending on the resources they are sharing. Such
a network smoothly evolves to a small--world graph, where queries for resources
are efficiently and effectively routed.
|
cs/0702091
|
Observable Graphs
|
cs.MA
|
An edge-colored directed graph is \emph{observable} if an agent that moves
along its edges is able to determine his position in the graph after a
sufficiently long observation of the edge colors. When the agent is able to
determine his position only from time to time, the graph is said to be
\emph{partly observable}. Observability in graphs is desirable in situations
where autonomous agents are moving on a network and one wants to localize them
(or the agent wants to localize himself) with limited information. In this
paper, we completely characterize observable and partly observable graphs and
show how these concepts relate to observable discrete event systems and to
local automata. Based on these characterizations, we provide polynomial time
algorithms to decide observability, to decide partial observability, and to
compute the minimal number of observations necessary for finding the position
of an agent. In particular we prove that in the worst case this minimal number
of observations increases quadratically with the number of nodes in the graph.
From this it follows that it may be necessary for an agent to pass through
the same node several times before he is finally able to determine his position
in the graph. We then consider the more difficult question of assigning colors
to a graph so as to make it observable and we prove that two different versions
of this problem are NP-complete.
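The basic observation-tracking step can be sketched directly: maintain the set of nodes consistent with the colors seen so far; the graph is observable when this set eventually shrinks to a singleton. This is an illustrative sketch only, not the paper's characterization or its polynomial-time algorithms, and the edge-list representation is an assumption.

```python
def possible_positions(edges, observed):
    """Set of nodes the agent could occupy after emitting the color sequence.

    edges: list of (src, dst, color) triples of a directed edge-colored graph.
    observed: non-empty sequence of colors seen along the agent's walk.
    """
    # After the first observation, the agent may sit at the head of any
    # edge carrying that color.
    current = {dst for _, dst, c in edges if c == observed[0]}
    # Each further color filters the candidate set through matching edges.
    for color in observed[1:]:
        current = {dst for src, dst, c in edges
                   if c == color and src in current}
    return current
```

On a small colored digraph one can watch the candidate set shrink: an ambiguous first color may leave two candidates, while a longer observation pins the agent to a single node.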
|
cs/0702093
|
Secure Broadcasting
|
cs.IT math.IT
|
Wyner's wiretap channel is extended to parallel broadcast channels and fading
channels with multiple receivers. In the first part of the paper, we consider
the setup of parallel broadcast channels with one sender, multiple intended
receivers, and one eavesdropper. We study the situations where the sender
broadcasts either a common message or independent messages to the intended
receivers. We derive upper and lower bounds on the common-message-secrecy
capacity, which coincide when the users are reversely degraded. For the case of
independent messages we establish the secrecy sum-capacity when the users are
reversely degraded.
In the second part of the paper we apply our results to fading channels:
perfect channel state information of all intended receivers is known globally,
whereas the eavesdropper channel is known only to her. For the common message
case, a somewhat surprising result is proven: a positive rate can be achieved
independently of the number of intended receivers. For independent messages, an
opportunistic transmission scheme is presented that achieves the secrecy
sum-capacity in the limit of large number of receivers. Our results are stated
for a fast fading channel model. Extensions to the block fading model are also
discussed.
|
cs/0702096
|
Overcoming Hierarchical Difficulty by Hill-Climbing the Building Block
Structure
|
cs.NE cs.AI
|
The Building Block Hypothesis suggests that Genetic Algorithms (GAs) are
well-suited for hierarchical problems, where efficient solving requires proper
problem decomposition and assembly of the solution from sub-solutions with strong
non-linear interdependencies. The paper proposes a hill-climber operating over
the building block (BB) space that can efficiently address hierarchical
problems. The new Building Block Hill-Climber (BBHC) uses past hill-climb
experience to extract BB information and adapts its neighborhood structure
accordingly. The perpetual adaptation of the neighborhood structure allows the
method to climb the hierarchical structure solving successively the
hierarchical levels. It is expected that for fully non-deceptive hierarchical
BB structures the BBHC can solve hierarchical problems in linearithmic time.
Empirical results confirm that the proposed method scales almost linearly with
the problem size, thus clearly outperforming population-based recombinative
methods.
|
cs/0702097
|
Avoiding bias in cards cryptography
|
cs.CR cs.MA
|
We outline the need for stricter requirements for unconditionally secure
cryptographic protocols inspired by the Russian Cards problem. A new
requirement CA4 is proposed that checks for bias in single card occurrence in
announcements consisting of alternatives for players' holdings of cards. This
requirement CA4 is shown to be equivalent to an alternative requirement CA5.
All announcements found to satisfy CA4 are 2-designs. We also show that all
binary designs are 3-designs. Instead of avoiding bias in announcements
produced by such protocols, one may as well apply unbiased protocols such that
patterns in announcements become meaningless. We give two examples of such
protocols for card deal parameters (3,3,1), i.e. two of the players hold three
cards, and the remaining player, playing the role of eavesdropper, holds a
single card.
|
cs/0702099
|
Discrete Memoryless Interference and Broadcast Channels with
Confidential Messages: Secrecy Rate Regions
|
cs.IT math.IT
|
We study information-theoretic security for discrete memoryless interference
and broadcast channels with independent confidential messages sent to two
receivers. Confidential messages are transmitted to their respective receivers
with information-theoretic secrecy. That is, each receiver is kept in total
ignorance with respect to the message intended for the other receiver. The
secrecy level is measured by the equivocation rate at the eavesdropping
receiver. In this paper, we present inner and outer bounds on secrecy capacity
regions for these two communication systems. The derived outer bounds have an
identical mutual information expression that applies to both channel models.
The difference is in the input distributions over which the expression is
optimized. The inner bound rate regions are achieved by random binning
techniques. For the broadcast channel, a double-binning coding scheme allows
for both joint encoding and preserving of confidentiality. Furthermore, we show
that, for a special case of the interference channel, referred to as the switch
channel, the two bounds meet. Finally, we describe several transmission
schemes for Gaussian interference channels and derive their achievable rate
regions while ensuring mutual information-theoretic secrecy. An encoding scheme
in which transmitters dedicate some of their power to create artificial noise
is proposed and shown to outperform both time-sharing and simple multiplexed
transmission of the confidential messages.
|
cs/0702100
|
A Class of Multi-Channel Cosine Modulated IIR Filter Banks
|
cs.IT math.IT
|
This paper presents a class of multi-channel cosine-modulated filter banks
satisfying the perfect reconstruction (PR) property using an IIR prototype
filter. By imposing a suitable structure on the polyphase filter coefficients,
we show that it is possible to greatly simplify the PR condition, while
preserving the causality and stability of the system. We derive closed-form
expressions for the synthesis filters and also study the numerical stability of
the filter bank using frame theoretic bounds. Further, we show that it is
possible to implement this filter bank with a much lower number of arithmetic
operations when compared to FIR filter banks with comparable performance. The
filter bank's modular structure also lends itself to efficient VLSI
implementation.
|
cs/0702101
|
An identity of Chernoff bounds with an interpretation in statistical
physics and applications in information theory
|
cs.IT math.IT
|
An identity between two versions of the Chernoff bound on the probability of a
certain large-deviations event is established. This identity has an
interpretation in statistical physics, namely, an isothermal equilibrium of a
composite system that consists of multiple subsystems of particles. Several
information-theoretic application examples, where the analysis of this large-
deviations probability naturally arises, are then described from the viewpoint
of this statistical-mechanical interpretation. This results in several
relationships between information theory and statistical physics which, we
hope, the reader will find insightful.
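For background only (the abstract does not state which two versions of the bound the identity relates, so this is the standard form, not necessarily the paper's), the Chernoff bound for the large-deviations event of an i.i.d. sum can be written as:

```latex
% Standard Chernoff bound for an i.i.d. sum X_1,\dots,X_n with
% moment generating function M(s) = E[e^{sX}]:
\Pr\!\left\{ \frac{1}{n}\sum_{i=1}^{n} X_i \ge a \right\}
  \le \exp\!\left( -n \sup_{s \ge 0} \bigl[\, s a - \ln M(s) \,\bigr] \right)
```

The supremum over the tilting parameter $s$ is what admits the statistical-physics reading: $\ln M(s)$ plays the role of a log-partition function.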
|
cs/0702102
|
Paging and Registration in Cellular Networks: Jointly Optimal Policies
and an Iterative Algorithm
|
cs.IT cs.NI math.IT
|
This paper explores optimization of paging and registration policies in
cellular networks. Motion is modeled as a discrete-time Markov process, and
minimization of the discounted, infinite-horizon average cost is addressed. The
structure of jointly optimal paging and registration policies is investigated
through the use of dynamic programming for partially observed Markov processes.
It is shown that there exist policies with a certain simple form that are
jointly optimal, though the dynamic programming approach does not directly
provide an efficient method to find the policies.
An iterative algorithm for policies with the simple form is proposed and
investigated. The algorithm alternates between paging policy optimization and
registration policy optimization. It finds a pair of individually optimal
policies, but an example is given showing that the policies need not be jointly
optimal. Majorization theory and Riesz's rearrangement inequality are used to
show that jointly optimal paging and registration policies are given for
symmetric or Gaussian random walk models by the nearest-location-first paging
policy and distance threshold registration policies.
|
cs/0702104
|
A Union Bound Approximation for Rapid Performance Evaluation of
Punctured Turbo Codes
|
cs.IT math.IT
|
In this paper, we present a simple technique to approximate the performance
union bound of a punctured turbo code. The bound approximation exploits only
those terms of the transfer function that have a major impact on the overall
performance. We revisit the structure of the constituent convolutional encoder
and we develop a rapid method to calculate the most significant terms of the
transfer function of a turbo encoder. We demonstrate that, for a large
interleaver size, this approximation is very accurate. Furthermore, we apply
our proposed method to a family of punctured turbo codes, which we call
pseudo-randomly punctured codes. We conclude by emphasizing the benefits of our
approach compared to those employed previously. We also highlight the
advantages of pseudo-random puncturing over other puncturing schemes.
|
cs/0702105
|
The Simplest Solution to an Underdetermined System of Linear Equations
|
cs.IT math.IT
|
Consider a d*n matrix A, with d<n. The problem of solving for x in y=Ax is
underdetermined, and has infinitely many solutions (if there are any). Given y,
the minimum Kolmogorov complexity solution (MKCS) of the input x is defined to
be an input z (out of many) with minimum Kolmogorov complexity that satisfies
y=Az. One expects that if the actual input is simple enough, then MKCS will
recover the input exactly. This paper presents a preliminary study of the
existence and value of the complexity level up to which such a complexity-based
recovery is possible. It is shown that for the set of all d*n binary matrices
(with entries 0 or 1 and d<n), MKCS exactly recovers the input for an
overwhelming fraction of the matrices provided the Kolmogorov complexity of the
input is O(d). A weak converse that is loose by a log n factor is also
established for this case. Finally, we investigate the difficulty of finding a
matrix that has the property of recovering inputs with complexity of O(d) using
MKCS.
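Kolmogorov complexity is uncomputable, so any concrete MKCS solver must substitute a proxy. The sketch below (function name and data are hypothetical, not from the paper) enumerates binary inputs in order of Hamming weight, a crude stand-in for description length, and returns the first one consistent with y:

```python
import itertools
import numpy as np

def mkcs_toy(A, y):
    """Toy stand-in for MKCS on y = A x with binary inputs.

    Kolmogorov complexity is uncomputable, so as a crude proxy for
    "simplest" we enumerate binary inputs in order of Hamming weight
    and return the first one consistent with y.
    """
    d, n = A.shape
    for weight in range(n + 1):
        for support in itertools.combinations(range(n), weight):
            z = np.zeros(n, dtype=int)
            z[list(support)] = 1
            if np.array_equal(A @ z, y):
                return z
    return None  # y is not reachable from any binary input

A = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 0, 1]])   # d = 3 < n = 5: underdetermined
x = np.array([1, 0, 0, 0, 0])     # a "simple" (sparse) input
print(mkcs_toy(A, A @ x))         # [1 0 0 0 0]: the input is recovered
```

The brute-force search is exponential in n; it only illustrates the recovery phenomenon the abstract studies, for which the paper's interest is existence, not algorithmics.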
|
cs/0702106
|
Wild, Wild Wikis: A way forward
|
cs.IR
|
Wikis can be considered public-domain knowledge-sharing systems. They provide
an opportunity for those who may not have the privilege of publishing their
thoughts through traditional methods. They are one of the fastest growing
systems of online encyclopaedia. In this study, we consider the importance of
wikis as a way of creating, sharing and improving public knowledge. We identify
some of the problems associated with wikis, including (a) identifying
information and its creator, (b) ensuring the accuracy of information, (c)
establishing the credibility of authors, (d) vandalism degrading the quality of
information, and (e) weak control over the contents. A solution to some of
these problems is sought through the use of an annotation model. The model
assumes that contributions in wikis can be seen as annotations to the initial
document. It proposes systematic control of contributors and contributions to
the initiative, and the keeping of records of what existed and what was done to
initial documents. We believe that with this model, analysis can be done on the
progress of wiki initiatives and that wikis can be better used for the creation
and sharing of knowledge for public use.
|
cs/0702107
|
AMIEDoT: An annotation model for document tracking and recommendation
service
|
cs.IR
|
The primary objective of document annotation in whatever form, manual or
electronic, is to allow those who may not have control over the original
document to provide a personal view on an information source. Beyond providing
a personal assessment of original information sources, we are looking at a
situation where the annotations made can be used as an additional source of
information for document tracking and recommendation services. Most of the
annotation tools existing today were conceived for independent use, with no
reference to the creator of the annotation. We propose AMIEDoT (Annotation
Model for Information Exchange and Document Tracking), an annotation model that
can assist in document tracking and recommendation services. The model is based
on three parameters of the annotation act. We believe that introducing document
parameters, time, and parameters describing the creator of the annotation into
the annotation process can provide a dependable way to know who used a
document, when it was used, and what it was used for. Beyond document tracking,
our model can be used not only for selective dissemination of information but
also for recommendation services. AMIEDoT can also be used for information
sharing and information reuse.
|
cs/0702108
|
Orthogonal Codes for Robust Low-Cost Communication
|
cs.IT math.IT
|
Orthogonal coding schemes, known to asymptotically achieve the capacity per
unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost
input symbol, are investigated for single-user compound memoryless channels,
which exhibit uncertainties in their input-output statistical relationships. A
minimax formulation is adopted to attain robustness. First, a class of
achievable rates per unit cost (ARPUC) is derived, and its utility is
demonstrated through several representative case studies. Second, when the
uncertainty set of channel transition statistics satisfies a convexity
property, optimization is performed over the class of ARPUC through utilizing
results of minimax robustness. The resulting CPUC lower bound indicates the
ultimate performance of the orthogonal coding scheme, and coincides with the
CPUC under certain restrictive conditions. Finally, still under the convexity
property, it is shown that the CPUC can generally be achieved, through
utilizing a so-called mixed strategy in which an orthogonal code contains an
appropriate composition of different nonzero-cost input symbols.
|
cs/0702109
|
AMIE: An annotation model for information research
|
cs.IR
|
Most users consult an information database, an information warehouse, or the
Internet in order to resolve one problem or another. Available online or
offline annotation tools were not conceived with the objective of assisting
users in their bid to resolve a decisional problem. Apart from the objective
and usage of annotation tools, how these tools are conceived and classified has
implications for their usage. Several criteria have been used to categorize
annotation concepts. Typically, annotations are conceived based on how they
affect the organization of the document being considered for annotation or the
organization of the resulting annotation. Our approach is annotation that
assists in information research for decision making. The Annotation Model for
Information Exchange (AMIE) was conceived with the objective of information
sharing and reuse.
|
cs/0702111
|
Informed Dynamic Scheduling for Belief-Propagation Decoding of LDPC
Codes
|
cs.IT math.IT
|
Low-Density Parity-Check (LDPC) codes are usually decoded by running an
iterative belief-propagation, or message-passing, algorithm over the factor
graph of the code. The traditional message-passing schedule consists of
updating all the variable nodes in the graph, using the same pre-update
information, followed by updating all the check nodes of the graph, again,
using the same pre-update information. Recently, several studies have shown
that sequential scheduling, in which messages are generated using the latest
available information, significantly improves the convergence speed in terms of
the number of iterations. Sequential scheduling raises the problem of finding
the best sequence of message updates. This paper presents practical scheduling
best sequence of message updates. This paper presents practical scheduling
strategies that use the value of the messages in the graph to find the next
message to be updated. Simulation results show that these informed update
sequences require significantly fewer iterations than standard sequential
schedules. Furthermore, the paper shows that informed scheduling solves some
standard trapping set errors. Therefore, it also outperforms traditional
scheduling for a large number of iterations. Complexity and implementability
issues are also addressed.
|
cs/0702112
|
The General Gaussian Multiple Access and Two-Way Wire-Tap Channels:
Achievable Rates and Cooperative Jamming
|
cs.IT cs.CR math.IT
|
The General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT) and the
Gaussian Two-Way Wire-Tap Channel (GTW-WT) are considered. In the GGMAC-WT,
multiple users communicate with an intended receiver in the presence of an
eavesdropper who receives their signals through another GMAC. In the GTW-WT,
two users communicate with each other over a common Gaussian channel, with an
eavesdropper listening through a GMAC. A secrecy measure that is suitable for
this multi-terminal environment is defined, and achievable secrecy rate regions
are found for both channels. For both cases, the power allocations maximizing
the achievable secrecy sum-rate are determined. It is seen that the optimum
policy may prevent some terminals from transmitting in order to preserve the
secrecy of the system. Inspired by this construct, a new scheme,
\emph{cooperative jamming}, is proposed, where users who are prevented from
transmitting according to the secrecy sum-rate maximizing power allocation
policy "jam" the eavesdropper, thereby helping the remaining users. This scheme
is shown to increase the achievable secrecy sum-rate. Overall, our results show
that in multiple-access scenarios, users can help each other to collectively
achieve positive secrecy rates. In other words, cooperation among users can be
invaluable for achieving secrecy for the system.
|
cs/0702115
|
Guessing based on length functions
|
cs.IT cs.CR math.IT
|
A guessing wiretapper's performance on a Shannon cipher system is analyzed
for a source with memory. Close relationships between guessing functions and
length functions are first established. Subsequently, asymptotically optimal
encryption and attack strategies are identified and their performances analyzed
for sources with memory. The performance metrics are exponents of guessing
moments and probability of large deviations. The metrics are then characterized
for unifilar sources. Universal asymptotically optimal encryption and attack
strategies are also identified for unifilar sources. Guessing in the increasing
order of Lempel-Ziv coding lengths is proposed for finite-state sources, and
shown to be asymptotically optimal. Finally, competitive optimality properties
of guessing in the increasing order of description lengths and Lempel-Ziv
coding lengths are demonstrated.
|
cs/0702118
|
Interpolation-based Decoding of Alternant Codes
|
cs.IT math.IT
|
We formulate the classical decoding algorithm of alternant codes afresh based
on interpolation as in Sudan's list decoding of Reed-Solomon codes, and thus
get rid of the key equation and the linear recurring sequences in the theory.
The result is a streamlined exposition of the decoding algorithm using a bit of
the theory of Groebner bases of modules.
|
cs/0702122
|
Transmitter and Precoding Order Optimization for Nonlinear Downlink
Beamforming
|
cs.IT math.IT
|
The downlink of a multiple-input multiple-output (MIMO) broadcast channel
(BC) is considered, where each receiver is equipped with a single antenna and
the transmitter performs nonlinear Dirty-Paper Coding (DPC). We present an
efficient algorithm that finds the optimum transmit filters and power
allocation as well as the optimum precoding order(s) possibly affording
time-sharing between individual DPC orders. Subsequently, necessary and
sufficient conditions for the optimality of an arbitrary precoding order are
derived. Based on these, we propose a suboptimal algorithm that shows excellent
performance at low complexity.
|
cs/0702126
|
Efficient Searching and Retrieval of Documents in PROSA
|
cs.DC cs.IR
|
Retrieving resources in a distributed environment is more difficult than
finding data in centralised databases. In the last decade, P2P systems have
arisen as new and effective distributed architectures for resource sharing, but
searching in such environments can be difficult and time-consuming. In this
paper we discuss the efficiency of resource discovery in PROSA, a
self-organising P2P system heavily inspired by social networks. All routing
choices in PROSA are made locally, looking only at the relevance of the next
peer to each query. We show that PROSA is able to effectively answer queries
for rare documents, forwarding them through the most convenient path to nodes
that most probably share matching resources. This result is heavily related to
the small-world structure that naturally emerges in PROSA.
|
cs/0702127
|
Exploiting social networks dynamics for P2P resource organisation
|
cs.DC cs.IR
|
In this paper we present a formal description of PROSA, a P2P resource
management system heavily inspired by social networks. Social networks have
been deeply studied in the last two decades in order to understand how
communities of people arise and grow. It is a widely known result that networks
of social relationships usually evolve into small-worlds, i.e. networks where
nodes are strongly connected to their neighbours and separated from all other
nodes by a small number of hops. This work shows that the algorithms
implemented in PROSA yield an efficient small-world P2P network.
|
cs/0702130
|
Syndrome Decoding of Reed-Solomon Codes Beyond Half the Minimum Distance
based on Shift-Register Synthesis
|
cs.IT math.IT
|
In this paper, a new approach for decoding low-rate Reed-Solomon codes beyond
half the minimum distance is considered and analyzed. Unlike the Sudan
algorithm published in 1997, this new approach is based on multi-sequence
shift-register synthesis, which makes it easy to understand and simple to
implement. The computational complexity of this shift-register based algorithm
is of the same order as the complexity of the well-known Berlekamp-Massey
algorithm. Moreover, the error correcting radius coincides with the error
correcting radius of the original Sudan algorithm, and the practical decoding
performance observed on a q-ary symmetric channel (QSC) is virtually identical
to the decoding performance of the Sudan algorithm. Bounds for the failure and
error probability as well as for the QSC decoding performance of the new
algorithm are derived, and the performance is illustrated by means of examples.
|
cs/0702131
|
AICA: a New Pair Force Evaluation Method for Parallel Molecular Dynamics
in Arbitrary Geometries
|
cs.CE cs.DC
|
A new algorithm for calculating intermolecular pair forces in Molecular
Dynamics (MD) simulations on a distributed parallel computer is presented. The
Arbitrary Interacting Cells Algorithm (AICA) is designed to operate on
geometrical domains defined by an unstructured, arbitrary polyhedral mesh,
which has been spatially decomposed into irregular portions for
parallelisation. It is intended for nanoscale fluid mechanics simulation by MD
in complex geometries, and to provide the MD component of a hybrid MD/continuum
simulation. AICA has been implemented in the open-source computational toolbox
OpenFOAM, and verified against a published MD code.
|
cs/0702132
|
Uplink Capacity and Interference Avoidance for Two-Tier Femtocell
Networks
|
cs.NI cs.IT math.IT
|
Two-tier femtocell networks-- comprising a conventional macrocellular network
plus embedded femtocell hotspots-- offer an economically viable solution to
achieving high cellular user capacity and improved coverage. With universal
frequency reuse and DS-CDMA transmission, however, the ensuing cross-tier
cochannel interference (CCI) causes unacceptable outage probability. This paper
develops an uplink capacity analysis and interference avoidance strategy in
such a two-tier CDMA network. We evaluate a network-wide area spectral
efficiency metric called the \emph{operating contour (OC)} defined as the
feasible combinations of the average number of active macrocell users and
femtocell base stations (BS) per cell-site that satisfy a target outage
constraint. The capacity analysis provides an accurate characterization of the
uplink outage probability, accounting for power control, path-loss and
shadowing effects. Considering worst case CCI at a corner femtocell, results
reveal that interference avoidance through a time-hopped CDMA physical layer
and sectorized antennas allows about a 7x higher femtocell density, relative to
a split spectrum two-tier network with omnidirectional femtocell antennas. A
femtocell exclusion region and a tier-selection-based handoff policy offer
modest improvements in the OCs. These results provide guidelines for the design
of robust shared spectrum two-tier networks.
|
cs/0702138
|
On the Maximal Diversity Order of Spatial Multiplexing with Transmit
Antenna Selection
|
cs.IT math.IT
|
Zhang et al. recently derived upper and lower bounds on the achievable
diversity of an N_R x N_T i.i.d. Rayleigh fading multiple antenna system using
transmit antenna selection, spatial multiplexing and a linear receiver
structure. For the case of L = 2 transmitting (out of N_T available) antennas
the bounds are tight and therefore specify the maximal diversity order. For the
general case with L <= min(N_R,N_T) transmitting antennas it was conjectured
that the maximal diversity is (N_T-L+1)(N_R-L+1) which coincides with the lower
bound. Herein, we prove this conjecture for the zero forcing and zero forcing
decision feedback (with optimal detection ordering) receiver structures.
|
cs/0702142
|
An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation
|
cs.DS cs.DB
|
Monotonicity is a simple yet significant qualitative characteristic. We
consider the problem of segmenting an array in up to K segments. We want
segments to be as monotonic as possible and to alternate signs. We propose a
quality metric for this problem, present an optimal linear time algorithm based
on a novel formalism, and experimentally compare its performance to a linear
time
top-down regression algorithm. We show that our algorithm is faster and more
accurate. Applications include pattern recognition and qualitative modeling.
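The abstract does not spell out the optimal K-segment algorithm, so as a toy illustration of the segmentation goal only (function name and behaviour are our assumptions, not the paper's method), a greedy split into maximal runs of alternating monotone direction might look like:

```python
def monotone_runs(xs):
    """Greedily split a sequence into maximal runs of constant
    monotone direction, so consecutive runs alternate sign.
    A toy sketch only, not the paper's optimal K-segment algorithm."""
    if len(xs) < 2:
        return [list(xs)]
    runs, start, direction = [], 0, 0
    for i in range(1, len(xs)):
        step = (xs[i] > xs[i - 1]) - (xs[i] < xs[i - 1])
        if step == 0 or direction == 0:
            direction = direction or step  # flat step, or first move
            continue
        if step != direction:
            runs.append(list(xs[start:i]))  # close the current run
            start, direction = i, step
    runs.append(list(xs[start:]))
    return runs

print(monotone_runs([1, 3, 5, 4, 2, 6]))  # [[1, 3, 5], [4, 2], [6]]
```

Note that the paper's formalism may allow adjacent segments to share their turning points; this sketch assigns each point to exactly one run and has no bound K on the number of segments.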
|
cs/0702143
|
Attribute Value Reordering For Efficient Hybrid OLAP
|
cs.DB
|
The normalization of a data cube is the ordering of the attribute values. For
large multidimensional arrays where dense and sparse chunks are stored
differently, proper normalization can lead to improved storage efficiency. We
show that it is NP-hard to compute an optimal normalization even for 1x3
chunks, although we find an exact algorithm for 1x2 chunks. When dimensions are
nearly statistically independent, we show that dimension-wise attribute
frequency sorting is an optimal normalization and takes time O(d n log(n)) for
data cubes of size n^d. When dimensions are not independent, we propose and
evaluate several heuristics. The hybrid OLAP (HOLAP) storage mechanism is
already 19%-30% more efficient than ROLAP, but normalization can improve it
further by 9%-13% for a total gain of 29%-44% over ROLAP.
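The dimension-wise frequency-sorting heuristic the abstract names can be sketched directly (the data layout and function name below are our assumptions): along each dimension, relabel attribute values in decreasing order of how often they occur among the nonzero cells, which clusters dense cells toward one corner of the cube.

```python
from collections import Counter

def frequency_sort_normalize(cells, d):
    """Dimension-wise attribute-frequency sorting: along each
    dimension, relabel attribute values so the most frequent value
    gets index 0, the next index 1, and so on. `cells` is a list of
    d-tuples, the nonzero cells of a sparse data cube."""
    orders = []
    for dim in range(d):
        freq = Counter(cell[dim] for cell in cells)
        orders.append({v: i for i, (v, _) in enumerate(freq.most_common())})
    relabelled = [tuple(orders[dim][cell[dim]] for dim in range(d))
                  for cell in cells]
    return relabelled, orders

cells = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x")]
print(frequency_sort_normalize(cells, 2)[0])
# [(0, 0), (0, 1), (1, 0), (0, 0)]
```

The per-dimension sort is what gives the O(d n log(n)) cost quoted in the abstract for cubes of size n^d.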
|
cs/0702144
|
Slope One Predictors for Online Rating-Based Collaborative Filtering
|
cs.DB cs.AI
|
Rating-based collaborative filtering is the process of predicting how a user
would rate a given item from other user ratings. We propose three related slope
one schemes with predictors of the form f(x) = x + b, which precompute the
average difference between the ratings of one item and another for users who
rated both. Slope one algorithms are easy to implement, efficient to query,
reasonably accurate, and they support both online queries and dynamic updates,
which makes them good candidates for real-world systems. The basic slope one
scheme is suggested as a new reference scheme for collaborative filtering. By
factoring in items that a user liked separately from items that a user
disliked, we achieve results competitive with slower memory-based schemes over
the standard benchmark EachMovie and Movielens data sets while better
fulfilling the desiderata of CF applications.
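The basic scheme can be sketched straight from the abstract's f(x) = x + b description (the dict-of-dicts data layout and function name are our assumptions): for each item the target user has rated, shift that rating by the average difference b between the target item and the rated item over co-raters, then average the per-item predictions.

```python
def slope_one_predict(ratings, user, item):
    """Basic slope one sketched from the abstract: predictors of the
    form f(x) = x + b, where b is the average difference between the
    ratings of `item` and another item over users who rated both."""
    preds = []
    for other, x in ratings[user].items():
        if other == item:
            continue
        # users who rated both `item` and `other`
        common = [u for u, r in ratings.items() if item in r and other in r]
        if not common:
            continue
        b = sum(ratings[u][item] - ratings[u][other]
                for u in common) / len(common)
        preds.append(x + b)  # f(x) = x + b
    return sum(preds) / len(preds) if preds else None

ratings = {"alice": {"i1": 1.0, "i2": 1.5}, "bob": {"i1": 2.0}}
print(slope_one_predict(ratings, "bob", "i2"))  # 2.5
```

The abstract's two other related schemes vary how the per-item predictions are combined, e.g. treating items the user liked and disliked separately; in a real system the differences b would be precomputed and updated incrementally rather than recomputed per query.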
|
cs/0702146
|
A Local Tree Structure is NOT Sufficient for the Local Optimality of
Message-Passing Decoding in Low Density Parity Check Codes
|
cs.IT math.IT
|
We address the question: `Is a local tree structure sufficient for the local
optimality of the message-passing algorithm in low-density parity-check
codes?' It is shown that the answer is negative. Using this observation, we
pinpoint a flaw in the proof of Theorem 1 in the paper `The Capacity of
Low-Density Parity-Check Codes Under Message-Passing Decoding' by Thomas J.
Richardson and R\"udiger L. Urbanke \cite{RUCapacity}. We further provide a new
proof of that theorem based on a different argument.
|
cs/0702147
|
On the Complexity of Exact Maximum-Likelihood Decoding for
Asymptotically Good Low Density Parity Check Codes
|
cs.IT math.IT
|
Since the classical work of Berlekamp, McEliece and van Tilborg, it is well
known that the problem of exact maximum-likelihood (ML) decoding of general
linear codes is NP-hard. In this paper, we show that exact ML decoding of a
class of asymptotically good error-correcting codes--expander codes, a special
case of low density parity check (LDPC) codes--over binary symmetric channels
(BSCs) is possible with an expected polynomial complexity. More precisely, for
any bit-flipping probability, $p$, in a nontrivial range, there exists a rate
region of non-zero support and a family of asymptotically good codes, whose
error probability decays exponentially in coding length $n$, for which ML
decoding is feasible in expected polynomial time. Furthermore, as $p$
approaches zero, this rate region approaches the channel capacity region. The
result is based on the existence of polynomial-time suboptimal decoding
algorithms that provide an ML certificate and the ability to compute the
probability that the suboptimal decoder yields the ML solution. One such ML
certificate decoder is the LP decoder of Feldman; we also propose a more
efficient $O(n^2)$ algorithm based on the work of Sipser and Spielman and the
Ford-Fulkerson algorithm. The results can be extended to AWGN channels and
suggest that it may be feasible to eliminate the error floor phenomenon
associated with message-passing decoding of LDPC codes in the high SNR regime.
Finally, we observe that the argument of Berlekamp, McEliece and van Tilborg
can be used to show that ML decoding of the considered class of codes
constructed from LDPC codes with regular left degree, of which the considered
expander codes are a special case, remains NP-hard; thus giving an interesting
contrast between the worst-case and expected complexities.
|
cs/0702148
|
Linking Microscopic and Macroscopic Models for Evolution: Markov Chain
Network Training and Conservation Law Approximations
|
cs.CE cs.IT cs.NA cs.NE math.IT
|
In this paper, a general framework for the analysis of a connection between
the training of artificial neural networks via the dynamics of Markov chains
and the approximation of conservation law equations is proposed. This framework
allows us to demonstrate an intrinsic link between microscopic and macroscopic
models for evolution via the concept of perturbed generalized dynamic systems.
The main result is exemplified with a number of illustrative examples where
efficient numerical approximations follow directly from network-based
computational models, viewed here as Markov chain approximations. Finally,
stability and consistency conditions of such computational models are
discussed.
|
cs/0702149
|
Coupling Control and Human-Centered Automation in Mathematical Models of
Complex Systems
|
cs.CE cs.AI cs.HC cs.IT math.IT
|
In this paper we analyze mathematically how human factors can be effectively
incorporated into the analysis and control of complex systems. As an example,
we focus our discussion around one of the key problems in the Intelligent
Transportation Systems (ITS) theory and practice, the problem of speed control,
considered here as a decision making process with limited information
available. The problem is cast mathematically in the general framework of
control problems and is treated in the context of dynamically changing
environments where control is coupled to human-centered automation. Since in
this case control might not be limited to a small number of control settings,
as it is often assumed in the control literature, serious difficulties arise in
the solution of this problem. We demonstrate that the problem can be reduced to
a set of Hamilton-Jacobi-Bellman equations where human factors are incorporated
via estimations of the system Hamiltonian. In the ITS context, these
estimations can be obtained with the use of on-board equipment like
sensors/receivers/actuators, in-vehicle communication devices, etc. The
proposed methodology provides a way to integrate human factors into the
solution process of models for other complex dynamic systems.
|
cs/0702150
|
A note on rate-distortion functions for nonstationary Gaussian
autoregressive processes
|
cs.IT math.IT
|
Source coding theorems and Shannon rate-distortion functions were studied for
the discrete-time Wiener process by Berger and generalized to nonstationary
Gaussian autoregressive processes by Gray and by Hashimoto and Arimoto.
Hashimoto and Arimoto provided an example apparently contradicting the methods
used in Gray, implied that Gray's rate-distortion evaluation was not correct in
the nonstationary case, and derived a new formula that agreed with previous
results for the stationary case and held in the nonstationary case. In this
correspondence it is shown that the rate-distortion formulas of Gray and
Hashimoto and Arimoto are in fact consistent and that the example of Hashimoto
and Arimoto does not form a counterexample to the methods or results
of the earlier paper. Their results do provide an alternative, but equivalent,
formula for the rate-distortion function in the nonstationary case and they
provide a concrete example that the classic Kolmogorov formula differs from the
autoregressive formula when the autoregressive source is not stationary. Some
observations are offered on the different versions of the Toeplitz asymptotic
eigenvalue distribution theorem used in the two papers to emphasize how a
slight modification of the classic theorem avoids the problems with certain
singularities.
|
cs/0702154
|
On the Capacity of the Single Source Multiple Relay Single Destination
Mesh Network
|
cs.IT cs.NI math.IT
|
In this paper, we derive the information theoretic capacity of a special
class of mesh networks. A mesh network is a heterogeneous wireless network in
which the transmission among power limited nodes is assisted by powerful
relays, which use the same wireless medium. We investigate the mesh network
when there is one source, one destination, and multiple relays, which we call
the single source multiple relay single destination (SSMRSD) mesh network. We
derive the asymptotic capacity of the SSMRSD mesh network when the relay powers
grow to infinity. Our approach is as follows. We first look at an upper bound
on the information theoretic capacity of these networks in a Gaussian setting.
We then show that this bound is achievable asymptotically using the
compress-and-forward strategy for the multiple relay channel. We also perform
numerical computations for the case when the relays have finite powers. We
observe that even when the relay power is only a few times larger than the
source power, the compress-and-forward rate gets close to the capacity. The
results indicate the value of cooperation in wireless mesh networks. The
capacity characterization quantifies how the relays can cooperate, using the
compress-and-forward strategy, to either conserve node energy or to increase
transmission rate.
|
cs/0702159
|
Perfect Hashing for Data Management Applications
|
cs.DS cs.DB
|
Perfect hash functions can potentially be used to compress data in connection
with a variety of data management tasks. Though there has been considerable
work on how to construct good perfect hash functions, there is a gap between
theory and practice in all previous methods for minimal perfect hashing. On
one side, there are good theoretical results without experimentally proven
practicality for large key sets. On the other side, there are algorithms whose
time and space usage have been theoretically analyzed, but only under the
unrealistic assumption that truly random hash functions are available for free. In this
paper we attempt to bridge this gap between theory and practice, using a number
of techniques from the literature to obtain a novel scheme that is
theoretically well-understood and at the same time achieves an
order-of-magnitude increase in performance compared to previous ``practical''
methods. This improvement comes from a combination of a novel, theoretically
optimal perfect hashing scheme that greatly simplifies previous methods, and
the fact that our algorithm is designed to make good use of the memory
hierarchy. We demonstrate the scalability of our algorithm by considering a set
of over one billion URLs from the World Wide Web of average length 64, for
which we construct a minimal perfect hash function on a commodity PC in a
little more than 1 hour. Our scheme produces minimal perfect hash functions
using slightly more than 3 bits per key. For perfect hash functions in the
range $\{0,...,2n-1\}$ the space usage drops to just over 2 bits per key (i.e.,
one bit more than optimal for representing the key). This is significantly
below what has been achieved previously for very large values of $n$.
|
cs/0702161
|
Perfectly Secure Steganography: Capacity, Error Exponents, and Code
Constructions
|
cs.IT cs.CR math.IT
|
An analysis of steganographic systems subject to the following perfect
undetectability condition is presented in this paper. Following embedding of
the message into the covertext, the resulting stegotext is required to have
exactly the same probability distribution as the covertext. Then no statistical
test can reliably detect the presence of the hidden message. We refer to such
steganographic schemes as perfectly secure. A few such schemes have been
proposed in recent literature, but they have vanishing rate. We prove that
communication performance can potentially be vastly improved; specifically, our
basic setup assumes independently and identically distributed (i.i.d.)
covertext, and we construct perfectly secure steganographic codes from public
watermarking codes using binning methods and randomized permutations of the
code. The permutation is a secret key shared between encoder and decoder. We
derive (positive) capacity and random-coding exponents for perfectly-secure
steganographic systems. The error exponents provide estimates of the code
length required to achieve a target low error probability. We address the
potential loss in communication performance due to the perfect-security
requirement. This loss is the same as the loss obtained under a weaker order-1
steganographic requirement that would just require matching of first-order
marginals of the covertext and stegotext distributions. Furthermore, no loss
occurs if the covertext distribution is uniform and the distortion metric is
cyclically symmetric; steganographic capacity is then achieved by randomized
linear codes. Our framework may also be useful for developing computationally
secure steganographic systems that have near-optimal communication performance.
|