| id | title | categories | abstract |
|---|---|---|---|
0809.1077
|
Variable Neighborhood Search for the University Lecturer-Student
Assignment Problem
|
cs.AI
|
The paper presents a study of local search heuristics in general and variable
neighborhood search in particular for the resolution of an assignment problem
studied in the practical work of universities. Here, students have to be
assigned to scientific topics which are proposed and supported by members of
staff. The problem involves optimizing assignments under the preferences that
students may express when applying for certain topics.
Variable neighborhood search is observed to lead to superior results for the
tested problem instances. One instance is taken from an actual case, while the
others have been generated from the real-world data to support a deeper
analysis.
An extension of the problem has been formulated by integrating a second
objective function that simultaneously balances the workload of the members of
staff while maximizing utility of the students. The algorithmic approach has
been prototypically implemented in a computer system. One important aspect in
this context is the application of the research work to the problems of other
scientific institutions, and therefore the provision of decision-support
functionalities.
|
0809.1205
|
On Information-Theoretic Scaling Laws for Wireless Networks
|
cs.IT math.IT
|
With the analysis of the hierarchical scheme, the potential influence of the
pre-constant in deriving scaling laws is exposed. It is found that a modified
hierarchical scheme can achieve a throughput arbitrarily many times higher than
the original one, although it remains vanishingly small compared to the linear
scaling. The study demonstrates the essential importance of the throughput
formula itself, rather than the scaling laws consequently derived.
|
0809.1208
|
Bounds on the Capacity of the Relay Channel with States at the Source
|
cs.IT math.IT
|
This paper has been withdrawn by the authors.
|
0809.1226
|
Applications of Universal Source Coding to Statistical Analysis of Time
Series
|
cs.IT cs.AI math.IT math.ST stat.TH
|
We show how universal codes can be used for solving some of the most
important statistical problems for time series. By definition, a universal code
(or a universal lossless data compressor) can compress any sequence generated
by a stationary and ergodic source asymptotically to the Shannon entropy,
which, in turn, is the best achievable ratio for lossless data compressors.
We consider finite-alphabet and real-valued time series and the following
problems: estimation of the limiting probabilities for finite-alphabet time
series and estimation of the density for real-valued time series; on-line
prediction, regression, and classification (or problems with side information)
for both types of time series; and the following problems of hypothesis
testing: goodness-of-fit testing, or identity testing, and testing of serial
independence. It is important to note that all problems are considered in the
framework of classical mathematical statistics and, at the same time, everyday
methods of data compression (or archivers) can be used as a tool for the
estimation and testing. It turns out that quite often the suggested methods
and tests are more powerful than known ones when applied in practice.
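The idea of using everyday compressors as statistical tools can be illustrated with a small sketch (our own, not from the paper): classify a sequence by which training source it extends with the smallest increase in compressed size, using zlib as a crude stand-in for a universal code. The source names and data below are invented.

```python
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed representation, a crude
    # stand-in for the ideal universal code length.
    return len(zlib.compress(data, 9))

def classify(sample: bytes, sources: dict) -> str:
    # Assign the sample to the source whose training data it extends
    # with the smallest increase in compressed size: a rough proxy
    # for the source under which the sample is most probable.
    def extra_bytes(train: bytes) -> int:
        return compressed_size(train + sample) - compressed_size(train)
    return min(sources, key=lambda name: extra_bytes(sources[name]))

# Two artificial stationary sources with clearly different statistics.
sources = {
    "letters": b"the quick brown fox jumps over the lazy dog " * 50,
    "digits": b"0123456789" * 200,
}
print(classify(b"0123456789" * 30, sources))
```

Real archivers are only approximations of universal codes, so this heuristic works best when the sources are statistically well separated, as here.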
|
0809.1241
|
A New Framework of Multistage Estimation
|
math.ST cs.LG math.PR stat.ME stat.TH
|
In this paper, we have established a unified framework of multistage
parameter estimation. We demonstrate that a wide variety of statistical
problems such as fixed-sample-size interval estimation, point estimation with
error control, bounded-width confidence intervals, interval estimation
following hypothesis testing, construction of confidence sequences, can be cast
into the general framework of constructing sequential random intervals with
prescribed coverage probabilities. We have developed exact methods for the
construction of such sequential random intervals in the context of multistage
sampling. In particular, we have established an inclusion principle and coverage
tuning techniques to control and adjust the coverage probabilities of
sequential random intervals. We have obtained concrete sampling schemes which
are unprecedentedly efficient in terms of sampling effort as compared to
existing procedures.
|
0809.1252
|
Maximum Entropy Rate of Markov Sources for Systems With Non-regular
Constraints
|
cs.IT math.IT
|
Using the concept of discrete noiseless channels, it was shown by Shannon in
A Mathematical Theory of Communication that the ultimate performance of an
encoder for a constrained system is limited by the combinatorial capacity of
the system if the constraints define a regular language. In the present work,
it is shown that this is not an inherent property of regularity but holds in
general. To show this, constrained systems are described by generating
functions and random walks on trees.
|
0809.1257
|
The Golden Ratio Encoder
|
cs.IT math.IT
|
This paper proposes a novel Nyquist-rate analog-to-digital (A/D) conversion
algorithm which achieves exponential accuracy in the bit-rate despite using
imperfect components. The proposed algorithm is based on a robust
implementation of a beta-encoder in which the base beta equals the golden
mean. It was previously shown that beta-encoders can be implemented in
such a way that their exponential accuracy is robust against threshold offsets
in the quantizer element. This paper extends this result by allowing for
imperfect analog multipliers with imprecise gain values as well. A formal
computational model for algorithmic encoders and a general test bed for
evaluating their robustness is also proposed.
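The encoding principle can be sketched with a plain greedy beta-expansion in base equal to the golden mean; this is an illustrative sketch of the expansion only, not the robust circuit-level scheme analyzed in the paper.

```python
# Greedy beta-expansion with beta = golden mean: illustrates the
# exponential accuracy in the number of bits. Not the paper's
# robust implementation.
PHI = (1 + 5 ** 0.5) / 2  # the golden mean, ~1.618

def encode(x: float, n_bits: int) -> list:
    # Produce bits b_i (i = 1..n) so that x ~ sum_i b_i * PHI**(-i).
    bits = []
    u = x  # invariant: u stays in [0, 1)
    for _ in range(n_bits):
        u *= PHI
        b = 1 if u >= 1 else 0
        bits.append(b)
        u -= b
    return bits

def decode(bits: list) -> float:
    return sum(b * PHI ** (-(i + 1)) for i, b in enumerate(bits))

x = 0.3141592653589793
bits = encode(x, 40)
print(abs(decode(bits) - x))  # error bounded by PHI**(-40)
```

The residual after n steps is at most PHI**(-n), which is the exponential accuracy in the bit-rate referred to in the abstract.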
|
0809.1258
|
Network Protection Codes Against Link Failures Using Network Coding
|
cs.IT cs.NI math.IT
|
Protecting against link failures in communication networks is essential to
increase robustness, accessibility, and reliability of data transmission.
Recently, network coding has been proposed as a solution to provide agile and
cost efficient network protection against link failures, which does not require
data rerouting, or packet retransmission. To achieve this, separate paths have
to be provisioned to carry encoded packets, hence requiring either the addition
of extra links, or reserving some of the resources for this purpose. In this
paper, we propose network protection codes against a single link failure using
network coding, where a separate path using reserved links is not needed. In
this case portions of the link capacities are used to carry the encoded
packets.
The scheme is extended to protect against multiple link failures and can be
implemented at an overlay layer. Although this leads to reducing the network
capacity, the network capacity reduction is asymptotically small in most cases
of practical interest. We demonstrate that such network protection codes are
equivalent to error correcting codes for erasure channels. Finally, we study
the encoding and decoding operations of such codes over the binary field.
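The equivalence with erasure-correcting codes over the binary field can be seen in a toy single-failure example (our own sketch): one parity packet, formed by XOR-ing the data packets, recovers any one lost packet.

```python
from functools import reduce

def xor_packets(packets):
    # Bytewise XOR of equal-length packets (addition over GF(2)).
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*packets))

# Three data packets travelling over separate links.
data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_packets(data)  # encoded packet carried in spare link capacity

# Suppose the link carrying packet 1 fails: recover it from the rest.
survivors = [data[0], data[2], parity]
recovered = xor_packets(survivors)
print(recovered == data[1])  # single link failure repaired
```

Losing one of n data packets is exactly an erasure on a channel protected by a single-parity-check code; protecting against multiple failures requires correspondingly stronger erasure codes.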
|
0809.1264
|
Tight Bounds on Minimum Maximum Pointwise Redundancy
|
cs.IT math.IT
|
This paper presents new lower and upper bounds for the optimal compression of
binary prefix codes in terms of the most probable input symbol, where
compression efficiency is determined by the nonlinear codeword length objective
of minimizing maximum pointwise redundancy. This objective relates to both
universal modeling and Shannon coding, and these bounds are tight throughout
the interval. The upper bounds also apply to a related objective, that of dth
exponential redundancy.
|
0809.1270
|
Predictive Hypothesis Identification
|
cs.LG math.ST stat.ML stat.TH
|
While statistics focuses on hypothesis testing and on estimating (properties
of) the true sampling distribution, in machine learning the performance of
learning algorithms on future data is the primary issue. In this paper we
bridge the gap with a general principle (PHI) that identifies hypotheses with
best predictive performance. This includes predictive point and interval
estimation, simple and composite hypothesis testing, (mixture) model selection,
and others as special cases. For concrete instantiations we will recover
well-known methods, variations thereof, and new ones. PHI nicely justifies,
reconciles, and blends (a reparametrization invariant variation of) MAP, ML,
MDL, and moment estimation. One particular feature of PHI is that it can
genuinely deal with nested hypotheses.
|
0809.1300
|
What makes a good role model
|
cs.IT math.IT
|
The role model strategy is introduced as a method for designing an estimator
by approaching the output of a superior estimator that has better input
observations. This strategy is shown to yield the optimal Bayesian estimator
when a Markov condition is fulfilled. Two examples involving simple channels
are given to illustrate its use. The strategy is combined with time averaging
to construct a statistical model by numerically solving a convex program. The
role model strategy was developed in the context of low complexity decoder
design for iterative decoding. Potential applications outside the field of
communications are discussed.
|
0809.1330
|
Low-Complexity Coding and Source-Optimized Clustering for Large-Scale
Sensor Networks
|
cs.IT math.IT
|
We consider the distributed source coding problem in which correlated data
picked up by scattered sensors has to be encoded separately and transmitted to
a common receiver, subject to a rate-distortion constraint. Although
near-optimal solutions based on Turbo and LDPC codes exist for this problem,
in most cases the proposed techniques do not scale to networks of hundreds of
sensors. We present a scalable solution based on the following key elements:
(a) distortion-optimized index assignments for low-complexity distributed
quantization, (b) source-optimized hierarchical clustering based on the
Kullback-Leibler distance and (c) sum-product decoding on specific factor
graphs exploiting the correlation of the data.
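As a small illustration of element (b), the Kullback-Leibler distance between two discrete sensor-reading distributions could be computed as follows (a generic sketch with invented distributions, not the paper's code):

```python
from math import log2

def kl_distance(p, q):
    # D(p || q) = sum_i p_i * log2(p_i / q_i); assumes q_i > 0
    # wherever p_i > 0. Not symmetric, hence a "distance" only loosely.
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two sensors whose quantized readings follow similar distributions
# would be merged early by a KL-based hierarchical clustering.
p = [0.7, 0.2, 0.1]
q = [0.6, 0.3, 0.1]
print(kl_distance(p, q))  # small value: candidates for the same cluster
print(kl_distance(p, p))  # 0.0 by definition
```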
|
0809.1344
|
The Balanced Unicast and Multicast Capacity Regions of Large Wireless
Networks
|
cs.IT math.IT
|
We consider the question of determining the scaling of the $n^2$-dimensional
balanced unicast and the $n 2^n$-dimensional balanced multicast capacity
regions of a wireless network with $n$ nodes placed uniformly at random in a
square region of area $n$ and communicating over Gaussian fading channels. We
identify this scaling of both the balanced unicast and multicast capacity
regions in terms of $\Theta(n)$, out of $2^n$ total possible, cuts. These cuts
only depend on the geometry of the locations of the source nodes and their
destination nodes and the traffic demands between them, and thus can be readily
evaluated. Our results are constructive and provide optimal (in the scaling
sense) communication schemes.
|
0809.1348
|
MBBP for improved iterative channel decoding in 802.16e WiMAX systems
|
cs.IT math.IT
|
We propose the application of multiple-bases belief-propagation, an optimized
iterative decoding method, to a set of rate-1/2 LDPC codes from the IEEE
802.16e WiMAX standard. The presented approach allows for improved decoding
performance when signaling over the AWGN channel. As all required operations
for this method can be run in parallel, the decoding delay of this method and
standard belief-propagation decoding are equal. The obtained results are
compared to the performance of LDPC codes optimized with the progressive
edge-growth algorithm and to bounds from information theory. It is shown that
the discussed method narrows the gap to the well-known random coding bound by
about 20 percent.
|
0809.1366
|
Network Coding Security: Attacks and Countermeasures
|
cs.CR cs.IT cs.NI math.IT
|
By allowing intermediate nodes to perform non-trivial operations on packets,
such as mixing data from multiple streams, network coding breaks with the
ruling store and forward networking paradigm and opens a myriad of challenging
security questions. Following a brief overview of emerging network coding
protocols, we provide a taxonomy of their security vulnerabilities, which
highlights the differences between attack scenarios in which network coding is
particularly vulnerable and other relevant cases in which the intrinsic
properties of network coding allow for stronger and more efficient security
solutions than classical routing. Furthermore, we give practical examples where
network coding can be combined with classical cryptography both for secure
communication and secret key distribution. Throughout the paper we identify a
number of research challenges deemed relevant towards the applicability of
secure network coding in practical networks.
|
0809.1379
|
A Max-Flow Min-Cut Theorem with Applications in Small Worlds and Dual
Radio Networks
|
cs.IT cs.DM math.IT
|
Intrigued by the capacity of random networks, we start by proving a max-flow
min-cut theorem that is applicable to any random graph obeying a suitably
defined independence-in-cut property. We then show that this property is
satisfied by relevant classes, including small world topologies, which are
pervasive in both man-made and natural networks, and wireless networks of dual
devices, which exploit multiple radio interfaces to enhance the connectivity of
the network. In both cases, we are able to apply our theorem and derive
max-flow min-cut bounds for network information flow.
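The classical max-flow min-cut duality the theorem builds on can be checked numerically on a small deterministic graph (an illustrative sketch with an invented graph, unrelated to the random-network setting of the paper):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly push flow along shortest augmenting
    # paths in the residual graph until none remains.
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path s -> t in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return total  # no augmenting path: flow is maximal
        # Bottleneck residual capacity along the path found.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # Push the bottleneck amount along the path.
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Nodes: 0 = source, 3 = sink; cap[u][v] is the directed capacity.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5, matching the min cut {0} vs {1, 2, 3}
```

Here the cut separating the source from the rest has capacity 3 + 2 = 5, which the computed maximum flow attains, as the theorem guarantees.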
|
0809.1398
|
Stability of Maximum likelihood based clustering methods: exploring the
backbone of classifications (Who is keeping you in that community?)
|
physics.soc-ph cond-mat.stat-mech cs.IT math.IT physics.comp-ph physics.data-an
|
Components of complex systems are often classified according to the way they
interact with each other. In graph theory such groups are known as clusters or
communities. Many different techniques have been recently proposed to detect
them, some of which involve inference methods using either Bayesian or Maximum
Likelihood approaches. In this article, we study a statistical model designed
for detecting clusters based on connection similarity. The basic assumption of
the model is that the graph was generated by a certain grouping of the nodes
and an Expectation Maximization algorithm is employed to infer that grouping.
We show that the method admits further development to yield a stability
analysis of the groupings that quantifies the extent to which each node
influences its neighbors' group membership. Our approach naturally allows for
the identification of the key elements responsible for the grouping and their
resilience to changes in the network. Given the generality of the assumptions
underlying the statistical model, such nodes are likely to play special roles
in the original system. We illustrate this point by analyzing several empirical
networks for which further information about the properties of the nodes is
available. The search and identification of stabilizing nodes thus constitutes
a novel technique to characterize the relevance of nodes in complex networks.
|
0809.1493
|
Exploring Large Feature Spaces with Hierarchical Multiple Kernel
Learning
|
cs.LG stat.ML
|
For supervised and unsupervised learning, positive definite kernels allow the
use of large and potentially infinite-dimensional feature spaces with a
computational cost that only depends on the number of observations. This is
usually done through the penalization of predictor functions by Euclidean or
Hilbertian norms. In this paper, we explore penalizing by sparsity-inducing
norms such as the l1-norm or the block l1-norm. We assume that the kernel
decomposes into a large sum of individual basis kernels which can be embedded
in a directed acyclic graph; we show that it is then possible to perform kernel
selection through a hierarchical multiple kernel learning framework, in
polynomial time in the number of selected kernels. This framework is naturally
applied to nonlinear variable selection; our extensive simulations on
synthetic datasets and datasets from the UCI repository show that efficiently
exploring the large feature space through sparsity-inducing norms leads to
state-of-the-art predictive performance.
|
0809.1522
|
On the permutation capacity of digraphs
|
math.CO cs.IT math.IT
|
We extend several results of the third author and C. Malvenuto on
graph-different permutations to the case of directed graphs and introduce new
open problems. Permutation capacity is a natural extension of Sperner capacity
from finite directed graphs to infinite digraphs. Our subject is combinatorial
in nature, but can be equally regarded as zero-error information theory.
|
0809.1551
|
Consistent Query Answers in the Presence of Universal Constraints
|
cs.DB
|
The framework of consistent query answers and repairs has been introduced to
alleviate the impact of inconsistent data on the answers to a query. A repair
is a minimally different consistent instance and an answer is consistent if it
is present in every repair. In this article we study the complexity of
consistent query answers and repair checking in the presence of universal
constraints.
We propose an extended version of the conflict hypergraph which allows us to
capture all repairs w.r.t. a set of universal constraints. We show that repair
checking is in PTIME for the class of full tuple-generating dependencies and
denial constraints, and we present a polynomial repair algorithm. This
algorithm is sound, i.e., it always produces a repair, and complete, i.e.,
every repair can be constructed. Next, we present a polynomial-time algorithm
computing consistent answers to ground quantifier-free queries in the presence
of denial constraints, join dependencies, and acyclic full tuple-generating
dependencies. Finally, we show that extending the class of constraints leads to
intractability. For arbitrary full tuple-generating dependencies consistent
query answering becomes coNP-complete. For arbitrary universal constraints
consistent query answering is \Pi_2^p-complete and repair checking
coNP-complete.
|
0809.1590
|
When is there a representer theorem? Vector versus matrix regularizers
|
cs.LG
|
We consider a general class of regularization methods which learn a vector of
parameters on the basis of linear measurements. It is well known that if the
regularizer is a nondecreasing function of the inner product then the learned
vector is a linear combination of the input data. This result, known as the
{\em representer theorem}, is at the basis of kernel-based methods in machine
learning. In this paper, we prove the necessity of the above condition, thereby
completing the characterization of kernel methods based on regularization. We
further extend our analysis to regularization methods which learn a matrix, a
problem which is motivated by the application to multi-task learning. In this
context, we study a more general representer theorem, which holds for a larger
class of regularizers. We provide a necessary and sufficient condition for
this class of matrix regularizers and illustrate it with some concrete
examples of practical importance. Our analysis uses basic principles from
matrix theory, especially the useful notion of a matrix nondecreasing function.
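The sufficiency direction recalled above can be stated compactly (standard textbook form, in our own notation): if the regularizer is $\Omega(w) = h(\|w\|)$ with $h$ nondecreasing, then the problem

$$\min_{w} \; \sum_{i=1}^m L(\langle w, x_i \rangle, y_i) + h(\|w\|)$$

admits a minimizer of the form $w = \sum_{i=1}^m c_i x_i$. Indeed, writing $w = w_\parallel + w_\perp$ with $w_\perp$ orthogonal to $\mathrm{span}\{x_1, \dots, x_m\}$, each measurement $\langle w, x_i \rangle = \langle w_\parallel, x_i \rangle$ is unchanged by dropping $w_\perp$, while $\|w_\parallel\| \le \|w\|$ and hence $h(\|w_\parallel\|) \le h(\|w\|)$.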
|
0809.1593
|
Constructing Perfect Steganographic Systems
|
cs.CR cs.IT math.IT
|
We propose steganographic systems for the case when covertexts (containers)
are generated by a finite-memory source with possibly unknown statistics. The
probability distributions of covertexts with and without hidden information are
the same; this means that the proposed stegosystems are perfectly secure, i.e.
an observer cannot determine whether hidden information is being transmitted.
The speed of transmission of hidden information can be made arbitrarily close
to the theoretical limit - the Shannon entropy of the source of covertexts. An
interesting feature of the suggested stegosystems is that they do not require
any (secret or public) key.
At the same time, we outline some principled computational limitations on
steganography. We show that there exist sources of covertexts such that any
stegosystem with linear (in the length of the covertext) speed of transmission
of hidden text must have exponential Kolmogorov complexity.
This shows, in particular, that some assumptions on the sources of covertext
are necessary.
|
0809.1618
|
ECOLANG - Communications Language for Ecological Simulations Network
|
cs.AI cs.MA
|
This document describes the communication language used in a multiagent
system environment for ecological simulations, based on the EcoDynamo simulator
application linked with several intelligent agents and visualisation
applications, and extends the initial definition of the language. The agents'
actions and perceptions are translated into messages exchanged with the
simulator application and other agents. The concepts and definitions used
follow the BNF notation (Backus et al. 1960) and are inspired by the Coach
Unilang language (Reis and Lau 2002).
|
0809.1686
|
Agent-based Ecological Model Calibration - on the Edge of a New Approach
|
cs.AI cs.MA
|
The purpose of this paper is to present a new approach to ecological model
calibration -- an agent-based software approach. This agent works in three stages: 1- It
builds a matrix that synthesizes the inter-variable relationships; 2- It
analyses the steady-state sensitivity of different variables to different
parameters; 3- It runs the model iteratively and measures model lack of fit,
adequacy and reliability. Stage 3 continues until some convergence criteria are
attained. At each iteration, the agent knows from stages 1 and 2, which
parameters are most likely to produce the desired shift on predicted results.
|
0809.1687
|
Incoherent dictionaries and the statistical restricted isometry property
|
cs.IT cs.DM math.IT math.PR
|
In this article we present a statistical version of the Candes-Tao restricted
isometry property (SRIP for short) which holds in general for any incoherent
dictionary which is a disjoint union of orthonormal bases. In addition, under
appropriate normalization, the eigenvalues of the associated Gram matrix
fluctuate around 1 according to the Wigner semicircle distribution. The result
is then applied to various dictionaries that arise naturally in the setting of
finite harmonic analysis, giving, in particular, a better understanding of a
remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg
dictionary of chirp-like functions.
|
0809.1802
|
Automatic Identification and Data Extraction from 2-Dimensional Plots in
Digital Documents
|
cs.CV
|
Most search engines index the textual content of documents in digital
libraries. However, scholarly articles frequently report important findings in
figures for visual impact and the contents of these figures are not indexed.
These contents are often invaluable to researchers in various fields for
direct comparison with their own work. Therefore, searching for
figures and extracting figure data are important problems. To the best of our
knowledge, there exists no tool to automatically extract data from figures in
digital documents. If we can extract data from these images automatically and
store them in a database, an end-user can query and combine data from multiple
digital documents simultaneously and efficiently. We propose a framework based
on image analysis and machine learning to extract information from 2-D plot
images and store them in a database. The proposed algorithm identifies a 2-D
plot and extracts the axis labels, legend and the data points from the 2-D
plot. We also segregate overlapping shapes that correspond to different data
points. We demonstrate performance of individual algorithms, using a
combination of generated and real-life images.
|
0809.1900
|
Distributed Detection in Sensor Networks with Limited Range Multi-Modal
Sensors
|
cs.IT math.IT
|
We consider a multi-object detection problem over a sensor network (SNET)
with limited range multi-modal sensors. A limited-range sensing environment
arises in sensing fields prone to signal attenuation and path losses. The
general problem complements the widely considered decentralized detection
problem where all sensors observe the same object. In this paper we develop a
distributed detection approach based on recent development of the false
discovery rate (FDR) and the associated BH test procedure. The BH procedure is
based on rank ordering of scalar test statistics. We first develop scalar test
statistics for multidimensional data to handle multi-modal sensor observations
and establish their optimality in terms of the BH procedure. We then propose a
distributed algorithm in the ideal case of infinite attenuation for
identification of sensors that are in the immediate vicinity of an object. We
demonstrate communication message scalability to large SNETs by showing that
the upper bound on the communication message complexity scales linearly with
the number of sensors that are in the vicinity of objects and is independent of
the total number of sensors in the SNET. This brings forth an important
principle for evaluating the performance of an SNET, namely, the need for
scalability of communications and performance with respect to the number of
objects or events in an SNET irrespective of the network size. We then account
for finite attenuation by modeling sensor observations as corrupted by
uncertain interference arising from distant objects and developing robust
extensions to our idealized distributed scheme. The robustness properties
ensure that both the error performance and communication message complexity
degrade gracefully with interference.
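The BH step-up test at the heart of the approach can be sketched in its generic textbook form (not the paper's distributed variant; the p-values below are invented):

```python
def bh_procedure(p_values, q=0.05):
    # Benjamini-Hochberg step-up test controlling the false discovery
    # rate at level q: sort the p-values, find the largest rank k with
    # p_(k) <= (k / m) * q, and reject hypotheses of ranks 1..k.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k_max = rank
    return sorted(order[:k_max])  # indices of rejected hypotheses

# Sensors near an object yield small p-values; distant ones do not.
p = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6, 0.74, 0.9]
print(bh_procedure(p, q=0.05))  # -> [0, 1]
```

Note the step-up character: a hypothesis can be rejected even if its own p-value exceeds its threshold, as long as some larger rank still crosses the line.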
|
0809.1910
|
Reliable Communications with Asymmetric Codebooks: An Information
Theoretic Analysis of Robust Signal Hashing
|
cs.IT math.IT
|
In this paper, a generalization of the traditional point-to-point
communication setup, named "reliable communications with asymmetric
codebooks", is proposed. Under the assumption of independent identically
distributed (i.i.d) encoder codewords, it is proven that the operational
capacity of the system is equal to the information capacity of the system,
which is given by $\max_{p(x)} I(U;Y)$, where $X, U$ and $Y$ denote the
individual random elements of encoder codewords, decoder codewords and decoder
inputs. The capacity result is derived in the "binary symmetric" case (which is
an analogous formulation of the traditional "binary symmetric channel" for our
case), as a function of the system parameters. A conceptually insightful
inference is made by attributing the difference from the classical Shannon-type
capacity of binary symmetric channel to the {\em gap} due to the codebook
asymmetry.
|
0809.1916
|
Randomized Distributed Configuration Management of Wireless Networks:
Multi-layer Markov Random Fields and Near-Optimality
|
cs.DC cs.AI
|
Distributed configuration management is imperative for wireless
infrastructureless networks where each node adjusts locally its physical and
logical configuration through information exchange with neighbors. Two issues
remain open: optimality and complexity. We
study these issues through modeling, analysis, and randomized distributed
algorithms. Modeling defines the optimality. We first derive a global
probabilistic model for a network configuration which characterizes jointly the
statistical spatial dependence of a physical- and a logical-configuration. We
then show that a local model which approximates the global model is a two-layer
Markov Random Field or a random bond model. The complexity of the local model
is the communication range among nodes. The local model is near-optimal when
the approximation error to the global model is within a given error bound. We
analyze the trade-off between an approximation error and complexity, and derive
sufficient conditions on the near-optimality of the local model. We validate
the model, the analysis, and the randomized distributed algorithms through
simulation.
|
0809.1963
|
Materialized View Selection by Query Clustering in XML Data Warehouses
|
cs.DB
|
XML data warehouses form an interesting basis for decision-support
applications that exploit complex data. However, native XML database management
systems currently offer limited performance and it is necessary to design
strategies to optimize them. In this paper, we propose an automatic strategy
for the selection of XML materialized views that exploits a data mining
technique, more precisely the clustering of the query workload. To validate our
strategy, we implemented an XML warehouse modeled along the XCube
specifications. We executed a workload of XQuery decision-support queries on
this warehouse, with and without using our strategy. Our experimental results
demonstrate its efficiency, even when queries are complex.
|
0809.1965
|
Dynamic index selection in data warehouses
|
cs.DB
|
Analytical queries defined on data warehouses are complex and use several
join operations that are very costly, especially when run on very large data
volumes. To improve response times, data warehouse administrators commonly use
indexing techniques. This task is nevertheless complex and tedious. In this
paper, we present an automatic, dynamic index selection method for data
warehouses that is based on incremental frequent itemset mining from a given
query workload. The main advantage of this approach is that it helps update the
set of selected indexes when workload evolves instead of recreating it from
scratch. Preliminary experimental results illustrate the efficiency of this
approach, both in terms of performance enhancement and overhead.
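The frequent-itemset step can be illustrated with a minimal Apriori-style count over a toy workload of queried attribute sets (our own sketch; the attribute names are invented, and the brute-force enumeration is exponential in query size, acceptable only for illustration):

```python
from itertools import combinations

def frequent_itemsets(workload, min_support):
    # Count every attribute combination appearing in the workload's
    # queries and keep those reaching the support threshold; these
    # are the candidate attribute sets on which to build indexes.
    counts = {}
    for query_attrs in workload:
        for size in range(1, len(query_attrs) + 1):
            for combo in combinations(sorted(query_attrs), size):
                counts[combo] = counts.get(combo, 0) + 1
    return {combo: n for combo, n in counts.items() if n >= min_support}

# Hypothetical workload: each query is the set of attributes it
# filters or joins on.
workload = [
    {"customer_id", "order_date"},
    {"customer_id", "order_date", "amount"},
    {"customer_id", "amount"},
]
frequent = frequent_itemsets(workload, min_support=2)
print(frequent[("customer_id", "order_date")])  # 2: worth indexing together
```

An incremental variant, as the abstract describes, would update these counts as new queries arrive instead of rescanning the whole workload.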
|
0809.1971
|
Knowledge and Metadata Integration for Warehousing Complex Data
|
cs.DB
|
With the ever-growing availability of so-called complex data, especially on
the Web, decision-support systems such as data warehouses must store and
process data that are not only numerical or symbolic. Warehousing and analyzing
such data requires the joint exploitation of metadata and domain-related
knowledge, which must thereby be integrated. In this paper, we survey the types
of knowledge and metadata that are needed for managing complex data, discuss
the issue of knowledge and metadata integration, and propose a CWM-compliant
integration solution that we incorporate into an XML complex data warehousing
framework we previously designed.
|
0809.1981
|
A Join Index for XML Data Warehouses
|
cs.DB
|
XML data warehouses form an interesting basis for decision-support
applications that exploit complex data. However, native-XML database management
systems (DBMSs) currently offer limited performance and it is necessary to
find ways to optimize them. In this paper, we propose a new join index
that is specifically adapted to the multidimensional architecture of XML
warehouses. It eliminates join operations while preserving the information
contained in the original warehouse. A theoretical study and experimental
results demonstrate the efficiency of our join index. They also show that
native XML DBMSs can compete with XML-compatible, relational DBMSs when
warehousing and analyzing XML data.
|
0809.2075
|
Low congestion online routing and an improved mistake bound for online
prediction of graph labeling
|
cs.DS cs.DM cs.LG
|
In this paper, we show a connection between a certain online low-congestion
routing problem and an online prediction of graph labeling. More specifically,
we prove that if there exists a routing scheme that guarantees a congestion of
$\alpha$ on any edge, there exists an online prediction algorithm with mistake
bound $\alpha$ times the cut size, which is the size of the cut induced by the
label partitioning of graph vertices. With the previously known bound of
$O(\log n)$ for $\alpha$ for the routing problem on trees with $n$ vertices, we
obtain an improved prediction algorithm for graphs with high effective
resistance.
In contrast to previous approaches that move the graph problem into problems
in vector space using graph Laplacian and rely on the analysis of the
perceptron algorithm, our proofs are purely combinatorial. Furthermore, our
approach directly generalizes to the case where labels are not binary.
|
0809.2085
|
Clustered Multi-Task Learning: A Convex Formulation
|
cs.LG
|
In multi-task learning several related tasks are considered simultaneously,
with the hope that by an appropriate sharing of information across tasks, each
task may benefit from the others. In the context of learning linear functions
for supervised classification or regression, this can be achieved by including
a priori information about the weight vectors associated with the tasks, and
how they are expected to be related to each other. In this paper, we assume
that tasks are clustered into groups, which are unknown beforehand, and that
tasks within a group have similar weight vectors. We design a new spectral norm
that encodes this a priori assumption, without the prior knowledge of the
partition of tasks into groups, resulting in a new convex optimization
formulation for multi-task learning. We show in simulations on synthetic
examples and on the IEDB MHC-I binding dataset, that our approach outperforms
well-known convex methods for multi-task learning, as well as related
non-convex methods dedicated to the same problem.
|
0809.2136
|
The Potluck Problem
|
cs.GT cs.MA
|
This paper proposes the Potluck Problem as a model for the behavior of
independent producers and consumers under standard economic assumptions, as a
problem of resource allocation in a multi-agent system in which there is no
explicit communication among the agents.
|
0809.2147
|
Investigation on Multiuser Diversity in Spectrum Sharing Based Cognitive
Radio Networks
|
cs.IT math.IT
|
A new form of multiuser diversity, named \emph{multiuser interference
diversity}, is investigated for opportunistic communications in cognitive radio
(CR) networks by exploiting the mutual interference between the CR and the
existing primary radio (PR) links. The multiuser diversity gain and ergodic
throughput are analyzed for different types of CR networks and compared against
those in the conventional networks without the PR link.
|
0809.2148
|
Cognitive Beamforming Made Practical: Effective Interference Channel and
Learning-Throughput Tradeoff
|
cs.IT math.IT
|
This paper studies the transmit strategy for a secondary link or the
so-called cognitive radio (CR) link under opportunistic spectrum sharing with
an existing primary radio (PR) link. It is assumed that the CR transmitter is
equipped with multi-antennas, whereby transmit precoding and power control can
be jointly deployed to balance between avoiding interference at the PR
terminals and optimizing performance of the CR link. This operation is named
cognitive beamforming (CB). Unlike prior studies on CB that assume perfect
knowledge of the channels over which the CR transmitter interferes with the PR
terminals, this paper proposes a practical CB scheme utilizing a new idea of
effective interference channel (EIC), which can be efficiently estimated at the
CR transmitter from its observed PR signals. Somewhat surprisingly, this paper
shows that the learning-based CB scheme with the EIC improves the CR channel
capacity over the conventional scheme even with exact CR-to-PR channel
knowledge, when the PR link is equipped with multi-antennas but only
communicates over a subspace of the total available spatial dimensions.
Moreover, this paper presents algorithms for the CR to estimate the EIC over a
finite learning time. Due to channel estimation errors, the proposed CB scheme
causes leakage interference at the PR terminals, which leads to an interesting
learning-throughput tradeoff phenomenon for the CR, pertinent to its time
allocation between channel learning and data transmission. This paper derives
the optimal channel learning time to maximize the effective throughput of the
CR link, subject to the CR transmit power constraint and the interference power
constraints for the PR terminals.
|
0809.2152
|
Informed Network Coding for Minimum Decoding Delay
|
cs.IT math.IT
|
Network coding is a highly efficient data dissemination mechanism for
wireless networks. Since network coded information can only be recovered after
delivering a sufficient number of coded packets, the resulting decoding delay
can become problematic for delay-sensitive applications such as real-time media
streaming. Motivated by this observation, we consider several algorithms that
minimize the decoding delay and analyze their performance by means of
simulation. The algorithms differ both in the required information about the
state of the neighbors' buffers and in the way this knowledge is used to decide
which packets to combine through coding operations. Our results show that a
greedy algorithm, whose encodings maximize the number of nodes at which a coded
packet is immediately decodable, significantly outperforms existing network
coding protocols.
|
0809.2168
|
Fairness in Combinatorial Auctioning Systems
|
cs.GT cs.MA
|
The Combinatorial Auctioning System (CAS) is a Multi-Agent System widely used
by various government agencies, buyers, and sellers in a market economy to
attain optimized resource allocation. We study another important aspect of
resource allocation in CAS, namely fairness. We present two important notions
of fairness in CAS, extended fairness and basic fairness. We give an algorithm
that incorporates a metric to ensure fairness in a CAS that uses the
Vickrey-Clarke-Groves (VCG) mechanism, and uses an algorithm of Sandholm to
achieve optimality.
Mathematical formulations are given to represent measures of extended fairness
and basic fairness.
|
0809.2226
|
Relay vs. User Cooperation in Time-Duplexed Multiaccess Networks
|
cs.IT math.IT
|
The performance of user-cooperation in a multi-access network is compared to
that of using a wireless relay. Using the total transmit and processing power
consumed at all nodes as a cost metric, the outage probabilities achieved by
dynamic decode-and-forward (DDF) and amplify-and-forward (AF) are compared for
the two networks. A geometry-inclusive high signal-to-noise ratio (SNR) outage
analysis in conjunction with area-averaged numerical simulations shows that
user and relay cooperation achieve a maximum diversity of K and 2 respectively
for a K-user multiaccess network under both DDF and AF. However, when
accounting for energy costs of processing and communication, relay cooperation
can be more energy efficient than user cooperation, i.e., relay cooperation
achieves coding (SNR) gains, particularly in the low SNR regime, that override
the diversity advantage of user cooperation.
|
0809.2315
|
On the Construction of Skew Quasi-Cyclic Codes
|
cs.IT cs.DM math.IT math.RA
|
In this paper we study a special type of quasi-cyclic (QC) codes called skew
QC codes. This set of codes is constructed using a non-commutative ring called
the skew polynomial ring $F[x;\theta]$. After a brief description of the skew
polynomial ring $F[x;\theta]$, it is shown that skew QC codes are left
submodules of the ring $R_{s}^{l}=(F[x;\theta ]/(x^{s}-1))^{l}.$ The notions of
generator and parity-check polynomials are given. We also introduce the notion
of similar polynomials in the ring $F[x;\theta ]$ and show that parity-check
polynomials for skew QC codes are unique up to similarity. Our search results
lead to the construction of several new codes with Hamming distances exceeding
the Hamming distances of the previously best known linear codes with comparable
parameters.
|
0809.2350
|
Random Linear Network Coding For Time Division Duplexing: When To Stop
Talking And Start Listening
|
cs.IT math.IT
|
A new random linear network coding scheme for reliable communications for
time division duplexing channels is proposed. The setup assumes a packet
erasure channel and that nodes cannot transmit and receive information
simultaneously. The sender transmits coded data packets back-to-back before
stopping to wait for the receiver to acknowledge (ACK) the number of degrees of
freedom, if any, that are required to correctly decode the information. We
provide an analysis of this problem to show that there is an optimal number of
coded data packets, in terms of mean completion time, to be sent before
stopping to listen. This number depends on the latency, probabilities of packet
erasure and ACK erasure, and the number of degrees of freedom that the receiver
requires to decode the data. This scheme is optimal in terms of the mean time
to complete the transmission of a fixed number of data packets. We show that
its performance is very close to that of a full duplex system, while
transmitting a different number of coded packets can cause large degradation in
performance, especially if latency is high. Also, we study the throughput
performance of our scheme and compare it to existing half-duplex Go-back-N and
Selective Repeat ARQ schemes. Numerical results, obtained for different
latencies, show that our scheme has similar performance to the Selective Repeat
in most cases and considerable performance gain when latency and packet error
probability are high.
|
0809.2421
|
Electricity Demand and Energy Consumption Management System
|
cs.AI cs.CE
|
This project describes an electricity demand and energy consumption
management system and its application to the Southern Peru smelter. It is
composed of an hourly demand-forecasting module and of a simulation component
for the plant electrical system. The first module was built using dynamic
neural networks with the backpropagation training algorithm; it is used to
predict the electric power demanded every hour, with an error below 1%. This
information allows peak energy demands to be managed efficiently before they
happen, by shifting electric load to other hours or improving the equipment
that increases the demand. The simulation module is based on advanced
estimation techniques, such as parametric estimation, neural network modeling,
and statistical regression, together with previously developed models, and
simulates the electrical behavior of the smelter plant. These modules
facilitate proper planning of electricity demand and consumption, because they
reveal the behavior of the hourly demand and the consumption patterns of the
plant, including the bill components, as well as energy deficiencies and
opportunities for improvement, based on the analysis of information about
equipment, processes, production plans, and maintenance programs. Finally, the
results of its application at the Southern Peru smelter are presented.
|
0809.2446
|
High-Rate Space-Time Coded Large MIMO Systems: Low-Complexity Detection
and Channel Estimation
|
cs.IT math.IT
|
In this paper, we present a low-complexity algorithm for detection in
high-rate, non-orthogonal space-time block coded (STBC) large-MIMO systems that
achieve high spectral efficiencies of the order of tens of bps/Hz. We also
present a training-based iterative detection/channel estimation scheme for such
large STBC MIMO systems. Our simulation results show that excellent bit error
rate and nearness-to-capacity performance are achieved by the proposed
multistage likelihood ascent search (M-LAS) detector in conjunction with the
proposed iterative detection/channel estimation scheme at low complexities. The
fact that we could show such good results for large STBCs like 16x16 and 32x32
STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in
excess of 20 bps/Hz (even after accounting for the overheads meant for pilot
based training for channel estimation and turbo coding) establishes the
effectiveness of the proposed detector and channel estimator. We decode perfect
codes of large dimensions using the proposed detector. With the feasibility of
such a low-complexity detection/channel estimation scheme, large-MIMO systems
with tens of antennas operating at several tens of bps/Hz spectral efficiencies
can become practical, enabling interesting high data rate wireless
applications.
|
0809.2508
|
A fast approach for overcomplete sparse decomposition based on smoothed
L0 norm
|
cs.IT math.IT
|
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.
|
0809.2532
|
Multidimensional Visualization of Oracle Performance Using Barry007
|
cs.PF cs.DB
|
Most generic performance tools display only system-level performance data
using 2-dimensional plots or diagrams and this limits the informational detail
that can be displayed. Moreover, a modern relational database system, like
Oracle, can concurrently serve thousands of client processes with different
workload characteristics, so that generic performance-data displays inevitably
hide important information. Drawing on our previous work, this paper
demonstrates the application of Barry007 multidimensional visualization to the
analysis of Oracle end-user, session-level, performance data, showing both
collective trends and individual performance anomalies.
|
0809.2546
|
Depth as Randomness Deficiency
|
cs.CC cs.IT math.IT
|
Depth of an object concerns a tradeoff between computation time and excess of
program length over the shortest program length required to obtain the object.
It gives an unconditional lower bound on the computation time from a given
program in absence of auxiliary information. Variants known as logical depth
and computational depth are expressed in Kolmogorov complexity theory.
We derive a quantitative relation between logical depth and computational
depth and unify the different depth notions by relating them to A. Kolmogorov
and L. Levin's fruitful notion of randomness deficiency. Subsequently, we
revisit the computational depth of infinite strings, introducing the notion of
super deep sequences, and relate it to other approaches.
|
0809.2553
|
Normalized Information Distance
|
cs.IR cs.AI
|
The normalized information distance is a universal distance measure for
objects of all kinds. It is based on Kolmogorov complexity and thus
uncomputable, but there are ways to utilize it. First, compression algorithms
can be used to approximate the Kolmogorov complexity if the objects have a
string representation. Second, for names and abstract concepts, page count
statistics from the World Wide Web can be used. These practical realizations of
the normalized information distance can then be applied to machine learning
tasks, especially clustering, to perform feature-free and parameter-free data
mining. This chapter discusses the theoretical foundations of the normalized
information distance and both practical realizations. It presents numerous
examples of successful real-world applications based on these distance
measures, ranging from bioinformatics to music clustering to machine
translation.
|
0809.2639
|
Code diversity in multiple antenna wireless communication
|
cs.IT math.IT
|
The standard approach to the design of individual space-time codes is based
on optimizing diversity and coding gains. This geometric approach leads to
remarkable examples, such as perfect space-time block codes, for which the
complexity of Maximum Likelihood (ML) decoding is considerable. Code diversity
is an alternative and complementary approach where a small number of feedback
bits are used to select from a family of space-time codes. Different codes lead
to different induced channels at the receiver, where Channel State Information
(CSI) is used to instruct the transmitter how to choose the code. This method
of feedback provides gains associated with beamforming while minimizing the
number of feedback bits. It complements the standard approach to code design by
taking advantage of different (possibly equivalent) realizations of a
particular code design. Feedback can be combined with sub-optimal low
complexity decoding of the component codes to match ML decoding performance of
any individual code in the family. It can also be combined with ML decoding of
the component codes to improve performance beyond ML decoding performance of
any individual code. One method of implementing code diversity is the use of
feedback to adapt the phase of a transmitted signal, as shown for the 4 by 4
Quasi-Orthogonal Space-Time Block Code (QOSTBC) and multi-user detection using
the Alamouti code. Code diversity implemented by selecting from equivalent
variants is used to improve ML decoding performance of the Golden code. This
paper introduces a family of full-rate circulant codes which can be linearly
decoded by Fourier decomposition of circulant matrices within the code
diversity framework. A 3 by 3 circulant code is shown to outperform the
Alamouti code at the same transmission rate.
|
0809.2680
|
Mathematical Tool of Discrete Dynamic Modeling of Complex Systems in
Control Loop
|
cs.MA cs.CE
|
In this paper we present a method of discrete modeling and analysis of
multi-level dynamics of complex large-scale hierarchical dynamic systems
subject to an external dynamic control mechanism. In the model, each state
describes parallel dynamics and simultaneous trends of changes in system
parameters. The
essence of the approach is in analysis of system state dynamics while it is in
the control loop.
|
0809.2686
|
An MAS-Based ETL Approach for Complex Data
|
cs.DB
|
In a data warehousing process, the phase of data integration is crucial. Many
methods for data integration have been published in the literature. However,
with the development of the Internet, the availability of various types of data
(images, texts, sounds, videos, databases...) has increased, and structuring
such data is a difficult task. We name these data, which may be structured or
unstructured, "complex data". In this paper, we propose a new approach for
complex data integration, based on a Multi-Agent System (MAS), in association
with a data warehousing approach. Our objective is to take advantage of the MAS
to perform the integration phase for complex data. We indeed consider the
different tasks of the data integration process as services offered by agents.
To validate this approach, we have actually developed an MAS for complex data
integration.
|
0809.2687
|
Frequent itemsets mining for database auto-administration
|
cs.DB
|
With the wide development of databases in general and data warehouses in
particular, it is important to reduce the tasks that a database administrator
must perform manually. The aim of auto-administrative systems is to
administrate and adapt themselves automatically without loss (or even with a
gain) in performance. The idea of using data mining techniques to extract
useful knowledge for administration from the data themselves has existed for
some years. However, little research has been carried out. This idea
nevertheless
remains a very promising approach, notably in the field of data warehousing,
where queries are very heterogeneous and cannot be interpreted easily. The aim
of this study is to search for a way of extracting useful knowledge from stored
data themselves to automatically apply performance optimization techniques, and
more particularly indexing techniques. We have designed a tool that extracts
frequent itemsets from a given workload to compute an index configuration that
helps optimize data access time. The experiments we performed showed that the
index configurations generated by our tool allowed performance gains of 15% to
25% on a test database and a test data warehouse.
|
0809.2688
|
A Complex Data Warehouse for Personalized, Anticipative Medicine
|
cs.DB
|
With the growing use of new technologies, healthcare is nowadays undergoing
significant changes. Information-based medicine has to exploit medical
decision-support systems and requires the analysis of various, heterogeneous
data, such as patient records, medical images, biological analysis results,
etc. In this paper, we present the design of a complex data warehouse relating
to high-level athletes. It is original in two ways. First, it is aimed
at storing complex medical data. Second, it is designed to support innovative
and quite different kinds of analyses: (1) personalized and anticipative
medicine (as opposed to curative medicine) for well-identified patients; (2)
broad-band statistical studies over a given population of patients.
Furthermore, the system includes data relating to several medical fields. It is
also designed to be evolutionary to take into account future advances in
medical research.
|
0809.2691
|
Expressing OLAP operators with the TAX XML algebra
|
cs.DB
|
With the rise of XML as a standard for representing business data, XML data
warehouses appear as suitable solutions for Web-based decision-support
applications. In this context, it is necessary to allow OLAP analyses over XML
data cubes (XOLAP). Thus, XQuery extensions are needed. To help define a formal
framework and allow much-needed performance optimizations on analytical queries
expressed in XQuery, having an algebra at one's disposal is desirable. However,
XOLAP approaches and algebras from the literature still largely rely on the
relational model and/or only feature a small number of OLAP operators. In
contrast, we propose in this paper to express a broad set of OLAP operators
with the TAX XML algebra.
|
0809.2754
|
Algorithmic information theory
|
cs.IT cs.LG math.IT math.ST stat.TH
|
We introduce algorithmic information theory, also known as the theory of
Kolmogorov complexity. We explain the main concepts of this quantitative
approach to defining `information'. We discuss the extent to which Kolmogorov's
and Shannon's information theory have a common purpose, and where they are
fundamentally different. We indicate how recent developments within the theory
allow one to formally distinguish between `structural' (meaningful) and
`random' information as measured by the Kolmogorov structure function, which
leads to a mathematical formalization of Occam's razor in inductive inference.
We end by discussing some of the philosophical implications of the theory.
|
0809.2768
|
Hubs and Clusters in the Evolving U. S. Internal Migration Network
|
physics.soc-ph cs.SI physics.data-an stat.AP
|
Most nations of the world periodically publish N x N origin-destination
tables, recording the number of people who lived in geographic subdivision i at
time t and j at t+1. We have developed and widely applied to such national
tables and other analogous (weighted, directed) socioeconomic networks, a
two-stage--double-standardization and (strong component) hierarchical
clustering--procedure. Previous applications of this methodology and related
analytical issues are discussed. Its use is illustrated in a large-scale study,
employing recorded United States internal migration flows between the 3,000+
county-level units of the nation for the periods 1965-1970 and 1995-2000.
Prominent, important features--such as ``cosmopolitan hubs'' and ``functional
regions''--are extracted from master dendrograms. The extent to which such
characteristics have varied over the intervening thirty years is evaluated.
|
0809.2792
|
Predicting Abnormal Returns From News Using Text Classification
|
cs.LG cs.AI
|
We show how text from news articles can be used to predict intraday price
movements of financial assets using support vector machines. Multiple kernel
learning is used to combine equity returns with text as predictive features to
increase classification performance and we develop an analytic center cutting
plane method to solve the kernel learning problem efficiently. We observe that
while the direction of returns is not predictable using either text or returns,
their size is, with text features producing significantly better performance
than historical returns alone.
|
0809.2835
|
Fundamental Constraints on Multicast Capacity Regions
|
cs.IT math.IT
|
Much of the existing work on the broadcast channel focuses only on the
sending of private messages. In this work we examine the scenario where the
sender also wishes to transmit common messages to subsets of receivers. For an
L-user broadcast channel there are 2^L - 1 subsets of receivers and
correspondingly 2^L - 1 independent messages. The set of achievable rates for
this channel is a (2^L - 1)-dimensional region. There are fundamental
constraints
on the geometry of this region. For example, observe that if the transmitter is
able to simultaneously send L rate-one private messages, error-free to all
receivers, then by sending the same information in each message, it must be
able to send a single rate-one common message, error-free to all receivers.
This swapping of private and common messages illustrates that for any broadcast
channel, the inclusion of a point R* in the achievable rate region implies the
achievability of a set of other points that are not merely component-wise less
than R*. We formally define this set and characterize it for L = 2 and L = 3.
Whereas for L = 2 all the points in the set arise only from operations relating
to swapping private and common messages, for L = 3 a form of network coding is
required.
|
0809.2840
|
Spectrum Sharing between Wireless Networks
|
cs.IT math.IT
|
We consider the problem of two wireless networks operating on the same
(presumably unlicensed) frequency band. Pairs within a given network cooperate
to schedule transmissions, but between networks there is competition for
spectrum. To make the problem tractable, we assume transmissions are scheduled
according to a random access protocol where each network chooses an access
probability for its users. A game between the two networks is defined. We
characterize the Nash Equilibrium behavior of the system. Three regimes are
identified; one in which both networks simultaneously schedule all
transmissions; one in which the denser network schedules all transmissions and
the sparser only schedules a fraction; and one in which both networks schedule
only a fraction of their transmissions. The regime of operation depends on the
pathloss exponent $\alpha$, the latter regime being desirable, but attainable
only for $\alpha>4$. This suggests that in certain environments, rival wireless
networks may end up naturally cooperating. To substantiate our analytical
results, we simulate a system where networks iteratively optimize their access
probabilities in a greedy manner. We also discuss a distributed scheduling
protocol that employs carrier sensing, and demonstrate via simulations, that
again a near cooperative equilibrium exists for sufficiently large $\alpha$.
|
0809.2931
|
An Efficient Algorithm for Cooperative Spectrum Sensing in Cognitive
Radio Networks
|
cs.IT math.IT
|
We consider the problem of Spectrum Sensing in Cognitive Radio Systems. We
have developed a distributed algorithm that the Secondary users can run to
sense the channel cooperatively. It is based on sequential detection algorithms
which optimally use the past observations. We use the algorithm on secondary
users with energy detectors, although it can also be used with matched filters
and other spectrum sensing algorithms. The algorithm provides very low
detection delays and also consumes little energy. Furthermore it causes low
interference to the primary users. We compare this algorithm to several
recently proposed algorithms and show that it detects changes in spectrum
faster than these algorithms and uses significantly less energy.
|
0809.2965
|
On Time-Bounded Incompressibility of Compressible Strings and Sequences
|
cs.CC cs.IT math.IT
|
For every total recursive time bound $t$, a constant fraction of all
compressible (low Kolmogorov complexity) strings is $t$-bounded incompressible
(high time-bounded Kolmogorov complexity); there are uncountably many infinite
sequences of which every initial segment of length $n$ is compressible to $\log
n$ yet $t$-bounded incompressible below ${1/4}n - \log n$; and there are
countably infinitely many recursive infinite sequences of which every initial
segment is similarly $t$-bounded incompressible. These results are related to,
but different from, Barzdins's lemma.
|
0809.2968
|
Bounds on Covering Codes with the Rank Metric
|
cs.IT math.IT
|
In this paper, we investigate geometrical properties of the rank metric space
and covering properties of rank metric codes. We first establish an analytical
expression for the intersection of two balls with rank radii, and then derive
an upper bound on the volume of the union of multiple balls with rank radii.
Using these geometrical properties, we derive both upper and lower bounds on
the minimum cardinality of a code with a given rank covering radius. The
geometrical properties and bounds proposed in this paper are significant to the
design, decoding, and performance analysis of rank metric codes.
|
0809.3010
|
Improved Upper Bounds for the Information Rates of the Secret Sharing
Schemes Induced by the Vamos Matroid
|
cs.CR cs.IT math.IT
|
An access structure specifying the qualified sets of a secret sharing scheme
must have information rate less than or equal to one. The Vamos matroid induces
two non-isomorphic access structures V1 and V6, which were shown by Marti-Farre
and Padro to have information rates of at least 3/4. Beimel, Livne, and Padro
showed that the information rates of V1 and V6 are bounded above by 10/11 and
9/10 respectively. Here we improve those upper bounds to 19/21 for V1 and 17/19
for V6.
|
0809.3023
|
Graph-based Logic and Sketches
|
math.CT cs.IT math.IT math.LO
|
We present the basic ideas of forms (a generalization of Ehresmann's
sketches) and their theories and models, more explicitly than in previous
expositions. Forms provide the ability to specify mathematical structures and
data types in any appropriate category, including many types of structures
(e.g. function spaces) that cannot be specified by sketches. We also outline a
new kind of formal logic (based on graphs instead of strings of symbols) that
gives an intrinsically categorial definition of assertion and proof for each
type of form. This formal logic is new to this monograph. The relationship
between multisorted equational logic and finite product theories is worked out
in detail.
|
0809.3027
|
Finding links and initiators: a graph reconstruction problem
|
cs.AI cs.DB physics.soc-ph
|
Consider a 0-1 observation matrix M, where rows correspond to entities and
columns correspond to signals; a value of 1 (or 0) in cell (i,j) of M indicates
that signal j has been observed (or not observed) in entity i. Given such a
matrix we study the problem of inferring the underlying directed links between
entities (rows) and finding which entries in the matrix are initiators.
We formally define this problem and propose an MCMC framework for estimating
the links and the initiators given the matrix of observations M. We also show
how this framework can be extended to incorporate a temporal aspect; instead of
considering a single observation matrix M we consider a sequence of observation
matrices M1,..., Mt over time.
We show the connection between our problem and several problems studied in
the field of social-network analysis. We apply our method to paleontological
and ecological data and show that our algorithms work well in practice and give
reasonable results.
|
0809.3035
|
Interference Alignment for Line-of-Sight Channels
|
cs.IT math.IT
|
The fully connected K-user interference channel is studied in a multipath
environment with bandwidth W. We show that when each link consists of D
physical paths, the total spectral efficiency can grow {\it linearly} with K.
This result holds not merely in the limit of large transmit power P, but for
any fixed P, and is therefore a stronger characterization than degrees of
freedom. It is achieved via a form of interference alignment in the time
domain. A caveat of this result is that W must grow with K, a phenomenon we
refer to as {\it bandwidth scaling}. Our insight comes from examining channels
with single path links (D=1), which we refer to as line-of-sight (LOS) links.
For such channels we build a time-indexed interference graph and associate the
communication problem with finding its maximal independent set. This graph has
a stationarity property that we exploit to solve the problem efficiently via
dynamic programming. Additionally, the interference graph enables us to
demonstrate the necessity of bandwidth scaling for any scheme operating over
LOS interference channels. Bandwidth scaling is then shown to also be a
necessary ingredient for interference alignment in the K-user interference
channel.
|
0809.3044
|
Kinetostatic Performance of a Planar Parallel Mechanism with Variable
Actuation
|
cs.RO
|
This paper deals with a new planar parallel mechanism with variable actuation
and its kinetostatic performance. A drawback of parallel mechanisms is the
non-homogeneity of kinetostatic performance within their workspace. The common
approach to solving this problem is the introduction of actuation redundancy,
which involves force-control algorithms. Another approach, highlighted in this
paper, is to select the actuated joint in each limb with regard to the pose of
the end-effector. First, the architecture of the mechanism and two kinetostatic
performance indices are described. Then, the actuating modes of the mechanism
are compared.
|
0809.3083
|
Supervised Dictionary Learning
|
cs.CV
|
It is now well established that sparse signal models are well suited to
restoration tasks and can effectively be learned from audio, image, and video
data. Recent research has been aimed at learning discriminative sparse models
instead of purely reconstructive ones. This paper proposes a new step in that
direction, with a novel sparse representation for signals belonging to
different classes in terms of a shared dictionary and multiple class-decision
functions. The linear variant of the proposed model admits a simple
probabilistic interpretation, while its most general variant admits an
interpretation in terms of kernels. An optimization framework for learning all
the components of the proposed model is presented, along with experimental
results on standard handwritten digit and texture classification tasks.
|
0809.3140
|
Monadic Datalog over Finite Structures with Bounded Treewidth
|
cs.DB cs.CC cs.LO
|
Bounded treewidth and Monadic Second Order (MSO) logic have proved to be key
concepts in establishing fixed-parameter tractability results. Indeed, by
Courcelle's Theorem we know: Any property of finite structures, which is
expressible by an MSO sentence, can be decided in linear time (data complexity)
if the structures have bounded treewidth.
In principle, Courcelle's Theorem can be applied directly to construct
concrete algorithms by transforming the MSO evaluation problem into a tree
language recognition problem. The latter can then be solved via a finite tree
automaton (FTA). However, this approach has turned out to be problematical,
since even relatively simple MSO formulae may lead to a ``state explosion'' of
the FTA.
In this work we propose monadic datalog (i.e., datalog where all intensional
predicate symbols are unary) as an alternative method to tackle this class of
fixed-parameter tractable problems. We show that if some property of finite
structures is expressible in MSO then this property can also be expressed by
means of a monadic datalog program over the structure plus the tree
decomposition.
Moreover, we show that the resulting fragment of datalog can be evaluated in
linear time (both w.r.t. the program size and w.r.t. the data size). This new
approach is put to work by devising new algorithms for the 3-Colorability
problem of graphs and for the PRIMALITY problem of relational schemas (i.e.,
testing if some attribute in a relational schema is part of a key). We also
report on experimental results with a prototype implementation.
|
0809.3159
|
A Geometrical Description of the SINR Region of the Gaussian
Interference Channel: the two and three-user case
|
cs.IT math.IT
|
This paper addresses the problem of computing the achievable rates for two
(and three) users sharing a same frequency band without coordination and thus
interfering with each other. It is thus primarily related to the field of
cognitive radio studies as we look for the achievable increase in the spectrum
use efficiency. It is also strongly related to the long standing problem of the
capacity region of a Gaussian interference channel (GIC) because of the
assumption of no user coordination (and the underlying assumption that all
signals and interferences are Gaussian). We give a geometrical description of
the SINR region for the two-user and three-user channels. This geometric
approach provides a closed-form expression of the capacity region of the
two-user interference channel and an insightful interpretation of the known
optimal power
allocation scheme.
|
0809.3170
|
A New Framework of Multistage Hypothesis Tests
|
math.ST cs.LG math.PR stat.ME stat.TH
|
In this paper, we have established a general framework of multistage
hypothesis tests which applies to arbitrarily many mutually exclusive and
exhaustive composite hypotheses. Within the new framework, we have constructed
specific multistage tests which rigorously control the risk of committing
decision errors and are more efficient than previous tests in terms of average
sample number and the number of sampling operations. Without truncation, the
sample numbers of our testing plans are absolutely bounded.
|
0809.3179
|
Kinematic and Dynamic Analyses of the Orthoglide 5-axis
|
cs.RO
|
This paper deals with the kinematic and dynamic analyses of the Orthoglide
5-axis, a five-degree-of-freedom manipulator. It is derived from two
manipulators: (i) the Orthoglide 3-axis, a three-dof translational manipulator,
and (ii) the Agile Eye, a parallel spherical wrist. First, the kinematic and
dynamic models of the Orthoglide 5-axis are developed. The geometric and
inertial parameters of the manipulator are determined by means of a CAD
software. Then, the required motor performances are evaluated for several test
trajectories. Finally, suitable motors are selected from a catalogue based on
these results.
|
0809.3180
|
Singularity Analysis of Limited-dof Parallel Manipulators using
Grassmann-Cayley Algebra
|
cs.RO
|
This paper characterizes geometrically the singularities of limited DOF
parallel manipulators. The geometric conditions associated with the dependency
of six Pl\"ucker vector of lines (finite and infinite) constituting the rows of
the inverse Jacobian matrix are formulated using Grassmann-Cayley algebra.
Manipulators under consideration do not need to have a passive spherical joint
somewhere in each leg. This study is illustrated with three example robots.
|
0809.3181
|
Framework for Dynamic Evaluation of Muscle Fatigue in Manual Handling
Work
|
cs.RO
|
Muscle fatigue is defined as the point at which the muscle is no longer able
to sustain the required force or work output level. The overexertion of muscle
force and muscle fatigue can induce acute and chronic pain in the human body.
When muscle fatigue accumulates, the resulting functional disability can
manifest as musculoskeletal disorders (MSD). Several posture exposure analysis
methods are useful for rating MSD risks, but they are mainly based on static
postures. Even where fatigue evaluation methods exist, they apply only to
static postures and are not suitable for dynamic working processes. Meanwhile,
some existing muscle fatigue models based on physiological models cannot
easily be used in industrial ergonomic evaluations. Since the external dynamic
load is the most important factor causing muscle fatigue, we propose a new
fatigue model within a framework for evaluating fatigue in dynamic working
processes. In this framework, a virtual reality system generates a virtual
working environment with which the worker can interact via haptic interfaces
and an optical motion capture system. The motion and load information are
collected and further processed to evaluate the overall workload of the
worker, based on dynamic muscle fatigue models and other work evaluation
criteria, and to provide new information characterizing the arduousness of the
task during the design process.
|
0809.3182
|
SINGULAB - A Graphical user Interface for the Singularity Analysis of
Parallel Robots based on Grassmann-Cayley Algebra
|
cs.RO
|
This paper presents SinguLab, a graphical user interface for the singularity
analysis of parallel robots. The algorithm is based on Grassmann-Cayley
algebra. The proposed tool is interactive and introduces the designer to the
singularity analysis performed by this method, showing all the stages of the
procedure and finally presenting the solution algebraically and graphically;
it also allows singularity verification for different robot poses.
|
0809.3187
|
A Control Variate Approach for Improving Efficiency of Ensemble Monte
Carlo
|
cs.CE cond-mat.stat-mech stat.CO
|
In this paper we present a new approach to control variates for improving
computational efficiency of Ensemble Monte Carlo. We present the approach using
simulation of paths of a time-dependent nonlinear stochastic equation. The core
idea is to extract information at one or more nominal model parameters and use
this information to gain estimation efficiency at neighboring parameters. This
idea is the basis of a general strategy, called DataBase Monte Carlo (DBMC),
for improving efficiency of Monte Carlo. In this paper we describe how this
strategy can be implemented using the variance reduction technique of Control
Variates (CV). We show that, once an initial setup cost for extracting
information is incurred, this approach can lead to significant gains in
computational efficiency. The initial setup cost is justified in projects that
require a large number of estimations or in those that are to be performed
under real-time constraints.
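The control-variate mechanism that the DBMC strategy builds on can be shown in a few lines. This is a generic toy sketch (a scalar expectation with a control of known mean), not the paper's stochastic-equation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic toy illustration of a control variate (not the paper's DBMC setup):
# estimate E[exp(Z)] for Z ~ N(0,1), using Z itself (known mean 0) as control.
n = 100_000
z = rng.standard_normal(n)
y = np.exp(z)

# Near-optimal coefficient c* = Cov(Y, Z) / Var(Z), estimated from the sample.
c = np.cov(y, z)[0, 1] / np.var(z)

cv_estimate = np.mean(y - c * (z - 0.0))  # 0.0 is the known control mean
naive_var = np.var(y) / n                 # variance of the plain MC estimator
cv_var = np.var(y - c * z) / n            # variance after the CV correction
```

The corrected estimator keeps the correct mean while its variance shrinks by the squared correlation between the quantity of interest and the control, which is the kind of gain the setup cost buys in the DBMC setting.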
|
0809.3204
|
Extended ASP tableaux and rule redundancy in normal logic programs
|
cs.AI
|
We introduce an extended tableau calculus for answer set programming (ASP).
The proof system is based on the ASP tableaux defined in [Gebser&Schaub, ICLP
2006], with an added extension rule. We investigate the power of Extended ASP
Tableaux both theoretically and empirically. We study the relationship of
Extended ASP Tableaux with the Extended Resolution proof system defined by
Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP
Tableaux by giving a polynomial-length proof for a family of normal logic
programs P_n for which ASP Tableaux has exponential-length minimal proofs with
respect to n. Additionally, Extended ASP Tableaux yields interesting insight
into the effect of program simplification on the lengths of proofs in ASP.
Closely related to Extended ASP Tableaux, we empirically investigate the effect
of redundant rules on the efficiency of ASP solving.
To appear in Theory and Practice of Logic Programming (TPLP).
|
0809.3250
|
Using descriptive mark-up to formalize translation quality assessment
|
cs.CL
|
The paper deals with using descriptive mark-up to emphasize translation
mistakes. The author postulates the necessity to develop a standard and formal
XML-based way of describing translation mistakes. It is considered to be
important for achieving objective translation quality assessment. Marked-up
translations can be used in corpus translation studies; moreover, automatic
translation assessment based on marked-up mistakes is possible. The paper
concludes with setting up guidelines for further activity within the described
field.
|
0809.3273
|
Direct and Reverse Secret-Key Capacities of a Quantum Channel
|
quant-ph cs.CR cs.IT math.IT physics.optics
|
We define the direct and reverse secret-key capacities of a memoryless
quantum channel as the optimal rates that entanglement-based quantum key
distribution protocols can reach by using a single forward classical
communication (direct reconciliation) or a single feedback classical
communication (reverse reconciliation). In particular, the reverse secret-key
capacity can be positive for antidegradable channels, where no forward strategy
is known to be secure. This property is explicitly shown in the continuous
variable framework by considering arbitrary one-mode Gaussian channels.
|
0809.3352
|
Generalized Prediction Intervals for Arbitrary Distributed
High-Dimensional Data
|
cs.CV cs.AI cs.LG
|
This paper generalizes the traditional statistical concept of prediction
intervals for arbitrary probability density functions in high-dimensional
feature spaces by introducing significance level distributions, which provide
interval-independent probabilities for continuous random variables. The
advantage of the transformation of a probability density function into a
significance level distribution is that it enables one-class classification or
outlier detection in a direct manner.
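The transformation can be illustrated with a short Monte Carlo sketch. This follows our reading of the abstract (the significance level of a point x under density f taken as P(f(X) <= f(x)) for X ~ f), not necessarily the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def pdf(points):
    # Standard 2-D Gaussian density, vectorized over rows (assumed example f).
    sq = np.sum(np.atleast_2d(points) ** 2, axis=1)
    return np.exp(-0.5 * sq) / (2.0 * np.pi)

samples = rng.standard_normal((50_000, 2))  # Monte Carlo draws from f
sample_dens = pdf(samples)

def significance_level(x):
    # Fraction of probability mass lying in regions of lower density than x.
    return float(np.mean(sample_dens <= pdf(x)))

inlier_level = significance_level([0.1, 0.0])   # near the mode: close to 1
outlier_level = significance_level([4.0, 4.0])  # deep in the tail: close to 0
```

Thresholding this level (e.g. rejecting points below 0.01) yields a one-class classifier or outlier detector directly, without choosing any particular interval.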
|
0809.3365
|
Algebraic reduction for space-time codes based on quaternion algebras
|
cs.IT math.IT
|
In this paper we introduce a new right preprocessing method for the decoding
of 2x2 algebraic STBCs, called algebraic reduction, which exploits the
multiplicative structure of the code. The principle of the new reduction is to
absorb part of the channel into the code, by approximating the channel matrix
with an element of the maximal order of the algebra. We prove that algebraic
reduction attains the receive diversity when followed by a simple ZF detection.
Simulation results for the Golden Code show that using MMSE-GDFE left
preprocessing, algebraic reduction with simple ZF detection has a loss of only
3 dB with respect to ML decoding.
|
0809.3370
|
Achievability of the Rate $\frac{1}{2}\log(1+E_s)$ in the Discrete-Time
Poisson Channel
|
cs.IT math.IT
|
A simple lower bound to the capacity of the discrete-time Poisson channel
with average energy $E_s$ is derived. The rate $\frac{1}{2}\log(1+E_s)$ is shown
to be the generalized mutual information of a modified minimum-distance decoder
when the input follows a gamma distribution with parameter 1/2 and mean $E_s$.
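The stated bound is easy to evaluate, and the mean of the achieving gamma input can be checked numerically (writing es for the average energy):

```python
import math
import random

def lower_bound_nats(es: float) -> float:
    """The rate (1/2) log(1 + es) from the abstract, in nats per channel use."""
    return 0.5 * math.log1p(es)

# The input achieving the bound is gamma-distributed with shape parameter 1/2
# and mean es, i.e. scale 2*es; check that mean empirically.
es = 4.0
random.seed(0)
samples = [random.gammavariate(0.5, 2 * es) for _ in range(200_000)]
empirical_mean = sum(samples) / len(samples)
```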
|
0809.3384
|
Changing Assembly Modes without Passing Parallel Singularities in
Non-Cuspidal 3-R\underline{P}R Planar Parallel Robots
|
cs.RO
|
This paper demonstrates that any general 3-DOF three-legged planar parallel
robot with extensible legs can change assembly modes without passing through
parallel singularities (configurations where the mobile platform loses its
stiffness). While the results are purely theoretical, this paper questions the
very definition of parallel singularities.
|
0809.3447
|
An Exploratory Study of Calendar Use
|
cs.HC cs.IR
|
In this paper, we report on findings from an ethnographic study of how people
use their calendars for personal information management (PIM). Our participants
were faculty, staff and students who were not required to use or contribute to
any specific calendaring solution, but chose to do so anyway. The study was
conducted in three parts: first, an initial survey provided broad insights into
how calendars were used; second, this was followed up with personal interviews
of a few participants which were transcribed and content-analyzed; and third,
examples of calendar artifacts were collected to inform our analysis. Findings
from our study include the use of multiple reminder alarms, the reliance on
paper calendars even among regular users of electronic calendars, and wide use
of calendars for reporting and life-archival purposes. We conclude the paper
with a discussion of what these imply for designers of interactive calendar
systems and future work in PIM research.
|
0809.3479
|
Fermions and Loops on Graphs. I. Loop Calculus for Determinant
|
cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT hep-th math.IT
|
This paper is the first in the series devoted to evaluation of the partition
function in statistical models on graphs with loops in terms of the
Berezin/fermion integrals. The paper focuses on a representation of the
determinant of a square matrix in terms of a finite series, where each term
corresponds to a loop on the graph. The representation is based on a fermion
version of the Loop Calculus, previously introduced by the authors for
graphical models with finite alphabets. Our construction contains two levels.
First, we represent the determinant in terms of an integral over anti-commuting
Grassmann variables, with some reparametrization/gauge freedom hidden in the
formulation. Second, we show that a special choice of the gauge, called BP
(Bethe-Peierls or Belief Propagation) gauge, yields the desired loop
representation. The set of gauge-fixing BP conditions is equivalent to the
Gaussian BP equations, discussed in the past as efficient (linear scaling)
heuristics for estimating the covariance of a sparse positive matrix.
|
0809.3481
|
Fermions and Loops on Graphs. II. Monomer-Dimer Model as Series of
Determinants
|
cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT hep-th math.IT
|
We continue the discussion of the fermion models on graphs that started in
the first paper of the series. Here we introduce a Graphical Gauge Model (GGM)
and show that: (a) it can be stated as an average/sum of a determinant defined
on the graph over $\mathbb{Z}_{2}$ (binary) gauge field; (b) it is equivalent
to the Monomer-Dimer (MD) model on the graph; (c) the partition function of the
model allows an explicit expression in terms of a series over disjoint directed
cycles, where each term is a product of local contributions along the cycle and
the determinant of a matrix defined on the remainder of the graph (excluding
the cycle). We also establish a relation between the MD model on the graph and
the determinant series discussed in the first paper, here considered under a
simple non-Belief-Propagation choice of the gauge. We conclude with a
discussion of possible analytic and algorithmic consequences of these results,
as well as related questions and challenges.
|
0809.3540
|
A Note on the Equivalence of Gibbs Free Energy and Information Theoretic
Capacity
|
cond-mat.stat-mech cs.IT math.IT
|
The minimization of Gibbs free energy is based on the changes in work and
free energy that occur in a physical or chemical system. The maximization of
mutual information, the capacity, of a noisy channel is determined based on the
marginal probabilities and conditional entropies associated with a
communications system. As different as the procedures might first appear,
through the exploration of a simple, "dual use" Ising model, it is seen that
the two concepts are in fact the same. In particular, the case of a binary
symmetric channel is calculated in detail.
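For the binary symmetric channel case mentioned above, the capacity is the standard expression C = 1 - H_2(p); a minimal numerical sketch:

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

# A noiseless channel has capacity 1 bit; a fully random one has capacity 0.
caps = [bsc_capacity(p) for p in (0.0, 0.1, 0.5)]
```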
|
0809.3546
|
Universal Secure Network Coding via Rank-Metric Codes
|
cs.IT cs.CR math.IT
|
The problem of securing a network coding communication system against an
eavesdropper adversary is considered. The network implements linear network
coding to deliver n packets from source to each receiver, and the adversary can
eavesdrop on \mu arbitrarily chosen links. The objective is to provide reliable
communication to all receivers, while guaranteeing that the source information
remains information-theoretically secure from the adversary. A coding scheme is
proposed that can achieve the maximum possible rate of n-\mu packets. The
scheme, which is based on rank-metric codes, has the distinctive property of
being universal: it can be applied on top of any communication network without
requiring knowledge of, or any modification to, the underlying network code. The
only requirement of the scheme is that the packet length be at least n, which
is shown to be strictly necessary for universal communication at the maximum
rate. A further scenario is considered where the adversary is allowed not only
to eavesdrop but also to inject up to t erroneous packets into the network, and
the network may suffer from a rank deficiency of at most \rho. In this case,
the proposed scheme can be extended to achieve the rate of n-\rho-2t-\mu
packets. This rate is shown to be optimal under the assumption of zero-error
communication.
|
0809.3554
|
The Approximate Capacity of the Many-to-One and One-to-Many Gaussian
Interference Channels
|
cs.IT math.IT
|
Recently, Etkin, Tse, and Wang found the capacity region of the two-user
Gaussian interference channel to within one bit/s/Hz. A natural goal is to
apply this approach to the Gaussian interference channel with an arbitrary
number of users. We make progress towards this goal by finding the capacity
region of the many-to-one and one-to-many Gaussian interference channels to
within a constant number of bits. The result makes use of a deterministic model
to provide insight into the Gaussian channel. The deterministic model makes
explicit the dimension of signal scale. A central theme emerges: the use of
lattice codes for alignment of interfering signals on the signal scale.
|
0809.3600
|
On the Capacity Improvement of Multicast Traffic with Network Coding
|
cs.IT math.IT
|
In this paper, we study the contribution of network coding (NC) in improving
the multicast capacity of random wireless ad hoc networks when nodes are
endowed with multi-packet transmission (MPT) and multi-packet reception (MPR)
capabilities. We show that a per session throughput capacity of
$\Theta(nT^{3}(n))$, where $n$ is the total number of nodes and $T(n)$ is the
communication range, can be achieved as a tight bound when each session
contains a constant number of sinks. Surprisingly, an identical order capacity
can be achieved when nodes have only MPR and MPT capabilities. This result
proves that NC does not contribute to the order capacity of multicast traffic
in wireless ad hoc networks when MPR and MPT are used in the network. The
result is in sharp contrast to the general belief (conjecture) that NC improves
the order capacity of multicast. Furthermore, if the communication range is
selected to guarantee the connectivity in the network, i.e., $T(n)\ge
\Theta(\sqrt{\log n/n})$, then the combination of MPR and MPT achieves a
throughput capacity of $\Theta(\frac{\log^{3/2} n}{\sqrt{n}})$, which provides
an order capacity gain of $\Theta(\log^2 n)$ compared to the point-to-point
multicast capacity with the same number of destinations.
|
0809.3618
|
Robust Near-Isometric Matching via Structured Learning of Graphical
Models
|
cs.CV cs.LG
|
Models for near-rigid shape matching are typically based on distance-related
features, in order to infer matches that are consistent with the isometric
assumption. However, real shapes from image datasets, even when expected to be
related by "almost isometric" transformations, are actually subject not only to
noise but also, to some limited degree, to variations in appearance and scale.
In this paper, we introduce a graphical model that parameterises appearance,
distance, and angle features and we learn all of the involved parameters via
structured prediction. The outcome is a model for near-rigid shape matching
which is robust in the sense that it is able to capture the possibly limited
but still important scale and appearance variations. Our experimental results
reveal substantial improvements upon recent successful models, while
maintaining similar running times.
|
0809.3650
|
Hierarchical Bayesian sparse image reconstruction with application to
MRFM
|
physics.data-an cs.IT math.IT stat.ME
|
This paper presents a hierarchical Bayesian model to reconstruct sparse
images when the observations are obtained from linear transformations and
corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is
well suited to such naturally sparse image applications as it seamlessly
accounts for properties such as sparsity and positivity of the image via
appropriate Bayes priors. We propose a prior that is based on a weighted
mixture of a positive exponential distribution and a mass at zero. The prior
has hyperparameters that are tuned automatically by marginalization over the
hierarchical Bayesian model. To overcome the complexity of the posterior
distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be
used to estimate the image to be recovered, e.g. by maximizing the estimated
posterior distribution. In our fully Bayesian approach the posteriors of all
the parameters are available. Thus our algorithm provides more information than
other previously proposed sparse reconstruction methods that only give a point
estimate. The performance of our hierarchical Bayesian sparse reconstruction
method is illustrated on synthetic and real data collected from a tobacco virus
sample using a prototype MRFM instrument.
|
0809.3688
|
Mathematical and computer tools of discrete dynamic modeling and
analysis of complex systems in control loop
|
cs.CE cs.MA
|
We present a method of discrete modeling and analysis of the multilevel
dynamics of complex large-scale hierarchical dynamic systems subject to an
external dynamic control mechanism. An architectural model of an information
system supporting simulation and analysis of dynamic processes and development
scenarios (strategies) of complex large-scale hierarchical systems is also
proposed.
|
0809.3690
|
Modeling and Control with Local Linearizing Nadaraya Watson Regression
|
cs.CV
|
Black box models of technical systems are purely descriptive. They do not
explain why a system works the way it does. Thus, black box models are
insufficient for some problems. But there are numerous applications, for
example, in control engineering, for which a black box model is absolutely
sufficient. In this article, we describe a general stochastic framework with
which such models can be built easily and fully automatically from observation.
Furthermore, we give a practical example and show how this framework can be
used to model and control a motorcar powertrain.
|
0809.3731
|
Uncertainty Relations for Shift-Invariant Analog Signals
|
cs.IT math.IT
|
The past several years have witnessed a surge of research investigating
various aspects of sparse representations and compressed sensing. Most of this
work has focused on the finite-dimensional setting in which the goal is to
decompose a finite-length vector into a given finite dictionary. Underlying
many of these results is the conceptual notion of an uncertainty principle: a
signal cannot be sparsely represented in two different bases. Here, we extend
these ideas and results to the analog, infinite-dimensional setting by
considering signals that lie in a finitely-generated shift-invariant (SI)
space. This class of signals is rich enough to include many interesting special
cases such as multiband signals and splines. By adapting the notion of
coherence defined for finite dictionaries to infinite SI representations, we
develop an uncertainty principle similar in spirit to its finite counterpart.
We demonstrate tightness of our bound by considering a bandlimited lowpass
train that achieves the uncertainty principle. Building upon these results and
similar work in the finite setting, we show how to find a sparse decomposition
in an overcomplete dictionary by solving a convex optimization problem. The
distinguishing feature of our approach is the fact that even though the problem
is defined over an infinite domain with infinitely many variables and
constraints, under certain conditions on the dictionary spectrum our algorithm
can find the sparsest representation by solving a finite-dimensional problem.
|
0809.4019
|
Throughput Scaling of Wireless Networks With Random Connections
|
cs.IT math.IT
|
This work studies the throughput scaling laws of ad hoc wireless networks in
the limit of a large number of nodes. A random connections model is assumed in
which the channel connections between the nodes are drawn independently from a
common distribution. Transmitting nodes are subject to an on-off strategy, and
receiving nodes employ conventional single-user decoding. The following results
are proven:
1) For a class of connection models with finite mean and variance, the
throughput scaling is upper-bounded by $O(n^{1/3})$ for single-hop schemes, and
$O(n^{1/2})$ for two-hop (and multihop) schemes.
2) The $\Theta (n^{1/2})$ throughput scaling is achievable for a specific
connection model by a two-hop opportunistic relaying scheme, which employs
full, but only local channel state information (CSI) at the receivers, and
partial CSI at the transmitters.
3) By relaxing the constraints of finite mean and variance of the connection
model, linear throughput scaling $\Theta (n)$ is achievable with Pareto-type
fading models.
|
0809.4058
|
Target Localization Accuracy Gain in MIMO Radar Based Systems
|
cs.IT math.IT
|
This paper presents an analysis of target localization accuracy, attainable
by the use of MIMO (Multiple-Input Multiple-Output) radar systems, configured
with multiple transmit and receive sensors, widely distributed over a given
area. The Cramer-Rao lower bound (CRLB) for target localization accuracy is
developed for both coherent and non-coherent processing. Coherent processing
requires a common phase reference for all transmit and receive sensors. The
CRLB is shown to be inversely proportional to the signal effective bandwidth in
the non-coherent case, but is approximately inversely proportional to the
carrier frequency in the coherent case. We further prove that optimization over
the sensors' positions lowers the CRLB by a factor equal to the product of the
number of transmitting and receiving sensors. The best linear unbiased
estimator (BLUE) is derived for the MIMO target localization problem. The
BLUE's utility is in providing a closed form localization estimate that
facilitates the analysis of the relations between sensors locations, target
location, and localization accuracy. Geometric dilution of precision (GDOP)
contours are used to map the relative performance accuracy for a given layout
of radars over a given geographic area.
|
0809.4059
|
Information transmission in oscillatory neural activity
|
q-bio.NC cs.IT math.IT q-bio.QM
|
Periodic neural activity not locked to the stimulus or to motor responses is
usually ignored. Here, we present new tools for modeling and quantifying the
information transmission based on periodic neural activity that occurs with
quasi-random phase relative to the stimulus. We propose a model to reproduce
characteristic features of oscillatory spike trains, such as histograms of
inter-spike intervals and phase locking of spikes to an oscillatory influence.
The proposed model is based on an inhomogeneous Gamma process governed by a
density function that is a product of the usual stimulus-dependent rate and a
quasi-periodic function. Further, we present an analysis method generalizing
the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the
information content in such data. We demonstrate these tools on recordings from
relay cells in the lateral geniculate nucleus of the cat.
|
0809.4086
|
Learning Hidden Markov Models using Non-Negative Matrix Factorization
|
cs.LG cs.AI cs.IT math.IT
|
The Baum-Welch algorithm together with its derivatives and variations has
been the main technique for learning Hidden Markov Models (HMM) from
observational data. We present an HMM learning algorithm based on the
non-negative matrix factorization (NMF) of higher order Markovian statistics
that is structurally different from the Baum-Welch and its associated
approaches. The described algorithm supports estimation of the number of
recurrent states of an HMM and iterates the non-negative matrix factorization
(NMF) algorithm to improve the learned HMM parameters. Numerical examples are
provided as well.
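The NMF step that such an algorithm iterates can be sketched in isolation. A minimal, generic sketch using the standard Lee-Seung multiplicative Frobenius-norm updates on an arbitrary nonnegative matrix (not the paper's full HMM learner):

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, rank, iters=1000, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H (Frobenius objective)."""
    n, m = V.shape
    W = rng.random((n, rank))  # nonnegative random initialization
    H = rng.random((rank, m))
    for _ in range(iters):
        # Multiplicative updates preserve nonnegativity of W and H.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# An exactly rank-2 nonnegative matrix is recovered almost perfectly.
V = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(V, rank=2)
residual = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the HMM setting the matrix being factored would hold higher-order Markovian statistics of the observations, with the inner dimension playing the role of the number of hidden states.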
|