| id | title | categories | abstract |
|---|---|---|---|
cs/0611113
|
An Anthological Review of Research Utilizing MontyLingua, a Python-Based
End-to-End Text Processor
|
cs.CL
|
MontyLingua, an integral part of ConceptNet, currently the largest
commonsense knowledge base, is an English text processor developed in the
Python programming language at the MIT Media Lab. The main feature of
MontyLingua is its coverage of all aspects of English text processing, from raw
input text to semantic meanings and summary generation; at the same time, its
components are loosely coupled at the architectural and code level, which
enables individual components to be used independently or substituted. However,
there has been no review exploring the role of MontyLingua in recent research
work utilizing it. This paper reviews the use of and roles played by
MontyLingua and its components in research work published in 19 articles
between October 2004 and August 2006. We observed a diversified use of
MontyLingua in many different areas, both generic and domain-specific. Although
no use of the text summarizing component was observed, we are optimistic
that it will have a crucial role in managing the current trend of information
overload in future research.
|
cs/0611114
|
Very Sparse Stable Random Projections, Estimators and Tail Bounds for
Stable Random Projections
|
cs.DS cs.IT cs.LG math.IT
|
This paper will focus on three different aspects in improving the current
practice of stable random projections.
Firstly, we propose {\em very sparse stable random projections} to
significantly reduce the processing and storage cost, by replacing the
$\alpha$-stable distribution with a mixture of a symmetric $\alpha$-Pareto
distribution (with probability $\beta$, $0<\beta\leq1$) and a point mass at the
origin (with a probability $1-\beta$). This leads to a significant
$\frac{1}{\beta}$-fold speedup for small $\beta$.
Secondly, we provide an improved estimator for recovering the original
$l_\alpha$ norms from the projected data. The standard estimator is based on
the (absolute) sample median, while we suggest using the geometric mean. The
geometric mean estimator we propose is strictly unbiased and is easier to
study. Moreover, the geometric mean estimator is more accurate, especially
non-asymptotically.
Thirdly, we provide an adequate answer to the basic question of how many
projections (samples) are needed for achieving some pre-specified level of
accuracy. \cite{Proc:Indyk_FOCS00,Article:Indyk_TKDE03} did not provide a
criterion that can be used in practice. The geometric mean estimator we propose
allows us to derive sharp tail bounds which can be expressed in exponential
forms with constants explicitly given.
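As a rough illustration of the first two ingredients above, the following sketch (function names are ours, and the bias-correcting scale constant of the geometric mean estimator is omitted) builds a very sparse projection whose entries are zero with probability 1-beta and a signed alpha-Pareto draw otherwise:

```python
import random

def very_sparse_projection(x, k, alpha=1.0, beta=0.1, seed=0):
    """Project x onto k coordinates. Each projection-matrix entry is 0 with
    probability 1-beta, otherwise a symmetric alpha-Pareto draw
    r = u**(-1/alpha) with a random sign (toy stand-in for the paper's
    mixture construction)."""
    rng = random.Random(seed)
    y = []
    for _ in range(k):
        s = 0.0
        for xi in x:
            if rng.random() < beta:            # non-zero entry with prob. beta
                r = rng.random() ** (-1.0 / alpha)
                s += xi * (r if rng.random() < 0.5 else -r)
        y.append(s)
    return y

def geometric_mean_magnitude(y):
    """Geometric mean of |y_j| -- the estimator family suggested in the
    abstract, left unscaled here (the normalizing constant is omitted)."""
    prod = 1.0
    for v in y:
        prod *= abs(v) ** (1.0 / len(y))
    return prod
```

Because the projection is linear, scaling the input by c scales every projected coordinate, and hence the geometric-mean estimate, by |c|; this is the scale equivariance that any norm estimator must satisfy.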
|
cs/0611115
|
A higher-order active contour model of a `gas of circles' and its
application to tree crown extraction
|
cs.CV
|
Many image processing problems involve identifying the region in the image
domain occupied by a given entity in the scene. Automatic solution of these
problems requires models that incorporate significant prior knowledge about the
shape of the region. Many methods for including such knowledge run into
difficulties when the topology of the region is unknown a priori, for example
when the entity is composed of an unknown number of similar objects.
Higher-order active contours (HOACs) represent one method for the modelling of
non-trivial prior knowledge about shape without necessarily constraining region
topology, via the inclusion of non-local interactions between region boundary
points in the energy defining the model. The case of an unknown number of
circular objects arises in a number of domains, e.g. medical, biological,
nanotechnological, and remote sensing imagery. Regions composed of an a priori
unknown number of circles may be referred to as a `gas of circles'. In this
report, we present a HOAC model of a `gas of circles'. In order to guarantee
stable circles, we conduct a stability analysis via a functional Taylor
expansion of the HOAC energy around a circular shape. This analysis fixes one
of the model parameters in terms of the others and constrains the rest. In
conjunction with a suitable likelihood energy, we apply the model to the
extraction of tree crowns from aerial imagery, and show that the new model
outperforms other techniques.
|
cs/0611118
|
A Neutrosophic Description Logic
|
cs.AI
|
Description Logics (DLs) are appropriate, widely used logics for managing
structured knowledge. They allow reasoning about individuals and concepts, i.e.
sets of individuals with common properties. Typically, DLs are limited to
dealing with crisp, well-defined concepts, that is, concepts for which the
question whether an individual is an instance is a yes/no question. More
often than not, the concepts encountered in the real world do not have
precisely defined criteria of membership: we may say that an individual is an
instance of a concept only to a certain degree, depending on the individual's
properties. The DLs that deal with such fuzzy concepts are called fuzzy DLs. In
order to deal with fuzzy, incomplete, indeterminate and inconsistent concepts,
we need to extend the fuzzy DLs by combining neutrosophic logic with a
classical DL. In particular, concepts become neutrosophic (here neutrosophic
means fuzzy, incomplete, indeterminate, and inconsistent), and thus reasoning
about neutrosophic concepts is supported. We define the syntax and semantics of
the resulting logic and describe its properties.
|
cs/0611120
|
Wireless Information-Theoretic Security - Part I: Theoretical Aspects
|
cs.IT math.IT
|
In this two-part paper, we consider the transmission of confidential data
over wireless wiretap channels. The first part presents an
information-theoretic problem formulation in which two legitimate partners
communicate over a quasi-static fading channel and an eavesdropper observes
their transmissions through another independent quasi-static fading channel. We
define the secrecy capacity in terms of outage probability and provide a
complete characterization of the maximum transmission rate at which the
eavesdropper is unable to decode any information. In sharp contrast with known
results for Gaussian wiretap channels (without feedback), our contribution
shows that, in the presence of fading, information-theoretic security is
achievable even when the eavesdropper has a better average signal-to-noise
ratio (SNR) than the legitimate receiver; fading thus turns out to be a friend
and not a foe. The issue of imperfect channel state information is also
addressed. Practical schemes for wireless information-theoretic security, which
in some cases come close to the secrecy capacity limits given in this paper,
are presented in Part II.
|
cs/0611121
|
Wireless Information-Theoretic Security - Part II: Practical
Implementation
|
cs.IT math.IT
|
In Part I of this two-part paper on confidential communication over wireless
channels, we studied the fundamental security limits of quasi-static fading
channels from the point of view of outage secrecy capacity with perfect and
imperfect channel state information. In Part II, we develop a practical secret
key agreement protocol for Gaussian and quasi-static fading wiretap channels.
The protocol uses a four-step procedure to secure communications: establish
common randomness via an opportunistic transmission, perform message
reconciliation, establish a common key via privacy amplification, and use the
key. We introduce a new reconciliation procedure that uses multilevel coding
and optimized low-density parity-check codes and in some cases comes close to
achieving the secrecy capacity limits established in Part I. Finally,
we develop new metrics for assessing average secure key generation rates and
show that our protocol is effective in secure key renewal.
|
cs/0611122
|
Knowledge Representation Concepts for Automated SLA Management
|
cs.SE cs.AI cs.LO cs.PL
|
Outsourcing of complex IT infrastructure to IT service providers has
increased substantially during the past years. IT service providers must be
able to fulfil their service-quality commitments based upon predefined Service
Level Agreements (SLAs) with the service customer. They need to manage, execute
and maintain thousands of SLAs for different customers and different types of
services, which demands new levels of flexibility and automation not available
with current technology. The complexity of contractual logic in SLAs
requires new forms of knowledge representation to automatically draw inferences
and execute contractual agreements. A logic-based approach provides several
advantages including automated rule chaining allowing for compact knowledge
representation as well as flexibility to adapt to rapidly changing business
requirements. We suggest adequate logical formalisms for representation and
enforcement of SLA rules and describe a proof-of-concept implementation. The
article describes selected formalisms of the ContractLog KR and their adequacy
for automated SLA management and presents results of experiments to demonstrate
flexibility and scalability of the approach.
|
cs/0611123
|
Functional Bregman Divergence and Bayesian Estimation of Distributions
|
cs.IT cs.LG math.IT
|
A class of distortions termed functional Bregman divergences is defined,
which includes squared error and relative entropy. A functional Bregman
divergence acts on functions or distributions, and generalizes the standard
Bregman divergence for vectors and a previous pointwise Bregman divergence that
was defined for functions. A recently published result showed that the mean
minimizes the expected Bregman divergence. The new functional definition
enables the extension of this result to the continuous case to show that the
mean minimizes the expected functional Bregman divergence over a set of
functions or distributions. It is shown how this theorem applies to the
Bayesian estimation of distributions. Estimation of the uniform distribution
from independent and identically drawn samples is used as a case study.
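A minimal numerical sketch of the two motivating special cases, squared error from phi(x) = ||x||^2 and (generalized) relative entropy from negative entropy, and of the mean-minimizer property, assuming the standard vector form of the divergence (our own function names):

```python
import math

def bregman(phi, grad_phi, x, y):
    """Vector Bregman divergence d_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - sum(g * (a - b) for g, a, b in zip(grad_phi(y), x, y))

# phi = squared Euclidean norm  ->  d_phi is the squared error
sq      = lambda x: sum(v * v for v in x)
sq_grad = lambda x: [2.0 * v for v in x]

# phi = negative entropy  ->  d_phi is the generalized relative entropy
ne      = lambda x: sum(v * math.log(v) for v in x)
ne_grad = lambda x: [math.log(v) + 1.0 for v in x]
```

For any sample of points and either choice of phi, the average divergence d_phi(p, s) over the sample is minimized at s equal to the sample mean, which is the (vector-case) theorem the abstract extends to functions and distributions.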
|
cs/0611124
|
Low-rank matrix factorization with attributes
|
cs.LG cs.AI cs.IR
|
We develop a new collaborative filtering (CF) method that combines previously
known users' preferences, i.e. standard CF, with product/user attributes, i.e.
classical function approximation, to predict a given user's interest in a
particular product. Our method is a generalized low-rank matrix
completion problem, where we learn a function whose inputs are pairs of vectors
-- the standard low rank matrix completion problem being a special case where
the inputs to the function are the row and column indices of the matrix. We
solve this generalized matrix completion problem using tensor product kernels
for which we also formally generalize standard kernel properties. Benchmark
experiments on movie ratings show the advantages of our generalized matrix
completion method over the standard matrix completion one with no information
about movies or people, as well as over standard multi-task or single task
learning methods.
|
cs/0611125
|
Relay Channels with Confidential Messages
|
cs.IT math.IT
|
We consider a relay channel where a relay helps the transmission of messages
from one sender to one receiver. The relay is considered not only as a sender
that helps the message transmission but also as a wiretapper who can obtain
some knowledge about the transmitted messages. In this paper we study the
coding problem of the relay channel under the situation that some of the
transmitted messages are confidential to the relay. The security of such
confidential messages is measured by the conditional entropy. The rate region
is defined as the set of transmission rates for which messages are reliably
transmitted and the security of the confidential messages is larger than a
prescribed level. We give two definitions of the rate region: we first define
the rate region in the case of a deterministic encoder and call it the
deterministic rate region; next, we define the rate region in the case of a
stochastic encoder and call it the stochastic rate region. We derive explicit
inner and outer bounds for these two rate regions and present a class of relay
channels where the two bounds match. Furthermore, we show that a stochastic
encoder can enlarge the rate region. We also evaluate the deterministic rate
region of the Gaussian relay channel with confidential messages.
|
cs/0611127
|
Coupling Methodology within the Software Platform Alliances
|
cs.MS cs.CE
|
CEA, ANDRA and EDF are jointly developing the software platform ALLIANCES,
whose aim is to produce a tool for the simulation of nuclear waste storage and
disposal repositories. This type of simulation deals with highly coupled
thermo-hydro-mechanical and chemical (T-H-M-C) processes. A key objective of
ALLIANCES is to provide the capability for developing coupling algorithms
between existing codes. The aim of this paper is to present the coupling
methodology used in the context of this software platform.
|
cs/0611129
|
Shannon's secrecy system with informed receivers and its application to
systematic coding for wiretapped channels
|
cs.IT math.IT
|
Shannon's secrecy system is studied in a setting, where both the legitimate
decoder and the wiretapper have access to side information sequences correlated
to the source, but the wiretapper receives both the coded information and the
side information via channels that are more noisy than the respective channels
of the legitimate decoder, which, in turn, also shares a secret key with the
encoder. A single--letter characterization is provided for the achievable
region in the space of five figures of merit: the equivocation at the
wiretapper, the key rate, the distortion of the source reconstruction at the
legitimate receiver, the bandwidth expansion factor of the coded channels, and
the average transmission cost (generalized power). Beyond the fact that this is
an extension of earlier studies, it also provides a framework for studying
fundamental performance limits of systematic codes in the presence of a wiretap
channel. The best achievable performance of systematic codes is then compared
to that of a general code in several respects, and a few examples are given.
|
cs/0611131
|
Scatter Networks: A New Approach for Analyzing Information Scatter on
the Web
|
cs.IR
|
Information on any given topic is often scattered across the web. Previously
this scatter has been characterized through the distribution of a set of facts
(i.e. pieces of information) across web pages, showing that typically a few
pages contain many facts on the topic, while many pages contain just a few.
While such approaches have revealed important scatter phenomena, they are lossy
in that they conceal how specific facts (e.g. rare facts) occur in specific
types of pages (e.g. fact-rich pages). To reveal such regularities, we
construct bipartite networks, consisting of two types of vertices: the facts
contained in webpages and the webpages themselves. Such a representation
enables the application of a series of network analysis techniques, revealing
structural features such as connectivity, robustness, and clustering. We
discuss the implications of each of these features to the users' ability to
find comprehensive information online. Finally, we compare the bipartite graph
structure of webpages and facts with the hyperlink structure between the
webpages.
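The bipartite construction can be sketched with made-up fact/page pairs; the two degree sequences are precisely the page-level and fact-level scatter statistics the abstract contrasts with earlier lossy summaries:

```python
from collections import defaultdict

# Hypothetical fact-in-page observations: (fact, page) pairs.
edges = [("f1", "p1"), ("f2", "p1"), ("f3", "p1"),
         ("f1", "p2"), ("f1", "p3"), ("f4", "p3")]

# The bipartite network: one adjacency map per vertex type.
facts_per_page = defaultdict(set)
pages_per_fact = defaultdict(set)
for fact, page in edges:
    facts_per_page[page].add(fact)
    pages_per_fact[fact].add(page)

# Degree sequences: fact-rich pages have high page degree;
# rare facts have low fact degree.
page_degrees = {p: len(fs) for p, fs in facts_per_page.items()}
fact_degrees = {f: len(ps) for f, ps in pages_per_fact.items()}
```

On this toy data, `p1` is a fact-rich page (degree 3) while `f4` is a rare fact (degree 1), the kind of fact-to-page regularity the bipartite view preserves.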
|
cs/0611132
|
The specifications making in complex CAD-system of renovation of the
enterprises on the basis of modules in the drawing and electronic catalogues
|
cs.CE
|
The experience of automating the preparation of specifications for projects of
renovation of industrial enterprises is described, based on special modules in
the drawing that contain the visible image and additional parameters, and on
electronic catalogues.
|
cs/0611133
|
The modelling of the automation schemes of technological processes in
CAD-system of renovation of the enterprises
|
cs.CE
|
According to the requirements of the Russian standards, automation schemes are
needed in practically every project of renovation of industrial buildings and
facilities in which any technological processes are realized. The model
representations of automation schemes in the CAD-system TechnoCAD GlassX are
described. The models follow the principle of excluding repeated input
operations.
|
cs/0611135
|
Genetic Programming for Kernel-based Learning with Co-evolving Subsets
Selection
|
cs.AI
|
Support Vector Machines (SVMs) are well-established Machine Learning (ML)
algorithms. They rely on the fact that i) linear learning can be formalized as
a well-posed optimization problem; ii) non-linear learning can be brought into
linear learning thanks to the kernel trick and the mapping of the initial
search space onto a high dimensional feature space. The kernel is designed by
the ML expert and it governs the efficiency of the SVM approach. In this paper,
a new approach for the automatic design of kernels by Genetic Programming,
called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a
well-founded fitness function inspired by the margin criterion with a
co-evolution framework ensuring the computational scalability of the approach.
Empirical validation on standard ML benchmarks demonstrates that EKM is
competitive with state-of-the-art SVMs with tuned hyper-parameters.
|
cs/0611136
|
Neural Computation with Rings of Quasiperiodic Oscillators
|
cs.RO
|
We describe the use of quasiperiodic oscillators for computation and control
of robots. We also describe their relationship to central pattern generators in
simple organisms and develop a group theory for describing the dynamics of
these systems.
|
cs/0611138
|
Functional Brain Imaging with Multi-Objective Multi-Modal Evolutionary
Optimization
|
cs.AI
|
Functional brain imaging is a source of spatio-temporal data mining problems.
A new framework hybridizing multi-objective and multi-modal optimization is
proposed to formalize these data mining problems, and addressed through
Evolutionary Computation (EC). The merits of EC for spatio-temporal data mining
are demonstrated as the approach facilitates the modelling of the experts'
requirements, and flexibly accommodates their changing goals.
|
cs/0611140
|
On the Benefits of Inoculation, an Example in Train Scheduling
|
cs.AI cs.NE
|
The local reconstruction of a railway schedule following a small perturbation
of the traffic, seeking minimization of the total accumulated delay, is a very
difficult and tightly constrained combinatorial problem. Notoriously enough,
the railway company's public image degrades proportionally to the amount of
daily delays, and the same goes for its profit! This paper describes an
inoculation procedure which greatly enhances an evolutionary algorithm for
train re-scheduling. The procedure consists in building the initial population
around a pre-computed solution based on problem-related information available
beforehand. The optimization is performed by adapting times of departure and
arrival, as well as allocation of tracks, for each train at each station. This
is achieved by a permutation-based evolutionary algorithm that relies on a
semi-greedy heuristic scheduler to gradually reconstruct the schedule by
inserting trains one after another. Experimental results are presented on
various instances of a large real-world case involving around 500 trains and
more than 1 million constraints. In terms of competition with the commercial
mathematical programming tool ILOG CPLEX, it appears that within a large class of
instances, excluding trivial instances as well as too difficult ones, and with
very few exceptions, a clever initialization turns an encouraging failure into
a clear-cut success, auguring substantial financial savings.
|
cs/0611141
|
A Generic Global Constraint based on MDDs
|
cs.AI
|
The paper suggests the use of Multi-Valued Decision Diagrams (MDDs) as the
supporting data structure for a generic global constraint. We give an algorithm
for maintaining generalized arc consistency (GAC) on this constraint that
amortizes the cost of the GAC computation over a root-to-terminal path in the
search tree. The technique used is an extension of the GAC algorithm for the
regular language constraint on finite length input. Our approach adds support
for skipped variables, maintains the reduced property of the MDD dynamically
and provides domain entailment detection. Finally we also show how to adapt the
approach to constraint types that are closely related to MDDs, such as AOMDDs
and Case DAGs.
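A toy sketch of the underlying idea (not the paper's incremental, cost-amortized GAC algorithm): a layered MDD is swept forward and backward, and a variable-value pair survives filtering only if some edge carrying that value lies on a root-to-terminal path. All names and the data layout below are our own.

```python
def mdd_supported_values(layers, root="r", terminal="t"):
    """layers[i]: dict mapping node -> list of (value, child) edges for
    variable i. Returns, per variable, the set of values lying on some
    root-to-terminal path, i.e. the generalized-arc-consistent domains."""
    n = len(layers)
    # Forward pass: nodes reachable from the root.
    reach = [{root}] + [set() for _ in range(n)]
    for i, layer in enumerate(layers):
        for node in reach[i]:
            for _, child in layer.get(node, []):
                reach[i + 1].add(child)
    # Backward pass: keep edges whose child can still reach the terminal.
    alive = [set() for _ in range(n)] + [{terminal} & reach[n]]
    supported = [set() for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for node in reach[i]:
            for value, child in layers[i].get(node, []):
                if child in alive[i + 1]:
                    alive[i].add(node)
                    supported[i].add(value)
    return supported
```

For an MDD encoding "x0 = a allows x1 in {0, 1}; x0 = b leads to a dead end", the filter removes b from x0's domain and 2 from x1's.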
|
cs/0611144
|
Coding Improves the Optimal Delay-Throughput Trade-offs in Mobile Ad-Hoc
Networks: Two-Dimensional I.I.D. Mobility Models
|
cs.NI cs.IT math.IT
|
In this paper, we investigate the delay-throughput trade-offs in mobile
ad-hoc networks under two-dimensional i.i.d. mobility models. We consider two
mobility time-scales: (i) Fast mobility where node mobility is at the same
time-scale as data transmissions; (ii) Slow mobility where node mobility is
assumed to occur at a much slower time-scale than data transmissions. Given a
delay constraint $D,$ the main results are as follows: (1) For the
two-dimensional i.i.d. mobility model with fast mobiles, the maximum throughput
per source-destination (S-D) pair is shown to be $O(\sqrt{D/n}),$ where $n$ is
the number of mobiles. (2) For the two-dimensional i.i.d. mobility model with
slow mobiles, the maximum throughput per S-D pair is shown to be
$O(\sqrt[3]{D/n}).$ (3) For each case, we propose a joint coding-scheduling
algorithm to achieve the optimal delay-throughput trade-offs.
|
cs/0611145
|
A Unified View of TD Algorithms; Introducing Full-Gradient TD and
Equi-Gradient Descent TD
|
cs.LG
|
This paper addresses the issue of policy evaluation in Markov Decision
Processes, using linear function approximation. It provides a unified view of
algorithms such as TD(lambda), LSTD(lambda), iLSTD, residual-gradient TD. It is
asserted that they all consist in minimizing a gradient function and differ by
the form of this function and their means of minimizing it. Two new schemes are
introduced in that framework: Full-gradient TD which uses a generalization of
the principle introduced in iLSTD, and EGD TD, which reduces the gradient by
successive equi-gradient descents. These three algorithms form a new
intermediate family with the interesting property of making much better use of
the samples than TD while keeping a gradient descent scheme, which is useful
for complexity issues and optimistic policy iteration.
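For concreteness, plain TD(0) with linear function approximation, the common ancestor of the algorithms unified above, can be sketched as follows (the toy chain and all names are ours, not the paper's):

```python
def td0_linear(episodes, features, alpha=0.1, gamma=1.0):
    """TD(0) policy evaluation with linear function approximation:
    w <- w + alpha * (r + gamma * w.x' - w.x) * x.
    `episodes` is a list of trajectories [(state, reward, next_state), ...],
    with next_state = None at termination."""
    n = len(next(iter(features.values())))
    w = [0.0] * n
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    for ep in episodes:
        for s, r, s2 in ep:
            x = features[s]
            v2 = dot(w, features[s2]) if s2 is not None else 0.0
            delta = r + gamma * v2 - dot(w, x)          # TD error
            w = [wi + alpha * delta * xi for wi, xi in zip(w, x)]
    return w
```

On a deterministic 3-state chain 0 -> 1 -> 2 -> end with a single terminal reward of 1 and one-hot features, the learned weights converge to the true values (all equal to 1 under gamma = 1).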
|
cs/0611146
|
Linear-Codes-Based Lossless Joint Source-Channel Coding for
Multiple-Access Channels
|
cs.IT math.IT
|
A general lossless joint source-channel coding (JSCC) scheme based on linear
codes and random interleavers for multiple-access channels (MACs) is presented
and then analyzed in this paper. By the information-spectrum approach and the
code-spectrum approach, it is shown that a linear code with a good joint
spectrum can be used to establish limit-approaching lossless JSCC schemes for
correlated general sources and general MACs, where the joint spectrum is a
generalization of the input-output weight distribution. Some properties of
linear codes with good joint spectra are investigated. A formula on the
"distance" property of linear codes with good joint spectra is derived, based
on which, it is further proved that, the rate of any systematic codes with good
joint spectra cannot be larger than the reciprocal of the corresponding
alphabet cardinality, and any sparse generator matrices cannot yield linear
codes with good joint spectra. The problem of designing arbitrary rate coding
schemes is also discussed. A novel idea called "generalized puncturing" is
proposed, which makes it possible that one good low-rate linear code is enough
for the design of coding schemes with multiple rates. Finally, various coding
problems of MACs are reviewed in a unified framework established by the
code-spectrum approach, under which, criteria and candidates of good linear
codes in terms of spectrum requirements for such problems are clearly
presented.
|
cs/0611148
|
Next Generation Language Resources using GRID
|
cs.DC cs.CL
|
This paper presents a case study concerning the challenges and requirements
posed by next generation language resources, realized as an overall model of
open, distributed and collaborative language infrastructure. If a sort of "new
paradigm" is required, we think that the emerging and still evolving technology
connected to Grid computing is a very interesting and suitable one for a
concrete realization of this vision. Given the current limitations of Grid
computing, it is very important to test the new environment on basic language
analysis tools, in order to get a feeling for the potential and the possible
limitations connected to its use in NLP. For this reason, we have run some
experiments on a module of the Linguistic Miner, i.e. the extraction of
linguistic patterns from restricted-domain corpora.
|
cs/0611150
|
A Novel Bayesian Classifier using Copula Functions
|
cs.LG cs.AI cs.IR
|
A useful method for representing Bayesian classifiers is through
\emph{discriminant functions}. Here, using copula functions, we propose a new
model for discriminants. This model provides a rich and generalized class of
decision boundaries. These decision boundaries significantly boost the
classification accuracy especially for high dimensional feature spaces. We
strengthen our analysis through simulation results.
|
cs/0611155
|
Zig-zag and Replacement Product Graphs and LDPC Codes
|
cs.IT math.IT
|
The performance of codes defined from graphs depends on the expansion
property of the underlying graph in a crucial way. Graph products, such as the
zig-zag product and replacement product provide new infinite families of
constant degree expander graphs. The paper investigates the use of zig-zag and
replacement product graphs for the construction of codes on graphs. A
modification of the zig-zag product is also introduced, which can operate on
two unbalanced biregular bipartite graphs.
|
cs/0611156
|
D-MG Tradeoff and Optimal Codes for a Class of AF and DF Cooperative
Communication Protocols
|
cs.IT math.IT
|
We consider cooperative relay communication in a fading channel environment
under the Orthogonal Amplify and Forward (OAF) and Orthogonal and
Non-Orthogonal Selection Decode and Forward (OSDF and NSDF) protocols. For all
these protocols, we compute the Diversity-Multiplexing Gain Tradeoff (DMT). We
construct DMT optimal codes for the protocols which are sphere decodable and,
in certain cases, incur minimum possible delay. Our results establish that the
DMT of the OAF protocol is identical to the DMT of the Non-Orthogonal Amplify
and Forward (NAF) protocol. Two variants of the NSDF protocol are considered:
fixed-NSDF and variable-NSDF protocol. In the variable-NSDF protocol, the
fraction of time duration for which the source alone transmits is allowed to
vary with the rate of communication. Among the class of static
amplify-and-forward and decode-and-forward protocols, the variable-NSDF
protocol is shown to have the best known DMT for any number of relays apart
from the two-relay case. When there are two relays, the variable-NSDF protocol
is shown to improve on the DMT of the best previously-known protocol for higher
values of the multiplexing gain. Our results also establish that the fixed-NSDF
protocol has a better DMT than the NAF protocol for any number of relays.
Finally, we present a DMT optimal code construction for the NAF protocol.
|
cs/0611160
|
Complementary Sets, Generalized Reed-Muller Codes, and Power Control for
OFDM
|
cs.IT math.IT
|
The use of error-correcting codes for tight control of the peak-to-mean
envelope power ratio (PMEPR) in orthogonal frequency-division multiplexing
(OFDM) transmission is considered in this correspondence. By generalizing a
result by Paterson, it is shown that each q-phase (q is even) sequence of
length 2^m lies in a complementary set of size 2^{k+1}, where k is a
nonnegative integer that can be easily determined from the generalized Boolean
function associated with the sequence. For small k this result provides a
reasonably tight bound for the PMEPR of q-phase sequences of length 2^m. A new
2^h-ary generalization of the classical Reed-Muller code is then used together
with the result on complementary sets to derive flexible OFDM coding schemes
with low PMEPR. These codes include the codes developed by Davis and Jedwab as
a special case. In certain situations the codes in the present correspondence
are similar to Paterson's code constructions and often outperform them.
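The PMEPR bound can be checked numerically. The sketch below (our own indexing convention, with least-significant bit first) evaluates the envelope power of the binary codeword obtained from the quadratic Boolean function f(x_1,x_2,x_3) = x_1x_2 + x_2x_3, a length-8 sequence of the Golay type appearing in the Davis-Jedwab cosets, whose PMEPR is at most 2:

```python
import cmath

def pmepr(seq, oversample=64):
    """Peak-to-mean envelope power ratio of an OFDM codeword:
    max over t of |sum_j a_j e^{2 pi i j t}|^2, divided by the
    mean power sum_j |a_j|^2."""
    n = len(seq)
    mean_power = sum(abs(a) ** 2 for a in seq)
    peak = 0.0
    for k in range(n * oversample):
        t = k / (n * oversample)
        s = sum(a * cmath.exp(2j * cmath.pi * j * t) for j, a in enumerate(seq))
        peak = max(peak, abs(s) ** 2)
    return peak / mean_power

def golay_word(m=3):
    """BPSK codeword (-1)^f(x) from f(x) = sum_i x_i x_{i+1},
    a Golay-type quadratic form; bits of j are taken LSB-first."""
    word = []
    for j in range(2 ** m):
        bits = [(j >> i) & 1 for i in range(m)]
        f = sum(bits[i] * bits[i + 1] for i in range(m - 1))
        word.append((-1) ** f)
    return word
```

The resulting length-8 word is the Rudin-Shapiro-type sequence + + + - + + - +, whose envelope peak stays within twice the mean power, while the uncoded all-ones word of the same length has PMEPR equal to 8.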
|
cs/0611161
|
On the Peak-to-Mean Envelope Power Ratio of Phase-Shifted Binary Codes
|
cs.IT math.IT
|
The peak-to-mean envelope power ratio (PMEPR) of a code employed in
orthogonal frequency-division multiplexing (OFDM) systems can be reduced by
permuting its coordinates and by rotating each coordinate by a fixed phase
shift. Motivated by some previous designs of phase shifts using suboptimal
methods, the following question is considered in this paper. For a given binary
code, how much PMEPR reduction can be achieved when the phase shifts are taken
from a 2^h-ary phase-shift keying (2^h-PSK) constellation? A lower bound on the
achievable PMEPR is established, which is related to the covering radius of the
binary code. Generally speaking, the achievable region of the PMEPR shrinks as
the covering radius of the binary code decreases. The bound is then applied to
some well understood codes, including nonredundant BPSK signaling, BCH codes
and their duals, Reed-Muller codes, and convolutional codes. It is demonstrated
that most (presumably not optimal) phase-shift designs from the literature
attain or approach our bound.
|
cs/0611162
|
Quaternary Constant-Amplitude Codes for Multicode CDMA
|
cs.IT math.IT
|
A constant-amplitude code is a code that reduces the peak-to-average power
ratio (PAPR) in multicode code-division multiple access (MC-CDMA) systems to
the favorable value 1. In this paper quaternary constant-amplitude codes (codes
over Z_4) of length 2^m with error-correction capabilities are studied. These
codes exist for every positive integer m, while binary constant-amplitude codes
cannot exist if m is odd. Every word of such a code corresponds to a function
from the binary m-tuples to Z_4 having the bent property, i.e., its Fourier
transform has magnitudes 2^{m/2}. Several constructions of such functions are
presented, which are exploited in connection with algebraic codes over Z_4 (in
particular quaternary Reed-Muller, Kerdock, and Delsarte-Goethals codes) to
construct families of quaternary constant-amplitude codes. Mappings from binary
to quaternary constant-amplitude codes are presented as well.
|
cs/0611163
|
On Measuring the Impact of Human Actions in the Machine Learning of a
Board Game's Playing Policies
|
cs.AI cs.GT cs.NE
|
We investigate systematically the impact of human intervention in the
training of computer players in a strategy board game. In that game, computer
players utilise reinforcement learning with neural networks for evolving their
playing strategies and demonstrate a slow learning speed. Human intervention
can significantly enhance learning performance, but carrying it out
systematically seems to be more a matter for an integrated game development
environment than for automatic evolutionary learning.
|
cs/0611164
|
Player co-modelling in a strategy board game: discovering how to play
fast
|
cs.AI cs.LG
|
In this paper we experiment with a 2-player strategy board game where playing
models are evolved using reinforcement learning and neural networks. The models
are evolved to speed up automatic game development based on human involvement
at varying levels of sophistication and density when compared to fully
autonomous playing. The experimental results suggest a clear and measurable
association between the ability to win games and the ability to do that fast,
while at the same time demonstrating that there is a minimum level of human
involvement beyond which no learning really occurs.
|
cs/0611166
|
Lossless fitness inheritance in genetic algorithms for decision trees
|
cs.AI cs.DS cs.NE
|
When genetic algorithms are used to evolve decision trees, key tree quality
parameters can be recursively computed and re-used across generations of
partially similar decision trees. Simply storing instance indices at leaves is
enough for fitness to be piecewise computed in a lossless fashion. We show the
derivation of the (substantial) expected speed-up on two bounding case problems
and trace the attractive property of lossless fitness inheritance to the
divide-and-conquer nature of decision trees. The theoretical results are
supported by experimental evidence.
|
cs/0612002
|
Reuse of designs: Desperately seeking an interdisciplinary cognitive
approach
|
cs.HC cs.AI
|
This text analyses the papers accepted for the workshop "Reuse of designs: an
interdisciplinary cognitive approach". Several dimensions and questions
considered as important (by the authors and/or by us) are addressed: What about
the "interdisciplinary cognitive" character of the approaches adopted by the
authors? Is design indeed a domain where the use of CBR is particularly
suitable? Are there important distinctions between CBR and other approaches?
Which types of knowledge (other than cases) are being, or might be, used in CBR
systems? With respect to cases: are there different "types" of case and
different types of case use? which formats are adopted for their
representation? do cases have "components"? how are cases organised in the case
memory? Concerning their retrieval: which types of index are used? on which
types of relation is retrieval based? how does one retrieve only a selected
number of cases, i.e., how does one retrieve only the "best" cases? which
processes and strategies are used, by the system and by its user? Finally, some
important aspects of CBR system development are briefly discussed: should CBR
systems be assistance or autonomous systems? how can case knowledge be
"acquired"? what about the empirical evaluation of CBR systems? The conclusion
points out some lacking points: not much attention is paid to the user, and few
papers have indeed adopted an interdisciplinary cognitive approach.
|
cs/0612007
|
High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding vs.
Linear Precoding
|
cs.IT math.IT
|
We study the MIMO broadcast channel and compare the achievable throughput for
the optimal strategy of dirty paper coding to that achieved with sub-optimal
and lower complexity linear precoding (e.g., zero-forcing and block
diagonalization) transmission. Both strategies utilize all available spatial
dimensions and therefore have the same multiplexing gain, but an absolute
difference in terms of throughput does exist. The sum rate difference between
the two strategies is analytically computed at asymptotically high SNR, and it
is seen that this asymptotic statistic provides an accurate characterization at
even moderate SNR levels. Furthermore, the difference is not affected by
asymmetric channel behavior when each user has a different average SNR.
Weighted sum rate maximization is also considered, and a similar quantification
of the throughput difference between the two strategies is performed. In the
process, it is shown that allocating user powers in direct proportion to user
weights asymptotically maximizes weighted sum rate. For multiple antenna users,
uniform power allocation across the receive antennas is applied after
distributing power proportional to the user weight.
|
cs/0612011
|
Estimation of Bit and Frame Error Rates of Low-Density Parity-Check
Codes on Binary Symmetric Channels
|
cs.IT math.IT
|
A method for estimating the performance of low-density parity-check (LDPC)
codes decoded by hard-decision iterative decoding algorithms on binary
symmetric channels (BSC) is proposed. Based on the enumeration of the smallest
weight error patterns that cannot all be corrected by the decoder, this method
estimates both the frame error rate (FER) and the bit error rate (BER) of a
given LDPC code with very good precision for all crossover probabilities of
practical interest. Through a number of examples, we show that the proposed
method can be effectively applied to both regular and irregular LDPC codes and
to a variety of hard-decision iterative decoding algorithms. Compared with the
conventional Monte Carlo simulation, the proposed method has a much smaller
computational complexity, particularly for lower error rates.
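The estimation idea lends itself to a toy union-style sketch: given the enumerated counts of the smallest-weight error patterns that defeat the decoder, the FER on a BSC with crossover probability p is approximated by summing the probabilities of those patterns occurring. The counts below are invented for illustration; the actual method requires enumerating the failing patterns of a concrete code and decoder.

```python
def fer_estimate(n, weight_counts, p):
    """Union-style frame-error-rate estimate for a length-n code on a
    BSC with crossover probability p. weight_counts maps an error
    weight w to the number of weight-w patterns the hard-decision
    decoder fails on (hypothetical numbers here; real use needs the
    enumerated counts for the code and decoder at hand)."""
    return sum(c * p**w * (1 - p)**(n - w)
               for w, c in weight_counts.items())

# e.g. a length-100 code failing on 10 weight-3 and 400 weight-4 patterns
print(fer_estimate(100, {3: 10, 4: 400}, p=0.01))
```

At small p the sum is dominated by the lowest-weight terms, which is consistent with the abstract's claim of good precision at the low error rates where Monte Carlo simulation is most expensive.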
|
cs/0612012
|
Geographic Gossip on Geometric Random Graphs via Affine Combinations
|
cs.MA cs.IT math.IT
|
In recent times, a considerable amount of work has been devoted to the
development and analysis of gossip algorithms in Geometric Random Graphs. In a
recently introduced model termed "Geographic Gossip," each node is aware of its
position but possesses no further information. Traditionally, gossip protocols
have always used convex linear combinations to achieve averaging. We develop a
new protocol for Geographic Gossip, in which, counter-intuitively, we use {\it
non-convex affine combinations} as updates in addition to convex combinations
to accelerate the averaging process. The dependence of the number of
transmissions used by our algorithm on the number of sensors $n$ is $n
\exp(O(\log \log n)^2) = n^{1 + o(1)}$. For the previous algorithm, this
dependence was $\tilde{O}(n^{1.5})$. The exponent $1+o(1)$ of our algorithm is
asymptotically optimal. Our algorithm involves a hierarchical structure of
$\log \log n$ depth and is not completely decentralized. However, the extent of
control exercised by a sensor on another is restricted to switching the other
on or off.
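The key invariant behind mixing convex and affine updates can be sketched minimally: a pairwise update x_i ← x_i + α(x_j − x_i), x_j ← x_j + α(x_i − x_j) preserves the pair's (and hence the network's) mean whether α is convex (α in (0,1]) or affine non-convex (α > 1). The numbers below are illustrative only; the paper's protocol additionally organizes such updates into a hierarchy.

```python
def gossip_step(x, i, j, alpha):
    """Pairwise gossip update. alpha in (0, 1] gives the classical
    convex combination; alpha > 1 gives a non-convex affine one.
    Either way the sum (and thus the network average) is preserved."""
    xi, xj = x[i], x[j]
    x[i] = xi + alpha * (xj - xi)
    x[j] = xj + alpha * (xi - xj)

x = [1.0, 5.0, 9.0, 3.0]
mean = sum(x) / len(x)
gossip_step(x, 0, 1, 1.3)   # affine (non-convex) update
gossip_step(x, 2, 3, 0.5)   # convex averaging update
print(abs(sum(x) / len(x) - mean) < 1e-12)  # True
```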
|
cs/0612014
|
Going Stupid with EcoLab
|
cs.MA
|
In 2005, Railsback et al. proposed a very simple model ({\em Stupid
Model}) that could be implemented within a couple of hours, and later
extended to demonstrate the use of common ABM platform functionality. They
provided implementations of the model in several agent based modelling
platforms, and compared the platforms for ease of implementation of this simple
model, and performance. In this paper, I implement Railsback et al.'s Stupid
Model in the EcoLab simulation platform, a C++ based modelling platform,
demonstrating that it is a feasible platform for these sorts of models, and
compare the performance of the implementation with Repast, Mason and Swarm
versions.
|
cs/0612015
|
On the intersection of additive perfect codes
|
cs.IT math.IT
|
The intersection problem for additive (extended and non-extended) perfect
codes, i.e., the possible values for the number of codewords in the
intersection of two additive codes C1 and C2 of the same length, is
investigated. Lower and upper bounds for the intersection number are computed
and, for any value between these bounds, codes which have this given
intersection value are constructed. For all these codes the abelian group
structure of the intersection is characterized. The parameters of this abelian
group structure corresponding to the intersection codes are computed and lower
and upper bounds for these parameters are established. Finally, constructions
of codes the intersection of which fits any parameters between these bounds are
given.
|
cs/0612019
|
On Finite Memory Universal Data Compression and Classification of
Individual Sequences
|
cs.IT math.IT
|
Consider the case where consecutive blocks of N letters of a semi-infinite
individual sequence X over a finite-alphabet are being compressed into binary
sequences by some one-to-one mapping. No a-priori information about X is
available at the encoder, which must therefore adopt a universal
data-compression algorithm. It is known that if the universal LZ77 data
compression algorithm is successively applied to N-blocks then the best
error-free compression for the particular individual sequence X is achieved, as
$N$ tends to infinity. The best possible compression that may be achieved by
any universal data compression algorithm for finite N-blocks is discussed. It
is demonstrated that context tree coding essentially achieves it. Next,
consider a device called classifier (or discriminator) that observes an
individual training sequence X. The classifier's task is to examine individual
test sequences of length N and decide whether the test N-sequence has the same
features as those that are captured by the training sequence X, or is
sufficiently different, according to some appropriate criterion. Here again, it
is demonstrated that a particular universal context classifier with a
storage-space complexity that is linear in N, is essentially optimal. This may
contribute a theoretical "individual sequence" justification for the
Probabilistic Suffix Tree (PST) approach in learning theory and in
computational biology.
|
cs/0612024
|
On the Maximum Sum-rate Capacity of Cognitive Multiple Access Channel
|
cs.IT math.IT
|
We consider the communication scenario where multiple cognitive users wish to
communicate to the same receiver, in the presence of primary transmission. The
cognitive transmitters are assumed to have the side information about the
primary transmission. The capacity region of cognitive users is formulated
under the constraint that the capacity of primary transmission remains
unchanged, as if no cognitive users existed. Moreover, the maximum sum-rate point of the
capacity region is characterized, by optimally allocating the power of each
cognitive user to transmit its own information.
|
cs/0612027
|
Experimental Information and Statistical Modeling of Physical Laws
|
cs.IT cs.IR math.IT
|
Statistical modeling of physical laws connects experiments with mathematical
descriptions of natural phenomena. The modeling is based on the probability
density of measured variables expressed by experimental data via a kernel
estimator. As an objective kernel the scattering function determined by
calibration of the instrument is introduced. This function provides for a new
definition of experimental information and redundancy of experimentation in
terms of information entropy. The redundancy increases with the number of
experiments, while the experimental information converges to a value that
describes the complexity of the data. The difference between the redundancy and
the experimental information is proposed as the model cost function. From its
minimum, a proper number of data in the model is estimated. As an optimal,
nonparametric estimator of the relation between measured variables the
conditional average extracted from the kernel estimator is proposed. The
modeling is demonstrated on noisy chaotic data.
|
cs/0612029
|
A Classification of 6R Manipulators
|
cs.RO
|
This paper presents a classification of generic 6-revolute-jointed (6R)
manipulators using the homotopy class of their critical point manifold. Only
part of the classification is listed in this paper because of the complexity of
the homotopy classes of the 4-torus. The results of this classification will
serve future research on the classification and topological properties of
manipulators' joint spaces and workspaces.
|
cs/0612030
|
Loop corrections for approximate inference
|
cs.AI cs.IT cs.LG math.IT
|
We propose a method for improving approximate inference methods that corrects
for the influence of loops in the graphical model. The method is applicable to
arbitrary factor graphs, provided that the size of the Markov blankets is not
too large. It is an alternative implementation of an idea introduced recently
by Montanari and Rizzo (2005). In its simplest form, which amounts to the
assumption that no loops are present, the method reduces to the minimal Cluster
Variation Method approximation (which uses maximal factors as outer clusters).
On the other hand, using estimates of the effect of loops (obtained by some
approximate inference algorithm) and applying the Loop Correcting (LC) method
usually gives significantly better results than applying the approximate
inference algorithm directly without loop corrections. Indeed, we often observe
that the loop corrected error is approximately the square of the error of the
approximate inference method used to estimate the effect of loops. We compare
different variants of the Loop Correcting method with other approximate
inference methods on a variety of graphical models, including "real world"
networks, and conclude that the LC approach generally obtains the most accurate
results.
|
cs/0612031
|
Estimating Aggregate Properties on Probabilistic Streams
|
cs.DS cs.DB
|
The probabilistic-stream model was introduced by Jayram et al. \cite{JKV07}.
It is a generalization of the data stream model that is suited to handling
``probabilistic'' data where each item of the stream represents a probability
distribution over a set of possible events. Therefore, a probabilistic stream
determines a distribution over potentially a very large number of classical
"deterministic" streams where each item is deterministically one of the domain
values. The probabilistic model is applicable for not only analyzing streams
where the input has uncertainties (such as sensor data streams that measure
physical processes) but also where the streams are derived from the input data
by post-processing, such as tagging or reconciling inconsistent and poor
quality data.
We present streaming algorithms for computing commonly used aggregates on a
probabilistic stream. We present the first known, one pass streaming algorithm
for estimating the \AVG, improving results in \cite{JKV07}. We present the
first known streaming algorithms for estimating the number of \DISTINCT items
on probabilistic streams. Further, we present extensions to other aggregates
such as the repeat rate, quantiles, etc. In all cases, our algorithms work with
provable accuracy guarantees and within the space constraints of the data
stream model.
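For the DISTINCT aggregate, the quantity being estimated has a simple closed form: each domain value is distinct-in-expectation with probability one minus the product of its non-appearance probabilities. The sketch below computes this exactly offline (it is not the paper's small-space streaming estimator), assuming each stream item is encoded as a dict mapping possible values to probabilities:

```python
from collections import defaultdict

def expected_distinct(stream):
    """Expected number of DISTINCT values in a probabilistic stream.
    Each item is a {value: probability} dict over its possible events.
    Exact offline computation, not a sublinear-space streaming sketch."""
    miss = defaultdict(lambda: 1.0)     # P[value never appears]
    for item in stream:
        for v, p in item.items():
            miss[v] *= 1.0 - p
    return sum(1.0 - q for q in miss.values())

stream = [{"a": 0.5}, {"a": 0.5, "b": 0.5}, {"b": 1.0}]
print(expected_distinct(stream))  # 1.75
```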
|
cs/0612032
|
Code Spectrum and Reliability Function: Binary Symmetric Channel
|
cs.IT math.IT
|
A new approach for upper bounding the channel reliability function using the
code spectrum is described. It allows us to treat both the low-rate and
high-rate cases in a unified way. In particular, the earlier known upper bounds are improved,
and a new derivation of the sphere-packing bound is presented.
|
cs/0612033
|
Acronym-Meaning Extraction from Corpora Using Multi-Tape Weighted
Finite-State Machines
|
cs.CL cs.DS cs.SC
|
The automatic extraction of acronyms and their meaning from corpora is an
important sub-task of text mining. It can be seen as a special case of string
alignment, where a text chunk is aligned with an acronym. Alternative
alignments have different cost, and ideally the least costly one should give
the correct meaning of the acronym. We show how this approach can be
implemented by means of a 3-tape weighted finite-state machine (3-WFSM) which
reads a text chunk on tape 1 and an acronym on tape 2, and generates all
alternative alignments on tape 3. The 3-WFSM can be automatically generated
from a simple regular expression. No additional algorithms are required at any
stage. Our 3-WFSM has a size of 27 states and 64 transitions, and finds the
best analysis of an acronym in a few milliseconds.
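The alignment view can be sketched as a dynamic program over the same three cost classes a weighted machine would encode. The costs here are invented for illustration, and the paper compiles them into a 3-WFSM generated from a regular expression rather than a bespoke DP:

```python
def best_alignment_cost(chunk, acronym):
    """Minimal-cost alignment of an acronym against a text chunk.
    Hypothetical costs:
      0 -- acronym letter matches a word-initial chunk letter
      1 -- acronym letter matches a non-initial chunk letter
      2 -- an acronym letter is skipped
    Skipping chunk characters is free, like epsilon moves on tape 1."""
    chunk, acro = chunk.lower(), acronym.lower()
    n, m = len(chunk), len(acro)
    INF = float("inf")
    # dp[i][j]: cheapest alignment of chunk[:i] with acro[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 0                      # unused chunk prefix is free
    for j in range(1, m + 1):
        dp[0][j] = 2 * j                  # all-skipped acronym prefix
    for i in range(1, n + 1):
        initial = i == 1 or not chunk[i - 2].isalpha()
        for j in range(1, m + 1):
            cands = [dp[i - 1][j],        # skip this chunk character
                     dp[i][j - 1] + 2]    # skip this acronym letter
            if chunk[i - 1] == acro[j - 1]:
                cands.append(dp[i - 1][j - 1] + (0 if initial else 1))
            dp[i][j] = min(cands)
    return dp[n][m]

print(best_alignment_cost("hidden markov model", "hmm"))  # 0
```

A zero cost means every acronym letter matched a word-initial letter, which is the ideal (least costly) analysis the abstract refers to.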
|
cs/0612041
|
Viterbi Algorithm Generalized for n-Tape Best-Path Search
|
cs.CL cs.DS cs.SC
|
We present a generalization of the Viterbi algorithm for identifying the path
with minimal (resp. maximal) weight in a n-tape weighted finite-state machine
(n-WFSM), that accepts a given n-tuple of input strings (s_1,... s_n). It also
allows us to compile the best transduction of a given input n-tuple by a
weighted (n+m)-WFSM (transducer) with n input and m output tapes. Our algorithm
has a worst-case time complexity of O(|s|^n |E| log (|s|^n |Q|)), where n and
|s| are the number and average length of the strings in the n-tuple, and |Q|
and |E| the number of states and transitions in the n-WFSM, respectively. A
straightforward alternative, consisting of intersection followed by classical
shortest-distance search, operates in O(|s|^n (|E|+|Q|) log (|s|^n |Q|)) time.
|
cs/0612042
|
Decentralized Maximum Likelihood Estimation for Sensor Networks Composed
of Nonlinearly Coupled Dynamical Systems
|
cs.DC cs.IT math.IT
|
In this paper we propose a decentralized sensor network scheme capable to
reach a globally optimum maximum likelihood (ML) estimate through
self-synchronization of nonlinearly coupled dynamical systems. Each node of the
network is composed of a sensor and a first-order dynamical system initialized
with the local measurements. Nearby nodes interact with each other exchanging
their state value and the final estimate is associated to the state derivative
of each dynamical system. We derive the conditions on the coupling mechanism
guaranteeing that, if the network observes one common phenomenon, each node
converges to the globally optimal ML estimate. We prove that the synchronized
state is globally asymptotically stable if the coupling strength exceeds a
given threshold. Acting on a single parameter, the coupling strength, we show
how, in the case of nonlinear coupling, the network behavior can switch from a
global consensus system to a spatial clustering system. Finally, we show the
effect of the network topology on the scalability properties of the network and
we validate our theoretical findings with simulation results.
|
cs/0612043
|
About the Lifespan of Peer to Peer Networks
|
cs.DC cs.IR
|
We analyze the ability of peer to peer networks to deliver a complete file
among the peers. Early on, we motivate a broad generalization of network
behavior, organizing it into two successive phases. According to this
view the network has two main states: first centralized - few sources (roots)
hold the complete file, and next distributed - peers hold some parts (chunks)
of the file such that the entire network has the whole file, but no individual
has it. In the distributed state we study two scenarios, first, when the peers
are ``patient'', i.e., do not leave the system until they obtain the complete
file; second, peers are ``impatient'' and almost always leave the network
before obtaining the complete file.
|
cs/0612044
|
The Relay-Eavesdropper Channel: Cooperation for Secrecy
|
cs.IT math.IT
|
This paper establishes the utility of user cooperation in facilitating secure
wireless communications. In particular, the four-terminal relay-eavesdropper
channel is introduced and an outer-bound on the optimal rate-equivocation
region is derived. Several cooperation strategies are then devised and the
corresponding achievable rate-equivocation regions are characterized. Of
particular interest is the novel Noise-Forwarding (NF) strategy, where the
relay node sends codewords independent of the source message to confuse the
eavesdropper. This strategy is used to illustrate the deaf helper phenomenon,
where the relay is able to facilitate secure communications while being totally
ignorant of the transmitted messages. Furthermore, NF is shown to increase the
secrecy capacity in the reversely degraded scenario, where the relay node fails
to offer performance gains in the classical setting. The gain offered by the
proposed cooperation strategies is then proved theoretically and validated
numerically in the additive White Gaussian Noise (AWGN) channel.
|
cs/0612046
|
Social Networks and Social Information Filtering on Digg
|
cs.HC cs.AI cs.IR
|
The new social media sites -- blogs, wikis, Flickr and Digg, among others --
underscore the transformation of the Web to a participatory medium in which
users are actively creating, evaluating and distributing information. Digg is a
social news aggregator which allows users to submit links to, vote on and
discuss news stories. Each day Digg selects a handful of stories to feature on
its front page. Rather than rely on the opinion of a few editors, Digg
aggregates opinions of thousands of its users to decide which stories to
promote to the front page.
Digg users can designate other users as ``friends'' and easily track friends'
activities: what new stories they submitted, commented on or read. The friends
interface acts as a \emph{social filtering} system, recommending to a user
stories his or her friends liked or found interesting. By tracking the votes
received by newly submitted stories over time, we showed that social filtering
is an effective information filtering approach. Specifically, we showed that
(a) users tend to like stories submitted by friends and (b) users tend to like
stories their friends read and liked. As a byproduct of social filtering,
social networks also play a role in promoting stories to Digg's front page,
potentially leading to a ``tyranny of the minority'' situation where a
disproportionate number of front page stories comes from the same small group
of interconnected users. Despite this, social filtering is a promising new
technology that can be used to personalize and tailor information to individual
users: for example, through personal front pages.
|
cs/0612047
|
Social Browsing on Flickr
|
cs.HC cs.AI
|
The new social media sites - blogs, wikis, del.icio.us and Flickr, among
others - underscore the transformation of the Web to a participatory medium in
which users are actively creating, evaluating and distributing information. The
photo-sharing site Flickr, for example, allows users to upload photographs,
view photos created by others, comment on those photos, etc. As is common to
other social media sites, Flickr allows users to designate others as
``contacts'' and to track their activities in real time. The contacts (or
friends) lists form the social network backbone of social media sites. We claim
that these social networks facilitate new ways of interacting with information,
e.g., through what we call social browsing. The contacts interface on Flickr
enables users to see latest images submitted by their friends. Through an
extensive analysis of Flickr data, we show that social browsing through the
contacts' photo streams is one of the primary methods by which users find new
images on Flickr. This finding has implications for creating personalized
recommendation systems based on the user's declared contacts lists.
|
cs/0612049
|
Power Control in Distributed Cooperative OFDMA Cellular Networks
|
cs.IT math.IT
|
This paper has been withdrawn by the author.
|
cs/0612051
|
On the Decoder Error Probability of Bounded Rank-Distance Decoders for
Maximum Rank Distance Codes
|
cs.IT math.IT
|
In this paper, we first introduce the concept of elementary linear subspace,
which has similar properties to those of a set of coordinates. We then use
elementary linear subspaces to derive properties of maximum rank distance (MRD)
codes that parallel those of maximum distance separable codes. Using these
properties, we show that, for MRD codes with error correction capability t, the
decoder error probability of bounded rank distance decoders decreases
exponentially with t^2 based on the assumption that all errors with the same
rank are equally likely.
|
cs/0612052
|
Budget Optimization in Search-Based Advertising Auctions
|
cs.DS cs.CE cs.GT
|
Internet search companies sell advertisement slots based on users' search
queries via an auction. While there has been a lot of attention on the auction
process and its game-theoretic aspects, our focus is on the advertisers. In
particular, the advertisers have to solve a complex optimization problem of how
to place bids on the keywords of their interest so that they can maximize their
return (the number of user clicks on their ads) for a given budget. We model
the entire process and study this budget optimization problem. While most
variants are NP-hard, we show, perhaps surprisingly, that simply randomizing
between two uniform strategies that bid equally on all the keywords works well.
More precisely, this strategy gets at least 1-1/e fraction of the maximum
clicks possible. Such uniform strategies are likely to be practical. We also
present inapproximability results, and optimal algorithms for variants of the
budget optimization problem.
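The flavor of the two-uniform-strategy result can be sketched under a simplified model in which each uniform bid level has a known total cost and click count, both nondecreasing in the bid, and the mixing probability is chosen so the expected spend meets the budget. This is a toy illustration of the randomization idea, not the paper's algorithm or guarantee:

```python
def two_bid_mixture(levels, budget):
    """levels: list of (bid, cost, clicks) for uniform bids, with cost
    and clicks nondecreasing in bid; assumes at least one affordable
    level. Returns ((bid_lo, bid_hi), p): play bid_hi with probability
    p and bid_lo otherwise, so the expected spend equals the budget.
    Hypothetical model for illustration only."""
    lo = max((l for l in levels if l[1] <= budget), key=lambda l: l[1])
    above = [l for l in levels if l[1] > budget]
    if not above:                       # budget covers the top level
        return (lo[0], lo[0]), 0.0
    hi = min(above, key=lambda l: l[1])
    p = (budget - lo[1]) / (hi[1] - lo[1])
    return (lo[0], hi[0]), p

levels = [(0.5, 60.0, 100), (1.0, 90.0, 140), (2.0, 150.0, 180)]
(b_lo, b_hi), p = two_bid_mixture(levels, budget=120.0)
print(b_lo, b_hi, p)  # 1.0 2.0 0.5
```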
|
cs/0612053
|
Deriving Schrodinger Equation From A Soft-Decision Iterative Decoding
Algorithm
|
cs.IT math.IT
|
The belief propagation algorithm has been recognized in the information
theory community as a soft-decision iterative decoding algorithm. It is the
most powerful algorithm found so far for attacking hard optimization problems
in channel decoding. Quantum mechanics is the foundation of modern physics with
the time-independent Schrodinger equation being one of the most important
equations. This paper shows that the equation can be derived from a generalized
belief propagation algorithm. Such a connection on a mathematical basis might
shed new insights into the foundations of quantum mechanics and quantum
computing.
|
cs/0612055
|
Linear Probing with Constant Independence
|
cs.DS cs.DB
|
Hashing with linear probing dates back to the 1950s, and is among the most
studied algorithms. In recent years it has become one of the most important
hash table organizations since it uses the cache of modern computers very well.
Unfortunately, previous analyses rely either on complicated and space-consuming
hash functions, or on the unrealistic assumption of free access to a truly
random hash function. Already Carter and Wegman, in their seminal paper on
universal hashing, raised the question of extending their analysis to linear
probing. However, we show in this paper that linear probing using a pairwise
independent family may have expected {\em logarithmic} cost per operation. On
the positive side, we show that 5-wise independence is enough to ensure
constant expected time per operation. This resolves the question of finding a
space and time efficient hash function that provably ensures good performance
for linear probing.
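The positive result's setting can be sketched concretely: linear probing keyed by a hash drawn from a 5-wise independent family, here a random degree-4 polynomial modulo a prime. The table itself is a toy (no resizing or deletion, so the load must stay below 1), but the hash family is the kind the constant-expected-time analysis covers:

```python
import random

class LinearProbingSet:
    """Linear-probing hash set keyed by a degree-4 polynomial hash
    mod a Mersenne prime, i.e. a 5-wise independent family. Toy
    sketch: no resizing or deletion, keys must be < PRIME and fewer
    than the capacity."""
    PRIME = (1 << 31) - 1

    def __init__(self, capacity=16, seed=0):
        rng = random.Random(seed)
        self.coeffs = [rng.randrange(1, self.PRIME) for _ in range(5)]
        self.table = [None] * capacity

    def _hash(self, key):
        h = 0
        for c in self.coeffs:           # Horner evaluation mod PRIME
            h = (h * key + c) % self.PRIME
        return h % len(self.table)

    def add(self, key):
        i = self._hash(key)
        while self.table[i] is not None:
            if self.table[i] == key:
                return
            i = (i + 1) % len(self.table)   # probe the next slot
        self.table[i] = key

    def __contains__(self, key):
        i = self._hash(key)
        while self.table[i] is not None:
            if self.table[i] == key:
                return True
            i = (i + 1) % len(self.table)
        return False

s = LinearProbingSet()
for k in [3, 19, 35, 7]:
    s.add(k)
print(35 in s, 4 in s)  # True False
```

The probe sequence scans forward from the hashed slot until the key or an empty slot is found, which is why the cache behavior is so favorable in practice.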
|
cs/0612056
|
Conscious Intelligent Systems - Part 1 : I X I
|
cs.AI
|
Did natural consciousness and intelligent systems arise out of a path that
was co-evolutionary to evolution? Can we explain human self-consciousness as
having risen out of such an evolutionary path? If so how could it have been?
In this first part of a two-part paper (titled IXI), we take a learning
system perspective to the problem of consciousness and intelligent systems, an
approach that may look unfashionable in this age of fMRIs and high-tech
neuroscience.
We posit conscious intelligent systems in natural environments and wonder how
natural factors influence their design paths. Such a perspective allows us to
explain seamlessly a variety of natural factors, factors ranging from the rise
and presence of the human mind, man's sense of I, his self-consciousness and
his looping thought processes to factors like reproduction, incubation,
extinction, sleep, the richness of natural behavior, etc. It even allows us to
speculate on a possible human evolution scenario and other natural phenomena.
|
cs/0612057
|
Conscious Intelligent Systems - Part II - Mind, Thought, Language and
Understanding
|
cs.AI
|
This is the second part of a paper on Conscious Intelligent Systems. We use
the understanding gained in the first part (Conscious Intelligent Systems Part
1: IXI (arxiv id cs.AI/0612056)) to look at understanding. We see how the
presence of mind affects understanding and intelligent systems; we see that the
presence of mind necessitates language. The rise of language in turn has
important effects on understanding. We discuss the humanoid question and how
the question of self-consciousness (and by association mind/thought/language)
would affect humanoids too.
|
cs/0612059
|
Synchronization recovery and state model reduction for soft decoding of
variable length codes
|
cs.NI cs.IT math.IT
|
Variable length codes exhibit de-synchronization problems when transmitted
over noisy channels. Trellis decoding techniques based on Maximum A Posteriori
(MAP) estimators are often used to minimize the error rate on the estimated
sequence. If the number of symbols and/or bits transmitted is known by the
decoder, termination constraints can be incorporated in the decoding process.
All the paths in the trellis which do not lead to a valid sequence length are
suppressed. This paper presents an analytic method to assess the expected error
resilience of a VLC when trellis decoding with a sequence length constraint is
used. The approach is based on the computation, for a given code, of the amount
of information brought by the constraint. It is then shown that this quantity
as well as the probability that the VLC decoder does not re-synchronize in a
strict sense, are not significantly altered by appropriate trellis state
aggregation. This proves that the performance obtained by running a
length-constrained Viterbi decoder on aggregated state models approaches the
one obtained with the bit/symbol trellis, with a significantly reduced
complexity. It is then shown that the complexity can be further decreased by
projecting the state model on two state models of reduced size.
|
cs/0612062
|
Unifying Lexicons in view of a Phonological and Morphological Lexical DB
|
cs.IR
|
The present work falls in the line of activities promoted by the European
Language Resources Association (ELRA) Production Committee (PCom) and raises
issues in methods, procedures and tools for the reusability, creation, and
management of Language Resources. A two-fold purpose lies behind this
experiment. The first aim is to investigate the feasibility, define methods and
procedures for combining two Italian lexical resources that have incompatible
formats and complementary information into a Unified Lexicon (UL). The adopted
strategy and the procedures appointed are described together with the driving
criterion of the merging task, where a balance between human and computational
efforts is pursued. The coverage of the UL has been maximized, by making use of
simple and fast matching procedures. The second aim is to exploit this newly
obtained resource for implementing the phonological and morphological layers of
the CLIPS lexical database. Implementing these new layers and linking them with
the already existing syntactic and semantic layers is not a trivial task. The
constraints imposed by the model, the impact at the architectural level and the
solution adopted in order to make the whole database `speak' efficiently are
presented. Advantages vs. disadvantages are discussed.
|
cs/0612064
|
Bounds on Key Appearance Equivocation for Substitution Ciphers
|
cs.IT cs.CR math.IT
|
The average conditional entropy of the key given the message and its
corresponding cryptogram, H(K|M,C), which is referred to as the key appearance
equivocation, was proposed as a theoretical measure of the strength of the
cipher system under a known-plaintext attack by Dunham in 1980. In the same
work (among other things), lower and upper bounds for H(S_{M}|M^L,C^L) are
found and its asymptotic behaviour as a function of the cryptogram length L is
described for simple substitution ciphers, i.e., when the key space S_{M} is the
symmetric group acting on a discrete alphabet M. In the present paper we
consider the same problem when the key space is an arbitrary subgroup K of
S_{M} and generalize Dunham's result.
|
cs/0612067
|
Retrieving Reed-Solomon coded data under interpolation-based list
decoding
|
cs.IT math.IT
|
A transform that enables generator-matrix-based Reed-Solomon (RS) coded data
to be recovered under interpolation-based list decoding is presented. The
transform matrix needs to be computed only once and the transformation of an
element from the output list to the desired RS coded data block incurs $k^{2}$
field multiplications, given a code of dimension $k$.
|
cs/0612068
|
Interactive Configuration by Regular String Constraints
|
cs.AI
|
A product configurator which is complete, backtrack free and able to compute
the valid domains at any state of the configuration can be constructed by
building a Binary Decision Diagram (BDD). Despite the fact that the size of the
BDD is exponential in the number of variables in the worst case, BDDs have
proved to work very well in practice. Current BDD-based techniques can only
handle interactive configuration with small finite domains. In this paper we
extend the approach to handle string variables constrained by regular
expressions. The user is allowed to change the strings by adding letters at the
end of the string. We show how to make a data structure that can perform fast
valid domain computations given some assignment on the set of string variables.
We first show how to do this by using one large DFA. Since this approach is
too space consuming to be of practical use, we construct a data structure that
simulates the large DFA and is, in most practical cases, much more space
efficient. As an example, for a configuration problem on $n$ string variables
with only one solution, in which each string variable is assigned a value of
length $k$, the former structure uses $\Omega(k^n)$ space whereas the latter
needs only $O(kn)$. We also show how this framework can easily be combined
with the recent BDD techniques to allow boolean, integer, and string variables
in the configuration problem.
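The valid-domain computation described above can be sketched on a single DFA:
a letter is valid at the current state exactly when it leads to a state from
which some accepting state is still reachable. This is our minimal
illustration, not the paper's space-efficient data structure; the transition
table and the example regular expression are assumptions for the example.

```python
# Minimal sketch: valid next letters in a DFA, i.e., letters that keep
# an accepting state reachable. Not the paper's simulated-DFA structure.

def coaccessible(transitions, accepting):
    """States from which some accepting state is reachable."""
    live = set(accepting)
    changed = True
    while changed:
        changed = False
        for (state, _letter), target in transitions.items():
            if target in live and state not in live:
                live.add(state)
                changed = True
    return live

def valid_domain(transitions, accepting, state):
    """Letters that may be appended while a solution remains reachable."""
    live = coaccessible(transitions, accepting)
    return {letter for (s, letter), target in transitions.items()
            if s == state and target in live}

# Illustrative DFA for the regex a(b|c)d over states 0..3 (dead state omitted):
transitions = {(0, 'a'): 1, (1, 'b'): 2, (1, 'c'): 2, (2, 'd'): 3}
accepting = {3}

print(sorted(valid_domain(transitions, accepting, 1)))  # ['b', 'c']
```

After the user types "a" (state 1), only "b" or "c" keep the single solution
reachable, which is exactly the valid-domain guarantee of a backtrack-free
configurator.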
|
cs/0612073
|
On the Fingerprinting Capacity Under the Marking Assumption
|
cs.IT cs.CR math.IT
|
We address the maximum attainable rate of fingerprinting codes under the
marking assumption, studying lower and upper bounds on the value of the rate
for various sizes of the attacker coalition. Lower bounds are obtained by
considering typical coalitions, which represents a new idea in the area of
fingerprinting and enables us to improve the previously known lower bounds for
coalitions of size two and three. For upper bounds, the fingerprinting problem
is modelled as a communications problem. It is shown that the maximum code rate
is bounded above by the capacity of a certain class of channels, which are
similar to the multiple-access channel. Converse coding theorems proved in the
paper provide new upper bounds on fingerprinting capacity.
It is proved that capacity for fingerprinting against coalitions of size two
and three over the binary alphabet satisfies $0.25 \leq C_{2,2} \leq 0.322$ and
$0.083 \leq C_{3,2} \leq 0.199$ respectively. For coalitions of an arbitrary
fixed size $t,$ we derive an upper bound $(t\ln2)^{-1}$ on fingerprinting
capacity in the binary case. Finally, for general alphabets, we establish upper
bounds on the fingerprinting capacity involving only single-letter mutual
information quantities.
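As a quick numeric check (ours, not the paper's), the general binary upper
bound $(t\ln2)^{-1}$ can be compared against the sharper bounds quoted above
for coalitions of size two and three:

```python
# Compare the general binary upper bound 1/(t ln 2) with the sharper
# capacity upper bounds stated for t = 2 and t = 3. Illustrative check only.
import math

general = {t: 1 / (t * math.log(2)) for t in (2, 3)}
sharper = {2: 0.322, 3: 0.199}

for t in (2, 3):
    # the dedicated bounds are tighter than the general formula
    print(t, round(general[t], 3), sharper[t])
```

The general formula gives roughly 0.721 and 0.481, consistent with (and weaker
than) the dedicated bounds 0.322 and 0.199.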
|
cs/0612075
|
Intermediate Performance of Rateless Codes
|
cs.IT math.IT
|
Rateless/fountain codes are designed so that all input symbols can be
recovered from a slightly larger number of coded symbols, with high probability
using an iterative decoder. In this paper we investigate the number of input
symbols that can be recovered by the same decoder, but when the number of coded
symbols available is less than the total number of input symbols. Of course
recovery of all inputs is not possible, and the fraction that can be recovered
will depend on the output degree distribution of the code.
In this paper we (a) outer bound the fraction of inputs that can be recovered
for any output degree distribution of the code, and (b) design degree
distributions which meet/perform close to this bound. Our results are of
interest for real-time systems using rateless codes, and for Raptor-type
two-stage designs.
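A hedged sketch of the iterative (peeling) decoder discussed above, measuring
how many inputs are recovered when fewer coded symbols than input symbols are
available. The degree distribution below is an illustrative choice, not one of
the paper's optimized designs.

```python
# Peeling decoder for an LT-style rateless code: repeatedly release
# inputs from degree-1 coded symbols and substitute them into the rest.
import random

def peel(coded):
    """coded: list of sets of input indices. Returns recovered indices."""
    coded = [set(s) for s in coded]
    recovered = set()
    progress = True
    while progress:
        progress = False
        for s in coded:
            if len(s) == 1:              # degree-1 symbol releases its input
                (i,) = s
                if i not in recovered:
                    recovered.add(i)
                    progress = True
        for s in coded:
            s -= recovered               # substitute already-known inputs
    return recovered

random.seed(0)
k, m = 100, 80                           # only 80 coded symbols for 100 inputs
degrees = [1, 2, 2, 3]                   # toy output degree distribution
coded = [set(random.sample(range(k), random.choice(degrees)))
         for _ in range(m)]
frac = len(peel(coded)) / k
print(f"recovered fraction: {frac:.2f}")
```

With fewer coded symbols than inputs, full recovery is impossible; the
recovered fraction depends on the chosen degree distribution, which is the
quantity the paper bounds and optimizes.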
|
cs/0612076
|
A New Approach for Capacity Analysis of Large Dimensional Multi-Antenna
Channels
|
cs.IT math.IT math.PR
|
This paper addresses the behaviour of the mutual information of correlated
MIMO Rayleigh channels when the numbers of transmit and receive antennas
converge to infinity at the same rate. Using a new and simple approach based
on the Poincar\'{e}-Nash inequality and on an integration-by-parts formula, it
is rigorously established that the mutual information converges to a Gaussian
random variable whose mean and variance are evaluated. These results confirm
previous evaluations based on the powerful but non-rigorous replica method. It
is believed that the tools that are used in this paper are simple, robust, and
of interest for the communications engineering community.
|
cs/0612077
|
Algebraic Signal Processing Theory
|
cs.IT math.IT
|
This paper presents an algebraic theory of linear signal processing. At the
core of algebraic signal processing is the concept of a linear signal model
defined as a triple (A, M, phi), where familiar concepts like the filter space
and the signal space are cast as an algebra A and a module M, respectively, and
phi generalizes the concept of the z-transform to bijective linear mappings
from a vector space of, e.g., signal samples, into the module M. A signal model
provides the structure for a particular linear signal processing application,
such as infinite and finite discrete time, or infinite or finite discrete
space, or the various forms of multidimensional linear signal processing. As
soon as a signal model is chosen, basic ingredients follow, including the
associated notions of filtering, spectrum, and Fourier transform. The shift
operator is a key concept in the algebraic theory: it is the generator of the
algebra of filters A. Once the shift is chosen, a well-defined methodology
leads to the associated signal model. Different shifts correspond to infinite
and finite time models with associated infinite and finite z-transforms, and to
infinite and finite space models with associated infinite and finite
C-transforms (that we introduce). In particular, we show that the 16 discrete
cosine and sine transforms are Fourier transforms for the finite space models.
Other definitions of the shift naturally lead to new signal models and to new
transforms as associated Fourier transforms in one and higher dimensions,
separable and non-separable. We explain in algebraic terms shift-invariance
(the algebra of filters A is commutative), the role of boundary conditions and
signal extensions, the connections between linear transforms and linear finite
Gauss-Markov fields, and several other concepts and connections.
|
cs/0612078
|
Effect of Finite Rate Feedback on CDMA Signature Optimization and MIMO
Beamforming Vector Selection
|
cs.IT math.IT
|
We analyze the effect of finite rate feedback on CDMA (code-division multiple
access) signature optimization and MIMO (multi-input-multi-output) beamforming
vector selection. In CDMA signature optimization, for a particular user, the
receiver selects a signature vector from a codebook to best avoid interference
from other users, and then feeds the corresponding index back to the specified
user. For MIMO beamforming vector selection, the receiver chooses a beamforming
vector from a given codebook to maximize throughput, and feeds back the
corresponding index to the transmitter. These two problems are dual: both can
be modeled as selecting a unit norm vector from a finite size codebook to
"match" a randomly generated Gaussian matrix. In signature optimization, the
least match is required while the maximum match is preferred for beamforming
selection.
Assuming that the feedback link is rate limited, our main result is an exact
asymptotic performance formula where the length of the signature/beamforming
vector, the dimensions of interference/channel matrix, and the feedback rate
approach infinity with constant ratios. The proof rests on a large deviation
principle over a random matrix ensemble. Further, we show that random codebooks
generated from the isotropic distribution are asymptotically optimal not only
on average, but also with probability one.
|
cs/0612080
|
On the Decrease Rate of the Non-Gaussianness of the Sum of Independent
Random Variables
|
cs.IT math.IT
|
Several proofs of the monotonicity of the non-Gaussianness (divergence with
respect to a Gaussian random variable with identical second order statistics)
of the sum of n independent and identically distributed (i.i.d.) random
variables were published. We give an upper bound on the decrease rate of the
non-Gaussianness which is proportional to the inverse of n, for large n. The
proof is based on the relationship between non-Gaussianness and minimum
mean-square error (MMSE) and causal minimum mean-square error (CMMSE) in the
time-continuous Gaussian channel.
|
cs/0612083
|
A Byzantine Fault Tolerant Distributed Commit Protocol
|
cs.DC cs.DB
|
In this paper, we present a Byzantine fault tolerant distributed commit
protocol for transactions running over untrusted networks. The traditional
two-phase commit protocol is enhanced by replicating the coordinator and by
running a Byzantine agreement algorithm among the coordinator replicas. Our
protocol can tolerate Byzantine faults at the coordinator replicas and a subset
of malicious faults at the participants. A decision certificate, which includes
a set of registration records and a set of votes from participants, is used to
facilitate the coordinator replicas to reach a Byzantine agreement on the
outcome of each transaction. The certificate also limits the ways in which a
faulty replica can cause non-atomic termination of transactions or
semantically incorrect transaction outcomes.
|
cs/0612084
|
Achievable Rates for the General Gaussian Multiple Access Wire-Tap
Channel with Collective Secrecy
|
cs.IT cs.CR math.IT
|
We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT).
In this scenario, multiple users communicate with an intended receiver in the
presence of an intelligent and informed eavesdropper who is as capable as the
intended receiver, but has different channel parameters. We aim to provide
perfect secrecy for the transmitters in this multi-access environment. Using
Gaussian codebooks, an achievable secrecy region is determined and the power
allocation that maximizes the achievable sum-rate is found. Numerical results
showing the new rate region are presented. It is shown that the multiple-access
nature of the channel may be utilized to allow users with zero single-user
secrecy capacity to be able to transmit in perfect secrecy. In addition, a new
collaborative scheme is shown that may increase the achievable sum-rate. In
this scheme, a user who would not transmit to maximize the sum rate can help
another user who (i) has positive secrecy capacity to increase its rate, or
(ii) has zero secrecy capacity to achieve a positive secrecy capacity.
|
cs/0612086
|
An asynchronous, decentralised commitment protocol for semantic
optimistic replication
|
cs.DB cs.NI
|
We study large-scale distributed cooperative systems that use optimistic
replication. We represent a system as a graph of actions (operations) connected
by edges that reify semantic constraints between actions. Constraint types
include conflict, execution order, dependence, and atomicity. The local state
is some schedule that conforms to the constraints; because of conflicts, client
state is only tentative. For consistency, site schedules should converge; we
designed a decentralised, asynchronous commitment protocol. Each client makes a
proposal, reflecting its tentative and/or preferred schedules. Our
protocol distributes the proposals, which it decomposes into
semantically-meaningful units called candidates, and runs an election between
comparable candidates. A candidate wins when it receives a majority or a
plurality. The protocol is fully asynchronous: each site executes its tentative
schedule independently, and determines locally when a candidate has won an
election. The committed schedule is as close as possible to the preferences
expressed by clients.
|
cs/0612087
|
Statistical mechanics of neocortical interactions: Portfolio of
Physiological Indicators
|
cs.CE cs.IT cs.NE math.IT q-bio.QM
|
There are several kinds of non-invasive imaging methods that are used to
collect data from the brain, e.g., EEG, MEG, PET, SPECT, fMRI, etc. It is
difficult to get resolution of information processing using any one of these
methods. Approaches to integrate data sources may help to get better resolution
of data and better correlations to behavioral phenomena ranging from attention
to diagnoses of disease. The approach taken here is to use algorithms developed
for the author's Trading in Risk Dimensions (TRD) code using modern methods of
copula portfolio risk management, with joint probability distributions derived
from the author's model of statistical mechanics of neocortical interactions
(SMNI). The author's Adaptive Simulated Annealing (ASA) code is for
optimizations of training sets, as well as for importance-sampling. Marginal
distributions will be evolved to determine their expected duration and
stability using algorithms developed by the author, i.e., PATHTREE and PATHINT
codes.
|
cs/0612095
|
Approximation of the Two-Part MDL Code
|
cs.LG cs.AI cs.IT math.IT
|
Approximation of the optimal two-part MDL code for given data, through
successive monotonically length-decreasing two-part MDL codes, has the
following properties: (i) computation of each step may take arbitrarily long;
(ii) we may not know when we reach the optimum, or whether we will reach the
optimum at all; (iii) the sequence of models generated may not monotonically
improve the goodness of fit; but (iv) the model associated with the optimum has
(almost) the best goodness of fit. To express the practically interesting
goodness of fit of individual models for individual data sets we have to rely
on Kolmogorov complexity.
|
cs/0612096
|
Using state space differential geometry for nonlinear blind source
separation
|
cs.LG cs.SD
|
Given a time series of multicomponent measurements of an evolving stimulus,
nonlinear blind source separation (BSS) seeks to find a "source" time series,
comprised of statistically independent combinations of the measured components.
In this paper, we seek a source time series with local velocity cross
correlations that vanish everywhere in stimulus state space. However, in an
earlier paper the local velocity correlation matrix was shown to constitute a
metric on state space. Therefore, nonlinear BSS maps onto a problem of
differential geometry: given the metric observed in the measurement coordinate
system, find another coordinate system in which the metric is diagonal
everywhere. We show how to determine if the observed data are separable in this
way, and, if they are, we show how to construct the required transformation to
the source coordinate system, which is essentially unique except for an unknown
rotation that can be found by applying the methods of linear BSS. Thus, the
proposed technique solves nonlinear BSS in many situations or, at least,
reduces it to linear BSS, without the use of probabilistic, parametric, or
iterative procedures. This paper also describes a generalization of this
methodology that performs nonlinear independent subspace separation. In every
case, the resulting decomposition of the observed data is an intrinsic property
of the stimulus' evolution in the sense that it does not depend on the way the
observer chooses to view it (e.g., the choice of the observing machine's
sensors). In other words, the decomposition is a property of the evolution of
the "real" stimulus that is "out there" broadcasting energy to the observer.
The technique is illustrated with analytic and numerical examples.
|
cs/0612097
|
Error Exponents for Variable-length Block Codes with Feedback and Cost
Constraints
|
cs.IT math.IT
|
Variable-length block-coding schemes are investigated for discrete memoryless
channels with ideal feedback under cost constraints. Upper and lower bounds are
found for the minimum achievable probability of decoding error $P_{e,\min}$ as
a function of constraints $R, \AV$, and $\bar \tau$ on the transmission rate,
average cost, and average block length respectively. For given $R$ and $\AV$,
the lower and upper bounds to the exponent $-(\ln P_{e,\min})/\bar \tau$ are
asymptotically equal as $\bar \tau \to \infty$. The resulting reliability
function, $\lim_{\bar \tau\to \infty} (-\ln P_{e,\min})/\bar \tau$, as a
function of $R$ and $\AV$, is concave in the pair $(R, \AV)$ and generalizes
the linear reliability function of Burnashev to include cost constraints. The
results are generalized to a class of discrete-time memoryless channels with
arbitrary alphabets, including additive Gaussian noise channels with amplitude
and power constraints.
|
cs/0612099
|
Network Information Flow in Small World Networks
|
cs.IT cs.DM math.IT
|
Recent results from statistical physics show that large classes of complex
networks, both man-made and of natural origin, are characterized by high
clustering properties yet strikingly short path lengths between pairs of nodes.
This class of networks is said to have a small-world topology. In the context
of communication networks, navigable small-world topologies, i.e. those which
admit efficient distributed routing algorithms, are deemed particularly
effective, for example in resource discovery tasks and peer-to-peer
applications. Breaking with the traditional approach to small-world topologies
that privileges graph parameters pertaining to connectivity, and intrigued by
the fundamental limits of communication in networks that exploit this type of
topology, we investigate the capacity of these networks from the perspective of
network information flow. Our contribution includes upper and lower bounds for
the capacity of standard and navigable small-world models, and the somewhat
surprising result that, with high probability, random rewiring does not alter
the capacity of a small-world network.
|
cs/0612101
|
Maximum Entropy MIMO Wireless Channel Models
|
cs.IT math.IT
|
In this contribution, models of wireless channels are derived from the
maximum entropy principle, for several cases where only limited information
about the propagation environment is available. First, analytical models are
derived for the cases where certain parameters (channel energy, average energy,
spatial correlation matrix) are known deterministically. Frequently, these
parameters are unknown (typically because the received energy or the spatial
correlation varies with the user position), but still known to represent
meaningful system characteristics. In these cases, analytical channel models
are derived by assigning entropy-maximizing distributions to these parameters,
and marginalizing them out. For the MIMO case with spatial correlation, we show
that the distribution of the covariance matrices is conveniently handled
through its eigenvalues. The entropy-maximizing distribution of the covariance
matrix is shown to be a Wishart distribution. Furthermore, the corresponding
probability density function of the channel matrix is shown to be described
analytically by a function of the channel Frobenius norm. This technique can
provide channel models incorporating the effect of shadow fading and spatial
correlation between antennas without the need to assume explicit values for
these parameters. The results are compared in terms of mutual information to
the classical i.i.d. Gaussian model.
|
cs/0612102
|
The Dichotomy of Conjunctive Queries on Probabilistic Structures
|
cs.DB
|
We show that for every conjunctive query, the complexity of evaluating it on
a probabilistic database is either \PTIME or #\P-complete, and we give an
algorithm for deciding whether a given conjunctive query is \PTIME or
#\P-complete. The dichotomy property is a fundamental result on query
evaluation on probabilistic databases and it gives a complete classification of
the complexity of conjunctive queries.
|
cs/0612103
|
The Boundary Between Privacy and Utility in Data Anonymization
|
cs.DB
|
We consider the privacy problem in data publishing: given a relation I
containing sensitive information 'anonymize' it to obtain a view V such that,
on one hand attackers cannot learn any sensitive information from V, and on the
other hand legitimate users can use V to compute useful statistics on I. These
are conflicting goals. We use a definition of privacy that is derived from
existing ones in the literature, which relates the a priori probability of a
given tuple t, Pr(t), with the a posteriori probability, Pr(t | V), and propose
a novel and quite practical definition for utility. Our main result is the
following. Denoting n the size of I and m the size of the domain from which I
was drawn (i.e. n < m) then: when the a priori probability is Pr(t) =
Omega(n/sqrt(m)) for some t, there exists no useful anonymization algorithm,
while when Pr(t) = O(n/m) for all tuples t, then we give a concrete
anonymization algorithm that is both private and useful. Our algorithm is quite
different from the k-anonymization algorithm studied intensively in the
literature, and is based on random deletions and insertions to I.
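The gap between the two thresholds can be made concrete with a
back-of-the-envelope calculation (ours; n and m below are arbitrary example
values): anonymization is impossible when some prior reaches Omega(n/sqrt(m)),
yet achievable when all priors are O(n/m).

```python
# Numeric illustration of the impossibility vs. achievability thresholds
# for data anonymization; n tuples drawn from a domain of size m.
n, m = 1_000, 1_000_000
impossible_above = n / m ** 0.5  # no useful anonymization if Pr(t) this large
achievable_below = n / m         # private and useful anonymization exists
print(impossible_above, achievable_below)
```

For these values the thresholds differ by a factor of sqrt(m) = 1000, leaving
a wide range of priors between the two regimes.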
|
cs/0612104
|
Sufficient Conditions for Coarse-Graining Evolutionary Dynamics
|
cs.NE cs.AI
|
It is commonly assumed that the ability to track the frequencies of a set of
schemata in the evolving population of an infinite population genetic algorithm
(IPGA) under different fitness functions will advance efforts to obtain a
theory of adaptation for the simple GA. Unfortunately, for IPGAs with long
genomes and non-trivial fitness functions there do not currently exist
theoretical results that allow such a study. We develop a simple framework for
analyzing the dynamics of an infinite population evolutionary algorithm (IPEA).
This framework derives its simplicity from its abstract nature. In particular
we make no commitment to the data-structure of the genomes, the kind of
variation performed, or the number of parents involved in a variation
operation. We use this framework to derive abstract conditions under which the
dynamics of an IPEA can be coarse-grained. We then use this result to derive
concrete conditions under which it becomes computationally feasible to closely
approximate the frequencies of a family of schemata of relatively low order
over multiple generations, even when the bitstrings in the evolving population
of the IPGA are long.
|
cs/0612109
|
Truncating the loop series expansion for Belief Propagation
|
cs.AI
|
Recently, M. Chertkov and V.Y. Chernyak derived an exact expression for the
partition sum (normalization constant) corresponding to a graphical model,
which is an expansion around the Belief Propagation solution. By adding
correction terms to the BP free energy, one for each "generalized loop" in the
factor graph, the exact partition sum is obtained. However, the enormous
number of generalized loops generally prohibits summation over all
correction terms. In this article we introduce Truncated Loop Series BP
(TLSBP), a particular way of truncating the loop series of M. Chertkov and V.Y.
Chernyak by considering generalized loops as compositions of simple loops. We
analyze the performance of TLSBP in different scenarios, including the Ising
model, regular random graphs and on Promedas, a large probabilistic medical
diagnostic system. We show that TLSBP often improves upon the accuracy of the
BP solution, at the expense of increased computation time. We also show that
the performance of TLSBP strongly depends on the degree of interaction between
the variables. For weak interactions, truncating the series leads to
significant improvements, whereas for strong interactions it can be
ineffective, even if a high number of terms is considered.
|
cs/0612110
|
Architecture for Modular Data Centers
|
cs.DB
|
Several factors are driving high-scale deployments of large data centers
built upon commodity components. These commodity clusters are far cheaper than
mainframe systems of the past but they bring serious heat and power density
issues. Also the high failure rate of the individual components drives
significant administrative costs. This proposal outlines an architecture for
data center design based upon 20'x8'x8' modules that substantially changes how
these systems are acquired, administered, and then later recycled.
|
cs/0612111
|
Fragmentation in Large Object Repositories
|
cs.DB
|
Fragmentation leads to unpredictable and degraded application performance.
While these problems have been studied in detail for desktop filesystem
workloads, this study examines newer systems such as scalable object stores and
multimedia repositories. Such systems use a get/put interface to store objects.
In principle, databases and filesystems can support such applications
efficiently, allowing system designers to focus on complexity, deployment cost
and manageability. Although theoretical work proves that certain storage
policies behave optimally for some workloads, these policies often behave
poorly in practice. Most storage benchmarks focus on short-term behavior or do
not measure fragmentation. We compare SQL Server to NTFS and find that
fragmentation dominates performance when object sizes exceed 256KB-1MB. NTFS
handles fragmentation better than SQL Server. Although the performance curves
will vary with other systems and workloads, we expect the same interactions
between fragmentation and free space to apply. It is well-known that
fragmentation is related to the percentage free space. We found that the ratio
of free space to object size also impacts performance. Surprisingly, in both
systems, storing objects of a single size causes fragmentation, and changing
the size of write requests affects fragmentation. These problems could be
addressed with simple changes to the filesystem and database interfaces. It is
our hope that an improved understanding of fragmentation will lead to
predictable storage systems that require less maintenance after deployment.
|
cs/0612112
|
Managing Query Compilation Memory Consumption to Improve DBMS Throughput
|
cs.DB
|
While there are known performance trade-offs between database page buffer
pool and query execution memory allocation policies, little has been written on
the impact of query compilation memory use on overall throughput of the
database management system (DBMS). We present a new aspect of the query
optimization problem and offer a solution implemented in Microsoft SQL Server
2005. The solution provides stable throughput for a range of workloads even
when memory requests outstrip the ability of the hardware to service those
requests.
|
cs/0612113
|
Isolation Support for Service-based Applications: A Position Paper
|
cs.DB
|
In this paper, we propose an approach to providing the benefits of isolation
in service-oriented applications where it is not feasible to hold traditional
locks for ACID transactions. Our technique, called "Promises", provides an
uniform view for clients which covers a wide range of implementation techniques
on the service side, all allowing the client to check a condition and then
later rely on that condition still holding.
|
cs/0612114
|
Demaq: A Foundation for Declarative XML Message Processing
|
cs.DB
|
This paper gives an overview of Demaq, an XML message processing system
operating on the foundation of transactional XML message queues. We focus on
the syntax and semantics of its fully declarative, rule-based application
language and demonstrate our message-based programming paradigm in the context
of a case study. Further, we discuss optimization opportunities for executing
Demaq programs.
|
cs/0612115
|
Consistent Streaming Through Time: A Vision for Event Stream Processing
|
cs.DB
|
Event processing will play an increasingly important role in constructing
enterprise applications that can immediately react to business critical events.
Various technologies have been proposed in recent years, such as event
processing, data streams and asynchronous messaging (e.g. pub/sub). We believe
these technologies share a common processing model and differ only in target
workload, including query language features and consistency requirements. We
argue that integrating these technologies is the next step in a natural
progression. In this paper, we present an overview and discuss the foundations
of CEDR, an event streaming system that embraces a temporal stream model to
unify and further enrich query language features, handle imperfections in event
delivery and define correctness guarantees. We describe specific contributions
made so far and outline next steps in developing the CEDR system.
|
cs/0612117
|
Statistical Mechanics of On-line Learning when a Moving Teacher Goes
around an Unlearnable True Teacher
|
cs.LG cond-mat.dis-nn
|
In the framework of on-line learning, a learning machine might move around a
teacher due to the differences in structures or output functions between the
teacher and the learning machine. In this paper we analyze the generalization
performance of a new student supervised by a moving machine. A model composed
of a fixed true teacher, a moving teacher, and a student is treated
theoretically using statistical mechanics, where the true teacher is a
nonmonotonic perceptron and the others are simple perceptrons. Calculating the
generalization errors numerically, we show that the generalization error of a
student can temporarily become smaller than that of the moving teacher, even
if the student only uses examples from the moving teacher. However, the
generalization error of the student eventually converges to that of the moving
teacher. This behavior is qualitatively different from that of a
linear model.
|
cs/0612118
|
Gossiping with Multiple Messages
|
cs.NI cs.IT math.IT
|
This paper investigates the dissemination of multiple pieces of information
in large networks where users contact each other in a random uncoordinated
manner, and users upload one piece per unit time. The underlying motivation is
the design and analysis of piece selection protocols for peer-to-peer networks
which disseminate files by dividing them into pieces. We first investigate
one-sided protocols, where piece selection is based on the states of either the
transmitter or the receiver. We show that any such protocol relying only on
pushes, or alternatively only on pulls, is inefficient in disseminating all
pieces to all users. We propose a hybrid one-sided piece selection protocol --
INTERLEAVE -- and show that by using both pushes and pulls it disseminates $k$
pieces from a single source to $n$ users in $10(k+\log n)$ time, while obeying
the constraint that each user can upload at most one piece in one unit of time,
with high probability for large $n$. An optimal, unrealistic centralized
protocol would take $k+\log_2 n$ time in this setting. Moreover, efficient
dissemination is also possible if the source implements forward erasure coding,
and users push the latest-released coded pieces (but do not pull). We also
investigate two-sided protocols where piece selection is based on the states of
both the transmitter and the receiver. We show that it is possible to
disseminate $n$ pieces to $n$ users in $n+O(\log n)$ time, starting from an
initial state where each user has a unique piece.
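A toy simulation of random uncoordinated gossip under the
one-upload-per-unit-time constraint described above can make the setting
concrete. This is a naive one-sided "push a random useful piece" rule, not
the INTERLEAVE protocol itself; n, k, and the contact model are illustrative
assumptions.

```python
# Each round, every user contacts one random peer and uploads a single
# piece the peer lacks; returns the number of rounds to full dissemination.
import random

def gossip_rounds(n, k, rng, max_rounds=10_000):
    pieces = [set() for _ in range(n)]
    pieces[0] = set(range(k))               # user 0 is the source
    for t in range(1, max_rounds + 1):
        for u in rng.sample(range(n), n):   # each user uploads once per round
            if not pieces[u]:
                continue
            v = rng.randrange(n - 1)
            v += v >= u                     # random peer distinct from u
            useful = list(pieces[u] - pieces[v])
            if useful:
                pieces[v].add(rng.choice(useful))
        if all(len(p) == k for p in pieces):
            return t
    return None

print(gossip_rounds(n=20, k=5, rng=random.Random(1)))
```

Since every piece must leave the source and the source uploads only one piece
per round, at least k rounds are always needed; the point of the paper is how
close a realistic piece-selection rule can get to the k + log2 n ideal.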
|
cs/0612122
|
Large N Analysis of Amplify-and-Forward MIMO Relay Channels with
Correlated Rayleigh Fading
|
cs.IT math.IT
|
In this correspondence the cumulants of the mutual information of the flat
Rayleigh fading amplify-and-forward MIMO relay channel without direct link
between source and destination are derived in the large array limit. The
analysis is based on the replica trick and covers both spatially independent
and correlated fading in the first and the second hop, while beamforming at all
terminals is restricted to deterministic weight matrices. Expressions for mean
and variance of the mutual information are obtained. Their parameters are
determined by a nonlinear equation system. All higher cumulants are shown to
vanish as the number of antennas n goes to infinity. In conclusion the
distribution of the mutual information I becomes Gaussian in the large n limit
and is completely characterized by the expressions obtained for mean and
variance of I. Comparisons with simulation results show that the asymptotic
results serve as excellent approximations for systems with only few antennas at
each node. The derivation of the results follows the technique formalized by
Moustakas et al. in [1]. Although the evaluations are more involved for the
MIMO relay channel compared to point-to-point MIMO channels, the structure of
the results is surprisingly simple again. In particular an elegant formula for
the mean of the mutual information is obtained, i.e., the ergodic capacity of
the two-hop amplify-and-forward MIMO relay channel without direct link.
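The replica-based derivation is not reproducible in a few lines, but the Monte Carlo side of the comparison is. The sketch below draws i.i.d. Rayleigh channel matrices for the two hops and samples the mutual information of the amplify-and-forward channel with a scalar relay gain; the power normalization and gain choice are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def af_relay_mi(n, snr_s, snr_r, trials=2000, seed=0):
    """Monte Carlo samples of the mutual information (bits) of a two-hop
    amplify-and-forward MIMO relay channel with i.i.d. Rayleigh fading,
    a scalar relay gain, and no direct source-destination link."""
    rng = np.random.default_rng(seed)
    samples = np.empty(trials)
    for t in range(trials):
        H1 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        H2 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        g2 = snr_r / (snr_s + 1.0)                 # power-normalising relay gain (squared)
        C = np.eye(n) + g2 * (H2 @ H2.conj().T)    # effective noise covariance at destination
        Heff = np.sqrt(g2 * snr_s / n) * (H2 @ H1)  # equal power split across antennas
        S = np.eye(n) + Heff @ Heff.conj().T @ np.linalg.inv(C)
        samples[t] = np.log2(np.abs(np.linalg.det(S)))
    return samples
```

Comparing the sample mean and variance against the analytical expressions, and checking that higher sample cumulants shrink as n grows, mirrors the paper's Gaussianity claim.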
|
cs/0612123
|
Electronic Laboratory Notebook Assisting Reflectance Spectrometry in
Legal Medicine
|
cs.DB cs.DL cs.IR
|
Reflectance spectrometry is a fast and reliable method for the
characterisation of human skin if the spectra are analysed with respect to a
physical model describing the optical properties of human skin. For a field
study performed at the Institute of Legal Medicine and the Freiburg Materials
Research Center of the University of Freiburg an electronic laboratory notebook
has been developed, which assists in the recording, management, and analysis of
reflectance spectra. The core of the electronic laboratory notebook is a MySQL
database. It is filled with primary data via a web interface programmed in
Java, which also enables the user to browse the database and access the results
of data analysis. These are carried out by Matlab, Tcl and
Python scripts, which retrieve the primary data from the electronic
laboratory notebook, perform the analysis, and store the results in the
database for further usage.
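The store-analyse-store-back cycle of such a notebook can be sketched in a few lines. The snippet below uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and column names are invented for illustration, not taken from the actual system.

```python
import sqlite3
import statistics

def run_notebook_demo():
    """Minimal stand-in for the notebook's cycle: store primary spectra,
    run an analysis script, and write the result back to the database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE spectra (id INTEGER PRIMARY KEY, "
               "wavelength_nm REAL, reflectance REAL)")
    db.execute("CREATE TABLE results (spectrum_group TEXT, mean_reflectance REAL)")
    # primary data, as entered via the (here simulated) web interface
    rows = [(400 + 10 * i, 0.30 + 0.01 * i) for i in range(20)]
    db.executemany("INSERT INTO spectra (wavelength_nm, reflectance) VALUES (?, ?)", rows)
    # an analysis script retrieves the primary data ...
    refl = [r for (r,) in db.execute("SELECT reflectance FROM spectra")]
    mean_r = statistics.fmean(refl)
    # ... and stores its result for later browsing
    db.execute("INSERT INTO results VALUES (?, ?)", ("demo", mean_r))
    (stored,) = db.execute("SELECT mean_reflectance FROM results").fetchone()
    return stored
```

In the real system the analysis step is delegated to Matlab, Tcl, or Python scripts, but the database round trip has the same shape.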
|
cs/0612124
|
Highly robust error correction by convex programming
|
cs.IT math.IT math.PR math.ST stat.TH
|
This paper discusses a stylized communications problem where one wishes to
transmit a real-valued signal x in R^n (a block of n pieces of information) to
a remote receiver. We ask whether it is possible to transmit this information
reliably when a fraction of the transmitted codeword is corrupted by arbitrary
gross errors, and when in addition, all the entries of the codeword are
contaminated by smaller errors (e.g. quantization errors).
We show that if one encodes the information as Ax where A is a suitable m by
n coding matrix (m >= n), there are two decoding schemes that allow the
recovery of the block of n pieces of information x with nearly the same
accuracy as if no gross errors occur upon transmission (or equivalently as if
one has an oracle supplying perfect information about the sites and amplitudes
of the gross errors). Moreover, both decoding strategies are very concrete and
only involve solving simple convex optimization programs, either a linear
program or a second-order cone program. We complement our study with numerical
simulations showing that the encoder/decoder pair performs remarkably well.
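The linear-programming decoder mentioned above is the standard cast of l1-minimization: recover x by minimizing the l1 norm of the residual y - Az. The sketch below uses scipy's linprog with auxiliary variables t bounding the residual componentwise; the coding matrix A and problem sizes are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover x from y = A @ x + gross errors by solving
    min_z ||y - A z||_1, cast as a linear program in (z, t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # minimise sum of t
    I = np.eye(m)
    # |y - A z| <= t componentwise: A z - t <= y  and  -A z - t <= -y
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```

With a random Gaussian A, m sufficiently larger than n, and only a few grossly corrupted entries, the LP typically recovers x exactly, matching the oracle behaviour described in the abstract.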
|
cs/0612126
|
The virtual reality framework for engineering objects
|
cs.CE cs.MS
|
A framework for the virtual reality of engineering objects has been developed.
The framework can simulate a variety of equipment in virtual reality, and it
supports 6D dynamics, ordinary differential equations, finite formulas, and
vector and matrix operations. It also supports the embedding of external
software.
|
cs/0612127
|
bdbms -- A Database Management System for Biological Data
|
cs.DB
|
Biologists are increasingly using databases for storing and managing their
data. Biological databases typically consist of a mixture of raw data,
metadata, sequences, annotations, and related data obtained from various
sources. Current database technology lacks several functionalities that are
needed by biological databases. In this paper, we introduce bdbms, an
extensible prototype database management system for supporting biological data.
bdbms extends the functionalities of current DBMSs to include: (1) Annotation
and provenance management including storage, indexing, manipulation, and
querying of annotation and provenance as first class objects in bdbms, (2)
Local dependency tracking to track the dependencies and derivations among data
items, (3) Update authorization to support data curation via content-based
authorization, in contrast to identity-based authorization, and (4) New access
methods and their supporting operators that support pattern matching on various
types of compressed biological data types. This paper presents the design of
bdbms along with the techniques proposed to support these functionalities
including an extension to SQL. We also outline some open issues in building
bdbms.
|
cs/0612128
|
SASE: Complex Event Processing over Streams
|
cs.DB
|
RFID technology is gaining adoption on an increasing scale for tracking and
monitoring purposes. Wide deployments of RFID devices will soon generate an
unprecedented volume of data. Emerging applications require the RFID data to be
filtered and correlated for complex pattern detection and transformed to events
that provide meaningful, actionable information to end applications. In this
work, we design and develop SASE, a complex event processing system that
performs such data-information transformation over real-time streams. We design
a complex event language for specifying application logic for such
transformation, devise new query processing techniques to efficiently
implement the language, and develop a comprehensive system that collects,
cleans, and processes RFID data for delivery of relevant, timely information
as well as storing necessary data for future querying. We demonstrate an
initial prototype of SASE through a real-world retail management scenario.
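A flavour of the filter-and-correlate queries such a system evaluates can be given with a tiny hand-rolled matcher. The retail pattern below (a tag read at a shelf and later at the exit with no intervening checkout, within a time window) is a hypothetical example of a sequence-with-negation query, not SASE's actual language or engine.

```python
from collections import namedtuple

Event = namedtuple("Event", "type tag ts")

def detect_missing_checkout(stream, window):
    """Flag tags seen at a shelf and later at the exit with no CHECKOUT
    in between, within `window` time units -- a SEQ-with-negation
    pattern in the spirit of a complex-event query."""
    shelf_seen = {}  # tag -> timestamp of last SHELF reading
    alerts = []
    for ev in stream:
        if ev.type == "SHELF":
            shelf_seen[ev.tag] = ev.ts
        elif ev.type == "CHECKOUT":
            shelf_seen.pop(ev.tag, None)  # negated event cancels the match
        elif ev.type == "EXIT":
            t0 = shelf_seen.pop(ev.tag, None)
            if t0 is not None and ev.ts - t0 <= window:
                alerts.append(ev.tag)
    return alerts
```

A production engine compiles such patterns into automata and shares state across many simultaneous partial matches, but the per-event logic has this shape.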
|
cs/0612129
|
Impliance: A Next Generation Information Management Appliance
|
cs.DB
|
remarkably successful in building a large market and adapting to the changes of the
last three decades, its impact on the broader market of information management
is surprisingly limited. If we were to design an information management system
from scratch, based upon today's requirements and hardware capabilities, would
it look anything like today's database systems?" In this paper, we introduce
Impliance, a next-generation information management system consisting of
hardware and software components integrated to form an easy-to-administer
appliance that can store, retrieve, and analyze all types of structured,
semi-structured, and unstructured information. We first summarize the trends
that will shape information management for the foreseeable future. Those trends
imply three major requirements for Impliance: (1) to be able to store, manage,
and uniformly query all data, not just structured records; (2) to be able to
scale out as the volume of this data grows; and (3) to be simple and robust in
operation. We then describe four key ideas that are uniquely combined in
Impliance to address these requirements, namely the ideas of: (a) integrating
software and off-the-shelf hardware into a generic information appliance; (b)
automatically discovering, organizing, and managing all data - unstructured as
well as structured - in a uniform way; (c) achieving scale-out by exploiting
simple, massive parallel processing, and (d) virtualizing compute and storage
resources to unify, simplify, and streamline the management of Impliance.
Impliance is an ambitious, long-term effort to define simpler, more robust, and
more scalable information systems for tomorrow's enterprises.
|
cs/0612132
|
A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus,
and Google Scholar
|
cs.DL cs.IR
|
Academic institutions, federal agencies, publishers, editors, authors, and
librarians increasingly rely on citation analysis for making hiring, promotion,
tenure, funding, and/or reviewer and journal evaluation and selection
decisions. The Institute for Scientific Information's (ISI) citation databases
have been used for decades as a starting point and often as the only tools for
locating citations and/or conducting citation analyses. ISI databases (or Web
of Science), however, may no longer be adequate as the only or even the main
sources of citations because new databases and tools that allow citation
searching are now available. Whether these new databases and tools complement
or represent alternatives to Web of Science (WoS) is important to explore.
Using a group of 15 library and information science faculty members as a case
study, this paper examines the effects of using Scopus and Google Scholar (GS)
on the citation counts and rankings of scholars as measured by WoS. The paper
discusses the strengths and weaknesses of WoS, Scopus, and GS, their overlap
and uniqueness, quality and language of the citations, and the implications of
the findings for citation analysis. The project involved citation searching for
approximately 1,100 scholarly works published by the study group and over 200
works by a test group (an additional 10 faculty members). Overall, more than
10,000 citing and purportedly citing documents were examined. WoS data took
about 100 hours of collecting and processing time, Scopus consumed 200 hours,
and GS a grueling 3,000 hours.
|