| id | title | categories | abstract |
|---|---|---|---|
0909.0777
|
Optimally Tuned Iterative Reconstruction Algorithms for Compressed
Sensing
|
cs.NA cs.IT cs.MS math.IT
|
We conducted an extensive computational experiment, lasting multiple
CPU-years, to optimally select parameters for two important classes of
algorithms for finding sparse solutions of underdetermined systems of linear
equations. We make the optimally tuned implementations available at {\tt
sparselab.stanford.edu}; they run `out of the box' with no user tuning: it is
not necessary to select thresholds or know the likely degree of sparsity. Our
class of algorithms includes iterative hard and soft thresholding with or
without relaxation, as well as CoSaMP, subspace pursuit and some natural
extensions. As a result, our optimally tuned algorithms dominate such
proposals. Our notion of optimality is defined in terms of phase transitions,
i.e. we maximize the number of nonzeros at which the algorithm can successfully
operate. We show that the phase transition is a well-defined quantity with our
suite of random underdetermined linear systems. Our tuning gives the highest
transition possible within each class of algorithms.
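For orientation, here is a minimal textbook sketch of the iterative hard thresholding scheme in this class (plain IHT with a fixed step size and a known sparsity level; the paper's tuned variants add relaxation and optimally chosen thresholds, which are not reproduced here):

```python
import numpy as np

def iht(A, y, s, iters=300, mu=None):
    """Plain iterative hard thresholding: take a gradient step on
    ||y - Ax||^2, then keep only the s largest-magnitude entries.
    A generic sketch, not the optimally tuned variant from the paper."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + mu * A.T @ (y - A @ x)       # gradient step
        keep = np.argsort(np.abs(g))[-s:]    # indices of s largest entries
        x = np.zeros_like(g)
        x[keep] = g[keep]                    # hard threshold
    return x

# Toy usage: recover a 5-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200)
x0[rng.choice(200, size=5, replace=False)] = rng.standard_normal(5)
print(np.linalg.norm(iht(A, A @ x0, s=5) - x0))
```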
|
0909.0801
|
A Monte Carlo AIXI Approximation
|
cs.AI cs.IT cs.LG math.IT
|
This paper introduces a principled approach for the design of a scalable
general reinforcement learning agent. Our approach is based on a direct
approximation of AIXI, a Bayesian optimality notion for general reinforcement
learning agents. Previously, it has been unclear whether the theory of AIXI
could motivate the design of practical algorithms. We answer this hitherto open
question in the affirmative, by providing the first computationally feasible
approximation to the AIXI agent. To develop our approximation, we introduce a
new Monte-Carlo Tree Search algorithm along with an agent-specific extension to
the Context Tree Weighting algorithm. Empirically, we present a set of
encouraging results on a variety of stochastic and partially observable
domains. We conclude by proposing a number of directions for future research.
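Since the abstract names Monte-Carlo Tree Search as a core component, here is a minimal generic UCT sketch on a hypothetical toy chain environment; the paper's agent uses a history-based MCTS variant coupled with a Context Tree Weighting model, which this sketch does not attempt to reproduce:

```python
import math, random

ACTIONS = (0, 1)       # toy environment: 0 = stay put, 1 = move right
GOAL, HORIZON = 5, 8   # reward 1 on reaching state GOAL

def step(state, action):
    nxt = min(state + action, GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

class Node:
    def __init__(self):
        self.visits, self.total, self.children = 0, 0.0, {}

def simulate(node, state, depth, c=1.4):
    """One UCT simulation; returns the sampled return from this node on."""
    if depth == 0:
        return 0.0
    untried = [a for a in ACTIONS if a not in node.children]
    if untried:
        a = random.choice(untried)            # expand a new action
        node.children[a] = Node()
    else:                                     # UCB1 selection
        a = max(ACTIONS, key=lambda b:
                node.children[b].total / node.children[b].visits
                + c * math.sqrt(math.log(node.visits) / node.children[b].visits))
    nxt, r = step(state, a)
    ret = r + simulate(node.children[a], nxt, depth - 1, c)
    node.visits += 1
    node.children[a].visits += 1
    node.children[a].total += ret
    return ret

root = Node()
for _ in range(3000):
    simulate(root, 0, HORIZON)
print(max(root.children, key=lambda a: root.children[a].visits))  # expect 1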
|
0909.0809
|
An Infinite Family of Recursive Formulas Generating Power Moments of
Kloosterman Sums with Trace One Arguments: O(2n+1,2^r) Case
|
math.NT cs.IT math.IT
|
In this paper, we construct an infinite family of binary linear codes
associated with double cosets with respect to a certain maximal parabolic
subgroup of the orthogonal group O(2n+1,q). Here q is a power of two. Then we
obtain an infinite family of recursive formulas generating the odd power
moments of Kloosterman sums with trace one arguments in terms of the
frequencies of weights in the codes associated with those double cosets in
O(2n+1,q) and in the codes associated with similar double cosets in the
symplectic group Sp(2n,q). This is done via the Pless power moment identity and by
utilizing the explicit expressions of exponential sums over those double cosets
related to the evaluations of "Gauss sums" for the orthogonal group O(2n+1,q).
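For orientation, the standard conventions behind these statements (the usual definitions, not notation extracted from the paper itself): with $\psi$ a nontrivial additive character of $\mathbb{F}_q$, the Kloosterman sum and its $h$-th power moment are

```latex
K(\psi; a) = \sum_{x \in \mathbb{F}_q^{\times}} \psi\left(x + a x^{-1}\right),
\qquad
M_h = \sum_{a \in \mathbb{F}_q^{\times}} K(\psi; a)^{h},
```

and the "trace one" restriction sums only over those $a$ with $\mathrm{tr}_{\mathbb{F}_q/\mathbb{F}_2}(a) = 1$.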
|
0909.0811
|
Ternary Codes Associated with O^-(2n,q) and Power Moments of Kloosterman
Sums with Square Arguments
|
math.NT cs.IT math.IT
|
In this paper, we construct three ternary linear codes associated with the
orthogonal group O^-(2,q) and the special orthogonal groups SO^-(2,q) and
SO^-(4,q). Here q is a power of three. Then we obtain recursive formulas for
the power moments of Kloosterman sums with square arguments and for the even
power moments of those in terms of the frequencies of weights in the codes.
This is done via the Pless power moment identity and by utilizing the explicit
expressions of "Gauss sums" for the orthogonal and special orthogonal groups
O^-(2n,q) and SO^-(2n,q).
|
0909.0844
|
High-Dimensional Non-Linear Variable Selection through Hierarchical
Kernel Learning
|
cs.LG math.ST stat.TH
|
We consider the problem of high-dimensional non-linear variable selection for
supervised learning. Our approach is based on performing linear selection among
exponentially many appropriately defined positive definite kernels that
characterize non-linear interactions between the original variables. To select
efficiently from these many kernels, we use the natural hierarchical structure
of the problem to extend the multiple kernel learning framework to kernels that
can be embedded in a directed acyclic graph; we show that it is then possible
to perform kernel selection through a graph-adapted sparsity-inducing norm, in
polynomial time in the number of selected kernels. Moreover, we study the
consistency of variable selection in high-dimensional settings, showing that
under certain assumptions, our regularization framework allows a number of
irrelevant variables which is exponential in the number of observations. Our
simulations on synthetic datasets and datasets from the UCI repository show
state-of-the-art predictive performance for non-linear regression problems.
|
0909.0901
|
Assessing the Impact of Informedness on a Consultant's Profit
|
cs.AI
|
We study the notion of informedness in a client-consultant setting. Using a
software simulator, we examine the extent to which it pays off for consultants
to provide their clients with advice that is well-informed, or with advice that
is merely meant to appear to be well-informed. The latter strategy is
beneficial in that it requires fewer resources to keep up-to-date, but carries the
risk of a decreased reputation if the clients discover the low level of
informedness of the consultant. Our experimental results indicate that under
different circumstances, different strategies yield the optimal results (net
profit) for the consultants.
|
0909.0996
|
The Kalman Like Particle Filter : Optimal Estimation With Quantized
Innovations/Measurements
|
cs.IT math.IT math.OC
|
We study the problem of optimal estimation and control of linear systems
using quantized measurements, with a focus on applications over sensor
networks. We show that the state conditioned on a causal quantization of the
measurements can be expressed as the sum of a Gaussian random vector and a
certain truncated Gaussian vector. This structure bears close resemblance to
the full information Kalman filter and so allows us to effectively combine the
Kalman structure with a particle filter to recursively compute the state
estimate. We call the resulting filter the Kalman like particle filter (KLPF)
and observe that it delivers close to optimal performance using far fewer
particles than a particle filter directly applied to the original
problem. We show that the conditional state density follows a so-called
generalized closed skew-normal (GCSN) distribution. We further show that for
such systems the classical separation property between control and estimation
holds and that the certainty equivalent control law is LQG optimal.
|
0909.1011
|
Bits About the Channel: Multi-round Protocols for Two-way Fading
Channels
|
cs.IT math.IT
|
Most communication systems use some form of feedback, often related to
channel state information. In this paper, we study the diversity-multiplexing
tradeoff for both FDD and TDD systems, when both receiver and transmitter
knowledge about the channel is noisy and potentially mismatched. For FDD
systems, we first extend the achievable tradeoff region for 1.5 rounds of
message passing to get higher diversity compared to the best known scheme, in
the regime of higher multiplexing gains. We then break the mold of all current
channel state based protocols by using multiple rounds of conferencing to
extract more bits about the actual channel. This iterative refinement of the
channel knowledge increases the diversity order with every round of communication. The
protocols are on-demand in nature, using high powers for training and feedback
only when the channel is in poor states. The key result is that the diversity
multiplexing tradeoff with perfect training and K levels of perfect feedback
can be achieved, even when there are errors in training the receiver and errors
in the feedback link, with a multi-round protocol which has K rounds of
training and K-1 rounds of binary feedback. The above result can be viewed as a
generalization of Zheng and Tse, and Aggarwal and Sabharwal, where the result
was shown to hold for K=1 and K=2 respectively. For TDD systems, we also
develop new achievable strategies with multiple rounds of communication between
the transmitter and the receiver, which use the reciprocity of the forward and
the feedback channel. The multi-round TDD protocol achieves a
diversity-multiplexing tradeoff which uniformly dominates its FDD counterparts,
where no channel reciprocity is available.
|
0909.1021
|
A multiagent urban traffic simulation Part I: dealing with the ordinary
|
cs.AI
|
We describe in this article a multiagent urban traffic simulation, as we
believe individual-based modeling is necessary to encompass the complex
influence the actions of an individual vehicle can have on the overall flow of
vehicles. We first describe how we build a graph description of the network
from purely geometric data, ESRI shapefiles. We then explain how we include
traffic-related data to this graph. We then present the model of the
vehicle agents: origin and destination, driving behavior, multiple lanes,
crossroads, and interactions with the other vehicles in day-to-day, "ordinary"
traffic. We conclude with the presentation of the resulting simulation of this
model on the Rouen agglomeration.
|
0909.1062
|
New Approximation Algorithms for Minimum Enclosing Convex Shapes
|
cs.CG cs.DS cs.LG
|
Given $n$ points in a $d$ dimensional Euclidean space, the Minimum Enclosing
Ball (MEB) problem is to find the ball with the smallest radius which contains
all $n$ points. We give an $O(nd\mathcal{Q}/\sqrt{\epsilon})$ approximation algorithm
for producing an enclosing ball whose radius is at most $\epsilon$ away from
the optimum (where $\mathcal{Q}$ is an upper bound on the norm of the points). This
improves existing results using \emph{coresets}, which yield a $O(nd/\epsilon)$
greedy algorithm. Finding the Minimum Enclosing Convex Polytope (MECP) is a
related problem wherein a convex polytope of a fixed shape is given and the aim
is to find the smallest magnification of the polytope which encloses the given
points. For this problem we present an $O(mnd\mathcal{Q}/\epsilon)$ approximation
algorithm, where $m$ is the number of faces of the polytope. Our algorithms
borrow heavily from convex duality and recently developed techniques in
non-smooth optimization, and are in contrast with existing methods which rely
on geometric arguments. In particular, we specialize the excessive gap
framework of \citet{Nesterov05a} to obtain our results.
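For contrast with the duality-based methods above, here is a minimal sketch of the coreset-style greedy baseline the paper compares against (the classical Badoiu-Clarkson iteration, under the usual guarantee of a $(1+\epsilon)$-approximate center after roughly $1/\epsilon^2$ farthest-point steps; this is not the paper's excessive-gap algorithm):

```python
import numpy as np

def meb_greedy(P, eps=0.05):
    """Greedy minimum enclosing ball approximation (Badoiu-Clarkson style):
    repeatedly nudge the center toward the current farthest point."""
    c = P.mean(axis=0)
    for k in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
        c = c + (far - c) / (k + 1)          # shrinking step size
    return c, np.linalg.norm(P - c, axis=1).max()

pts = np.random.default_rng(1).standard_normal((500, 3))
center, radius = meb_greedy(pts)
print(center, radius)
```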
|
0909.1115
|
Capacity Region of Layered Erasure One-sided Interference Channels
without CSIT
|
cs.IT math.IT
|
This paper studies a layered erasure interference channel model, which is a
simplification of the Gaussian interference channel with fading using the
deterministic model approach. In particular, the capacity region of the layered
erasure one-sided interference channel is completely determined, assuming that
the channel state information (CSI) is known to the receivers, but there is no
CSI at transmitters (CSIT). The result holds for arbitrary fading statistics.
Previous results of Aggarwal, Sankar, Calderbank and Poor on the capacity
region or sum capacity under several interference configurations are shown to
be special cases of the capacity region shown in this paper.
|
0909.1127
|
Anonymization with Worst-Case Distribution-Based Background Knowledge
|
cs.DB cs.CR
|
Background knowledge is an important factor in privacy preserving data
publishing. Distribution-based background knowledge is one of the well-studied
types of background knowledge. However, to the best of our knowledge, there is no
existing work considering the distribution-based background knowledge in the
worst case scenario, by which we mean that the adversary has accurate knowledge
about the distribution of sensitive values according to some tuple attributes.
Considering this worst case scenario is essential because we cannot overlook
any breaching possibility. In this paper, we propose an algorithm to anonymize
datasets in order to protect individual privacy by considering this background
knowledge. We prove that the anonymized datasets generated by our proposed
algorithm protect individual privacy. Our empirical studies show that our
method preserves high utility for the published data at the same time.
|
0909.1147
|
Empowering OLAC Extension using Anusaaraka and Effective text processing
using Double Byte coding
|
cs.CL
|
The paper reviews the hurdles encountered while trying to implement the OLAC extension
for Dravidian / Indian languages. The paper further explores the possibilities
which could minimise or solve these problems. In this context, the Chinese
system of text processing and the anusaaraka system are scrutinised.
|
0909.1151
|
n-Opposition theory to structure debates
|
cs.AI
|
The first international congress on the "square of oppositions" was held in 2007. A
first attempt to structure debate using n-opposition theory was presented along
with the results of a first experiment on the web. Our proposal for this paper
is to define relations between arguments through a structure of opposition
(square of oppositions is one structure of opposition). We will be trying to
answer the following questions: How to organize debates on the web 2.0? How to
structure them in a logical way? What is the role of n-opposition theory in
this context? We present in this paper the results of three experiments
(Betapolitique 2007, ECAP 2008, Intermed 2008).
|
0909.1153
|
Recursive formulas generating power moments of multi-dimensional
Kloosterman sums and $m$-multiple power moments of Kloosterman sums
|
math.NT cs.IT math.IT
|
In this paper, we construct two binary linear codes associated with
multi-dimensional and $m$-multiple power Kloosterman sums (for any fixed $m$)
over the finite field $\mathbb{F}_{q}$. Here $q$ is a power of two. The former
codes are dual to a subcode of the binary hyper-Kloosterman code. Then we
obtain two recursive formulas for the power moments of multi-dimensional
Kloosterman sums and for the $m$-multiple power moments of Kloosterman sums in
terms of the frequencies of weights in the respective codes. This is done via
the Pless power moment identity and yields, in the case of power moments of
multi-dimensional Kloosterman sums, much simpler recursive formulas than those
associated with finite special linear groups obtained previously.
|
0909.1156
|
Ternary Codes Associated with $O(3,3^r)$ and Power Moments of
Kloosterman Sums with Trace Nonzero Square Arguments
|
math.NT cs.IT math.IT
|
In this paper, we construct two ternary linear codes $C(SO(3,q))$ and
$C(O(3,q))$, respectively associated with the orthogonal groups $SO(3,q)$ and
$O(3,q)$. Here $q$ is a power of three. Then we obtain two recursive formulas
for the power moments of Kloosterman sums with "trace nonzero square
arguments" in terms of the frequencies of weights in the codes. This is done
via the Pless power moment identity and by utilizing the explicit expressions of
Gauss sums for the orthogonal groups.
|
0909.1175
|
Infinite Families of Recursive Formulas Generating Power Moments of
Ternary Kloosterman Sums with Trace Nonzero Square Arguments: $O(2n+1,2^{r})$
Case
|
math.NT cs.IT math.IT
|
In this paper, we construct four infinite families of ternary linear codes
associated with double cosets in $O(2n+1,q)$ with respect to a certain maximal
parabolic subgroup of the special orthogonal group $SO(2n+1,q)$. Here $q$ is a
power of three. Then we obtain two infinite families of recursive formulas, the
one generating the power moments of Kloosterman sums with "trace nonzero
square arguments" and the other generating the even power moments of those.
Both of these families are expressed in terms of the frequencies of weights in
the codes associated with those double cosets in $O(2n+1,q)$ and in the codes
associated with similar double cosets in the symplectic group $Sp(2n,q)$. This
is done via the Pless power moment identity and by utilizing the explicit
expressions of exponential sums over those double cosets related to the
evaluations of $"$Gauss sums" for the orthogonal group $O(2n+1,q)$.
|
0909.1178
|
Infinite Families of Recursive Formulas Generating Power Moments of
Ternary Kloosterman Sums with Square Arguments Associated with
$O^{-}_{}(2n,q)$
|
math.NT cs.IT math.IT
|
In this paper, we construct eight infinite families of ternary linear codes
associated with double cosets with respect to a certain maximal parabolic
subgroup of the special orthogonal group $SO^{-}(2n,q)$. Here $q$ is a power
of three. Then we obtain four infinite families of recursive formulas for power
moments of Kloosterman sums with square arguments and four infinite families of
recursive formulas for even power moments of those in terms of the frequencies
of weights in the codes. This is done via the Pless power moment identity and by
utilizing the explicit expressions of exponential sums over those double cosets
related to the evaluations of $"$Gauss sums" for the orthogonal groups
$O^{-}(2n,q)$.
|
0909.1186
|
Scheme of thinking quantum systems
|
quant-ph cond-mat.quant-gas cs.AI
|
A general approach describing quantum decision procedures is developed. The
approach can be applied to quantum information processing, quantum computing,
creation of artificial quantum intelligence, as well as to analyzing decision
processes of human decision makers. Our basic point is to consider an active
quantum system possessing its own strategic state. Processing information by
such a system is analogous to the cognitive processes associated with decision
making by humans. The algebra of probability operators, associated with the
possible options available to the decision maker, plays the role of the algebra
of observables in quantum theory of measurements. A scheme is advanced for a
practical realization of decision procedures by thinking quantum systems. Such
thinking quantum systems can be realized by using spin lattices, systems of
magnetic molecules, cold atoms trapped in optical lattices, ensembles of
quantum dots, or multilevel atomic systems interacting with an electromagnetic
field.
|
0909.1209
|
SNR Estimation in Maximum Likelihood Decoded Spatial Multiplexing
|
cs.IT math.IT
|
Link adaptation is a crucial part of many modern communications systems,
allowing the system to adapt the transmission and reception strategies to
changes in channel conditions. One of the fundamental components of the link
adaptation mechanism is signal-to-noise ratio (SNR) estimation, measuring the
instantaneous (mostly post-processing) SNR at the receiver, that is, the SNR at
the decoder input, an important metric for the prediction of decoder
performance. In linearly decoded MIMO, which is the common MIMO decoding
strategy, the post-processing SNR is well defined. However, this is not the
case in optimal maximum likelihood (ML) decoding applied to spatial
multiplexing (SM). This gap is notable since ML decoded SM is gaining
ever-growing interest in recent research and practice due to the rapid increase
in computation power and available near-optimal low-complexity schemes. In this
paper we close the gap and provide SNR estimation schemes for ML decoded SM,
which are based on various approximations of the "per stream" error
probability. The proposed methods are applicable to both horizontal and
vertical decoding. Moreover, we propose a very low complexity implementation
for the SNR estimation mechanism employing the ML decoder itself with
negligible overhead.
|
0909.1308
|
Efficient Learning of Sparse Conditional Random Fields for Supervised
Sequence Labelling
|
cs.LG cs.CL
|
Conditional Random Fields (CRFs) constitute a popular and efficient approach
for supervised sequence labelling. CRFs can cope with large description spaces
and can integrate some form of structural dependency between labels. In this
contribution, we address the issue of efficient feature selection for CRFs
based on imposing sparsity through an L1 penalty. We first show how sparsity of
the parameter set can be exploited to significantly speed up training and
labelling. We then introduce coordinate descent parameter update schemes for
CRFs with L1 regularization. We finally provide some empirical comparisons of
the proposed approach with state-of-the-art CRF training strategies. In
particular, it is shown that the proposed approach is able to take advantage of
the sparsity to speed up processing and hence potentially handle larger
dimensional models.
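A single-coordinate sketch of the kind of update involved (generic proximal coordinate descent for an L1-penalized smooth loss; the paper's exact CRF gradient computation and update scheduling are not reproduced here):

```python
import numpy as np

def l1_coordinate_update(w, j, grad_j, curv_j, lam):
    """Gradient step on coordinate j of the smooth loss, followed by soft
    thresholding. grad_j is the partial derivative at w, curv_j an upper
    bound on the coordinate-wise curvature, lam the L1 weight."""
    z = w[j] - grad_j / curv_j
    w[j] = np.sign(z) * max(abs(z) - lam / curv_j, 0.0)  # soft threshold
    return w

w = np.array([0.4, -0.2, 0.0])
print(l1_coordinate_update(w, j=0, grad_j=1.5, curv_j=10.0, lam=1.0))
```

Exact zeros produced by the soft-thresholding step are what make the parameter set sparse and enable the speed-ups described above.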
|
0909.1310
|
Sparse image representation by discrete cosine/spline based dictionaries
|
math.NA cs.CV
|
Mixed dictionaries generated by cosine and B-spline functions are considered.
It is shown that, using highly nonlinear approaches such as Orthogonal Matching
Pursuit, the discrete version of the proposed dictionaries yields a significant
gain in the sparsity of an image representation.
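A minimal sketch of the Orthogonal Matching Pursuit routine named above, over an arbitrary dictionary (a random unit-norm stand-in below, not the proposed cosine/B-spline dictionary):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily add the atom (column of D)
    most correlated with the residual, then refit by least squares."""
    support, residual, coef = [], y.copy(), None
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = 1.0 * D[:, 3] - 2.0 * D[:, 100]       # exactly 2-sparse signal
print(omp(D, y, 2))                        # expect atoms 3 and 100
```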
|
0909.1334
|
Lower Bounds for BMRM and Faster Rates for Training SVMs
|
cs.LG cs.AI cs.DS
|
Regularized risk minimization with the binary hinge loss and its variants
lies at the heart of many machine learning problems. Bundle methods for
regularized risk minimization (BMRM) and the closely related SVMStruct are
considered the best general purpose solvers to tackle this problem. It was
recently shown that BMRM requires $O(1/\epsilon)$ iterations to converge to an
$\epsilon$ accurate solution. In the first part of the paper we use the
Hadamard matrix to construct a regularized risk minimization problem and show
that these rates cannot be improved. We then show how one can exploit the
structure of the objective function to devise an algorithm for the binary hinge
loss which converges to an $\epsilon$ accurate solution in
$O(1/\sqrt{\epsilon})$ iterations.
|
0909.1338
|
"Rewiring" Filterbanks for Local Fourier Analysis: Theory and Practice
|
cs.IT math.IT
|
This article describes a series of new results outlining equivalences between
certain "rewirings" of filterbank system block diagrams, and the corresponding
actions of convolution, modulation, and downsampling operators. This gives rise
to a general framework of reverse-order and convolution subband structures in
filterbank transforms, which we show to be well suited to the analysis of
filterbank coefficients arising from subsampled or multiplexed signals. These
results thus provide a means to understand time-localized aliasing and
modulation properties of such signals and their subband
representations--notions that are notably absent from the global viewpoint
afforded by Fourier analysis. The utility of filterbank rewirings is
demonstrated by the closed-form analysis of signals subject to degradations
such as missing data, spatially or temporally multiplexed data acquisition, or
signal-dependent noise, such as are often encountered in practical signal
processing applications.
|
0909.1344
|
Multiuser MISO Transmitter Optimization for Inter-Cell Interference
Mitigation
|
cs.IT math.IT
|
The transmitter optimization (i.e., steering vectors and power allocation)
for a MISO Broadcast Channel (MISO-BC) subject to general linear constraints is
considered. Such constraints include, as special cases, the sum power, the
per-antenna or per-group-of-antennas power, and "forbidden interference
direction" constraints. We consider both the optimal dirty-paper coding and the
simple suboptimal linear zero-forcing beamforming strategies, and provide
numerically efficient algorithms that solve the problem in its most general
form. As an application, we consider a multi-cell scenario with partial cell
cooperation, where each cell optimizes its precoder by taking into account
interference constraints on specific users in adjacent cells. The effectiveness
of the proposed methods is evaluated in a simple system scenario including two
adjacent cells, under different fairness criteria that emphasize the bottleneck
role of users near the cell "boundary". Our results show that "active"
Inter-Cell Interference (ICI) mitigation outperforms the conventional "static"
ICI mitigation based on fractional frequency reuse.
|
0909.1346
|
Reordering Columns for Smaller Indexes
|
cs.DB
|
Column-oriented indexes, such as projection or bitmap indexes, are compressed
by run-length encoding to reduce storage and increase speed. Sorting the tables
improves compression. On realistic data sets, permuting the columns in the
right order before sorting can reduce the number of runs by a factor of two or
more. Unfortunately, determining the best column order is NP-hard. For many
cases, we prove that the number of runs in table columns is minimized if we
sort columns by increasing cardinality. Experimentally, sorting based on
Hilbert space-filling curves is poor at minimizing the number of runs.
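A small sketch of the run-counting effect described, on a hypothetical toy table (not one of the paper's data sets): sorting lexicographically after permuting columns by increasing cardinality reduces the total number of runs.

```python
from itertools import product
import random

def runs(col):
    """Number of runs: maximal blocks of equal consecutive values."""
    return 1 + sum(a != b for a, b in zip(col, col[1:]))

def total_runs(rows):
    return sum(runs(col) for col in zip(*rows))

# Toy table whose column cardinalities are 5, 3, 2 (decreasing).
rows = list(product(range(5), "xyz", "ab"))
random.shuffle(rows)

order = sorted(range(3), key=lambda i: len({r[i] for r in rows}))
reordered = [tuple(r[i] for i in order) for r in rows]

print(total_runs(sorted(rows)))       # lexicographic sort, original order
print(total_runs(sorted(reordered)))  # fewer runs after cardinality reorder
```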
|
0909.1397
|
Resource Matchmaking Algorithm using Dynamic Rough Set in Grid
Environment
|
cs.DC cs.AI
|
The grid environment is a service-oriented infrastructure in which many
heterogeneous resources participate to provide high-performance
computation. One of the big issues in the grid environment is the vagueness and
uncertainty between advertised resources and requested resources. Furthermore,
in an environment such as the grid, dynamicity is a crucial issue
which must be dealt with. Classical rough set theory has been used to deal with
uncertainty and vagueness, but it applies only to static systems and
cannot support dynamicity. In this work we propose a solution,
called Dynamic Rough Set Resource Discovery (DRSRD), for dealing with cases of
vagueness and uncertainty problems based on Dynamic rough set theory which
considers dynamic features in this environment. In this way, requested resource
properties have a weight as priority according to which resource matchmaking
and ranking process is done. We also report the result of the solution obtained
from the simulation in the GridSim simulator. The comparison has been made
between DRSRD, a classical rough set theory based algorithm, and a combined
UDDI and OWL-S algorithm. DRSRD shows much better precision for the cases with
vagueness and uncertainty in a dynamic system such as the grid than the
classical rough set theory based algorithm and the combined UDDI and OWL-S
algorithm.
|
0909.1405
|
A Hybrid Multi Objective Particle Swarm Optimization Method to Discover
Biclusters in Microarray Data
|
cs.CE cs.NE
|
In recent years, with the development of microarray technique, discovery of
useful knowledge from microarray data has become very important. Biclustering
is a very useful data mining technique for discovering genes which have similar
behavior. In microarray data, several objectives have to be optimized
simultaneously and often these objectives are in conflict with each other. A
Multi Objective model is capable of solving such problems. We propose a hybrid
algorithm based on Multi Objective Particle Swarm
Optimization for discovering biclusters in gene expression data. In our method,
we will consider a low level of overlapping amongst the biclusters and try to
cover all elements of the gene expression matrix. Experimental results on the
benchmark database show a significant improvement in both overlap among
biclusters and coverage of elements in the gene expression matrix.
|
0909.1426
|
L^p boundedness of the Hilbert transform
|
cs.IT math.IT
|
The Hilbert transform is essentially the \textit{only} singular operator in
one dimension. This undoubtedly makes it one of the most important linear
operators in harmonic analysis. The Hilbert transform has had a profound
bearing on several theoretical and physical problems across a wide range of
disciplines; this includes problems in Fourier convergence, complex analysis,
potential theory, modulation theory, wavelet theory, aerofoil design,
dispersion relations and high-energy physics, to name a few.
In this monograph, we revisit some of the established results concerning the
global behavior of the Hilbert transform, namely that it is weakly bounded
on $L^1(\mathbb{R})$, and strongly bounded on $L^p(\mathbb{R})$ for $1 < p <\infty$, and
provide a self-contained derivation of the same using real-variable techniques.
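In the standard notation (the usual statements, restated here for orientation):

```latex
(Hf)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{\mathbb{R}} \frac{f(t)}{x - t}\,dt ,
\qquad
\|Hf\|_{L^p(\mathbb{R})} \le C_p\,\|f\|_{L^p(\mathbb{R})} \quad (1 < p < \infty),
```

together with the weak-type $(1,1)$ bound $\left|\{x : |(Hf)(x)| > \lambda\}\right| \le (C/\lambda)\,\|f\|_{L^1(\mathbb{R})}$ for all $\lambda > 0$.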
|
0909.1460
|
Accuracy Improvement for Stiffness Modeling of Parallel Manipulators
|
cs.RO
|
The paper focuses on the accuracy improvement of stiffness models for
parallel manipulators, which are employed in high-speed precision machining. It
is based on an integrated methodology that combines analytical and numerical
techniques and deals with multidimensional lumped-parameter models of the
links. The latter replace the link flexibility by localized 6-dof virtual
springs describing both translational/rotational compliance and the coupling
between them. A detailed accuracy analysis of the stiffness identification
procedures employed in commercial CAD systems is presented (including
statistical analysis of round-off errors and evaluation of the confidence
intervals for stiffness matrices). The efficiency of the developed technique is confirmed
by application examples, which deal with stiffness analysis of translational
parallel manipulators.
|
0909.1475
|
Design optimization of parallel manipulators for high-speed precision
machining applications
|
cs.RO
|
The paper proposes an integrated approach to the design optimization of
parallel manipulators, which is based on the concept of the workspace grid and
utilizes the goal-attainment formulation for the global optimization. To
combine the non-homogenous design specification, the developed optimization
technique transforms all constraints and objectives into similar performance
indices related to the maximum size of the prescribed shape workspace. This
transformation is based on the dedicated dynamic programming procedures that
satisfy computational requirements of modern CAD. Efficiency of the developed
technique is demonstrated via two case studies that deal with optimization of
the kinematical and stiffness performances for parallel manipulators of the
Orthoglide family.
|
0909.1525
|
Training-Embedded, Single-Symbol ML-Decodable, Distributed STBCs for
Relay Networks
|
cs.IT math.IT
|
Recently, a special class of complex designs called Training-Embedded Complex
Orthogonal Designs (TE-CODs) has been introduced to construct single-symbol
Maximum Likelihood (ML) decodable (SSD) distributed space-time block codes
(DSTBCs) for two-hop wireless relay networks using the amplify and forward
protocol. However, to implement DSTBCs from square TE-CODs, the overhead due to
the transmission of training symbols becomes prohibitively large as the number
of relays increases. In this paper, we propose TE-Coordinate Interleaved
Orthogonal Designs (TE-CIODs) to construct SSD DSTBCs. Exploiting the block
diagonal structure of TE-CIODs, we show that the overhead due to the
transmission of training symbols to implement DSTBCs from TE-CIODs is smaller
than that for TE-CODs. We also show that DSTBCs from TE-CIODs offer a higher
rate than those from TE-CODs for the same number of relays while maintaining the
SSD and full-diversity properties.
|
0909.1599
|
Frame Permutation Quantization
|
cs.IT math.IT
|
Frame permutation quantization (FPQ) is a new vector quantization technique
using finite frames. In FPQ, a vector is encoded using a permutation source
code to quantize its frame expansion. This means that the encoding is a partial
ordering of the frame expansion coefficients. Compared to ordinary permutation
source coding, FPQ produces a greater number of possible quantization rates and
a higher maximum rate. Various representations for the partitions induced by
FPQ are presented, and reconstruction algorithms based on linear programming,
quadratic programming, and recursive orthogonal projection are derived.
Implementations of the linear and quadratic programming algorithms for uniform
and Gaussian sources show performance improvements over entropy-constrained
scalar quantization for certain combinations of vector dimension and coding
rate. Monte Carlo evaluation of the recursive algorithm shows that mean-squared
error (MSE) decays as 1/M^4 for an M-element frame, which is consistent with
previous results on optimal decay of MSE. Reconstruction using the canonical
dual frame is also studied, and several results relate properties of the
analysis frame to whether linear reconstruction techniques provide consistent
reconstructions.
|
0909.1605
|
Kernel Spectral Curvature Clustering (KSCC)
|
cs.CV
|
Multi-manifold modeling is increasingly used in segmentation and data
representation tasks in computer vision and related fields. While the general
problem, modeling data by mixtures of manifolds, is very challenging, several
approaches exist for modeling data by mixtures of affine subspaces (which is
often referred to as hybrid linear modeling). We translate some important
instances of multi-manifold modeling to hybrid linear modeling in embedded
spaces, without explicitly performing the embedding but applying the kernel
trick. The resulting algorithm, Kernel Spectral Curvature Clustering, uses
kernels at two levels - both as an implicit embedding method to linearize
nonflat manifolds and as a principled method to convert a multiway affinity
problem into a spectral clustering one. We demonstrate the effectiveness of the
method by comparing it with other state-of-the-art methods on both synthetic
data and a real-world problem of segmenting multiple motions from two
perspective camera views.
|
0909.1608
|
Motion Segmentation by SCC on the Hopkins 155 Database
|
cs.CV
|
We apply the Spectral Curvature Clustering (SCC) algorithm to a benchmark
database of 155 motion sequences, and show that it outperforms all other
state-of-the-art methods. The average misclassification rate by SCC is 1.41%
for sequences having two motions and 4.85% for three motions.
|
0909.1623
|
Two channel paraunitary filter banks based on linear canonical transform
|
cs.IT math.IT
|
In this paper a two-channel paraunitary filter bank is proposed, which is
based on the linear canonical transform instead of the discrete Fourier transform.
Input-output relations for such a filter bank are derived in terms of polyphase
matrices and modulation matrices. It is shown that like conventional filter
banks, the LCT based paraunitary filter banks need only one filter to be
designed and the rest of the filters can be obtained from it. It is also shown that
LCT based paraunitary filter banks can be designed by using conventional
power-symmetric filter design in Fourier domain.
|
0909.1626
|
Computing the distance distribution of systematic non-linear codes
|
cs.DM cs.IT math.IT
|
The most important families of non-linear codes are systematic. A brute-force
check is the only known method to compute their weight distribution and
distance distribution. On the other hand, it outputs also all closest word
pairs in the code. In the black-box complexity model, the check is optimal
among closest-pair algorithms. In this paper we provide a Groebner basis
technique to compute the weight/distance distribution of any systematic
non-linear code. Our technique also outputs all closest pairs. Unlike the
check, our method can be extended to work on code families.
|
0909.1638
|
Single-generation Network Coding for Networks with Delay
|
cs.IT math.IT
|
A single-source network is said to be \textit{memory-free} if all of the
internal nodes (those except the source and the sinks) do not employ memory but
merely send linear combinations of the incoming symbols (received at their
incoming edges) on their outgoing edges. Memory-free networks with delay using
network coding are forced to perform inter-generation network coding; as a
result, some or all sinks may require a large amount of memory for decoding. In
this work, we address this problem by also utilizing memory elements at the
internal nodes of the network, which reduces the number of memory elements
used at the sinks. We give an
algorithm which employs memory at the nodes to achieve single-generation
network coding. For fixed latency, our algorithm reduces the total number of
memory elements used in the network to achieve single-generation network
coding. We also discuss the advantages of employing single-generation network
coding together with convolutional network-error correction codes (CNECCs) for
networks with unit-delay and illustrate the performance gain of CNECCs by using
memory at the intermediate nodes using simulations on an example network under
a probabilistic network error model.
|
0909.1758
|
Teaching an Old Elephant New Tricks
|
cs.DB cs.DS cs.PF
|
In recent years, column stores (or C-stores for short) have emerged as a
novel approach to deal with read-mostly data warehousing applications.
Experimental evidence suggests that, for certain types of queries, the new
features of C-stores result in orders of magnitude improvement over traditional
relational engines. At the same time, some C-store proponents argue that
C-stores are fundamentally different from traditional engines, and therefore
their benefits cannot be incorporated into a relational engine short of a
complete rewrite. In this paper we challenge this claim and show that many of
the benefits of C-stores can indeed be simulated in traditional engines with no
changes whatsoever. We then identify some limitations of our "pure-simulation"
approach for the case of more complex queries. Finally, we predict that
traditional relational engines will eventually leverage most of the benefits of
C-stores natively, as is currently happening in other domains such as XML data.
|
0909.1759
|
Declarative Reconfigurable Trust Management
|
cs.CR cs.DB
|
In recent years, there has been a proliferation of declarative logic-based
trust management languages and systems proposed to ease the description,
configuration, and enforcement of security policies. These systems have
different tradeoffs in expressiveness and complexity, depending on the security
constructs (e.g. authentication, delegation, secrecy, etc.) that are supported,
and the assumed trust level and scale of the execution environment. In this
paper, we present LBTrust, a unified declarative system for reconfigurable
trust management, where various security constructs can be customized and
composed in a declarative fashion. We present an initial proof-of-concept
implementation of LBTrust using LogicBlox, an emerging commercial Datalog-based
platform for enterprise software systems. The LogicBlox language enhances
Datalog in a variety of ways, including constraints and meta-programming, as
well as support for programmer defined constraints which on the meta-model
itself ? meta-constraints ? which act to restrict the set of allowable
programs. LBTrust utilizes LogicBlox?s meta-programming and meta-constraints to
enable customizable cryptographic, partitioning and distribution strategies
based on the execution environment. We present uses cases of LBTrust based on
three trust management systems (Binder, D1LP, and Secure Network Datalog), and
provide a preliminary evaluation of a Binder-based trust management system.
|
0909.1760
|
LifeRaft: Data-Driven, Batch Processing for the Exploration of
Scientific Databases
|
cs.DB
|
Workloads that comb through vast amounts of data are gaining importance in
the sciences. These workloads consist of "needle in a haystack" queries that
are long running and data intensive so that query throughput limits
performance. To maximize throughput for data-intensive queries, we put forth
LifeRaft: a query processing system that batches queries with overlapping data
requirements. Rather than scheduling queries in arrival order, LifeRaft
executes queries concurrently against an ordering of the data that maximizes
data sharing among queries. This decreases I/O and increases cache utility.
However, such batch processing can increase query response time by starving
interactive workloads. LifeRaft addresses starvation using techniques inspired
by head scheduling in disk drives. Depending upon the workload saturation and
queuing times, the system adaptively and incrementally trades off processing
queries in arrival order against data-driven batch processing. Evaluating LifeRaft
in the SkyQuery federation of astronomy databases reveals a two-fold
improvement in query throughput.
|
0909.1763
|
Remembrance: The Unbearable Sentience of Being Digital
|
cs.DB cs.OS
|
We introduce a world vision in which data is endowed with memory. In this
data-centric systems paradigm, data items can be enabled to retain all or some
of their previous values. We call this ability "remembrance" and posit that it
empowers significant leaps in the security, availability, and general
operational dimensions of systems. With the explosion in cheap, fast memories
and storage, large-scale remembrance will soon become practical. Here, we
introduce and explore the advantages of such a paradigm and the challenges in
making it a reality.
|
0909.1764
|
Data Management for High-Throughput Genomics
|
cs.DB q-bio.GN
|
Today's sequencing technology allows sequencing an individual genome within a
few weeks for a fraction of the costs of the original Human Genome project.
Genomics labs are faced with dozens of TB of data per week that have to be
automatically processed and made available to scientists for further analysis.
This paper explores the potential and the limitations of using relational
database systems as the data processing platform for high-throughput genomics.
In particular, we are interested in the storage management for high-throughput
sequence data and in leveraging SQL and user-defined functions for data
analysis inside a database system. We give an overview of a database design for
high-throughput genomics, how we used a SQL Server database in some
unconventional ways to prototype this scenario, and discuss some
initial findings about the scalability and performance of such a more
database-centric approach.
|
0909.1765
|
Qunits: queried units in database search
|
cs.DB cs.IR
|
Keyword search against structured databases has become a popular topic of
investigation, since many users find structured queries too hard to express,
and enjoy the freedom of a ``Google-like'' query box into which search terms
can be entered. Attempts to address this problem face a fundamental dilemma.
Database querying is based on the logic of predicate evaluation, with a
precisely defined answer set for a given query. On the other hand, in an
information retrieval approach, ranked query results have long been accepted as
far superior to results based on boolean query evaluation. As a consequence,
when keyword queries are attempted against databases, relatively ad-hoc ranking
mechanisms are invented (if ranking is used at all), and there is little
leverage from the large body of IR literature regarding how to rank query
results.
Our proposal is to create a clear separation between ranking and database
querying. This divides the problem into two parts, and allows us to address
these separately. The first task is to represent the database, conceptually, as
a collection of independent ``queried units'', or ``qunits'', each of which
represents the desired result for some query against the database. The second
task is to evaluate keyword queries against a collection of qunits, which can
be treated as independent documents for query purposes, thereby permitting the
use of standard IR techniques. We provide insights that encourage the use of
this query paradigm, and discuss preliminary investigations into the efficacy
of a qunits-based framework based on a prototype implementation.
|
0909.1766
|
RIOT: I/O-Efficient Numerical Computing without SQL
|
cs.DB
|
R is a numerical computing environment that is widely popular for statistical
data analysis. Like many such environments, R performs poorly for large
datasets whose sizes exceed that of physical memory. We present our vision of
RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in
a way transparent to the users. We describe our experience with RIOT-DB, an
initial prototype that uses a relational database system as a backend. Despite
the overhead and inadequacy of generic database systems in handling array data
and numerical computation, RIOT-DB significantly outperforms R in many
large-data scenarios, thanks to a suite of high-level, inter-operation
optimizations that integrate seamlessly into R. While many techniques in RIOT
are inspired by databases (and, for RIOT-DB, realized by a database system),
RIOT users are insulated from anything database related. Compared with previous
approaches that require users to learn new languages and rewrite their programs
to interface with a database, RIOT will, we believe, be easier to adopt by the
majority of the R users.
|
0909.1767
|
Towards Eco-friendly Database Management Systems
|
cs.DB
|
Database management systems (DBMSs) have largely ignored the task of managing
the energy consumed during query processing. Both economical and environmental
factors now require that DBMSs pay close attention to energy consumption. In
this paper we approach this issue by considering energy consumption as a
first-class performance goal for query processing in a DBMS. We present two
concrete techniques that can be used by a DBMS to directly manage the energy
consumption. Both techniques trade energy consumption for performance. The
first technique, called PVC, leverages the ability of modern processors to
execute at lower processor voltage and frequency. The second technique, called
QED, uses query aggregation to leverage common components of queries in a
workload. Using experiments run on a commercial DBMS and MySQL, we show that
PVC can reduce the processor energy consumption by 49% of the original
consumption while increasing the response time by only 3%. On MySQL, PVC can
reduce energy consumption by 20% with a response time penalty of only 6%. For
simple selection queries with no predicate overlap, we show that QED can be
used to gracefully trade response time for energy, reducing energy consumption
by 54% for a 43% increase in average response time. In this paper we also
highlight some research issues in the emerging area of energy-efficient data
processing.
|
0909.1768
|
Unbundling Transaction Services in the Cloud
|
cs.DB cs.DC
|
The traditional architecture for a DBMS engine has the recovery, concurrency
control and access method code tightly bound together in a storage engine for
records. We propose a different approach, where the storage engine is factored
into two layers (each of which might have multiple heterogeneous instances). A
Transactional Component (TC) works at a logical level only: it knows about
transactions and their "logical" concurrency control and undo/redo recovery,
but it does not know about page layout, B-trees etc. A Data Component (DC)
knows about the physical storage structure. It supports a record oriented
interface that provides atomic operations, but it does not know about
transactions. Providing atomic record operations may itself involve DC-local
concurrency control and recovery, which can be implemented using system
transactions. The interaction of the mechanisms in TC and DC leads to
multi-level redo (unlike the repeat history paradigm for redo in integrated
engines). This refactoring of the system architecture could allow easier
deployment of application-specific physical structures and may also be helpful
to exploit multi-core hardware. Particularly promising is its potential to
enable flexible transactions in cloud database deployments. We describe the
necessary principles for unbundled recovery, and discuss implementation issues.
|
0909.1769
|
Interactive Data Integration through Smart Copy & Paste
|
cs.DB cs.AI
|
In many scenarios, such as emergency response or ad hoc collaboration, it is
critical to reduce the overhead in integrating data. Ideally, one could perform
the entire process interactively under one unified interface: defining
extractors and wrappers for sources, creating a mediated schema, and adding
schema mappings, while seeing how these impact the integrated view of the
data, and refining the design accordingly.
We propose a novel smart copy and paste (SCP) model and architecture for
seamlessly combining the design-time and run-time aspects of data integration,
and we describe an initial prototype, the CopyCat system. In CopyCat, the user
does not need special tools for the different stages of integration: instead,
the system watches as the user copies data from applications (including the Web
browser) and pastes them into CopyCat's spreadsheet-like workspace. CopyCat
generalizes these actions and presents proposed auto-completions, each with an
explanation in the form of provenance. The user provides feedback on these
suggestions, through either direct interactions or further copy-and-paste
operations, and the system learns from this feedback. This paper provides an
overview of our prototype system, and identifies key research challenges in
achieving SCP in its full generality.
|
0909.1770
|
From Declarative Languages to Declarative Processing in Computer Games
|
cs.DB cs.MA
|
Recent work has shown that we can dramatically improve the performance of
computer games and simulations through declarative processing: Character AI can
be written in an imperative scripting language which is then compiled to
relational algebra and executed by a special games engine with features similar
to a main memory database system. In this paper we lay out a challenging
research agenda built on these ideas.
We discuss several research ideas for novel language features to support
atomic actions and reactive programming. We also explore challenges for
main-memory query processing in games and simulations including adaptive query
plan selection, support for parallel architectures, debugging simulation
scripts, and extensions for multi-player games and virtual worlds. We believe
that these research challenges will result in a dramatic change in the design
of game engines over the next decade.
|
0909.1771
|
The Role of Schema Matching in Large Enterprises
|
cs.DB
|
To date, the principal use case for schema matching research has been as a
precursor for code generation, i.e., constructing mappings between schema
elements with the end goal of data transfer. In this paper, we argue that
schema matching plays valuable roles independent of mapping construction,
especially as schemata grow to industrial scales. Specifically, in large
enterprises human decision makers and planners are often the immediate consumer
of information derived from schema matchers, instead of schema mapping tools.
We list a set of real application areas illustrating this role for schema
matching, and then present our experiences tackling a customer problem in one
of these areas. We describe the matcher used, where the tool was effective,
where it fell short, and our lessons learned about how well current schema
matching technology is suited for use in large enterprises. Finally, we suggest
a new agenda for schema matching research based on these experiences.
|
0909.1772
|
Visualizing the robustness of query execution
|
cs.DB cs.PF
|
In database query processing, actual run-time conditions (e.g., actual
selectivities and actual available memory) very often differ from compile-time
expectations of run-time conditions (e.g., estimated predicate selectivities
and anticipated memory availability). Robustness of query processing can be
defined as the ability to handle unexpected conditions. Robustness of query
execution, specifically, can be defined as the ability to process a specific
plan efficiently in an unexpected condition. We focus on query execution
(run-time), ignoring query optimization (compile-time), in order to complement
existing research and to explore untapped potential for improved robustness in
database query processing.
One of our initial steps has been to devise diagrams or maps that show how
well plans perform in the face of varying run-time conditions and how
gracefully a system's query architecture, operators, and their implementation
degrade in the face of adverse conditions. In this paper, we show several kinds
of diagrams with data from three real systems and report on what we have
learned both about these visualization techniques and about the three database
systems.
|
0909.1773
|
Search Driven Analysis of Heterogenous XML Data
|
cs.DB
|
Analytical processing on XML repositories is usually enabled by designing
complex data transformations that shred the documents into a common data
warehousing schema. This can be very time-consuming and costly, especially if
the underlying XML data has a lot of variety in structure, and only a subset of
attributes constitutes meaningful dimensions and facts. Today, there is no tool
to explore an XML data set, discover interesting attributes, dimensions and
facts, and rapidly prototype an OLAP solution.
In this paper, we propose a system, called SEDA that enables users to start
with simple keyword-style querying, and interactively refine the query based on
result summaries. SEDA then maps query results onto a set of known, or newly
created, facts and dimensions, and derives a star schema and its instantiation
to be fed into an off-the-shelf OLAP tool, for further analysis.
|
0909.1774
|
Social Systems: Can we Do More Than Just Poke Friends?
|
cs.DB cs.CY
|
Social sites have become extremely popular among users but have they
attracted equal attention from the research community? Are they good only for
simple tasks, such as tagging and poking friends? Do they present any new or
interesting research challenges? In this paper, we describe the insights we
have obtained implementing CourseRank, a course evaluation and planning social
system. We argue that more attention should be given to social sites like ours
and that there are many challenges (though not the traditional DBMS ones) that
should be addressed by our community.
|
0909.1775
|
SCADS: Scale-Independent Storage for Social Computing Applications
|
cs.DB cs.DC
|
Collaborative web applications such as Facebook, Flickr and Yelp present new
challenges for storing and querying large amounts of data. As users and
developers are focused more on performance than single copy consistency or the
ability to perform ad-hoc queries, there exists an opportunity for a
highly-scalable system tailored specifically for relaxed consistency and
pre-computed queries. The Web 2.0 development model demands the ability to both
rapidly deploy new features and automatically scale with the number of users.
There have been many successful distributed key-value stores, but so far none
provide as rich a query language as SQL. We propose a new architecture, SCADS,
that allows the developer to declaratively state application specific
consistency requirements, takes advantage of utility computing to provide cost
effective scale-up and scale-down, and will use machine learning models to
introspectively anticipate performance problems and predict the resource
requirements of new queries before execution.
|
0909.1776
|
Sailing the Information Ocean with Awareness of Currents: Discovery and
Application of Source Dependence
|
cs.DB cs.LG
|
The Web has enabled the availability of a huge amount of useful information,
but has also eased the ability to spread false information and rumors across
multiple sources, making it hard to distinguish between what is true and what
is not. Recent examples include the premature Steve Jobs obituary, the second
bankruptcy of United Airlines, the creation of Black Holes by the operation of
the Large Hadron Collider, etc. Since it is important to permit the expression
of dissenting and conflicting opinions, it would be a fallacy to try to ensure
that the Web provides only consistent information. However, to help in
separating the wheat from the chaff, it is essential to be able to determine
dependence between sources. Given the huge number of data sources and the vast
volume of conflicting data available on the Web, doing so in a scalable manner
is extremely challenging and has not been addressed by existing work yet.
In this paper, we present a set of research problems and propose some
preliminary solutions on the issues involved in discovering dependence between
sources. We also discuss how this knowledge can benefit a variety of
technologies, such as data integration and Web 2.0, that help users manage and
access the totality of the available information from various sources.
|
0909.1777
|
Capturing Data Uncertainty in High-Volume Stream Processing
|
cs.DB
|
We present the design and development of a data stream system that captures
data uncertainty from data collection to query processing to final result
generation. Our system focuses on data that is naturally modeled as continuous
random variables. For such data, our system employs an approach grounded in
probability and statistical theory to capture data uncertainty and integrates
this approach into high-volume stream processing. The first component of our
system captures uncertainty of raw data streams from sensing devices. Since
such raw streams can be highly noisy and may not carry sufficient information
for query processing, our system employs probabilistic models of the data
generation process and stream-speed inference to transform raw data into a
desired format with an uncertainty metric. The second component captures
uncertainty as data propagates through query operators. To efficiently quantify
result uncertainty of a query operator, we explore a variety of techniques
based on probability and statistical theory to compute the result distribution
at stream speed. We are currently working with a group of scientists to
evaluate our system using traces collected from the domains of (and eventually
in the real systems for) hazardous weather monitoring and object tracking and
monitoring.
|
0909.1778
|
A Case for A Collaborative Query Management System
|
cs.DB
|
Over the past 40 years, database management systems (DBMSs) have evolved to
provide a sophisticated variety of data management capabilities. At the same
time, tools for managing queries over the data have remained relatively
primitive. One reason for this is that queries are typically issued through
applications. They are thus debugged once and re-used repeatedly. This mode of
interaction, however, is changing. As scientists (and others) store and share
increasingly large volumes of data in data centers, they need the ability to
analyze the data by issuing exploratory queries. In this paper, we argue that,
in these new settings, data management systems must provide powerful query
management capabilities, from query browsing to automatic query
recommendations. We first discuss the requirements for a collaborative query
management system. We outline an early system architecture and discuss the many
research challenges associated with building such an engine.
|
0909.1779
|
The Case for RodentStore, an Adaptive, Declarative Storage System
|
cs.DB
|
Recent excitement in the database community surrounding new
applications -- analytic, scientific, graph, geospatial, etc. -- has led to an
explosion in research on database storage systems. New storage systems are
vital to the database community, as they are at the heart of making database
systems perform well in new application domains. Unfortunately, each such
system also represents a substantial engineering effort including a great deal
of duplication of mechanisms for features such as transactions and caching. In
this paper, we make the case for RodentStore, an adaptive and declarative
storage system providing a high-level interface for describing the physical
representation of data. Specifically, RodentStore uses a declarative storage
algebra whereby administrators (or database design tools) specify how a logical
schema should be grouped into collections of rows, columns, and/or arrays, and
the order in which those groups should be laid out on disk. We describe the key
operators and types of our algebra, outline the general architecture of
RodentStore, which interprets algebraic expressions to generate a physical
representation of the data, and describe the interface between RodentStore and
other parts of a database system, such as the query optimizer and executor. We
provide a case study of the potential use of RodentStore in representing dense
geospatial data collected from a mobile sensor network, showing the ease with
which different storage layouts can be expressed using some of our algebraic
constructs and the potential performance gains that a RodentStore-built storage
system can offer.
|
0909.1781
|
Boosting XML Filtering with a Scalable FPGA-based Architecture
|
cs.AR cs.DB
|
The growing amount of XML encoded data exchanged over the Internet increases
the importance of XML based publish-subscribe (pub-sub) and content based
routing systems. The input in such systems typically consists of a stream of
XML documents and a set of user subscriptions expressed as XML queries. The
pub-sub system then filters the published documents and passes them to the
subscribers. Pub-sub systems are characterized by very high input rates;
therefore the processing time is critical. In this paper we propose a "pure
hardware" based solution, which utilizes XPath query blocks on FPGA to solve
the filtering problem. By utilizing the high throughput that an FPGA provides
for parallel processing, our approach achieves drastically better throughput
than the existing software or mixed (hardware/software) architectures. The
XPath queries (subscriptions) are translated to regular expressions which are
then mapped to FPGA devices. By introducing stacks within the FPGA we are able
to express and process a wide range of path queries very efficiently, on a
scalable environment. Moreover, the fact that the parser and the filter
processing are performed on the same FPGA chip eliminates expensive
communication costs (that a multi-core system would incur), thus enabling very
fast and efficient pipelining. Our experimental evaluation reveals more than
one order of magnitude improvement compared to traditional pub/sub systems.
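As a rough software analogue of the translation step described above, the sketch below maps a linear XPath query (child and descendant axes only) to a regular expression over a document path written as a stream of tag names; the function and the simplified tag-stream encoding are illustrative assumptions, not the authors' hardware compiler.

```python
import re

def xpath_to_regex(xpath: str) -> str:
    """Translate a linear XPath query with / and // axes into a regex
    over a document path written as /tag/tag/... (illustrative only)."""
    pattern = ""
    # Tokenize into (axis, tag) pairs, e.g. "//a/b" -> [("//","a"), ("/","b")]
    for axis, tag in re.findall(r"(//|/)(\w+|\*)", xpath):
        tag_re = r"\w+" if tag == "*" else tag
        if axis == "/":                  # child axis: exactly one step
            pattern += "/" + tag_re
        else:                            # descendant axis: any number of steps
            pattern += r"(?:/\w+)*/" + tag_re
    return "^" + pattern + "$"

# The document path /book/chapter/title matches the query //chapter/title:
print(bool(re.match(xpath_to_regex("//chapter/title"), "/book/chapter/title")))
```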
|
0909.1782
|
Principles for Inconsistency
|
cs.DB
|
Data consistency is very desirable because strong semantic properties make it
easier to write correct programs that perform as users expect. However, there
are good reasons why consistency may have to be weakened to achieve other
business goals. In this CIDR 2009 Perspectives paper, we present real-world
reasons inconsistency may be necessary, offer principles for managing
inconsistency coherently, and describe implementation approaches we are
investigating for sustainably scalable systems that offer comprehensible user
experiences despite inconsistency.
|
0909.1783
|
The Case for a Structured Approach to Managing Unstructured Data
|
cs.DB cs.IR
|
The challenge of managing unstructured data represents perhaps the largest
data management opportunity for our community since managing relational data.
And yet we are risking letting this opportunity go by, ceding the playing field
to other players, ranging from communities such as AI, KDD, IR, Web, and
Semantic Web, to industrial players such as Google, Yahoo, and Microsoft. In
this essay we explore what we can do to improve upon this situation. Drawing on
the lessons learned while managing relational data, we outline a structured
approach to managing unstructured data. We conclude by discussing the potential
implications of this approach to managing other kinds of non-relational data,
and to the identity of our field.
|
0909.1784
|
Energy Efficiency: The New Holy Grail of Data Management Systems
Research
|
cs.DB cs.PF
|
Energy costs are quickly rising in large-scale data centers and are soon
projected to overtake the cost of hardware. As a result, data center operators
have recently started turning to more energy-friendly hardware. Despite
the growing body of research in power management techniques, there has been
little work to date on energy efficiency from a data management software
perspective.
In this paper, we argue that hardware-only approaches are only part of the
solution, and that data management software will be key in optimizing for
energy efficiency. We discuss the problems arising from growing energy use in
data centers and the trends that point to an increasing set of opportunities
for software-level optimizations. Using two simple experiments, we illustrate
the potential of such optimizations, and, motivated by these examples, we
discuss general approaches for reducing energy waste. Lastly, we point out
existing places within database systems that are promising for
energy-efficiency optimizations and urge the data management systems community
to shift focus from performance-oriented research to energy-efficient
computing.
|
0909.1785
|
Harnessing the Deep Web: Present and Future
|
cs.DB
|
Over the past few years, we have built a system that has exposed large
volumes of Deep-Web content to Google.com users. The content that our system
exposes contributes to more than 1000 search queries per second and spans over
50 languages and hundreds of domains. The Deep Web has long been acknowledged
to be a major source of structured data on the web, and hence accessing
Deep-Web content has long been a problem of interest in the data management
community. In this paper, we report on where we believe the Deep Web provides
value and where it does not. We contrast two very different approaches to
exposing Deep-Web content -- the surfacing approach that we used, and the
virtual integration approach that has often been pursued in the data management
literature. We emphasize where the values of each of the two approaches lie and
caution against potential pitfalls. We outline important areas of future
research and, in particular, emphasize the value that can be derived from
analyzing large collections of potentially disparate structured data on the
web.
|
0909.1786
|
DBMSs Should Talk Back Too
|
cs.DB cs.HC
|
Natural language user interfaces to database systems have been studied for
several decades now. They have mainly focused on parsing and interpreting
natural language queries in order to translate them into a formal database
language. We
envision the reverse functionality, where the system would be able to take the
internal result of that translation, say in SQL form, translate it back into
natural language, and show it to the initiator of the query for verification.
Likewise, information extraction has received considerable attention in the
past ten years or so, identifying structured information in free text so that
it may then be stored appropriately and queried. Validation of the records
stored with a backward translation into text would again be very powerful.
Verification and validation of query and data input of a database system
correspond to just one example of the many important applications that would
benefit greatly from having mature techniques for translating such database
constructs into free-flowing text. The problem appears to be deceptively
simple, as there are no ambiguities or other complications in interpreting
internal database elements, so initially a straightforward translation appears
adequate. Reality teaches us quite the opposite, however, as the resulting text
should be expressive, i.e., accurate in capturing the underlying queries or
data, and effective, i.e., allowing fast and unique interpretation of them.
Achieving both of these qualities is very difficult and raises several
technical challenges that need to be addressed. In this paper, we first expose
the reader to several situations and applications that need translation into
natural language, thereby, motivating the problem. We then outline, by example,
the research problems that need to be solved, separately for data translations
and query translations.
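As a toy illustration of the query-to-text direction, the sketch below renders a simple SELECT-style query as an English sentence; the function and its templates are hypothetical stand-ins, and real systems must confront exactly the expressiveness and uniqueness issues raised above.

```python
def sql_to_text(select, table, where=None, order=None):
    """Render a simple SELECT query as an English sentence. A toy sketch
    of query-to-text translation; real systems must handle nesting,
    joins, and ambiguous attribute names."""
    cols = " and ".join(select)
    sentence = f"Find the {cols} of every row in {table}"
    if where:
        sentence += f" whose {where}"
    if order:
        sentence += f", listed by {order}"
    return sentence + "."

print(sql_to_text(["name", "salary"], "employees",
                  where="salary exceeds 50000", order="name"))
# Find the name and salary of every row in employees whose salary
# exceeds 50000, listed by name.
```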
|
0909.1801
|
Randomized Sensor Selection in Sequential Hypothesis Testing
|
cs.IT math.IT
|
We consider the problem of sensor selection for time-optimal detection of a
hypothesis. We consider a group of sensors transmitting their observations to a
fusion center. The fusion center considers the output of only one randomly
chosen sensor at a time, and performs a sequential hypothesis test. We
consider the class of sequential tests which are easy to implement,
asymptotically optimal, and computationally amenable. For three distinct
performance metrics, we show that, for a generic set of sensors and binary
hypothesis, the fusion center needs to consider at most two sensors. We also
show that, for the case of multiple hypotheses, the optimal policy needs to
observe at most as many sensors as there are underlying hypotheses.
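A minimal simulation of this setting, assuming Gaussian sensor observations; the means, thresholds, and the two-sensor randomized policy below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypotheses; sensor s observes a Gaussian with mean mu[h][s] under H_h.
mu = np.array([[0.0, 0.0, 0.0],     # H0
               [1.0, 0.3, 0.8]])    # H1 (sensor-dependent shift)
sigma = 1.0
q = np.array([0.5, 0.0, 0.5])       # randomized policy: only two sensors used

def sprt(true_h, A=4.6, B=-4.6):
    """Sequential probability ratio test, observing one randomly chosen
    sensor per step; A and B are the usual log-threshold pair."""
    llr = 0.0
    for t in range(10_000):
        s = rng.choice(len(q), p=q)              # pick a sensor at random
        x = rng.normal(mu[true_h][s], sigma)     # observe it
        # increment log p1(x)/p0(x) for Gaussian likelihoods
        llr += ((x - mu[0][s])**2 - (x - mu[1][s])**2) / (2 * sigma**2)
        if llr >= A:
            return 1, t + 1
        if llr <= B:
            return 0, t + 1
    return None, 10_000

runs = [sprt(true_h=1) for _ in range(200)]
print("accuracy:", np.mean([d == 1 for d, _ in runs]),
      "mean samples:", np.mean([n for _, n in runs]))
```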
|
0909.1817
|
Cooperative Transmission for a Vector Gaussian Parallel Relay Network
|
cs.IT math.IT
|
In this paper, we consider a parallel relay network where two relays
cooperatively help a source transmit to a destination. We assume the source and
the destination nodes are equipped with multiple antennas. Three basic schemes
and their achievable rates are studied: Decode-and-Forward (DF),
Amplify-and-Forward (AF), and Compress-and-Forward (CF). For the DF scheme, the
source transmits two private signals, one for each relay, where dirty paper
coding (DPC) is used between the two private streams, and a common signal for
both relays. The relays make efficient use of the common information to
introduce a proper amount of correlation in the transmission to the
destination. We show that the DF scheme achieves the capacity under certain
conditions. We also show that the CF scheme is asymptotically optimal in the
high relay power limit, regardless of channel ranks. It turns out that the AF
scheme also achieves the asymptotic optimality but only when the
relays-to-destination channel is full rank. The relative advantages of the
three schemes are discussed with numerical results.
|
0909.1830
|
Greedy Gossip with Eavesdropping
|
cs.DC cs.AI
|
This paper presents greedy gossip with eavesdropping (GGE), a novel
randomized gossip algorithm for distributed computation of the average
consensus problem. In gossip algorithms, nodes in the network randomly
communicate with their neighbors and exchange information iteratively. The
algorithms are simple and decentralized, making them attractive for wireless
network applications. In general, gossip algorithms are robust to unreliable
wireless conditions and time varying network topologies. In this paper we
introduce GGE and demonstrate that greedy updates lead to rapid convergence. We
do not require nodes to have any location information. Instead, greedy updates
are made possible by exploiting the broadcast nature of wireless
communications. During the operation of GGE, when a node decides to gossip,
instead of choosing one of its neighbors at random, it makes a greedy
selection, choosing the node which has the value most different from its own.
In order to make this selection, nodes need to know their neighbors' values.
Therefore, we assume that all transmissions are wireless broadcasts and nodes
keep track of their neighbors' values by eavesdropping on their communications.
We show that the convergence of GGE is guaranteed for connected network
topologies. We also study the rates of convergence and illustrate, through
theoretical bounds and numerical simulations, that GGE consistently outperforms
randomized gossip and performs comparably to geographic gossip on
moderate-sized random geometric graph topologies.
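A compact sketch of the greedy update on a random geometric graph; the connectivity radius, network size, and iteration budget are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
pos = rng.random((n, 2))                          # nodes in the unit square
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
adj = (dist < 0.3) & ~np.eye(n, dtype=bool)       # connectivity radius 0.3
x = rng.random(n)                                 # initial node values
mean = x.mean()                                   # the consensus target

for _ in range(2000):
    i = rng.integers(n)                           # a random node wakes up
    nbrs = np.flatnonzero(adj[i])
    if nbrs.size == 0:
        continue
    # Greedy step: gossip with the neighbor whose value differs most from
    # our own, known through eavesdropping on earlier broadcasts.
    j = nbrs[np.argmax(np.abs(x[nbrs] - x[i]))]
    x[i] = x[j] = (x[i] + x[j]) / 2               # pairwise average

print("max deviation from the average:", np.abs(x - mean).max())
```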
|
0909.1933
|
Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and
Stationary $\beta$-Mixing Processes
|
cs.LG math.ST stat.ML stat.TH
|
Pac-Bayes bounds are among the most accurate generalization bounds for
classifiers learned from independently and identically distributed (IID) data,
and it is particularly so for margin classifiers: there have been recent
contributions showing how practical these bounds can be either to perform model
selection (Ambroladze et al., 2007) or even to directly guide the learning of
linear classifiers (Germain et al., 2009). However, there are many practical
situations where the training data show some dependencies and where the
traditional IID assumption does not hold. Stating generalization bounds for
such frameworks is therefore of the utmost interest, both from theoretical and
practical standpoints. In this work, we propose the first - to the best of our
knowledge - Pac-Bayes generalization bounds for classifiers trained on data
exhibiting interdependencies. The approach undertaken to establish our results
is based on the decomposition of a so-called dependency graph that encodes the
dependencies within the data, in sets of independent data, thanks to graph
fractional covers. Our bounds are very general, since being able to find an
upper bound on the fractional chromatic number of the dependency graph is
sufficient to get new Pac-Bayes bounds for specific settings. We show how our
results can be used to derive bounds for ranking statistics (such as Auc) and
classifiers trained on data distributed according to a stationary $\beta$-mixing
process. Along the way, we show how our approach seamlessly allows us to deal with
U-processes. As a side note, we also provide a Pac-Bayes generalization bound
for classifiers learned on data from stationary $\varphi$-mixing distributions.
|
0909.2009
|
A Fresh Look at Coding for q-ary Symmetric Channels
|
cs.IT math.IT
|
This paper studies coding schemes for the $q$-ary symmetric channel based on
binary low-density parity-check (LDPC) codes that work for any alphabet size
$q=2^m$, $m\in\mathbb{N}$, thus complementing some recently proposed
packet-based schemes requiring large $q$. First, theoretical optimality of a
simple layered scheme is shown, then a practical coding scheme based on a
simple modification of standard binary LDPC decoding is proposed. The decoder
is derived from first principles and using a factor-graph representation of a
front-end that maps $q$-ary symbols to groups of $m$ bits connected to a binary
code. The front-end can be processed with a complexity that is linear in
$m=\log_2 q$. An extrinsic information transfer chart analysis is carried out
and used for code optimization. Finally, it is shown how the same decoder
structure can also be applied to a larger class of $q$-ary channels.
|
0909.2017
|
Sparsity and `Something Else': An Approach to Encrypted Image Folding
|
cs.CV cs.IT math.IT
|
A property of sparse representations in relation to their capacity for
information storage is discussed. It is shown that this feature can be used for
an application that we term Encrypted Image Folding. The proposed procedure is
realizable through any suitable transformation. In particular, in this paper we
illustrate the approach by recourse to the Discrete Cosine Transform and a
combination of redundant Cosine and Dirac dictionaries. The main advantage of
the proposed technique is that both storage and encryption can be achieved
simultaneously using simple processing steps.
|
0909.2030
|
Size Bounds for Conjunctive Queries with General Functional Dependencies
|
cs.DB cs.DS
|
This paper extends the work of Gottlob, Lee, and Valiant (PODS 2009)[GLV],
and considers worst-case bounds for the size of the result Q(D) of a
conjunctive query Q to a database D given an arbitrary set of functional
dependencies. The bounds in [GLV] are based on a "coloring" of the query
variables. In order to extend the previous bounds to the setting of arbitrary
functional dependencies, we leverage tools from information theory to formalize
the original intuition that each color used represents some possible entropy of
that variable, and bound the maximum possible size increase via a linear
program that seeks to maximize how much more entropy is in the result of the
query than the input. This new view allows us to precisely characterize the
entropy structure of worst-case instances for conjunctive queries with simple
functional dependencies (keys), providing new insights into the results of
[GLV]. We extend these results to the case of general functional dependencies,
providing upper and lower bounds on the worst-case size increase. We identify
the fundamental connection between the gap in these bounds and a central open
question in information theory.
Finally, we show that, while both the upper and lower bounds are given by
exponentially large linear programs, one can distinguish in polynomial time
whether the result of a query with an arbitrary set of functional dependencies
can be any larger than the input database.
|
0909.2058
|
SocialScope: Enabling Information Discovery on Social Content Sites
|
cs.DB cs.HC cs.IR cs.PL
|
Recently, many content sites have started encouraging their users to engage
in social activities such as adding buddies on Yahoo! Travel and sharing
articles with their friends on New York Times. This has led to the emergence of
{\em social content sites}, a development facilitated by initiatives like
OpenID (http://www.openid.net/) and OpenSocial (http://www.opensocial.org/).
These community standards enable the open access to users' social profiles and
connections by individual content sites and are bringing content-oriented sites
and social networking sites ever closer. The integration of content and social
information raises new challenges for {\em information management and
discovery} over such sites. We propose a logical architecture, named
\kw{SocialScope}, consisting of three layers, for tackling the challenges. The
{\em content management} layer is responsible for integrating, maintaining and
physically accessing the content and social data. The {\em information
discovery} layer takes care of analyzing content to derive interesting new
information, and interpreting and processing the user's information need to
identify relevant information. Finally, the {\em information presentation}
layer explores the discovered information and helps users better understand it
in a principled way. We describe the challenges in each layer and propose
solutions for some of those challenges. In particular, we propose a uniform
algebraic framework, which can be leveraged to uniformly and flexibly specify
many of the information discovery and analysis tasks and provide the foundation
for the optimization of those tasks.
|
0909.2062
|
Inter-Operator Feedback in Data Stream Management Systems via
Punctuation
|
cs.DB
|
High-volume, high-speed data streams may overwhelm the capabilities of stream
processing systems; techniques such as data prioritization, avoidance of
unnecessary processing and on-demand result production may be necessary to
reduce processing requirements. However, the dynamic nature of data streams, in
terms of both rate and content, makes the application of such techniques
challenging. Such techniques have been addressed in the context of static and
centralized query optimization; however, they have not been fully addressed for
data stream management systems. In this work, we present a comprehensive
framework that supports prioritization, avoidance of unnecessary work, and
on-demand result production over distributed, unreliable, bursty, disordered
data sources, typical of many data streams. We propose a form of inter-operator
feedback, which flows against the stream direction, to communicate the
information needed to enable execution of these techniques. This feedback
leverages punctuations to describe the subsets of interest. We identify
potential sources of feedback information, characterize new types of
punctuation to support feedback, and describe the roles of producers,
exploiters, and relayers of feedback that query operators may implement. We
present initial experimental observations using the NiagaraST data-stream
system.
|
0909.2074
|
Sum Capacity of MIMO Interference Channels in the Low Interference
Regime
|
cs.IT math.IT
|
Using Gaussian inputs and treating interference as noise at the receivers has
recently been shown to be sum capacity achieving for the two-user single-input
single-output (SISO) Gaussian interference channel in a low interference
regime, where the interference levels are below certain thresholds. In this
paper, such a low interference regime is characterized for multiple-input
multiple-output (MIMO) Gaussian interference channels. Conditions are provided
on the direct and cross channel gain matrices under which using Gaussian inputs
and treating interference as noise at the receivers is sum capacity achieving.
For the special cases of the symmetric multiple-input single-output (MISO) and
single-input multiple-output (SIMO) Gaussian interference channels, more
explicit expressions for the low interference regime are derived. In
particular, the threshold on the interference levels that characterizes the low
interference regime is related to the input SNR and the angle between the
direct and cross channel gain vectors. It is shown that the low interference
regime can be quite significant for MIMO interference channels, with the low
interference threshold being at least as large as the sine of the angle between
the direct and cross channel gain vectors for the MISO and SIMO cases.
|
0909.2091
|
Paired Comparisons-based Interactive Differential Evolution
|
cs.AI
|
We propose Interactive Differential Evolution (IDE) based on paired
comparisons for reducing user fatigue and evaluate its convergence speed in
comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User
interface and convergence performance are two big keys for reducing Interactive
Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE,
users of the proposed IDE and tournament IGA do not need to compare all
individuals with one another but only pairs of individuals, which largely
decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate
another factor, IEC convergence performance, using IEC simulators and show that
our proposed IDE converges significantly faster than IGA and tournament IGA,
i.e. our proposed one is superior to others from both user interface and
convergence performance points of view.
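A minimal sketch of the idea, with a pseudo-IEC user implemented as a paired-comparison oracle that prefers candidates closer to a hidden target; all parameter values are illustrative, and selection uses only the ordering returned by the oracle, never a numeric fitness.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, npop, F, CR = 5, 20, 0.7, 0.9
hidden = rng.random(dim)        # the pseudo-user's ideal point

def user_prefers(a, b):
    """Pseudo-IEC user: prefers the candidate closer to the hidden target.
    A real IEC session replaces this with a human paired comparison."""
    return np.linalg.norm(a - hidden) < np.linalg.norm(b - hidden)

pop = rng.random((npop, dim))
for gen in range(200):
    for i in range(npop):
        r1, r2, r3 = rng.choice(npop, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])       # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # keep >= 1 mutant gene
        trial = np.where(cross, mutant, pop[i])
        if user_prefers(trial, pop[i]):                  # paired comparison:
            pop[i] = trial                               # only ordering is used

best = min(pop, key=lambda v: np.linalg.norm(v - hidden))
print("distance to the hidden target:", np.linalg.norm(best - hidden))
```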
|
0909.2194
|
Approximate Nearest Neighbor Search through Comparisons
|
cs.DS cs.DB cs.LG
|
This paper addresses the problem of finding the nearest neighbor (or one of
the R-nearest neighbors) of a query object q in a database of n objects. In
contrast with most existing approaches, we can only access the ``hidden'' space
in which the objects live through a similarity oracle. The oracle, given two
reference objects and a query object, returns the reference object closest to
the query object. The oracle attempts to model the behavior of human users,
capable of making statements about similarity, but not of assigning meaningful
numerical values to distances between objects.
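For contrast, here is a naive baseline that finds the exact nearest neighbor with n-1 oracle calls via a linear scan; the point of data structures in this setting is to need far fewer calls. The oracle below simulates the hidden space with Euclidean distance, an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
db = rng.random((500, 10))      # the hidden object space
query = rng.random(10)

def oracle(a, b):
    """Given two reference indices, return the one closer to the query.
    Here the hidden space is simulated with Euclidean distance; a human
    user making similarity judgments plays this role in practice."""
    if np.linalg.norm(db[a] - query) < np.linalg.norm(db[b] - query):
        return a
    return b

# Naive exact search: a linear scan needs n - 1 oracle calls.
best = 0
for i in range(1, len(db)):
    best = oracle(best, i)

true_nn = int(np.argmin(np.linalg.norm(db - query, axis=1)))
print("oracle winner:", best, "true nearest neighbor:", true_nn)
```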
|
0909.2234
|
Universal and Composite Hypothesis Testing via Mismatched Divergence
|
cs.IT cs.LG math.IT math.ST stat.TH
|
For the universal hypothesis testing problem, where the goal is to decide
between the known null hypothesis distribution and some other unknown
distribution, Hoeffding proposed a universal test in the nineteen sixties.
Hoeffding's universal test statistic can be written in terms of
Kullback-Leibler (K-L) divergence between the empirical distribution of the
observations and the null hypothesis distribution. In this paper a modification
of Hoeffding's test is considered based on a relaxation of the K-L divergence
test statistic, referred to as the mismatched divergence. The resulting
mismatched test is shown to be a generalized likelihood-ratio test (GLRT) for
the case where the alternate distribution lies in a parametric family of the
distributions characterized by a finite dimensional parameter, i.e., it is a
solution to the corresponding composite hypothesis testing problem. For certain
choices of the alternate distribution, it is shown that both the Hoeffding test
and the mismatched test have the same asymptotic performance in terms of error
exponents. A consequence of this result is that the GLRT is optimal in
differentiating a particular distribution from others in an exponential family.
It is also shown that the mismatched test has a significant advantage over the
Hoeffding test in terms of finite sample size performance. This advantage is
due to the difference in the asymptotic variances of the two test statistics
under the null hypothesis. In particular, the variance of the K-L divergence
grows linearly with the alphabet size, making the test impractical for
applications involving large alphabet distributions. The variance of the
mismatched divergence on the other hand grows linearly with the dimension of
the parameter space, and can hence be controlled through a prudent choice of
the function class defining the mismatched divergence.
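A minimal sketch of the Hoeffding statistic that the paper relaxes, computed as the K-L divergence between the empirical distribution and the null; the mismatched divergence itself, which restricts the optimization to a chosen function class, is not implemented here, and the alphabet size and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
alphabet, n = 8, 400
pi0 = np.full(alphabet, 1 / alphabet)       # known null distribution
alt = rng.dirichlet(np.ones(alphabet))      # some unknown alternative

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for finite alphabets."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hoeffding_stat(samples):
    """Hoeffding's universal test statistic: K-L divergence between the
    empirical distribution of the observations and the null."""
    emp = np.bincount(samples, minlength=alphabet) / len(samples)
    return kl(emp, pi0)

h0 = rng.choice(alphabet, size=n, p=pi0)
h1 = rng.choice(alphabet, size=n, p=alt)
print("statistic under H0:", hoeffding_stat(h0))   # close to zero
print("statistic under H1:", hoeffding_stat(h1))   # typically much larger
```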
|
0909.2290
|
Slicing: A New Approach to Privacy Preserving Data Publishing
|
cs.DB cs.CR
|
Several anonymization techniques, such as generalization and bucketization,
have been designed for privacy preserving microdata publishing. Recent work has
shown that generalization loses a considerable amount of information, especially
for high-dimensional data. Bucketization, on the other hand, does not prevent
membership disclosure and does not apply for data that do not have a clear
separation between quasi-identifying attributes and sensitive attributes.
In this paper, we present a novel technique called slicing, which partitions
the data both horizontally and vertically. We show that slicing preserves
better data utility than generalization and can be used for membership
disclosure protection. Another important advantage of slicing is that it can
handle high-dimensional data. We show how slicing can be used for attribute
disclosure protection and develop an efficient algorithm for computing the
sliced data that obey the l-diversity requirement. Our workload experiments
confirm that slicing preserves better utility than generalization and is more
effective than bucketization in workloads involving the sensitive attribute.
Our experiments also demonstrate that slicing can be used to prevent membership
disclosure.
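A toy sketch of the slicing idea: attributes are partitioned vertically into column groups and tuples horizontally into buckets, and each column group is independently permuted within a bucket to break the row-level linkage between quasi-identifiers and the sensitive attribute. The tiny table and partitions are illustrative, not the paper's algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "age":     [23, 27, 35, 52, 41, 60],
    "zipcode": [47677, 47602, 47678, 47905, 47909, 47906],
    "disease": ["flu", "flu", "cancer", "ulcer", "ulcer", "cancer"],
})

# Vertical partition: highly correlated QI columns together, the
# sensitive attribute in its own column group.
groups = [["age", "zipcode"], ["disease"]]
bucket = 3                                   # horizontal partition size

sliced = df.copy()
for start in range(0, len(df), bucket):
    idx = sliced.index[start:start + bucket]
    for cols in groups:
        # Independently permute each column group within the bucket,
        # breaking the row-level link between QI and sensitive values.
        sliced.loc[idx, cols] = sliced.loc[rng.permutation(idx), cols].values

print(sliced)
```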
|
0909.2292
|
Random Sampling Using Shannon Interpolation and Poisson Summation
Formulae
|
cs.IT cs.CE math.IT math.NA
|
This report focuses on the basic concepts of random sampling and on recovery
methods for it. The recovery methods involve the orthogonal matching
pursuit algorithm and the gradient-based total variation strategy. In
particular, a fast and efficient observation-matrix filling technique is
implemented via the classic Shannon interpolation and Poisson summation
formulae. Numerical results for a trigonometric signal, a
Gaussian-modulated sinusoidal pulse, and a square wave are demonstrated and
discussed. The work may assist future theoretical study and
practical implementation of random sampling.
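A minimal sketch of the observation-matrix filling step via Shannon's interpolation kernel, with least-squares recovery of the uniform-grid samples; the test signal, grid size, and sample count are illustrative assumptions, and the OMP and total-variation recovery methods are not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64
grid = np.arange(N)                         # Nyquist-rate sample grid
t = np.sort(rng.uniform(0, N - 1, 96))      # random sampling instants

# A bandlimited test signal: two tones well below the Nyquist frequency.
f = lambda u: np.sin(2 * np.pi * 0.05 * u) + 0.5 * np.cos(2 * np.pi * 0.11 * u)

# Observation matrix filled with Shannon's interpolation kernel:
# x(t) = sum_k x[k] sinc(t - k)  =>  A[i, k] = sinc(t_i - k).
A = np.sinc(t[:, None] - grid[None, :])

# Recover the uniform-grid samples by least squares and check the fit;
# truncation of the sinc series makes the window edges the least accurate.
x_hat, *_ = np.linalg.lstsq(A, f(t), rcond=None)
err = np.abs(x_hat - f(grid))
print("interior error:", err[8:-8].max(), "edge error:", err.max())
```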
|
0909.2309
|
Logic with Verbs
|
cs.AI cs.LO
|
The aim of this paper is to introduce a logic in which nouns and verbs are
handled together in deductive reasoning, and also to observe the relationship
between nouns and verbs as well as between logics and conversations.
|
0909.2336
|
Two-Phase Flow in Heterogeneous Media
|
cs.CE
|
In this study, we investigate the complexity of two-phase flow
(air-water) in a heterogeneous soil, where the porous medium is assumed to be
non-deformable and subject to time-dependent gas pressure. After
deriving the governing equations and specifying the capillary
pressure-saturation and permeability functions, the evolution of the model's
unknown parameters is obtained. Using COMSOL (FEMLAB) and its fluid-flow
script module, the role of heterogeneity in intrinsic permeability is
analysed, and the evolution of the relative permeability of the wetting and
non-wetting fluids, the capillary pressure, and other parameters is derived.
|
0909.2339
|
Back analysis based on SOM-RST system
|
cs.AI
|
This paper describes an application of information granulation theory to the
back analysis of the southeast wall of the Jeffrey mine, Quebec. In this
manner, using a combination of a Self Organizing Map (SOM) and rough set theory
(RST), crisp and rough granules are obtained, and the balancing of crisp
granules and sub-rough granules is carried out in a close-open iteration. The
combination of hard and soft computing, namely the finite difference method
(FDM) and computational intelligence, and the ability to take missing
information into account are the two main benefits of the proposed method. As a
practical example, a back analysis of the failure of the southeast wall of the
Jeffrey mine is accomplished.
|
0909.2345
|
Weblog Clustering in Multilinear Algebra Perspective
|
cs.IR
|
This paper describes a clustering method to group the most similar and
important weblogs with their descriptive shared words by using a technique from
multilinear algebra known as PARAFAC tensor decomposition. The proposed method
first creates a labeled-link network representation of the weblog datasets, where
the nodes are the blogs and the labels are the shared words. Then, a 3-way
adjacency tensor is extracted from the network and the PARAFAC decomposition is
applied to the tensor to get pairs of node lists and label lists with scores
attached to each list as the indication of the degree of importance. The
clustering is done by sorting the lists in decreasing order and taking the
pairs of top ranked blogs and words. Thus, unlike standard co-clustering
methods, this method not only groups the similar blogs with their descriptive
words but also tends to produce clusters of important blogs and descriptive
words.
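A generic CP/PARAFAC alternating-least-squares sketch applied to a toy blog-blog-word tensor; this is a textbook ALS implementation on a synthetic tensor, not the authors' pipeline or data.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: (J x R), (K x R) -> (J*K x R)."""
    return np.einsum("jr,kr->jkr", X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    """Plain alternating-least-squares CP/PARAFAC of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.random((d, rank)) for d in (I, J, K))
    for _ in range(iters):
        A = T.reshape(I, J * K) @ khatri_rao(B, C) @ \
            np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(T, 1, 0).reshape(J, I * K) @ khatri_rao(A, C) @ \
            np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(T, 2, 0).reshape(K, I * J) @ khatri_rao(A, B) @ \
            np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy network: T[b1, b2, w] = 1 if blogs b1 and b2 share word w (synthetic).
rng = np.random.default_rng(7)
T = (rng.random((6, 6, 4)) > 0.7).astype(float)
A, B, C = cp_als(T, rank=2)
# Sorting each factor column by magnitude pairs a list of important blogs
# with its descriptive words, one pair per component.
for r in range(2):
    print("component", r, "blogs:", np.argsort(-np.abs(A[:, r]))[:3],
          "words:", np.argsort(-np.abs(C[:, r]))[:2])
```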
|
0909.2358
|
Message Passing for Integrating and Assessing Renewable Generation in a
Redundant Power Grid
|
physics.soc-ph cond-mat.stat-mech cs.CE physics.data-an
|
A simplified model of a redundant power grid is used to study integration of
fluctuating renewable generation. The grid consists of a large number of
generator and consumer nodes. The net power consumption is determined by the
difference between the gross consumption and the level of renewable generation.
The gross consumption is drawn from a narrow distribution representing the
predictability of aggregated loads, and we consider two different distributions
representing wind and solar resources. Each generator is connected to D
consumers, and redundancy is built in by connecting R of these consumers to
other generators. The lines are switchable so that at any instance each
consumer is connected to a single generator. We explore the capacity of the
renewable generation by determining the level of "firm" generation capacity
that can be displaced for different levels of redundancy R. We also develop
a message-passing control algorithm for finding switch settings where no
generator is overloaded.
|
0909.2373
|
An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face
Image
|
cs.CV cs.CR
|
Biometrics-based personal identification is regarded as an effective method
for automatically recognizing, with high confidence, a person's identity.
Multimodal biometric systems consolidate the evidence presented by multiple
biometric sources and typically offer better recognition performance compared
to systems based on a single biometric modality. This paper proposes an
authentication method for a multimodal biometric identification system using
two traits, i.e., face and palmprint. The proposed system is designed for
applications where the training data contain a face and a palmprint.
Integrating the palmprint and face features increases the robustness of person
authentication. The final decision is made by fusion at the matching-score
level, in which feature vectors are created independently for the query
measures and are then compared to the enrolment templates stored during
database preparation. The multimodal biometric system is thus developed through
fusion of face and palmprint recognition.
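A minimal sketch of matching-score-level fusion with min-max normalization; the scores, weights, and decision rule below are illustrative stand-ins for the matchers described above.

```python
import numpy as np

def min_max(scores):
    """Normalize matcher scores to [0, 1] before fusion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(face_scores, palm_scores, w_face=0.5):
    """Matching-score-level fusion: a weighted sum of the normalized
    scores produced independently by the face and palmprint matchers."""
    return w_face * min_max(face_scores) + (1 - w_face) * min_max(palm_scores)

# Similarity of one query against five enrolment templates (higher = closer).
face = [0.62, 0.80, 0.55, 0.91, 0.40]
palm = [0.70, 0.65, 0.52, 0.88, 0.45]
fused = fuse(face, palm)
print("accepted identity:", int(np.argmax(fused)), "score:", fused.max())
```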
|
0909.2375
|
Similarity Matching Techniques for Fault Diagnosis in Automotive
Infotainment Electronics
|
cs.AI
|
Fault diagnosis has become a very important area of research during the last
decade due to the advancement of mechanical and electrical systems in
industries. The automobile is a crucial field where fault diagnosis is given a
special attention. Due to the increasing complexity and newly added features in
vehicles, a comprehensive study has to be performed in order to achieve an
appropriate diagnosis model. A diagnosis system is capable of identifying the
faults of a system by investigating the observable effects (or symptoms). The
system categorizes the fault into a diagnosis class and identifies a probable
cause based on the supplied fault symptoms. Fault categorization and
identification are done using similarity matching techniques. The development
of diagnosis classes is done by making use of previous experience, knowledge or
information within an application area. The necessary information used may come
from several sources of knowledge, such as from system analysis. In this paper
similarity matching techniques for fault diagnosis in automotive infotainment
applications are discussed.
|
0909.2376
|
Performing Hybrid Recommendation in Intermodal Transportation-the
FTMarket System's Recommendation Module
|
cs.AI
|
Diverse recommendation techniques have been already proposed and encapsulated
into several e-business applications, aiming to perform a more accurate
evaluation of the existing information and accordingly augment the assistance
provided to the users involved. This paper reports on the development and
integration of a recommendation module in an agent-based transportation
transactions management system. The module is built according to a novel hybrid
recommendation technique, which combines the advantages of collaborative
filtering and knowledge-based approaches. The proposed technique and supporting
module assist customers in considering in detail alternative transportation
transactions that satisfy their requests, as well as in evaluating completed
transactions. The related services are invoked through a software agent that
constructs the appropriate knowledge rules and performs a synthesis of the
recommendation policy.
|
0909.2379
|
Implementation of Rule Based Algorithm for Sandhi-Vicheda Of Compound
Hindi Words
|
cs.CL
|
Sandhi means joining two or more words to coin a new word. Sandhi literally
means `putting together' or combining (of sounds); it denotes all combinatory
sound-changes effected (spontaneously) for ease of pronunciation.
Sandhi-vicheda describes [5] the process by which one letter (whether single or
conjoined) is broken to form two words: part of the broken letter remains as
the last letter of the first word, and part of it forms the first letter of the
next word. Sandhi-vicheda is an easy and interesting technique that adds an
entirely new dimension to the traditional approach to Hindi teaching. In this
paper, using a rule-based algorithm, we report an accuracy of 60-80%, depending
upon the number of rules implemented.
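A toy sketch of a rule-based sandhi-vicheda splitter over transliterated text; the rule table below is an illustrative stand-in, not the paper's actual rule set.

```python
# Each rule maps a joined letter sequence to the (word-final, word-initial)
# pair it came from; rules are standard sandhi patterns in transliteration.
RULES = {
    "aa": ("a", "a"),   # dirgha sandhi: a + a -> aa
    "e":  ("a", "i"),   # guna sandhi:   a + i -> e
    "o":  ("a", "u"),   # guna sandhi:   a + u -> o
}

def sandhi_vicheda(word):
    """Return all candidate splits of a compound word by undoing rules."""
    splits = []
    for pos in range(1, len(word)):
        for joined, (left, right) in RULES.items():
            if word[pos:pos + len(joined)] == joined:
                splits.append(word[:pos] + left + " + " +
                              right + word[pos + len(joined):])
    return splits

print(sandhi_vicheda("vidyaalay"))  # ['vidya + alay']
```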
|
0909.2408
|
Coordination Capacity
|
cs.IT math.IT
|
We develop elements of a theory of cooperation and coordination in networks.
Rather than considering a communication network as a means of distributing
information, or of reconstructing random processes at remote nodes, we ask what
dependence can be established among the nodes given the communication
constraints. Specifically, in a network with communication rates {R_{i,j}}
between the nodes, we ask what is the set of all achievable joint distributions
p(x1, ..., xm) of actions at the nodes of the network. Several networks are
solved, including arbitrarily large cascade networks.
Distributed cooperation can be the solution to many problems such as
distributed games, distributed control, and establishing mutual information
bounds on the influence of one part of a physical system on another.
|
0909.2476
|
Design of an ultrasound-guided robotic brachytherapy needle insertion
system
|
cs.RO
|
In this paper we describe a new robotic brachytherapy needle-insertion system
that is designed to replace the template used in the manual technique. After a
brief review of existing robotic systems, we describe the requirements that we
based our design upon. A detailed description of the proposed system follows.
Our design is capable of positioning and inclining a needle within the same
workspace as the manual template. To help improve accuracy, the needle can be
rotated about its axis during insertion into the prostate. The system can be
mounted on existing steppers and also easily accommodates existing seed
dispensers, such as the Mick Applicator.
|
0909.2489
|
PrisCrawler: A Relevance Based Crawler for Automated Data Classification
from Bulletin Board
|
cs.IR
|
Nowadays people realize that it is difficult to find information simply and
quickly on bulletin boards. To solve this problem, the concept of a bulletin
board search engine has been proposed. This paper describes the priscrawler
system, a subsystem of the bulletin board search engine, which can
automatically crawl the classified attachments of a bulletin board and add
relevance information to them. Priscrawler utilizes the Attachrank algorithm to
generate the relevance between webpages and attachments, and then turns a
bulletin board into clearly classified and associated databases, making the
search for attachments greatly simplified. Moreover, it can effectively reduce
the complexity of the pretreatment and retrieval subsystems and improve search
precision. We provide experimental results to demonstrate the efficacy of
priscrawler.
|
0909.2496
|
Pavideoge: A Metadata Markup Video Structure in Video Search Engine
|
cs.IR
|
In this paper, we study the problems of video processing in video search
engines. Video has now become a very important kind of data on the Internet,
while searching for video is still a challenging task due to the inherent
properties of video: requiring enormous storage space, being independent, and
expressing information implicitly. To handle the properties of video more
effectively, we propose a new video processing method for video search engines.
In detail, the core of the new method is creating the pavideoge -- a new data
type that combines the advantages of video with those of webpages. The
pavideoge has four attributes: real link, videorank, text information and
playnum, each of which combines video properties with webpage properties. A
video search engine based on the pavideoge can retrieve video more effectively.
The experimental results show the encouraging performance of our approach:
based on the pavideoge, our video search engine retrieves more precise videos
in comparison with previous related work.
|
0909.2526
|
Two Optimal One-Error-Correcting Codes of Length 13 That Are Not Doubly
Shortened Perfect Codes
|
cs.IT math.IT
|
The doubly shortened perfect codes of length 13 are classified utilizing the
classification of perfect codes in [P.R.J. \"Osterg{\aa}rd and O. Pottonen, The
perfect binary one-error-correcting codes of length 15: Part I -
Classification, IEEE Trans. Inform. Theory, to appear]; there are 117821 such
(13,512,3) codes. By applying a switching operation to those codes, two more
(13,512,3) codes are obtained, which are then not doubly shortened perfect
codes.
|
0909.2542
|
Stochastic Optimization of Linear Dynamic Systems with Parametric
Uncertainties
|
cs.AI cs.IT math.IT
|
This paper describes a new approach to solving some stochastic optimization
problems for linear dynamic systems with various parametric uncertainties. The
proposed approach is based on the application of a tensor formalism for
creating the mathematical model of parametric uncertainties. Within the
proposed approach, the following problems are considered: prediction, data
processing, and optimal control. The outcomes of the simulations carried out
illustrate the properties and effectiveness of the proposed methods.
|
0909.2622
|
Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO
Wiretap Channels
|
cs.IT math.IT
|
We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel
model, where there exists a transmitter, a legitimate receiver and an
eavesdropper, each node equipped with multiple antennas. We study the problem
of finding the optimal input covariance matrix that achieves secrecy capacity
subject to a power constraint, which leads to a non-convex optimization problem
that is in general difficult to solve. Existing results for this problem
address the case in which the transmitter and the legitimate receiver have two
antennas each and the eavesdropper has one antenna. For the general cases, it
has been shown that the optimal input covariance matrix has low rank when the
difference between the Grams of the eavesdropper and the legitimate receiver
channel matrices is indefinite or semi-definite, while it may have low rank or
full rank when the difference is positive definite. In this paper, the
aforementioned non-convex optimization problem is investigated. In particular,
for the multiple-input single-output (MISO) wiretap channel, the optimal input
covariance matrix is obtained in closed form. For general cases, we derive the
necessary conditions for the optimal input covariance matrix consisting of a
set of equations. For the case in which the transmitter has two antennas, the
derived necessary conditions can result in a closed-form solution; for the case
in which the difference between the Grams is indefinite and has all negative
eigenvalues except one positive eigenvalue, the optimal input covariance matrix
has rank one and can be obtained in closed form; for other cases, the solution
is proved to be a fixed point of a mapping from a convex set to itself and an
iterative procedure is provided to search for it. Numerical results are
presented to illustrate the proposed theoretical findings.
|
0909.2623
|
Reducing Network Traffic in Unstructured P2P Systems Using Top-k Queries
|
cs.DB
|
A major problem of unstructured P2P systems is their heavy network traffic.
This is caused mainly by high numbers of query answers, many of which are
irrelevant for users. One solution to this problem is to use Top-k queries
whereby the user can specify a limited number (k) of the most relevant answers.
In this paper, we present FD, a (Fully Distributed) framework for executing
Top-k queries in unstructured P2P systems, with the objective of reducing
network traffic. FD consists of a family of algorithms that are simple but
effective. FD is completely distributed, does not depend on the existence of
certain peers, and addresses the volatility of peers during query execution. We
validated FD through implementation over a 64-node cluster and simulation
using the BRITE topology generator and SimJava. Our performance evaluation
shows that FD can achieve major performance gains in terms of communication and
response time.
|
0909.2626
|
Reference Resolution within the Framework of Cognitive Grammar
|
cs.CL
|
Following the principles of Cognitive Grammar, we concentrate on a model for
reference resolution that attempts to overcome the difficulties of previous
approaches. It is based on the fundamental assumption that all reference
(independent of the type of referring expression) is accomplished via access to and
restructuring of domains of reference rather than by direct linkage to the
entities themselves. The model accounts for entities not explicitly mentioned
but understood in a discourse, and enables exploitation of discursive and
perceptual context to limit the set of potential referents for a given
referring expression. As the most important feature, we note that a single
mechanism is required to handle what are typically treated as diverse
phenomena. Our approach, then, provides a fresh perspective on the relations
between Cognitive Grammar and the problem of reference.
|
0909.2705
|
SET: an algorithm for consistent matrix completion
|
cs.IT math.IT
|
A new algorithm, termed subspace evolution and transfer (SET), is proposed
for solving the consistent matrix completion problem. In this setting, one is
given a subset of the entries of a low-rank matrix, and asked to find one
low-rank matrix consistent with the given observations. We show that this
problem can be solved by searching for a column space that matches the
observations. The corresponding algorithm consists of two parts -- subspace
evolution and subspace transfer. In the evolution part, we use a line search
procedure to refine the column space. However, line search is not guaranteed to
converge, as there may exist barriers along the search path that prevent the
algorithm from reaching a global optimum. To address this problem, in the
transfer part, we design mechanisms to detect barriers and transfer the
estimated column space from one side of the barrier to the other. The SET
algorithm exhibits excellent empirical performance for very low-rank matrices.
|
0909.2715
|
Marking-up multiple views of a Text: Discourse and Reference
|
cs.CL
|
We describe an encoding scheme for discourse structure and reference, based
on the TEI Guidelines and the recommendations of the Corpus Encoding
Specification (CES). A central feature of the scheme is a CES-based data
architecture enabling the encoding of and access to multiple views of a
marked-up document. We describe a tool architecture that supports the encoding
scheme, and then show how we have used the encoding scheme and the tools to
perform a discourse analytic task in support of a model of global discourse
cohesion called Veins Theory (Cristea & Ide, 1998).
|
0909.2718
|
A Common XML-based Framework for Syntactic Annotations
|
cs.CL
|
It is widely recognized that the proliferation of annotation schemes runs
counter to the need to re-use language resources, and that standards for
linguistic annotation are becoming increasingly mandatory. To answer this need,
we have developed a framework comprised of an abstract model for a variety of
different annotation types (e.g., morpho-syntactic tagging, syntactic
annotation, co-reference annotation, etc.), which can be instantiated in
different ways depending on the annotator's approach and goals. In this paper
we provide an overview of the framework, demonstrate its applicability to
syntactic annotation, and show how it can contribute to comparative evaluation
of parser output and diverse syntactic annotation schemes.
|