| id | title | categories | abstract |
|---|---|---|---|
cs/0702162
|
Distributed Power Allocation with Rate Constraints in Gaussian Parallel
Interference Channels
|
cs.IT cs.GT math.IT
|
This paper considers the minimization of transmit power in Gaussian parallel
interference channels, subject to a rate constraint for each user. To derive
decentralized solutions that do not require any cooperation among the users, we
formulate this power control problem as a (generalized) Nash equilibrium game.
We obtain sufficient conditions that guarantee the existence and nonemptiness
of the solution set to our problem. Then, to compute the solutions of the game,
we propose two distributed algorithms based on the single user waterfilling
solution: The \emph{sequential} and the \emph{simultaneous} iterative
waterfilling algorithms, wherein the users update their own strategies
sequentially and simultaneously, respectively. We derive a unified set of
sufficient conditions that guarantee the uniqueness of the solution and global
convergence of both algorithms. Our results are applicable to all practical
distributed multipoint-to-multipoint interference systems, either wired or
wireless, where a quality of service in terms of information rate must be
guaranteed for each link.
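The single-user waterfilling building block behind both iterative algorithms can be sketched as follows. This is an illustrative sketch only: the function name, the bisection on the water level, and the noise-level parameterization are our assumptions, not the paper's formulation.

```python
import math

def waterfill_to_rate(noise, target_rate, tol=1e-9):
    """Single-user waterfilling: find the minimum-power allocation over
    parallel channels with noise levels `noise` that meets `target_rate`
    (in bits). Power on channel k is max(mu - n_k, 0); we bisect on the
    water level mu, since the achieved rate is increasing in mu."""
    def rate(mu):
        # channels with n >= mu get zero power and contribute no rate
        return sum(math.log2(mu / n) for n in noise if mu > n)

    lo, hi = min(noise), min(noise) + 1.0
    while rate(hi) < target_rate:   # grow the bracket until rate is reachable
        hi *= 2.0
    while hi - lo > tol:            # bisect on the water level
        mid = 0.5 * (lo + hi)
        if rate(mid) < target_rate:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]
```

In the sequential variant each user would rerun this step against the interference-plus-noise levels left by the others; in the simultaneous variant all users update at once.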
|
cs/0702163
|
First Passage Time for Multivariate Jump-diffusion Stochastic Models
With Applications in Finance
|
cs.CE cs.NA
|
The "first passage time" (FPT) problem is an important problem with a wide
range of applications in mathematics, physics, biology and finance.
Mathematically, such a problem can be reduced to estimating the probability of
a (stochastic) process first to reach a critical level or threshold. While in
other areas of applications the FPT problem can often be solved analytically,
in finance we usually have to resort to the application of numerical
procedures, in particular when we deal with jump-diffusion stochastic processes
(JDP). In this paper, we develop a Monte-Carlo-based methodology for the
solution of the FPT problem in the context of a multivariate jump-diffusion
stochastic process. The developed methodology is tested using different
parameters; the simulation results indicate that the developed methodology is
much more efficient than the conventional Monte Carlo method. It is an
efficient tool for further practical applications, such as the analysis of
default correlation and predicting barrier options in finance.
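For orientation, the conventional Monte Carlo baseline that such work improves on can be sketched for a univariate jump-diffusion as below. All names, the Euler discretization, and the parameter choices are illustrative assumptions, not the paper's accelerated method.

```python
import math, random

def fpt_probability(x0, barrier, T, mu, sigma, lam, jump_mu, jump_sigma,
                    n_paths=20000, n_steps=200, seed=1):
    """Plain Monte Carlo estimate of P(first passage below `barrier` by T)
    for dX = mu dt + sigma dW + jumps, where jumps arrive as a compound
    Poisson process (rate `lam`) with Normal(jump_mu, jump_sigma) sizes.
    Euler scheme on a fixed time grid; each path stops at its first hit."""
    rng = random.Random(seed)
    dt = T / n_steps
    hits = 0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # with small dt, at most one jump per step is the dominant event
            if rng.random() < lam * dt:
                x += rng.gauss(jump_mu, jump_sigma)
            if x <= barrier:
                hits += 1
                break
    return hits / n_paths
```

The inefficiency the abstract refers to shows up here directly: rare crossings need very many paths, and the fixed grid can miss crossings between steps.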
|
cs/0702164
|
Monte-Carlo Simulations of the First Passage Time for Multivariate
Jump-Diffusion Processes in Financial Applications
|
cs.CE cs.NA
|
Many problems in finance require the information on the first passage time
(FPT) of a stochastic process. Mathematically, such problems are often reduced
to the evaluation of the probability density of the time for such a process to
cross a certain level, a boundary, or to enter a certain region. While in other
areas of applications the FPT problem can often be solved analytically, in
finance we usually have to resort to the application of numerical procedures,
in particular when we deal with jump-diffusion stochastic processes (JDP). In
this paper, we propose a Monte-Carlo-based methodology for the solution of the
first passage time problem in the context of multivariate (and correlated)
jump-diffusion processes. The developed technique provides an efficient tool for
a number of applications, including credit risk and option pricing. We
demonstrate its applicability to the analysis of the default rates and default
correlations of several different, but correlated firms via a set of empirical
data.
|
cs/0702165
|
Efficient estimation of default correlation for multivariate
jump-diffusion processes
|
cs.CE cs.NA
|
Evaluation of default correlation is an important task in credit risk
analysis. In many practical situations, it concerns the joint defaults of
several correlated firms, a task that is reducible to a first passage time
(FPT) problem. This task represents a great challenge for jump-diffusion
processes (JDP), where except for very basic cases, there are no analytical
solutions for such problems. In this contribution, we generalize our previous
fast Monte-Carlo method (non-correlated jump-diffusion cases) for multivariate
(and correlated) jump-diffusion processes. This generalization allows us, among
other things, to evaluate the default events of several correlated assets based
on a set of empirical data. The developed technique is an efficient tool for a
number of other applications, including credit risk and option pricing.
|
cs/0702166
|
Solving Stochastic Differential Equations with Jump-Diffusion
Efficiently: Applications to FPT Problems in Credit Risk
|
cs.CE cs.NA
|
The first passage time (FPT) problem is ubiquitous in many applications. In
finance, we often have to deal with stochastic processes with jump-diffusion,
so that the FPT problem is reducible to a stochastic differential equation with
jump-diffusion. While the application of the conventional Monte-Carlo procedure
is possible for the solution of the resulting model, it becomes computationally
inefficient, which severely restricts its applicability in many practically
interesting cases. In this contribution, we focus on the development of
efficient Monte-Carlo-based computational procedures for solving the FPT
problem under the multivariate (and correlated) jump-diffusion processes. We
also discuss the implementation of the developed Monte-Carlo-based technique
for multivariate jump-diffusion processes driven by several compound Poisson
shocks. Finally, we demonstrate the application of the developed methodologies
for analyzing the default rates and default correlations of differently rated
firms via historical data.
|
cs/0702167
|
Finite Volume Analysis of Nonlinear Thermo-mechanical Dynamics of Shape
Memory Alloys
|
cs.CE cs.NA
|
In this paper, the finite volume method is developed to analyze coupled
dynamic problems of nonlinear thermoelasticity. The major focus is given to the
description of martensitic phase transformations essential in the modelling of
shape memory alloys. Computational experiments are carried out to study the
thermo-mechanical wave interactions in a shape memory alloy rod and a patch.
Both mechanically and thermally induced phase transformations, as well as
hysteresis effects, in a one-dimensional structure are successfully simulated
with the developed methodology. In the two-dimensional case, the main focus is
given to square-to-rectangular transformations and examples of martensitic
combinations under different mechanical loadings are provided.
|
cs/0702168
|
Simulation of Phase Combinations in Shape Memory Alloys Patches by
Hybrid Optimization Methods
|
cs.CE cs.NA
|
In this paper, phase combinations among martensitic variants in shape memory
alloy patches and bars are simulated by a hybrid optimization methodology. The
mathematical model is based on the Landau theory of phase transformations. Each
stable phase is associated with a local minimum of the free energy function,
and the phase combinations are simulated by minimizing the bulk energy. At low
temperature, the free energy function has double potential wells leading to
non-convexity of the optimization problem. The methodology proposed in the
present paper is based on an initial estimate of the global solution by a
genetic algorithm, followed by a refined quasi-Newton procedure to locally
refine the optimum. By combining the local and global search algorithms, the
phase combinations are successfully simulated. Numerical experiments are
presented for the phase combinations in a SMA patch under several typical
mechanical loadings.
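The global-then-local strategy described above can be illustrated on a one-dimensional double-well function. This toy sketch is hypothetical throughout: it uses a very small mutate-and-select loop for the genetic stage and substitutes a shrinking-step local search for the quasi-Newton refinement.

```python
import random

def hybrid_minimize(f, lo, hi, pop=40, gens=30, seed=3):
    """Toy global-then-local minimization of f on [lo, hi]: a small
    genetic-style search provides an initial estimate of the global
    minimizer, then a shrinking-step local search polishes it."""
    rng = random.Random(seed)
    # -- global stage: keep the fitter half, refill with mutated copies
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=f)
        xs = xs[: pop // 2]
        xs += [min(max(x + rng.gauss(0, (hi - lo) / 20), lo), hi) for x in xs]
    best = min(xs, key=f)
    # -- local stage: pattern search with a shrinking step
    step = (hi - lo) / 20
    while step > 1e-9:
        for cand in (best - step, best + step):
            if lo <= cand <= hi and f(cand) < f(best):
                best = cand
                break
        else:
            step /= 2
    return best
```

On a tilted double well such as f(x) = (x^2 - 1)^2 + 0.2 x, the global stage reliably lands in the deeper well near x = -1, which a purely local method started at random could miss; this is the non-convexity issue the abstract raises.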
|
cs/0702170
|
Generic Global Constraints based on MDDs
|
cs.AI
|
Constraint Programming (CP) has been successfully applied to both constraint
satisfaction and constraint optimization problems. A wide variety of
specialized global constraints provide critical assistance in achieving a good
model that can take advantage of the structure of the problem in the search for
a solution. However, a key outstanding issue is the representation of 'ad-hoc'
constraints that do not have an inherent combinatorial nature, and hence are
not modeled well using narrowly specialized global constraints. We attempt to
address this issue by considering a hybrid of search and compilation.
Specifically we suggest the use of Reduced Ordered Multi-Valued Decision
Diagrams (ROMDDs) as the supporting data structure for a generic global
constraint. We give an algorithm for maintaining generalized arc consistency
(GAC) on this constraint that amortizes the cost of the GAC computation over a
root-to-leaf path in the search tree without requiring asymptotically more
space than used for the MDD. Furthermore we present an approach for
incrementally maintaining the reduced property of the MDD during the search,
and show how this can be used for providing domain entailment detection.
Finally we discuss how to apply our approach to other similar data structures
such as AOMDDs and Case DAGs. The technique used can be seen as an extension of
the GAC algorithm for the regular language constraint on finite length input.
|
cs/0702172
|
Numerical Model For Vibration Damping Resulting From the First Order
Phase Transformations
|
cs.CE cs.NA
|
A numerical model is constructed for modelling macroscale damping effects
induced by the first order martensite phase transformations in a shape memory
alloy rod. The model is constructed on the basis of the modified
Landau-Ginzburg theory that couples nonlinear mechanical and thermal fields.
The free energy function for the model is constructed as a double well function
at low temperature, such that the external energy can be absorbed during the
phase transformation and converted into thermal form. The Chebyshev spectral
methods are employed together with backward differentiation for the numerical
analysis of the problem. Computational experiments performed for different
vibration energies demonstrate the importance of taking into account damping
effects induced by phase transformations.
|
cs/0703002
|
Integral Biomathics: A Post-Newtonian View into the Logos of Bios (On
the New Meaning, Relations and Principles of Life in Science)
|
cs.NE cs.CC
|
This work is an attempt at a state-of-the-art survey of natural and life
sciences with the goal to define the scope and address the central questions of
an original research program. It is focused on the phenomena of emergence,
adaptive dynamics and evolution of self-assembling, self-organizing,
self-maintaining and self-replicating biosynthetic systems viewed from a
newly-arranged perspective and understanding of computation and communication
in the living nature.
|
cs/0703005
|
State Amplification
|
cs.IT math.IT
|
We consider the problem of transmitting data at rate R over a state dependent
channel p(y|x,s) with the state information available at the sender and at the
same time conveying the information about the channel state itself to the
receiver. The amount of state information that can be learned at the receiver
is captured by the mutual information I(S^n; Y^n) between the state sequence
S^n and the channel output Y^n. The optimal tradeoff is characterized between
the information transmission rate R and the state uncertainty reduction rate
\Delta, when the state information is either causally or noncausally available
at the sender. This result is closely related and in a sense dual to a recent
study by Merhav and Shamai, which solves the problem of masking the state
information from the receiver rather than conveying it.
|
cs/0703016
|
Outage Probability of Multiple-Input Single-Output (MISO) Systems with
Delayed Feedback
|
cs.IT math.IT
|
We investigate the effect of feedback delay on the outage probability of
multiple-input single-output (MISO) fading channels. Channel state information
at the transmitter (CSIT) is a delayed version of the channel state information
available at the receiver (CSIR). We consider two cases of CSIR: (a) perfect
CSIR and (b) CSI estimated at the receiver using training symbols. With perfect
CSIR, under a short-term power constraint, we determine: (a) the outage
probability for beamforming with imperfect CSIT (BF-IC) analytically, and (b)
the optimal spatial power allocation (OSPA) scheme that minimizes outage
numerically. Results show that, for delayed CSIT, BF-IC is close to optimal for
low SNR and uniform spatial power allocation (USPA) is close to optimal at high
SNR. Similarly, under a long-term power constraint, we show that BF-IC is close
to optimal for low SNR and USPA is close to optimal at high SNR. With imperfect
CSIR, we obtain an upper bound on the outage probability with USPA and BF-IC.
Results show that the loss in performance due to imperfection in CSIR is not
significant, if the training power is chosen appropriately.
|
cs/0703017
|
Performance Bounds for Bi-Directional Coded Cooperation Protocols
|
cs.IT math.IT
|
In coded bi-directional cooperation, two nodes wish to exchange messages over
a shared half-duplex channel with the help of a relay. In this paper, we derive
performance bounds for this problem for each of three protocols.
The first protocol is a two-phase protocol where both users simultaneously
transmit during the first phase and the relay alone transmits during the
second. In this protocol, our bounds are tight and a multiple-access channel
transmission from the two users to the relay followed by a coded broadcast-type
transmission from the relay to the users achieves all points in the two-phase
capacity region.
The second protocol considers sequential transmissions from the two users
followed by a transmission from the relay while the third protocol is a hybrid
of the first two protocols and has four phases. In the latter two protocols the
inner and outer bounds are not identical, and differ in a manner similar to the
inner and outer bounds of Cover's relay channel. Numerical evaluation shows
that at least in some cases of interest our bounds do not differ significantly.
Finally, in the Gaussian case with path loss, we derive achievable rates and
compare the relative merits of each protocol in various regimes. This case is
of interest in cellular systems. Surprisingly, we find that in some cases the
achievable rate region of the four-phase protocol contains points that are
outside the outer bounds of the other protocols.
|
cs/0703022
|
Rate of Channel Hardening of Antenna Selection Diversity Schemes and Its
Implication on Scheduling
|
cs.IT math.IT
|
For a multiple antenna system, we compute the asymptotic distribution of
antenna selection gain when the transmitter selects the transmit antenna with
the strongest channel. We use this to asymptotically estimate the underlying
channel capacity distributions, and demonstrate that unlike
multiple-input/multiple-output (MIMO) systems, the channel for antenna
selection systems hardens at a slower rate, and thus a significant multiuser
scheduling gain can exist: O(1/log m) for antenna selection as opposed to
O(1/sqrt(m)) for MIMO, where m is the number of transmit antennas.
Additionally, even without this scheduling gain, it is demonstrated that
transmit antenna selection systems outperform open loop MIMO systems in low
signal-to-interference-plus-noise ratio (SINR) regimes, particularly for a
small number of receive antennas. This may have some implications on wireless
system design, because most of the users in modern wireless systems have low
SINRs.
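The slower hardening of the selection channel can be seen in a quick simulation. This sketch assumes i.i.d. Rayleigh fading (exponential power gains) and compares the relative spread of the selection capacity with that of an open-loop average over antennas; all names and the spread measure are our illustrative choices.

```python
import math, random

def selection_vs_mimo_spread(m, snr=1.0, trials=5000, seed=7):
    """Monte Carlo sketch: coefficient of variation (std/mean) of
    (a) selection capacity log2(1 + snr * max_k g_k) and
    (b) open-loop capacity log2(1 + snr * mean_k g_k),
    for i.i.d. gains g_k ~ Exp(1). Selection fluctuates more, i.e. it
    "hardens" more slowly, leaving more multiuser scheduling gain."""
    rng = random.Random(seed)
    sel, avg = [], []
    for _ in range(trials):
        g = [rng.expovariate(1.0) for _ in range(m)]
        sel.append(math.log2(1.0 + snr * max(g)))
        avg.append(math.log2(1.0 + snr * sum(g) / m))
    def cv(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return math.sqrt(var) / mu
    return cv(sel), cv(avg)
```

For m = 64 the extreme-value fluctuations of the maximum keep the selection capacity noticeably more spread out than the averaged channel, consistent with the O(1/log m) versus O(1/sqrt(m)) rates quoted above.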
|
cs/0703024
|
Algorithmic Information Theory: a brief non-technical guide to the field
|
cs.IT cs.CC math.IT
|
This article is a brief guide to the field of algorithmic information theory
(AIT), its underlying philosophy, and the most important concepts. AIT arises
by mixing information theory and computation theory to obtain an objective and
absolute notion of information in an individual object, and in so doing gives
rise to an objective and robust notion of randomness of individual objects.
This is in contrast to classical information theory that is based on random
variables and communication, and has no bearing on information and randomness
of individual objects. After a brief overview, the major subfields,
applications, history, and a map of the field are presented.
|
cs/0703027
|
Interroger un corpus par le sens
|
cs.CL cs.IR
|
In textual knowledge management, statistical methods prevail. Nonetheless,
some difficulties cannot be overcome by these methodologies. I propose a
symbolic approach using a complete textual analysis to identify which analysis
level can improve the answers provided by a system. The approach identifies
word senses and relations between words and generates as many rephrasings as
possible. Using synonyms and derivatives, the system provides new utterances
without changing the original meaning of the sentences. In this way,
information can be retrieved regardless of how the question or answer is worded.
|
cs/0703033
|
Time Warp Edit Distance with Stiffness Adjustment for Time Series
Matching
|
cs.IR
|
In a way similar to the string-to-string correction problem, we address time
series similarity in the light of a time-series-to-time-series correction
problem for which the similarity between two time series is measured as the
minimum cost sequence of "edit operations" needed to transform one time series
into another. To define the "edit operations" we use the paradigm of a
graphical editing process and end up with a dynamic programming algorithm that
we call Time Warp Edit Distance (TWED). TWED is slightly different in form from
Dynamic Time Warping, Longest Common Subsequence or Edit Distance with Real
Penalty algorithms. In particular, it highlights a parameter which drives a
kind of stiffness of the elastic measure along the time axis. We show that the
similarity provided by TWED is a metric potentially useful in time series
retrieval applications since it could benefit from the triangular inequality
property to speed up the retrieval process while tuning the parameters of the
elastic measure. In that context, a lower bound is derived to relate the
matching of time series into down sampled representation spaces to the matching
into the original space. The empirical quality of the TWED distance is
evaluated on a simple classification task. Compared to Edit Distance, Dynamic
Time Warping, Longest Common Subsequence, and Edit Distance with Real Penalty,
TWED proves quite effective on the considered experimental task.
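A minimal sketch of the TWED recursion as it is commonly stated, for 1-D series with an absolute-difference ground cost; variable names and the zero-padding convention are ours, and `nu` is the stiffness parameter the abstract highlights.

```python
def twed(a, ta, b, tb, nu=0.001, lam=1.0):
    """Time Warp Edit Distance between series `a`, `b` with time stamps
    `ta`, `tb`. `nu` penalizes warping along the time axis (stiffness),
    `lam` is the constant deletion penalty. O(len(a)*len(b)) DP."""
    # pad with a dummy sample at time 0 so indices start at 1
    a, ta = [0.0] + list(a), [0.0] + list(ta)
    b, tb = [0.0] + list(b), [0.0] + list(tb)
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * m for _ in range(n)]
    D[0][0] = 0.0
    for i in range(1, n):
        for j in range(1, m):
            # delete a_i / delete b_j / match both pairs
            del_a = D[i-1][j] + abs(a[i] - a[i-1]) + nu * (ta[i] - ta[i-1]) + lam
            del_b = D[i][j-1] + abs(b[j] - b[j-1]) + nu * (tb[j] - tb[j-1]) + lam
            match = (D[i-1][j-1] + abs(a[i] - b[j]) + abs(a[i-1] - b[j-1])
                     + nu * (abs(ta[i] - tb[j]) + abs(ta[i-1] - tb[j-1])))
            D[i][j] = min(del_a, del_b, match)
    return D[n-1][m-1]
```

With `nu` large, warping across mismatched time stamps becomes expensive and the measure stiffens toward a lock-step comparison; with `nu` near zero it behaves more like an elastic edit distance.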
|
cs/0703034
|
Nanoscale Communication with Brownian Motion
|
cs.IT math.IT
|
In this paper, the problem of communicating using chemical messages
propagating using Brownian motion, rather than electromagnetic messages
propagating as waves in free space or along a wire, is considered. This problem
is motivated by nanotechnological and biotechnological applications, where the
energy cost of electromagnetic communication might be prohibitive. Models are
given for communication using particles that propagate with Brownian motion,
and achievable capacity results are given. Under conservative assumptions, it
is shown that rates exceeding one bit per particle are achievable.
|
cs/0703035
|
On the Distortion SNR Exponent of Some Layered Transmission Schemes
|
cs.IT math.IT
|
We consider the problem of joint source-channel coding for transmitting K
samples of a complex Gaussian source over T = bK uses of a block-fading
multiple input multiple output (MIMO) channel with M transmit and N receive
antennas. We consider the case when we are allowed to code over L blocks. The
channel gain is assumed to be constant over a block and channel gains for
different blocks are assumed to be independent. The performance measure of
interest is the rate of decay of the expected mean squared error with the
signal-to-noise ratio (SNR), called the distortion SNR exponent. We first show
that using a broadcast strategy of Gunduz and Erkip, but with a different power
and rate allocation policy, the optimal distortion SNR exponent can be achieved
for bandwidth efficiencies 0 < b < (|N-M|+1)/min(M,N). This is the first time
the optimal exponent is characterized for 1/min(M,N) < b < (|N-M|+1)/min(M,N).
Also, for b > MNL^2, we show that the broadcast scheme achieves the optimal
exponent of MNL. Special cases of this result have been derived for the L=1
case and for M=N=1 by Gunduz and Erkip. We then propose a digital layered
transmission scheme that uses both time layering and superposition. This
includes many previously known schemes as special cases. The proposed scheme is
at least as good as the currently best known schemes for the entire range of
bandwidth efficiencies, whereas at least for some M, N, and b, it is strictly
better than the currently best known schemes.
|
cs/0703036
|
Constructions of Grassmannian Simplices
|
cs.IT math.IT
|
In this article an explicit method (relying on representation theory) to
construct packings in Grassmannian space is presented. Infinite families of
configurations having only one non-trivial set of principal angles are found
using 2-transitive groups. These packings are proved to reach the simplex bound
and are therefore optimal w.r.t. the chordal distance. The construction is
illustrated by an example on the symmetric group. Some natural extensions and
consequences of this construction are then given.
|
cs/0703038
|
Delay and Throughput Optimal Scheduling for OFDM Broadcast Channels
|
cs.IT math.IT
|
In this paper, a scheduling policy is presented that minimizes the average
delay of the users. The scheduling scheme is investigated both by analysis and
by simulations carried out in the context of Orthogonal Frequency Division
Multiplexing (OFDM) broadcast channels (BC). First, delay optimality is
established for a static scenario by providing solutions for specific
subproblems; the analysis is then carried over to the dynamic scheme.
Furthermore, auxiliary tools are given for proving throughput optimality.
Finally, simulations show the superior performance of the presented scheme.
|
cs/0703042
|
Recommender System for Online Dating Service
|
cs.IR cs.SE
|
Users of online dating sites are facing information overload that requires
them to manually construct queries and browse huge amounts of matching user
profiles. This becomes even more problematic for multimedia profiles. Although
matchmaking is frequently cited as a typical application for recommender
systems, there is a surprising lack of work published in this area. In this
paper we describe a recommender system we implemented and perform a
quantitative comparison of two collaborative filtering (CF) and two global
algorithms. Results show that collaborative filtering recommenders
significantly outperform global algorithms that are currently used by dating
sites. A blind experiment with real users also confirmed that users prefer CF
based recommendations to global popularity recommendations. Recommender systems
show a great potential for online dating where they could improve the value of
the service to users and improve monetization of the service.
|
cs/0703045
|
Performance Bounds on Sparse Representations Using Redundant Frames
|
cs.IT math.IT
|
We consider approximations of signals by the elements of a frame in a complex
vector space of dimension $N$ and formulate both the noiseless and the noisy
sparse representation problems. The noiseless representation problem is to find
sparse representations of a signal $\mathbf{r}$ given that such representations
exist. In this case, we explicitly construct a frame, referred to as the
Vandermonde frame, for which the noiseless sparse representation problem can be
solved uniquely using $O(N^2)$ operations, as long as the number of non-zero
coefficients in the sparse representation of $\mathbf{r}$ is $\epsilon N$ for
some $0 \le \epsilon \le 0.5$, thus improving on a result of Candes and Tao
\cite{Candes-Tao}. We also show that $\epsilon \le 0.5$ cannot be relaxed
without violating uniqueness.
The noisy sparse representation problem is to find sparse representations of
a signal $\mathbf{r}$ satisfying a distortion criterion. In this case, we
establish a lower bound on the trade-off between the sparsity of the
representation, the underlying distortion and the redundancy of any given
frame.
|
cs/0703046
|
Optimal Power Allocation for Distributed Detection over MIMO Channels in
Wireless Sensor Networks
|
cs.IT math.IT
|
In distributed detection systems with wireless sensor networks, the
communication between sensors and a fusion center is not perfect due to
interference and limited transmitter power at the sensors to combat noise at
the fusion center's receiver. The problem of optimizing detection performance
with such imperfect communication brings a new challenge to distributed
detection. In this paper, sensors are assumed to have independent but
nonidentically distributed observations, and a multi-input/multi-output (MIMO)
channel model is included to account for imperfect communication between the
sensors and the fusion center. The J-divergence between the distributions of
the detection statistic under different hypotheses is used as a performance
criterion in order to provide a tractable analysis. Optimizing the performance
(in terms of the J-divergence) with individual and total transmitter power
constraints on the sensors is studied, and the corresponding power allocation
scheme is provided. It is interesting to see that the proposed power allocation
is a tradeoff between two factors, the communication channel quality and the
local decision quality. For the case with orthogonal channels under certain
conditions, the power allocation can be solved by a weighted water-filling
algorithm. Simulations show that, to achieve the same performance, the proposed
power allocation in certain cases consumes as little as 25 percent of the
total power used by an equal power allocation scheme.
|
cs/0703047
|
Precoding for the AWGN Channel with Discrete Interference
|
cs.IT math.IT
|
$M$-ary signal transmission over AWGN channel with additive $Q$-ary
interference where the sequence of i.i.d. interference symbols is known
causally at the transmitter is considered. Shannon's theorem for channels with
side information at the transmitter is used to formulate the capacity of the
channel. It is shown that by using at most $MQ-Q+1$ out of $M^Q$ input symbols
of the \emph{associated} channel, the capacity is achievable. For the special
case where the Gaussian noise power is zero, a sufficient condition, which is
independent of interference, is given for the capacity to be $\log_2 M$ bits
per channel use. The problem of maximization of the transmission rate under the
constraint that the channel input given any current interference symbol is
uniformly distributed over the channel input alphabet is investigated. For this
setting, the general structure of a communication system with optimal precoding
is proposed. The extension of the proposed precoding scheme to continuous
channel input alphabet is also investigated.
|
cs/0703048
|
Path Loss Models Based on Stochastic Rays
|
cs.IT math.IT
|
In this paper, two-dimensional percolation lattices are applied to describe
the wireless propagation environment, and stochastic rays are employed to model
trajectories of radio waves. We first derive the probability that a stochastic
ray undergoes a certain number of collisions at a specific spatial location.
Three classes of stochastic rays with different constraint conditions are
considered: stochastic rays of random walks, and generic stochastic rays with
two different anomalous levels. Subsequently, we obtain the closed-form
formulation of mean received power of radio waves under non line-of-sight
conditions for each class of stochastic ray. Specifically, the determination of
model parameters and the effects of lattice structures on the path loss are
investigated. The theoretical results are validated by comparison with
experimental data.
|
cs/0703049
|
Algorithm of Segment-Syllabic Synthesis in Speech Recognition Problem
|
cs.SD cs.CL
|
Speech recognition based on the syllable segment is discussed in this paper.
The principal search methods in the space of states for the speech recognition
problem via segment-syllabic parameter trajectory synthesis are investigated.
Recognition is realized as a comparison of the parameter trajectories of chosen
speech units over the sections of the segmented speech. Some experimental
results are given and discussed.
|
cs/0703050
|
On The Capacity Deficit of Mobile Wireless Ad Hoc Networks: A Rate
Distortion Formulation
|
cs.IT math.IT
|
Overheads incurred by routing protocols diminish the capacity available for
relaying useful data in a mobile wireless ad hoc network. Discovering lower
bounds on the amount of protocol overhead incurred for routing data packets is
important for the development of efficient routing protocols, and for
characterizing the actual (effective) capacity available for network users.
This paper presents an information-theoretic framework for characterizing the
minimum routing overheads of geographic routing in a network with mobile nodes.
Specifically, the minimum overhead problem is formulated as a rate-distortion
problem. The formulation may be applied to networks with arbitrary traffic
arrival and location service schemes. Lower bounds are derived for the minimum
overheads incurred for maintaining the location of destination nodes and
consistent neighborhood information in terms of node mobility and packet
arrival process. This leads to a characterization of the deficit caused by the
routing overheads on the overall transport capacity.
|
cs/0703052
|
On the densest MIMO lattices from cyclic division algebras
|
cs.IT math.IT
|
It is shown why the discriminant of a maximal order within a cyclic division
algebra must be minimized in order to get the densest possible matrix lattices
with a prescribed nonvanishing minimum determinant. Using results from class
field theory a lower bound to the minimum discriminant of a maximal order with
a given center and index (= the number of Tx/Rx antennas) is derived. Also
numerous examples of division algebras achieving our bound are given. For
example, we construct a matrix lattice with QAM coefficients that has 2.5 times as many
codewords as the celebrated Golden code of the same minimum determinant. We
describe a general algorithm due to Ivanyos and Ronyai for finding maximal
orders within a cyclic division algebra and discuss our enhancements to this
algorithm. We also consider general methods for finding cyclic division
algebras of a prescribed index achieving our lower bound.
|
cs/0703053
|
Extraction of cartographic objects in high resolution satellite images
for object model generation
|
cs.CV
|
The aim of this study is to detect man-made cartographic objects in
high-resolution satellite images. New generation satellites offer a sub-metric
spatial resolution, in which it is possible (and necessary) to develop methods
at object level rather than at pixel level, and to exploit structural features
of objects. With this aim, a method to generate structural object models from
manually segmented images has been developed. To generate the model from
non-segmented images, extraction of the objects from the sample images is
required. A hybrid method of extraction (both in terms of input sources and
segmentation algorithms) is proposed: A region based segmentation is applied on
a 10 meter resolution multi-spectral image. The result is used as marker in a
"marker-controlled watershed method using edges" on a 2.5 meter resolution
panchromatic image. Very promising results have been obtained even on images
where the limits of the target objects are not apparent.
|
cs/0703055
|
Support and Quantile Tubes
|
cs.IT cs.LG math.IT
|
This correspondence studies an estimator of the conditional support of a
distribution underlying a set of i.i.d. observations. The relation with mutual
information is shown via an extension of Fano's theorem in combination with a
generalization bound based on a compression argument. Extensions to estimating
the conditional quantile interval, and statistical guarantees on the minimal
convex hull are given.
|
cs/0703056
|
Unassuming View-Size Estimation Techniques in OLAP
|
cs.DB cs.PF
|
Even if storage was infinite, a data warehouse could not materialize all
possible views due to the running time and update requirements. Therefore, it
is necessary to estimate quickly, accurately, and reliably the size of views.
Many available techniques make particular statistical assumptions and their
error can be quite large. Unassuming techniques exist, but typically assume we
have independent hashing for which there is no known practical implementation.
We adapt an unassuming estimator due to Gibbons and Tirthapura: its theoretical
bounds do not make unpractical assumptions. We compare this technique
experimentally with stochastic probabilistic counting, LogLog probabilistic
counting, and multifractal statistical models. Our experiments show that we can
reliably and accurately (within 10%, 19 times out of 20) estimate view sizes over
large data sets (1.5 GB) within minutes, using almost no memory. However, only
Gibbons-Tirthapura provides universally tight estimates irrespective of the
size of the view. For large views, probabilistic counting has a small edge in
accuracy, whereas the competitive sampling-based method (multifractal) we
tested is an order of magnitude faster but can sometimes provide poor estimates
(relative error of 100%). In our tests, LogLog probabilistic counting is not
competitive. Experimental validation on the US Census 1990 data set and on the
Transaction Processing Performance (TPC H) data set is provided.
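The hashing-based estimators compared above all reduce view-size estimation to distinct-count sketching. As a minimal illustration of the general idea (this is a k-minimum-values sketch, not the exact Gibbons-Tirthapura estimator studied in the paper):

```python
import hashlib
import heapq

def kmv_estimate(items, t=1024):
    """k-minimum-values distinct-count sketch: hash each item to [0, 1),
    keep only the t smallest distinct hash values, and estimate the
    number of distinct items as (t - 1) / (t-th smallest hash).  The
    relative error is roughly 1/sqrt(t), with no statistical
    assumptions about the data distribution itself."""
    top = []           # max-heap (values negated) of the t smallest hashes
    in_top = set()
    for item in items:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) / 2.0 ** 160
        if h in in_top:
            continue                      # duplicate of a retained hash
        if len(top) < t:
            heapq.heappush(top, -h)
            in_top.add(h)
        elif h < -top[0]:                 # smaller than current t-th smallest
            in_top.discard(-heapq.heapreplace(top, -h))
            in_top.add(h)
    if len(top) < t:                      # fewer than t distinct items: exact
        return float(len(top))
    return (t - 1) / (-top[0])
```

With t = 1024 the sketch occupies a few kilobytes regardless of the view size, which is the "almost no memory" regime the experiments operate in.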
|
cs/0703057
|
Doppler Resilient Waveforms with Perfect Autocorrelation
|
cs.IT math.IT
|
We describe a method of constructing a sequence of phase coded waveforms with
perfect autocorrelation in the presence of Doppler shift. The constituent
waveforms are Golay complementary pairs which have perfect autocorrelation at
zero Doppler but are sensitive to nonzero Doppler shifts. We extend this
construction to multiple dimensions, in particular to radar polarimetry, where
the two dimensions are realized by orthogonal polarizations. Here we determine
a sequence of two-by-two Alamouti matrices where the entries involve Golay
pairs and for which the sum of the matrix-valued ambiguity functions vanish at
small Doppler shifts. The Prouhet-Thue-Morse sequence plays a key role in the
construction of Doppler resilient sequences of Golay pairs.
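The zero-Doppler property of Golay complementary pairs is easy to verify numerically. Below is a minimal sketch of the standard concatenation construction and the complementary autocorrelation check; the paper's actual contribution, the Prouhet-Thue-Morse ordering that yields Doppler resilience, is not reproduced here:

```python
def autocorr(x, k):
    """Aperiodic autocorrelation of the sequence x at nonnegative lag k."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

def golay_pair(n):
    """Length-2^n Golay complementary pair from the standard doubling
    construction (a, b) -> (a|b, a|-b), starting from a = b = [1]."""
    a, b = [1], [1]
    for _ in range(n):
        a, b = a + b, a + [-v for v in b]
    return a, b

# For a Golay pair of length L, the autocorrelations sum to 2L at lag 0
# and cancel exactly at every nonzero lag.
a, b = golay_pair(3)
sums = [autocorr(a, k) + autocorr(b, k) for k in range(len(a))]
```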
|
cs/0703058
|
A Comparison of Five Probabilistic View-Size Estimation Techniques in
OLAP
|
cs.DB cs.PF
|
A data warehouse cannot materialize all possible views, hence we must
estimate quickly, accurately, and reliably the size of views to determine the
best candidates for materialization. Many available techniques for view-size
estimation make particular statistical assumptions and their error can be
large. Comparatively, unassuming probabilistic techniques are slower, but they
estimate very large view sizes accurately and reliably using little memory.
We compare five unassuming hashing-based view-size estimation techniques
including Stochastic Probabilistic Counting and LogLog Probabilistic Counting.
Our experiments show that only Generalized Counting, Gibbons-Tirthapura, and
Adaptive Counting provide universally tight estimates irrespective of the size
of the view; of those, only Adaptive Counting remains constantly fast as we
increase the memory budget.
|
cs/0703060
|
Redesigning Decision Matrix Method with an indeterminacy-based inference
process
|
cs.AI
|
For academics and practitioners concerned with computers, business and
mathematics, one central issue is supporting decision makers. In this paper, we
propose a generalization of Decision Matrix Method (DMM), using Neutrosophic
logic. It emerges as an alternative to the existing logics and it represents a
mathematical model of uncertainty and indeterminacy. This paper proposes the
Neutrosophic Decision Matrix Method as a more realistic tool for decision
making. In addition, a de-neutrosophication process is included.
|
cs/0703061
|
Coding for Errors and Erasures in Random Network Coding
|
cs.IT cs.NI math.IT
|
The problem of error-control in random linear network coding is considered. A
``noncoherent'' or ``channel oblivious'' model is assumed where neither
transmitter nor receiver is assumed to have knowledge of the channel transfer
characteristic. Motivated by the property that linear network coding is
vector-space preserving, information transmission is modelled as the injection
into the network of a basis for a vector space $V$ and the collection by the
receiver of a basis for a vector space $U$. A metric on the projective geometry
associated with the packet space is introduced, and it is shown that a minimum
distance decoder for this metric achieves correct decoding if the dimension of
the space $V \cap U$ is sufficiently large. If the dimension of each codeword
is restricted to a fixed integer, the code forms a subset of a finite-field
Grassmannian, or, equivalently, a subset of the vertices of the corresponding
Grassmann graph. Sphere-packing and sphere-covering bounds as well as a
generalization of the Singleton bound are provided for such codes. Finally, a
Reed-Solomon-like code construction, related to Gabidulin's construction of
maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum
distance decoding algorithm is provided.
|
cs/0703062
|
Bandit Algorithms for Tree Search
|
cs.LG
|
Bandit based methods for tree search have recently gained popularity when
applied to huge trees, e.g. in the game of go (Gelly et al., 2006). The UCT
algorithm (Kocsis and Szepesvari, 2006), a tree search method based on Upper
Confidence Bounds (UCB) (Auer et al., 2002), is believed to adapt locally to
the effective smoothness of the tree. However, we show that UCT is too
``optimistic'' in some cases, leading to a regret O(exp(exp(D))) where D is the
depth of the tree. We propose alternative bandit algorithms for tree search.
First, a modification of UCT using a confidence sequence that scales
exponentially with the horizon depth is proven to have a regret O(2^D
\sqrt{n}), but does not adapt to possible smoothness in the tree. We then
analyze Flat-UCB performed on the leaves and provide a finite regret bound with
high probability. Then, we introduce a UCB-based Bandit Algorithm for Smooth
Trees which takes into account actual smoothness of the rewards for performing
efficient ``cuts'' of sub-optimal branches with high confidence. Finally, we
present an incremental tree search version which applies when the full tree is
too big (possibly infinite) to be entirely represented and show that with high
probability, essentially only the optimal branches are indefinitely developed.
We illustrate these methods on a global optimization problem of a Lipschitz
function, given noisy data.
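For readers unfamiliar with the bandit machinery underlying UCT, the following is plain UCB1 (Auer et al., 2002) on a flat set of arms; the tree-search variants analyzed in the paper layer such index policies over the nodes of the tree:

```python
import math
import random

def ucb1(arms, horizon, c=2.0):
    """UCB1: at each step pull the arm maximizing the empirical mean
    plus an exploration bonus sqrt(c * ln(t) / n_i).  `arms` is a list
    of zero-argument callables returning rewards in [0, 1]."""
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for t in range(1, horizon + 1):
        if t <= len(arms):
            i = t - 1                     # initialization: play each arm once
        else:
            i = max(range(len(arms)),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(c * math.log(t) / counts[j]))
        sums[i] += arms[i]()
        counts[i] += 1
    return counts
```

On two Bernoulli arms with means 0.2 and 0.8, the pull counts concentrate on the better arm, the number of suboptimal pulls growing only logarithmically in the horizon.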
|
cs/0703067
|
Target assignment for robotic networks: asymptotic performance under
limited communication
|
cs.RO
|
We are given an equal number of mobile robotic agents, and distinct target
locations. Each agent has simple integrator dynamics, a limited communication
range, and knowledge of the position of every target. We address the problem of
designing a distributed algorithm that allows the group of agents to divide the
targets among themselves and, simultaneously, leads each agent to reach its
unique target. We do not require connectivity of the communication graph at any
time. We introduce a novel assignment-based algorithm with the following
features: initial assignments and robot motions follow a greedy rule, and
distributed refinements of the assignment exploit an implicit circular ordering
of the targets. We prove correctness of the algorithm, and give worst-case
asymptotic bounds on the time to complete the assignment as the environment
grows with the number of agents. We show that among a certain class of
distributed algorithms, our algorithm is asymptotically optimal. The analysis
utilizes results on the Euclidean traveling salesperson problem.
|
cs/0703068
|
Option Valuation using Fourier Space Time Stepping
|
cs.CE
|
It is well known that the Black-Scholes-Merton model suffers from several
deficiencies. Jump-diffusion and Levy models have been widely used to partially
alleviate some of the biases inherent in this classical model. Unfortunately,
the resulting pricing problem requires solving a more difficult partial-integro
differential equation (PIDE) and although several approaches for solving the
PIDE have been suggested in the literature, none are entirely satisfactory. All
treat the integral and diffusive terms asymmetrically and are difficult to
extend to higher dimensions. We present a new, efficient algorithm, based on
transform methods, which symmetrically treats the diffusive and integral
terms, is applicable to a wide class of path-dependent options (such as
Bermudan, barrier, and shout options) and options on multiple assets, and
naturally extends to regime-switching Levy models. We present a concise study
of the precision and convergence properties of our algorithm for several
classes of options and Levy models and demonstrate that the algorithm is
second-order in space and first-order in time for path-dependent options.
|
cs/0703078
|
Broadcast Capacity Region of Two-Phase Bidirectional Relaying
|
cs.IT math.IT
|
In a three-node network, a half-duplex relay node enables bidirectional
communication between two nodes with a spectrally efficient two-phase protocol.
In the first phase, the two nodes transmit their messages to the relay node,
which decodes the messages and broadcasts a re-encoded composition in the second
phase. In this work we determine the capacity region of the broadcast phase. In
this scenario each receiving node has perfect information about the message
that is intended for the other node. The resulting set of achievable rates of
the two-phase bidirectional relaying includes the region which can be achieved
by applying XOR on the decoded messages at the relay node. We also prove the
strong converse for the maximum error probability and show that this implies
that the $[\eps_1,\eps_2]$-capacity region defined with respect to the average
error probability is constant for small values of error parameters $\eps_1$,
$\eps_2$.
|
cs/0703081
|
Randomized Computations on Large Data Sets: Tight Lower Bounds
|
cs.DB cs.CC
|
We study the randomized version of a computation model (introduced by Grohe,
Koch, and Schweikardt (ICALP'05); Grohe and Schweikardt (PODS'05)) that
restricts random access to external memory and internal memory space.
Essentially, this model can be viewed as a powerful version of a data stream
model that puts no cost on sequential scans of external memory (as other models
for data streams) and, in addition, (like other external memory models, but
unlike streaming models), admits several large external memory devices that can
be read and written to in parallel.
We obtain tight lower bounds for the decision problems set equality, multiset
equality, and checksort. More precisely, we show that any randomized
one-sided-error bounded Monte Carlo algorithm for these problems must perform
Omega(log N) random accesses to external memory devices, provided that the
internal memory size is at most O(N^(1/4)/log N), where N denotes the size of
the input data.
From the lower bound on the set equality problem we can infer lower bounds on
the worst case data complexity of query evaluation for the languages XQuery,
XPath, and relational algebra on streaming data. More precisely, we show that
there exist queries in XQuery, XPath, and relational algebra, such that any
(randomized) Las Vegas algorithm that evaluates these queries must perform
Omega(log N) random accesses to external memory devices, provided that the
internal memory size is at most O(N^(1/4)/log N).
|
cs/0703087
|
Social Information Processing in Social News Aggregation
|
cs.CY cs.AI cs.HC cs.MA
|
The rise of social media sites such as blogs, wikis, Digg and Flickr, among
others, underscores the transformation of the Web into a participatory medium in
which users are collaboratively creating, evaluating and distributing
information. The innovations introduced by social media have led to a new
paradigm for interacting with information, which we call 'social information
processing'. In this paper, we study how the social news aggregator Digg exploits
social information processing to solve the problems of document recommendation
and rating. First, we show, by tracking stories over time, that social networks
play an important role in document recommendation. The second contribution of
this paper consists of two mathematical models. The first model describes how
collaborative rating and promotion of stories emerges from the independent
decisions made by many users. The second model describes how a user's
influence, the number of promoted stories, and the user's social network change
in time. We find qualitative agreement between predictions of the model
and user data gathered from Digg.
|
cs/0703088
|
Plot 94 in the X-Window Environment
|
cs.CV cs.GR
|
<PLOT> is a collection of routines to draw surfaces, contours and so on. In
this work we present a version that runs on workstations under the UNIX
operating system equipped with the X-WINDOW graphical environment and the XLIB
and OSF/MOTIF toolkits. This port was carried out on the DEC 5000-200, DEC IPX,
and DEC ALFA workstations of the CINVESTAV (Center of Investigation and
Advanced Studies), and also on the SILICON GRAPHICS machines of the CENAC
(National Center of Calculation of the Polytechnic National Institute).
|
cs/0703089
|
Space Program Language (SPL/SQL) for the Relational Approach of the
Spatial Databases
|
cs.DB cs.CG
|
In this project we present a grammar which unifies the design and development
of spatial databases. To do so, we combine nominal and spatial information: the
former is represented by the relational model and the latter by a modification
of the same model. The modification makes it possible to represent spatial data
structures (such as Quadtrees, Octrees, etc.) in an integrated way. This
grammar is important because with it we can create tools to build systems that
combine spatial-nominal characteristics, such as Geographical Information
Systems (GIS), Hypermedia Systems, Computer Aided Design Systems (CAD), and so
on.
|
cs/0703090
|
Orthogonal Frequency Division Multiplexing: An Overview
|
cs.IT math.IT
|
Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier
modulation scheme that provides efficient bandwidth utilization and robustness
against time dispersive channels. This paper deals with the basic system model
for OFDM based systems and with self-interference, or the corruption of the
desired signal by itself in OFDM systems. A simple transceiver based on OFDM
modulation is presented. Important impairments in OFDM systems are
mathematically analyzed.
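The basic transceiver described above reduces to an inverse DFT at the transmitter, a cyclic prefix, and a DFT at the receiver. A minimal sketch (a naive O(N^2) DFT stands in for the FFT; subcarrier symbol mapping, e.g. QPSK, is assumed done elsewhere):

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT / inverse DFT; a real modem would use an FFT."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def ofdm_modulate(symbols, cp_len):
    """One OFDM symbol: IDFT of the subcarrier symbols, with the last
    cp_len time-domain samples prepended as a cyclic prefix."""
    time = dft(symbols, inverse=True)
    return time[-cp_len:] + time

def ofdm_demodulate(samples, n, cp_len):
    """Drop the cyclic prefix and transform back to subcarrier symbols."""
    return dft(samples[cp_len:cp_len + n])
```

Over an ideal channel the demodulator recovers the transmitted subcarrier symbols exactly; the cyclic prefix is what turns a time-dispersive channel into independent per-subcarrier gains.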
|
cs/0703091
|
Multimodal Meaning Representation for Generic Dialogue Systems
Architectures
|
cs.AI cs.MM
|
A unified language for the communicative acts between agents is essential
for the design of multi-agent architectures. Whatever the type of interaction
(linguistic, multimodal, including particular aspects such as force feedback),
whatever the type of application (command dialogue, request dialogue, database
querying), the concepts are common and we need a generic meta-model. In order
to tend towards task-independent systems, we need to clarify the modules
parameterization procedures. In this paper, we focus on the characteristics of
a meta-model designed to represent meaning in linguistic and multimodal
applications. This meta-model is called MMIL for MultiModal Interface Language,
and has first been specified in the framework of the IST MIAMM European
project. What we want to test here is how relevant MMIL is for a completely
different context (a different task, a different interaction type, a different
linguistic domain). We detail the exploitation of MMIL in the framework of the
IST OZONE European project, and we draw the conclusions on the role of MMIL in
the parameterization of task-independent dialogue managers.
|
cs/0703095
|
Copula Component Analysis
|
cs.IR cs.AI
|
A framework named Copula Component Analysis (CCA) for blind source separation
is proposed as a generalization of Independent Component Analysis (ICA). It
differs from ICA, which assumes independence of the sources, in that the
underlying components may be dependent with a certain structure represented by
a copula. By incorporating the dependency structure, more accurate estimation
can be made in principle when the assumption of independence is invalidated. A
two-phase inference method is introduced for CCA, based on the notion of
multidimensional ICA.
|
cs/0703097
|
On Approximating Optimal Weighted Lobbying, and Frequency of Correctness
versus Average-Case Polynomial Time
|
cs.GT cs.CC cs.MA
|
We investigate issues related to two hard problems related to voting, the
optimal weighted lobbying problem and the winner problem for Dodgson elections.
Regarding the former, Christian et al. [CFRS06] showed that optimal lobbying is
intractable in the sense of parameterized complexity. We provide an efficient
greedy algorithm that achieves a logarithmic approximation ratio for this
problem and even for a more general variant--optimal weighted lobbying. We
prove that essentially no better approximation ratio than ours can be proven
for this greedy algorithm.
The problem of determining Dodgson winners is known to be complete for
parallel access to NP [HHR97]. Homan and Hemaspaandra [HH06] proposed an
efficient greedy heuristic for finding Dodgson winners with a guaranteed
frequency of success, and their heuristic is a ``frequently self-knowingly
correct algorithm.'' We prove that every distributional problem solvable in
polynomial time on the average with respect to the uniform distribution has a
frequently self-knowingly correct polynomial-time algorithm. Furthermore, we
study some features of probability weight of correctness with respect to
Procaccia and Rosenschein's junta distributions [PR07].
|
cs/0703099
|
Constrained Cost-Coupled Stochastic Games with Independent State
Processes
|
cs.IT cs.GT math.IT
|
We consider a non-cooperative constrained stochastic game with N players
with the following special structure. With each player there is an associated
controlled Markov chain. The transition probabilities of the i-th Markov chain
depend only on the state and actions of controller i. The information structure
that we consider is such that each player knows the state of its own MDP and
its own actions. It does not know the states of, and the actions taken by other
players. Finally, each player wishes to minimize a time-average cost function,
and has constraints over other time-average cost functions. Both the cost that
is minimized as well as those defining the constraints depend on the state and
actions of all players. We study in this paper the existence of a Nash
equilibrium. Examples in power control in wireless communications are given.
|
cs/0703101
|
A Note on Approximate Nearest Neighbor Methods
|
cs.IR cs.CC cs.CV
|
A number of authors have described randomized algorithms for solving the
epsilon-approximate nearest neighbor problem. In this note I point out that the
epsilon-approximate nearest neighbor property often fails to be a useful
approximation property, since epsilon-approximate solutions fail to satisfy the
necessary preconditions for using nearest neighbors for classification and
related tasks.
|
cs/0703102
|
Generation of Efficient Codes for Realizing Boolean Functions in
Nanotechnologies
|
cs.IT cs.DM math.IT
|
We address the challenge of implementing reliable computation of Boolean
functions in future nanocircuit fabrics. Such fabrics are projected to have
very high defect rates. We overcome this limitation by using a combination of
cheap but unreliable nanodevices and reliable but expensive CMOS devices. In
our approach, defect tolerance is achieved through a novel coding of Boolean
functions; specifically, we exploit the don't-cares of Boolean functions
encountered in multi-level Boolean logic networks for constructing better
codes. We show that compared to direct application of existing coding
techniques, the coding overhead in terms of extra bits can be reduced, on
average by 23%, and savings can go up to 34%. We demonstrate that by
incorporating efficient coding techniques more than a 40% average yield
improvement is possible in case of 1% and 0.1% defect rates. With 0.1% defect
density, the savings can be up to 90%.
|
cs/0703103
|
Concept of a Value in Multilevel Security Databases
|
cs.DB
|
This paper has been withdrawn.
|
cs/0703104
|
Encoding via Gr\"obner bases and discrete Fourier transforms for several
types of algebraic codes
|
cs.IT math.IT
|
We propose a novel encoding scheme for algebraic codes such as codes on
algebraic curves, multidimensional cyclic codes, and hyperbolic cascaded
Reed-Solomon codes and present numerical examples. We employ the recurrence
from the Gr\"obner basis of the locator ideal for a set of rational points and
the two-dimensional inverse discrete Fourier transform. We generalize the
functioning of the generator polynomial for Reed-Solomon codes and develop
systematic encoding for various algebraic codes.
|
cs/0703105
|
New List Decoding Algorithms for Reed-Solomon and BCH Codes
|
cs.IT cs.CC math.IT
|
In this paper we devise a rational curve fitting algorithm and apply it to
the list decoding of Reed-Solomon and BCH codes. The proposed list decoding
algorithms exhibit the following significant properties. 1. The algorithm
corrects up to $n(1-\sqrt{1-D})$ errors for a (generalized) $(n, k, d=n-k+1)$
Reed-Solomon code, which matches the Johnson bound, where $D\eqdef \frac{d}{n}$
denotes the normalized minimum distance. In comparison with the Guruswami-Sudan
algorithm, which exhibits the same list correction capability, the former
requires multiplicity, which dictates the algorithmic complexity,
$O(n(1-\sqrt{1-D}))$, whereas the latter requires multiplicity $O(n^2(1-D))$.
With the up-to-date most efficient implementation, the former has complexity
$O(n^{6}(1-\sqrt{1-D})^{7/2})$, whereas the latter has complexity
$O(n^{10}(1-D)^4)$. 2. With the multiplicity set to one, the derivative list
correction capability precisely sits in between the conventional hard-decision
decoding and the optimal list decoding. Moreover, the number of candidate
codewords is upper bounded by a constant for a fixed code rate and thus, the
derivative algorithm exhibits quadratic complexity $O(n^2)$. 3. By utilizing
the unique properties of the Berlekamp algorithm, the algorithm corrects up to
$\frac{n}{2}(1-\sqrt{1-2D})$ errors for a narrow-sense $(n, k, d)$ binary BCH
code, which matches the Johnson bound for binary codes. The algorithmic
complexity is $O(n^{6}(1-\sqrt{1-2D})^7)$.
|
cs/0703111
|
Maximum Weighted Sum Rate of Multi-Antenna Broadcast Channels
|
cs.IT math.IT
|
Recently, researchers showed that dirty paper coding (DPC) is the optimal
transmission strategy for multiple-input multiple-output broadcast channels
(MIMO-BC). In this paper, we study how to determine the maximum weighted sum of
DPC rates through solving the maximum weighted sum rate problem of the dual
MIMO multiple access channel (MIMO-MAC) with a sum power constraint. We first
simplify the maximum weighted sum rate problem such that enumerating all
possible decoding orders in the dual MIMO-MAC is unnecessary. We then design an
efficient algorithm based on conjugate gradient projection (CGP) to solve the
maximum weighted sum rate problem. Our proposed CGP method utilizes the
powerful concept of Hessian conjugacy. We also develop a rigorous algorithm to
solve the projection problem. We show that CGP enjoys provable convergence,
nice scalability, and great efficiency for large MIMO-BC systems.
|
cs/0703113
|
Automatic Selection of Bitmap Join Indexes in Data Warehouses
|
cs.DB
|
The queries defined on data warehouses are complex and use several join
operations that induce an expensive computational cost. This cost becomes even
more prohibitive when queries access very large volumes of data. To improve
response time, data warehouse administrators generally use indexing techniques
such as star join indexes or bitmap join indexes. This task is nevertheless
complex and fastidious. Our solution lies in the field of data warehouse
auto-administration. In this framework, we propose an automatic index selection
strategy. We exploit a data mining technique; more precisely, frequent itemset
mining, in order to determine a set of candidate indexes from a given workload.
Then, we propose several cost models allowing us to create an index
configuration composed of the indexes providing the best profit. These models evaluate the
cost of accessing data using bitmap join indexes, and the cost of updating and
storing these indexes.
|
cs/0703114
|
Clustering-Based Materialized View Selection in Data Warehouses
|
cs.DB
|
Materialized view selection is a non-trivial task. Hence, its complexity must
be reduced. A judicious choice of views must be cost-driven and influenced by
the workload experienced by the system. In this paper, we propose a framework
for materialized view selection that exploits a data mining technique
(clustering), in order to determine clusters of similar queries. We also
propose a view merging algorithm that builds a set of candidate views, as well
as a greedy process for selecting a set of views to materialize. This selection
is based on cost models that evaluate the cost of accessing data using views
and the cost of storing these views. To validate our strategy, we executed a
workload of decision-support queries on a test data warehouse, with and without
using our strategy. Our experimental results demonstrate its efficiency, even
when storage space is limited.
|
cs/0703118
|
Mathematical model of interest matchmaking in electronic social networks
|
cs.CY cs.AI
|
The problem of matchmaking in electronic social networks is formulated as an
optimization problem. In particular, a function measuring the matching degree
of fields of interest of a search profile with those of an advertising profile
is proposed.
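One simple instance of such a matching-degree function (illustrative only, not the formula proposed in the paper) scores the weighted overlap between the interest fields of the two profiles:

```python
def matching_degree(search, advert):
    """Hypothetical matching degree: each profile maps interest fields
    to weights in [0, 1]; the score is the overlapping weight,
    normalized by the search profile's total weight, so it lies in
    [0, 1] and equals 1 only when the advert covers every interest."""
    total = sum(search.values())
    if total == 0:
        return 0.0
    return sum(min(w, advert.get(field, 0.0))
               for field, w in search.items()) / total
```

Maximizing such a score over a set of advertising profiles then becomes an ordinary optimization problem, which is the formulation the abstract refers to.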
|
cs/0703120
|
Sequential decoding for lossless streaming source coding with side
information
|
cs.IT math.IT
|
The problem of lossless fixed-rate streaming coding of discrete memoryless
sources with side information at the decoder is studied. A random time-varying
tree-code is used to sequentially bin strings and a Stack Algorithm with a
variable bias uses the side information to give a delay-universal coding system
for lossless source coding with side information. The scheme is shown to give
exponentially decaying probability of error with delay, with exponent equal to
Gallager's random coding exponent for sources with side information. The mean
of the random variable of computation for the stack decoder is bounded, and
conditions on the bias are given to guarantee a finite $\rho^{th}$ moment for
$0 \leq \rho \leq 1$.
Further, the problem is also studied in the case where there is a discrete
memoryless channel between encoder and decoder. The same scheme is slightly
modified to give a joint-source channel encoder and Stack Algorithm-based
sequential decoder using side information. Again, by a suitable choice of bias,
the probability of error decays exponentially with delay and the random
variable of computation has a finite mean. Simulation results for several
examples are given.
|
cs/0703123
|
Adaptive Methods for Linear Programming Decoding
|
cs.IT math.IT
|
Detectability of failures of linear programming (LP) decoding and the
potential for improvement by adding new constraints motivate the use of an
adaptive approach in selecting the constraints for the underlying LP problem.
In this paper, we make a first step in studying this method, and show that it
can significantly reduce the complexity of the problem, which was originally
exponential in the maximum check-node degree. We further show that adaptively
adding new constraints, e.g. by combining parity checks, can provide large
gains in the performance.
|
cs/0703124
|
Modelling Complexity in Musical Rhythm
|
cs.AI
|
This paper constructs a tree structure for the music rhythm using the
L-system. It models the structure as an automaton and derives its complexity. It
also solves the complexity for the L-system. This complexity can resolve the
similarity between trees. This complexity serves as a measure of psychological
complexity for rhythms. It resolves the music complexity of various
compositions including the Mozart effect K488.
Keywords: music perception, psychological complexity, rhythm, L-system,
automata, temporal associative memory, inverse problem, rewriting rule,
bracketed string, tree similarity
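The L-system underlying the rhythm trees is parallel string rewriting; a minimal sketch of the rewriting step (the rules below are illustrative, not the paper's rhythm grammar):

```python
def lsystem(axiom, rules, depth):
    """Expand an L-system by applying all rewriting rules in parallel
    at each step; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

The derivation tree of such an expansion is the kind of tree structure whose automaton complexity the paper measures.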
|
cs/0703125
|
Intrinsic dimension of a dataset: what properties does one expect?
|
cs.LG
|
We propose an axiomatic approach to the concept of an intrinsic dimension of
a dataset, based on a viewpoint of geometry of high-dimensional structures. Our
first axiom postulates that high values of dimension be indicative of the
presence of the curse of dimensionality (in a certain precise mathematical
sense). The second axiom requires the dimension to depend smoothly on a
distance between datasets (so that the dimension of a dataset and that of an
approximating principal manifold would be close to each other). The third axiom
is a normalization condition: the dimension of the Euclidean $n$-sphere $\s^n$
is $\Theta(n)$. We give an example of a dimension function satisfying our
axioms, even though it is in general computationally infeasible, and discuss a
computationally cheap function satisfying most but not all of our axioms (the
``intrinsic dimensionality'' of Ch\'avez et al.)
|
cs/0703127
|
Isochronous Data Transmission With Rates Close to Channel Capacity
|
cs.IT math.IT
|
The existing ARQ schemes (including hybrid ARQ) have a throughput that depends
on the packet error probability. In this paper we describe a strategy for
delay-tolerant applications which provides a constant throughput as long as the
algorithm robustness criterion is not violated. The algorithm robustness
criterion is applied to find the optimum size of the retransmission block under
the assumption of small changes of coding rate within the rate-compatible code
family.
|
cs/0703129
|
A theorem on the quantum evaluation of Weight Enumerators for a certain
class of Cyclic Codes with a note on Cyclotomic cosets
|
cs.IT math.IT quant-ph
|
This note is a stripped down version of a published paper on the Potts
partition function, where we concentrate solely on the linear coding aspect of
our approach. It is meant as a resource for people interested in coding theory
but who do not know much of the mathematics involved and how quantum
computation may provide a speed up in the computation of a very important
quantity in coding theory. We provide a theorem on the quantum computation of
the Weight Enumerator polynomial for a restricted family of cyclic codes. The
complexity of obtaining an exact evaluation is $O(k^{2s}(\log q)^{2})$, where
$s$ is a parameter which determines the class of cyclic codes in question, $q$
is the characteristic of the finite field over which the code is defined, and
$k$ is the dimension of the code. We also provide an overview of cyclotomic
cosets and discuss applications including how they can be used to speed up the
computation of the weight enumerator polynomial (which is related to the Potts
partition function). We also give an algorithm which returns the coset leaders
and the size of each coset from the list $\{0,1,2,...,N-1\}$, whose time
complexity is soft-O($N$), i.e., $\tilde{O}(N)$. This algorithm uses standard techniques but we include
it as a resource for students. Note that cyclotomic cosets do not improve the
asymptotic complexity of the computation of weight enumerators.
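As a concrete illustration of the standard construction (not the paper's soft-O($N$) algorithm), the cyclotomic cosets of $q$ modulo $N$, together with their leaders and sizes, can be enumerated directly; the function name is ours:

```python
def cyclotomic_cosets(q, N):
    """Partition {0, 1, ..., N-1} into cyclotomic cosets of q modulo N.

    Each coset is {i, i*q, i*q^2, ...} mod N. Returns a dict mapping each
    coset leader (its smallest element) to the size of that coset.
    Assumes gcd(q, N) == 1 so the map i -> i*q mod N is a permutation.
    """
    seen = [False] * N
    cosets = {}
    for i in range(N):
        if seen[i]:
            continue
        leader, size, j = i, 0, i
        while not seen[j]:
            seen[j] = True
            size += 1
            j = (j * q) % N  # next element of the orbit under multiplication by q
        cosets[leader] = size
    return cosets
```

For example, the cosets of 2 modulo 7 are {0}, {1, 2, 4}, and {3, 6, 5}, with leaders 0, 1, and 3.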
|
cs/0703130
|
Space-contained conflict revision, for geographic information
|
cs.AI
|
Using qualitative reasoning with geographic information, in contrast with, for
instance, robotics, is not only tedious (i.e., encoding knowledge in
propositional logic, PL) but also appears computationally complex and, most of
the time, not tractable at all. However, knowledge fusion or revision is a
common operation performed when users merge several different data sets in a
single decision-making process, without much support. Introducing logics would
be a great improvement, and in this paper we propose means for deciding, a
priori, whether an application can benefit from a complete revision, under the
sole assumption of a conjecture that we name the "containment conjecture",
which limits the size of the minimal conflicts to revise. We demonstrate that
this conjecture yields the interesting computational property of performing a
global revision, not provable but composed of many local revisions, of
tractable size. We illustrate this approach on an application.
|
cs/0703131
|
Open Access Scientometrics and the UK Research Assessment Exercise
|
cs.IR cs.DL
|
Scientometric predictors of research performance need to be validated by
showing that they have a high correlation with the external criterion they are
trying to predict. The UK Research Assessment Exercise (RAE), together with the
growing movement toward making the full texts of research articles freely
available on the web, offers a unique opportunity to test and validate a
wealth of old and new scientometric predictors through multiple regression
analysis: Publications, journal impact factors, citations, co-citations,
citation chronometrics (age, growth, latency to peak, decay rate),
hub/authority scores, h-index, prior funding, student counts, co-authorship
scores, endogamy/exogamy, textual proximity, download/co-downloads and their
chronometrics, etc. can all be tested and validated jointly, discipline by
discipline, against their RAE panel rankings in the forthcoming parallel
panel-based and metric RAE in 2008. The weights of each predictor can be
calibrated to maximize the joint correlation with the rankings. Open Access
Scientometrics will provide powerful new means of navigating, evaluating,
predicting and analyzing the growing Open Access database, as well as powerful
incentives for making it grow faster.
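The weight-calibration step described above amounts to a multiple regression of the external criterion on the candidate predictors. A minimal ordinary-least-squares sketch (the function name and data layout are illustrative assumptions, not the RAE methodology):

```python
import numpy as np

def calibrate_weights(X, y):
    """Fit predictor weights by ordinary least squares.

    X is an (articles x predictors) matrix of scientometric predictors;
    y is the external criterion (e.g. panel rankings). Returns the weight
    vector, intercept first. A stand-in for the joint calibration step.
    """
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)  # minimize ||X1 w - y||^2
    return w
```

In practice one would validate such weights discipline by discipline against the panel rankings, as the abstract proposes.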
|
cs/0703132
|
Structure induction by lossless graph compression
|
cs.DS cs.IT cs.LG math.IT
|
This work is motivated by the need to automate the discovery of structure in
vast and ever-growing collections of relational data commonly represented as
graphs, for example genomic networks. A novel algorithm, dubbed
Graphitour, for structure induction by lossless graph compression is presented
and illustrated by a clear and broadly known case of nested structure in a DNA
molecule. This work extends to graphs some well established approaches to
grammatical inference previously applied only to strings. The bottom-up graph
compression problem is related to the maximum cardinality (non-bipartite)
matching problem. The algorithm accepts a variety of graph
types including directed graphs and graphs with labeled nodes and arcs. The
resulting structure could be used for representation and classification of
graphs.
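The matching step at the heart of bottom-up compression can be approximated with a simple greedy maximal matching; this is a hedged stand-in only, since exact maximum cardinality matching in non-bipartite graphs requires, e.g., Edmonds' blossom algorithm:

```python
def greedy_maximal_matching(edges):
    """Greedy maximal matching on an undirected graph.

    `edges` is an iterable of (u, v) pairs. Each edge is taken if neither
    endpoint is already matched. The result is maximal (no edge can be
    added) but not necessarily maximum cardinality.
    """
    matched, matching = set(), set()
    for u, v in edges:
        if u != v and u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching
```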
|
cs/0703133
|
Computing Good Nash Equilibria in Graphical Games
|
cs.GT cs.DS cs.MA
|
This paper addresses the problem of fair equilibrium selection in graphical
games. Our approach is based on the data structure called the {\em best
response policy}, which was proposed by Kearns et al. \cite{kls} as a way to
represent all Nash equilibria of a graphical game. In \cite{egg}, it was shown
that the best response policy has polynomial size as long as the underlying
graph is a path. In this paper, we show that if the underlying graph is a
bounded-degree tree and the best response policy has polynomial size then there
is an efficient algorithm which constructs a Nash equilibrium that guarantees
certain payoffs to all participants. Another attractive solution concept is a
Nash equilibrium that maximizes the social welfare. We show that, while exactly
computing the latter is infeasible (we prove that solving this problem may
involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS
for finding such an equilibrium as long as the best response policy has
polynomial size. These two algorithms can be combined to produce Nash
equilibria that satisfy various fairness criteria.
|
cs/0703134
|
Automatic Generation of Benchmarks for Plagiarism Detection Tools using
Grammatical Evolution
|
cs.NE cs.IT math.IT
|
This paper has been withdrawn by the authors due to a major rewriting.
|
cs/0703135
|
Dependency Parsing with Dynamic Bayesian Network
|
cs.CL cs.AI
|
Exact parsing with finite state automata is deemed inappropriate because of
the unbounded non-locality that languages overwhelmingly exhibit. We propose a way
to structure the parsing task in order to make it amenable to local
classification methods. This allows us to build a Dynamic Bayesian Network
which uncovers the syntactic dependency structure of English sentences.
Experiments with the Wall Street Journal demonstrate that the model
successfully learns from labeled data.
|
cs/0703136
|
Uncovering Plagiarism Networks
|
cs.IT cs.SI math.IT
|
Plagiarism detection in educational programming assignments is still a
problematic issue in terms of resource waste, ethical controversy, legal risks,
and technical complexity. This paper presents AC, a modular plagiarism
detection system. The design is portable across platforms and assignment
formats and provides easy extraction into the internal assignment
representation. Multiple similarity measures have been incorporated, both
existing and newly-developed. Statistical analysis and several graphical
visualizations aid in the interpretation of analysis results. The system has
been evaluated with a survey that encompasses several academic semesters of use
at the authors' institution.
|
cs/0703138
|
Reinforcement Learning for Adaptive Routing
|
cs.LG cs.AI cs.NI
|
Reinforcement learning means learning a policy--a mapping of observations
into actions--based on feedback from the environment. The learning can be
viewed as browsing a set of policies while evaluating them by trial through
interaction with the environment. We present an application of gradient ascent
algorithm for reinforcement learning to a complex domain of packet routing in
network communication and compare the performance of this algorithm to other
routing methods on a benchmark problem.
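The gradient-ascent idea can be sketched as a REINFORCE-style update on a softmax policy; the bandit stand-in for route choice, the baseline, and all constants below are illustrative assumptions, not the paper's experimental setup:

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_bandit(mean_rewards, steps=5000, lr=0.1, seed=0):
    """Gradient ascent on a softmax policy (REINFORCE with a running-mean
    baseline). Each action stands in for a candidate route; mean_rewards[a]
    is the expected reward of action a, sampled with Gaussian noise.
    Returns the learned action probabilities.
    """
    rng = random.Random(seed)
    prefs = [0.0] * len(mean_rewards)
    baseline = 0.0
    for t in range(1, steps + 1):
        probs = softmax(prefs)
        a = rng.choices(range(len(prefs)), weights=probs)[0]
        r = mean_rewards[a] + rng.gauss(0.0, 0.1)  # noisy reward sample
        baseline += (r - baseline) / t             # running-mean baseline
        for k in range(len(prefs)):
            # d log pi(a) / d pref_k = 1[k == a] - probs[k]
            grad = (1.0 if k == a else 0.0) - probs[k]
            prefs[k] += lr * (r - baseline) * grad
    return softmax(prefs)
```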
|
cs/0703141
|
Constructive Conjugate Codes for Quantum Error Correction and
Cryptography
|
cs.IT math.IT
|
A conjugate code pair is defined as a pair of linear codes either of which
contains the dual of the other. A conjugate code pair represents the essential
structure of the corresponding Calderbank-Shor-Steane (CSS) quantum
error-correcting code. It is known that conjugate code pairs are applicable to
quantum cryptography. In this work, a polynomial construction of conjugate code
pairs is presented. The constructed pairs achieve the highest known achievable
rate on additive channels, and are decodable with algorithms of polynomial
complexity.
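The defining containment condition of a conjugate (CSS-style) pair can be verified over GF(2) with a rank computation: the dual of one code lies inside the other iff stacking a generator of that dual onto the other code's generator does not increase the rank. A minimal sketch (helper names and the example matrices are ours):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]              # eliminate column c elsewhere
        rank += 1
    return rank

def contains_dual(G_C, G_Dperp):
    """True iff span(G_Dperp) is contained in span(G_C) over GF(2),
    i.e. the code generated by G_C contains the dual generated by G_Dperp."""
    return gf2_rank(G_C) == gf2_rank(np.vstack([G_C, G_Dperp]))
```

For instance, the [7,4] Hamming code contains its own dual (the [7,3] simplex code), so it forms a conjugate pair with itself.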
|
cs/0703142
|
Pragmatic Space-Time Trellis Codes for Block Fading Channels
|
cs.IT math.IT
|
A pragmatic approach for the construction of space-time codes over block
fading channels is investigated. The approach consists in using common
convolutional encoders and Viterbi decoders with suitable generators and rates,
thus greatly simplifying the implementation of space-time codes. For the design
of pragmatic space-time codes a methodology is proposed and applied, based on
the extension of the concept of generalized transfer function for convolutional
codes over block fading channels. Our search algorithm produces the
convolutional encoder generators of pragmatic space-time codes for various
numbers of states, numbers of antennas, and fading rates. Finally, it is shown that,
for the investigated cases, the performance of pragmatic space-time codes is
better than that of previously known space-time codes, confirming that they are
a valuable choice in terms of both implementation complexity and performance.
|
cs/0703143
|
How much feedback is required in MIMO Broadcast Channels?
|
cs.IT math.IT
|
In this paper, a downlink communication system, in which a Base Station (BS)
equipped with M antennas communicates with N users each equipped with K receive
antennas ($K \leq M$), is considered. It is assumed that the receivers have
perfect Channel State Information (CSI), while the BS only knows the partial
CSI, provided by the receivers via feedback. The minimum amount of feedback
required at the BS, to achieve the maximum sum-rate capacity in the asymptotic
case of $N \to \infty$ and different ranges of SNR is studied. In the fixed and
low SNR regimes, it is demonstrated that to achieve the maximum sum-rate, an
infinite amount of feedback is required. Moreover, in order to reduce the gap
to the optimum sum-rate to zero, in the fixed SNR regime, the minimum amount of
feedback scales as $\Theta(\ln \ln \ln N)$, which is achievable by the Random
Beam-Forming scheme proposed in [14]. In the high SNR regime, two cases are
considered; in the case of $K < M$, it is proved that the minimum amount of
feedback bits to reduce the gap between the achievable sum-rate and the maximum
sum-rate to zero grows logarithmically with SNR, which is achievable by the
"Generalized Random Beam-Forming" scheme, proposed in [18]. In the case of $K =
M$, it is shown that by using the Random Beam-Forming scheme and the total
amount of feedback not growing with SNR, the maximum sum-rate capacity is
achieved.
|
cs/0703144
|
On The Capacity Of Time-Varying Channels With Periodic Feedback
|
cs.IT math.IT
|
The capacity of time-varying channels with periodic feedback at the
transmitter is evaluated. It is assumed that the channel state information is
perfectly known at the receiver and is fed back to the transmitter at
regular time intervals. The system capacity is investigated in two cases: i)
finite state Markov channel, and ii) additive white Gaussian noise channel with
time-correlated fading. In the first case, it is shown that the capacity is
achievable by multiplexing multiple codebooks across the channel. In the second
case, the channel capacity and the optimal adaptive coding is obtained. It is
shown that the optimal adaptation can be achieved by a single Gaussian
codebook, while adaptively allocating the total power based on the side
information at the transmitter.
|
cs/0703149
|
Exploring Logic Artificial Chemistries: An Illogical Attempt?
|
cs.NE nlin.AO
|
Robustness to a wide variety of negative factors and the ability to
self-repair is an inherent and natural characteristic of all life forms on
earth. As opposed to nature, man-made systems are in most cases not inherently
robust and a significant effort has to be made in order to make them resistant
against failures. This can be done in a wide variety of ways and on various
system levels. In the field of digital systems, for example, techniques such as
triple modular redundancy (TMR) are frequently used, which results in a
considerable hardware overhead. Biologically-inspired computing by means of
bio-chemical metaphors offers alternative paradigms, which need to be explored
and evaluated.
Here, we are interested in evaluating the potential of nature-inspired
artificial chemistries and membrane systems as an alternative information
representing and processing paradigm in order to obtain robust and spatially
extended Boolean computing systems in a distributed environment. We investigate
conceptual approaches inspired by artificial chemistries and membrane systems
and compare proofs of concept. First, we show that elementary logical
functions can be implemented. Second, we illustrate how they can be made more
robust and how they can be assembled to larger-scale systems. Finally, we
discuss the implications for and paths to possible genuine implementations.
Compared to the main body of work in artificial chemistries, we take a very
pragmatic and implementation-oriented approach and are interested in realizing
Boolean computations only. The results emphasize that artificial chemistries
can be used to implement Boolean logic in a spatially extended and distributed
environment and can also be made highly robust, but at a significant price.
|
cs/0703151
|
Asymptotic Analysis of Amplify and Forward Relaying in a Parallel MIMO
Relay Network
|
cs.IT math.IT
|
This paper considers the setup of a parallel MIMO relay network in which $K$
relays, each equipped with $N$ antennas, assist the transmitter and the
receiver, each equipped with $M$ antennas, in the half-duplex mode, under the
assumption that $N\geq{M}$. This setup has been studied in the literature, for
example in \cite{nabar}, \cite{nabar2}, and \cite{qr}. In this paper, a simple scheme,
the so-called Incremental Cooperative Beamforming, is introduced and shown to
achieve the capacity of the network in the asymptotic case of $K\to{\infty}$
with a gap no more than $O(\frac{1}{\log(K)})$. This result is shown to hold,
as long as the power of the relays scales as $\omega(\frac{\log^9(K)}{K})$.
Finally, the asymptotic SNR behavior is studied and it is proved that the
proposed scheme achieves the full multiplexing gain, regardless of the number
of relays.
|
cs/0703154
|
A Hot Channel
|
cs.IT math.IT
|
This paper studies on-chip communication with non-ideal heat sinks. A channel
model is proposed where the variance of the additive noise depends on the
weighted sum of the past channel input powers. It is shown that, depending on
the weights, the capacity can be either bounded or unbounded in the input
power. A necessary condition and a sufficient condition for the capacity to be
bounded are presented.
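A direct simulation of such a channel is straightforward; the exact form of the noise variance below (a constant plus a weighted sum of past input powers) and all names are illustrative assumptions about the model, not the paper's definition:

```python
import math
import random

def simulate_hot_channel(x, weights, sigma2=1.0, seed=0):
    """Simulate y_t = x_t + z_t where z_t is zero-mean Gaussian with
    variance sigma2 + sum_k weights[k] * x_{t-1-k}^2, i.e. additive noise
    whose variance grows with a weighted sum of past channel input powers
    (a rough on-chip heating metaphor).
    """
    rng = random.Random(seed)
    y = []
    for t, xt in enumerate(x):
        var = sigma2 + sum(w * x[t - 1 - k] ** 2
                           for k, w in enumerate(weights) if t - 1 - k >= 0)
        y.append(xt + rng.gauss(0.0, math.sqrt(var)))
    return y
```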
|
cs/0703156
|
Case Base Mining for Adaptation Knowledge Acquisition
|
cs.AI
|
In case-based reasoning, the adaptation of a source case in order to solve
the target problem is at the same time crucial and difficult to implement. The
reason for this difficulty is that, in general, adaptation strongly depends on
domain-dependent knowledge. This fact motivates research on adaptation
knowledge acquisition (AKA). This paper presents an approach to AKA based on
the principles and techniques of knowledge discovery from databases and
data-mining. It is implemented in CABAMAKA, a system that explores the
variations within the case base to elicit adaptation knowledge. This system has
been successfully tested in an application of case-based reasoning to decision
support in the domain of breast cancer treatment.
|
cs/9308101
|
Dynamic Backtracking
|
cs.AI
|
Because of their occasional need to return to shallow points in a search
tree, existing backtracking methods can sometimes erase meaningful progress
toward solving a search problem. In this paper, we present a method by which
backtrack points can be moved deeper in the search space, thereby avoiding this
difficulty. The technique developed is a variant of dependency-directed
backtracking that uses only polynomial space while still providing useful
control information and retaining the completeness guarantees provided by
earlier approaches.
|
cs/9308102
|
A Market-Oriented Programming Environment and its Application to
Distributed Multicommodity Flow Problems
|
cs.AI
|
Market price systems constitute a well-understood class of mechanisms that
under certain conditions provide effective decentralization of decision making
with minimal communication overhead. In a market-oriented programming approach
to distributed problem solving, we derive the activities and resource
allocations for a set of computational agents by computing the competitive
equilibrium of an artificial economy. WALRAS provides basic constructs for
defining computational market structures, and protocols for deriving their
corresponding price equilibria. In a particular realization of this approach
for a form of multicommodity flow problem, we see that careful construction of
the decision process according to economic principles can lead to efficient
distributed resource allocation, and that the behavior of the system can be
meaningfully analyzed in economic terms.
|
cs/9309101
|
An Empirical Analysis of Search in GSAT
|
cs.AI
|
We describe an extensive study of search in GSAT, an approximation procedure
for propositional satisfiability. GSAT performs greedy hill-climbing on the
number of satisfied clauses in a truth assignment. Our experiments provide a
more complete picture of GSAT's search than previous accounts. We describe in
detail the two phases of search: rapid hill-climbing followed by a long plateau
search. We demonstrate that when applied to randomly generated 3SAT problems,
there is a very simple scaling with problem size for both the mean number of
satisfied clauses and the mean branching rate. Our results allow us to make
detailed numerical conjectures about the length of the hill-climbing phase, the
average gradient of this phase, and to conjecture that both the average score
and average branching rate decay exponentially during plateau search. We end by
showing how these results can be used to direct future theoretical analysis.
This work provides a case study of how computer experiments can be used to
improve understanding of the theoretical properties of algorithms.
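The greedy hill-climbing loop that GSAT performs can be sketched as follows; this is a minimal, unoptimized version (restart and flip limits are illustrative, and clauses use DIMACS-style signed-integer literals):

```python
import random

def num_satisfied(clauses, assign):
    """Number of clauses satisfied by a truth assignment (var -> bool)."""
    return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def gsat(clauses, n_vars, max_flips=200, max_tries=20, seed=0):
    """Minimal GSAT: repeated random restarts, each followed by greedy
    hill-climbing on the number of satisfied clauses. Returns a satisfying
    assignment, or None if none is found within the limits."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, assign) == len(clauses):
                return assign
            # flip the variable whose flip satisfies the most clauses
            best_v, best_score = None, -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                score = num_satisfied(clauses, assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_v, best_score = v, score
            assign[best_v] = not assign[best_v]
    return None
```

The long plateau phase described above corresponds to flips that leave the score unchanged: the loop still flips the best variable even when no flip strictly improves it.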
|
cs/9311101
|
The Difficulties of Learning Logic Programs with Cut
|
cs.AI
|
As real logic programmers normally use cut (!), an effective learning
procedure for logic programs should be able to deal with it. Because the cut
predicate has only a procedural meaning, clauses containing cut cannot be
learned using an extensional evaluation method, as is done in most learning
systems. On the other hand, searching a space of possible programs (instead of
a space of independent clauses) is infeasible. An alternative solution is to
generate first a candidate base program which covers the positive examples, and
then make it consistent by inserting cut where appropriate. The problem of
learning programs with cut has not been investigated before and this seems to
be a natural and reasonable approach. We generalize this scheme and investigate
the difficulties that arise. Some of the major shortcomings are actually
caused, in general, by the need for intensional evaluation. As a conclusion,
the analysis of this paper suggests, on precise and technical grounds, that
learning cut is difficult, and current induction techniques should probably be
restricted to purely declarative logic languages.
|
cs/9311102
|
Software Agents: Completing Patterns and Constructing User Interfaces
|
cs.AI
|
To support the goal of allowing users to record and retrieve information,
this paper describes an interactive note-taking system for pen-based computers
with two distinctive features. First, it actively predicts what the user is
going to write. Second, it automatically constructs a custom, button-box user
interface on request. The system is an example of a learning-apprentice
software agent. A machine learning component characterizes the syntax and
semantics of the user's information. A performance system uses this learned
information to generate completion strings and construct a user interface.
Description of Online Appendix: People like to record information. Doing this
on paper is initially efficient, but lacks flexibility. Recording information
on a computer is less efficient but more powerful. In our new note-taking
software, the user records information directly on a computer. Behind the
interface, an agent acts for the user. To help, it provides defaults and
constructs a custom user interface. The demonstration is a QuickTime movie of
the note taking agent in action. The file is a binhexed self-extracting
archive. Macintosh utilities for binhex are available from
mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the
dts/mac/sys.soft/quicktime.
|
cs/9312101
|
Decidable Reasoning in Terminological Knowledge Representation Systems
|
cs.AI
|
Terminological knowledge representation systems (TKRSs) are tools for
designing and using knowledge bases that make use of terminological languages
(or concept languages). We analyze from a theoretical point of view a TKRS
whose capabilities go beyond the ones of presently available TKRSs. The new
features studied, often required in practical applications, can be summarized
in three main points. First, we consider a highly expressive terminological
language, called ALCNR, including general complements of concepts, number
restrictions and role conjunction. Second, we allow the expression of inclusion
statements between general concepts, with terminological cycles as a particular
case. Third, we prove the decidability of a number of desirable TKRS-deduction
services (like satisfiability, subsumption and instance checking) through a
sound, complete and terminating calculus for reasoning in ALCNR-knowledge
bases. Our calculus extends the general technique of constraint systems. As a
byproduct of the proof, we also obtain the result that inclusion statements in
ALCNR can be simulated by terminological cycles, if descriptive semantics is
adopted.
|
cs/9401101
|
Teleo-Reactive Programs for Agent Control
|
cs.AI
|
A formalism is presented for computing and organizing actions for autonomous
agents in dynamic environments. We introduce the notion of teleo-reactive (T-R)
programs whose execution entails the construction of circuitry for the
continuous computation of the parameters and conditions on which agent action
is based. In addition to continuous feedback, T-R programs support parameter
binding and recursion. A primary difference between T-R programs and many other
circuit-based systems is that the circuitry of T-R programs is more compact; it
is constructed at run time and thus does not have to anticipate all the
contingencies that might arise over all possible runs. In addition, T-R
programs are intuitive and easy to write and are written in a form that is
compatible with automatic planning and learning methods. We briefly describe
some experimental applications of T-R programs in the control of simulated and
actual mobile robots.
|
cs/9402101
|
Learning the Past Tense of English Verbs: The Symbolic Pattern
Associator vs. Connectionist Models
|
cs.AI
|
Learning the past tense of English verbs - a seemingly minor aspect of
language acquisition - has generated heated debates since 1986, and has become
a landmark task for testing the adequacy of cognitive modeling. Several
artificial neural networks (ANNs) have been implemented, and a challenge for
better symbolic models has been posed. In this paper, we present a
general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree
learning algorithm ID3. We conduct extensive head-to-head comparisons on the
generalization ability between ANN models and the SPA under different
representations. We conclude that the SPA generalizes the past tense of unseen
verbs better than ANN models by a wide margin, and we offer insights as to why
this should be the case. We also discuss a new default strategy for
decision-tree learning algorithms.
|
cs/9402102
|
Substructure Discovery Using Minimum Description Length and Background
Knowledge
|
cs.AI
|
The ability to identify interesting and repetitive substructures is an
essential component to discovering knowledge in structural data. We describe a
new version of our SUBDUE substructure discovery system based on the minimum
description length principle. The SUBDUE system discovers substructures that
compress the original data and represent structural concepts in the data. By
replacing previously-discovered substructures in the data, multiple passes of
SUBDUE produce a hierarchical description of the structural regularities in the
data. SUBDUE uses a computationally-bounded inexact graph match that identifies
similar, but not identical, instances of a substructure and finds an
approximate measure of closeness of two substructures when under computational
constraints. In addition to the minimum description length principle, other
background knowledge can be used by SUBDUE to guide the search towards more
appropriate substructures. Experiments in a variety of domains demonstrate
SUBDUE's ability to find substructures capable of compressing the original data
and to discover structural concepts important to the domain. Description of
Online Appendix: This is a compressed tar file containing the SUBDUE discovery
system, written in C. The program accepts as input databases represented in
graph form, and will output discovered substructures with their corresponding
value.
|
cs/9402103
|
Bias-Driven Revision of Logical Domain Theories
|
cs.AI
|
The theory revision problem is the problem of how best to go about revising a
deficient domain theory using information contained in examples that expose
inaccuracies. In this paper we present our approach to the theory revision
problem for propositional domain theories. The approach described here, called
PTR, uses probabilities associated with domain theory elements to numerically
track the ``flow'' of proof through the theory. This allows us to measure the
precise role of a clause or literal in allowing or preventing a (desired or
undesired) derivation for a given example. This information is used to
efficiently locate and repair flawed elements of the theory. PTR is proved to
converge to a theory which correctly classifies all examples, and shown
experimentally to be fast and accurate even for deep theories.
|
cs/9403101
|
Exploring the Decision Forest: An Empirical Investigation of Occam's
Razor in Decision Tree Induction
|
cs.AI
|
We report on a series of experiments in which all decision trees consistent
with the training data are constructed. These experiments were run to gain an
understanding of the properties of the set of consistent decision trees and the
factors that affect the accuracy of individual trees. In particular, we
investigated the relationship between the size of a decision tree consistent
with some training data and the accuracy of the tree on test data. The
experiments were performed on a massively parallel Maspar computer. The results
of the experiments on several artificial and two real world problems indicate
that, for many of the problems investigated, smaller consistent decision trees
are on average less accurate than the average accuracy of slightly larger
trees.
|
cs/9406101
|
A Semantics and Complete Algorithm for Subsumption in the CLASSIC
Description Logic
|
cs.AI
|
This paper analyzes the correctness of the subsumption algorithm used in
CLASSIC, a description logic-based knowledge representation system that is
being used in practical applications. In order to deal efficiently with
individuals in CLASSIC descriptions, the developers have had to use an
algorithm that is incomplete with respect to the standard, model-theoretic
semantics for description logics. We provide a variant semantics for
descriptions with respect to which the current implementation is complete, and
which can be independently motivated. The soundness and completeness of the
polynomial-time subsumption algorithm is established using description graphs,
which are an abstracted version of the implementation structures used in
CLASSIC, and are of independent interest.
|
cs/9406102
|
Applying GSAT to Non-Clausal Formulas
|
cs.AI
|
In this paper we describe how to modify GSAT so that it can be applied to
non-clausal formulas. The idea is to use a particular ``score'' function which
gives the number of clauses of the CNF conversion of a formula which are false
under a given truth assignment. Its value is computed in linear time, without
constructing the CNF conversion itself. The proposed methodology applies to
most of the variants of GSAT proposed so far.
|
cs/9408101
|
Random Worlds and Maximum Entropy
|
cs.AI
|
Given a knowledge base KB containing first-order and statistical facts, we
consider a principled method, called the random-worlds method, for computing a
degree of belief that some formula Phi holds given KB. If we are reasoning
about a world or system consisting of N individuals, then we can consider all
possible worlds, or first-order models, with domain {1,...,N} that satisfy KB,
and compute the fraction of them in which Phi is true. We define the degree of
belief to be the asymptotic value of this fraction as N grows large. We show
that when the vocabulary underlying Phi and KB uses constants and unary
predicates only, we can naturally associate an entropy with each world. As N
grows larger, there are many more worlds with higher entropy. Therefore, we can
use a maximum-entropy computation to compute the degree of belief. This result
is in a similar spirit to previous work in physics and artificial intelligence,
but is far more general. Of equal interest to the result itself are the
limitations on its scope. Most importantly, the restriction to unary predicates
seems necessary. Although the random-worlds method makes sense in general, the
connection to maximum entropy seems to disappear in the non-unary case. These
observations suggest unexpected limitations to the applicability of
maximum-entropy methods.
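In symbols, the random-worlds degree of belief described above is the asymptotic fraction of finite models of KB in which Phi also holds:

```latex
\mathrm{Bel}(\varphi \mid \mathrm{KB})
  \;=\; \lim_{N \to \infty}
  \frac{\bigl|\{\, W \text{ with domain } \{1,\dots,N\} : W \models \mathrm{KB} \wedge \varphi \,\}\bigr|}
       {\bigl|\{\, W \text{ with domain } \{1,\dots,N\} : W \models \mathrm{KB} \,\}\bigr|}
```

The result in the abstract is that, for vocabularies with only constants and unary predicates, this limit can be computed by maximizing entropy over the worlds that satisfy KB.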
|
cs/9408102
|
Pattern Matching and Discourse Processing in Information Extraction from
Japanese Text
|
cs.AI
|
Information extraction is the task of automatically picking up information of
interest from an unconstrained text. Information of interest is usually
extracted in two steps. First, sentence level processing locates relevant
pieces of information scattered throughout the text; second, discourse
processing merges coreferential information to generate the output. In the
first step, pieces of information are locally identified without recognizing
any relationships among them. A key word search or simple pattern search can
achieve this purpose. The second step requires deeper knowledge in order to
understand relationships among separately identified pieces of information.
Previous information extraction systems focused on the first step, partly
because they were not required to link up each piece of information with other
pieces. To link the extracted pieces of information and map them onto a
structured output format, complex discourse processing is essential. This paper
reports on a Japanese information extraction system that merges information
using a pattern matcher and discourse processor. Evaluation results show a high
level of system performance which approaches human performance.
|
cs/9408103
|
A System for Induction of Oblique Decision Trees
|
cs.AI
|
This article describes a new system for induction of oblique decision trees.
This system, OC1, combines deterministic hill-climbing with two forms of
randomization to find a good oblique split (in the form of a hyperplane) at
each node of a decision tree. Oblique decision tree methods are tuned
especially for domains in which the attributes are numeric, although they can
be adapted to symbolic or mixed symbolic/numeric attributes. We present
extensive empirical studies, using both real and artificial data, that analyze
OC1's ability to construct oblique trees that are smaller and more accurate
than their axis-parallel counterparts. We also examine the benefits of
randomization for the construction of oblique decision trees.
|
cs/9409101
|
On Planning while Learning
|
cs.AI
|
This paper introduces a framework for Planning while Learning where an agent
is given a goal to achieve in an environment whose behavior is only partially
known to the agent. We discuss the tractability of various plan-design
processes. We show that for a large natural class of Planning while Learning
systems, a plan can be presented and verified in a reasonable time. However,
coming up with a plan algorithmically, even for simple classes of systems, is
apparently intractable. We emphasize the role of off-line plan-design
processes, and show that, in most natural cases, the verification (projection)
part can be carried out in an efficient algorithmic manner.
|
cs/9412101
|
Wrap-Up: a Trainable Discourse Module for Information Extraction
|
cs.AI
|
The vast amounts of on-line text now available have led to renewed interest
in information extraction (IE) systems that analyze unrestricted text,
producing a structured representation of selected information from the text.
This paper presents a novel approach that uses machine learning to acquire
knowledge for some of the higher level IE processing. Wrap-Up is a trainable IE
discourse component that makes intersentential inferences and identifies
logical relations among information extracted from the text. Previous
corpus-based approaches were limited to lower level processing such as
part-of-speech tagging, lexical disambiguation, and dictionary construction.
Wrap-Up is fully trainable, and not only automatically decides what classifiers
are needed, but even derives the feature set for each classifier automatically.
Performance equals that of a partially trainable discourse module requiring
manual customization for each domain.
|
cs/9412102
|
Operations for Learning with Graphical Models
|
cs.AI
|
This paper is a multidisciplinary review of empirical, statistical learning
from a graphical model perspective. Well-known examples of graphical models
include Bayesian networks, directed graphs representing a Markov chain, and
undirected networks representing a Markov field. These graphical models are
extended to model data analysis and empirical learning using the notation of
plates. Graphical operations for simplifying and manipulating a problem are
provided including decomposition, differentiation, and the manipulation of
probability models from the exponential family. Two standard algorithm schemas
for learning are reviewed in a graphical framework: Gibbs sampling and the
expectation maximization algorithm. Using these operations and schemas, some
popular algorithms can be synthesized from their graphical specification. This
includes versions of linear regression, techniques for feed-forward networks,
and learning Gaussian and discrete Bayesian networks from data. The paper
concludes by sketching some implications for data analysis and summarizing how
some popular algorithms fall within the framework presented. The main original
contributions here are the decomposition techniques and the demonstration that
graphical models provide a framework for understanding and developing complex
learning algorithms.
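One of the two algorithm schemas the paper reviews, expectation maximization, can be shown concretely on the simplest graphical model it applies to: a 1-D two-component Gaussian mixture (a standard textbook instance, not code from the paper; the data and initialization are invented). The E-step computes each component's responsibility for each point; the M-step re-estimates the mixture weights, means, and variances from those responsibilities.

```python
import math

def em_gmm_1d(data, iters=50):
    # EM for a two-component 1-D Gaussian mixture.
    mu = [min(data), max(data)]   # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: maximize the expected complete-data log-likelihood.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var

# Two well-separated clusters, around 0 and around 10.
data = [-0.5, 0.0, 0.3, 0.1, 9.8, 10.2, 10.0, 9.9]
pi, mu, var = em_gmm_1d(data)
```

In the graphical-model view the paper advocates, the responsibilities are exactly the posterior over the hidden component-indicator node given the observed data node, and the M-step is a local operation on the plate of observations.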
|
cs/9412103
|
Total-Order and Partial-Order Planning: A Comparative Analysis
|
cs.AI
|
For many years, the intuitions underlying partial-order planning were largely
taken for granted. Only in the past few years has there been renewed interest
in the fundamental principles underlying this paradigm. In this paper, we
present a rigorous comparative analysis of partial-order and total-order
planning by focusing on two specific planners that can be directly compared. We
show that there are some subtle assumptions that underlie the widespread
intuitions regarding the supposed efficiency of partial-order planning. For
instance, the superiority of partial-order planning can depend critically upon
the search strategy and the structure of the search space. Understanding the
underlying assumptions is crucial for constructing efficient planners.
|