| id | title | categories | abstract |
|---|---|---|---|
1003.2259
|
Bit Allocation Laws for Multi-Antenna Channel Feedback Quantization:
Multi-User Case
|
cs.IT math.IT
|
This paper addresses the optimal design of limited-feedback downlink
multi-user spatial multiplexing systems. A multiple-antenna base-station is
assumed to serve multiple single-antenna users, who quantize and feed back
their channel state information (CSI) through a shared rate-limited feedback
channel. The optimization problem is cast in the form of minimizing the average
transmission power at the base-station subject to users' target
signal-to-interference-plus-noise ratios (SINR) and outage probability
constraints. The goal is to derive the feedback bit allocations among the users
and the corresponding channel magnitude and direction quantization codebooks in
a high-resolution quantization regime. Toward this end, this paper develops an
optimization framework using approximate analytical closed-form solutions, the
accuracy of which is then verified by numerical results. The results show that,
for channels in the real space, the number of channel direction quantization
bits should be $(M-1)$ times the number of channel magnitude quantization bits,
where $M$ is the number of base-station antennas. Moreover, users with higher
requested quality-of-service (QoS), i.e. lower target outage probabilities, and
higher requested downlink rates, i.e. higher target SINR's, should use larger
shares of the feedback rate. It is also shown that, for the target QoS
parameters to be feasible, the total feedback bandwidth should scale
logarithmically with the geometric mean of the target SINR values and the
geometric mean of the inverse target outage probabilities. In particular, the
minimum required feedback rate is shown to increase if the users' target
parameters deviate from the corresponding geometric means. Finally, the paper
shows that, as the total number of feedback bits $B$ increases, the performance
of the limited-feedback system approaches the perfect-CSI system as
${2^{-{B}/{M^2}}}$.
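As an illustrative numerical sketch (ours, not the paper's derivation), the stated allocation rule and scaling law can be written as two small helpers; `split_feedback_bits` and `csi_gap` are hypothetical names:

```python
def split_feedback_bits(b_user, M):
    """Split a per-user feedback budget b_user so that channel-direction
    quantization gets (M - 1) times the bits of magnitude quantization."""
    b_mag = b_user / M            # magnitude share
    b_dir = (M - 1) * b_mag       # direction share
    return b_mag, b_dir

def csi_gap(B, M):
    """Stated rate of approach to the perfect-CSI system: 2^(-B / M^2)."""
    return 2.0 ** (-B / M ** 2)

# e.g. with M = 4 base-station antennas and a 12-bit per-user budget:
b_mag, b_dir = split_feedback_bits(12, 4)    # 3.0 magnitude, 9.0 direction
```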
|
1003.2372
|
On Ergodic Secrecy Capacity for Gaussian MISO Wiretap Channels
|
cs.IT math.IT
|
A Gaussian multiple-input single-output (MISO) wiretap channel model is
considered, where there exists a transmitter equipped with multiple antennas, a
legitimate receiver and an eavesdropper each equipped with a single antenna. We
study the problem of finding the optimal input covariance that achieves ergodic
secrecy capacity subject to a power constraint where only statistical
information about the eavesdropper channel is available at the transmitter.
This is a non-convex optimization problem that is in general difficult to
solve. Existing results address the case in which the eavesdropper and/or
legitimate channels have independent and identically distributed Gaussian
entries with zero-mean and unit-variance, i.e., the channels have trivial
covariances. This paper addresses the general case where eavesdropper and
legitimate channels have nontrivial covariances. A set of equations describing
the optimal input covariance matrix are proposed along with an algorithm to
obtain the solution. Based on this framework, we show that when full
information on the legitimate channel is available to the transmitter, the
optimal input covariance always has rank one. We also show that when only
statistical information on the legitimate channel is available to the
transmitter, the legitimate channel has some general non-trivial covariance,
and the eavesdropper channel has trivial covariance, the optimal input
covariance has the same eigenvectors as the legitimate channel covariance.
Numerical results are presented to illustrate the algorithm.
|
1003.2429
|
Predicting Positive and Negative Links in Online Social Networks
|
physics.soc-ph cs.AI cs.CY
|
We study online social networks in which relationships can be either positive
(indicating relations such as friendship) or negative (indicating relations
such as opposition or antagonism). Such a mix of positive and negative links
arises in a variety of online settings; we study datasets from Epinions,
Slashdot and Wikipedia. We find that the signs of links in the underlying
social networks can be predicted with high accuracy, using models that
generalize across this diverse range of sites. These models provide insight
into some of the fundamental principles that drive the formation of signed
links in networks, shedding light on theories of balance and status from social
psychology; they also suggest social computing applications by which the
attitude of one user toward another can be estimated from evidence provided by
their relationships with other members of the surrounding social network.
|
1003.2454
|
Decoding Complexity of Irregular LDGM-LDPC Codes Over the BISOM Channels
|
cs.IT math.IT
|
An irregular LDGM-LDPC code is studied as a sub-code of an LDPC code with
some randomly \emph{punctured} output-bits. It is shown that the LDGM-LDPC
codes achieve rates arbitrarily close to the channel-capacity of the
binary-input symmetric-output memoryless (BISOM) channel with bounded
\emph{complexity}. The measure of complexity is the average-degree (per
information-bit) of the check-nodes for the factor-graph of the code. A
lower-bound on the average degree of the check-nodes of the irregular LDGM-LDPC
codes is obtained. The bound does not depend on the decoder used at the
receiver. The stability condition for decoding the irregular LDGM-LDPC codes
over the binary-erasure channel (BEC) under iterative-decoding with
message-passing is described.
|
1003.2458
|
Revisiting the Examination Hypothesis with Query Specific Position Bias
|
cs.IR
|
Click through rates (CTR) offer useful user feedback that can be used to
infer the relevance of search results for queries. However, it is not very
meaningful to look at the raw click-through rate of a search result because the
likelihood of a result being clicked depends not only on its relevance but also
the position in which it is displayed. One model of the browsing behavior, the
{\em Examination Hypothesis} \cite{RDR07,Craswell08,DP08}, states that each
position has a certain probability of being examined and is then clicked based
on the relevance of the search snippets. This is based on eye tracking studies
\cite{Claypool01, GJG04} which suggest that users are less likely to view
results in lower positions. Such a position dependent variation in the
probability of examining a document is referred to as {\em position bias}. Our
main observation in this study is that the position bias tends to differ with
the kind of information the user is looking for. This makes the position bias
{\em query specific}. In this study, we present a model for analyzing a query
specific position bias from the click data and use these biases to derive
position independent relevance values of search results. Our model is based on
the assumption that for a given query, the positional click through rate of a
document is proportional to the product of its relevance and a {\em query
specific} position bias. We compare our model with the vanilla examination
hypothesis model (EH) on a set of queries obtained from search logs of a
commercial search engine. We also compare it with the User Browsing Model (UBM)
\cite{DP08} which extends the cascade model of Craswell et al\cite{Craswell08}
by incorporating multiple clicks in a query session. We show that our
model, although much simpler to implement, consistently outperforms both EH and
UBM on well-used measures such as relative error and cross entropy.
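A toy sketch of the multiplicative assumption (ours, not the paper's estimator): if CTR(d, p) = relevance(d) × bias(p), both factors can be recovered from the positional CTR matrix, up to a common scale, by alternating least-squares updates; the data here is synthetic:

```python
# Hypothetical rank-1 ground truth: CTR(d, p) = rel[d] * bias[p].
true_rel  = [0.8, 0.5, 0.2]     # per-document relevance
true_bias = [1.0, 0.6, 0.3]     # per-position examination probability
ctr = [[r * b for b in true_bias] for r in true_rel]

rel, bias = [1.0] * 3, [1.0] * 3
for _ in range(20):
    # closed-form least-squares update of each factor given the other
    rel  = [sum(ctr[d][p] * bias[p] for p in range(3)) /
            sum(b * b for b in bias) for d in range(3)]
    bias = [sum(ctr[d][p] * rel[d] for d in range(3)) /
            sum(r * r for r in rel) for p in range(3)]

# the two factors are identifiable only up to scale; pin bias[0] = 1
scale = bias[0]
bias = [b / scale for b in bias]
rel  = [r * scale for r in rel]
```

With exact rank-1 data, the alternation recovers both factors after a couple of rounds; with real click logs, a robust fit over many queries would be needed instead.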
|
1003.2471
|
Structure-Aware Stochastic Control for Transmission Scheduling
|
cs.LG cs.IT cs.MM math.IT
|
In this paper, we consider the problem of real-time transmission scheduling
over time-varying channels. We first formulate the transmission scheduling
problem as a Markov decision process (MDP) and systematically unravel the
structural properties (e.g. concavity in the state-value function and
monotonicity in the optimal scheduling policy) exhibited by the optimal
solutions. We then propose an online learning algorithm which preserves these
structural properties and achieves $\epsilon$-optimal solutions for an
arbitrarily small $\epsilon$. The advantages of the proposed online method are
that: (i) it does not
require a priori knowledge of the traffic arrival and channel statistics and
(ii) it adaptively approximates the state-value functions using piece-wise
linear functions and has low storage and computation complexity. We also extend
the proposed low-complexity online learning solution to the prioritized data
transmission. The simulation results demonstrate that the proposed method
achieves significantly better utility (or delay)-energy trade-offs when
compared to existing state-of-the-art online optimization methods.
|
1003.2586
|
Inductive Logic Programming in Databases: from Datalog to DL+log
|
cs.LO cs.AI cs.DB cs.LG
|
In this paper we address an issue that has been brought to the attention of
the database community with the advent of the Semantic Web, i.e. the issue of
how ontologies (and the semantics conveyed by them) can help solve typical
database problems, through a better understanding of KR aspects related to
databases. In particular, we investigate this issue from the ILP perspective by
considering two database problems, (i) the definition of views and (ii) the
definition of constraints, for a database whose schema is represented also by
means of an ontology. Both can be reformulated as ILP problems and can benefit
from the expressive and deductive power of the KR framework DL+log. We
illustrate the application scenarios by means of examples. Keywords: Inductive
Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid
Knowledge Representation and Reasoning Systems. Note: To appear in Theory and
Practice of Logic Programming (TPLP).
|
1003.2606
|
Asymptotically-Optimal, Fast-Decodable, Full-Diversity STBCs
|
cs.IT math.IT
|
For a family/sequence of STBCs $\mathcal{C}_1,\mathcal{C}_2,\dots$, with
increasing number of transmit antennas $N_i$, with rates $R_i$ complex symbols
per channel use (cspcu), the asymptotic normalized rate is defined as $\lim_{i
\to \infty}{\frac{R_i}{N_i}}$. A family of STBCs is said to be
asymptotically-good if the asymptotic normalized rate is non-zero, i.e., when
the rate scales as a non-zero fraction of the number of transmit antennas, and
the family of STBCs is said to be asymptotically-optimal if the asymptotic
normalized rate is 1, which is the maximum possible value. In this paper, we
construct a new class of full-diversity STBCs that have the least ML decoding
complexity among all known codes for any number of transmit antennas $N>1$ and
rates $R>1$ cspcu. For a large set of $\left(R,N\right)$ pairs, the new codes
have lower ML decoding complexity than the codes already available in the
literature. Among the new codes, the class of full-rate codes ($R=N$) are
asymptotically-optimal and fast-decodable, and for $N>5$ have lower ML decoding
complexity than all other families of asymptotically-optimal, fast-decodable,
full-diversity STBCs available in the literature. The construction of the new
STBCs is facilitated by the following further contributions of this paper: (i)
For $g > 1$, we construct $g$-group ML-decodable codes with rates greater than
one cspcu. These codes are asymptotically-good too. For $g>2$, these are the
first instances of $g$-group ML-decodable codes with rates greater than $1$
cspcu presented in the literature. (ii) We construct a new class of
fast-group-decodable codes for all even numbers of transmit antennas and rates
$1 < R \leq 5/4$. (iii) Given a design with full-rank linear dispersion
matrices, we show that a full-diversity STBC can be constructed from this
design by encoding the real symbols independently using only regular PAM
constellations.
|
1003.2641
|
Release ZERO.0.1 of package RefereeToolbox
|
cs.AI
|
RefereeToolbox is a Java package implementing combination operators for
fusing evidence. It is downloadable from:
http://refereefunction.fredericdambreville.com/releases
RefereeToolbox is based on an interpretation of the fusion rules by means of
Referee Functions. This approach implies a dissociation between the definition
of the combination and its actual implementation, which is common to all
referee-based combinations. As a result, RefereeToolbox is designed with the
aim of being generic and extensible.
|
1003.2675
|
Exploiting Channel Memory for Multi-User Wireless Scheduling without
Channel Measurement: Capacity Regions and Algorithms
|
cs.IT cs.NI math.DS math.IT math.OC
|
We study the fundamental network capacity of a multi-user wireless downlink
under two assumptions: (1) Channels are not explicitly measured and thus
instantaneous states are unknown, (2) Channels are modeled as ON/OFF Markov
chains. This is an important network model to explore because channel probing
may be costly or infeasible in some contexts. In this case, we can use channel
memory with ACK/NACK feedback from previous transmissions to improve network
throughput. Computing in closed form the capacity region of this network is
difficult because it involves solving a high-dimensional partially observed
Markov decision problem. Instead, in this paper we construct inner and outer
bounds on the capacity region, showing that the bounds are tight when the number
of users is large and the traffic is symmetric. For the case of heterogeneous
traffic and any number of users, we propose a simple queue-dependent policy
that can stabilize the network with any data rates strictly within the inner
capacity bound. The stability analysis uses a novel frame-based Lyapunov drift
argument. The outer-bound analysis uses stochastic coupling and state
aggregation to bound the performance of a restless bandit problem using a
related multi-armed bandit system. Our results are useful in cognitive radio
networks, opportunistic scheduling with delayed/uncertain channel state
information, and restless bandit problems.
|
1003.2677
|
Classified Ads Harvesting Agent and Notification System
|
cs.IR
|
The shift from an information society to a knowledge society requires rapid
information harvesting, reliable search, and instantaneous on-demand delivery.
Information extraction agents are used to explore and collect data available
on the Web, in order to effectively exploit such data for business purposes, such
as automatic news filtering, advertisement or product searching and price
comparing. In this paper, we develop a real-time automatic harvesting agent for
adverts posted on Servihoo web portal and an SMS-based notification system. It
uses the URL of the web portal and the object model, i.e., the fields of
interest and a set of rules written using HTML parsing functions to
extract latest adverts information. The extraction engine executes the
extraction rules and stores the information in a database to be processed for
automatic notification. This intelligent system saves a tremendous amount of
time. It also enables users or potential product buyers to react more quickly
to changes and newly posted sales adverts, paving the way to real-time best buy
deals.
|
1003.2681
|
A Systematic Framework for the Construction of Optimal Complete
Complementary Codes
|
cs.IT math.IT
|
The complete complementary code (CCC) is a sequence family with ideal
correlation sums, which was proposed by Suehiro and Hatori. Numerous works
show its applications to direct-spread code-division multiple access (DS-CDMA)
systems for inter-channel interference (ICI)-free communication with improved
spectral efficiency. In this paper, we propose a systematic framework for the
construction of CCCs based on $N$-shift cross-orthogonal sequence families
($N$-CO-SFs). We show theoretical bounds on the size of $N$-CO-SFs and CCCs,
and give a set of four algorithms for their generation and extension. The
algorithms are optimal in the sense that the size of the resulting sequence
families achieves the theoretical bounds and, with the algorithms, we can construct an
optimal CCC consisting of sequences whose lengths are not only almost arbitrary
but even variable between sequence families. We also discuss the family size,
alphabet size, and lengths of constructible CCCs based on the proposed
algorithms.
|
1003.2682
|
Table manipulation in simplicial databases
|
cs.DB cs.IR
|
In \cite{Spi}, we developed a category of databases in which the schema of a
database is represented as a simplicial set. Each simplex corresponds to a
table in the database. There, our main concern was to find a categorical
formulation of databases; the simplicial nature of the schemas was to some
degree unexpected and unexploited.
In the present note, we show how to use this geometric formulation
effectively on a computer. If we think of each simplex as a polygonal tile, we
can imagine assembling custom databases by mixing and matching tiles. Queries
on this database can be performed by drawing paths through the resulting tile
formations, selecting records at the start-point of this path and retrieving
corresponding records at its end-point.
|
1003.2700
|
The role of semantics in mining frequent patterns from knowledge bases
in description logics with rules
|
cs.LO cs.AI
|
We propose a new method for mining frequent patterns in a language that
combines both Semantic Web ontologies and rules. In particular we consider the
setting of using a language that combines description logics with DL-safe
rules. This setting is important for the practical application of data mining
to the Semantic Web. We focus on the relation of the semantics of the
representation formalism to the task of frequent pattern discovery, and for the
core of our method, we propose an algorithm that exploits the semantics of the
combined knowledge base. We have developed a proof-of-concept data mining
implementation of this. Using this we have empirically shown that using the
combined knowledge base to perform semantic tests can make data mining faster
by pruning useless candidate patterns before their evaluation. We have also
shown that the quality of the set of patterns produced may be improved: the
patterns are more compact, and there are fewer patterns. We conclude that
exploiting the semantics of a chosen representation formalism is key to the
design and application of (onto-)relational frequent pattern discovery methods.
Note: To appear in Theory and Practice of Logic Programming (TPLP)
|
1003.2724
|
Particle Swarm Optimization Based Diophantine Equation Solver
|
cs.NE cs.NA
|
The paper introduces particle swarm optimization as a viable strategy for
finding numerical solutions of Diophantine equations, for which there exists no
general solution method. The proposed methodology uses a population of
integer particles. The candidate solutions in the feasible space are optimized
to have better positions through particle best and global best positions. The
methodology, which follows fully connected neighborhood topology, can offer
many solutions of such equations.
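A minimal sketch of the approach (the parameters and the target equation are our choices, not the authors'): integer particles searching for solutions of the Pythagorean equation x^2 + y^2 = z^2:

```python
import random

def fitness(p):
    x, y, z = p
    return abs(x * x + y * y - z * z)    # 0 exactly at a solution

random.seed(1)
swarm = [[random.randint(1, 30) for _ in range(3)] for _ in range(40)]
vel   = [[0.0] * 3 for _ in range(40)]
pbest = [p[:] for p in swarm]            # particle-best positions
gbest = min(pbest, key=fitness)          # global-best position

for _ in range(300):
    for i, p in enumerate(swarm):
        for d in range(3):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - p[d])
                         + 1.5 * r2 * (gbest[d] - p[d]))
            p[d] = max(1, round(p[d] + vel[i][d]))   # stay integer, positive
        if fitness(p) < fitness(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=fitness)
    if fitness(gbest) == 0:              # found an exact integer solution
        break
```

Rounding the updated position back to the integer lattice is the simplest way to keep particles integral; restarting stalled particles, as in the paper's fully connected topology, tends to surface multiple distinct solutions.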
|
1003.2749
|
Efficient Queue-based CSMA with Collisions
|
cs.IT cs.NI math.IT math.PR
|
Recently there has been considerable interest in the design of efficient
carrier sense multiple access (CSMA) protocols for wireless networks. The
basic assumption underlying recent results is the availability of perfect
carrier sense information, which allows for the design of continuous-time
algorithms under which collisions are avoided. The primary purpose of this
note is to show how these results can be extended to the case when carrier
sense information may be imperfect or, equivalently, delayed. Specifically, an
adaptation of the algorithm in Rajagopalan, Shah, Shin (2009) is presented
here for a time-slotted setup with carrier sense information available only at
the end of the time slot. To establish its throughput optimality, in addition
to the methods developed in Rajagopalan, Shah, Shin (2009), it is essential to
understand properties of the stationary distribution of a certain
non-reversible Markov chain as well as bounds on its mixing time. This note
presents these key results. A longer version of this note will provide a
detailed account of how this is incorporated with the methods of Rajagopalan,
Shah, Shin (2009) to establish positive recurrence of the underlying network
Markov process. In addition, these results will help design optimal rate
control in conjunction with CSMA in the presence of collisions, building upon
the method of Jiang, Shah, Shin, Walrand (2009).
|
1003.2751
|
Near-Optimal Evasion of Convex-Inducing Classifiers
|
cs.LG cs.CR
|
Classifiers are often used to detect miscreant activities. We study how an
adversary can efficiently query a classifier to elicit information that allows
the adversary to evade detection at near-minimal cost. We generalize results of
Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that
construct undetected instances of near-minimal cost using only polynomially
many queries in the dimension of the space and without reverse engineering the
decision boundary.
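The line-search ingredient of such query-based evasion can be sketched as follows (a simplification of ours, with a hypothetical classifier whose undetected set is a convex ball):

```python
def classify(x):
    """Hypothetical detector: flags points outside the ball ||x|| <= 5,
    so the undetected (negative) set is convex."""
    return sum(v * v for v in x) > 25.0      # True = detected

def line_evade(x_detected, x_undetected, eps=1e-6):
    """Binary-search the segment toward x_detected while staying
    undetected; uses O(log(1/eps)) membership queries."""
    lo, hi = 0.0, 1.0        # t = 0: undetected end, t = 1: detected end
    while hi - lo > eps:
        mid = (lo + hi) / 2
        x = [u + mid * (d - u) for u, d in zip(x_undetected, x_detected)]
        if classify(x):
            hi = mid
        else:
            lo = mid
    return [u + lo * (d - u) for u, d in zip(x_undetected, x_detected)]

x = line_evade([10.0, 0.0], [0.0, 0.0])     # approaches the boundary [5, 0]
```

The paper's algorithms combine many such one-dimensional probes to approach the minimal-cost undetected instance without ever reconstructing the boundary itself.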
|
1003.2760
|
On the monotonicity, log-concavity and tight bounds of the generalized
Marcum and Nuttall Q-functions
|
cs.IT math.IT math.PR
|
In this paper, we present a comprehensive study of the monotonicity and
log-concavity of the generalized Marcum and Nuttall Q-functions. More
precisely, a simple probabilistic method is firstly given to prove the
monotonicity of these two functions. Then, the log-concavity of the generalized
Marcum Q-function and its deformations is established with respect to each of
the three parameters. Since the Nuttall Q-function has similar probabilistic
interpretations as the generalized Marcum Q-function, we deduce the
log-concavity of the Nuttall Q-function. By exploiting the log-concavity of
these two functions, we propose new tight lower and upper bounds for the
generalized Marcum and Nuttall Q-functions. Our proposed bounds are much
tighter than the existing bounds in the literature in most of the cases. The
relative errors of our proposed bounds converge to 0 as b tends to infinity.
The numerical results show that the absolute relative errors of the proposed
bounds are less than 5% in most of the cases. The proposed bounds can be
effectively applied to the outage probability analysis of interference-limited
systems such as cognitive radio and wireless sensor network, in the study of
error performance of various wireless communication systems operating over
fading channels and extracting the log-likelihood ratio for differential
phase-shift keying (DPSK) signals.
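For concreteness, the generalized Marcum Q-function can be evaluated via its standard noncentral chi-square series (this code is our illustration, not the paper's bounds), and the claimed monotonicity checked numerically:

```python
from math import exp, factorial

def marcum_q(M, a, b, terms=60):
    """Q_M(a, b) as the survival function of a noncentral chi-square with
    2M degrees of freedom and noncentrality a^2, evaluated at b^2."""
    la, lb = a * a / 2.0, b * b / 2.0
    total = 0.0
    for k in range(terms):
        pois = exp(-la) * la ** k / factorial(k)        # Poisson(k; a^2/2)
        inner = sum(exp(-lb) * lb ** j / factorial(j)   # chi^2_{2(M+k)} SF
                    for j in range(k + M))
        total += pois * inner
    return total

# monotonicity stated in the abstract: increasing in a, decreasing in b
```

Sixty series terms are ample for the moderate arguments used here; for large a or b a recursive evaluation would be needed to avoid overflow in the factorials.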
|
1003.2782
|
Reduced ML-Decoding Complexity, Full-Rate STBCs for $2^a$ Transmit
Antenna Systems
|
cs.IT math.IT
|
For an $n_t$ transmit, $n_r$ receive antenna system ($n_t \times n_r$
system), a {\it{full-rate}} space time block code (STBC) transmits $n_{min} =
min(n_t,n_r)$ complex symbols per channel use and in general, has an
ML-decoding complexity of the order of $M^{n_tn_{min}}$ (considering square
designs), where $M$ is the constellation size. In this paper, a scheme to
obtain a full-rate STBC for $2^a$ transmit antennas and any $n_r$, with reduced
ML-decoding complexity of the order of $M^{n_t(n_{min}-3/4)}$, is presented.
The weight matrices of the proposed STBC are obtained from the unitary matrix
representations of a Clifford Algebra. For any value of $n_r$, the proposed
design offers a reduction from the full ML-decoding complexity by a factor of
$M^{3n_t/4}$. The well-known Silver code for 2 transmit antennas is a special
case of the proposed scheme. Further, it is shown that the codes constructed
using the scheme have higher ergodic capacity than the well known punctured
Perfect codes for $n_r < n_t$. Simulation results of the symbol error rates are
shown for $8 \times 2$ systems, where the comparison of the proposed code is
with the punctured Perfect code for 8 transmit antennas. The proposed code
matches the punctured Perfect code in error performance, while having reduced
ML-decoding complexity and higher ergodic capacity.
|
1003.2822
|
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound
Imaging
|
cs.IT math.IT
|
Signals comprised of a stream of short pulses appear in many applications
including bio-imaging and radar. The recent finite rate of innovation
framework has paved the way to low-rate sampling of such pulses by noticing
that only a small number of parameters per unit time are needed to fully
describe these signals. Unfortunately, for high rates of innovation, existing
sampling schemes are numerically unstable. In this paper we propose a general
sampling approach which leads to stable recovery even in the presence of many
pulses. We begin by deriving a condition on the sampling kernel which allows
perfect reconstruction of periodic streams from the minimal number of samples.
We then design a compactly supported class of filters, satisfying this
condition. The periodic solution is extended to finite and infinite streams,
and is shown to be numerically stable even for a large number of pulses. High
noise robustness is also demonstrated when the delays are sufficiently
separated. Finally, we process ultrasound imaging data using our techniques,
and show that substantial rate reduction with respect to traditional ultrasound
sampling schemes can be achieved.
|
1003.2836
|
Fishing in Poisson streams: focusing on the whales, ignoring the minnows
|
cs.IT math.IT
|
This paper describes a low-complexity approach for reconstructing average
packet arrival rates and instantaneous packet counts at a router in a
communication network, where the arrivals of packets in each flow follow a
Poisson process. Assuming that the rate vector of this Poisson process is
sparse or approximately sparse, the goal is to maintain a compressed summary of
the process sample paths using a small number of counters, such that at any
time it is possible to reconstruct both the total number of packets in each
flow and the underlying rate vector. We show that these tasks can be
accomplished efficiently and accurately using compressed sensing with expander
graphs. In particular, the compressive counts are a linear transformation of
the underlying counting process by the adjacency matrix of an unbalanced
expander. Such a matrix is binary and sparse, which allows for efficient
incrementing when new packets arrive. We describe, analyze, and compare two
methods that can be used to estimate both the current vector of total packet
counts and the underlying vector of arrival rates.
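A toy version of the counter update (the dimensions and the random sparse matrix are our stand-ins for the paper's unbalanced-expander construction):

```python
import random

random.seed(0)
n_flows, n_counters, d = 50, 12, 3       # d ones per column: binary, sparse
# support of each column of the measurement matrix A
cols = [random.sample(range(n_counters), d) for _ in range(n_flows)]

x = [0] * n_flows        # true per-flow packet counts (kept only to verify)
y = [0] * n_counters     # compressed summary y = A x, the stored state

def packet_arrival(flow):
    """Incrementing y costs only d counter updates per packet."""
    x[flow] += 1
    for r in cols[flow]:
        y[r] += 1

for _ in range(1000):
    packet_arrival(random.randrange(5))  # traffic concentrated on 5 flows
```

Because A is binary and column-sparse, each arrival touches only d counters; recovering x (or the rate vector) from y is then a sparse-recovery problem over the expander.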
|
1003.2880
|
Regularized sampling of multiband signals
|
cs.IT math.IT
|
This paper presents a regularized sampling method for multiband signals that
makes it possible to approach the Landau limit while keeping the sensitivity
to noise at a low level. The method is based on band-limited windowing,
followed by trigonometric approximation in consecutive time intervals. The key
point is that the trigonometric approximation "inherits" the multiband
property, that is, its coefficients are formed by bursts of non-zero elements
corresponding to the multiband components. It is shown that this method can be
well combined with the recently proposed synchronous multi-rate sampling (SMRS)
scheme, given that the resulting linear system is sparse and formed by ones and
zeroes. The proposed method allows one to trade sampling efficiency for noise
sensitivity, and is specially well suited for bounded signals with unbounded
energy like those in communications, navigation, audio systems, etc. Besides,
it is also applicable to finite energy signals and periodic band-limited
signals (trigonometric polynomials). The paper includes a subspace method for
blindly estimating the support of the multiband signal as well as its
components, and the results are validated through several numerical examples.
|
1003.2883
|
Nearly Optimal Resource Allocation for Downlink OFDMA in 2-D Cellular
Networks
|
cs.IT math.IT
|
In this paper, we propose a resource allocation algorithm for the downlink of
sectorized two-dimensional (2-D) OFDMA cellular networks assuming statistical
Channel State Information (CSI) and fractional frequency reuse. The proposed
algorithm can be implemented in a distributed fashion without the need for any
central controlling units. Its performance is analyzed assuming fast fading
Rayleigh channels and Gaussian distributed multicell interference. We show that
the transmit power of this simple algorithm tends, as the number of users grows
to infinity, to the same limit as the minimal power required to satisfy all
users' rate requirements, i.e., the proposed resource allocation algorithm is
asymptotically optimal. As a byproduct of this asymptotic analysis, we
characterize a relevant value of the reuse factor that only depends on an
average state of the network.
|
1003.2914
|
High-Rate Quantization for the Neyman-Pearson Detection of Hidden Markov
Processes
|
cs.IT math.IT
|
This paper investigates the decentralized detection of Hidden Markov
Processes using the Neyman-Pearson test. We consider a network formed by a
large number of distributed sensors. Sensors' observations are noisy snapshots
of a Markov process to be detected. Each (real) observation is quantized on
log2(N) bits before being transmitted to a fusion center which makes the final
decision. For any false alarm level, it is shown that the miss probability of
the Neyman-Pearson test converges to zero exponentially as the number of
sensors tends to infinity. The error exponent is provided using recent results
on Hidden Markov Models. In order to obtain informative expressions of the
error exponent as a function of the quantization rule, we further investigate
the case where the number N of quantization levels tends to infinity, following
the approach developed in [Gupta & Hero, 2003]. In this regime, we provide the
quantization rule maximizing the error exponent. Illustration of our results is
provided in the case of the detection of a Gauss-Markov signal in noise. In
terms of error exponent, the proposed quantization rule significantly
outperforms the one proposed by [Gupta & Hero, 2003] for i.i.d. observations.
|
1003.2941
|
Universal Regularizers For Robust Sparse Coding and Modeling
|
cs.IT math.IT stat.ML
|
Sparse data models, where data is assumed to be well represented as a linear
combination of a few elements from a dictionary, have gained considerable
attention in recent years, and their use has led to state-of-the-art results in
many signal and image processing tasks. It is now well understood that the
choice of the sparsity regularization term is critical in the success of such
models. Based on a codelength minimization interpretation of sparse coding, and
using tools from universal coding theory, we propose a framework for designing
sparsity regularization terms which have theoretical and practical advantages
when compared to the more standard l0 or l1 ones. The presentation of the
framework and theoretical foundations is complemented with examples that show
its practical advantages in image denoising, zooming and classification.
|
1003.3056
|
Spatial multiplexing with MMSE receivers: Single-stream optimality in ad
hoc networks
|
cs.IT math.IT
|
The performance of spatial multiplexing systems with linear
minimum-mean-squared-error receivers is investigated in ad hoc networks. It is
shown that single-stream transmission is preferable over multi-stream
transmission, due to the weaker interference powers from the strongest
interferers remaining after interference cancellation. This result is obtained
by new exact closed-form expressions we derive for the outage probability and
transmission capacity.
|
1003.3080
|
An Algorithm for Index Multimedia Data (Video) using the Movement
Oriented Method for Real-time Online Services
|
cs.MM cs.IR
|
Multimedia data is a form of data that can represent all types of data
(images, sound and text). The use of multimedia data in online applications
requires a more comprehensive database in terms of storage media usage,
sorting/indexing, and search. This is necessary in order to help providers and
users access multimedia data online. Systems that use an image index as a
reference require large storage media and special expertise to obtain the
desired file. Converting multimedia data into a series of stories (a
storyboard) in the form of text helps reduce the consumption of storage media
and simplifies indexing/sorting and search applications. Movement Oriented is
one method being developed to convert multimedia data into a storyboard.
|
1003.3082
|
Agreement Maintenance Based on Schema and Ontology Change in P2P
Environment
|
cs.AI cs.DB
|
This paper is concerned with developing a semantic agreement maintenance
method based on semantic distance, computed from changes in a local schema or
ontology. Such an approach is important in dynamic, autonomous environments,
whereas current approaches assume that agreements or mappings exist in a static
environment. The contribution of this research is a framework for semantic
agreement maintenance in a P2P environment. The framework is based on a
two-level hybrid P2P architecture consisting of two peer types: (1) super
peers, which register and manage the other peers, and (2) simple peers, which
export and share their contents with others. We develop a model to maintain
semantic agreements in a P2P environment; current approaches lack a mechanism
to detect change, since they assume that the ontology and local schema remain
static, which does not hold in a dynamic setting. The main issues are how to
quantify the change of a local schema or common ontology, and how to use that
measure to select the algorithm for maintaining the agreement. Experiments on a
job-matching domain in Indonesia have been carried out to assess the
performance of the approach. The main results are: (i) the greater the change,
the more the F-measure tends to decrease; (ii) there is no significant
difference in F-measure across modification types (add, delete, rename); and
(iii) the correct choice of algorithm improves the F-measure.
|
1003.3131
|
Strategic Cooperation in Cost Sharing Games
|
cs.GT cs.DS cs.MA
|
In this paper we consider strategic cost sharing games with so-called
arbitrary sharing based on various combinatorial optimization problems, such as
vertex and set cover, facility location, and network design problems. We
concentrate on the existence and computational complexity of strong equilibria,
in which no coalition can improve the cost of each of its members. Our main
result reveals a connection between strong equilibrium in strategic games and
the core in traditional coalitional cost sharing games studied in economics.
For set cover and facility location games this results in a tight
characterization of the existence of strong equilibrium using the integrality
gap of suitable linear programming formulations. Furthermore, it allows us to
derive all existing results for strong equilibria in network design cost
sharing games with arbitrary sharing via a unified approach. In addition, we
are able to show that in general the strong price of anarchy is always 1. This
should be contrasted with the price of anarchy of \Theta(n) for Nash
equilibria. Finally, we indicate that the LP-approach can also be used to
compute near-optimal and near-stable approximate strong equilibria.
|
1003.3139
|
Querying Incomplete Data over Extended ER Schemata
|
cs.DB cs.LO
|
Since Chen's Entity-Relationship (ER) model, conceptual modeling has been
playing a fundamental role in relational data design. In this paper we consider
an extended ER (EER) model enriched with cardinality constraints, disjointness
assertions, and is-a relations among both entities and relationships. In this
setting, we consider the case of incomplete data, which is likely to occur, for
instance, when data from different sources are integrated. In such a context,
we address the problem of providing correct answers to conjunctive queries by
reasoning on the schema. Based on previous results about decidability of the
problem, we provide a query answering algorithm that performs rewriting of the
initial query into a recursive Datalog query encoding the information about the
schema. We finally show extensions to more general settings. This paper will
appear in the special issue of Theory and Practice of Logic Programming (TPLP)
titled Logic Programming in Databases: From Datalog to Semantic-Web Rules.
|
1003.3195
|
Zero-error channel capacity and simulation assisted by non-local
correlations
|
quant-ph cs.IT math.IT
|
Shannon's theory of zero-error communication is re-examined in the broader
setting of using one classical channel to simulate another exactly, and in the
presence of various resources that are all classes of non-signalling
correlations: Shared randomness, shared entanglement and arbitrary
non-signalling correlations. Specifically, when the channel being simulated is
noiseless, this reduces to the zero-error capacity of the channel, assisted by
the various classes of non-signalling correlations. When the resource channel
is noiseless, it results in the "reverse" problem of simulating a noisy channel
exactly by a noiseless one, assisted by correlations. In both cases, 'one-shot'
separations between the power of the different assisting correlations are
exhibited. The most striking result of this kind is that entanglement can
assist in zero-error communication, in stark contrast to the standard setting
of communication with asymptotically vanishing error, in which entanglement does
not help at all. In the asymptotic case, shared randomness is shown to be just
as powerful as arbitrary non-signalling correlations for noisy channel
simulation, which is not true for the asymptotic zero-error capacities. For
assistance by arbitrary non-signalling correlations, linear programming
formulas for capacity and simulation are derived, the former being equal (for
channels with non-zero unassisted capacity) to the feedback-assisted zero-error
capacity originally derived by Shannon to upper bound the unassisted zero-error
capacity. Finally, a kind of reversibility between non-signalling-assisted
capacity and simulation is observed, mirroring the famous "reverse Shannon
theorem".
|
1003.3266
|
Pattern recognition using inverse resonance filtration
|
cs.CV
|
An approach to texture pattern recognition based on inverse resonance
filtration (IRF) is considered. The IRF is designed from a set of principal
resonance harmonics obtained by an eigen harmonic decomposition (EHD) of the
fluctuations of the textured image signal. It is shown that the EHD is
invariant to linear shifts of the textured image. A texture is recognized by
transforming its signal into an unstructured signal whose simple statistical
parameters can be used for texture pattern recognition; anomalous variations of
this signal indicate foreign objects. Two methods for estimating the 2D EHD
parameters are considered, accounting for the presence of breaks in the texture
signal. The first is based on a linear symmetry model that is insensitive to
phase jumps of the signal; the symmetry condition on the characteristic
polynomial ensures stationarity and periodicity of the model. The second is
based on the eigenvalue problem of a matrix pencil projected onto the principal
vector space of the singular value decomposition (SVD) of the 2D correlation
matrix. Two methods for classifying foreign objects retrieved from textured
images are offered.
|
1003.3279
|
A New Heuristic for Feature Selection by Consistent Biclustering
|
cs.LG cs.DM math.CO
|
Given a set of data, biclustering aims at finding simultaneous partitions in
biclusters of its samples and of the features which are used for representing
the samples. Consistent biclusterings allow to obtain correct classifications
of the samples from the known classification of the features, and vice versa,
and they are very useful for performing supervised classifications. The problem
of finding consistent biclusterings can be seen as a feature selection problem,
where the features that are not relevant for classification purposes are
removed from the set of data, while the total number of features is maximized
in order to preserve information. This feature selection problem can be
formulated as a linear fractional 0-1 optimization problem. We propose a
reformulation of this problem as a bilevel optimization problem, and we present
a heuristic algorithm for an efficient solution of the reformulated problem.
Computational experiments show that the presented algorithm finds better
solutions than those obtained by previously proposed heuristic algorithms.
|
1003.3299
|
Improved Bounds on Restricted Isometry Constants for Gaussian Matrices
|
cs.IT math.CO math.IT math.NA
|
The Restricted Isometry Constants (RIC) of a matrix $A$ measure how close to
an isometry the action of $A$ is on vectors with few nonzero entries, measured
in the $\ell^2$ norm. Specifically, the upper and lower RIC of a matrix $A$ of
size $n\times N$ are the maximum and minimum deviations from unity (one) of
the largest and smallest, respectively, squared singular values of all
${N\choose k}$ matrices formed by taking $k$ columns from $A$. Calculation of
the RIC is intractable for most matrices due to its combinatorial nature;
however, many random matrices typically have bounded RIC in some range of
problem sizes $(k,n,N)$. We provide the best known bound on the RIC for
Gaussian matrices, which is also the smallest known bound on the RIC for any
large rectangular matrix. Improvements over prior bounds are achieved by
exploiting similarity of singular values for matrices which share a substantial
number of columns.
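The combinatorial definition above can be made concrete in a toy computation. The sketch below (illustrative only, not the paper's bounding technique) brute-forces the upper and lower RIC for sparsity k = 2 by examining every pair of columns; the squared singular values of an n x 2 submatrix are the eigenvalues of its 2 x 2 Gram matrix, available in closed form:

```python
import itertools
import math

def ric_k2(A):
    """Brute-force upper/lower RIC of A (list of rows) for sparsity k = 2.

    For each pair of columns, the squared singular values of the n x 2
    submatrix are the eigenvalues of its 2 x 2 Gram matrix, computed via
    the quadratic formula; the RIC are the worst deviations from 1.
    """
    n, N = len(A), len(A[0])
    cols = [[A[i][j] for i in range(n)] for j in range(N)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    upper = lower = 0.0
    for j1, j2 in itertools.combinations(range(N), 2):
        a, b, c = dot(cols[j1], cols[j1]), dot(cols[j1], cols[j2]), dot(cols[j2], cols[j2])
        disc = math.sqrt((a - c) ** 2 + 4 * b * b)
        lam_max, lam_min = (a + c + disc) / 2, (a + c - disc) / 2
        upper = max(upper, lam_max - 1)   # largest squared singular value above 1
        lower = max(lower, 1 - lam_min)   # smallest squared singular value below 1
    return lower, upper
```

For orthonormal columns both constants are zero; adding a column correlated with an existing one pushes both RIC up, which is exactly the combinatorial blow-up that makes exact RIC computation intractable for large $N$ and $k$.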
|
1003.3370
|
Adding HL7 version 3 data types to PostgreSQL
|
cs.DB
|
The HL7 standard is widely used to exchange medical information
electronically. As a part of the standard, HL7 defines scalar communication
data types like physical quantity, point in time and concept descriptor but
also complex types such as interval types, collection types and probabilistic
types. Typical HL7 applications will store their communications in a database,
resulting in a translation from HL7 concepts and types into database types.
Since the data types were not designed to be implemented in a relational
database server, this transition is cumbersome and fraught with programmer
error. The purpose of this paper is twofold. First, we analyze the HL7 version
3 data type definitions and define a number of conditions that must be met for
a data type to be suitable for implementation in a relational database. As a
result of this analysis we describe a number of possible improvements to the
HL7 specification. Second, we describe an implementation in the PostgreSQL
database server and show that the database server can effectively execute
scientific calculations with units of measure, supports a large number of
operations on time points and intervals, and can perform operations that are
akin to a medical terminology server. Experiments on synthetic data show that
the user defined types perform better than an implementation that uses only
standard data types from the database server.
|
1003.3384
|
Scaling limits for continuous opinion dynamics systems
|
math.PR cs.SI math.AP math.DS
|
Scaling limits are analyzed for stochastic continuous opinion dynamics
systems, also known as gossip models. In such models, agents update their
vector-valued opinion to a convex combination (possibly agent- and
opinion-dependent) of their current value and that of another observed agent.
It is shown that, in the limit of large agent population size, the empirical
opinion density concentrates, at an exponential probability rate, around the
solution of a probability-measure-valued ordinary differential equation
describing the system's mean-field dynamics. Properties of the associated
initial value problem are studied. The asymptotic behavior of the solution is
analyzed for bounded-confidence opinion dynamics, and in the presence of a
heterogeneous influential environment.
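As a toy illustration of the agent-level dynamics described above (a hypothetical scalar instance, not the paper's measure-valued analysis), each update moves one randomly chosen agent's opinion to a convex combination of its current value and that of another observed agent:

```python
import random

def gossip_step(opinions, weight=0.5, rng=random):
    # One asymmetric gossip update: agent i observes agent j and moves its
    # opinion to a convex combination of the two current values.
    i, j = rng.sample(range(len(opinions)), 2)
    opinions[i] = (1 - weight) * opinions[i] + weight * opinions[j]

random.seed(0)
x = [random.random() for _ in range(100)]   # initial scalar opinions in [0, 1]
spread0 = max(x) - min(x)
for _ in range(20000):
    gossip_step(x)
# Every update stays inside the convex hull of current opinions,
# so the spread max(x) - min(x) is non-increasing and shrinks toward consensus.
spread = max(x) - min(x)
```

With no confidence bound the population contracts toward a single value; bounded-confidence variants restrict updates to pairs whose opinions are close, which is what produces the clustering behaviour the abstract analyzes.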
|
1003.3386
|
Monomial-like codes
|
cs.IT math.IT
|
As a generalization of cyclic codes of length p^s over F_{p^a}, we study
n-dimensional cyclic codes of length p^{s_1} X ... X p^{s_n} over F_{p^a}
generated by a single "monomial". Namely, we study multi-variable cyclic codes
of the form <(x_1 - 1)^{i_1} ... (x_n - 1)^{i_n}> in F_{p^a}[x_1...x_n] / <
x_1^{p^{s_1}}-1, ..., x_n^{p^{s_n}}-1 >. We call such codes monomial-like
codes. We show that these codes arise from the product of certain single
variable codes and we determine their minimum Hamming distance. We determine
the dual of monomial-like codes yielding a parity check matrix. We also present
an alternative way of constructing a parity check matrix using the Hasse
derivative. We study the weight hierarchy of certain monomial like codes. We
simplify an expression that gives us the weight hierarchy of these codes.
|
1003.3492
|
Generalized Maiorana-McFarland Constructions for Almost Optimal
Resilient Functions
|
cs.CR cs.IT math.CO math.IT
|
In a recent paper \cite{Zhang-Xiao}, Zhang and Xiao describe a technique for
constructing almost optimal resilient functions on even number of variables. In
this paper, we will present an extensive study of the constructions of almost
optimal resilient functions by using the generalized Maiorana-McFarland (GMM)
construction technique. It is shown that for any given $m$, it is possible to
construct infinitely many $n$-variable ($n$ even), $m$-resilient Boolean
functions with nonlinearity equal to $2^{n-1}-2^{n/2-1}-2^{k-1}$ where $k<n/2$.
A generalized version of GMM construction is further described to obtain almost
optimal resilient functions with higher nonlinearity. We then modify the GMM
construction slightly so that the constructed functions satisfy the strict
avalanche criterion (SAC). Furthermore, we can obtain infinitely many new
resilient functions with nonlinearity $>2^{n-2}-2^{(n-1)/2}$ ($n$ odd) by using
Patterson-Wiedemann functions or Kavut-Y$\ddot{u}$cel functions. Finally, we
provide a GMM construction technique for multiple-output almost optimal
$m$-resilient functions $F: \mathbb{F}_2^n\mapsto \mathbb{F}_2^r$ ($n$ even)
with nonlinearity $>2^{n-1}-2^{n/2}$. Using the methods proposed in this paper,
a large class of previously unknown cryptographic resilient functions is
obtained.
|
1003.3501
|
Generalized Distributed Network Coding Based on Nonbinary Linear Block
Codes for Multi-User Cooperative Communications
|
cs.IT math.IT
|
In this work, we propose and analyze a generalized construction of
distributed network codes for a network consisting of M users sending different
information to a common base station through independent block fading channels.
The aim is to increase the diversity order of the system without reducing its
code rate. The proposed scheme, called generalized dynamic network codes
(GDNC), is a generalization of the dynamic network codes (DNC) recently
proposed by Xiao and Skoglund. The design of the network codes that maximizes
the diversity order is recognized as equivalent to the design of linear block
codes over a nonbinary finite field under the Hamming metric. The proposed
scheme offers a much better tradeoff between rate and diversity order. An
outage probability analysis showing the improved performance is carried out,
and computer simulation results are shown to agree with the analytical
results.
|
1003.3507
|
On the Degrees of Freedom Regions of Two-User MIMO Z and Full
Interference Channels with Reconfigurable Antennas
|
cs.IT math.IT
|
We study the degrees of freedom (DoF) regions of two-user multiple-input
multiple-output (MIMO) Z and full interference channels in this paper. We
assume that the receivers always have perfect channel state information. We
derive the DoF region of the Z interference channel with channel state
information at the transmitter (CSIT). For the full interference channel
without CSIT, the DoF region has been obtained in previous work except for a
special case M_1 < N_1 < min(M_2, N_2), where M_i and N_i are the numbers of
transmit and receive antennas of user i, respectively. We show that for this
case the DoF regions of
the Z and full interference channels are the same. We establish the
achievability based on the assumption of transmitter antenna mode switching. A
systematic way of constructing the DoF-achieving nulling and beamforming
matrices is presented in this paper.
|
1003.3530
|
Topic Map: An Ontology Framework for Information Retrieval
|
cs.DL cs.IR
|
The basic classification techniques for organizing information are thesauri,
taxonomies, and faceted classification. The topic map is a relatively new
entrant to this information space. The topic map standard describes how complex relationships
between abstract concepts and real world resources can be represented using XML
syntax. This paper explores how topic map incorporates the traditional
techniques and what are its advantages and disadvantages in several dimensions
such as content management, indexing, knowledge representation, constraint
specification and query languages in the context of information retrieval. The
constructs of topic maps are illustrated with a use case implemented in XTM.
|
1003.3533
|
Towards Automated Lecture Capture, Navigation and Delivery System for
Web-Lecture on Demand
|
cs.MM cs.IR
|
Institutions all over the world are continuously exploring ways to use ICT in
improving teaching and learning effectiveness. The use of course web pages,
discussion groups, bulletin boards, and e-mails have shown considerable impact
on teaching and learning in significant ways, across all disciplines.
E-learning has emerged as an alternative to traditional classroom-based
education and training, and web lectures can be a powerful addition to
traditional lectures.
They can even serve as a main content source for learning, provided users can
quickly navigate and locate relevant pages in a web lecture. A web lecture
consists of video and audio of the presenter and slides complemented with
screen capturing. In this paper, an automated approach for recording live
lectures and for browsing available web lectures for on-demand applications by
end users is presented.
|
1003.3536
|
Computing the Fewest-turn Map Directions based on the Connectivity of
Natural Roads
|
cs.CG cs.DB cs.DS
|
In this paper, we introduce a novel approach to computing fewest-turn map
directions or routes based on the concept of natural roads. Natural roads are
joined road segments that perceptually constitute good continuity. The approach
relies on the connectivity of natural roads, rather than that of road segments,
for computing routes or map directions; consequently, the derived routes
possess the fewest turns. However, we aim at routes that not only possess the
fewest turns but are also as short as possible. Such map directions are more
effective and favored by people, because they impose less cognitive burden.
Furthermore, the computation of the routes is more efficient, since it is based
on the graph encoding the connectivity of roads, which is significantly smaller
than the graph of road segments. We conducted experiments on eight urban street
networks from North America and Europe to illustrate these advantages. The
experimental results indicate that the fewest-turn routes possess fewer turns
and shorter distances than the simplest paths and the routes provided by Google
Maps. For example, the fewest-turn-and-shortest routes are on average 15%
shorter than the routes suggested by Google Maps, while the number of turns is
just half as much. This approach is a key technology behind FromToMap.org, a
web mapping service using OpenStreetMap data.
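The core idea, ranking routes by number of road switches first and distance second, can be sketched as a lexicographic Dijkstra search over edges labelled with road identifiers. The graph encoding, road ids, and cost model below are illustrative assumptions, not the paper's implementation:

```python
import heapq

def fewest_turn_route(edges, source, target):
    # edges: dict node -> list of (neighbour, length, road_id).
    # A turn is counted whenever the route switches to a different road id;
    # heap entries are ordered lexicographically by (turns, distance).
    heap = [(0, 0.0, source, '')]        # '' = no road entered yet
    best = {}
    while heap:
        turns, dist, node, road = heapq.heappop(heap)
        if node == target:
            return turns, dist           # fewest turns, then shortest distance
        if best.get((node, road), (float('inf'),)) <= (turns, dist):
            continue                     # already reached this state as cheaply
        best[(node, road)] = (turns, dist)
        for nxt, length, rid in edges.get(node, []):
            extra = 1 if road and rid != road else 0
            heapq.heappush(heap, (turns + extra, dist + length, nxt, rid))
    return None
```

On a graph where segments of one natural road share a road id, a geometrically longer route along a single road beats a shorter route that hops between roads, which is the trade-off the abstract resolves by minimising turns first.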
|
1003.3543
|
Fastest Distributed Consensus Problem on Fusion of Two Star Networks
|
cs.IT cs.DC math.IT
|
Finding optimal weights for the problem of Fastest Distributed Consensus on
networks with different topologies has been an active area of research for a
number of years. In this work we present an analytical solution to the Fastest
Distributed Consensus problem for a network formed by the fusion of two
different symmetric star networks, in other words, two symmetric star networks
sharing the same central node. The solution procedure consists of
stratification of the network's connectivity graph and semidefinite
programming (SDP), in particular solving the slackness conditions, where the
optimal weights are obtained by inductively comparing the characteristic
polynomials arising from the slackness conditions. Numerical simulations are
carried out to investigate the trade-off between the parameters of the two
fused star networks, namely the length and number of branches.
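The quantity being optimised in such problems is the asymptotic per-step convergence factor of a symmetric consensus weight matrix W, i.e. its second-largest eigenvalue modulus. The sketch below estimates it by power iteration after projecting out the all-ones consensus direction; it is a generic numerical illustration, not the paper's SDP machinery:

```python
def slem(W, iters=1000):
    # Second-largest eigenvalue modulus of a symmetric stochastic weight
    # matrix W (rows summing to one): power-iterate W on the subspace
    # orthogonal to the all-ones consensus eigenvector.
    n = len(W)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        m = sum(v) / n
        v = [x - m for x in v]                                   # project out ones
        v = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    Wv = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
    return abs(sum(a * b for a, b in zip(Wv, v)))                # Rayleigh quotient
```

For a 3-node path with edge weight 0.25, W = I - 0.25 L has eigenvalues 1, 0.75, and 0.25, so the convergence factor is 0.75; smaller values of this factor mean faster consensus, which is exactly what the optimal weights minimise.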
|
1003.3619
|
Using Information Theory to Study the Efficiency and Capacity of
Computers and Similar Devices
|
cs.IT cs.CC math.IT
|
We address the problems of estimating the computer efficiency and the
computer capacity. We define the computer efficiency and capacity and suggest a
method for their estimation, based on the analysis of processor instructions
and kinds of accessible memory. It is shown how the suggested method can be
applied to estimate the computer capacity. In particular, this consideration
gives a new look at the organization of the memory of a computer. The obtained
results may be of interest for practical applications.
|
1003.3654
|
Sliding window approach based Text Binarisation from Complex Textual
images
|
cs.CV
|
The text binarisation process classifies individual pixels as text or
background in textual images. Binarisation is necessary to bridge the gap
between localisation and recognition by OCR. This paper presents a sliding
window method to binarise text from textual images with textured backgrounds.
Suitable preprocessing techniques are first applied to increase the contrast of
the image and to blur the background noise caused by the textured background.
Edges are then detected by iterative thresholding. The edge boxes formed
subsequently are analysed to remove unwanted edges caused by the complex
background, and the text is binarised by a sliding-window-based character-size
uniformity check algorithm. The proposed method has been applied to localised
regions from heterogeneous textual images and compared with the Otsu and
Niblack methods, showing encouraging performance.
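For reference, the Otsu baseline named above is a global method: it picks the single grey level that maximises between-class variance over the image histogram. A minimal sketch for a flat list of 8-bit grey values (the comparison baseline, not the paper's sliding-window algorithm):

```python
def otsu_threshold(gray):
    # Otsu's global threshold: choose t maximising the between-class
    # variance w_b * w_f * (m_b - m_f)^2 over the grey-level histogram.
    hist = [0] * 256
    for g in gray:
        hist[g] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                       # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b                    # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (sum_all - sum_b) / w_f        # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

A single global threshold struggles exactly in the textured-background setting the abstract targets, which motivates window-local analysis instead.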
|
1003.3661
|
An HTTP-Based Versioning Mechanism for Linked Data
|
cs.DL cs.IR
|
Dereferencing a URI returns a representation of the current state of the
resource identified by that URI. But, on the Web representations of prior
states of a resource are also available, for example, as resource versions in
Content Management Systems or archival resources in Web Archives such as the
Internet Archive. This paper introduces a resource versioning mechanism that is
fully based on HTTP and uses datetime as a global version indicator. The
approach allows "follow your nose" style navigation both from the current
time-generic resource to associated time-specific version resources as well as
among version resources. The proposed versioning mechanism is congruent with
the Architecture of the World Wide Web, and is based on the Memento framework
that extends HTTP with transparent content negotiation in the datetime
dimension. The paper shows how the versioning approach applies to Linked Data,
and by means of a demonstrator built for DBpedia, it also illustrates how it
can be used to conduct a time-series analysis across versions of Linked Data
descriptions.
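Concretely, Memento-style datetime negotiation rides on plain HTTP headers: the client sends `Accept-Datetime` (an RFC 1123 date) to a TimeGate, and responses carry `Link` headers relating the time-generic resource to time-specific versions. The helpers below are a minimal, illustrative sketch of formatting and parsing those headers, not the reference Memento implementation:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def accept_datetime(dt):
    # Value for the Accept-Datetime request header: an RFC 1123 date in GMT.
    return format_datetime(dt.astimezone(timezone.utc), usegmt=True)

def parse_link_header(value):
    # Minimal Link-header parser returning {rel: uri}; enough to pull out
    # links such as rel="memento" or rel="timegate" from a response.
    # (Naive: does not handle commas inside quoted parameter values.)
    out = {}
    for part in value.split(','):
        bits = part.strip().split(';')
        uri = bits[0].strip().lstrip('<').rstrip('>')
        for param in bits[1:]:
            key, _, val = param.strip().partition('=')
            if key == 'rel':
                out[val.strip('"')] = uri
    return out
```

"Follow your nose" navigation then amounts to issuing a GET with the `Accept-Datetime` header and following the returned `rel="memento"` link to the version resource.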
|
1003.3676
|
Simple heuristics for the assembly line worker assignment and balancing
problem
|
cs.DS cs.NE
|
We propose simple heuristics for the assembly line worker assignment and
balancing problem. This problem typically occurs in assembly lines in sheltered
work centers for the disabled. Different from the classical simple assembly
line balancing problem, the task execution times vary according to the assigned
worker. We develop a constructive heuristic framework based on task and worker
priority rules defining the order in which the tasks and workers should be
assigned to the workstations. We present a number of such rules and compare
their performance across three possible uses: as a stand-alone method, as an
initial solution generator for meta-heuristics, and as a decoder for a hybrid
genetic algorithm. Our results show that the heuristics are fast, they obtain
good results as a stand-alone method, and are efficient when used as an initial
solution generator or as a solution decoder within more elaborate approaches.
|
1003.3707
|
Downlink Interference Alignment
|
cs.IT math.IT
|
We develop an interference alignment (IA) technique for a downlink cellular
system. In the uplink, IA schemes need channel-state-information exchange
across base-stations of different cells, but our downlink IA technique requires
feedback only within a cell. As a result, the proposed scheme can be
implemented with a few changes to an existing cellular system where the
feedback mechanism (within a cell) is already being considered for supporting
multi-user MIMO. Not only is our proposed scheme implementable with little
effort, it can in fact provide substantial gain especially when interference
from a dominant interferer is significantly stronger than the remaining
interference: it is shown that in the two-isolated cell layout, our scheme
provides four-fold gain in throughput performance over a standard multi-user
MIMO technique. We show through simulations that our technique provides
respectable gain under a more realistic scenario: it gives approximately 20%
gain for a 19 hexagonal wrap-around-cell layout. Furthermore, we show that our
scheme has the potential to provide substantial gain for macro-pico cellular
networks where pico-users can be significantly interfered with by the nearby
macro-BS.
|
1003.3754
|
Quantum codes from codes over Gaussian integers with respect to the
Mannheim metric
|
cs.IT math.IT
|
In this paper, some nonbinary quantum codes are obtained using classical codes
over Gaussian integers. Some of our quantum codes are better than, or
comparable with, those previously known (for instance, the [[8; 2; 5]]_{4+i}
code).
|
1003.3765
|
Design of Nested LDGM-LDPC Codes for Compress-and-Forward in Relay
Channel
|
cs.IT math.IT
|
A three-terminal relay system over the binary erasure channel (BEC) is
considered, in which a source forwards information to a destination with a
relay's assistance. Nested LDGM (low-density generator-matrix) LDPC
(low-density parity-check) codes are designed to realize compress-and-forward
(CF) at the relay: LDGM coding compresses the received signals losslessly, and
LDPC coding realizes the binning for Slepian-Wolf coding. First, a practical
coding scheme is proposed to achieve the cut-set bound on the capacity of the
system, employing LDPC and nested LDGM-LDPC codes at the source and relay,
respectively. Then, the degree distributions of the LDGM and LDPC codes are
optimized with a given rate bound, which ensures that the iterative belief
propagation (BP) decoding algorithm at the destination converges. Finally,
simulation results show that the performance achieved with the nested codes is
very close to the Slepian-Wolf theoretical limit.
|
1003.3766
|
Modelling and simulating retail management practices: a first approach
|
cs.AI cs.CE cs.MA
|
Multi-agent systems offer a new and exciting way of understanding the world
of work. We apply agent-based modeling and simulation to investigate a set of
problems in a retail context. Specifically, we are working to understand the
relationship between people management practices on the shop-floor and retail
performance. Despite the fact that we are working within a relatively novel and
complex domain, it is clear that using an agent-based approach offers great
potential for improving organizational capabilities in the future. Our
multi-disciplinary research team has worked closely with one of the UK's top
ten retailers to collect data and build an understanding of shop-floor
operations and the key actors in a department (customers, staff, and managers).
Based on this case study we have built and tested our first version of a retail
branch agent-based simulation model where we have focused on how we can
simulate the effects of people management practices on customer satisfaction
and sales. In our experiments we have looked at employee development and
cashier empowerment as two examples of shop floor management practices. In this
paper we describe the underlying conceptual ideas and the features of our
simulation model. We present a selection of experiments we have conducted in
order to validate our simulation model and to show its potential for answering
"what-if" questions in a retail context. We also introduce a novel performance
measure which we have created to quantify customers' satisfaction with service,
based on their individual shopping experiences.
|
1003.3767
|
Multi-Agent Simulation and Management Practices
|
cs.AI cs.CE cs.MA
|
Intelligent agents offer a new and exciting way of understanding the world of
work. Agent-Based Simulation (ABS), one way of using intelligent agents,
carries great potential for progressing our understanding of management
practices and how they link to retail performance. We have developed simulation
models based on research by a multi-disciplinary team of economists, work
psychologists and computer scientists. We will discuss our experiences of
implementing these concepts working with a well-known retail department store.
There is no doubt that management practices are linked to the performance of an
organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have
been developed, but when it comes down to the actual application of these
guidelines considerable ambiguity remains regarding their effectiveness within
particular contexts (Siebers et al., forthcoming a). Most Operational Research
(OR) methods can only be used as analysis tools once management practices have
been implemented. Often they are not very useful for giving answers to
speculative 'what-if' questions, particularly when one is interested in the
development of the system over time rather than just the state of the system at
a certain point in time. Simulation can be used to analyse the operation of
dynamic and stochastic systems. ABS is particularly useful when complex
interactions between system entities exist, such as autonomous decision making
or negotiation. In an ABS model the researcher explicitly describes the
decision process of simulated actors at the micro level. Structures emerge at
the macro level as a result of the actions of the agents and their interactions
with other agents and the environment. We will show how ABS experiments can
deal with testing and optimising management practices such as training,
empowerment or teamwork. Hence, questions such as "will staff setting their own
break times improve performance?" can be investigated.
|
1003.3775
|
Optimisation of a Crossdocking Distribution Centre Simulation Model
|
cs.AI cs.CE
|
This paper reports on continuing research into the modelling of an order
picking process within a Crossdocking distribution centre using Simulation
Optimisation. The aim of this project is to optimise a discrete event
simulation model and to understand factors that affect finding its optimal
performance. Our initial investigation revealed that the precision of the
selected simulation output performance measure and the number of replications
required to evaluate the optimisation objective function through simulation
influence the effectiveness of the optimisation technique. We
experimented with Common Random Numbers, in order to improve the precision of
our simulation output performance measure, and intended to use the number of
replications utilised for this purpose as the initial number of replications
for the optimisation of our Crossdocking distribution centre simulation model.
Our results demonstrate that we can improve the precision of our selected
simulation output performance measure value using Common Random Numbers at
various levels of replications. Furthermore, after optimising our Crossdocking
distribution centre simulation model, we are able to achieve optimal
performance using fewer simulations runs for the simulation model which uses
Common Random Numbers as compared to the simulation model which does not use
Common Random Numbers.
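The variance-reduction idea behind Common Random Numbers can be sketched with a toy model (not the crossdocking simulation itself): two policies are compared across replications, and under CRN each replication seeds both policies with the same random stream so the stochastic noise cancels in the difference estimator. The policies, delays, and replication counts below are illustrative assumptions only.

```python
import random
import statistics

def replicate(policy_delay, rng):
    """One toy simulation replication: mean flow time under a policy that
    adds `policy_delay` to every (random) job duration.  All randomness is
    drawn from `rng`, so seeding controls the random number stream."""
    n = 100
    return sum(rng.expovariate(1.0) + policy_delay for _ in range(n)) / n

def difference_estimates(n_reps, use_crn):
    """Estimate E[perf_A - perf_B].  With CRN, both policies reuse the same
    seed per replication, so the random job durations cancel in the difference."""
    diffs = []
    for rep in range(n_reps):
        rng_a = random.Random(rep)
        rng_b = random.Random(rep) if use_crn else random.Random(10_000 + rep)
        diffs.append(replicate(0.5, rng_a) - replicate(0.2, rng_b))
    return diffs

crn = difference_estimates(200, use_crn=True)
ind = difference_estimates(200, use_crn=False)
# CRN leaves the estimated mean difference intact but shrinks its variance,
# which is why fewer replications suffice for a given output precision.
print(statistics.variance(crn), statistics.variance(ind))
```

With identical seeds the difference is (nearly) deterministic here, so the CRN variance collapses while the mean estimate is unchanged.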
|
1003.3784
|
Simulating Customer Experience and Word Of Mouth in Retail - A Case
Study
|
cs.MA cs.CE
|
Agents offer a new and exciting way of understanding the world of work. In
this paper we describe the development of agent-based simulation models,
designed to help to understand the relationship between people management
practices and retail performance. We report on the current development of our
simulation models which includes new features concerning the evolution of
customers over time. To test the features we have conducted a series of
experiments dealing with customer pool sizes, standard and noise reduction
modes, and the spread of customers' word of mouth. To validate and evaluate our
model, we introduce a new performance measure specific to retail operations. We
show that by varying different parameters in our model we can simulate a range
of customer experiences leading to significant differences in performance
measures. Ultimately, we are interested in better understanding the impact of
changes in staff behavior due to changes in store management practices. Our
multi-disciplinary research team draws upon expertise from work psychologists
and computer scientists. Despite the fact we are working within a relatively
novel and complex domain, it is clear that intelligent agents offer potential
for fostering sustainable organizational capabilities in the future.
|
1003.3792
|
On Complexity, Energy- and Implementation-Efficiency of Channel Decoders
|
cs.IT cs.AR math.IT
|
Future wireless communication systems require efficient and flexible baseband
receivers. Meaningful efficiency metrics are key for design space exploration
to quantify the algorithmic and the implementation complexity of a receiver.
Most of the current established efficiency metrics are based on counting
operations, thus neglecting important issues like data and storage complexity.
In this paper we introduce suitable energy and area efficiency metrics which
resolve the aforementioned disadvantages: decoded information bits per unit of
energy and throughput per unit of area. These metrics are assessed by
various implementations of turbo decoders, LDPC decoders and convolutional
decoders. New exploration methodologies are presented, which permit an
appropriate benchmarking of implementation efficiency, communications
performance, and flexibility trade-offs. These exploration methodologies are
based on efficiency trajectories rather than a single snapshot metric as done
in state-of-the-art approaches.
|
1003.3821
|
A Formal Approach to Modeling the Memory of a Living Organism
|
cs.AI cs.DS cs.LG q-bio.NC
|
We consider a living organism as an observer of the evolution of its
environment recording sensory information about the state space X of the
environment in real time. Sensory information is sampled and then processed on
two levels. On the biological level, the organism serves as an evaluation
mechanism of the subjective relevance of the incoming data to the observer: the
observer assigns excitation values to events in X it could recognize using its
sensory equipment. On the algorithmic level, sensory input is used for updating
a database, the memory of the observer whose purpose is to serve as a
geometric/combinatorial model of X, whose nodes are weighted by the excitation
values produced by the evaluation mechanism. These values serve as a guidance
system for deciding how the database should transform as observation data
mounts. We define a searching problem for the proposed model and discuss the
model's flexibility and its computational efficiency, as well as the
possibility of implementing it as a dynamic network of neuron-like units. We
show how various easily observable properties of the human memory and thought
process can be explained within the framework of this model. These include:
reasoning (with efficiency bounds), errors, temporary and permanent loss of
information. We are also able to define general learning problems in terms of
the new model, such as the language acquisition problem.
|
1003.3908
|
Full Diversity Space-Time Block Codes with Low-Complexity Partial
Interference Cancellation Group Decoding
|
cs.IT math.IT
|
Partial interference cancellation (PIC) group decoding proposed by Guo and
Xia is an attractive low-complexity alternative to the optimal processing for
multiple-input multiple-output (MIMO) wireless communications. It deals well
with the tradeoff among rate, diversity and complexity of space-time block
codes (STBC). In this paper, a systematic design of full-diversity STBC with
low-complexity PIC group decoding is proposed. The proposed code design is
featured as a group-orthogonal STBC by replacing every element of an Alamouti
code matrix with an elementary matrix composed of multiple diagonal layers of
coded symbols. With the PIC group decoding and a particular grouping scheme,
the proposed STBC can achieve full diversity, a rate of $(2M)/(M+2)$ and a
low-complexity decoding for $M$ transmit antennas. Simulation results show that
the proposed codes can achieve full diversity with PIC group decoding while
requiring half the decoding complexity of existing codes.
|
1003.3967
|
Adaptive Submodularity: Theory and Applications in Active Learning and
Stochastic Optimization
|
cs.LG cs.AI cs.DS
|
Solving stochastic optimization problems under partial observability, where
one needs to adaptively make decisions with uncertain outcomes, is a
fundamental but notoriously difficult challenge. In this paper, we introduce
the concept of adaptive submodularity, generalizing submodular set functions to
adaptive policies. We prove that if a problem satisfies this property, a simple
adaptive greedy algorithm is guaranteed to be competitive with the optimal
policy. In addition to providing performance guarantees for both stochastic
maximization and coverage, adaptive submodularity can be exploited to
drastically speed up the greedy algorithm by using lazy evaluations. We
illustrate the usefulness of the concept by giving several examples of adaptive
submodular objectives arising in diverse applications including sensor
placement, viral marketing and active learning. Proving adaptive submodularity
for these problems allows us to recover existing results in these applications
as special cases, improve approximation guarantees and handle natural
generalizations.
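The adaptive greedy policy the abstract refers to can be sketched on a toy stochastic-coverage instance (the sensor regions, working probability, and budget below are invented for illustration): each sensor covers its region only if it "works", its state is revealed only after selection, and each step picks the sensor with the largest expected marginal gain given all observations so far.

```python
import random

def adaptive_greedy(sensors, p_work, budget, rng):
    """Adaptive greedy for stochastic coverage: pick, observe, repeat.
    `sensors` maps a sensor name to the set it covers *if* it works
    (probability p_work); states are revealed only after selection."""
    covered, chosen = set(), []
    remaining = dict(sensors)
    for _ in range(budget):
        # expected marginal coverage gain of each unplaced sensor
        best = max(remaining, key=lambda s: p_work * len(remaining[s] - covered))
        chosen.append(best)
        region = remaining.pop(best)
        if rng.random() < p_work:        # observe the realized state
            covered |= region
    return chosen, covered

sensors = {"a": {0, 1, 2, 3}, "b": {3, 4, 5}, "c": {6, 7, 8, 9}, "d": {0, 9}}
chosen, covered = adaptive_greedy(sensors, p_work=0.8, budget=3, rng=random.Random(1))
print(chosen, covered)
```

Adaptive submodularity of the expected-coverage objective is what guarantees this simple policy is competitive with the optimal adaptive policy.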
|
1003.3984
|
On MMSE and MAP Denoising Under Sparse Representation Modeling Over a
Unitary Dictionary
|
cs.CV stat.AP
|
Among the many ways to model signals, a recent approach that draws
considerable attention is sparse representation modeling. In this model, the
signal is assumed to be generated as a random linear combination of a few atoms
from a pre-specified dictionary. In this work we analyze two Bayesian denoising
algorithms -- the Maximum A-Posteriori Probability (MAP) and the
Minimum-Mean-Squared-Error (MMSE) estimators, under the assumption that the
dictionary is unitary. It is well known that both these estimators lead to a
scalar shrinkage on the transformed coefficients, albeit with a different
response curve. In this work we start by deriving closed-form expressions for
these shrinkage curves and then analyze their performance. Upper bounds on the
MAP and the MMSE estimation errors are derived. We tie these to the error
obtained by a so-called oracle estimator, where the support is given,
establishing a worst-case gain-factor between the MAP/MMSE estimation errors
and the oracle's performance. These denoising algorithms are demonstrated on
synthetic signals and on true data (images).
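The scalar shrinkage curves mentioned above can be sketched for one common instantiation, a Bernoulli-Gaussian prior with a unitary dictionary (the paper's exact prior and closed forms may differ; p, sx2, sn2 below are illustrative parameters): the MMSE estimator is a smooth weighting of the Wiener-shrunk coefficient by its posterior activity probability, while MAP makes a hard active/zero decision.

```python
import math

def gauss(y, var):
    """Zero-mean Gaussian density with variance `var` at point y."""
    return math.exp(-y * y / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def shrink(y, p=0.1, sx2=1.0, sn2=0.1):
    """Scalar MAP and MMSE shrinkage of one transform coefficient y.
    Prior: active with prob. p, then x ~ N(0, sx2); noise ~ N(0, sn2).
    Returns (map_estimate, mmse_estimate)."""
    wiener = sx2 / (sx2 + sn2)                       # shrinkage if active
    # MMSE: posterior activity probability times the Wiener-shrunk value
    num = p * gauss(y, sx2 + sn2)
    q = num / (num + (1.0 - p) * gauss(y, sn2))
    mmse = q * wiener * y
    # MAP over (support, value): compare the 'active' joint maximum
    # (attained at x = wiener*y) against the 'zero' hypothesis
    active = p * math.exp(-y * y / (2.0 * (sx2 + sn2))) / (2.0 * math.pi * math.sqrt(sx2 * sn2))
    zero = (1.0 - p) * gauss(y, sn2)
    map_est = wiener * y if active > zero else 0.0
    return map_est, mmse

for y in (0.0, 0.5, 1.0, 3.0):
    print(y, shrink(y))
```

Plotting both outputs against y reproduces the familiar picture: a hard-threshold-like MAP curve versus a smooth MMSE response curve.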
|
1003.3985
|
The Projected GSURE for Automatic Parameter Tuning in Iterative
Shrinkage Methods
|
cs.CV stat.AP
|
Linear inverse problems are very common in signal and image processing. Many
algorithms that aim at solving such problems include unknown parameters that
need tuning. In this work we focus on optimally selecting such parameters in
iterative shrinkage methods for image deblurring and image zooming. Our work
uses the projected Generalized Stein Unbiased Risk Estimator (GSURE) for
determining the threshold value lambda and the number of iterations K in these
algorithms. The proposed parameter selection is shown to handle any degradation
operator, including ill-posed and even rectangular ones. This is achieved by
using GSURE on the projected expected error. We further propose an efficient
greedy parameter setting scheme, that tunes the parameter while iterating
without impairing the resulting deblurring performance. Finally, we provide
extensive comparisons to conventional methods for parameter selection, showing
the superiority of the use of the projected GSURE.
|
1003.4021
|
System-theoretic approach to image interest point detection
|
cs.CV
|
Interest point detection is a common task in various computer vision
applications. Although a wide variety of detectors has been developed so far,
the computational efficiency of interest-point-based image analysis remains a
problem. This paper proposes a system-theoretic approach to interest point
detection. Starting from an analysis of the interdependency between detector
and descriptor, it is shown that, given a descriptor, it is possible to
introduce the notion of detector redundancy. Furthermore, for each detector it
is possible to construct an irredundant and equivalent modification. The
modified detector possesses lower computational complexity and is therefore
preferable. It is also shown that several known approaches to reducing the
computational complexity of image registration can be generalized in terms of
the proposed theory.
|
1003.4042
|
MINRES-QLP: a Krylov subspace method for indefinite or singular
symmetric systems
|
math.NA cs.CE cs.NA stat.CO
|
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric
systems of linear equations. When these methods are applied to an incompatible
system (that is, a singular symmetric least-squares problem), CG could break
down and SYMMLQ's solution could explode, while MINRES would give a
least-squares solution but not necessarily the minimum-length (pseudoinverse)
solution. This understanding motivates us to design a MINRES-like algorithm to
compute minimum-length solutions to singular symmetric systems.
MINRES uses QR factors of the tridiagonal matrix from the Lanczos process
(where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where
rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned
systems (singular or not), MINRES-QLP can give more accurate solutions than
MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better
estimates of the solution and residual norms, the matrix norm, and the
condition number.
|
1003.4053
|
A Comprehensive Review of Image Enhancement Techniques
|
cs.CV
|
The principal objective of image enhancement is to process an image so that the
result is more suitable than the original image for a specific application. Digital
image enhancement techniques provide a multitude of choices for improving the
visual quality of images. Appropriate choice of such techniques is greatly
influenced by the imaging modality, task at hand and viewing conditions. This
paper will provide an overview of underlying concepts, along with algorithms
commonly used for image enhancement. The paper focuses on spatial domain
techniques for image enhancement, with particular reference to point processing
methods and histogram processing.
|
1003.4057
|
Construction of optimal codes in deletion and insertion metric
|
cs.IT math.IT
|
We improve Levenshtein's upper bound for the cardinality of a code of length
four that is capable of correcting single deletions over an alphabet of even
size. We also illustrate that the new upper bound is sharp. Furthermore we
construct an optimal perfect code that is capable of correcting single
deletions for the same parameters.
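As a hedged illustration of single-deletion-correcting codes (the abstract concerns length-four codes over even alphabets; the classic *binary* construction is the Varshamov-Tenengolts code, shown here instead): codewords are the words whose weighted coordinate sum vanishes modulo n+1, and their single-deletion balls are pairwise disjoint, which is exactly what single-deletion correction requires.

```python
from itertools import product

def vt_code(n, a=0):
    """Binary Varshamov-Tenengolts code VT_a(n): all length-n binary words x
    with sum_i i*x_i congruent to a (mod n+1).  VT codes are the classic
    construction correcting a single deletion."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * bit for i, bit in enumerate(x, start=1)) % (n + 1) == a]

def deletion_ball(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

code = vt_code(4)
print(code)   # [(0,0,0,0), (0,1,1,0), (1,0,0,1), (1,1,1,1)]
```

Disjointness of the deletion balls means a received length-3 word identifies the transmitted codeword uniquely.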
|
1003.4065
|
Plagiarism Detection using ROUGE and WordNet
|
cs.OH cs.CL
|
With the arrival of the digital era and the Internet, the lack of information
control provides an incentive for people to freely use any content available to
them. Plagiarism occurs when users fail to credit the original owner for the
content referred to, and such behavior violates intellectual property rights. Two
main approaches to plagiarism detection are fingerprinting and term occurrence;
however, one common weakness shared by both approaches, especially
fingerprinting, is the incapability to detect modified text plagiarism. This
study proposes adopting ROUGE and WordNet for plagiarism detection. The
former includes n-gram co-occurrence statistics, skip-bigram, and longest common
subsequence (LCS), while the latter acts as a thesaurus and provides semantic
information. N-gram co-occurrence statistics can detect verbatim copy and
certain sentence modification, skip-bigram and LCS are immune from text
modification such as simple addition or deletion of words, and WordNet may
handle the problem of word substitution.
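The LCS-based resilience to word insertion and deletion can be sketched with a ROUGE-L-style recall score (the sentences below are hypothetical examples, not from the study, and the WordNet synonym step is omitted):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L style recall: LCS length over reference length.  Because the
    LCS ignores insertions and deletions, simple word additions or removals
    barely lower the score -- the property exploited for modified plagiarism."""
    c, r = candidate.lower().split(), reference.lower().split()
    return lcs_len(c, r) / len(r)

src = "the quick brown fox jumps over the lazy dog"
copy = "the quick brown fox quietly jumps over the dog"   # modified copy
other = "completely unrelated sentence about databases"
print(rouge_l(copy, src), rouge_l(other, src))
```

The modified copy still scores near 1.0 while unrelated text scores 0, which is what fingerprinting alone fails to achieve.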
|
1003.4066
|
A Security Based Data Mining Approach in Data Grid
|
cs.DC cs.DB
|
Grid computing is the next logical step in distributed computing. Its main
objective is to share resources such as CPU cycles, memory, and software. Data
grids provide transparent access to semantically related data resources in a
heterogeneous system. The proposed system incorporates both data mining and
grid computing techniques: the grid application reduces the time needed to send
results to several clients simultaneously, while running the data mining
application on computational grids delivers fast and sophisticated results to
users. In this work, a grid-based data mining technique is used to perform
automatic allocation based on a probabilistic frequent-sequence mining
algorithm. It finds frequent sequences for many users at a time with accurate
results. The system also includes a trust management architecture for
trust-enhanced security.
|
1003.4067
|
Computation of Reducts Using Topology and Measure of Significance of
Attributes
|
cs.IR
|
Data generated in the fields of science, technology, business and many other
fields of research is increasing at an exponential rate, and extracting
knowledge from such huge data sets is a challenging task. This paper proposes a
hybrid and viable method for dealing with an information system in data mining,
using topological techniques together with the significance of attributes as
measured by rough set theory, to compute the reduct. This reduces the
randomness in the process of eliminating redundant attributes, which in turn
reduces the complexity of computing the reducts of an information system in
which a large amount of data must be processed.
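The rough-set side of the method can be sketched with the standard dependency-degree computation (this is textbook rough set theory, not the paper's topological technique; the toy decision table below is invented): the significance of an attribute is the drop in dependency when it is removed, and zero significance marks a redundant attribute.

```python
def partition(rows, attrs):
    """Indiscernibility classes: group row indices by their values on `attrs`."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(classes.values())

def gamma(rows, cond, dec):
    """Rough-set dependency degree gamma(cond, dec): fraction of objects whose
    cond-class falls wholly inside one decision class (the positive region)."""
    dvals = [tuple(r[a] for a in dec) for r in rows]
    pos = sum(len(c) for c in partition(rows, cond)
              if len({dvals[i] for i in c}) == 1)
    return pos / len(rows)

def significance(rows, cond, dec, attr):
    """Significance of `attr`: drop in dependency when it is removed.
    Zero significance marks a redundant attribute, a candidate to eliminate."""
    rest = [a for a in cond if a != attr]
    return gamma(rows, cond, dec) - gamma(rows, rest, dec)

rows = [
    {"headache": 1, "temp": "high",   "flu": 1},
    {"headache": 1, "temp": "normal", "flu": 0},
    {"headache": 0, "temp": "high",   "flu": 1},
    {"headache": 0, "temp": "normal", "flu": 0},
]
print(significance(rows, ["headache", "temp"], ["flu"], "temp"),
      significance(rows, ["headache", "temp"], ["flu"], "headache"))
```

Here `temp` fully determines the decision while `headache` contributes nothing, so a reduct computation would drop `headache` first.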
|
1003.4068
|
A Novel Approach For Discovery Multi Level Fuzzy Association Rule Mining
|
cs.DB
|
Finding multilevel association rules in transaction databases is a widely used
task in data mining. In this paper, we present a model for mining multilevel
association rules that satisfies a different minimum support at each level; we
employ fuzzy set concepts, a multi-level taxonomy, and different minimum
supports to find fuzzy multilevel association rules in a given transaction data
set. The Apriori property is used in the model to prune the item sets. The
proposed model adopts a top-down, progressively deepening approach to derive
large itemsets and incorporates fuzzy boundaries instead of sharp boundary
intervals. An example is given to demonstrate that the proposed mining
algorithm can derive multiple-level association rules under different supports
in a simple and effective manner.
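The fuzzy support computation and Apriori-style pruning can be sketched as follows (a simplified sketch: flat items instead of the paper's taxonomy, with the per-level minimum supports mimicked by a list of thresholds; the transactions are invented):

```python
def fuzzy_support(transactions, itemset):
    """Fuzzy support: per transaction, take the minimum membership degree over
    the itemset (fuzzy AND), then average over all transactions."""
    return sum(min(t.get(i, 0.0) for i in itemset) for t in transactions) / len(transactions)

def frequent_itemsets(transactions, items, minsup_per_level):
    """Apriori-style level-wise search with fuzzy supports and a (possibly
    different) minimum support threshold per level."""
    frequent = []
    level = [frozenset([i]) for i in items]
    for minsup in minsup_per_level:
        level = [s for s in level if fuzzy_support(transactions, s) >= minsup]
        frequent.extend(level)
        # candidate generation by joining surviving sets (Apriori pruning:
        # any superset of an infrequent set can never become frequent)
        level = sorted({a | b for a in level for b in level if len(a | b) == len(a) + 1},
                       key=sorted)
    return frequent

transactions = [
    {"milk": 0.8, "bread": 0.6},   # membership degrees, not crisp counts
    {"milk": 0.4, "bread": 0.9},
    {"milk": 0.9},
]
print(frequent_itemsets(transactions, ["milk", "bread"], [0.4, 0.3]))
```

The fuzzy AND via `min` is what replaces the sharp boundary intervals of crisp multilevel mining.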
|
1003.4075
|
A Neuro-Fuzzy Multi Swarm FastSLAM Framework
|
cs.RO
|
FastSLAM is a framework for simultaneous localization and mapping using a
Rao-Blackwellized particle filter. In FastSLAM, a particle filter is used for
estimating the mobile robot pose (position and orientation), and an Extended
Kalman Filter (EKF) is used for estimating the feature locations. However,
FastSLAM degenerates over time. This degeneracy is due to the fact that the
particle set estimating the pose of the robot loses its diversity. One of the
main reasons for losing particle diversity in FastSLAM is sample
impoverishment, which occurs when the likelihood lies in the tail of the
proposal distribution; in this case, most particle weights are insignificant.
Another problem of FastSLAM relates to the design of the extended Kalman
filter for estimating landmark positions. The performance of the EKF and the
quality of the estimation depend heavily on correct a priori knowledge of the
process and measurement noise covariance matrices (Q and R), which are unknown
in most applications, and incorrect a priori knowledge of Q and R may
seriously degrade the Kalman filter's performance. This paper presents a
Neuro-Fuzzy Multi Swarm FastSLAM framework, in which a neuro-fuzzy extended
Kalman filter for landmark feature estimation and a particle filter based on
particle swarm optimization are presented to overcome the impoverishment of
FastSLAM. Experimental results demonstrate the effectiveness of the proposed
algorithm.
|
1003.4076
|
Similarity Data Item Set Approach: An Encoded Temporal Data Base
Technique
|
cs.DB
|
Data mining has been widely recognized as a powerful tool to extract added
value from large-scale databases. Finding frequent item sets in databases is a
crucial step in the data mining process of extracting association rules, and
many algorithms have been developed for it. This paper presents a summary and
a comparative study of the available FP-growth algorithm variations produced
for mining frequent item sets, showing their capabilities and efficiency in
terms of time and memory consumption for association rule mining, taking
application-specific information into account. It proposes an FP-tree growth
algorithm based on the pattern-growth mining paradigm, which employs a tree
structure to compress the database. The performance study shows that the
FP-growth method is efficient and scalable for mining both long and short
frequent patterns, and is about an order of magnitude faster than the Apriori
algorithm as well as some recently reported frequent-pattern mining methods.
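The database compression that gives FP-growth its edge can be sketched with a minimal FP-tree construction (a sketch of the tree-building step only, not a full FP-growth miner; the transactions and threshold are invented): frequent items are sorted by global frequency and inserted along shared prefix paths, so overlapping transactions collapse into common nodes.

```python
from collections import Counter

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_fp_tree(transactions, min_count):
    """Build an FP-tree: items below min_count are dropped, the rest are
    sorted by global frequency and inserted along shared prefix paths."""
    freq = Counter(i for t in transactions for i in t)
    keep = {i: n for i, n in freq.items() if n >= min_count}
    root = Node(None)
    for t in transactions:
        # keep frequent items, most frequent first (stable tie-break by name)
        path = sorted((i for i in t if i in keep), key=lambda i: (-keep[i], i))
        node = root
        for item in path:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

def size(node):
    """Number of item nodes in the tree (root excluded)."""
    return (node.item is not None) + sum(size(c) for c in node.children.values())

transactions = [["a", "b", "c"], ["a", "b"], ["a", "c", "d"], ["b", "c"]]
tree = build_fp_tree(transactions, min_count=2)
print(size(tree))   # 6 nodes, versus 9 frequent-item occurrences in the database
```

Frequent patterns are then mined by traversing this compact structure instead of rescanning the database, which is where the speed-up over Apriori comes from.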
|
1003.4079
|
Gene Expression Data Knowledge Discovery using Global and Local
Clustering
|
cs.CE cs.LG q-bio.GN
|
To understand complex biological systems, the research community has produced
a huge corpus of gene expression data, and a large number of clustering
approaches have been proposed for its analysis. However, extracting important
biological knowledge remains difficult. In this paper, a hybrid hierarchical
k-means algorithm is used for clustering and biclustering gene expression
data. Biclustering and clustering algorithms are utilized to discover both
local and global clustering structure. A validation technique, the Figure of
Merit, is used to determine the quality of the clustering results. Appropriate
knowledge is mined from the clusters by embedding a BLAST similarity search
program into the clustering and biclustering process.
|
1003.4081
|
Fuzzy-based Navigation and Control of a Non-Holonomic Mobile Robot
|
cs.NE cs.RO
|
In recent years, the use of non-analytical methods of computing such as fuzzy
logic, evolutionary computation, and neural networks has demonstrated the
utility and potential of these paradigms for intelligent control of mobile
robot navigation. In this paper, a theoretical model of a fuzzy based
controller for an autonomous mobile robot is developed. The paper begins with
the mathematical model of the robot that involves the kinematic model. Then,
the fuzzy logic controller is developed and discussed in detail. The proposed
method is successfully tested in simulations, which compare the effectiveness
of three different sets of membership functions. It is shown that the fuzzy
logic controller with three input membership functions provides better
performance than those with five or seven membership functions.
|
1003.4087
|
Land-cover Classification and Mapping for Eastern Himalayan State Sikkim
|
cs.CV
|
Classifying satellite imagery has become a challenging task in the current era
of tremendous growth in settlement, i.e. the construction of buildings, roads,
bridges, dams, etc. This paper suggests an improvised k-means and Artificial
Neural Network (ANN) classifier for land-cover mapping of the Eastern Himalayan
state of Sikkim. The improvised k-means algorithm shows satisfactory results
compared to existing methods, including the k-Nearest Neighbor and maximum
likelihood classifiers. The strength of the ANN classifier lies in the fact
that it is fast, has a good recognition rate, and its capability for
self-learning has made it widely accepted compared to other classification
algorithms. The ANN-based classifier shows satisfactory and accurate results
in comparison with the classical methods.
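The baseline the paper improvises on can be sketched with plain k-means (a minimal sketch, not the paper's improvised variant; toy 2-D feature vectors stand in for per-pixel spectral bands, and the naive first-k initialisation is an assumption for determinism):

```python
def kmeans(points, k, iters=20):
    """Plain k-means on pixel feature vectors (e.g. spectral band values),
    with naive initialisation from the first k points."""
    dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    centroids = list(points[:k])
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        # update step: each centroid moves to its cluster's mean
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[idx]
                     for idx, cl in enumerate(clusters)]
    return [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points]

pixels = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = kmeans(pixels, 2)
print(labels)
```

For land-cover mapping each cluster label would then be interpreted as a cover class (forest, water, settlement, etc.).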
|
1003.4140
|
Integrating Real-Time Analysis With The Dendritic Cell Algorithm Through
Segmentation
|
cs.NE cs.AI cs.CR
|
As an immune inspired algorithm, the Dendritic Cell Algorithm (DCA) has been
applied to a range of problems, particularly in the area of intrusion
detection. Ideally, the intrusion detection should be performed in real-time,
to continuously detect misuses as soon as they occur. Consequently, the
analysis process performed by an intrusion detection system must operate in
real-time or near real-time. The analysis process of the DCA is currently
performed offline; therefore, to improve the algorithm's performance, we
suggest the development of a real-time analysis component. The initial step of the
development is to apply segmentation to the DCA. This involves segmenting the
current output of the DCA into slices and performing the analysis in various
ways. Two segmentation approaches are introduced and tested in this paper,
namely antigen based segmentation (ABS) and time based segmentation (TBS). The
results of the corresponding experiments suggest that applying segmentation
produces different and significantly better results in some cases, when
compared to the standard DCA without segmentation. Therefore, we conclude that
the segmentation is applicable to the DCA for the purpose of real-time
analysis.
|
1003.4141
|
Investigating Output Accuracy for a Discrete Event Simulation Model and
an Agent Based Simulation Model
|
cs.AI cs.CE cs.MA
|
In this paper, we investigate output accuracy for a Discrete Event Simulation
(DES) model and Agent Based Simulation (ABS) model. The purpose of this
investigation is to find out which of these simulation techniques is the best
one for modelling human reactive behaviour in the retail sector. In order to
study the output accuracy in both models, we have carried out a validation
experiment in which we compared the results from our simulation models to the
performance of a real system. Our experiment was carried out using a large UK
department store as a case study. We had to determine an efficient
implementation of management policy in the store's fitting room using DES and
ABS. Overall, we have found that both simulation models were a good
representation of the real system when modelling human reactive behaviour.
|
1003.4142
|
Malicious Code Execution Detection and Response Immune System inspired
by the Danger Theory
|
cs.AI cs.CR cs.NE
|
The analysis of system calls is one method employed by anomaly detection
systems to recognise malicious code execution. Similarities can be drawn
between this process and the behaviour of certain cells belonging to the human
immune system, and can be applied to construct an artificial immune system. A
recently developed hypothesis in immunology, the Danger Theory, states that our
immune system responds to the presence of intruders through sensing molecules
belonging to those invaders, plus signals generated by the host indicating
danger and damage. We propose the incorporation of this concept into a
responsive intrusion detection system, where behavioural information of the
system and running processes is combined with information regarding individual
system calls.
|
1003.4145
|
Mimicking the Behaviour of Idiotypic AIS Robot Controllers Using
Probabilistic Systems
|
cs.AI cs.NE cs.RO
|
Previous work has shown that robot navigation systems that employ an
architecture based upon the idiotypic network theory of the immune system have
an advantage over control techniques that rely on reinforcement learning only.
This is thought to be a result of intelligent behaviour selection on the part
of the idiotypic robot. In this paper an attempt is made to imitate idiotypic
dynamics by creating controllers that use reinforcement with a number of
different probabilistic schemes to select robot behaviour. The aims are to show
that the idiotypic system is not merely performing some kind of periodic random
behaviour selection, and to try to gain further insight into the processes that
govern the idiotypic mechanism. Trials are carried out using simulated Pioneer
robots that undertake navigation exercises. Results show that a scheme that
boosts the probability of selecting highly-ranked alternative behaviours to 50%
during stall conditions comes closest to achieving the properties of the
idiotypic system, but remains unable to match it in terms of all round
performance.
|
1003.4146
|
A Mathematical Approach to the Study of the United States Code
|
cs.IR cs.CY cs.DL physics.soc-ph
|
The United States Code (Code) is a document containing over 22 million words
that represents a large and important source of Federal statutory law. Scholars
and policy advocates often discuss the direction and magnitude of changes in
various aspects of the Code. However, few have mathematically formalized the
notions behind these discussions or directly measured the resulting
representations. This paper addresses the current state of the literature in
two ways. First, we formalize a representation of the United States Code as the
union of a hierarchical network and a citation network over vertices containing
the language of the Code. This representation reflects the fact that the Code
is a hierarchically organized document containing language and explicit
citations between provisions. Second, we use this formalization to measure
aspects of the Code as codified in October 2008, November 2009, and March 2010.
These measurements allow for a characterization of the actual changes in the
Code over time. Our findings indicate that in the recent past, the Code has
grown in its amount of structure, interdependence, and language.
|
1003.4149
|
Les Entités Nommées : usage et degrés de précision et de
désambiguïsation
|
cs.CL
|
The recognition and classification of Named Entities (NER) are regarded as an
important component for many Natural Language Processing (NLP) applications.
The classification is usually made by taking into account the immediate context
in which the NE appears. In some cases, this immediate context does not allow
getting the right classification. We show in this paper that the use of an
extended syntactic context and large-scale resources could be very useful in
the NER task.
|
1003.4196
|
Development of a Cargo Screening Process Simulator: A First Approach
|
cs.AI cs.CE cs.MA
|
The efficiency of current cargo screening processes at sea and air ports is
largely unknown, as few benchmarks exist against which they could be measured.
Some manufacturers provide benchmarks for individual sensors but we found no
benchmarks that take a holistic view of the overall screening procedures and no
benchmarks that take operator variability into account. Just adding up
resources and manpower used is not an effective way for assessing systems where
human decision-making and operator compliance to rules play a vital role. Our
aim is to develop a decision support tool (cargo-screening system simulator)
that will map the right technology and manpower to the right commodity-threat
combination in order to maximise detection rates. In this paper we present our
ideas for developing such a system and highlight the research challenges we
have identified. Then we introduce our first case study and report on the
progress we have made so far.
|
1003.4216
|
Minimizing the Probability of Lifetime Ruin under Stochastic Volatility
|
q-fin.PM cs.SY math.OC math.PR
|
We assume that an individual invests in a financial market with one riskless
and one risky asset, with the latter's price following a diffusion with
stochastic volatility. In the current financial market especially, it is
important to include stochastic volatility in the risky asset's price process.
Given the rate of consumption, we find the optimal investment strategy for the
individual who wishes to minimize the probability of going bankrupt. To solve
this minimization problem, we use techniques from stochastic optimal control.
|
1003.4270
|
Wireless Network Coding with Imperfect Overhearing
|
cs.IT math.IT
|
Not only is network coding essential to achieve the capacity of a
single-session multicast network, it can also help to improve the throughput of
wireless networks with multiple unicast sessions when overheard information is
available. Most previous research aimed at realizing such improvement by using
perfectly overheard information, while in practice, especially for wireless
networks, overheard information is often imperfect. To date, it is unclear
whether network coding should still be used in such situations with imperfect
overhearing. In this paper, a simple but ubiquitous wireless network model with
two unicast sessions is used to investigate this problem. From the diversity
and multiplexing tradeoff perspective, it is proved that even when overheard
information is imperfect, network coding can still help to improve the overall
system performance. This result implies that network coding should be used
actively regardless of the reception quality of overheard information.
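The core mechanism can be sketched with the textbook two-session XOR example (a sketch only; the paper's diversity-multiplexing tradeoff analysis is not reproduced, and the packets below are arbitrary): the relay XORs the two packets into one transmission, and each destination cancels the packet it overheard.

```python
def xor_bytes(x, y):
    """Bytewise XOR of two equal-length packets."""
    return bytes(a ^ b for a, b in zip(x, y))

# Relay combines the two sessions' packets into a single coded transmission.
pkt_a, pkt_b = b"hello", b"world"
coded = xor_bytes(pkt_a, pkt_b)

# Perfect overhearing: each destination cancels the packet it overheard.
assert xor_bytes(coded, pkt_b) == pkt_a
assert xor_bytes(coded, pkt_a) == pkt_b

# Imperfect overhearing: an error in the overheard copy corrupts only the
# corresponding positions of the decoded packet, not the whole packet.
noisy_b = bytes([pkt_b[0] ^ 0x01]) + pkt_b[1:]
decoded = xor_bytes(coded, noisy_b)
print(decoded)
```

This locality of the error is one intuition for why coding can still pay off when the overheard copy is imperfect.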
|
1003.4274
|
Unbeatable Imitation
|
cs.GT cs.LG
|
We show that for many classes of symmetric two-player games, the simple
decision rule "imitate-the-best" can hardly be beaten by any other decision
rule. We provide necessary and sufficient conditions for imitation to be
unbeatable and show that it can only be beaten by much in games that are of the
rock-scissors-paper variety. Thus, in many interesting examples, like 2x2
games, Cournot duopoly, price competition, rent seeking, public goods games,
common pool resource games, minimum effort coordination games, arms race,
search, bargaining, etc., imitation cannot be beaten by much even by a very
clever opponent.
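The decision rule itself is simple enough to sketch in a toy Cournot duopoly (an illustration of the rule, not of the paper's proofs; the demand intercept, cost, and quantities are invented): next round, play the action of whichever player earned more in the last round.

```python
def profit(q_own, q_other, a=12.0, c=0.0):
    """Symmetric Cournot duopoly profit: linear inverse demand a - Q, unit cost c."""
    return q_own * max(a - q_own - q_other, 0.0) - c * q_own

def imitate_the_best(q_start, opponent_q, rounds=20):
    """'Imitate-the-best': next round, play the action of whichever player
    earned the higher payoff in the last round."""
    mine = q_start
    for _ in range(rounds):
        if profit(opponent_q, mine) > profit(mine, opponent_q):
            mine = opponent_q            # copy the more successful action
    return mine

final = imitate_the_best(2.0, opponent_q=4.0)
print(final)   # the imitator ends up matching the opponent's quantity
```

Against any fixed quantity the imitator ends up matching it (or staying put), after which both earn identical payoffs, which illustrates why a clever opponent cannot get far ahead of the imitator in such games.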
|
1003.4287
|
Towards automated high-throughput screening of C. elegans on agar
|
cs.CV q-bio.GN
|
High-throughput screening (HTS) using model organisms is a promising method
to identify a small number of genes or drugs potentially relevant to human
biology or disease. In HTS experiments, robots and computers do a significant
portion of the experimental work. However, one remaining major bottleneck is
the manual analysis of experimental results, which is commonly in the form of
microscopy images. This manual inspection is labor intensive, slow and
subjective. Here we report our progress towards applying computer vision and
machine learning methods to analyze HTS experiments that use Caenorhabditis
elegans (C. elegans) worms grown on agar. Our main contribution is a robust
segmentation algorithm for separating the worms from the background using
brightfield images. We also show that by combining the output of this
segmentation algorithm with an algorithm to detect the fluorescent dye, Nile
Red, we can reliably distinguish different fluorescence-based phenotypes even
though the visual differences are subtle. The accuracy of our method is similar
to that of expert human analysts. This new capability is a significant step
towards fully automated HTS experiments using C. elegans.
|
1003.4302
|
Optimal Unitary Linear Processing for Amplify-and-Forward Cooperative
OFDM systems
|
cs.IT math.IT math.OC
|
  In this paper, we consider amplify-and-forward relaying in an OFDM system
with unitary linear processing at the relay. We propose a general analytical
framework for finding the unitary linear processing matrix that maximizes the
system achievable rate. We show that the optimal processing matrix is a
permutation matrix, which implies that a subcarrier pairing strategy is
optimal. We further derive the optimal subcarrier pairing schemes for scenarios
with and without the direct source-destination path for diversity. Simulation
results are presented to demonstrate the achievable gain of optimal subcarrier
pairing compared with non-optimal linear processing and with no pairing.
|
1003.4328
|
New inner and outer bounds for the discrete memoryless cognitive
interference channel and some capacity results
|
cs.IT math.IT
|
The cognitive interference channel is an interference channel in which one
transmitter is non-causally provided with the message of the other transmitter.
This channel model has been extensively studied in the past years and capacity
results for certain classes of channels have been proved. In this paper we
present new inner and outer bounds for the capacity region of the cognitive
interference channel as well as new capacity results. Previously proposed outer
bounds are expressed in terms of auxiliary random variables for which no
cardinality constraint is known. Consequently it is not possible to evaluate
such outer bounds explicitly for a given channel model. The outer bound we
derive is based on an idea originally devised by Sato for the broadcast channel
and does not contain auxiliary random variables, allowing it to be more easily
evaluated. The inner bound we derive is the largest known to date and is
explicitly shown to include all previously proposed achievable rate regions.
This comparison highlights which features of the transmission scheme - which
includes rate-splitting, superposition coding, a broadcast channel-like binning
scheme, and Gel'fand-Pinsker coding - are most effective in approaching
capacity. We next present new capacity results for a class of discrete
memoryless channels that we term the "better cognitive decoding regime" which
includes all previously known regimes in which capacity results have been
derived as special cases. Finally, we determine the capacity region of the
semi-deterministic cognitive interference channel, in which the signal at the
cognitive receiver is a deterministic function of the channel inputs.
|
1003.4353
|
XPath Whole Query Optimization
|
cs.DB
|
  Previous work reports on SXSI, a fast XPath engine that executes tree
automata over compressed XML indexes. Here, the reasons why SXSI is so fast are
investigated. It is shown that tree automata can be used as a general framework
for fine-grained XML query optimization. We define the "relevant nodes" of a
query as those nodes that a minimal automaton must touch in order to answer the
query. This notion allows many subtrees to be skipped during execution and,
with the help of particular tree indexes, even allows internal nodes of the
tree to be skipped. We efficiently approximate runs over relevant nodes by
means of on-the-fly removal of alternation and non-determinism of (alternating)
tree automata. We also introduce many implementation techniques that allow us
to evaluate tree automata efficiently, even in the absence of special indexes.
Through extensive experiments, we demonstrate the impact of the different
optimization techniques.
|
1003.4355
|
Closed-Form Expressions for Secrecy Capacity over Correlated Rayleigh
Fading Channels
|
cs.IT cs.CR math.IT
|
  We investigate secure communications over correlated wiretap Rayleigh fading
channels, assuming full channel state information (CSI) is available. Based on
the information-theoretic formulation, we derive closed-form expressions for
the average secrecy capacity and the outage probability. Simulation results
confirm our analytical expressions.
|
1003.4394
|
Mathematical Foundations for a Compositional Distributional Model of
Meaning
|
cs.CL cs.LO math.CT
|
We propose a mathematical framework for a unification of the distributional
theory of meaning in terms of vector space models, and a compositional theory
for grammatical types, for which we rely on the algebra of Pregroups,
introduced by Lambek. This mathematical framework enables us to compute the
meaning of a well-typed sentence from the meanings of its constituents.
Concretely, the type reductions of Pregroups are `lifted' to morphisms in a
category, a procedure that transforms meanings of constituents into a meaning
of the (well-typed) whole. Importantly, meanings of whole sentences live in a
single space, independent of the grammatical structure of the sentence. Hence
the inner-product can be used to compare meanings of arbitrary sentences, as it
is for comparing the meanings of words in the distributional model. The
mathematical structure we employ admits a purely diagrammatic calculus which
exposes how the information flows between the words in a sentence in order to
make up the meaning of the whole sentence. A variation of our `categorical
model' which involves constraining the scalars of the vector spaces to the
semiring of Booleans results in a Montague-style Boolean-valued semantics.
|
1003.4418
|
Evaluation of Query Generators for Entity Search Engines
|
cs.DB cs.IR
|
Dynamic web applications such as mashups need efficient access to web data
that is only accessible via entity search engines (e.g. product or publication
search engines). However, most current mashup systems and applications only
support simple keyword searches for retrieving data from search engines. We
propose the use of more powerful search strategies building on so-called query
generators. For a given set of entities, query generators can automatically
determine a set of search queries to retrieve these entities from
an entity search engine. We demonstrate the usefulness of query generators for
on-demand web data integration and evaluate the effectiveness and efficiency of
query generators for a challenging real-world integration scenario.
|
1003.4539
|
Linear tail-biting trellises: Characteristic generators and the
BCJR-construction
|
cs.IT math.IT
|
We investigate the constructions of tail-biting trellises for linear block
codes introduced by Koetter/Vardy (2003) and Nori/Shankar (2006). For a given
code we will define the sets of characteristic generators more generally than
by Koetter/Vardy and we will investigate how the choice of characteristic
generators affects the set of resulting product trellises, called KV-trellises.
Furthermore, we will show that each KV-trellis is a BCJR-trellis, defined in a
slightly stronger sense than by Nori/Shankar, and that the latter are always
non-mergeable. Finally, we will address a duality conjecture of Koetter/Vardy
by making use of a dualization technique of BCJR-trellises and prove the
conjecture for minimal trellises.
|
1003.4627
|
Unique and Minimum Distance Decoding of Linear Codes with Reduced
Complexity
|
cs.IT math.IT
|
We show that for (systematic) linear codes the time complexity of unique
decoding is O(n^{2}q^{nRH(delta/2/R)}) and the time complexity of minimum
distance decoding is O(n^{2}q^{nRH(delta/R)}). The proposed algorithm inspects
all error patterns in the information set of the received message of weight
less than d/2 or d, respectively.
|
1003.4657
|
Identification of Convection Heat Transfer Coefficient of Secondary
Cooling Zone of CCM based on Least Squares Method and Stochastic
Approximation Method
|
math.OC cs.CE
|
  A detailed mathematical model of heat and mass transfer in a steel ingot of
a curvilinear continuous casting machine is proposed. The process of heat and
mass transfer is described by nonlinear partial differential equations of
parabolic type. The position of the phase boundary is determined by Stefan
conditions. The temperature of the cooling water in the mould channel is
described by a special balance equation. The boundary conditions of the
secondary cooling zone include radiant and convective components of heat
exchange and account for the complex mechanism of heat conduction due to
air-mist cooling using compressed air and water. The convective heat-transfer
coefficient of the secondary cooling zone is unknown and is treated as a
distributed parameter. To solve this problem, an algorithm for initial
parameter adjustment and an algorithm for operative adjustment are developed.
|
1003.4712
|
Game interpretation of Kolmogorov complexity
|
math.LO cs.GT cs.IT math.IT
|
The Kolmogorov complexity function K can be relativized using any oracle A,
and most properties of K remain true for relativized versions. In section 1 we
provide an explanation for this observation by giving a game-theoretic
interpretation and showing that all "natural" properties are either true for
all sufficiently powerful oracles or false for all sufficiently powerful
oracles. This result is a simple consequence of Martin's determinacy theorem,
but its proof is instructive: it shows how one can prove statements about
Kolmogorov complexity by constructing a special game and a winning strategy in
this game. This technique is illustrated by several examples (total conditional
complexity, bijection complexity, randomness extraction, contrasting plain and
prefix complexities).
|
1003.4764
|
Adaptive Beamforming in Interference Networks via Bi-Directional
Training
|
cs.IT math.IT
|
We study distributed algorithms for adjusting beamforming vectors and
receiver filters in multiple-input multiple-output (MIMO) interference
networks, with the assumption that each user uses a single beam and a linear
filter at the receiver. In such a setting there have been several distributed
algorithms studied for maximizing the sum-rate or sum-utility assuming perfect
channel state information (CSI) at the transmitters and receivers. The focus of
this paper is to study adaptive algorithms for time-varying channels, without
assuming any CSI at the transmitters or receivers. Specifically, we consider an
adaptive version of the recent Max-SINR algorithm for a time-division duplex
system. This algorithm uses a period of bi-directional training followed by a
block of data transmission. Training in the forward direction is sent using the
current beamformers and used to adapt the receive filters. Training in the
reverse direction is sent using the current receive filters as beams and used
to adapt the transmit beamformers. The adaptation of both receive filters and
beamformers is done using a least-squares objective for the current block. In
order to improve the performance when the training data is limited, we also
consider using exponentially weighted data from previous blocks. Numerical
results are presented that compare the performance of the algorithms in
different settings.
|
1003.4778
|
A Unique "Nonnegative" Solution to an Underdetermined System: from
Vectors to Matrices
|
cs.IT math.IT
|
This paper investigates the uniqueness of a nonnegative vector solution and
the uniqueness of a positive semidefinite matrix solution to underdetermined
linear systems. A vector solution is the unique solution to an underdetermined
linear system only if the measurement matrix has a row-span intersecting the
positive orthant. Focusing on two types of binary measurement matrices,
Bernoulli 0-1 matrices and adjacency matrices of general expander graphs, we
show that, in both cases, the support size of a unique nonnegative solution can
grow linearly, namely O(n), with the problem dimension n. We also provide
closed-form characterizations of the ratio of this support size to the signal
dimension. For the matrix case, we show that under a necessary and sufficient
condition for the linear compressed observations operator, there will be a
unique positive semidefinite matrix solution to the compressed linear
observations. We further show that a randomly generated Gaussian linear
compressed observations operator will satisfy this condition with
overwhelmingly high probability.
|
1003.4781
|
Large Margin Boltzmann Machines and Large Margin Sigmoid Belief Networks
|
cs.LG cs.AI cs.CV
|
Current statistical models for structured prediction make simplifying
assumptions about the underlying output graph structure, such as assuming a
low-order Markov chain, because exact inference becomes intractable as the
tree-width of the underlying graph increases. Approximate inference algorithms,
on the other hand, force one to trade off representational power with
computational efficiency. In this paper, we propose two new types of
probabilistic graphical models, large margin Boltzmann machines (LMBMs) and
large margin sigmoid belief networks (LMSBNs), for structured prediction.
LMSBNs in particular allow a very fast inference algorithm for arbitrary graph
structures that runs in polynomial time with a high probability. This
probability is data-distribution dependent and is maximized in learning. The
new approach overcomes the representation-efficiency trade-off in previous
models and allows fast structured prediction with complicated graph structures.
We present results from applying a fully connected model to multi-label scene
classification and demonstrate that the proposed approach can yield significant
performance gains over current state-of-the-art methods.
|
1003.4827
|
Tuple-based abstract data types: full parallelism
|
cs.DB
|
  Commutativity has the same inherent limitations as compatibility. It is
therefore worth devising simple concurrency control techniques. We propose a
restricted form of commutativity that increases parallelism without incurring a
higher overhead than compatibility. The advantages of our proposal are: (1)
commutativity of operations is determined at compile time, (2) run-time
checking is as efficient as for compatibility, (3) neither commutativity
relations (4) nor inverse operations need to be specified, and (5) log space
utilization is reduced.
|
1003.4828
|
A framework for designing concurrent and recoverable abstract data types
based on commutativity
|
cs.DB
|
  In this paper, we focus the reader's attention on the problems that
transactional systems must resolve to take advantage of commutativity in a
serializable and recoverable way. Our framework, like others, is based on the
use of conditional commutativity on abstract data types. We present new
features, not previously found in the literature, that both increase
concurrency and simplify recovery.
|