| id | title | categories | abstract |
|---|---|---|---|
0910.4901
|
Distortion Exponent in MIMO Channels with Feedback
|
cs.IT math.IT
|
The transmission of a Gaussian source over a block-fading multiple antenna
channel in the presence of a feedback link is considered. The feedback link is
assumed to be an error-free and delay-free link of capacity 1 bit per channel use.
Under the short-term power constraint, the optimal exponential behavior of the
end-to-end average distortion is characterized for all source-channel bandwidth
ratios. It is shown that the optimal transmission strategy is successive
refinement source coding followed by progressive transmission over the channel,
in which the channel block is allocated dynamically among the layers based on
the channel state using the feedback link as an instantaneous automatic repeat
request (ARQ) signal.
|
0910.4903
|
Articulation and Clarification of the Dendritic Cell Algorithm
|
cs.AI cs.NE
|
The Dendritic Cell algorithm (DCA) is inspired by recent work in innate
immunity. In this paper a formal description of the DCA is given. The DCA is
described in detail, and its use as an anomaly detector is illustrated within
the context of computer security. A port scan detection task is performed to
substantiate the influence of signal selection on the behaviour of the
algorithm. Experimental results provide a comparison of differing input signal
mappings.
|
0910.4955
|
On the Structure of Real-Time Encoders and Decoders in a Multi-Terminal
Communication System
|
cs.IT math.IT math.OC
|
A real-time communication system with two encoders communicating with a
single receiver over separate noisy channels is considered. The two encoders
make distinct partial observations of a Markov source. Each encoder must encode
its observations into a sequence of discrete symbols. The symbols are
transmitted over noisy channels to a finite memory receiver that attempts to
reconstruct some function of the state of the Markov source. Encoding and
decoding must be done in real-time, that is, the distortion measure does not
tolerate delays. Under the assumption that the encoders' observations are
conditionally independent Markov chains given an unobserved time-invariant
random variable, results on the structure of optimal real-time encoders and the
receiver are obtained. It is shown that there exist finite-dimensional
sufficient statistics for the encoders. The problem with noiseless channels and
perfect memory at the receiver is then considered. A new methodology to find
the structure of optimal real-time encoders is employed. A sufficient statistic
with a time-invariant domain is found for this problem. This methodology
exploits the presence of common information between the encoders and the
receiver when communication is over noiseless channels.
|
0910.5002
|
An Iterative Shrinkage Approach to Total-Variation Image Restoration
|
cs.CV
|
The problem of restoration of digital images from their degraded measurements
plays a central role in a multitude of practically important applications. A
particularly challenging instance of this problem occurs in the case when the
degradation phenomenon is modeled by an ill-conditioned operator. In such a
case, the presence of noise makes it impossible to recover a valuable
approximation of the image of interest without using some a priori information
about its properties. Such a priori information is essential for image
restoration, rendering it stable and robust to noise. Particularly, if the
original image is known to be a piecewise smooth function, one of the standard
priors used in this case is defined by the Rudin-Osher-Fatemi model, which
results in total variation (TV) based image restoration. The current arsenal of
algorithms for TV-based image restoration is vast. In the present paper, a
different approach to the solution of the problem is proposed based on the
method of iterative shrinkage (aka iterated thresholding). In the proposed
method, the TV-based image restoration is performed through a recursive
application of two simple procedures, viz. linear filtering and soft
thresholding. Therefore, the method can be identified as belonging to the group
of first-order algorithms which are efficient in dealing with images of
relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional rather than with its
smoothed versions. Moreover, the method provides a single solution for both
isotropic and anisotropic definitions of the TV functional, thereby
establishing a useful connection between the two formulae.
|
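The recursion described in this abstract, alternating a linear filtering step with soft thresholding, is the generic iterative-shrinkage pattern. A minimal sketch on a toy $\ell_2$--$\ell_1$ problem (not the paper's TV functional; the matrix, sparsity level and parameters below are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Generic iterative shrinkage (ISTA): alternate a linear (gradient)
    step with soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# Toy problem: recover a sparse vector from few noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
x_hat = ista(A, A @ x_true, lam=0.01)
```

For TV restoration, the thresholding would act on a transform-domain representation rather than on the signal entries directly; the two-step structure is the same.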
0910.5027
|
Information-theoretically Secret Key Generation for Fading Wireless
Channels
|
cs.CR cs.IT math.IT
|
The multipath-rich wireless environment associated with typical wireless
usage scenarios is characterized by a fading channel response that is
time-varying, location-sensitive, and uniquely shared by a given
transmitter-receiver pair. The complexity associated with a richly scattering
environment implies that the short-term fading process is inherently hard to
predict and best modeled stochastically, with rapid decorrelation properties in
space, time and frequency. In this paper, we demonstrate how the channel state
between a wireless transmitter and receiver can be used as the basis for
building practical secret key generation protocols between two entities. We
begin by presenting a scheme based on level crossings of the fading process,
which is well-suited for the Rayleigh and Rician fading models associated with
a richly scattering environment. Our level crossing algorithm is simple, and
incorporates a self-authenticating mechanism to prevent adversarial
manipulation of message exchanges during the protocol. Since the level crossing
algorithm is best suited for fading processes that exhibit symmetry in their
underlying distribution, we present a second and more powerful approach that is
suited for more general channel state distributions. This second approach is
motivated by observations from quantizing jointly Gaussian processes, but
exploits empirical measurements to set quantization boundaries and a heuristic
log likelihood ratio estimate to achieve an improved secret key generation
rate. We validate both proposed protocols through experiments using a
customized 802.11a platform, and show for the typical WiFi channel that
reliable secret key establishment can be accomplished at rates on the order of
10 bits/second.
|
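An idealized sketch of the quantization idea behind the level-crossing scheme: both parties threshold correlated channel measurements with a guard band and keep only strong excursions. The channel model, noise level and thresholds below are illustrative assumptions, and the public index-reconciliation step of the actual protocol is elided:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
h = rng.standard_normal(n)                 # common fading process (idealized reciprocity)
alice = h + 0.05 * rng.standard_normal(n)  # Alice's noisy measurements
bob = h + 0.05 * rng.standard_normal(n)    # Bob's noisy measurements

# Guard-band quantizer: bit 1 above q_plus, bit 0 below q_minus, else discard.
q_plus, q_minus = 0.3, -0.3

def bits(x):
    out = np.full(len(x), -1)  # -1 marks "no excursion / discarded"
    out[x > q_plus] = 1
    out[x < q_minus] = 0
    return out

a, b = bits(alice), bits(bob)
keep = (a >= 0) & (b >= 0)  # in practice agreed upon via public excursion indices
agreement = np.mean(a[keep] == b[keep])
```

With a guard band wide relative to the measurement noise, the retained bits agree with high probability, which is what makes key reconciliation feasible.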
0910.5073
|
A General Upper Bound on the Size of Constant-Weight Conflict-Avoiding
Codes
|
cs.IT math.IT
|
Conflict-avoiding codes are used in the multiple-access collision channel
without feedback. The number of codewords in a conflict-avoiding code is the
number of potential users that can be supported in the system. In this paper, a
new upper bound on the size of conflict-avoiding codes is proved. This upper
bound is general in the sense that it is applicable to all code lengths and all
Hamming weights. Several existing constructions for conflict-avoiding codes,
which are known to be optimal for Hamming weights equal to four and five, are
shown to be optimal for all Hamming weights in general.
|
0910.5076
|
Algorithmic randomness and monotone complexity on product space
|
cs.IT math.IT
|
We study algorithmic randomness and monotone complexity on the product of the set of infinite binary sequences. We explore the following problems: monotone
complexity on product space, Lambalgen's theorem for correlated probability,
classification of random sets by likelihood ratio tests, decomposition of
complexity and independence, Bayesian statistics for individual random
sequences. Previously, Lambalgen's theorem for correlated probability was shown under a uniform computability assumption in [H. Takahashi, Inform. Comput. 2008]. In this paper we prove the theorem without that assumption.
|
0910.5135
|
Error-correcting codes and phase transitions
|
cs.IT math.IT math.QA
|
The theory of error-correcting codes is concerned with constructing codes
that optimize simultaneously transmission rate and relative minimum distance.
These conflicting requirements determine an asymptotic bound, which is a
continuous curve in the space of parameters. The main goal of this paper is to
relate the asymptotic bound to phase diagrams of quantum statistical mechanical
systems. We first identify the code parameters with Hausdorff and von Neumann
dimensions, by considering fractals consisting of infinite sequences of code
words. We then construct operator algebras associated to individual codes.
These are Toeplitz algebras with a time evolution for which the KMS state at
critical temperature gives the Hausdorff measure on the corresponding fractal.
We extend this construction to algebras associated to limit points of codes,
with non-uniform multi-fractal measures, and to tensor products over varying
parameters.
|
0910.5146
|
Compressed sensing performance bounds under Poisson noise
|
cs.IT math.IT
|
This paper describes performance bounds for compressed sensing (CS) where the
underlying sparse or compressible (sparsely approximable) signal is a vector of
nonnegative intensities whose measurements are corrupted by Poisson noise. In
this setting, standard CS techniques cannot be applied directly for several
reasons. First, the usual signal-independent and/or bounded noise models do not
apply to Poisson noise, which is non-additive and signal-dependent. Second, the
CS matrices typically considered are not feasible in real optical systems
because they do not adhere to important constraints, such as nonnegativity and
photon flux preservation. Third, the typical $\ell_2$--$\ell_1$ minimization
leads to overfitting in the high-intensity regions and oversmoothing in the
low-intensity areas. In this paper, we describe how a feasible positivity- and
flux-preserving sensing matrix can be constructed, and then analyze the
performance of a CS reconstruction approach for Poisson data that minimizes an
objective function consisting of a negative Poisson log likelihood term and a
penalty term which measures signal sparsity. We show that, as the overall
intensity of the underlying signal increases, an upper bound on the
reconstruction error decays at an appropriate rate (depending on the
compressibility of the signal), but that for a fixed signal intensity, the
signal-dependent part of the error bound actually grows with the number of
measurements or sensors. This surprising fact is both proved theoretically and
justified based on physical intuition.
|
0910.5260
|
A Gradient Descent Algorithm on the Grassmann Manifold for Matrix
Completion
|
cs.NA cs.LG
|
We consider the problem of reconstructing a low-rank matrix from a small
subset of its entries. In this paper, we describe the implementation of an
efficient algorithm called OptSpace, based on singular value decomposition
followed by local manifold optimization, for solving the low-rank matrix
completion problem. It has been shown that if the number of revealed entries is
large enough, the output of singular value decomposition gives a good estimate
for the original matrix, so that local optimization reconstructs the correct
matrix with high probability. We present numerical results which show that this
algorithm can reconstruct the low rank matrix exactly from a very small subset
of its entries. We further study the robustness of the algorithm with respect
to noise, and its performance on actual collaborative filtering datasets.
|
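The first stage described here, a singular value decomposition of the partially revealed matrix, can be sketched as below. This is only the spectral initialization, without the trimming and local manifold optimization that OptSpace additionally performs; the size, rank and sampling probability are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, p = 200, 2, 0.5                 # matrix size, rank, sampling probability
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < p         # revealed entries

# Spectral initialization: rank-r truncated SVD of the rescaled
# zero-filled observation matrix.
U, s, Vt = np.linalg.svd(np.where(mask, M, 0.0) / p)
M_hat = U[:, :r] * s[:r] @ Vt[:r]

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

When enough entries are revealed, this estimate is already close to the original matrix, which is what allows the subsequent local optimization to converge to it.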
0910.5261
|
On Detection With Partial Information In The Gaussian Setup
|
cs.IT cs.CR math.IT
|
We introduce the problem of communication with partial information, where
there is an asymmetry between the transmitter and the receiver codebooks.
Practical applications of the proposed setup include the robust signal hashing
problem within the context of multimedia security and asymmetric communications
with resource-lacking receivers. We study this setup in a binary detection
theoretic context for the additive colored Gaussian noise channel. In our
proposed setup, the partial information available at the detector consists of
dimensionality-reduced versions of the transmitter codewords, where the
dimensionality reduction is achieved via a linear transform. We first derive
the corresponding MAP-optimal detection rule and the corresponding conditional
probability of error (conditioned on the partial information the detector
possesses). Then, we constructively quantify an optimal class of linear
transforms, where the cost function is the expected Chernoff bound on the
conditional probability of error of the MAP-optimal detector.
|
0910.5264
|
A Sequential Problem in Decentralized Detection with Communication
|
math.OC cs.IT math.IT
|
A sequential problem in decentralized detection is considered. Two observers
can make repeated noisy observations of a binary hypothesis on the state of the
environment. At any time, observer 1 can stop and send a final binary message
to observer 2 or it may continue to take more measurements. Every time observer
1 postpones its final message to observer 2, it incurs a penalty. Observer 2's
operation under two different scenarios is explored. In the first scenario,
observer 2 waits to receive the final message from observer 1 and then starts
taking measurements of its own. It is then faced with a stopping problem on
whether to stop and declare a decision on the hypothesis or to continue taking
measurements. In the second scenario, observer 2 starts taking measurements
from the beginning. It is then faced with a different stopping problem. At any
time, observer 2 can decide whether to stop and declare a decision on the
hypothesis or to continue to take more measurements and wait for observer 1 to
send its final message. Parametric characterization of optimal policies for the
two observers are obtained under both scenarios. A sequential methodology for
finding the optimal policies is presented. The parametric characterizations are
then extended to problem with increased communication alphabet for the final
message from observer 1 to observer 2; and to the case of multiple peripheral
sensors that each send a single final message to a coordinating sensor who
makes the final decision on the hypothesis.
|
0910.5339
|
On Physically Secure and Stable Slotted ALOHA System
|
cs.IT math.IT
|
In this paper, we consider the standard discrete-time slotted ALOHA with a
finite number of terminals with infinite-size buffers. In our study, we jointly
consider the stability of this system together with the physical layer
security. We conduct our studies on both dominant and original systems, where
in a dominant system each terminal always has a packet in its buffer unlike in
the original system. For N = 2, we obtain the secrecy-stability regions for
both dominant and original systems. Furthermore, we obtain the transmission
probabilities that optimize the system throughput. Lastly, this paper proposes a new methodology for obtaining the joint stability and secrecy regions.
|
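A minimal Monte-Carlo sketch of the dominant system for N = 2 terminals (every terminal always has a packet queued; the secrecy constraint the paper adds is not modeled). A slot carries a packet iff exactly one terminal transmits:

```python
import random

def aloha_throughput(n_terminals, p, n_slots=200000, seed=4):
    """Monte-Carlo throughput of saturated slotted ALOHA: a slot is a
    success iff exactly one of the backlogged terminals transmits."""
    rng = random.Random(seed)
    success = 0
    for _ in range(n_slots):
        tx = sum(rng.random() < p for _ in range(n_terminals))
        success += (tx == 1)
    return success / n_slots

# N = 2 with transmission probability 0.5: success prob is 2 * 0.5 * 0.5 = 0.5
t = aloha_throughput(2, 0.5)
```

The analytical per-slot success probability is N p (1 - p)^(N - 1), which the simulation estimate should match closely.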
0910.5386
|
A theoretical foundation for building Knowledge-work Support Systems
|
cs.HC cs.DL cs.IR
|
In this paper we propose a novel approach aimed at building a new class of
information system platforms which we call the "Knowledge-work Support Systems"
or KwSS. KwSS can play a significant role in enhancing the IS support for
knowledge management processes, including those customarily identified as less
amenable to IS support. Our approach enhances a basic functionality of computer-based information systems, namely improving the efficiency of knowledge workers in accessing, processing and creating useful information. This improvement, along with proper attention to the cultural, social and other aspects of knowledge management processes, can significantly enhance workers' efficiency in performing high-quality knowledge work. In order to build the proposed approach, we develop several new concepts. The approach analyzes information availability and usage from the perspectives of knowledge workers and their work, and consequently brings more transparency to various aspects of the information life-cycle with respect to knowledge management. KwSSs are technology platforms that can be implemented
independently as well as in conjunction with other knowledge management and
data management technology platforms, to provide significant boost in the
knowledge capabilities of organizations.
|
0910.5405
|
Artificial Immune Tissue using Self-Organizing Networks
|
cs.AI cs.NE
|
As introduced by Bentley et al. (2005), artificial immune systems (AIS) lack tissue, which is present in one form or another in all living multi-cellular organisms. Some have argued that this concept, in the context of AIS, brings little novelty to the already saturated field of immune-inspired computational research. This article aims to show that such a component of an
AIS has the potential to bring an advantage to a data processing algorithm in
terms of data pre-processing, clustering and extraction of features desired by
the immune inspired system. The proposed tissue algorithm is based on
self-organizing networks, such as self-organizing maps (SOM) developed by
Kohonen (1996) and an analogy of the so called Toll-Like Receptors (TLR)
affecting the activation function of the clusters developed by the SOM.
|
0910.5410
|
The UNED systems at Senseval-2
|
cs.CL cs.AI
|
We have participated in the SENSEVAL-2 English tasks (all words and lexical
sample) with an unsupervised system based on mutual information measured over a
large corpus (277 million words) and some additional heuristics. A supervised
extension of the system was also presented to the lexical sample task.
Our system scored first among unsupervised systems in both tasks: 56.9%
recall in all words, 40.2% in lexical sample. This is slightly worse than the
first-sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains a challenge.
|
0910.5419
|
Word Sense Disambiguation Based on Mutual Information and Syntactic
Patterns
|
cs.CL cs.AI
|
This paper describes a hybrid system for WSD, presented to the English
all-words and lexical-sample tasks, that relies on two different unsupervised
approaches. The first one selects the senses according to mutual information
proximity between a context word and a variant of the sense. The second heuristic analyzes the examples of use in the glosses of the senses so that simple syntactic patterns are inferred. These patterns are matched against the
disambiguation contexts. We show that the first heuristic obtains a precision
and recall of .58 and .35 respectively in the all words task while the second
obtains .80 and .25. The high precision obtained warrants deeper investigation of the techniques. Results for the lexical sample task are also provided.
|
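The first heuristic, picking the sense variant with the highest mutual information to a context word, can be illustrated with hypothetical co-occurrence counts; the words, counts and corpus size below are invented for the example:

```python
import math

# Hypothetical co-occurrence counts from a corpus
count = {"bank": 10000, "money": 5000, "river": 3000,
         ("bank", "money"): 800, ("bank", "river"): 30}
N = 1_000_000  # hypothetical corpus size in words

def pmi(w1, w2):
    """Pointwise mutual information estimated from raw corpus counts."""
    p12 = count[(w1, w2)] / N
    return math.log(p12 / ((count[w1] / N) * (count[w2] / N)))

# Select the sense variant with the highest PMI to the context word "bank"
best = max(["money", "river"], key=lambda v: pmi("bank", v))
```

The heuristic simply ranks candidate sense variants by their corpus association with the context word; with these toy counts, "money" wins.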
0910.5454
|
The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two
Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars
Desert Research Station in Utah
|
cs.CV astro-ph.EP astro-ph.IM cs.LG stat.ML
|
(ABRIDGED) In previous work, two platforms have been developed for testing
computer-vision algorithms for robotic planetary exploration (McGuire et al.
2004b,2005; Bartolo et al. 2007). The wearable-computer platform has been
tested at geological and astrobiological field sites in Spain (Rivas
Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a
geological field site in Malta. In this work, we (i) apply a Hopfield
neural-network algorithm for novelty detection based upon color, (ii) integrate
a field-capable digital microscope on the wearable computer platform, (iii)
test this novelty detection with the digital microscope at Rivas Vaciamadrid,
(iv) develop a Bluetooth communication mode for the phone-camera platform, in
order to allow access to a mobile processing computer at the field sites, and
(v) test the novelty detection on the Bluetooth-enabled phone-camera connected
to a netbook computer at the Mars Desert Research Station in Utah. This systems
engineering and field testing have together allowed us to develop a real-time
computer-vision system that is capable, for example, of identifying lichens as
novel within a series of images acquired in semi-arid desert environments. We
acquired sequences of images of geologic outcrops in Utah and Spain consisting
of various rock types and colors to test this algorithm. The algorithm robustly
recognized previously-observed units by their color, while requiring only a
single image or a few images to learn colors as familiar, demonstrating its
fast learning capability.
|
0910.5461
|
Anomaly Detection with Score functions based on Nearest Neighbor Graphs
|
cs.LG
|
We propose a novel non-parametric adaptive anomaly detection algorithm for
high dimensional data based on score functions derived from nearest neighbor
graphs on $n$-point nominal data. Anomalies are declared whenever the score of
a test sample falls below $\alpha$, which is supposed to be the desired false
alarm level. The resulting anomaly detector is shown to be asymptotically
optimal in that it is uniformly most powerful for the specified false alarm
level, $\alpha$, for the case when the anomaly density is a mixture of the
nominal and a known density. Our algorithm is computationally efficient, being
linear in dimension and quadratic in data size. It does not require choosing
complicated tuning parameters or function approximation classes and it can
adapt to local structure such as local change in dimensionality. We demonstrate
the algorithm on both artificial and real data sets in high dimensional feature
spaces.
|
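A simplified variant of the nearest-neighbour scoring idea (not the paper's exact score function): rank a test point's k-NN distance against the leave-one-out k-NN distances of the nominal sample, and flag it as anomalous when the resulting empirical score falls below the desired false-alarm level:

```python
import numpy as np

def knn_dist(x, data, k):
    """Distance from x to its k-th nearest neighbour in data."""
    d = np.sort(np.linalg.norm(data - x, axis=1))
    return d[k - 1]

def score(x, nominal, k=3):
    """Empirical p-value style score: fraction of nominal points whose own
    leave-one-out k-NN distance exceeds the test point's k-NN distance."""
    d_x = knn_dist(x, nominal, k)
    d_i = np.array([knn_dist(p, np.delete(nominal, i, axis=0), k)
                    for i, p in enumerate(nominal)])
    return np.mean(d_i >= d_x)

rng = np.random.default_rng(3)
nominal = rng.standard_normal((200, 2))
s_in = score(np.array([0.0, 0.0]), nominal)   # near the nominal mass
s_out = score(np.array([6.0, 6.0]), nominal)  # far outlier
```

Declaring an anomaly when the score drops below a threshold alpha then directly targets a false-alarm rate of alpha on nominal data.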
0910.5516
|
Finding overlapping communities in networks by label propagation
|
physics.soc-ph cs.SI
|
We propose an algorithm for finding overlapping community structure in very
large networks. The algorithm is based on the label propagation technique of
Raghavan, Albert, and Kumara, but is able to detect communities that overlap.
Like the original algorithm, vertices have labels that propagate between
neighbouring vertices so that members of a community reach a consensus on their
community membership. Our main contribution is to extend the label propagation step to include information about more than one community: each vertex can now belong to up to v communities, where v is a parameter of the
algorithm. Our algorithm can also handle weighted and bipartite networks. Tests
on an independently designed set of benchmarks, and on real networks, show the
algorithm to be highly effective in recovering overlapping communities. It is
also very fast and can process very large and dense networks in a short time.
|
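The underlying single-membership label propagation of Raghavan et al., which the paper extends to multiple weighted labels per vertex, can be sketched as follows (the graph and parameters are illustrative):

```python
import random

def label_propagation(adj, n_iter=20, seed=0):
    """Plain (single-membership) label propagation: each vertex repeatedly
    adopts the most common label among its neighbours, ties broken at
    random, until labels stabilize into communities."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(n_iter):
        rng.shuffle(nodes)
        for v in nodes:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if counts:
                best = max(counts.values())
                labels[v] = rng.choice(
                    [l for l, c in counts.items() if c == best])
    return labels

# Two triangles joined by a single bridge edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```

The overlapping extension keeps up to v labels per vertex with belonging coefficients instead of a single label; the propagation loop has the same shape.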
0910.5537
|
An Analysis of Phase Synchronization Mismatch Sensitivity for Coherent
MIMO Radar Systems
|
cs.IT math.IT
|
In this study, the hybrid Cramer-Rao bound (CRB) is developed for target
localization, to establish the sensitivity of the estimation mean-square error
(MSE) to the level of phase synchronization mismatch in coherent Multiple-Input
Multiple-Output (MIMO) radar systems with widely distributed antennas. The
lower bound on the MSE is derived for the joint estimation of the vector of
unknown parameters, consisting of the target location and the mismatch of the
allegedly known system parameters, i.e., phase offsets at the radars.
Synchronization errors are modeled as being random and Gaussian. A closed-form
expression for the hybrid CRB is derived for the case of orthogonal
waveforms. The bound on the target localization MSE is expressed as the sum of
two terms - the first represents the CRB with no phase mismatch, and the second
captures the mismatch effect. The latter is shown to depend on the phase error
variance, the number of mismatched transmitting and receiving sensors and the
system geometry. For a given phase synchronization error variance, this
expression offers the means to analyze the achievable localization accuracy.
Alternatively, for a predetermined localization MSE target value, the derived
expression may be used to determine the necessary phase synchronization level
in the distributed system.
|
0910.5542
|
Forced Evolution in Silico by Artificial Transposons and their Genetic
Operators: The John Muir Ant Problem
|
cs.NE cs.AI
|
Modern evolutionary computation utilizes heuristic optimizations based upon
concepts borrowed from the Darwinian theory of natural selection. We believe
that a vital direction in this field must be algorithms that model the activity
of genomic parasites, such as transposons, in biological evolution. This
publication is our first step in the direction of developing a minimal
assortment of algorithms that simulate the role of genomic parasites.
Specifically, we started in the domain of genetic algorithms (GA) and selected
the Artificial Ant Problem as a test case. We define these artificial
transposons as fragments of an ant's code whose properties cause them to stand apart from the rest. We concluded that artificial transposons,
analogous to real transposons, are truly capable of acting as intelligent
mutators that adapt in response to an evolutionary problem in the course of
co-evolution with their hosts.
|
0910.5559
|
On the characterization of the regions of feasible trajectories in the
workspace of parallel manipulators
|
cs.RO
|
It was shown recently that parallel manipulators with several inverse
kinematic solutions have the ability to avoid parallel singularities [Chablat
1998a] and self-collisions [Chablat 1998b] by choosing appropriate joint
configurations for the legs. In effect, depending on the joint configurations
of the legs, a given configuration of the end-effector may or may not be free
of singularity and collision. Characterization of the collision/singularity-free workspace is useful but may be insufficient, since two configurations can be accessible without collisions or singularities while no feasible trajectory exists between them. The goal of this paper is to define the maximal regions of the workspace where it is possible to execute trajectories. Two different families of regions are defined: 1. those regions
where the end-effector can move between any set of points, and 2. the regions
where any continuous path can be tracked. These regions are characterized from
the notion of aspects and free-aspects recently defined for parallel
manipulators [Chablat 1998b]. The construction of these regions is achieved by
enrichment techniques and using an extension of the octree structures to spaces
of dimension greater than three. Illustrative examples show the interest of
this study to the optimization of trajectories and the design of parallel
manipulators.
|
0910.5673
|
Synchronization and Transient Stability in Power Networks and
Non-Uniform Kuramoto Oscillators
|
math.OC cs.SY math-ph math.DS math.MP
|
Motivated by recent interest for multi-agent systems and smart power grid
architectures, we discuss the synchronization problem for the network-reduced
model of a power system with non-trivial transfer conductances. Our key insight
is to exploit the relationship between the power network model and a
first-order model of coupled oscillators. Assuming overdamped generators
(possibly due to local excitation controllers), a singular perturbation
analysis shows the equivalence between the classic swing equations and a
non-uniform Kuramoto model. Here, non-uniform Kuramoto oscillators are
characterized by multiple time constants, non-homogeneous coupling, and
non-uniform phase shifts. Extending methods from transient stability,
synchronization theory, and consensus protocols, we establish sufficient
conditions for synchronization of non-uniform Kuramoto oscillators. These
conditions reduce to and improve upon previously-available tests for the
standard Kuramoto model. Combining our singular perturbation and Kuramoto
analyses, we derive concise and purely algebraic conditions that relate
synchronization and transient stability of a power network to the underlying
system parameters and initial conditions.
|
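A toy Euler simulation of a non-uniform Kuramoto model, with per-oscillator time constants D_i and a coupling matrix K chosen large enough that the phases pull together; the parameters are illustrative and this is a numerical sketch, not the paper's analytical conditions:

```python
import math

def kuramoto_step(theta, omega, K, D, dt):
    """One Euler step of a non-uniform Kuramoto model:
    D_i * dtheta_i/dt = omega_i - sum_j K[i][j] * sin(theta_i - theta_j)."""
    n = len(theta)
    new = []
    for i in range(n):
        coupling = sum(K[i][j] * math.sin(theta[i] - theta[j])
                       for j in range(n))
        new.append(theta[i] + dt * (omega[i] - coupling) / D[i])
    return new

n = 5
omega = [0.1, -0.2, 0.3, 0.0, -0.2]   # natural frequencies
D = [1.0, 2.0, 1.5, 1.0, 0.5]         # non-uniform time constants
K = [[0 if i == j else 2.0 for j in range(n)] for i in range(n)]  # strong coupling
theta = [0.0, 1.0, 2.0, -1.0, 0.5]
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K, D, 0.01)
spread = max(theta) - min(theta)      # phase cohesiveness after the transient
```

With sufficiently strong coupling relative to the frequency spread, the phases become cohesive, which is the behaviour the paper's algebraic conditions guarantee.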
0910.5682
|
Word Sense Disambiguation Using English-Spanish Aligned Phrases over
Comparable Corpora
|
cs.CL cs.AI
|
In this paper we describe a WSD experiment based on bilingual English-Spanish
comparable corpora in which individual noun phrases have been identified and
aligned with their respective counterparts in the other language. The
evaluation of the experiment has been carried out against SemCor.
We show that, with the alignment algorithm employed, potential precision is high (74.3%); however, the coverage of the method is low (2.7%), because alignments are far less frequent than we expected.
Contrary to our intuition, precision does not rise consistently with the
number of alignments. The coverage is low due to several factors: there are important domain differences, and English and Spanish are too closely related for this approach to discriminate efficiently between senses,
rendering it unsuitable for WSD, although the method may prove more productive
in machine translation.
|
0910.5697
|
High Dimensional Error-Correcting Codes
|
cs.IT math.IT
|
In this paper we construct multidimensional codes of high dimension. The codes can correct high-dimensional errors that either take the form of small clusters or are confined to an area of small radius. We also consider a small number of errors in a small area. The clusters discussed are mainly
spheres such as semi-crosses and crosses. Also considered are clusters with
small number of errors such as 2-bursts, two errors in various clusters, and
three errors on a line. Our main focus is on the redundancy of the codes when
the most dominant parameter is the dimension of the code.
|
0910.5759
|
Secure Source Coding with a Helper
|
cs.IT math.IT
|
We consider a secure source coding problem with a rate-limited helper. In
particular, Alice observes an independent and identically distributed (i.i.d.)
source X and wishes to transmit this source losslessly to Bob over a
rate-limited link. A helper (Helen), observes an i.i.d. correlated source Y and
can transmit information to Bob over a separate rate-limited link. A passive
eavesdropper (Eve) can observe the coded output of Alice, i.e., the link from
Alice to Bob is public. The uncertainty about the source X at Eve, is measured
by the conditional entropy of the source given the coded output of Alice. We
completely characterize the rate-equivocation region for this secure source
coding model, where we show that Slepian-Wolf binning of X with respect to the
coded side information received at Bob is optimal. We next consider a
modification of this model in which Alice also has access to the coded output
of Helen. For the two-sided helper model, we characterize the rate-equivocation
region. While the availability of side information at Alice does not reduce the
rate of transmission from Alice, it significantly enhances the resulting
equivocation at Eve. In particular, the resulting equivocation for the
two-sided helper case is shown to be min(H(X),R_y), i.e., one bit from the
two-sided helper provides one bit of uncertainty at Eve. From this result, we
infer that Slepian-Wolf binning of X is suboptimal and one can further decrease
the information leakage to the eavesdropper by utilizing the side information
at Alice. We finally generalize these results to the case in which there is
additional un-coded side information W available at Bob and characterize the
rate-equivocation regions under the assumption that Y-X-W forms a Markov chain.
|
0910.5761
|
Which graphical models are difficult to learn?
|
stat.ML cond-mat.stat-mech cs.LG
|
We consider the problem of learning the structure of Ising models (pairwise
binary Markov random fields) from i.i.d. samples. While several methods have
been proposed to accomplish this task, their relative merits and limitations
remain somewhat obscure. By analyzing a number of concrete examples, we show
that low-complexity algorithms systematically fail when the Markov random field
develops long-range correlations. More precisely, this phenomenon appears to be
related to the Ising model phase transition (although it does not coincide with
it).
|
0910.5794
|
Calibration of 3-d.o.f. Translational Parallel Manipulators Using Leg
Observations
|
cs.RO
|
The paper proposes a novel approach for the geometrical model calibration of
quasi-isotropic parallel kinematic mechanisms of the Orthoglide family. It is
based on the observations of the manipulator leg parallelism during motions
between the specific test postures and employs a low-cost measuring system
composed of standard comparator indicators attached to universal magnetic
stands. They are used sequentially to measure the deviation of the relevant
leg location while the manipulator moves the TCP along the Cartesian axes.
Using the measured differences, the developed algorithm estimates the joint
offsets and the leg lengths that are treated as the most essential parameters.
Validity of the proposed calibration technique is confirmed by the experimental
results.
|
0910.5904
|
A quantitative notion of redundancy for finite frames
|
math.FA cs.IT math.IT
|
The objective of this paper is to improve the customary definition of
redundancy by providing quantitative measures in its place, which we coin upper
and lower redundancies, that match better with an intuitive understanding of
redundancy for finite frames in a Hilbert space. This motivates a carefully
chosen list of desired properties for upper and lower redundancies. The means
to achieve these properties is to consider the maximum and minimum of a
redundancy function, which is interesting in itself. The redundancy function is
defined on the sphere of the Hilbert space and measures the concentration of
frame vectors around each point. A complete characterization of functions on
the sphere which coincide with a redundancy function for some frame is given.
The upper and lower redundancies obtained from this function are shown to
satisfy all of the intuitively desirable properties. In addition, the range of
values they assume is characterized.
|
0910.5932
|
Metric and Kernel Learning using a Linear Transformation
|
cs.LG cs.CV cs.IR
|
Metric and kernel learning are important in several machine learning
applications. However, most existing metric learning algorithms are limited to
learning metrics over low-dimensional data, while existing kernel learning
algorithms are often limited to the transductive setting and do not generalize
to new data points. In this paper, we study metric learning as a problem of
learning a linear transformation of the input data. We show that for
high-dimensional data, a particular framework for learning a linear
transformation of the data based on the LogDet divergence can be efficiently
kernelized to learn a metric (or equivalently, a kernel function) over an
arbitrarily high dimensional space. We further demonstrate that a wide class of
convex loss functions for learning linear transformations can similarly be
kernelized, thereby considerably expanding the potential applications of metric
learning. We demonstrate our learning approach by applying it to large-scale
real world problems in computer vision and text mining.
|
0910.5950
|
Limits on the Robustness of MIMO Joint Source-Channel Codes
|
cs.IT math.IT
|
In this paper, the theoretical limits on the robustness of MIMO joint
source-channel codes are investigated. The case in which a single joint
source-channel code is used for the entire range of SNRs and for all levels of
required fidelity is considered. Limits on the asymptotic performance of such a system
are characterized in terms of upper bounds on the diversity-fidelity tradeoff,
which can be viewed as an analog version of the diversity-multiplexing
tradeoff. In particular, it is shown that there is a considerable gap between
the diversity-fidelity tradeoff of robust joint source-channel codes and the
optimum tradeoff (without the constraint of robustness).
|
0911.0050
|
How to Compare the Scientific Contributions between Research Groups
|
cs.IR cs.CY
|
We present a method to compare the scientific contributions of research
groups. Given multiple research groups, we construct their journal/proceeding
graphs and then compute the similarity/gap between them using network analysis.
This analysis can be used for measuring similarity/gap of the topics/qualities
between research groups' scientific contributions. We demonstrate the
practicality of our method by comparing the scientific contributions of Korean
researchers with those of global researchers in information security during
2006-2008. The empirical analysis shows that the current security research in
South Korea has been isolated from the global research trend.
|
0911.0054
|
Learning Exponential Families in High-Dimensions: Strong Convexity and
Sparsity
|
cs.LG stat.ML
|
The versatility of exponential families, along with their attendant convexity
properties, makes them a popular and effective statistical model. A central
issue is learning these models in high dimensions, such as when there is some
sparsity pattern in the optimal parameter. This work characterizes a certain
strong convexity property of general exponential families, which allows their
generalization ability to be quantified. In particular, we show how this
property can be used to analyze generic exponential families under L_1
regularization.
|
0911.0089
|
A Secure Communication Game with a Relay Helping the Eavesdropper
|
cs.IT math.IT
|
In this work a four terminal Gaussian network composed of a source, a
destination, an eavesdropper and a jammer relay is studied. The jammer relay
does not hear the source transmission. It assists the eavesdropper and aims to
decrease the achievable secrecy rates. The source, on the other hand, aims to
increase the achievable secrecy rates. Assuming Gaussian strategies at the
source and the jammer relay, this problem is formulated as a two-player
zero-sum continuous game, where the payoff is the achieved secrecy rate. For
this game the Nash Equilibrium is generally achieved with mixed strategies. The
optimal cumulative distribution functions for the source and the jammer relay
that achieve the value of the game, which is the equilibrium secrecy rate, are
found.
|
0911.0090
|
Context-free pairs of groups I: Context-free pairs and graphs
|
math.GR cs.IT math.IT
|
Let $G$ be a finitely generated group, $A$ a finite set of generators and $K$
a subgroup of $G$. We call the pair $(G,K)$ context-free if the set of all
words over $A$ that reduce in $G$ to an element of $K$ is a context-free
language. When $K$ is trivial, $G$ itself is called context-free; context-free
groups were classified more than 20 years ago, in celebrated work of Muller
and Schupp, as the virtually free groups.
Here, we derive some basic properties of such group pairs. Context-freeness
is independent of the choice of the generating set. It is preserved under
finite index modifications of $G$ and finite index enlargements of $K$. If $G$
is virtually free and $K$ is finitely generated then $(G,K)$ is context-free. A
basic tool is the following: $(G,K)$ is context-free if and only if the
Schreier graph of $(G,K)$ with respect to $A$ is a context-free graph.
|
0911.0130
|
Minimal Polynomial Algorithms for Finite Sequences
|
cs.IT cs.DM cs.SC math.IT
|
We show that a straightforward rewrite of a known minimal polynomial
algorithm yields a simpler version of a recent algorithm of A. Salagean.
|
0911.0143
|
Large Families of Optimal Two-Dimensional Optical Orthogonal Codes
|
cs.IT math.IT
|
Nine new 2-D OOCs are presented here, all sharing the common feature of a
code size that is much larger in relation to the number of time slots than
those of constructions appearing previously in the literature. Each of these
constructions is either optimal or asymptotically optimal with respect to
either the original Johnson bound or else a non-binary version of the Johnson
bound introduced in this paper.
The first 5 codes are constructed using polynomials over finite fields - the
first construction is optimal while the remaining 4 are asymptotically optimal.
The next two codes are constructed using rational functions in place of
polynomials and these are asymptotically optimal. The last two codes, also
asymptotically optimal, are constructed by composing two of the above codes
with a constant weight binary code.
Also presented is a three-dimensional OOC that exploits the polarization
dimension.
Finally, phase-encoded optical CDMA is considered and constructions of two
efficient codes are provided.
|
0911.0183
|
A Gibbs Sampling Based MAP Detection Algorithm for OFDM Over Rapidly
Varying Mobile Radio Channels
|
cs.IT math.AC math.IT
|
In orthogonal frequency-division multiplexing (OFDM) systems operating over
rapidly time-varying channels, the orthogonality between subcarriers is
destroyed, leading to inter-carrier interference (ICI) and resulting in an
irreducible error floor. In this paper, a new and low-complexity maximum {\em a
posteriori} probability (MAP) detection algorithm is proposed for OFDM systems
operating over rapidly time-varying multipath channels. The detection algorithm
exploits the banded structure of the frequency-domain channel matrix whose
bandwidth is a parameter to be adjusted according to the speed of the mobile
terminal. Based on this assumption, the received signal vector is decomposed
into reduced dimensional sub-observations in such a way that all components of
the observation vector contributing to the symbol to be detected are included
in the decomposed observation model. The data symbols are then detected by the
MAP algorithm by means of a Markov chain Monte Carlo (MCMC) technique in an
optimal and computationally efficient way. Computational complexity analysis,
as well as simulation results, indicates that this algorithm has
significant performance and complexity advantages over existing suboptimal
detection and equalization algorithms proposed earlier in the literature.
|
0911.0225
|
A Mirroring Theorem and its Application to a New Method of Unsupervised
Hierarchical Pattern Classification
|
cs.LG
|
In this paper, we prove a crucial theorem, called the Mirroring Theorem, which
affirms that, given a collection of samples with enough information in it that
it can be classified into classes and subclasses, (i) there exists a mapping
which classifies and subclassifies these samples, and (ii) there exists a
hierarchical classifier, constructed using Mirroring Neural Networks (MNNs) in
combination with a clustering algorithm, that can approximate this mapping.
Thus, the proof of the Mirroring Theorem provides a theoretical basis for the
existence, and the practical feasibility, of constructing hierarchical
classifiers, given the maps. Our Mirroring Theorem can also be considered an
extension of Kolmogorov's theorem, in that it provides a realistic solution
for unsupervised classification. The techniques we develop are general in
nature and have led to the construction of learning machines which are (i)
tree-like in structure, (ii) modular, (iii) with each module running on a
common algorithm (the tandem algorithm), and (iv) self-supervised. We have
built the architecture, developed the tandem algorithm of such a hierarchical
classifier, and demonstrated it on an example problem.
|
0911.0231
|
Synchronized Task Decomposition for Cooperative Multi-agent Systems
|
cs.MA cs.DC cs.SY
|
It is an amazing fact that remarkably complex behaviors can emerge from a
large collection of very rudimentary dynamical agents through very simple local
interactions. However, how to design these local interactions so as to achieve
certain desired collective behaviors remains elusive. This paper aims to tackle
this challenge and proposes a
divide-and-conquer approach to guarantee specified global behaviors through
local coordination and control design for multi-agent systems. The basic idea
is to decompose the requested global specification into subtasks for each
individual agent. It should be noted that the decomposition is not arbitrary.
The global specification should be decomposed in such a way that the fulfilment
of these subtasks by each individual agent will imply the satisfaction of the
global specification as a team. First, it is shown by a counterexample that not
all specifications can be decomposed in this sense. Then, a natural follow-up
question is what the necessary and sufficient condition should be for the
proposed decomposability of a global specification. The main part of the paper
is set to answer this question. The case of two cooperative agents is
investigated first, and a necessary and sufficient condition is presented and
proven. Later on, the result is generalized to the case of arbitrary finite
number of agents, and a hierarchical algorithm is proposed, which is shown to
be a sufficient condition. Finally, a cooperative control scenario for a team
of three robots is developed to illustrate the task decomposition procedure.
|
0911.0232
|
Distributed strategies for generating weight-balanced and doubly
stochastic digraphs
|
math.OC cs.SY
|
Weight-balanced and doubly stochastic digraphs are two classes of digraphs
that play an essential role in a variety of cooperative control problems,
including formation control, distributed averaging, and optimization. We refer
to a digraph as doubly stochasticable (weight-balanceable) if it admits a
doubly stochastic (weight-balanced) adjacency matrix. This paper studies the
characterization of both classes of digraphs, and introduces distributed
algorithms to compute the appropriate set of weights in each case.
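For concreteness, the defining property can be checked locally at each node: a digraph is weight-balanced when every node's incoming edge weights sum to the same value as its outgoing edge weights. The following is a minimal verification sketch in Python, not the distributed algorithms of the paper; the edge-list representation is our own convention.

```python
# A digraph is weight-balanced when, at every node, the sum of weights on
# incoming edges equals the sum on outgoing edges (illustrative sketch only).

def is_weight_balanced(n, edges):
    """edges: list of (i, j, w) meaning a directed edge i -> j with weight w."""
    in_sum = [0.0] * n
    out_sum = [0.0] * n
    for i, j, w in edges:
        out_sum[i] += w
        in_sum[j] += w
    return all(abs(in_sum[v] - out_sum[v]) < 1e-9 for v in range(n))

# A directed 3-cycle with equal weights is weight-balanced.
print(is_weight_balanced(3, [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]))  # True
```

A doubly stochastic adjacency matrix is stricter still: each node's in-weights and out-weights must each sum to one.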
|
0911.0351
|
On the precoder design of flat fading MIMO systems equipped with MMSE
receivers: a large system approach
|
cs.IT math.IT
|
This paper is devoted to the design of precoders maximizing the ergodic
mutual information (EMI) of bi-correlated flat fading MIMO systems equipped
with MMSE receivers. The channel state information and the second order
statistics of the channel are assumed available at the receiver side and at
the transmitter side, respectively. As the direct maximization of the EMI
requires computationally unattractive algorithms, it is proposed to optimize a
recently introduced approximation of the EMI, obtained when the number of
transmit and receive antennas $t$ and $r$ converge to $\infty$ at the same
rate. It is established that the relative error between the actual EMI and its
approximation is an $O(\frac{1}{t^{2}})$ term. It is shown that the left
singular vectors of the optimum precoder coincide with the eigenvectors of
the transmit covariance matrix, and its singular values are the solution of a
certain maximization problem. Numerical experiments show that the mutual
information provided by this precoder is close to what is obtained by
maximizing the true EMI, but that the algorithm maximizing the approximation is
much less computationally intensive.
|
0911.0460
|
Feature-Weighted Linear Stacking
|
cs.LG cs.AI
|
Ensemble methods, such as stacking, are designed to boost predictive accuracy
by blending the predictions of multiple machine learning models. Recent work
has shown that the use of meta-features, additional inputs describing each
example in a dataset, can boost the performance of ensemble methods, but the
greatest reported gains have come from nonlinear procedures requiring
significant tuning and training time. Here, we present a linear technique,
Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for
improved accuracy while retaining the well-known virtues of linear regression
regarding speed, stability, and interpretability. FWLS combines model
predictions linearly using coefficients that are themselves linear functions of
meta-features. This technique was a key facet of the solution of the second
place team in the recently concluded Netflix Prize competition. Significant
increases in accuracy over standard linear stacking are demonstrated on the
Netflix Prize collaborative filtering dataset.
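A minimal sketch of the FWLS combination rule described above (illustrative only; the coefficient values and meta-features are invented, and the fitting step is omitted):

```python
# Feature-Weighted Linear Stacking (sketch): the blend weight of each base
# model is itself a linear function of meta-features,
#   blend(x) = sum_i ( sum_j v[i][j] * f_j(x) ) * g_i(x)
# where g_i are base-model predictions and f_j are meta-features of example x.

def fwls_predict(v, meta_features, base_preds):
    """v[i][j]: learned coefficient for base model i and meta-feature j."""
    total = 0.0
    for i, g in enumerate(base_preds):
        weight = sum(v[i][j] * f for j, f in enumerate(meta_features))
        total += weight * g
    return total

# Two base models, two hypothetical meta-features (a constant and, say, a
# log rating count). Coefficients are made up for illustration.
v = [[0.5, 0.1], [0.5, -0.1]]
print(fwls_predict(v, [1.0, 2.0], [3.0, 4.0]))  # 0.7*3 + 0.3*4 = 3.3
```

Since the blend is linear in the products of meta-features and base predictions, the coefficients can themselves be fit by ordinary linear regression over those products, which is what preserves the speed and stability of linear stacking.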
|
0911.0462
|
Strange Bedfellows: Quantum Mechanics and Data Mining
|
cs.LG physics.data-an quant-ph
|
Last year, in 2008, I gave a talk titled {\it Quantum Calisthenics}. This
year I am going to tell you about how the work I described then has spun off
in a most unlikely direction. What I am going to talk about is how one maps
the problem of finding clusters in a given data set into a problem in quantum
mechanics. I will then use the tricks I described to let quantum evolution
bring the clusters together on their own.
|
0911.0467
|
On Secure Network Coding with Nonuniform or Restricted Wiretap Sets
|
cs.IT math.IT
|
The secrecy capacity of a network, for a given collection of permissible
wiretap sets, is the maximum rate of communication such that observing links in
any permissible wiretap set reveals no information about the message. This
paper considers secure network coding with nonuniform or restricted wiretap
sets, for example, networks with unequal link capacities where a wiretapper can
wiretap any subset of $k$ links, or networks where only a subset of links can
be wiretapped. Existing results show that for the case of uniform wiretap sets
(networks with equal capacity links/packets where any $k$ can be wiretapped),
the secrecy capacity is given by the cut-set bound, and can be achieved by
injecting $k$ random keys at the source which are decoded at the sink along
with the message. This is the case whether or not the communicating users have
information about the choice of wiretap set. In contrast, we show that for the
nonuniform case, the cut-set bound is not achievable in general when the
wiretap set is unknown, whereas it is achievable when the wiretap set is made
known. We give achievable strategies where random keys are canceled at
intermediate non-sink nodes, or injected at intermediate non-source nodes.
Finally, we show that determining the secrecy capacity is an NP-hard problem.
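The random-key scheme for the uniform case can be illustrated with the smallest example: two unit-capacity links, a wiretapper reading any k = 1 link. This toy sketch is ours, not a construction from the paper:

```python
import random

# Inject one random key at the source and send (key, key XOR message):
# the sink, seeing both links, cancels the key; a wiretapper observing any
# single link sees a uniformly random bit, independent of the message.

def encode(message_bit):
    key = random.randint(0, 1)
    return key, key ^ message_bit  # link 1 carries the key, link 2 the mix

def decode(link1, link2):
    return link1 ^ link2  # the legitimate sink cancels the key

msg = 1
l1, l2 = encode(msg)
assert decode(l1, l2) == msg
```

The abstract's point is that this uniform-case recipe achieves the cut-set bound, while in the nonuniform case the bound can fail unless the wiretap set is made known.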
|
0911.0481
|
An Optimal Method For Wake Detection In SAR Images Using Radon
Transformation Combined With Wavelet Filters
|
cs.CV
|
A new method for ship wake detection in synthetic aperture radar (SAR)
images is explored here. Most detection procedures apply the Radon transform,
since its properties suit the detection purpose better than any other
transformation. Problems remain, however, when the transform is applied to an
image with a high level of noise. This paper articulates the combination of
the Radon transform with wavelet shrinkage methods, which improves the wake
detection process. The shrinkage method, together with the Radon transform,
maximizes the signal-to-noise ratio and hence leads to optimal detection of
lines in SAR images. The originality lies mainly in the denoising segment of
the proposed algorithm. Experiments are carried out on both simulated and real
SAR images. The proposed method makes the detection process more reliable and
improves on conventional methods.
|
0911.0485
|
Novel Intrusion Detection using Probabilistic Neural Network and
Adaptive Boosting
|
cs.NE cs.LG
|
This article applies Machine Learning techniques to solve Intrusion Detection
problems within computer networks. Due to complex and dynamic nature of
computer networks and hacking techniques, detecting malicious activities
remains a challenging task for security experts: currently available
defense systems suffer from low detection capability and a high number of
false alarms. To overcome these performance limitations, we propose a novel
Machine Learning algorithm, namely Boosted Subspace Probabilistic Neural
Network (BSPNN), which integrates an adaptive boosting technique and a
semi-parametric neural network to obtain a good tradeoff between accuracy and
generality. As a result, learning bias and generalization variance can be
significantly minimized. Substantial experiments on the KDD 99 intrusion
benchmark indicate that our model outperforms other state-of-the-art learning
algorithms, with significantly improved detection accuracy, minimal false
alarms and relatively small computational complexity.
|
0911.0486
|
Building a Vietnamese Language Query Processing Framework for ELibrary
Searching Systems
|
cs.DL cs.IR
|
With the objective of building intelligent searching systems for e-libraries
and online bookstores, we have proposed a searching system model based on a
Vietnamese language query processing component. Document searching systems
based on this model allow users to use Vietnamese queries that represent
content information as input, instead of entering keywords for searching
specific fields in a database. To simplify the realization of systems based on
this searching system model, we set the target of building a framework to
support the rapid development of Vietnamese language query processing
components. Such a framework lets the Vietnamese language query processing
component in similar systems in this domain be implemented more easily.
|
0911.0490
|
Breast Cancer Detection Using Multilevel Thresholding
|
cs.CV
|
This paper presents an algorithm which aims to assist the radiologist in
identifying breast cancer at its earlier stages. It combines several image
processing techniques, such as image negatives, thresholding and segmentation,
to detect tumors in mammograms. The algorithm is verified by
using mammograms from Mammographic Image Analysis Society. The results obtained
by applying these techniques are described.
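As an illustrative sketch of the operations combined above (the thresholds and pixel values here are hypothetical, not values from the paper):

```python
# Combine an image negative with multilevel thresholding to partition a
# grayscale array into intensity classes (illustrative sketch only).

def negative(img, max_val=255):
    return [[max_val - p for p in row] for row in img]

def multilevel_threshold(img, t1, t2):
    """Map each pixel to a class: 0 (< t1), 1 ([t1, t2)), 2 (>= t2)."""
    return [[0 if p < t1 else 1 if p < t2 else 2 for p in row] for row in img]

img = [[10, 120, 250],
       [90, 200, 30]]
print(multilevel_threshold(img, 100, 180))  # [[0, 1, 2], [0, 2, 0]]
```

In a mammogram the brightest class would then be passed to a segmentation step to isolate candidate tumor regions.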
|
0911.0492
|
PARNES: A rapidly convergent algorithm for accurate recovery of sparse
and approximately sparse signals
|
math.OC cs.SY math.NA
|
In this article, we propose an algorithm, NESTA-LASSO, for the LASSO problem,
i.e., an underdetermined linear least-squares problem with a 1-norm constraint
on the solution. We prove, under the assumption of the restricted isometry
property (RIP) and a sparsity condition on the solution, that NESTA-LASSO is
guaranteed to be almost always locally linearly convergent. As in the case of
the algorithm NESTA proposed by Becker, Bobin, and Candes, we rely on
Nesterov's accelerated proximal gradient method, which takes O(e^{-1/2})
iterations to come within e > 0 of the optimal value. We introduce a
modification to Nesterov's method that regularly updates the prox-center in a
provably optimal manner, and the aforementioned linear convergence is in part
due to this modification.
In the second part of this article, we attempt to solve the basis pursuit
denoising BPDN problem (i.e., approximating the minimum 1-norm solution to an
underdetermined least squares problem) by using NESTA-LASSO in conjunction with
the Pareto root-finding method employed by van den Berg and Friedlander in
their SPGL1 solver. The resulting algorithm is called PARNES. We provide
numerical evidence to show that it is comparable to currently available
solvers.
|
0911.0499
|
An Innovative Scheme For Effectual Fingerprint Data Compression Using
Bezier Curve Representations
|
cs.CV cs.CR cs.MM
|
Naturally, with the mounting application of biometric systems, a difficulty
arises in storing and handling the acquired biometric data. Fingerprint
recognition is recognized as one of the most mature and established techniques
among all biometric systems. In recent times, with fingerprint recognition
receiving increasingly more attention, the constantly growing volume of
collected fingerprints has been creating enormous problems in storage and
transmission. Hence, the compression of fingerprints has emerged as an
indispensable step in automated fingerprint recognition systems. Several
researchers have presented approaches for fingerprint image compression. In
this paper, we propose a novel and efficient scheme for fingerprint image
compression. The presented scheme utilizes the Bezier curve representations for
effective compression of fingerprint images. Initially, the ridges present in
the fingerprint image are extracted along with their coordinate values using
the approach presented. Subsequently, the control points are determined for all
the ridges by visualizing each ridge as a Bezier curve. The control points of
all the ridges determined are stored and are used to represent the fingerprint
image. When needed, the fingerprint image is reconstructed from the stored
control points using Bezier curves. The quality of the reconstructed
fingerprint is determined by a formal evaluation. The proposed scheme achieves
considerable memory reduction in storing the fingerprint.
|
0911.0505
|
Scientific Data Mining in Astronomy
|
astro-ph.IM cs.DB cs.IR physics.data-an
|
We describe the application of data mining algorithms to research problems in
astronomy. We posit that data mining has always been fundamental to
astronomical research, since data mining is the basis of evidence-based
discovery, including classification, clustering, and novelty discovery. These
algorithms represent a major set of computational tools for discovery in large
databases, which will be increasingly essential in the era of data-intensive
astronomy. Historical examples of data mining in astronomy are reviewed,
followed by a discussion of one of the largest data-producing projects
anticipated for the coming decade: the Large Synoptic Survey Telescope (LSST).
To facilitate data-driven discoveries in astronomy, we envision a new
data-oriented research paradigm for astronomy and astrophysics --
astroinformatics. Astroinformatics is described as both a research approach and
an educational imperative for modern data-intensive astronomy. An important
application area for large time-domain sky surveys (such as LSST) is the rapid
identification, characterization, and classification of real-time sky events
(including moving objects, photometrically variable objects, and the appearance
of transients). We describe one possible implementation of a classification
broker for such events, which incorporates several astroinformatics techniques:
user annotation, semantic tagging, metadata markup, heterogeneous data
integration, and distributed data mining. Examples of these types of
collaborative classification and discovery approaches within other science
disciplines are presented.
|
0911.0508
|
Optimization and Evaluation of Nested Queries and Procedures
|
cs.DB
|
Many database applications perform complex data retrieval and update tasks.
Nested queries, and queries that invoke user-defined functions, which are
written using a mix of procedural and SQL constructs, are often used in such
applications. A straightforward evaluation of such queries involves repeated
execution of parameterized sub-queries or blocks containing queries and
procedural code.
An important problem that arises while optimizing nested queries as well as
queries with joins, aggregates and set operations is the problem of finding an
optimal sort order from a factorial number of possible sort orders. We show
that even a special case of this problem is NP-Hard, and present practical
heuristics that are effective and easy to incorporate in existing query
optimizers.
We also consider iterative execution of queries and updates inside complex
procedural blocks such as user-defined functions and stored procedures.
Parameter batching is an important means of improving performance as it enables
set-oriented processing. The key challenge to parameter batching lies in
rewriting a given procedure/function to process a batch of parameter values. We
propose a solution, based on program analysis and rewrite rules, to automate
the generation of batched forms of procedures and replace iterative database
calls within imperative loops with a single call to the batched form.
We present experimental results for the proposed techniques, and the results
show significant gains in performance.
|
0911.0519
|
Xampling: Signal Acquisition and Processing in Union of Subspaces
|
cs.IT math.IT
|
We introduce Xampling, a unified framework for the acquisition and
processing of signals in a union of subspaces. The framework has two main
functions: analog compression, which narrows down the input bandwidth prior to
sampling with commercial devices, and a nonlinear algorithm that detects the
input subspace prior to conventional signal processing. A representative
union model of spectrally-sparse signals serves as a test-case to study these
Xampling functions. We adopt three metrics for the choice of analog
compression: robustness to model mismatch, required hardware accuracy and
software complexities. We conduct a comprehensive comparison between two
sub-Nyquist acquisition strategies for spectrally-sparse signals, the random
demodulator and the modulated wideband converter (MWC), in terms of these
metrics and draw operative conclusions regarding the choice of analog
compression. We then address low-rate signal processing and develop an algorithm
for that purpose that enables convenient signal processing at sub-Nyquist rates
from samples obtained by the MWC. We conclude by showing that a variety of
other sampling approaches for different union classes fit nicely into our
framework.
|
0911.0645
|
Bayes estimators for phylogenetic reconstruction
|
q-bio.PE cs.LG q-bio.QM
|
Tree reconstruction methods are often judged by their accuracy, measured by
how close they get to the true tree. Yet most reconstruction methods like ML do
not explicitly maximize this accuracy. To address this problem, we propose a
Bayesian solution. Given tree samples, we propose finding the tree estimate
which is closest on average to the samples. This ``median'' tree is known as
the Bayes estimator (BE). The BE literally maximizes posterior expected
accuracy, measured in terms of closeness (distance) to the true tree. We
discuss a unified framework of BE trees, focusing especially on tree distances
which are expressible as squared Euclidean distances. Notable examples include
Robinson--Foulds distance, quartet distance, and squared path difference. Using
simulated data, we show Bayes estimators can be efficiently computed in
practice by hill climbing. We also show that Bayes estimators achieve higher
accuracy, compared to maximum likelihood and neighbor joining.
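When the tree distance is expressible as a squared Euclidean distance, the key computation admits a simple sketch: the posterior expected loss is minimized by the point closest to the mean of the sample embeddings. The vectors below are hypothetical path-difference embeddings; this is not the authors' software:

```python
# Bayes estimator under squared Euclidean tree distances (sketch): the
# unconstrained minimizer of average squared distance to the posterior
# samples is the coordinate-wise mean of their embedding vectors.

def mean_vector(samples):
    n = len(samples)
    return [sum(s[k] for s in samples) / n for k in range(len(samples[0]))]

def closest_sample(samples):
    """Restrict the search to sampled trees: pick the one nearest the mean."""
    m = mean_vector(samples)
    def sqdist(s):
        return sum((a - b) ** 2 for a, b in zip(s, m))
    return min(samples, key=sqdist)

# Hypothetical embedding vectors for three sampled trees.
samples = [[2.0, 4.0], [3.0, 6.0], [8.0, 5.0]]
print(closest_sample(samples))  # [3.0, 6.0]
```

In practice the minimizer must be a valid tree rather than an arbitrary vector, which is why the abstract resorts to hill climbing over tree space instead of this direct projection.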
|
0911.0660
|
The capacity region of a product of two unmatched Gaussian broadcast
channels with three particular messages and a common message
|
cs.IT math.IT
|
This paper considers a Gaussian broadcast channel with two unmatched degraded
components, three particular messages, and a common message that is intended
for all three receivers. It is shown that, for this channel, superposition
coding and Gaussian signalling are sufficient to achieve every point in the
capacity region.
|
0911.0696
|
A proof of the log-concavity conjecture related to the computation of
the ergodic capacity of MIMO channels
|
cs.IT math.IT
|
An upper bound on the ergodic capacity of {\bf MIMO} channels was introduced
recently in arXiv:0903.1952. This upper bound amounts to the maximization on
the simplex of some multilinear polynomial $p(\lambda_1,...,\lambda_n)$ with
non-negative coefficients. Interestingly, the coefficients are subpermanents of
some non-negative matrix. In general, such maximization problems are {\bf
NP-HARD}. But if, say, the functional $\log(p)$ is concave on the simplex and
can be efficiently evaluated, then the maximization can also be done
efficiently. Such log-concavity was conjectured in arXiv:0903.1952. In this
paper we give a self-contained proof of the conjecture, based on the theory of
{\bf H-Stable} polynomials.
|
0911.0709
|
Constellation Precoded Multiple Beamforming
|
cs.IT math.IT
|
Beamforming techniques that employ Singular Value Decomposition (SVD) are
commonly used in Multi-Input Multi-Output (MIMO) wireless communication
systems. In the absence of channel coding, when a single symbol is transmitted,
these systems achieve the full diversity order provided by the channel; whereas
when multiple symbols are simultaneously transmitted, this property is lost.
When channel coding is employed, full diversity order can be achieved. For
example, when Bit-Interleaved Coded Modulation (BICM) is combined with this
technique, full diversity order of NM in an MxN MIMO channel transmitting S
parallel streams is possible, provided a condition on S and the BICM
convolutional code rate is satisfied. In this paper, we present constellation
precoded multiple beamforming which can achieve the full diversity order both
with BICM-coded and uncoded SVD systems. We provide an analytical proof of this
property. To reduce the computational complexity of Maximum Likelihood (ML)
decoding in this system, we employ Sphere Decoding (SD). We report an SD
technique that reduces the computational complexity beyond commonly used
approaches to SD. This technique achieves several orders of magnitude reduction
in computational complexity not only with respect to conventional ML decoding
but also with respect to conventional SD.
|
0911.0736
|
A simple proof that random matrices are democratic
|
math.NA cs.IT math.IT
|
The recently introduced theory of compressive sensing (CS) enables the
reconstruction of sparse or compressible signals from a small set of
nonadaptive, linear measurements. If properly chosen, the number of
measurements can be significantly smaller than the ambient dimension of the
signal and yet preserve the significant signal information. Interestingly, it
can be shown that random measurement schemes provide a near-optimal encoding in
terms of the required number of measurements. In this report, we explore
another relatively unexplored, though often alluded to, advantage of using
random matrices to acquire CS measurements. Specifically, we show that random
matrices are democratic, meaning that each measurement carries roughly the
same amount of signal information. We demonstrate that by slightly increasing
the number of measurements, the system is robust to the loss of a small number
of arbitrary measurements. In addition, we draw connections to oversampling and
demonstrate stability from the loss of significantly more measurements.
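The robustness claim can be illustrated numerically (a minimal sketch, not the report's proof technique): assuming the support of the sparse signal is known, dropping a few rows of an over-determined random Gaussian measurement system still permits exact least-squares recovery. All dimensions and the measurement matrix below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 5, 60          # ambient dim, sparsity, measurements (m >> 2k)

# k-sparse signal with random support and amplitudes
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian measurement matrix and measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# drop 10 arbitrary measurements: each one carries roughly equal
# information, so the remaining rows still determine x on its support
keep = np.setdiff1d(np.arange(m), rng.choice(m, size=10, replace=False))
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(Phi[np.ix_(keep, support)], y[keep], rcond=None)

print(np.allclose(x_hat, x))  # least squares on the support recovers x
```

In practice the support is unknown and a sparse recovery algorithm is used; the sketch only shows that no individual measurement is essential once a modest number of extra rows is taken.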
|
0911.0737
|
Multiple Description Coding of Discrete Ergodic Sources
|
cs.IT math.IT
|
We investigate the problem of Multiple Description (MD) coding of discrete
ergodic processes. We introduce the notion of MD stationary coding, and
characterize its relationship to the conventional block MD coding. In
stationary coding, in addition to the two rate constraints normally considered
in the MD problem, we consider another rate constraint which reflects the
conditional entropy of the process generated by the third decoder given the
reconstructions of the two other decoders. The relationship that we establish
between stationary and block MD coding enables us to devise a universal
algorithm for MD coding of discrete ergodic sources, based on simulated
annealing ideas that were recently proven useful for the standard rate
distortion problem.
|
0911.0753
|
An XML-based Multi-Agent System for Supporting Online Recruitment
Services
|
cs.MA
|
In this paper we propose an XML-based multi-agent recommender system for
supporting online recruitment services. Our system is characterized by the
following features: {\em (i)} it handles user profiles for personalizing the
job search over the Internet; {\em (ii)} it is based on the Intelligent Agent
Technology; {\em (iii)} it uses XML for guaranteeing a light, versatile and
standard mechanism for information representation, storing and exchange. The
paper discusses the basic features of the proposed system, presents the results
of an experimental study we have carried out for evaluating its performance,
and compares the proposed system with other e-recruitment systems presented
in the literature.
|
0911.0781
|
A Way to Understand Various Patterns of Data Mining Techniques for
Selected Domains
|
cs.DB cs.IR
|
Data mining has much in common with traditional work in statistics and
machine learning. However, there are important new issues which arise because
of the sheer size of the data. One of the important problems in data mining is
classification-rule learning, which involves finding rules that partition given
data into predefined classes. In the data mining domain where millions of
records and a large number of attributes are involved, the execution time of
existing algorithms can become prohibitive, particularly in interactive
applications.
|
0911.0787
|
Generalized Discriminant Analysis algorithm for feature reduction in
Cyber Attack Detection System
|
cs.CR cs.CV cs.NE
|
Generalized Discriminant Analysis (GDA) provides an extremely powerful
approach to extracting nonlinear features. The network traffic data provided
for the design of intrusion detection systems are always large and contain
ineffective information, so the worthless information must be removed from the
original high-dimensional database. To improve the generalization ability, we
usually generate a small set of features from the original input variables by
feature extraction. The conventional Linear Discriminant Analysis (LDA)
feature reduction technique has its limitations: it is not suitable for
nonlinear datasets. We therefore propose an efficient algorithm based on the
Generalized Discriminant Analysis (GDA) feature reduction technique, a novel
approach in the area of cyber attack detection. It not only reduces the number
of input features but also increases the classification accuracy and reduces
the training and testing time of the classifiers by selecting the most
discriminating features. We use Artificial Neural Network (ANN) and C4.5
classifiers to compare the performance of the proposed technique. The results
indicate the superiority of the algorithm.
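For reference, the conventional LDA baseline that the abstract contrasts with GDA can be sketched as a two-class Fisher discriminant (GDA, by contrast, applies the same idea after a kernel mapping). The synthetic data and dimensions below are illustrative, not the traffic data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# two linearly separable Gaussian classes in 10-D (illustrative data)
X0 = rng.standard_normal((200, 10))
X1 = rng.standard_normal((200, 10)) + 1.5

# Fisher's linear discriminant: w = Sw^{-1} (m1 - m0),
# where Sw is the within-class scatter
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# project onto the single discriminant direction, threshold at the midpoint
t = w @ (m0 + m1) / 2
err0 = (X0 @ w > t).mean()   # fraction of class 0 misclassified
err1 = (X1 @ w <= t).mean()  # fraction of class 1 misclassified
print(err0, err1)
```

On linearly separable data this projection from 10 features to 1 keeps the classes apart; the abstract's point is that for nonlinear class boundaries no such linear direction exists, which motivates the kernelized GDA.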
|
0911.0801
|
Tractable hypergraph properties for constraint satisfaction and
conjunctive queries
|
cs.DS cs.CC cs.DB cs.DM
|
An important question in the study of constraint satisfaction problems (CSP)
is understanding how the graph or hypergraph describing the incidence structure
of the constraints influences the complexity of the problem. For binary CSP
instances (i.e., where each constraint involves only two variables), the
situation is well understood: the complexity of the problem essentially depends
on the treewidth of the graph of the constraints. However, this is not the
correct answer if constraints with an unbounded number of variables are allowed,
and in particular, for CSP instances arising from query evaluation problems in
database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be
CSP restricted to instances whose hypergraph is in H. Our goal is to
characterize those classes of hypergraphs for which CSP(H) is polynomial-time
solvable or fixed-parameter tractable, parameterized by the number of
variables. Note that in the applications related to database query evaluation,
we usually assume that the number of variables is much smaller than the size of
the instance, thus parameterization by the number of variables is a meaningful
question. The most general known property of H that makes CSP(H)
polynomial-time solvable is bounded fractional hypertree width. Here we
introduce a new hypergraph measure called submodular width, and show that
bounded submodular width of H implies that CSP(H) is fixed-parameter tractable.
In a matching hardness result, we show that if H has unbounded submodular
width, then CSP(H) is not fixed-parameter tractable, unless the Exponential
Time Hypothesis fails.
|
0911.0820
|
Power and Transmission Duration Control for Un-Slotted Cognitive Radio
Networks
|
cs.IT math.IT
|
We consider an unslotted primary channel with alternating on/off activity and
provide a solution to the problem of finding the optimal secondary transmission
power and duration given some sensing outcome. The goal is to maximize a
weighted sum of the primary and secondary throughput where the weight is
determined by the minimum rate required by the primary terminals. The primary
transmitter sends at a fixed power and a fixed rate. Its on/off durations
follow an exponential distribution. Two sensing schemes are considered: perfect
sensing in which the actual state of the primary channel is revealed, and soft
sensing in which the secondary transmission power and time are determined based
on the sensing metric directly. We use an upper bound for the secondary
throughput assuming that the secondary receiver tracks the instantaneous
secondary channel state information. The objective function is non-convex and,
hence, the optimal solution is obtained via exhaustive search. Our results show
that an increase in the overall weighted throughput can be obtained by allowing
the secondary to transmit even when the channel is found to be busy. For the
examined system parameter values, the throughput gain from soft sensing is
marginal. Further investigation is needed for assessing the potential of soft
sensing.
|
0911.0844
|
Sampling and Reconstruction of Signals in a Reproducing Kernel Subspace
of $L^p({\Bbb R}^d)$
|
cs.IT math.FA math.IT
|
In this paper, we consider sampling and reconstruction of signals in a
reproducing kernel subspace of $L^p(\Rd), 1\le p\le \infty$, associated with an
idempotent integral operator whose kernel has certain off-diagonal decay and
regularity. The space of $p$-integrable non-uniform splines and the
shift-invariant spaces generated by finitely many localized functions are our
model examples of such reproducing kernel subspaces of $L^p(\Rd)$. We show that
a signal in such reproducing kernel subspaces can be reconstructed in a stable
way from its samples taken on a relatively-separated set with sufficiently
small gap. We also study the exponential convergence, consistency, and the
asymptotic pointwise error estimate of the iterative approximation-projection
algorithm and the iterative frame algorithm for reconstructing a signal in
those reproducing kernel spaces from its samples with sufficiently small gap.
|
0911.0874
|
State Information in Bayesian Games
|
cs.IT cs.CR cs.GT math.IT
|
Two-player zero-sum repeated games are well understood. Computing the value
of such a game is straightforward. Additionally, if the payoffs depend on a
random state of the game known to one, both, or neither of the players, the
value of the resulting game has been analyzed under the framework of Bayesian
games. This investigation considers the optimal performance in a game
when a helper is transmitting state information to one of the players.
Encoding information for an adversarial setting (game) requires a different
result than rate-distortion theory provides. Game theory has accentuated the
importance of randomization (mixed strategies), which does not play a
significant role in most communication modems and source coding codecs. Higher rates of
communication, used in the right way, allow the message to include the
necessary random component useful in games.
|
0911.0894
|
A New Computational Schema for Euphonic Conjunctions in Sanskrit
Processing
|
cs.CL
|
Automated language processing is central to the drive to enable facilitated
referencing of increasingly available Sanskrit E-texts. The first step towards
processing Sanskrit text involves the handling of Sanskrit compound words that
are an integral part of Sanskrit texts. This firstly necessitates the
processing of euphonic conjunctions or sandhis, which are points in words or
between words, at which adjacent letters coalesce and transform. The ancient
Sanskrit grammarian Panini's codification of the Sanskrit grammar is the
accepted authority in the subject. His famed sutras or aphorisms, numbering
approximately four thousand, tersely, precisely and comprehensively codify the
rules of the grammar, including all the rules pertaining to sandhis. This work
presents a fresh new approach to processing sandhis in terms of a computational
schema. This new computational model is based on Panini's complex codification
of the rules of grammar. The model has simple beginnings and is yet powerful,
comprehensive and computationally lean.
|
0911.0905
|
Combining Training and Quantized Feedback in Multi-Antenna Reciprocal
Channels
|
cs.IT math.IT
|
The communication between a multiple-antenna transmitter and multiple
receivers (users) with either a single or multiple-antenna each can be
significantly enhanced by providing the channel state information at the
transmitter (CSIT) of the users, as this allows for scheduling, beamforming and
multiuser multiplexing gains. The traditional view on how to enable CSIT has
been as follows so far: In time-division duplexed (TDD) systems, uplink (UL)
and downlink (DL) channel reciprocity allows the use of a training sequence in
the UL direction, which is exploited to obtain an UL channel estimate. This
estimate is in turn recycled in the next downlink slot. In frequency-division
duplexed (FDD) systems, which lack the UL and DL reciprocity, the CSIT is
provided via the use of a dedicated feedback link of limited capacity between
the receivers and the transmitter. In this paper, we focus on TDD systems and
put this classical approach in question. In particular, we show that the
traditional TDD setup above fails to fully exploit the channel reciprocity in
its true sense. In fact, we show that the system can benefit from a combined
CSIT acquisition strategy mixing the use of limited feedback and that of a
training sequence. This combining gives rise to a very interesting joint
estimation and detection problem for which we propose two iterative algorithms.
An outage rate based framework is also developed which gives the optimal
resource split between training and feedback. We demonstrate the potential of
this hybrid combining in terms of the improved CSIT quality under a global
training and feedback resource constraint.
|
0911.0907
|
ANN-based Innovative Segmentation Method for Handwritten text in
Assamese
|
cs.CL
|
Artificial Neural Networks (ANNs) have been widely used for the recognition
of optically scanned characters, which partially emulates human thinking in
the domain of Artificial Intelligence. But prior to recognition, it is
necessary to segment the characters from the text into sentences, words, etc.
Segmentation of words into individual letters has been one of the major
problems in handwriting recognition. Despite several successful works all over
the world, the development of such tools for specific languages is still an
ongoing process, especially in the Indian context. This work explores the application of
ANN as an aid to segmentation of handwritten characters in Assamese- an
important language in the North Eastern part of India. The work explores the
performance difference obtained in applying an ANN-based dynamic segmentation
algorithm compared to projection-based static segmentation. The algorithm
involves first training an ANN with individual handwritten characters
recorded from different individuals. Handwritten sentences are separated out
from the text using a static segmentation method. From the segmented line,
individual characters are separated out by first over-segmenting the entire
line. Each of the segments thus obtained is then fed to the trained ANN. At
the point of segmentation where the ANN recognizes a segment, or a combination
of several segments, as similar to a handwritten character, a segmentation
boundary for the character is assumed to exist and segmentation is performed. The
segmented character is next compared to the best available match and the
segmentation boundary confirmed.
|
0911.0912
|
Multi-Agent System Interaction in Integrated SCM
|
cs.MA
|
Coordination between organizations on the strategic, tactical and operational
levels leads to more effective and efficient supply chains. Supply chain
management is increasing day by day in modern enterprises. The environment is
becoming competitive and many enterprises will find it difficult to survive if
they do not make their sourcing, production and distribution more efficient.
Multi-agent supply chain management has been recognized as an effective
methodology for supply chain management. Multi-agent systems (MAS) offer new methods
compared to conventional, centrally organized architectures in the scope of
supply chain management (SCM). Since necessary data are not available within
the whole supply chain, an integrated approach for production planning and
control taking into account all the partners involved is not feasible. In this
study we show how MAS architecture interacts in the integrated SCM architecture
with the help of various intelligent agents to highlight the above problem.
|
0911.0914
|
Enhanced Trustworthy and High-Quality Information Retrieval System for
Web Search Engines
|
cs.IR
|
The WWW is the most important source of information. However, there is no
guarantee of information correctness: much conflicting information is
retrieved by the search engines, and the quality of the provided information
varies from low to high. We provide enhanced trustworthiness in
both specific (entity) and broad (content) queries in web searching. The
filtering of trustworthiness is based on 5 factors: Provenance, Authority, Age,
Popularity, and Related Links. The trustworthiness is calculated based on these
5 factors and it is stored thereby increasing the performance in retrieving
trustworthy websites. The calculated trustworthiness is stored only for static
websites. Quality is provided based on policies selected by the user. Quality
based ranking of retrieved trusted information is provided using WIQA (Web
Information Quality Assessment) Framework.
|
0911.0971
|
Multicell Zero-Forcing and User Scheduling on the Downlink of a Linear
Cell Array
|
cs.IT math.IT
|
Coordinated base station (BS) transmission has attracted much interest for
its potential to increase the capacity of wireless networks. Yet at the same
time, the achievable sum-rate with single-cell processing (SCP) scales
optimally with the number of users under Rayleigh fading conditions. One may
therefore ask if the value of BS coordination is limited in the many-user
regime from a sum-rate perspective. With this in mind we consider multicell
zero-forcing beamforming (ZFBF) on the downlink of a linear cell-array. We
first identify the beamforming weights and the optimal scheduling policy under
a per-base power constraint. We then compare the number of users m and n
required per-cell to achieve the same mean SINR, after optimal scheduling, with
SCP and ZFBF respectively. Specifically, we show that the ratio m/n grows
logarithmically with n. Finally, we demonstrate that the gain in sum-rate
between ZFBF and SCP is significant for all practical values of number of
users.
|
0911.1021
|
Examples as Interaction: On Humans Teaching a Computer to Play a Game
|
cs.AI cs.GT
|
This paper reviews an experiment in human-computer interaction, where
interaction takes place when humans attempt to teach a computer to play a
strategy board game. We show that while individually learned models can be
shown to improve the playing performance of the computer, their straightforward
composition results in diluting what was earlier learned. This observation
suggests that interaction cannot be easily distributed when one hopes to
harness multiple human experts to develop a quality computer player. This is
related to similar approaches in robot task learning and to classic approaches
to human learning and reinforces the need to develop tools that facilitate the
mix of human-based tuition and computer self-learning.
|
0911.1054
|
Sum Rates, Rate Allocation, and User Scheduling for Multi-User MIMO
Vector Perturbation Precoding
|
cs.IT math.IT
|
This paper considers the multiuser multiple-input multiple-output (MIMO)
broadcast channel. We consider the case where the multiple transmit antennas
are used to deliver independent data streams to multiple users via vector
perturbation. We derive expressions for the sum rate in terms of the average
energy of the precoded vector, and use this to derive a high signal-to-noise
ratio (SNR) closed-form upper bound, which we show to be tight via simulation.
We also propose a modification to vector perturbation where different rates can
be allocated to different users. We conclude that for vector perturbation
precoding most of the sum rate gains can be achieved by reducing the rate
allocation problem to the user selection problem. We then propose a
low-complexity user selection algorithm that attempts to maximize the high-SNR
sum rate upper bound. Simulations show that the algorithm outperforms other
user selection algorithms of similar complexity.
|
0911.1072
|
Error Correcting Coding for a Non-symmetric Ternary Channel
|
cs.IT math.IT
|
Ternary channels can be used to model the behavior of some memory devices,
where information is stored in three different levels. In this paper, error
correcting coding for a ternary channel where some of the error transitions are
not allowed is considered. The resulting channel is non-symmetric; therefore,
classical linear codes are not optimal for this channel. We define the
maximum-likelihood (ML) decoding rule for ternary codes over this channel and
show that it is complex to compute, since it depends on the channel error
probability. A simpler alternative decoding rule which depends only on code
properties, called $\da$-decoding, is then proposed. It is shown that
$\da$-decoding and ML decoding are equivalent, i.e., $\da$-decoding is optimal,
under certain conditions. Assuming $\da$-decoding, we characterize the error
correcting capabilities of ternary codes over the non-symmetric ternary
channel. We also derive an upper bound and a constructive lower bound on the
size of codes, given the code length and the minimum distance. The results
arising from the constructive lower bound are then compared, for short sizes,
to optimal codes (in terms of code size) found by a clique-based search. It is
shown that the proposed construction method gives good codes, and that in some
cases the codes are optimal.
|
0911.1082
|
Ergodic Fading One-sided Interference Channels without State Information
at Transmitters
|
cs.IT math.IT
|
This work studies the capacity region of a two-user ergodic interference
channel with fading, where only one of the users is subject to interference
from the other user, and the channel state information (CSI) is only available
at the receivers. A layered erasure model with one-sided interference and with
arbitrary fading statistics is studied first, whose capacity region is
completely determined as a polygon. Each dominant rate pair can be regarded as
the outcome of a trade-off between the rate gain of the interference-free user
and the rate loss of the other user due to interference. Using insights from
the layered erasure model, inner and outer bounds of the capacity region are
provided for the one-sided fading Gaussian interference channels. In
particular, the inner bound is achieved by artificially creating layers in the
signaling of the interference-free user. The outer bound is developed by
characterizing a similar trade-off as in the erasure model by taking a
"layered" view using the incremental channel approach. Furthermore, the gap
between the inner and outer bounds is no more than 12.772 bits per channel use
per user, regardless of the signal-to-noise ratios and fading statistics.
|
0911.1090
|
On the Capacity of Constrained Systems
|
cs.IT math.IT
|
In the first chapter of Shannon's "A Mathematical Theory of Communication,"
it is shown that the maximum entropy rate of an input process of a constrained
system is limited by the combinatorial capacity of the system. Shannon
considers systems where the constraints define regular languages and uses
results from matrix theory in his derivations. In this work, the regularity
constraint is dropped. Using generating functions, it is shown that the maximum
entropy rate of an input process is upper-bounded by the combinatorial capacity
in general. The presented results also allow for a new approach to systems with
regular constraints. As an example, the results are applied to binary sequences
that fulfill the (j,k) run-length constraint and by using the proposed
framework, a simple formula for the combinatorial capacity is given and a
maxentropic input process is defined.
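For the regular run-length example mentioned above, the combinatorial capacity can be computed from the transfer (adjacency) matrix of the constraint graph. A sketch assuming the common (d,k) convention, i.e., runs of 0s between consecutive 1s have length between d and k (the abstract's (j,k) convention may differ):

```python
import numpy as np

def rll_capacity(d, k):
    """Combinatorial capacity (bits/symbol) of the (d,k) run-length
    constraint. States count the 0s emitted since the last 1; capacity
    is log2 of the largest eigenvalue of the state-transition matrix."""
    A = np.zeros((k + 1, k + 1))
    for s in range(k + 1):
        if s < k:
            A[s, s + 1] = 1   # emit a 0 (run not yet at maximum length k)
        if s >= d:
            A[s, 0] = 1       # emit a 1 (at least d zeros already seen)
    return np.log2(max(abs(np.linalg.eigvals(A))))

print(round(rll_capacity(1, 3), 4))  # ~0.5515 for the classical (1,3) case
```

The generating-function approach of the paper yields the same quantity without the regularity assumption; the eigenvalue computation above is the classical matrix-theoretic route Shannon used.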
|
0911.1112
|
Memento: Time Travel for the Web
|
cs.IR cs.DL
|
The Web is ephemeral. Many resources have representations that change over
time, and many of those representations are lost forever. A lucky few manage to
reappear as archived resources that carry their own URIs. For example, some
content management systems maintain version pages that reflect a frozen prior
state of their changing resources. Archives recurrently crawl the web to obtain
the actual representation of resources, and subsequently make those available
via special-purpose archived resources. In both cases, the archival copies have
URIs that are protocol-wise disconnected from the URI of the resource of which
they represent a prior state. Indeed, the lack of temporal capabilities in the
most common Web protocol, HTTP, prevents getting to an archived resource on the
basis of the URI of its original. This turns accessing archived resources into
a significant discovery challenge for both human and software agents, which
typically involves following a multitude of links from the original to the
archival resource, or of searching archives for the original URI. This paper
proposes the protocol-based Memento solution to address this problem, and
describes a proof-of-concept experiment that includes major servers of archival
content, including Wikipedia and the Internet Archive. The Memento solution is
based on existing HTTP capabilities applied in a novel way to add the temporal
dimension. The result is a framework in which archived resources can seamlessly
be reached via the URI of their original: protocol-based time travel for the
Web.
|
0911.1174
|
Sharp Dichotomies for Regret Minimization in Metric Spaces
|
cs.DS cs.LG
|
The Lipschitz multi-armed bandit (MAB) problem generalizes the classical
multi-armed bandit problem by assuming one is given side information consisting
of a priori upper bounds on the difference in expected payoff between certain
pairs of strategies. Classical results of (Lai and Robbins 1985) and (Auer et
al. 2002) imply a logarithmic regret bound for the Lipschitz MAB problem on
finite metric spaces. Recent results on continuum-armed bandit problems and
their generalizations imply lower bounds of $\sqrt{t}$, or stronger, for many
infinite metric spaces such as the unit interval. Is this dichotomy universal?
We prove that the answer is yes: for every metric space, the optimal regret of
a Lipschitz MAB algorithm is either bounded above by any $f\in \omega(\log t)$,
or bounded below by any $g\in o(\sqrt{t})$. Perhaps surprisingly, this
dichotomy does not coincide with the distinction between finite and infinite
metric spaces; instead it depends on whether the completion of the metric space
is compact and countable. Our proof connects upper and lower bound techniques
in online learning with classical topological notions such as perfect sets and
the Cantor-Bendixson theorem. Among many other results, we show a similar
dichotomy for the "full-feedback" (a.k.a., "best-expert") version.
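The logarithmic-regret side of the dichotomy is attained on finite spaces by index policies such as UCB1 of (Auer et al. 2002); a minimal two-armed sketch with illustrative Bernoulli means:

```python
import math, random

random.seed(0)
means = [0.5, 0.9]                 # Bernoulli arm means (arm 1 is optimal)
counts, sums = [0, 0], [0.0, 0.0]

for t in range(1, 5001):
    if 0 in counts:                # play each arm once first
        a = counts.index(0)
    else:                          # UCB1 index: mean + sqrt(2 ln t / n_i)
        a = max(range(2), key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    sums[a] += 1.0 if random.random() < means[a] else 0.0
    counts[a] += 1

print(counts)  # the optimal arm dominates; suboptimal pulls grow ~ log t
```

On infinite spaces such as the unit interval, no such index policy can keep regret logarithmic, which is exactly the other side of the dichotomy the paper characterizes.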
|
0911.1275
|
Continuity of mutual entropy in the large signal-to-noise ratio limit
|
cs.IT cs.IR math.IT math.PR
|
This article addresses the issue of the proof of the entropy power inequality
(EPI), an important tool in the analysis of Gaussian channels of information
transmission, proposed by Shannon.
We analyse continuity properties of the mutual entropy of the input and
output signals in an additive memoryless channel and discuss assumptions under
which the entropy-power inequality holds true.
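For reference, the entropy power inequality in question states that, for independent random vectors $X$ and $Y$ in $\mathbb{R}^n$ with densities,

```latex
N(X+Y) \;\ge\; N(X) + N(Y),
\qquad
N(X) \,=\, \frac{1}{2\pi e}\, e^{2h(X)/n},
```

where $h(\cdot)$ denotes differential entropy, with equality if and only if $X$ and $Y$ are Gaussian with proportional covariance matrices.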
|
0911.1298
|
Affine Grassmann Codes
|
cs.IT math.AG math.IT
|
We consider a new class of linear codes, called affine Grassmann codes. These
can be viewed as a variant of generalized Reed-Muller codes and are closely
related to Grassmann codes. We determine the length, dimension, and the minimum
distance of any affine Grassmann code. Moreover, we show that affine Grassmann
codes have a large automorphism group and determine the number of minimum
weight codewords.
|
0911.1305
|
Retrieval of very large numbers of items in the Web of Science: an
exercise to develop accurate search strategies
|
cs.DL cs.IR physics.soc-ph
|
The current communication presents a simple exercise with the aim of solving
a singular problem: the retrieval of extremely large amounts of items in the
Web of Science interface. As it is known, Web of Science interface allows a
user to obtain at most 100,000 items from a single query. But what about
queries that achieve a result of more than 100,000 items? The exercise
developed one possible way to achieve this objective. The case study is the
retrieval of the entire scientific production from the United States in a
specific year. Different sections of items were retrieved using the field
Source of the database. Then, a simple Boolean statement was created with the
aim of eliminating overlap and improving the accuracy of the search
strategy. The importance of teamwork in the development of advanced search
strategies was noted.
|
0911.1318
|
The relation between Pearson's correlation coefficient r and Salton's
cosine measure
|
cs.IR stat.ME
|
The relation between Pearson's correlation coefficient and Salton's cosine
measure is revealed based on the different possible values of the division of
the L1-norm and the L2-norm of a vector. These different values yield a sheaf
of increasingly straight lines which together form a cloud of points,
constituting the investigated relation. The theoretical results are tested against the author
co-citation relations among 24 informetricians for whom two matrices can be
constructed, based on co-citations: the asymmetric occurrence matrix and the
symmetric co-citation matrix. Both examples completely confirm the theoretical
results. The results enable us to specify an algorithm which provides a
threshold value for the cosine above which none of the corresponding Pearson
correlations would be negative. Using this threshold value can be expected to
optimize the visualization of the vector space.
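The identity underlying this relation is that Pearson's r is exactly Salton's cosine applied to mean-centered vectors; a minimal numerical check (the 24-dimensional vectors mirror the 24 informetricians in the case study, but the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(24)                      # e.g. co-citation counts (synthetic)
y = 0.7 * x + 0.3 * rng.random(24)

def cosine(u, v):
    """Salton's cosine measure between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Pearson's r equals the cosine of the mean-centered vectors
r = np.corrcoef(x, y)[0, 1]
r_via_cosine = cosine(x - x.mean(), y - y.mean())
print(np.isclose(r, r_via_cosine))      # True
```

The paper's finer analysis, via the ratio of the L1-norm to the L2-norm, quantifies how far the uncentered cosine can drift from r, which yields the threshold guaranteeing non-negative correlations.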
|
0911.1320
|
Knowledge linkage structures in communication studies using citation
analysis among communication journals
|
cs.DL cs.IR physics.soc-ph
|
This research analyzes a "who cites whom" matrix in terms of aggregated,
journal-journal citations to determine the location of communication studies on
the academic spectrum. Using the Journal of Communication as the seed journal,
the 2006 data in the Journal Citation Reports are used to map communication
studies. The results show that social and experimental psychology journals are
the most frequently used sources of information in this field. In addition,
several journals devoted to the use and effects of media and advertising are
weakly integrated into the larger communication research community, whereas
communication studies are dominated by American journals.
|
0911.1346
|
Optimal Approximation Algorithms for Multi-agent Combinatorial Problems
with Discounted Price Functions
|
cs.MA cs.DS
|
Submodular functions are an important class of functions in combinatorial
optimization which satisfy the natural properties of decreasing marginal costs.
The study of these functions has led to strong structural properties with
applications in many areas. Recently, there has been significant interest in
extending the theory of algorithms for optimizing combinatorial problems (such
as the spanning-tree network design problem) over submodular functions.
Unfortunately, the lower bounds under the general class of submodular functions
are known to be very high for many of the classical problems.
In this paper, we introduce and study an important subclass of submodular
functions, which we call discounted price functions. These functions are
succinctly representable and generalize linear cost functions. In this paper we
study the following fundamental combinatorial optimization problems: Edge
Cover, Spanning Tree, Perfect Matching and Shortest Path, and obtain tight
upper and lower bounds for these problems.
The main technical contribution of this paper is designing novel adaptive
greedy algorithms for the above problems. These algorithms greedily build the
solution whilst rectifying mistakes made in the previous steps.
|
0911.1368
|
Performance Bounds for Expander-based Compressed Sensing in the presence
of Poisson Noise
|
cs.IT math.IT
|
This paper provides performance bounds for compressed sensing in the presence
of Poisson noise using expander graphs. The Poisson noise model is appropriate
for a variety of applications, including low-light imaging and digital
streaming, where the signal-independent and/or bounded noise models used in the
compressed sensing literature are no longer applicable. In this paper, we
develop a novel sensing paradigm based on expander graphs and propose a MAP
algorithm for recovering sparse or compressible signals from Poisson
observations. The geometry of the expander graphs and the positivity of the
corresponding sensing matrices play a crucial role in establishing the bounds
on the signal reconstruction error of the proposed algorithm. The geometry of
the expander graphs makes them provably superior to random dense sensing
matrices, such as Gaussian or partial Fourier ensembles, for the Poisson noise
model. We support our results with experimental demonstrations.
|
0911.1383
|
Information Geometry and Evolutionary Game Theory
|
cs.IT cs.GT math.DS math.IT nlin.AO
|
The Shahshahani geometry of evolutionary game theory is realized as the
information geometry of the simplex, deriving from the Fisher information
metric of the manifold of categorical probability distributions. Some essential
concepts in evolutionary game theory are realized information-theoretically.
Results are extended to the Lotka-Volterra equation and to multiple population
systems.
|
0911.1386
|
Machine Learning: When and Where the Horses Went Astray?
|
cs.AI cs.LG
|
Machine Learning is usually defined as a subfield of AI, which is busy with
information extraction from raw data sets. Despite its common acceptance and
widespread recognition, this definition is wrong and groundless. Meaningful
information does not belong to the data that bear it. It belongs to the
observers of the data and it is a shared agreement and a convention among them.
Therefore, this private information cannot be extracted from the data by any
means, and all further attempts of Machine Learning apologists to justify their
funny business are inappropriate.
|
0911.1388
|
Binary Non-tiles
|
cs.DM cs.IT math.CO math.IT
|
A subset V of GF(2)^n is a tile if GF(2)^n can be covered by disjoint
translates of V. In other words, V is a tile if and only if there is a subset A
of GF(2)^n such that V+A = GF(2)^n uniquely (i.e., v + a = v' + a' implies that
v=v' and a=a' where v,v' in V and a,a' in A). In some problems in coding theory
and hashing we are given a putative tile V, and wish to know whether or not it
is a tile. In this paper we give two computational criteria for certifying that
V is not a tile. The first involves impossibility of a bin-packing problem, and
the second involves infeasibility of a linear program. We apply both criteria
to a list of putative tiles given by Gordon, Miller, and Ostapenko and show
that none of them are, in fact, tiles.
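For very small n the tiling condition can be tested exhaustively; the brute-force sketch below (not the paper's bin-packing or LP criterion, and exponential in 2^n) just illustrates the definition, representing elements of GF(2)^n as integers with XOR as addition.

```python
# Brute-force tile check over GF(2)^n, for illustration only.
from itertools import combinations

def is_tile(V, n):
    size = 2 ** n
    if size % len(V):              # |V| must divide 2^n to tile
        return False
    size_A = size // len(V)
    for A in combinations(range(size), size_A):
        cover = {v ^ a for v in V for a in A}
        if len(cover) == size:     # |V|*|A| = 2^n, so full cover = unique cover
            return True
    return False

# {000, 001} is a subgroup of GF(2)^3, and subgroups always tile.
print(is_tile([0, 1], 3))      # True
# {000, 011, 101} cannot tile GF(2)^3: 3 does not divide 8.
print(is_tile([0, 3, 5], 3))   # False
```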
|
0911.1419
|
Belief Propagation and Loop Calculus for the Permanent of a Non-Negative
Matrix
|
cs.DS cond-mat.stat-mech cs.DM cs.LG cs.NA math.OC
|
We consider computation of the permanent of an $(N\times N)$ non-negative
matrix, $P=(P_i^j|i,j=1,\cdots,N)$, or equivalently the problem of weighted
counting of the perfect matchings over the complete bipartite graph $K_{N,N}$.
The problem is known to be of likely exponential complexity. Stated as the
partition function $Z$ of a graphical model, the problem allows exact Loop
Calculus representation [Chertkov, Chernyak '06] in terms of an interior
minimum of the Bethe Free Energy functional over non-integer doubly stochastic
matrix of marginal beliefs, $\beta=(\beta_i^j|i,j=1,\cdots,N)$, also
correspondent to a fixed point of the iterative message-passing algorithm of
the Belief Propagation (BP) type. Our main result is an explicit expression of
the exact partition function (permanent) in terms of the matrix of BP
marginals, $\beta$, as $Z=\mbox{Perm}(P)=Z_{BP}
\mbox{Perm}(\beta_i^j(1-\beta_i^j))/\prod_{i,j}(1-\beta_i^j)$, where $Z_{BP}$
is the BP expression for the permanent stated explicitly in terms of $\beta$.
We give two derivations of the formula, a direct one based on the Bethe Free
Energy and an alternative one combining the Ihara graph-$\zeta$ function and
the Loop Calculus approaches. Assuming that the matrix $\beta$ of the Belief
Propagation marginals is calculated, we provide two lower bounds and one
upper-bound to estimate the multiplicative term. Two complementary lower bounds
are based on the Gurvits-van der Waerden theorem and on a relation between the
modified permanent and determinant respectively.
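For orientation, the object being approximated can be written directly: the permanent of a non-negative matrix is the weighted count of perfect matchings of $K_{N,N}$, a sum over all permutations. This $O(N!)$ brute force illustrates the definition only, not the BP or Loop Calculus machinery.

```python
# Permanent as a sum over permutations (perfect matchings of K_{N,N}).
from itertools import permutations

def permanent(P):
    n = len(P)
    total = 0.0
    for sigma in permutations(range(n)):   # each sigma is one perfect matching
        w = 1.0
        for i in range(n):
            w *= P[i][sigma[i]]            # product of matched edge weights
        total += w
    return total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```

The factorial blow-up is exactly why polynomial-time BP-based bounds on the permanent are of interest.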
|
0911.1426
|
On the Capacity of the Half-Duplex Diamond Channel
|
cs.IT math.IT
|
In this paper, a dual-hop communication system composed of a source S and a
destination D connected through two non-interfering half-duplex relays, R1 and
R2, is considered. In the literature of Information Theory, this configuration
is known as the diamond channel. In this setup, four transmission modes are
present, namely: 1) S transmits, and R1 and R2 listen (broadcast mode); 2) S
transmits, R1 listens, and simultaneously, R2 transmits and D listens; 3) S
transmits, R2 listens, and simultaneously, R1 transmits and D listens; 4) R1
and R2 transmit, and D listens (multiple-access mode). Assuming a constant power
constraint for all transmitters, a parameter $\Delta$ is defined, which
captures some important features of the channel. It is proven that for
$\Delta$=0 the capacity of the channel can be attained by successive relaying,
i.e., using modes 2 and 3 defined above in a successive manner. This strategy
may have an infinite gap from the capacity of the channel when $\Delta\neq$0.
To achieve rates as close as 0.71 bits to the capacity, it is shown that the
cases of $\Delta$>0 and $\Delta$<0 should be treated differently. Using new
upper bounds based on the dual problem of the linear program associated with
the cut-set bounds, it is proven that the successive relaying strategy needs to
be enhanced by an additional broadcast mode (mode 1), or multiple access mode
(mode 4), for the cases of $\Delta$<0 and $\Delta$>0, respectively.
Furthermore, it is established that under average power constraints the
aforementioned strategies achieve rates as close as 3.6 bits to the capacity of
the channel.
|
0911.1447
|
On the Normalization and Visualization of Author Co-Citation Data:
Salton's Cosine versus the Jaccard Index
|
physics.soc-ph cs.DL cs.IR
|
The debate about which similarity measure one should use for the
normalization in the case of Author Co-citation Analysis (ACA) is further
complicated when one distinguishes between the symmetrical co-citation--or,
more generally, co-occurrence--matrix and the underlying asymmetrical
citation--occurrence--matrix. In the Web environment, the approach of
retrieving original citation data is often not feasible. In that case, one
should use the Jaccard index, but preferentially after adding the number of
total citations (occurrences) on the main diagonal. Unlike Salton's cosine and
the Pearson correlation, the Jaccard index abstracts from the shape of the
distributions and focuses only on the intersection and the sum of the two sets.
Since the correlations in the co-occurrence matrix may partially be spurious,
this property of the Jaccard index can be considered as an advantage in this
case.
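A minimal sketch on invented binary citation profiles contrasts the two measures: the Jaccard index depends only on the intersection and the union of the two sets, while Salton's cosine also reflects the shapes (norms) of the vectors.

```python
# Jaccard index vs. Salton's cosine on binary profiles (illustration only).
import math

def jaccard(x, y):
    inter = sum(1 for a, b in zip(x, y) if a and b)
    union = sum(1 for a, b in zip(x, y) if a or b)
    return inter / union

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) *
                  math.sqrt(sum(b * b for b in y)))

a = [1, 1, 1, 1, 0, 0]
b = [1, 1, 0, 0, 0, 0]
print(jaccard(a, b))   # 2 / 4 = 0.5
print(cosine(a, b))    # 2 / (2 * sqrt(2)), approximately 0.707
```

The cosine rewards the smaller profile for being "nested" inside the larger one; the Jaccard index penalizes the size mismatch in the union.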
|
0911.1451
|
Co-word Analysis using the Chinese Character Set
|
cs.CL cs.DL
|
Until recently, Chinese texts could not be studied using co-word analysis
because the words are not separated by spaces in Chinese (and Japanese). A word
can be composed of one or more characters. The online availability of programs
that separate Chinese texts makes it possible to analyze them using semantic
maps. Chinese characters contain not only information, but also meaning. This
may enhance the readability of semantic maps. In this study, we analyze 58
words which occur ten or more times in the 1652 journal titles of the China
Scientific and Technical Papers and Citations Database. The word occurrence
matrix is visualized and factor-analyzed.
|
0911.1516
|
A Discourse-based Approach in Text-based Machine Translation
|
cs.CL
|
This paper presents a theoretical research based approach to ellipsis
resolution in machine translation. The formula of discourse is applied in order
to resolve ellipses. The validity of the discourse formula is analyzed by
applying it to the real world text, i.e., newspaper fragments. The source text
is converted into mono-sentential discourses where complex discourses require
further dissection either directly into primitive discourses or first into
compound discourses and later into primitive ones. The procedure of dissection
needs further improvement, i.e., discovering as many primitive discourse forms
as possible. An attempt has been made to investigate new primitive discourses
or patterns from the given text.
|
0911.1517
|
Resolution of Unidentified Words in Machine Translation
|
cs.CL
|
This paper presents a mechanism of resolving unidentified lexical units in
Text-based Machine Translation (TBMT). In a Machine Translation (MT) system it
is unlikely to have a complete lexicon and hence there is intense need of a new
mechanism to handle the problem of unidentified words. These unknown words
could be abbreviations, names, acronyms and newly introduced terms. We have
proposed an algorithm for the resolution of the unidentified words. This
algorithm takes discourse unit (primitive discourse) as a unit of analysis and
provides real time updates to the lexicon. We have manually applied the
algorithm to newspaper fragments. Along with anaphora and cataphora
resolution, many unknown words especially names and abbreviations were updated
to the lexicon.
|
0911.1564
|
New Bounds for Restricted Isometry Constants
|
cs.IT math.IT
|
In this paper we show that if the restricted isometry constant $\delta_k$ of
the compressed sensing matrix satisfies \[ \delta_k < 0.307, \] then $k$-sparse
signals are guaranteed to be recovered exactly via $\ell_1$ minimization when
no noise is present and $k$-sparse signals can be estimated stably in the noisy
case. It is also shown that the bound cannot be substantively improved. An
explicit example is constructed in which $\delta_{k}=\frac{k-1}{2k-1} < 0.5$,
but it is impossible to recover certain $k$-sparse signals.
|
0911.1582
|
Manipulating Tournaments in Cup and Round Robin Competitions
|
cs.AI cs.GT cs.MA
|
In sports competitions, teams can manipulate the result by, for instance,
throwing games. We show that we can decide how to manipulate round robin and
cup competitions, two of the most popular types of sporting competitions, in
polynomial time. In addition, we show that finding the minimal number of games
that need to be thrown to manipulate the result can also be determined in
polynomial time. Finally, we show that there are several different variations
of standard cup competitions where manipulation remains polynomial.
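In the spirit of the cup result, a polynomial-time sketch (invented bracket and relation, not the paper's algorithm): for a balanced single-elimination cup, the set of teams that can win each subtree is computable bottom-up from a relation `can_beat[i][j]` that already folds in any games the manipulating coalition is willing to throw.

```python
# Bottom-up computation of potential winners of a single-elimination cup.
def potential_winners(bracket, can_beat):
    """bracket: nested pairs, e.g. ((0, 1), (2, 3)); returns a set of teams."""
    if isinstance(bracket, int):
        return {bracket}
    left = potential_winners(bracket[0], can_beat)
    right = potential_winners(bracket[1], can_beat)
    winners = set()
    for i in left:                 # i wins if it can beat some right finalist
        if any(can_beat[i][j] for j in right):
            winners.add(i)
    for j in right:
        if any(can_beat[j][i] for i in left):
            winners.add(j)
    return winners

# Team 0 beats everyone; team 3 beats only team 2; teams 1 and 2 beat nobody.
can_beat = [
    [False, True,  True,  True ],
    [False, False, False, False],
    [False, False, False, False],
    [False, False, True,  False],
]
print(potential_winners(((0, 1), (2, 3)), can_beat))   # {0}
```

Each bracket node is processed once over team pairs, so the whole check is polynomial in the number of teams.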
|
0911.1672
|
Biological Computing Fundamentals and Futures
|
cs.CE q-bio.OT
|
The fields of computing and biology have begun to cross paths in new ways. In
this paper a review of the current research in biological computing is
presented. Fundamental concepts are introduced and these foundational elements
are explored to discuss the possibilities of a new computing paradigm. We
assume the reader to possess a basic knowledge of Biology and Computer Science.
|