| id | title | categories | abstract |
|---|---|---|---|
1201.1216
|
Probabilistic Motion Estimation Based on Temporal Coherence
|
cs.CV cs.IT math.IT
|
We develop a theory for the temporal integration of visual motion motivated
by psychophysical experiments. The theory proposes that input data are
temporally grouped and used to predict and estimate the motion flows in the
image sequence. This temporal grouping can be considered a generalization of
the data association techniques used by engineers to study motion sequences.
Our temporal-grouping theory is expressed in terms of the Bayesian
generalization of standard Kalman filtering. To implement the theory we derive
a parallel network which shares some properties of cortical networks. Computer
simulations of this network demonstrate that our theory qualitatively accounts
for psychophysical experiments on motion occlusion and motion outliers.
|
1201.1221
|
Information Distance: New Developments
|
cs.CV cs.IT math.IT physics.data-an
|
In pattern recognition, learning, and data mining one obtains information
from information-carrying objects. This involves an objective definition of the
information in a single object, the information to go from one object to
another object in a pair of objects, the information to go from one object to
any other object in a collection of multiple objects, and the shared information between
objects. This is called "information distance." We survey a selection of new
developments in information distance.
|
1201.1259
|
Network Analysis of the French Environmental Code
|
cs.SI cs.DL physics.soc-ph
|
We perform a detailed analysis of the network constituted by the citations in
a legal code, searching for hidden structures and properties. The graph
associated with the Environmental Code has a small-world structure and is
partitioned into several hidden communities of articles that only partially
coincide with the organization of the code as given by its table of contents.
Several articles are connected with only a small number of other articles but
act as intermediaries between large communities. The structure of the
Environmental Code contrasts with the reference network of all the French
legal codes, which presents a rich-club of ten codes very central to the whole
French legal system, but no small-world property. This comparison shows that
the structural properties of the reference network associated with a legal
system strongly depend on the scale and granularity of the analysis, as is the
case for many complex systems.
|
1201.1262
|
A Network Approach to the French System of Legal codes - Part I:
Analysis of a Dense Network
|
cs.SI physics.soc-ph
|
We explore one aspect of the structure of a codified legal system at the
national level using a new type of representation to understand the strong or
weak dependencies between the various fields of law. In Part I of this study,
we analyze the graph associated with the network in which each French legal
code is a vertex and an edge is produced between two vertices when a code cites
another code at least once. We show that this network is distinguished from
many other real networks by its high density, giving it a particular structure
that we call a concentrated world and that differentiates a national legal
system (as considered with a resolution at the code level) from the small-world
graphs identified in many social networks. Our analysis then shows that a few
communities (groups of highly wired vertices) of codes covering large domains
of regulation structure the whole system. Indeed, we mainly find a central
group of influential codes, a group of codes related to social issues, and a group
of codes dealing with territories and natural resources. The study of this
codified legal system is also of interest in the field of the analysis of real
networks. In particular we examine the impact of the high density on the
structural characteristics of the graph and on the ways communities are
searched for. Finally, we provide an original visualization of this graph on a
hemicycle-like plot, a representation based on a statistical reduction of
dissimilarity measures between vertices. In Part II (a following paper) we
show how taking into account the weights attributed to each edge in the
network, in proportion to the number of citations between two vertices (codes),
allows us to deepen the analysis of the French legal system.
|
1201.1278
|
Novel Relations between the Ergodic Capacity and the Average Bit Error
Rate
|
cs.IT math.IT math.PR math.ST stat.TH
|
Ergodic capacity and average bit error rate have been widely used to compare
the performance of different wireless communication systems, and recent
research has shown the strong impact these two indicators have on the design
and implementation of wireless technologies. To the best of our knowledge,
however, direct links between these two performance indicators have not been
explicitly proposed in the literature so far. In this paper, we propose novel
relations between the ergodic capacity and
the average bit error rate of an overall communication system using binary
modulation schemes for signaling with a limited bandwidth and operating over
generalized fading channels. More specifically, we show that these two
performance measures can be represented in terms of each other, without the
need to know the exact end-to-end statistical characterization of the
communication channel. We validate the correctness and accuracy of our newly
proposed relations and illustrate their usefulness by considering some
classical examples.
|
1201.1340
|
A Tiled-Table Convention for Compressing FITS Binary Tables
|
astro-ph.IM cs.DB
|
This document describes a convention for compressing FITS binary tables that
is modeled after the FITS tiled-image compression method (White et al. 2009)
that has been in use for about a decade. The input table is first optionally
subdivided into tiles, each containing an equal number of rows, then every
column of data within each tile is compressed and stored as a variable-length
array of bytes in the output FITS binary table. All the header keywords from
the input table are copied to the header of the output table and remain
uncompressed for efficient access. The output compressed table contains the
same number and order of columns as in the input uncompressed binary table.
There is one row in the output table corresponding to each tile of rows in the
input table. In principle, each column of data can be compressed using a
different algorithm that is optimized for the type of data within that column;
however, in the prototype implementation described here, the gzip algorithm is
used to compress every column.
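The per-tile, per-column scheme described above can be sketched in a few lines of Python. This is a toy illustration, not the FITS convention itself: zlib's DEFLATE stands in for gzip, values are assumed to be pre-encoded bytes, and all names are ours.

```python
import zlib


def compress_table(columns, tile_rows):
    """Tile a table by rows, then compress each column within each tile.

    columns: dict mapping column name -> list of bytes values (one per row).
    tile_rows: number of rows per tile.
    Returns one output row per tile, mirroring the convention's layout:
    each output row maps a column name to a compressed byte array.
    """
    n_rows = len(next(iter(columns.values())))
    out_rows = []
    for start in range(0, n_rows, tile_rows):
        row = {}
        for name, values in columns.items():
            # concatenate this column's slice of the tile, then compress it
            tile = b"".join(values[start:start + tile_rows])
            row[name] = zlib.compress(tile)
        out_rows.append(row)
    return out_rows
```

Each output row corresponds to one tile of input rows, so a reader can decompress a single column of a single tile without touching the rest of the table.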
|
1201.1345
|
FITS Checksum Proposal
|
astro-ph.IM cs.DB
|
The checksum keywords described here provide an integrity check on the
information contained in FITS HDUs. (Header and Data Units are the basic
components of FITS files, consisting of header keyword records followed by
optional associated data records). The CHECKSUM keyword is defined to have a
value that forces the 32-bit 1's complement checksum accumulated over all the
2880-byte FITS logical records in the HDU to equal negative 0. (Note that 1's
complement arithmetic has both positive and negative zero elements). Verifying
that the accumulated checksum is still equal to -0 provides a fast and fairly
reliable way to determine that the HDU has not been modified by subsequent data
processing operations or corrupted while copying or storing the file on
physical media.
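The end-around-carry accumulation the keywords rely on can be sketched in Python. This is only the 32-bit 1's complement arithmetic, with names of our choosing; the FITS proposal additionally defines an ASCII encoding of the CHECKSUM value that is not shown here.

```python
def ones_complement_add32(a, b):
    """Add two 32-bit words with end-around carry (1's complement addition)."""
    s = a + b
    while s > 0xFFFFFFFF:
        # fold any carry out of bit 31 back into the low bits
        s = (s & 0xFFFFFFFF) + (s >> 32)
    return s


def checksum32(data):
    """Accumulate a 1's complement checksum over big-endian 32-bit words."""
    total = 0
    for i in range(0, len(data), 4):
        word = int.from_bytes(data[i:i + 4], "big")
        total = ones_complement_add32(total, word)
    return total
```

Writing the 1's complement of the accumulated sum into a reserved word forces the checksum over the whole unit to 0xFFFFFFFF, the 1's complement representation of negative zero, which is the verification condition described above.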
|
1201.1384
|
A Thermodynamical Approach for Probability Estimation
|
cs.LG physics.data-an stat.ME
|
The issue of discrete probability estimation for samples of small size is
addressed in this study. The maximum likelihood method often suffers
over-fitting when insufficient data is available. Although the Bayesian
approach can avoid over-fitting by using prior distributions, it still has
problems with objective analysis. In response to these drawbacks, we developed
a new theoretical framework based on thermodynamics, in which energy and
temperature are introduced. Entropy and likelihood are placed at the center of
this method. The key principle of inference for probability mass functions is
the minimum free energy, which is shown to unify the two principles of maximum
likelihood and maximum entropy. Our method can robustly estimate probability
functions from small samples.
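The unification the abstract claims can be written compactly. The following is our own sketch of the principle, with $L(p)$ the log-likelihood of the data under a candidate probability mass function $p$, $H(p)$ its Shannon entropy, and $T$ the temperature:

```latex
% Free energy of a candidate pmf p at temperature T:
F(p) \;=\; -\,L(p) \;-\; T\,H(p),
\qquad
\hat{p} \;=\; \operatorname*{arg\,min}_{p} F(p).
% As T -> 0, minimizing F reduces to maximum likelihood (maximize L);
% as T -> infinity, it reduces to maximum entropy (maximize H).
```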
|
1201.1409
|
Interactive Character Posing by Sparse Coding
|
cs.GR cs.AI
|
Character posing is of interest in computer animation. It is difficult due to
its dependence on inverse kinematics (IK) techniques and on the articulated
nature of human characters. To solve the IK problem, classical methods that
rely on numerical solutions often suffer from the under-determination problem
and cannot guarantee naturalness. Existing data-driven methods address this
problem by learning from motion capture data. When facing a large variety of
poses, however, these methods may not be able to capture the pose styles or be
applicable in real-time environments. Inspired by the low-rank motion
de-noising and completion model in \cite{lai2011motion}, we propose a novel
model for character posing based on sparse coding. Unlike conventional
approaches, our model directly captures the pose styles in Euclidean space to
provide intuitive training error measurements and facilitate pose synthesis. A
pose dictionary is learned in the training stage, and based on it natural poses
are synthesized to satisfy users' constraints. We compare our model with
existing models on the tasks of pose de-noising and completion. Experiments
show that our model obtains lower de-noising and completion errors. We also
provide user interface (UI) examples illustrating that our model is effective
for interactive character posing.
|
1201.1417
|
Picture Collage with Genetic Algorithm and Stereo vision
|
cs.CV
|
In this paper, a salient region extraction method for creating picture
collage based on stereo vision is proposed. Picture collage is a kind of visual
image summary to arrange all input images on a given canvas, allowing overlay,
to maximize visible visual information. The salient regions of each image are
firstly extracted and represented as a depth map. The output picture collage
shows as many visible salient regions (without being overlaid by others) from
all images as possible. A very efficient genetic algorithm is used here for the
optimization. The experimental results show the superior performance of the
proposed method.
|
1201.1422
|
Minutiae Extraction from Fingerprint Images - a Review
|
cs.CV cs.CR
|
Fingerprints are the oldest and most widely used form of biometric
identification. Everyone is known to have unique, immutable fingerprints. As
most Automatic Fingerprint Recognition Systems are based on local ridge
features known as minutiae, marking minutiae accurately and rejecting false
ones is very important. However, fingerprint images get degraded and corrupted
due to variations in skin and impression conditions. Thus, image enhancement
techniques are employed prior to minutiae extraction. A critical step in
automatic fingerprint matching is to reliably extract minutiae from the input
fingerprint images. This paper presents a review of a large number of
techniques present in the literature for extracting fingerprint minutiae. The
techniques are broadly classified as those working on binarized images and
those that work on gray scale images directly.
|
1201.1450
|
The Interaction of Entropy-Based Discretization and Sample Size: An
Empirical Study
|
stat.ML cs.LG
|
An empirical investigation of the interaction of sample size and
discretization - in this case the entropy-based method CAIM (Class-Attribute
Interdependence Maximization) - was undertaken to evaluate the impact and
potential bias introduced into data mining performance metrics due to variation
in sample size as it impacts the discretization process. Of particular interest
was the effect of discretizing within cross-validation folds as opposed to
outside the folds. Previous publications have suggested that discretizing
externally can bias performance results; however, a thorough review of the
literature found no empirical evidence to support such an assertion. This
investigation involved construction of over 117,000 models on seven distinct
datasets from the UCI (University of California-Irvine) Machine Learning
Library and multiple modeling methods across a variety of configurations of
sample size and discretization, with each unique "setup" being independently
replicated ten times. The analysis revealed a significant optimistic bias as
sample sizes decreased and discretization was employed. The study also revealed
that there may be a relationship between the interaction that produces such
bias and the numbers and types of predictor attributes, extending the "curse of
dimensionality" concept from feature selection into the discretization realm.
Directions for further exploration are laid out, as well as some general
guidelines about the proper application of discretization in light of these
results.
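The protocol distinction under study (discretizing inside each cross-validation fold versus once, externally, on all data) can be sketched as follows. Equal-width binning stands in for CAIM here, and all names are illustrative:

```python
def equal_width_bins(train, n_bins=4):
    """Learn bin edges from the training values only (stand-in for CAIM)."""
    lo, hi = min(train), max(train)
    step = (hi - lo) / n_bins or 1.0  # guard against a degenerate range
    return [lo + step * k for k in range(1, n_bins)]


def discretize(values, edges):
    """Map each value to the index of its bin."""
    return [sum(v > e for e in edges) for v in values]


def per_fold_discretize(folds):
    """Leakage-free protocol: fit edges on each training split only,
    then apply them to the held-out fold."""
    out = []
    for i, test in enumerate(folds):
        train = [v for j, f in enumerate(folds) for v in f if j != i]
        out.append(discretize(test, equal_width_bins(train)))
    return out
```

Discretizing externally would instead call `equal_width_bins` once on all folds pooled together, letting the held-out data influence the edges; that is the potential source of the optimistic bias the study measures.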
|
1201.1462
|
Symbol-Index-Feedback Polar Coding Schemes for Low-Complexity Devices
|
cs.IT math.IT
|
Recently, a new class of error-control codes, the polar codes, has attracted
much attention. Polar codes are the first known class of capacity-achieving
codes for many important communication channels. In addition, polar codes have
low-complexity encoding algorithms. Therefore, these codes are favorable
choices for low-complexity devices, for example, in ubiquitous computing and
sensor networks. However, the polar codes fall short in terms of finite-length
error probabilities, compared with the state-of-the-art codes, such as the
low-density parity-check codes. In this paper, in order to improve the error
probabilities of the polar codes, we propose novel interactive coding schemes
using receiver feedbacks based on polar codes. The proposed coding schemes have
very low computational complexity at the transmitter side. Experimental
results show that the proposed coding schemes achieve significantly lower
error probabilities.
|
1201.1507
|
Sampling properties of directed networks
|
physics.soc-ph cs.SI physics.data-an
|
For many real-world networks only a small "sampled" version of the original
network may be investigated; those results are then used to draw conclusions
about the actual system. Variants of breadth-first search (BFS) sampling, which
are based on epidemic processes, are widely used. Although it is well
established that BFS sampling fails, in most cases, to capture the
IN-component(s) of directed networks, a description of the effects of BFS
sampling on other topological properties is all but absent from the
literature. To systematically study the effects of sampling biases on directed
networks, we compare BFS sampling to random sampling on complete large-scale
directed networks. We present new results and a thorough analysis of the
topological properties of seven different complete directed networks (prior to
sampling), including three versions of Wikipedia, three different sources of
sampled World Wide Web data, and an Internet-based social network. We detail
the differences that sampling method and coverage can make to the structural
properties of sampled versions of these seven networks. Most notably, we find
that sampling method and coverage affect both the bow-tie structure, as well as
the number and structure of strongly connected components in sampled networks.
In addition, at low sampling coverage (i.e. less than 40%), the values of
average degree, variance of out-degree, degree auto-correlation, and link
reciprocity are overestimated by 30% or more in BFS-sampled networks, and only
attain values within 10% of the corresponding values in the complete networks
when sampling coverage is in excess of 65%. These results may cause us to
rethink what we know about the structure, function, and evolution of real-world
directed networks.
|
1201.1512
|
Community detection and tracking on networks from a data fusion
perspective
|
cs.SI math.PR physics.soc-ph
|
Community structure in networks has been investigated from many viewpoints,
usually with the same end result: a community detection algorithm of some kind.
Recent research offers methods for combining the results of such algorithms
into timelines of community evolution. This paper investigates community
detection and tracking from the data fusion perspective. We avoid the kind of
hard calls made by traditional community detection algorithms in favor of
retaining as much uncertainty information as possible. This results in a method
for directly estimating the probabilities that pairs of nodes are in the same
community. We demonstrate that this method is accurate using the LFR testbed,
that it is fast on a number of standard network datasets, and that it has a
variety of uses that complement those of standard, hard-call methods. Retaining
uncertainty information allows us to develop a Bayesian filter for tracking
communities. We derive equations for the full filter, and marginalize it to
produce a potentially practical version. Finally, we discuss closures for the
marginalized filter and the work that remains to develop this into a
principled, efficient method for tracking time-evolving communities on
time-evolving networks.
|
1201.1547
|
Information Society: Modeling A Complex System With Scarce Data
|
physics.soc-ph cs.IT cs.SI math.IT nlin.AO
|
Considering electronic implications in the Information Society (IS) as a
complex system, complexity science tools are used to describe the processes
that are seen to be taking place. The sometimes troublesome relationship
between the new information and communication technologies and e-society gives
rise to different problems, some of them being unexpected. Probably, the
Digital Divide (DD) and the Internet Governance (IG) are among the most
conflictive ones of internationally based e-Affairs. Admitting that solutions
should be found for these problems, certain international policies are
required. In this context, data gathering and subsequent analysis, as well as
the construction of adequate physical models are extremely important in order
to imagine different future scenarios and suggest some subsequent control. In
the main text, mathematical modelization helps visualize how policies
could, e.g., influence the individual and collective behavior in an empirical
social agent system. In order to show how this purpose could be achieved, two
approaches, (i) the Ising model and (ii) a generalized Lotka-Volterra model are
used for DD and IG considerations respectively. It can be concluded that the
social modelization of the e-Information Society as a complex system provides
insights about how DD can be reduced and how a large number of weak members
of the IS could influence the outcomes of the IG.
|
1201.1571
|
A Unified Image Force for Deformable Models and Directly Transforming
Geometric Active Contours to Snakes by Level Sets
|
cs.CV
|
A uniform distribution of the image force field around the object speeds up
the convergence of the segmentation process. However, achieving this aim
renders the force constructed from the heat diffusion model unable to indicate
the object boundaries accurately. The image force based on the electrostatic
field model, in contrast, can perform an exact shape recovery. First, this
study introduces a fusion scheme for these two image forces, which is capable
of extracting the object boundary with high precision and at fast speed. Until
now, there has been no satisfactory analysis of the relationship between
Snakes and Geometric Active Contours (GAC). The second contribution of this
study is to show that the GAC model can be deduced directly from the Snakes
model. It proves that each term in GAC corresponds to a term in Snakes with a
similar function, although the two models are expressed using different
mathematics. Further, since level sets lose the ability to rotate the contour,
their adoption can limit the use of GAC in some circumstances.
|
1201.1572
|
A dynamical model for competing opinions
|
physics.soc-ph cs.SI
|
We propose an opinion model based on agents located at the vertices of a
regular lattice. Each agent has an independent opinion (among an arbitrary, but
fixed, number of choices) and its own degree of conviction. The latter changes
every time it interacts with another agent who has a different opinion. The
dynamics leads to size distributions of clusters (made up of agents which have
the same opinion and are located at contiguous spatial positions) which follow
a power law, as long as the range of the interaction between the agents is not
too short, i.e. the system self-organizes into a critical state. Short range
interactions lead to an exponential cut off in the size distribution and to
spatial correlations which cause agents which have the same opinion to be
closely grouped. When the diversity of opinions is restricted to two,
non-consensus dynamics are observed, with unequal population fractions, whereas
consensus is reached if the agents are also allowed to interact with those
which are located far from them.
|
1201.1587
|
Feature Selection via Regularized Trees
|
cs.LG stat.ME stat.ML
|
We propose a tree regularization framework, which enables many tree models to
perform feature selection efficiently. The key idea of the regularization
framework is to penalize selecting a new feature for splitting when its gain
(e.g. information gain) is similar to the features used in previous splits. The
regularization framework is applied to random forests and boosted trees here,
and can be easily applied to other tree models. Experimental studies show that
the regularized trees can select high-quality feature subsets with regard to
both strong and weak classifiers. Because tree models can naturally deal with
categorical and numerical variables, missing values, different scales between
variables, interactions and nonlinearities etc., the tree regularization
framework provides an effective and efficient feature selection solution for
many practical problems.
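The key idea of the penalized gain can be sketched in a few lines; this is our own notation, with `penalty` playing the role of the regularization coefficient, not the authors' implementation:

```python
def regularized_gain(gain, feature, used_features, penalty=0.5):
    """Penalize splitting on a feature not used in previous splits.

    gain: the raw split gain (e.g. information gain) for `feature`.
    used_features: set of features already selected in earlier splits.
    penalty: coefficient in (0, 1]; smaller values penalize new features more.
    """
    if feature in used_features:
        return gain  # reusing an already-selected feature costs nothing
    return penalty * gain  # a new feature must clearly beat the old ones
```

A split-point search that ranks candidates by `regularized_gain` rather than raw gain then naturally reuses features, so the set of features appearing anywhere in the ensemble stays small; that set is the selected feature subset.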
|
1201.1588
|
Upper Bound on the Capacity of Gaussian Channels with Noisy Feedback
|
cs.IT math.IT
|
We consider an additive Gaussian channel with additive Gaussian noise
feedback. We derive an upper bound on the n-block capacity (defined by Cover
[1]). It is shown that this upper bound can be obtained by solving a convex
optimization problem. With stationarity assumptions on Gaussian noise
processes, we characterize the limit of the n-block upper bound and prove that
this limit is an upper bound on the noisy feedback (Shannon) capacity.
|
1201.1589
|
The Weakness of Weak Ties in the Classroom
|
cs.SI physics.soc-ph
|
Granovetter's "strength of weak ties" hypothesizes that isolated social ties
offer limited access to external prospects, while heterogeneous social ties
diversify one's opportunities. We analyze the most complete record of college
student interactions to date (approximately 80,000 interactions by 290 students
-- 16 times more interactions with almost 3 times more students than previous
studies on educational networks) and compare the social interaction data with
the academic scores of the students. Our first finding is that social diversity
is negatively correlated with performance. This is explained by our second
finding: highly performing students interact in groups of similarly performing
peers. This effect is stronger the higher the student performance is. Indeed,
low performance students tend to initiate many transient interactions
independently of the performance of their target. In other words, low
performing students act disassortatively with respect to their social network,
whereas high scoring students act assortatively. Our data also reveals that
highly performing students establish persistent interactions before mid and low
performing ones and that they use more structured and longer cascades of
information from which low performing students are excluded.
|
1201.1603
|
Committee Algorithm: An Easy Way to Construct Wavelet Filter Banks
|
math.NA cs.IT math.IT
|
Given a lowpass filter, finding a dual lowpass filter is an essential step in
constructing non-redundant wavelet filter banks. Obtaining dual lowpass filters
is not an easy task. In this paper, we introduce a new method called committee
algorithm that builds a dual filter straightforwardly from two
easily-constructible lowpass filters. It allows the design of a wide range of new
wavelet filter banks. An example based on the family of Burt-Adelson's 1-D
Laplacian filters is given.
|
1201.1613
|
Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict
global radiation
|
cs.NE physics.data-an
|
We propose in this paper an original technique to predict global radiation
using a hybrid ARMA/ANN model and data from a numerical weather prediction
model (ALADIN). We focus in particular on the Multi-Layer Perceptron.
After optimizing our architecture with ALADIN and endogenous data previously
made stationary, and using an innovative pre-input layer selection method, we
combined it with an ARMA model via a rule based on the analysis of hourly data
series. This model has been used to forecast the hourly global radiation for
five places in the Mediterranean area. Our technique outperforms classical models
for all the places. The nRMSE for our hybrid model ANN/ARMA is 14.9% compared
to 26.2% for the na\"ive persistence predictor. Note that in the stand alone
ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of
the forecaster outputs, a complementary study concerning the confidence
interval of each prediction is proposed.
|
1201.1623
|
MultiDendrograms: Variable-Group Agglomerative Hierarchical Clusterings
|
cs.IR math.ST physics.comp-ph physics.data-an q-fin.CP stat.CO stat.TH
|
MultiDendrograms is a Java-written application that computes agglomerative
hierarchical clusterings of data. Starting from a distance (or weight) matrix,
MultiDendrograms is able to calculate the corresponding dendrograms using the most
common agglomerative hierarchical clustering methods. The application
implements a variable-group algorithm that solves the non-uniqueness problem
found in the standard pair-group algorithm. This problem arises when two or
more minimum distances between different clusters are equal during the
agglomerative process, because then different output clusterings are possible
depending on the criterion used to break ties between distances.
MultiDendrograms solves this problem implementing a variable-group algorithm
that groups more than two clusters at the same time when ties occur.
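The tie-handling idea can be illustrated with a small sketch (our own simplification, not MultiDendrograms' Java code): when several inter-cluster distances tie at the minimum, a variable-group step merges every cluster connected through those ties at once, instead of arbitrarily picking one pair.

```python
def tied_minimum_groups(dist):
    """Group clusters connected by ties at the current minimum distance.

    dist: dict mapping frozenset({i, j}) -> distance between clusters i, j.
    Returns the connected components of the graph whose edges are the pairs
    attaining the minimum distance; a variable-group step merges each
    component in one shot.
    """
    d_min = min(dist.values())
    # adjacency over the pairs that attain the minimum
    adj = {}
    for pair, d in dist.items():
        if d == d_min:
            i, j = tuple(pair)
            adj.setdefault(i, set()).add(j)
            adj.setdefault(j, set()).add(i)
    # depth-first search for connected components
    groups, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        groups.append(comp)
    return groups
```

Because the merge no longer depends on a tie-breaking criterion, the resulting dendrogram is unique for a given distance matrix, which is exactly the non-uniqueness problem the pair-group algorithm suffers from.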
|
1201.1633
|
On the minimality of Hamming compatible metrics
|
cs.IT math.IT
|
A Hamming compatible metric is an integer-valued metric on the words of a
finite alphabet which agrees with the usual Hamming distance for words of equal
length. We define a new Hamming compatible metric, compute the cardinality of a
sphere with respect to this metric, and show this metric is minimal in the
class of all "well-behaved" Hamming compatible metrics.
|
1201.1634
|
Per-antenna Constant Envelope Precoding for Large Multi-User MIMO
Systems
|
cs.IT math.IT
|
We consider the multi-user MIMO broadcast channel with $M$ single-antenna
users and $N$ transmit antennas under the constraint that each antenna emits
signals having constant envelope (CE). The motivation for this is that CE
signals facilitate the use of power-efficient RF power amplifiers. Analytical
and numerical results show that, under certain mild conditions on the channel
gains, for a fixed $M$, array gain is achievable even under the stringent
per-antenna CE constraint (essentially, for a fixed $M$, at sufficiently large
$N$ the total transmitted power can be reduced with increasing $N$ while
maintaining a fixed information rate to each user). Simulations for the i.i.d.
Rayleigh fading channel show that the total transmit power can be reduced
linearly with increasing $N$ (i.e., an O(N) array gain). We also propose a
precoding scheme which finds near-optimal CE signals to be transmitted, and has
O(MN) complexity. Also, in terms of the total transmit power required to
achieve a fixed desired information sum-rate, despite the stringent per-antenna
CE constraint, the proposed CE precoding scheme performs close to the
sum-capacity achieving scheme for an average-only total transmit power
constrained channel.
|
1201.1652
|
Toward a Motor Theory of Sign Language Perception
|
cs.CL cs.HC
|
Research on signed languages still strongly dissociates linguistic issues,
related to phonological and phonetic aspects, from gesture studies for
recognition and synthesis purposes. This paper focuses on the imbrication of
motion and meaning for the analysis, synthesis and evaluation of sign language
gestures. We discuss the relevance and interest of a motor theory of perception
in sign language communication. According to this theory, we consider that
linguistic knowledge is mapped on sensory-motor processes, and propose a
methodology based on the principle of a synthesis-by-analysis approach, guided
by an evaluation process that aims to validate some hypotheses and concepts of
this theory. Examples from existing studies illustrate the different concepts
and provide avenues for future work.
|
1201.1656
|
A MacWilliams type identity for m-spotty generalized Lee weight
enumerators over $\mathbb{Z}_q$
|
cs.IT math.IT
|
Burst errors are very common in practice. There have been many designs in
order to control and correct such errors. Recently, a new class of byte error
control codes called spotty byte error control codes has been specifically
designed to fit the large capacity memory systems that use high-density random
access memory (RAM) chips with input/output data of 8, 16, and 32 bits. The
MacWilliams identity describes how the weight enumerator of a linear code and
the weight enumerator of its dual code are related. The Lee metric, meanwhile,
has attracted many researchers due to its applications. In this paper, we
combine these two interesting topics and introduce the m-spotty generalized
Lee weights and the m-spotty generalized Lee weight enumerators of a code over
$\mathbb{Z}_q$ and
prove a MacWilliams type identity. This generalization includes both the case
of the identity given in the paper [I. Siap, MacWilliams identity for m-spotty
Lee weight enumerators, Appl. Math. Lett. 23 (1) (2010) 13-16] and the identity
given in the paper [M. \"Ozen, V. \c{S}iap, The MacWilliams identity for
m-spotty weight enumerators of linear codes over finite fields, Comput. Math.
Appl. 61 (4) (2011) 1000-1004] over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ as special cases.
|
1201.1657
|
A Split-Merge MCMC Algorithm for the Hierarchical Dirichlet Process
|
stat.ML cs.AI
|
The hierarchical Dirichlet process (HDP) has become an important Bayesian
nonparametric model for grouped data, such as document collections. The HDP is
used to construct a flexible mixed-membership model where the number of
components is determined by the data. As for most Bayesian nonparametric
models, exact posterior inference is intractable---practitioners use Markov
chain Monte Carlo (MCMC) or variational inference. Inspired by the split-merge
MCMC algorithm for the Dirichlet process (DP) mixture model, we describe a
novel split-merge MCMC sampling algorithm for posterior inference in the HDP.
We study its properties on both synthetic data and text corpora. We find that
split-merge MCMC for the HDP can provide significant improvements over
traditional Gibbs sampling, and we give some understanding of the data
properties that give rise to larger improvements.
|
1201.1662
|
Quickest Search over Brownian Channels
|
math.PR cs.IT math.IT math.OC
|
In this paper we resolve an open problem proposed by Lai, Poor, Xin, and
Georgiadis (2011, IEEE Transactions on Information Theory). Consider a sequence
of Brownian Motions with unknown drift equal to one or zero, which may be
observed one at a time. We give a procedure for finding, as quickly as
possible, a process which is a Brownian Motion with nonzero drift. This
original quickest search problem, in which the filtration itself is dependent
on the observation strategy, is reduced to a single filtration impulse control
and optimal stopping problem, which is in turn reduced to an optimal stopping
problem for a reflected diffusion, which can be explicitly solved.
|
1201.1670
|
Customers Behavior Modeling by Semi-Supervised Learning in Customer
Relationship Management
|
cs.LG
|
Leveraging the power of increasing amounts of data to analyze customer base
for attracting and retaining the most valuable customers is a major problem
facing companies in this information age. Data mining technologies extract
hidden information and knowledge from large data stored in databases or data
warehouses, thereby supporting the corporate decision making process. CRM uses
data mining (one of the elements of CRM) techniques to interact with customers.
This study investigates the use of a technique, semi-supervised learning, for
the management and analysis of customer-related data warehouse and information.
The idea of semi-supervised learning is to learn not only from the labeled
training data, but also to exploit the structural information in additionally
available unlabeled data. The proposed semi-supervised method models customer
behavior by means of a feed-forward neural network (multi-layer perceptron)
trained with a back-propagation algorithm in order to predict the category of
unknown (potential) customers. In addition, this technique can be used with
Rapid Miner tools for both labeled and unlabeled data.
|
1201.1671
|
Error-Correcting Codes for Reliable Communications in Microgravity
Platforms
|
cs.IT cs.SY math.IT
|
The PAANDA experiment was conceived to characterize the acceleration
environment of a rocket-launched microgravity platform, especially during the
microgravity phase. The recorded data were transmitted to ground stations, but
the telemetry information sent during the reentry period was lost.
Traditionally, an
error-correcting code for this channel consists of a block code with very large
block size to protect against long periods of data loss. Instead, we propose
the use of digital fountain codes along with conventional Reed-Solomon block
codes to protect against long and short burst error periods, respectively.
Aiming to use this approach for a second version of PAANDA to prevent data
corruption, we propose a model for the communication channel based on
information extracted from Cum\~a II's telemetry data, and simulate the
performance of our proposed error-correcting code under this channel model.
Simulation results show that nearly all telemetry data can be recovered,
including data from the reentry period.
|
1201.1676
|
Sufficient Conditions for Formation of a Network Topology by
Self-interested Agents
|
cs.GT cs.SI physics.soc-ph
|
Networks such as organizational network of a global company play an important
role in a variety of knowledge management and information diffusion tasks. The
nodes in these networks correspond to individuals who are self-interested. The
topology of these networks often plays a crucial role in deciding the ease and
speed with which certain tasks can be accomplished using these networks.
Consequently, growing a stable network having a certain topology is of
interest. Motivated by this, we study the following important problem: given a
certain desired network topology, under what conditions would best response
(link addition/deletion) strategies played by self-interested agents lead to
formation of a pairwise stable network with only that topology. We study this
interesting reverse engineering problem by proposing a natural model of
recursive network formation. In this model, nodes enter the network
sequentially and the utility of a node captures principal determinants of
network formation, namely (1) benefits from immediate neighbors, (2) costs of
maintaining links with immediate neighbors, (3) benefits from indirect
neighbors, (4) bridging benefits, and (5) network entry fee. Based on this
model, we analyze relevant network topologies such as star graph, complete
graph, bipartite Turan graph, and multiple stars with interconnected centers,
and derive a set of sufficient conditions under which these topologies emerge
as pairwise stable networks. We also study the social welfare properties of the
above topologies.
|
1201.1684
|
The Three-User Finite-Field Multi-Way Relay Channel with Correlated
Sources
|
cs.IT math.IT
|
This paper studies the three-user finite-field multi-way relay channel, where
the users exchange messages via a relay. The messages are arbitrarily
correlated, and the finite-field channel is linear and is subject to additive
noise of arbitrary distribution. The problem is to determine the minimum
achievable source-channel rate, defined as channel uses per source symbol
needed for reliable communication. We combine Slepian-Wolf source coding and
functional-decode-forward channel coding to obtain the solution for two classes
of source and channel combinations. Furthermore, for correlated sources whose
common information equals their mutual information, we propose a new coding
scheme to achieve the minimum source-channel rate.
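The functional-decode-forward idea can be illustrated in miniature (a toy sketch over GF(p) only, not the paper's coding scheme; `relay_round` and its parameters are hypothetical names):

```python
def relay_round(x1, x2, x3, p=5):
    """Toy functional-decode-forward over GF(p) for three users.

    The relay broadcasts two linear combinations of the users' messages;
    a user who knows its own message can solve for the other two.
    Returns the pair (x2, x3) as recovered by user 1.
    """
    r1 = (x1 + x2) % p          # relay broadcast 1: a sum of messages
    r2 = (x2 + x3) % p          # relay broadcast 2
    got_x2 = (r1 - x1) % p      # user 1 cancels its own message
    got_x3 = (r2 - got_x2) % p  # then peels x2 off the second sum
    return got_x2, got_x3
```

The point of the sketch is that the relay never needs the individual messages, only a decodable function of them, which is what makes the linearity of the finite-field channel useful.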
|
1201.1717
|
On the Hyperbolicity of Small-World and Tree-Like Random Graphs
|
cs.SI cs.DM physics.soc-ph
|
Hyperbolicity is a property of a graph that may be viewed as being a "soft"
version of a tree, and recent empirical and theoretical work has suggested that
many graphs arising in Internet and related data applications have hyperbolic
properties. We consider Gromov's notion of \delta-hyperbolicity, and establish
several results for small-world and tree-like random graph models. First, we
study the hyperbolicity of Kleinberg small-world random graphs and show that
the hyperbolicity of these random graphs is not significantly improved
compared to the graph diameter, even when the construction greatly improves
decentralized navigation. Next we study a class of tree-like graphs called
ringed trees that
have constant hyperbolicity. We show that adding random links among the leaves
similar to the small-world graph constructions may easily destroy the
hyperbolicity of the graphs, except for a class of random edges added using an
exponentially decaying probability function based on the ring distance among
the leaves.
Our study provides one of the first significant analytical results on the
hyperbolicity of a rich class of random graphs, shedding light on the
relationship between hyperbolicity and navigability of random graphs, as well
as on the sensitivity of the hyperbolicity \delta to noise in random graphs.
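For small graphs, Gromov's \delta can be computed exhaustively from the four-point condition; the sketch below (an illustrative aid with hypothetical names, not taken from the paper) verifies that a tree is 0-hyperbolic while a 6-cycle is not:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src by BFS."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def four_point_delta(adj):
    """Gromov delta via the four-point condition: over all quadruples,
    half the gap between the two largest of the three pairwise-sum
    matchings. Tree metrics give delta = 0."""
    nodes = list(adj)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    delta = 0.0
    for x, y, z, w in combinations(nodes, 4):
        s = sorted([dist[x][y] + dist[z][w],
                    dist[x][z] + dist[y][w],
                    dist[x][w] + dist[y][z]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta
```

The quartic enumeration is only feasible on toy graphs; the paper's interest is precisely the asymptotic behavior that such brute force cannot reach.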
|
1201.1728
|
Worst-case efficient dominating sets in digraphs
|
math.CO cs.IT math.IT
|
Let $1\le n\in\mathbb{Z}$. {\it Worst-case efficient dominating sets in digraphs} are
conceived so that their presence in certain strong digraphs $\vec{ST}_n$
corresponds to that of efficient dominating sets in star graphs $ST_n$: The
fact that the star graphs $ST_n$ form a so-called dense segmental neighborly
E-chain is reflected in a corresponding fact for the digraphs $\vec{ST}_n$.
Related chains of graphs and open problems are presented as well.
|
1201.1733
|
On Conditional Decomposability
|
cs.SY cs.FL
|
The requirement of a language to be conditionally decomposable is imposed on
a specification language in the coordination supervisory control framework of
discrete-event systems. In this paper, we present a polynomial-time algorithm
for verifying whether a language is conditionally decomposable with
respect to given alphabets. Moreover, we also present a polynomial-time
algorithm to extend the common alphabet so that the language becomes
conditionally decomposable. A relationship of conditional decomposability to
nonblockingness of modular discrete-event systems is also discussed in this
paper in the general settings. It is shown that conditional decomposability is
a weaker condition than nonblockingness.
|
1201.1754
|
A Note on Undecidability of Observation Consistency for Non-Regular
Languages
|
cs.SY cs.FL
|
One of the most interesting questions concerning hierarchical control of
discrete-event systems with partial observations is a condition under which the
language observability is preserved between the original and the abstracted
plant. Recently, we have characterized two such sufficient
conditions---observation consistency and local observation consistency. In this
paper, we prove that the condition of observation consistency is undecidable
for non-regular (linear, deterministic context-free) languages. The question
whether the condition is decidable for regular languages is open.
|
1201.1755
|
Unraveling Spurious Properties of Interaction Networks with Tailored
Random Networks
|
physics.data-an cs.SI physics.comp-ph physics.soc-ph
|
We investigate interaction networks that we derive from multivariate time
series with methods frequently employed in diverse scientific fields such as
biology, quantitative finance, physics, earth and climate sciences, and the
neurosciences. Mimicking experimental situations, we generate time series with
finite length and varying frequency content but from independent stochastic
processes. Using the correlation coefficient and the maximum cross-correlation,
we estimate interdependencies between these time series. Using the clustering
coefficient and average shortest path length, we observe that unweighted
interaction networks, derived by thresholding the interdependence values,
possess non-trivial topologies compared to Erd\H{o}s-R\'{e}nyi networks, which
would indicate small-world characteristics. These topologies reflect the mostly
unavoidable finiteness of the data, which limits the reliability of typically
used estimators of signal interdependence. We propose random networks that are
tailored to the way interaction networks are derived from empirical data.
Through an exemplary investigation of multichannel electroencephalographic
recordings of epileptic seizures - known for their complex spatial and temporal
dynamics - we show that such random networks help to distinguish network
properties of interdependence structures related to seizure dynamics from those
spuriously induced by the applied methods of analysis.
|
1201.1798
|
Tight p-fusion frames
|
math.NA cs.IT math.IT
|
Fusion frames enable signal decompositions into weighted linear subspace
components. For positive integers p, we introduce p-fusion frames, a sharpening
of the notion of fusion frames. Tight p-fusion frames are closely related to
the classical notions of designs and cubature formulas in Grassmann spaces and
are analyzed with methods from harmonic analysis in the Grassmannians. We
define the p-fusion frame potential, derive bounds for its value, and discuss
the connections to tight p-fusion frames.
|
1201.1812
|
On Polynomial Remainder Codes
|
cs.IT math.IT math.RA
|
Polynomial remainder codes are a large class of codes derived from the
Chinese remainder theorem that includes Reed-Solomon codes as a special case.
In this paper, we revisit these codes and study them more carefully than in
previous work. We explicitly allow the code symbols to be polynomials of
different degrees, which leads to two different notions of weight and distance.
Algebraic decoding is studied in detail. If the moduli are not irreducible,
the notion of an error locator polynomial is replaced by an error factor
polynomial. We then obtain a collection of gcd-based decoding algorithms, some
of which are not quite standard even when specialized to Reed-Solomon codes.
|
1201.1829
|
FITS Foreign File Encapsulation Convention
|
astro-ph.IM cs.DB
|
This document describes a FITS convention developed by the IRAF Group (D.
Tody, R. Seaman, and N. Zarate) at the National Optical Astronomical
Observatory (NOAO). This convention is implemented by the fgread/fgwrite tasks
in the IRAF fitsutil package. It was first used in May 1999 to encapsulate
preview PNG-format graphics files into FITS files in the NOAO High Performance
Pipeline System. A FITS extension of type 'FOREIGN' provides a mechanism for
storing an arbitrary file or tree of files in FITS, allowing it to be restored
to disk at a later time.
|
1201.1835
|
Graph-Based Random Access for the Collision Channel without Feedback:
Capacity Bound
|
cs.IT math.IT
|
A random access scheme for the collision channel without feedback is
proposed. The scheme is based on erasure correcting codes for the recovery of
packet segments that are lost in collisions, and on successive interference
cancellation for resolving collisions. The proposed protocol achieves reliable
communication in the asymptotic setting and attains capacities close to 1
[packets/slot]. A capacity bound as a function of the overall rate of the
scheme is derived, and code distributions tightly approaching the bound are
developed.
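The replica-plus-cancellation mechanism can be sketched as a minimal simulation (hypothetical names, a fixed number of replicas per user rather than the paper's optimized code distributions):

```python
import random

def sic_decode(num_users, num_slots, replicas=2, rng=None):
    """Slotted random access with successive interference cancellation.

    Each user places `replicas` copies of its packet in distinct random
    slots. Any slot holding exactly one not-yet-decoded copy is decoded,
    and the other copies of that packet are cancelled, possibly turning
    further collision slots into singletons. Returns the decoded users.
    """
    rng = rng or random.Random(0)
    slots = [set() for _ in range(num_slots)]
    for u in range(num_users):
        for s in rng.sample(range(num_slots), replicas):
            slots[s].add(u)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            live = s - decoded
            if len(live) == 1:      # singleton slot: decode it
                decoded |= live
                progress = True     # cancellation may unlock more slots
    return decoded
```

The iterative loop mirrors the peeling decoder of erasure codes, which is why the achievable throughput can be analyzed with coding-theoretic tools.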
|
1201.1861
|
Spectrum Sensing in Cognitive Radio Networks: Performance Evaluation and
Optimization
|
cs.IT math.IT math.PR
|
This paper studies cooperative spectrum sensing in cognitive radio networks
where secondary users collect local energy statistics and report their findings
to a secondary base station, i.e., a fusion center. First, the average error
probability is quantitatively analyzed to capture the dynamic nature of both
observation and fusion channels, assuming fixed amplifier gains for relaying
local statistics to the fusion center. Second, the system level overhead of
cooperative spectrum sensing is addressed by considering both the local
processing cost and the transmission cost. Local processing cost incorporates
the overhead of sample collection and energy calculation that must be conducted
by each secondary user; the transmission cost accounts for the overhead of
forwarding the energy statistic computed at each secondary user to the fusion
center. Results show that when jointly designing the number of collected energy
samples and transmission amplifier gains, only one secondary user needs to be
actively engaged in spectrum sensing. Furthermore, when the number of energy
samples or the amplifier gains are fixed, closed-form expressions for the
optimal solutions are derived and a generalized water-filling algorithm is
provided.
|
1201.1900
|
Evolution of public cooperation on interdependent networks: The impact
of biased utility functions
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
We study the evolution of public cooperation on two interdependent networks
that are connected by means of a utility function, which determines to what
extent payoffs in one network influence the success of players in the other
network. We find that the stronger the bias in the utility function, the higher
the level of public cooperation. Yet the benefits of enhanced public
cooperation on the two networks are just as biased as the utility functions
themselves. While cooperation may thrive on one network, the other may still be
plagued by defectors. Nevertheless, the aggregate level of cooperation on both
networks is higher than the one attainable on an isolated network. This
positive effect of biased utility functions is due to the suppressed feedback
of individual success, which leads to a spontaneous separation of
characteristic time scales of the evolutionary process on the two
interdependent networks. As a result, cooperation is promoted because the
aggressive invasion of defectors is more sensitive to the slowing down than the
build-up of collective efforts in sizable groups.
|
1201.1935
|
Secure Symmetrical Multilevel Diversity Coding
|
cs.IT math.IT
|
Symmetrical Multilevel Diversity Coding (SMDC) is a network compression
problem introduced by Roche (1992) and Yeung (1995). In this setting, a simple
separate coding strategy known as superposition coding was shown to be optimal
in terms of achieving the minimum sum rate (Roche, Yeung, and Hau, 1997) and
the entire admissible rate region (Yeung and Zhang, 1999) of the problem. This
paper considers a natural generalization of SMDC to the secure communication
setting with an additional eavesdropper. It is required that all sources be
kept perfectly secret from the eavesdropper as long as the number of
encoder outputs available at the eavesdropper is no more than a given
threshold. First, the problem of encoding individual sources is studied. A
precise characterization of the entire admissible rate region is established
via a connection to the problem of secure coding over a three-layer wiretap
network and utilizing some basic polyhedral structure of the admissible rate
region. Building on this result, it is then shown that superposition coding
remains optimal in terms of achieving the minimum sum rate for the general
secure SMDC problem.
|
1201.1941
|
Relaying for Multiuser Networks in the Absence of Codebook Information
|
cs.IT math.IT
|
This work considers relay assisted transmission for multiuser networks when
the relay has no access to the codebooks used by the transmitters. The relay is
called oblivious for this reason. Of particular interest is the generalized
compress-and-forward (GCF) strategy, in which the destinations jointly decode
the compression indices and the transmitted messages, and its optimality in
this setting. The relay-to-destination links are assumed to be out-of-band with
finite capacity. Two models are investigated: the multiple access relay channel
(MARC) and the interference relay channel (IFRC). For the MARC with an
oblivious relay, a new outer bound is derived and shown to be tight via the
achievability of the capacity region using the GCF scheme. For the IFRC
with an oblivious relay, a new strong interference condition is established,
under which the capacity region is found by deriving a new outer bound and
showing that it is achievable using the GCF scheme. The result is further extended
to establish the capacity region of M-user MARC with an oblivious relay, and
multicast networks containing M sources and K destinations with an oblivious
relay.
|
1201.1990
|
Criteria of stabilizability for switching-control systems with solvable
linear approximations
|
cs.SY math.CA math.OC
|
We study the stability and stabilizability of a continuous-time switched
control system that consists of the time-invariant $n$-dimensional subsystems
$\dot{x}=A_ix+B_i(x)u$ ($x\in\mathbb{R}^n$, $t\in\mathbb{R}_+$ and
$u\in\mathbb{R}^{m_i}$), where $i\in\{1,\dots,N\}$, and a switching
signal $\sigma(\cdot)\colon\mathbb{R}_+\rightarrow\{1,\dots,N\}$ which
orchestrates switching between these subsystems, where
$A_i\in\mathbb{R}^{n\times n}$, $n\ge1$, $N\ge2$, $m_i\ge1$, and where
$B_i(\cdot)\colon\mathbb{R}^n\rightarrow\mathbb{R}^{n\times m_i}$ satisfies
the condition $\|B_i(x)\|\le\beta\|x\|\;\forall x\in\mathbb{R}^n$. We show
that, if $\{A_1,\dots,A_N\}$ generates a solvable Lie algebra over the field
$\mathbb{C}$ of complex numbers and there exists an element $\bar{A}$ in the
convex hull $\mathrm{co}\{A_1,\dots,A_N\}$ in $\mathbb{R}^{n\times n}$ such
that the affine system $\dot{x}=\bar{A}x$ is exponentially stable, then there
is a constant $\delta>0$ for which one can design "sufficiently many"
piecewise-constant switching signals $\sigma(t)$ so that the switching-control
systems $\dot{x}(t)=A_{\sigma(t)}x(t)+B_{\sigma(t)}(x(t))u(t)$, with
$x(0)\in\mathbb{R}^n$ and $t\in\mathbb{R}_+$, are globally exponentially
stable, for any measurable external inputs $u(t)\in\mathbb{R}^{m_{\sigma(t)}}$
with $|u(t)|\le\delta$.
|
1201.1997
|
An Enhanced DMT-optimality Criterion for STBC-schemes for Asymmetric
MIMO Systems
|
cs.IT math.IT
|
For any $n_t$ transmit, $n_r$ receive antenna ($n_t\times n_r$) MIMO system
in a quasi-static Rayleigh fading environment, it was shown by Elia et al. that
linear space-time block code-schemes (LSTBC-schemes) which have the
non-vanishing determinant (NVD) property are diversity-multiplexing gain
tradeoff (DMT)-optimal for arbitrary values of $n_r$ if they have a code-rate
of $n_t$ complex dimensions per channel use. However, for asymmetric MIMO
systems (where $n_r < n_t$), with the exception of a few LSTBC-schemes, it is
unknown whether general LSTBC-schemes with NVD and a code-rate of $n_r$ complex
dimensions per channel use are DMT-optimal. In this paper, an enhanced
sufficient criterion for any STBC-scheme to be DMT-optimal is obtained, and
using this criterion, it is established that any LSTBC-scheme with NVD and a
code-rate of $\min\{n_t,n_r\}$ complex dimensions per channel use is
DMT-optimal. This result settles the DMT-optimality of several well-known,
low-ML-decoding-complexity LSTBC-schemes for certain asymmetric MIMO systems.
|
1201.2004
|
Optimal Fuzzy Model Construction with Statistical Information using
Genetic Algorithm
|
cs.AI
|
Fuzzy rule based models have a capability to approximate any continuous
function to any degree of accuracy on a compact domain. The majority of the
fuzzy logic controller (FLC) design process relies on the heuristic knowledge
of experienced operators. To make the design process automatic, we present a
genetic approach to learn
fuzzy rules as well as membership function parameters. Moreover, several
statistical information criteria such as the Akaike information criterion
(AIC), the Bhansali-Downham information criterion (BDIC), and the
Schwarz-Rissanen information criterion (SRIC) are used to construct optimal
fuzzy models by reducing fuzzy rules. A genetic scheme is used to design a
Takagi-Sugeno-Kang (TSK) model, identifying both the antecedent rule
parameters and the consequent parameters. Computer
simulations are presented confirming the performance of the constructed fuzzy
logic controller.
|
1201.2010
|
Recognizing Bangla Grammar using Predictive Parser
|
cs.CL
|
We describe a Context Free Grammar (CFG) for the Bangla language and propose
a Bangla parser based on the grammar. Our approach is general enough to apply
to Bangla sentences, and the method is well established for parsing a
language from its grammar. The proposed parser is a predictive parser, and we
construct the parse table for recognizing Bangla grammar. Using the parse
table, we recognize syntactical mistakes in Bangla sentences when there is no
entry for a terminal in the parse table. If a natural language can be
successfully parsed, then grammar checking for this language becomes possible.
The proposed scheme is based on the top-down parsing method, and we have
avoided left recursion in the CFG using the idea of left factoring.
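A table-driven predictive parser of this kind can be sketched for a toy grammar (the grammar and names below are hypothetical illustrations, not the paper's Bangla CFG); an empty parse-table cell is exactly what signals a syntactic mistake:

```python
# Toy LL(1) grammar:  S -> N V N  (noun verb noun).
PARSE_TABLE = {
    ("S", "noun"): ["N", "V", "N"],
    ("N", "noun"): ["noun"],
    ("V", "verb"): ["verb"],
}
TERMINALS = {"noun", "verb"}

def parse(tokens):
    """Table-driven predictive (top-down) parse; True iff accepted."""
    stack = ["$", "S"]          # end marker under the start symbol
    tokens = tokens + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top == "$" or top in TERMINALS:
            if top != look:
                return False    # terminal mismatch: syntax error
            pos += 1
        else:
            rule = PARSE_TABLE.get((top, look))
            if rule is None:
                return False    # no table entry: syntax error
            stack.extend(reversed(rule))
    return pos == len(tokens)
```

Because the table is consulted with one lookahead token, the grammar must be left-factored and free of left recursion, which is why those transformations matter in the abstract above.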
|
1201.2035
|
On the Characterization of the Duhem Hysteresis Operator with Clockwise
Input-Output Dynamics
|
math.OC cs.SY
|
In this paper we investigate the dissipativity property of a certain class of
Duhem hysteresis operator, which has clockwise (CW) input-output (I/O)
behavior. In particular, we provide sufficient conditions on the Duhem operator
such that it is CW and propose an explicit construction of the corresponding
storage function satisfying the dissipation inequality of CW systems. The result is
used to analyze the stability of a second order system with hysteretic friction
which is described by a Dahl model.
|
1201.2036
|
Hierarchical multiresolution method to overcome the resolution limit in
complex networks
|
physics.data-an cs.SI physics.comp-ph physics.soc-ph
|
The analysis of the modular structure of networks is a major challenge in
complex networks theory. The validity of the modular structure obtained is
essential to confront the problem of the topology-functionality relationship.
Recently, several authors have studied the resolution limit that different
community detection algorithms suffer from, which makes it impossible to
detect natural modules when very different topological scales coexist in the
network. Existing multiresolution methods are not a panacea: in extreme
situations, they also fail. Here, we present a new hierarchical
multiresolution scheme that works even when the network decomposition is very
close to the resolution limit. The idea is to apply the multiresolution
method to optimal subgraphs of the network, focusing the analysis on each
part independently. We
also propose a new algorithm to speed up the computational cost of screening
the mesoscale looking for the resolution parameter that best splits every
subgraph. The hierarchical algorithm is able to solve a difficult benchmark
proposed in [Lancichinetti & Fortunato, 2011], encouraging the further analysis
of hierarchical methods based on the modularity quality function.
|
1201.2046
|
Evaluating the performance of geographical locations in scientific
networks with an aggregation - randomization - re-sampling approach (ARR)
|
physics.soc-ph cs.SI
|
Knowledge creation and dissemination in science and technology systems is
perceived as a prerequisite for socio-economic development. The efficiency of
creating new knowledge is considered to have a geographical component, i.e.
some regions are more capable in scientific knowledge production than others.
This article shows a method to use a network representation of scientific
interaction to assess the relative efficiency of regions with diverse
boundaries in channeling knowledge through a science system. In a first step, a
weighted aggregate of the betweenness centrality is produced from empirical
data (aggregation). The subsequent randomization of this empirical network
produces the necessary Null-model for significance testing and normalization
(randomization). This step is repeated to yield higher confidence about the
results (re-sampling). The results are robust estimates for the relative
regional efficiency to broker knowledge, which is discussed along with
cross-sectional and longitudinal empirical examples. The network representation
acts as a straightforward metaphor for conceptual ideas from economic geography
and neighboring disciplines. However, the procedure is not limited to
centrality measures, nor is it limited to spatial aggregates. Therefore, it
offers a wide range of application for scientometrics and beyond.
|
1201.2050
|
Adaptive Noise Reduction Scheme for Salt and Pepper
|
cs.CV
|
In this paper, a new adaptive noise reduction scheme for images corrupted by
impulse noise is presented. The proposed scheme efficiently identifies and
reduces salt and pepper noise. MAG (Mean Absolute Gradient) is used to
identify pixels that are most likely corrupted by salt and pepper noise and
are candidates for further median-based noise reduction. Directional
filtering is then applied after noise reduction to achieve a good tradeoff
between detail preservation and noise removal. The proposed scheme can remove
salt and pepper noise with noise density as high as 90% and produces better
results in terms of qualitative and quantitative measures of images.
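The general idea can be sketched as follows (a hedged illustration, not the paper's exact scheme; the threshold value and function names are assumptions): extreme-valued pixels whose mean absolute gradient over their neighbourhood is large are flagged as impulses and replaced by the median of their unflagged neighbours.

```python
import numpy as np

def detect_impulses(img, mag_thresh=20.0):
    """Flag pixels likely corrupted by salt (255) or pepper (0) noise:
    a candidate must take an extreme grey level AND have a large mean
    absolute gradient (MAG) w.r.t. its 3x3 neighbourhood."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    f = img.astype(float)
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0, 255):
                continue
            patch = f[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if np.mean(np.abs(patch - f[i, j])) > mag_thresh:
                mask[i, j] = True
    return mask

def median_denoise(img, mask):
    """Replace each flagged pixel by the median of unflagged neighbours."""
    out = img.copy()
    for i, j in zip(*np.nonzero(mask)):
        pm = mask[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        patch = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        good = patch[~pm]
        if good.size:
            out[i, j] = np.median(good)
    return out
```

Restricting the median to detected impulses is what preserves detail: uncorrupted pixels pass through untouched, unlike a plain median filter applied everywhere.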
|
1201.2056
|
Adaptive Context Tree Weighting
|
cs.IT cs.LG math.IT
|
We describe an adaptive context tree weighting (ACTW) algorithm, as an
extension to the standard context tree weighting (CTW) algorithm. Unlike the
standard CTW algorithm, which weights all observations equally regardless of
the depth, ACTW gives increasing weight to more recent observations, aiming to
improve performance in cases where the input sequence is from a non-stationary
distribution. Data compression results show ACTW variants improving over CTW on
merged files from standard compression benchmark tests while never being
significantly worse on any individual file.
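The recency weighting can be illustrated at a single context node by exponentially discounting the symbol counts of a Krichevsky-Trofimov estimator (a simplified sketch of the idea only, not the full context-tree mixture; the function name and discount value are assumptions):

```python
def kt_sequence_prob(bits, discount=1.0):
    """Probability a KT estimator assigns to a binary sequence, with
    optional exponential discounting of past counts. discount < 1
    weights recent observations more (the ACTW idea at one node);
    discount = 1 recovers the plain KT estimator used in standard CTW.
    """
    a = b = 0.0              # (discounted) counts of 0s and 1s so far
    prob = 1.0
    for bit in bits:
        p1 = (b + 0.5) / (a + b + 1.0)   # KT predictor for "next bit is 1"
        prob *= p1 if bit else (1.0 - p1)
        a *= discount                     # fade old evidence
        b *= discount
        if bit:
            b += 1.0
        else:
            a += 1.0
    return prob
```

With discounting, the effective count is bounded by a geometric sum, so the predictor adapts quickly after a change in the source statistics, at the cost of less confidence on stationary stretches.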
|
1201.2073
|
Pbm: A new dataset for blog mining
|
cs.AI cs.CL cs.IR
|
Text mining is becoming vital as Web 2.0 offers collaborative content
creation and sharing. Researchers now have a growing interest in text mining
methods for discovering knowledge. Text mining researchers come from a variety
of areas, such as Natural Language Processing, Computational Linguistics,
Machine Learning, and Statistics. A typical text mining application involves
preprocessing of text, stemming and lemmatization, tagging and annotation,
deriving knowledge patterns, evaluating and interpreting the results. There are
numerous approaches for performing text mining tasks, like: clustering,
categorization, sentimental analysis, and summarization. There is a growing
need to standardize the evaluation of these tasks. One major component of
establishing standardization is to provide standard datasets for these tasks.
Although various standard datasets are available for traditional text
mining tasks, there are very few, and expensive, datasets for the blog-mining
task. Blogs, a new genre of Web 2.0, are digital diaries of web users with
chronological entries containing a lot of useful knowledge; they thus offer
many challenges and opportunities for text mining. In this paper, we report a new
indigenous dataset for Pakistani Political Blogosphere. The paper describes the
process of data collection, organization, and standardization. We have used
this dataset for carrying out various text mining tasks for blogosphere, like:
blog-search, political sentiments analysis and tracking, identification of
influential bloggers, and clustering of the blog-posts. We offer this
dataset free to others who aspire to pursue research further in this domain.
|
1201.2084
|
Sentence based semantic similarity measure for blog-posts
|
cs.AI cs.IR
|
Blogs, the online digital-diary-like applications of Web 2.0, have opened a
new and easy way for every Internet user to voice opinions, thoughts, and
likes and dislikes to the world. The blogosphere is without doubt the largest
user-generated content repository, full of knowledge. The potential of this
knowledge is still to be explored. Knowledge discovery from this new genre is
quite difficult and challenging, as it is totally different from other popular
genres of web applications like the World Wide Web (WWW). Blog-posts, unlike
web documents, are small in size, and thus lack context and contain relaxed
grammatical structures. Hence, standard text similarity measures fail to
provide good results. In this paper, the specialized requirements for
comparing a pair of blog-posts are thoroughly investigated. Based on this, we
propose a novel algorithm for sentence-oriented semantic similarity
measurement of a pair of blog-posts. We applied this algorithm to a subset of
the political blogosphere of Pakistan, to cluster the blogs on different
issues of the political realm and to identify the influential bloggers.
|
1201.2100
|
Biologically inspired design framework for Robot in Dynamic Environments
using Framsticks
|
cs.NE
|
Robot design complexity is increasing day by day, especially in automated
industries. In this paper we propose a biologically inspired design framework
for robots in a dynamic world on the basis of co-evolution, virtual ecology,
and lifetime learning, which are derived from biological creatures. We have
created a virtual Khepera robot in Framsticks and tested its operational
credibility in terms of hardware and software components by applying the
above techniques. The major concerns of our techniques are monitoring complex
and non-complex behaviors in different environments, and obtaining the
parameters that influence the robot's software and hardware design, its
anticipated and unanticipated failures, and the generation of its control
programs.
|
1201.2173
|
Automatic Detection of Diabetes Diagnosis using Feature Weighted Support
Vector Machines based on Mutual Information and Modified Cuckoo Search
|
cs.LG
|
Diabetes is a major health problem in both developing and developed countries
and its incidence is rising dramatically. In this study, we investigate a novel
automatic approach to diagnose Diabetes disease based on Feature Weighted
Support Vector Machines (FW-SVMs) and Modified Cuckoo Search (MCS). The
proposed model consists of three stages. Firstly, PCA is applied to select an
optimal subset of features out of the set of all features. Secondly, Mutual
Information is employed to construct the FWSVM by weighting different features
based on their degree of importance. Finally, since parameter selection plays a
vital role in classification accuracy of SVMs, MCS is applied to select the
best parameter values. The proposed MI-MCS-FWSVM method obtains 93.58% accuracy
on UCI dataset. The experimental results demonstrate that our method
outperforms the previous methods by not only giving more accurate results but
also significantly speeding up the classification procedure.
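The mutual-information weighting stage can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: the PCA feature-selection stage and the Modified Cuckoo Search parameter tuning are omitted, and the toy data below is hypothetical.

```python
# Sketch of the Mutual Information weighting stage of an FW-SVM: each
# (discrete) feature receives a weight proportional to its mutual
# information with the class label; inputs would then be scaled by these
# weights before SVM training. PCA selection and MCS parameter tuning
# from the paper are omitted.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for paired discrete sequences xs and ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# hypothetical toy data: feature 1 determines the label, feature 2 is noise
labels = [0, 0, 0, 0, 1, 1, 1, 1]
feat1  = [0, 0, 0, 0, 1, 1, 1, 1]   # perfectly informative: I = 1 bit
feat2  = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label: I = 0

mi = [mutual_information(f, labels) for f in (feat1, feat2)]
weights = [m / sum(mi) for m in mi]  # degree-of-importance weights
```

Here the informative feature receives all the weight and the noise feature receives none; on real data the weights merely rescale each feature's influence on the SVM kernel.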
|
1201.2199
|
Memory-Assisted Universal Source Coding
|
cs.IT math.IT
|
The problem of the universal compression of a sequence from a library of
several small-to-moderate-length sequences from a similar context arises in
many practical scenarios, such as the compression of storage data and Internet
traffic. In such scenarios, it is often required to compress and
decompress every sequence individually. However, the universal compression of
the individual sequences suffers from significant redundancy overhead. In this
paper, we aim at answering whether or not having a memory unit in the middle
can result in a fundamental gain in the universal compression. We present the
problem setup in the most basic scenario consisting of a server node $S$, a
relay node $R$ (i.e., the memory unit), and a client node $C$. We assume that
server $S$ wishes to send the sequence $x^n$ to the client $C$ who has never
had any prior communication with the server, and hence, is not capable of
memorization of the source context. However, $R$ has previously communicated
with $S$ to forward previous sequences from $S$ to clients other than $C$,
and thus, $R$ has memorized a context $y^m$ shared with $S$. Note that if the
relay node were absent, the source could only apply universal compression to
$x^n$ and transmit it to $C$, whereas the presence of the memorized context at
$R$ can reduce the communication overhead on the $S$-$R$ link. In this paper, we
investigate the fundamental gain of the context memorization in the
memory-assisted universal compression of the sequence $x^n$ over conventional
universal source coding by providing a lower bound on the gain of
memory-assisted source coding.
|
1201.2201
|
A Performance Metric for Discrete-Time Chaos-Based Truly Random Number
Generators
|
cs.IT math.DS math.IT
|
In this paper, we develop an information-entropy based metric that represents
the statistical quality of the binary sequence generated by Truly Random Number
Generators (TRNGs). The metric can be used for the design and optimization of
TRNG circuits as well as for the development of efficient post-processing units
for recovering the statistical characteristics of the signal degraded by
process variations.
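As a rough illustration of an entropy-based quality metric (a simplified stand-in, not the metric defined in the paper), one can estimate the Shannon entropy per output bit from the empirical distribution of fixed-length blocks; an ideal TRNG approaches 1 bit of entropy per bit:

```python
# Empirical block-entropy per bit of a binary sequence: partition the
# sequence into non-overlapping length-k blocks, estimate the block
# distribution, and normalize the Shannon entropy by k. Values near 1.0
# indicate good statistical quality; bias or correlation pulls it down.
import random
from collections import Counter
from math import log2

def entropy_per_bit(bits, k=4):
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    n = len(blocks)
    h = -sum((c / n) * log2(c / n) for c in Counter(blocks).values())
    return h / k

random.seed(1)
good   = [random.getrandbits(1) for _ in range(4096)]        # unbiased source
biased = [1 if random.random() < 0.9 else 0 for _ in range(4096)]
# entropy_per_bit(good) is close to 1.0; entropy_per_bit(biased) is far below
```

A post-processing unit of the kind the abstract mentions would aim to push the degraded sequence's metric back toward 1.0.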
|
1201.2205
|
A Cryptographic Treatment of the Wiretap Channel
|
cs.IT cs.CR math.IT
|
The wiretap channel is a setting where one aims to provide
information-theoretic privacy of communicated data based solely on the
assumption that the channel from sender to adversary is "noisier" than the
channel from sender to receiver. It has been the subject of decades of work in
the information and coding (I&C) community. This paper bridges the gap between
this body of work and modern cryptography with contributions along two fronts,
namely metrics (definitions) of security, and schemes. We explain that the
metric currently in use is weak and insufficient to guarantee security of
applications and propose two replacements. One, that we call mis-security, is a
mutual-information based metric in the I&C style. The other, semantic security,
adapts to this setting a cryptographic metric that, in the cryptography
community, has been vetted by decades of evaluation and endorsed as the target
for standards and implementations. We show that they are equivalent (any scheme
secure under one is secure under the other), thereby connecting two
fundamentally different ways of defining security and providing a strong,
unified and well-founded target for designs. Moving on to schemes, results from
the wiretap community are mostly non-constructive, proving the existence of
schemes without necessarily yielding ones that are explicit, let alone
efficient, and only meeting their weak notion of security. We apply
cryptographic methods based on extractors to produce explicit, polynomial-time
and even practical encryption schemes that meet our new and stronger security
target.
|
1201.2207
|
Multi-sensor Information Processing using Prediction Market-based Belief
Aggregation
|
cs.MA
|
We consider the problem of information fusion from multiple sensors of
different types with the objective of improving the confidence of inference
tasks, such as object classification, performed from the data collected by the
sensors. We propose a novel technique based on distributed belief aggregation
using a multi-agent prediction market to solve this information fusion problem.
To monitor the improvement in the confidence of the object classification as
well as to dis-incentivize agents from misreporting information, we have
introduced a market maker that rewards the agents instantaneously as well as at
the end of the inference task, based on the quality of the submitted reports.
We have implemented the market maker's reward calculation in the form of a
scoring rule and have shown analytically that it incentivizes truthful
revelation or accurate reporting by each agent. We have experimentally verified
our technique for multi-sensor information fusion for an automated landmine
detection scenario. Our experimental results show that, for identical data
distributions and settings, our information aggregation technique improves the
accuracy of object classification compared to two other commonly used
techniques for information fusion for landmine detection.
|
1201.2231
|
Reduced Functional Dependence Graph and Its Applications
|
cs.IT math.IT
|
A functional dependence graph (FDG) is an important class of directed graphs
that captures the dominance relationship among a set of variables. FDGs are
frequently used in calculating network coding capacity bounds. However, the
order of FDG is usually much larger than the original network and the
computational complexity of many bounds grows exponentially with the order of
FDG. In this paper, we introduce the concept of the reduced FDG, which is
obtained from the original FDG by keeping only the "essential" edges. It is
proved that the reduced FDG gives the same capacity region/bounds as the
original FDG while requiring much less computation. The applications of the
reduced FDG in the algebraic formulation of scalar linear network coding are
also discussed.
|
1201.2240
|
Bengali text summarization by sentence extraction
|
cs.IR cs.CL
|
Text summarization is a process to produce an abstract or a summary by
selecting a significant portion of the information from one or more texts. In
an automatic text summarization process, a text is given to the computer and
the computer returns a shorter, less redundant extract or abstract of the
original text(s). Many techniques have been developed for summarizing English
text(s), but very few attempts have been made for Bengali text summarization. This
paper presents a method for Bengali text summarization which extracts important
sentences from a Bengali document to produce a summary.
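A minimal, language-agnostic sketch of summarization by sentence extraction (the Bengali-specific preprocessing such as stemming and stop-word handling is omitted, and the toy sentences are hypothetical): each sentence is scored by the summed corpus frequency of its words, and the top-ranked sentences are kept in document order.

```python
# Extractive summarization by sentence scoring: rank sentences by the
# summed frequency of their words over the whole document, then emit the
# top-n sentences in their original order.
from collections import Counter

def summarize(sentences, n=1):
    freq = Counter(w.lower() for s in sentences for w in s.split())
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w.lower()]
                                       for w in sentences[i].split()))
    return [sentences[i] for i in sorted(ranked[:n])]  # document order

doc = ["the cat sat", "dogs bark", "the cat ran"]
summary = summarize(doc, n=2)   # keeps the two cat-related sentences
```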
|
1201.2241
|
Distance-Based Bias in Model-Directed Optimization of Additively
Decomposable Problems
|
cs.NE cs.AI
|
For many optimization problems it is possible to define a distance metric
between problem variables that correlates with the likelihood and strength of
interactions between the variables. For example, one may define a metric so
that the dependencies between variables that are closer to each other with
respect to the metric are expected to be stronger than the dependencies between
variables that are further apart. The purpose of this paper is to describe a
method that combines such a problem-specific distance metric with information
mined from probabilistic models obtained in previous runs of estimation of
distribution algorithms with the goal of solving future problem instances of
similar type with increased speed, accuracy and reliability. While the focus of
the paper is on additively decomposable problems and the hierarchical Bayesian
optimization algorithm, it should be straightforward to generalize the approach
to other model-directed optimization techniques and other problem classes.
Compared to other techniques for learning from experience put forward in the
past, the proposed technique is both more practical and more broadly
applicable.
|
1201.2261
|
Relationships in Large-Scale Graph Computing
|
cs.DS cs.IR
|
In 2009, Grzegorz Czajkowski from Google's system infrastructure team
published an article which didn't get much attention in the SEO community at
the time. It was titled "Large-scale graph computing at Google" and gave an
excellent insight into the future of Google's search. This article highlights
some of the little-known facts which led to the transformation of Google's
algorithm in the last two years.
|
1201.2277
|
A Time Decoupling Approach for Studying Forum Dynamics
|
cs.SI physics.soc-ph
|
Online forums are rich sources of information about user communication
activity over time. Finding temporal patterns in online forum communication
threads can advance our understanding of the dynamics of conversations. The
main challenge of temporal analysis in this context is the complexity of forum
data. There can be thousands of interacting users, who can be numerically
described in many different ways. Moreover, user characteristics can evolve
over time. We propose an approach that decouples temporal information about
users into sequences of user events and inter-event times. We develop a new
feature space to represent the event sequences as paths, and we model the
distribution of the inter-event times. We study over 30,000 users across four
Internet forums, and discover novel patterns in user communication. We find
that users tend to exhibit consistency over time. Furthermore, in our feature
space, we observe regions that represent unlikely user behaviors. Finally, we
show how to derive a numerical representation for each forum, and we then use
this representation to derive a novel clustering of multiple forums.
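The decoupling step described above can be sketched as follows (an illustrative sketch with hypothetical event types; the paper's feature-space construction over the resulting sequences is not shown):

```python
# Decouple a user's timestamped activity into (a) the sequence of user
# events and (b) the inter-event times between consecutive events.
def decouple(events):
    """events: time-sorted list of (timestamp, event_type) pairs."""
    types = [etype for _, etype in events]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    return types, gaps

types, gaps = decouple([(0, "post"), (5, "reply"), (12, "post")])
# types == ["post", "reply", "post"], gaps == [5, 7]
```

The event-type sequence can then be embedded as a path in a feature space, while the inter-event times are modeled by a separate distribution.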
|
1201.2291
|
Statistical Complexity and Fisher-Shannon Information. Applications
|
nlin.CD cs.IT math.IT physics.atom-ph
|
In this chapter, a statistical measure of complexity and the Fisher-Shannon
information product are introduced and their properties are discussed. These
measures are based on the interplay between the Shannon information, or a
function of it, and the separation of the set of accessible states to a system
from the equiprobability distribution, i.e. the disequilibrium or the Fisher
information, respectively. Different applications in discrete and continuous
systems are shown. Some of them are concerned with quantum systems, from
prototypical systems such as the H-atom, the harmonic oscillator and the square
well to other ones such as He-like ions, Hooke's atoms or just the periodic
table. In all of them, these statistical indicators show an interesting
behavior able to discern and highlight some conformational properties of those
systems.
|
1201.2304
|
Query sensitive comparative summarization of search results using
concept based segmentation
|
cs.IR
|
Query-sensitive summarization aims at providing users with a summary of
the contents of single or multiple web pages based on the search query. This
paper proposes a novel idea of generating a comparative summary from a set of
URLs from the search result. The user selects a set of web page links from the
search results produced by a search engine. A comparative summary of these
selected web sites is then generated. This method makes use of the HTML DOM
tree structure of these web pages. The HTML documents are segmented into sets
of concept blocks. The sentence score of each concept block is computed with
respect to the query and
feature keywords. The important sentences from the concept blocks of different
web pages are extracted to compose the comparative summary on the fly. This
system reduces the time and effort required for the user to browse various web
sites to compare the information. The comparative summary of the contents would
help the users in quick decision making.
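The query-sensitive scoring can be sketched as below. This is a hypothetical simplification: the HTML DOM-tree segmentation is replaced by pre-split concept blocks, and sentences are scored only by overlap with the query terms (the paper's feature keywords are omitted).

```python
# For each page, score every sentence in its concept blocks by the number
# of query terms it shares with the query, and put the best sentence into
# the comparative summary.
def best_sentence(blocks, query):
    terms = set(query.lower().split())
    sentences = [s for block in blocks for s in block]
    return max(sentences,
               key=lambda s: len(terms & set(s.lower().split())))

pages = {  # hypothetical pre-segmented pages: lists of concept blocks
    "site-a": [["Battery life is ten hours", "Ships worldwide"]],
    "site-b": [["Battery life reaches twelve hours", "Red color only"]],
}
summary = {url: best_sentence(blocks, "battery life hours")
           for url, blocks in pages.items()}
```

Laying out the selected sentences side by side, one per URL, yields the comparative summary the user sees.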
|
1201.2315
|
Secure Transmission of Sources over Noisy Channels with Side Information
at the Receivers
|
cs.IT math.IT
|
This paper investigates the problem of source-channel coding for secure
transmission with arbitrarily correlated side informations at both receivers.
This scenario consists of an encoder (referred to as Alice) that wishes to
compress a source and send it through a noisy channel to a legitimate receiver
(referred to as Bob). In this context, Alice must simultaneously satisfy the
desired requirements on the distortion level at Bob, and the equivocation rate
at the eavesdropper (referred to as Eve). This setting can be seen as a
generalization of the problems of secure source coding with (uncoded) side
information at the decoders, and the wiretap channel. A general outer bound on
the rate-distortion-equivocation region, as well as an inner bound based on a
pure digital scheme, is derived for arbitrary channels and side informations.
In some special cases of interest, it is proved that this digital scheme is
optimal and that separation holds. However, it is also shown through a simple
counterexample with a binary source that a pure analog scheme can outperform
the digital one while being optimal. According to these observations and
assuming matched bandwidth, a novel hybrid digital/analog scheme that aims to
gather the advantages of both digital and analog ones is then presented. In the
quadratic Gaussian setup when side information is only present at the
eavesdropper, this strategy is proved to be optimal. Furthermore, it
outperforms both digital and analog schemes, and cannot be achieved via
time-sharing. By means of an appropriate coding, the presence of any
statistical difference among the side informations, the channel noises, and the
distortion at Bob can be fully exploited in terms of secrecy.
|
1201.2334
|
Universal Estimation of Directed Information
|
cs.IT math.IT
|
Four estimators of the directed information rate between a pair of jointly
stationary ergodic finite-alphabet processes are proposed, based on universal
probability assignments. The first one is a Shannon--McMillan--Breiman type
estimator, similar to those used by Verd\'u (2005) and Cai, Kulkarni, and
Verd\'u (2006) for estimation of other information measures. We show the almost
sure and $L_1$ convergence properties of the estimator for any underlying
universal probability assignment. The other three estimators map universal
probability assignments to different functionals, each exhibiting relative
merits such as smoothness, nonnegativity, and boundedness. We establish the
consistency of these estimators in almost sure and $L_1$ senses, and derive
near-optimal rates of convergence in the minimax sense under mild conditions.
These estimators carry over directly to estimating other information measures
of stationary ergodic finite-alphabet processes, such as entropy rate and
mutual information rate, with near-optimal performance and provide alternatives
to classical approaches in the existing literature. Guided by these theoretical
results, the proposed estimators are implemented using the context-tree
weighting algorithm as the universal probability assignment. Experiments on
synthetic and real data are presented, demonstrating the potential of the
proposed schemes in practice and the utility of directed information estimation
in detecting and measuring causal influence and delay.
|
1201.2383
|
Impact of Dynamic Interactions on Multi-Scale Analysis of Community
Structure in Networks
|
cs.SI physics.comp-ph physics.data-an physics.soc-ph
|
To find interesting structure in networks, community detection algorithms
have to take into account not only the network topology, but also dynamics of
interactions between nodes. We investigate this claim using the paradigm of
synchronization in a network of coupled oscillators. As the network evolves to
a global steady state, nodes belonging to the same community synchronize faster
than nodes belonging to different communities. Traditionally, nodes in network
synchronization models are coupled via one-to-one, or conservative
interactions. However, social interactions are often one-to-many, as for
example, in social media, where users broadcast messages to all their
followers. We formulate a novel model of synchronization in a network of
coupled oscillators in which the oscillators are coupled via one-to-many, or
non-conservative interactions. We study the dynamics of different interaction
models and contrast their spectral properties. To find multi-scale community
structure in a network of interacting nodes, we define a similarity function
that measures the degree to which nodes are synchronized and use it to
hierarchically cluster nodes. We study real-world social networks, including
networks of two social media providers. To evaluate the quality of the
discovered communities in a social media network we propose a community quality
metric based on user activity. We find that conservative and non-conservative
interaction models lead to dramatically different views of community structure
even within the same network. Our work offers a novel mathematical framework
for exploring the relationship between network structure, topology and
dynamics.
|
1201.2386
|
Bounds on the Minimum Distance of Punctured Quasi-Cyclic LDPC Codes
|
cs.IT math.IT
|
Recent work by Divsalar et al. has shown that properly designed
protograph-based low-density parity-check (LDPC) codes typically have minimum
(Hamming) distance linearly increasing with block length. This fact rests on
ensemble arguments over all possible expansions of the base protograph.
However, when implementation complexity is considered, the expansions are
frequently selected from a smaller class of structured expansions. For example,
protograph expansion by cyclically shifting connections generates a
quasi-cyclic (QC) code. Other recent work by Smarandache and Vontobel has
provided upper bounds on the minimum distance of QC codes. In this paper, we
generalize these bounds to punctured QC codes and then show how to tighten
these for certain classes of codes. We then evaluate these upper bounds for the
family of protograph codes known as AR4JA codes that have been recommended for
use in deep space communications in a standard established by the Consultative
Committee for Space Data Systems (CCSDS). At block lengths larger than 4400
bits, these upper bounds fall well below the ensemble lower bounds.
|
1201.2395
|
Polynomial Regression on Riemannian Manifolds
|
math.ST cs.CV math.DG stat.TH
|
In this paper we develop the theory of parametric polynomial regression in
Riemannian manifolds and Lie groups. We show application of Riemannian
polynomial regression to shape analysis in Kendall shape space. Results are
presented, showing the power of polynomial regression on the classic rat skull
growth data of Bookstein as well as the analysis of the shape changes
associated with aging of the corpus callosum from the OASIS Alzheimer's study.
|
1201.2416
|
Stochastic Low-Rank Kernel Learning for Regression
|
cs.LG
|
We present a novel approach to learn a kernel-based regression function. It
is based on the use of conical combinations of data-based parameterized kernels
and on a new stochastic convex optimization procedure for which we establish
convergence guarantees. The overall learning procedure has the nice properties
that a) the learned conical combination is automatically designed to perform
the regression task at hand and b) the updates implicated by the optimization
procedure are quite inexpensive. In order to shed light on the appositeness of
our learning strategy, we present empirical results from experiments conducted
on various benchmark datasets.
|
1201.2430
|
A Well-typed Lightweight Situation Calculus
|
cs.PL cs.AI
|
Situation calculus has been widely applied in Artificial Intelligence related
fields. This formalism is considered a dialect of logic programming and is
mostly used in dynamic domain modeling. However, type systems are rarely
deployed in situation calculus in the literature. To achieve correct and sound
typed programs written in situation calculus, adding typing elements to the
current situation calculus will be quite helpful. In this paper, we propose to
add more typing mechanisms to the current version of situation calculus,
especially for its three basic elements: situations, actions and objects, and
then to perform strict type checking on existing situation calculus programs to
identify the well-typed and ill-typed ones. In this way, type correctness and
soundness in situation calculus programs can be guaranteed by type checking
based on our type system. This modified version of a lightweight situation
calculus is proved to be a robust and well-typed system.
|
1201.2462
|
The minimax risk of truncated series estimators for symmetric convex
polytopes
|
math.ST cs.IT math.IT math.PR stat.TH
|
We study the optimality of the minimax risk of truncated series estimators
for symmetric convex polytopes. We show that the optimal truncated series
estimator is within an $O(\log m)$ factor of optimal if the polytope is
defined by $m$ hyperplanes. This represents the first such bound towards
general convex bodies. In proving our result, we first define a geometric
quantity, called the \emph{approximation radius}, for lower bounding the
minimax risk. We then derive our bounds by establishing a connection between
the approximation radius and the Kolmogorov width, the quantity that provides
upper bounds for the truncated series estimator. Besides, our proof contains
several ingredients which might be of independent interest: 1. The notion of
approximation radius depends on the volume of the body. It is an intuitive
notion and is flexible to yield strong minimax lower bounds; 2. The connection
between the approximation radius and the Kolmogorov width is a consequence of a
novel duality relationship on the Kolmogorov width, developed by utilizing some
deep results from convex geometry.
|
1201.2471
|
Eigen-Direction Alignment Based Physical-Layer Network Coding for MIMO
Two-Way Relay Channels
|
cs.IT math.IT
|
In this paper, we propose a novel communication strategy which incorporates
physical-layer network coding (PNC) into multiple-input multiple output (MIMO)
two-way relay channels (TWRCs). At the heart of the proposed scheme lies a new
key technique referred to as eigen-direction alignment (EDA) precoding. The EDA
precoding efficiently aligns the two users' eigen-modes into the same
directions. Based on this, we carry out multi-stream PNC over the aligned
eigen-modes. We derive an achievable rate of the proposed EDA-PNC scheme, based
on nested lattice codes, over a MIMO TWRC. Asymptotic analysis shows that the
proposed EDA-PNC scheme approaches the capacity upper bound as the number of
user antennas increases towards infinity. For a finite number of user antennas,
we formulate the design criterion of the optimal EDA precoder and present
solutions. Numerical results show that there is only a marginal gap between the
achievable rate of the proposed EDA-PNC scheme and the capacity upper bound of
the MIMO TWRC, in the median-to-large SNR region. We also show that the
proposed EDA-PNC scheme significantly outperforms existing amplify-and-forward
and decode-and-forward based schemes for MIMO TWRCs.
|
1201.2478
|
Global stabilization of nonlinear systems based on vector control
lyapunov functions
|
math.OC cs.SY
|
This paper studies the use of vector Lyapunov functions for the design of
globally stabilizing feedback laws for nonlinear systems. Recent results on
vector Lyapunov functions are utilized. The main result of the paper shows that
the existence of a vector control Lyapunov function is a necessary and
sufficient condition for the existence of a smooth globally stabilizing
feedback. Applications to nonlinear systems are provided: simple and easily
checkable sufficient conditions are proposed to guarantee the existence of a
smooth globally stabilizing feedback law. The obtained results are applied to
the problem of the stabilization of an equilibrium point of a reaction network
taking place in a continuous stirred tank reactor.
|
1201.2483
|
Duality of Channel Encoding and Decoding - Part I: Rate-1 Binary
Convolutional Codes
|
cs.IT math.IT
|
In this paper, we revisit the forward, backward and bidirectional
Bahl-Cocke-Jelinek-Raviv (BCJR) soft-input soft-output (SISO) maximum a
posteriori probability (MAP) decoding process of rate-1 binary convolutional
codes. From this we establish some interesting explicit relationships between
encoding and decoding of rate-1 convolutional codes. We observe that the
forward and backward BCJR SISO MAP decoders can be simply represented by their
dual SISO channel encoders using shift registers in the complex number field.
Similarly, the bidirectional MAP decoding can be implemented by linearly
combining the shift register contents of the dual SISO encoders of the
respective forward and backward decoders. The dual encoder structures for
various recursive and non-recursive rate-1 convolutional codes are derived.
|
1201.2513
|
On Geometric Upper Bounds for Positioning Algorithms in Wireless Sensor
Networks
|
cs.IT math.IT
|
This paper studies the possibility of upper bounding the position error of an
estimate for range-based positioning algorithms in wireless sensor networks. In
this study, we argue that in certain situations when the measured distances
between sensor nodes are positively biased, e.g., in non-line-of-sight
conditions, the target node is confined to a closed bounded convex set (a
feasible set) which can be derived from the measurements. Then, we formulate
two classes of geometric upper bounds with respect to the feasible set. If an
estimate is available, either feasible or infeasible, the worst-case position
error can be defined as the maximum distance between the estimate and any point
in the feasible set (the first bound). Alternatively, if an estimate given by a
positioning algorithm is always feasible, we propose to take the maximum length
of the feasible set as the worst-case position error (the second bound). These
bounds are formulated as nonconvex optimization problems. To progress, we relax
the nonconvex problems and obtain convex problems, which can be efficiently
solved. Simulation results indicate that the proposed bounds are reasonably
tight in many situations.
|
1201.2515
|
Integrating Interactive Visualizations in the Search Process of Digital
Libraries and IR Systems
|
cs.DL cs.IR
|
Interactive visualizations for exploration and retrieval have not yet become an
integral part of digital libraries and information retrieval systems. We have
integrated a set of interactive graphics in a real world social science digital
library. These visualizations support the exploration of search queries,
results and authors, can filter search results, show trends in the database and
can support the creation of new search queries. The use of weighted brushing
supports the identification of related metadata for search facets. We discuss
some use cases of the combination of IR systems and interactive graphics. In a
user study we verify that users can gain insights from statistical graphics
intuitively and can adopt interaction techniques.
|
1201.2523
|
At Low SNR Asymmetric Quantizers Are Better
|
cs.IT math.IT
|
We study the capacity of the discrete-time Gaussian channel when its output
is quantized with a one-bit quantizer. We focus on the low signal-to-noise
ratio (SNR) regime, where communication at very low spectral efficiencies takes
place. In this regime a symmetric threshold quantizer is known to reduce
channel capacity by a factor of 2/pi, i.e., to cause an asymptotic power loss
of approximately two decibels. Here it is shown that this power loss can be
avoided by using asymmetric threshold quantizers and asymmetric signaling
constellations. To avoid this power loss, flash-signaling input distributions
are essential. Consequently, one-bit output quantization of the Gaussian
channel reduces spectral efficiency. Threshold quantizers are not only
asymptotically optimal: at every fixed SNR a threshold quantizer maximizes
capacity among all one-bit output quantizers. The picture changes on the
Rayleigh-fading channel. In the noncoherent case a one-bit output quantizer
causes an unavoidable low-SNR asymptotic power loss. In the coherent case,
however, this power loss is avoidable provided that we allow the quantizer to
depend on the fading level.
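The 2/pi factor for the symmetric threshold quantizer can be checked numerically (an illustrative calculation, not taken from the paper): with antipodal signaling and a zero-threshold quantizer, the channel becomes a binary symmetric channel with crossover probability Q(sqrt(SNR)), and at low SNR its capacity approaches 2/pi times the unquantized low-SNR AWGN capacity of snr/2 nats.

```python
# Low-SNR capacity ratio of a symmetric one-bit quantizer on the AWGN
# channel: BSC capacity with crossover Q(sqrt(snr)) versus snr/2 nats.
from math import erfc, log, pi, sqrt

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def bsc_capacity_nats(p):
    return log(2.0) + p * log(p) + (1.0 - p) * log(1.0 - p)

snr = 0.01                       # low-SNR regime
p = q_func(sqrt(snr))            # hard-decision crossover probability
ratio = bsc_capacity_nats(p) / (snr / 2.0)
# ratio is close to 2/pi, i.e. a loss of about two decibels
```

Decreasing `snr` further drives the ratio toward 2/pi exactly, which is the asymptotic power loss the abstract says asymmetric quantizers and signaling can avoid.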
|
1201.2542
|
An efficient FPGA implementation of MRI image filtering and tumor
characterization using Xilinx system generator
|
cs.AR cs.CV
|
This paper presents an efficient architecture for various image filtering
algorithms and tumor characterization using Xilinx System Generator (XSG). This
architecture offers an alternative through a graphical user interface that
combines MATLAB, Simulink and XSG and explores important aspects concerned to
hardware implementation. The performance of this architecture, implemented on
a SPARTAN-3E Starter Kit (XC3S500E-FG320), exceeds that of architectures with
similar or greater resources. The proposed architecture reduces the usage of
the resources available on the target device by 50%.
|
1201.2555
|
Sparse Reward Processes
|
cs.LG stat.ML
|
We introduce a class of learning problems where the agent is presented with a
series of tasks. Intuitively, if there is a relation among those tasks, then the
information gained during execution of one task has value for the execution of
another task. Consequently, the agent is intrinsically motivated to explore its
environment beyond the degree necessary to solve the current task it has at
hand. We develop a decision theoretic setting that generalises standard
reinforcement learning tasks and captures this intuition. More precisely, we
consider a multi-stage stochastic game between a learning agent and an
opponent. We posit that the setting is a good model for the problem of
life-long learning in uncertain environments, where while resources must be
spent learning about currently important tasks, there is also the need to
allocate effort towards learning about aspects of the world which are not
relevant at the moment. This is due to the fact that unpredictable future
events may lead to a change of priorities for the decision maker. Thus, in some
sense, the model "explains" the necessity of curiosity. Apart from introducing
the general formalism, the paper provides algorithms. These are evaluated
experimentally in some exemplary domains. In addition, performance bounds are
proven for some cases of this problem.
|
1201.2564
|
Query-Subquery Nets
|
cs.DB cs.LO
|
We formulate query-subquery nets and use them to create the first framework
for developing algorithms for evaluating queries to Horn knowledge bases with
the properties that: the approach is goal-directed; each subquery is processed
only once and each supplement tuple, if desired, is transferred only once;
operations are done set-at-a-time; and any control strategy can be used. Our
intention is to increase efficiency of query processing by eliminating
redundant computation, increasing flexibility and reducing the number of
accesses to the secondary storage. The framework forms a generic evaluation
method called QSQN. To deal with function symbols, we use a term-depth bound
for atoms and substitutions occurring in the computation and propose to use
iterative deepening search which iteratively increases the term-depth bound. We
prove soundness and completeness of our generic evaluation method and show
that, when the term-depth bound is fixed, the method has PTIME data complexity.
We also present how tail recursion elimination can be incorporated into our
framework and propose two exemplary control strategies, one is to reduce the
number of accesses to the secondary storage, while the other is depth-first
search.
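A minimal sketch of the iterative-deepening control loop (not the QSQN framework itself) on a toy Horn program even(0), even(s(s(X))) :- even(X), with terms encoded as nested tuples; all names and the toy program are illustrative:

```python
def term_depth(t):
    """Depth of a term encoded as nested tuples; constants have depth 0."""
    if not isinstance(t, tuple):
        return 0
    return 1 + max((term_depth(arg) for arg in t[1:]), default=0)

def evens_up_to_depth(bound):
    """Enumerate answers to even(X) whose term depth stays within the
    current term-depth bound."""
    answers, frontier = ["0"], ["0"]   # base fact: even(0)
    while frontier:
        t = frontier.pop()
        t2 = ("s", ("s", t))           # rule: even(s(s(X))) :- even(X)
        if term_depth(t2) <= bound:
            answers.append(t2)
            frontier.append(t2)
    return answers

def iterative_deepening(goal, max_bound=20):
    """Re-run the evaluation with an increasing term-depth bound, as
    proposed for coping with function symbols."""
    for bound in range(max_bound + 1):
        if goal in evens_up_to_depth(bound):
            return bound
    return None
```

Each pass is complete up to its bound, so raising the bound only ever adds answers, which is what makes the deepening loop sound.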
|
1201.2575
|
Joint Approximation of Information and Distributed Link-Scheduling
Decisions in Wireless Networks
|
cs.LG cs.NI
|
For a large multi-hop wireless network, it is preferable for nodes to make
distributed and localized link-scheduling decisions with interactions only
among a small number of neighbors. However, for a slowly decaying channel and
densely populated interferers, a small size neighborhood often results in
nontrivial link outages and is thus insufficient for making optimal scheduling
decisions. A question arises: how should the information outside a
neighborhood be dealt with in distributed link-scheduling? In this work, we develop joint
approximation of information and distributed link scheduling. We first apply
machine learning approaches to model distributed link-scheduling with complete
information. We then characterize the information outside a neighborhood in
form of residual interference as a random loss variable. The loss variable is
further characterized by either a Mean Field approximation or a normal
distribution based on the Lyapunov central limit theorem. The approximated
information outside a neighborhood is incorporated in a factor graph. This
results in joint approximation and distributed link-scheduling in an iterative
fashion. Link-scheduling decisions are first made at each individual node based
on the approximated loss variables. Loss variables are then updated and used
for next link-scheduling decisions. The algorithm repeats between these two
phases until convergence. Interactive iterations among these variables are
implemented with a message-passing algorithm over a factor graph. Simulation
results show that using learned information outside a neighborhood jointly with
distributed link-scheduling reduces the outage probability close to zero even
for a small neighborhood.
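As an illustration of the normal-approximation step (a generic central-limit sketch, not the paper's factor-graph algorithm; the exponential interference model and all parameters are assumptions of this example):

```python
import math
import random

def outage_prob_normal(means, variances, threshold):
    """Approximate P(sum of residual-interference terms > threshold) by a
    normal law, as licensed by the Lyapunov central limit theorem."""
    mu = sum(means)
    sigma = math.sqrt(sum(variances))
    z = (threshold - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

def outage_prob_mc(rate, n_terms, threshold, samples=20000, seed=1):
    """Monte Carlo reference with i.i.d. exponential interference terms."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.expovariate(rate) for _ in range(n_terms)) > threshold
        for _ in range(samples)
    )
    return hits / samples
```

With 50 exponential terms of mean 0.1 (total mean 5, total variance 0.5), the normal approximation of the outage probability at threshold 6 stays within a few percent of the Monte Carlo estimate.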
|
1201.2592
|
Interpolatory Weighted-H2 Model Reduction
|
math.NA cs.SY math.DS math.OC
|
This paper introduces an interpolation framework for the weighted-H2 model
reduction problem. We obtain a new representation of the weighted-H2 norm of
SISO systems that provides new interpolatory first order necessary conditions
for an optimal reduced-order model. The H2 norm representation also provides an
error expression that motivates a new weighted-H2 model reduction algorithm.
Several numerical examples illustrate the effectiveness of the proposed
approach.
|
1201.2605
|
Autonomous Cleaning of Corrupted Scanned Documents - A Generative
Modeling Approach
|
cs.CV cs.LG
|
We study the task of cleaning scanned text documents that are strongly
corrupted by dirt such as manual line strokes, spilled ink etc. We aim at
autonomously removing dirt from a single letter-size page based only on the
information the page contains. Our approach, therefore, has to learn character
representations without supervision and requires a mechanism to distinguish
learned representations from irregular patterns. To learn character
representations, we use a probabilistic generative model parameterizing pattern
features, feature variances, the features' planar arrangements, and pattern
frequencies. The latent variables of the model describe pattern class, pattern
position, and the presence or absence of individual pattern features. The model
parameters are optimized using a novel variational EM approximation. After
learning, the parameters represent, independently of their absolute position,
planar feature arrangements and their variances. A quality measure defined
based on the learned representation then allows for an autonomous
discrimination between regular character patterns and the irregular patterns
making up the dirt. The irregular patterns can thus be removed to clean the
document. For a full Latin alphabet we found that a single page does not
contain sufficiently many character examples. However, we show that a page
containing a smaller number of character types can, even if heavily corrupted
by dirt, be cleaned efficiently and autonomously, solely based on the
structural regularity of the characters it contains. In different examples
using characters from different alphabets, we demonstrate generality of the
approach and discuss its implications for future developments.
|
1201.2630
|
Hybrid GPS-GSM Localization of Automobile Tracking System
|
cs.SY cs.AI
|
An integrated GPS-GSM system is proposed to track vehicles using Google Earth
application. The remote module has a GPS receiver mounted on the moving
vehicle to identify its current position; this position, together with other
parameters acquired from the automobile's data port, is transferred via GSM as
an SMS to a recipient station. The received GPS coordinates are filtered using
a Kalman filter to enhance the accuracy of the measured position. After data
processing, the Google Earth application is used to view the current location
and status of each vehicle. The goal of this system is to support fleet
management, the distribution of police automobiles, and car-theft alerts.
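A minimal sketch of the Kalman filtering step for a single GPS coordinate (a textbook scalar filter under a nearly-static position model; the noise values are illustrative assumptions, not the system's actual tuning):

```python
def kalman_1d(measurements, meas_var, process_var=1e-5, init=0.0, init_var=1.0):
    """Scalar Kalman filter smoothing a stream of noisy GPS readings of
    one coordinate under a (nearly) static position model."""
    x, p = init, init_var
    estimates = []
    for z in measurements:
        p += process_var                 # predict: position drifts slightly
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with the new reading
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

After a handful of readings the filtered estimate is much closer to the true coordinate than any single raw reading, which is the accuracy enhancement the system relies on.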
|
1201.2698
|
Optimal Allocation of Interconnecting Links in Cyber-Physical Systems:
Interdependence, Cascading Failures and Robustness
|
physics.data-an cs.SI physics.soc-ph
|
We consider a cyber-physical system consisting of two interacting networks,
i.e., a cyber-network overlaying a physical-network. It is envisioned that
these systems are more vulnerable to attacks since node failures in one network
may result in (due to the interdependence) failures in the other network,
causing a cascade of failures that would potentially lead to the collapse of
the entire infrastructure. The robustness of interdependent systems against
this sort of catastrophic failure hinges heavily on the allocation of the
(interconnecting) links that connect nodes in one network to nodes in the other
network. In this paper, we characterize the optimum inter-link allocation
strategy against random attacks in the case where the topology of each
individual network is unknown. In particular, we analyze the "regular"
allocation strategy that allots exactly the same number of bi-directional
inter-network links to all nodes in the system. We show, both analytically and
experimentally, that this strategy yields better performance (from a network
resilience perspective) compared to all possible strategies, including
strategies using random allocation, unidirectional inter-links, etc.
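A small simulation sketch of the allocation comparison (a single failure-propagation step after a random attack on one network, not the full cascade analysis; the parameters are illustrative):

```python
import random

def surviving_after_attack(alloc, attacked):
    """Count network-B nodes keeping at least one inter-link to a
    surviving network-A node; alloc maps B-node -> list of A-neighbors."""
    return sum(1 for nbrs in alloc.values()
               if any(a not in attacked for a in nbrs))

def regular_alloc(n, k, rng):
    """Regular strategy: every B node gets exactly k inter-links."""
    return {b: rng.sample(range(n), k) for b in range(n)}

def random_alloc(n, k, rng):
    """Random strategy: n*k links with uniform endpoints, so some B
    nodes may end up with no inter-link at all."""
    alloc = {b: [] for b in range(n)}
    for _ in range(n * k):
        alloc[rng.randrange(n)].append(rng.randrange(n))
    return alloc
```

With n = 1000, k = 2, and half of the A nodes attacked, the regular allocation keeps roughly 75% of the B nodes connected versus roughly 63% for the random one, consistent in this simplified setting with the abstract's claim.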
|
1201.2703
|
Faster Approximate Distance Queries and Compact Routing in Sparse Graphs
|
cs.DS cs.DC cs.NI cs.SI
|
A distance oracle is a compact representation of the shortest distance matrix
of a graph. It can be queried to approximate shortest paths between any pair of
vertices. Any distance oracle that returns paths of worst-case stretch $(2k-1)$
must require space $\Omega(n^{1 + 1/k})$ for graphs of $n$ nodes. The hard cases
that enforce this lower bound are, however, rather dense graphs with average
degree $\Omega(n^{1/k})$.
We present distance oracles that, for sparse graphs, substantially break the
lower bound barrier at the expense of higher query time. For any $1 \leq \alpha
\leq n$, our distance oracles can return stretch-2 paths using $O(m + n^2/\alpha)$
space and stretch-3 paths using $O(m + n^2/\alpha^2)$ space, at the expense of
$O(\alpha m/n)$ query time. By setting appropriate values of $\alpha$, we get the
first distance oracles that have size linear in the size of the graph and
return constant-stretch paths in non-trivial query time. The query time can be
further reduced to $O(\alpha)$ by using an additional $O(m\alpha)$ space for all
our distance oracles, or at the cost of a small constant additive stretch.
We use our stretch 2 distance oracle to present the first compact routing
scheme with worst-case stretch 2. Any compact routing scheme with stretch less
than 2 must require linear memory at some nodes even for sparse graphs; our
scheme, hence, achieves the optimal stretch with non-trivial memory
requirements. Moreover, supported by large-scale simulations on graphs
including the AS-level Internet graph, we argue that our stretch-2 scheme would
be simple and efficient to implement as a distributed compact routing protocol.
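A sketch of the distance-oracle idea using the classic single-landmark estimate d(u,v) <= d(u,l) + d(l,v) (an illustration of the space/stretch trade-off, not the paper's stretch-2 construction):

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via plain BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def landmark_oracle(adj, landmark):
    """Preprocess: O(n) storage, a single BFS tree rooted at the landmark."""
    d = bfs_dist(adj, landmark)
    def query(u, v):
        # Triangle-inequality estimate: never below the true distance,
        # and exact whenever the landmark lies on a shortest u-v path.
        return d[u] + d[v]
    return query
```

Real oracles use many carefully chosen landmark sets to bound the overestimate; this sketch only shows why storing distances to a few hubs yields fast, bounded-stretch answers.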
|
1201.2706
|
Evolution of Ideas: A Novel Memetic Algorithm Based on Semantic Networks
|
cs.NE nlin.AO
|
This paper presents a new type of evolutionary algorithm (EA) based on the
concept of "meme", where the individuals forming the population are represented
by semantic networks and the fitness measure is defined as a function of the
represented knowledge. Our work can be classified as a novel memetic algorithm
(MA), given that (1) it is the units of culture, or information, that are
undergoing variation, transmission, and selection, very close to the original
sense of memetics as it was introduced by Dawkins; and (2) this is different
from existing MAs, where the idea of memetics has been utilized as a means of
local refinement by individual learning after the classical global sampling of an EA.
The individual pieces of information are represented as simple semantic
networks that are directed graphs of concepts and binary relations, going
through variation by memetic versions of operators such as crossover and
mutation, which utilize knowledge from commonsense knowledge bases. In
evaluating this introductory work, as an interesting fitness measure, we focus
on using the structure mapping theory of analogical reasoning from psychology
to evolve pieces of information that are analogous to a given base information.
Considering other possible fitness measures, the proposed representation and
algorithm can serve as a computational tool for modeling memetic theories of
knowledge, such as evolutionary epistemology and cultural selection theory.
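A toy sketch of variation on semantic networks represented as sets of (concept, relation, concept) triples (the operators and the tiny relatedness table are illustrative stand-ins, not the paper's knowledge-base-driven versions):

```python
import random

def crossover(net_a, net_b, rng):
    """Memetic crossover: each offspring triple is inherited from the
    pooled edges of the two parent semantic networks."""
    pool = sorted(net_a | net_b)
    size = min((len(net_a) + len(net_b)) // 2, len(pool))
    return set(rng.sample(pool, size))

def mutate(net, related, rng):
    """Memetic mutation: swap one concept for a commonsense neighbour
    drawn from a small relatedness table."""
    net = set(net)
    head, rel, tail = rng.choice(sorted(net))
    if tail in related:
        net.discard((head, rel, tail))
        net.add((head, rel, rng.choice(related[tail])))
    return net
```

Here it is the represented information that varies and is selected, rather than a parameter vector, which is the sense in which the algorithm is memetic.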
|
1201.2711
|
Ultrametric Model of Mind, I: Review
|
cs.AI
|
We mathematically model Ignacio Matte Blanco's principles of symmetric and
asymmetric being through use of an ultrametric topology. We use for this the
highly regarded 1975 book of this Chilean psychiatrist and psychoanalyst (born
1908, died 1995). Such an ultrametric model corresponds to hierarchical
clustering in the empirical data, e.g. text. We show how an ultrametric
topology can be used as a mathematical model for the structure of the logic
that reflects or expresses Matte Blanco's symmetric being, and hence of the
reasoning and thought processes involved in conscious reasoning or in reasoning
that is lacking, perhaps entirely, in consciousness or awareness of itself. In
a companion paper we study how symmetric (in Matte Blanco's sense)
reasoning can be demarcated in a context of symmetric and asymmetric reasoning
provided by narrative text.
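The defining property can be checked directly: in an ultrametric space every triangle is isosceles with the two longest sides equal, i.e. d(x,z) <= max(d(x,y), d(y,z)). A small sketch (the example distance tables are illustrative):

```python
def is_ultrametric(d):
    """Check the strong triangle inequality for a symmetric distance
    table given as a dict of dicts with d[x][x] == 0."""
    pts = list(d)
    return all(d[x][z] <= max(d[x][y], d[y][z]) + 1e-12
               for x in pts for y in pts for z in pts)
```

Distances read off a dendrogram (merge heights) satisfy the check, while generic distances on a line do not; this is why hierarchical clustering is the natural empirical counterpart of the model.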
|
1201.2719
|
Ultrametric Model of Mind, II: Application to Text Content Analysis
|
cs.AI cs.CL
|
In a companion paper, Murtagh (2012), we discussed how Matte Blanco's work
linked the unrepressed unconscious (in the human) to symmetric logic and
thought processes. We showed how ultrametric topology provides a most useful
representational and computational framework for this. Now we look at the
extent to which we can find ultrametricity in text. We use coherent and
meaningful collections of nearly 1000 texts to show how we can measure inherent
ultrametricity. On the basis of our findings we hypothesize that inherent
ultrametricity is a basis for further exploring unconscious thought processes.
|
1201.2733
|
A remark on the Restricted Isometry Property in Orthogonal Matching
Pursuit
|
cs.IT math.IT
|
This paper demonstrates that if the restricted isometry constant
$\delta_{K+1}$ of the measurement matrix $A$ satisfies $$ \delta_{K+1} <
\frac{1}{\sqrt{K}+1}, $$ then a greedy algorithm called Orthogonal Matching
Pursuit (OMP) can recover every $K$--sparse signal $\mathbf{x}$ in $K$
iterations from $A\mathbf{x}$. By contrast, a matrix is also constructed with the
restricted isometry constant $$ \delta_{K+1} = \frac{1}{\sqrt{K}} $$ such that
OMP cannot recover some $K$-sparse signal $\mathbf{x}$ in $K$ iterations. This
result positively verifies the conjecture given by Dai and Milenkovic in 2009.
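A compact sketch of the OMP algorithm itself (the standard greedy loop with a pure-Python least-squares step; the tiny example dictionary in the test is illustrative and assumes unit-norm columns):

```python
def solve(G, b):
    """Tiny Gaussian elimination with partial pivoting for small systems."""
    k = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(G)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for cc in range(c, k + 1):
                M[r][cc] -= f * M[c][cc]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (M[r][k] - sum(M[r][cc] * x[cc]
                              for cc in range(r + 1, k))) / M[r][r]
    return x

def omp(A, y, K):
    """Orthogonal Matching Pursuit: recover a K-sparse x with y = A x.
    A is a list of m rows over n unit-norm columns."""
    m, n = len(A), len(A[0])
    support, residual = [], y[:]
    for _ in range(K):
        # greedily pick the column most correlated with the residual
        corrs = [abs(sum(A[i][j] * residual[i] for i in range(m)))
                 for j in range(n)]
        j = max((j for j in range(n) if j not in support),
                key=lambda j: corrs[j])
        support.append(j)
        # least squares on the selected columns via the normal equations
        G = [[sum(A[i][a] * A[i][b] for i in range(m)) for b in support]
             for a in support]
        rhs = [sum(A[i][a] * y[i] for i in range(m)) for a in support]
        coef = solve(G, rhs)
        residual = [y[i] - sum(A[i][support[t]] * coef[t]
                               for t in range(len(support)))
                    for i in range(m)]
    x = [0.0] * n
    for t, j in enumerate(support):
        x[j] = coef[t]
    return x
```

The re-solved least-squares step is what makes the pursuit "orthogonal": the residual is orthogonal to every column selected so far, so no column is picked twice.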
|
1201.2766
|
ART : Sub-Logarithmic Decentralized Range Query Processing with
Probabilistic Guarantees
|
cs.DB
|
We focus on range query processing on large-scale, typically distributed
infrastructures, such as clouds of thousands of nodes of shared-datacenters, of
p2p distributed overlays, etc. In such distributed environments, efficient
range query processing is the key for managing the distributed data sets per
se, and for monitoring the infrastructure's resources. We wish to develop an
architecture that can support range queries in such large-scale decentralized
environments and can scale in terms of the number of nodes as well as in terms
of the data items stored. Of course, in the last few years there have been a
number of solutions (mostly from researchers in the p2p domain) for designing
such large-scale systems. However, these are inadequate for our purposes, since
at the envisaged scales the classic logarithmic complexity (for point queries)
is still too expensive while for range queries it is even more disappointing.
In this paper we go one step further and achieve a sub-logarithmic complexity.
We contribute the ART, which outperforms the most popular decentralized
structures, including Chord (and some of its successors), BATON (and its
successor) and Skip-Graphs. We contribute theoretical analysis, backed up by
detailed experimental results, showing that the communication cost of query and
update operations is $O(\log_{b}^2 \log N)$ hops, where the base $b$ is a
doubly-exponential power of two and $N$ is the total number of nodes.
Moreover, ART is a fully dynamic and fault-tolerant structure, which supports
the join/leave node operations in $O(\log \log N)$ expected number of hops,
w.h.p. Our experimental performance studies include a detailed performance
comparison which showcases the improved performance, scalability, and
robustness of ART.
|
1201.2788
|
Inferring global network properties from egocentric data with
applications to epidemics
|
cs.SI math.PR physics.soc-ph
|
Social networks are rarely observed in full detail. In many situations
properties are known for only a sample of the individuals in the network and it
is desirable to induce global properties of the full social network from this
"egocentric" network data. In the current paper we study a few different types
of egocentric data, and show what global network properties are consistent with
those egocentric data. Two global network properties are considered: the size
of the largest connected component in the network (the giant), and secondly,
the possible size of an epidemic outbreak taking place on the network, in which
transmission occurs only between network neighbours, and with probability $p$.
The main conclusion is that in most cases, egocentric data allow for a large
range of possible sizes of the giant and the outbreak. However, there is an
upper bound for the latter. For the case that the network is selected uniformly
among networks with prescribed egocentric data (satisfying some conditions),
the asymptotic size of the giant and the outbreak is characterised.
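As a sketch of the kind of asymptotic characterisation involved, for the special case of Poisson degrees (a standard configuration-model fixed point, not the paper's egocentric-data bounds; the parameters are illustrative):

```python
import math

def giant_fraction(lam, p=1.0, iters=500):
    """Asymptotic fraction S of nodes in the giant (p = 1), or reached by
    a large outbreak with per-edge transmission probability p, for a
    configuration-model network with Poisson(lam) degrees:
    S solves the fixed-point equation S = 1 - exp(-lam * p * S)."""
    s = 1.0
    for _ in range(iters):
        s = 1.0 - math.exp(-lam * p * s)
    return s
```

For mean degree 2 the giant contains about 80% of the nodes, while with transmission probability p = 0.4 the effective mean degree drops below 1 and no large outbreak is possible.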
|