id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1211.1411 | Hidden Variable Theories: Arguments for a Paradigm Shift | quant-ph cs.IT math.IT | Usually the 'hidden variables' of Bell's theorem are supposed to describe the
pair of Bell particles. Here a semantic shift is proposed, namely to attach the
hidden variables to a stochastic medium or field in which the particles move.
It appears that under certain conditions one of the premises of Bell's theorem,
namely 'measurement independence', is not satisfied for such 'background-based'
theories, even if these only involve local interactions. Such theories
therefore do not fall under the restriction of Bell's no-go theorem. A simple
class of such background-based models is given by Ising models, which we
investigate here in the classical and quantum regimes. We also propose to test
background-based models by a straightforward extension of existing experiments.
The present version corrects an error in the preceding version.
|
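The Ising models mentioned in the abstract above are easy to experiment with; a minimal sketch of Metropolis sampling on a classical 1D Ising chain (illustrative only, not the authors' construction — coupling J = 1 and zero external field are assumptions here):

```python
import numpy as np

def metropolis_ising(n=100, beta=1.0, steps=20000, seed=0):
    """Metropolis sampling of a 1D Ising chain with periodic boundaries."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        # Energy change from flipping spin i (J = 1, no external field).
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

spins = metropolis_ising()
magnetization = spins.mean()  # always lies in [-1, 1]
```

Raising `beta` (lower temperature) favors aligned spins; in 1D there is no phase transition, but domains grow visibly longer.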
1211.1441 | Lyapunov Method Based Online Identification of Nonlinear Systems Using
Extreme Learning Machines | cs.SY | Extreme Learning Machine (ELM) is an emerging learning paradigm for nonlinear
regression problems and has shown its effectiveness in the machine learning
community. An important feature of ELM is that the learning speed is extremely
fast thanks to its random projection preprocessing step. This feature is taken
advantage of in designing an online parameter estimation algorithm for
nonlinear dynamic systems in this paper. An ELM-type random projection, a
nonlinear transformation in the hidden layer, and a linear output layer are
taken together as a generalized model structure for a given nonlinear system,
and a parameter update law is constructed based on Lyapunov principles.
Simulation results on a DC motor and the Lorenz oscillator show that the
proposed algorithm is stable and has improved performance over the
online-learning ELM algorithm.
|
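The random-projection idea behind ELM is compact enough to sketch. The following is the plain offline variant (random fixed hidden weights, least-squares output layer); the paper's Lyapunov-based online update law is not reproduced here, and the test function `sin` is an arbitrary illustration:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Offline ELM: random projection + tanh hidden layer, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1D nonlinear function.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Only `beta` is learned, which is why training reduces to one linear solve — the "extremely fast" learning speed the abstract refers to.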
1211.1467 | Edge distribution in generalized graph products | cs.DM cs.IT math.CO math.IT | Given a graph $G=(V,E)$, an integer $k$, and a function $f_G:V^k \times V^k
\to {0,1}$, the $k^{th}$ graph product of $G$ w.r.t $f_G$ is the graph with
vertex set $V^k$, and an edge between two vertices $x=(x_1,...,x_k)$ and
$y=(y_1,...,y_k)$ iff $f_G(x,y)=1$. Graph products are a basic combinatorial
object, widely studied and used in different areas such as hardness of
approximation, information theory, etc. We study graph products for functions
$f_G$ of the form $f_G(x,y)=1$ iff there are at least $t$ indices $i \in [k]$
s.t. $(x_i,y_i)\in E$, where $t \in [k]$ is a fixed parameter in $f_G$. This
framework generalizes the well-known graph tensor-product (obtained for $t=k$)
and the graph or-product (obtained for $t=1$). The property that interests us
is the edge distribution in such graphs. We show that if $G$ has a spectral
gap, then the number of edges connecting "large-enough" sets in $G^k$ is
"well-behaved", namely, it is close to the expected value, had the sets been
random. We extend our results to bi-partite graph products as well. For a
bi-partite graph $G=(X,Y,E)$, the $k^{th}$ bi-partite graph product of $G$
w.r.t $f_G$ is the bi-partite graph with vertex sets $X^k$ and $Y^k$ and edges
between $x \in X^k$ and $y \in Y^k$ iff $f_G(x,y)=1$. Finally, for both types
of graph products, optimality is asserted using the "Converse to the Expander
Mixing Lemma" obtained by Bilu and Linial in 2006. A byproduct of our proof
technique is a new explicit construction of a family of co-spectral graphs.
|
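The threshold rule defining $f_G$ in the abstract above translates directly into code; a small sketch for $G = K_2$ and $k = 2$ (the adjacency matrix and vertex labels are illustrative choices):

```python
from itertools import product

def t_product_edge(adj, x, y, t):
    """f_G(x, y) = 1 iff at least t coordinates (x_i, y_i) are edges of G."""
    return sum(adj[xi][yi] for xi, yi in zip(x, y)) >= t

# Example: G = K_2, a single edge {0, 1}, given as an adjacency matrix.
adj = [[0, 1],
       [1, 0]]
V = [0, 1]
k = 2
# t = k recovers the tensor product; t = 1 recovers the or-product.
tensor_edges = [(x, y) for x in product(V, repeat=k)
                for y in product(V, repeat=k) if t_product_edge(adj, x, y, 2)]
or_edges = [(x, y) for x in product(V, repeat=k)
            for y in product(V, repeat=k) if t_product_edge(adj, x, y, 1)]
```

For this toy graph the tensor product keeps only ordered pairs that flip both coordinates (4 pairs), while the or-product keeps every pair except $y = x$ (12 pairs), matching the $t = k$ and $t = 1$ special cases.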
1211.1482 | Gender Recognition in Walk Gait through 3D Motion by Quadratic Bezier
Curve and Statistical Techniques | cs.CV | Motion capture is the process of recording the movement of objects or people.
It is used in military, entertainment, sports, and medical applications, and
for validation of computer vision[2] and robotics. In filmmaking and video game
development, it refers to recording actions of human actors, and using that
information to animate digital character models in 2D or 3D computer animation.
When it includes face and fingers or captures subtle
|
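The abstract above is truncated, but the quadratic Bezier curve named in the title has a simple closed form; a hedged sketch (the 2D control points are invented for illustration and are not from the paper's gait pipeline):

```python
def quadratic_bezier(p0, p1, p2, t):
    """B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2, for t in [0, 1]."""
    return tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
                 for a, b, c in zip(p0, p1, p2))

# Sample a smooth 2D trajectory between two endpoints with one control point.
curve = [quadratic_bezier((0.0, 0.0), (1.0, 2.0), (2.0, 0.0), t / 10)
         for t in range(11)]
```

The curve interpolates the two endpoints and is pulled toward, but does not pass through, the middle control point — a common way to smooth noisy joint trajectories.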
1211.1513 | K-Plane Regression | cs.LG | In this paper, we present a novel algorithm for piecewise linear regression
which can learn continuous as well as discontinuous piecewise linear functions.
The main idea is to repeatedly partition the data and learn a linear model in
each partition. While a simple algorithm incorporating this idea does not work
well, an interesting modification results in a good algorithm. The proposed
algorithm is similar in spirit to $k$-means clustering algorithm. We show that
our algorithm can also be viewed as an EM algorithm for maximum likelihood
estimation of parameters under a reasonable probability model. We empirically
demonstrate the effectiveness of our approach by comparing its performance with
state-of-the-art regression learning algorithms on some real-world datasets.
|
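The partition-and-fit loop described above can be sketched as a k-means-style alternation. This is the *basic* scheme only — the abstract notes this simple form alone does not work well, and the paper's improved variant is not reproduced here; the test data `y = |x|` is an illustrative two-plane function:

```python
import numpy as np

def k_plane_regression(X, y, k=2, iters=20, seed=0):
    """Alternate: fit a linear model per partition, then reassign each point
    to the plane with the smallest squared residual."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    Xb = np.hstack([X, np.ones((len(X), 1))])    # affine term
    for _ in range(iters):
        planes = []
        for j in range(k):
            mask = labels == j
            if mask.sum() < Xb.shape[1]:         # keep degenerate groups alive
                planes.append(np.zeros(Xb.shape[1]))
                continue
            w, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
            planes.append(w)
        resid = np.stack([(y - Xb @ w) ** 2 for w in planes], axis=1)
        labels = resid.argmin(axis=1)
    return np.array(planes), labels

# Piecewise-linear data: y = |x| is two planes meeting at 0.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.abs(X).ravel()
planes, labels = k_plane_regression(X, y, k=2)
```

The reassignment step is the analogue of the k-means assignment step, with squared regression residual playing the role of distance to a centroid.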
1211.1526 | Explosion prediction of oil gas using SVM and Logistic Regression | cs.CE cs.LG | The prevention of dangerous chemical accidents is a primary problem of
industrial manufacturing. In the accidents of dangerous chemicals, the oil gas
explosion plays an important role. The essential task of the explosion
prevention is to estimate the better explosion limit of a given oil gas. In
this paper, Support Vector Machines (SVM) and Logistic Regression (LR) are used
to predict the explosion of oil gas. LR can get the explicit probability
formula of explosion, and the explosive range of the concentrations of oil gas
according to the concentration of oxygen. Meanwhile, SVM gives higher accuracy
of prediction. Furthermore, considering the practical requirements, the effects
of penalty parameter on the distribution of two types of errors are discussed.
|
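The "explicit probability formula" that LR provides can be illustrated with a plain gradient-descent logistic regression on synthetic data; the feature names, threshold, and data are invented for illustration and have nothing to do with the paper's dataset:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Plain gradient descent on the logistic loss; returns weights w, bias b."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic data: "explosion" when combined concentration is high
# (features and threshold are illustrative, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 2))            # [gas conc., oxygen conc.]
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)
w, b = fit_logistic(X, y)
prob = sigmoid(X @ w + b)                       # explicit probability of explosion
acc = np.mean((prob > 0.5) == (y == 1))
```

Thresholding `prob` at a chosen level recovers an explosive range of concentrations, which is the interpretability advantage of LR the abstract contrasts with SVM's higher accuracy.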
1211.1544 | Image denoising with multi-layer perceptrons, part 1: comparison with
existing algorithms and with bounds | cs.CV cs.LG | Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. The best currently available denoising methods
approximate this mapping with cleverly engineered algorithms. In this work we
attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs)
applied to image patches. We will show that by training on large image
databases we are able to outperform the current state-of-the-art image
denoising methods. In addition, our method achieves results that are superior
to one type of theoretical bound and goes a large way toward closing the gap
with a second type of theoretical bound. Our approach is easily adapted to less
extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG
artifacts, salt-and-pepper noise and noise resembling stripes, for which we
achieve excellent results as well. We will show that combining a block-matching
procedure with MLPs can further improve the results on certain images. In a
second paper, we detail the training trade-offs and the inner mechanisms of our
MLPs.
|
1211.1550 | A Riemannian geometry for low-rank matrix completion | cs.LG cs.NA math.OC | We propose a new Riemannian geometry for fixed-rank matrices that is
specifically tailored to the low-rank matrix completion problem. Exploiting the
degree of freedom of a quotient space, we tune the metric on our search space
to the particular least-squares cost function. At one level, this illustrates
in a novel way how to exploit the versatile framework of optimization on
quotient manifolds. At another level, our algorithm can be considered an improved
version of LMaFit, the state-of-the-art Gauss-Seidel algorithm. We develop
necessary tools needed to perform both first-order and second-order
optimization. In particular, we propose gradient descent schemes (steepest
descent and conjugate gradient) and trust-region algorithms. We also show that,
thanks to the simplicity of the cost function, it is numerically cheap to
perform an exact linesearch given a search direction, which makes our
algorithms competitive with the state-of-the-art on standard low-rank matrix
completion instances.
|
1211.1552 | Image denoising with multi-layer perceptrons, part 2: training
trade-offs and analysis of their mechanisms | cs.CV cs.LG | Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. In another paper, we show that multi-layer perceptrons
can achieve outstanding image denoising performance for various types of noise
(additive white Gaussian noise, mixed Poisson-Gaussian noise, JPEG artifacts,
salt-and-pepper noise and noise resembling stripes). In this work we discuss in
detail which trade-offs have to be considered during the training procedure. We
will show how to achieve good results and which pitfalls to avoid. By analysing
the activation patterns of the hidden units we are able to make observations
regarding the functioning principle of multi-layer perceptrons trained for
image denoising.
|
1211.1565 | Data Shapes and Data Transformations | cs.DB | Nowadays, information management systems deal with data originating from
different sources including relational databases, NoSQL data stores, and Web
data formats, varying not only in terms of data formats, but also in the
underlying data model. Integrating data from heterogeneous data sources is a
time-consuming and error-prone engineering task; part of this process requires
that the data be transformed from its original form into other forms, a step
repeated throughout the life cycle. With this report we provide a principled
overview of the fundamental data shapes (tabular, tree, and graph) as well as
transformations between them, in order to gain a better understanding for
performing said transformations more efficiently and effectively.
|
1211.1572 | Embedding grayscale halftone pictures in QR Codes using Correction Trees | cs.IT cs.CR cs.MM math.IT | Barcodes such as QR codes have brought encoded messages into our everyday
life, which suggests attaching to them a second layer of information, directly
available to the human receiver for informational or marketing purposes. We
discuss the general problem of using codes with chosen statistical constraints,
for example reproducing a given grayscale picture using the halftone technique.
If both sender and receiver know these constraints, the optimal capacity can
easily be approached with an entropy coder. The problem is that here only the
sender knows them; we refer to such scenarios as constrained coding. The
Kuznetsov-Tsybakov problem, in which only the sender knows which bits are
fixed, can be seen as a special case, which surprisingly approaches the same
capacity as if both sides knew the constraints. We analyze Correction Trees to
approach the analogous capacity in the general case of weaker, statistical
constraints, which allows them to be applied to all bits. Finding a satisfying
coding is similar to finding the proper correction in an error-correction
problem, but instead of a single guaranteed possibility, some are now only
statistically expected. While standard steganography hides information in the
least significant bits, here we create codes resembling a given picture,
hiding information in the freedom of rendering grayness with black and white
pixels via the halftone technique. We also discuss combining this approach
with error correction and its application to the rate-distortion problem.
|
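Halftoning itself — rendering grayness with black and white pixels — can be sketched with classic Floyd-Steinberg error diffusion. This is only the rendering step, not the constrained-coding scheme of the paper above:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image in [0, 1], diffusing each pixel's
    quantization error to its neighbours so local averages match the input."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.3)   # uniform 30% gray patch
ht = floyd_steinberg(gray)      # binary image whose mean stays close to 0.3
```

The "freedom" the abstract exploits is visible here: many different binary patterns reproduce the same local gray level, and that degeneracy is where message bits can be hidden.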
1211.1599 | Low-dimensionality energy landscapes: Magnetic switching mechanisms and
rates | cond-mat.mtrl-sci cs.CE | In this paper we propose a new method for the study and visualization of
dynamic processes in magnetic nanostructures, and for the accurate calculation
of rates for such processes. The method is illustrated for the case of
switching of a grain of an exchange-coupled recording medium, which switches
through domain wall nucleation and motion, but is generalizable to other rate
processes such as vortex formation and annihilation. The method involves
calculating the most probable (lowest energy) switching path and projecting the
motion onto that path. The motion is conveniently visualized in a
two-dimensional (2D) projection parameterized by the dipole and quadrupole
moment of the grain. The motion along that path can then be described by a
Langevin equation, and its rate can be computed by the classic method of
Kramers. The rate can be evaluated numerically, or in an analytic approximation
- interestingly, the analytic result for domain-wall switching is very similar
to that obtained by Brown in 1963 for coherent switching, except for a factor
proportional to the domain-wall volume. Thus in addition to its lower
coercivity, an exchange-coupled medium has the additional advantage (over a
uniform medium) of greater thermal stability, for a fixed energy barrier.
|
1211.1621 | Cram\'er-Rao bounds for synchronization of rotations | cs.IT math.IT | Synchronization of rotations is the problem of estimating a set of rotations
R_i in SO(n), i = 1, ..., N, based on noisy measurements of relative rotations
R_i R_j^T. This fundamental problem has found many recent applications, most
importantly in structural biology. We provide a framework to study
synchronization as estimation on Riemannian manifolds for arbitrary n under a
large family of noise models. The noise models we address encompass zero-mean
isotropic noise, and we develop tools for Gaussian-like as well as heavy-tail
types of noise in particular. As a main contribution, we derive the
Cram\'er-Rao bounds of synchronization, that is, lower-bounds on the variance
of unbiased estimators. We find that these bounds are structured by the
pseudoinverse of the measurement graph Laplacian, where edge weights are
proportional to measurement quality. We leverage this to provide interpretation
in terms of random walks and visualization tools for these bounds in both the
anchored and anchor-free scenarios. Similar bounds previously established were
limited to rotations in the plane and Gaussian-like noise.
|
1211.1622 | MISO Broadcast Channel with Delayed and Evolving CSIT | cs.IT math.IT | The work considers the two-user MISO broadcast channel with gradual and
delayed accumulation of channel state information at the transmitter (CSIT),
and addresses the question of how much feedback is necessary, and when, in
order to achieve a certain degrees-of-freedom (DoF) performance. Motivated by
limited-capacity feedback links that may not immediately convey perfect CSIT,
and focusing on the block fading scenario, we consider a progressively
increasing CSIT quality as time progresses across the coherence period (T
channel uses - evolving current CSIT), or at any time after (delayed CSIT).
Specifically, for any set of feedback quality exponents a_t, t=1,...,T,
describing the high-SNR rates-of-decay of the mean square error of the current
CSIT estimates at time t<=T (during the coherence period), the work describes
the optimal DoF region in several different evolving CSIT settings, including
the setting with perfect delayed CSIT, the asymmetric setting where the quality
of feedback differs from user to user, and the DoF region in the presence of
imperfect delayed CSIT corresponding to a limited number of overall feedback
bits. These results are supported by novel
multi-phase precoding schemes that utilize gradually improving CSIT.
The approach here naturally incorporates different settings such as the
perfect-delayed CSIT setting of Maddah-Ali and Tse, the imperfect current CSIT
setting of Yang et al. and of Gou and Jafar, the asymmetric setting of Maleki
et al., as well as the not-so-delayed CSIT setting of Lee and Heath.
|
1211.1634 | Annotations for Supporting Collaboration through Artifacts | cs.HC cs.SI | Shared artifacts and environments play a prominent role in shaping the
collaboration between their users. This article describes this role and
explains how annotations can provide a bridge between direct communication and
collaboration through artifacts. The various functions of annotations are
discussed through examples that represent some of the important trends in
annotation research. Ultimately, some of the research issues are briefly
discussed, followed by my perspective on the future of asynchronous distributed
collaborative systems with respect to annotations.
|
1211.1643 | Hybrid Behaviour of Markov Population Models | cs.SY cs.MA cs.PF q-bio.QM | We investigate the behaviour of population models written in Stochastic
Concurrent Constraint Programming (sCCP), a stochastic extension of Concurrent
Constraint Programming. In particular, we focus on models from which we can
define a semantics of sCCP both in terms of Continuous Time Markov Chains
(CTMC) and in terms of Stochastic Hybrid Systems, in which some populations are
approximated continuously, while others are kept discrete. We will prove the
correctness of the hybrid semantics from the point of view of the limiting
behaviour of a sequence of models for increasing population size. More
specifically, we prove that, under suitable regularity conditions, the sequence
of CTMC constructed from sCCP programs for increasing population size converges
to the hybrid system constructed by means of the hybrid semantics. We
investigate in particular what happens for sCCP models in which some
transitions are guarded by boolean predicates or in the presence of
instantaneous transitions.
|
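The CTMCs underlying such population models can be simulated exactly with Gillespie's stochastic simulation algorithm; a hedged sketch for a simple birth-death population (purely illustrative — the rates and model are invented and have no connection to sCCP syntax):

```python
import numpy as np

def gillespie_birth_death(n0=10, birth=1.0, death=0.1, t_max=10.0, seed=0):
    """Exact CTMC simulation: constant birth rate, per-capita death rate."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, pops = [t], [n]
    while t < t_max:
        rates = np.array([birth, death * n])   # [birth, death] propensities
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)      # waiting time to next event
        n += 1 if rng.random() < rates[0] / total else -1
        times.append(t)
        pops.append(n)
    return times, pops

times, pops = gillespie_birth_death()
```

The hybrid semantics the abstract studies would replace the large, fast populations in such a chain by continuous (ODE) approximations while keeping small populations discrete like this.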
1211.1650 | Different Operating Systems Compatible for Image Prepress Process in
Color Management: Analysis and Performance Testing | cs.CV | Image computing has become a real catchphrase over the past few years and the
interpretations of the meaning of the term vary greatly. The image computing
market is currently evolving rapidly, with high growth prospects and almost
daily announcements of new devices and application platforms, which results in
an increasing diversification of devices, operating system and development
platforms. Compared to more traditional information technology markets like the
one of desktop computing, mobile computing is much less consolidated and
neither standards nor even industry standards have yet been established. There
are various platforms and interfaces which may be used to perform the desired
tasks through the device. We have tried to compare the various mobile operating
systems and their trade-offs.
|
1211.1654 | A New Randomness Evaluation Method with Applications to Image Shuffling
and Encryption | cs.CR cs.CV stat.AP | This letter discusses the problem of testing the degree of randomness within
an image, particularly for a shuffled or encrypted image. Its key contributions
are: 1) a mathematical model of perfectly shuffled images; 2) the derivation of
the theoretical distribution of pixel differences; 3) a new $Z$-test based
approach to differentiate whether or not a test image is perfectly shuffled;
and 4) a randomized algorithm to unbiasedly evaluate the degree of randomness
within a given image. Simulation results show that the proposed method is
robust and effective in evaluating the degree of randomness within an image,
and may often be more suitable for image applications than commonly used
testing schemes designed for binary data like NIST 800-22. The developed method
may be also useful as a first step in determining whether or not a shuffling or
encryption scheme is suitable for a particular cryptographic application.
|
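The flavor of a Z-test on pixel differences can be sketched generically. The paper derives the exact distribution of pixel differences for perfectly shuffled images; the sketch below only tests the mean of non-overlapping adjacent-pixel differences under an assumed i.i.d. uniform pixel model, for which E|U - V| has a simple closed form:

```python
import numpy as np

def z_statistic(sample, mu0):
    """Two-sided Z-statistic for H0: E[sample] = mu0, using the sample std."""
    sample = np.asarray(sample, dtype=float)
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    return (sample.mean() - mu0) / se

# Non-overlapping horizontal pixel-difference pairs of an i.i.d. uniform
# 8-bit "image" (a stand-in for a perfectly shuffled image).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(float)
d = np.abs(img[:, 0::2] - img[:, 1::2]).ravel()
# For independent uniform pixels on {0,...,255}: E|U - V| = (256^2 - 1)/(3*256).
mu0 = (256 ** 2 - 1) / (3 * 256)
z = z_statistic(d, mu0)   # a large |z| suggests the image is not "random"
```

A natural image fed through the same statistic would yield a huge |z|, since neighbouring pixels are strongly correlated and their differences are far smaller than the uniform model predicts.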
1211.1656 | James-Stein Type Center Pixel Weights for Non-Local Means Image
Denoising | cs.CV | Non-Local Means (NLM) and variants have been proven to be effective and
robust in many image denoising tasks. In this letter, we study the parameter
selection problem of center pixel weights (CPW) in NLM. Our key contributions
are: 1) we give a novel formulation of the CPW problem from the statistical
shrinkage perspective; 2) we introduce the James-Stein type CPWs for NLM; and
3) we propose a new adaptive CPW that is locally tuned for each image pixel.
Our experimental results show that, compared to existing CPW solutions, the
newly proposed CPWs are more robust and effective under various noise levels.
In particular, NLM with the James-Stein type CPWs attains higher means with
smaller variances in terms of peak signal-to-noise ratio (PSNR), implying it
improves NLM robustness and makes it less sensitive to parameter selection.
|
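To see where the center pixel weight enters NLM, here is a minimal single-pixel NLM with `cpw` as an explicit scalar parameter. This is *not* the James-Stein CPW from the paper — it only shows the slot that their shrinkage estimator fills; patch/search sizes and `h` are arbitrary illustrative choices:

```python
import numpy as np

def nlm_pixel(img, y, x, cpw, patch=1, search=5, h=50.0):
    """Denoise one pixel with Non-Local Means; `cpw` replaces the self-weight
    (which would otherwise be exp(0) = 1 and dominate the average)."""
    H, W = img.shape
    p0 = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    num = cpw * img[y, x]
    den = cpw
    for j in range(max(patch, y - search), min(H - patch, y + search + 1)):
        for i in range(max(patch, x - search), min(W - patch, x + search + 1)):
            if (j, i) == (y, x):
                continue
            p = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
            w = np.exp(-np.sum((p0 - p) ** 2) / (h ** 2))  # patch similarity
            num += w * img[j, i]
            den += w
    return num / den

rng = np.random.default_rng(0)
img = np.full((16, 16), 100.0) + rng.normal(0, 5, size=(16, 16))
val = nlm_pixel(img, 8, 8, cpw=0.5)   # flat noisy patch -> value near 100
```

Choosing `cpw` per pixel, as the abstract's adaptive variant does, trades off fidelity to the noisy observation against the smoothing of the non-local average.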
1211.1660 | Secret-Key Agreement Capacity over Reciprocal Fading Channels: A
Separation Approach | cs.IT math.IT | Fundamental limits of secret-key agreement over reciprocal wireless channels
are investigated. We consider a two-way block-fading channel where the channel
gains in the forward and reverse links between the legitimate terminals are
correlated. The channel gains between the legitimate terminals are not revealed
to any terminal, whereas the channel gains of the eavesdropper are revealed to
the eavesdropper. We propose a two-phase transmission scheme, that reserves a
certain portion of each coherence block for channel estimation, and the
remainder of the coherence block for correlated source generation. The
resulting secret-key involves contributions of both channel sequences and
source sequences, with the contribution of the latter becoming dominant as the
coherence period increases. We also establish an upper bound on the secret-key
capacity, which has a form structurally similar to the lower bound. Our upper
and lower bounds coincide in the limit of high signal-to-noise-ratio (SNR) and
large coherence period, thus establishing the secret-key agreement capacity in
this asymptotic regime. Numerical results indicate that the proposed scheme
achieves significant gains over training-only schemes, even for moderate SNR
and small coherence periods, thus implying the necessity of randomness-sharing
in practical secret-key generation systems.
|
1211.1680 | S2LET: A code to perform fast wavelet analysis on the sphere | cs.IT astro-ph.IM math.IT | We describe S2LET, a fast and robust implementation of the scale-discretised
wavelet transform on the sphere. Wavelets are constructed through a tiling of
the harmonic line and can be used to probe spatially localised, scale-dependent
features of signals on the sphere. The scale-discretised wavelet transform was
developed previously and reduces to the needlet transform in the axisymmetric
case. The reconstruction of a signal from its wavelet coefficients is made
exact here through the use of a sampling theorem on the sphere. Moreover, a
multiresolution algorithm is presented to capture all information of each
wavelet scale in the minimal number of samples on the sphere. In addition S2LET
supports the HEALPix pixelisation scheme, in which case the transform is not
exact but nevertheless achieves good numerical accuracy. The core routines of
S2LET are written in C and have interfaces in Matlab, IDL and Java. Real
signals can be written to and read from FITS files and plotted as Mollweide
projections. The S2LET code is made publicly available, is extensively
documented, and ships with several examples in the four languages supported. At
present the code is restricted to axisymmetric wavelets but will be extended to
directional, steerable wavelets in a future release.
|
1211.1690 | Learning Monocular Reactive UAV Control in Cluttered Natural
Environments | cs.RO cs.CV cs.LG cs.SY | Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly
straightforward, as expensive sensors and monitoring devices can be employed.
In contrast, obstacle avoidance remains a challenging task for Micro Aerial
Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike
large vehicles, MAVs can only carry very light sensors, such as cameras, making
autonomous navigation through obstacles much more challenging. In this paper,
we describe a system that navigates a small quadrotor helicopter autonomously
at low altitude through natural forest environments. Using only a single cheap
camera to perceive the environment, we are able to maintain a constant velocity
of up to 1.5m/s. Given a small set of human pilot demonstrations, we use recent
state-of-the-art imitation learning techniques to train a controller that can
avoid trees by adapting the MAV's heading. We demonstrate the performance of our
system in a more controlled environment indoors, and in real natural forest
environments outdoors.
|
1211.1694 | Does a Daily Deal Promotion Signal a Distressed Business? An Empirical
Investigation of Small Business Survival | cs.CE stat.AP | In the last four years, daily deals have emerged from nowhere to become a
multi-billion dollar industry world-wide. Daily deal sites such as Groupon and
Livingsocial offer products and services at deep discounts to consumers via
email and social networks. As the industry matures, there are many questions
regarding the impact of daily deals on the marketplace. Important questions in
this regard concern the reasons why businesses decide to offer daily deals and
their longer-term impact on businesses. In the present paper, we investigate
whether the unobserved factors that make marketers run daily deals are
correlated with the unobserved factors that influence the business. In
particular, we employ the framework of seemingly unrelated regression to model
the correlation between the errors in predicting whether a business uses a
daily deal and the errors in predicting the business' survival. Our analysis
covers the survival of 985 small businesses that offered daily deals
between January and July 2011 in the city of Chicago. Our results indicate that
there is a statistically significant correlation between the unobserved factors
that influence the business' decision to offer a daily deal and the unobserved
factors that impact its survival. Furthermore, our results indicate that the
correlation coefficient is significant in certain business categories (e.g.
restaurants).
|
1211.1716 | Blind Signal Separation in the Presence of Gaussian Noise | cs.LG cs.DS stat.ML | A prototypical blind signal separation problem is the so-called cocktail
party problem, with n people talking simultaneously and n different microphones
within a room. The goal is to recover each speech signal from the microphone
inputs. Mathematically this can be modeled by assuming that we are given
samples from an n-dimensional random variable X=AS, where S is a vector whose
coordinates are independent random variables corresponding to each speaker. The
objective is to recover the matrix A^{-1} given random samples from X. A range
of techniques collectively known as Independent Component Analysis (ICA) have
been proposed to address this problem in the signal processing and machine
learning literature. Many of these techniques are based on using the kurtosis
or other cumulants to recover the components.
In this paper we propose a new algorithm for solving the blind signal
separation problem in the presence of additive Gaussian noise, when we are
given samples from X=AS+\eta, where \eta is drawn from an unknown, not
necessarily spherical n-dimensional Gaussian distribution. Our approach is
based on a method for decorrelating a sample with additive Gaussian noise under
the assumption that the underlying distribution is a linear transformation of a
distribution with independent components. Our decorrelation routine is based on
the properties of cumulant tensors and can be combined with any standard
cumulant-based method for ICA to get an algorithm that is provably robust in
the presence of Gaussian noise. We derive polynomial bounds for the sample
complexity and error propagation of our method.
|
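The decorrelation (whitening) step common to ICA pipelines is easy to sketch. Note this is only the *noise-free* version: the paper's contribution is precisely how to decorrelate when unknown Gaussian noise is added, via cumulant tensors, which is not shown here; the mixing matrix below is an arbitrary example:

```python
import numpy as np

def whiten(X):
    """Return (W, Xw) with Xw = W @ (X - mean) having identity covariance.
    X has shape (n_features, n_samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / (X.shape[1] - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric cov^{-1/2}
    return W, W @ Xc

# Mix two independent non-Gaussian sources: X = A S.
rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(2, 5000))          # independent sources
A = np.array([[2.0, 1.0], [1.0, 3.0]])          # unknown mixing matrix
W, Xw = whiten(A @ S)
cov_w = Xw @ Xw.T / (Xw.shape[1] - 1)           # approximately the identity
```

After whitening, the remaining unknown is only an orthogonal rotation, which kurtosis- or cumulant-based ICA methods then resolve.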
1211.1722 | Inverse problems in approximate uniform generation | cs.CC cs.DS cs.LG | We initiate the study of \emph{inverse} problems in approximate uniform
generation, focusing on uniform generation of satisfying assignments of various
types of Boolean functions. In such an inverse problem, the algorithm is given
uniform random satisfying assignments of an unknown function $f$ belonging to a
class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability
distribution $D$ which is $\epsilon$-close, in total variation distance, to the
uniform distribution over $f^{-1}(1)$.
Positive results: We prove a general positive result establishing sufficient
conditions for efficient inverse approximate uniform generation for a class
$\mathcal{C}$. We define a new type of algorithm called a \emph{densifier} for $\mathcal{C}$, and
show (roughly speaking) how to combine (i) a densifier, (ii) an approximate
counting / uniform generation algorithm, and (iii) a Statistical Query learning
algorithm, to obtain an inverse approximate uniform generation algorithm. We
apply this general result to obtain a $\mathrm{poly}(n,1/\epsilon)$-time algorithm for the
class of halfspaces, and a $\mathrm{quasipoly}(n,1/\epsilon)$-time algorithm for the class
of $\mathrm{poly}(n)$-size DNF formulas.
Negative results: We prove a general negative result establishing that the
existence of certain types of signature schemes in cryptography implies the
hardness of certain inverse approximate uniform generation problems. This
implies that there are no {subexponential}-time inverse approximate uniform
generation algorithms for 3-CNF formulas; for intersections of two halfspaces;
for degree-2 polynomial threshold functions; and for monotone 2-CNF formulas.
Finally, we show that there is no general relationship between the complexity
of the "forward" approximate uniform generation problem and the complexity of
the inverse problem for a class $\mathcal{C}$ -- it is possible for either one to be
easy while the other is hard.
|
1211.1728 | Maximum Distance Separable Codes for Symbol-Pair Read Channels | cs.IT math.CO math.IT | We study (symbol-pair) codes for symbol-pair read channels introduced
recently by Cassuto and Blaum (2010). A Singleton-type bound on symbol-pair
codes is established and infinite families of optimal symbol-pair codes are
constructed. These codes are maximum distance separable (MDS) in the sense that
they meet the Singleton-type bound. In contrast to classical codes, where all
known q-ary MDS codes have length O(q), we show that q-ary MDS symbol-pair
codes can have length \Omega(q^2). In addition, we completely determine the
existence of MDS symbol-pair codes for certain parameters.
|
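The symbol-pair read model of Cassuto and Blaum is simple to state in code: each position is read together with its cyclic successor, and pair distance is the Hamming distance between pair-read vectors. A small sketch (the example vectors are illustrative):

```python
def pair_vector(x):
    """Cyclic pair-read of x: ((x_0,x_1), (x_1,x_2), ..., (x_{n-1},x_0))."""
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def pair_distance(x, y):
    """Hamming distance between the pair-read vectors of x and y."""
    return sum(a != b for a, b in zip(pair_vector(x), pair_vector(y)))

x = [0, 0, 0, 0]
y = [0, 1, 0, 0]
# A single symbol error corrupts the two overlapping pair-reads touching it.
d = pair_distance(x, y)   # -> 2
```

This doubling effect — one symbol error affecting two pair-reads — is what makes pair distance, and hence the Singleton-type bound the abstract refers to, differ from the classical Hamming case.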
1211.1733 | Linear Antenna Array with Suppressed Sidelobe and Sideband Levels using
Time Modulation | cs.NE | In this paper, the goal is to achieve an ultra low sidelobe level (SLL) and
sideband levels (SBL) of a time modulated linear antenna array. The approach
followed here is not to give fixed level of excitation to the elements of an
array, but to change it dynamically with time. The excitation levels of the
different array elements over time are varied to get the low sidelobe and
sideband levels. The mathematics of obtaining the SLL and SBL is furnished in
detail, and simulations are carried out using these results. The excitation
pattern over time is optimized using a Genetic Algorithm (GA). Since the
amplitudes of the element excitations are varied within a finite limit, the
results show better sidelobe and sideband suppression than previous
time-modulated arrays with uniform amplitude excitations.
|
1211.1752 | 3D Scene Grammar for Parsing RGB-D Pointclouds | cs.CV | We pose 3D scene-understanding as a problem of parsing in a grammar. A
grammar helps us capture the compositional structure of real-word objects,
e.g., a chair is composed of a seat, a back-rest and some legs. Having multiple
rules for an object helps us capture structural variations in objects, e.g., a
chair can optionally also have arm-rests. Finally, having rules to capture
composition at different levels helps us formulate the entire scene-processing
pipeline as a single problem of finding most likely parse-tree---small segments
combine to form parts of objects, parts to objects and objects to a scene. We
attach a generative probability model to our grammar by having a
feature-dependent probability function for every rule. We evaluated it by
extracting labels for every segment and comparing the results with the
state-of-the-art segment-labeling algorithm. Our algorithm was outperformed by
the state-of-the-art method. However, our model can be trained very efficiently
(within seconds), and it scales only linearly with the number of rules in
the grammar. Also, we think that this is an important problem for the 3D vision
community. So, we are releasing our dataset and related code.
|
1211.1780 | Annotations, Collaborative Tagging, and Searching Mathematics in
E-Learning | cs.IR cs.DL | This paper presents a new framework for adding semantics into an e-learning
system. The proposed approach relies on two principles. The first principle is
the automatic addition of semantic information when creating the mathematical
contents. The second principle is the collaborative tagging and annotation of
the e-learning contents and the use of an ontology to categorize the e-learning
contents. The proposed system encodes the mathematical contents using
presentation MathML with RDFa annotations. The system allows students to
highlight and annotate specific parts of the e-learning contents. The objective
is to add meaning into the e-learning contents, to add relationships between
contents, and to create a framework to facilitate searching the contents. This
semantic information can be used to answer semantic queries (e.g., SPARQL) to
retrieve the information requested by a user. This work is implemented as
embedded code in the Moodle e-learning system.
|
1211.1788 | An Adaptive parameter free data mining approach for healthcare
application | cs.DB | In today's world, healthcare is the most important factor affecting human
life, yet heavy workloads often leave little time for personal healthcare. The
proposed system acts as a preventive measure for determining whether a person
is fit or unfit based on the person's historical and real-time data, by
applying clustering algorithms such as K-means and D-stream. The density-based
D-stream algorithm overcomes drawbacks of the K-means algorithm. By calculating
their performance measures, we determine the effectiveness and efficiency of
both algorithms. Both clustering algorithms are applied to patients' biomedical
historical databases. To check the correctness of both algorithms, we apply
them to patients' current biomedical data.
|
1211.1790 | Link Prediction in Complex Networks by Multi Degree
Preferential-Attachment Indices | physics.soc-ph cs.SI | In principle, the rules of link formation of a network model can be
considered as a kind of link prediction algorithm. By revisiting the
preferential attachment mechanism for generating a scale-free network, here we
propose a class of preferential attachment indices which are different from the
previous one. Traditionally, the preferential attachment index is defined as
the product of the related nodes' degrees, while the new indices define the
similarity score of a pair of nodes by either the maximum of the two nodes'
degrees or the sum of their degrees. Extensive experiments are
carried out on fourteen real-world networks. Compared with the traditional
preferential attachment index, the new ones, especially the
degree-summarization similarity index, can provide more accurate prediction on
most of the networks. Due to the improved prediction accuracy and low
computational complexity, the proposed preferential attachment indices may
help guide the mining of unknown links in incomplete networks.
|
1211.1799 | Algorithm for Missing Values Imputation in Categorical Data with Use of
Association Rules | cs.LG | This paper presents an algorithm for missing values imputation in categorical
data. The algorithm is based on association rules and is presented in three
variants. Experiments show better accuracy of missing values imputation using
the algorithm than using the most common attribute value.
|
1211.1800 | A Comparative study of Arabic handwritten characters invariant feature | cs.CV | This paper focuses on the invariant features of Arabic
handwritten characters. It presents the results of a comparative study of
several feature extraction techniques for handwritten characters, based on the
Hough transform, Fourier transform, Wavelet transform and Gabor filter. The
obtained results show that the Hough transform and Gabor filter are insensitive
to rotation and translation; the Fourier transform is sensitive to rotation but
insensitive to translation; and, in contrast to the Hough transform and Gabor
filter, the Wavelet transform is sensitive to both rotation and translation.
|
1211.1819 | Accurate Sampling Timing Acquisition for Baseband OFDM Power-line
Communication in Non-Gaussian Noise | cs.IT math.IT | In this paper, a novel technique is proposed to address the joint sampling
timing acquisition for baseband and broadband power-line communication (BB-PLC)
systems using Orthogonal-Frequency-Division-Multiplexing (OFDM), including the
sampling phase offset (SPO) and the sampling clock offset (SCO). Under pairwise
correlation and joint Gaussian assumption of received signals in frequency
domain, an approximated form of the log-likelihood function is derived. Instead
of a high-complexity two-dimensional grid search on the likelihood function, a
five-step method is employed for accurate estimation. Several variants are
presented in the same framework with different complexities. Unlike
conventional pilot-assisted schemes using the extra phase rotations within one
OFDM block, the proposed technique turns to the phase rotations between
adjacent OFDM blocks. Analytical expressions of the variances and biases are
derived. Extensive simulation results indicate significant performance
improvements over conventional schemes. Additionally, effects of several noise
models including non-Gaussianity, cyclo-stationarity, and temporal correlation
are analyzed and simulated. Robustness of the proposed technique against
violation of the joint Gaussian assumption is also verified by simulations.
|
1211.1830 | Fine Residual Carrier Frequency and Sampling Frequency Estimation in
Wireless OFDM Systems | cs.IT math.IT | This paper presents a novel algorithm for residual phase estimation in
wireless OFDM systems, including the carrier frequency offset (CFO) and the
sampling frequency offset (SFO). The subcarriers are partitioned into several
regions which exhibit pairwise correlations. The phase increment between
successive OFDM blocks is exploited which can be estimated by two estimators
with different computational loads. Numerical results of estimation variance
are presented. Simulations indicate performance improvement of the proposed
technique over several conventional schemes in a multipath channel.
|
1211.1858 | A Spectral Expression for the Frequency-Limited H2-norm | cs.SY math.DS | In this paper, a new, simple, yet efficient spectral expression of the
frequency-limited H2-norm, denoted H2w-norm, is introduced. The proposed new
formulation requires the computation of the system eigenvalues and eigenvectors
only, and provides thus an alternative to the well established Gramian-based
approach. The interest of this new formulation is threefold: (i) it provides a
new theoretical framework for H2w-norm-based optimization approaches, such as
controller synthesis, filter design and model approximation, (ii) it improves
the speed of H2w-norm computation and its applicability to higher-dimensional
models, and (iii) under some conditions, it allows handling systems with poles
on the imaginary axis. Both mathematical proofs and
numerical illustrations are provided to assess this new H2w-norm expression.
|
1211.1861 | Automating Legal Research through Data Mining | cs.IR | The term legal research generally refers to the process of identifying and
retrieving appropriate information necessary to support legal decision making
from past case records. At present, the process is mostly manual, but some
traditional technologies such as keyword searching are commonly used to speed
the process up. However, a keyword search is not comprehensive enough to cater
to the requirements of legal research, as the search results include too many
false hits in the form of irrelevant case records. Hence, the present generic
tools
cannot be used to automate legal research.
This paper presents a framework which was developed by combining several Text
Mining techniques to automate the process overcoming the difficulties in the
existing methods. Further, the research also identifies the possible
enhancements that could be done to enhance the effectiveness of the framework.
|
1211.1893 | Tangent-based manifold approximation with locally linear models | cs.LG cs.CV | In this paper, we consider the problem of manifold approximation with affine
subspaces. Our objective is to discover a set of low dimensional affine
subspaces that represents manifold data accurately while preserving the
manifold's structure. For this purpose, we employ a greedy technique that
partitions manifold samples into groups that can be each approximated by a low
dimensional subspace. We start by considering each manifold sample as a
different group and we use the difference of tangents to determine appropriate
group mergings. We repeat this procedure until we reach the desired number of
sample groups. The best low dimensional affine subspaces corresponding to the
final groups constitute our approximate manifold representation. Our
experiments verify the effectiveness of the proposed scheme and show its
superior performance compared to state-of-the-art methods for manifold
approximation.
|
1211.1909 | On the Convergence of the Hegselmann-Krause System | cs.DS cs.SI nlin.AO | We study convergence of the following discrete-time non-linear dynamical
system: n agents are located in R^d and at every time step, each moves
synchronously to the average location of all agents within a unit distance of
it. This popularly studied system was introduced by Krause to model the
dynamics of opinion formation and is often referred to as the Hegselmann-Krause
model. We prove the first polynomial time bound for the convergence of this
system in arbitrary dimensions. This improves on the bound of n^{O(n)}
resulting from a more general theorem of Chazelle. Also, we show a quadratic
lower bound and improve the upper bound for one-dimensional systems to O(n^3).
|
1211.1932 | Codes with Local Regeneration | cs.IT math.IT | Regenerating codes and codes with locality are two schemes that have recently
been proposed to ensure data collection and reliability in a distributed
storage network. In a situation where one is attempting to repair a failed
node, regenerating codes seek to minimize the amount of data downloaded for
node repair, while codes with locality attempt to minimize the number of helper
nodes accessed. In this paper, we provide several constructions for a class of
vector codes with locality in which the local codes are regenerating codes,
thereby enjoying both advantages. We derive an upper bound on the minimum distance of
this class of codes and show that the proposed constructions achieve this
bound. The constructions include both the cases where the local regenerating
codes correspond to the MSR as well as the MBR point on the
storage-repair-bandwidth tradeoff curve of regenerating codes. Also included is
a performance comparison of various code constructions for fixed block length
and minimum distance.
|
1211.1968 | Fourier-Bessel rotational invariant eigenimages | cs.CV | We present an efficient and accurate algorithm for principal component
analysis (PCA) of a large set of two dimensional images, and, for each image,
the set of its uniform rotations in the plane and its reflection. The algorithm
starts by expanding each image, originally given on a Cartesian grid, in the
Fourier-Bessel basis for the disk. Because the images are bandlimited in the
Fourier domain, we use a sampling criterion to truncate the Fourier-Bessel
expansion such that the maximum amount of information is preserved without the
effect of aliasing. The constructed covariance matrix is invariant to rotation
and reflection and has a special block diagonal structure. PCA is efficiently
done for each block separately. This Fourier-Bessel based PCA detects more
meaningful eigenimages and has improved denoising capability compared to
traditional PCA for a finite number of noisy images.
|
1211.1969 | Fast Converging Algorithm for Weighted Sum Rate Maximization in
Multicell MISO Downlink | cs.IT math.IT | The problem of maximizing weighted sum rates in the downlink of a multicell
environment is of considerable interest. Unfortunately, this problem is known
to be NP-hard. For the case of multi-antenna base stations and single antenna
mobile terminals, we devise a low complexity, fast and provably convergent
algorithm that locally optimizes the weighted sum rate in the downlink of the
system. In particular, we derive an iterative second-order cone program
formulation of the weighted sum rate maximization problem. The algorithm
converges to a local optimum within a few iterations. Superior performance of
the proposed approach is established by numerically comparing it to other known
solutions.
|
1211.2007 | Multi-input Multi-output Beta Wavelet Network: Modeling of Acoustic
Units for Speech Recognition | cs.CV | In this paper, we propose a novel architecture of wavelet network called
Multi-input Multi-output Wavelet Network MIMOWN as a generalization of the old
architecture of wavelet network. This novel prototype was applied to speech
recognition, especially to model acoustic units of speech. The originality of
our work is the proposal of MIMOWN to model acoustic units of speech. This
approach was proposed to overcome the limitations of the old wavelet network
model. The use of the multi-input multi-output architecture allows training
the wavelet network on various examples of acoustic units.
|
1211.2008 | On multidimensional generalized Cram\'er-Rao inequalities, uncertainty
relations and characterizations of generalized $q$-Gaussian distributions | math-ph cond-mat.stat-mech cs.IT math.IT math.MP | In the present work, we show how the generalized Cram\'er-Rao inequality for
the estimation of a parameter, presented in a recent paper, can be extended to
the multidimensional case with general norms on $\mathbb{R}^{n}$, and to a wider
context. As a particular case, we obtain a new multidimensional Cram\'er-Rao
inequality which is saturated by generalized $q$-Gaussian distributions. We
also give another related Cram\'er-Rao inequality, for a general norm, which is
saturated as well by these distributions. Finally, we derive uncertainty
relations from these Cram\'er-Rao inequalities. These uncertainty relations
involve moments computed with respect to escort distributions, and we show that
some of these relations are saturated by generalized $q$-Gaussian
distributions. These results introduce extended versions of Fisher information,
new Cram\'er-Rao inequalities, and new characterizations of generalized
$q$-Gaussian distributions which are important in several areas of physics and
mathematics.
|
1211.2037 | Time Complexity Analysis of Binary Space Partitioning Scheme for Image
Compression | cs.CV | Segmentation-based image coding methods provide high compression ratios when
compared with traditional image coding approaches such as transform and
subband coding for low bit-rate compression applications. In this paper, a
segmentation-based image coding method, namely the Binary Space Partition
scheme, that divides the desired image using a recursive procedure for coding
is presented. The BSP approach partitions the desired image recursively by
using bisecting lines, selected from a collection of discrete optional lines,
in a hierarchical manner. This partitioning procedure generates a binary tree,
which is referred to as the BSP-tree representation of the desired image. The
algorithm is computationally complex and has a high execution time. The
time complexity of the BSP scheme is explored in this work.
|
1211.2041 | MaTrust: An Effective Multi-Aspect Trust Inference Model | cs.DB cs.AI | Trust is a fundamental concept in many real-world applications such as
e-commerce and peer-to-peer networks. In these applications, users can generate
local opinions about the counterparts based on direct experiences, and these
opinions can then be aggregated to build trust among unknown users. The
mechanism to build new trust relationships based on existing ones is referred
to as trust inference. State-of-the-art trust inference approaches employ the
transitivity property of trust by propagating trust along connected users. In
this paper, we propose a novel trust inference model (MaTrust) by exploring an
equally important property of trust, i.e., the multi-aspect property. MaTrust
directly characterizes multiple latent factors for each trustor and trustee
from the locally-generated trust relationships. Furthermore, it can naturally
incorporate prior knowledge as specified factors. These factors in turn serve
as the basis to infer the unseen trustworthiness scores. Experimental
evaluations on real data sets show that the proposed MaTrust significantly
outperforms several benchmark trust inference models in both effectiveness and
efficiency.
|
1211.2064 | Distributed Learning and Multiaccess of On-Off Channels | math.OC cs.IT math.IT | The problem of distributed access of a set of N on-off channels by K<N users
is considered. The channels are slotted and modeled as independent but not
necessarily identical alternating renewal processes. Each user decides to
either observe or transmit at the beginning of every slot. A transmission is
successful only if the channel is at the on state and there is only one user
transmitting. When a user observes, it identifies whether a transmission would
have been successful had it decided to transmit. A distributed learning and
access policy referred to as alternating sensing and access (ASA) is proposed.
It is shown that ASA has finite expected regret when compared with the optimal
centralized scheme with fixed channel allocation.
|
1211.2073 | LAGE: A Java Framework to reconstruct Gene Regulatory Networks from
Large-Scale Continues Expression Data | cs.LG cs.CE q-bio.QM stat.ML | LAGE is a systematic framework developed in Java. The motivation of LAGE is
to provide a scalable and parallel solution to reconstruct Gene Regulatory
Networks (GRNs) from continuous gene expression data for a very large number
of genes. The basic idea of our framework is motivated by the philosophy of
divide-and-conquer. Specifically, LAGE recursively partitions genes into
multiple overlapping communities of much smaller size, learns the
intra-community GRNs separately, and then merges them. In addition, the
complete information on the overlapping communities is a byproduct, which
could be used to mine meaningful functional modules in biological networks.
|
1211.2082 | 3D Surface Reconstruction of Underwater Objects | cs.CV | In this paper, we propose a novel technique to reconstruct 3D surface of an
underwater object using stereo images. Reconstructing the 3D surface of an
underwater object is really a challenging task due to degraded quality of
underwater images. There are various reason of quality degradation of
underwater images i.e., non-uniform illumination of light on the surface of
objects, scattering and absorption effects. Floating particles present in
underwater produces Gaussian noise on the captured underwater images which
degrades the quality of images. The degraded underwater images are preprocessed
by applying homomorphic, wavelet denoising and anisotropic filtering
sequentially. The uncalibrated rectification technique is applied to
preprocessed images to rectify the left and right images. The rectified left
and right image lies on a common plane. To find the correspondence points in a
left and right images, we have applied dense stereo matching technique i.e.,
graph cut method. Finally, we estimate the depth of images using triangulation
technique. The experimental result shows that the proposed method reconstruct
3D surface of underwater objects accurately using captured underwater stereo
images.
|
1211.2087 | Secured Wireless Communication using Fuzzy Logic based High Speed
Public-Key Cryptography (FLHSPKC) | cs.CR cs.AI | In this paper, secured wireless communication using fuzzy logic based high
speed public key cryptography (FLHSPKC) is proposed, addressing the major
issues of computational safety, power management and restricted memory usage
in wireless communication. A Wireless Sensor Network (WSN) has several major
constraints, such as an inadequate source of energy, restricted computational
capability and limited memory. Conventional Elliptic Curve Cryptography (ECC),
a form of public key cryptography used in wireless communication, provides a
level of security equivalent to other existing public key algorithms while
using smaller parameters, but it does not address these major limitations of
WSNs. In conventional ECC, given an elliptic curve point P, an arbitrary
integer k and a modulus m, the scalar multiplication kP mod m takes about 80%
of the key computation time on a WSN. The proposed FLHSPKC scheme provides
several novel strategies, including a soft computing based strategy to speed
up scalar multiplication in conventional ECC, which in turn shortens
computation time and satisfies the power consumption and memory constraints
without hampering the security level. A performance analysis of the different
strategies under the FLHSPKC scheme and a comparative study with existing
conventional ECC methods are presented.
|
1211.2116 | Localisation of Numerical Date Field in an Indian Handwritten Document | cs.CV | This paper describes a method to localise all those areas which may
constitute the date field in an Indian handwritten document. Spatial patterns
of the date field are studied from various handwritten documents and an
algorithm is developed through statistical analysis to identify those sets of
connected components which may constitute the date. Common date patterns
followed in India are considered to classify the date formats in different
classes. Reported results demonstrate promising performance of the proposed
approach.
|
1211.2126 | Dynamic Decision Support System Based on Bayesian Networks Application
to fight against the Nosocomial Infections | cs.AI cs.DB | The improvement of medical care quality is of significant interest for the
coming years. The fight against nosocomial infections (NI) in intensive care
units (ICU) is a good example. We focus on a set of observations which reflect
the dynamic aspect of the decision, resulting from the application of a
Medical Decision Support System (MDSS). This system has to make dynamic
decisions on temporal data. We use a dynamic Bayesian network (DBN) to model
this dynamic process. It performs temporal reasoning within a real-time
environment; we are interested in Dynamic Decision Support Systems in the
healthcare domain (MDDSS).
|
1211.2132 | Accelerated Gradient Methods for Networked Optimization | math.OC cs.DC cs.SY | We develop multi-step gradient methods for network-constrained optimization
of strongly convex functions with Lipschitz-continuous gradients. Given the
topology of the underlying network and bounds on the Hessian of the objective
function, we determine the algorithm parameters that guarantee the fastest
convergence and characterize situations when significant speed-ups can be
obtained over the standard gradient method. Furthermore, we quantify how the
performance of the gradient method and its accelerated counterpart are affected
by uncertainty in the problem data, and conclude that in most cases our
proposed method outperforms gradient descent. Finally, we apply the proposed
technique to three engineering problems: resource allocation under network-wide
budget constraints, distributed averaging, and Internet congestion control. In
all cases, we demonstrate that our algorithm converges more rapidly than
alternative algorithms reported in the literature.
|
1211.2150 | NF-SAVO: Neuro-Fuzzy system for Arabic Video OCR | cs.CV | In this paper we propose a robust approach for text extraction and
recognition from video clips which is called Neuro-Fuzzy system for Arabic
Video OCR. In Arabic video text recognition, a number of noise components make
the text relatively more difficult to separate from the background. Further,
the characters can be moving or presented in a diversity of colors, sizes and
fonts that are not uniform. Added to this is the fact that the background is
usually moving, making text extraction a more intricate process. Video
includes two kinds of text: scene text and artificial text. Scene text is text
that becomes part of the scene itself, as it is recorded at the time of
filming. Artificial text, in contrast, is produced separately, away from the
scene, and is laid over it at a later stage or during post-processing. The
appearance of artificial text is consequently carefully controlled. This type
of text carries important information that helps in video referencing,
indexing and retrieval.
|
1211.2155 | Improved Modeling of the Correlation Between Continuous-Valued Sources
in LDPC-Based DSC | cs.IT math.IT | Accurate modeling of the correlation between the sources plays a crucial role
in the efficiency of distributed source coding (DSC) systems. This correlation
is commonly modeled in the binary domain by using a single binary symmetric
channel (BSC), both for binary and continuous-valued sources. We show that
"one" BSC cannot accurately capture the correlation between continuous-valued
sources; a more accurate model requires "multiple" BSCs, as many as the number
of bits used to represent each sample. We incorporate this new model into the
DSC system that uses low-density parity-check (LDPC) codes for compression. The
standard Slepian-Wolf LDPC decoder requires a slight modification so that the
parameters of all BSCs are integrated in the log-likelihood ratios (LLRs).
Further, using an interleaver the data belonging to different bit-planes are
shuffled to introduce randomness in the binary domain. The new system has the
same complexity and delay as the standard one. Simulation results prove the
effectiveness of the proposed model and system.
|
1211.2162 | A Distributed Differential Space-Time Coding Scheme With Analog Network
Coding in Two-Way Relay Networks | cs.IT math.IT | In this paper, we consider general two-way relay networks (TWRNs) with two
source and N relay nodes. A distributed differential space time coding with
analog network coding (DDSTC-ANC) scheme is proposed. A simple blind estimation
and a differential signal detector are developed to recover the desired signal
at each source. The pairwise error probability (PEP) and block error rate
(BLER) of the DDSTC-ANC scheme are analyzed. Exact and simplified PEP
expressions are derived. To improve the system performance, the optimum power
allocation (OPA) between the source and relay nodes is determined based on the
simplified PEP expression. The analytical results are verified through
simulations.
|
1211.2187 | On Finite-Length Performance of Polar Codes: Stopping Sets, Error Floor,
and Concatenated Design | cs.IT math.IT | This paper investigates properties of polar codes that can be potentially
useful in real-world applications. We start with analyzing the performance of
finite-length polar codes over the binary erasure channel (BEC), while assuming
belief propagation as the decoding method. We provide a stopping set analysis
for the factor graph of polar codes, where we find the size of the minimum
stopping set. We also find the girth of the graph for polar codes. Our analysis
along with bit error rate (BER) simulations demonstrate that finite-length
polar codes show superior error floor performance compared to the conventional
capacity-approaching coding techniques. In order to take advantage of this
property while avoiding the shortcomings of polar codes, we consider the idea
of combining polar codes with other coding schemes. We propose a polar
code-based concatenated scheme to be used in Optical Transport Networks (OTNs)
as a potential real-world application. Comparing against conventional
concatenation techniques for OTNs, we show that the proposed scheme outperforms
the existing methods by closing the gap to the capacity while avoiding error
floor, and maintaining a low complexity at the same time.
|
1211.2190 | Efficient Monte Carlo Methods for Multi-Dimensional Learning with
Classifier Chains | cs.LG stat.CO stat.ML | Multi-dimensional classification (MDC) is the supervised learning problem
where an instance is associated with multiple classes, rather than with a
single class, as in traditional classification problems. Since these classes
are often strongly correlated, modeling the dependencies between them allows
MDC methods to improve their performance - at the expense of an increased
computational cost. In this paper we focus on the classifier chains (CC)
approach for modeling dependencies, one of the most popular and highest-
performing methods for multi-label classification (MLC), a particular case of
MDC which involves only binary classes (i.e., labels). The original CC
algorithm makes a greedy approximation, and is fast but tends to propagate
errors along the chain. Here we present novel Monte Carlo schemes, both for
finding a good chain sequence and performing efficient inference. Our
algorithms remain tractable for high-dimensional data sets and obtain the best
predictive performance across several real data sets.
|
1211.2194 | A Novel Anticlustering Filtering Algorithm for the Prediction of Genes
as a Drug Target | cs.CE q-bio.GN | The high-throughput data generated by microarray experiments provide the
complete set of genes being expressed in a given cell or in an organism under
particular conditions. The analysis of these enormous data has opened a new
dimension for researchers. In this paper, we describe a novel algorithm for
microarray data analysis focusing on the identification of genes that are
differentially expressed in particular internal or external conditions and
which could be potential drug targets. The algorithm uses the time-series gene
expression data as an input and recognizes genes which are expressed
differentially. This algorithm implements standard statistics-based gene
functional investigations, such as the log transformation, mean, log-sigmoid
function, coefficient of variations, etc. It does not use clustering analysis.
The proposed algorithm has been implemented in Perl. The time-series gene
expression data on the yeast Saccharomyces cerevisiae from the Stanford
Microarray Database (SMD), consisting of 6154 genes, have been used to
validate the algorithm. The developed method extracted 48 genes out of the
6154. These genes are mostly responsible for the yeast's resistance at high
temperature.
|
1211.2197 | What is the Nature of Chinese MicroBlogging: Unveiling the Unique
Features of Tencent Weibo | cs.SI physics.soc-ph | China has the largest number of online users in the world, and about 20% of
internet users are from China. This is a huge, as well as a mysterious, market
for IT industry due to various reasons such as culture difference. Twitter is
the largest microblogging service in the world and Tencent Weibo is one of the
largest microblogging services in China. Employing the two data sets as
sources in our study, we try to unveil the unique behaviors of Chinese users.
We have
collected the entire Tencent Weibo from 10th, Oct, 2011 to 5th, Jan, 2012 and
obtained 320 million user profiles, 5.15 billion user actions. We study Tencent
Weibo from both macro and micro levels. From the macro level, Tencent users are
more active on forwarding messages, but with less reciprocal relationships than
Twitter users, their topic preferences are very different from Twitter users
from both content and time consuming; besides, information can be diffused more
efficient in Tencent Weibo. From the micro level, we mainly evaluate users'
social influence from two indexes: "Forward" and \Follower", we study how
users' actions will contribute to their social influences, and further identify
unique features of Tencent users. According to our studies, Tencent users'
actions are more personalized and diversity, and the influential users play a
more important part in the whole networks. Based on the above analysis, we
design a graphical model for predicting users' forwarding behaviors. Our
experimental results on the large Tencent Weibo data validate the correctness
of the discoveries and the effectiveness of the proposed model. To the best of
our knowledge, this work is the first quantitative study on the entire
Tencentsphere and information diffusion on it.
|
1211.2198 | Results on Finite Wireless Sensor Networks: Connectivity and Coverage | cs.IT math.IT | Many analytic results for the connectivity, coverage, and capacity of
wireless networks have been reported for the case where the number of nodes,
$n$, tends to infinity (large-scale networks). The majority of these results
have not been extended for small or moderate values of $n$; whereas in many
practical networks, $n$ is not very large. In this paper, we consider finite
(small-scale) wireless sensor networks. We first show that previous asymptotic
results provide poor approximations for such networks. We provide a set of
differences between small-scale and large-scale analysis and propose a
methodology for analysis of finite sensor networks. Furthermore, we consider
two models for such networks: unreliable sensor grids, and sensor networks with
random node deployment. We provide easily computable expressions for bounds on
the coverage and connectivity of these networks. With validation from
simulations, we show that the derived analytic expressions give very good
estimates of such quantities for finite sensor networks. Our investigation
confirms that small-scale networks possess unique characteristics
distinct from their large-scale counterparts, necessitating the development of a
new framework for their analysis and design.
|
1211.2227 | Efficient learning of simplices | cs.LG cs.DS stat.ML | We show an efficient algorithm for the following problem: Given uniformly
random points from an arbitrary n-dimensional simplex, estimate the simplex.
The size of the sample and the number of arithmetic operations of our algorithm
are polynomial in n. This answers a question of Frieze, Jerrum and Kannan
[FJK]. Our result can also be interpreted as efficiently learning the
intersection of n+1 half-spaces in R^n in the model where the intersection is
bounded and we are given polynomially many uniform samples from it. Our proof
uses the local search technique from Independent Component Analysis (ICA), also
used by [FJK]. Unlike these previous algorithms, which were based on analyzing
the fourth moment, ours is based on the third moment.
We also show a direct connection between the problem of learning a simplex
and ICA: a simple randomized reduction to ICA from the problem of learning a
simplex. The connection is based on a known representation of the uniform
measure on a simplex. Similar representations lead to a reduction from the
problem of learning an affine transformation of an n-dimensional l_p ball to
ICA.
|
1211.2245 | Composite Strategy for Multicriteria Ranking/Sorting (methodological
issues, examples) | math.OC cs.AI cs.SE | The paper addresses the modular design of composite solving strategies for
multicriteria ranking (sorting). Here a 'scale of creativity' that is close to
creative levels proposed by Altshuller is used as the reference viewpoint: (i)
a basic object, (ii) a selected object, (iii) a modified object, and (iv) a
designed object (e.g., composition of object components). These levels may be
used in various parts of decision support systems (DSS) (e.g., information,
operations, user). The paper focuses on the most creative of the above-mentioned
levels (i.e., composition or combinatorial synthesis) for the operational part (i.e.,
composite solving strategy). This is important for a search/exploration mode of
decision making process with usage of various procedures and techniques and
analysis/integration of obtained results. The paper describes methodological
issues of decision technology and synthesis of composite strategy for
multicriteria ranking. The synthesis of composite strategies is based on
'hierarchical morphological multicriteria design' (HMMD) which is based on
selection and combination of design alternatives (DAs) (here: local procedures
or techniques) while taking into account their quality and quality of their
interconnections (IC). A new version of HMMD with interval multiset estimates
for DAs is used. The operational environment of DSS COMBI for multicriteria
ranking, consisting of a morphology of local procedures or techniques (as
design alternatives DAs), is examined as a basic one.
|
1211.2260 | No-Regret Algorithms for Unconstrained Online Convex Optimization | cs.LG | Some of the most compelling applications of online convex optimization,
including online prediction and classification, are unconstrained: the natural
feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in
this setting unless constraints on the comparator point x^* are known in
advance. We present algorithms that, without such prior knowledge, offer
near-optimal regret bounds with respect to any choice of x^*. In particular,
regret with respect to x^* = 0 is constant. We then prove lower bounds showing
that our guarantees are near-optimal in this setting.
|
1211.2265 | Optimal Detection For Sparse Mixtures | cs.IT math.IT math.ST stat.TH | Detection of sparse signals arises in a wide range of modern scientific
studies. The focus so far has been mainly on Gaussian mixture models. In this
paper, we consider the detection problem under a general sparse mixture model
and obtain an explicit expression for the detection boundary. It is shown that
the fundamental limits of detection are governed by the behavior of the
log-likelihood ratio evaluated at an appropriate quantile of the null
distribution. We also establish the adaptive optimality of the higher criticism
procedure across all sparse mixtures satisfying certain mild regularity
conditions. In particular, the general results obtained in this paper recover
and extend in a unified manner the previously known results on sparse detection
far beyond the conventional Gaussian model and other exponential families.
|
1211.2280 | A Novel Architecture For Network Coded Electronic Health Record Storage
System | cs.IT math.IT | The use of network coding for large scale content distribution improves
download time. This is demonstrated in this work by the use of network coded
Electronic Health Record Storage System (EHR-SS). A four-layer architecture for
the EHR-SS is designed. The application integrates the data captured for
the patient from three modules, namely administrative data, medical records of
consultations, and reports of medical tests. The lowest layer is the data
capturing layer, using RFID readers; the data are captured at the lowest level
from different nodes and combined with linear coefficients using
linear network coding. At the lowest level the data from different tags are
combined and stored, and at level 2 coding combines the data from multiple
readers and a corresponding encoding vector is generated. This network coding
is done at the server node through a small MATLAB network-coding interface.
When the stored data are accessed, the data type is represented in
the form of a decoding vector. For storage and retrieval, the primary key is the
patient id. The results show a reduction in download
time of about 12% for our case-study setup.
|
1211.2290 | Dating Texts without Explicit Temporal Cues | cs.CL cs.AI | This paper tackles temporal resolution of documents, such as determining when
a document is about or when it was written, based only on its text. We apply
techniques from information retrieval that predict dates via language models
over a discretized timeline. Unlike most previous works, we rely {\it solely}
on temporal cues implicit in the text. We consider both document-likelihood and
divergence based techniques and several smoothing methods for both of them. Our
best model predicts the mid-point of individuals' lives with a median error of 22
years and a mean error of 36 years for Wikipedia biographies from 3800 B.C. to the present
day. We also show that this approach works well when training on such
biographies and predicting dates both for non-biographical Wikipedia pages
about specific years (500 B.C. to 2010 A.D.) and for publication dates of short
stories (1798 to 2008). Together, our work shows that, even in absence of
temporal extraction resources, it is possible to achieve remarkable temporal
locality across a diverse set of texts.
|
1211.2291 | Sequentiality and Adaptivity Gains in Active Hypothesis Testing | cs.IT math.IT math.ST stat.TH | Consider a decision maker who is responsible for collecting observations so as
to enhance his information in a speedy manner about an underlying phenomenon of
interest. The policies under which the decision maker selects sensing actions
can be categorized based on the following two factors: i) sequential vs.
non-sequential; ii) adaptive vs. non-adaptive. Non-sequential policies collect
a fixed number of observation samples and make the final decision afterwards;
while under sequential policies, the sample size is not known initially and is
determined by the observation outcomes. Under adaptive policies, the decision
maker relies on the previously collected samples to select the next sensing
action; while under non-adaptive policies, the actions are selected
independently of past observation outcomes.
In this paper, performance bounds are provided for the policies in each
category. Using these bounds, sequentiality gain and adaptivity gain, i.e., the
gains of sequential and adaptive selection of actions are characterized.
|
1211.2292 | Hybrid MPI-OpenMP Paradigm on SMP Clusters: MPEG-2 Encoder and N-Body
Simulation | cs.DC cs.CE cs.PF | Clusters of SMP nodes provide support for a wide diversity of parallel
programming paradigms. Combining both shared memory and message passing
parallelizations within the same application, the hybrid MPI-OpenMP paradigm is
an emerging trend for parallel programming to fully exploit distributed
shared-memory architecture. In this paper, we improve the performance of MPEG-2
encoder and n-body simulation by employing the hybrid MPI-OpenMP programming
paradigm on SMP clusters. The hierarchical image data structure of the MPEG
bit-stream is eminently suitable for the hybrid model to achieve multiple
levels of parallelism: MPI for parallelism at the group of pictures level
across SMP nodes and OpenMP for parallelism within pictures at the slice level
within each SMP node. Similarly, the work load of the force calculation which
accounts for upwards of 90% of the cycles in typical computations in the n-body
simulation is shared among OpenMP threads after ORB domain decomposition among
MPI processes. In addition, loop scheduling of OpenMP threads is adopted with
appropriate chunk size to provide better load balance of work, leading to
enhanced performance. With the n-body simulation, experimental results
demonstrate that the hybrid MPI-OpenMP program outperforms the corresponding
pure MPI program by average factors of 1.52 on a 4-way cluster and 1.21 on a
2-way cluster. Likewise, the hybrid model offers a performance improvement of
18% compared to the MPI model for the MPEG-2 encoder.
|
1211.2293 | Performance Evaluation of Treecode Algorithm for N-Body Simulation Using
GridRPC System | cs.DC cs.CE cs.PF | This paper is aimed at improving the performance of the treecode algorithm
for N-Body simulation by employing the NetSolve GridRPC programming model to
exploit the use of multiple clusters. N-Body is a classical problem, and
appears in many areas of science and engineering, including astrophysics,
molecular dynamics, and graphics. In the simulation of N-Body, the specific
routine for calculating the forces on the bodies which accounts for upwards of
90% of the cycles in typical computations is eminently suitable for obtaining
parallelism with GridRPC calls. It is divided among the compute nodes by
simultaneously calling multiple GridRPC requests to them. The performance of
the GridRPC implementation is then compared to that of the MPI version and
hybrid MPI-OpenMP version for the treecode algorithm on individual clusters.
|
1211.2304 | Probabilistic Combination of Classifier and Cluster Ensembles for
Non-transductive Learning | cs.LG stat.ML | Unsupervised models can provide supplementary soft constraints to help
classify new target data under the assumption that similar objects in the
target set are more likely to share the same class label. Such models can also
help detect possible differences between training and target distributions,
which is useful in applications where concept drift may take place. This paper
describes a Bayesian framework that takes as input class labels from existing
classifiers (designed based on labeled data from the source domain), as well as
cluster labels from a cluster ensemble operating solely on the target data to
be classified, and yields a consensus labeling of the target data. This
framework is particularly useful when the statistics of the target data drift
or change from those of the training data. We also show that the proposed
framework is privacy-aware and allows performing distributed learning when
data/models have sharing restrictions. Experiments show that our framework can
yield superior results to those provided by applying classifier ensembles only.
|
1211.2333 | Predicting the sources of an outbreak with a spectral technique | math-ph cs.SI math.MP physics.soc-ph | The epidemic spreading of a disease can be described by a contact network
whose nodes are persons or centers of contagion and whose links are heterogeneous
relations among them. We provide a procedure to identify multiple sources of an
outbreak, or their close neighbors. Our methodology is based on a simple
spectral technique requiring only knowledge of the undirected contact
graph. The algorithm is tested on a variety of graphs collected from outbreaks
of influenza, H5N1, and tuberculosis, in urban and rural areas. Results show that the
spectral technique is able to identify the source nodes if the graph
approximates a tree sufficiently well.
|
1211.2340 | Time and harmonic study of strongly controllable group systems, group
shifts, and group codes | cs.IT math.IT | In this paper we give a complementary view of some of the results on group
systems by Forney and Trott. We find an encoder of a group system which has the
form of a time convolution. We consider this to be a time domain encoder while
the encoder of Forney and Trott is a spectral domain encoder. We study the
outputs of time and spectral domain encoders when the inputs are the same, and
also study outputs when the same input is used but time runs forward and
backward. In an abelian group system, all four cases give the same output for
the same input, but this may not be true for a nonabelian system. Moreover,
time symmetry and harmonic symmetry are broken for the same reason. We use a
canonic form, a set of tensors, to show how the outputs are related. These
results show there is a time and harmonic theory of group systems.
|
1211.2354 | Privacy Preserving Web Query Log Publishing: A Survey on Anonymization
Techniques | cs.DB cs.CR | Releasing Web query logs which contain valuable information for research or
marketing, can breach the privacy of search engine users. Therefore, rendering
query logs so as to limit linking a query to an individual, while preserving the
data's usefulness for analysis, is an important research problem. This survey
provides an overview and discussion of recent studies in this direction.
|
1211.2361 | Genetic Algorithm for Designing a Convenient Facility Layout for a
Circular Flow Path | cs.NE | In this paper, we present a heuristic for designing facility layouts that are
convenient for designing a unidirectional loop for material handling. We use a
genetic algorithm in which the objective function and the crossover and mutation
operators have all been designed specifically for this purpose. Our design is
made under flexible bay structure and comparisons are made with other layouts
from the literature that were designed under flexible bay structure.
|
1211.2367 | IS-LABEL: an Independent-Set based Labeling Scheme for Point-to-Point
Distance Querying on Large Graphs | cs.DB | We study the problem of computing shortest path or distance between two query
vertices in a graph, which has numerous important applications. Quite a number
of indexes have been proposed to answer such distance queries. However, all of
these indexes can only process graphs of size barely up to 1 million vertices,
which is rather small in view of many of the fast-growing real-world graphs
today such as social networks and Web graphs. We propose an efficient index,
which is a novel labeling scheme based on the independent set of a graph. We
show that our method can handle graphs of size three orders of magnitude larger
than those existing indexes.
|
1211.2378 | Hybrid methodology for hourly global radiation forecasting in
Mediterranean area | cs.NE cs.LG physics.ao-ph stat.AP | The renewable energies prediction and particularly global radiation
forecasting is a challenge studied by a growing number of research teams. This
paper proposes an original technique to model the insolation time series based
on combining Artificial Neural Network (ANN) and Auto-Regressive and Moving
Average (ARMA) model. While ANN by its non-linear nature is effective to
predict cloudy days, ARMA techniques are more dedicated to sunny days without
cloud occurrences. Thus, three hybrid models are suggested: the first proposes
simply to use ARMA for 6 months in spring and summer and to use an optimized
ANN for the other part of the year; the second model is equivalent to the first
but with seasonal learning; the last model depends on the error observed during
the previous hour. These models were used to forecast the hourly global radiation
for five places in Mediterranean area. The forecasting performance was compared
among several models: the 3 above mentioned models, the best ANN and ARMA for
each location. In the best configuration, the coupling of ANN and ARMA allows
an improvement of more than 1%, with a maximum in autumn (3.4%) and a minimum
in winter (0.9%) where ANN alone is the best.
|
1211.2379 | Belief Propagation Reconstruction for Discrete Tomography | cs.NA cond-mat.stat-mech cs.IT math.IT | We consider the reconstruction of a two-dimensional discrete image from a set
of tomographic measurements corresponding to the Radon projection. Assuming
that the image has a structure in which neighbouring pixels have a larger
probability of taking the same value, we follow a Bayesian approach and introduce
a fast message-passing reconstruction algorithm based on belief propagation.
For numerical results, we specialize to the case of binary tomography. We test
the algorithm on binary synthetic images with different length scales and
compare our results against a more usual convex optimization approach. We
investigate the reconstruction error as a function of the number of tomographic
measurements, corresponding to the number of projection angles. The belief
propagation algorithm turns out to be more efficient than the
convex-optimization algorithm, both in terms of recovery bounds for noise-free
projections, and in terms of reconstruction quality when moderate Gaussian
noise is added to the projections.
|
1211.2399 | Mining Determinism in Human Strategic Behavior | cs.GT cs.AI | This work lies in the fusion of experimental economics and data mining. It
continues the author's previous work on mining behaviour rules of human subjects
from experimental data, where game-theoretic predictions partially fail to
work. Game-theoretic predictions, aka equilibria, only tend to succeed with
experienced subjects playing specific games, conditions that are rarely given.
Apart from game
theory, contemporary experimental economics offers a number of alternative
models. In the relevant literature, these models are always biased by
psychological and near-psychological theories and are claimed to be supported by
the data. This
work introduces a data mining approach to the problem that does not rely on a
vast psychological background. Apart from determinism, no other biases are
assumed.
Two datasets from different human subject experiments are taken for evaluation:
the first a repeated mixed-strategy zero-sum game, and the second a
repeated ultimatum game. As a result, a way of mining deterministic
regularities in human strategic behaviour is described and evaluated. As future
work, the design of a new representation formalism is discussed.
|
1211.2441 | Exact and Stable Recovery of Rotations for Robust Synchronization | cs.IT math.IT | The synchronization problem over the special orthogonal group $SO(d)$
consists of estimating a set of unknown rotations $R_1,R_2,...,R_n$ from noisy
measurements of a subset of their pairwise ratios $R_{i}^{-1}R_{j}$. The
problem has found applications in computer vision, computer graphics, and
sensor network localization, among others. Its least squares solution can be
approximated by either spectral relaxation or semidefinite programming followed
by a rounding procedure, analogous to the approximation algorithms of
\textsc{Max-Cut}. The contribution of this paper is three-fold: First, we
introduce a robust penalty function involving the sum of unsquared deviations
and derive a relaxation that leads to a convex optimization problem; Second, we
apply the alternating direction method to minimize the penalty function;
Finally, under a specific model of the measurement noise and for both complete
and random measurement graphs, we prove that the rotations are exactly and
stably recovered, exhibiting a phase transition behavior in terms of the
proportion of noisy measurements. Numerical simulations confirm the phase
transition behavior for our method as well as its improved accuracy compared to
existing methods.
|
1211.2459 | Measures of Entropy from Data Using Infinitely Divisible Kernels | cs.LG cs.IT math.IT stat.ML | Information theory provides principled ways to analyze different inference
and learning problems such as hypothesis testing, clustering, dimensionality
reduction, classification, among others. However, the use of information
theoretic quantities as test statistics, that is, as quantities obtained from
empirical data, poses a challenging estimation problem that often leads to
strong simplifications such as Gaussian models, or the use of plug-in density
estimators that are restricted to certain representations of the data. In this
paper, a framework to non-parametrically obtain measures of entropy directly
from data using operators in reproducing kernel Hilbert spaces defined by
infinitely divisible kernels is presented. The entropy functionals, which bear
resemblance to quantum entropies, are defined on positive definite matrices
and satisfy similar axioms to those of Renyi's definition of entropy.
Convergence of the proposed estimators follows from concentration results on
the difference between the ordered spectrum of the Gram matrices and the
integral operators associated to the population quantities. In this way,
capitalizing on both the axiomatic definition of entropy and on the
representation power of positive definite kernels, the proposed measure of
entropy avoids the estimation of the probability distribution underlying the
data. Moreover, estimators of kernel-based conditional entropy and mutual
information are also defined. Numerical experiments on independence tests
compare favourably with the state of the art.
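As a minimal sketch of the kind of functional the abstract describes (the commonly used matrix-based Renyi entropy of a trace-normalized Gram matrix; the Gaussian kernel, bandwidth, and data here are illustrative assumptions, not the paper's experiments):

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """Gaussian (infinitely divisible) kernel Gram matrix."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma**2))

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Order-alpha entropy from the trace-normalized Gram matrix A:
    S_alpha = 1/(1-alpha) * log2( sum_i lambda_i(A)**alpha )."""
    K = gram_matrix(X, sigma)
    A = K / np.trace(K)              # positive definite, trace 1
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam ** alpha)) / (1 - alpha)

rng = np.random.default_rng(0)
tight = matrix_renyi_entropy(0.1 * rng.normal(size=(100, 2)))
spread = matrix_renyi_entropy(5.0 * rng.normal(size=(100, 2)))
print(tight < spread)  # more spread-out data -> higher entropy
```

Note that no density estimate appears anywhere: the entropy is read directly off the spectrum of the kernel matrix, which is the point the abstract makes.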
|
1211.2476 | Random Utility Theory for Social Choice | cs.MA cs.LG stat.ML | Random utility theory models an agent's preferences on alternatives by
drawing a real-valued score on each alternative (typically independently) from
a parameterized distribution, and then ranking the alternatives according to
scores. A special case that has received significant attention is the
Plackett-Luce model, for which fast inference methods for maximum likelihood
estimators are available. This paper develops conditions on general random
utility models that enable fast inference within a Bayesian framework through
MC-EM, providing concave loglikelihood functions and bounded sets of global
maxima solutions. Results on both real-world and simulated data provide support
for the scalability of the approach and capability for model selection among
general random utility models including Plackett-Luce.
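For context on the Plackett-Luce special case, a minimal sketch of sampling from it via the standard Gumbel-max representation (this illustrates the model, not the paper's MC-EM inference; alternative names and scores are illustrative):

```python
import math
import random
from collections import Counter

def sample_ranking(log_scores, rng):
    """Plackett-Luce sample via the Gumbel-max trick: perturb each
    log-score with i.i.d. Gumbel noise, then sort descending."""
    perturbed = {a: s - math.log(-math.log(rng.random()))
                 for a, s in log_scores.items()}
    return sorted(perturbed, key=perturbed.get, reverse=True)

rng = random.Random(1)
log_scores = {"a": math.log(3.0), "b": math.log(2.0), "c": math.log(1.0)}
top = Counter(sample_ranking(log_scores, rng)[0] for _ in range(30_000))
# Under Plackett-Luce, P(first place = a) = 3 / (3 + 2 + 1) = 0.5
print(top["a"] / 30_000)
```

The perturb-and-sort view corresponds exactly to the "draw a real-valued score on each alternative, then rank" description in the abstract, with Gumbel-distributed scores.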
|
1211.2487 | Power Control and Interference Management in Dense Wireless Networks | cs.IT math.IT math.OC | We address the problem of interference management and power control in terms
of maximization of a general utility function. For the utility functions under
consideration, we propose a power control algorithm based on a fixed-point
iteration; further, we prove local convergence of the algorithm in the
neighborhood of the optimal power vector. Our algorithm has several benefits
over the previously studied works in the literature: first, the algorithm can
be applied to problems other than network utility maximization (NUM), e.g.,
power control in a relay network; second, for a network with $N$ wireless
transmitters, the computational complexity of the proposed algorithm is
$\mathcal{O}(N^2)$ calculations per iteration (significantly smaller than the
$\mathcal{O}(N^3) $ calculations for Newton's iterations or gradient descent
approaches). Furthermore, the algorithm converges very fast (usually in less
than 15 iterations), and in particular, if initialized close to the optimal
solution, the convergence speed is much faster. This suggests the potential of
tracking variations in slowly fading channels. Finally, when implemented in a
distributed fashion, the algorithm attains the optimal power vector with a
signaling/computational complexity of only $\mathcal{O}(N)$ at each node.
|
1211.2497 | A Note on the Deletion Channel Capacity | cs.IT math.IT | Memoryless channels with deletion errors as defined by a stochastic channel
matrix allowing for bit drop outs are considered in which transmitted bits are
either independently deleted with probability $d$ or unchanged with probability
$1-d$. Such channels are information stable, hence their Shannon capacity
exists. However, computation of the channel capacity is formidable, and only
some upper and lower bounds on the capacity exist. In this paper, we first show
a simple result that the parallel concatenation of two different independent
deletion channels with deletion probabilities $d_1$ and $d_2$, in which every
input bit is either transmitted over the first channel with probability of
$\lambda$ or over the second one with probability of $1-\lambda$, is nothing
but another deletion channel with deletion probability of $d=\lambda
d_1+(1-\lambda)d_2$. We then provide an upper bound on the concatenated
deletion channel capacity $C(d)$ in terms of the weighted average of $C(d_1)$,
$C(d_2)$ and the parameters of the three channels. An interesting consequence
of this bound is that $C(\lambda d_1+(1-\lambda))\leq \lambda C(d_1)$ which
enables us to provide an improved upper bound on the capacity of the i.i.d.
deletion channels, i.e., $C(d)\leq 0.4143(1-d)$ for $d\geq 0.65$. This
generalizes the asymptotic result by Dalai as it remains valid for all $d\geq
0.65$. Using the same approach we are also able to improve upon existing upper
bounds on the capacity of the deletion/substitution channel.
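The paper's first observation, that routing each bit through channel 1 with probability lambda or channel 2 otherwise yields a single deletion channel with d = lambda*d1 + (1-lambda)*d2, can be checked with a small Monte Carlo sketch (parameter values are illustrative):

```python
import random

def concatenated_channel(bits, d1, d2, lam, rng):
    """Route each bit through a deletion channel with probability d1
    (chosen w.p. lam) or d2 (w.p. 1-lam); surviving bits are kept."""
    out = []
    for b in bits:
        d = d1 if rng.random() < lam else d2
        if rng.random() >= d:
            out.append(b)
    return out

rng = random.Random(0)
n, d1, d2, lam = 200_000, 0.2, 0.6, 0.3
survived = len(concatenated_channel([1] * n, d1, d2, lam, rng))
empirical_d = 1 - survived / n
expected_d = lam * d1 + (1 - lam) * d2  # = 0.48
print(round(empirical_d, 2), round(expected_d, 2))
```

Each bit is deleted with total probability lam*d1 + (1-lam)*d2 independently of the others, which is precisely the definition of a single i.i.d. deletion channel with that parameter.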
|
1211.2500 | A New Algorithm Based Entropic Threshold for Edge Detection in Images | cs.CV | Edge detection is one of the most critical tasks in automatic image analysis.
There exists no universal edge detection method which works well under all
conditions. This paper presents a new approach based on one of the most
efficient techniques for edge detection, entropy-based thresholding.
The main advantages of the proposed method are its robustness and its
flexibility. We present experimental results for this method, and compare
results of the algorithm against several leading edge detection methods, such
as Canny, LoG, and Sobel. Experimental results demonstrate that the proposed
method achieves better results than some classic methods, produces robust
edges in the output images, and decreases the computation time.
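For context, the classic entropic-thresholding idea this line of work builds on (a Kapur-style maximum-entropy threshold on the gray-level histogram; this is the generic technique, not necessarily the authors' exact algorithm) can be sketched as:

```python
import math

def kapur_threshold(hist):
    """Maximum-entropy (Kapur) threshold: pick t maximizing the sum of
    the Shannon entropies of the below-t and above-t classes."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -math.inf
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram: dark peak near level 2, bright peak near level 12
hist = [1, 5, 20, 5, 1, 0, 0, 0, 0, 0, 1, 5, 20, 5, 1, 0]
t = kapur_threshold(hist)
print(2 < t < 12)  # threshold lands between the two modes
```

Pixels whose local gradient (or intensity) exceeds the chosen threshold are then marked as edge points.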
|
1211.2502 | New Edge Detection Technique based on the Shannon Entropy in Gray Level
Images | cs.CV | Edge detection is an important field in image processing. Edges characterize
object boundaries and are therefore useful for segmentation, registration,
feature extraction, and identification of objects in a scene. In this paper, an
approach utilizing an improvement of the Baljit and Amar method, which uses
Shannon entropy rather than the evaluation of image derivatives to detect
edges in gray-level images, is proposed. The proposed method reduces
the CPU time required for the edge detection process, and the quality of the
edges in the output images is robust. Standard test images, as well as
real-world and synthetic images, are used to compare the results of the proposed
edge detector with those of the Baljit and Amar edge detector. In order to
validate the results, the run times of the proposed method and the previous
method are presented. It has been observed that the proposed edge detector
works effectively for different gray-scale digital images. The performance
of the proposed technique in terms of measured CPU time and
edge quality is evaluated. Experimental results demonstrate
that the proposed method achieves better results than the relevant classic
method.
|
1211.2512 | Minimal cost feature selection of data with normal distribution
measurement errors | cs.AI cs.LG | Minimal cost feature selection is devoted to obtain a trade-off between test
costs and misclassification costs. This issue has been addressed recently on
nominal data. In this paper, we consider numerical data with measurement errors
and study minimal cost feature selection in this model. First, we build a data
model with normal distribution measurement errors. Second, the neighborhood of
each data item is constructed through the confidence interval. Compared with
discretized intervals, neighborhoods are more reasonable for maintaining the
information in the data. Third, we define a new minimal total cost feature
selection problem by considering the trade-off between test costs and
misclassification costs. Fourth, we propose a backtracking algorithm with
three effective pruning techniques to deal with this problem. The algorithm is
tested on four UCI data sets. Experimental results indicate that the pruning
techniques are effective, and the algorithm is efficient for data sets with
nearly one thousand objects.
|
1211.2532 | Iterative Thresholding Algorithm for Sparse Inverse Covariance
Estimation | stat.CO cs.LG stat.ML | The L1-regularized maximum likelihood estimation problem has recently become
a topic of great interest within the machine learning, statistics, and
optimization communities as a method for producing sparse inverse covariance
estimators. In this paper, a proximal gradient method (G-ISTA) for performing
L1-regularized covariance matrix estimation is presented. Although numerous
algorithms have been proposed for solving this problem, this simple proximal
gradient method is found to have attractive theoretical and numerical
properties. G-ISTA has a linear rate of convergence, resulting in an O(log(1/e))
iteration complexity to reach a tolerance of e. This paper gives eigenvalue
bounds for the G-ISTA iterates, providing a closed-form linear convergence
rate. The rate is shown to be closely related to the condition number of the
optimal point. Numerical convergence results and timing comparisons for the
proposed method are presented. G-ISTA is shown to perform very well, especially
when the optimal point is well-conditioned.
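A minimal sketch of the proximal-gradient step underlying G-ISTA, with a fixed step size in place of the step-size selection a full implementation would use (the data and regularization level are arbitrary):

```python
import numpy as np

def soft_threshold(A, tau):
    # Elementwise prox of tau * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def g_ista_sketch(S, lam, step=0.1, iters=500):
    # min_Theta  tr(S @ Theta) - logdet(Theta) + lam * ||Theta||_1;
    # the gradient of the smooth part is S - inv(Theta).
    theta = np.eye(S.shape[0])
    for _ in range(iters):
        grad = S - np.linalg.inv(theta)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # toy data with independent features
S = np.cov(X, rowvar=False)
theta = g_ista_sketch(S, lam=0.3)      # sparse estimate of the precision matrix
```

With independent features, the L1 penalty drives the off-diagonal entries of the estimate toward zero while the diagonal stays positive.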
|
1211.2555 | Viral spreading of daily information in online social networks | physics.soc-ph cs.SI | We explain a possible mechanism by which information spreading on a network
reaches extremely far from the seed node, namely viral spreading. On
the basis of a model of information spreading in an online social network,
in which the dynamics is expressed as a random multiplicative process of the
spreading rates, we show that correlation between the spreading rates
enhances the chance of viral spreading, shifting the tipping point at which
the spreading goes viral.
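The effect can be sketched with a toy random multiplicative process (parameters are illustrative, not from the paper): the cascade size after a number of generations is a product of per-step spreading rates, and fully correlated rates fatten the tail relative to independent rates.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, trials = 10, 20000
mu, sd = -0.1, 0.5   # log-rate distribution; mean < 0, so cascades typically die out

# Cascade size = product of lognormal spreading rates over `steps` generations.
indep = np.exp(rng.normal(mu, sd, (trials, steps)).sum(axis=1))   # independent rates
corr = np.exp(rng.normal(mu, sd, trials) * steps)                 # perfectly correlated rates

threshold = 100.0    # hypothetical size at which we call a cascade "viral"
p_indep = (indep > threshold).mean()
p_corr = (corr > threshold).mean()
# Correlation makes the log-size variance grow like steps**2 rather than steps,
# so the viral probability p_corr is far larger than p_indep.
```
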
|
1211.2556 | A Comparative Study of Gaussian Mixture Model and Radial Basis Function
for Voice Recognition | cs.LG cs.CV stat.ML | A comparative study of the application of Gaussian Mixture Model (GMM) and
Radial Basis Function (RBF) in biometric recognition of voice has been carried
out and presented. The application of machine learning techniques to biometric
authentication and recognition problems has gained widespread acceptance. In
this research, a GMM was trained, using the Expectation Maximization (EM)
algorithm, on a dataset containing 10 classes of vowels, and the model was used
to predict the appropriate classes using a validation dataset. For experimental
validity, the model was compared to the performance of two different versions
of RBF model using the same learning and validation datasets. The results
showed very close recognition accuracy between the GMM and the standard RBF
model, with the GMM performing better than the standard RBF by less than 1%;
both models outperformed similar models reported in the literature. The DTREG
version of RBF outperformed the other two models by producing 94.8% recognition
accuracy. In terms of recognition time, the standard RBF was found to be the
fastest among the three models.
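The EM training step can be sketched for a 1-D, two-component mixture on synthetic "vowel-like" features (the data and class structure are toy stand-ins for real acoustic features):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic feature classes centred at -2 and 2.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

# Minimal EM for a 2-component 1-D Gaussian mixture.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)
# mu should end up close to the true centres -2 and 2.
```

New points would then be classified by the component with the highest responsibility, which is the prediction step the abstract describes.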
|
1211.2647 | Determining a Loop Material Flow Pattern for Automatic Guided Vehicle
Systems on a Facility Layout | cs.SY | In this paper, we present a heuristic procedure for designing a loop material
flow pattern on a given facility layout with the aim of minimizing the total
material handling distances. We present an approximation of the total material
handling costs and greatly drop the required computational time by minimizing
the approximation instead of the original objective function.
|
1211.2651 | Correlation dimension of complex networks | physics.soc-ph cond-mat.stat-mech cs.SI | We propose a new measure to characterize the dimension of complex networks
based on the ergodic theory of dynamical systems. This measure is derived from
the correlation sum of a trajectory generated by a random walker navigating the
network, and extends the classical Grassberger-Procaccia algorithm to the
context of complex networks. The method is validated with reliable results for
both synthetic networks and real-world networks such as the world
air-transportation network or urban networks, and provides a computationally
fast way of estimating the dimensionality of networks that relies only on the
local information provided by the walkers.
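The idea can be sketched on a ring network, whose correlation dimension should come out close to 1 (the estimator below is a bare-bones version of the walker-based correlation sum, not the authors' full algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 1500
# Random walk on a ring of N nodes: a network of dimension 1.
walk = np.cumsum(rng.choice([-1, 1], size=T)) % N

# Graph distance on the ring.
diff = np.abs(walk[:, None] - walk[None, :])
dist = np.minimum(diff, N - diff)

def corr_sum(r):
    # Fraction of walker-position pairs within graph distance r.
    return (dist < r).mean()

r1, r2 = 3, 12
dim = np.log(corr_sum(r2) / corr_sum(r1)) / np.log(r2 / r1)
# dim is the local slope of log C(r) vs log r; for the ring it should be near 1.
```
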
|
1211.2696 | Metastability of Asymptotically Well-Behaved Potential Games | cs.GT cs.DS cs.SI | One of the main criticisms of game theory concerns the assumption of full
rationality. Logit dynamics is a decentralized algorithm in which a level of
irrationality (a.k.a. "noise") is introduced in players' behavior. In this
context, the solution concept of interest becomes the logit equilibrium, as
opposed to Nash equilibria. Logit equilibria are distributions over strategy
profiles that possess several nice properties, including existence and
uniqueness. However, there are games in which their computation may take time
exponential in the number of players. We therefore look at an approximate
version of logit equilibria, called metastable distributions, introduced by
Auletta et al. [SODA 2012]. These are distributions that remain stable (i.e.,
players do not move too far from them) for a super-polynomial number of steps
(rather than forever, as for logit equilibria). The hope is that these
distributions exist and can be reached quickly by logit dynamics.
We identify a class of potential games, called asymptotically well-behaved,
for which the behavior of the logit dynamics is not chaotic as the number of
players increases, so as to guarantee meaningful asymptotic results. We prove
that any such game admits distributions which are metastable regardless of the
level of noise present in the system and the starting profile of the dynamics.
These distributions can be reached quickly if the rationality level is not too
large compared to the inverse of the maximum difference in potential. Our proofs
build on results which may be of independent interest, including some spectral
characterizations of the transition matrix defined by logit dynamics for
generic games and the relationship of several convergence measures for Markov
chains.
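Logit dynamics itself is easy to simulate; a toy sketch on a simple coordination potential game among n players on a complete graph (this illustrates the dynamics only, not the paper's metastability analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, T = 20, 1.0, 20000    # beta is the rationality level (inverse noise)
s = rng.integers(0, 2, n)      # binary strategies; payoff = number of agreeing opponents
mags = []
for _ in range(T):
    i = rng.integers(n)
    others_one = s.sum() - s[i]
    # beta times the payoff difference between playing 1 and playing 0.
    delta = beta * (others_one - (n - 1 - others_one))
    # Logit update: choose strategy 1 with probability proportional to exp(beta * payoff).
    s[i] = int(rng.random() < 1.0 / (1.0 + np.exp(-delta)))
    mags.append(abs(2.0 * s.mean() - 1.0))
avg_mag = float(np.mean(mags[T // 2:]))
# At this beta the chain quickly concentrates near one of the two consensus profiles,
# so the time-averaged "magnetization" over the second half of the run is near 1.
```
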
|
1211.2699 | A Non-Blind Watermarking Scheme for Gray Scale Images in Discrete
Wavelet Transform Domain using Two Subbands | cs.MM cs.CV | Digital watermarking is the process of hiding a digital pattern directly in
digital content. Digital watermarking techniques are used to address digital
rights management, protect information and conceal secrets. An invisible
non-blind watermarking approach for gray scale images is proposed in this
paper. The host image is decomposed into 3-levels using Discrete Wavelet
Transform. Based on the parent-child relationship between the wavelet
coefficients, the Set Partitioning in Hierarchical Trees (SPIHT) compression
algorithm is performed on the LH3, LH2, HL3 and HL2 subbands to find out the
significant coefficients. The most significant coefficients of LH2 and HL2
bands are selected to embed a binary watermark image. The selected significant
coefficients are modulated using the Noise Visibility Function, which provides
an embedding strength that ensures good imperceptibility. The approach is tested
against various image processing attacks such as addition of noise, filtering,
cropping, JPEG compression, histogram equalization and contrast adjustment. The
experimental results reveal the high effectiveness of the method.
|
1211.2717 | Proximal Stochastic Dual Coordinate Ascent | stat.ML cs.LG math.OC | We introduce a proximal version of the dual coordinate ascent method. We
demonstrate how the derived algorithmic framework can be used for numerous
regularized loss minimization problems, including $\ell_1$ regularization and
structured output SVM. The convergence rates we obtain match, and sometimes
improve, state-of-the-art results.
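For the smooth special case of ridge regression the dual coordinate update has a closed form, which gives a minimal SDCA sketch (toy data; the paper's proximal extension additionally handles nonsmooth regularizers such as the L1 norm):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, lam = 100, 5, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# SDCA for ridge regression: primal (1/n) sum 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2.
alpha = np.zeros(n)
w = np.zeros(d)
for _ in range(30 * n):                     # ~30 epochs of random coordinate updates
    i = rng.integers(n)
    # Closed-form maximizer of the dual objective in coordinate alpha_i.
    delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)           # maintain w = X.T @ alpha / (lam * n)

# Exact minimizer of the same primal objective, for comparison.
w_ref = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
```

After a few epochs the dual iterate matches the closed-form ridge solution, which is the linear convergence the dual coordinate framework promises in this smooth case.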
|
1211.2719 | Quantum Consciousness Soccer Simulator | cs.AI cs.MA | In the cognitive sciences it is not uncommon to use various games effectively.
For example, in artificial intelligence, the RoboCup initiative was set up
to catalyse research in the field of autonomous agent technology. In this
paper, we introduce a similar soccer simulation initiative to try to
investigate a model of human consciousness and a notion of reality in the form
of a cognitive problem. In addition, for example, the home pitch advantage and
the objective role of the supporters could be naturally described and discussed
in terms of this new soccer simulation model.
|
1211.2723 | On the Relationships among Optimal Symmetric Fix-Free Codes | cs.IT math.IT | Symmetric fix-free codes are prefix condition codes in which each codeword is
required to be a palindrome. Their study is motivated by the topic of joint
source-channel coding. Although they have been considered by a few communities,
they are not well understood. In earlier work we used a collection of instances
of Boolean satisfiability problems as a tool in the generation of all optimal
binary symmetric fix-free codes with n codewords and observed that the number
of different optimal codelength sequences grows slowly compared with the
corresponding number for prefix condition codes. We demonstrate that all
optimal symmetric fix-free codes can alternatively be obtained by sequences of
codes generated by simple manipulations starting from one particular code. We
also discuss simplifications in the process of searching for this set of codes.
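The defining properties are easy to check directly; a small sketch (the example codes are illustrative, not from the paper — note that for palindromic codewords, prefix-freeness already implies suffix-freeness, hence the fix-free property):

```python
def is_symmetric_fixfree(codewords):
    # Symmetric: every codeword is a palindrome.
    if any(w != w[::-1] for w in codewords):
        return False
    # Prefix condition: no codeword is a proper prefix of another.
    # For palindromes this also rules out proper suffixes, so the code is fix-free.
    return not any(a != b and b.startswith(a) for a in codewords for b in codewords)

print(is_symmetric_fixfree(["0", "11", "101"]))   # True: palindromes, prefix-free
print(is_symmetric_fixfree(["1", "11"]))          # False: "1" is a prefix of "11"
```
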
|
1211.2736 | Hybrid Systems for Knowledge Representation in Artificial Intelligence | cs.AI | A few knowledge representation (KR) techniques are available for
efficiently representing knowledge. However, as complexity increases,
better methods are needed. Some researchers came up with hybrid mechanisms by
combining two or more methods. In an effort to construct an intelligent
computer system, a primary consideration is to represent large amounts of
knowledge in a way that allows effective use and efficiently organizing
information to facilitate making the recommended inferences. There are merits
and demerits to each combination, and a standardized method of KR is needed. In
this paper, various hybrid schemes of KR are explored at length and their
details presented.
|
1211.2737 | An Exploration on Brain Computer Interface and Its Recent Trends | cs.HC cs.ET cs.SY | A detailed exploration of the Brain Computer Interface (BCI) and its recent
trends is presented in this paper. Work is being done to identify objects, images,
videos and their color compositions. Efforts are on the way in understanding
speech, words, emotions, feelings and moods. When humans watch the surrounding
environment, visual data is processed by the brain, and it is possible to
reconstruct the same on the screen with some appreciable accuracy by analyzing
the physiological data. This data is acquired using a non-invasive
technique such as electroencephalography (EEG) in BCI. The acquired signal is
then translated to produce the image on the screen. This paper also lays out
suitable directions for future work.
|
1211.2741 | A Hindi Speech Actuated Computer Interface for Web Search | cs.CL cs.HC cs.IR | Aiming at increased system simplicity and flexibility, an audio-evoked
system was developed by integrating a simplified headphone and user-friendly
software design. This paper describes a Hindi Speech Actuated Computer
Interface for Web search (HSACIWS), which accepts spoken queries in Hindi
language and provides the search result on the screen. This system recognizes
spoken queries by large vocabulary continuous speech recognition (LVCSR),
retrieves relevant documents by text retrieval, and provides the search result
on the Web by the integration of the Web and the voice systems. The LVCSR in
this system showed adequate performance for speech, with acoustic and
language models derived from a query corpus with the target contents.
|