| id | title | categories | abstract |
|---|---|---|---|
1209.5071
|
Automorphism of order 2p in binary self-dual extremal codes of length a
multiple of 24
|
cs.IT math.IT math.RT
|
Let C be a binary self-dual code with an automorphism g of order 2p, where p
is an odd prime, such that g^p is a fixed-point-free involution. If C is
extremal of length a multiple of 24, all involutions are fixed-point-free,
except in the Golay code and possibly in putative codes of length 120.
Connecting module-theoretical properties of a self-dual code C with
coding-theoretical ones of the subcode C(g^p), which consists of the set of
fixed points of g^p, we prove that C is a projective F_2<g>-module if and only
if a natural projection of C(g^p) is a self-dual code. We then discuss
easy-to-handle criteria to decide whether C is projective or not. As an
application, we consider in the last part extremal self-dual codes of length
120, proving that their automorphism group does not contain elements of order
38 or 58.
|
1209.5077
|
Complexity Reduction for Parameter-Dependent Linear Systems
|
cs.SY math.OC
|
We present a complexity reduction algorithm for a family of
parameter-dependent linear systems when the system parameters belong to a
compact semi-algebraic set. This algorithm potentially describes the underlying
dynamical system with fewer parameters or state variables. To do so, it
minimizes the distance (i.e., H-infinity-norm of the difference) between the
original system and its reduced version. We present a sub-optimal solution to
this problem using sum-of-squares optimization methods. We present the results
for both continuous-time and discrete-time systems. Lastly, we illustrate the
applicability of our proposed algorithm on numerical examples.
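As a rough illustration of the objective, the H-infinity norm of the difference between a system and its reduced version can be approximated, for SISO transfer functions, by sampling the magnitude response on a frequency grid. This is a minimal sketch, not the paper's sum-of-squares method; the example systems G(s) = 1/(s+1) and Gr(s) = 1/(s+2) are illustrative.

```python
import numpy as np

def hinf_norm(num, den, freqs):
    """Approximate the H-infinity norm of a stable SISO transfer function
    G(s) = num(s)/den(s) by sampling |G(jw)| on a frequency grid."""
    s = 1j * freqs
    return np.abs(np.polyval(num, s) / np.polyval(den, s)).max()

# Distance between G(s) = 1/(s+1) and a hypothetical "reduced" model
# Gr(s) = 1/(s+2); the difference is G - Gr = 1/((s+1)(s+2)).
w = np.logspace(-3, 3, 2000)
dist = hinf_norm([1.0], np.polymul([1.0, 1.0], [1.0, 2.0]), w)
```

For these first-order examples the peak of |G(jw) - Gr(jw)| sits at w = 0, so the sampled value approaches 1/2.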
|
1209.5083
|
A Simple Proof for the Existence of "Good" Pairs of Nested Lattices
|
cs.IT math.IT
|
This paper provides a simplified proof for the existence of nested lattice
codebooks that achieve the capacity of the additive white Gaussian noise
channel, as well as the optimal rate-distortion trade-off for a Gaussian
source. The proof is self-contained and relies only on basic probabilistic and
geometric arguments. An ensemble of nested lattices that is different from, and
more elementary than, the one used in previous proofs is introduced. This
ensemble is based on lifting different subcodes of a linear code to the
Euclidean space using Construction A. In addition to being simpler, our
analysis is less sensitive to the assumption that the additive noise is
Gaussian. In particular, for additive ergodic noise channels it is shown that
the achievable rates of the nested lattice coding scheme depend on the noise
distribution only via its power. Similarly, the nested lattice source coding
scheme attains the same rate-distortion trade-off for all ergodic sources with
the same second moment.
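A minimal sketch of Construction A, which the ensemble is built on: a linear code C over Z_q is lifted to the lattice of all integer vectors that reduce to a codeword mod q. The toy code and parameters below are illustrative, not the paper's ensemble.

```python
import itertools

def construction_a_points(codewords, q, m):
    """Enumerate points of the Construction A lattice
    Lambda = {x in Z^n : x mod q is a codeword of C},
    restricted to integer shifts z in [-m, m)^n for illustration."""
    n = len(codewords[0])
    pts = set()
    for z in itertools.product(range(-m, m), repeat=n):
        for c in codewords:
            pts.add(tuple(c[i] + q * z[i] for i in range(n)))
    return pts

# Toy linear code over Z_2 of length 2: the repetition code {00, 11}.
code = [(0, 0), (1, 1)]
lattice = construction_a_points(code, q=2, m=2)
```

Every enumerated point reduces mod q to a codeword, which is the defining property of the lifted lattice.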
|
1209.5108
|
Global passive system approximation
|
cs.SY
|
In this paper we present a new approach towards global passive approximation
in order to find a passive transfer function G(s) that is nearest in some
well-defined matrix norm sense to a non-passive transfer function H(s). It is
based on existing solutions to pertinent matrix nearness problems. It is shown
that the key point in constructing the nearest passive transfer function is to
find a good rational approximation of the well-known ramp function over an
interval defined by the minimum and maximum dissipation of H(s). The proposed
algorithms rely on the stable anti-stable projection of a given transfer
function. Pertinent examples are given to show the scope and accuracy of the
proposed algorithms.
|
1209.5111
|
Making a Science of Model Search
|
cs.CV cs.NE
|
Many computer vision algorithms depend on a variety of parameter choices and
settings that are typically hand-tuned in the course of evaluating the
algorithm. While such parameter tuning is often presented as being incidental
to the algorithm, correctly setting these parameter choices is frequently
critical to evaluating a method's full potential. Compounding matters, these
parameters often must be re-tuned when the algorithm is applied to a new
problem domain, and the tuning process itself often depends on personal
experience and intuition in ways that are hard to describe. Since the
performance of a given technique depends on both the fundamental quality of the
algorithm and the details of its tuning, it can be difficult to determine
whether a given technique is genuinely better, or simply better tuned.
In this work, we propose a meta-modeling approach to support automated
hyperparameter optimization, with the goal of providing practical tools to
replace hand-tuning with a reproducible and unbiased optimization process. Our
approach is to expose the underlying expression graph of how a performance
metric (e.g. classification accuracy on validation examples) is computed from
parameters that govern not only how individual processing steps are applied,
but even which processing steps are included. A hyperparameter optimization
algorithm transforms this graph into a program for optimizing that performance
metric. Our approach yields state-of-the-art results on three disparate
computer vision
problems: a face-matching verification task (LFW), a face identification task
(PubFig83) and an object recognition task (CIFAR-10), using a single algorithm.
More broadly, we argue that the formalization of a meta-model supports more
objective, reproducible, and quantitative evaluation of computer vision
algorithms, and that it can serve as a valuable tool for guiding algorithm
development.
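The conditional search space idea can be sketched as a configuration graph where one choice determines which downstream parameters exist at all, explored here with plain random search (a stand-in for the paper's hyperparameter optimization algorithms; the parameters and scores are made up for illustration).

```python
import random

def sample_config(rng):
    """Sample from a conditional search space: choosing the preprocessing
    step changes which downstream parameters exist at all."""
    cfg = {"preproc": rng.choice(["none", "blur"])}
    if cfg["preproc"] == "blur":
        cfg["blur_width"] = rng.choice([1, 3, 5])
    cfg["n_filters"] = rng.choice([16, 32, 64])
    return cfg

def score(cfg):
    # Stand-in for "classification accuracy on validation examples".
    base = {"none": 0.70, "blur": 0.75}[cfg["preproc"]]
    return base + 0.01 * cfg.get("blur_width", 0) + 0.0001 * cfg["n_filters"]

rng = random.Random(0)
best = max((sample_config(rng) for _ in range(200)), key=score)
```

Because the metric is an ordinary function of the sampled configuration, any optimizer that consumes the graph of choices, from random search to smarter model-based methods, can be swapped in.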
|
1209.5145
|
Julia: A Fast Dynamic Language for Technical Computing
|
cs.PL cs.CE
|
Dynamic languages have become popular for scientific computing. They are
generally considered highly productive, but lacking in performance. This paper
presents Julia, a new dynamic language for technical computing, designed for
performance from the beginning by adapting and extending modern programming
language techniques. A design based on generic functions and a rich type system
simultaneously enables an expressive programming model and successful type
inference, leading to good performance for a wide range of programs. This makes
it possible for much of the Julia library to be written in Julia itself, while
also incorporating best-of-breed C and Fortran libraries.
|
1209.5180
|
Stochastic Sensor Scheduling for Networked Control Systems
|
math.OC cs.SY math.PR
|
Optimal sensor scheduling with applications to networked estimation and
control systems is considered. We model sensor measurement and transmission
instances using jumps between states of a continuous-time Markov chain. We
introduce a cost function for this Markov chain as the summation of terms
depending on the average sampling frequencies of the subsystems and the effort
needed for changing the parameters of the underlying Markov chain. By
minimizing this cost function through extending Brockett's recent approach to
optimal control of Markov chains, we extract an optimal scheduling policy to
fairly allocate the network resources among the control loops. We study the
statistical properties of this scheduling policy in order to compute upper
bounds for the closed-loop performance of the networked system, where several
decoupled scalar subsystems are connected to their corresponding estimator or
controller through a shared communication medium. We generalize the estimation
results to observable subsystems of arbitrary order. Finally, we illustrate the
developed results numerically on a networked system composed of several
decoupled water tanks.
|
1209.5187
|
Identification of Sparse Linear Operators
|
cs.IT math.IT
|
We consider the problem of identifying a linear deterministic operator from
its response to a given probing signal. For a large class of linear operators,
we show that stable identifiability is possible if the total support area of
the operator's spreading function satisfies D<=1/2. This result holds for an
arbitrary (possibly fragmented) support region of the spreading function, does
not impose limitations on the total extent of the support region, and, most
importantly, does not require the support region to be known prior to
identification. Furthermore, we prove that stable identifiability of almost all
operators is possible if D<1. This result is surprising as it says that there
is no penalty for not knowing the support region of the spreading function
prior to identification. Algorithms that provably recover all operators with
D<=1/2, and almost all operators with D<1 are presented.
|
1209.5212
|
Error Correction for Cooperative Data Exchange
|
cs.IT math.IT
|
This paper considers the problem of error correction for a cooperative data
exchange (CDE) system, where some clients are compromised or failed and send
false messages. Assuming each client possesses a subset of the total messages,
we analyze the error correction capability when every client is allowed to
broadcast only one linearly-coded message. Our error correction capability
bound determines the maximum number of clients that can be compromised or
failed without jeopardizing the final decoding solution at each client. We show
that deterministic, feasible linear codes exist that can achieve the derived
bound. We also evaluate random linear codes, where the coding coefficients are
drawn randomly, and then develop the probability for a client to withstand a
certain number of compromised or failed peers and successfully deduce the
complete message for any network size and any initial message distributions.
|
1209.5213
|
Capacity Results for Arbitrarily Varying Wiretap Channels
|
cs.IT math.IT
|
In this work the arbitrarily varying wiretap channel (AVWC) is studied. We
derive a lower bound on the random code secrecy capacity for the average error
criterion and the strong secrecy criterion in the case of a best channel to the
eavesdropper by using Ahlswede's robustification technique for ordinary AVCs.
We show that in the case of a non-symmetrisable channel to the legitimate
receiver the deterministic code secrecy capacity equals the random code secrecy
capacity, a result similar to Ahlswede's dichotomy result for ordinary AVCs.
Using this we can derive that the lower bound is also valid for the
deterministic code capacity of the AVWC. The proof of the dichotomy result is
based on the elimination technique introduced by Ahlswede for ordinary AVCs. We
further prove upper bounds on the deterministic code secrecy capacity in the
general case, which results in a multi-letter expression for the secrecy
capacity in the case of a best channel to the eavesdropper. The main
contribution of this work is to integrate the strong secrecy criterion into
techniques of Ahlswede that were originally developed to guarantee the validity
of a reliability criterion.
|
1209.5218
|
A New Continuous-Time Equality-Constrained Optimization Method to Avoid
Singularity
|
cs.NE
|
In equality-constrained optimization, a standard regularity assumption is
often associated with feasible point methods, namely the gradients of
constraints are linearly independent. In practice, the regularity assumption
may be violated. To avoid such a singularity, we propose a new projection
matrix, based on which a feasible point method for the continuous-time,
equality-constrained optimization problem is developed. First, the equality
constraint is transformed into a continuous-time dynamical system with
solutions that always satisfy the equality constraint. Then, the singularity is
explained in detail and a new projection matrix is proposed to avoid
singularity. An update law (i.e., a controller) is subsequently designed to
decrease the objective function along the solutions of the transformed system.
The invariance principle is applied to analyze the behavior of the solutions.
We also propose a modified approach for addressing cases in which solutions do
not satisfy the equality constraint. Finally, the proposed optimization
approaches are applied to two examples to demonstrate their effectiveness.
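For context, the classical construction that the regularity assumption underwrites can be sketched as a projected gradient flow using the standard projection P = I - J^T (J J^T)^{-1} J; this is not the paper's new singularity-avoiding projection matrix, and the toy problem is illustrative.

```python
import numpy as np

def projected_gradient_flow(grad_f, J, x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = -P grad_f(x), where the classical projection
    P = I - J^T (J J^T)^{-1} J maps onto the tangent space of {x : J x = b}.
    This P requires J to have full row rank (the regularity assumption)."""
    x = np.asarray(x0, dtype=float)
    P = np.eye(len(x)) - J.T @ np.linalg.inv(J @ J.T) @ J
    for _ in range(steps):
        x = x - dt * (P @ grad_f(x))
    return x

# Minimize x1^2 + x2^2 subject to x1 + x2 = 1, from the feasible point (1, 0).
J = np.array([[1.0, 1.0]])
x_star = projected_gradient_flow(lambda x: 2 * x, J, [1.0, 0.0])
```

Each Euler step stays in the tangent space of the constraint, so the iterate remains feasible while the objective decreases toward the constrained minimizer (0.5, 0.5).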
|
1209.5221
|
Design of APSK Constellations for Coherent Optical Channels with
Nonlinear Phase Noise
|
cs.IT math.IT
|
We study the design of amplitude phase-shift keying (APSK) constellations for
a coherent fiber-optical communication system where nonlinear phase noise
(NLPN) is the main system impairment. APSK constellations can be regarded as a
union of phase-shift keying (PSK) signal sets with different amplitude levels.
A practical two-stage (TS) detection scheme is analyzed, which performs close
to optimal detection for high enough input power. We optimize APSK
constellations with 4, 8, and 16 points in terms of symbol error probability
(SEP) under TS detection for several combinations of input power and fiber
length. Our results show that APSK is a promising modulation format in order to
cope with NLPN. As an example, for 16 points, performance gains of 3.2 dB can
be achieved at a SEP of 10^-2 compared to 16-QAM by choosing an optimized APSK
constellation. We also demonstrate that in the presence of severe nonlinear
distortions, it may become beneficial to sacrifice a constellation point or an
entire constellation ring to reduce the average SEP. Finally, we discuss the
problem of selecting a good binary labeling for the found constellations. For
the class of rectangular APSK a labeling design method is proposed, resulting
in near-optimal bit error probability.
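An APSK constellation, as described, is a union of PSK rings; a minimal sketch generating an illustrative 4+12-point constellation (the radii below are not the paper's optimized values).

```python
import numpy as np

def apsk(ring_sizes, ring_radii):
    """Build an APSK constellation as a union of PSK rings: ring k carries
    ring_sizes[k] uniformly spaced points at radius ring_radii[k]."""
    pts = []
    for n, r in zip(ring_sizes, ring_radii):
        pts.extend(r * np.exp(2j * np.pi * np.arange(n) / n))
    return np.array(pts)

# Illustrative 4+12-point APSK, normalized to unit average symbol energy.
const = apsk([4, 12], [0.5, 1.0])
const /= np.sqrt(np.mean(np.abs(const) ** 2))
```

Optimizing such a constellation then amounts to searching over the ring sizes, radii, and phase offsets for the lowest symbol error probability under the chosen detector.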
|
1209.5231
|
Time-Ordered Product Expansions for Computational Stochastic Systems
Biology
|
q-bio.QM cs.CE nlin.AO
|
The time-ordered product framework of quantum field theory can also be used
to understand salient phenomena in stochastic biochemical networks. It is used
here to derive Gillespie's Stochastic Simulation Algorithm (SSA) for chemical
reaction networks; consequently, the SSA can be interpreted in terms of Feynman
diagrams. It is also used here to derive other, more general simulation and
parameter-learning algorithms including simulation algorithms for networks of
stochastic reaction-like processes operating on parameterized objects, and also
hybrid stochastic reaction/differential equation models in which systems of
ordinary differential equations evolve the parameters of objects that can also
undergo stochastic reactions. Thus, the time-ordered product expansion (TOPE)
can be used systematically to derive simulation and parameter-fitting
algorithms for stochastic systems.
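Gillespie's SSA, which the paper derives from the time-ordered product expansion, can be sketched for the single decay reaction A -> 0 (a minimal direct-method implementation; the rate and population are illustrative).

```python
import random

def ssa_decay(n0, k, t_max, rng):
    """Gillespie direct-method SSA for the single reaction A -> 0 with rate
    constant k: with n molecules the propensity is a = k*n, the waiting time
    to the next firing is Exp(a), and each firing removes one molecule."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(k * n)
        if t > t_max:
            break
        n -= 1
    return n

# The trajectory average should track the mean-field ODE solution n0*exp(-k*t).
rng = random.Random(1)
mean_n = sum(ssa_decay(200, 0.5, 2.0, rng) for _ in range(500)) / 500
```

With k = 0.5 and t = 2, the mean-field prediction is 200/e, about 74 molecules, and the Monte Carlo average concentrates around it.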
|
1209.5244
|
Ranking Search Engine Result Pages based on Trustworthiness of Websites
|
cs.DB
|
The World Wide Web (WWW) is a repository of a large number of web pages which
can be accessed via the Internet by multiple users at the same time, and is
therefore ubiquitous in nature. The search engine is a key application used to
retrieve web pages from this huge repository; it typically uses link analysis
to rank web pages without considering the facts they provide. A new algorithm
called the Probability of Correctness of Facts (PCF)-Engine is proposed to find
the accuracy of the facts provided by web pages. It uses a probability-based
similarity function (SIM) which performs string matching between the true facts
and the facts of web pages to find their probability of correctness. Existing
semantic search engines may give relevant results to a user query but may not
be 100% accurate. Our algorithm computes the trustworthiness of websites to
rank web pages. Simulation results show that our approach is efficient when
compared with the existing Voting and TruthFinder [1] algorithms with respect
to the trustworthiness of the websites.
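The flavor of iterative trust computation can be sketched as follows; note this is a generic TruthFinder-style iteration for illustration, not the paper's PCF-Engine or its SIM function.

```python
from math import prod

def rank_by_trust(claims, n_iters=20):
    """TruthFinder-flavored iteration (a sketch, not the paper's PCF engine):
    a fact's confidence is the probability that at least one of its asserting
    sites is right, and a site's trust is the mean confidence of its facts."""
    facts = {f for fs in claims.values() for f in fs}
    trust = {w: 0.5 for w in claims}
    for _ in range(n_iters):
        conf = {f: 1 - prod(1 - trust[w] for w, fs in claims.items() if f in fs)
                for f in facts}
        trust = {w: sum(conf[f] for f in fs) / len(fs) for w, fs in claims.items()}
    return trust

# Sites A and B corroborate fact f1; site C asserts an uncorroborated fact.
sites = {"A": {"f1", "f2"}, "B": {"f1", "f3"}, "C": {"f4"}}
trust = rank_by_trust(sites)
```

Corroborated sites reinforce each other and end up ranked above the isolated site, which is the ordering a trust-based result ranker would use.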
|
1209.5245
|
Spike Timing Dependent Competitive Learning in Recurrent Self Organizing
Pulsed Neural Networks Case Study: Phoneme and Word Recognition
|
cs.CV cs.AI q-bio.NC
|
Synaptic plasticity appears to be a fundamental aspect of the dynamics of
neural networks. It concerns the physiological modifications of the synapse,
whose consequence is a variation of the synaptic weight. The information
encoding is based on the precise timing of single spike events: the relative
timing of pre- and post-synaptic spikes, local synapse competition within a
single neuron, and global competition via lateral connections. In order to
classify temporal sequences, we present in this paper how to use local Hebbian
learning, namely spike-timing dependent plasticity, for unsupervised
competitive learning that preserves self-organizing maps of spiking neurons. We
present three variants of self-organizing maps (SOM) with a spike-timing
dependent Hebbian learning rule: the Leaky Integrator Neurons (LIN), the
Spiking_SOM, and the recurrent Spiking_SOM (RSSOM) models. The case study of
the proposed SOM variants is phoneme classification and word recognition in
continuous, speaker-independent speech.
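The spike-timing dependent plasticity rule underlying such learning can be sketched in its common pair-based exponential form (the amplitudes and time constant below are illustrative, not the paper's).

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based exponential STDP: dt = t_post - t_pre (in ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# Apply the rule to one synapse over a few spike pairings.
w = 0.5
for dt in [5.0, 12.0, -7.0]:
    w += stdp_dw(dt)
    w = min(max(w, 0.0), 1.0)  # clip the weight to [0, 1]
```

Competition then arises because synapses whose inputs consistently precede the postsynaptic spike grow at the expense of the others.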
|
1209.5246
|
Information requirements for enterprise systems
|
cs.SE cs.SI
|
In this paper, we discuss an approach to system requirements engineering,
which is based on using models of the responsibilities assigned to agents in a
multi-agency system of systems. The responsibility models serve as a basis for
identifying the stakeholders that should be considered in establishing the
requirements and provide a basis for a structured approach, described here, for
information requirements elicitation. We illustrate this approach using a case
study drawn from civil emergency management.
|
1209.5251
|
On Move Pattern Trends in a Large Go Games Corpus
|
cs.AI cs.LG
|
We process a large corpus of game records of the board game of Go and propose
a way of extracting summary information on played moves. We then apply several
basic data-mining methods on the summary information to identify the most
differentiating features within the summary information, and discuss their
correspondence with traditional Go knowledge. We show statistically significant
mappings of the features to player attributes such as playing strength or
informally perceived "playing style" (e.g. territoriality or aggressivity),
describe accurate classifiers for these attributes, and propose applications
including seeding real-world ranks of internet players, aiding in Go study and
the tuning of Go-playing programs, and contributing to the Go-theoretic
discussion on the scope of "playing style".
|
1209.5259
|
Entropy Bounds for Discrete Random Variables via Maximal Coupling
|
cs.IT math.IT math.PR
|
This paper derives new bounds on the difference of the entropies of two
discrete random variables in terms of the local and total variation distances
between their probability mass functions. The derivation of the bounds relies
on maximal coupling, and they apply to discrete random variables which are
defined over finite or countably infinite alphabets. Loosened versions of these
bounds are demonstrated to reproduce some previously reported results. The use
of the new bounds is exemplified for the Poisson approximation, where bounds on
the local and total variation distances follow from Stein's method.
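For context, a classical loosened bound of this type for finite alphabets is the Fannes/Audenaert-style inequality |H(P) - H(Q)| <= d*log2(|A|-1) + h_b(d), where d is the total variation distance (valid for d <= 1 - 1/|A|); this is a well-known reference inequality, not the paper's new bound. A numeric sketch:

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def tv(p, q):
    """Total variation distance: half the L1 distance between the PMFs."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def hb(x):
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def fannes_bound(p, q):
    """Classical finite-alphabet bound: |H(P)-H(Q)| <= d*log2(|A|-1) + hb(d),
    with d = TV(P, Q), valid for d <= 1 - 1/|A|."""
    d = tv(p, q)
    return d * math.log2(len(p) - 1) + hb(d)

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
gap = abs(entropy(p) - entropy(q))
```

On the binary pair P = (1, 0), Q = (1/2, 1/2) the bound is met with equality, which shows it cannot be improved in general.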
|
1209.5260
|
Towards Ultrahigh Dimensional Feature Selection for Big Data
|
cs.LG
|
In this paper, we present a new adaptive feature scaling scheme for
ultrahigh-dimensional feature selection on Big Data. To solve this problem
effectively, we first reformulate it as a convex semi-infinite programming
(SIP) problem and then propose an efficient \emph{feature generating paradigm}.
In contrast with traditional gradient-based approaches that conduct
optimization on all input features, the proposed method iteratively activates a
group of features and solves a sequence of multiple kernel learning (MKL)
subproblems of much reduced scale. To further speed up the training, we propose
to solve the MKL subproblems in their primal forms through a modified
accelerated proximal gradient approach. Due to such an optimization scheme,
some efficient cache techniques are also developed. The feature generating
paradigm can guarantee that the solution converges globally under mild
conditions and achieve lower feature selection bias. Moreover, the proposed
method can tackle two challenging tasks in feature selection: 1) group-based
feature selection with complex structures and 2) nonlinear feature selection
with explicit feature mappings. Comprehensive experiments on a wide range of
synthetic and real-world datasets containing tens of millions of data points with
$O(10^{14})$ features demonstrate the competitive performance of the proposed
method over state-of-the-art feature selection methods in terms of
generalization performance and training efficiency.
|
1209.5306
|
A Model of Decision-Making in Groups of Humans
|
physics.soc-ph cs.SI q-bio.NC
|
Decisions by humans depend on their estimations given some uncertain sensory
data. These decisions can also be influenced by the behavior of others. Here we
present a mathematical model to quantify this influence, inviting a further
study on the cognitive consequences of social information. We also expect that
the present model can be used for a better understanding of the neural circuits
implicated in social processing.
|
1209.5331
|
Reliability of swarming algorithms for mobile sensor network
applications
|
cs.MA
|
There are many well-studied swarming algorithms which are often suited to
very specific purposes. As mobile sensor networks become increasingly complex,
and are comprised of more and more agents, it makes sense to consider swarming
algorithms for movement control. We introduce a natural way to measure the
reliability of various swarming algorithms so a balance can be struck between
algorithmic complexity and sampling accuracy.
|
1209.5333
|
Recent Trends of Measurement and Development of Vibration Sensors
|
cs.SY physics.ins-det
|
Sensors are devices which monitor a parameter of a system, ideally without
disturbing that parameter. Vibration measurement has become an important method
in the research, design, production, application, and maintenance of mechanical
structural products, and vibration sensors are increasingly important as key
devices. Nowadays, with the development of computer technology, electronic
technology, and manufacturing processes, a variety of vibration sensors have
emerged in succession.
|
1209.5335
|
BPRS: Belief Propagation Based Iterative Recommender System
|
cs.LG
|
In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to previous recommender algorithms, BPRS does not require solving the
recommendation problem for all users in order to update the recommendations of
only a single active user. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
the state-of-the-art methods such as the correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
new promising approach which offers a significant advantage on scalability
while providing competitive accuracy for the recommender systems.
|
1209.5339
|
Developing Improved Greedy Crossover to Solve Symmetric Traveling
Salesman Problem
|
cs.NE
|
The Traveling Salesman Problem (TSP) is one of the most famous optimization
problems. Greedy crossover, designed by Grefenstette et al., can be used when
the Symmetric TSP (STSP) is solved by a Genetic Algorithm (GA). Researchers
have proposed several versions of greedy crossover; here we propose an improved
version. We compare our greedy crossover with several recent crossovers by
using each of them in a GA and comparing the crossovers on speed and accuracy.
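One common formulation of Grefenstette-style greedy crossover can be sketched as follows (a simplified illustration; the tie-breaking and fallback rules vary between versions, and the instance is a toy).

```python
import random

def greedy_crossover(p1, p2, dist, rng):
    """One common form of Grefenstette-style greedy crossover for the TSP:
    from the current city, move to the nearer of the two parents' successor
    cities; if both are already visited, jump to a random unvisited city."""
    n = len(p1)
    succ = lambda tour, c: tour[(tour.index(c) + 1) % n]
    child, visited = [p1[0]], {p1[0]}
    while len(child) < n:
        cur = child[-1]
        cands = [c for c in (succ(p1, cur), succ(p2, cur)) if c not in visited]
        if cands:
            nxt = min(cands, key=lambda c: dist[cur][c])
        else:
            nxt = rng.choice([c for c in p1 if c not in visited])
        child.append(nxt)
        visited.add(nxt)
    return child

# Four cities on a line, distance |i - j|.
D = [[abs(i - j) for j in range(4)] for i in range(4)]
child = greedy_crossover([0, 1, 2, 3], [0, 2, 1, 3], D, random.Random(0))
```

Proposed improvements typically change how the successor edges are compared or how the fallback city is chosen when both parent edges are blocked.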
|
1209.5345
|
Mining Social Data to Extract Intellectual Knowledge
|
cs.AI cs.SI
|
Social data mining is an interesting phenomenon which colligates different
sources of social data to extract information. This information can be used in
relationship prediction, decision making, pattern recognition, social mapping,
responsibility distribution, and many other applications. This paper presents a
systematic data mining architecture to mine intellectual knowledge from social
data. In this research, we use the social networking site Facebook as the
primary data source. We collect different attributes such as "about me",
comments, wall posts, and age from Facebook as raw data and use advanced data
mining approaches to extract intellectual knowledge. We also analyze our mined
knowledge for possible usages such as human behavior prediction, pattern
recognition, job responsibility distribution, decision making, and product
promotion.
|
1209.5350
|
Learning Topic Models and Latent Bayesian Networks Under Expansion
Constraints
|
stat.ML cs.LG stat.AP
|
Unsupervised estimation of latent variable models is a fundamental problem
central to numerous applications of machine learning and statistics. This work
presents a principled approach for estimating broad classes of such models,
including probabilistic topic models and latent linear Bayesian networks, using
only second-order observed moments. The sufficient conditions for
identifiability of these models are primarily based on weak expansion
constraints on the topic-word matrix, for topic models, and on the directed
acyclic graph, for Bayesian networks. Because no assumptions are made on the
distribution among the latent variables, the approach can handle arbitrary
correlations among the topics or latent factors. In addition, a tractable
learning method via $\ell_1$ optimization is proposed and studied in numerical
experiments.
|
1209.5370
|
Secure Degrees of Freedom of One-hop Wireless Networks
|
cs.IT cs.CR math.IT
|
We study the secure degrees of freedom (d.o.f.) of one-hop wireless networks
by considering four fundamental Gaussian network structures: wiretap channel,
broadcast channel with confidential messages, interference channel with
confidential messages, and multiple access wiretap channel. The secure d.o.f.
of the canonical Gaussian wiretap channel with no helpers is zero. It has been
known that a strictly positive secure d.o.f. can be obtained in the Gaussian
wiretap channel by using a helper which sends structured cooperative signals.
We show that the exact secure d.o.f. of the Gaussian wiretap channel with a
helper is 1/2. Our achievable scheme is based on real interference alignment
and cooperative jamming, which renders the message signal and the cooperative
jamming signal separable at the legitimate receiver, but aligns them perfectly
at the eavesdropper preventing any reliable decoding of the message signal. Our
converse is based on two key lemmas. The first lemma quantifies the secrecy
penalty by showing that the net effect of an eavesdropper on the system is that
it eliminates one of the independent channel inputs. The second lemma
quantifies the role of a helper by developing a direct relationship between the
cooperative jamming signal of a helper and the message rate. We extend this
result to the case of M helpers, and show that the exact secure d.o.f. in this
case is M/(M+1). We then generalize this approach to more general network
structures with multiple messages. We show that the sum secure d.o.f. of the
Gaussian broadcast channel with confidential messages and M helpers is 1, the
sum secure d.o.f. of the two-user interference channel with confidential
messages is 2/3, the sum secure d.o.f. of the two-user interference channel
with confidential messages and M helpers is 1, and the sum secure d.o.f. of the
K-user multiple access wiretap channel is K(K-1)/(K(K-1)+1).
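The closed-form secure d.o.f. expressions stated above are easy to tabulate:

```python
from fractions import Fraction

def helper_wiretap_dof(M):
    """Secure d.o.f. of the Gaussian wiretap channel with M helpers: M/(M+1)."""
    return Fraction(M, M + 1)

def mac_wiretap_sum_dof(K):
    """Sum secure d.o.f. of the K-user multiple access wiretap channel:
    K(K-1) / (K(K-1) + 1)."""
    return Fraction(K * (K - 1), K * (K - 1) + 1)

# M = 0 helpers gives 0, recovering the canonical wiretap channel;
# one helper gives the paper's 1/2.
values = {M: helper_wiretap_dof(M) for M in (0, 1, 2, 3)}
```

Both expressions approach 1 as the number of helpers or users grows, reflecting the vanishing cost of secrecy in large networks.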
|
1209.5417
|
Model based neuro-fuzzy ASR on Texas processor
|
cs.CV
|
In this paper an algorithm for recognizing speech is proposed. The recognized
speech is used to execute related commands. The system uses MFCC features and
two kinds of classifiers: the first uses an MLP and the second uses a fuzzy
inference system as a classifier. The experimental results demonstrate the high
gain and efficiency of the proposed algorithm. We have implemented this system
using graphical design and tested it on a 600 MHz fixed-point digital signal
processor (DSP), the DM6437-EVM reference board from Texas Instruments.
|
1209.5426
|
A Coherent Distributed Grid Service for Assimilation and Unification of
Heterogeneous Data Source
|
cs.DB
|
Grid services are heavily used for handling large distributed computations.
They are also very useful for heavy data-intensive applications where data are
distributed across different sites. Most of the data grid services used in such
situations are meant for homogeneous data sources. In the case of heterogeneous
data sources, most available grid services are designed in such a way that the
sources must be identical in schema definition for smooth operation. But there
can be situations where the grid site databases are heterogeneous and their
schema definitions differ from the central schema definition. In this paper we
propose a lightweight, coherent grid service for heterogeneous data sources
that is very easy to install. It can map and convert the central SQL schema
into that of the grid members and send queries to retrieve the corresponding
results from heterogeneous data sources.
|
1209.5429
|
copulaedas: An R Package for Estimation of Distribution Algorithms Based
on Copulas
|
cs.NE cs.MS
|
The use of copula-based models in EDAs (estimation of distribution
algorithms) is currently an active area of research. In this context, the
copulaedas package for R provides a platform where EDAs based on copulas can be
implemented and studied. The package offers complete implementations of various
EDAs based on copulas and vines, a group of well-known optimization problems,
and utility functions to study the performance of the algorithms. Newly
developed EDAs can be easily integrated into the package by extending an S4
class with generic functions for their main components. This paper presents
copulaedas by providing an overview of EDAs based on copulas, a description of
the implementation of the package, and an illustration of its use through
examples. The examples include running the EDAs defined in the package,
implementing new algorithms, and performing an empirical study to compare the
behavior of different algorithms on benchmark functions and a real-world
problem.
|
1209.5430
|
SART: Speeding up Query Processing in Sensor Networks with an Autonomous
Range Tree Structure
|
cs.DC cs.DB
|
We consider the problem of constructing efficient P2P overlays for sensornets
providing "Energy-Level Application and Services". The method presented in
\cite{SOPXM09} presents a novel P2P overlay for Energy Level discovery in a
sensornet. However, this solution is not dynamic, since requires periodical
restructuring. In particular, it is not able to support neither join of
sensor\_nodes with energy level out of the ranges supported by the existing p2p
overlay nor leave of \emph{empty} overlay\_peers to which no sensor\_nodes are
currently associated. On this purpose and based on the efficient P2P method
presented in \cite{SPSTMT10}, we design a dynamic P2P overlay for Energy Level
discovery in a sensornet, the so-called SART (Sensors' Autonomous Range Tree).
The adaptation of the P2P index presented in \cite{SPSTMT10} guarantees the
best-known dynamic query performance of the above operations. We experimentally
verify this performance, via the D-P2P-Sim simulator (D-P2P-Sim is publicly
available at http://code.google.com/p/d-p2p-sim/).
|
1209.5448
|
A New Compression Based Index Structure for Efficient Information
Retrieval
|
cs.IR
|
Finding desired information in a large data set is a difficult problem.
Information retrieval is concerned with the structure, analysis, organization,
storage, searching, and retrieval of information. The index is the main
constituent of an IR system. Nowadays, the exponential growth of information
makes the index structure large enough to affect the quality of the IR system.
Compressing the index structure is therefore the main contribution of this
paper. We compress the document numbers in inverted file entries using a new
coding technique based on run-length encoding. Our coding mechanism uses a
specified code which acts over run-length coding. We experimented and found
that our coding mechanism compresses on average 67.34% more than the other
techniques.
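The abstract does not detail the coding scheme, so the following is only a generic sketch of the underlying idea: delta-encoding the sorted document numbers of an inverted list and run-length-encoding the resulting runs of consecutive documents. All function names are hypothetical.

```python
def compress_postings(doc_ids):
    """Delta-encode sorted doc IDs, then run-length-encode runs of gap 1
    (consecutive documents), a common pattern in dense posting lists."""
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    out, i = [], 0
    while i < len(gaps):
        run = 1
        if gaps[i] == 1:
            while i + run < len(gaps) and gaps[i + run] == 1:
                run += 1
        out.append((gaps[i], run))  # (gap value, run length)
        i += run
    return out

def decompress_postings(pairs):
    """Invert the encoding back to the original document numbers."""
    ids, cur = [], 0
    for gap, run in pairs:
        for _ in range(run):
            cur += gap
            ids.append(cur)
    return ids
```

Runs of gap 1 collapse to a single pair, which is where the gain over plain gap encoding comes from.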
|
1209.5456
|
Relation matroid and its relationship with generalized rough set based
on relation
|
cs.AI
|
Recently, the relationship between matroids and generalized rough sets based
on relations has been studied from the viewpoint of linear independence of
matrices. In this paper, we reveal more relationships by the predecessor and
successor neighborhoods from relations. First, through these two neighborhoods,
we propose a pair of matroids, namely predecessor relation matroid and
successor relation matroid, respectively. Basic characteristics of this pair of
matroids, such as dependent sets, circuits, the rank function and the closure
operator, are described by the predecessor and successor neighborhoods from
relations. Second, we induce a relation from a matroid through the circuits of
the matroid. We prove that the induced relation is always an equivalence
relation. With these two inductions, a relation induces a relation matroid, and
the relation matroid induces an equivalence relation, then the connection
between the original relation and the induced equivalence relation is studied.
Moreover, the relationships between the upper approximation operator in
generalized rough sets and the closure operator in matroids are investigated.
|
1209.5467
|
Minimizing inter-subject variability in fNIRS based Brain Computer
Interfaces via multiple-kernel support vector learning
|
stat.ML cs.LG
|
Brain signal variability in the measurements obtained from different subjects
during different sessions significantly deteriorates the accuracy of most
brain-computer interface (BCI) systems. Moreover, these variabilities, also
known as inter-subject or inter-session variabilities, require lengthy
calibration sessions before the BCI system can be used. Furthermore, the
calibration session has to be repeated for each subject independently before
use of the BCI, due to the inter-session variability. In this study, we
present an algorithm that minimizes the above-mentioned variabilities and
avoids the time-consuming and usually error-prone calibration sessions. Our
algorithm is based on linear programming support-vector machines and their
extensions to a multiple kernel learning framework. We tackle the inter-subject
or inter-session variability in the feature spaces of the classifiers. This is
done by incorporating each subject- or session-specific feature space into a
much richer feature space with a set of optimal decision boundaries. Each
decision boundary represents the subject- or session-specific spatio-temporal
variability of the neural signals. Consequently, a single classifier with
multiple feature spaces will generalize well to new unseen test patterns even
without the calibration steps. We demonstrate that the classifiers maintain
good performance even in the presence of a large degree of BCI variability. The
present study analyzes BCI variability related to oxy-hemoglobin neural signals
measured using functional near-infrared spectroscopy.
|
1209.5470
|
Matroidal structure of generalized rough sets based on symmetric and
transitive relations
|
cs.AI
|
Rough sets are efficient for data pre-processing in data mining. Lower and upper
approximations are two core concepts of rough sets. This paper studies
generalized rough sets based on symmetric and transitive relations from the
operator-oriented view by matroidal approaches. We firstly construct a
matroidal structure of generalized rough sets based on symmetric and transitive
relations, and provide an approach to study the matroid induced by a symmetric
and transitive relation. Secondly, this paper establishes a close relationship
between matroids and generalized rough sets. Approximation quality and
roughness of generalized rough sets can be computed by the circuit of matroid
theory. At last, a symmetric and transitive relation can be constructed by a
matroid with some special properties.
|
1209.5473
|
Some characteristics of matroids through rough sets
|
cs.AI
|
At present, practical application and theoretical discussion of rough sets
are two hot problems in computer science. The core concepts of rough set theory
are upper and lower approximation operators based on equivalence relations.
Matroid, as a branch of mathematics, is a structure that generalizes linear
independence in vector spaces. Further, matroid theory borrows extensively from
the terminology of linear algebra and graph theory. We can combine rough set
theory with matroid theory by using rough sets to study some characteristics
of matroids. In this paper, we apply rough sets to matroids by defining a
family of sets constructed from the upper approximation operator with respect
to an equivalence relation. First, we prove that this family of sets satisfies
the support set axioms of matroids, and thus we obtain a matroid. We say the
matroid is induced by the equivalence relation, and call this type of matroid
a support matroid. Second, through rough sets, some characteristics of
matroids such as independent sets, support sets, bases, hyperplanes and closed
sets are investigated.
|
1209.5477
|
Optimal Weighting of Multi-View Data with Low Dimensional Hidden States
|
stat.ML cs.LG
|
In Natural Language Processing (NLP) tasks, data often has the following two
properties. First, data can be chopped into multiple views, which has been
used successfully for dimension reduction. For example, in topic
classification, every paper can be chopped into the title, the main text and
the references. However, it is common that some of the views are noisier than
others for supervised learning problems. Second, unlabeled data are easy to
obtain while labeled data are relatively rare. For example, articles that
appeared in the New York Times in the past 10 years are easy to collect, but
classifying them as 'Politics', 'Finance' or 'Sports' requires human labor.
Hence, less noisy features are preferred before running supervised learning
methods. In this paper we propose an unsupervised algorithm that optimally
weights features from different views when these views are generated from a
low-dimensional hidden state, as occurs in widely used models such as the
Gaussian Mixture Model, the Hidden Markov Model (HMM) and Latent Dirichlet
Allocation (LDA).
|
1209.5480
|
Condition for neighborhoods in covering based rough sets to form a
partition
|
cs.AI
|
Neighborhood is an important concept in covering-based rough sets. Under what
condition neighborhoods form a partition is a meaningful issue induced by this
concept. Many scholars have paid attention to this issue and presented some
necessary and sufficient conditions. However, these conditions share one
common trait: they are established on the basis that all neighborhoods have
already been obtained. In this paper, we provide a necessary and sufficient
condition directly based on the covering itself. First, we investigate the
influence of reducible elements in the covering on neighborhoods. Second, we
propose the definition of a uniform block and obtain a sufficient condition
from it. Third, we propose the definitions of repeat degree and excluded
number. By means of these two concepts, we obtain a necessary and sufficient
condition for neighborhoods to form a partition. In a word, we have gained a
deeper and more direct understanding of when neighborhoods form a partition.
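The objects involved can be made concrete with a short brute-force sketch (function names are our own): the neighborhood of x is the intersection of all blocks of the covering containing x, and the neighborhoods form a partition exactly when any two of them are equal or disjoint. The paper's contribution is a condition that avoids computing the neighborhoods at all; this sketch is the naive baseline.

```python
def neighborhood(x, cover):
    """N(x): intersection of all blocks of the covering that contain x."""
    return set.intersection(*[K for K in cover if x in K])

def neighborhoods_form_partition(universe, cover):
    """Brute-force check: the neighborhoods partition the universe iff any
    two of them are either equal or disjoint."""
    nbrs = {frozenset(neighborhood(x, cover)) for x in universe}
    return all(a == b or not (a & b) for a in nbrs for b in nbrs)
```

For a covering that is already a partition, every neighborhood coincides with the block containing x, so the check trivially succeeds.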
|
1209.5482
|
Rough sets and matroidal contraction
|
cs.AI
|
Rough sets are efficient for data pre-processing in data mining. As a
generalization of the linear independence in vector spaces, matroids provide
well-established platforms for greedy algorithms. In this paper, we apply rough
sets to matroids and study the contraction of the dual of the corresponding
matroid. First, for an equivalence relation on a universe, a matroidal
structure of the rough set is established through the lower approximation
operator. Second, the dual of the matroid and its properties such as
independent sets, bases and rank function are investigated. Finally, the
relationships between the contraction of the dual matroid to the complement of
a single point set and the contraction of the dual matroid to the complement of
the equivalence class of this point are studied.
|
1209.5484
|
Condition for neighborhoods induced by a covering to be equal to the
covering itself
|
cs.AI
|
Under what condition the neighborhoods induced by a covering are equal to the
covering itself is a meaningful issue. A necessary and sufficient condition
for this issue has been provided by some scholars. In this paper, through a
counter-example, we first point out that this necessary and sufficient
condition is false. Second, we present a correct necessary and sufficient
condition for this issue. Third, we consider the inverse issue of computing
neighborhoods by a covering: given an arbitrary covering, does there exist
another covering such that the neighborhoods induced by it are exactly the
former covering? We present a necessary and sufficient condition for this
issue as well. In a word, through the study of these two fundamental issues
induced by neighborhoods, we have gained a deeper understanding of the
relationship between neighborhoods and the covering which induces them.
|
1209.5494
|
Segmentation of Breast Regions in Mammogram Based on Density: A Review
|
cs.CV
|
The focus of this paper is to review approaches for segmentation of breast
regions in mammograms according to breast density. Studies based on density
have been undertaken because of the relationship between breast cancer and
density. Breast cancer usually occurs in the fibroglandular area of breast
tissue, which appears bright on mammograms and is described as breast density.
Most of the studies focus on classification methods for glandular tissue
detection. Others highlight segmentation methods for fibroglandular tissue,
while few researchers have performed segmentation of the breast anatomical
regions based on density. There has also been work on the segmentation of
other specific parts of the breast region, such as detection of the nipple
position, the skin-air interface, or the pectoral muscles. The problems of
evaluating the segmentation results against ground truth are also discussed
in this paper.
|
1209.5511
|
Diffusion Based Nanonetworking: A New Modulation Technique and
Performance Analysis
|
cs.IT math.IT
|
In this letter, we propose a new molecular modulation scheme for
nanonetworks. To evaluate the scheme we introduce a more realistic system model
for molecule dissemination and propagation processes based on the Poisson
distribution. We derive the probability of error of our proposed scheme as well
as the previously introduced schemes, including concentration and molecular
shift keying modulations by taking into account the error propagation effect of
previously decoded symbols. Since in our scheme the decoding of the current
symbol does not depend on the previously transmitted and decoded symbols, we do
not encounter error propagation; as our numerical results indicate, the
proposed scheme outperforms the previously introduced schemes. We then
introduce a general molecular communication system and use information
theoretic tools to derive fundamental limits on its probability of error.
|
1209.5513
|
On Capacity of Large-Scale MIMO Multiple Access Channels with
Distributed Sets of Correlated Antennas
|
cs.IT math.IT
|
In this paper, a deterministic equivalent of ergodic sum rate and an
algorithm for evaluating the capacity-achieving input covariance matrices for
the uplink large-scale multiple-input multiple-output (MIMO) antenna channels
are proposed. We consider a large-scale MIMO system consisting of multiple
users and one base station with several distributed antenna sets. Each link
between a user and an antenna set forms a two-sided spatially correlated MIMO
channel with line-of-sight (LOS) components. Our derivations are based on novel
techniques from large dimensional random matrix theory (RMT) under the
assumption that the numbers of antennas at the terminals approach infinity
with a fixed ratio. The deterministic equivalent results (the deterministic
equivalent of ergodic sum rate and the capacity-achieving input covariance
matrices) are easy to compute and shown to be accurate for realistic system
dimensions. In addition, they are shown to be invariant to several types of
fading distribution.
|
1209.5518
|
Diversity-induced resonance in the response to social norms
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
In this paper we focus on diversity-induced resonance, which was recently
found in bistable, excitable and other physical systems. We study the
appearance of this phenomenon in a purely economic model of cooperating and
defecting agents. An agent's contribution to a public good is seen as a social
norm. So defecting agents face a social pressure, which decreases if
free-riding becomes widespread. In this model, diversity among agents naturally
appears because of their different sensitivities towards the social norm. We study
the evolution of cooperation as a response to the social norm (i) for the
replicator dynamics, and (ii) for the logit dynamics by means of numerical
simulations. Diversity-induced resonance is observed as a maximum in the
response of agents to changes in the social norm as a function of the degree of
heterogeneity in the population. We provide an analytical, mean-field approach
for the logit dynamics and find very good agreement with the simulations. From
a socio-economic perspective, our results show that, counter-intuitively,
diversity in the individual sensitivity to social norms may result in a society
that better follows such norms as a whole, even if part of the population is
less prone to follow them.
|
1209.5549
|
Towards a learning-theoretic analysis of spike-timing dependent
plasticity
|
q-bio.NC cs.LG stat.ML
|
This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast time constant limit of leaky
integrate-and-fire neurons equipped with spike-timing dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show the regularization improves the
robustness of neuronal learning when faced with multiple stimuli.
|
1209.5561
|
Supervised Blockmodelling
|
cs.LG cs.SI stat.ML
|
Collective classification models attempt to improve classification
performance by taking into account the class labels of related instances.
However, they tend not to learn patterns of interactions between classes and/or
make the assumption that instances of the same class link to each other
(assortativity assumption). Blockmodels provide a solution to these issues,
being capable of modelling assortative and disassortative interactions, and
learning the pattern of interactions in the form of a summary network. The
Supervised Blockmodel provides good classification performance using link
structure alone, whilst simultaneously providing an interpretable summary of
network interactions to allow a better understanding of the data. This work
explores three variants of supervised blockmodels of varying complexity and
tests them on four structurally different real world networks.
|
1209.5567
|
Closed-set lattice of regular sets based on a serial and transitive
relation through matroids
|
cs.AI
|
Rough sets are efficient for data pre-processing in data mining. Matroids are
based on linear algebra and graph theory, and have a variety of applications in
many fields. Both rough sets and matroids are closely related to lattices. For
a serial and transitive relation on a universe, the collection of all the
regular sets of the generalized rough set is a lattice. In this paper, we use
the lattice to construct a matroid and then study relationships between the
lattice and the closed-set lattice of the matroid. First, the collection of all
the regular sets based on a serial and transitive relation is proved to be a
semimodular lattice. Then, a matroid is constructed through the height function
of the semimodular lattice. Finally, we propose an approach to obtain all the
closed sets of the matroid from the semimodular lattice. Borrowing from
matroids, results show that lattice theory provides an interesting view to
investigate rough sets.
|
1209.5569
|
Lattice structures of fixed points of the lower approximations of two
types of covering-based rough sets
|
cs.AI
|
Covering is a common type of data structure and covering-based rough set
theory is an efficient tool to process this data. Lattice is an important
algebraic structure and used extensively in investigating some types of
generalized rough sets. In this paper, we propose two families of sets and
study the conditions under which they form certain lattice structures. These
two sets consist of the fixed points of the lower approximations of the first
type and the sixth type of covering-based rough sets, respectively; they are
called the fixed point set of neighborhoods and the fixed point set of
covering. First, for any covering, the fixed point set of neighborhoods is a
complete and distributive lattice and, at the same time, a double p-algebra.
Especially, when the neighborhoods form a partition of the universe, the fixed
point set of neighborhoods is both a Boolean lattice and a double Stone
algebra. Second, for any covering, the fixed point set of covering is a
complete lattice; it becomes a distributive lattice and a double p-algebra
when the covering is unary. Especially, when the reduction of the covering
forms a partition of the universe, the fixed point set of covering is both a
Boolean lattice and a double Stone algebra.
|
1209.5571
|
A Cookbook for Temporal Conceptual Data Modelling with Description
Logics
|
cs.LO cs.AI
|
We design temporal description logics suitable for reasoning about temporal
conceptual data models and investigate their computational complexity. Our
formalisms are based on DL-Lite logics with three types of concept inclusions
(ranging from atomic concept inclusions and disjointness to the full Booleans),
as well as cardinality constraints and role inclusions. In the temporal
dimension, they capture future and past temporal operators on concepts,
flexible and rigid roles, the operators `always' and `some time' on roles, data
assertions for particular moments of time and global concept inclusions. The
logics are interpreted over the Cartesian products of object domains and the
flow of time (Z,<), satisfying the constant domain assumption. We prove that
the most expressive of our temporal description logics (which can capture
lifespan cardinalities and either qualitative or quantitative evolution
constraints) turn out to be undecidable. However, by omitting some of the
temporal operators on concepts/roles or by restricting the form of concept
inclusions we obtain logics whose complexity ranges between PSpace and
NLogSpace. These positive results were obtained by reduction to various clausal
fragments of propositional temporal logic, which opens a way to employ
propositional or first-order temporal provers for reasoning about temporal data
models.
|
1209.5598
|
Granular association rules on two universes with four measures
|
cs.DB
|
Relational association rules reveal patterns hidden in multiple tables.
Existing rules are usually evaluated through two measures, namely support and
confidence. However, these two measures may not be enough to describe the
strength of a rule. In this paper, we introduce granular association rules with
four measures to reveal connections between granules in two universes, and
propose three algorithms for rule mining. An example of such a rule might be
"40% men like at least 30% kinds of alcohol; 45% customers are men and 6%
products are alcohol." Here 45%, 6%, 40%, and 30% are the source coverage, the
target coverage, the source confidence, and the target confidence,
respectively. With these measures, our rules are semantically richer than
existing ones. Three subtypes of rules are obtained through considering special
requirements on the source/target confidence. Then we define a rule mining
problem, and design a sandwich algorithm with different rule checking
approaches for different subtypes. Experiments on a real world dataset show
that the approaches dedicated to the three subtypes are 2-3 orders of magnitude
faster than the one for the general case. A forward algorithm and a backward
algorithm for one particular subtype can speed up the mining process further.
This work opens a new research trend concerning relational association rule
mining, granular computing and rough sets.
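Under the natural reading of the example rule above, the four measures can be computed from a binary "likes" relation between the two universes as follows. This is only a sketch with hypothetical names: the target confidence is a threshold parameter of the rule, while the source confidence is derived from the data.

```python
def granular_rule_measures(users, items, likes, src, tgt, target_conf):
    """Four measures for the rule 'src users like at least a target_conf
    fraction of tgt items': source coverage, target coverage, source
    confidence, target confidence. likes is a set of (user, item) pairs."""
    source_coverage = len(src) / len(users)
    target_coverage = len(tgt) / len(items)
    need = target_conf * len(tgt)  # items a user must like to satisfy the rule
    satisfied = sum(
        1 for u in src if sum((u, i) in likes for i in tgt) >= need
    )
    source_confidence = satisfied / len(src)
    return source_coverage, target_coverage, source_confidence, target_conf
```

In the alcohol example, the returned tuple would read (0.45, 0.06, 0.40, 0.30) on the real data.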
|
1209.5599
|
A stochastic model of the tweet diffusion on the Twitter network
|
physics.soc-ph cs.SI physics.data-an
|
We introduce a stochastic model which describes diffusions of tweets on the
Twitter network. By dividing the followers into generations, we describe the
dynamics of the tweet diffusion as a random multiplicative process. We confirm
our model by directly observing the statistics of the multiplicative factors in
the Twitter data.
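The abstract gives no model details beyond "generations" and "random multiplicative process", so here is only a toy sketch of that structure: each generation's retweet count is a random multiple of the previous one, with assumed parameters (a uniform follower count and a uniform retweet probability).

```python
import random

def simulate_cascade(followers_per_user, p_retweet, max_gen, rng):
    """Generation-by-generation tweet diffusion: each retweeter in generation
    g exposes its followers, each of whom retweets independently with
    probability p_retweet. Returns the retweet count of each generation."""
    sizes = []
    active = 1  # generation 0: the original poster
    for _ in range(max_gen):
        exposed = active * followers_per_user
        retweets = sum(1 for _ in range(exposed) if rng.random() < p_retweet)
        sizes.append(retweets)
        if retweets == 0:
            break  # the cascade dies out
        active = retweets
    return sizes
```

The ratio of successive generation sizes is the random multiplicative factor whose statistics the paper measures in Twitter data.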
|
1209.5601
|
Feature selection with test cost constraint
|
cs.AI cs.LG
|
Feature selection is an important preprocessing step in machine learning and
data mining. In real-world applications, costs, including money, time and other
resources, are required to acquire the features. In some cases, there is a test
cost constraint due to limited resources. We shall deliberately select an
informative and cheap feature subset for classification. This paper proposes
the feature selection with test cost constraint problem for this issue. The new
problem has a simple form while described as a constraint satisfaction problem
(CSP). Backtracking is a general algorithm for CSP, and it is efficient in
solving the new problem on medium-sized data. As the backtracking algorithm is
not scalable to large datasets, a heuristic algorithm is also developed.
Experimental results show that the heuristic algorithm can find the optimal
solution in most cases. We also redefine some existing feature selection
problems in rough sets, especially in decision-theoretic rough sets, from the
viewpoint of CSP. These new definitions provide insight to some new research
directions.
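The abstract does not state the objective being optimized, so the sketch below treats a generic version of the constraint satisfaction problem: find the feature subset of maximal utility whose total test cost stays within the budget, by plain backtracking with budget pruning. Utilities, costs, and names are all illustrative.

```python
def backtrack_select(costs, utilities, budget):
    """Backtracking over feature subsets; returns (best utility, subset)
    among subsets whose total test cost does not exceed the budget."""
    n = len(costs)
    best = (0.0, frozenset())

    def search(i, chosen, cost, util):
        nonlocal best
        if cost > budget:
            return  # prune: test cost budget exceeded
        if util > best[0]:
            best = (util, frozenset(chosen))
        if i == n:
            return
        # branch: include feature i, then exclude it
        chosen.add(i)
        search(i + 1, chosen, cost + costs[i], util + utilities[i])
        chosen.remove(i)
        search(i + 1, chosen, cost, util)

    search(0, set(), 0.0, 0.0)
    return best
```

The budget check at the top of `search` is what makes backtracking efficient on medium-sized data, as the abstract notes; the heuristic algorithm would replace the exhaustive branching.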
|
1209.5625
|
Managing Complex Structured Data In a Fast Evolving Environment
|
cs.DB
|
Criminal data comes in a variety of formats, mandated by state, federal, and
international standards. Specifying the data in a unified fashion is necessary
for any system that intends to integrate with state, federal, and international
law enforcement agencies. However, the contents, format, and structure of the
data is highly inconsistent across jurisdictions, and each datum requires
different ways of being printed, transmitted, and displayed. The goal was to
design a system that is unified in its approach to specify data, and is
amenable to future "unknown unknowns". We have developed a domain-specific
language in Common Lisp which allows the specification of complex data with
evolving formats and structure, and is inter-operable with the Common Lisp
language. The resultant system has enabled the easy handling of complex
evolving information in the general criminal data environment and has made it
possible to manage and extend the system in a high-paced market. The language
has allowed the principal product of Secure Outcomes Inc. to enjoy success with
over 50 users throughout the United States.
|
1209.5656
|
Learning Price-Elasticity of Smart Consumers in Power Distribution
Systems
|
cs.IT cs.NI math.IT
|
Demand Response is an emerging technology which will transform the power grid
of tomorrow. It is revolutionary, not only because it will enable peak load
shaving and will add resources to manage large distribution systems, but mainly
because it will tap into an almost unexplored and extremely powerful pool of
resources comprised of many small individual consumers on distribution grids.
However, to utilize these resources effectively, the methods used to engage
these resources must yield accurate and reliable control. A diversity of
methods have been proposed to engage these new resources. As opposed to direct
load control, many methods rely on consumers and/or loads responding to
exogenous signals, typically in the form of energy pricing, originating from
the utility or system operator. Here, we propose an open loop
communication-lite method for estimating the price elasticity of many customers
comprising a distribution system. We utilize a sparse linear regression method
that relies on operator-controlled, inhomogeneous minor price variations, which
will be fair to all the consumers. Our numerical experiments show that reliable
estimation of individual and thus aggregated instantaneous elasticities is
possible. We describe the limits of the reliable reconstruction as functions of
the three key parameters of the system: (i) ratio of the number of
communication slots (time units) per number of engaged consumers; (ii) level of
sparsity (in consumer response); and (iii) signal-to-noise ratio.
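The abstract specifies only "a sparse linear regression method", so as an illustration we use ISTA (iterative soft-thresholding for L1-regularized least squares) to recover a sparse elasticity vector from centered price perturbations and aggregate demand. This is one standard choice, not necessarily the authors' estimator, and all names are ours.

```python
import numpy as np

def estimate_elasticities(prices, demand, lam=0.01, steps=1000):
    """Sparse recovery of per-consumer price elasticities from aggregate
    demand under minor price variations, via ISTA on the lasso objective
    0.5*||X b - y||^2 + lam*||b||_1.
    prices: T x N matrix of per-consumer price signals; demand: length-T."""
    X = prices - prices.mean(axis=0)   # center the price perturbations
    y = demand - demand.mean()         # center the aggregate response
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant (sigma_max^2)
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        z = beta - X.T @ (X @ beta - y) / L                       # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta
```

The ratio of rows T (communication slots) to columns N (consumers) and the sparsity of the true elasticity vector are exactly the reconstruction limits the abstract discusses.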
|
1209.5663
|
Semi-automatic annotation process for procedural texts: An application
on cooking recipes
|
cs.AI
|
Taaable is a case-based reasoning system that adapts cooking recipes to user
constraints. Within it, the preparation part of recipes is formalised as a
graph. This graph is a semantic representation of the sequence of instructions
composing the cooking process and is used to compute the procedure adaptation,
conjointly with the textual adaptation. It is composed of cooking actions and
ingredients, among others, represented as vertices, and semantic relations
between those, shown as arcs, and is built automatically thanks to natural
language processing. The result of the automatic annotation process is often a
disconnected graph, representing an incomplete annotation, or may contain
errors. Therefore, a validating and correcting step is required. In this paper,
we present an existing graphic tool named \kcatos, conceived for representing
and editing decision trees, and show how it has been adapted and integrated in
WikiTaaable, the semantic wiki in which the knowledge used by Taaable is
stored. This interface provides the wiki users with a way to correct the case
representation of the cooking process, improving at the same time the quality
of the knowledge about cooking procedures stored in WikiTaaable.
|
1209.5664
|
Extension of the workflow formalism with a temporal algebra
|
cs.AI cs.LO
|
Workflows constitute an important language to represent knowledge about
processes, but also increasingly to reason on such knowledge. On the other
hand, there are limits to the time constraints between activities that can be
expressed. Qualitative interval algebras can model processes using finer
temporal relations, but they cannot reproduce all workflow patterns. This paper
defines a common ground model-theoretical semantics for both workflows and
interval algebras, making it possible for reasoning systems working with either
to interoperate. Thanks to this, interesting properties and inferences can be
defined, both on workflows and on an extended formalism combining workflows
with interval algebras. Finally, similar formalisms proposing a sound formal
basis for workflows and extending them are discussed.
|
1209.5683
|
Role of conviction in nonequilibrium models of opinion formation
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We analyze the critical behavior of a class of discrete opinion models in the
presence of disorder. Within this class, each agent opinion takes a discrete
value ($\pm 1$ or 0) and its time evolution is ruled by two terms, one
representing agent-agent interactions and the other the degree of conviction or
persuasion (a self-interaction). The mean-field limit, where each agent can
interact evenly with any other, is considered. Disorder is introduced in the
strength of both interactions, with either quenched or annealed random
variables. With probability $p$ (1-$p$), a pairwise interaction reflects a
negative (positive) coupling, while the degree of conviction also follows a
binary probability distribution (two different discrete probability
distributions are considered). Numerical simulations show that a
non-equilibrium continuous phase transition, from a disordered state to a state
with a prevailing opinion, occurs at a critical point $p_{c}$ that depends on
the distribution of the convictions, the transition being spoiled in some
cases. We also show how the critical line, for each model, is affected by the
update scheme (either parallel or sequential) as well as by the kind of
disorder (either quenched or annealed).
|
1209.5695
|
Hybrid Approaches to Image Coding: A Review
|
cs.IT math.IT
|
Nowadays, the digital world is most focused on storage space and speed. With
the growing demand for better bandwidth utilization, efficient image data
compression techniques have emerged as an important factor for image data
transmission and storage. To date, different approaches to image compression
have been developed like the classical predictive coding, popular transform
coding and vector quantization. Several second-generation coding schemes, such
as segmentation-based schemes, are also gaining popularity. Practically efficient
compression systems based on hybrid coding which combines the advantages of
different traditional methods of image coding have also been developed over the
years. In this paper, different hybrid approaches to image compression are
discussed. Hybrid coding of images, in this context, deals with combining two
or more traditional approaches to enhance the individual methods and achieve
better-quality reconstructed images with higher compression ratio. Literature
on hybrid techniques of image coding over the past years is also reviewed. An
attempt is made to highlight the neuro-wavelet approach for enhancing coding
efficiency.
|
1209.5698
|
Sampling Error Analysis and Properties of Non-bandlimited Signals That
Are Reconstructed by Generalized Sinc Functions
|
cs.IT math.IT
|
Recently efforts have been made to use generalized sinc functions to
perfectly reconstruct various kinds of non-bandlimited signals. As a
consequence, perfect reconstruction sampling formulas have been established
using such generalized sinc functions. This article studies the error of the
reconstructed non-bandlimited signal when an adaptive truncation scheme is
employed. Further, when there are noises present in the samples, estimation on
the expectation and variance of the error pertinent to the reconstructed signal
is also given. Finally discussed are the reproducing properties and the Sobolev
smoothness of functions in the space of non-bandlimited signals that admits
such a sampling formula.
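As a hedged illustration of the truncation error studied here, the classical Shannon sinc kernel can stand in for the paper's generalized sinc functions (the test signal, sample grid, and evaluation points below are invented for the demo): truncating the reconstruction series to finitely many samples leaves only a small error away from the window edges, while the series still interpolates exactly at the sample points.

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t, T=1.0):
    # truncated Shannon reconstruction from the finitely many samples f(nT)
    return sum(s * sinc(t / T - n) for n, s in enumerate(samples))

# illustrative bandlimited signal (invented for the demo), sampled at T = 1
f = lambda t: math.sin(2 * math.pi * 0.1 * t)
samples = [f(n) for n in range(200)]
exact_at_node = reconstruct(samples, 100)          # interpolates samples exactly
err = abs(reconstruct(samples, 100.5) - f(100.5))  # small truncation error mid-window
```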
|
1209.5730
|
The Feasibility of Scalable Video Streaming over Femtocell Networks
|
cs.IT cs.NI math.IT
|
In this paper, we consider femtocell CR networks, where femto base stations
(FBS) are deployed to greatly improve network coverage and capacity. We
investigate the problem of generic data multicast in femtocell networks. We
reformulate the resulting MINLP problem into a simpler form, and derive upper
and lower performance bounds. Then we consider three typical connection
scenarios in the femtocell network, and develop optimal and near-optimal
algorithms for the three scenarios. Second, we tackle the problem of streaming
scalable videos in femtocell CR networks. A framework is developed that captures
the key design issues and trade-offs with a stochastic programming problem
formulation. In the case of a single FBS, we develop an optimum-achieving
distributed algorithm, which is shown to be optimal also for the case of multiple
non-interfering FBS's. In the case of interfering FBS's, we develop a greedy
algorithm that can compute near-optimal solutions, and prove a closed-form

lower bound on its performance.
|
1209.5756
|
Environmental Sounds Spectrogram Classification using Log-Gabor Filters
and Multiclass Support Vector Machines
|
cs.CV
|
This paper presents novel approaches for efficient feature extraction using
the environmental sound magnitude spectrogram. We propose an approach based on
the visual domain, comprising three methods. The first method extracts, from
each spectrogram, the response of a single log-Gabor filter followed by a
mutual information procedure. In the second method, the spectrogram passes
through the same steps as in the first method, but with an averaged bank of 12
log-Gabor filters. The third method segments the spectrogram into three
patches and then applies the second method to each patch. The classification
results show that the second method is the most efficient in our environmental
sound classification system.
|
1209.5762
|
Nonbinary Spatially-Coupled LDPC Codes on the Binary Erasure Channel
|
cs.IT math.IT
|
We analyze the asymptotic performance of nonbinary spatially-coupled
low-density parity-check (SC-LDPC) codes built on the general linear group,
when the transmission takes place over the binary erasure channel. We propose
an efficient method to derive an upper bound to the maximum a posteriori
probability (MAP) threshold for nonbinary LDPC codes, and observe that the MAP
performance of regular LDPC codes improves with the alphabet size. We then
consider nonbinary SC-LDPC codes. We show that the same threshold saturation
effect experienced by binary SC-LDPC codes occurs for the nonbinary codes,
hence we conjecture that the BP threshold for large termination length
approaches the MAP threshold of the underlying regular ensemble.
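The BP thresholds discussed above are computed in practice by density evolution; a minimal sketch for the binary (dv, dc)-regular case on the BEC follows (the nonbinary recursion over the general linear group used in the paper is more involved and not reproduced here).

```python
def bp_converges(eps, dv=3, dc=6, iters=5000, tol=1e-10):
    # density evolution for a binary (dv, dc)-regular LDPC ensemble on BEC(eps):
    # x is the erasure probability of a variable-to-check message
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bp_threshold(dv=3, dc=6):
    # bisection on the channel erasure probability
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if bp_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

threshold = bp_threshold()   # (3,6)-regular: known BP threshold ~ 0.4294
```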
|
1209.5779
|
Chance Constrained Optimal Power Flow: Risk-Aware Network Control under
Uncertainty
|
math.OC cs.SY physics.soc-ph
|
When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely
used by the electric power industry to re-dispatch hourly controllable
generation (coal, gas and hydro plants) over control areas of transmission
networks, can result in grid instability, and, potentially, cascading outages.
This risk arises because OPF dispatch is computed without awareness of major
uncertainty, in particular fluctuations in renewable output. As a result, grid
operation under OPF with renewable variability can lead to frequent conditions
where power line flow ratings are significantly exceeded. Such a condition,
which is borne out by simulations of real grids, would likely result in
automatic line tripping to protect lines from thermal stress, a risky and
undesirable outcome which compromises stability. Smart grid goals include a
commitment to large penetration of highly fluctuating renewables, thus calling
to reconsider current practices, in particular the use of standard OPF. Our
Chance Constrained (CC) OPF corrects the problem and mitigates dangerous
renewable fluctuations with minimal changes in the current operational
procedure. Assuming availability of a reliable wind forecast parameterizing the
distribution function of the uncertain generation, our CC-OPF satisfies all the
constraints with high probability while simultaneously minimizing the cost of
economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a
typical instance over the 2746-bus Polish network in 20 seconds on a standard
laptop.
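The standard Gaussian chance-constraint reformulation underlying CC-OPF-style methods can be sketched generically (a hedged illustration, not the paper's exact formulation; the 90 MW mean flow, 5 MW wind-induced standard deviation, and 100 MW rating are invented numbers): a probabilistic line-limit constraint becomes a deterministic one tightened by a quantile margin.

```python
from statistics import NormalDist

def tightened_limit(mu, sigma, eps):
    # P(flow <= limit) >= 1 - eps with flow ~ N(mu, sigma^2) is equivalent to
    # the deterministic constraint  mu + z * sigma <= limit,  z = Phi^{-1}(1 - eps)
    z = NormalDist().inv_cdf(1 - eps)
    return mu + z * sigma

# invented example: mean line flow 90 MW, wind-induced std 5 MW, 100 MW rating
required = tightened_limit(90.0, 5.0, 0.01)  # left-hand side of tightened constraint
ok = required <= 100.0                       # False: the 1%-risk constraint is violated
```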
|
1209.5785
|
Coupling Data Transmission for Multiple-Access Communications
|
cs.IT math.IT
|
We consider a signaling format where the information to be communicated from
one or multiple transmitters to a receiver is modulated via a superposition of
independent data streams. Each data stream is formed by error-correction
encoding, constellation mapping, replication and permutation of symbols, and
application of signature sequences. The relations between the data bits and
modulation symbols transmitted over the channel can be represented by a sparse
graph. In the case where the modulated data streams are transmitted with time
offsets the receiver observes spatial coupling of the individual graphs into a
graph chain enabling efficient demodulation/decoding. We prove that a two-stage
demodulation/decoding method, in which iterative demodulation based on symbol
estimation and interference cancellation is followed by parallel error
correction decoding, achieves capacity on the additive white Gaussian noise
(AWGN) channel asymptotically. We compare the performance of the two-stage
receiver to the receiver which utilizes hard feedback between the
error-correction encoders and the iterative demodulator.
|
1209.5803
|
Full-Diversity Precoding Design of Bit-Interleaved Coded Multiple
Beamforming with Orthogonal Frequency Division Multiplexing
|
cs.IT math.IT
|
Multi-Input Multi-Output (MIMO) techniques have been incorporated with
Orthogonal Frequency Division Multiplexing (OFDM) for broadband wireless
communication systems. Bit-Interleaved Coded Multiple Beamforming (BICMB) can
achieve both spatial diversity and spatial multiplexing for flat fading MIMO
channels. For frequency selective fading MIMO channels, BICMB with OFDM
(BICMB-OFDM) can be employed to provide both spatial diversity and multipath
diversity, making it an important technique. In our previous work, the
subcarrier grouping technique was applied to combat the negative effect of
subcarrier correlation. It was also proved that full diversity of BICMB-OFDM
with Subcarrier Grouping (BICMB-OFDM-SG) can be achieved within the condition
R_cSL<=1, where R_c, S, and L are the code rate, the number of parallel streams
at each subcarrier, and the number of channel taps, respectively. The full
diversity condition implies that if S increases, R_c may have to decrease to
maintain full diversity. As a result, increasing the number of parallel streams
may not improve the total transmission rate. In this paper, the precoding
technique is employed to overcome the full diversity restriction issue of
R_cSL<=1 for BICMB-OFDM-SG. First, the diversity analysis of precoded
BICMB-OFDM-SG is carried out. Then, the full-diversity precoding design is
developed with the minimum achievable decoding complexity.
|
1209.5805
|
Memoryless Control Design for Persistent Surveillance under Safety
Constraints
|
cs.SY cs.RO math.OC
|
This paper deals with the design of time-invariant memoryless control
policies for robots that move in a finite two-dimensional lattice and are
tasked with persistent surveillance of an area in which there are forbidden
regions. We model each robot as a controlled Markov chain whose state comprises
its position in the lattice and the direction of motion. The goal is to find
the minimum number of robots and an associated time-invariant memoryless
control policy that guarantees that the largest number of states are
persistently surveilled without ever visiting a forbidden state. We propose a
design method that relies on a finitely parametrized convex program inspired by
entropy maximization principles. Numerical examples are provided.
|
1209.5807
|
Fundamental Limits of Caching
|
cs.IT math.IT
|
Caching is a technique to reduce peak traffic rates by prefetching popular
content into memories at the end users. Conventionally, these memories are used
to deliver requested content in part from a locally cached copy rather than
through the network. The gain offered by this approach, which we term local
caching gain, depends on the local cache size (i.e., the memory available at
each individual user). In this paper, we introduce and exploit a second,
global, caching gain not utilized by conventional caching schemes. This gain
depends on the aggregate global cache size (i.e., the cumulative memory
available at all users), even though there is no cooperation among the users.
To evaluate and isolate these two gains, we introduce an
information-theoretic formulation of the caching problem focusing on its basic
structure. For this setting, we propose a novel coded caching scheme that
exploits both local and global caching gains, leading to a multiplicative
improvement in the peak rate compared to previously known schemes. In
particular, the improvement can be on the order of the number of users in the
network. Moreover, we argue that the performance of the proposed scheme is
within a constant factor of the information-theoretic optimum for all values of
the problem parameters.
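The local and global gains can be made concrete with the rate expressions from this line of work (a sketch: the coded formula below assumes uniform demands and K*M/N integer, and the K = N = 30, M = 10 sizes are invented for illustration).

```python
def conventional_rate(K, M, N):
    # local caching gain only: each of K users fetches the uncached (1 - M/N) fraction
    return K * (1 - M / N)

def coded_rate(K, M, N):
    # coded-caching rate with the extra global-gain factor 1 / (1 + K*M/N)
    # (clean form when t = K*M/N is an integer)
    return K * (1 - M / N) / (1 + K * M / N)

K, N, M = 30, 30, 10                                         # invented example sizes
speedup = conventional_rate(K, M, N) / coded_rate(K, M, N)   # = 1 + K*M/N = 11
```

Here the multiplicative improvement 1 + K*M/N grows with the number of users, matching the abstract's claim that the gain can be on the order of the number of users.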
|
1209.5809
|
Diversifying Citation Recommendations
|
cs.IR cs.DL cs.SI
|
Literature search is arguably one of the most important phases of the
academic and non-academic research. The increase in the number of published
papers each year makes manual search inefficient and furthermore insufficient.
Hence, automated methods such as search engines have been of interest in the
last thirty years. Unfortunately, these traditional engines use keyword-based
approaches to solve the search problem, but these approaches are prone to
ambiguity and synonymy. On the other hand, bibliographic search techniques
based only on the citation information are not prone to these problems since
they do not consider textual similarity. For many particular research areas and
topics, the amount of knowledge available to humankind is immense, and obtaining the
desired information is as hard as looking for a needle in a haystack.
Furthermore, sometimes, what we are looking for is a set of documents where
each one is different than the others, but at the same time, as a whole we want
them to cover all the important parts of the literature relevant to our search.
This paper targets the problem of result diversification in citation-based
bibliographic search. It surveys a set of techniques which aim to find a set of
papers with satisfactory quality and diversity. We enhance these algorithms
with a direction-awareness functionality to allow the users to reach either
old, well-cited, well-known research papers or recent, less-known ones. We also
propose a set of novel techniques for a better diversification of the results.
All the techniques considered are compared by performing a rigorous
experimentation. The results show that some of the proposed techniques are very
successful in practice while performing a search in a bibliographic database.
|
1209.5818
|
Fast Algorithms for the Maximum Clique Problem on Massive Sparse Graphs
|
cs.DS cs.IR
|
The maximum clique problem is a well known NP-Hard problem with applications
in data mining, network analysis, informatics, and many other areas. Although
there exist several algorithms with acceptable runtimes for certain classes of
graphs, many of them are infeasible for massive graphs. We present a new exact
algorithm that employs novel pruning techniques to very quickly find maximum
cliques in large sparse graphs. Extensive experiments on several types of
synthetic and real-world graphs show that our new algorithm is up to several
orders of magnitude faster than existing algorithms for most instances. We also
present a heuristic variant that runs orders of magnitude faster than the exact
algorithm, while providing optimal or near-optimal solutions.
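The exact algorithm and its pruning rules are not reproduced here; as a hedged sketch of the heuristic flavor only (not the paper's algorithm, and on an invented toy graph), a greedy clique grower illustrates why near-optimal answers can come much faster than exact search.

```python
def greedy_clique(adj):
    # adj: dict node -> set of neighbors. Grow a clique greedily from each
    # vertex (highest degree first); keep the largest clique found.
    best = []
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        clique, cand = [v], set(adj[v])
        while cand:
            # extend with the candidate best connected inside the candidate set
            u = max(cand, key=lambda w: len(cand & adj[w]))
            clique.append(u)
            cand &= adj[u]
        if len(clique) > len(best):
            best = clique
    return best

# invented toy graph: a triangle {0, 1, 2} plus a pendant vertex 3
adj = {}
for a, b in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
found = greedy_clique(adj)   # recovers the triangle
```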
|
1209.5826
|
Refinability of splines from lattice Voronoi cells
|
math.NA cs.CV
|
Splines can be constructed by convolving the indicator function of the
Voronoi cell of a lattice. This paper presents simple criteria that imply that
only a small subset of such spline families can be refined: essentially the
well-known box splines and tensor-product splines. Among the many non-refinable
constructions are hex-splines and their generalization to non-Cartesian
lattices. An example shows how non-refinable splines can exhibit increased
approximation error upon refinement of the lattice.
|
1209.5829
|
Transmission Schemes for Four-Way Relaying in Wireless Cellular Systems
|
cs.IT math.IT
|
Two-way relaying in wireless systems has initiated a large research effort
during the past few years. While one-way relay with a single data flow
introduces loss in spectral efficiency due to its half-duplex operation,
two-way relaying based on wireless network coding regains part of this loss by
simultaneously processing the two data flows. In a broader perspective, the
two-way traffic pattern is rather limited and it is of interest to investigate
other traffic patterns where such a simultaneous processing of information
flows can bring performance advantage. In this paper we consider a scenario
beyond the usual two-way relaying: a four-way relaying, where each of the two
Mobile Stations (MSs) has a two-way connection to the same Base Station (BS),
while each connection is through a dedicated Relay Station (RS). While both RSs
are in the range of the same BS, they are assumed to have antipodal positions
within the cell, such that they do not interfere with each other. We introduce
and analyze a two-phase transmission scheme to serve the four-way traffic
pattern defined in this scenario. Each phase consists of combined broadcast and
multiple access. We analyze the achievable rate region of the new schemes for
two different operational models for the RS, Decode-and-Forward (DF) and
Amplify-and-Forward (AF), respectively. We compare the performance with a
state-of-the-art reference scheme in which time sharing is used between the two MSs,
while each MS is served through a two-way relaying scheme. The results indicate
that, when the RS operates in a DF mode, the achievable rate regions are
significantly enlarged. On the other hand, for AF relaying, the gains are
rather modest. The practical implication of the presented work is a novel
insight on how to improve the spatial reuse in wireless cellular networks by
coordinating the transmissions of the antipodal relays.
|
1209.5833
|
Locality-Sensitive Hashing with Margin Based Feature Selection
|
cs.LG cs.IR
|
We propose a learning method with feature selection for Locality-Sensitive
Hashing. Locality-Sensitive Hashing converts feature vectors into bit arrays.
These bit arrays can be used to perform similarity searches and personal
authentication. The proposed method generates bit arrays longer than those
ultimately used for similarity and other searches, and selects by learning the
bits that will be retained. We demonstrate that this method can effectively perform optimization
for cases such as fingerprint images with a large number of labels and
extremely few data that share the same labels, as well as verifying that it is
also effective for natural images, handwritten digits, and speech features.
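The learning and bit-selection procedure is specific to the paper, but the underlying Locality-Sensitive Hashing step of turning feature vectors into bit arrays can be sketched with the standard random-hyperplane construction (an assumption for illustration: the paper does not state that it uses this particular hash family, and the vectors below are invented).

```python
import random

def make_planes(dim, n_bits, seed=0):
    # one random hyperplane per output bit (sign-of-projection hashing)
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def to_bits(vec, planes):
    # feature vector -> bit array: sign of the projection onto each hyperplane
    return [1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = make_planes(dim=8, n_bits=64)
v = [1, 2, 3, 4, 5, 6, 7, 8]
w = [1.1, 2, 3, 4, 5, 6, 7, 8]      # near-duplicate of v
u = [-x for x in v]                 # opposite direction
near = hamming(to_bits(v, planes), to_bits(w, planes))   # close to 0
far = hamming(to_bits(v, planes), to_bits(u, planes))    # all 64 bits differ
```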
|
1209.5837
|
Three "quantum" models of competition and cooperation in interacting
biological populations and social groups
|
physics.soc-ph cs.SI
|
In this paper we propose a consistent statistical approach appropriate for a
number of models describing both the behavior of biological populations and
that of various social groups interacting with each other. The proposed
approach is based on the ideas of the quantum theory of open systems (QTOS)
and allows one to account explicitly for both the discreteness of the system
variables and their fluctuations near mean values. Therefore, this approach
can also be applied to the description of small populations, where standard
dynamical methods fail. We study in detail three typical models of interaction
between populations and groups: 1) antagonistic struggle between two
populations; 2) cooperation (or, more precisely, obligatory mutualism) between
two species; 3) the formation of a coalition between two feeble groups in
their conflict with a third, more powerful one. The models considered are in a
sense mutually complementary and cover most types of interaction between
populations and groups. Moreover, this method can be generalized to more
complex models in statistical physics as well as in ecology, sociology, and
other "soft" sciences.
|
1209.5853
|
Efficient Natural Evolution Strategies
|
cs.AI
|
Efficient Natural Evolution Strategies (eNES) is a novel alternative to
conventional evolutionary algorithms, using the natural gradient to adapt the
mutation distribution. Unlike previous methods based on natural gradients, eNES
uses a fast algorithm to calculate the inverse of the exact Fisher information
matrix, thus increasing both robustness and performance of its evolution
gradient estimation, even in higher dimensions. Additional novel aspects of
eNES include optimal fitness baselines and importance mixing (a procedure for
updating the population with very few fitness evaluations). The algorithm
yields competitive results on both unimodal and multimodal benchmarks.
|
1209.5905
|
An Efficient Biological Sequence Compression Technique Using LUT And
Repeat In The Sequence
|
cs.CE q-bio.QM
|
Data compression plays an important role in dealing with high volumes of DNA
sequences in the field of bioinformatics. Compression techniques also directly
affect the alignment of DNA sequences, so the time needed to decompress a
compressed sequence has to be given equal priority with the compression ratio.
This article first gives an introduction and a brief review of different
biological sequence compression methods, then presents our two improved
biological sequence compression algorithms, followed by results, conclusion and
discussion, future scope, and references. These algorithms achieve a very good
compression factor with a higher saving percentage and less time for
compression and decompression than previous biological sequence compression
algorithms. Keywords: hash map table, tandem repeats, compression factor,
compression time, saving percentage, compression, decompression process.
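The compression-factor baseline against which such algorithms are measured can be made concrete: plain 2-bit packing (the standard baseline, not the paper's hash-map/repeat method; the toy sequence is invented) already stores four bases per byte, losslessly.

```python
# 2-bit-per-base packing: a 4:1 factor over one byte per base
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
BASES = 'ACGT'

def pack(seq):
    # fold the sequence into one big integer, two bits per base, MSB first
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    return bits, len(seq)

def unpack(bits, n):
    # read the bases back LSB first, then reverse to restore the order
    out = []
    for _ in range(n):
        out.append(BASES[bits & 3])
        bits >>= 2
    return ''.join(reversed(out))

seq = 'ATTGCCATGAATTC'            # invented toy sequence
packed, n = pack(seq)
restored = unpack(packed, n)      # lossless round trip
```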
|
1209.5907
|
On Designs of Full Diversity Space-Time Block Codes for Two-User MIMO
Interference Channels
|
cs.IT math.IT
|
In this paper, a design criterion for space-time block codes (STBC) is
proposed for two-user MIMO interference channels when a group zero-forcing (ZF)
algorithm is applied at each receiver to eliminate the inter-user interference.
Based on the design criterion, a design of STBC for two-user interference
channels is proposed that can achieve full diversity for each user with the
group ZF receiver. The code rate approaches one when the time delay in the
encoding (or code block size) gets large. Performance results demonstrate that
the full diversity can be guaranteed by our proposed STBC with the group ZF
receiver.
|
1209.5912
|
Analysis of Sum-Weight-like algorithms for averaging in Wireless Sensor
Networks
|
cs.DC cs.IT math.IT
|
Distributed estimation of the average value over a Wireless Sensor Network
has recently received a lot of attention. Most papers consider single variable
sensors and communications with feedback (e.g. peer-to-peer communications).
However, in order to use efficiently the broadcast nature of the wireless
channel, communications without feedback are advocated. To ensure the
convergence in this feedback-free case, the recently-introduced Sum-Weight-like
algorithms which rely on two variables at each sensor are a promising solution.
In this paper, the convergence towards the consensus over the average of the
initial values is analyzed in depth. Furthermore, it is shown that the squared
error decreases exponentially with time. In addition, a powerful algorithm
relying on the Sum-Weight structure and taking into account the broadcast
nature of the channel is proposed.
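A minimal sketch of a Sum-Weight (push-sum-style) broadcast averaging scheme, with the two variables per sensor and no feedback; the specific splitting rule and the 4-node ring below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def sum_weight_average(values, neighbors, rounds=2000, seed=1):
    # Each sensor i keeps two variables (s_i, w_i); its estimate is s_i / w_i.
    # Per round, one random node broadcasts, splitting its pair evenly among
    # itself and its neighbors; receivers simply add (no feedback needed).
    rng = random.Random(seed)
    n = len(values)
    s, w = list(values), [1.0] * n
    for _ in range(rounds):
        i = rng.randrange(n)
        k = len(neighbors[i]) + 1
        si, wi = s[i] / k, w[i] / k
        s[i], w[i] = si, wi
        for j in neighbors[i]:
            s[j] += si
            w[j] += wi
    return [s[i] / w[i] for i in range(n)]

# invented 4-node ring; true average of the initial values is 1.5
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = sum_weight_average([0.0, 1.0, 2.0, 3.0], ring)
```

Because the split shares sum to the original pair, the total of each variable is conserved, and every node's ratio converges to the global average.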
|
1209.5922
|
Towards structured sharing of raw and derived neuroimaging data across
existing resources
|
cs.DB q-bio.NC
|
Data sharing efforts increasingly contribute to the acceleration of
scientific discovery. Neuroimaging data is accumulating in distributed
domain-specific databases and there is currently no integrated access mechanism
nor an accepted format for the critically important meta-data that is necessary
for making use of the combined, available neuroimaging data. In this
manuscript, we present work from the Derived Data Working Group, an open-access
group sponsored by the Biomedical Informatics Research Network (BIRN) and the
International Neuroimaging Coordinating Facility (INCF) focused on practical
tools for distributed access to neuroimaging data. The working group develops
models and tools facilitating the structured interchange of neuroimaging
meta-data and is making progress towards a unified set of tools for such data
and meta-data exchange. We report on the key components required for integrated
access to raw and derived neuroimaging data as well as associated meta-data and
provenance across neuroimaging resources. The components include (1) a
structured terminology that provides semantic context to data, (2) a formal
data model for neuroimaging with robust tracking of data provenance, (3) a web
service-based application programming interface (API) that provides a
consistent mechanism to access and query the data model, and (4) a provenance
library that can be used for the extraction of provenance data by image
analysts and imaging software developers. We believe that the framework and set
of tools outlined in this manuscript have great potential for solving many of
the issues the neuroimaging community faces when sharing raw and derived
neuroimaging data across the various existing database systems for the purpose
of accelerating scientific discovery.
|
1209.5969
|
First-principles multiway spectral partitioning of graphs
|
cs.DS cs.SI
|
We consider the minimum-cut partitioning of a graph into more than two parts
using spectral methods. While there exist well-established spectral algorithms
for this problem that give good results, they have traditionally not been well
motivated. Rather than being derived from first principles by minimizing graph
cuts, they are typically presented without direct derivation and then proved
after the fact to work. In this paper, we take a contrasting approach in which
we start with a matrix formulation of the minimum cut problem and then show,
via a relaxed optimization, how it can be mapped onto a spectral embedding
defined by the leading eigenvectors of the graph Laplacian. The end result is
an algorithm that is similar in spirit to, but different in detail from,
previous spectral partitioning approaches. In tests of the algorithm we find
that it outperforms previous approaches on certain particularly difficult
partitioning problems.
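For the two-part special case, the spectral embedding described above reduces to the classical Fiedler-vector cut; a self-contained sketch follows (pure-Python power iteration standing in for an eigensolver, and the six-node barbell graph is an invented example).

```python
def fiedler_partition(adj, iters=3000):
    # adj: list of neighbor lists. Power iteration on (c*I - L), repeatedly
    # projecting out the all-ones vector, converges to the Fiedler vector
    # (eigenvector of the second-smallest Laplacian eigenvalue); its signs
    # give the two-way cut.
    n = len(adj)
    deg = [len(a) for a in adj]
    c = 2.0 * max(deg) + 1.0
    x = [((i * 12) % 97) / 97.0 for i in range(n)]   # fixed irregular start vector
    for _ in range(iters):
        mean = sum(x) / n
        x = [v - mean for v in x]                    # stay orthogonal to the 1-vector
        y = [(c - deg[i]) * x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return [v < 0 for v in x]

# invented example: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
side = fiedler_partition(adj)   # splits the graph into the two triangles
```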
|
1209.5978
|
Two-way Communication with Adaptive Data Acquisition
|
cs.IT math.IT
|
Motivated by computer networks and machine-to-machine communication
applications, a bidirectional link is studied in which two nodes, Node 1 and
Node 2, communicate to fulfill generally conflicting informational
requirements. Node 2 is able to acquire information from the environment, e.g.,
via access to a remote data base or via sensing. Information acquisition is
expensive in terms of system resources, e.g., time, bandwidth and energy and
thus should be done efficiently by adapting the acquisition process to the
needs of the application. As a result of the forward communication from Node 1
to Node 2, the latter wishes to compute some function, such as a suitable
average, of the data available at Node 1 and of the data obtained from the
environment. The forward link is also used by Node 1 to query Node 2 with the
aim of retrieving suitable information from the environment on the backward
link. The problem is formulated in the context of multi-terminal
rate-distortion theory and the optimal trade-off between communication rates,
distortions of the information produced at the two nodes and costs for
information acquisition at Node 2 is derived. The issue of robustness to
possible malfunctioning of the data acquisition process at Node 2 is also
investigated. The results are illustrated via an example that demonstrates the
different roles played by the forward communication, namely data exchange,
query and control.
|
1209.5982
|
PlaceRaider: Virtual Theft in Physical Spaces with Smartphones
|
cs.CR cs.CV
|
As smartphones become more pervasive, they are increasingly targeted by
malware. At the same time, each new generation of smartphone features
increasingly powerful onboard sensor suites. A new strain of sensor malware has
been developing that leverages these sensors to steal information from the
physical environment (e.g., researchers have recently demonstrated how malware
can listen for spoken credit card numbers through the microphone, or feel
keystroke vibrations using the accelerometer). Yet the possibilities of what
malware can see through a camera have been understudied. This paper introduces
a novel visual malware called PlaceRaider, which allows remote attackers to
engage in remote reconnaissance and what we call virtual theft. Through
completely opportunistic use of the camera on the phone and other sensors,
PlaceRaider constructs rich, three dimensional models of indoor environments.
Remote burglars can thus download the physical space, study the environment
carefully, and steal virtual objects from the environment (such as financial
documents, information on computer monitors, and personally identifiable
information). Through two human subject studies we demonstrate the
effectiveness of using mobile devices as powerful surveillance and virtual
theft platforms, and we suggest several possible defenses against visual
malware.
|
1209.5991
|
Subset Selection for Gaussian Markov Random Fields
|
cs.LG stat.ML
|
Given a Gaussian Markov random field, we consider the problem of selecting a
subset of variables to observe which minimizes the total expected squared
prediction error of the unobserved variables. We first show that finding an
exact solution is NP-hard even for a restricted class of Gaussian Markov random
fields, called Gaussian free fields, which arise in semi-supervised learning
and computer vision. We then give a simple greedy approximation algorithm for
Gaussian free fields on arbitrary graphs. Finally, we give a message passing
algorithm for general Gaussian Markov random fields on bounded tree-width
graphs.
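A minimal sketch of the selection objective (using NumPy; the 3-variable chain covariance is an invented example, and this generic greedy is only in the spirit of the paper's approximation algorithm): conditioning a Gaussian on the chosen set leaves an expected squared prediction error equal to the trace of the conditional covariance of the rest.

```python
import numpy as np

def greedy_subset(Sigma, k):
    # Greedily choose k variables to observe. After conditioning a Gaussian on
    # the chosen set T, the total expected squared prediction error of the
    # unobserved set U is trace(Sigma_UU - Sigma_UT Sigma_TT^{-1} Sigma_TU).
    n = Sigma.shape[0]
    S = []
    for _ in range(k):
        best, best_err = None, None
        for v in range(n):
            if v in S:
                continue
            T = S + [v]
            U = [u for u in range(n) if u not in T]
            SuT = Sigma[np.ix_(U, T)]
            cond = Sigma[np.ix_(U, U)] - SuT @ np.linalg.solve(Sigma[np.ix_(T, T)], SuT.T)
            err = np.trace(cond)
            if best_err is None or err < best_err:
                best, best_err = v, err
        S.append(best)
    return S

# invented 3-variable chain covariance: the middle variable is the single most
# informative observation
Sigma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])
chosen = greedy_subset(Sigma, 1)   # picks the middle variable
```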
|
1209.5998
|
Biased Assimilation, Homophily and the Dynamics of Polarization
|
cs.SI cs.GT physics.soc-ph
|
Are we as a society getting more polarized, and if so, why? We try to answer
this question through a model of opinion formation. Empirical studies have
shown that homophily results in polarization. However, we show that DeGroot's
well-known model of opinion formation based on repeated averaging can never be
polarizing, even if individuals are arbitrarily homophilous. We generalize
DeGroot's model to account for a phenomenon well-known in social psychology as
biased assimilation: when presented with mixed or inconclusive evidence on a
complex issue, individuals draw undue support for their initial position
thereby arriving at a more extreme opinion. We show that in a simple model of
homophilous networks, our biased opinion formation process results in either
polarization, persistent disagreement or consensus depending on how biased
individuals are. In other words, homophily alone, without biased assimilation,
is not sufficient to polarize society. Quite interestingly, biased assimilation
also provides insight into the following related question: do internet based
recommender algorithms that show us personalized content contribute to
polarization? We make a connection between biased assimilation and the
polarizing effects of some random-walk based recommender algorithms that are
similar in spirit to some commonly used recommender algorithms.
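The claim that repeated averaging cannot polarize is easy to see numerically; a minimal DeGroot simulation (the 4-node homophilous weight matrix and initial opinions are invented for illustration) shows the opinion spread collapsing even under strong homophily.

```python
def degroot_step(x, W):
    # DeGroot update: every opinion becomes a weighted average of all opinions
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

# invented, strongly homophilous 4-node network: two like-minded pairs with
# only weak cross-pair attention
W = [[0.45, 0.45, 0.05, 0.05],
     [0.45, 0.45, 0.05, 0.05],
     [0.05, 0.05, 0.45, 0.45],
     [0.05, 0.05, 0.45, 0.45]]
x = [0.0, 0.1, 0.9, 1.0]
for _ in range(200):
    x = degroot_step(x, W)
spread = max(x) - min(x)   # collapses toward 0: plain averaging yields consensus
```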
|
1209.6001
|
Bayesian Mixture Models for Frequent Itemset Discovery
|
cs.LG cs.IR stat.ML
|
In binary-transaction data-mining, traditional frequent itemset mining often
produces results which are not straightforward to interpret. To overcome this
problem, probability models are often used to produce more compact and
conclusive results, albeit with some loss of accuracy. Bayesian statistics have
been widely used in the development of probability models in machine learning
in recent years and these methods have many advantages, including their
abilities to avoid overfitting. In this paper, we develop two Bayesian mixture
models with the Dirichlet distribution prior and the Dirichlet process (DP)
prior to improve the previous non-Bayesian mixture model developed for
transaction dataset mining. We implement the inference of both mixture models
using two methods: a collapsed Gibbs sampling scheme and a variational
approximation algorithm. Experiments in several benchmark problems have shown
that both mixture models achieve better performance than a non-Bayesian mixture
model. The variational algorithm is the faster of the two approaches while the
Gibbs sampling method achieves more accurate results. The Dirichlet process
mixture model can automatically grow to a proper complexity for a better
approximation. Once the model is built, it can be very fast to query and run
analysis on (typically 10 times faster than Eclat, as we will show in the
experiment section). However, these approaches also show that mixture models
underestimate the probabilities of frequent itemsets. Consequently, these
models have a higher sensitivity but a lower specificity.
|
1209.6004
|
The Issue-Adjusted Ideal Point Model
|
stat.ML cs.LG stat.AP
|
We develop a model of issue-specific voting behavior. This model can be used
to explore lawmakers' personal patterns of voting by issue area,
providing an exploratory window into how the language of the law is correlated
with political support. We derive approximate posterior inference algorithms
based on variational methods. Across 12 years of legislative data, we
demonstrate both improvement in heldout prediction performance and the model's
utility in interpreting an inherently multi-dimensional space.
|
1209.6007
|
Shattering and Compressing Networks for Centrality Analysis
|
cs.DS cs.SI physics.soc-ph
|
Who is more important in a network? Who controls the flow between the nodes
or whose contribution is significant for connections? Centrality metrics play
an important role while answering these questions. The betweenness metric is
useful for network analysis and implemented in various tools. Since it is one
of the most computationally expensive kernels in graph mining, several
techniques have been proposed for fast computation of betweenness centrality.
In this work, we propose and investigate techniques which compress a network
and shatter it into pieces so that the rest of the computation can be handled
independently for each piece. Although we designed and tuned the shattering
process for betweenness, it can be adapted for other centrality metrics in a
straightforward manner. Experimental results show that the proposed techniques
can greatly reduce the centrality computation time for various types of
networks.
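For context, the per-piece computation that such shattering techniques accelerate is typically Brandes' algorithm; the minimal sketch below computes unweighted betweenness centrality (directed counting, so halve the scores for undirected graphs) and is not the paper's compression method itself.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted betweenness centrality.

    adj maps each node to a list of neighbors. Each ordered source-target
    pair is counted once; for undirected graphs, divide the result by 2.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

The quadratic-to-cubic cost of running this from every source is what makes compressing and shattering the network before the computation worthwhile.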
|
1209.6012
|
Minimum Weight Dynamo and Fast Opinion Spreading
|
cs.SI cs.DM math.CO
|
We consider the following multi-level opinion spreading model on networks.
Initially, each node gets a weight from the set [0..k-1], where such a weight
stands for the individual's conviction of a new idea or product. Then,
proceeding in rounds, each node updates its weight according to the weights of
its neighbors. We are interested in the initial assignments of weights leading
each node to reach the value k-1 (i.e., unanimous maximum-level acceptance)
within a given number of rounds. We determine lower bounds on the sum of the
initial weights of the nodes under the irreversible simple majority rule,
where a node increases its weight if and only if the majority of its neighbors
have a weight higher than its own. Moreover, we provide constructive tight
upper bounds for some classes of regular topologies: rings, tori, and cliques.
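A minimal simulation of the update rule described above, assuming a weight rises by one level per round when a strict majority of neighbors carry a higher weight (the per-round increment size is an assumption not stated in the abstract):

```python
def spread(adj, weights, k, rounds):
    """Irreversible simple-majority dynamics with k levels.

    A node raises its weight by one level (assumption: one level per
    round) when a strict majority of its neighbors have a higher weight.
    """
    w = dict(weights)
    for _ in range(rounds):
        nxt = dict(w)  # synchronous update
        for v, nbrs in adj.items():
            higher = sum(1 for u in nbrs if w[u] > w[v])
            if 2 * higher > len(nbrs) and w[v] < k - 1:
                nxt[v] = w[v] + 1
        w = nxt
    return w
```

On a triangle with initial weights (1, 1, 0) and k = 2, one round suffices to reach unanimous maximum-level acceptance.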
|
1209.6017
|
Power Allocation in Amplify and Forward Relays with a Power Constrained
Relay
|
cs.IT cs.SY math.IT
|
We consider a two-hop Multiple-Input Multiple-Output channel with a source, a
single Amplify and Forward relay, and the destination. We consider the problem
of designing precoders at the source and the relay, and the receiver matrix at
the destination. In particular, we address the problem of optimal power
allocation scheme at the source which minimizes the source transmit power while
satisfying a given Quality of Service requirement at the destination, and a
power constraint at the relay. We consider two types of receiver at the
destination: a Zero Forcing receiver and a Minimum Mean Square Error receiver.
Simulation results comparing the performance of the two receivers are provided
at the end.
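For reference, the standard textbook forms of the two receivers, written for an effective end-to-end channel $\bar{H}$ with noise covariance $\sigma^{2} I$; the paper's exact expressions, which must account for the relay amplification and precoders, may differ:

```latex
\begin{align}
W_{\mathrm{ZF}}   &= \left(\bar{H}^{H}\bar{H}\right)^{-1}\bar{H}^{H}, \\
W_{\mathrm{MMSE}} &= \bar{H}^{H}\left(\bar{H}\bar{H}^{H}
                     + \sigma^{2} I\right)^{-1}.
\end{align}
```

The MMSE receiver reduces to the ZF receiver as $\sigma^{2} \to 0$.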
|
1209.6037
|
Reproduction of Images by Gamut Mapping and Creation of New Test Charts
in Prepress Process
|
cs.CV
|
With the advent of digital images, the problem of maintaining uniform picture
visualization arises because each printing or scanning device has its own
color chart. Universal color profiles are therefore defined by the ICC to
bring uniformity across various types of devices. With such a color profile in
mind, various new color charts are created and calibrated with the help of the
standard IT8 test charts available on the market. The main objective of color
reproduction is to produce an identical picture at the device output. For that
purpose, principles for gamut mapping have been designed.
|
1209.6050
|
An Introduction to Community Detection in Multi-layered Social Network
|
cs.SI physics.soc-ph
|
The extraction of social communities and their dynamics is one of the most
important problems in today's social network analysis. During the last few
years, many researchers have proposed their own methods for group discovery in
social networks. However, almost none of them have noticed that modern social
networks are much more complex than they were a few years ago. Due to the vast
amount of data about various user activities available in IT systems, it is
possible to distinguish a new class of social networks called multi-layered
social networks. For that reason, a new approach to community detection in the
multi-layered social network, which utilizes a multi-layered edge clustering
coefficient, is proposed in this paper.
|
1209.6070
|
Movie Popularity Classification based on Inherent Movie Attributes using
C4.5,PART and Correlation Coefficient
|
cs.LG cs.DB cs.IR
|
The abundance of movie data across the internet makes it an obvious candidate
for machine learning and knowledge discovery. However, most research is
directed towards bi-polar classification of movies or the generation of movie
recommendation systems based on reviews given by viewers on various internet
sites. Classification of movie popularity based solely on the attributes of a
movie, i.e., actor, actress, director rating, language, country, budget, etc.,
has been less highlighted due to the large number of attributes associated
with each movie and their differences in dimensions. In this paper, we propose
a classification scheme for pre-release movie popularity based on inherent
attributes using the C4.5 and PART classifier algorithms, and define the
relation between attributes of post-release movies using the correlation
coefficient.
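The attribute analysis mentioned above rests on the sample Pearson correlation coefficient, sketched here in plain Python; the inputs in the test are illustrative, not movie data from the paper.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two attribute lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

The coefficient ranges over [-1, 1], with the endpoints indicating perfect linear (anti-)correlation between two attributes.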
|
1209.6129
|
A New Middle Path Approach For Alignements In Blast
|
cs.DS cs.CE q-bio.QM
|
This paper presents a new middle path approach developed to reduce alignment
calculations in the BLAST algorithm. It is a new step introduced into the
BLAST algorithm between the ungapped and gapped alignment stages. This middle
path step reduces the number of sequences proceeding to gapped alignment,
improving alignment speed by up to 30 percent.
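For orientation, BLAST's ungapped stage extends a seed hit with an X-drop cutoff, as in the sketch below; this illustrates the stage the middle path sits after, not the middle-path step itself, and the scoring parameters are illustrative assumptions.

```python
def ungapped_extend(seq1, seq2, i, j, match=1, mismatch=-1, dropoff=3):
    """Extend a seed hit at (i, j) to the right without gaps, stopping
    when the running score falls `dropoff` below the best score seen.
    Returns (best score, extension length achieving it)."""
    score = best = 0
    best_len = 0
    k = 0
    while i + k < len(seq1) and j + k < len(seq2):
        score += match if seq1[i + k] == seq2[j + k] else mismatch
        if score > best:
            best, best_len = score, k + 1
        if best - score >= dropoff:
            break  # X-drop cutoff: further extension is unpromising
        k += 1
    return best, best_len
```

Only hits whose best ungapped score clears a threshold proceed to the (much more expensive) gapped stage, which is the hand-off point the middle path approach targets.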
|
1209.6140
|
DAARIA: Driver Assistance by Augmented Reality for Intelligent
Automobile
|
cs.HC cs.RO
|
Taking the driver's state into account is a major challenge when designing new
advanced driver assistance systems. In this paper we present a driver
assistance system strongly coupled to the user. DAARIA stands for Driver
Assistance by Augmented Reality for Intelligent Automobile. It is an augmented
reality interface powered by several sensors. The detection has two goals: one
is the position of obstacles and the quantification of the danger they
represent; the other is the driver's behavior. A suitable visualization
metaphor allows the driver to perceive at any time the location of relevant
hazards while keeping his eyes on the road. First results show that our method
could be applied not only to vehicles but also to aerospace, fluvial, or
maritime navigation.
|
1209.6151
|
Face Alignment Using Active Shape Model And Support Vector Machine
|
cs.CV
|
The Active Shape Model (ASM) is one of the most popular local texture models
for face alignment. It is applied in many fields, such as locating facial
features in an image, face synthesis, etc. However, experimental results show
that the accuracy of the classical ASM is not high for some applications. This
paper suggests several improvements to the classical ASM to increase its
performance in face alignment. Our four major improvements are: i) building a
model that combines a Sobel filter and the 2-D profile when searching for a
face in an image; ii) applying the Canny algorithm for edge enhancement in the
image; iii) using a Support Vector Machine (SVM) to classify landmarks on the
face, in order to determine their exact locations in support of the ASM; iv)
automatically adjusting the 2-D profile in the multi-level model based on the
size of the input image. Experimental results on the Caltech face database and
the Technical University of Denmark database (imm_face) show that our proposed
improvements lead to far better performance.
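Improvement i) relies on Sobel gradients; a minimal, dependency-free sketch of the gradient-magnitude computation (borders left at zero) is:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of lists)
    using the 3x3 Sobel kernels; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(gx_k[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(gy_k[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a vertical step edge the response is strongest along the edge columns, which is the cue the 2-D profile search exploits.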
|
1209.6152
|
Parity Declustering for Fault-Tolerant Storage Systems via $t$-designs
|
cs.IT math.IT
|
Parity declustering allows faster reconstruction of a disk array when some
disk fails. Moreover, it guarantees uniform reconstruction workload on all
surviving disks. It has been shown that parity declustering for one-failure
tolerant array codes can be obtained via Balanced Incomplete Block Designs. We
extend this technique for array codes that can tolerate an arbitrary number of
disk failures via $t$-designs.
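A toy instance of the idea for the one-failure case: using the Fano plane, a (7,3,1)-BIBD, as the stripe layout spreads the rebuild load evenly over the surviving disks. The design and load computation below are illustrative, not the paper's general $t$-design construction.

```python
# A (7,3,1)-BIBD (the Fano plane): 7 disks, stripes of 3 units each.
# Each block lists the disks holding one parity stripe; every pair of
# disks co-occurs in exactly one block, so after a single disk failure
# the reconstruction reads are spread evenly over the survivors.
fano = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

def rebuild_load(blocks, failed):
    """Units each surviving disk must read to rebuild `failed`."""
    load = {}
    for blk in blocks:
        if failed in blk:
            for d in blk:
                if d != failed:
                    load[d] = load.get(d, 0) + 1
    return load
```

Replacing the BIBD with a $t$-design with $t > 2$ is what extends the uniform-load guarantee to multiple simultaneous failures.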
|
1209.6189
|
The Biometric Menagerie - A Fuzzy and Inconsistent Concept
|
cs.CV
|
This paper proves that in iris recognition, the concepts of sheep, goats,
lambs and wolves - as proposed by Doddington and Yager in the so-called
Biometric Menagerie, are at most fuzzy and at least not quite well defined.
They depend not only on the users or on their biometric templates, but also on
the parameters that calibrate the iris recognition system. This paper shows
that, in the case of iris recognition, the extensions of these concepts have
very unsharp and unstable (non-stationary) boundaries. The membership of a user
to these categories is more often expressed as a degree (as a fuzzy value)
rather than as a crisp value. Moreover, they are defined by fuzzy Sugeno rules
instead of classical (crisp) definitions. For these reasons, we conclude that
the Biometric Menagerie proposed by Doddington and Yager can be at most a
fuzzy concept of biometrics, but even this status is conditioned on improving
its definition. All of these facts are confirmed experimentally in a series of
12 exhaustive iris recognition tests undertaken on the University of Bath Iris
Image
Database while using three different iris code dimensions (256x16, 128x8 and
64x4), two different iris texture encoders (Log-Gabor and Haar-Hilbert) and two
different types of safety models.
|
1209.6190
|
Noise Influence on the Fuzzy-Linguistic Partitioning of Iris Code Space
|
cs.CV
|
This paper analyses the set of iris codes stored or used in an iris
recognition system as an f-granular space. The f-granulation is given by
identifying in the iris code space the extensions of the fuzzy concepts wolves,
goats, lambs and sheep (previously introduced by Doddington as 'animals' of the
biometric menagerie) - which together form a partitioning of the iris code
space. The main question here is how objective (stable / stationary) this
partitioning is when the iris segments are subject to noisy acquisition. In
order to prove that the f-granulation of iris code space with respect to the
fuzzy concepts that define the biometric menagerie is unstable in noisy
conditions (is sensitive to noise), three types of noise (localvar, motion
blur, salt and pepper) have been alternatively added to the iris segments
extracted from the University of Bath Iris Image Database. The results of 180
exhaustive (all-to-all) iris recognition tests are presented and commented on
here.
|
1209.6195
|
Examples of Artificial Perceptions in Optical Character Recognition and
Iris Recognition
|
cs.AI
|
This paper assumes the hypothesis that human learning is perception based,
and consequently, the learning process and perceptions should not be
represented and investigated independently or modeled in different simulation
spaces. In order to keep the analogy between the artificial and human learning,
the former is assumed here as being based on the artificial perception. Hence,
instead of choosing to apply or develop a Computational Theory of (human)
Perceptions, we choose to mirror the human perceptions in a numeric
(computational) space as artificial perceptions and to analyze the
interdependence between artificial learning and artificial perception in the
same numeric space, using one of the simplest tools of Artificial Intelligence
and Soft Computing, namely perceptrons. As practical applications, we
choose to work around two examples: Optical Character Recognition and Iris
Recognition. In both cases a simple Turing test shows that artificial
perceptions of the difference between two characters and between two irides are
fuzzy, whereas the corresponding human perceptions are, in fact, crisp.
|
1209.6204
|
Reclassification formula that provides to surpass K-means method
|
cs.CV cs.DS
|
The paper presents a formula for the reclassification of multidimensional
data points (columns of real numbers, "objects", "vectors", etc.). This formula
describes the change in the total squared error caused by reclassification of
data points from one cluster into another and prompts the way to calculate the
sequence of optimal partitions, which are characterized by a minimum value of
the total squared error E (weighted sum of within-class variance,
within-cluster sum of squares WCSS etc.), i.e. the sum of squared distances
from each data point to its cluster center. In this setting, source data
points are treated with repetitions allowed, and the resulting clusters from
different partitions may, in the general case, overlap each other. The final
partitions are characterized by "equilibrium" stability with respect to the
reclassification of the data points, where the term "stability" means that any
prescribed reclassification of data points does not increase the total squared
error E. It is important to note that the conventional K-means method, in the
general case, produces unstable partitions with overstated values of the total
squared error E. The proposed method, based on the reclassification formula,
is more efficient than the K-means method because it converts any partition
into a stable one and involves certain sets of data points in the
reclassification process, in contrast to the classification of individual data
points in the K-means method.
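One standard form of such a reclassification criterion, as used in Hartigan-style k-means, is sketched below; the abstract does not state the paper's exact formula, so this particular expression is an assumption.

```python
def delta_e(x, c_a, n_a, c_b, n_b):
    """Change in total squared error E when point x moves from cluster A
    (centroid c_a, size n_a >= 2) to cluster B (centroid c_b, size n_b).

    A negative value means the move reduces E; repeatedly applying only
    improving moves drives the partition to an "equilibrium" state.
    """
    d2 = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return n_b / (n_b + 1) * d2(x, c_b) - n_a / (n_a - 1) * d2(x, c_a)
```

The size-dependent factors account for the centroid shift caused by the move itself, which the plain nearest-centroid rule of K-means ignores.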
|
1209.6217
|
An Evolving model of online bipartite networks
|
physics.soc-ph cs.SI
|
Understanding the structure and evolution of online bipartite networks is a
significant task since they play a crucial role in various e-commerce services
nowadays. Recently, various attempts have been made to propose different
models, resulting in either power-law or exponential degree distributions.
However, many empirical results show that the user degree distribution
actually follows a shifted power-law distribution, the so-called
\emph{Mandelbrot law}, which cannot be fully described by previous models. In
this paper, we propose an evolving model, considering two different user
behaviors: random and preferential attachment. Extensive empirical results on
two real bipartite networks, \emph{Delicious} and \emph{CiteULike}, show that
the theoretical model can well characterize the structure of real networks for
both user and object degree distributions. In addition, we introduce a
structural parameter $p$, to demonstrate that the hybrid user behavior leads to
the shifted power-law degree distribution, and the region of power-law tail
will increase with the increment of $p$. The proposed model might shed some
light on understanding the underlying laws governing the structure of real
online bipartite networks.
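A minimal simulation of the hybrid attachment described above, where each event picks a user preferentially with probability p and uniformly at random otherwise; the initial number of users, seed, and parameters are illustrative assumptions, not the paper's calibration.

```python
import random

def simulate(n_events, p, n_users=10, seed=0):
    """Hybrid attachment sketch: with probability p the acting user is
    chosen preferentially (proportional to current degree), otherwise
    uniformly at random; each event adds one object link to that user."""
    rng = random.Random(seed)
    degrees = [1] * n_users  # every user starts with one collected object
    for _ in range(n_events):
        if rng.random() < p:
            # preferential attachment: weight users by current degree
            r = rng.uniform(0, sum(degrees))
            acc = 0.0
            for i, d in enumerate(degrees):
                acc += d
                if r <= acc:
                    degrees[i] += 1
                    break
        else:
            # random attachment: uniform over users
            degrees[rng.randrange(n_users)] += 1
    return degrees
```

Mixing the two mechanisms is what flattens the head of the pure power law into the shifted (Mandelbrot-like) form, with the random component controlling the shift.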
|