| id | title | categories | abstract |
|---|---|---|---|
1207.2592
|
Novel Grey Interval Weight Determining and Hybrid Grey Interval Relation
Method in Multiple Attribute Decision-Making
|
cs.AI
|
This paper proposes a grey interval relation TOPSIS for the decision making
in which all of the attribute weights and attribute values are given by the
interval grey numbers. Unlike other grey relation decision-making methods, ours
obtains all subjective and objective weights as interval grey numbers, and
decision-making is performed based on the relative approach degree of grey
TOPSIS, the relative approach degree of grey incidence, and the relative
membership degree of grey incidence, using 2-dimensional Euclidean distance.
The weighted Borda method is used to combine the results of the three methods.
An example shows the applicability of
the proposed approach.
|
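The paper's method works on interval grey numbers and is not reproduced here; as a point of reference, below is a minimal sketch of the classic crisp TOPSIS ranking that the grey variant generalizes, with an illustrative decision matrix and weights (all numbers made up):

```python
import numpy as np

def topsis(X, w):
    """Classic crisp TOPSIS: rank alternatives by relative closeness
    to the ideal solution (all criteria treated as benefit criteria)."""
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * w                                    # apply criterion weights
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # in [0, 1]; higher is better

# Illustrative decision matrix: 4 alternatives x 3 criteria.
X = np.array([[9., 9., 9.],
              [7., 7., 8.],
              [6., 8., 7.],
              [5., 5., 5.]])
w = np.array([0.4, 0.3, 0.3])
scores = topsis(X, w)
```

Alternative 0 dominates every criterion, so it must rank first; alternative 3 is dominated by all others and ranks last.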
1207.2597
|
Automated Training and Maintenance through Kinect
|
cs.CV cs.ET cs.GR cs.HC
|
In this paper, we have worked on reducing the burden on mechanics involved in
complex automobile maintenance activities performed in centralised workshops.
We present a system prototype that combines Augmented Reality with the Kinect.
With the Kinect, very high quality sensors are available at considerably low
cost, reducing the overall expenditure of the system design. The system can be
operated either in Speech mode or in Gesture mode: it can be controlled by
various audio commands if the user opts for Speech mode, and the same control
can be achieved with a set of gestures in Gesture mode. Gesture recognition is
performed by the Kinect system, which, bundled with RGB and depth cameras,
processes skeletal data by tracking 20 different body joints. Gestures are
recognized by verifying user movements and checking them against predefined
conditions. The Augmented Reality module captures real-time image data streams
from a high-resolution camera and generates a 3D model that is superimposed on
the real-time data.
|
1207.2600
|
Efficient Prediction of DNA-Binding Proteins Using Machine Learning
|
cs.CV q-bio.QM
|
DNA-binding proteins are a class of proteins which have a specific or general
affinity to DNA and include three important components: transcription factors,
nucleases, and histones. DNA-binding proteins also perform important roles in
many types of cellular activities. In this paper we describe machine learning
systems for the prediction of DNA-binding proteins in which a Support Vector
Machine (SVM) and a Cascade Correlation Neural Network (CCNN) are optimized and
then compared to determine the learning algorithm that achieves the best
prediction performance. The information used for classification is derived from
characteristics that include overall charge, patch size and amino acid
composition. In total 121 DNA-binding proteins and 238 non-binding proteins are
used to build and evaluate the system. For the SVM using the ANOVA kernel with
Jack-knife evaluation, an accuracy of 86.7% has been achieved, with 91.1%
sensitivity and 85.3% specificity. For the CCNN optimized over the entire
dataset with Jack-knife evaluation we report an accuracy of 75.4%, with
specificity and sensitivity of 72.3% and 82.6%, respectively.
|
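The Jack-knife evaluation mentioned above is plain leave-one-out cross-validation. A minimal sketch follows, using a simple nearest-centroid classifier as a stand-in (the study itself uses an SVM and a CCNN) and synthetic two-class "protein feature" vectors; all data and the classifier choice here are illustrative:

```python
import numpy as np

def jackknife_accuracy(X, y, classify):
    """Leave-one-out ("Jack-knife") evaluation: train on all but one sample,
    test on the held-out sample, and average the hits."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += int(classify(X[mask], y[mask], X[i]) == y[i])
    return hits / len(X)

def nearest_centroid(Xtr, ytr, x):
    """Stand-in classifier: assign x to the class with the nearer mean."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# Synthetic two-class "protein features" (e.g. charge, patch size, composition).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(2, 1, (60, 3))])
y = np.array([0] * 60 + [1] * 60)
acc = jackknife_accuracy(X, y, nearest_centroid)
```

With well-separated synthetic classes the leave-one-out accuracy lands well above chance.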
1207.2602
|
A Novel Approach Coloured Object Tracker with Adaptive Model and
Bandwidth using Mean Shift Algorithm
|
cs.CV
|
The traditional color-based mean-shift tracking algorithm is popular among
tracking methods due to its simple and efficient procedure; however, the lack
of dynamism in its target model makes it unsuitable for tracking objects whose
sizes and shapes change. In this paper, we propose a fast novel three-phase
colored object tracker based on the mean-shift idea while utilizing an adaptive
model. The proposed method improves the mentioned weaknesses of the original
mean-shift algorithm. The experimental results show that the new method is
feasible, robust and has acceptable speed in comparison with other algorithms.
|
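The paper's three-phase adaptive-model tracker is not reproduced here, but the core fixed-bandwidth mean-shift update that it builds on can be sketched as follows (flat kernel, synthetic 2-D data):

```python
import numpy as np

def mean_shift(points, start, bandwidth, n_iter=100, tol=1e-6):
    """Mean-shift mode seeking with a flat kernel: repeatedly move the
    estimate to the mean of the samples inside the bandwidth window."""
    x = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1)
        window = points[d <= bandwidth]       # samples inside the kernel window
        if len(window) == 0:
            break
        new_x = window.mean(axis=0)           # shift to the local sample mean
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x

# Two well-separated clusters; starting near one converges to its centre.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
mode = mean_shift(pts, start=[4.0, 4.0], bandwidth=2.0)
```

The fixed bandwidth is exactly the limitation the abstract points at: it cannot follow a target whose scale changes.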
1207.2608
|
Training Optimization for Energy Harvesting Communication Systems
|
cs.IT math.IT
|
Energy harvesting (EH) has recently emerged as an effective way to solve the
lifetime challenge of wireless sensor networks, as it can continuously harvest
energy from the environment. Unfortunately, it is challenging to guarantee a
satisfactory short-term performance in EH communication systems because the
harvested energy is sporadic. In this paper, we consider the channel training
optimization problem in EH communication systems, i.e., how to obtain accurate
channel state information to improve the communication performance. In contrast
to conventional communication systems, the optimization of the training power
and training period in EH communication systems is a coupled problem, which
makes such optimization very challenging. We shall formulate the optimal
training design problem for EH communication systems, and propose two solutions
that adaptively adjust the training period and power based on either the
instantaneous energy profile or the average energy harvesting rate. Numerical
and simulation results will show that training optimization is important in EH
communication systems. In particular, it will be shown that for short block
lengths, training optimization is critical. In contrast, for long block
lengths, the optimal training period is not too sensitive to the value of the
block length nor to the energy profile. Therefore, a properly selected fixed
training period value can be used.
|
1207.2615
|
Broccoli: Semantic Full-Text Search at your Fingertips
|
cs.IR
|
We present Broccoli, a fast and easy-to-use search engine for what we call
semantic full-text search. Semantic full-text search combines the capabilities
of standard full-text search and ontology search. The search operates on four
kinds of objects: ordinary words (e.g., edible), classes (e.g., plants),
instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to).
Queries are trees, where nodes are arbitrary bags of these objects, and arcs
are relations. The user interface guides the user in incrementally constructing
such trees by instant (search-as-you-type) suggestions of words, classes,
instances, or relations that lead to good hits. Both standard full-text search
and pure ontology search are included as special cases. In this paper, we
describe the query language of Broccoli, the main idea behind a new kind of
index that enables fast processing of queries from that language as well as
fast query suggestion, the natural language processing required, and the user
interface. We evaluated query times and result quality on the full version of
the English Wikipedia (40 GB XML dump) combined with the YAGO ontology (26
million facts). We have implemented a fully functional prototype based on our
ideas and provide a web application to reproduce our quality experiments. Both
are accessible via http://broccoli.informatik.uni-freiburg.de/repro-corr/ .
|
1207.2619
|
Conceptual Modelling and The Quality of Ontologies: Endurantism Vs.
Perdurantism
|
cs.AI cs.DB
|
Ontologies are key enablers for sharing precise and machine-understandable
semantics among different applications and parties. Yet, for ontologies to meet
these expectations, their quality must be of a good standard. The quality of an
ontology is strongly based on the design method employed. This paper addresses
the design problems related to the modelling of ontologies, with specific
concentration on the issues related to the quality of the conceptualisations
produced. The paper aims to demonstrate the impact of the modelling paradigm
adopted on the quality of ontological models and, consequently, the potential
impact that such a decision can have in relation to the development of software
applications. To this aim, an ontology that is conceptualised based on the
Object-Role Modelling (ORM) approach (a representative of endurantism) is
re-engineered into one modelled on the basis of the Object Paradigm (OP) (a
representative of perdurantism). Next, the two ontologies are analytically
compared using the specified criteria. The conducted comparison highlights that
using the OP for ontology conceptualisation can provide more expressive,
reusable, objective and temporal ontologies than those conceptualised on the
basis of the ORM approach.
|
1207.2630
|
Nugget Discovery with a Multi-objective Cultural Algorithm
|
cs.NE
|
Partial classification popularly known as nugget discovery comes under
descriptive knowledge discovery. It involves mining rules for a target class of
interest. Classification "If-Then" rules are the most sought out by decision
makers since they are the most comprehensible form of knowledge mined by data
mining techniques. The rules have certain properties namely the rule metrics
which are used to evaluate them. Mining rules with user specified properties
can be considered as a multi-objective optimization problem since the rules
have to satisfy more than one property to be used by the user. Cultural
algorithms (CA), with their knowledge sources, have been used in solving many
optimization problems. However, a research gap exists in using cultural
algorithms for multi-objective optimization of rules. In the current study, a
multi-objective cultural algorithm is proposed for partial classification.
Results of experiments on benchmark data sets reveal good performance.
|
1207.2641
|
Camera identification by grouping images from database, based on shared
noise patterns
|
cs.CV
|
Previous research showed that camera-specific noise patterns, so-called
PRNU patterns, can be extracted from images and used to find related images. This
particular research focuses on grouping images from a database based on a
shared noise pattern, as an identification method for cameras. Using the method
described in this article, groups of images created with the same camera can be
linked within a large database of images. Using MATLAB, relevant image noise
patterns are extracted from images much more quickly than with common methods,
through faster noise-extraction filters and improvements that reduce the
computational cost. Noise patterns whose correlation exceeds a certain
threshold value can quickly be matched. Hereby, groups of related images can be
linked from a database of images, and the method can be used to scan a large
number of images for suspect noise patterns.
|
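A toy sketch of the idea, not the paper's MATLAB pipeline: extract a high-frequency noise residual with a cheap box-blur denoiser (a stand-in for the faster filters the paper uses), then correlate residuals; images from the same simulated "camera" correlate strongly, images from different cameras do not. All images and noise patterns here are synthetic:

```python
import numpy as np

def noise_residual(img):
    """High-frequency residual: image minus a 3x3 box-blur denoised version."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img - blur

def ncc(a, b):
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
prnu_a = rng.normal(0, 2, (64, 64))   # fixed per-camera pattern (PRNU stand-in)
prnu_b = rng.normal(0, 2, (64, 64))

def shoot(prnu):
    """Simulated photo: smooth scene + camera pattern + per-shot noise."""
    yy, xx = np.mgrid[0:64, 0:64]
    scene = 100 + 0.5 * xx + 0.3 * yy
    return scene + prnu + rng.normal(0, 1, (64, 64))

res = [noise_residual(shoot(p)) for p in (prnu_a, prnu_a, prnu_b)]
same_cam = ncc(res[0], res[1])   # two images from the same camera
diff_cam = ncc(res[0], res[2])   # images from different cameras
```

Thresholding `ncc` is then enough to group images by source camera in this toy setting.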
1207.2681
|
Oblique Pursuits for Compressed Sensing
|
cs.IT math.IT
|
Compressed sensing is a new data acquisition paradigm enabling universal,
simple, and reduced-cost acquisition, by exploiting a sparse signal model. Most
notably, recovery of the signal by computationally efficient algorithms is
guaranteed for certain randomized acquisition systems. However, there is a
discrepancy between the theoretical guarantees and practical applications. In
applications, including Fourier imaging in various modalities, the measurements
are acquired by inner products with vectors selected randomly (sampled) from a
frame. Currently available guarantees are derived using a so-called restricted
isometry property (RIP), which has only been shown to hold under ideal
assumptions. For example, the sampling from the frame needs to be independent
and identically distributed with the uniform distribution, and the frame must
be tight. In practice though, one or more of the ideal assumptions is typically
violated and none of the existing guarantees applies.
Motivated by this discrepancy, we propose two related changes in the existing
framework: (i) a generalized RIP called the restricted biorthogonality property
(RBOP); and (ii) correspondingly modified versions of existing greedy pursuit
algorithms, which we call oblique pursuits. Oblique pursuits are guaranteed
using the RBOP without requiring ideal assumptions; hence, the guarantees apply
to practical acquisition schemes. Numerical results show that oblique pursuits
also perform competitively with, or sometimes better than their conventional
counterparts.
|
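Oblique pursuits are modifications of conventional greedy pursuits; those are not reproduced here, but the conventional baseline they modify, Orthogonal Matching Pursuit, can be sketched on a synthetic Gaussian measurement matrix (all data illustrative, noiseless case):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the residual, then refit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s   # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # random Gaussian measurements
x_true = np.zeros(100)
x_true[[3, 27, 81]] = [1.5, -2.0, 1.0]         # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)                # noiseless recovery
```

With 40 Gaussian measurements of a 3-sparse length-100 signal, OMP recovers the support and coefficients exactly in the noiseless case.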
1207.2697
|
Genetic agent approach for improving on-the-fly web map generalization
|
cs.MA cs.CG cs.NE
|
The utilization of web mapping becomes increasingly important in the domain
of cartography. Users want access to spatial data on the web specific to their
needs. For this reason, different approaches were appeared for generating
on-the-fly the maps demanded by users, but those not suffice for guide a
flexible and efficient process. Thus, new approach must be developed for
improving this process according to the user needs. This work focuses on
defining a new strategy which improves on-the-fly map generalization process
and resolves the spatial conflicts. This approach uses the multiple
representation and cartographic generalization. The map generalization process
is based on the implementation of multi- agent system where each agent was
equipped with a genetic patrimony.
|
1207.2711
|
The Outage Probability of a Finite Ad Hoc Network in Nakagami Fading
|
cs.IT math.IT
|
An ad hoc network with a finite spatial extent and number of nodes or mobiles
is analyzed. The mobile locations may be drawn from any spatial distribution,
and interference-avoidance protocols or protection against physical collisions
among the mobiles may be modeled by placing an exclusion zone around each
radio. The channel model accounts for the path loss, Nakagami fading, and
shadowing of each received signal. The Nakagami m-parameter can vary among the
mobiles, taking any positive value for each of the interference signals and any
positive integer value for the desired signal. The analysis is governed by a
new exact expression for the outage probability, defined to be the probability
that the signal-to-interference-and-noise ratio (SINR) drops below a threshold,
and is conditioned on the network geometry and shadowing factors, which have
dynamics over much slower timescales than the fading. By averaging over many
network and shadowing realizations, the average outage probability and
transmission capacity are computed. Using the analysis, many aspects of the
network performance are illuminated. For example, one can determine the
influence of the choice of spreading factors, the effect of the receiver
location within the finite network region, and the impact of both the fading
parameters and the attenuation power laws.
|
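The paper derives an exact conditional outage expression, which is not reproduced here; as a sanity check on the role of the Nakagami m-parameter, a Monte-Carlo sketch with interference and shadowing omitted (a Nakagami-m power gain is Gamma-distributed with unit mean):

```python
import numpy as np

def outage_probability(m, snr_db, threshold_db, n_trials=200_000, seed=0):
    """Monte-Carlo outage P(instantaneous SNR < threshold) under Nakagami-m
    fading, modelled as a unit-mean Gamma(m, 1/m) power gain."""
    rng = np.random.default_rng(seed)
    gain = rng.gamma(m, 1.0 / m, n_trials)       # unit-mean fading power gain
    snr = 10 ** (snr_db / 10) * gain
    return float((snr < 10 ** (threshold_db / 10)).mean())

p_m1 = outage_probability(m=1, snr_db=10, threshold_db=0)   # Rayleigh case
p_m4 = outage_probability(m=4, snr_db=10, threshold_db=0)   # milder fading
```

For m=1 (Rayleigh) the outage is 1 - exp(-0.1) ≈ 0.095 at these settings; larger m means milder fading and a far smaller outage probability.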
1207.2714
|
Clustering based approach extracting collocations
|
cs.CL
|
The following study presents a collocation extraction approach based on a
clustering technique. This study uses a combination of several classical
measures which cover all aspects of a given corpus, and then suggests
separating the bigrams found in the corpus into several disjoint groups
according to the probability of the presence of collocations. This allows
excluding groups where the presence of collocations is very unlikely, and thus
reduces the search space in a meaningful way.
|
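A toy sketch of the idea using one classical association measure, pointwise mutual information (the paper combines several), on a made-up corpus: score the bigrams, then keep only the top-scoring group as the reduced search space:

```python
import math
from collections import Counter

def bigram_scores(tokens):
    """PMI (a classical association measure) for each adjacent word pair."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    return {pair: math.log(f * n / (uni[pair[0]] * uni[pair[1]]))
            for pair, f in bi.items()}

corpus = ("new york is a big city . new york has a big port . "
          "a city has a port .").split()
scores = bigram_scores(corpus)

# Split bigrams into disjoint groups by score and keep only the top group,
# shrinking the search space for collocation candidates.
cutoff = sorted(scores.values())[len(scores) // 2]   # median as a toy cutoff
candidates = {p for p, s in scores.items() if s >= cutoff}
```

In this tiny corpus the recurring pair "new york" scores above loose pairs such as "a big" and survives the cut.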
1207.2734
|
Information-bit error rate and false positives in an MDS code
|
cs.IT math.IT
|
In this paper, a refinement of the weight distribution in an MDS code is
computed. Concretely, the number of codewords with a fixed amount of nonzero
bits in both information and redundancy parts is obtained. This refinement
improves the theoretical approximation of the information-bit and -symbol error
rate, in terms of the channel bit-error rate, in a block transmission through a
discrete memoryless channel. Since a bounded distance reproducing encoder is
assumed, the computation of the here-called false positive (a decoding failure
with no information-symbol error) is provided. As a consequence, a new
performance analysis of an MDS code is proposed.
|
1207.2743
|
The evolutionary origins of modularity
|
q-bio.PE cs.NE q-bio.MN q-bio.NC
|
A central biological question is how natural organisms are so evolvable
(capable of quickly adapting to new environments). A key driver of evolvability
is the widespread modularity of biological networks--their organization as
functional, sparsely connected subunits--but there is no consensus regarding
why modularity itself evolved. While most hypotheses assume indirect selection
for evolvability, here we demonstrate that the ubiquitous, direct selection
pressure to reduce the cost of connections between network nodes causes the
emergence of modular networks. Experiments with selection pressures to maximize
network performance and minimize connection costs yield networks that are
significantly more modular and more evolvable than control experiments that
only select for performance. These results will catalyze research in numerous
disciplines, including neuroscience, genetics and harnessing evolution for
engineering purposes.
|
1207.2761
|
A GPS Pseudorange Based Cooperative Vehicular Distance Measurement
Technique
|
cs.AI cs.RO
|
Accurate vehicular localization is important for various cooperative vehicle
safety (CVS) applications such as collision avoidance, turning assistant, etc.
In this paper, we propose a cooperative vehicular distance measurement
technique based on the sharing of GPS pseudorange measurements and a weighted
least squares method. The classic double difference pseudorange solution, which
was originally designed for high-end survey level GPS systems, is adapted to
low-end navigation level GPS receivers for its wide availability in ground
vehicles. The Carrier-to-Noise Ratio (CNR) of the raw pseudorange measurements
is taken into account for noise mitigation. We present a Dedicated Short Range
Communications (DSRC) based mechanism to implement the exchange of pseudorange
information among neighboring vehicles. As demonstrated in field tests, our
proposed technique increases the accuracy of the distance measurement
significantly compared with the distance obtained from the GPS fixes.
|
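The paper's double-difference construction is not reproduced here; the sketch below shows only the CNR-weighted least-squares step on a toy 2-D baseline estimation problem, with every number (line-of-sight directions, CNR values, noise model) illustrative:

```python
import numpy as np

def wls(H, y, w):
    """Weighted least squares: x = (H^T W H)^{-1} H^T W y."""
    W = np.diag(w)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Toy setup: recover a 2-D inter-vehicle baseline from differenced
# pseudoranges along known unit line-of-sight directions, weighting each
# measurement by a made-up CNR value.
rng = np.random.default_rng(3)
baseline = np.array([12.0, -5.0])                  # metres, ground truth
los = rng.normal(size=(8, 2))
los /= np.linalg.norm(los, axis=1, keepdims=True)  # unit vectors to satellites
cnr = rng.uniform(30, 50, size=8)                  # dB-Hz (illustrative)
noise = rng.normal(0, 0.5, size=8) * (40.0 / cnr)  # higher CNR -> less noise
y = los @ baseline + noise
est = wls(los, y, w=cnr / 40.0)
```

Weighting by CNR down-weights the noisier pseudoranges, which is the noise-mitigation role CNR plays in the abstract.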
1207.2776
|
Receive Combining vs. Multi-Stream Multiplexing in Downlink Systems with
Multi-Antenna Users
|
cs.IT math.IT
|
In downlink multi-antenna systems with many users, the multiplexing gain is
strictly limited by the number of transmit antennas $N$ and the use of these
antennas. Assuming that the total number of receive antennas at the
multi-antenna users is much larger than $N$, the maximal multiplexing gain can
be achieved with many different transmission/reception strategies. For example,
the excess number of receive antennas can be utilized to schedule users with
effective channels that are near-orthogonal, for multi-stream multiplexing to
users with well-conditioned channels, and/or to enable interference-aware
receive combining. In this paper, we try to answer the question of whether the
$N$ data streams should be divided among a few users (many streams per user) or many users
(few streams per user, enabling receive combining). Analytic results are
derived to show how user selection, spatial correlation, heterogeneous user
conditions, and imperfect channel acquisition (quantization or estimation
errors) affect the performance when sending the maximal number of streams or
one stream per scheduled user---the two extremes in data stream allocation.
While contradicting observations on this topic have been reported in prior
works, we show that selecting many users and allocating one stream per user
(i.e., exploiting receive combining) is the best candidate under realistic
conditions. This is explained by the provably stronger resilience towards
spatial correlation and the larger benefit from multi-user diversity. This
fundamental result has positive implications for the design of downlink systems
as it reduces the hardware requirements at the user devices and simplifies the
throughput optimization.
|
1207.2788
|
Diffusion dynamics on multiplex networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We study the time scales associated to diffusion processes that take place on
multiplex networks, i.e. on a set of networks linked through interconnected
layers. To this end, we propose the construction of a supra-Laplacian matrix,
which consists of a dimensional lifting of the Laplacian matrix of each layer
of the multiplex network. We use perturbative analysis to reveal analytically
the structure of eigenvectors and eigenvalues of the complete network in terms
of the spectral properties of the individual layers. The spectrum of the
supra-Laplacian allows us to understand the physics of diffusion-like processes
on top of multiplex networks.
|
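The supra-Laplacian construction described above can be sketched directly: place each layer's Laplacian on the block diagonal and add interlayer diffusion between a node's replicas across layers. A minimal version for node-aligned layers (the example layers are arbitrary):

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(layers, dx=1.0):
    """Supra-Laplacian of a node-aligned multiplex: block-diagonal layer
    Laplacians plus interlayer diffusion (strength dx) between replicas."""
    M, n = len(layers), layers[0].shape[0]
    L = np.zeros((M * n, M * n))
    for a, A in enumerate(layers):
        L[a * n:(a + 1) * n, a * n:(a + 1) * n] = laplacian(A)
    K = laplacian(np.ones((M, M)) - np.eye(M))   # all-to-all layer coupling
    return L + dx * np.kron(K, np.eye(n))        # couples replicas of each node

# Two 3-node layers: a path and a triangle.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.ones((3, 3)) - np.eye(3)
L = supra_laplacian([A1, A2], dx=0.5)
```

Like any Laplacian, the result is symmetric with zero row sums, so its smallest eigenvalue is 0 and the second-smallest sets the diffusion time scale studied in the abstract.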
1207.2793
|
Cascade Source Coding with a Side Information "Vending Machine"
|
cs.IT math.IT
|
The model of a side information "vending machine" (VM) accounts for scenarios
in which the measurement of side information sequences can be controlled via
the selection of cost-constrained actions. In this paper, the three-node
cascade source coding problem is studied under the assumption that a side
information VM is available at the intermediate and/or the end node of the
cascade. A single-letter characterization of the achievable trade-off among the
transmission rates, the distortions in the reconstructions at the intermediate
and at the end node, and the cost for acquiring the side information is derived
for a number of relevant special cases. It is shown that a joint design of the
description of the source and of the control signals used to guide the
selection of the actions at downstream nodes is generally necessary for an
efficient use of the available communication links. In particular, for all the
considered models, layered coding strategies prove to be optimal, whereby the
base layer fulfills two network objectives: determining the actions of
downstream nodes and simultaneously providing a coarse description of the
source. Design of the optimal coding strategy is shown via examples to depend
on both the network topology and the action costs. Examples also illustrate the
involved performance trade-offs across the network.
|
1207.2802
|
Coupled dynamics of mobility and pattern formation in optional public
goods games
|
physics.soc-ph cs.SI
|
In a static environment, optional participation and a local agglomeration of
cooperators are found to be beneficial for the occurrence and maintenance of
cooperation. In the optional public goods game, the rock-scissors-paper cycles
of different strategies yield oscillatory cooperation but not stable
cooperation. In this paper, by incorporating population density and individual
mobility into the spatial optional public goods game, we study the
coevolutionary dynamics of strategy updating and benefit-seeking migration.
With low population density and slow movement, an optimal level of cooperation
is easily reached. An increase in population density and a speed-up of the
free-floating of competitive agents will suppress cooperation. A log-log
relation between the levels of cooperation and the free-floating probability is
found. Theoretical analysis indicates that the decrease of cooperator frequency
in the present model should result from the increased interactions between
different agents, which may originate from the increased cluster size or the
speed-up of random-movement.
|
1207.2807
|
Practical Power Allocation and Greedy Partner Selection for Cooperative
Networks
|
cs.SY cs.IT math.IT
|
In this paper, we present a novel algorithm for power allocation in the
Amplify-and-Forward cooperative communication that minimizes the outage
probability with a given value of total power. We present the problem with new
formulation and solve the optimal power allocation for a fixed set of partners.
The proposed solution provides a direct power allocation scheme with a simple
formula that can also be represented by a simple lookup table, which makes it
easy for practical implementation. We present simulation results to demonstrate
that the performances of the proposed algorithms are very close to results of
the previously published iterative optimal power allocation algorithms. We also
consider the issue of partner selection in a cooperative network.
|
1207.2812
|
Near-Optimal Algorithms for Differentially-Private Principal Components
|
stat.ML cs.CR cs.LG
|
Principal components analysis (PCA) is a standard tool for identifying good
low-dimensional approximations to data in high dimension. Many data sets of
interest contain private or sensitive information about individuals. Algorithms
which operate on such data should be sensitive to the privacy risks in
publishing their outputs. Differential privacy is a framework for developing
tradeoffs between privacy and the utility of these outputs. In this paper we
investigate the theory and empirical performance of differentially private
approximations to PCA and propose a new method which explicitly optimizes the
utility of the output. We show that the sample complexity of the proposed
method differs from the existing procedure in the scaling with the data
dimension, and that our method is nearly optimal in terms of this scaling. We
furthermore illustrate our results, showing that on real data there is a large
performance gap between the existing method and our method.
|
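The paper's utility-optimized method is not reproduced here; the sketch below is a simple input-perturbation baseline of the kind such work compares against: add symmetric Gaussian noise to the sample second-moment matrix, then take its leading eigenvector. The noise scale is an illustrative Gaussian-mechanism calibration under the assumption that data rows lie in the unit ball:

```python
import numpy as np

def private_top_component(X, epsilon, delta, rng):
    """Input-perturbation baseline: add symmetric Gaussian noise to the
    sample second-moment matrix, then return its leading eigenvector.
    Noise scale assumes rows lie in the unit ball (illustrative calibration)."""
    n, d = X.shape
    A = X.T @ X / n
    sigma = np.sqrt(2 * np.log(1.25 / delta)) / (n * epsilon)
    E = rng.normal(0, sigma, (d, d))
    A_noisy = A + (E + E.T) / 2                  # symmetrized noise matrix
    _, V = np.linalg.eigh(A_noisy)
    return V[:, -1]                              # leading eigenvector

rng = np.random.default_rng(0)
v = np.array([1.0, 0.0, 0.0, 0.0])               # dominant direction
X = rng.normal(0, 0.05, (2000, 4)) + rng.normal(0, 1, (2000, 1)) * v
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # clip rows
u = private_top_component(X, epsilon=1.0, delta=1e-5, rng=rng)
```

With enough samples the noise is small relative to the spectral gap and the private component still aligns with the true top direction (up to sign).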
1207.2825
|
Guard Zones and the Near-Far Problem in DS-CDMA Ad Hoc Networks
|
cs.IT cs.NI math.IT
|
The central issue in direct-sequence code-division multiple-access (DS-CDMA)
ad hoc networks is the prevention of a near-far problem. This paper considers
two types of guard zones that may be used to control the near-far problem: a
fundamental exclusion zone and an additional CSMA guard zone that may be
established by the carrier-sense multiple-access (CSMA) protocol. In the
exclusion zone, no mobiles are physically present, modeling the minimum
physical separation among mobiles that is always present in actual networks.
Potentially interfering mobiles beyond a transmitting mobile's exclusion zone,
but within its CSMA guard zone, are deactivated by the protocol. This paper
provides an analysis of DS-CSMA networks with either or both types of guard
zones. A network of finite extent with a finite number of mobiles is modeled as
a uniform clustering process. The analysis uses a closed-form expression for
the outage probability in the presence of Nakagami fading, conditioned on the
network geometry. By using the analysis developed in this paper, the tradeoffs
between exclusion zones and CSMA guard zones are explored for DS-CDMA and
unspread networks.
|
1207.2829
|
Sparse Recovery with Graph Constraints
|
cs.IT cs.NI math.IT
|
Sparse recovery can recover sparse signals from a set of underdetermined
linear measurements. Motivated by the need to monitor large-scale networks from
a limited number of measurements, this paper addresses the problem of
recovering sparse signals in the presence of network topological constraints.
Unlike conventional sparse recovery where a measurement can contain any subset
of the unknown variables, we use a graph to characterize the topological
constraints and allow an additive measurement over nodes (unknown variables)
only if they induce a connected subgraph. We provide explicit measurement
constructions for several special graphs, and the number of measurements by our
construction is less than that needed by existing random constructions.
Moreover, our construction for a line network is provably optimal in the sense
that it requires the minimum number of measurements. A measurement construction
algorithm for general graphs is also proposed and evaluated. For any given
graph $G$ with $n$ nodes, we derive bounds of the minimum number of
measurements needed to recover any $k$-sparse vector over $G$ ($M^G_{k,n}$).
Using the Erd\H{o}s-R\'enyi random graph as an example, we characterize the
dependence of $M^G_{k,n}$ on the graph structure.
|
1207.2837
|
Search Algorithms for Conceptual Graph Databases
|
cs.DS cs.DB cs.DM math.CO
|
We consider a database composed of a set of conceptual graphs. Using
conceptual graphs and graph homomorphism it is possible to build a basic
query-answering mechanism based on semantic search. Graph homomorphism defines
a partial order over conceptual graphs. Since graph homomorphism checking is an
NP-Complete problem, the main requirement for database organizing and managing
algorithms is to reduce the number of homomorphism checks. Searching is a basic
operation for database manipulating problems. We consider the problem of
searching for an element in a partially ordered set. The goal is to minimize
the number of queries required to find a target element in the worst case.
First we analyse conceptual graph database operations. Then we propose a new
algorithm for a subclass of lattices. Finally, we suggest a parallel search
algorithm for a general poset.
Keywords: Conceptual Graph, Graph Homomorphism, Partial Order, Lattice, Search, Database.
|
1207.2853
|
Compressed sensing with sparse, structured matrices
|
cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT
|
In the context of the compressed sensing problem, we propose a new ensemble
of sparse random matrices which allow one (i) to acquire and compress a
$\rho_0$-sparse signal of length $N$ in a time linear in $N$ and (ii) to perfectly
recover the original signal, compressed at a rate $\alpha$, by using a message
passing algorithm (Expectation Maximization Belief Propagation) that runs in a
time linear in $N$. In the large $N$ limit, the scheme proposed here closely
approaches the theoretical bound $\rho_0 = \alpha$, and so it is both optimal
and efficient (linear time complexity). More generally, we show that several
ensembles of dense random matrices can be converted into ensembles of sparse
random matrices, having the same thresholds, but much lower computational
complexity.
|
1207.2900
|
Privacy Preserving MFI Based Similarity Measure For Hierarchical
Document Clustering
|
cs.DB cs.IR
|
The rapid growth of the World Wide Web has imposed great challenges for
researchers in improving search efficiency over the internet. Nowadays, web
document clustering has become an important research topic for providing the
most relevant documents from the huge volumes of results returned in response
to a simple query. In this paper, we first propose a novel approach to
precisely define clusters based on maximal frequent item sets (MFI) found by
the Apriori algorithm, and then utilize the same MFI-based similarity measure
for hierarchical document clustering. By considering maximal frequent item
sets, the dimensionality of the document set is decreased. Secondly, we provide
privacy preservation of open web documents by avoiding duplicate documents,
thereby protecting the individual copyrights of documents. This can be achieved
using an equivalence relation.
|
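The maximal frequent item sets the abstract builds on can be sketched with a tiny level-wise Apriori pass followed by a maximality filter; the "documents" below are made-up term sets:

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, minsup):
    """Apriori level-wise search, then keep only the maximal frequent itemsets."""
    items = sorted({i for t in transactions for i in t})
    freq = []
    level = [frozenset([i]) for i in items]
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        current = [c for c, n in counts.items() if n >= minsup]
        freq += current
        # join step: merge frequent k-sets that differ in exactly one item
        level = list({a | b for a, b in combinations(current, 2)
                      if len(a | b) == len(a) + 1})
    # maximal = frequent sets not properly contained in another frequent set
    return [f for f in freq if not any(f < g for g in freq)]

# Toy "documents" represented as term sets.
docs = [frozenset(d.split()) for d in
        ["web document cluster", "web document search",
         "web cluster", "privacy search"]]
mfi = maximal_frequent_itemsets(docs, minsup=2)
```

Here {web, document} and {web, cluster} are maximal, while the frequent singleton {web} is absorbed into them; only {search} survives on its own.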
1207.2922
|
ROI Segmentation for Feature Extraction from Human Facial Images
|
cs.CV cs.HC
|
Human Computer Interaction (HCI) is one of the biggest goals of computer vision
researchers. Features from different facial images can provide very deep
knowledge about the activities performed by the different facial movements. In
this paper we present a technique for feature extraction from various regions
of interest with the help of a skin-color segmentation technique,
thresholding, and a knowledge-based technique for face recognition.
|
1207.2940
|
Expectation Propagation in Gaussian Process Dynamical Systems: Extended
Version
|
stat.ML cs.LG cs.SY
|
Rich and complex time-series data, such as those generated from engineering
systems, financial markets, videos or neural recordings, are now a common
feature of modern data analysis. Explaining the phenomena underlying these
diverse data sets requires flexible and accurate models. In this paper, we
promote Gaussian process dynamical systems (GPDS) as a rich model class that is
appropriate for such analysis. In particular, we present a message passing
algorithm for approximate inference in GPDSs based on expectation propagation.
By posing inference as a general message passing problem, we iterate
forward-backward smoothing. Thus, we obtain more accurate posterior
distributions over latent structures, resulting in improved predictive
performance compared to state-of-the-art GPDS smoothers, which are special
cases of our general message passing algorithm. Hence, we provide a unifying
approach within which to contextualize message passing in GPDSs.
|
1207.3012
|
Optimal rates for first-order stochastic convex optimization under
Tsybakov noise condition
|
cs.LG stat.ML
|
We focus on the problem of minimizing a convex function $f$ over a convex set
$S$ given $T$ queries to a stochastic first order oracle. We argue that the
complexity of convex minimization is only determined by the rate of growth of
the function around its minimizer $x^*_{f,S}$, as quantified by a Tsybakov-like
noise condition. Specifically, we prove that if $f$ grows at least as fast as
$\|x-x^*_{f,S}\|^\kappa$ around its minimum, for some $\kappa > 1$, then the
optimal rate of learning $f(x^*_{f,S})$ is
$\Theta(T^{-\frac{\kappa}{2\kappa-2}})$. The classic rate $\Theta(1/\sqrt T)$
for convex functions and $\Theta(1/T)$ for strongly convex functions are
special cases of our result for $\kappa \rightarrow \infty$ and $\kappa=2$, and
even faster rates are attained for $\kappa <2$. We also derive tight bounds for
the complexity of learning $x_{f,S}^*$, where the optimal rate is
$\Theta(T^{-\frac{1}{2\kappa-2}})$. Interestingly, these precise rates for
convex optimization also characterize the complexity of active learning and our
results further strengthen the connections between the two fields, both of
which rely on feedback-driven queries.
|
1207.3018
|
Fundamental Limits of Communications in Interference Networks-Part I:
Basic Structures
|
cs.IT math.IT
|
In this series of multi-part papers, a systematic study of the fundamental
limits of communications in interference networks is established. Here, an
interference network refers to a general single-hop communication scenario
with an arbitrary number of transmitters and receivers, and an arbitrary
distribution of messages among them. It is shown that the information flow in
such networks admits similar derivations in many respects. This systematic
study is launched by considering the basic
building blocks in Part I. The Multiple Access Channel (MAC), the Broadcast
Channel (BC), the Classical Interference Channel (CIC) and the Cognitive Radio
Channel (CRC) are proposed as the main building blocks for all interference
networks. First, a brief review of existing results regarding these basic
structures is presented. New observations are also presented in this regard.
Specifically, it is shown that the well-known strong interference conditions
for the two-user CIC do not change if the inputs are dependent. Next, new
capacity outer bounds are established for the basic structures with two
receivers. These outer bounds are all derived based on a unified framework. By
using the derived outer bounds, some new capacity results are proved for the
CIC and the CRC; a mixed interference regime is identified for the two-user
discrete CIC where the sum-rate capacity is established. Also, a noisy
interference regime is derived for the one-sided discrete CIC. For the CRC, a
full characterization of the capacity region for a class of more-capable
channels is obtained. Moreover, it is shown that the derived outer bounds are
useful to study the channels with one-sided receiver side information wherein
one of the receivers has access to the non-intended message; capacity bounds
are also discussed in details for such scenarios.
|
1207.3027
|
Fundamental Limits of Communications in Interference Networks-Part II:
Information Flow in Degraded Networks
|
cs.IT math.IT
|
In this second part of our multi-part papers, the information flow in
degraded interference networks is studied. A full characterization of the
sum-rate capacity for the degraded networks with any possible configuration is
established. It is shown that a successive decoding scheme is sum-rate optimal
for these networks. Also, it is proved that the transmission of only a certain
subset of messages is sufficient to achieve the sum-rate capacity in such
networks. Algorithms are presented to determine this subset of messages
explicitly. According to these algorithms, the optimal strategy to achieve the
sum-rate capacity in degraded networks is that the transmitters try to send
information for the stronger receivers and, if possible, avoid sending the
messages with respect to the weaker receivers. The algorithms are easily
understood using our graphical illustrations for the achievability schemes
based on directed graphs. The sum-rate expression for the degraded networks is
then used to derive a unified outer bound on the sum-rate capacity of arbitrary
non-degraded networks. Several variations of the degraded networks are
identified for which the derived outer bound is sum-rate optimal. Specifically,
noisy interference regimes are derived for certain classes of
multi-user/multi-message interference networks. Also, for the first time,
network scenarios are identified where the incorporation of both successive
decoding and treating interference as noise achieves their sum-rate capacity.
Finally, by taking insight from our results for degraded networks, we establish
a unified outer bound on the entire capacity region of the general interference
networks. These outer bounds for a broad range of network scenarios are tighter
than the existing cut-set bound.
|
1207.3031
|
Distributed Strongly Convex Optimization
|
cs.DC cs.LG stat.ML
|
A lot of effort has been invested into characterizing the convergence rates
of gradient based algorithms for non-linear convex optimization. Recently,
motivated by large datasets and problems in machine learning, the interest has
shifted towards distributed optimization. In this work we present a distributed
algorithm for strongly convex constrained optimization. Each node in a network
of n computers converges to the optimum of a strongly convex, L-Lipschitz
continuous, separable objective at a rate O(log (sqrt(n) T) / T) where T is the
number of iterations. This rate is achieved in the online setting where the
data is revealed one at a time to the nodes, and in the batch setting where
each node has access to its full local dataset from the start. The same
convergence rate is achieved in expectation when the subgradients used at each
node are corrupted with additive zero-mean noise.
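The algorithm itself is not reproduced in the abstract. The toy sketch below mimics the setting with hypothetical choices: each node holds a local quadratic loss (strongly convex with modulus mu = 2), takes a subgradient step with a 1/(mu t) step size, and one full averaging round stands in for consensus over a connected network:

```python
def distributed_subgradient(a, rounds=200):
    """Toy distributed minimization of sum_i (x - a_i)^2 over n nodes.

    Each node i holds a_i, takes a local subgradient step with step size
    1/(mu*t), then all node states are averaged (a stand-in for a
    gossip/consensus step). The minimizer is mean(a).
    """
    n = len(a)
    x = [0.0] * n
    mu = 2.0  # strong-convexity modulus of (x - a_i)^2
    for t in range(1, rounds + 1):
        step = 1.0 / (mu * t)
        # Local subgradient step on each node's own loss.
        x = [xi - step * 2.0 * (xi - ai) for xi, ai in zip(x, a)]
        # Consensus: here, exact averaging for simplicity.
        avg = sum(x) / n
        x = [avg] * n
    return x[0]
```

On a real network the exact average would be replaced by a few gossip rounds, which is where the log(sqrt(n) T) factor in the stated rate comes from.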
|
1207.3035
|
Fundamental Limits of Communications in Interference Networks-Part III:
Information Flow in Strong Interference Regime
|
cs.IT math.IT
|
This third part of the paper is related to the study of information flow in
networks with strong interference. First, the two-receiver networks are
considered. A unified outer bound for the capacity region of these networks is
established. It is shown that this outer bound can be systematically translated
into simple capacity outer bounds for special cases such as the two-user
Classical Interference Channel (CIC) and the Broadcast Channel with Cognitive
Relays (BCCR) with common information. For these channels, special cases are
presented where our outer bounds are tight, which yield the exact capacity.
More importantly, by using the derived outer bounds, a strong interference
regime is identified for the general two-receiver interference networks with
any arbitrary topology. This strong interference regime, which is represented
by only two conditions, includes all previously known results for simple
topologies such as the two-user CIC, the cognitive radio channel, and many
others. Then, networks with an arbitrary number of receivers are considered.
Finding non-trivial strong interference regime for such networks, specifically
for the CICs with more than two users, has been one of the open problems in
network information theory. In this paper, we will give a solution to this
problem. Specifically, a new approach is developed based on which one can
obtain strong interference regimes not only for the multi-user CICs but also
for any interference network of arbitrarily large size. For this development,
some new technical lemmas are proved which have a central role in the
derivations. As a result, this paper establishes the first non-trivial capacity
result for the multi-user classical interference channel. A general formula is
also presented to derive strong interference conditions for any given network
topology.
|
1207.3040
|
Fundamental Limits of Communications in Interference Networks-Part IV:
Networks with a Sequence of Less-Noisy Receivers
|
cs.IT math.IT
|
In this fourth part of our multi-part papers, classes of interference
networks with a sequence of less-noisy receivers are identified for which a
successive decoding scheme achieves the sum-rate capacity. First, the
two-receiver networks are analyzed: it is demonstrated that the unified outer
bounds derived in Part III of our multi-part papers are sum-rate optimal for
network scenarios which satisfy certain less-noisy conditions. Then, the
multi-receiver networks are considered. These networks are far less understood.
One of the main difficulties in the analysis of such scenarios is how to
establish useful capacity outer bounds. In this paper, a novel technique
requiring a sequential application of the Csiszar-Korner identity is developed
to establish powerful single-letter outer bounds on the sum-rate capacity of
multi-receiver interference networks which satisfy certain less-noisy
conditions. By using these outer bounds, a full characterization of the
sum-rate capacity is derived for general interference networks of arbitrarily
large size with a sequence of less-noisy receivers. Some generalizations of
these outer bounds are also presented, each of which yields the exact sum-rate
capacity in various scenarios.
|
1207.3045
|
The K-User Interference Channel: Strong Interference Regime
|
cs.IT math.IT
|
This paper gives a solution to one of the long-standing open problems in
network information theory: "What is the generalization of the strong
interference regime to the K-user interference channel?"
|
1207.3050
|
How Much Rate Splitting Is Required for a Random Coding Scheme? A new
Achievable Rate Region for the Broadcast Channel with Cognitive Relays
|
cs.IT math.IT
|
In this paper, it is shown that for any given single-hop communication
network with two receivers, splitting messages into more than two sub-messages
in a random coding scheme is redundant. To this end, the Broadcast Channel with
Cognitive Relays (BCCR) is considered. A novel achievability scheme is designed
for this network. Our achievability design is derived by a systematic
combination of the best known achievability schemes for the basic building
blocks included in the network: the Han-Kobayashi scheme for the two-user
interference channel and the Marton coding scheme for the broadcast channel.
Meanwhile, in our scheme each private message is split into only two
sub-messages, exactly as in the Han-Kobayashi scheme.
It is shown that the resultant achievable rate region includes previous results
as well. More importantly, the procedure of the achievability design is
described by graphical illustrations based on directed graphs. Then, it is
argued that by extending the proposed scheme on the MACCM plan of messages, one
can derive similar achievability schemes for any other single-hop communication
network.
|
1207.3056
|
Non-Local Euclidean Medians
|
cs.CV cs.DS
|
In this letter, we note that the denoising performance of Non-Local Means
(NLM) at large noise levels can be improved by replacing the mean by the
Euclidean median. We call this new denoising algorithm the Non-Local Euclidean
Medians (NLEM). At the heart of NLEM is the observation that the median is more
robust to outliers than the mean. In particular, we provide a simple geometric
insight that explains why NLEM performs better than NLM in the vicinity of
edges, particularly at large noise levels. NLEM can be efficiently implemented
using iteratively reweighted least squares, and its computational complexity is
comparable to that of NLM. We provide some preliminary results to study the
proposed algorithm and to compare it with NLM.
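The Euclidean (geometric) median has no closed form, but the iteratively reweighted least squares route mentioned above is only a few lines (Weiszfeld-style iterations). This is a generic sketch with an epsilon guard for near-coincident points, not the authors' exact implementation:

```python
def euclidean_median(points, iters=100, eps=1e-9):
    """Euclidean (geometric) median via iteratively reweighted least
    squares (Weiszfeld iterations)."""
    dim = len(points[0])
    # Start at the ordinary mean.
    m = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    for _ in range(iters):
        w, num = 0.0, [0.0] * dim
        for p in points:
            d = sum((p[k] - m[k]) ** 2 for k in range(dim)) ** 0.5
            wi = 1.0 / max(d, eps)   # small distances get large weight
            w += wi
            for k in range(dim):
                num[k] += wi * p[k]
        m = [num[k] / w for k in range(dim)]
    return m
```

With three points at the origin and one outlier at (10, 0), the mean lands at x = 2.5 while the median stays at the origin, which is exactly the robustness to outliers that NLEM exploits near edges.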
|
1207.3071
|
Supervised Texture Classification Using a Novel Compression-Based
Similarity Measure
|
cs.CV cs.LG
|
Supervised pixel-based texture classification is usually performed in the
feature space. We propose to perform this task in (dis)similarity space by
introducing a new compression-based (dis)similarity measure. The proposed
measure utilizes a two-dimensional MPEG-1 encoder, which takes into consideration
the spatial locality and connectivity of pixels in the images. The proposed
formulation has been carefully designed based on MPEG encoder functionality. To
this end, by design, it solely uses P-frame coding to find the (dis)similarity
among patches/images. We show that the proposed measure works properly on both
small and large patch sizes. Experimental results show that the proposed
approach significantly improves the performance of supervised pixel-based
texture classification on Brodatz and outdoor images compared to other
compression-based dissimilarity measures as well as approaches performed in
feature space. It also improves the computation speed by about 40% compared to
its rivals.
|
1207.3091
|
Hidden stochastic, quantum and dynamic information of Markov diffusion
process and its evaluation by an entropy integral measure under the impulse
controls actions, applied to information observer
|
nlin.AO cs.IT math.IT
|
Hidden information emerges under impulse interactions with a Markov diffusion
process that models an interactive random environment. An impulse yes-no
action cuts the Markov correlations, revealing a bit of hidden information
that connects the correlated states. Information thus appears as a phenomenon
of interaction: the cutting of correlations carrying entropy. Each interaction
is modeled by a Kronecker impulse, while a delta impulse models the
interaction between Kronecker impulses. Each step-down impulse action cuts the
maximum of the impulse's minimal entropy, and each step-up action transfers
the cut minimal entropy to the merging delta function. The delta step-down
action kills the delivered entropy, producing equivalent minimax information,
and the merging action initiates a quantum microprocess. Multiple entropy cuts
are converted into an information micro-macroprocess. The cut impulse entropy
integrates an entropy functional (EF) along the trajectories of the
multidimensional diffusion process, while the information delivered by the
ending states of each impulse integrates an information path functional (IPF)
along the process trajectories. The hidden information is evaluated by a
Feller kernel whose minimal path transforms the Markov transition probability
into the probability of a Brownian diffusion. Each such transformation
virtually observes the origin of the hidden information in the probabilities
of the correlated states. The IPF integrates the observed bits along a minimal
path, assembling an information Observer. The minimax imposes a variation
principle on the EF and IPF whose extremal equations describe the observed
micro- and macroprocesses and hence irreversible thermodynamics. Hidden
information carries free information frozen from the correlated connections,
and this free information binds the observed micro- and macroprocesses into an
information macrodynamics. Each triple of free-information dynamics composes a
triplet structure, three structural triplets assemble an information network,
and the free information of triple networks cooperates to build the
information Observer.
|
1207.3094
|
Vanishingly Sparse Matrices and Expander Graphs, With Application to
Compressed Sensing
|
cs.IT math.IT math.NA math.PR
|
We revisit the probabilistic construction of sparse random matrices where
each column has a fixed number of nonzeros whose row indices are drawn
uniformly at random with replacement. These matrices have a one-to-one
correspondence with the adjacency matrices of fixed left degree expander
graphs. We present formulae for the expected cardinality of the set of
neighbors for these graphs, and present tail bounds on the probability that
this cardinality will be less than the expected value. Deducible from these
bounds are similar bounds for the expansion of the graph which is of interest
in many applications. These bounds are derived through a more detailed analysis
of collisions in unions of sets. Key to this analysis is a novel {\em dyadic
splitting} technique. The analysis led to the derivation of better order
constants that allow for quantitative theorems on existence of lossless
expander graphs and hence the sparse random matrices we consider and also
quantitative compressed sensing sampling theorems when using sparse non
mean-zero measurement matrices.
|
1207.3100
|
Set-valued dynamic treatment regimes for competing outcomes
|
stat.ME cs.AI
|
Dynamic treatment regimes operationalize the clinical decision process as a
sequence of functions, one for each clinical decision, where each function
takes as input up-to-date patient information and gives as output a single
recommended treatment. Current methods for estimating optimal dynamic treatment
regimes, for example Q-learning, require the specification of a single outcome
by which the `goodness' of competing dynamic treatment regimes is measured.
However, this is an over-simplification of the goal of clinical decision
making, which aims to balance several potentially competing outcomes. For
example, often a balance must be struck between treatment effectiveness and
side-effect burden. We propose a method for constructing dynamic treatment
regimes that accommodates competing outcomes by recommending sets of treatments
at each decision point. Formally, we construct a sequence of set-valued
functions that take as input up-to-date patient information and give as output
a recommended subset of the possible treatments. For a given patient history,
the recommended set of treatments contains all treatments that are not inferior
according to any of the competing outcomes. When there is more than one
decision point, constructing these set-valued functions requires solving a
non-trivial enumeration problem. We offer an exact enumeration algorithm by
recasting the problem as a linear mixed integer program. The proposed methods
are illustrated using data from a depression study and the CATIE schizophrenia
study.
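At a single decision point, the recommended set of not-inferior treatments reduces to a Pareto filter. The sketch below shows only that filter; the multi-stage enumeration via mixed integer programming is not attempted, and the dictionary layout and higher-is-better convention are our assumptions:

```python
def recommended_set(outcomes):
    """Treatments that are not inferior on every competing outcome.

    outcomes: dict mapping treatment -> tuple of outcome values, encoded so
    higher is better (e.g. (effectiveness, -side_effect_burden)).
    A treatment is excluded only if some other treatment is at least as
    good on all outcomes and strictly better on at least one.
    """
    def dominates(u, v):
        return (all(a >= b for a, b in zip(u, v))
                and any(a > b for a, b in zip(u, v)))
    return {t for t, v in outcomes.items()
            if not any(dominates(w, v)
                       for s, w in outcomes.items() if s != t)}
```

Here treatment C is dominated on both outcomes, while A and B trade off against each other and are both retained.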
|
1207.3107
|
Expectation-Maximization Gaussian-Mixture Approximate Message Passing
|
cs.IT math.IT
|
When recovering a sparse signal from noisy compressive linear measurements,
the distribution of the signal's non-zero coefficients can have a profound
effect on recovery mean-squared error (MSE). If this distribution were a priori
known, then one could use computationally efficient approximate message passing
(AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though,
the distribution is unknown, motivating the use of robust algorithms like
LASSO---which is nearly minimax optimal---at the cost of significantly larger
MSE for non-least-favorable distributions. As an alternative, we propose an
empirical-Bayesian technique that simultaneously learns the signal distribution
while MMSE-recovering the signal---according to the learned
distribution---using AMP. In particular, we model the non-zero distribution as
a Gaussian mixture, and learn its parameters through expectation maximization,
using AMP to implement the expectation step. Numerical experiments on a wide
range of signal classes confirm the state-of-the-art performance of our
approach, in both reconstruction error and runtime, in the high-dimensional
regime, for most (but not all) sensing operators.
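The full algorithm couples EM with AMP; as an illustration of only the expectation-maximization half, the sketch below fits a two-component 1-D Gaussian mixture by plain EM. The initialization and variance floor are our own choices, and the AMP measurement model is omitted entirely:

```python
import math

def em_gaussian_mixture(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture: the 'learn the
    signal prior' ingredient of EM-GM-AMP, without the AMP part."""
    mu = [min(x), max(x)]            # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-6)
    return mu, var, pi
```

In EM-GM-AMP the expectation step is implemented by AMP itself rather than by the exact responsibilities used here.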
|
1207.3110
|
Real-Time Peer-to-Peer Streaming Over Multiple Random Hamiltonian Cycles
|
cs.NI cs.MM cs.SI
|
We are motivated by the problem of designing a simple distributed algorithm
for Peer-to-Peer streaming applications that can achieve high throughput and
low delay, while allowing the neighbor set maintained by each peer to be small.
While previous works have mostly used tree structures, our algorithm constructs
multiple random directed Hamiltonian cycles and disseminates content over the
superposed graph of the cycles. We show that it is possible to achieve the
maximum streaming capacity even when each peer only transmits to and receives
from Theta(1) neighbors. Further, we show that the proposed algorithm achieves
the streaming delay of Theta(log N) when the streaming rate is less than
(1-1/K) of the maximum capacity for any fixed integer K>1, where N denotes the
number of peers in the network. The key theoretical contribution is to
characterize the distance between peers in a graph formed by the superposition
of directed random Hamiltonian cycles, in which edges from one of the cycles
may be dropped at random. We use Doob martingales and graph expansion ideas to
characterize this distance as a function of N, with high probability.
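The construction above, where each of K random permutations contributes one directed Hamiltonian cycle, can be sketched directly. The peer count, seed, and BFS hop-count as a delay proxy are illustrative assumptions:

```python
import random
from collections import deque

def cycle_superposition(n, k, seed=0):
    """Out-neighbor lists from superposing k random directed Hamiltonian
    cycles on peers 0..n-1; every peer gets exactly k out-neighbors."""
    rng = random.Random(seed)
    out = {v: [] for v in range(n)}
    for _ in range(k):
        order = list(range(n))
        rng.shuffle(order)   # a random permutation defines one cycle
        for i, v in enumerate(order):
            out[v].append(order[(i + 1) % n])
    return out

def dissemination_delay(out, src=0):
    """BFS hop count for content from src to reach the farthest peer."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in out[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist.values())
```

Every peer is reachable because each cycle alone visits all peers; the paper's Theta(log N) delay claim concerns the typical BFS depth of such superposed graphs.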
|
1207.3127
|
Tracking Tetrahymena Pyriformis Cells using Decision Trees
|
cs.CV cs.LG eess.IV q-bio.CB stat.ML
|
Matching cells over time has long been the most difficult step in cell
tracking. In this paper, we approach this problem by recasting it as a
classification problem. We construct a feature set for each cell, and compute a
feature difference vector between a cell in the current frame and a cell in a
previous frame. Then we determine whether the two cells represent the same cell
over time by training decision trees as our binary classifiers. With the output
of decision trees, we are able to formulate an assignment problem for our cell
association task and solve it using a modified version of the Hungarian
algorithm.
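The final association step can be illustrated with a cost matrix built from classifier outputs, one cost per (previous cell, current cell) pair. Here a brute-force search stands in for the modified Hungarian algorithm; it is fine at toy scale but O(n!) in general:

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one matching between previous-frame cells
    (rows) and current-frame cells (columns). Brute force stands in for
    the Hungarian algorithm at this toy scale."""
    n = len(cost)
    best, best_perm = float('inf'), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best_perm, best
```

In the paper's pipeline the costs would come from the decision-tree scores on feature-difference vectors rather than being given directly.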
|
1207.3132
|
On the Automorphism Groups and Equivalence of Cyclic Combinatorial
Objects
|
cs.IT math.IT
|
We determine the permutation groups that arise as the automorphism groups of
cyclic combinatorial objects. As special cases we classify the automorphism
groups of cyclic codes. We also give the permutations by which two cyclic
combinatorial objects on $p^m$ elements are equivalent.
|
1207.3133
|
New Symmetric and Asymmetric Quantum Codes
|
cs.IT math.IT
|
New infinite families of quantum symmetric and asymmetric codes are
constructed. Several of these are MDS. The codes obtained are shown to have
parameters which are better than previously known. A number of known codes are
special cases of the codes given here.
|
1207.3136
|
Derivation of the Maximum a Posteriori Estimate for Discrete Time
Descriptor Systems
|
cs.SY math.DS math.ST stat.TH
|
In this report a derivation of the MAP state estimator objective function for
general (possibly non-square) discrete time causal/non-causal descriptor
systems is presented. The derivation made use of the Kronecker Canonical
Transformation to extract the prior distribution on the descriptor state vector
so that Maximum a Posteriori (MAP) point estimation can be used. The analysis
indicates that the MAP estimate for index 1 causal descriptor systems does not
require any model transformations and can be found recursively. Furthermore, if
the descriptor system is of index 2 or higher and the noise free system is
causal, then the MAP estimate can also be found recursively without model
transformations provided that model causality is accounted for in designing the
stochastic model.
|
1207.3142
|
Color Constancy based on Image Similarity via Bilayer Sparse Coding
|
cs.CV
|
Computational color constancy is a very important topic in computer vision
and has attracted many researchers' attention. Recently, lots of research has
shown the effects of high level visual content information for illumination
estimation. However, all of these existing methods are essentially
combinational strategies in which image's content analysis is only used to
guide the combination or selection from a variety of individual illumination
estimation methods. In this paper, we propose a novel bilayer sparse coding
model for illumination estimation that considers image similarity in terms of
both low level color distribution and high level image scene content
simultaneously. For this purpose, the image's scene content information is
integrated with its color distribution to obtain optimal illumination
estimation model. The experimental results on two real-world image sets show
that our algorithm is superior to other prevailing illumination estimation
methods, even better than combinational methods.
|
1207.3146
|
Achievable rate region for three user discrete broadcast channel based
on coset codes
|
cs.IT math.IT
|
We present an achievable rate region for the general three user discrete
memoryless broadcast channel, based on nested coset codes. We characterize
3-to-1 discrete broadcast channels, a class of broadcast channels for which the
best known coding technique\footnote{We henceforth refer to this as Marton's
coding for three user discrete broadcast channel.}, which is obtained by a
natural generalization of that proposed by Marton for the general two user
discrete broadcast channel, is strictly sub-optimal. In particular, we identify
a novel 3-to-1 discrete broadcast channel for which Marton's coding is
\textit{analytically} proved to be strictly suboptimal. We present achievable
rate regions for the general 3-to-1 discrete broadcast channels, based on
nested coset codes, that strictly enlarge Marton's rate region for the
aforementioned channel. We generalize this to present achievable rate region
for the general three user discrete broadcast channel. Combining together
Marton's coding and that proposed herein, we propose the best known coding
technique, for a general three user discrete broadcast channel.
|
1207.3169
|
The law of brevity in macaque vocal communication is not an artifact of
analyzing mean call durations
|
q-bio.NC cs.CL physics.data-an
|
Words follow the law of brevity, i.e. more frequent words tend to be shorter.
From a statistical point of view, this qualitative definition of the law states
that word length and word frequency are negatively correlated. Here the recent
finding of patterning consistent with the law of brevity in Formosan macaque
vocal communication (Semple et al., 2010) is revisited. It is shown that the
negative correlation between mean duration and frequency of use in the
vocalizations of Formosan macaques is not an artifact of the use of a mean
duration for each call type instead of the customary 'word' length of studies
of the law in human language. The key point demonstrated is that the total
duration of calls of a particular type increases with the number of calls of
that type. The finding of the law of brevity in the vocalizations of these
macaques therefore defies a trivial explanation.
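The statistical core of the brevity claim, a negative correlation between frequency of use and (mean) call duration, reduces to a correlation coefficient. A minimal Pearson sketch on made-up call data (the numbers below are illustrative, not the macaque data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient; under the law of brevity,
    frequency of use vs duration should come out negative."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```

The paper's point is that this negative correlation survives even though each call type contributes a mean duration rather than the per-token lengths used in studies of human language.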
|
1207.3178
|
Distributed MPC Via Dual Decomposition and Alternating Direction Method
of Multipliers
|
math.OC cs.SY
|
A conventional way to handle model predictive control (MPC) problems
distributedly is to solve them via dual decomposition and gradient ascent.
However, at each time-step, it might not be feasible to wait for the dual
algorithm to converge. As a result, the algorithm might need to be terminated
prematurely. One is then interested in whether the solution at the
point of termination is close to the optimal solution, and in when one should
terminate the algorithm if a certain distance to optimality is to be
guaranteed. In this chapter, we look at this problem for distributed systems
under general dynamical and performance couplings, and then comment on the
validity of similar results when the problem is solved using the alternating
direction method of multipliers.
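The ADMM mechanics referred to above can be shown on the simplest possible consensus problem: each agent holds a separable quadratic cost and a local copy of the shared variable, with the MPC couplings stripped away. All names and the choice of cost are illustrative assumptions:

```python
def admm_consensus(a, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i (x - a_i)^2.

    Each agent i keeps a local copy x_i; z is the consensus variable and
    y_i the multipliers. The minimizer is mean(a)."""
    n = len(a)
    x = [0.0] * n
    y = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # Local step (closed form for the quadratic cost):
        # argmin_x (x - a_i)^2 + y_i*x + (rho/2)*(x - z)^2
        x = [(2 * ai - yi + rho * z) / (2 + rho)
             for ai, yi in zip(a, y)]
        # Consensus step: average of local copies plus scaled duals.
        z = sum(xi + yi / rho for xi, yi in zip(x, y)) / n
        # Dual update.
        y = [yi + rho * (xi - z) for xi, yi in zip(x, y)]
    return z
```

Early termination, the concern raised in the abstract, corresponds here to stopping the loop after a fixed small number of iterations and asking how far z is from mean(a).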
|
1207.3205
|
A network with tunable clustering, degree correlation and degree
distribution, and an epidemic thereon
|
math.PR cs.SI physics.soc-ph q-bio.PE
|
A random network model which allows for tunable, quite general forms of
clustering, degree correlation and degree distribution is defined. The model is
an extension of the configuration model, in which stubs (half-edges) are paired
to form a network. Clustering is obtained by forming small completely connected
subgroups, and positive (negative) degree correlation is obtained by connecting
a fraction of the stubs with stubs of similar (dissimilar) degree. An SIR
(Susceptible -> Infective -> Recovered) epidemic model is defined on this
network. Asymptotic properties of both the network and the epidemic, as the
population size tends to infinity, are derived: the degree distribution, degree
correlation and clustering coefficient, as well as a reproduction number $R_*$,
the probability of a major outbreak and the relative size of such an outbreak.
The theory is illustrated by Monte Carlo simulations and numerical examples.
The main findings are that clustering tends to decrease the spread of disease;
that the effect of degree correlation is appreciably greater when the disease
is close to threshold than when it is well above threshold; and that disease
spread broadly increases with the degree correlation $\rho$ when $R_*$ is just
above its threshold value of one, and decreases with $\rho$ when $R_*$ is well
above one.
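The stub-pairing step of the configuration model that this model extends can be sketched directly, without the clustering and degree-correlation extensions. As usual for this construction, self-loops and multi-edges may occur:

```python
import random

def configuration_model(degrees, seed=0):
    """Pair stubs (half-edges) uniformly at random.

    degrees[v] is the degree of node v; the total degree must be even.
    Returns a list of edges as (u, v) pairs."""
    assert sum(degrees) % 2 == 0, "total degree must be even"
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.Random(seed).shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
```

The paper's extension replaces some of this uniform pairing with complete subgroups (for clustering) and degree-matched pairing (for degree correlation).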
|
1207.3234
|
An Empirical Study of the Relation Between Community Structure and
Transitivity
|
cs.SI physics.soc-ph
|
One of the most prominent properties in real-world networks is the presence
of a community structure, i.e. dense and loosely interconnected groups of nodes
called communities. In an attempt to better understand this concept, we study
the relationship between the strength of the community structure and the
network transitivity (or clustering coefficient). Although intuitively
appealing, this analysis was not performed before. We adopt an approach based
on random models to empirically study how one property varies depending on the
other. It turns out that transitivity increases with the strength of the
community structure and is also affected by the distribution of community
sizes.
Furthermore, increasing the transitivity also results in a stronger community
structure. More surprisingly, if a very weak community structure causes almost
zero transitivity, the opposite is not true and a network with a close to zero
transitivity can still have a clearly defined community structure. Further
analytical work is necessary to characterize the exact nature of the identified
relationship.
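Transitivity, the quantity compared against community strength above, is quick to compute from triad and triangle counts. A minimal sketch for an undirected graph given as adjacency sets (the representation is our choice):

```python
from itertools import combinations

def transitivity(adj):
    """Global clustering coefficient: closed triads / all connected
    triads, for an undirected graph {node: set(neighbors)}.

    Each triangle is counted once per center vertex, matching the triad
    count, so the ratio equals 3 * triangles / triads."""
    triads = triangles = 0
    for v, nbrs in adj.items():
        for a, b in combinations(nbrs, 2):
            triads += 1              # a-v-b is a connected triad
            if b in adj[a]:
                triangles += 1       # closed: edge a-b also present
    return triangles / triads if triads else 0.0
```

A triangle gives transitivity 1.0 and a path gives 0.0, the two extremes between which the community-structure experiments vary.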
|
1207.3265
|
The Sufficiency Principle for Decentralized Data Reduction
|
cs.IT math.IT
|
This paper develops the sufficiency principle suitable for data reduction in
decentralized inference systems. Both parallel and tandem networks are studied
and we focus on the cases where observations at decentralized nodes are
conditionally dependent. For a parallel network, through the introduction of a
hidden variable that induces conditional independence among the observations,
the locally sufficient statistics, defined with respect to the hidden variable,
are shown to be globally sufficient for the parameter of inference interest.
For a tandem network, the notion of conditional sufficiency is introduced and
the related theories and tools are developed. Finally, connections between the
sufficiency principle and some distributed source coding problems are explored.
|
1207.3269
|
The Price of Privacy in Untrusted Recommendation Engines
|
cs.LG cs.IT math.IT
|
Recent increase in online privacy concerns prompts the following question:
can a recommender system be accurate if users do not entrust it with their
private data? To answer this, we study the problem of learning item-clusters
under local differential privacy, a powerful, formal notion of data privacy. We
develop bounds on the sample-complexity of learning item-clusters from
privatized user inputs. Significantly, our results identify a sample-complexity
separation between learning in an information-rich and an information-scarce
regime, thereby highlighting the interaction between privacy and the amount of
information (ratings) available to each user.
In the information-rich regime, where each user rates at least a constant
fraction of items, a spectral clustering approach is shown to achieve a
sample-complexity lower bound derived from a simple information-theoretic
argument based on Fano's inequality. However, the information-scarce regime,
where each user rates only a vanishing fraction of items, is found to require a
fundamentally different approach both for lower bounds and algorithms. To this
end, we develop new techniques for bounding mutual information under a notion
of channel-mismatch, and also propose a new algorithm, MaxSense, and show that
it achieves optimal sample-complexity in this setting.
The techniques we develop for bounding mutual information may be of broader
interest. To illustrate this, we show their applicability to $(i)$ learning
based on 1-bit sketches, and $(ii)$ adaptive learning, where queries can be
adapted based on answers to past queries.
|
1207.3270
|
Probabilistic Event Calculus for Event Recognition
|
cs.AI
|
Symbolic event recognition systems have been successfully applied to a
variety of application domains, extracting useful information in the form of
events, allowing experts or other systems to monitor and respond when
significant events are recognised. In a typical event recognition application,
however, these systems often have to deal with a significant amount of
uncertainty. In this paper, we address the issue of uncertainty in logic-based
event recognition by extending the Event Calculus with probabilistic reasoning.
Markov Logic Networks are a natural candidate for our logic-based formalism.
However, the temporal semantics of the Event Calculus introduce a number of
challenges for the proposed model. We show how and under what assumptions we
can overcome these problems. Additionally, we study how probabilistic modelling
changes the behaviour of the formalism, affecting its key property, the inertia
of fluents. Furthermore, we demonstrate the advantages of the probabilistic
Event Calculus through examples and experiments in the domain of activity
recognition, using a publicly available dataset for video surveillance.
|
1207.3285
|
Biogeography-Based Informative Gene Selection and Cancer Classification
Using SVM and Random Forests
|
cs.NE stat.ML
|
Microarray cancer gene expression data are of very high dimensionality.
Reducing the dimensions helps in improving the overall analysis and
classification performance. We propose two hybrid techniques, Biogeography -
based Optimization - Random Forests (BBO - RF) and BBO - SVM (Support Vector
Machines) with gene ranking as a heuristic, for microarray gene expression
analysis. This heuristic is obtained from information gain filter ranking
procedure. The BBO algorithm generates a population of candidate subset of
genes, as part of an ecosystem of habitats, and employs the migration and
mutation processes across multiple generations of the population to improve the
classification accuracy. The fitness of each gene subset is assessed by the
classifiers - SVM and Random Forests. The performances of these hybrid
techniques are evaluated on three cancer gene expression datasets retrieved
from the Kent Ridge Biomedical datasets collection and the libSVM data
repository. Our results demonstrate that genes selected by the proposed
techniques yield classification accuracies comparable to previously reported
algorithms.
|
1207.3289
|
The Origin, Evolution and Development of Bilateral Symmetry in
Multicellular Organisms
|
q-bio.TO cs.CE
|
A computational theory and model of the ontogeny and development of bilateral
symmetry in multicellular organisms is presented. Understanding the origin and
evolution of bilateral organisms requires an understanding of how bilateral
symmetry develops, starting from a single cell. Bilateral symmetric growth of a
multicellular organism from a single starter cell is explained as resulting
from the opposite handedness and orientation along one axis in two daughter
founder cells that are in equivalent developmental control network states.
Several methods of establishing the initial orientation of the daughter cells
(including oriented cell division and cell signaling) are discussed. The
orientation states of the daughter cells are epigenetically inherited by their
progeny. This results in mirror development with the two founding daughter
cells generating complementary mirror image multicellular morphologies. The end
product is a bilateral symmetric organism. The theory gives a unified
explanation of diverse phenomena including symmetry breaking, situs inversus,
gynandromorphs, inside-out growth, bilaterally symmetric cancers, and the
rapid, punctuated evolution of bilaterally symmetric organisms in the Cambrian
Explosion. The theory is supported by experimental results on early embryonic
development. The theory makes precise, testable predictions.
|
1207.3292
|
The Han-Kobayashi Region for a Class of Gaussian Interference Channels
with Mixed Interference
|
cs.IT math.IT
|
A simple encoding scheme based on Sato's non-na\"ive frequency division is
proposed for a class of Gaussian interference channels with mixed interference.
The achievable region is shown to be equivalent to that of Costa's noiseberg
region for the one-sided Gaussian interference channel. This allows for an
indirect proof that this simple achievable rate region is indeed equivalent to
the Han-Kobayashi (HK) region with Gaussian input and with time sharing for
this class of Gaussian interference channels with mixed interference.
|
1207.3315
|
Verifying an algorithm computing Discrete Vector Fields for digital
imaging
|
cs.AI cs.LO cs.MS math.AT
|
In this paper, we present a formalization of an algorithm to construct
admissible discrete vector fields in the Coq theorem prover taking advantage of
the SSReflect library. Discrete vector fields are a tool which has been
welcomed in the homological analysis of digital images, since it provides a
procedure to reduce the amount of information while preserving the homological
properties. In particular, thanks to discrete vector fields, we are able to
compute, inside Coq, homological properties of biomedical images which
otherwise are out of the reach of this system.
|
1207.3316
|
SUMIS: Near-Optimal Soft-In Soft-Out MIMO Detection With Low and Fixed
Complexity
|
cs.IT math.IT
|
The fundamental problem of our interest here is soft-input soft-output
multiple-input multiple-output (MIMO) detection. We propose a method, referred
to as subspace marginalization with interference suppression (SUMIS), that
yields unprecedented performance at low and fixed (deterministic) complexity.
Our method provides a well-defined tradeoff between computational complexity
and performance. Apart from an initial sorting step consisting of selecting
channel-matrix columns, the algorithm involves neither searching nor algorithmic
branching; hence it has a completely predictable run-time and allows
for a highly parallel implementation. We numerically assess the performance of
SUMIS in different practical settings: full/partial channel state information,
sequential/iterative decoding, and low/high rate outer codes. We also comment
on how the SUMIS method performs in systems with a large number of transmit
antennas.
|
1207.3322
|
On the Sum Capacity of the Discrete Memoryless Interference Channel with
One-Sided Weak Interference and Mixed Interference
|
cs.IT math.IT
|
The sum capacity of a class of discrete memoryless interference channels is
determined. This class of channels is defined analogous to the Gaussian
Z-interference channel with weak interference; as a result, the sum capacity is
achieved by letting the transceiver pair subject to the interference
communicate at a rate such that its message can be decoded at the unintended
receiver using single user detection. Moreover, this class of discrete
memoryless interference channels is equivalent in capacity region to certain
discrete degraded interference channels. This allows the construction of a
capacity outer-bound using the capacity region of associated degraded broadcast
channels. The same technique is then used to determine the sum capacity of the
discrete memoryless interference channel with mixed interference. The above
results allow one to determine sum capacities or capacity regions of several
new discrete memoryless interference channels.
|
1207.3368
|
Learning the Pseudoinverse Solution to Network Weights
|
cs.NE
|
The last decade has seen the parallel emergence in computational neuroscience
and machine learning of neural network structures which spread the input signal
randomly to a higher dimensional space; perform a nonlinear activation; and
then solve for a regression or classification output by means of a mathematical
pseudoinverse operation. In the field of neuromorphic engineering, these
methods are increasingly popular for synthesizing biologically plausible neural
networks, but the "learning method" - computation of the pseudoinverse by
singular value decomposition - is problematic both for biological plausibility
and because it is not an online or an adaptive method. We present an online or
incremental method of computing the pseudoinverse, which we argue is
biologically plausible as a learning method, and which can be made adaptable
for non-stationary data streams. The method is significantly more
memory-efficient than the conventional computation of pseudoinverses by
singular value decomposition.
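The abstract does not spell out the incremental update rule, but a classical construction in this spirit is Greville's column-recursive pseudoinverse, sketched below (numpy assumed; the tolerance and function name are illustrative). Each new column is incorporated without recomputing an SVD.

```python
import numpy as np

def greville_pinv(A):
    """Build pinv(A) one column at a time (Greville's recursion), so a
    new column can be incorporated without redoing a full SVD."""
    m, n = A.shape
    a1 = A[:, [0]]
    d0 = float(a1.T @ a1)
    Ap = a1.T / d0 if d0 > 0 else np.zeros((1, m))
    for k in range(1, n):
        ak = A[:, [k]]
        d = Ap @ ak                       # coefficients of ak in current columns
        c = ak - A[:, :k] @ d             # component outside their span
        if np.linalg.norm(c) > 1e-10:     # ak brings a new direction
            b = c.T / float(c.T @ c)
        else:                             # ak is linearly dependent
            b = (d.T @ Ap) / (1.0 + float(d.T @ d))
        Ap = np.vstack([Ap - d @ b, b])
    return Ap
```

The recursion stores only the current pseudoinverse, which is the source of the memory advantage over batch SVD.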
|
1207.3370
|
Deconvolution of vibroacoustic images using a simulation model based on
a three dimensional point spread function
|
cs.CV
|
Vibro-acoustography (VA) is a medical imaging method based on the
difference-frequency generation produced by the mixture of two focused
ultrasound beams. VA has been applied to different problems in medical imaging
such as imaging bones, microcalcifications in the breast, mass lesions, and
calcified arteries. The obtained images may have a resolution of 0.7--0.8 mm.
Current VA systems based on confocal or linear array transducers generate
C-scan images at the beam focal plane. Images on the axial plane are also
possible; however, the resolution along depth is worse than the lateral one.
Typical axial resolution is about 1.0 cm. Furthermore, the
elevation resolution of linear array systems is larger than that in lateral
direction. This asymmetry degrades C-scan images obtained using linear arrays.
The purpose of this article is to study VA image restoration based on a 3D
point spread function (PSF) using classical deconvolution algorithms: Wiener,
constrained least-squares (CLS), and geometric mean filters. To assess the
filters' performance, we use an image quality index that accounts for
correlation loss, luminance and contrast distortion. Results for simulated VA
images show that the quality index achieved with the Wiener filter is 0.9 (1
indicates perfect restoration). This filter yielded the best result in
comparison with the other ones. Moreover, the deconvolution algorithms were
applied to an experimental VA image of a phantom composed of three stretched
0.5 mm wires. Experiments were performed using a transducer driven at two
frequencies, 3075 kHz and 3125 kHz, which resulted in the difference-frequency
of 50 kHz. Restorations with the theoretical line spread function (LSF) did not
recover sufficient information to identify the wires in the images. However,
using an estimated LSF the obtained results displayed enough information to
spot the wires in the images.
|
1207.3384
|
MDS and Self-dual Codes over Rings
|
cs.IT math.IT
|
In this paper we give the structure of constacyclic codes over formal power
series and chain rings. We also present necessary and sufficient conditions on
the existence of MDS codes over principal ideal rings. These results allow for
the construction of infinite families of MDS self-dual codes over finite chain
rings, formal power series and principal ideal rings.
|
1207.3385
|
Construction of Cyclic Codes over $\mathbb{F}_2+u\mathbb{F}_2$ for DNA
Computing
|
cs.IT math.IT
|
We construct codes over the ring $\mathbb{F}_2+u\mathbb{F}_2$ with $u^2=0$.
These codes are designed for use in DNA computing applications. The codes
obtained satisfy the reverse complement constraint and the $GC$ content
constraint, and avoid the secondary structure. They are derived from the cyclic complement
reversible codes over the ring $\mathbb{F}_2+u\mathbb{F}_2$. We also construct
an infinite family of BCH DNA codes.
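The abstract does not give the correspondence between ring elements and DNA bases, so the following sketch checks the two named constraints directly on DNA strings; the function names and the exact GC-weight convention are illustrative assumptions.

```python
# Hypothetical illustration of the two code constraints named above:
# the reverse-complement constraint and the GC-content constraint.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(word):
    """Watson-Crick complement of the reversed word."""
    return "".join(COMPLEMENT[b] for b in reversed(word))

def gc_content(word):
    """Number of G/C bases in the word."""
    return sum(b in "GC" for b in word)

def satisfies_constraints(code, gc_weight):
    """True iff the reverse complement of every codeword is again a
    codeword and every codeword has exactly gc_weight G/C bases."""
    return (all(reverse_complement(w) in code for w in code)
            and all(gc_content(w) == gc_weight for w in code))
```

For example, "ACGT" is its own reverse complement, so the one-word code {"ACGT"} with GC weight 2 passes both checks.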
|
1207.3387
|
Self-dual Repeated Root Cyclic and Negacyclic Codes over Finite Fields
|
cs.IT math.IT
|
In this paper we investigate repeated root cyclic and negacyclic codes of
length $p^rm$ over $\mathbb{F}_{p^s}$ with $(m,p)=1$. In the case $p$ odd, we
give necessary and sufficient conditions on the existence of negacyclic
self-dual codes. When $m=2m'$ with $m'$ odd, we characterize the codes in terms
of their generator polynomials. This provides simple conditions on the
existence of self-dual negacyclic codes, and generalizes the results of Dinh
\cite{dinh}. We also answer an open problem concerning the number of self-dual
cyclic codes given by Jia et al. \cite{jia}.
|
1207.3388
|
Eradicating Computer Viruses on Networks
|
physics.soc-ph cs.NI cs.SI
|
Spread of computer viruses can be modeled as the SIS
(susceptible-infected-susceptible) epidemic propagation. We show that, for
random immunization or targeted immunization to effectively prevent virus
propagation on homogeneous networks, antivirus programs must be installed on
every computer node and updated frequently, which entails considerable
installation and maintenance cost. We therefore propose a new policy, called
"network monitors", to tackle this problem. Under this policy, we install and
update antivirus programs on only a small number of computer nodes, the
"network monitors", which can additionally monitor the behavior of their
neighboring nodes. This mechanism incurs a relatively small cost for installing
and updating antivirus programs. We also show that the "network monitors"
policy efficiently protects the network's safety. Numerical simulations confirm
our analysis.
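The abstract does not spell out the monitors' dynamics; the following is a minimal, hypothetical sketch of one synchronous SIS step under a monitor policy in which monitors are immunized and disinfect the neighbors they observe.

```python
import random

def sis_step(neighbors, infected, beta, delta, monitors):
    """One synchronous SIS step with a hypothesized 'network monitors'
    policy: each infected node infects a susceptible neighbor with
    probability beta and recovers with probability delta; monitors are
    immunized and disinfect every neighbor they observe."""
    new = set(infected)
    for u in infected:                       # infection phase
        for v in neighbors[u]:
            if v not in infected and random.random() < beta:
                new.add(v)
    for u in infected:                       # recovery phase
        if random.random() < delta:
            new.discard(u)
    for m in monitors:                       # monitor intervention
        new.discard(m)
        new -= set(neighbors[m])
    return new
```

With beta = 0 and a single monitor on a ring, one step already clears the monitor and both of its neighbors, illustrating why few monitors can suffice.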
|
1207.3389
|
Incremental Learning of 3D-DCT Compact Representations for Robust Visual
Tracking
|
cs.CV cs.LG
|
Visual tracking usually requires an object appearance model that is robust to
changing illumination, pose and other factors encountered in video. In this
paper, we construct an appearance model using the 3D discrete cosine transform
(3D-DCT). The 3D-DCT is based on a set of cosine basis functions, which are
determined by the dimensions of the 3D signal and thus independent of the input
video data. In addition, the 3D-DCT can generate a compact energy spectrum
whose high-frequency coefficients are sparse if the appearance samples are
similar. By discarding these high-frequency coefficients, we simultaneously
obtain a compact 3D-DCT based object representation and a signal
reconstruction-based similarity measure (reflecting the information loss from
signal reconstruction). To efficiently update the object representation, we
propose an incremental 3D-DCT algorithm, which decomposes the 3D-DCT into
successive operations of the 2D discrete cosine transform (2D-DCT) and 1D
discrete cosine transform (1D-DCT) on the input video data.
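The decomposition the incremental algorithm relies on is the separability of the 3D-DCT: a 2D-DCT per frame followed by a 1D-DCT along the temporal axis equals the full 3D-DCT, so cached per-frame 2D-DCTs only need a temporal 1D-DCT when a new frame arrives. A quick numerical check of that identity (scipy assumed; the cube dimensions are illustrative):

```python
import numpy as np
from scipy.fft import dct, dctn

rng = np.random.default_rng(1)
cube = rng.standard_normal((8, 8, 5))   # e.g. 8x8 appearance patches over 5 frames

# Full 3D-DCT in one shot.
full = dctn(cube, type=2, norm="ortho")

# Separability used by the incremental scheme: a 2D-DCT on each frame,
# then a 1D-DCT along the temporal axis.
per_frame = dct(dct(cube, axis=0, type=2, norm="ortho"),
                axis=1, type=2, norm="ortho")
separable = dct(per_frame, axis=2, type=2, norm="ortho")

assert np.allclose(full, separable)
```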
|
1207.3394
|
Dimension Reduction by Mutual Information Feature Extraction
|
cs.LG cs.CV
|
During the past decades, to study high-dimensional data in a large variety of
problems, researchers have proposed many Feature Extraction algorithms. One of
the most effective approaches for optimal feature extraction is based on mutual
information (MI). However, it is not always easy to obtain an accurate estimate
of high-dimensional MI. In terms of MI, optimal feature extraction creates a
feature set from the data that jointly has the largest dependency on the target
class and minimum redundancy. In this paper, a
component-by-component gradient ascent method is proposed for feature
extraction which is based on one-dimensional MI estimates. We will refer to
this algorithm as Mutual Information Feature Extraction (MIFX). The performance
of the proposed method is evaluated using UCI databases. The results indicate
that MIFX provides robust performance across different data sets, being almost
always the best or comparable to the best ones.
|
1207.3414
|
Google matrix of Twitter
|
cs.SI physics.soc-ph
|
We construct the Google matrix of the entire Twitter network, as of July
2009, and analyze its spectrum and eigenstate properties, including the PageRank
and CheiRank vectors and 2DRanking of all nodes. Our studies show much stronger
inter-connectivity between top PageRank nodes for the Twitter network compared
to the networks of Wikipedia and British Universities studied previously. Our
analysis allows us to locate the top Twitter users who control the information
flow on the network. We argue that this small fraction of the whole number of
users, which can be viewed as the social network elite, plays the dominant role
in the process of opinion formation on the network.
|
1207.3434
|
An Approach to Model Interest for Planetary Rover through
Dezert-Smarandache Theory
|
cs.AI cs.RO cs.SY
|
In this paper, we propose an approach for assigning an interest level to the
goals of a planetary rover. Assigning an interest level to goals allows the
rover to autonomously transform and reallocate its goals. The interest level is
defined by fusing payload and navigation information. The fusion yields an
"interest map" that quantifies the level of interest of each area around the
rover. In this way the planner can choose the most interesting scientific
objectives to be analyzed, with limited human intervention, and reallocate its
goals autonomously. The Dezert-Smarandache Theory of Plausible and Paradoxical
Reasoning was used for information fusion: this theory allows dealing with
vague and conflicting data. In particular, it allows us to directly model the
behavior of the scientists who have to evaluate the relevance of a particular
set of goals. The paper shows an application of the proposed approach to the
generation of a reliable interest map.
|
1207.3437
|
Robust Mission Design Through Evidence Theory and Multi-Agent
Collaborative Search
|
cs.CE cs.NE cs.SY math.OC math.PR
|
In this paper, the preliminary design of a space mission is approached
introducing uncertainties on the design parameters and formulating the
resulting reliable design problem as a multiobjective optimization problem.
Uncertainties are modelled through evidence theory and the belief, or
credibility, in the successful achievement of mission goals is maximised along
with the reliability of constraint satisfaction. The multiobjective
optimisation problem is solved through a novel algorithm based on the
collaboration of a population of agents in search for the set of highly
reliable solutions. Two typical problems in mission analysis are used to
illustrate the proposed methodology.
|
1207.3438
|
MahNMF: Manhattan Non-negative Matrix Factorization
|
stat.ML cs.LG cs.NA
|
Non-negative matrix factorization (NMF) approximates a non-negative matrix
$X$ by a product of two non-negative low-rank factor matrices $W$ and $H$. NMF
and its extensions minimize either the Kullback-Leibler divergence or the
Euclidean distance between $X$ and $W^T H$ to model the Poisson noise or the
Gaussian noise. In practice, when the noise distribution is heavy-tailed, they
cannot perform well. This paper presents Manhattan NMF (MahNMF) which minimizes
the Manhattan distance between $X$ and $W^T H$ for modeling the heavy-tailed
Laplacian noise. Similar to sparse and low-rank matrix decompositions, MahNMF
robustly estimates the low-rank part and the sparse part of a non-negative
matrix and thus performs effectively when data are contaminated by outliers. We
extend MahNMF for various practical applications by developing box-constrained
MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing
MahNMF, and symmetric MahNMF. The major contribution of this paper lies in two
fast optimization algorithms for MahNMF and its extensions: the rank-one
residual iteration (RRI) method and Nesterov's smoothing method. In particular,
by approximating the residual matrix by the outer product of one row of $W$ and
one row of $H$ in MahNMF, we develop an RRI method to iteratively update each
variable of $W$ and $H$ in a closed form solution. Although RRI is efficient
for small scale MahNMF and some of its extensions, it is neither scalable to
large scale matrices nor flexible enough to optimize all MahNMF extensions.
Since the objective functions of MahNMF and its extensions are neither convex
nor smooth, we apply Nesterov's smoothing method to recursively optimize one
factor matrix with another matrix fixed. By setting the smoothing parameter
inversely proportional to the iteration number, we improve the approximation
accuracy iteratively for both MahNMF and its extensions.
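For the Manhattan objective, the closed-form RRI update for each entry of $H$ reduces, under the standard reading of $L_1$ rank-one fitting, to a weighted-median problem. The sketch below assumes that reading; function names and the nonnegativity projection are illustrative (numpy assumed).

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value at which the cumulative weight reaches half the
    total weight -- a minimizer of sum_i weights[i] * |values[i] - t|."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def rri_update_h(R, w, eps=1e-12):
    """One rank-one residual step for the Manhattan objective: given the
    residual matrix R and a fixed factor column w, each entry h_j solves
        min_{h_j >= 0} sum_i |R[i, j] - w[i] * h_j|,
    i.e. a weighted median of R[i, j] / w[i] with weights |w[i]|,
    projected onto the nonnegative orthant."""
    mask = np.abs(w) > eps                   # ignore (near-)zero entries of w
    h = np.array([weighted_median(R[mask, j] / w[mask], np.abs(w[mask]))
                  for j in range(R.shape[1])])
    return np.maximum(h, 0.0)
```

The median (rather than mean) update is what gives the $L_1$ fit its robustness to outliers in the residual.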
|
1207.3441
|
Isabelle/jEdit --- a Prover IDE within the PIDE framework
|
cs.LO cs.AI cs.MS
|
PIDE is a general framework for document-oriented prover interaction and
integration, based on a bilingual architecture that combines ML and Scala. The
overall aim is to connect LCF-style provers like Isabelle (or Coq or HOL) with
sophisticated front-end technology on the JVM platform, overcoming command-line
interaction at last.
The present system description specifically covers Isabelle/jEdit as part of
the official release of Isabelle2011-1 (October 2011). It is a concrete Prover
IDE implementation based on Isabelle/PIDE library modules (implemented in
Scala) on the one hand, and the well-known text editor framework of jEdit
(implemented in Java) on the other hand.
The interaction model of our Prover IDE follows the idea of continuous proof
checking: the theory source text is annotated by semantic information by the
prover as it becomes available incrementally. This works via an asynchronous
protocol that neither blocks the editor nor stops the prover from exploiting
parallelism on multi-core hardware. The jEdit GUI provides standard metaphors
for augmented text editing (highlighting, squiggles, tooltips, hyperlinks etc.)
that we have instrumented to render the formal content from the prover context.
Further refinement of the jEdit display engine via suitable plugins and fonts
approximates mathematical rendering in the text buffer, including symbols from
the TeX repertoire, and sub-/superscripts.
Isabelle/jEdit is presented here both as a usable interface for current
Isabelle, and as a reference application to inspire further projects based on
PIDE.
|
1207.3442
|
Approximated Computation of Belief Functions for Robust Design
Optimization
|
cs.CE cs.NE cs.SY math.OC math.PR
|
This paper presents some ideas to reduce the computational cost of
evidence-based robust design optimization. Evidence Theory crystallizes both
the aleatory and epistemic uncertainties in the design parameters, providing
two quantitative measures, Belief and Plausibility, of the credibility of the
computed value of the design budgets. The paper proposes some techniques to
compute an approximation of Belief and Plausibility at a cost that is a
fraction of the one required for an accurate calculation of the two values.
Some simple test cases will show how the proposed techniques scale with the
dimension of the problem. Finally a simple example of spacecraft system design
is presented.
|
1207.3451
|
Analysis and Optimization of a Frequency-Hopping Ad Hoc Network in
Rayleigh Fading
|
cs.IT math.IT
|
This paper proposes a new method for optimizing frequency-hopping ad hoc
networks in the presence of Rayleigh fading. It is assumed that the system uses
a capacity-approaching code (e.g., turbo or LDPC) and noncoherent binary
continuous-phase frequency-shift keying (CPFSK) modulation. By using
transmission capacity as the performance metric, the number of hopping
channels, CPFSK modulation index, and code rate are jointly optimized. Mobiles
in the network are assumed to be uniformly located within a finite area.
Closed-form expressions for outage probability are given for a network
characterized by a physical interference channel. The outage probability is
first found conditioned on the locations of the mobiles, and then averaged over
the spatial distribution of the mobiles. The transmission capacity, which is a
measure of the spatial spectral efficiency, is obtained from the outage
probability. The transmission capacity is modified to account for the
constraints of the CPFSK modulation and capacity-approaching coding. Two
optimization methods are proposed for maximizing the transmission capacity. The
first is a brute-force method and the second is a gradient-search algorithm.
The results obtained from the optimization shed new insight into the
fundamental tradeoffs among the number of frequency-hopping channels, the
modulation index, and the rate of the error-correcting code.
|
1207.3472
|
Optimal Selection of Assets Investing Composition Plan based on Grey
Multi Objective Programming method
|
cs.CE math.OC
|
The selection of appropriate asset-investing composition projects, such as
asset rationalization, plays an important role in improving business systems.
We consider asset-investing composition plan problems subject to grey
multiobjective programming with grey inequality constraints. In this paper, we
show in detail the entire process of the application, from modeling the case
problem to generating its solution. To solve the grey multiobjective
programming problem, we develop and apply an algorithm of grey multiple
objective programming by the weighting method and an algorithm of grey multiple
objective programming based on the q-positioned programming method. Both
algorithms give due weight to the uncertainty (greyness) inherent in grey
multiobjective programming while keeping the calculation process simple. Worked
examples also show the ability and effectiveness of the algorithms.
|
1207.3510
|
HMRF-EM-image: Implementation of the Hidden Markov Random Field Model
and its Expectation-Maximization Algorithm
|
cs.CV
|
In this project, we study the hidden Markov random field (HMRF) model and its
expectation-maximization (EM) algorithm. We implement a MATLAB toolbox named
HMRF-EM-image for 2D image segmentation using the HMRF-EM framework. This
toolbox also implements edge-prior-preserving image segmentation, and can be
easily reconfigured for other problems, such as 3D image segmentation.
|
1207.3513
|
Secure Channel Simulation
|
cs.IT math.IT
|
In this paper the Output Statistics of Random Binning (OSRB) framework is
used to prove a new inner bound for the problem of secure channel simulation.
Our results subsume some recent results on the secure function computation. We
also provide an achievability result for the problem of simultaneously
simulating a channel and creating a shared secret key. A special case of this
result generalizes the lower bound of Gohari and Anantharam on the source model
to include constraints on the rates of the public discussion.
|
1207.3520
|
Improved brain pattern recovery through ranking approaches
|
cs.LG stat.ML
|
Inferring the functional specificity of brain regions from functional
Magnetic Resonance Images (fMRI) data is a challenging statistical problem.
While the General Linear Model (GLM) remains the standard approach for brain
mapping, supervised learning techniques (a.k.a. decoding) have proven to be
useful to capture multivariate statistical effects distributed across voxels
and brain regions. Up to now, much effort has been made to improve decoding by
incorporating prior knowledge in the form of a particular regularization term.
In this paper we demonstrate that further improvement can be made by accounting
for non-linearities using a ranking approach rather than the commonly used
least-square regression. Through simulation, we compare the recovery properties
of our approach to linear models commonly used in fMRI based decoding. We
demonstrate the superiority of ranking with a real fMRI dataset.
|
1207.3532
|
Memory Efficient De Bruijn Graph Construction
|
cs.DS cs.DB
|
Massively parallel DNA sequencing technologies are revolutionizing genomics
research. Billions of short reads generated at low cost can be assembled to
reconstruct whole genomes. Unfortunately, the large memory footprint of
the existing de novo assembly algorithms makes it challenging to get the
assembly done for higher eukaryotes like mammals. In this work, we investigate
the memory issue of constructing a de Bruijn graph, a core task in leading
assembly algorithms, which often consumes several hundred gigabytes of memory
for large genomes. We propose a disk-based partition method, called Minimum
Substring Partitioning (MSP), to complete the task using less than 10 gigabytes
of memory, without runtime slowdown. MSP breaks the short reads into multiple
small disjoint partitions so that each partition can be loaded into memory,
processed individually and later merged with others to form a de Bruijn graph.
By leveraging the overlaps among the k-mers (substring of length k), MSP
achieves astonishing compression ratio: The total size of partitions is reduced
from $\Theta(kn)$ to $\Theta(n)$, where $n$ is the size of the short read
database, and $k$ is the length of a $k$-mer. Experimental results show that
our method can build de Bruijn graphs using a commodity computer for any
large-volume sequence dataset.
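A toy sketch of the partitioning rule described above: each k-mer is routed to the partition named by its lexicographically minimum p-substring. The merged-superstring storage and disk handling of the full method are omitted; names are illustrative.

```python
def min_substring(kmer, p):
    """Lexicographically smallest p-substring of a k-mer (partition key)."""
    return min(kmer[i:i + p] for i in range(len(kmer) - p + 1))

def msp_partition(read, k, p):
    """Route every k-mer of a read to the partition named by its minimum
    p-substring. Adjacent k-mers overlap in k-1 bases and therefore tend
    to share one minimum p-substring; in the full method such runs are
    stored as a single substring of the read, which is what shrinks the
    total partition size from Theta(kn) to Theta(n)."""
    parts = {}
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        parts.setdefault(min_substring(kmer, p), []).append(kmer)
    return parts
```

For the read "ACGTACGT" with k = 4 and p = 2, four of the five k-mers share the minimizer "AC" and land in one partition.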
|
1207.3538
|
Kernel Principal Component Analysis and its Applications in Face
Recognition and Active Shape Models
|
cs.CV
|
Principal component analysis (PCA) is a popular tool for linear
dimensionality reduction and feature extraction. Kernel PCA is the nonlinear
form of PCA, which better exploits the complicated spatial structure of
high-dimensional features. In this paper, we first review the basic ideas of
PCA and kernel PCA. Then we focus on the reconstruction of pre-images for
kernel PCA. We also give an introduction on how PCA is used in active shape
models (ASMs), and discuss how kernel PCA can be applied to improve traditional
ASMs. Then we show some experimental results to compare the performance of
kernel PCA and standard PCA for classification problems. We also implement the
kernel PCA-based ASMs, and use it to construct human face models.
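A minimal numpy sketch of the kernel PCA step the paper reviews (RBF kernel, double centering of the Gram matrix, top eigenpairs); the kernel choice and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) Gram matrix of the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components, gamma=1.0):
    """Project training points onto the top kernel principal components:
    center the Gram matrix in feature space, eigendecompose, and scale
    the leading eigenvectors by sqrt(eigenvalue)."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Projecting a new point would instead use the centered cross-kernel between the point and the training set; that out-of-sample step is where the pre-image problem discussed in the paper arises.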
|
1207.3543
|
Classification of Approaches and Challenges of Frequent Subgraphs Mining
in Biological Networks
|
cs.AI
|
Understanding the structure and dynamics of biological networks is one of the
important challenges in systems biology. In addition, the increasing amount of
experimental data on biological networks necessitates efficient methods to
analyze these huge amounts of data. Such methods must recognize common patterns
in the data. As biological networks can be modeled by graphs, the problem of
common pattern recognition is equivalent to frequent subgraph mining in a set
of graphs. In this paper, the challenges of frequent subgraph mining in
biological networks are first introduced and the existing approaches are
classified for each challenge. The algorithms are then analyzed on the basis of
the type of approach they apply to each of the challenges.
|
1207.3554
|
Designing various component analysis at will
|
cs.CV cs.NA stat.ME stat.ML
|
This paper provides a generic framework of component analysis (CA) methods
introducing a new expression for scatter matrices and Gram matrices, called
Generalized Pairwise Expression (GPE). This expression is quite compact but
highly powerful: The framework includes not only (1) the standard CA methods
but also (2) several regularization techniques, (3) weighted extensions, (4)
some clustering methods, and (5) their semi-supervised extensions. This paper
also presents quite a simple methodology for designing a desired CA method from
the proposed framework: Adopting the known GPEs as templates, and generating a
new method by combining these templates appropriately.
|
1207.3560
|
Diagnosing client faults using SVM-based intelligent inference from TCP
packet traces
|
cs.NI cs.AI cs.LG
|
We present the Intelligent Automated Client Diagnostic (IACD) system, which
only relies on inference from Transmission Control Protocol (TCP) packet traces
for rapid diagnosis of client device problems that cause network performance
issues. Using soft-margin Support Vector Machine (SVM) classifiers, the system
(i) distinguishes link problems from client problems, and (ii) identifies
characteristics unique to client faults to report the root cause of the client
device problem. Experimental evaluation demonstrated the capability of the IACD
system to distinguish between faulty and healthy links and to diagnose the
client faults with 98% accuracy in healthy links. The system can perform fault
diagnosis independent of the client's specific TCP implementation, enabling
diagnosis capability on a diverse range of client computers.
|
1207.3572
|
On the Equal-Rate Capacity of the AWGN Multiway Relay Channel
|
cs.IT math.IT
|
The L-user additive white Gaussian noise multiway relay channel is
investigated, where L users exchange information at the same rate through a
single relay. A new achievable rate region, based on the
functional-decode-forward coding strategy, is derived. For the case where there
are three or more users, and all nodes transmit at the same power, the capacity
is obtained. For the case where the relay power scales with the number of
users, it is shown that both compress-forward and functional-decode-forward
achieve rates within a constant number of bits of the capacity at all SNR
levels; in addition, functional-decode-forward outperforms compress-forward and
complete-decode-forward at high SNR levels.
|
1207.3574
|
On the Capacity of the Binary-Symmetric Parallel-Relay Network
|
cs.IT math.IT
|
We investigate the binary-symmetric parallel-relay network where there is one
source, one destination, and multiple relays in parallel. We show that
forwarding relays, where the relays merely transmit their received signals,
achieve the capacity in two ways: with coded transmission at the source and a
finite number of relays, or uncoded transmission at the source and a
sufficiently large number of relays. On the other hand, decoding relays, where
the relays decode the source message, re-encode, and forward it to the
destination, achieve the capacity when the number of relays is small. In
addition, we show that any coding scheme that requires decoding at any relay is
suboptimal in large parallel-relay networks, where forwarding relays achieve
strictly higher rates.
|
1207.3576
|
Hierarchical Approach for Total Variation Digital Image Inpainting
|
cs.CV
|
The art of recovering an image from damage in an undetectable form is known
as inpainting. Manual inpainting is most often a very time-consuming process.
With the digitalization of this technique, it has become automatic and faster.
In this paper, after the user selects the regions to be reconstructed, the
algorithm automatically reconstructs the lost regions with the help of the
information surrounding them. Existing methods perform very well when the
region to be reconstructed is very small, but fail to reconstruct properly as
the area increases. This paper describes a hierarchical method by which the
area to be inpainted is reduced over multiple levels, and the Total Variation
(TV) method is used to inpaint at each level. This algorithm gives better
performance when compared with other existing algorithms such as nearest
neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
|
1207.3582
|
Erasure Coding for Real-Time Streaming
|
cs.IT math.IT
|
We consider a real-time streaming system where messages are created
sequentially at the source, and are encoded for transmission to the receiver
over a packet erasure link. Each message must subsequently be decoded at the
receiver within a given delay from its creation time. The goal is to construct
an erasure correction code that achieves the maximum message size when all
messages must be decoded by their respective deadlines under a specified set of
erasure patterns (erasure model). We present an explicit intrasession code
construction that is asymptotically optimal under erasure models containing a
limited number of erasures per coding window, per sliding window, and
containing erasure bursts of a limited length.
|
1207.3583
|
Information Retrieval Model: A Social Network Extraction Perspective
|
cs.IR
|
Future information retrieval, especially in connection with the internet,
will incorporate content descriptions that are generated with social network
extraction technologies, and will preferably incorporate probability theory
for assigning semantics. Although there is increasing interest in social
network extraction, little of this work has had a significant impact on
information retrieval. Therefore, this paper proposes a model of information
retrieval based on social network extraction.
|
1207.3598
|
Learning to rank from medical imaging data
|
cs.LG cs.CV
|
Medical images can be used to predict a clinical score coding for the
severity of a disease, a pain level or the complexity of a cognitive task. In
all these cases, the predicted variable has a natural order. While a standard
classifier discards this information, we would like to take it into account in
order to improve prediction performance. A standard linear regression does
model such information; however, the linearity assumption is likely not to be
satisfied when predicting from pixel intensities in an image. In this paper we
address these modeling challenges with a supervised learning procedure where
the model aims to order or rank images. We use a linear model for its
robustness in high dimension and its possible interpretation. We show on
simulations and two fMRI datasets that this approach is able to predict the
correct ordering on pairs of images, yielding higher prediction accuracy than
standard regression and multiclass classification techniques.
|
1207.3603
|
Qualitative Comparison of Community Detection Algorithms
|
cs.SI cs.CV physics.soc-ph
|
Community detection is a very active field in complex networks analysis,
consisting in identifying groups of nodes more densely interconnected
relatively to the rest of the network. The existing algorithms are usually
tested and compared on real-world and artificial networks, their performance
being assessed through some partition similarity measure. However, the realism
of artificial networks can be questioned, and the appropriateness of those
measures is not obvious. In this study, we take advantage of recent advances
concerning
the characterization of community structures to tackle these questions. We
first generate networks thanks to the most realistic model available to date.
Their analysis reveals they display only some of the properties observed in
real-world community structures. We then apply five community detection
algorithms to these networks and find that the quantitatively assessed
performance does not necessarily agree with a qualitative analysis of the
identified communities. It therefore seems that both approaches should be
applied to perform a relevant comparison of the algorithms.
|
1207.3607
|
Fusing image representations for classification using support vector
machines
|
cs.CV cs.LG
|
In order to improve classification accuracy, different image representations
are usually combined. This can be done using two different fusion schemes.
In feature-level fusion schemes, image representations are combined before the
classification process. In classifier fusion, the decisions taken separately
based on individual representations are fused to make a decision. In this
paper, the main methods derived for both strategies are evaluated. Our
experimental results show that classifier fusion performs better. Specifically,
Bayes belief integration is the best-performing strategy for the image
classification task.
|
1207.3628
|
Identify Web-page Content meaning using Knowledge based System for Dual
Meaning Words
|
cs.IR
|
The meaning of Web-page content plays a big role when a search engine produces
a search result. In most cases the Web-page meaning is stored in the title or
meta-tag area, but those meanings do not always match the Web-page content. To
overcome this situation we need to go through the Web-page content to identify
the Web-page meaning. In cases where the Web-page content holds dual-meaning
words, it is really difficult to identify the meaning of the Web-page. In this
paper, we introduce a new design and development mechanism for identifying the
meaning of Web-page content that holds dual-meaning words.
|
1207.3646
|
OGCOSMO: An auxiliary tool for the study of the Universe within
hierarchical scenario of structure formation
|
cs.CE astro-ph.CO astro-ph.IM
|
In this work, the software OGCOSMO is presented. This program was written
using the high level design methodology (HLDM), which is based on the use of a
very high level (VHL) programming language as the main language, with an
intermediate level (IL) language used only for the time-critical processing.
The languages used are Python (VHL) and FORTRAN (IL). The core of OGCOSMO is a
package called OGC{\_}lib. This package contains a group of modules for the
study of cosmological and astrophysical processes, such as: comoving distance,
the relation between redshift and time, cosmic star formation rate, number
density of dark matter haloes, and the mass function of supermassive black
holes (SMBHs). The software is under development, and some new features will be
implemented for the research of the stochastic background of gravitational
waves (GWs) generated by stellar collapse to form black holes and by binary
systems of SMBHs. Moreover, we show that the use of HLDM with Python and
FORTRAN is a powerful tool for producing astrophysical software.
|
1207.3654
|
Joint Filter Design of Alternate MIMO AF Relaying Networks with
Interference Alignment
|
cs.IT math.IT
|
We study in this paper a two-hop relaying network consisting of one source,
one destination, and three amplify-and-forward (AF) relays operating in a
half-duplex mode. In order to compensate for the inherent loss of capacity
pre-log factor 1/2 in a half-duplex mode, we consider an alternate transmission
protocol among the three relays, in which two of the relays and the remaining
relay alternately forward messages from source to destination. We consider a
multiple-antenna
environment where all nodes have $M$ antennas. Aligning the inter-relay
interference due to the alternate transmission is utilized to make additional
degrees of freedom (DOFs) and recover the pre-log factor loss. It is shown that
the proposed relaying scheme can achieve $\frac{3M}{4}$ DOFs compared with the
$\frac{M}{2}$ DOFs of conventional AF relaying. In addition, suboptimal linear
filter designs for a source and three relays are proposed to maximize the
system achievable sum-rate for different fading scenarios when the destination
utilizes a linear minimum mean-square error filter for decoding. We verify from
our selected numerical results that the proposed filter designs give
significant improvement over a naive filter or conventional relaying schemes.
|
1207.3658
|
Programing Using High Level Design With Python and FORTRAN: A Study Case
in Astrophysics
|
cs.CE astro-ph.IM
|
In this work, we present a short review of the high level design methodology
(HLDM), which is based on the use of a very high level (VHL) programming
language as the main language, with an intermediate level (IL) language used
only for the time-critical processing. The languages used are Python (VHL) and
FORTRAN (IL). Moreover, this methodology, making use of object-oriented
programming (OOP), makes it possible to produce readable, portable and
reusable code. The concept of a computational framework, which arises
naturally from the OOP paradigm, is also presented. As an example, we present
the framework called PYGRAWC (Python framework for Gravitational Waves from
Cosmological origin). Furthermore, we show that the use of HLDM with Python
and FORTRAN produces a powerful tool for solving astrophysical problems.
|
1207.3704
|
Gibbsian Method for the Self-Optimization of Cellular Networks
|
math.OC cs.SY
|
In this work, we propose and analyze a class of distributed algorithms
performing the joint optimization of radio resources in heterogeneous cellular
networks made of a juxtaposition of macro and small cells. Within this context,
it is essential to use algorithms able to simultaneously solve the problems of
channel selection, user association and power control. In such networks, the
unpredictability of the cell and user patterns also requires distributed
optimization schemes. The proposed method is inspired from statistical physics
and based on the Gibbs sampler. It does not require the concavity/convexity,
monotonicity or duality properties common to classical optimization problems.
Besides, it supports discrete optimization which is especially useful to
practical systems. We show that it can be implemented in a fully distributed
way and nevertheless achieves system-wide optimality. We use simulation to
compare this solution to today's default operational methods in terms of both
throughput and energy consumption. Finally, we address concrete issues for the
implementation of this solution and analyze the overhead traffic required
within the framework of 3GPP and femtocell standards.
|