id | title | categories | abstract
|---|---|---|---|
1208.6063
|
Nonlinear spread of rumor and inoculation strategies in the nodes with
degree dependent tie strength in complex networks
|
cs.SI physics.soc-ph
|
In earlier rumor spreading models, at each time step nodes contact all of
their neighbors. In a more realistic scenario, a node may contact only some of
its neighbors to spread the rumor. Therefore, in real-world complex networks
the classic rumor spreading model must be modified to account for the
dependence of the rumor spread rate on the degrees of the spreader and the
informed nodes. We present a modified rumor spreading model that accommodates
these facts, and study it for rumor spreading in complex networks. With
nonlinear rumor spread exponent $\alpha$ and degree-dependent tie strength
exponent $\beta$, any complex network yields a finite rumor threshold. In the
present work, the modified rumor spreading model is studied in scale-free
networks. We also find that if $\alpha$ and $\beta$ are tuned to appropriate
values, the rumor threshold becomes independent of network size. In any social
network, spreading rumors may have undesirable effects. One possible way to
control rumor spread is to inoculate a certain fraction of nodes against
rumors, either randomly or in a targeted fashion. We use the modified rumor
spreading model over scale-free networks to investigate the efficacy of
inoculation, applying both random and targeted schemes. The rumor threshold
under random inoculation is greater than that of the model without
inoculation, but random inoculation is not very effective. The rumor threshold
under targeted inoculation is much higher than under random inoculation,
making it far more effective at suppressing the rumor.
|
1208.6064
|
Minimax Linear Quadratic Gaussian Control of Nonlinear MIMO System with
Time Varying Uncertainties
|
cs.SY
|
In this paper, a robust nonlinear control scheme is proposed for a nonlinear
multi-input multi-output (MIMO) system subject to bounded time varying
uncertainty which satisfies a certain integral quadratic constraint condition.
The scheme develops a robust feedback linearization approach which uses standard
feedback linearization approach to linearize the nominal nonlinear dynamics of
the uncertain nonlinear system and linearizes the nonlinear time varying
uncertainties at an arbitrary point using the mean value theorem. This approach
transforms uncertain nonlinear MIMO systems into an equivalent MIMO linear
uncertain system model with unstructured uncertainty. Finally, a robust minimax
linear quadratic Gaussian (LQG) control design is proposed for the linearized
model. The scheme guarantees the internal stability of the closed loop system
and provides robust performance. In order to illustrate the effectiveness of
this approach, the proposed method is applied to a tracking control problem for
an air-breathing hypersonic flight vehicle (AHFV).
|
1208.6067
|
Efficient Touch Based Localization through Submodularity
|
cs.RO
|
Many robotic systems deal with uncertainty by performing a sequence of
information gathering actions. In this work, we focus on the problem of
efficiently constructing such a sequence by drawing an explicit connection to
submodularity. Ideally, we would like a method that finds the optimal sequence,
taking the minimum amount of time while providing sufficient information.
Finding this sequence, however, is generally intractable. As a result, many
well-established methods select actions greedily. Surprisingly, this often
performs well. Our work first explains this high performance -- we note a
commonly used metric, reduction of Shannon entropy, is submodular under certain
assumptions, rendering the greedy solution comparable to the optimal plan in
the offline setting. However, reacting online to observations can increase
performance. Recently developed notions of adaptive submodularity provide
guarantees for a greedy algorithm in this online setting. In this work, we
develop new methods based on adaptive submodularity for selecting a sequence of
information gathering actions online. In addition to providing guarantees, we
can capitalize on submodularity to attain additional computational speedups. We
demonstrate the effectiveness of these methods in simulation and on a robot.
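The greedy selection strategy the abstract analyzes can be sketched in a few lines. The following toy is illustrative only (the touch-action names and the coverage objective are invented for the example, not taken from the paper), using set coverage as a stand-in for a submodular information-gain metric such as entropy reduction:

```python
def greedy_select(candidates, gain, k):
    """Pick k actions, each maximizing marginal gain over those
    already chosen. For a monotone submodular `gain`, the greedy
    set is within a (1 - 1/e) factor of the best size-k set."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

# Hypothetical touch actions, each ruling out a set of pose
# hypotheses; the number of hypotheses ruled out is submodular.
rules_out = {
    "touch_top":   {1, 2, 3},
    "touch_side":  {3, 4},
    "touch_front": {5},
    "touch_back":  {1, 5},
}

def coverage(actions):
    covered = set()
    for a in actions:
        covered |= rules_out[a]
    return len(covered)

plan = greedy_select(list(rules_out), coverage, k=2)
print(plan)  # picks "touch_top" first, then a tie-broken second action
```

The adaptive-submodular setting of the paper additionally conditions each choice on the observations received so far; the offline sketch above only shows the non-adaptive greedy rule.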
|
1208.6094
|
The Cycle Consistency Matrix Approach to Absorbing Sets in Separable
Circulant-Based LDPC Codes
|
cs.IT math.IT
|
For LDPC codes operating over additive white Gaussian noise channels and
decoded using message-passing decoders with limited precision, absorbing sets
have been shown to be a key factor in error floor behavior. Focusing on this
scenario, this paper introduces the cycle consistency matrix (CCM) as a
powerful analytical tool for characterizing and avoiding absorbing sets in
separable circulant-based (SCB) LDPC codes. SCB codes include a wide variety of
regular LDPC codes such as array-based LDPC codes as well as many common
quasi-cyclic codes. As a consequence of its cycle structure, each potential
absorbing set in an SCB LDPC code has a CCM, and an absorbing set can be
present in an SCB LDPC code only if the associated CCM has a nontrivial null
space.
CCM-based analysis can determine the multiplicity of an absorbing set in an
SCB code and CCM-based constructions avoid certain small absorbing sets
completely. While these techniques can be applied to an SCB code of any rate,
lower-rate SCB codes can usually avoid small absorbing sets because of their
higher variable node degree. This paper focuses attention on the high-rate
scenario in which the CCM constructions provide the most benefit. Simulation
results demonstrate that under limited-precision decoding the new codes have
steeper error-floor slopes and can provide one order of magnitude of
improvement in the low FER region.
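The necessary condition stated above (an absorbing set can be present only if its CCM has a nontrivial null space) reduces to a rank check. As a hedged illustration, the snippet below tests that condition for generic real matrices via NumPy's rank computation; the example matrices are invented, and the paper's actual CCM analysis works with the structured arithmetic of the circulants rather than arbitrary real matrices:

```python
import numpy as np

def has_nontrivial_nullspace(M, tol=1e-10):
    """True iff rank(M) < number of columns, i.e. some nonzero
    vector v satisfies M v = 0."""
    M = np.asarray(M, dtype=float)
    return np.linalg.matrix_rank(M, tol=tol) < M.shape[1]

full_rank = np.array([[1, 0], [0, 1], [1, 1]])   # columns independent
deficient = np.array([[1, 2], [2, 4], [3, 6]])   # second column = 2 * first
print(has_nontrivial_nullspace(full_rank))   # trivial null space only
print(has_nontrivial_nullspace(deficient))   # nontrivial null space exists
```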
|
1208.6106
|
Epistemic Temporal Logic for Information Flow Security
|
cs.CR cs.LO cs.MA
|
Temporal epistemic logic is a well-established framework for expressing
agents' knowledge and how it evolves over time. Within language-based security
these are central issues, for instance in the context of declassification. We
propose to bring these two areas together. The paper presents a computational
model and an epistemic temporal logic used to reason about knowledge acquired
by observing program outputs. This approach is shown to elegantly capture
standard notions of noninterference and declassification in the literature as
well as information flow properties where sensitive and public data intermingle
in delicate ways.
|
1208.6109
|
Average word length dynamics as indicator of cultural changes in society
|
cs.CL
|
The dynamics of average word length in Russian and English are analysed in
this article. The study covers words from the diachronic text corpus Google
Books Ngram dated to the last two centuries. It was found that average word
length increased slightly in the 19th century, grew rapidly through most of
the 20th century, and started decreasing over the period from the end of the
20th to the beginning of the 21st century. The words that contributed most to
the increase or decrease of average word length were identified, with content
words and function words analysed separately. Long content words contribute
most to average word length; as shown, these words reflect the main tendencies
of social development and are therefore used frequently. Changes in the
frequency of personal pronouns also contribute significantly to changes in
average word length. Other parameters connected with average word length were
analysed as well.
|
1208.6119
|
Improving information filtering via network manipulation
|
physics.soc-ph cs.SI physics.data-an
|
Recommender systems are a very promising way to address the problem of
information overload for online users. Although information filtering for
online commercial systems has received much attention recently, almost all
previous works are dedicated to designing new algorithms and treat the
user-item bipartite networks as given, constant information. However, many
problems of recommender systems, such as the cold-start problem (i.e., low
recommendation accuracy for small-degree items), are actually due to
limitations of the underlying user-item bipartite networks. In this letter, we
propose a strategy to enhance the performance of existing recommendation
algorithms by directly manipulating the user-item bipartite networks, namely
by adding virtual connections to the networks. Numerical analyses on two
benchmark data sets, MovieLens and Netflix, show that our method can
remarkably improve recommendation performance. Specifically, it not only
improves recommendation accuracy (especially for small-degree items), but also
helps recommender systems generate more diverse and novel recommendations.
|
1208.6125
|
Bounded-Contention Coding for Wireless Networks in the High SNR Regime
|
cs.NI cs.DC cs.DS cs.IT math.IT
|
Efficient communication in wireless networks is typically challenged by the
possibility of interference among several transmitting nodes. Much research
effort has been invested in decreasing the number of collisions in order to
obtain faster algorithms for communication in such networks.
This paper proposes a novel approach for wireless communication, which
embraces collisions rather than avoiding them, over an additive channel. It
introduces a coding technique called Bounded-Contention Coding (BCC) that
allows collisions to be successfully decoded by the receiving nodes into the
original transmissions and whose complexity depends on a bound on the
contention among the transmitters.
BCC enables deterministic local broadcast in a network with n nodes and at
most a transmitters with information of l bits each within O(a log n + al) bits
of communication with full-duplex radios, and O((a log n + al)(log n)) bits,
with high probability, with half-duplex radios. When combined with random
linear network coding, BCC gives global broadcast within O((D + a + log n)(a
log n + l)) bits, with high probability. This also holds in dynamic networks
that can change arbitrarily over time by a worst-case adversary. When no bound
on the contention is given, it is shown how to probabilistically estimate it
and obtain global broadcast that is adaptive to the true contention in the
network.
|
1208.6137
|
Benchmarking recognition results on word image datasets
|
cs.CV
|
We have benchmarked the maximum obtainable recognition accuracy on various
word image datasets using manual segmentation and a currently available
commercial OCR. We have developed a Matlab program, with graphical user
interface, for semi-automated pixel level segmentation of word images. We
discuss the advantages of pixel level annotation. We have covered five
databases adding up to over 3600 word images. These word images have been
cropped from camera captured scene, born-digital and street view images. We
recognize the segmented word image using the trial version of Nuance Omnipage
OCR. We also discuss how degradations introduced during acquisition, or
inaccuracies introduced during creation of the word images, affect recognition
of the word present in the image. Word images with different kinds of
degradation, and the correction of slanted and curved words, are also
discussed. The word recognition rates obtained on the ICDAR 2003, Sign evaluation,
Street view, Born-digital and ICDAR 2011 datasets are 83.9%, 89.3%, 79.6%,
88.5% and 86.7% respectively.
|
1208.6157
|
Resampling effects on significance analysis of network clustering and
ranking
|
physics.soc-ph cs.SI
|
Community detection helps us simplify the complex configuration of networks,
but communities are reliable only if they are statistically significant. To
detect statistically significant communities, a common approach is to resample
the original network and analyze the communities. But resampling assumes
independence between samples, while the components of a network are inherently
dependent. Therefore, we must understand how breaking dependencies between
resampled components affects the results of the significance analysis. Here we
use scientific communication as a model system to analyze this effect. Our
dataset includes citations among articles published in journals in the years
1984-2010. We compare parametric resampling of citations with non-parametric
article resampling. While citation resampling breaks link dependencies, article
resampling maintains such dependencies. We find that citation resampling
underestimates the variance of link weights. Moreover, this underestimation
explains most of the differences in the significance analysis of ranking and
clustering. Therefore, when only link weights are available and article
resampling is not an option, we suggest a simple parametric resampling scheme
that generates link-weight variances close to the link-weight variances of
article resampling. Nevertheless, when we highlight and summarize important
structural changes in science, the more dependencies we can maintain in the
resampling scheme, the earlier we can predict structural change.
|
1208.6189
|
Preserving Link Privacy in Social Network Based Systems
|
cs.CR cs.SI
|
A growing body of research leverages social network based trust relationships
to improve system functionality. However, these systems expose
users' trust relationships, which is considered sensitive information in
today's society, to an adversary.
In this work, we make the following contributions. First, we propose an
algorithm that perturbs the structure of a social graph in order to provide
link privacy, at the cost of slight reduction in the utility of the social
graph. Second, we define general metrics for characterizing the utility and
privacy of perturbed graphs. Third, we evaluate the utility and privacy of our
proposed algorithm using real world social graphs. Finally, we demonstrate the
applicability of our perturbation algorithm on a broad range of secure systems,
including Sybil defenses and secure routing.
|
1208.6231
|
Link Prediction via Generalized Coupled Tensor Factorisation
|
cs.LG
|
This study deals with the missing link prediction problem: the problem of
predicting the existence of missing connections between entities of interest.
We address link prediction using coupled analysis of relational datasets
represented as heterogeneous data, i.e., datasets in the form of matrices and
higher-order tensors. We propose to use an approach based on a probabilistic
interpretation of tensor factorisation models, i.e., Generalised Coupled
Tensor Factorisation, which can simultaneously fit a large class of tensor
models to higher-order tensors/matrices with common latent factors using
different loss functions. Numerical experiments demonstrate that joint
analysis of data from multiple sources via coupled factorisation improves link
prediction performance, and that the selection of the right loss function and
tensor model is crucial for accurately predicting missing links.
|
1208.6247
|
Solving Quadratic Equations via PhaseLift when There Are About As Many
Equations As Unknowns
|
cs.IT math.IT math.NA
|
This note shows that we can recover a complex vector x in C^n exactly from on
the order of n quadratic equations of the form |<a_i, x>|^2 = b_i, i = 1, ...,
m, by using a semidefinite program known as PhaseLift. This improves upon
earlier bounds in [3], which required the number of equations to be at least on
the order of n log n. We also demonstrate optimal recovery results from noisy
quadratic measurements; these results are much sharper than previously known
results.
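The lifting behind PhaseLift turns each quadratic constraint into a linear one on the rank-one matrix X = x x^*; the quick NumPy check below verifies that identity numerically (the semidefinite program itself, which relaxes the rank-one constraint on X, requires an SDP solver and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10
x = rng.normal(size=n) + 1j * rng.normal(size=n)            # unknown signal
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # measurement vectors a_i (rows)

# Quadratic measurements: b_i = |a_i . x|^2
b = np.abs(A @ x) ** 2

# Lifting: with X = x x^*, each measurement becomes linear in X,
# so the recovery problem becomes a semidefinite feasibility
# problem over X, from which PhaseLift extracts x.
X = np.outer(x, x.conj())
b_lifted = np.array([np.real(a @ X @ a.conj()) for a in A])

print(np.allclose(b, b_lifted))  # the two measurement models coincide
```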
|
1208.6255
|
Hierarchy in directed random networks
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an
|
In recent years, the theory and application of complex networks have been
developing quickly in a remarkable way, due to the increasing amount of data
from real systems and to the fruitful application of powerful methods from
statistical physics. Many important characteristics of social or biological
systems can be described by studying their underlying structure of
interactions. Hierarchy is one such feature that can be formulated in the
language of networks. In this paper we present some (qualitative) analytic
results on the hierarchical properties of random network models with zero
correlations, and also investigate, mainly numerically, the effects of
different types of correlations. The behavior of hierarchy differs in the
absence and in the presence of a giant component. We show that the
hierarchical structure can be drastically different if there are one-point
correlations in the network. We also show numerical results suggesting that
hierarchy does not change monotonically with the correlations, and that there
is an optimal level of non-zero correlations maximizing the level of
hierarchy.
|
1208.6268
|
Authorship Identification in Bengali Literature: a Comparative Analysis
|
cs.CL cs.IR
|
Stylometry is the study of the unique linguistic styles and writing behaviors
of individuals. It underlies core text categorization tasks such as authorship
identification and plagiarism detection. Although a reasonable number of
studies have been conducted for English, no major work has been done so far
for Bengali. In this work, we present a demonstration of authorship
identification for documents written in Bengali. We adopt a set of
fine-grained stylistic features for the analysis of the text and use them to
develop two different models: a statistical similarity model consisting of
three measures and their combination, and a machine learning model with
Decision Tree, Neural Network and SVM classifiers. Experimental results show
that SVM outperforms the other state-of-the-art methods under 10-fold cross
validation. We also validate the relative importance of each stylistic
feature, showing that some of them remain consistently significant in every
model used in this experiment.
|
1208.6273
|
Communicating Using an Energy Harvesting Transmitter: Optimum Policies
Under Energy Storage Losses
|
cs.IT math.IT
|
In this paper, short-term throughput optimal power allocation policies are
derived for an energy harvesting transmitter with energy storage losses. In
particular, the energy harvesting transmitter is equipped with a battery that
loses a fraction of its stored energy. Both the single-user setting, i.e., one
transmitter and one receiver, and the broadcast channel, i.e., one transmitter
and multiple receivers, are considered, initially with an infinite-capacity
battery. It is shown that the optimal policies for these
models are threshold policies. Specifically, storing energy when harvested
power is above an upper threshold, retrieving energy when harvested power is
below a lower threshold, and transmitting with the harvested energy in between
is shown to maximize the weighted sum-rate. It is observed that the two
thresholds are related through the storage efficiency of the battery, and are
nondecreasing during the transmission. The results are then extended to the
case with finite battery capacity, where it is shown that a similar
double-threshold structure arises but the thresholds are no longer monotonic. A
dynamic program that yields an optimal online power allocation is derived, and
is shown to have a similar double-threshold structure. A simpler online policy
is proposed and observed to perform close to the optimal policy.
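The double-threshold structure described above can be phrased as a tiny decision rule. The sketch below is illustrative only: the numeric thresholds are invented, and the relation lower = eta * upper is our simplifying reading of "the two thresholds are related through the storage efficiency", not a formula quoted from the paper:

```python
def threshold_action(harvest, lower, upper):
    """Single-user double-threshold rule: transmit directly with the
    harvested power when it lies between the thresholds, store the
    excess above the upper threshold, and draw from the battery when
    below the lower threshold."""
    if harvest > upper:
        return "store excess"
    if harvest < lower:
        return "retrieve from battery"
    return "transmit harvested power"

eta = 0.7             # battery storage efficiency (assumed value)
upper = 2.0           # illustrative upper threshold
lower = eta * upper   # assumed relation between the two thresholds

print(threshold_action(3.0, lower, upper))  # store excess
print(threshold_action(0.5, lower, upper))  # retrieve from battery
print(threshold_action(1.5, lower, upper))  # transmit harvested power
```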
|
1208.6279
|
One-bit compressed sensing with non-Gaussian measurements
|
cs.IT math.IT
|
In one-bit compressed sensing, previous results state that sparse signals may
be robustly recovered when the measurements are taken using Gaussian random
vectors. In contrast to standard compressed sensing, these results are not
extendable to natural non-Gaussian distributions without further assumptions,
as can be demonstrated by simple counter-examples. We show that approximately
sparse signals that are not extremely sparse can be accurately reconstructed
from single-bit measurements sampled according to a sub-gaussian distribution,
and the reconstruction comes as the solution to a convex program.
|
1208.6289
|
Lift-off dynamics in a simple jumping robot
|
physics.class-ph cs.RO nlin.CD
|
We study vertical jumping in a simple robot comprising an actuated
mass-spring arrangement. The actuator frequency and phase are systematically
varied to find optimal performance. Optimal jumps occur above and below (but
not at) the robot's resonant frequency $f_0$. Two distinct jumping modes
emerge: a simple jump which is optimal above $f_0$ is achievable with a squat
maneuver, and a peculiar stutter jump which is optimal below $f_0$ is generated
with a counter-movement. A simple dynamical model reveals how optimal lift-off
results from non-resonant transient dynamics.
|
1208.6308
|
Distributed Cross-Layer Optimization in Wireless Networks: A
Second-Order Approach
|
cs.NI cs.DC cs.IT cs.SY math.IT math.OC
|
Due to the rapidly growing scale and heterogeneity of wireless networks, the
design of distributed cross-layer optimization algorithms has received
significant interest from the networking research community. So far, the
standard distributed cross-layer approach in the literature is based on
first-order Lagrangian dual decomposition and the subgradient method, which
suffers from a slow convergence rate. In this paper, we make the first known
attempt to develop a distributed Newton's method, which is second-order and
enjoys a quadratic convergence rate. However, due to interference in wireless
networks, the Hessian matrix of the cross-layer problem has a non-separable structure.
As a result, developing a distributed second-order algorithm is far more
challenging than its counterpart for wireline networks. Our main results in
this paper are two-fold: i) For a special network setting where all links
mutually interfere, we derive decentralized closed-form expressions to compute
the Hessian inverse; ii) For general wireless networks where the interference
relationships are arbitrary, we propose a distributed iterative matrix
splitting scheme for the Hessian inverse. These results successfully lead to a
new theoretical framework for cross-layer optimization in wireless networks.
More importantly, our work contributes to an exciting second-order paradigm
shift in wireless network optimization theory.
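Matrix-splitting schemes of the kind proposed for the Hessian inverse can be illustrated with the classic Jacobi splitting H = D + R (the paper's particular splitting and its distributed message structure are more involved; this generic sketch with an invented matrix only shows the idea):

```python
import numpy as np

def jacobi_solve(H, g, iters=200):
    """Approximate H^{-1} g via the Jacobi splitting H = D + R,
    iterating x <- D^{-1} (g - R x). Converges when H is strictly
    diagonally dominant; each coordinate update needs only local
    information, which is what makes splitting schemes attractive
    for distributed computation."""
    D = np.diag(H)
    R = H - np.diag(D)
    x = np.zeros_like(g, dtype=float)
    for _ in range(iters):
        x = (g - R @ x) / D
    return x

# Invented, diagonally dominant "Hessian" for illustration:
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
g = np.array([1.0, 2.0, 3.0])
x = jacobi_solve(H, g)
print(np.max(np.abs(H @ x - g)))  # residual is tiny after 200 sweeps
```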
|
1208.6310
|
Automated Marble Plate Classification System Based On Different Neural
Network Input Training Sets and PLC Implementation
|
cs.NE cs.LG
|
Sorting marble plates according to their surface texture is an important task
in automated marble plate production. Current inspection systems in the marble
industry that automate the classification task are expensive and compatible
only with specific technological equipment in the plant. In this paper a new
approach to the design of an Automated Marble Plate Classification System
(AMPCS), based on different neural network input training sets, is proposed,
aiming at high classification accuracy using simple processing and only
standard devices. It is based on training a classification MLP neural network
with three different input training sets: extracted texture histograms, and
the Discrete Cosine and Wavelet Transforms of those histograms. The algorithm
is implemented in a PLC for real-time operation. The performance of the system
is assessed with each of the input training sets, and experimental test
results regarding classification accuracy and speed of operation are presented
and discussed.
|
1208.6326
|
Pisces: Anonymous Communication Using Social Networks
|
cs.CR cs.NI cs.SI
|
The architectures of deployed anonymity systems such as Tor suffer from two
key problems that limit users' trust in these systems. First, paths for
anonymous communication are built without considering trust relationships
between users and relays in the system. Second, the network architecture relies
on a set of centralized servers. In this paper, we propose Pisces, a
decentralized protocol for anonymous communications that leverages users'
social links to build circuits for onion routing. We argue that such an
approach greatly improves the system's resilience to attackers.
A fundamental challenge in this setting is the design of a secure process to
discover peers for use in a user's circuit. All existing solutions for secure
peer discovery leverage structured topologies and cannot be applied to
unstructured social network topologies. In Pisces, we discover peers by using
random walks in the social network graph with a bias away from highly connected
nodes to prevent a few nodes from dominating the circuit creation process. To
secure the random walks, we leverage the reciprocal neighbor policy: if
malicious nodes try to exclude honest nodes during peer discovery so as to
improve the chance of being selected, then honest nodes can use a tit-for-tat
approach and reciprocally exclude the malicious nodes from their routing
tables. We describe a fully decentralized protocol for enforcing this policy,
and use it to build the Pisces anonymity system.
Using theoretical modeling and experiments on real-world social network
topologies, we show that (a) the reciprocal neighbor policy mitigates active
attacks that an adversary can perform, (b) our decentralized protocol to
enforce this policy is secure and has low overhead, and (c) the overall
anonymity provided by our system significantly outperforms existing approaches.
|
1208.6335
|
Comparative Study and Optimization of Feature-Extraction Techniques for
Content based Image Retrieval
|
cs.CV cs.AI cs.IR cs.LG cs.MM
|
The aim of a Content-Based Image Retrieval (CBIR) system, also known as Query
by Image Content (QBIC), is to help users to retrieve relevant images based on
their contents. CBIR technologies provide a method to find images in large
databases by using unique descriptors from a trained image. The image
descriptors include texture, color, intensity and shape of the object inside an
image. Several feature-extraction techniques viz., Average RGB, Color Moments,
Co-occurrence, Local Color Histogram, Global Color Histogram and Geometric
Moment have been critically compared in this paper. However, individually these
techniques result in poor performance. So, combinations of these techniques
have also been evaluated and results for the most efficient combination of
techniques have been presented and optimized for each class of image query. We
also propose an improvement in image retrieval performance by introducing the
idea of Query modification through image cropping. It enables the user to
identify a region of interest and modify the initial query to refine and
personalize the image retrieval results.
|
1208.6338
|
A Widely Applicable Bayesian Information Criterion
|
cs.LG stat.ML
|
A statistical model or a learning machine is called regular if the map taking
a parameter to a probability distribution is one-to-one and if its Fisher
information matrix is always positive definite. Otherwise, it is called
singular. In regular statistical models, the Bayes free energy, which is
defined by the minus logarithm of Bayes marginal likelihood, can be
asymptotically approximated by the Schwarz Bayes information criterion (BIC),
whereas in singular models such approximation does not hold.
Recently, it was proved that the Bayes free energy of a singular model is
asymptotically given by a generalized formula using a birational invariant, the
real log canonical threshold (RLCT), instead of half the number of parameters
in BIC. Theoretical values of RLCTs in several statistical models are now being
discovered based on algebraic geometrical methodology. However, it has been
difficult to estimate the Bayes free energy using only training samples,
because an RLCT depends on an unknown true distribution.
In the present paper, we define a widely applicable Bayesian information
criterion (WBIC) by the average log likelihood function over the posterior
distribution with the inverse temperature $1/\log n$, where $n$ is the number
of training samples. We mathematically prove that WBIC has the same asymptotic
expansion as the Bayes free energy, even if the statistical model is singular
for, or unrealizable by, the true distribution. Since WBIC can be numerically
calculated without any information about the true distribution, it is a
generalization of BIC to singular statistical models.
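The definition above is directly computable: WBIC is the posterior average of the negative log likelihood, with the posterior tempered at inverse temperature 1/log n. The toy check below is our own construction, not from the paper; it evaluates WBIC by grid quadrature for a Bernoulli model with a uniform prior, where the Bayes free energy has a closed form, and the two quantities agree to leading order:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
n = 1000
data = rng.random(n) < 0.3   # Bernoulli(0.3) sample
k = int(data.sum())

# Exact Bayes free energy under a uniform prior on theta:
#   F = -log \int theta^k (1 - theta)^(n - k) dtheta = -log B(k+1, n-k+1)
F = -(lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2))

# WBIC: posterior expectation of n L_n(theta) at beta = 1 / log n
theta = np.linspace(1e-6, 1 - 1e-6, 100_000)
nll = -(k * np.log(theta) + (n - k) * np.log(1.0 - theta))  # n L_n(theta)
beta = 1.0 / np.log(n)
logw = -beta * nll
w = np.exp(logw - logw.max())   # tempered posterior weights (stable)
w /= w.sum()
wbic = float(np.sum(w * nll))

print(F, wbic)  # both close to n*S + (1/2) log n for this regular model
```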
|
1208.6357
|
Linear Transceiver Design for a MIMO Interfering Broadcast Channel
Achieving Max-Min Fairness
|
cs.IT math.IT
|
We consider the problem of linear transceiver design to achieve max-min
fairness in a downlink MIMO multicell network. This problem can be formulated
as maximizing the minimum rate among all the users in an interfering broadcast
channel (IBC). In this paper we show that when the number of antennas is at
least two at each of the transmitters and the receivers, the min rate
maximization problem is NP-hard in the number of users. Moreover, we develop a
low-complexity algorithm for this problem by iteratively solving a sequence of
convex subproblems, and establish its global convergence to a stationary point
of the original minimum rate maximization problem. Numerical simulations show
that this algorithm is efficient in achieving fairness among all the users.
|
1208.6379
|
Development of a Novel Robot for Transperineal Needle Based
Interventions: Focal Therapy, Brachytherapy and Prostate Biopsies
|
cs.RO physics.med-ph
|
Purpose: We report what is to our knowledge the initial experience with a new
3-dimensional ultrasound robotic system for prostate brachytherapy assistance,
focal therapy and prostate biopsies. Its ability to track prostate motion
intraoperatively allows it to manage motions and guide needles to predefined
targets. Materials and Methods: A robotic system was created for transrectal
ultrasound guided needle implantation combined with intraoperative prostate
tracking. Experiments were done on 90 targets embedded in a total of 9 mobile,
deformable, synthetic prostate phantoms. Experiments involved trying to insert
glass beads as close as possible to targets in multimodal anthropomorphic
imaging phantoms. Results were measured by segmenting the inserted beads in
computerized tomography volumes of the phantoms. Results: The robot reached the
chosen targets in phantoms with a median accuracy of 2.73 mm and a median
prostate motion of 5.46 mm. Accuracy was better at the apex than at the base
(2.28 vs 3.83 mm, p <0.001), and similar for horizontal and angled needle
inclinations (2.7 vs 2.82 mm, p = 0.18). Conclusions: To our knowledge this
robot for prostate focal therapy, brachytherapy and targeted prostate biopsies
is the first system to use intraoperative prostate motion tracking to guide
needles into the prostate. Preliminary experiments show its ability to reach
targets despite prostate motion.
|
1208.6388
|
First Clinical Experience in Urologic Surgery with a Novel Robotic
Lightweight Laparoscope Holder
|
cs.RO physics.med-ph
|
Purpose: To report the feasibility and the safety of a surgeon-controlled
robotic endoscope holder in laparoscopic surgery. Materials and methods: From
March 2010 to September 2010, 20 patients were enrolled prospectively to
undergo a laparoscopic surgery using an innovative robotic endoscope holder.
Two surgeons performed 6 adrenalectomies, 4 sacrocolpopexies, 5 pyeloplasties,
4 radical prostatectomies and 1 radical nephrectomy. Demographic data, overall
set-up time, operative time, and the number of assistants needed were reviewed.
Surgeon's satisfaction regarding the ergonomics was assessed using a ten point
scale. Postoperative clinical outcomes were reviewed at day 1 and 1 month
postoperatively. Results: The per-protocol analysis was performed on 17
patients for whom the robot was effectively used for surgery. Median age was 63
years, 10 patients were female (59%). Median BMI was 26.8. Surgical procedures
were completed with the robot in 12 cases (71%). The median number of surgical
assistants was 0. Overall set-up time with the robot was 19 min, and operative
time was 130 min, during which the robot was used 71% of the time. Mean
hospital stay was 6.94 $\pm$ 2.3 days. The median score for ease of use was 7.
Median pain level was 1.5/10 at day 1 and 0 at 1 month postoperatively. Open
conversion was needed in 1 case (6 %) and 4 minor complications occurred in 2
patients (12%). Conclusion: The use of this novel robotic laparoscope holder
is safe and feasible, and it provides good comfort to the surgeon.
|
1208.6412
|
Adaptive Generation Method of OFDM Signals in SLM Schemes for
Low-complexity
|
cs.IT math.IT
|
There are many selected mapping (SLM) schemes to reduce the peak-to-average
power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM)
signals. Beginning with the conventional SLM scheme, many low-complexity SLM
schemes have been proposed, typified by Lim's, Wang's, and Baxely's SLM
schemes. In this paper, we propose an adaptive generation (AG) method
of OFDM signals in SLM schemes. By generating the alternative OFDM signals
adaptively, unnecessary computational complexity of SLM schemes can be removed
without any degradation of their PAPR reduction performance. In this paper, we
apply the AG method to various SLM schemes which are the conventional SLM
scheme and its low-complexity versions such as Lim's, Wang's, and Baxely's SLM
schemes. Of course, the AG method can be applied easily to most existing SLM
schemes. The numerical results show that the AG method can reduce their
computational complexity substantially.
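As background, the PAPR computation and the conventional SLM candidate selection that these schemes build on can be sketched in a few lines (an illustrative baseline only; the AG method and the low-complexity variants are not reproduced here, and the choice of U = 8 phase sequences is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_select(X, U=8):
    """Conventional SLM: multiply the frequency-domain symbols X by U
    candidate phase sequences (the first being all-ones, i.e. the original
    signal), take the IFFT of each, and keep the candidate with lowest PAPR."""
    N = len(X)
    best_x, best_p = None, np.inf
    for u in range(U):
        phases = np.ones(N) if u == 0 else rng.choice([1, -1, 1j, -1j], size=N)
        x = np.fft.ifft(X * phases)
        p = papr_db(x)
        if p < best_p:
            best_x, best_p = x, p
    return best_x, best_p

# One 64-subcarrier QPSK OFDM symbol
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
x0 = np.fft.ifft(X)
x_slm, p_slm = slm_select(X)
print(f"PAPR without SLM: {papr_db(x0):.2f} dB, with SLM: {p_slm:.2f} dB")
```

Because the unrotated signal is kept as the first candidate, the selected PAPR never exceeds that of the original symbol.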
|
1208.6416
|
Relational Databases and Bell's Theorem
|
cs.LO cs.DB quant-ph
|
Our aim in this paper is to point out a surprising formal connection between
two topics which seem at face value to have nothing to do with each other:
relational database theory, and the study of non-locality and contextuality in
the foundations of quantum mechanics. We shall show that there is a remarkably
direct correspondence between central results such as Bell's theorem in the
foundations of quantum mechanics, and questions which arise naturally and have
been well-studied in relational database theory.
|
1208.6421
|
A Novel Service Oriented Model for Query Identification and Solution
Development using Semantic Web and Multi Agent System
|
cs.MA cs.SE
|
In this paper, we propose a service model architecture that merges multi-agent
systems and semantic web technology. The proposed architecture works
in two stages namely, Query Identification and Solution Development. A person
referred to as customer will submit the problem details or requirements which
will be referred to as a query. Anyone who can provide a service will need to
register with the registrar module of the architecture. Services can be
anything ranging from expert consultancy in the field of agriculture to
academic research, from selling products to manufacturing goods, from medical
help to legal issues or even providing logistics. The query submitted by the
customer is first parsed and then iteratively refined with the help of domain
experts and the customer to obtain a precise set of properties. The identified
query is then solved with the help of intelligent agent systems, which search
the semantic web for all those who can find or provide a solution. A workable
solution workflow is created and then, depending on the requirements, the
solution is implemented using negotiation or auctioning techniques to complete
the service for the customer. This part is termed solution development. In this
service oriented architecture, we first analyze the complex set of user
requirements and then provide the best possible solution in an optimized way by
combining information searches through the semantic web with workflow
provisioning using multi-agent systems.
|
1208.6454
|
Spread of Influence and Content in Mobile Opportunistic Networks
|
cs.SI cs.MA cs.SY
|
We consider a setting in which a single item of content (such as a song or a
video clip) is disseminated in a population of mobile nodes by opportunistic
copying when pairs of nodes come in radio contact. We propose and study models
that capture the joint evolution of the population of nodes interested in the
content (referred to as destinations), and the population of nodes that possess
the content. The evolution of interest in the content is captured using an
influence spread model and the content spread occurs via epidemic copying.
Nodes not yet interested in the content are called relays; the influence spread
process converts relays into destinations. We consider the decentralized
setting, where interest in the content and the spread of the content evolve by
pairwise interactions between the mobiles. We derive fluid limits for the joint
evolution models and obtain optimal policies for copying to relay nodes in
order to deliver content to a desired fraction of destinations. We prove that a
time-threshold policy is optimal while copying to relays. We then provide
insights into the effects of various system parameters on the co-evolution
model through simulations.
|
1208.6464
|
Bayesian compressed sensing with new sparsity-inducing prior
|
cs.IT math.IT
|
Sparse Bayesian learning (SBL) is a popular approach to sparse signal
recovery in compressed sensing (CS). In SBL, the signal sparsity information is
exploited by assuming a sparsity-inducing prior for the signal that is then
estimated using Bayesian inference. In this paper, a new sparsity-inducing
prior is introduced and efficient algorithms are developed for signal recovery.
The main algorithm is shown to produce a sparser solution than existing SBL
methods while preserving their desirable properties. Numerical simulations with
one-dimensional synthetic signals and two-dimensional images verify our
analysis and show that for sparse signals the proposed algorithm outperforms
its SBL peers in both the signal recovery accuracy and computational speed. Its
improved performance is also demonstrated in comparison with other
state-of-the-art methods in CS.
|
1208.6493
|
Shannon's sampling theorem in a distributional setting
|
math.FA cs.IT math.IT
|
The classical Shannon sampling theorem states that a signal f with Fourier
transform F in L^2(R) having its support contained in (-\pi,\pi) can be
recovered from the sequence of samples (f(n))_{n in Z} via f(t)=\sum_{n in Z}
f(n) (sin(\pi (t -n)))/(\pi (t-n)) (t in R). In this article we prove a
generalization of this result under the assumption that F is a compactly
supported distribution with its support contained in (-\pi,\pi).
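A minimal numerical check of the cardinal series, using numpy's normalized sinc (sin(pi x)/(pi x)) and a toy band-limited signal chosen so that the truncated sum is exact (this illustrates the classical theorem, not the distributional generalization):

```python
import numpy as np

def shannon_reconstruct(samples, t):
    """Cardinal series f(t) = sum_n f(n) sinc(t - n), with samples taken at
    the integers n = 0, 1, ..., len(samples) - 1."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(t - n))

# A signal with spectrum confined to the Nyquist band: a sum of two shifted
# sinc pulses. Its integer samples vanish except at n = 20 and n = 30, so
# the truncated series reproduces it essentially exactly.
f = lambda t: np.sinc(t - 20) + 0.5 * np.sinc(t - 30)
samples = f(np.arange(64.0))
t = 25.3  # an off-grid evaluation point
print(abs(shannon_reconstruct(samples, t) - f(t)))  # ~ machine precision
```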
|
1208.6516
|
A two-stage denoising filter: the preprocessed Yaroslavsky filter
|
cs.CV math.ST stat.TH
|
This paper describes a simple image noise removal method which combines a
preprocessing step with the Yaroslavsky filter for strong numerical, visual,
and theoretical performance on a broad class of images. The framework developed
is a two-stage approach. In the first stage the image is filtered with a
classical denoising method (e.g., wavelet or curvelet thresholding). In the
second stage a modification of the Yaroslavsky filter is performed on the
original noisy image, where the weights of the filters are governed by pixel
similarities in the denoised image from the first stage. Similar prefiltering
ideas have proved effective previously in the literature, and this paper
provides theoretical guarantees and important insight into why prefiltering can
be effective. Empirically, this simple approach achieves very good performance
for cartoon images, and can be computed much more quickly than current
patch-based denoising algorithms.
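The two-stage idea can be sketched as follows, with a simple box prefilter standing in for the wavelet or curvelet thresholding of the first stage (an assumption made for brevity; the window radius and bandwidth h are illustrative choices, not the paper's):

```python
import numpy as np

def yaroslavsky_prefiltered(noisy, guide, radius=3, h=0.3):
    """Second-stage filter: each output pixel is a weighted average of the
    NOISY image over a (2*radius+1)^2 window, with weights
    exp(-(guide_i - guide_j)^2 / h^2) taken from the prefiltered guide."""
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            w = np.exp(-((guide[i0:i1, j0:j1] - guide[i, j]) ** 2) / h ** 2)
            out[i, j] = np.sum(w * noisy[i0:i1, j0:j1]) / np.sum(w)
    return out

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                  # a cartoon edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Stage 1: a box prefilter as a stand-in for wavelet/curvelet thresholding
k = 2
pad = np.pad(noisy, k, mode="edge")
guide = np.zeros_like(noisy)
for i in range(32):
    for j in range(32):
        guide[i, j] = pad[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()

denoised = yaroslavsky_prefiltered(noisy, guide)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

On this cartoon image the prefiltered guide keeps the weights from averaging across the edge, while flat regions are averaged over nearly the whole window, so the mean squared error drops well below that of the noisy input.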
|
1208.6523
|
Combinatorial Gradient Fields for 2D Images with Empirically Convergent
Separatrices
|
cs.CV cs.CG cs.DM
|
This paper proposes an efficient probabilistic method that computes
combinatorial gradient fields for two dimensional image data. In contrast to
existing algorithms, this approach yields a geometric Morse-Smale complex that
converges almost surely to its continuous counterpart when the image resolution
is increased. This approach is motivated using basic ideas from probability
theory and builds upon an algorithm from discrete Morse theory with a strong
mathematical foundation. While a formal proof is only hinted at, we do provide
a thorough numerical evaluation of our method and compare it to established
algorithms.
|
1209.0001
|
An Improved Bound for the Nystrom Method for Large Eigengap
|
cs.LG cs.NA stat.ML
|
We develop an improved bound for the approximation error of the Nystr\"{o}m
method under the assumption that there is a large eigengap in the spectrum of
kernel matrix. This is based on the empirical observation that the eigengap has
a significant impact on the approximation error of the Nystr\"{o}m method. Our
approach is based on the concentration inequality of integral operator and the
theory of matrix perturbation. Our analysis shows that when there is a large
eigengap, we can improve the approximation error of the Nystr\"{o}m method from
$O(N/m^{1/4})$ to $O(N/m^{1/2})$ when measured in Frobenius norm, where $N$ is
the size of the kernel matrix, and $m$ is the number of sampled columns.
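A minimal sketch of the Nystrom approximation itself, on a synthetic kernel matrix with a large eigengap (the two-cluster construction and the rcond regularization are our illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters give an RBF kernel matrix with a
# large eigengap: K is numerically close to rank two.
X = np.vstack([rng.normal(0.0, 0.1, (100, 2)), rng.normal(5.0, 0.1, (100, 2))])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)

# Nystrom approximation from m uniformly sampled columns: K ~= C W^+ C^T,
# with a lightly regularized pseudoinverse for numerical stability.
m = 20
idx = rng.choice(len(X), size=m, replace=False)
C = K[:, idx]               # N x m
W = K[np.ix_(idx, idx)]     # m x m
K_hat = C @ np.linalg.pinv(W, rcond=1e-6) @ C.T

err = np.linalg.norm(K - K_hat, "fro") / np.linalg.norm(K, "fro")
print(f"relative Frobenius error with m = {m}: {err:.1e}")
```

With a pronounced eigengap, even a small random column sample captures the dominant eigenspace, so the relative Frobenius error is small, consistent with the improved rate discussed above.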
|
1209.0029
|
Statistically adaptive learning for a general class of cost functions
(SA L-BFGS)
|
cs.LG stat.ML
|
We present a system that enables rapid model experimentation for tera-scale
machine learning with trillions of non-zero features, billions of training
examples, and millions of parameters. Our contribution to the literature is a
new method (SA L-BFGS) for changing batch L-BFGS to perform in near real-time
by using statistical tools to balance the contributions of previous weights,
old training examples, and new training examples to achieve fast convergence
with few iterations. The result is, to our knowledge, the most scalable and
flexible linear learning system reported in the literature, beating standard
practice with the current best system (Vowpal Wabbit and AllReduce). Using the
KDD Cup 2012 data set from Tencent, Inc. we provide experimental results to
verify the performance of this method.
|
1209.0047
|
The Degrees of Freedom Region of the MIMO Interference Channel with
Hybrid CSIT
|
cs.IT math.IT
|
The degrees of freedom (DoF) region of the two-user MIMO (multiple-input
multiple-output) interference channel is established under a new model termed
as hybrid CSIT. In this model, one transmitter has delayed channel state
information (CSI) and the other transmitter has instantaneous CSIT, of incoming
channel matrices at the respective unpaired receivers, and neither transmitter
has any knowledge of the incoming channel matrices of its respective paired
receiver. The DoF region for hybrid CSIT, and consequently that of
$2\times2\times3^{5}$ CSIT models, is completely characterized, and a new
achievable scheme based on a combination of transmit beamforming and
retrospective interference alignment is developed. Conditions are obtained on
the numbers of antennas at each of the four terminals such that the DoF region
under hybrid CSIT is equal to that under (a) global and instantaneous CSIT and
(b) global and delayed CSIT, with the remaining cases resulting in a DoF region
with hybrid CSIT that lies somewhere in between the DoF regions under the
instantaneous and delayed CSIT settings. Further synergistic benefits accruing
from switching between the two hybrid CSIT models are also explored.
|
1209.0053
|
A Session Based Blind Watermarking Technique within the NROI of Retinal
Fundus Images for Authentication Using DWT, Spread Spectrum and Harris Corner
Detection
|
cs.CV cs.CY
|
Digital Retinal Fundus Images help to detect various ophthalmic diseases by
revealing morphological changes in the optical cup, optical disc and macula.
The present work proposes a method for the authentication of medical images
based on Discrete Wavelet Transformation (DWT) and Spread Spectrum. Proper
selection of the Non Region of Interest (NROI) for watermarking is crucial, as
the area under concern has to be the least required portion conveying any
medical information. The proposed method discusses both the selection of the
least-impact area and the blind watermarking technique. The watermark is
embedded within the High-High (HH) sub-band. During embedding, the watermarked
image is dispersed within the band using a pseudo-random sequence and a
session key. The watermarked image is extracted using the session key and the
size of the image. In this approach, the generated watermarked image, having
an acceptable level of imperceptibility and distortion, is compared to the
original retinal image based on Peak Signal to Noise Ratio (PSNR) and
correlation value.
|
1209.0056
|
Learning implicitly in reasoning in PAC-Semantics
|
cs.AI cs.DS cs.LG cs.LO
|
We consider the problem of answering queries about formulas of propositional
logic based on background knowledge partially represented explicitly as other
formulas, and partially represented as partially obscured examples
independently drawn from a fixed probability distribution, where the queries
are answered with respect to a weaker semantics than usual -- PAC-Semantics,
introduced by Valiant (2000) -- that is defined using the distribution of
examples. We describe a fairly general, efficient reduction to limited versions
of the decision problem for a proof system (e.g., bounded space treelike
resolution, bounded degree polynomial calculus, etc.) from corresponding
versions of the reasoning problem where some of the background knowledge is not
explicitly given as formulas, only learnable from the examples. Crucially, we
do not generate an explicit representation of the knowledge extracted from the
examples, and so the "learning" of the background knowledge is only done
implicitly. As a consequence, this approach can utilize formulas as background
knowledge that are not perfectly valid over the distribution---essentially the
analogue of agnostic learning here.
|
1209.0057
|
Anchoring Bias in Online Voting
|
physics.data-an cs.IR physics.soc-ph
|
Voting online with explicit ratings could largely reflect people's
preferences and objects' qualities, but ratings are always irrational, because
they may be affected by many unpredictable factors like mood, weather, as well
as other people's votes. By analyzing two real systems, this paper reveals a
systematic bias embedded in the individual decision-making processes, namely
people tend to give a low rating after a low rating, as well as a high rating
following a high rating. This so-called \emph{anchoring bias} is validated via
extensive comparisons with null models, and numerically speaking, the extent of
bias decays with interval voting number in a logarithmic form. Our findings
could be applied in the design of recommender systems and considered as
important complementary materials to previous knowledge about anchoring effects
on financial trades, performance judgements, auctions, and so on.
|
1209.0061
|
Compensation of IQ-Imbalance and Phase Noise in MIMO-OFDM Systems
|
cs.IT math.IT
|
The degrading effect of RF impairments on the performance of wireless
communication systems is more pronounced in MIMO-OFDM transmission. Two of the
most common impairments that significantly limit the performance of MIMO-OFDM
transceivers are IQ-imbalance and phase noise. Low-complexity estimation and
compensation techniques that can jointly remove the effect of these impairments
are highly desirable. In this paper, we propose a simple joint estimation and
compensation technique to estimate channel, phase noise and IQ-imbalance
parameters in MIMO-OFDM systems under multipath slow fading channels. A
subcarrier multiplexed preamble structure to estimate the channel and
impairment parameters with minimum overhead is introduced and used in the
estimation of IQ-imbalance parameters as well as the initial estimation of
effective channel matrix including common phase error (CPE). We then use a
novel tracking method based on the second order statistics of the inter-carrier
interference (ICI) and noise to update the effective channel matrix throughout
an OFDM frame. Simulation results for a variety of scenarios show that the
proposed low-complexity estimation and compensation technique can efficiently
improve the performance of MIMO-OFDM systems in terms of bit-error-rate (BER).
|
1209.0082
|
Recursive quantum convolutional encoders are catastrophic: A simple
proof
|
quant-ph cs.IT math.IT
|
Poulin, Tillich, and Ollivier discovered an important separation between the
classical and quantum theories of convolutional coding, by proving that a
quantum convolutional encoder cannot be both non-catastrophic and recursive.
Non-catastrophicity is desirable so that an iterative decoding algorithm
converges when decoding a quantum turbo code whose constituents are quantum
convolutional codes, and recursiveness is desirable so that a quantum turbo
code has a minimum distance growing nearly linearly with the length of the
code. Their proof of the aforementioned theorem was admittedly "rather
involved," and as such, it has been desirable since their result to find a
simpler proof. In this paper, we furnish a proof that is arguably simpler. Our
approach is group-theoretic---we show that the subgroup of memory states that
are part of a zero physical-weight cycle of a quantum convolutional encoder is
equivalent to the centralizer of its "finite-memory" subgroup (the subgroup of
memory states which eventually reach the identity memory state by identity
operator inputs for the information qubits and identity or Pauli-Z operator
inputs for the ancilla qubits). After proving that this symmetry holds for any
quantum convolutional encoder, it easily follows that an encoder is
non-recursive if it is non-catastrophic. Our proof also illuminates why this
no-go theorem does not apply to entanglement-assisted quantum convolutional
encoders---the introduction of shared entanglement as a resource allows the
above symmetry to be broken.
|
1209.0089
|
Estimating the historical and future probabilities of large terrorist
events
|
physics.data-an cs.LG physics.soc-ph stat.AP stat.ME
|
Quantities with right-skewed distributions are ubiquitous in complex social
systems, including political conflict, economics and social networks, and these
systems sometimes produce extremely large events. For instance, the 9/11
terrorist events produced nearly 3000 fatalities, nearly six times more than
the next largest event. But, was this enormous loss of life statistically
unlikely given modern terrorism's historical record? Accurately estimating the
probability of such an event is complicated by the large fluctuations in the
empirical distribution's upper tail. We present a generic statistical algorithm
for making such estimates, which combines semi-parametric models of tail
behavior and a nonparametric bootstrap. Applied to a global database of
terrorist events, we estimate the worldwide historical probability of observing
at least one 9/11-sized or larger event since 1968 to be 11-35%. These results
are robust to conditioning on global variations in economic development,
domestic versus international events, the type of weapon used and a truncated
history that stops at 1998. We then use this procedure to make a data-driven
statistical forecast of at least one similar event over the next decade.
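The general recipe, a parametric tail model combined with a nonparametric bootstrap, can be sketched on synthetic data (a plain Pareto tail fitted with the Hill/maximum-likelihood estimator stands in for the paper's semi-parametric models; the event sizes, thresholds, and bootstrap count are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "event sizes" with a Pareto tail, standing in for the database:
# P(X >= x) = x^{-1.4} for x >= 1.
events = rng.random(5000) ** (-1 / 1.4)
xmin, x_big, B = 10.0, 1000.0, 500

probs = []
for _ in range(B):
    boot = rng.choice(events, size=len(events), replace=True)
    tail = boot[boot >= xmin]
    alpha = 1 + len(tail) / np.sum(np.log(tail / xmin))  # Hill/MLE exponent
    p_tail = len(tail) / len(boot)                       # P(X >= xmin)
    p_big = p_tail * (x_big / xmin) ** (1 - alpha)       # fitted P(X >= x_big)
    probs.append(1 - (1 - p_big) ** len(boot))           # P(>= 1 such event)

lo, hi = np.percentile(probs, [2.5, 97.5])
print(f"bootstrap 95% interval for P(at least one event >= {x_big:.0f}): "
      f"[{lo:.2f}, {hi:.2f}]")
```

Each bootstrap replicate refits the tail exponent, so the resulting interval reflects the large upper-tail fluctuations that make point estimates alone unreliable.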
|
1209.0113
|
Using Space-Time trellis Codes For AF Relay Channels
|
cs.IT math.IT
|
We consider the analysis and design of space-time trellis codes (STTCs) for a
cooperative relay channel operating in amplify-and-forward (AF) mode assuming
the source and destination nodes are equipped with multiple antennas but the
relay node has single antenna. We derive a pairwise error probability (PEP)
expression for the performance of STTCs in this type of channels. A simple
upper-bound on PEP is then derived and is maximized to find the optimum STTCs.
We show that the designed STTCs based on the derived criterion achieve full
diversity in the AF relay channels especially at high signal-to-noise-ratios
(SNRs). The maximum achievable diversity in relay channels with single-antenna
relay is bounded by $\min(M,N)$ where $M$ and $N$ are respectively the number
of antennas in source and destination nodes. Simulation results confirm that
the proposed codes achieve the maximum diversity and also provide an appealing
coding gain.
|
1209.0125
|
A History of Cluster Analysis Using the Classification Society's
Bibliography Over Four Decades
|
cs.DL cs.LG stat.ML
|
The Classification Literature Automated Search Service, an annual
bibliography based on citation of one or more of a set of around 80 book or
journal publications, ran from 1972 to 2012. We analyze here the years 1994 to
2011. The Classification Society's Service, as it was termed, was produced
by the Classification Society. In earlier decades it was distributed as a
diskette or CD with the Journal of Classification. Among our findings are the
following: an enormous increase in scholarly production after approximately
2000; a very major increase in quantity, coupled with work in different
disciplines, from approximately 2004; and a major shift in the "centre of
gravity" of cluster analysis, from mathematics and psychology (the disciplines
of the journals published in and of the authors' affiliations) in earlier
times to management and engineering in more recent times.
|
1209.0126
|
Evaluation of some Information Retrieval models for Gujarati Ad hoc
Monolingual Tasks
|
cs.IR
|
This paper describes the work towards Gujarati Ad hoc Monolingual Retrieval
task for widely used Information Retrieval (IR) models. We present an indexing
baseline for the Gujarati Language represented by Mean Average Precision (MAP)
values. Our objective is to obtain a relative picture of a better IR model for
Gujarati Language. Results show that classical IR models like Term Frequency
Inverse Document Frequency (TF_IDF) perform better when compared to some recent
probabilistic IR models. The experiments helped to identify the best-performing
IR models for Gujarati Language.
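For reference, the TF-IDF weighting evaluated here reduces to a few lines (a minimal sketch with toy documents; the paper's indexing pipeline and any smoothing choices are not reproduced):

```python
import math

# tf-idf(t, d) = tf(t, d) * log(N / df(t)), where tf is the term count in
# document d, df the number of documents containing t, and N the corpus size.
docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
N = len(docs)

def tfidf(term, doc):
    tf = doc.count(term)
    df = sum(1 for d in docs if term in d)
    return tf * math.log(N / df)

print(tfidf("cat", docs[0]))   # > 0: "cat" is discriminative
print(tfidf("the", docs[0]))   # 0.0: "the" appears in every document
```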
|
1209.0127
|
Autoregressive short-term prediction of turning points using support
vector regression
|
cs.LG cs.CE cs.NE
|
This work is concerned with autoregressive prediction of turning points in
financial price sequences. Such turning points are critical local extrema
points along a series, which mark the start of new swings. Predicting the
future time of such turning points or even their early or late identification
slightly before or after the fact has useful applications in economics and
finance. Building on a recently proposed neural network model for turning
point prediction, we propose and study a new autoregressive model for predicting
turning points of small swings. Our method relies on a known turning point
indicator, a Fourier enriched representation of price histories, and support
vector regression. We empirically examine the performance of the proposed
method over a long history of the Dow Jones Industrial average. Our study shows
that the proposed method is superior to the previous neural network model, in
terms of trading performance of a simple trading application and also exhibits
a quantifiable advantage over the buy-and-hold benchmark.
|
1209.0136
|
Incremental Control Synthesis in Probabilistic Environments with
Temporal Logic Constraints
|
cs.RO cs.LO
|
In this paper, we present a method for optimal control synthesis of a plant
that interacts with a set of agents in a graph-like environment. The control
specification is given as a temporal logic statement about some properties that
hold at the vertices of the environment. The plant is assumed to be
deterministic, while the agents are probabilistic Markov models. The goal is to
control the plant such that the probability of satisfying a syntactically
co-safe Linear Temporal Logic formula is maximized. We propose a
computationally efficient incremental approach based on the fact that temporal
logic verification is computationally cheaper than synthesis. We present a
case-study where we compare our approach to the classical non-incremental
approach in terms of computation time and memory usage.
|
1209.0167
|
Automatic ECG Beat Arrhythmia Detection
|
cs.NE
|
Background: In recent years automated data analysis techniques have drawn
great attention and are used in almost every field of research including
biomedical. Artificial Neural Networks (ANNs) are one of the Computer-Aided-
Diagnosis tools which are used extensively, enabled by advances in computer
hardware technology. The application of these techniques for disease diagnosis
has made great progress, and they are widely used by physicians. An
Electrocardiogram carries vital information about heart activity, and
physicians use this signal for cardiac disease diagnosis, which was the main
motivation for our study.
Methods: In this study we are using Probabilistic Neural Networks (PNN) as an
automatic technique for ECG signal analysis along with a Genetic Algorithm
(GA). As every real signal recorded by the equipment can have different
artifacts, we need to do some preprocessing steps before feeding it to the ANN.
Wavelet transform is used for extracting the morphological parameters and
median filter for data reduction of the ECG signal. A subset of morphological
parameters is chosen and optimized using GA. We had two approaches in our
investigation, the first one uses the whole signal with 289 normalized and
de-noised data points as input to the ANN. In the second approach after
applying all the preprocessing steps the signal is reduced to 29 data points
and also their important parameters extracted to form the ANN input with 35
data points. Results: The outcome of the two approaches for 8 types of
arrhythmia shows that the second approach is superior to the first one, with
an average accuracy of 99.42%.
|
1209.0196
|
Short-time homomorphic wavelet estimation
|
physics.geo-ph cs.CV physics.data-an
|
Successful wavelet estimation is an essential step for seismic methods like
impedance inversion, analysis of amplitude variations with offset and full
waveform inversion. Homomorphic deconvolution has long been intriguing as a
potentially elegant solution to the wavelet estimation problem. Yet a
successful implementation has proven difficult. Associated disadvantages like
phase unwrapping and restrictions of sparsity in the reflectivity function
limit its application. We explore short-time homomorphic wavelet estimation as
a combination of the classical homomorphic analysis and log-spectral averaging.
The introduced method of log-spectral averaging using a short-term Fourier
transform increases the number of sample points, thus reducing estimation
variances. We apply the developed method on synthetic and real data examples
and demonstrate good performance.
|
1209.0219
|
Dynamical networks reconstructed from time series
|
physics.data-an cond-mat.stat-mech cs.SI physics.soc-ph
|
A novel method for reconstructing dynamical networks from empirically measured
time series is proposed. By examining the variable--derivative correlation of
network node pairs, we derive a simple equation that directly yields the
adjacency matrix, assuming the intra-network interaction functions to be known.
We illustrate the method on a simple example, and discuss the dependence of the
reconstruction precision on the properties of time series. Our method is
applicable to any network, allowing for reconstruction precision to be
maximized, and errors to be estimated.
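A minimal linear-dynamics sketch of the idea that variable-derivative correlations yield the adjacency matrix (assuming, for illustration only, linear coupling x' = A x and directly observed derivatives; the paper's equation for general known interaction functions is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

# If x' = A x and we observe states X and derivatives D over many samples,
# the variable-derivative correlation gives A = <x' x^T> <x x^T>^{-1}.
n, T = 6, 500
A = (rng.random((n, n)) < 0.3).astype(float)   # ground-truth adjacency
np.fill_diagonal(A, 0.0)

X = rng.standard_normal((T, n))    # sampled states
D = X @ A.T                        # corresponding derivatives x' = A x

A_hat = (D.T @ X) @ np.linalg.inv(X.T @ X)
print(np.allclose(A_hat, A))       # True: the adjacency is recovered
```

In this idealized noiseless case the recovery is exact; with empirically measured time series, derivative estimation and noise determine the reconstruction precision, as discussed in the abstract.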
|
1209.0229
|
Efficiency-Risk Tradeoffs in Dynamic Oligopoly Markets - with
application to electricity markets
|
cs.SY
|
In this paper, we examine, in an abstract framework, how a tradeoff between
efficiency and robustness arises in different dynamic oligopolistic market
architectures. We consider a market in which there is a monopolistic resource
provider and agents that enter and exit the market following a random process.
Self-interested and fully rational agents dynamically update their resource
consumption decisions over a finite time horizon, under the constraint that the
total resource consumption requirements are met before each individual's
deadline. We then compare the statistics of the stationary aggregate demand
processes induced by the non-cooperative and cooperative load scheduling
schemes. We show that although the non-cooperative load scheduling scheme leads
to an efficiency loss - widely known as the "price of anarchy" - the stationary
distribution of the corresponding aggregate demand process has a smaller tail.
This tail, which corresponds to rare and undesirable demand spikes, is
important in many applications of interest. On the other hand, when the agents
can cooperate with each other in optimizing their total cost, a higher market
efficiency is achieved at the cost of a higher probability of demand spikes. We
thus posit that the origins of endogenous risk in such systems may lie in the
market architecture, which is an inherent characteristic of the system.
|
1209.0236
|
Cross-Bifix-Free Codes Within a Constant Factor of Optimality
|
cs.IT math.CO math.IT
|
A cross-bifix-free code is a set of words in which no prefix of any length of
any word is the suffix of any word in the set. Cross-bifix-free codes arise in
the study of distributed sequences for frame synchronization. We provide a new
construction of cross-bifix-free codes which generalizes the construction in
Bajic (2007) to longer code lengths and to any alphabet size. The codes are
shown to be nearly optimal in size. We also establish new results on Fibonacci
sequences, which are used in estimating the size of the cross-bifix-free codes.
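The defining property is easy to state in code (the brute-force checker below and the small length-5 example set are our own illustrations, not the paper's construction):

```python
def is_cross_bifix_free(words):
    """True iff no proper prefix (length 1 up to word length - 1) of any
    word in the set equals a proper suffix of any word in the set,
    including comparisons of a word against itself."""
    for u in words:
        for v in words:
            for k in range(1, min(len(u), len(v))):
                if u[:k] == v[-k:]:
                    return False
    return True

# An illustrative length-5 binary set: words start with 001, contain no
# other 00, and end in 1, so no prefix can match any suffix.
print(is_cross_bifix_free(["00101", "00111"]))   # True
print(is_cross_bifix_free(["00101", "00110"]))   # False: prefix "0" = suffix "0"
```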
|
1209.0237
|
Bi-stochastic kernels via asymmetric affinity functions
|
math.CA cs.IT math.IT math.PR math.SP
|
In this short letter we present the construction of a bi-stochastic kernel p
for an arbitrary data set X that is derived from an asymmetric affinity
function {\alpha}. The affinity function {\alpha} measures the similarity
between points in X and some reference set Y. Unlike other methods that
construct bi-stochastic kernels via some convergent iteration process or
through solving an optimization problem, the construction presented here is
quite simple. Furthermore, it can be viewed through the lens of out of sample
extensions, making it useful for massive data sets.
|
1209.0245
|
Diffusion maps for changing data
|
math.CA cs.IT math.IT math.PR math.SP
|
Graph Laplacians and related nonlinear mappings into low dimensional spaces
have been shown to be powerful tools for organizing high dimensional data. Here
we consider a data set X in which the graph associated with it changes
depending on some set of parameters. We analyze this type of data in terms of
the diffusion distance and the corresponding diffusion map. As the data changes
over the parameter space, the low dimensional embedding changes as well. We
give a way to go between these embeddings, and furthermore, map them all into a
common space, allowing one to track the evolution of X in its intrinsic
geometry. A global diffusion distance is also defined, which gives a measure of
the global behavior of the data over the parameter space. Approximation
theorems in terms of randomly sampled data are presented, as are potential
applications.
|
1209.0249
|
Robopinion: Opinion Mining Framework Inspired by Autonomous Robot
Navigation
|
cs.CL cs.IR
|
Data association methods are used by autonomous robots to find matches
between the current landmarks and the new set of observed features. We seek a
framework for opinion mining that benefits from advancements in autonomous
robot navigation, in both research and development.
|
1209.0308
|
Optimizing Supply Chain Management using Gravitational Search Algorithm
and Multi Agent System
|
cs.MA cs.AI
|
Supply chain management is a very dynamic operations research problem in which
one has to adapt quickly to changes perceived in the environment in order to
maximize the benefit or minimize the loss. We therefore require a system that
changes according to the changing requirements. Multi-agent system technology
has recently emerged as an efficient way to implement solutions for many such
complex problems. Our research here focuses on building a Multi Agent System
(MAS) that implements a modified version of the Gravitational Search Algorithm
(GSA), a swarm intelligence method, to find an optimal strategy for managing
the demand-supply chain. We target the grain distribution system among the
various centers of the Food Corporation of India (FCI) as the application
domain. We treat centers with larger stocks as objects of greater mass and vice
versa. Applying the Newtonian law of gravity as suggested in GSA, larger
objects attract objects of smaller mass toward themselves, creating a virtual
grain supply source. As a heavier object sheds its mass by supplying some of it
to a center in demand, it loses its gravitational pull, keeping the whole
supply chain perfectly in balance. The multi-agent system helps in continuously
updating the whole system through autonomous agents that react to changes in
the environment and act accordingly. This model also greatly reduces the
communication bottleneck.
|
1209.0320
|
Integrated Symbolic Design of Unstable Nonlinear Networked Control
Systems
|
cs.SY
|
The research area of Networked Control Systems (NCS) has been the topic of
intensive study in the last decade. In this paper we give a contribution to
this research line by addressing symbolic control design of (possibly unstable)
nonlinear NCS with specifications expressed in terms of automata. We first
derive symbolic models that are shown to approximate the given NCS in the sense
of (alternating) approximate simulation. We then address symbolic control
design with specifications expressed in terms of automata. We finally derive
efficient algorithms for the synthesis of the proposed symbolic controllers
that cope with the inherent computational complexity of the problem at hand.
|
1209.0341
|
Structural Analysis of Viral Spreading Processes in Social and
Communication Networks Using Egonets
|
math.OC cs.DM cs.SI cs.SY physics.soc-ph
|
We study how the behavior of viral spreading processes is influenced by local
structural properties of the network over which they propagate. For a wide
variety of spreading processes, the largest eigenvalue of the adjacency matrix
of the network plays a key role in their global dynamical behavior. For many
real-world large-scale networks, it is infeasible to retrieve the complete
network structure exactly in order to compute its largest eigenvalue. Instead,
one usually has access only to myopic, egocentric views of the network
structure, also
called egonets. In this paper, we propose a mathematical framework, based on
algebraic graph theory and convex optimization, to study how local structural
properties of the network constrain the interval of possible values in which
the largest eigenvalue must lie. Based on this framework, we present a
computationally efficient approach to find this interval from a collection of
egonets. Our numerical simulations show that, for several social and
communication networks, local structural properties of the network strongly
constrain the location of the largest eigenvalue and the resulting spreading
dynamics. From a practical point of view, our results can be used to dictate
immunization strategies to tame the spreading of a virus, or to design network
topologies that facilitate the spreading of information virally.
|
1209.0368
|
Proximal methods for the latent group lasso penalty
|
math.OC cs.LG stat.ML
|
We consider a regularized least squares problem, with regularization by
structured sparsity-inducing norms, which extend the usual $\ell_1$ and the
group lasso penalty, by allowing the subsets to overlap. Such regularizations
lead to nonsmooth problems that are difficult to optimize, and we propose in
this paper a suitable version of an accelerated proximal method to solve them.
We prove convergence of a nested procedure, obtained composing an accelerated
proximal method with an inner algorithm for computing the proximity operator.
By exploiting the geometrical properties of the penalty, we devise a new active
set strategy, thanks to which the inner iteration is relatively fast, thus
guaranteeing good computational performance of the overall algorithm. Our
approach allows us to deal with high-dimensional problems without
pre-processing for dimensionality reduction, leading to better computational
and prediction performance than state-of-the-art methods, as shown empirically
on both toy and real data.
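As background for the inner proximity computation, the disjoint-group case has a closed-form block soft-thresholding operator; the sketch below covers only that non-overlapping building block, not the paper's nested solver for overlapping (latent) groups.

```python
import numpy as np

def prox_group_lasso(x, groups, lam):
    """Block soft-thresholding: the proximity operator of
    lam * sum_g ||x_g||_2 for NON-overlapping groups.
    (With overlapping/latent groups there is no closed form,
    which is why an inner iterative algorithm is needed.)"""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]  # shrink the block
        # else: the whole block is set to zero
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
z = prox_group_lasso(x, [[0, 1], [2, 3]], lam=1.0)
```

Here the first block (norm 5) is shrunk, while the second (norm about 0.14) is zeroed out entirely, which is the sparsity pattern the penalty induces.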
|
1209.0377
|
A Perturbation Inequality for the Schatten-$p$ Quasi-Norm and Its
Applications to Low-Rank Matrix Recovery
|
math.OC cs.IT math.IT
|
In this paper, we establish the following perturbation result concerning the
singular values of a matrix: Let $A,B \in \mathbb{R}^{m\times n}$ be given
matrices, and let $f:\mathbb{R}_+\rightarrow\mathbb{R}_+$ be a concave function
satisfying $f(0)=0$. Then, we have $$ \sum_{i=1}^{\min\{m,n\}} \big|
f(\sigma_i(A)) - f(\sigma_i(B)) \big| \le \sum_{i=1}^{\min\{m,n\}}
f(\sigma_i(A-B)), $$ where $\sigma_i(\cdot)$ denotes the $i$--th largest
singular value of a matrix. This answers an open question that is of interest
to both the compressive sensing and linear algebra communities. In particular,
by taking $f(\cdot)=(\cdot)^p$ for any $p \in (0,1]$, we obtain a perturbation
inequality for the so--called Schatten $p$--quasi--norm, which allows us to
confirm the validity of a number of previously conjectured conditions for the
recovery of low--rank matrices via the popular Schatten $p$--quasi--norm
heuristic. We believe that our result will find further applications,
especially in the study of low--rank matrix recovery.
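The stated inequality can be checked numerically; this sketch evaluates both sides with $f(t)=t^p$ for random matrices (the matrix sizes and the value of $p$ are arbitrary choices).

```python
import numpy as np

def perturbation_sides(A, B, p):
    """Compute the LHS and RHS of the singular-value perturbation
    inequality with f(t) = t**p, which is concave on R+ with f(0)=0
    for p in (0, 1]."""
    f = lambda s: s ** p
    sA = np.linalg.svd(A, compute_uv=False)      # descending order
    sB = np.linalg.svd(B, compute_uv=False)
    sD = np.linalg.svd(A - B, compute_uv=False)
    return np.abs(f(sA) - f(sB)).sum(), f(sD).sum()

rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
lhs, rhs = perturbation_sides(A, B, p=0.5)  # theorem: lhs <= rhs
```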
|
1209.0378
|
Provenance for SPARQL queries
|
cs.DB
|
Determining trust of data available in the Semantic Web is fundamental for
applications and users, in particular for linked open data obtained from SPARQL
endpoints. There exist several proposals in the literature to annotate SPARQL
query results with values from abstract models, adapting the seminal works on
provenance for annotated relational databases. We present an approach that
provides provenance information for a large and significant fragment of
SPARQL 1.1, including for the first time the major non-monotonic constructs
under multiset semantics. The approach is based on the translation of SPARQL
into relational queries over annotated relations with values of the most
general m-semiring, and in this way also refuting a claim in the literature
that the OPTIONAL construct of SPARQL cannot be captured appropriately with the
known abstract models.
|
1209.0410
|
Approximate Similarity Search for Online Multimedia Services on
Distributed CPU-GPU Platforms
|
cs.MM cs.DB cs.DC
|
Similarity search in high-dimensional spaces is a pivotal operation found in a
variety of database applications. Recently, there has been increased interest
in similarity search for online content-based multimedia services. Those
services, however, introduce new challenges with respect to the very large
volumes of data that have to be indexed/searched, and the need to minimize
response times observed by the end-users. Additionally, those users dynamically
interact with the systems, creating fluctuating query request rates and
requiring the search algorithm to adapt in order to better utilize the
underlying hardware to reduce response times. In order to address these
challenges, we introduce Hypercurves, a flexible framework for answering
approximate k-nearest neighbor (kNN) queries for very large multimedia
databases, aiming at online content-based multimedia services. Hypercurves
executes on hybrid CPU--GPU environments and is able to employ those devices
cooperatively to support massive query request rates. In order to keep
response times optimal as the request rates vary, it employs a novel dynamic
scheduler to partition the work between CPU and GPU. Hypercurves was thoroughly
evaluated using a large database of multimedia descriptors. Its cooperative
CPU--GPU execution achieved performance improvements of up to 30x when compared
to the single CPU-core version. The dynamic work partition mechanism reduces
the observed query response times by about 50% when compared to the best static
CPU--GPU task partition configuration. In addition, Hypercurves achieves
superlinear scalability in distributed (multi-node) executions, while keeping a
high guarantee of equivalence with its sequential version, thanks to the proof
of probabilistic equivalence which supported its aggressive parallelization
design.
|
1209.0424
|
On the changeover timescales of technology transitions and induced
efficiency changes: an overarching theory
|
math.DS cs.SI physics.soc-ph q-fin.GN
|
This paper presents a general theory that aims at explaining timescales
observed empirically in technology transitions and predicting those of future
transitions. This framework is used further to derive a theory for exploring
the dynamics that underlie the complex phenomenon of irreversible and path
dependent price or policy induced efficiency changes. Technology transitions
are known to follow patterns well described by logistic functions, which should
more rigorously be modelled mathematically using the Lotka-Volterra family of
differential equations, originally developed to describe the population growth
of competing species. The dynamic evolution of technology has also been
described theoretically using evolutionary dynamics similar to that observed in
nature. The theory presented here joins both approaches and presents a
methodology for predicting changeover time constants in order to describe real
systems of competing technologies. The problem of price or policy induced
efficiency changes becomes naturally explained using this framework. Examples
of application are given.
|
1209.0430
|
Fixed-rank matrix factorizations and Riemannian low-rank optimization
|
cs.LG math.OC
|
Motivated by the problem of learning a linear regression model whose
parameter is a large fixed-rank non-symmetric matrix, we consider the
optimization of a smooth cost function defined on the set of fixed-rank
matrices. We adopt the geometric framework of optimization on Riemannian
quotient manifolds. We study the underlying geometries of several well-known
fixed-rank matrix factorizations and then exploit the Riemannian quotient
geometry of the search space in the design of a class of gradient descent and
trust-region algorithms. The proposed algorithms generalize our previous
results on fixed-rank symmetric positive semidefinite matrices, apply to a
broad range of applications, scale to high-dimensional problems and confer a
geometric basis to recent contributions on the learning of fixed-rank
non-symmetric matrices. We make connections with existing algorithms in the
context of low-rank matrix completion and discuss the relative usefulness of the
proposed framework. Numerical experiments suggest that the proposed algorithms
compete with the state-of-the-art and that manifold optimization offers an
effective and versatile framework for the design of machine learning algorithms
that learn a fixed-rank matrix.
|
1209.0444
|
Affine characterizations of minimum and mode-dependent dwell-times for
uncertain linear switched systems
|
math.OC cs.SY math.CA math.DS
|
An alternative approach for minimum and mode-dependent dwell-time
characterization for switched systems is derived. The proposed technique is
related to Lyapunov looped-functionals, a new type of functionals leading to
stability conditions affine in the system matrices, unlike standard results for
minimum dwell-time. These conditions are expressed as infinite-dimensional LMIs
which can be solved using recent polynomial optimization techniques such as
sum-of-squares. The specific structure of the conditions is finally utilized in
order to derive dwell-time stability results for uncertain switched systems.
Several examples illustrate the efficiency of the approach.
|
1209.0488
|
Learning Prioritized Control of Motor Primitives
|
cs.RO
|
Many tasks in robotics can be decomposed into sub-tasks that are performed
simultaneously. In many cases, these sub-tasks cannot all be achieved jointly
and a prioritization of such sub-tasks is required to resolve this issue. In
this paper, we discuss a novel learning approach that allows us to learn a
prioritized control law built on a set of sub-tasks represented by motor
primitives. The primitives are executed simultaneously but have different
priorities. Primitives of higher priority can override the commands of the
conflicting lower priority ones. The dominance structure of these primitives
has a significant impact on the performance of the prioritized control law. We
evaluate the proposed approach with a ball bouncing task on a Barrett WAM.
|
1209.0491
|
Coding Opportunity Densification Strategies for Instantly Decodable
Network Coding
|
cs.IT math.IT
|
In this paper, we aim to identify the strategies that can maximize and
monotonically increase the density of the coding opportunities in instantly
decodable network coding (IDNC). Using the well-known graph representation of
IDNC, we first derive an expression for the exact evolution of the edge set size
after the transmission of any arbitrary coded packet. From the derived
expressions, we show that sending commonly wanted packets for all the receivers
can maximize the number of coding opportunities. Since guaranteeing such
property in IDNC is usually impossible, this strategy does not guarantee the
achievement of our target. Consequently, we further investigate the problem by
deriving the expectation of the edge set size evolution after ignoring the
identities of the packets requested by the different receivers and considering
only their numbers. We then employ this expected expression to show that
serving the maximum number of receivers having the largest numbers of missing
packets and erasure probabilities tends to both maximize and monotonically
increase the expected density of coding opportunities. Simulation results
justify our theoretical findings. Finally, we validate the importance of our
work through two case studies showing that our identified strategy outperforms
the step-by-step service maximization solution in optimizing both the IDNC
completion delay and receiver goodput.
|
1209.0514
|
Monotonicity of Fitness Landscapes and Mutation Rate Control
|
q-bio.PE cs.IT cs.NE math.IT math.OC
|
A common view in evolutionary biology is that mutation rates are minimised.
However, studies in combinatorial optimisation and search have shown a clear
advantage of using variable mutation rates as a control parameter to optimise
the performance of evolutionary algorithms. Much biological theory in this area
is based on the work of Ronald Fisher, who used Euclidean geometry to study the
relation between mutation size and expected fitness of the offspring in
infinite phenotypic spaces. Here we reconsider this theory based on the
alternative geometry of discrete and finite spaces of DNA sequences. First, we
consider the geometric case of fitness being isomorphic to distance from an
optimum, and show how problems of optimal mutation rate control can be solved
exactly or approximately depending on additional constraints of the problem.
Then we consider the general case of fitness communicating only partial
information about the distance. We define weak monotonicity of fitness
landscapes and prove that this property holds in all landscapes that are
continuous and open at the optimum. This theoretical result motivates our
hypothesis that optimal mutation rate functions in such landscapes will
increase when fitness decreases in some neighbourhood of an optimum, resembling
the control functions derived in the geometric case. We test this hypothesis
experimentally by analysing approximately optimal mutation rate control
functions in 115 complete landscapes of binding scores between DNA sequences
and transcription factors. Our findings support the hypothesis and find that
the increase of mutation rate is more rapid in landscapes that are less
monotonic (more rugged). We discuss the relevance of these findings to living
organisms.
|
1209.0521
|
Efficient EM Training of Gaussian Mixtures with Missing Data
|
cs.LG stat.ML
|
In data-mining applications, we are frequently faced with a large fraction of
missing entries in the data matrix, which is problematic for most discriminant
machine learning algorithms. A solution that we explore in this paper is the
use of a generative model (a mixture of Gaussians) to compute the conditional
expectation of the missing variables given the observed variables. Since
training a Gaussian mixture with many different patterns of missing values can
be computationally very expensive, we introduce a spanning-tree based algorithm
that significantly speeds up training in these conditions. We also observe that
good results can be obtained by using the generative model to fill in the
missing values for a separate discriminant learning algorithm.
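For a single Gaussian component, the conditional expectation used for imputation has the standard closed form $\mu_m + \Sigma_{mo}\Sigma_{oo}^{-1}(x_o - \mu_o)$; the sketch below covers only that one-component case, not the paper's spanning-tree mixture training.

```python
import numpy as np

def conditional_mean(mu, Sigma, x_obs, obs_idx):
    """E[x_missing | x_obs] under a single Gaussian N(mu, Sigma):
    mu_m + Sigma_mo Sigma_oo^{-1} (x_obs - mu_o)."""
    n = len(mu)
    m_idx = [i for i in range(n) if i not in obs_idx]
    Soo = Sigma[np.ix_(obs_idx, obs_idx)]   # observed-observed block
    Smo = Sigma[np.ix_(m_idx, obs_idx)]     # missing-observed block
    return mu[m_idx] + Smo @ np.linalg.solve(Soo, x_obs - mu[obs_idx])

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
# Observe x0 = 1.0 and impute x1 (expected 0.8 given correlation 0.8).
imputed = conditional_mean(mu, Sigma, np.array([1.0]), [0])
```

In a mixture, the same formula is applied per component and the results are averaged with the posterior responsibilities.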
|
1209.0532
|
Controlling the Error Floor in LDPC Decoding
|
cs.IT math.IT
|
The error floor of LDPC codes is revisited as an effect of dynamic message behavior
in the so-called absorption sets of the code. It is shown that if the signal
growth in the absorption sets is properly balanced by the growth of
set-external messages, the error floor can be lowered to essentially
arbitrarily low levels. Importance sampling techniques are discussed and used
to verify the analysis, as well as to discuss the impact of iterations and
message quantization on the code performance in the ultra-low BER (error floor)
regime.
|
1209.0537
|
One-sided Precoder Designs on Manifolds for Interference Alignment
|
cs.IT math.IT
|
Interference alignment (IA) is a technique recently shown to achieve the
maximum degrees of freedom (DoF) of the $K$-user interference channel. In this
paper, we focus on precoder designs on manifolds for IA. Restricting the
optimization to the transmitters' side significantly alleviates the overhead
induced by alternation between the forward and reverse links. First, a
classical steepest descent (SD) algorithm in multi-dimensional complex space is
proposed to achieve feasible IA. We then reformulate the optimization problem
on the Stiefel manifold and propose a novel SD algorithm on this
lower-dimensional manifold. Moreover, to further reduce the complexity, the
Grassmann manifold is introduced to derive a corresponding algorithm for
reaching perfect IA. Numerical simulations show that the proposed algorithms on
manifolds have better convergence performance and higher system capacity than
previous methods, and also achieve the maximum DoF.
|
1209.0578
|
Social Cheesecake: An UX-driven designed interface for managing contacts
|
cs.HC cs.SI
|
Social network management interfaces should consider separation of contexts
and tie strength. This paper presents the design process behind the Social
Cheesecake, an interface that addresses both issues. Paper and screen
prototyping were used in the design process: the paper prototype interactions
helped to explore the metaphors in the domain, while the screen prototype
consolidated the model. The prototype was finally built using HTML5 and
JavaScript.
|
1209.0616
|
Well Placement Optimization under Uncertainty with CMA-ES Using the
Neighborhood
|
cs.CE
|
In the well placement problem, as well as in other field development
optimization problems, geological uncertainty is a key source of risk affecting
the viability of field development projects. Well placement problems under
geological uncertainty are formulated as optimization problems in which the
objective function is evaluated using a reservoir simulator on a number of
possible geological realizations. In this paper, we present a new approach to
handle geological uncertainty for the well placement problem with a reduced
number of reservoir simulations. The proposed approach uses already simulated
well configurations in the neighborhood of each well configuration for the
objective function evaluation. We thus use only a single reservoir simulation,
performed on a randomly chosen realization, together with the neighborhood to
estimate the objective function, instead of using multiple simulations on
multiple realizations. This approach is combined with the stochastic optimizer
CMA-ES. On the benchmark reservoir case PUNQ-S3, the proposed approach is shown
to capture the geological uncertainty with a smaller number of reservoir
simulations. Compared to the reference approach, which uses all the possible
realizations for each well configuration, it significantly reduces the number
of reservoir simulations (by around 80%).
|
1209.0622
|
Sensor Webs for Environmental research
|
cs.SY cs.NI
|
The ongoing massive global environmental changes and past experience have
highlighted the urgency and importance of a more detailed understanding of the
earth system, and of implementing social-ecological sustainability measures in
a much more effective and transparent manner. This short communication
discusses the potential of sensor webs to address those research challenges,
highlighting it in the context of air pollution issues.
|
1209.0652
|
Necessary and sufficient conditions of solution uniqueness in $\ell_1$
minimization
|
cs.IT math.IT math.NA math.OC
|
This paper shows that the solutions to various convex $\ell_1$ minimization
problems are \emph{unique} if and only if a common set of conditions are
satisfied. This result applies broadly to the basis pursuit model, basis
pursuit denoising model, Lasso model, as well as other $\ell_1$ models that
either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\leq\sigma$, where
$f$ is a strictly convex function. For these models, this paper proves that,
given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$, $x^*$
is the unique solution if and only if $A_I$ has full column rank and there
exists $y$ such that $A_I^Ty=s$ and $|a_i^Ty|_\infty<1$ for $i\not\in I$. This
condition is previously known to be sufficient for the basis pursuit model to
have a unique solution supported on $I$. Indeed, it is also necessary, and
applies to a variety of other $\ell_1$ models. The paper also discusses ways to
recognize unique solutions and verify the uniqueness conditions numerically.
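The stated condition can be tested numerically at a candidate solution; the sketch below checks full column rank of $A_I$ and looks for a dual certificate $y$ via the least-norm solution of $A_I^Ty=s$. Since other certificates may exist, a negative outcome of this particular search does not by itself disprove uniqueness.

```python
import numpy as np

def check_unique(A, x):
    """Test the uniqueness condition at a candidate solution x:
    A_I has full column rank and there is a y with A_I^T y = sign(x_I)
    and |a_i^T y| < 1 off the support. Only the least-norm y is tried."""
    I = np.flatnonzero(np.abs(x) > 1e-12)
    AI, s = A[:, I], np.sign(x[I])
    if np.linalg.matrix_rank(AI) < len(I):
        return False                       # A_I rank deficient
    # Least-norm solution of the underdetermined system A_I^T y = s.
    y, *_ = np.linalg.lstsq(AI.T, s, rcond=None)
    off = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.allclose(AI.T @ y, s) and
                np.all(np.abs(A[:, off].T @ y) < 1))

A = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.3]])
ok = check_unique(A, np.array([1.0, -1.0, 0.0]))  # certificate found
```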
|
1209.0654
|
Compressive Optical Deflectometric Tomography: A Constrained
Total-Variation Minimization Approach
|
cs.CV math.OC
|
Optical Deflectometric Tomography (ODT) provides an accurate characterization
of transparent materials whose complex surfaces present a real challenge for
manufacture and control. In ODT, the refractive index map (RIM) of a
transparent object is reconstructed by measuring light deflection under
multiple orientations. We show that this imaging modality can be made
"compressive", i.e., a correct RIM reconstruction is achievable with far less
observations than required by traditional Filtered Back Projection (FBP)
methods. Assuming a cartoon-shape RIM model, this reconstruction is driven by
minimizing the map Total-Variation under a fidelity constraint with the
available observations. Moreover, two other realistic assumptions are added to
improve the stability of our approach: the map positivity and a frontier
condition. Numerically, our method relies on an accurate ODT sensing model and
on a primal-dual minimization scheme that easily incorporates the sensing operator and
the proposed RIM constraints. We conclude this paper by demonstrating the power
of our method on synthetic and experimental data under various compressive
scenarios. In particular, the compressiveness of the stabilized ODT problem is
demonstrated by observing a typical gain of 20 dB compared to FBP at only 5% of
360 incident light angles for moderately noisy sensing.
|
1209.0676
|
Channel Assignment in Dense MC-MR Wireless Networks: Scaling Laws and
Algorithms
|
cs.NI cs.IT cs.PF math.IT
|
We investigate optimal channel assignment algorithms that maximize per node
throughput in dense multichannel multi-radio (MC-MR) wireless networks.
Specifically, we consider an MC-MR network where all nodes are within the
transmission range of each other. This situation is encountered in many
real-life settings such as students in a lecture hall, delegates attending a
conference, or soldiers in a battlefield. In this scenario, we show that
intelligent assignment of the available channels results in a significantly
higher per node throughput. We first propose a class of channel assignment
algorithms, parameterized by T (the number of transceivers per node), that can
achieve $\Theta(1/N^{1/T})$ per node throughput using $\Theta(TN^{1-1/T})$
channels. In view of practical constraints on $T$, we then propose another
algorithm that can achieve $\Theta(1/(\log_2 N)^2)$ per node throughput using
only two transceivers per node. Finally, we identify a fundamental relationship
between the achievable per node throughput, the total number of channels used,
and the network size under any strategy. Using analysis and simulations, we
show that our algorithms achieve close to optimal performance at different
operating points on this curve. Our work has several interesting implications
on the optimal network design for dense MC-MR wireless networks.
|
1209.0715
|
The Synthesis and Analysis of Stochastic Switching Circuits
|
cs.IT math.IT
|
Stochastic switching circuits are relay circuits that consist of stochastic
switches called pswitches. The study of stochastic switching circuits has
widespread applications in many fields of computer science, neuroscience, and
biochemistry. In this paper, we discuss several properties of stochastic
switching circuits, including robustness, expressibility, and probability
approximation.
First, we study the robustness, namely, the effect caused by introducing an
error of size \epsilon to each pswitch in a stochastic circuit. We analyze two
constructions and prove that simple series-parallel circuits are robust to
small error perturbations, while general series-parallel circuits are not.
Specifically, the total error introduced by perturbations of size less than
\epsilon is bounded by a constant multiple of \epsilon in a simple
series-parallel circuit, independent of the size of the circuit.
Next, we study the expressibility of stochastic switching circuits: Given an
integer q and a pswitch set S=\{\frac{1}{q},\frac{2}{q},...,\frac{q-1}{q}\},
can we synthesize any rational probability with denominator q^n (for arbitrary
n) with a simple series-parallel stochastic switching circuit? We generalize
previous results and prove that when q is a multiple of 2 or 3, the answer is
yes. We also show that when q is a prime number larger than 3, the answer is
no.
Probability approximation is studied for a general case of an arbitrary
pswitch set S=\{s_1,s_2,...,s_{|S|}\}. In this case, we propose an algorithm
based on local optimization to approximate any desired probability. The
analysis reveals that the approximation error of a switching circuit decreases
exponentially with an increasing circuit size.
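The series/parallel composition rules underlying these circuits follow standard relay-circuit algebra: a series connection of closing probabilities $p_1, p_2$ closes with probability $p_1 p_2$, and a parallel one with probability $1-(1-p_1)(1-p_2)$. A minimal sketch (the target probability composed below is an arbitrary example, not one from the paper):

```python
from fractions import Fraction

def series(p, q):
    """A series connection of two pswitches closes iff both close."""
    return p * q

def parallel(p, q):
    """A parallel connection closes iff at least one closes."""
    return p + q - p * q

# With pswitches drawn from S = {1/4, 2/4, 3/4} (q = 4), compose a
# probability with denominator 4^2: 1/4 + 1/4 - 1/16 = 7/16.
half, quarter = Fraction(1, 2), Fraction(1, 4)
prob = parallel(series(half, half), quarter)
```

The synthesis question in the abstract is which denominators $q^n$ are reachable by such simple series-parallel compositions.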
|
1209.0724
|
Synthesis of Stochastic Flow Networks
|
cs.IT cs.NE math.IT math.PR
|
A stochastic flow network is a directed graph with incoming edges (inputs)
and outgoing edges (outputs); tokens enter through the input edges, travel
stochastically in the network, and can exit the network through the output
edges. Each node in the network is a splitter, namely, a token can enter a node
through an incoming edge and exit on one of the output edges according to a
predefined probability distribution. Stochastic flow networks can be easily
implemented by DNA-based chemical reactions, with promising applications in
molecular computing and stochastic computing. In this paper, we address a
fundamental synthesis question: Given a finite set of possible splitters and an
arbitrary rational probability distribution, design a stochastic flow network,
such that every token that enters the input edge will exit the outputs with the
prescribed probability distribution.
The problem of probability transformation dates back to von Neumann's 1951
work and was followed, among others, by Knuth and Yao in 1976. Most existing
works have been focusing on the "simulation" of target distributions. In this
paper, we design optimal-sized stochastic flow networks for "synthesizing"
target distributions. We show that when each splitter has two outgoing edges
and is unbiased, an arbitrary rational probability \frac{a}{b} with a\leq b\leq
2^n can be realized by a stochastic flow network of size n that is optimal.
Compared to the other stochastic systems, feedback (cycles in networks)
strongly improves the expressibility of stochastic flow networks.
|
1209.0726
|
A Universal Scheme for Transforming Binary Algorithms to Generate Random
Bits from Loaded Dice
|
cs.IT math.IT math.PR
|
In this paper, we present a universal scheme for transforming an arbitrary
algorithm for biased 2-face coins to generate random bits from the general
source of an m-sided die, hence enabling the application of existing algorithms
to general sources. In addition, we study approaches of efficiently generating
a prescribed number of random bits from an arbitrary biased coin. This
contrasts with most existing works, which typically assume that the number of
coin tosses is fixed, and they generate a variable number of random bits.
|
1209.0730
|
Streaming Algorithms for Optimal Generation of Random Bits
|
cs.IT cs.DS math.IT math.PR
|
Generating random bits from a source of biased coins (the bias is unknown)
is a classical question that was originally studied by von Neumann. There are a
number of known algorithms that have asymptotically optimal information
efficiency, namely, the expected number of generated random bits per input bit
is asymptotically close to the entropy of the source. However, only the
original von Neumann algorithm has a `streaming property' - it operates on a
single input bit at a time and generates random bits when possible; alas, it
does not have optimal information efficiency.
The main contribution of this paper is an algorithm that generates random bit
streams from biased coins, uses bounded space and runs in expected linear time.
As the size of the allotted space increases, the algorithm approaches the
information-theoretic upper bound on efficiency. In addition, we discuss how to
extend this algorithm to generate random bit streams from m-sided dice or
correlated sources such as Markov chains.
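The original von Neumann algorithm referenced here is short enough to sketch; it illustrates the streaming property and the entropy loss that the paper's bounded-space algorithm improves on.

```python
def von_neumann(bits):
    """von Neumann's streaming extractor: read biased bits in pairs,
    emit 0 for the pair (0, 1), emit 1 for (1, 0), and discard (0, 0)
    and (1, 1). The output is unbiased for any fixed unknown bias,
    but roughly three quarters of fair-coin pairs are wasted."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)  # a is 0 for (0,1) and 1 for (1,0)
    return out

# Pairs: (1,1) discarded, (0,1) -> 0, (1,0) -> 1, (0,0) discarded.
sample = von_neumann([1, 1, 0, 1, 1, 0, 0, 0])  # -> [0, 1]
```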
|
1209.0732
|
Linear Transformations for Randomness Extraction
|
cs.IT cs.CR math.IT math.PR
|
Information-efficient approaches for extracting randomness from imperfect
sources have been extensively studied, but simpler and faster ones are required
in high-speed random number generation applications. In this paper, we
focus on linear constructions, namely, applying linear transformation for
randomness extraction. We show that linear transformations based on sparse
random matrices are asymptotically optimal to extract randomness from
independent sources and bit-fixing sources, and they are efficient (may not be
optimal) to extract randomness from hidden Markov sources. Further study
demonstrates the flexibility of such constructions on source models as well as
their excellent information-preserving capabilities. Since linear
transformations based on sparse random matrices are computationally fast and
easy to implement in hardware such as FPGAs, they are very attractive in
high-speed applications. In addition, we explore explicit constructions of
transformation matrices. We show that the generator matrices of primitive BCH
codes are good choices, but linear transformations based on such matrices
require more computational time due to their high densities.
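As a rough illustration of the linear approach (not the paper's construction; the row density `d`, the seed, and the function name are arbitrary choices of ours), extraction by a sparse random binary matrix over GF(2) amounts to XOR-ing a few input bits per output bit:

```python
import random

def sparse_extract(x, m, d=3, seed=0):
    """Multiply the input bit vector x by a random m x len(x) binary matrix
    with about d ones per row, over GF(2): each output bit is the XOR of a
    few randomly chosen input bits."""
    rng = random.Random(seed)
    n = len(x)
    y = []
    for _ in range(m):
        cols = rng.sample(range(n), min(d, n))  # positions of the ones in this row
        y.append(sum(x[c] for c in cols) % 2)   # XOR over the selected bits
    return y
```

The sparsity is what makes hardware implementation cheap: each output bit touches only `d` input wires.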
|
1209.0734
|
Efficiently Extracting Randomness from Imperfect Stochastic Processes
|
cs.IT cs.CR math.IT math.PR
|
We study the problem of extracting a prescribed number of random bits by
reading the smallest possible number of symbols from non-ideal stochastic
processes. The related interval algorithm proposed by Han and Hoshi has
asymptotically optimal performance; however, it assumes that the distribution
of the input stochastic process is known. The motivation for our work is the
fact that, in practice, sources of randomness have inherent correlations and
are affected by measurement noise; hence, it is hard to obtain an accurate
estimation of the distribution. This challenge was addressed by the concepts of
seeded and seedless extractors that can handle general random sources with
unknown distributions. However, known seeded and seedless extractors provide
extraction efficiencies that are substantially smaller than Shannon's entropy
limit. Our main contribution is the design of extractors that have a variable
input-length and a fixed output length, are efficient in the consumption of
symbols from the source, are capable of generating random bits from general
stochastic processes and approach the information theoretic upper bound on
efficiency.
|
1209.0736
|
On Set Size Distribution Estimation and the Characterization of Large
Networks via Sampling
|
math.ST cs.IT cs.SI math.IT stat.TH
|
In this work we study the set size distribution estimation problem, where
elements are randomly sampled from a collection of non-overlapping sets and we
seek to recover the original set size distribution from the samples. This
problem has applications in capacity planning and network theory, among other
areas. Examples of real-world applications include characterizing in-degree
distributions in large graphs and uncovering TCP/IP flow size distributions on
the Internet. We demonstrate that it is hard to estimate the original set size
distribution. The recoverability of original set size distributions presents a
sharp threshold with respect to the fraction of elements that remain in the
sets. If this fraction remains below a threshold, typically half of the
elements in power-law and heavier-than-exponential-tailed distributions, then
the original set size distribution is unrecoverable. We also discuss practical
implications of our findings.
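The forward direction of the problem, how sampling thins the observed set sizes, can be simulated directly (a toy sketch under binomial thinning with an illustrative function name; the estimation task studied in the paper is to invert this mapping):

```python
import random
from collections import Counter

def sampled_size_dist(sizes, q, seed=0):
    """Binomial thinning: keep each element of each set independently with
    probability q, then return the empirical size distribution of the sets
    that remain non-empty (empty sets are unobservable in the sample)."""
    rng = random.Random(seed)
    kept = [sum(rng.random() < q for _ in range(s)) for s in sizes]
    kept = [k for k in kept if k > 0]
    counts = Counter(kept)
    return {size: c / len(kept) for size, c in counts.items()}
```

Recovering the original `sizes` distribution from the returned dictionary is exactly what becomes impossible once the retained fraction `q` drops below the threshold described above.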
|
1209.0738
|
Sparse coding for multitask and transfer learning
|
cs.LG stat.ML
|
We investigate the use of sparse coding and dictionary learning in the
context of multitask and transfer learning. The central assumption of our
learning method is that the task parameters are well approximated by sparse
linear combinations of the atoms of a dictionary on a high or infinite
dimensional space. This assumption, together with the large quantity of
available data in the multitask and transfer learning settings, allows a
principled choice of the dictionary. We provide bounds on the generalization
error of this approach, for both settings. Numerical experiments on one
synthetic and two real datasets show the advantage of our method over single
task learning, a previous method based on orthogonal and dense representation
of the tasks and a related method learning task grouping.
|
1209.0740
|
Nonuniform Codes for Correcting Asymmetric Errors in Data Storage
|
cs.IT math.IT
|
The construction of asymmetric error correcting codes is a topic that was
studied extensively, however, the existing approach for code construction
assumes that every codeword should tolerate $t$ asymmetric errors. Our main
observation is that in contrast to symmetric errors, asymmetric errors are
content dependent. For example, in Z-channels, the all-1 codeword is prone to
have more errors than the all-0 codeword. This motivates us to develop
nonuniform codes whose codewords can tolerate different numbers of asymmetric
errors depending on their Hamming weights. The idea behind nonuniform code
construction is to augment the redundancy in a content-dependent way and
guarantee the worst-case reliability while maximizing the code size. In this
paper, we first study nonuniform codes for Z-channels, which suffer only one
type of error, namely 1-to-0. Specifically, we derive their upper bounds,
analyze their asymptotic performances, and introduce two general constructions.
Then we extend the concept and results of nonuniform codes to general binary
asymmetric channels, where the error probability for each bit from 0 to 1 is
smaller than that from 1 to 0.
|
1209.0741
|
Optimal Coordinated Beamforming in the Multicell Downlink with
Transceiver Impairments
|
cs.IT math.IT
|
Physical wireless transceivers suffer from a variety of impairments that
distort the transmitted and received signals. Their degrading impact is
particularly evident in modern systems with multiuser transmission, high
transmit power, and low-cost devices, but their existence is routinely ignored
in the optimization literature for multicell transmission. This paper provides
a detailed analysis of coordinated beamforming in the multicell downlink. We
solve two optimization problems under a transceiver impairment model and derive
the structure of the optimal solutions. We show numerically that these
solutions greatly reduce the impact of impairments, compared with beamforming
developed for ideal transceivers. Although the so-called multiplexing gain is
zero under transceiver impairments, we show that the gain of multiplexing can
be large at practical SNRs.
|
1209.0744
|
Balanced Modulation for Nonvolatile Memories
|
cs.IT math.IT
|
This paper presents a practical writing/reading scheme in nonvolatile
memories, called balanced modulation, for minimizing the asymmetric component
of errors. The main idea is to encode data using a balanced error-correcting
code. When reading information from a block, it adjusts the reading threshold
such that the resulting word is also balanced or approximately balanced.
Balanced modulation has suboptimal performance for any cell-level distribution
and it can be easily implemented in the current systems of nonvolatile
memories. Furthermore, we study the construction of balanced error-correcting
codes, in particular balanced LDPC codes, which have very efficient encoding
and decoding algorithms and are more efficient than prior constructions of
balanced error-correcting codes.
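The threshold-adjustment step can be illustrated in a few lines (a simplified sketch, not the paper's scheme: here the threshold is simply placed so that the cells with the top half of the analog levels read as 1):

```python
def balanced_read(levels):
    """Adaptive read for a balanced codeword: sort the analog cell levels and
    place the threshold at the median, so exactly half the cells read as 1.
    A uniform drift of all cell levels then cancels out of the decision."""
    order = sorted(range(len(levels)), key=lambda i: levels[i])
    ones = set(order[len(levels) // 2:])   # indices of the top-half levels
    return [1 if i in ones else 0 for i in range(len(levels))]
```

Because the word written to the block is balanced by construction, this read rule recovers it even when all cell levels shift together, which is the asymmetric error component the scheme targets.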
|
1209.0748
|
Force-Directed Graph Drawing Using Social Gravity and Scaling
|
cs.CG cs.SI physics.soc-ph
|
Force-directed layout algorithms produce graph drawings by resolving a system
of emulated physical forces. We present techniques for using social gravity as
an additional force in force-directed layouts, together with a scaling
technique, to produce drawings of trees and forests, as well as more complex
social networks. Social gravity assigns mass to vertices in proportion to their
network centrality, which allows vertices that are more graph-theoretically
central to be visualized in physically central locations. Scaling varies the
gravitational force throughout the simulation, and reduces crossings relative
to unscaled gravity. In addition to providing this algorithmic framework, we
apply our algorithms to social networks produced by Mark Lombardi, and we show
how social gravity can be incorporated into force-directed Lombardi-style
drawings.
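A minimal version of the idea, ordinary repulsion and spring forces plus a gravity term weighted by degree centrality and ramped up over time, might look like this (an illustrative toy, not the authors' algorithm; all constants and names are arbitrary):

```python
import random

def social_gravity_layout(edges, n, iters=200, seed=0):
    """Toy force-directed layout: pairwise repulsion, spring attraction along
    edges, and a gravity pull toward the origin weighted by degree centrality
    and ramped up over the iterations (the scaling idea)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    maxdeg = max(deg) if edges else 1
    for t in range(iters):
        g = 0.05 * t / iters                        # gravity grows as layout cools
        disp = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):                          # repulsion between all pairs
            for j in range(i + 1, n):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                f = 0.01 / (dx * dx + dy * dy + 1e-9)
                disp[i][0] += f * dx; disp[i][1] += f * dy
                disp[j][0] -= f * dx; disp[j][1] -= f * dy
        for u, v in edges:                          # spring attraction on edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            disp[u][0] -= 0.1 * dx; disp[u][1] -= 0.1 * dy
            disp[v][0] += 0.1 * dx; disp[v][1] += 0.1 * dy
        for i in range(n):                          # capped step plus gravity
            dx, dy = disp[i]
            m = (dx * dx + dy * dy) ** 0.5
            if m > 0.1:
                dx, dy = dx / m * 0.1, dy / m * 0.1
            w = g * deg[i] / maxdeg
            pos[i][0] += dx - w * pos[i][0]
            pos[i][1] += dy - w * pos[i][1]
    return pos
```

High-degree (more central) vertices feel a stronger pull toward the origin, so they settle in physically central positions, which is the visual effect described above.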
|
1209.0781
|
World citation and collaboration networks: uncovering the role of
geography in science
|
physics.soc-ph cs.DL cs.SI physics.data-an
|
Modern information and communication technologies, especially the Internet,
have diminished the role of spatial distances and territorial boundaries on the
access and transmissibility of information. This has enabled scientists for
closer collaboration and internationalization. Nevertheless, geography remains
an important factor affecting the dynamics of science. Here we present a
systematic analysis of citation and collaboration networks between cities and
countries, by assigning papers to the geographic locations of their authors'
affiliations. The citation flows as well as the collaboration strengths between
cities decrease with the distance between them and follow gravity laws. In
addition, the total research impact of a country grows linearly with the amount
of national funding for research & development. However, the average impact
reveals a peculiar threshold effect: the scientific output of a country may
reach an impact larger than the world average only if the country invests more
than about 100,000 USD per researcher annually.
|
1209.0811
|
Exponential synchronization rate of Kuramoto oscillators in the presence
of a pacemaker
|
cs.SY nlin.AO
|
The exponential synchronization rate is addressed for Kuramoto oscillators in
the presence of a pacemaker. When natural frequencies are identical, we prove
that synchronization can be ensured even when the phases are not constrained in
an open half-circle, which improves the existing results in the literature. We
derive a lower bound on the exponential synchronization rate, which is proven
to be an increasing function of pacemaker strength, but may be an increasing or
decreasing function of local coupling strength. A similar conclusion is
obtained for phase locking when the natural frequencies are non-identical. An
approach to trapping phase differences in an arbitrary interval is also given,
which ensures synchronization in the sense that synchronization error can be
reduced to an arbitrary level.
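A forward-Euler simulation of identical Kuramoto oscillators with a pacemaker (a toy sketch with arbitrary parameters and names, not the paper's analysis) shows the phases being pulled toward the pacemaker's phase:

```python
import math
import random

def kuramoto_pacemaker(n=10, K=2.0, Kp=1.0, dt=0.01, steps=2000, seed=0):
    """Forward-Euler simulation of n identical Kuramoto oscillators with
    all-to-all coupling strength K and a pacemaker fixed at phase 0 with
    coupling strength Kp."""
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        theta = [
            theta[i] + dt * (
                K / n * sum(math.sin(theta[j] - theta[i]) for j in range(n))
                + Kp * math.sin(-theta[i])   # pull toward the pacemaker phase
            )
            for i in range(n)
        ]
    return theta
```

With identical natural frequencies, all phases contract toward the pacemaker's phase, and a stronger `Kp` speeds up the contraction, consistent with the monotone dependence on pacemaker strength stated above.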
|
1209.0814
|
Increasing sync rate of pulse-coupled oscillators via phase response
function design: theory and application to wireless networks
|
cs.SY nlin.AO
|
This paper addresses the synchronization rate of weakly connected
pulse-coupled oscillators (PCOs). We prove that besides coupling strength, the
phase response function is also a determinant of synchronization rate. Inspired
by the result, we propose to increase the synchronization rate of PCOs by
designing the phase response function. This has important significance in
PCO-based clock synchronization of wireless networks. By designing the phase
response function, synchronization rate is increased even under a fixed
transmission power. Given that energy consumption in synchronization is
determined by the product of synchronization time and transmission power, the
new strategy reduces energy consumption in clock synchronization. QualNet
experiments confirm the theoretical results.
|
1209.0835
|
Evolution of Social-Attribute Networks: Measurements, Modeling, and
Implications using Google+
|
cs.SI cs.CY physics.soc-ph
|
Understanding social network structure and evolution has important
implications for many aspects of network and system design including
provisioning, bootstrapping trust and reputation systems via social networks,
and defenses against Sybil attacks. Several recent results suggest that
augmenting the social network structure with user attributes (e.g., location,
employer, communities of interest) can provide a more fine-grained
understanding of social networks. However, there have been few studies to
provide a systematic understanding of these effects at scale. We bridge this
gap using a unique dataset collected as the Google+ social network grew over
time since its release in late June 2011. We observe novel phenomena with
respect to both standard social network metrics and new attribute-related
metrics (that we define). We also observe interesting evolutionary patterns as
Google+ went from a bootstrap phase to a steady invitation-only stage before a
public release. Based on our empirical observations, we develop a new
generative model to jointly reproduce the social structure and the node
attributes. Using theoretical analysis and empirical evaluations, we show that
our model can accurately reproduce the social and attribute structure of real
social networks. We also demonstrate that our model provides more accurate
predictions for practical application contexts.
|
1209.0841
|
Constructing the L2-Graph for Robust Subspace Learning and Subspace
Clustering
|
cs.CV cs.MM
|
Under the framework of graph-based learning, the key to robust subspace
clustering and subspace learning is to obtain a good similarity graph that
eliminates the effects of errors and retains only connections between the data
points from the same subspace (i.e., intra-subspace data points). Recent works
achieve good performance by modeling errors into their objective functions to
remove the errors from the inputs. However, these approaches face the
limitation that the structure of the errors must be known a priori and that a
complex convex problem must be solved. In this paper, we present a novel method to
eliminate the effects of the errors from the projection space (representation)
rather than from the input space. We first prove that $\ell_1$-, $\ell_2$-,
$\ell_{\infty}$-, and nuclear-norm based linear projection spaces share the
property of Intra-subspace Projection Dominance (IPD), i.e., the coefficients
over intra-subspace data points are larger than those over inter-subspace data
points. Based on this property, we introduce a method to construct a sparse
similarity graph, called L2-Graph. The subspace clustering and subspace
learning algorithms are developed upon L2-Graph. Experiments show that L2-Graph
algorithms outperform the state-of-the-art methods for feature extraction,
image clustering, and motion segmentation in terms of accuracy, robustness, and
time efficiency.
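The IPD property suggests a simple construction: regress each point on the remaining points under an l2 penalty and keep only the largest-magnitude coefficients as edges. A small self-contained sketch (our own simplification of the L2-Graph idea; the ridge parameter and function names are illustrative):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def l2_graph(X, lam=0.1, k=2):
    """Ridge-regress each point on all the other points and keep the k
    largest-magnitude coefficients as that point's neighbours, relying on
    intra-subspace projection dominance of the l2 coefficients."""
    n, d = len(X), len(X[0])
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    graph = {}
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        # normal equations (D^T D + lam*I) c = D^T x_i, D's columns = other points
        A = [[dot(X[a], X[b]) + (lam if a == b else 0.0) for b in idx] for a in idx]
        b = [dot(X[a], X[i]) for a in idx]
        c = solve(A, b)
        top = sorted(range(len(idx)), key=lambda j: -abs(c[j]))[:k]
        graph[i] = sorted(idx[j] for j in top)
    return graph
```

On two orthogonal one-dimensional subspaces, every point's largest coefficients fall on points from its own subspace, so the resulting graph connects only intra-subspace points.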
|
1209.0846
|
Discovery Signal Design and Its Application to Peer-to-Peer
Communications in OFDMA Cellular Networks
|
cs.IT math.IT
|
This paper proposes a unique discovery signal as an enabler of peer-to-peer
(P2P) communication which overlays a cellular network and shares its resources.
Applying P2P communication to cellular network has two key issues: 1.
Conventional ad hoc P2P connections may be unstable since stringent resource
and interference coordination is usually difficult to achieve for ad hoc P2P
communications; 2. The large overhead required by P2P communication may offset
its gain. We solve these two issues by using a special discovery signal to aid
cellular network-supervised resource sharing and interference management
between cellular and P2P connections. The discovery signal, which facilitates
efficient neighbor discovery in a cellular system, consists of unmodulated
tones transmitted on a sequence of OFDM symbols. This discovery signal not only
possesses the properties of high power efficiency, high interference tolerance,
and freedom from near-far effects, but also has minimal overhead. A practical
discovery-signal-based P2P in an OFDMA cellular system is also proposed.
Numerical results are presented which show the potential of improving local
service and edge device performance in a cellular network.
|
1209.0852
|
Automatic firewall rules generator for anomaly detection systems with
Apriori algorithm
|
cs.AI
|
Network intrusion detection systems have become a crucial component of
computer systems' security infrastructures. Different methods and algorithms
have been developed and proposed in recent years to improve intrusion
detection systems. The most important issue in current systems is that they
are poor at detecting novel anomaly attacks. These kinds of attacks refer to
any action that significantly deviates from normal behaviour and is therefore
considered an intrusion. This paper proposes a model, based on data mining
techniques, to address this problem. The Apriori algorithm is used to predict
novel attacks and generate real-time rules for the firewall. The Apriori
algorithm extracts interesting correlation relationships among large sets of
data items. This paper illustrates how to use the Apriori algorithm in
intrusion detection systems to create an automatic firewall rule generator
that detects novel anomaly attacks. Apriori is the best-known algorithm for
mining association rules. This is an innovative way to find association rules
at large scale.
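The frequent-itemset core of Apriori can be sketched as follows (rule generation from the frequent itemsets, and hence the firewall-rule step, is omitted; function names and the support threshold are illustrative):

```python
from itertools import combinations
from collections import Counter

def apriori(transactions, min_support):
    """Level-wise Apriori: count candidate k-itemsets, keep the frequent
    ones, and build (k+1)-candidates only from itemsets whose k-element
    subsets are all frequent (the downward-closure pruning step)."""
    transactions = [frozenset(t) for t in transactions]
    candidates = [frozenset([i]) for i in {i for t in transactions for i in t}]
    freq, k = {}, 1
    while candidates:
        counts = Counter()
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        level = {c: m for c, m in counts.items() if m >= min_support}
        freq.update(level)
        k += 1
        candidates = list({
            a | b for a in level for b in level if len(a | b) == k
            and all(frozenset(s) in level for s in combinations(a | b, k - 1))
        })
    return freq
```

In the intrusion-detection setting each transaction would be a set of connection attributes, and frequent itemsets that co-occur with attack labels would be turned into firewall rules.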
|
1209.0853
|
Improving the K-means algorithm using improved downhill simplex search
|
cs.LG
|
The k-means algorithm is one of the well-known and most popular clustering
algorithms. K-means seeks an optimal partition of the data by minimizing the
sum of squared error with an iterative optimization procedure, which belongs to
the category of hill climbing algorithms. Hill climbing searches are
notorious for converging to local optima. Since k-means can converge to a
local optimum, different initial points generally lead to different
convergence centroids, which makes it important to start with a reasonable
initial partition in order to achieve high-quality clustering solutions.
However, in theory, there exist no efficient and universal methods for
determining such initial partitions. In this paper we try to find an optimal
initial partitioning for the k-means algorithm. To achieve this goal, we
propose a new improved version of downhill simplex search and use it to find
an optimal initial clustering; we then compare this algorithm with Genetic
Algorithm-based (GA), Genetic K-Means (GKM), Improved Genetic K-Means (IGKM)
and standard k-means algorithms.
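Plain Lloyd's k-means, whose SSE objective is what the proposed downhill simplex search would tune the initial partition for, can serve as a baseline (an illustrative sketch, not the paper's improved method; the restart loop below is a crude stand-in for the initial-partition search):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means.  Returns (centroids, SSE).  The paper's idea is
    to choose the initial centroids by minimizing this SSE with an improved
    downhill simplex search instead of relying on a random start."""
    rng = random.Random(seed)
    cent = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cent[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if its cluster went empty
                cent[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    sse = sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in cent)
              for p in points)
    return cent, sse

# Crude stand-in for the initial-partition search: restart from several
# seeds and keep the lowest-SSE run.
data = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0)]
best = min((kmeans(data, 2, seed=s) for s in range(10)), key=lambda r: r[1])
```

The restart loop makes the dependence on the initial partition visible: different seeds can converge to different centroids, and a principled search over starts (the paper's simplex approach) replaces blind restarts.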
|
1209.0863
|
Agile Missile Controller Based on Adaptive Nonlinear Backstepping
Control
|
cs.SY
|
This paper deals with a nonlinear adaptive autopilot design for agile missile
systems. In advance of the autopilot design, an investigation of the agile turn
maneuver, based on the trajectory optimization, is performed to determine state
behaviors during the agile turn phase. This investigation shows that there
exist highly nonlinear, rapidly changing dynamics and aerodynamic
uncertainties. To handle these difficulties, we propose a longitudinal
autopilot for angle-of-attack tracking based on backstepping control
methodology in conjunction with the time-delay adaptation scheme.
|
1209.0864
|
Missile Acceleration Controller Design using PI and Time-Delay Adaptive
Feedback Linearization Methodology
|
cs.SY
|
A straightforward application of feedback linearization to missile autopilot
design for acceleration control may be limited due to the nonminimum-phase
characteristics and model uncertainties. As a remedy, this paper presents a
cascade structure of an acceleration controller based on approximate feedback
linearization methodology with a time-delay adaptation scheme. The inner loop
controller is constructed by applying feedback linearization to the approximate
system which is a minimum phase system and provides the desired acceleration
signal caused by the angle-of-attack. This controller is augmented by the
time-delay adaptive law and the outer loop PI (proportional-integral)
controller in order to adaptively compensate for the feedback linearization
error caused by model uncertainty and to track the desired acceleration
signal. The performance of the proposed method is examined through numerical
simulations. Moreover, the proposed controller is tested by using an intercept
scenario in 6DOF nonlinear simulations.
|