id | title | categories | abstract |
|---|---|---|---|
1402.4861 | A Quasi-Newton Method for Large Scale Support Vector Machines | cs.LG | This paper adapts a recently developed regularized stochastic version of the
Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the
solution of support vector machine classification problems. The proposed method
is shown to converge almost surely to the optimal classifier at a rate that is
linear in expectation. Numerical results show that the proposed method exhibits
a convergence rate that degrades smoothly with the dimensionality of the
feature vectors.
|
1402.4862 | Learning the Parameters of Determinantal Point Process Kernels | stat.ML cs.LG | Determinantal point processes (DPPs) are well-suited for modeling repulsion
and have proven useful in many applications where diversity is desired. While
DPPs have many appealing properties, such as efficient sampling, learning the
parameters of a DPP is still considered a difficult problem due to the
non-convex nature of the likelihood function. In this paper, we propose using
Bayesian methods to learn the DPP kernel parameters. These methods are
applicable in large-scale and continuous DPP settings even when the exact form
of the eigendecomposition is unknown. We demonstrate the utility of our DPP
learning methods in studying the progression of diabetic neuropathy based on
spatial distribution of nerve fibers, and in studying human perception of
diversity in images.
|
1402.4869 | The Impact of Cost and Network Topology on Urban Mobility: A Study of
Public Bicycle Usage in 2 U.S. Cities | cs.SI physics.soc-ph | Understanding the drivers of urban mobility is vital for epidemiology, urban
planning, and communication networks. Human movements have so far been studied
by observing people's positions in a given space and time, though most recent
models only implicitly account for the expected costs and returns of movements.
This paper explores the explicit impact of cost and network topology on
mobility dynamics, using data from 2 city-wide public bicycle share systems in
the USA. User mobility is characterized through the distribution of trip
durations, while network topology is characterized through the pairwise
distances between stations and the popularity of stations and routes. Despite
significant differences in station density and physical layout between the 2
cities, trip durations follow remarkably similar distributions that exhibit
cost sensitive trends around pricing point boundaries, particularly with
long-term users of the system. Based on the results, recommendations for
dynamic pricing and incentive schemes are provided to positively influence
mobility patterns and guide improved planning and management of public bicycle
systems to increase uptake.
|
1402.4876 | MICA: A fast short-read aligner that takes full advantage of Intel Many
Integrated Core Architecture (MIC) | cs.DC cs.CE q-bio.GN | Background: Short-read aligners have recently gained a lot of speed by
exploiting the massive parallelism of GPUs. A rising alternative to the GPU is
Intel MIC; supercomputers like Tianhe-2, currently at the top of the TOP500 list,
are built with 48,000 MIC boards to offer ~55 PFLOPS. The CPU-like architecture of MIC allows
CPU-based software to be parallelized easily; however, the performance is often
inferior to GPU counterparts as an MIC board contains only ~60 cores (while a
GPU board typically has over a thousand cores). Results: To better utilize
MIC-enabled computers for NGS data analysis, we developed a new short-read
aligner, MICA, that is optimized in view of MIC's limitations and the extra
parallelism inside each MIC core. Experiments on aligning 150bp paired-end
reads show that MICA using one MIC board is 4.9 times faster than BWA-MEM
(using 6 cores of a top-end CPU), and slightly faster than SOAP3-dp (using a
GPU). Furthermore, MICA's simplicity allows very efficient scale-up when
multiple MIC boards are used in a node (3 cards give a 14.1-fold speedup over
BWA-MEM). Summary: MICA can be readily used by MIC-enabled supercomputers for
production purposes. We have tested MICA on Tianhe-2 with 90 WGS samples (17.47
Tera-bases), which can be aligned in an hour using fewer than 400 nodes. MICA has
impressive performance even though the current MIC is at its initial stage of
development (the next generation of MIC has been announced to release in late
2014).
|
1402.4881 | Fixed Error Asymptotics For Erasure and List Decoding | cs.IT math.IT | We derive the optimum second-order coding rates, known as second-order
capacities, for erasure and list decoding. For erasure decoding for discrete
memoryless channels, we show that the second-order capacity is
$\sqrt{V}\Phi^{-1}(\epsilon_t)$ where $V$ is the channel dispersion and
$\epsilon_t$ is the total error probability, i.e., the sum of the erasure and
undetected errors. We show numerically that the expected rate at finite
blocklength for erasure decoding can exceed the finite blocklength channel
coding rate. We also show that an analogous result holds for lossless
source coding with decoder side information, i.e., Slepian-Wolf coding. For
list decoding, we consider list codes of deterministic size that scales as
$\exp(\sqrt{n}l)$ and show that the second-order capacity is
$l+\sqrt{V}\Phi^{-1}(\epsilon)$ where $\epsilon$ is the permissible error
probability. We also consider lists of polynomial size $n^\alpha$ and derive
bounds on the third-order coding rate in terms of the order of the polynomial
$\alpha$. These bounds are tight for symmetric and singular channels. The
direct parts of the coding theorems leverage the simple threshold decoder,
and the converses are proved using variants of the hypothesis testing converse.
|
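The normal-approximation formula in the abstract above can be illustrated numerically. A minimal Python sketch, assuming hypothetical values for the channel capacity $C$ and dispersion $V$; `second_order_rate` is an illustrative helper, not code from the paper:

```python
from statistics import NormalDist

def second_order_rate(capacity, dispersion, n, eps_t):
    """Normal-approximation rate at blocklength n and total error eps_t:
    R(n) ~ C + sqrt(V / n) * Phi^{-1}(eps_t), whose second-order term
    sqrt(V) * Phi^{-1}(eps_t) is the quantity in the abstract."""
    return capacity + (dispersion / n) ** 0.5 * NormalDist().inv_cdf(eps_t)

# Hypothetical channel: C = 0.5 nats/use, V = 0.3; 1% total error budget.
rate = second_order_rate(0.5, 0.3, n=1000, eps_t=0.01)
# Phi^{-1}(eps_t) < 0 for eps_t < 1/2, so the rate backs off from capacity.
```

Since $\Phi^{-1}(\epsilon_t)$ is negative for $\epsilon_t < 1/2$, the achievable rate approaches capacity from below as the blocklength grows.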
1402.4888 | Survey on Sparse Coded Features for Content Based Face Image Retrieval | cs.IR cs.CV cs.LG stat.ML | Content-based image retrieval is a technique that uses the visual content of
images to search large-scale image databases according to users'
interests. This paper provides a comprehensive survey of recent technology used
in the area of content-based face image retrieval. As digital devices and
photo-sharing sites gain popularity, large numbers of human face photos become
available in databases. Multiple types of facial features are used to represent
discriminability on large-scale human facial image databases. Searching and mining
facial images are challenging problems and important research issues. Sparse
representation of features provides significant improvement in indexing images
related to a query image.
|
1402.4892 | Sub-Modularity of Waterfilling with Applications to Online Basestation
Allocation | cs.NI cs.DS cs.IT math.IT | We show that the popular water-filling algorithm for maximizing the mutual
information in parallel Gaussian channels is sub-modular. The sub-modularity of
water-filling algorithm is then used to derive online basestation allocation
algorithms, where mobile users are assigned to one of many possible
basestations immediately and irrevocably upon arrival without knowing the
future user information. The goal of the allocation is to maximize the sum-rate
of the system under power allocation at each basestation. We present online
algorithms with competitive ratio of at most 2 when compared to offline
algorithms that have knowledge of all future user arrivals.
|
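The water-filling allocation referenced in the abstract above, and the diminishing-returns (sub-modular) behaviour it is claimed to have, can be checked numerically. A minimal Python sketch with made-up noise levels and power budget; the bisection on the water level is an illustrative implementation choice, not the paper's algorithm:

```python
import math

def waterfill_rate(noises, power):
    """Sum-rate (nats) of water-filling over parallel Gaussian channels
    with the given noise levels and a total power budget."""
    if not noises or power <= 0:
        return 0.0
    lo, hi = 0.0, max(noises) + power   # bracket for the water level mu
    for _ in range(100):                # bisection on mu
        mu = (lo + hi) / 2
        if sum(max(0.0, mu - n) for n in noises) > power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return sum(math.log(1 + max(0.0, mu - n) / n) for n in noises)

# Diminishing returns: adding channel x to a set A of channels helps at
# least as much as adding it to a superset B (the sub-modularity that the
# online basestation-allocation argument relies on).
power, x = 2.0, 0.5
A = [1.0]
B = [1.0, 2.0]          # A is a subset of B
gain_A = waterfill_rate(A + [x], power) - waterfill_rate(A, power)
gain_B = waterfill_rate(B + [x], power) - waterfill_rate(B, power)
```

With these numbers the marginal gain on the smaller set is indeed at least that on the superset, consistent with the sub-modularity claim.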
1402.4893 | Anisotropic Mesh Adaptation for Image Representation | cs.CV math.NA | Triangular meshes have gained much interest in image representation and have
been widely used in image processing. This paper introduces a framework of
anisotropic mesh adaptation (AMA) methods to image representation and proposes
a GPRAMA method that is based on AMA and greedy-point removal (GPR) scheme.
Unlike many other methods that triangulate sample points to form the
mesh, the AMA methods start directly with a triangular mesh and then adapt the
mesh based on a user-defined metric tensor to represent the image. The AMA
methods have a clear mathematical framework and provide flexibility for both
image representation and image reconstruction. A mesh patching technique is
developed for the implementation of the GPRAMA method, which leads to an
improved version of the popular GPRFS-ED method. The GPRAMA method can achieve
better quality than the GPRFS-ED method but with lower computational cost.
|
1402.4907 | Line Maps in Cluttered Environments | cs.RO | This paper uses the smoothing and mapping framework to solve the SLAM problem
in indoor environments; focusing on how some key issues such as feature
extraction and data association can be handled by applying probabilistic
techniques. For feature extraction, an odds ratio approach to find multiple
lines from laser scans is proposed, this criterion allows to decide which model
must be merged and to output the best number of models. In addition, to solve
the data association problem a method based on the segments of each line is
proposed. Experimental results show that high quality indoor maps can be
obtained from noisy data.
|
1402.4914 | Building fast Bayesian computing machines out of intentionally
stochastic, digital parts | cs.AI cs.AR stat.CO | The brain interprets ambiguous sensory information faster and more reliably
than modern computers, using neurons that are slower and less reliable than
logic gates. But Bayesian inference, which underpins many computational models
of perception and cognition, appears computationally challenging even given
modern transistor speeds and energy budgets. The computational principles and
structures needed to narrow this gap are unknown. Here we show how to build
fast Bayesian computing machines using intentionally stochastic, digital parts,
narrowing this efficiency gap by multiple orders of magnitude. We find that by
connecting stochastic digital components according to simple mathematical
rules, one can build massively parallel, low precision circuits that solve
Bayesian inference problems and are compatible with the Poisson firing
statistics of cortical neurons. We evaluate circuits for depth and motion
perception, perceptual learning and causal reasoning, each performing inference
over 10,000+ latent variables in real time - a 1,000x speed advantage over
commodity microprocessors. These results suggest a new role for randomness in
the engineering and reverse-engineering of intelligent computation.
|
1402.4933 | On the Estimation of Channel State Transitions for Cognitive Radio
Systems | cs.IT math.IT | Coexistence by means of shared access is a cognitive radio application. The
secondary user models the slotted primary user's channel access as a Markov
process. The model parameters, i.e., the state transition probabilities
(alpha, beta), help the secondary user determine the channel occupancy, thereby
enabling it to rank the primary user channels. These parameters are
unknown and need to be estimated by secondary users for each channel. To do so,
the secondary users have to sense all the primary user channels in every time
slot, which is unrealistic for a large and sparsely allocated primary user
spectrum. With no other choice left, the secondary user has to sense a channel
at random time intervals and estimate the parametric information for all the
channels using the observed slots.
|
1402.4936 | Enhanced Secure Algorithm for Fingerprint Recognition | cs.CV | Fingerprint recognition requires a minimal effort from the user, does not
capture other information than strictly necessary for the recognition process,
and provides relatively good performance. A critical step in fingerprint
identification system is thinning of the input fingerprint image. The
performance of a minutiae extraction algorithm relies heavily on the quality of
the thinning algorithm. So, a fast fingerprint thinning algorithm is proposed.
The algorithm works directly on the gray-scale image as binarization of
fingerprint causes many spurious minutiae and also removes many important
features. The performance of the thinning algorithm is evaluated and
experimental results show that the proposed thinning algorithm is both fast and
accurate. A new minutiae-based fingerprint matching technique is proposed. The
main idea is that each fingerprint is represented by a minutiae table of just
two columns in the database. The number of different minutiae types
(terminations and bifurcations) found in each track of a certain width around
the core point of the fingerprint is recorded in this table. Each row in the
table represents a certain track: the first column records the number of
terminations in that track, and the second column records the number of
bifurcations. The algorithm is rotation and
translation invariant, and requires less storage. Experimental results show
that recognition accuracy is 98%, with Equal Error Rate (EER) of 2%. Finally,
the integrity of the data transmitted via communication channels must be
protected all the way from the scanner to the application. After applying Gaussian
noise addition, and JPEG compression with high and moderate quality factors on
the watermarked fingerprint images, recognition accuracy decreases slightly to
reach 96%.
|
1402.4963 | Vesselness via Multiple Scale Orientation Scores | cs.CV | The multi-scale Frangi vesselness filter is an established tool in (retinal)
vascular imaging. However, it cannot cope with crossings or bifurcations, since
it only looks for elongated structures. Therefore, we disentangle crossing
structures in the image via (multiple scale) invertible orientation scores. The
described vesselness filter via scale-orientation scores performs considerably
better at enhancing vessels throughout crossings and bifurcations than the
Frangi version. Both methods are evaluated on a public dataset. Performance is
measured by comparing ground truth data to the segmentation results obtained by
basic thresholding and morphological component analysis of the filtered images.
|
1402.4995 | Minimizing Running Costs in Consumption Systems | cs.SY | A standard approach to optimizing long-run running costs of discrete systems
is based on minimizing the mean-payoff, i.e., the long-run average amount of
resources ("energy") consumed per transition. However, this approach inherently
assumes that the energy source has an unbounded capacity, which is not always
realistic. For example, an autonomous robotic device has a battery of finite
capacity that has to be recharged periodically, and the total amount of energy
consumed between two successive charging cycles is bounded by the capacity.
Hence, a controller minimizing the mean-payoff must obey this restriction. In
this paper we study the controller synthesis problem for consumption systems
with a finite battery capacity, where the task of the controller is to minimize
the mean-payoff while preserving the functionality of the system encoded by a
given linear-time property. We show that an optimal controller always exists,
and it may either need only finite memory or require infinite memory (it is
decidable in polynomial time which of the two cases holds). Further, we show
how to compute an effective description of an optimal controller in polynomial
time. Finally, we consider the limit values achievable by larger and larger
battery capacity, show that these values are computable in polynomial time, and
we also analyze the corresponding rate of convergence. To the best of our
knowledge, these are the first results about optimizing the long-run running
costs in systems with bounded energy stores.
|
1402.5034 | Using the Crowd to Generate Content for Scenario-Based Serious-Games | cs.AI cs.HC | In the last decade, scenario-based serious-games have become a main tool for
learning new skills and capabilities. An important factor in the development of
such systems is the overhead in time, cost and human resources to manually
create the content for these scenarios. We focus on how to create content for
scenarios in medical, military, commerce and gaming applications where
maintaining the integrity and coherence of the content is integral for the
system's success. To do so, we present an automatic method for generating
content about everyday activities through combining computer science techniques
with the crowd. We use the crowd in three basic ways: to capture a database of
scenarios of everyday activities, to generate a database of likely replacements
for specific events within that scenario, and to evaluate the resulting
scenarios. We found that the generated scenarios were rated as reliable and
consistent by the crowd when compared to the scenarios that were originally
captured. We also compared the generated scenarios to those created by
traditional planning techniques. We found that both methods were equally
effective in generating reliable and consistent scenarios, yet the main
advantage of our approach is that the content we generate is more varied and
much easier to create. We have begun integrating this approach within a
scenario-based training application for novice investigators within law
enforcement departments to improve their questioning skills.
|
1402.5037 | Assessing the Reach and Impact of Game-Based Learning Approaches to
Cultural Competency and Behavioural Change | cs.AI | As digital games continue to be explored as solutions to educational and
behavioural challenges, the need for evaluation methodologies which support
both the unique nature of the format and the need for comparison with other
approaches continues to increase. In this workshop paper, a range of challenges
are described related specifically to the case of cultural learning using
digital games, in terms of how it may best be assessed, understood, and
sustained through an iterative process supported by research. An evaluation
framework is proposed, identifying metrics for reach and impact and their
associated challenges, as well as presenting ethical considerations and the
means to utilize evaluation outcomes within an iterative cycle, and to provide
feedback to learners. Presenting as a case study a serious game from the Mobile
Assistance for Social Inclusion and Empowerment of Immigrants with Persuasive
Learning Technologies and Social Networks (MASELTOV) project, the use of the
framework in the context of an integrative project is discussed, with emphasis
on the need to view game-based learning as a blended component of the cultural
learning process, rather than a standalone solution. The particular case of
mobile gaming is also considered within this case study, providing a platform
by which to deliver and update content in response to evaluation outcomes.
Discussion reflects upon the general challenges related to the assessment of
cultural learning, and behavioural change in more general terms, suggesting
future work should address the need to provide sustainable, research-driven
platforms for game-based learning content.
|
1402.5039 | Interpreting social cues to generate credible affective reactions of
virtual job interviewers | cs.AI cs.CY | In this paper we describe a mechanism of generating credible affective
reactions in a virtual recruiter during an interaction with a user. This is
done using communicative performance computation based on the behaviours of the
user as detected by a recognition module. The proposed software pipeline is
part of the TARDIS system which aims to aid young job seekers in acquiring job
interview related social skills. In this context, our system enables the
virtual recruiter to realistically adapt and react to the user in real-time.
|
1402.5043 | A logical model of Theory of Mind for virtual agents in the context of
job interview simulation | cs.AI | Job interview simulation with virtual agents aims at improving people's
social skills and supporting professional inclusion. In such simulators, the
virtual agent must be capable of representing and reasoning about the user's
mental state based on social cues that inform the system about his/her affects
and social attitude. In this paper, we propose a formal model of Theory of Mind
(ToM) for virtual agents in the context of human-agent interaction that focuses
on the affective dimension. It relies on a hybrid ToM that combines the two
major paradigms of the domain. Our framework is based on modal logic and
inference rules about the mental states, emotions and social relations of both
actors. Finally, we present preliminary results regarding the impact of such a
model on natural interaction in the context of job interviews simulation.
|
1402.5045 | Expressing social attitudes in virtual agents for social training games | cs.HC cs.AI cs.CY | The use of virtual agents in social coaching has increased rapidly in the
last decade. In order to train the user in different situations that can occur
in real life, the virtual agent should be able to express different social
attitudes. In this paper, we propose a model of social attitudes that enables a
virtual agent to reason on the appropriate social attitude to express during
the interaction with a user given the course of the interaction, but also the
emotions, mood and personality of the agent. Moreover, the model enables the
virtual agent to display its social attitude through its non-verbal behaviour.
The proposed model has been developed in the context of job interview
simulation. The methodology used to develop such a model combined a theoretical
and an empirical approach. Indeed, the model is based both on the literature in
Human and Social Sciences on social attitudes but also on the analysis of an
audiovisual corpus of job interviews and on post-hoc interviews with the
recruiters on their expressed attitudes during the job interview.
|
1402.5047 | Real-time Automatic Emotion Recognition from Body Gestures | cs.HC cs.CV | Although psychological research indicates that bodily expressions convey
important affective information, research in emotion recognition has to date
focused mainly on facial expression or voice analysis. In this paper we propose
an approach to real-time automatic emotion recognition from body movements. A
set of postural, kinematic, and geometrical features are extracted from
sequences of 3D skeletons and fed to a multi-class SVM classifier. The proposed
method has been assessed on data acquired through two different systems: a
professional-grade optical motion capture system, and Microsoft Kinect. The
system has been assessed on a "six emotions" recognition problem, and using a
leave-one-subject-out cross validation strategy, reached an overall recognition
rate of 61.3% which is very close to the recognition rate of 61.9% obtained by
human observers. To provide further testing of the system, two games were
developed, where one or two users have to interact to understand and express
emotions with their body.
|
1402.5051 | On Coset Leader Graphs of LDPC Codes | cs.IT cs.DM math.IT | Our main technical result is that, in the coset leader graph of a linear
binary code of block length n, the metric balls spanned by constant-weight
vectors grow exponentially slower than those in $\{0,1\}^n$.
Following the approach of Friedman and Tillich (2006), we use this fact to
improve on the first linear programming bound on the rate of LDPC codes, as the
function of their minimal distance. This improvement, combined with the
techniques of Ben-Haim and Litsyn (2006), improves the rate vs. distance bounds
for LDPC codes in a significant sub-range of relative distances.
|
1402.5073 | Exploiting Two-Dimensional Group Sparsity in 1-Bit Compressive Sensing | cs.CV cs.IT math.IT | We propose a new approach, {\it two-dimensional fused binary compressive
sensing} (2DFBCS) to recover 2D sparse piece-wise smooth signals from 1-bit
measurements, exploiting 2D group sparsity for 1-bit compressive sensing
recovery. The proposed method is a modified 2D version of the previous {\it
binary iterative hard thresholding} (2DBIHT) algorithm, where the objective
function includes a 2D one-sided $\ell_1$ (or $\ell_2$) penalty function
encouraging agreement with the observed data, an indicator function of
$K$-sparsity, and a total variation (TV) or modified TV (MTV) constraint. The
subgradient of the 2D one-sided $\ell_1$ (or $\ell_2$) penalty and the
projection onto the $K$-sparsity and TV or MTV constraint can be computed
efficiently, allowing the application of algorithms of the {\it
forward-backward splitting} (a.k.a. {\it iterative shrinkage-thresholding})
family. Experiments on the recovery of 2D sparse piece-wise smooth signals show
that the proposed approach is able to take advantage of the piece-wise
smoothness of the original signal, achieving more accurate recovery than
2DBIHT. More specifically, 2DFBCS with the MTV and the $\ell_2$ penalty
performs best amongst the algorithms tested.
|
1402.5074 | Binary Fused Compressive Sensing: 1-Bit Compressive Sensing meets Group
Sparsity | cs.CV cs.IT math.IT | We propose a new method, {\it binary fused compressive sensing} (BFCS), to
recover sparse piece-wise smooth signals from 1-bit compressive measurements.
The proposed algorithm is a modification of the previous {\it binary iterative
hard thresholding} (BIHT) algorithm, where, in addition to the sparsity
constraint, the total-variation of the recovered signal is upper constrained.
As in BIHT, the data term of the objective function is a one-sided $\ell_1$
(or $\ell_2$) norm. Experiments on the recovery of sparse piece-wise smooth
signals show that the proposed algorithm is able to take advantage of the
piece-wise smoothness of the original signal, achieving more accurate recovery
than BIHT.
|
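The BIHT iteration that BFCS builds on can be sketched in a few lines. A hedged pure-Python illustration, assuming Gaussian measurements and a made-up problem size; the step size `tau` and the final normalization are illustrative choices, not the authors' exact settings:

```python
import math
import random

def sign(v):
    return 1.0 if v >= 0 else -1.0

def biht(A, y, K, iters=100, tau=1.0):
    """Binary iterative hard thresholding (sketch): estimate a K-sparse,
    unit-norm x from 1-bit measurements y = sign(A x)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Subgradient step on the one-sided data term.
        resid = [y[i] - sign(sum(A[i][j] * x[j] for j in range(n)))
                 for i in range(m)]
        a = [x[j] + (tau / (2 * m)) * sum(A[i][j] * resid[i] for i in range(m))
             for j in range(n)]
        # Hard-threshold: keep only the K largest-magnitude entries.
        keep = set(sorted(range(n), key=lambda j: -abs(a[j]))[:K])
        x = [a[j] if j in keep else 0.0 for j in range(n)]
    # 1-bit measurements lose amplitude, so project onto the unit sphere.
    nrm = math.sqrt(sum(v * v for v in x)) or 1.0
    return [v / nrm for v in x]

random.seed(0)
m, n, K = 200, 20, 3
x_true = [0.0] * n
x_true[2], x_true[7], x_true[11] = 0.6, -0.64, 0.48   # unit-norm, K-sparse
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sign(sum(A[i][j] * x_true[j] for j in range(n))) for i in range(m)]
x_hat = biht(A, y, K)
```

BFCS, per the abstract, additionally upper-bounds the total variation of the recovered signal at each iteration; the sketch above shows only the shared BIHT core.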
1402.5076 | Robust Binary Fused Compressive Sensing using Adaptive Outlier Pursuit | cs.CV cs.IT math.IT | We propose a new method, {\it robust binary fused compressive sensing}
(RoBFCS), to recover sparse piece-wise smooth signals from 1-bit compressive
measurements. The proposed method is a modification of our previous {\it binary
fused compressive sensing} (BFCS) algorithm, which is based on the {\it binary
iterative hard thresholding} (BIHT) algorithm. As in BIHT, the data term of the
objective function is a one-sided $\ell_1$ (or $\ell_2$) norm. Experiments show
that the proposed algorithm is able to take advantage of the piece-wise
smoothness of the original signal and detect sign flips and correct them,
achieving more accurate recovery than BFCS and BIHT.
|
1402.5077 | Group-sparse Matrix Recovery | cs.LG cs.CV stat.ML | We apply the OSCAR (octagonal selection and clustering algorithms for
regression) in recovering group-sparse matrices (two-dimensional---2D---arrays)
from compressive measurements. We propose a 2D version of OSCAR (2OSCAR)
consisting of the $\ell_1$ norm and the pair-wise $\ell_{\infty}$ norm, which
is convex but non-differentiable. We show that the proximity operator of 2OSCAR
can be computed based on that of OSCAR. The 2OSCAR problem can thus be
efficiently solved by state-of-the-art proximal splitting algorithms.
Experiments on group-sparse 2D array recovery show that 2OSCAR regularization
solved by the SpaRSA algorithm is the fastest choice, while the PADMM algorithm
(with debiasing) yields the most accurate results.
|
1402.5110 | Singular Layer Transmission for Continuous-Variable Quantum Key
Distribution | quant-ph cs.IT math.IT | We develop a singular layer transmission model for continuous-variable
quantum key distribution (CVQKD). In CVQKD, the transmitted information is carried
by continuous-variable (CV) quantum states, particularly by Gaussian random
distributed position and momentum quadratures. The reliable transmission of the
quadrature components over a noisy link is a cornerstone of CVQKD protocols.
The proposed singular layer uses the singular value decomposition of the
Gaussian quantum channel, which yields an additional degree of freedom for the
phase space transmission. This additional degree of freedom can further be
exploited in a multiple-access scenario. The singular layer defines the
eigenchannels of the Gaussian physical link, which can be used for the
simultaneous reliable transmission of multiple user data streams. Our
transmission model also includes the singular interference avoider (SIA)
precoding scheme. The proposed SIA precoding scheme prevents eigenchannel
interference so as to reach optimal transmission over a Gaussian link. We
demonstrate the results through the adaptive multicarrier quadrature
division-multiuser quadrature allocation (AMQD-MQA) CVQKD multiple-access
scheme. We define the singular model of AMQD-MQA and characterize the
properties of the eigenchannel interference. We propose the SIA precoding of
Gaussian random quadratures and the optimal decoding at the receiver. We show a
random phase space constellation scheme for the Gaussian sub-channels. The
singular layer transmission provides improved simultaneous transmission rates
for the users with unconditional security in a multiple-access scenario,
particularly in crucial low signal-to-noise ratio regimes.
|
1402.5114 | Analysing Membership Profile Privacy Issues in Online Social Networks | cs.SI cs.SY | A social networking site is an on-line service that attracts a society of
subscribers and provides such users with a multiplicity of tools for
distributing personal data and creating subscriber-generated content directed
to a given user's interests and personal life. Operators of online social
networks are gradually giving out potentially sensitive information about users
and their relationships to advertisers, application developers, and
data-mining researchers. Some criminals also use information gathered from
membership profiles in social networks to break people's PINs and passwords. In
this paper, we look at the field structure of membership profiles in ten
popular social networking sites. We also analyse how private information can
easily be made public on such sites. Finally, recommendations and
countermeasures are made on how to safeguard subscribers' personal data.
|
1402.5123 | Detecting Opinions in Tweets | cs.CL cs.SI | Web 2.0 has made it possible to give an opinion on any product on the net,
leading to incessant growth in the number of documents describing the opinions
of different people circulating on the web. In this paper, we examine the
various opinions expressed in tweets and classify them as positive, negative or
neutral, using emoticons for the Bayesian method and adjectives and adverbs for
Turney's method.
|
1402.5131 | Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse
Optimization and Noisy Matrix Decomposition | cs.LG math.OC stat.ML | We propose an efficient ADMM method with guarantees for high-dimensional
problems. We provide explicit bounds for the sparse optimization problem and
the noisy matrix decomposition problem. For sparse optimization, we establish
that the modified ADMM method has an optimal convergence rate of
$\mathcal{O}(s\log d/T)$, where $s$ is the sparsity level, $d$ is the data
dimension and $T$ is the number of steps. This matches the minimax lower
bounds for sparse estimation. For matrix decomposition into sparse and low rank
components, we provide the first guarantees for any online method, and prove a
convergence rate of $\tilde{\mathcal{O}}((s+r)\beta^2(p) /T) +
\mathcal{O}(1/p)$ for a $p\times p$ matrix, where $s$ is the sparsity level,
$r$ is the rank and $\Theta(\sqrt{p})\leq \beta(p)\leq \Theta(p)$. Our
guarantees match the minimax lower bound with respect to $s,r$ and $T$. In
addition, we match the minimax lower bound with respect to the matrix dimension
$p$, i.e. $\beta(p)=\Theta(\sqrt{p})$, for many important statistical models
including the independent noise model, the linear Bayesian network and the
latent Gaussian graphical model under some conditions. Our ADMM method is based
on epoch-based annealing and consists of inexpensive steps which involve
projections on to simple norm balls. Experiments show that for both sparse
optimization and matrix decomposition problems, our algorithm outperforms the
state-of-the-art methods. In particular, we reach higher accuracy with same
time complexity.
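The "projections on to simple norm balls" mentioned above can be made concrete. Below is a standard sort-based Euclidean projection onto the $\ell_1$ ball (in the style of Duchi et al.), given as an illustrative sketch rather than the paper's exact subroutine:

```python
def project_l1_ball(v, radius=1.0):
    """Euclidean projection of vector v onto {x : ||x||_1 <= radius}.

    Sort-based O(d log d) method: find the soft threshold theta such that
    sum(max(|v_i| - theta, 0)) == radius, then shrink each coordinate
    toward zero by theta, keeping its sign.
    """
    if sum(abs(x) for x in v) <= radius:
        return list(v)  # already inside the ball
    u = sorted((abs(x) for x in v), reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        # keep the largest index with u_i > (cumsum_i - radius) / i
        if ui > (cumsum - radius) / i:
            theta = (cumsum - radius) / i
    return [max(abs(x) - theta, 0.0) * (1 if x >= 0 else -1) for x in v]
```

Each such projection costs only a sort, which is what makes per-step work "inexpensive" in projection-based ADMM schemes of this kind.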
|
1402.5161 | Statistical Constraints | cs.AI stat.ME | We introduce statistical constraints, a declarative modelling tool that links
statistics and constraint programming. We discuss two statistical constraints
and some associated filtering algorithms. Finally, we illustrate applications
to standard problems encountered in statistics and to a novel inspection
scheduling problem in which the aim is to find inspection plans with desirable
statistical properties.
|
1402.5164 | Distribution-Independent Reliable Learning | cs.LG cs.CC cs.DS | We study several questions in the reliable agnostic learning framework of
Kalai et al. (2009), which captures learning tasks in which one type of error
is costlier than others. A positive reliable classifier is one that makes no
false positive errors. The goal in the positive reliable agnostic framework is
to output a hypothesis with the following properties: (i) its false positive
error rate is at most $\epsilon$, (ii) its false negative error rate is at most
$\epsilon$ more than that of the best positive reliable classifier from the
class. A closely related notion is fully reliable agnostic learning, which
considers partial classifiers that are allowed to predict "unknown" on some
inputs. The best fully reliable partial classifier is one that makes no errors
and minimizes the probability of predicting "unknown", and the goal in fully
reliable learning is to output a hypothesis that is almost as good as the best
fully reliable partial classifier from a class.
For distribution-independent learning, the best known algorithms for PAC
learning typically utilize polynomial threshold representations, while the
state of the art agnostic learning algorithms use point-wise polynomial
approximations. We show that one-sided polynomial approximations, an
intermediate notion between polynomial threshold representations and point-wise
polynomial approximations, suffice for learning in the reliable agnostic
settings. We then show that majorities can be fully reliably learned and
disjunctions of majorities can be positive reliably learned, through
constructions of appropriate one-sided polynomial approximations. Our fully
reliable algorithm for majorities provides the first evidence that fully
reliable learning may be strictly easier than agnostic learning. Our algorithms
also satisfy strong attribute-efficiency properties, and provide smooth
tradeoffs between sample complexity and running time.
|
1402.5176 | Pareto-depth for Multiple-query Image Retrieval | cs.IR cs.LG stat.ML | Most content-based image retrieval systems consider either one single query,
or multiple queries that include the same object or represent the same semantic
information. In this paper we consider the content-based image retrieval
problem for multiple query images corresponding to different image semantics.
We propose a novel multiple-query information retrieval algorithm that combines
the Pareto front method (PFM) with efficient manifold ranking (EMR). We show
that our proposed algorithm outperforms state of the art multiple-query
retrieval algorithms on real-world image databases. We attribute this
performance improvement to concavity properties of the Pareto fronts, and prove
a theoretical result that characterizes the asymptotic concavity of the fronts.
|
1402.5180 | Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-$1$
Updates | cs.LG math.NA stat.ML | In this paper, we provide local and global convergence guarantees for
recovering CP (Candecomp/Parafac) tensor decomposition. The main step of the
proposed algorithm is a simple alternating rank-$1$ update which is the
alternating version of the tensor power iteration adapted for asymmetric
tensors. Local convergence guarantees are established for third order tensors
of rank $k$ in $d$ dimensions, when $k=o \bigl( d^{1.5} \bigr)$ and the tensor
components are incoherent. Thus, we can recover overcomplete tensor
decomposition. We also strengthen the results to global convergence guarantees
under stricter rank condition $k \le \beta d$ (for arbitrary constant $\beta >
1$) through a simple initialization procedure where the algorithm is
initialized by top singular vectors of random tensor slices. Furthermore, the
approximate local convergence guarantees for $p$-th order tensors are also
provided under rank condition $k=o \bigl( d^{p/2} \bigr)$. The guarantees also
include tight perturbation analysis given noisy tensor.
|
1402.5188 | Collision free autonomous navigation and formation building for
non-holonomic ground robots | cs.RO math.OC | The primary objective of a safe navigation algorithm is to guide the object
from its current position to the target position while avoiding any collision
with the en-route obstacles, and the appropriate obstacle avoidance strategies
are the key factors to ensure safe navigation tasks in dynamic environments. In
this report, three different obstacle avoidance strategies for safe navigation
in dynamic environments have been presented. The biologically-inspired
navigation algorithm (BINA) is efficient in terms of avoidance time. The
equidistant-based navigation algorithm (ENA) is able to achieve navigation
tasks in uncertain dynamic environments. The navigation algorithm
based on an integrated environment representation (NAIER) allows the object to
seek a safe path through obstacles in unknown dynamic environments in a
human-like fashion. The performances and features of the proposed navigation
algorithms are confirmed by extensive simulation results and experiments with a
real non-holonomic mobile robot. The algorithms have been implemented on two
real control systems: intelligent wheelchair and robotic hospital bed. The
performance of the proposed algorithms with SAM and Flexbed demonstrates their
capabilities to achieve navigation tasks in complicated real time scenarios.
The proposed algorithms are easy to implement in real time and cost
efficient. An additional study on a networked multi-robot formation building
algorithm is also presented. A constructive and easy-to-implement
decentralised control is proposed for formation building of a group of
randomly positioned objects. Furthermore, the problem of formation building with
anonymous objects is addressed. This randomised decentralised navigation
algorithm achieves the convergence to a desired configuration with probability
1.
|
1402.5192 | Power and Bit Allocation for Wireless OFDM Channels with Finite-Rate
Feedback and Subcarrier Clustering | cs.IT math.IT | The study investigated the allocation of transmission power and bits for a
point-to-point orthogonal frequency-division multiplexing channel assuming
perfect channel information at the receiver, but imperfect channel information
at the transmitter. Channel information was quantized at the receiver and was
sent back to the transmitter via a finite-rate feedback channel. Based on
limited feedback from the receiver, the corresponding transmitter adapted the
power level and/or modulation across subcarriers. To reduce the amount of
feedback, subcarriers were partitioned into different clusters and an on/off
threshold-based power allocation was applied to subcarrier clusters. In
addition, two options were proposed to interpolate a channel frequency response
from a set of quantized channel gains and apply the optimal water-filling
allocation or a greedy bit allocation based on channel interpolation. Proposed
schemes with finite feedback rates were shown to perform close to the optimal
allocation without a feedback-rate constraint. In the numerical example,
channel capacity decreased about 6% from the optimum when one bit of feedback
per subcarrier was used.
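The optimal water-filling allocation referenced above can be sketched with a textbook bisection on the water level. The function below is an assumption-laden illustration (unit noise power, real-valued gains, fixed iteration count), not the paper's scheme:

```python
def water_filling(gains, total_power, noise=1.0, iters=100):
    """Allocate power p_i = max(mu - noise/g_i, 0) with sum(p_i) = total_power.

    gains: per-subcarrier channel power gains g_i. The water level mu is
    found by bisection; per-subcarrier rate is then log2(1 + g_i p_i / noise).
    """
    inv = [noise / g for g in gains]          # per-subcarrier "floor" heights
    lo, hi = 0.0, max(inv) + total_power      # bracket for the water level
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(mu - f, 0.0) for f in inv)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    return [max(mu - f, 0.0) for f in inv]
```

Subcarriers whose floor `noise/g` lies above the water level receive zero power, which is the continuous analogue of the on/off cluster thresholding described in the abstract.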
|
1402.5194 | On Big Data Benchmarking | cs.PF cs.DB | Big data systems address the challenges of capturing, storing, managing,
analyzing, and visualizing big data. Within this context, developing benchmarks
to evaluate and compare big data systems has become an active topic for both
research and industry communities. To date, most of the state-of-the-art big
data benchmarks are designed for specific types of systems. Based on our
experience, however, we argue that considering the complexity, diversity, and
rapid evolution of big data systems, for the sake of fairness, big data
benchmarks must include diversity of data and workloads. Given this motivation,
in this paper, we first propose the key requirements and challenges in
developing big data benchmarks from the perspectives of generating data with 4V
properties (i.e. volume, velocity, variety and veracity) of big data, as well
as generating tests with comprehensive workloads for big data systems. We then
present the methodology on big data benchmarking designed to address these
challenges. Next, the state of the art is summarized and compared, followed
by our vision for future research directions.
|
1402.5196 | Synchronization-Free Delay Tomography Based on Compressed Sensing | cs.NI cs.IT math.IT | Delay tomography has so far burdened source and receiver measurement nodes in
a network with two requirements: path establishment and clock
synchronization between them. In this letter, we focus on the clock
synchronization problem in delay tomography and propose a synchronization-free
delay tomography scheme. The proposed scheme selects a path between source and
receiver measurement nodes as a reference path, which results in a loss of
equation in a conventional delay tomography problem. However, by utilizing
compressed sensing, the proposed scheme becomes robust to the loss. Simulation
experiments confirm that the proposed scheme performs comparably to a conventional
delay tomography scheme in networks with no clock synchronization between
source and receiver measurement nodes.
|
1402.5205 | A Survey on Dynamic Job Scheduling in Grid Environment Based on
Heuristic Algorithms | cs.DC cs.AI | Computational Grids are a new trend in distributed computing systems. They
allow the sharing of geographically distributed resources in an efficient way,
extending the boundaries of what we perceive as distributed computing. Various
sciences can benefit from the use of grids to solve CPU-intensive problems,
creating potential benefits to the entire society. Job scheduling is an
integrated part of parallel and distributed computing. It allows selecting
the correct resource match for a particular job and thus increases job
throughput and resource utilization. Jobs should be scheduled in an
automatic way to make the system more reliable, accessible, and less sensitive
to subsystem failures. This paper provides a survey of various heuristic
algorithms used for scheduling in grids.
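As a concrete example of the heuristics such surveys cover, the classic Min-Min rule maps, at each step, the job with the smallest minimum completion time to the resource that achieves it. The sketch below assumes a hypothetical expected-time-to-compute matrix `etc`:

```python
def min_min_schedule(etc):
    """Min-Min heuristic for mapping independent jobs to grid resources.

    etc[j][r] is the expected time to compute job j on resource r.
    Returns (assignment, makespan), where assignment[j] is the resource
    chosen for job j.
    """
    n_jobs, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res                 # time at which each resource frees up
    unassigned = set(range(n_jobs))
    assignment = {}
    while unassigned:
        # for each job, its earliest possible completion time over resources
        best = {j: min((ready[r] + etc[j][r], r) for r in range(n_res))
                for j in unassigned}
        # Min-Min: commit the job whose best completion time is smallest
        j = min(unassigned, key=lambda job: best[job][0])
        finish, r = best[j]
        assignment[j] = r
        ready[r] = finish
        unassigned.remove(j)
    return assignment, max(ready)
```

Variants such as Max-Min differ only in the outer selection rule (picking the job with the largest best completion time), which is why surveys usually present them as a family.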
|
1402.5208 | Densely Entangled Financial Systems | q-fin.RM cs.CE | In [1] Zawadoski introduces a banking network model in which the asset and
counter-party risks are treated separately and the banks hedge their assets
risks by appropriate OTC contracts. In his model, each bank has only two
counter-party neighbors, a bank fails due to the counter-party risk only if at
least one of its two neighbors default, and such a counter-party risk is a low
probability event. Informally, the author shows that the banks will hedge their
asset risks by appropriate OTC contracts, and, though it may be socially
optimal to insure against counter-party risk, in equilibrium banks will {\em
not} choose to insure this low probability event.
In this paper, we consider the above model for more general network
topologies, namely when each node has exactly 2r counter-party neighbors for
some integer r>0. We extend the analysis of [1] to show that as the number of
counter-party neighbors increases, the probability of counter-party risk also
increases, and in particular the socially optimal solution becomes privately
sustainable when each bank hedges its risk to at least n/2 banks, where n is
the number of banks in the network, i.e., when 2r is at least n/2, banks not
only hedge their asset risk but also their counter-party risk.
|
1402.5233 | Study of the Dynamic Coupling Term (\mu) in Parallel Force/Velocity
Actuated Systems | cs.RO | Presented in this paper is an actuator concept, called a Parallel
Force/Velocity Actuator (PFVA), that combines two fundamentally distinct
actuators (one using low gear reduction or even direct drive, which we will
call a Force Actuator (FA) and the other with a high reduction gear train that
we will refer to as a Velocity Actuator (VA)). The objective of this work is to
evaluate the effect of the relative scale factor, RSF, (ratio of gear
reductions) between these inputs on their dynamic coupling. We conceptually
describe a Parallel Force/Velocity Actuator (PFVA) based on a
Dual-Input-Single-Output (DISO) epicyclic gear train. We then present an
analytical formulation for the variation of the dynamic coupling term w.r.t.
RSF. Conclusions from this formulation are illustrated through a numerical
example involving a 1-DOF four-bar linkage. It is shown, both analytically and
numerically, that as we increase the RSF, the two inputs to the PFVA are
decoupled w.r.t. the inertia torques. This understanding can serve as an
important design guideline for PFVAs. The paper also presents two limitations
of this study and suggests future work based on these caveats.
|
1402.5255 | Analysing Parallel and Passive Web Browsing Behavior and its Effects on
Website Metrics | cs.HC cs.IR | Getting deeper insights into the online browsing behavior of Web users has
been a major research topic since the advent of the WWW. It provides useful
information to optimize website design, Web browser design, search engines
offerings, and online advertisement. We argue that new technologies and new
services continue to have significant effects on the way how people browse the
Web. For example, listening to music clips on YouTube or to a radio station on
Last.fm does not require users to sit in front of their computer. Social media
and networking sites like Facebook or micro-blogging sites like Twitter have
attracted new types of users that previously were less inclined to go online.
These changes in how people browse the Web feature new characteristics which
are not well understood so far. In this paper, we provide novel and unique
insights by presenting first results of DOBBS, our long-term effort to create a
comprehensive and representative dataset capturing online user behavior. We
firstly investigate the concepts of parallel browsing and passive browsing,
showing that browsing the Web is no longer a dedicated task for many users.
Based on these results, we then analyze their impact on the calculation of a
user's dwell time -- i.e., the time the user spends on a webpage -- which has
become an important metric to quantify the popularity of websites.
|
1402.5259 | An Analysis of Rank Aggregation Algorithms | cs.DS cs.GT cs.MA | Rank aggregation is an essential approach for aggregating the preferences of
multiple agents. One rule of particular interest is the Kemeny rule, which
maximises the number of pairwise agreements between the final ranking and the
existing rankings. However, Kemeny rankings are NP-hard to compute. This has
resulted in the development of various algorithms. Fortunately, NP-hardness may
not reflect the difficulty of solving problems that arise in practice. As a
result, we aim to demonstrate that the Kemeny consensus can be computed
efficiently when aggregating different rankings in real cases. In this paper, we
extend a dynamic programming algorithm originally designed for Kemeny scores. We also
provide details on the implementation of the algorithm. Finally, we present
results obtained from an empirical comparison of our algorithm and two other
popular algorithms based on real world and randomly generated problem
instances. Experimental results show the usefulness and efficiency of the
algorithm in practical settings.
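A dynamic programming formulation for exact Kemeny scores (over subsets of candidates, feasible for small candidate sets) can be sketched as follows. This is a standard bitmask DP, not necessarily the authors' extended variant:

```python
def kemeny_score(rankings):
    """Minimum Kemeny score via dynamic programming over candidate subsets.

    rankings: list of permutations of range(n). w[a][b] counts voters
    placing a before b; dp[S] is the best score achievable by any ranking
    whose first |S| positions contain exactly the candidates in bitmask S.
    """
    n = len(rankings[0])
    w = [[0] * n for _ in range(n)]
    for r in rankings:
        pos = {c: i for i, c in enumerate(r)}
        for a in range(n):
            for b in range(n):
                if a != b and pos[a] < pos[b]:
                    w[a][b] += 1
    dp = [float("inf")] * (1 << n)
    dp[0] = 0
    for S in range(1, 1 << n):
        for c in range(n):
            if S & (1 << c):
                # place c last within the prefix S: c precedes every d outside
                # S, costing one disagreement per voter who ranks d before c
                cost = sum(w[d][c] for d in range(n) if not S & (1 << d))
                dp[S] = min(dp[S], dp[S ^ (1 << c)] + cost)
    return dp[(1 << n) - 1]
```

The table has $2^n$ entries, so the approach is exponential only in the number of candidates, not in the (typically much larger) number of voters.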
|
1402.5265 | Coalitional Games in MISO Interference Channels: Epsilon-Core and
Coalition Structure Stable Set | cs.IT math.IT | The multiple-input single-output interference channel is considered. Each
transmitter is assumed to know the channels between itself and all receivers
perfectly and the receivers are assumed to treat interference as additive
noise. In this setting, noncooperative transmission does not take into account
the interference generated at other receivers which generally leads to
inefficient performance of the links. To improve this situation, we study
cooperation between the links using coalitional games. The players (links) in a
coalition either perform zero forcing transmission or Wiener filter precoding
to each other. The $\epsilon$-core is a solution concept for coalitional games
which takes into account the overhead required in coalition deviation. We
provide necessary and sufficient conditions for the strong and weak
$\epsilon$-core of our coalitional game not to be empty with zero forcing
transmission. Since the $\epsilon$-core only considers the possibility of
joint cooperation of all links, we study coalitional games in partition form in
which several distinct coalitions can form. We propose a polynomial time
distributed coalition formation algorithm based on coalition merging and prove
that its solution lies in the coalition structure stable set of our coalition
formation game. Simulation results reveal the cooperation gains for different
coalition formation complexities and deviation overhead models.
|
1402.5284 | Convergence results for projected line-search methods on varieties of
low-rank matrices via \L{}ojasiewicz inequality | math.OC cs.LG math.NA | The aim of this paper is to derive convergence results for projected
line-search methods on the real-algebraic variety $\mathcal{M}_{\le k}$ of real
$m \times n$ matrices of rank at most $k$. Such methods extend Riemannian
optimization methods, which are successfully used on the smooth manifold
$\mathcal{M}_k$ of rank-$k$ matrices, to its closure by taking steps along
gradient-related directions in the tangent cone, and afterwards projecting back
to $\mathcal{M}_{\le k}$. Considering such a method circumvents the
difficulties which arise from the nonclosedness and the unbounded curvature of
$\mathcal{M}_k$. The pointwise convergence is obtained for real-analytic
functions on the basis of a \L{}ojasiewicz inequality for the projection of the
antigradient to the tangent cone. If the derived limit point lies on the smooth
part of $\mathcal{M}_{\le k}$, i.e. in $\mathcal{M}_k$, this boils down to more
or less known results, but with the benefit that asymptotic convergence rate
estimates (for specific step-sizes) can be obtained without an a priori
curvature bound, simply from the fact that the limit lies on a smooth manifold.
At the same time, one can give a convincing justification for assuming critical
points to lie in $\mathcal{M}_k$: if $X$ is a critical point of $f$ on
$\mathcal{M}_{\le k}$, then either $X$ has rank $k$, or $\nabla f(X) = 0$.
|
1402.5310 | Toward automatic censorship detection in microblogs | cs.SI physics.soc-ph | Social media is an area where users often experience censorship through a
variety of means such as the restriction of search terms or active and
retroactive deletion of messages. In this paper we examine the feasibility of
automatically detecting censorship of microblogs. We use a network growing
model to simulate discussion over a microblog follow network and compare two
censorship strategies to simulate varying levels of message deletion. Using
topological features extracted from the resulting graphs, a classifier is
trained to detect whether or not a given communication graph has been censored.
The results show that censorship detection is feasible under empirically
measured levels of message deletion. The proposed framework can enable
automated censorship measurement and tracking, which, when combined with
aggregated citizen reports of censorship, can allow users to make informed
decisions about online communication habits.
|
1402.5323 | PDBCirclePlot: A Novel Visualization Method for Protein Structures | q-bio.QM cs.CE q-bio.BM | Interactive molecular graphics applications facilitate analysis of three
dimensional protein structures. Naturally, non-interactive 2-D snapshots of the
protein structures do not convey the same level of geometric detail. Several
2-D visualization methods have been in use to summarize structural information,
including contact maps and 2-D cartoon views. We present a new approach for 2-D
visualization of protein structures where amino acid residues are displayed on
a circle and spatially close residues are depicted by links. Furthermore,
residue-specific properties, such as conservation, accessibility, temperature
factor, can be displayed as plots on the same circular view.
|
1402.5324 | On Asymptotic Incoherence and its Implications for Compressed Sensing of
Inverse Problems | cs.IT math.IT math.NA | Recently, it has been shown that incoherence is an unrealistic assumption for
compressed sensing when applied to many inverse problems. Instead, the key
property that permits efficient recovery in such problems is so-called local
incoherence. Similarly, the standard notion of sparsity is also inadequate for
many real world problems. In particular, in many applications, the optimal
sampling strategy depends on asymptotic incoherence and the signal sparsity
structure. The purpose of this paper is to study asymptotic incoherence and its
implications towards the design of optimal sampling strategies and efficient
sparsity bases. It is determined how fast asymptotic incoherence can decay in
general for isometries. Furthermore it is shown that Fourier sampling and
wavelet sparsity, whilst globally coherent, yield optimal asymptotic
incoherence as a power law up to a constant factor. Sharp bounds on the
asymptotic incoherence for Fourier sampling with polynomial bases are also
provided. A numerical experiment is also presented to demonstrate the role of
asymptotic incoherence in finding good subsampling strategies.
|
1402.5326 | Channel Diversity needed for Vector Space Interference Alignment | cs.IT math.IT | We consider vector space interference alignment strategies over the $K$-user
interference channel and derive an upper bound on the achievable degrees of
freedom as a function of the channel diversity $L$, where the channel diversity
is modeled by $L$ real-valued parallel channels with coefficients drawn from a
non-degenerate joint distribution. The seminal work of Cadambe and Jafar shows
that when $L$ is unbounded, vector space interference alignment can achieve
$1/2$ degrees of freedom per user independent of the number of users $K$.
However wireless channels have limited diversity in practice, dictated by their
coherence time and bandwidth, and an important question is the number of
degrees of freedom achievable at finite $L$. When $K=3$ and if $L$ is finite,
Bresler et al show that the number of degrees of freedom achievable with vector
space interference alignment is bounded away from $1/2$, and the gap decreases
inversely proportional to $L$. In this paper, we show that when $K\geq4$, the
gap is significantly larger. In particular, the gap to the optimal $1/2$
degrees of freedom per user can decrease at most like $1/\sqrt{L}$, and when
$L$ is smaller than the order of $2^{(K-2)(K-3)}$, it decays at most like
$1/\sqrt[4]{L}$.
|
1402.5358 | Extended Breadth-First Search Algorithm | cs.AI | The task of artificial intelligence is to provide representation techniques
for describing problems, as well as search algorithms that can be used to
answer our questions. A widespread and elaborated model is state-space
representation, which, however, has some shortcomings. Classical search
algorithms are not applicable in practice when the state space contains even
a few tens of thousands of states. We can remedy this problem by
defining some kind of heuristic knowledge. In case of classical state-space
representation, heuristic must be defined so that it qualifies an arbitrary
state based on its "goodness," which is obviously not trivial. In our paper, we
introduce an algorithm that gives us the ability to handle huge state spaces
and to use a heuristic concept which is easier to embed into search algorithms.
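One way to make such a frontier-limiting heuristic concrete, as an assumption on our part rather than the authors' algorithm, is a breadth-first search that keeps only the heuristically best states on each level (beam-search style):

```python
def bounded_bfs(start, is_goal, successors, heuristic, beam_width=1000):
    """Breadth-first search that, on each level, retains only the
    `beam_width` states with the best (lowest) heuristic value, bounding
    memory use on huge state spaces.

    Returns the path to a goal state, or None if the pruned search fails.
    """
    level = [(start, [start])]
    seen = {start}
    while level:
        next_level = []
        for state, path in level:
            if is_goal(state):
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    next_level.append((nxt, path + [nxt]))
        # heuristic pruning: keep only the most promising states
        next_level.sort(key=lambda sp: heuristic(sp[0]))
        level = next_level[:beam_width]
    return None
```

Unlike classical BFS, this pruned variant is incomplete: a too-narrow beam can discard every path to the goal, which is the usual price of heuristic frontier limiting.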
|
1402.5359 | On the Capacity of the 2-User Interference Channel with Transmitter
Cooperation and Secrecy Constraints | cs.IT math.IT | This paper studies the value of limited rate cooperation between the
transmitters for managing interference and simultaneously ensuring secrecy, in
the 2-user Gaussian symmetric interference channel (GSIC). First, the problem
is studied in the symmetric linear deterministic IC (SLDIC) setting, and
achievable schemes are proposed, based on interference cancelation, relaying of
the other user's data bits, and transmission of random bits. In the proposed
achievable scheme, the limited rate cooperative link is used to share a
combination of data bits and random bits depending on the model parameters.
Outer bounds on the secrecy rate are also derived, using a novel partitioning
of the encoded messages and outputs depending on the relative strength of the
signal and the interference. The inner and outer bounds are derived under all
possible parameter settings. It is found that, for some parameter settings, the
inner and outer bounds match, yielding the capacity of the SLDIC under
transmitter cooperation and secrecy constraints. In some other scenarios, the
achievable rate matches with the capacity region of the 2-user SLDIC without
secrecy constraints derived by Wang and Tse [1]; thus, the proposed scheme
offers secrecy for free, in these cases. Inspired by the achievable schemes and
outer bounds in the deterministic case, achievable schemes and outer bounds are
derived in the Gaussian case. The proposed achievable scheme for the Gaussian
case is based on Marton's coding scheme and stochastic encoding along with
dummy message transmission. One of the key techniques used in the achievable
scheme for both the models is interference cancelation, which simultaneously
offers two seemingly conflicting benefits: it cancels interference and ensures
secrecy. Many of the results derived in this paper extend to the asymmetric
case also.
|
1402.5360 | Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity | cs.LG stat.AP stat.ML | In this paper, a new descriptor selection method for selecting an optimal
combination of important descriptors of sulfonamide derivatives data, named
self tuned reweighted sampling (STRS), is developed. Important descriptors are
defined as those with large absolute coefficients in a multivariate linear
regression model such as partial least squares (PLS). In this study, the
absolute values of the regression coefficients of the PLS model are used as an
index for evaluating the importance of each descriptor. Then, based on the
importance level of each descriptor, STRS sequentially selects N subsets of
descriptors from N Monte Carlo (MC) sampling runs in an iterative and
competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples
is first randomly selected to establish a regression model. Next, based on the
regression coefficients, a two-step procedure comprising rapidly decreasing
function (RDF) based enforced descriptor selection and self tuned sampling
(STS) based competitive descriptor selection is adopted to select the
important descriptors. After running the loops, a number of subsets of
descriptors are obtained, and the root mean squared error of cross validation
(RMSECV) of the PLS models established with these subsets is computed. The
subset of descriptors with the lowest RMSECV is considered the optimal
descriptor subset. The performance of the proposed algorithm is evaluated on a
sulfonamide derivative dataset. The results reveal a good characteristic of
STRS: it can usually locate an optimal combination of important descriptors
that are interpretable with respect to the biological property of interest.
Additionally, our study shows that STRS yields better predictions than
full-descriptor-set PLS modeling and Monte Carlo uninformative variable
elimination (MC-UVE).
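A heavily simplified sketch of the Monte Carlo descriptor-selection loop is given below. It stands in for PLS with a plain absolute-correlation score, so the scoring function, the vote counting, and all parameters are our assumptions, not the STRS implementation:

```python
import random

def pearson_abs(x, y):
    """Absolute Pearson correlation, used here as a stand-in for the
    magnitude of a PLS regression coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def strs_select(X, y, n_runs=50, sample_ratio=0.8, keep=2, seed=0):
    """Monte Carlo reweighted descriptor selection (simplified sketch).

    X: rows of descriptor values; y: measured activities. In each run a
    random subsample of the rows is drawn, descriptors are scored on it,
    and the `keep` best receive a vote; descriptors are finally ranked by
    how often they survived across runs.
    """
    rng = random.Random(seed)
    n_samples, n_desc = len(X), len(X[0])
    votes = [0] * n_desc
    for _ in range(n_runs):
        idx = rng.sample(range(n_samples), max(2, int(sample_ratio * n_samples)))
        scores = [pearson_abs([X[i][d] for i in idx], [y[i] for i in idx])
                  for d in range(n_desc)]
        for d in sorted(range(n_desc), key=lambda d: -scores[d])[:keep]:
            votes[d] += 1
    return sorted(range(n_desc), key=lambda d: -votes[d])
```

In the full method the per-run scores would come from a PLS fit and the kept fraction would shrink over the runs (the "rapidly decreasing function"), but the vote-and-rank skeleton is the same.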
|
1402.5371 | On the Equivalence of Two Security Notions for Hierarchical Key
Assignment Schemes in the Unconditional Setting | cs.CR cs.IT math.IT | The access control problem in a hierarchy can be solved by using a
hierarchical key assignment scheme, where each class is assigned an encryption
key and some private information. A formal security analysis for hierarchical
key assignment schemes has been traditionally considered in two different
settings, i.e., the unconditionally secure and the computationally secure
setting, and with respect to two different notions: security against key
recovery (KR-security) and security with respect to key indistinguishability
(KI-security), with the latter notion being cryptographically stronger.
Recently, Freire, Paterson and Poettering proposed strong key
indistinguishability (SKI-security) as a new security notion in the
computationally secure setting, arguing that SKI-security is strictly stronger
than KI-security in such a setting. In this paper we consider the
unconditionally secure setting for hierarchical key assignment schemes. In such
a setting the security of the schemes is not based on specific unproven
computational assumptions, i.e., it relies on the theoretical impossibility of
breaking them, despite the computational power of an adversary coalition. We
prove that, in this setting, SKI-security is not stronger than KI-security,
i.e., the two notions are fully equivalent from an information-theoretic point
of view.
|
1402.5379 | What Is It Like to Be a Brain Simulation? | cs.AI | We frame the question of what kind of subjective experience a brain
simulation would have in contrast to a biological brain. We discuss the brain
prosthesis thought experiment. We evaluate how the experience of the brain
simulation might differ from the biological, according to a number of
hypotheses about experience and the properties of simulation. Then, we identify
finer questions relating to the original inquiry, and answer them from both a
general physicalist, and panexperientialist perspective.
|
1402.5380 | Godseed: Benevolent or Malevolent? | cs.AI | It is hypothesized by some thinkers that benign looking AI objectives may
result in powerful AI drives that may pose an existential risk to human
society. We analyze this scenario and find the underlying assumptions to be
unlikely. We examine the alternative scenario of what happens when universal
goals that are not human-centric are used for designing AI agents. We follow a
design approach that tries to exclude malevolent motivations from AI agents,
however, we see that objectives that seem benevolent may pose significant risk.
We consider the following meta-rules: preserve and pervade life and culture,
maximize the number of free minds, maximize intelligence, maximize wisdom,
maximize energy production, behave like human, seek pleasure, accelerate
evolution, survive, maximize control, and maximize capital. We also discuss
various solution approaches for benevolent behavior including selfless goals,
hybrid designs, Darwinism, universal constraints, semi-autonomy, and
generalization of robot laws. A "prime directive" for AI may help in
formulating an encompassing constraint for avoiding malicious behavior. We
hypothesize that social instincts for autonomous robots may be effective such
as attachment learning. We mention multiple beneficial scenarios for an
advanced semi-autonomous AGI agent in the near future including space
exploration, automation of industries, state functions, and cities. We conclude
that a beneficial AI agent with intelligence beyond human-level is possible and
has many practical use cases.
|
1402.5388 | Strategic Resource Allocation for Competitive Influence in Social
Networks | cs.SI cs.GT | One of the main objectives of data mining is to help companies determine to
which potential customers to market and how many resources to allocate to these
potential customers. Most previous works on competitive influence in social
networks focus on the first issue. In this work, our focus is on the second
issue, i.e., we are interested in the competitive influence of marketing
campaigns that need to simultaneously decide how many resources to allocate to
their potential customers to advertise their products. Using results from game
theory, we are able to completely characterize the optimal strategic resource
allocation for the voter model of social networks and prove that the price of
competition of this game is unbounded. This work is a step towards providing a
solid foundation for marketing advertising in more general scenarios.
|
1402.5428 | An Evolutionary approach for solving Schr\"odinger Equation | cs.NE | The purpose of this paper is to present a method of solving the Schr\"odinger
Equation (SE) by Genetic Algorithms and Grammatical Evolution. The method forms
generations of trial solutions expressed in an analytical form. We illustrate
the effectiveness of this method by providing, as an example, the results of
its application to the minimal-energy problem of a quantum system, and we
compare these results with those produced by traditional analytical methods.
|
1402.5436 | Characterizing and computing stable models of logic programs: The
non-stratified case | cs.AI cs.LO | Stable Logic Programming (SLP) is an emergent, alternative style of logic
programming: each solution to a problem is represented by a stable model of a
deductive database/function-free logic program encoding the problem itself.
Several implementations now exist for stable logic programming, and their
performance is rapidly improving. To make SLP generally applicable, it should
be possible to check for consistency (i.e., existence of stable models) of the
input program before attempting to answer queries. In the literature, only
rather strong sufficient conditions have been proposed for consistency, e.g.,
stratification. This paper extends these results in several directions. First,
the syntactic features of programs, viz. cyclic negative dependencies,
affecting the existence of stable models are characterized, and their relevance
is discussed. Next, a new graph representation of logic programs, the Extended
Dependency Graph (EDG), is introduced, which conveys enough information for
reasoning about stable models (while the traditional Dependency Graph does
not). Finally, we show that the problem of the existence of stable models can
be reformulated in terms of coloring of the EDG.
|
1402.5443 | Topicality and Social Impact: Diverse Messages but Focused Messengers | cs.SI cs.CY physics.soc-ph | Are users who comment on a variety of matters more likely to achieve high
influence than those who delve into one focused field? Do general Twitter
hashtags, such as #lol, tend to be more popular than novel ones, such as
#instantlyinlove? Questions like these demand a way to detect topics hidden
behind messages associated with an individual or a hashtag, and a gauge of
similarity among these topics. Here we develop such an approach to identify
clusters of similar hashtags by detecting communities in the hashtag
co-occurrence network. Then the topical diversity of a user's interests is
quantified by the entropy of her hashtags across different topic clusters. A
similar measure is applied to hashtags, based on co-occurring tags. We find
that high topical diversity of early adopters or co-occurring tags implies high
future popularity of hashtags. In contrast, low diversity helps an individual
accumulate social influence. In short, diverse messages and focused messengers
are more likely to gain impact.
|
1402.5450 | State Estimation for a Humanoid Robot | cs.RO | This paper introduces a framework for state estimation on a humanoid robot
platform using only common proprioceptive sensors and knowledge of leg
kinematics. The presented approach extends that detailed in [1] on a quadruped
platform by incorporating the rotational constraints imposed by the humanoid's
flat feet. As in previous work, the proposed Extended Kalman Filter (EKF)
accommodates contact switching and makes no assumptions about gait or terrain,
making it applicable on any humanoid platform for use in any task. The filter
employs a sensor-based prediction model which uses inertial data from an IMU
and corrects for integrated error using a kinematics-based measurement model
which relies on joint encoders and a kinematic model to determine the relative
position and orientation of the feet. A nonlinear observability analysis is
performed on both the original and updated filters and it is concluded that the
new filter significantly simplifies singular cases and improves the
observability characteristics of the system. Results on simulated walking and
squatting datasets demonstrate the performance gain of the flat-foot filter as
well as confirm the results of the presented observability analysis.
|
1402.5456 | Energy Management for a User Interactive Smart Community: A Stackelberg
Game Approach | cs.SY cs.GT | This paper studies a three-party energy management problem in a user
interactive smart community that consists of a large number of residential
units (RUs) with distributed energy resources (DERs), a shared facility
controller (SFC) and the main grid. A Stackelberg game is formulated to benefit
both the SFC and RUs, in terms of incurred cost and achieved utility
respectively, from their energy trading with each other and the grid. The
properties of the game are studied and it is shown that there exists a unique
Stackelberg equilibrium (SE). A novel algorithm is proposed that can be
implemented in a distributed fashion by both RUs and the SFC to reach the SE.
The convergence of the algorithm is also proven, and shown to always reach the
SE. Numerical examples are used to assess the properties and effectiveness of
the proposed scheme.
|
1402.5458 | Information Aggregation in Exponential Family Markets | cs.AI cs.GT stat.ML | We consider the design of prediction market mechanisms known as automated
market makers. We show that we can design these mechanisms via the mold of
\emph{exponential family distributions}, a popular and well-studied probability
distribution template used in statistics. We give a full development of this
relationship and explore a range of benefits. We draw connections between the
information aggregation of market prices and the belief aggregation of learning
agents that rely on exponential family distributions. We develop a very natural
analysis of the market behavior as well as the price equilibrium under the
assumption that the traders exhibit risk aversion according to exponential
utility. We also consider similar aspects under alternative models, such as
when traders are budget constrained.
|
1402.5461 | Preliminary Studies on Force/Motion Control of Intelligent Mechanical
Systems | cs.RO | To rationalize the relatively high investment that industrial automation
systems entail, research in the field of intelligent machines should target
high value functions such as fettling, die-finishing, deburring, and
fixtureless manufacturing. For achieving this goal, past work has concentrated
on force control algorithms at the system level with limited focus on
performance expansion at the actuator level. We present a comprehensive
literature review on robot force control, including algorithms, specialized
actuators, and robot control software. A robot force control testbed was
developed using Schunk's PowerCube 6-DOF Arm and a six-axis ATI force/torque
sensor. Using parameter identification experiments, manipulator module inertias
and the motor torque constant were estimated. Experiments were conducted to
study the practical issues involved in implementing stable contact transitions
and programmable endpoint impedance. Applications to human augmentation,
virtual fixtures, and teleoperation are discussed. These experiments are used
as a vehicle to understand the performance improvement achievable at the
actuator level. The approach at UTRRG has been to maximize the choices within
the actuator to enhance its intelligence. Drawing on this 20-year research
history in electromechanical actuator architecture, we propose a new concept
that mixes two inputs, distinct in their velocity ratios, within the same dual
actuator called a Force/Motion Actuator (FMA). Detailed kinematic and dynamic
models of this dual actuator are developed. The actuator performance is
evaluated using simulations with an output velocity specification and resolving
input trajectories using a minimum-norm solution. It is shown that a design
choice of 14:1 motion scaling between the two inputs results in good
sensitivity to output force disturbances without compromising velocity tracking
performance.
|
1402.5466 | Predictive Comparative QSAR analysis of Sulfathiazole Analogues as
Mycobacterium Tuberculosis H37RV Inhibitors | cs.CE q-bio.QM | The antitubercular activity of a series of sulfathiazole derivatives was
subjected to Quantitative Structure-Activity Relationship (QSAR) analysis in an
attempt to derive and understand a correlation between biological activity as
the dependent variable and various descriptors as independent variables. QSAR
models were generated using 28 compounds. Several statistical regression
expressions were obtained using Partial Least Squares (PLS) regression,
Multiple Linear Regression (MLR) and Principal Component Regression (PCR)
methods. Among these methods, the PLS method has shown very promising results
compared to the other two. A QSAR model was generated by the PLS method from a
training set of 18 molecules, with a correlation coefficient (r square) of
0.9191, a significant cross-validated correlation coefficient (q square) of
0.8300, an F-test value of 53.5783, an r square for the external test set
(pred_r square) of -3.6132, a standard error of prediction (pred_r_se) of
1.4859, and 14 degrees of freedom.
|
1402.5468 | Uncertainty Principle in Control Theory, Part I: Analysis of Performance
Limitations | cs.SY | This paper investigates performance limitations and tradeoffs in the control
design for linear time-invariant systems. It is shown that control
specifications in the time domain and in the frequency domain are always
mutually exclusive, as determined by uncertainty relations. The uncertainty
principle from quantum mechanics and harmonic analysis therefore embeds itself
inherently in control theory. The relations among transient specifications,
system bandwidth and control energy are obtained within the framework of the
uncertainty principle. If the control system is provided with a large bandwidth
or great control energy, then it can ensure transient specifications as good
as desired. Such a control system can be approximated by prolate spheroidal
wave functions.
The obtained results are also applicable to filter design due to the duality of
filtering and control.
|
1402.5475 | Soft Consistency Reconstruction: A Robust 1-bit Compressive Sensing
Algorithm | cs.IT math.IT | A class of recovering algorithms for 1-bit compressive sensing (CS) named
Soft Consistency Reconstructions (SCRs) are proposed. Recognizing that CS
recovery is essentially an optimization problem, we endeavor to improve the
characteristics of the objective function under noisy environments. With a
family of re-designed consistency criteria, SCRs achieve remarkable
counter-noise performance gain over the existing counterparts, thus acquiring
the desired robustness in many real-world applications. The benefits of soft
decisions are exemplified through structural analysis of the objective
function, with intuition provided for better understanding. As expected,
through comparisons with existing methods in simulations, SCRs demonstrate
preferable robustness against noise in the low signal-to-noise ratio (SNR)
regime, while maintaining comparable performance in the high SNR regime.
|
1402.5477 | Mobile Conductance and Gossip-based Information Spreading in Mobile
Networks | cs.SI | In this paper, we propose a general analytical framework for information
spreading in mobile networks based on a new performance metric, mobile
conductance, which allows us to separate the details of mobility models from
the study of mobile spreading time. We derive a general result for the
information spreading time in mobile networks in terms of this new metric, and
instantiate it through several popular mobility models. Large scale network
simulation is conducted to verify our analysis.
|
1402.5481 | From Predictive to Prescriptive Analytics | stat.ML cs.LG math.OC | In this paper, we combine ideas from machine learning (ML) and operations
research and management science (OR/MS) in developing a framework, along with
specific methods, for using data to prescribe optimal decisions in OR/MS
problems. In a departure from other work on data-driven optimization and
reflecting our practical experience with the data available in applications of
OR/MS, we consider data consisting, not only of observations of quantities with
direct effect on costs/revenues, such as demand or returns, but predominantly
of observations of associated auxiliary quantities. The main problem of
interest is a conditional stochastic optimization problem, given imperfect
observations, where the joint probability distributions that specify the
problem are unknown. We demonstrate that our proposed solution methods, which
are inspired by ML methods such as local regression, CART, and random forests,
are generally applicable to a wide range of decision problems. We prove that
they are tractable and asymptotically optimal even when data is not iid and may
be censored. We extend this to the case where decision variables may directly
affect uncertainty in unknown ways, such as pricing's effect on demand. As an
analogue to R^2, we develop a metric P termed the coefficient of
prescriptiveness to measure the prescriptive content of data and the efficacy
of a policy from an operations perspective. To demonstrate the power of our
approach in a real-world setting we study an inventory management problem faced
by the distribution arm of an international media conglomerate, which ships an
average of 1 billion units per year. We leverage internal data and public online
data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational
decisions that outperform baseline measures. Specifically, the data we collect,
leveraged by our methods, accounts for an 88\% improvement as measured by our
P.
|
1402.5483 | Energy Efficient Joint Source and Channel Sensing in Cognitive Radio
Sensor Networks | cs.IT math.IT | A novel concept of Joint Source and Channel Sensing (JSCS) is introduced in
the context of Cognitive Radio Sensor Networks (CRSN). Every sensor node has
two basic tasks: application-oriented source sensing and ambient-oriented
channel sensing. The former is to collect the application-specific source
information and deliver it to the access point within some limit of distortion,
while the latter is to find the vacant channels and provide spectrum access
opportunities for the sensed source information. With in-depth exploration, we
find that these two tasks are actually interrelated when taking into account
the energy constraints. The main focus of this paper is to minimize the total
power consumed by these two tasks while bounding the distortion of the
application-specific source information. Firstly, we present a specific slotted
sensing and transmission scheme, and establish the multi-task power consumption
model. Secondly, we jointly analyze the interplay between these two sensing
tasks, and then propose a proper sensing and power allocation scheme to
minimize the total power consumption. Finally, simulation results are given to
validate the proposed scheme.
|
1402.5486 | Rateless-Coding-Assisted Multi-Packet Spreading over Mobile Networks | cs.NI cs.IT math.IT | A novel Rateless-coding-assisted Multi-Packet Relaying (RMPR) protocol is
proposed for large-size data spreading in mobile wireless networks. With this
lightweight and robust protocol, the packet redundancy is reduced by a factor
of $\sqrt n$, while the spreading time is reduced at least by a factor of $\ln
(n)$. Closed-form bounds and explicit non-asymptotic results are presented,
which are further validated through simulations. In addition, the packet
duplication phenomenon in the network setting is analyzed for the first time.
|
1402.5488 | Distributed Spectrum-Aware Clustering in Cognitive Radio Sensor Networks | cs.NI cs.IT math.IT | A novel Distributed Spectrum-Aware Clustering (DSAC) scheme is proposed in
the context of Cognitive Radio Sensor Networks (CRSN). DSAC aims at forming
energy efficient clusters in a self-organized fashion while restricting
interference to Primary User (PU) systems. The spectrum-aware clustered
structure is presented where the communications consist of intra-cluster
aggregation and inter-cluster relaying. In order to save communication power,
the optimal number of clusters is derived and the idea of groupwise constrained
clustering is introduced to minimize intra-cluster distance under
spectrum-aware constraint. In terms of practical implementation, DSAC
demonstrates preferable scalability and stability because of its low complexity
and quick convergence under dynamic PU activity. Finally, simulation results
are given to validate the proposed scheme.
|
1402.5497 | Efficient Semidefinite Spectral Clustering via Lagrange Duality | cs.LG cs.CV | We propose an efficient approach to semidefinite spectral clustering (SSC),
which addresses the Frobenius normalization with the positive semidefinite
(p.s.d.) constraint for spectral clustering. Compared with the original
Frobenius norm approximation based algorithm, the proposed algorithm can more
accurately find the closest doubly stochastic approximation to the affinity
matrix by considering the p.s.d. constraint. In this paper, SSC is formulated
as a semidefinite programming (SDP) problem. In order to solve the high
computational complexity of SDP, we present a dual algorithm based on the
Lagrange dual formalization. Two versions of the proposed algorithm are
proffered: one with less memory usage and the other with faster convergence
rate. The proposed algorithm has much lower time complexity than that of the
standard interior-point based SDP solvers. Experimental results on both UCI
data sets and real-world image data sets demonstrate that 1) compared with the
state-of-the-art spectral clustering methods, the proposed algorithm achieves
better clustering performance; and 2) our algorithm is much more efficient and
can solve larger-scale SSC problems than those standard interior-point SDP
solvers.
|
1402.5500 | Handbook of Network Analysis [KONECT -- the Koblenz Network Collection] | cs.SI physics.soc-ph | This is the handbook for the KONECT project, the \emph{Koblenz Network
Collection}, a scientific project to collect, analyse, and provide network
datasets for researchers in all related fields of research, by the Namur Center
for Complex Systems (naXys) at the University of Namur, Belgium, with web
hosting provided by the Institute for Web Science and Technologies (WeST) at
the University of Koblenz--Landau, Germany.
|
1402.5503 | Distributed Compressed Wideband Sensing in Cognitive Radio Sensor
Networks | cs.NI cs.IT math.IT | A novel distributed compressed wideband sensing scheme for Cognitive Radio
Sensor Networks (CRSN) is proposed in this paper. Taking advantage of the
distributive nature of CRSN, the proposed scheme deploys only a single
narrowband sampler with an ultra-low sampling rate at each node to accomplish the
wideband spectrum sensing. First, the practical structure of the compressed
sampler at each node is described in detail. Second, we show how the Fusion
Center (FC) exploits the sampled signals with their spectrum randomly-aliased
to detect the global wideband spectrum activity. Finally, the proposed scheme
is validated through extensive simulations, which show that it is particularly
suitable for CRSN.
|
1402.5511 | A Generalized Robust Filtering Framework for Nonlinear
Differential-Algebraic Systems | cs.SY math.OC | A generalized dynamical robust nonlinear filtering framework is established
for a class of Lipschitz differential algebraic systems, in which the
nonlinearities appear both in the state and measured output equations. The
system is assumed to be affected by norm-bounded disturbance and to have both
norm-bounded uncertainties in the realization matrices as well as nonlinear
model uncertainties. We synthesize a robust $H_\infty$ filter through semidefinite
programming and strict linear matrix inequalities (LMIs). The admissible
Lipschitz constants of the nonlinear functions are maximized through LMI
optimization. The resulting $H_\infty$ filter guarantees asymptotic stability of
the estimation error dynamics with prespecified disturbance attenuation level
and is robust against time-varying parametric uncertainties as well as
Lipschitz nonlinear additive uncertainty. Explicit bound on the tolerable
nonlinear uncertainty is derived based on a norm-wise robustness analysis.
|
1402.5516 | Minimizing Seed Set Selection with Probabilistic Coverage Guarantee in a
Social Network | cs.SI cs.DS | A topic propagating in a social network reaches its tipping point if the
number of users discussing it in the network exceeds a critical threshold such
that a wide cascade on the topic is likely to occur. In this paper, we consider
the task of selecting initial seed users of a topic with minimum size so that
with a guaranteed probability the number of users discussing the topic would
reach a given threshold. We formulate the task as an optimization problem
called seed minimization with probabilistic coverage guarantee (SM-PCG). This
problem departs from the previous studies on social influence maximization or
seed minimization because it considers influence coverage with probabilistic
guarantees instead of guarantees on expected influence coverage. We show that
the problem is not submodular, and thus is harder than previously studied
problems based on submodular function optimization. We provide an approximation
algorithm and show that it approximates the optimal solution with both a
multiplicative ratio and an additive error. The multiplicative ratio is tight
while the additive error would be small if influence coverage distributions of
certain seed sets are well concentrated. For one-way bipartite graphs we
analytically prove the concentration condition and obtain an approximation
algorithm with an $O(\log n)$ multiplicative ratio and an $O(\sqrt{n})$
additive error, where $n$ is the total number of nodes in the social graph.
Moreover, we empirically verify the concentration condition in real-world
networks and experimentally demonstrate the effectiveness of our proposed
algorithm comparing to commonly adopted benchmark algorithms.
|
1402.5521 | Parallel Selective Algorithms for Big Data Optimization | cs.DC cs.IT cs.NA math.IT math.OC | We propose a decomposition framework for the parallel optimization of the sum
of a differentiable (possibly nonconvex) function and a (block) separable
nonsmooth, convex one. The latter term is usually employed to enforce structure
in the solution, typically sparsity. Our framework is very flexible and
includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e.,
sequential) ones, as well as virtually all possibilities "in between" with only
a subset of variables updated at each iteration. Our theoretical convergence
results improve on existing ones, and numerical results on LASSO, logistic
regression, and some nonconvex quadratic problems show that the new method
consistently outperforms existing algorithms.
|
1402.5564 | Structure Tensor Based Image Interpolation Method | cs.CV | Feature preserving image interpolation is an active area in image processing
field. In this paper a new direct edge directed image super-resolution
algorithm based on structure tensors is proposed. Using an isotropic Gaussian
filter, the structure tensor at each pixel of the input image is computed and
the pixels are classified into three distinct classes: uniform regions, corners
and edges, according to the eigenvalues of the structure tensor. Due to the
application of the isotropic Gaussian filter, the classification is robust to
noise present in the image. Based on the tangent eigenvector of the structure
tensor, the edge direction is determined and used for interpolation along the
edges. In comparison to some previous edge directed image interpolation
methods, the proposed method achieves higher quality in both subjective and
objective aspects. Also, the proposed method outperforms previous methods in
the case of noisy and JPEG-compressed images. Furthermore, without the need
for optimization in the process, the algorithm can achieve higher speed.
|
1402.5565 | Semi-Supervised Nonlinear Distance Metric Learning via Forests of
Max-Margin Cluster Hierarchies | stat.ML cs.IR cs.LG | Metric learning is a key problem for many data mining and machine learning
applications, and has long been dominated by Mahalanobis methods. Recent
advances in nonlinear metric learning have demonstrated the potential power of
non-Mahalanobis distance functions, particularly tree-based functions. We
propose a novel nonlinear metric learning method that uses an iterative,
hierarchical variant of semi-supervised max-margin clustering to construct a
forest of cluster hierarchies, where each individual hierarchy can be
interpreted as a weak metric over the data. By introducing randomness during
hierarchy training and combining the output of many of the resulting
semi-random weak hierarchy metrics, we can obtain a powerful and robust
nonlinear metric model. This method has two primary contributions: first, it is
semi-supervised, incorporating information from both constrained and
unconstrained points. Second, we take a relaxed approach to constraint
satisfaction, allowing the method to satisfy different subsets of the
constraints at different levels of the hierarchy rather than attempting to
simultaneously satisfy all of them. This leads to a more robust learning
algorithm. We compare our method to a number of state-of-the-art benchmarks on
$k$-nearest neighbor classification, large-scale image retrieval and
semi-supervised clustering problems, and find that our algorithm yields results
comparable or superior to the state-of-the-art, and is significantly more
robust to noise.
|
1402.5572 | Collective oscillation period of inter-coupled biological negative
cyclic feedback oscillators | cs.SY | A number of biological rhythms originate from networks comprised of multiple
cellular oscillators. However, analytical results are still lacking on the
collective oscillation period of inter-coupled gene regulatory oscillators,
which, as has been reported, may be different from that of an autonomous
oscillator. Based on cyclic feedback oscillators, we analyze the collective
oscillation pattern of coupled cellular oscillators. First we give a condition
under which the oscillator network exhibits oscillatory and synchronized
behavior. Then we estimate the collective oscillation period based on a novel
multivariable harmonic balance technique. Analytical results are derived in
terms of biochemical parameters, thus giving insight into the basic mechanism
of biological oscillation and providing guidance in synthetic biology design.
|
1402.5584 | Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse
Regression | math.ST cs.IT math.IT stat.ML stat.TH | In this paper, we address the challenging problem of selecting tuning
parameters for high-dimensional sparse regression. We propose a simple and
computationally efficient method, called path thresholding (PaTh), that
transforms any tuning parameter-dependent sparse regression algorithm into an
asymptotically tuning-free sparse regression algorithm. More specifically, we
prove that, as the problem size becomes large (in the number of variables and
in the number of observations), PaTh performs accurate sparse regression, under
appropriate conditions, without specifying a tuning parameter. In
finite-dimensional settings, we demonstrate that PaTh can alleviate the
computational burden of model selection algorithms by significantly reducing
the search space of tuning parameters.
|
1402.5586 | Adaptive Zero Reaction Motion Control for Free-Floating Space
Manipulators | cs.SY | This paper investigates adaptive zero reaction motion control for
free-floating space manipulators with uncertain kinematics and dynamics. The
challenge in deriving the adaptive reaction null-space (RNS) based control
scheme is that it is difficult to obtain a linear expression, which is the
basis of the adaptive control. The main contribution of this paper is that we
skillfully obtain such a linear expression, based on which, an adaptive version
of the RNS-based controller (referred to as the adaptive zero reaction motion
controller in the sequel) is developed at the velocity level, taking into
account both the kinematic and dynamic uncertainties. It is shown that the
proposed controller achieves both the spacecraft attitude regulation and
end-effector trajectory tracking. The performance of the proposed adaptive
controller is shown by numerical simulations with a planar 3-DOF
(degree-of-freedom) space manipulator.
|
1402.5593 | Reciprocity in Gift-Exchange-Games | cs.AI | This paper presents an analysis of data from a gift-exchange-game experiment.
The experiment was described in `The Impact of Social Comparisons on
Reciprocity' by G\"achter et al. 2012. Since this paper uses state-of-the-art
data science techniques, the results provide a different point of view on the
problem. As already shown in the relevant literature from experimental
economics, human decisions deviate from rational payoff maximization. The
average gift rate was 31%; under no condition was the gift rate zero. Further,
we derive some special findings and calculate their significance.
|
1402.5596 | Exact Post Model Selection Inference for Marginal Screening | stat.ME cs.LG math.ST stat.ML stat.TH | We develop a framework for post model selection inference, via marginal
screening, in linear regression. At the core of this framework is a result that
characterizes the exact distribution of linear functions of the response $y$,
conditional on the model being selected (``condition on selection" framework).
This allows us to construct valid confidence intervals and hypothesis tests for
regression coefficients that account for the selection procedure. In contrast
to recent work in high-dimensional statistics, our results are exact
(non-asymptotic) and require no eigenvalue-like assumptions on the design
matrix $X$. Furthermore, the computational cost of marginal regression,
constructing confidence intervals and hypothesis testing is negligible compared
to the cost of linear regression, thus making our methods particularly suitable
for extremely large datasets. Although we focus on marginal screening to
illustrate the applicability of the condition on selection framework, this
framework is much more broadly applicable. We show how to apply the proposed
framework to several other selection procedures including orthogonal matching
pursuit, non-negative least squares, and marginal screening+Lasso.
|
1402.5599 | Formal Specification and Quantitative Analysis of a Constellation of
Navigation Satellites | cs.SY | Navigation satellites are a core component of navigation satellite based
systems such as GPS, GLONASS and Galileo which provide location and timing
information for a variety of uses. Such satellites are designed to operate on
orbit performing their tasks, and have lifetimes of 10 years or more. Reliability,
availability and maintainability (RAM) analysis of systems has been
indispensable in the design phase of satellites in order to achieve minimum
failures or to increase mean time between failures (MTBF) and thus to plan
maintenance strategies, optimise reliability and maximise availability. In this
paper, we present formal models of both a single satellite and a navigation
satellite constellation, together with logical specifications of their
reliability, availability and maintainability properties. The probabilistic
model checker PRISM has been used to perform automated analysis of these
quantitative properties.
|
1402.5604 | Three-Dimensional Integrated Guidance and Control Based on Small-Gain
Theorem | cs.SY | A three-dimensional (3D) integrated guidance and control (IGC) design
approach based on the small-gain theorem is proposed in this paper. The 3D IGC
model is formulated by combining nonlinear pursuer dynamics with the nonlinear
dynamics describing pursuit-evasion motion. The small-gain theorem and
input-to-state stability (ISS) theory are iteratively utilized to design the
desired attack angle, sideslip angle and attitude angular rates (virtual
controls), and eventually an IGC law is proposed. Theoretical analysis shows
that the IGC approach makes the line-of-sight (LOS) rate converge into a small
neighborhood of zero, and that the stability of the overall system is
guaranteed as well.
|
1402.5619 | A Novel Histogram Based Robust Image Registration Technique | cs.CV | In this paper, a method for Automatic Image Registration (AIR) based on
histograms is proposed. Automatic image registration is one of the crucial
steps in the analysis of remotely sensed data. A newly acquired image must be
transformed, using image registration techniques, to match the orientation and
scale of previous related images. The new approach combines several
segmentations of the pair of images to be registered. A relaxation parameter on
the delineation of the histogram modes is introduced, followed by
characterization of the extracted objects through their area, axis ratio,
perimeter and fractal dimension. The matched objects are used for rotation and
translation estimation. The method allows for the registration of pairs of
images with differences in rotation and translation, and achieves subpixel
accuracy.
|
1402.5623 | Localization of License Plate Using Morphological Operations | cs.CV | There are currently millions of vehicles on the roads worldwide. Speeding,
vehicle theft, violations of traffic rules in public, and entry of unauthorized
persons into restricted areas keep increasing. To guard against these criminal
activities, we need an automatic public security system. Each vehicle has its
own Vehicle Identification Number (VIN) as its primary identifier. The VIN is
in effect a license number which grants a legal license to participate in
public traffic. The goal of this paper is to identify a vehicle with the help
of its License Plate (LP). The License Plate Recognition System (LPRS) is one
of the most important parts of an Intelligent Transportation System (ITS), and
its first task is to locate the LP. In this paper, the drawbacks of certain
existing algorithms are overcome by the proposed morphological operations for
an LPRS. Morphological operations were chosen for their high efficiency,
noise-filtering capacity, accuracy, exact localization of the LP, and speed.
|
1402.5634 | To go deep or wide in learning? | cs.LG | To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach has computational overheads in learning multiple layers and
fine-tuning of the model. In this paper, we propose an approach called wide
learning based on arc-cosine kernels, that learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with a single layer outperforms both single-layer and
finite-width deep architectures on some benchmark datasets.
|
1402.5639 | Decentralized Rendezvous of Nonholonomic Robots with Sensing and
Connectivity Constraints | cs.SY cs.RO | A group of wheeled robots with nonholonomic constraints is considered to
rendezvous at a common specified setpoint with a desired orientation while
maintaining network connectivity and ensuring collision avoidance among the
robots. Given communication and sensing constraints for each robot, only a
subset of the robots are aware or informed of the global destination, and the
remaining robots must move within the network connectivity constraint so that
the informed robots can guide the group to the goal. The mobile robots are also
required to avoid collisions with each other outside a neighborhood of the
common rendezvous point. To achieve the rendezvous control objective,
decentralized time-varying controllers are developed based on a navigation
function framework to steer the robots to perform rendezvous while preserving
network connectivity and ensuring collision avoidance. Only local sensing
feedback, which includes position feedback from immediate neighbors and
absolute orientation measurement, is used to navigate the robots and enables
radio silence during navigation. Simulation results demonstrate the performance
of the developed approach.
|
1402.5644 | Containment Control for a Social Network with State-Dependent
Connectivity | cs.SY | Social interactions influence our thoughts, opinions and actions. In this
paper, social interactions are studied within a group of individuals composed
of influential social leaders and followers. Each person is assumed to maintain
a social state, which can be an emotional state or an opinion. Followers update
their social states based on the states of local neighbors, while social
leaders maintain a constant desired state. Social interactions are modeled as a
general directed graph where each directed edge represents an influence from
one person to another. Motivated by the non-local property of fractional-order
systems, the social response of individuals in the network is modeled by
fractional-order dynamics whose states depend on influences from local
neighbors and past experiences. A decentralized influence method is then
developed to maintain existing social influence between individuals (i.e.,
without isolating peers in the group) and to influence the social group to a
common desired state (i.e., within a convex hull spanned by social leaders).
Mittag-Leffler stability methods are used to prove asymptotic stability of the
networked fractional-order system.
|
1402.5666 | Dynamic Rate and Channel Selection in Cognitive Radio Systems | cs.IT cs.LG math.IT | In this paper, we investigate dynamic channel and rate selection in cognitive
radio systems which exploit a large number of channels free from primary users.
In such systems, transmitters may rapidly change the selected (channel, rate)
pair to opportunistically learn and track the pair offering the highest
throughput. We formulate the problem of sequential channel and rate selection
as an online optimization problem, and show its equivalence to a {\it
structured} Multi-Armed Bandit problem. The structure stems from inherent
properties of the achieved throughput as a function of the selected channel and
rate. We derive fundamental performance limits satisfied by {\it any} channel
and rate adaptation algorithm, and propose algorithms that achieve (or
approach) these limits. In turn, the proposed algorithms optimally exploit the
inherent structure of the throughput. We illustrate the efficiency of our
algorithms using both test-bed and simulation experiments, in both stationary
and non-stationary radio environments. In stationary environments, the packet
successful transmission probabilities at the various channel and rate pairs do
not evolve over time, whereas in non-stationary environments, they may evolve.
In practical scenarios, the proposed algorithms are able to track the best
channel and rate quite accurately without the need for any explicit measurement
and feedback of the quality of the various channels.
|
1402.5684 | Discriminative Functional Connectivity Measures for Brain Decoding | cs.AI cs.CE cs.CV cs.LG | We propose a statistical learning model for classifying cognitive processes
based on distributed patterns of neural activation in the brain, acquired via
functional magnetic resonance imaging (fMRI). In the proposed learning method,
local meshes are formed around each voxel. The distance between voxels in the
mesh is determined by using a functional neighbourhood concept. In order to
define the functional neighbourhood, the similarities between the time series
recorded for voxels are measured and functional connectivity matrices are
constructed. Then, the local mesh for each voxel is formed by including the
functionally closest neighbouring voxels in the mesh. The relationship between
the voxels within a mesh is estimated by using a linear regression model. These
relationship vectors, called Functional Connectivity aware Local Relational
Features (FC-LRF) are then used to train a statistical learning machine. The
proposed method was tested on a recognition memory experiment, including data
pertaining to encoding and retrieval of words belonging to ten different
semantic categories. Two popular classifiers, namely k-nearest neighbour (k-nn)
and Support Vector Machine (SVM), are trained in order to predict the semantic
category of the item being retrieved, based on activation patterns during
encoding. The classification performance of the Functional Mesh Learning
model, which ranges from 62% to 71%, is superior to that of classical
multi-voxel pattern analysis (MVPA) methods, which ranges from 40% to 48%, for
the ten semantic categories.
|
1402.5691 | Data-Adaptive Reduced-Dimension Robust Beamforming Algorithms | cs.IT cs.SY math.IT | We present low complexity, quickly converging robust adaptive beamformers
that combine robust Capon beamformer (RCB) methods and data-adaptive Krylov
subspace dimensionality reduction techniques. We extend a recently proposed
reduced-dimension RCB framework, which ensures proper combination of RCBs with
any form of dimensionality reduction that can be expressed using a full-rank
dimension reducing transform, providing new results for data-adaptive
dimensionality reduction. We consider Krylov subspace methods computed with the
Powers-of-R (PoR) and Conjugate Gradient (CG) techniques, illustrating how a
fast CG-based algorithm can be formed by exploiting the fact that the CG
algorithm diagonalizes the reduced-dimension covariance matrix. Our simulations
show the benefits of the proposed approaches.
|
1402.5692 | Repeat Accumulate Based Designs for LDPC Codes on Fading Channels | cs.IT math.IT | Irregular repeat-accumulate Root-Check LDPC codes based on Progressive Edge
Growth (PEG) techniques for block-fading channels are proposed. The proposed
Root-Check LDPC codes are suitable both for channels with $F = 2, 3$
independent fadings per codeword and for fast-fading channels. An IRA(A)
Root-Check structure is devised for $F = 2, 3$ independent fadings. The
performance of the new codes is investigated in terms of the Frame Error Rate
(FER). Numerical results show that the IRAA LDPC codes constructed by the
proposed algorithm outperform the existing IRA Root-Check LDPC codes by about
1 dB under fast-fading channels.
|
1402.5693 | On Estimation Error Outage for Scalar Gauss-Markov Signals Sent Over
Fading Channels | cs.IT math.IT math.ST stat.TH | Measurements of a scalar linear Gauss-Markov process are sent over a fading
channel. The fading channel is modeled as independent and identically
distributed random variables with known realization at the receiver. The
optimal estimator at the receiver is the Kalman filter. In contrast to the
classical Kalman filter theory, given a random channel, the Kalman gain and the
error covariance become random. The probability distribution function of the
expected estimation error and its outage probability can then be chosen for
assessing estimation quality. To obtain the estimation error outage, we provide
means to characterize the stationary probability density function of the random
expected estimation error. Furthermore, for the particular case of i.i.d.
Rayleigh fading channels, upper and lower bounds for the outage probability are
derived, which provide insight and simpler means for design purposes. We also
show that the bounds are tight for the high
SNR regime, and that the outage probability decreases linearly with the inverse
of the average channel SNR.
|
1402.5697 | Exemplar-based Linear Discriminant Analysis for Robust Object Tracking | cs.CV | Tracking-by-detection has become an attractive tracking technique, which
treats tracking as a category detection problem. However, the task in tracking
is to search for a specific object, rather than an object category as in
detection. In this paper, we propose a novel tracking framework based on
exemplar detector rather than category detector. The proposed tracker is an
ensemble of exemplar-based linear discriminant analysis (ELDA) detectors. Each
detector is quite specific and discriminative, because it is trained by a
single object instance and massive negatives. To improve its adaptivity, we
update both object and background models. Experimental results on several
challenging video sequences demonstrate the effectiveness and robustness of our
tracking algorithm.
|
1402.5708 | The Cerebellum: New Computational Model that Reveals its Primary
Function to Calculate Multibody Dynamics Conform to Lagrange-Euler
Formulation | cs.NE cs.CE cs.RO q-bio.NC | The cerebellum is a part of the brain that occupies only 10% of the brain
volume but contains about 80% of the total number of brain neurons. A new
cerebellar function model is developed that sets cerebellar circuits in the
context of multibody dynamics model computations, an important step in
controlling balance and movement coordination, functions performed by the two
oldest parts of the cerebellum. The model gives a new functional interpretation
of the granule cell-Golgi cell circuit, including distinct functions for the
upper and lower Golgi cell dendritic trees, and resolves the issue of sharing
granule cells between Purkinje cells. It assigns new functions to basket cells,
and to stellate cells according to their position in the molecular layer. The
new model enables easy and direct integration of sensory information from the
vestibular system and cutaneous mechanoreceptors, for balance, movement and
interaction with environments. The model also explains the convergence of
Purkinje cells on the deep cerebellar nuclei.
|
1402.5715 | Variational Particle Approximations | stat.ML cs.LG | Approximate inference in high-dimensional, discrete probabilistic models is a
central problem in computational statistics and machine learning. This paper
describes discrete particle variational inference (DPVI), a new approach that
combines key strengths of Monte Carlo, variational and search-based techniques.
DPVI is based on a novel family of particle-based variational approximations
that can be fit using simple, fast, deterministic search techniques. Like Monte
Carlo, DPVI can handle multiple modes, and yields exact results in a
well-defined limit. Like unstructured mean-field, DPVI is based on optimizing a
lower bound on the partition function; even when this quantity is not of
intrinsic interest, the bound facilitates convergence assessment and debugging.
Like both Monte Carlo and combinatorial search, DPVI can take advantage of
factorization, sequential structure, and custom search operators. This paper
defines the DPVI particle-based approximation family and partition function
lower bounds, along with the sequential DPVI and local DPVI algorithm templates
for optimizing
them. DPVI is illustrated and evaluated via experiments on lattice Markov
Random Fields, nonparametric Bayesian mixtures and block-models, and parametric
as well as non-parametric hidden Markov models. Results include applications to
real-world spike-sorting and relational modeling problems, and show that DPVI
can offer appealing time/accuracy trade-offs as compared to multiple
alternatives.
|
1402.5726 | On Power and Load Coupling in Cellular Networks for Energy Optimization | cs.IT math.IT | We consider the problem of minimization of sum transmission energy in
cellular networks where coupling occurs between cells due to mutual
interference. The coupling relation is characterized by the
signal-to-interference-and-noise-ratio (SINR) coupling model. Cell load, which
measures the average level of resource usage in the cell, and transmission
power interact via the coupling model. The coupling is implicitly
characterized with load and power as the variables of interest using two
equivalent equations, namely, non-linear load coupling equation (NLCE) and
non-linear power coupling equation (NPCE), respectively. By analyzing the NLCE
and NPCE, we prove that operating at full load is optimal in minimizing sum
energy, and provide an iterative power adjustment algorithm to obtain the
corresponding optimal power solution with guaranteed convergence, where in each
iteration a standard bisection search is employed. To obtain the algorithmic
result, we use the properties of the so-called standard interference function;
the proof is non-standard because the NPCE cannot even be expressed as a
closed-form expression with power as the implicit variable of interest. We
present numerical results illustrating the theoretical findings for a real-life
and large-scale cellular network, showing the advantage of our solution
compared to the conventional solution of deploying uniform power for base
stations.
|
1402.5728 | Machine Learning Methods in the Computational Biology of Cancer | q-bio.QM cs.LG stat.ML | The objectives of this "perspective" paper are to review some recent advances
in sparse feature selection for regression and classification, as well as
compressed sensing, and to discuss how these might be used to develop tools to
advance personalized cancer therapy. As an illustration of the possibilities, a
new algorithm for sparse regression is presented, and is applied to predict the
time to tumor recurrence in ovarian cancer. A new algorithm for sparse feature
selection in classification problems is presented, and its validation in
endometrial cancer is briefly discussed. Some open problems are also presented.
|