| id | title | categories | abstract |
|---|---|---|---|
1311.1013 | Interference Alignment (IA) and Coordinated Multi-Point (CoMP) with
IEEE802.11ac feedback compression: testbed results | cs.IT math.IT | We have implemented interference alignment (IA) and joint transmission
coordinated multipoint (CoMP) on a wireless testbed using the feedback
compression scheme of the new 802.11ac standard. The performance as a function
of the frequency domain granularity is assessed. Realistic throughput gains are
obtained by probing each spatial modulation stream with ten different coding
and modulation schemes. The gain of IA and CoMP over TDMA MIMO is found to be
26% and 71%, respectively, under stationary conditions. In our dense indoor
office deployment, the frequency domain granularity of the feedback can be
reduced to every 8th subcarrier (2.5 MHz) without sacrificing performance.
|
1311.1040 | Combined Independent Component Analysis and Canonical Polyadic
Decomposition via Joint Diagonalization | stat.ML cs.LG | Recently, there has been a trend to combine independent component analysis
and canonical polyadic decomposition (ICA-CPD) for enhanced robustness in the
computation of CPD. ICA-CPD can be further converted into the CPD of a
5th-order partially symmetric tensor by calculating the eigenmatrices of the
4th-order cumulant slices of a trilinear mixture. In this study, we propose a
new 5th-order CPD algorithm constrained with partial symmetry and based on joint
diagonalization. Since the main steps of the proposed algorithm require no
iterative updates of the loading matrices, it is much faster than the existing
algorithm based on alternating least squares and enhanced line search, with
comparable performance. Simulation results are provided to demonstrate the
performance of the proposed algorithm.
|
1311.1053 | Guessing a password over a wireless channel (on the effect of noise
non-uniformity) | cs.IT math.IT | A string is sent over a noisy channel that erases some of its characters.
Knowing the statistical properties of the string's source and which characters
were erased, a listener that is equipped with an ability to test the veracity
of a string, one string at a time, wishes to fill in the missing pieces. Here
we characterize how the stochastic properties of both the string's source and
the channel noise influence the distribution of the number of attempts
required to identify the string, i.e., its guesswork. In particular, we
establish that the average noise on the channel is not a determining factor for
the average guesswork and illustrate simple settings where one recipient with,
on average, a better channel than another recipient, has higher average
guesswork. These results stand in contrast to those for the capacity of wiretap
channels and suggest the use of techniques such as friendly jamming with
pseudo-random sequences to exploit this guesswork behavior.
|
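The effect described above, where the average noise level does not determine average guesswork, can be illustrated with a toy computation. The sketch below assumes an i.i.d. source and an optimal guesser that tries fill-ins for the erased positions in decreasing probability order; the function name and parameters are illustrative, not from the paper.

```python
import itertools

def average_guesswork(probs, num_erased):
    """Expected number of guesses to recover `num_erased` erased characters,
    assuming an i.i.d. source with letter distribution `probs` and an optimal
    guesser that tries candidate fill-ins in decreasing probability order."""
    cand = []
    for tup in itertools.product(range(len(probs)), repeat=num_erased):
        p = 1.0
        for s in tup:
            p *= probs[s]  # probability of this candidate fill-in
        cand.append(p)
    cand.sort(reverse=True)  # optimal guessing order
    return sum((k + 1) * p for k, p in enumerate(cand))

# Two channels erasing the same number of characters can leave very different
# guesswork depending on the source statistics of the erased positions:
uniform = average_guesswork([0.5, 0.5], 4)  # 8.5 guesses on average
biased = average_guesswork([0.9, 0.1], 4)   # far fewer guesses on average
print(uniform, biased)
```

The biased source is much easier to guess even though the same number of characters was erased, which is the kind of non-uniformity effect the abstract exploits.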
1311.1082 | Motivation for hyperlink creation using inter-page relationships | cs.DL cs.IR cs.SI | Using raw hyperlink counts for webometrics research has been shown to be
unreliable and researchers have looked for alternatives. One alternative is
classifying hyperlinks in a website based on the motivation behind the
hyperlink creation. The method used for this type of classification involves
manually visiting a webpage and then classifying individual links on the
webpage. This is time-consuming, making it infeasible for large-scale studies.
This paper speeds up the classification of hyperlinks in UK academic websites
by using a machine learning technique, decision tree induction, to group web
pages found in UK academic websites into one of eight categories and then infer
the motivation for the creation of a hyperlink in a webpage based on the
linking pattern of the category the webpage belongs to.
|
1311.1090 | Polyhedrons and Perceptrons Are Functionally Equivalent | cs.NE | Mathematical definitions of polyhedrons and perceptron networks are
discussed. The formalization of polyhedrons is done in a rather traditional
way. For networks, previously proposed systems are developed. Perceptron
networks in disjunctive normal form (DNF) and conjunctive normal forms (CNF)
are introduced. The main theme is that single output perceptron neural networks
and characteristic functions of polyhedrons are one and the same class of
functions. A rigorous formulation and proof that three layers suffice is
obtained. The various constructions and results are among several steps
required for algorithms that replace incremental and statistical learning with
more efficient, direct and exact geometric methods for calculation of
perceptron architecture and weights.
|
1311.1132 | Motion and audio analysis in mobile devices for remote monitoring of
physical activities and user authentication | cs.HC cs.CV | In this article we propose the use of the accelerometer embedded by default in
smartphones as a cost-effective, reliable and efficient way to provide remote
physical activity monitoring for the elderly and people requiring healthcare
services. Mobile phones are regularly carried by users during their day-to-day
work routine, so physical movement information can be captured by the phone's
accelerometer, processed and sent to a remote server for monitoring. The
acceleration pattern can deliver information related to the pattern of physical
activities the user is engaged in. We further show how this technique can be
extended to provide implicit real-time security by analysing unexpected
movements captured by the phone accelerometer, and automatically locking the
phone in such situations to prevent unauthorised access. This technique is also
shown to provide implicit continuous user authentication by capturing regular
user movements, such as walking, and requesting re-authentication whenever
a non-regular movement is detected.
|
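The core mechanism described above, flagging a non-regular movement from accelerometer data, can be sketched with a crude threshold rule. This is a minimal illustration under invented parameters (window size, threshold, sample values), not the paper's actual analysis pipeline.

```python
import math
from collections import deque

def magnitude(sample):
    """Overall acceleration magnitude from a 3-axis accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_irregular(samples, window=5, threshold=3.0):
    """Flag sample indices whose magnitude deviates from the recent average
    by more than `threshold` (a hypothetical tuning parameter); a real system
    would trigger phone locking or re-authentication at these points."""
    recent = deque(maxlen=window)
    flagged = []
    for i, s in enumerate(samples):
        m = magnitude(s)
        if len(recent) == window and abs(m - sum(recent) / window) > threshold:
            flagged.append(i)
        recent.append(m)
    return flagged

# Regular walking-like samples followed by a sudden jolt at index 8.
walk = [(0.1, 0.2, 9.8)] * 8 + [(5.0, 7.0, 20.0)] + [(0.1, 0.2, 9.8)] * 3
print(detect_irregular(walk))  # → [8]
```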
1311.1162 | Semantic Stability in Social Tagging Streams | cs.CY cs.IR cs.SI physics.soc-ph | One potential disadvantage of social tagging systems is that due to the lack
of a centralized vocabulary, a crowd of users may never manage to reach a
consensus on the description of resources (e.g., books, users or songs) on the
Web. Yet, previous research has provided interesting evidence that the tag
distributions of resources may become semantically stable over time as more and
more users tag them. At the same time, previous work has raised an array of new
questions such as: (i) How can we assess the semantic stability of social
tagging systems in a robust and methodical way? (ii) Does semantic
stabilization of tags vary across different social tagging systems? And,
ultimately, (iii) what are the factors that can explain semantic stabilization
in such systems? In this work we tackle these questions by (i) presenting a
novel and robust method which overcomes a number of limitations in existing
methods, (ii) empirically investigating semantic stabilization processes in a
wide range of social tagging systems with distinct domains and properties and
(iii) detecting potential causes for semantic stabilization, specifically
imitation behavior, shared background knowledge and intrinsic properties of
natural language. Our results show that tagging streams which are generated by
a combination of imitation dynamics and shared background knowledge exhibit
faster and higher semantic stability than tagging streams which are generated
via imitation dynamics or natural language streams alone.
|
1311.1163 | Sparse Time-Frequency decomposition by dictionary learning | cs.IT math.IT | In this paper, we propose a time-frequency analysis method to obtain
instantaneous frequencies and the corresponding decomposition by solving an
optimization problem. In this optimization problem, the basis to decompose the
signal is not known. Instead, it is adapted to the signal and is determined as
part of the optimization problem. In this sense, this optimization problem can
be seen as a dictionary learning problem. This dictionary learning problem is
solved by using the Augmented Lagrangian Multiplier method (ALM) iteratively.
We further accelerate the convergence of the ALM method in each iteration by
using the fast wavelet transform. We apply our method to decompose several
signals, including signals with poor scale separation, signals with outliers
or polluted by noise, and a real signal. The results show that this method can
give accurate recovery of both the instantaneous frequencies and the intrinsic
mode functions.
|
1311.1169 | Using Robust PCA to estimate regional characteristics of language use
from geo-tagged Twitter messages | cs.CL | Principal component analysis (PCA) and related techniques have been
successfully employed in natural language processing. Text mining applications
in the age of the online social media (OSM) face new challenges due to
properties specific to these use cases (e.g. spelling issues specific to texts
posted by users, the presence of spammers and bots, service announcements,
etc.). In this paper, we employ a Robust PCA technique to separate typical
outliers and highly localized topics from the low-dimensional structure present
in language use in online social networks. Our focus is on identifying
geospatial features among the messages posted by the users of the Twitter
microblogging service. Using a dataset which consists of over 200 million
geolocated tweets collected over the course of a year, we investigate whether
the information present in word usage frequencies can be used to identify
regional features of language use and topics of interest. Using the PCA pursuit
method, we are able to identify important low-dimensional features, which
constitute smoothly varying functions of the geographic location.
|
1311.1181 | Toward a New Approach for Modeling Dependability of Data Warehouse
System | cs.DB cs.SE | The sustainability of any Data Warehouse System (DWS) is closely correlated
with user satisfaction. However, analysts, designers and developers have
focused more on achieving all of its functionality, without considering other
kinds of requirements such as dependability aspects. Moreover, the latter are
often treated as properties of the system to be checked and corrected once the
project is completed. This practice of "fix it later" can cause the
obsolescence of the entire Data Warehouse System. It therefore requires the
adoption of a methodology that ensures the integration of dependability
aspects from the early stages of a DWS project. In this paper, we first
define the concepts related to the dependability of DWS. Then we present our
approach, inspired by the MDA (Model Driven Architecture) approach, to model
dependability aspects, namely availability, reliability, maintainability and
security, taking into account their interaction.
|
1311.1187 | Constrained Codes for Joint Energy and Information Transfer | cs.IT math.IT | In various wireless systems, such as sensor RFID networks and body area
networks with implantable devices, the transmitted signals are simultaneously
used both for information transmission and for energy transfer. In order to
satisfy the conflicting requirements on information and energy transfer, this
paper proposes the use of constrained run-length limited (RLL) codes in lieu of
conventional unconstrained (i.e., random-like) capacity-achieving codes. The
receiver's energy utilization requirements are modeled stochastically, and
constraints are imposed on the probabilities of battery underflow and overflow
at the receiver. It is demonstrated that the codewords' structure afforded by
the use of constrained codes enables the transmission strategy to be better
adjusted to the receiver's energy utilization pattern, as compared to classical
unstructured codes. As a result, constrained codes allow a wider range of
trade-offs between the rate of information transmission and the performance of
energy transfer to be achieved.
|
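The intuition above, that the structure of run-length limited (RLL) codewords matches the receiver's energy needs better than random-like codewords, can be illustrated with a toy battery simulation. The (d, k) generator, the battery parameters and the energy model below are all invented for illustration; only the qualitative contrast reflects the abstract's claim.

```python
import random

def rll_sequence(n, d=0, k=3, rng=None):
    """Generate a binary (d, k) run-length limited sequence: between two 1s
    there are at least d and at most k zeros (a toy generator, not a code)."""
    rng = rng or random.Random(0)
    seq, zeros = [], 0
    for _ in range(n):
        if zeros < d:
            bit = 0          # must keep the run going
        elif zeros >= k:
            bit = 1          # must end the run of zeros
        else:
            bit = rng.choice([0, 1])
        seq.append(bit)
        zeros = 0 if bit else zeros + 1
    return seq

def battery_underflows(seq, capacity=4, start=2, harvest=1, use=0.5):
    """Count slots where the receiver battery runs empty, assuming each 1
    harvests `harvest` units and every slot consumes `use` units
    (all parameters are illustrative assumptions)."""
    level, underflows = start, 0
    for bit in seq:
        level = min(capacity, level + harvest * bit) - use
        if level < 0:
            level, underflows = 0, underflows + 1
    return underflows

rng = random.Random(1)
n = 10_000
unconstrained = [rng.choice([0, 1]) for _ in range(n)]
constrained = rll_sequence(n, d=0, k=1)  # a 1 at least every other slot
print(battery_underflows(unconstrained), battery_underflows(constrained))
```

The k-constraint bounds how long the battery can drain between energy deliveries, so the constrained stream avoids underflows that the unconstrained stream suffers.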
1311.1189 | Statistical Inference in Hidden Markov Models using $k$-segment
Constraints | stat.ME cs.LG stat.ML | Hidden Markov models (HMMs) are one of the most widely used statistical
methods for analyzing sequence data. However, the reporting of output from HMMs
has largely been restricted to the presentation of the most-probable (MAP)
hidden state sequence, found via the Viterbi algorithm, or the sequence of most
probable marginals using the forward-backward (F-B) algorithm. In this article,
we expand the amount of information we can obtain from the posterior
distribution of an HMM by introducing linear-time dynamic programming
algorithms, which we collectively call $k$-segment algorithms, that allow us to
i) find MAP sequences, ii) compute posterior probabilities and iii) simulate
sample paths conditional on a user specified number of segments, i.e.
contiguous runs in a hidden state, possibly of a particular type. We illustrate
the utility of these methods using simulated and real examples and highlight
the application of prospective and retrospective use of these methods for
fitting HMMs or exploring existing model fits.
|
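For context, the Viterbi baseline the abstract refers to can be sketched in a few lines. This is the standard textbook algorithm, not the paper's $k$-segment extension, and the two-state toy HMM parameters are invented for illustration.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most-probable (MAP) hidden state sequence for an HMM, computed by
    dynamic programming in log-space."""
    # V[t][s] = log-probability of the best path ending in state s at time t.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, lp = max(
                ((r, V[t - 1][r] + math.log(trans_p[r][s])) for r in states),
                key=lambda pair: pair[1])
            V[t][s] = lp + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace back from the best final state.
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state HMM (all parameters illustrative).
states = ("low", "high")
start = {"low": 0.6, "high": 0.4}
trans = {"low": {"low": 0.8, "high": 0.2}, "high": {"low": 0.2, "high": 0.8}}
emit = {"low": {"a": 0.9, "b": 0.1}, "high": {"a": 0.2, "b": 0.8}}
print(viterbi("aabbb", states, start, trans, emit))
# → ['low', 'low', 'high', 'high', 'high']
```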
1311.1194 | Identifying Purpose Behind Electoral Tweets | cs.CL | Tweets pertaining to a single event, such as a national election, can number
in the hundreds of millions. Automatically analyzing them is beneficial in many
downstream natural language applications such as question answering and
summarization. In this paper, we propose a new task: identifying the purpose
behind electoral tweets--why do people post election-oriented tweets? We show
that identifying purpose is correlated with the related phenomenon of sentiment
and emotion detection, yet significantly different from it. Detecting purpose has a
number of applications including detecting the mood of the electorate,
estimating the popularity of policies, identifying key issues of contention,
and predicting the course of events. We create a large dataset of electoral
tweets and annotate a few thousand tweets for purpose. We develop a system that
automatically classifies electoral tweets as per their purpose, obtaining an
accuracy of 43.56% on an 11-class task and an accuracy of 73.91% on a 3-class
task (both accuracies well above the most-frequent-class baseline). Finally, we
show that resources developed for emotion detection are also helpful for
detecting purpose.
|
1311.1213 | A Big Data Approach to Computational Creativity | cs.CY cs.AI cs.HC physics.soc-ph | Computational creativity is an emerging branch of artificial intelligence
that places computers in the center of the creative process. Broadly,
creativity involves a generative step to produce many ideas and a selective
step to determine the ones that are the best. Many previous attempts at
computational creativity, however, have not been able to achieve a valid
selective step. This work shows how bringing data sources from the creative
domain and from hedonic psychophysics together with big data analytics
techniques can overcome this shortcoming to yield a system that can produce
novel and high-quality creative artifacts. Our data-driven approach is
demonstrated through a computational creativity system for culinary recipes and
menus we developed and deployed, which can operate either autonomously or
semi-autonomously with human interaction. We also comment on the volume,
velocity, variety, and veracity of data in computational creativity.
|
1311.1223 | Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic | cs.CV | Image fusion combines two or more images of a scene into a single composite
image that is more informative and more suitable for visual perception or for
processing tasks like medical imaging, remote sensing, concealed weapon
detection, weather forecasting, biometrics, etc., reducing uncertainty and
minimizing redundancy while maximizing relevant information. Image fusion
combines registered images to produce a high-quality fused image with spatial
and spectral information. A fused image with more information will improve the
performance of image analysis algorithms used in different applications. In
this paper, we propose a fuzzy logic method to fuse images from different
sensors in order to enhance quality, and compare the proposed method with two
other methods, image fusion using the wavelet transform and weighted-average
discrete wavelet transform based image fusion using a genetic algorithm (here
onwards abbreviated as GA), using the quality evaluation parameters image
quality index (IQI), mutual information measure (MIM), root mean square error
(RMSE), peak signal-to-noise ratio (PSNR), fusion factor (FF), fusion symmetry
(FS), fusion index (FI) and entropy. The results show that the proposed
fuzzy-based image fusion approach improves the quality of the fused image
compared to the earlier reported methods, wavelet transform based image fusion
and weighted-average discrete wavelet transform based image fusion using GA.
|
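Two of the quality evaluation parameters listed above, RMSE and PSNR, are simple to state. The sketch below computes them for images given as flat intensity lists; the toy pixel values are invented, not from the paper's experiments.

```python
import math

def rmse(ref, fused):
    """Root mean square error between a reference and a fused image,
    both given as flat lists of pixel intensities."""
    n = len(ref)
    return math.sqrt(sum((r - f) ** 2 for r, f in zip(ref, fused)) / n)

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB (lower RMSE gives higher PSNR)."""
    e = rmse(ref, fused)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

ref = [10, 20, 30, 40]
good = [10, 21, 30, 39]  # close to the reference
bad = [60, 70, 80, 90]   # far from the reference
print(rmse(ref, good), psnr(ref, good))
print(rmse(ref, bad), psnr(ref, bad))
```

A higher-quality fusion yields a lower RMSE and a correspondingly higher PSNR, which is how these two metrics rank competing fusion methods.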
1311.1247 | LA-CTR: A Limited Attention Collaborative Topic Regression for Social
Media | cs.IR cs.SI physics.soc-ph | Probabilistic models can learn users' preferences from the history of their
item adoptions on a social media site, and in turn, recommend new items to
users based on learned preferences. However, current models ignore
psychological factors that play an important role in shaping online social
behavior. One such factor is attention, the mechanism that integrates
perceptual and cognitive features to select the items the user will consciously
process and may eventually adopt. Recent research has shown that people have
finite attention, which constrains their online interactions, and that they
divide their limited attention non-uniformly over other people. We propose a
collaborative topic regression model that incorporates limited, non-uniformly
divided attention. We show that the proposed model is able to learn more
accurate user preferences than state-of-the-art models, which do not take human
cognitive factors into account. Specifically, we analyze voting on news items on
a social news aggregator and show that our model is better able to predict
held-out votes than alternative models. Our study demonstrates that
psycho-socially motivated models are better able to describe and predict
observed behavior than models which only consider latent social structure and
content.
|
1311.1259 | Multi-target Radar Detection within a Sparsity Framework | cs.IT math.IT | Traditional radar detection schemes are typically studied for single target
scenarios and they can be non-optimal when there are multiple targets in the
scene. In this paper, we develop a framework to discuss multi-target detection
schemes with sparse reconstruction techniques, based on the
Neyman-Pearson criterion. We describe an initial result in this framework
concerning false alarm probability with LASSO as the sparse reconstruction
technique. Then, several simulations validating this result will be discussed.
Finally, we describe several research avenues to further pursue this framework.
|
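The LASSO reconstruction step the abstract's result concerns can be approximated by iterative soft-thresholding (ISTA). The sketch below is a generic stand-in under an invented 4-cell toy measurement matrix, not the paper's radar model; the step size and regularization weight are illustrative.

```python
def ista_lasso(A, y, lam, steps=500, lr=0.1):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA); `lr` is assumed small enough for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    soft = lambda v, t: max(abs(v) - t, 0.0) * (1 if v > 0 else -1)
    for _ in range(steps):
        # Residual r = A x - y, then gradient g = A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - lr * g[j], lr * lam) for j in range(n)]
    return x

# Scene with 4 range cells and one true target in cell 2 (toy matrix).
A = [[1.0, 0.2, 0.1, 0.0],
     [0.2, 1.0, 0.2, 0.1],
     [0.1, 0.2, 1.0, 0.2],
     [0.0, 0.1, 0.2, 1.0]]
y = [0.1, 0.2, 1.0, 0.2]  # response dominated by the target in cell 2
x = ista_lasso(A, y, lam=0.1)
print([round(v, 2) for v in x])  # largest coefficient at index 2
```

The $\ell_1$ penalty drives the off-target coefficients to exactly zero, which is what makes false-alarm analysis of LASSO-based detection meaningful.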
1311.1264 | Dynamic Network Formation with Incomplete Information | cs.SI cs.GT physics.soc-ph | How do networks form and what is their ultimate topology? Most of the
literature that addresses these questions assumes complete information: agents
know in advance the value of linking to other agents, even with agents they
have never met and with whom they have had no previous interaction (direct or
indirect). This paper addresses the same questions under what seems to us to be
the much more natural assumption of incomplete information: agents do not know
in advance -- but must learn -- the value of linking to agents they have never
met. We show that the assumption of incomplete information has profound
implications for the process of network formation and the topology of networks
that ultimately form. Under complete information, the networks that form and
are stable typically have a star, wheel or core-periphery form, with high-value
agents in the core. Under incomplete information, the presence of positive
externalities (the value of indirect links) implies that a much wider
collection of network topologies can emerge and be stable. Moreover, even when
the topologies that emerge are the same, the locations of agents can be very
different. For instance, when information is incomplete, it is possible for a
hub-and-spokes network with a low-value agent in the center to form and endure
permanently: an agent can achieve a central position purely as the result of
chance rather than as the result of merit. Perhaps even more strikingly: when
information is incomplete, a connected network could form and persist even if,
when information were complete, no links would ever form, so that the final
form would be a totally disconnected network. All of this can occur even in
settings where agents eventually learn everything so that information, although
initially incomplete, eventually becomes complete.
|
1311.1266 | Topological-collaborative approach for disambiguating authors' names in
collaborative networks | cs.SI physics.soc-ph | Concepts and methods of complex networks have been employed to uncover
patterns in a myriad of complex systems. Unfortunately, the relevance and
significance of these patterns strongly depend on the reliability of the data
sets. In the study of collaboration networks, for instance, unavoidable noise
pervading authors' collaboration datasets arises when authors share the same
name. To address this problem, we derive a hybrid approach based on authors'
collaboration patterns and on topological features of collaborative networks.
Our results show that the combination of strategies, in most cases, performs
better than the traditional approach which disregards topological features. We
also show that the main factor for improving the discriminability of homonymous
authors is the average distance between authors. Finally, we show that it is
possible to predict the weighting associated to each strategy compounding the
hybrid system by examining the discrimination obtained from the traditional
analysis of collaboration patterns. Since the methodology devised here is
generic, our approach is potentially useful for classifying many other networked
systems governed by complex interactions.
|
1311.1279 | Face Recognition via Globality-Locality Preserving Projections | cs.CV | We present an improved Locality Preserving Projections (LPP) method, named
Globality-Locality Preserving Projections (GLPP), to preserve both the global
and local geometric structures of data. In our approach, an additional
constraint on the geometry of classes is imposed on the objective function of
conventional LPP in order to respect more global manifold structures. Moreover,
we formulate a two-dimensional extension of GLPP (2D-GLPP) as an example to
show how to extend GLPP with other statistical techniques. We apply our
methods to face recognition on four popular face databases, namely the ORL,
Yale, FERET and LFW-A databases, and extensive experimental results demonstrate
that the considered global manifold information can significantly improve the
performance of LPP and that the proposed face recognition methods outperform
the state of the art.
|
1311.1288 | Performance of Uplink Multiuser Massive MIMO Systems | cs.IT math.IT | We study the performance of uplink transmission in a large-scale (massive)
MIMO system, where all the transmitters have single antennas and the receiver
(base station) has a large number of antennas. Specifically, we analyze
achievable degrees of freedom of the system without assuming channel state
information at the receiver. Also, we quantify the amount of power saving that
is possible with an increasing number of receive antennas.
|
1311.1291 | Multiuser SM-MIMO versus Massive MIMO: Uplink Performance Comparison | cs.IT math.IT | In this paper, we propose algorithms for signal detection in large-scale
multiuser {\em spatial modulation multiple-input multiple-output (SM-MIMO)}
systems. In large-scale SM-MIMO, each user is equipped with multiple transmit
antennas (e.g., 2 or 4 antennas) but only one transmit RF chain, and the base
station (BS) is equipped with tens to hundreds of (e.g., 128) receive antennas.
In SM-MIMO, in a given channel use, each user activates any one of its multiple
transmit antennas and the index of the activated antenna conveys information
bits in addition to the information bits conveyed through conventional
modulation symbols (e.g., QAM). We propose two different algorithms for
detection of large-scale SM-MIMO signals at the BS; one is based on {\em
message passing} and the other is based on {\em local search}. The proposed
algorithms are shown to achieve very good performance and scale well. Also, for
the same spectral efficiency, multiuser SM-MIMO outperforms conventional
multiuser MIMO (recently referred to as massive MIMO) by several dB;
e.g., with 16 users, 128 antennas at the BS and 4 bpcu per user, SM-MIMO with 4
transmit antennas per user and 4-QAM outperforms massive MIMO with 1 transmit
antenna per user and 16-QAM by about 4 to 5 dB at $10^{-3}$ uncoded BER. The
SNR advantage of SM-MIMO over massive MIMO can be attributed to the following
reasons: (i) because of the spatial index bits, SM-MIMO can use a lower-order
QAM alphabet compared to that in massive MIMO to achieve the same spectral
efficiency, and (ii) for the same spectral efficiency and QAM size, massive
MIMO will need more spatial streams per user which leads to increased spatial
interference.
|
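The spectral-efficiency accounting behind reason (i) above is simple arithmetic: spatial modulation conveys antenna-index bits on top of the QAM symbol bits, so a lower-order alphabet suffices for the same rate. A quick check:

```python
import math

def sm_mimo_bpcu(num_tx_antennas, qam_size):
    """Bits per channel use for spatial modulation: antenna-index bits
    plus QAM symbol bits."""
    return math.log2(num_tx_antennas) + math.log2(qam_size)

# The two configurations compared in the abstract have the same spectral
# efficiency, but SM-MIMO reaches it with a lower-order QAM alphabet:
print(sm_mimo_bpcu(4, 4))   # SM-MIMO: 4 antennas, 4-QAM   → 4.0 bpcu
print(sm_mimo_bpcu(1, 16))  # massive MIMO: 1 antenna, 16-QAM → 4.0 bpcu
```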
1311.1294 | Delay Learning Architectures for Memory and Classification | cs.NE q-bio.NC | We present a neuromorphic spiking neural network, the DELTRON, that can
remember and store patterns by changing the delays of every connection as
opposed to modifying the weights. The advantage of this architecture over
traditional weight based ones is simpler hardware implementation without
multipliers or digital-analog converters (DACs) as well as being suited to
time-based computing. The name derives from the similarity of its learning
rule to that of an earlier architecture called the Tempotron. The DELTRON can remember
more patterns than other delay-based networks by modifying a few delays to
remember the most 'salient' or synchronous part of every spike pattern. We
present simulations of memory capacity and classification ability of the
DELTRON for different random spatio-temporal spike patterns. The memory
capacity for noisy spike patterns and missing spikes is also shown. Finally,
we present SPICE simulation results of the core circuits involved in a
reconfigurable mixed signal implementation of this architecture.
|
1311.1309 | Convex lifted conditions for robust stability analysis and stabilization
of linear discrete-time switched systems | math.OC cs.SY | Stability analysis of discrete-time switched systems under minimum dwell-time
is studied using a new type of LMI conditions. These conditions are convex in
the matrices of the system and shown to be equivalent to the nonconvex
conditions proposed by Geromel and Colaneri. The convexification of the
conditions is performed by a lifting process which introduces a moderate number
of additional decision variables. The convexity of the conditions can be
exploited to extend the results to uncertain systems, control design and
$\ell_2$-gain computation without introducing additional conservatism. Several
examples are presented to show the effectiveness of the approach.
|
1311.1312 | Two are Better Than One: Adaptive Sparse System Identification using
Affine Combination of Two Sparse Adaptive Filters | cs.IT math.IT | Sparse system identification problems arise in many applications, such
as echo interference cancellation, sparse channel estimation, and adaptive
beamforming. A popular adaptive sparse system identification (ASSI)
method adopts only one sparse least mean square (LMS) filter. However,
a single sparse LMS filter cannot simultaneously achieve fast
convergence and small steady-state mean square deviation (MSD). Unlike the
conventional method, we propose an improved ASSI method using an affine
combination of two sparse LMS filters to simultaneously achieve fast
convergence and low steady-state MSD. First, the problem formulation and the
standard affine combination of LMS filters are introduced. Then an
approximately optimal affine combiner is adopted for the proposed filter
according to a stochastic gradient search method. Finally, to verify the
proposed filter for ASSI, computer simulations are provided to confirm the
effectiveness of the proposed filter, which achieves better estimation
performance than the conventional one and the standard affine combination of
LMS filters.
|
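A minimal sketch of the idea: a fast and a slow zero-attracting LMS filter run in parallel, and their outputs are mixed by a parameter adapted via stochastic gradient on the overall error. All step sizes, the zero-attractor weight and the 8-tap toy system are invented for illustration; the clamp on the mixing parameter is a stability convenience of this sketch, not part of the paper's combiner.

```python
import random

def zero_attracting_lms(mu, rho, n):
    """One sparse (zero-attracting) LMS filter, returned as an update closure."""
    w = [0.0] * n
    def update(x, d):
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y
        for i in range(n):
            sign = (w[i] > 0) - (w[i] < 0)
            w[i] += mu * e * x[i] - rho * sign  # LMS step + zero attractor
        return y, e
    return update

def affine_combination(x_stream, d_stream, n, mu_fast=0.05, mu_slow=0.005,
                       rho=1e-4, mu_lambda=0.5):
    """Affine combination of a fast and a slow sparse LMS filter; the mixing
    parameter lam is adapted by stochastic gradient on the combined error."""
    fast = zero_attracting_lms(mu_fast, rho, n)
    slow = zero_attracting_lms(mu_slow, rho, n)
    lam, errs = 0.5, []
    for x, d in zip(x_stream, d_stream):
        y1, _ = fast(x, d)
        y2, _ = slow(x, d)
        y = lam * y1 + (1.0 - lam) * y2
        e = d - y
        lam += mu_lambda * e * (y1 - y2)  # gradient step on the mixer
        lam = min(1.0, max(0.0, lam))     # clamp (toy stabilization)
        errs.append(e * e)
    return errs

# Identify a sparse 8-tap system from noisy input/output pairs.
rng = random.Random(0)
h = [0.0, 0.9, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0]  # true sparse system
xs = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(3000)]
ds = [sum(hi * xi for hi, xi in zip(h, x)) + rng.gauss(0, 0.01) for x in xs]
errs = affine_combination(xs, ds, 8)
print(sum(errs[:100]) / 100, sum(errs[-100:]) / 100)  # error decreases
```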
1311.1314 | Variable is Better Than Invariable: Stable Sparse VSS-NLMS Algorithms
with Application to Estimating MIMO Channels | cs.IT math.IT | To estimate multiple-input multiple-output (MIMO) channels, invariable
step-size normalized least mean square (ISSNLMS) algorithm was applied to
adaptive channel estimation (ACE). Since the MIMO channel is often described by
sparse channel model due to broadband signal transmission, such sparsity can be
exploited by adaptive sparse channel estimation (ASCE) methods using sparse
ISS-NLMS algorithms. It is well known that step-size is a critical parameter
which controls three aspects: algorithm stability, estimation performance and
computational cost. The previous approaches can exploit channel sparsity but
their step-sizes are keeping invariant which unable balances well the three
aspects and easily cause either estimation performance loss or instability. In
this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS)
algorithms to improve the accuracy of MIMO channel estimators. First, ASCE for
estimating MIMO channels is formulated in MIMO systems. Second, different
sparse penalties are introduced to VSS-NLMS algorithm for ASCE. In addition,
difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones are
explained. At last, to verify the effectiveness of the proposed algorithms for
ASCE, several selected simulation results are shown to prove that the proposed
sparse VSS-NLMS algorithms can achieve better estimation performance than the
conventional methods via mean square error (MSE) and bit error rate (BER)
metrics.
|
1311.1315 | Variable Earns Profit: Improved Adaptive Channel Estimation using Sparse
VSS-NLMS Algorithms | cs.IT math.IT | Accurate channel estimation is essential for broadband wireless
communications. As wireless channels often exhibit sparse structure, the
adaptive sparse channel estimation algorithms based on normalized least mean
square (NLMS) have been proposed, e.g., the zero-attracting NLMS (ZA-NLMS)
algorithm and reweighted zero-attracting NLMS (RZA-NLMS). In these NLMS-based
algorithms, the step size used to iteratively update the channel estimate is a
critical parameter to control the estimation accuracy and the convergence speed
(and hence the computational cost). However, an invariable step size (ISS) is
usually used in conventional algorithms, which leads to performance loss and/or
slow convergence as well as high computational cost. To solve these
problems, based on the observation that large step size is preferred for fast
convergence while small step size is preferred for accurate estimation, we
propose to replace the ISS by variable step size (VSS) in conventional
NLMS-based algorithms to improve the adaptive sparse channel estimation in
terms of bit error rate (BER) and mean square error (MSE) metrics. The proposed
VSS-ZA-NLMS and VSS-RZA-NLMS algorithms adopt VSS, which can be adaptive to the
estimation error in each iteration, i.e., large step size is used in the case
of large estimation error to accelerate the convergence speed, while small step
size is used when the estimation error is small to improve the steady-state
estimation accuracy. Simulation results are provided to validate the
effectiveness of the proposed scheme.
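The error-driven step-size idea can be sketched as follows. This is an illustrative toy, not the paper's exact VSS design: the channel, the step-size rule mu = mu_max * e^2/(e^2 + 1), and all parameter values below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse channel: 16 taps, only 2 nonzero (assumed setup)
N = 16
h = np.zeros(N)
h[[3, 11]] = [1.0, -0.5]

mu_max = 1.0      # upper bound on the step size (assumed)
rho = 1e-4        # zero-attraction strength of the ZA penalty (assumed)

h_hat = np.zeros(N)
mse = []
for _ in range(3000):
    x = rng.standard_normal(N)                  # input regressor
    d = h @ x + 0.01 * rng.standard_normal()    # noisy desired signal
    e = d - h_hat @ x                           # a priori estimation error
    mu = mu_max * e**2 / (e**2 + 1.0)           # large error -> large step
    h_hat += mu * e * x / (x @ x) - rho * np.sign(h_hat)  # NLMS + zero attractor
    mse.append(np.mean((h - h_hat) ** 2))

assert mse[-1] < mse[0] / 10   # the estimate converges toward the sparse channel
```

The step size shrinks automatically as the error decays, trading fast initial convergence for low steady-state misadjustment, which is the behavior the abstract describes.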
|
1311.1329 | Physical Layer Network Coding: A Cautionary Story with Interference and
Spatial Reservation | cs.IT math.IT | Physical layer network coding (PLNC) has the potential to improve throughput
of multi-hop networks. However, most of the works are focused on the simple,
three-node model with two-way relaying, not taking into account the fact that
there can be other neighboring nodes that can cause/receive interference. The
way to deal with this problem in distributed wireless networks is to use
MAC-layer mechanisms that make a spatial reservation of the shared wireless
medium, similar to the well-known RTS/CTS mechanism in IEEE 802.11 wireless networks. In
this paper, we investigate two-way relaying in the presence of interfering nodes
and the use of spatial reservation mechanisms. Specifically, we introduce a
reserved area in order to protect the nodes involved in two-way relaying from
the interference caused by neighboring nodes. We analytically derive the
end-to-end rate achieved by PLNC considering the impact of interference and
reserved area. A relevant performance measure is data rate per unit area, in
order to reflect the fact that any spatial reservation blocks another data
exchange in the reserved area. The numerical results carry a cautionary message
that the gains brought by PLNC over one-way relaying may vanish when
two-way relaying is considered in the broader context of a larger wireless
network.
|
1311.1339 | Zero-Error Capacity of a Class of Timing Channels | cs.IT cs.DM math.IT | We analyze the problem of zero-error communication through timing channels
that can be interpreted as discrete-time queues with bounded waiting times. The
channel model includes the following assumptions: 1) Time is slotted, 2) at
most $ N $ "particles" are sent in each time slot, 3) every particle is delayed
in the channel for a number of slots chosen randomly from the set $ \{0, 1,
\ldots, K\} $, and 4) the particles are identical. It is shown that the
zero-error capacity of this channel is $ \log r $, where $ r $ is the unique
positive real root of the polynomial $ x^{K+1} - x^{K} - N $.
Capacity-achieving codes are explicitly constructed, and a linear-time decoding
algorithm for these codes is devised. In the particular case $ N = 1 $, $ K = 1 $,
the capacity is equal to $ \log \phi $, where $ \phi = (1 + \sqrt{5}) / 2 $ is
the golden ratio, and the constructed codes give another interpretation of the
Fibonacci sequence.
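The capacity expression is easy to check numerically. The sketch below (function name and bisection bounds are ours) finds the unique positive root of $x^{K+1} - x^K - N$ and recovers the golden-ratio case for $N = K = 1$:

```python
import math

def zero_error_capacity(N, K):
    """Zero-error capacity log2(r) in bits per slot, where r is the unique
    positive root of x^(K+1) - x^K - N. Since f(1) = -N < 0 and
    f(N + 1) = N((N + 1)^K - 1) >= 0, the root lies in (1, N + 1] and a
    simple bisection suffices."""
    f = lambda x: x ** (K + 1) - x ** K - N
    lo, hi = 1.0, N + 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.log2((lo + hi) / 2)

# N = 1, K = 1: the root is the golden ratio, recovering the Fibonacci case
phi = (1 + math.sqrt(5)) / 2
assert abs(zero_error_capacity(1, 1) - math.log2(phi)) < 1e-9
```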
|
1311.1354 | How to Center Binary Deep Boltzmann Machines | stat.ML cs.LG | This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and
binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting
offset values from visible and hidden variables. We show analytically that (i)
centering results in a different but equivalent parameterization for artificial
neural networks in general, (ii) the expected performance of centered binary
RBMs/DBMs is invariant under simultaneous flip of data and offsets, for any
offset value in the range of zero to one, (iii) centering can be reformulated
as a different update rule for normal binary RBMs/DBMs, and (iv) using the
enhanced gradient is equivalent to setting the offset values to the average
over model and data mean. Furthermore, numerical simulations suggest that (i)
optimal generative performance is achieved by subtracting mean values from
visible as well as hidden variables, (ii) centered RBMs/DBMs reach
significantly higher log-likelihood values than normal binary RBMs/DBMs, (iii)
centering variants whose offsets depend on the model mean, like the enhanced
gradient, suffer from severe divergence problems, (iv) learning is stabilized
if an exponentially moving average over the batch means is used for the offset
values instead of the current batch mean, which also prevents the enhanced
gradient from diverging, (v) centered RBMs/DBMs reach higher LL values than
normal RBMs/DBMs while having a smaller norm of the weight matrix, (vi)
centering leads to an update direction that is closer to the natural gradient,
and the natural gradient itself is extremely efficient for training RBMs, (vii)
centering dispenses with the need for greedy layer-wise pre-training of DBMs,
(viii) pre-training often even worsens the results, independently of whether
centering is used or not, and (ix) centering is also
beneficial for autoencoders.
|
1311.1358 | Scalar Compandor Design Based on Optimal Compressor Function
Approximating by Spline Functions | cs.IT math.IT | In this paper, the optimal compressor function is approximated using
first-degree and quadratic spline functions. The coefficients that define the
approximating spline functions are determined by solving systems of equations
formed from threshold conditions. For a Gaussian source at the quantizer input,
a companding quantizer is designed using the obtained approximate spline
functions. A comparison with the SQNR of the optimal compandor shows that the
proposed companding quantizer based on approximate spline functions achieves an
SQNR arbitrarily close to that of the optimal compandor.
|
1311.1363 | On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A
Quantitative Analysis | cs.IT math.IT | Despite the linearity of its encoding, compressed sensing may be used to
provide a limited form of data protection when random encoding matrices are
used to produce sets of low-dimensional measurements (ciphertexts). In this
paper we quantify by theoretical means the resistance of the least complex form
of this kind of encoding against known-plaintext attacks. For both standard
compressed sensing with antipodal random matrices and recent multiclass
encryption schemes based on it, we show how the number of candidate encoding
matrices that match a typical plaintext-ciphertext pair is so large that the
search for the true encoding matrix is inconclusive. Such results on the practical
ineffectiveness of known-plaintext attacks underline the fact that even the
closely related problem of signal recovery under encoding-matrix uncertainty is
doomed to fail.
Practical attacks are then exemplified by applying compressed sensing with
antipodal random matrices as a multiclass encryption scheme to signals such as
images and electrocardiographic tracks, showing that the extracted information
on the true encoding matrix from a plaintext-ciphertext pair leads to no
significant signal recovery quality increase. This theoretical and empirical
evidence clarifies that, although not perfectly secure, both standard
compressed sensing and multiclass encryption schemes feature a noteworthy level
of security against known-plaintext attacks, thereby increasing their appeal as
negligible-cost encryption methods for resource-limited sensing applications.
|
1311.1372 | Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast
Simulation of Linear Block Codes over Binary Symmetric Channels | cs.IT math.IT | In this paper the choice of the Bernoulli distribution as biased distribution
for importance sampling (IS) Monte-Carlo (MC) simulation of linear block codes
over binary symmetric channels (BSCs) is studied. Based on the analytical
derivation of the optimal IS Bernoulli distribution, with explicit calculation
of the variance of the corresponding IS estimator, two novel algorithms for
fast simulation of linear block codes are proposed. For sufficiently high
signal-to-noise ratios (SNRs), one of the proposed algorithms is SNR-invariant,
i.e., the IS estimator does not depend on the cross-over probability of the
channel. Also, the proposed algorithms are shown to be suitable for the
estimation of the error-correcting capability of the code and the decoder.
Finally, the effectiveness of the algorithms is confirmed through simulation
results in comparison to the standard Monte Carlo method.
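The core idea of biasing the Bernoulli error distribution can be illustrated on a generic rare-event estimate over a BSC. This is a standard importance-sampling demonstration, not the paper's two algorithms; the event, code length, and biasing choice q = t/n below are our assumptions:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

# Rare event on a BSC: at least t of n bits flipped (e.g. failure of an
# n-bit repetition-style decoder). Setup and biasing are illustrative.
n, p = 15, 0.01
t = n // 2 + 1
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

q = t / n                    # biased Bernoulli crossover centered on the event
M = 20000
flips = (rng.random((M, n)) < q).sum(axis=1)
# Importance weights: likelihood ratio of the true BSC to the biased channel
w = (p / q) ** flips * ((1 - p) / (1 - q)) ** (n - flips)
est = np.mean((flips >= t) * w)

assert abs(est - exact) / exact < 0.1
```

Here the exact tail probability is on the order of 1e-13, so plain Monte Carlo would need roughly 1/exact samples to observe even one event, while 20,000 biased samples already give a few-percent-accurate estimate.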
|
1311.1406 | TOP-SPIN: TOPic discovery via Sparse Principal component INterference | cs.CV cs.IR cs.LG | We propose a novel topic discovery algorithm for unlabeled images based on
the bag-of-words (BoW) framework. We first extract a dictionary of visual words
and subsequently for each image compute a visual word occurrence histogram. We
view these histograms as rows of a large matrix from which we extract sparse
principal components (PCs). Each PC identifies a sparse combination of visual
words which co-occur frequently in some images but seldom appear in others.
Each sparse PC corresponds to a topic, and images whose interference with the
PC is high belong to that topic, revealing the common parts possessed by the
images. We propose to solve the associated sparse PCA problems using an
Alternating Maximization (AM) method, which we modify for the purpose of
efficiently extracting multiple PCs in a deflation scheme. Our approach attacks
the maximization problem in sparse PCA directly and is scalable to
high-dimensional data. Experiments on automatic topic discovery and category
prediction demonstrate encouraging performance of our approach.
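The alternating structure can be sketched for a single sparse PC on a toy histogram matrix. This is a generic l0-constrained alternating-maximization sketch under assumed data, not the paper's exact AM variant or its deflation scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy image-by-visual-word histogram matrix with one planted "topic":
# 30 images in which 5 words co-occur unusually often (assumed setup)
X = rng.poisson(1.0, (100, 50)).astype(float)
X[:30, :5] += 6.0
X -= X.mean(axis=0)                       # center the word counts

# Alternate between the dense left factor u and the s-sparse loading w
s = 5
w = np.ones(50) / np.sqrt(50)             # deterministic initialization
for _ in range(50):
    u = X @ w
    u /= np.linalg.norm(u)                # maximize over the unit vector u
    v = X.T @ u                           # maximize over w: keep s largest
    keep = np.argsort(np.abs(v))[-s:]
    w = np.zeros(50)
    w[keep] = v[keep]
    w /= np.linalg.norm(w)

assert set(np.flatnonzero(w)) == set(range(5))   # the planted words are found
```

The support of the recovered sparse PC identifies the co-occurring visual words, and the images with large entries of `X @ w` form the corresponding topic.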
|
1311.1411 | Effective Secrecy: Reliability, Confusion and Stealth | cs.IT cs.CR math.IT | A security measure called effective security is defined that includes strong
secrecy and stealth communication. Effective secrecy ensures that a message
cannot be deciphered and that the presence of meaningful communication is
hidden. To measure stealth we use resolvability and relate this to binary
hypothesis testing. Results are developed for wire-tap channels and broadcast
channels with confidential messages.
|
1311.1422 | Structural Learning for Template-free Protein Folding | cs.LG cs.CE q-bio.QM | This thesis aims to solve the template-free protein folding problem by
tackling two important components: efficient sampling in vast conformation
space, and design of knowledge-based potentials with high accuracy. We have
proposed the first-order and second-order CRF-Sampler to sample structures from
the continuous local dihedral angles space by modeling the lower and higher
order conditional dependency between neighboring dihedral angles given the
primary sequence information. A framework combining the Conditional Random
Fields and the energy function is introduced to guide the local conformation
sampling using long range constraints with the energy function.
The relationship between the sequence profile and the local dihedral angle
distribution is nonlinear. Hence, we proposed CNF-Folder to model this
complex relationship by applying a novel machine learning model, Conditional
Neural Fields, which combines a structured graphical model with a neural
network. The CRF-Samplers and CNF-Folder performed very well in CASP8 and CASP9.
Further, a novel pairwise distance statistical potential (EPAD) is designed
to capture the dependency of the energy profile on the positions of the
interacting amino acids as well as the types of those amino acids, opposing the
common assumption that this energy profile depends only on the types of amino
acids. EPAD has also been successfully applied in the CASP 10 Free Modeling
experiment with CNF-Folder, performing especially well on some uncommonly
structured targets.
|
1311.1436 | Adaptive Measurement-Based Policy-Driven QoS Management with
Fuzzy-Rule-based Resource Allocation | cs.NI cs.AI | Fixed and wireless networks are increasingly converging towards common
connectivity with IP-based core networks. Providing effective end-to-end
resource and QoS management in such complex heterogeneous converged network
scenarios requires unified, adaptive and scalable solutions to integrate and
co-ordinate diverse QoS mechanisms of different access technologies with
IP-based QoS. Policy-Based Network Management (PBNM) is one approach that could
be employed to address this challenge. Hence, a policy-based framework for
end-to-end QoS management in converged networks, CNQF (Converged Networks QoS
Management Framework) has been proposed within our project. In this paper, the
CNQF architecture, a Java implementation of its prototype and experimental
validation of key elements are discussed. We then present a fuzzy-based CNQF
resource management approach and study the performance of our implementation
with real traffic flows on an experimental testbed. The results demonstrate the
efficacy of our resource-adaptive approach for practical PBNM systems.
|
1311.1484 | Regional properties of global communication as reflected in aggregated
Twitter data | physics.soc-ph cs.SI | Twitter is a popular public conversation platform with a world-wide audience
and diverse forms of connections between users. In this paper we introduce the
concept of aggregated regional Twitter networks in order to characterize
communication between geopolitical regions. We present the study of a follower
and a mention graph created from an extensive data set collected during the
second half of 2012. A k-shell decomposition reveals the global
core-periphery structure, and by means of a modified Regional-SIR
model we also consider basic information-spreading properties.
|
1311.1490 | On Unconditionally Secure Multiparty Computation for Realizing
Correlated Equilibria in Games | cs.CR cs.GT cs.IT math.IT | In game theory, a trusted mediator acting on behalf of the players can enable
the attainment of correlated equilibria, which may provide better payoffs than
those available from the Nash equilibria alone. We explore the approach of
replacing the trusted mediator with an unconditionally secure sampling protocol
that jointly generates the players' actions. We characterize the joint
distributions that can be securely sampled by malicious players via protocols
using error-free communication. This class of distributions depends on whether
players may speak simultaneously ("cheap talk") or must speak in turn ("polite
talk"). In applying sampling protocols toward attaining correlated equilibria
with rational players, we observe that security against malicious parties may
be much stronger than necessary. We propose the concept of secure sampling by
rational players, and show that many more distributions are feasible given
certain utility functions. However, the payoffs attainable via secure sampling
by malicious players are a dominant subset of the rationally attainable
payoffs.
|
1311.1539 | Category-Theoretic Quantitative Compositional Distributional Models of
Natural Language Semantics | cs.CL cs.LG math.CT math.LO | This thesis is about the problem of compositionality in distributional
semantics. Distributional semantics presupposes that the meanings of words are
a function of their occurrences in textual contexts. It models words as
distributions over these contexts and represents them as vectors in high
dimensional spaces. The problem of compositionality for such models concerns
itself with how to produce representations for larger units of text by
composing the representations of smaller units of text.
This thesis focuses on a particular approach to this compositionality
problem, namely using the categorical framework developed by Coecke, Sadrzadeh,
and Clark, which combines syntactic analysis formalisms with distributional
semantic representations of meaning to produce syntactically motivated
composition operations. This thesis shows how this approach can be
theoretically extended and practically implemented to produce concrete
compositional distributional models of natural language semantics. It
furthermore demonstrates that such models can perform on par with, or better
than, other competing approaches in the field of natural language processing.
There are three principal contributions to computational linguistics in this
thesis. The first is to extend the DisCoCat framework on the syntactic front
and semantic front, incorporating a number of syntactic analysis formalisms and
providing learning procedures allowing for the generation of concrete
compositional distributional models. The second contribution is to evaluate the
models developed from the procedures presented here, showing that they
outperform other compositional distributional models present in the literature.
The third contribution is to show how using category theory to solve linguistic
problems forms a sound basis for research, illustrated by examples of work on
this topic, that also suggest directions for future research.
|
1311.1565 | On Maximal Ratio Diversity with Weighting Errors for Physical Layer
Security | cs.IT math.IT | In this letter, we analyze the performance of maximal ratio combining (MRC)
with weighting errors for physical layer security. We assume that both the
legitimate user and the eavesdropper, each equipped with multiple antennas,
employ non-ideal MRC. The non-ideal MRC is modeled in terms of the power
correlation between the estimated and actual fading. We derive new closed-form
and generalized expressions for the secrecy outage probability. Next, we
investigate the asymptotic behavior of the secrecy outage probability at high
signal-to-noise ratio in the main channel between the legitimate user and the
transmitter. The asymptotic analysis provides insight into the actual diversity
offered by MRC with weighting errors. We substantiate our claims with analytic
results and numerical evaluations.
|
1311.1567 | Beamforming Design for Joint Localization and Data Transmission in
Distributed Antenna System | cs.IT math.IT | A distributed antenna system is studied whose goal is to provide data
communication and positioning functionalities to Mobile Stations (MSs). Each MS
receives data from a number of Base Stations (BSs), and uses the received
signal not only to extract the information but also to determine its location.
This is done based on Time of Arrival (TOA) or Time Difference of Arrival
(TDOA) measurements, depending on the assumed synchronization conditions. The
problem of minimizing the overall power expenditure of the BSs under data
throughput and localization accuracy requirements is formulated with respect to
the beamforming vectors used at the BSs. The analysis covers both
frequency-flat and frequency-selective channels, and accounts also for
robustness constraints in the presence of parameter uncertainty. The proposed
algorithmic solutions are based on rank-relaxation and Difference-of-Convex
(DC) programming.
|
1311.1568 | Stability of Sequence-Based Control with Random Delays and Dropouts | math.OC cs.SY | We study networked control of non-linear systems where system states and
tentative plant input sequences are transmitted over unreliable communication
channels. The sequences are calculated recursively by applying a pre-designed,
nominally stabilizing state-feedback control mapping to plant state
predictions. The controller does not require receipt acknowledgments or
knowledge of the delay or dropout distributions. For the i.i.d. case, in which
the numbers of consecutive dropouts are geometrically distributed, we show how
the resulting closed-loop system can be modeled as a Markov non-linear jump
system and establish sufficient conditions for stochastic stability.
|
1311.1626 | Trade-offs Computing Minimum Hub Cover toward Optimized Graph Query
Processing | cs.DB cs.DS | As techniques for graph query processing mature, the need for optimization is
increasingly becoming an imperative. Indices are one of the key ingredients
toward efficient query processing strategies via cost-based optimization. Due
to the apparent absence of a common representation model, it is difficult to
make a focused effort toward developing access structures, metrics to evaluate
query costs, and choose alternatives. In this context, recent interest in
covering-based graph matching appears to be a promising direction of research.
In this paper, our goal is to formally introduce a new graph representation
model, called Minimum Hub Cover, and demonstrate that this representation
offers interesting strategic advantages, facilitates construction of candidate
graphs from graph fragments, and helps leverage indices in novel ways for query
optimization. However, similar to other covering problems, minimum hub cover is
NP-hard, and thus is a natural candidate for optimization. We claim that
computing the minimum hub cover leads to substantial cost reduction for graph
query processing. We present a computational characterization of minimum hub
cover based on integer programming to substantiate our claim and investigate
its computational cost on various graph types.
|
1311.1632 | Persistence, Change, and the Integration of Objects and Processes in the
Framework of the General Formal Ontology | cs.AI | In this paper we discuss various problems associated with temporal phenomena.
These problems include persistence and change, the integration of objects and
processes, and truth-makers for temporal propositions. We propose an approach
which interprets persistence as a phenomenon emanating from the activity of the
mind, and which, additionally, postulates that persistence, finally, rests on
personal identity. The General Formal Ontology (GFO) is a top level ontology
being developed at the University of Leipzig. Top level ontologies can be
roughly divided into 3D-ontologies and 4D-ontologies. GFO is the only top
level ontology used in applications that is a 4D-ontology and additionally
admits 3D objects. Objects and processes are integrated in a natural way.
|
1311.1640 | Computer simulations reveal complex distribution of haemodynamic forces
in a mouse retina model of angiogenesis | cs.CE physics.bio-ph q-bio.QM | There is currently limited understanding of the role played by haemodynamic
forces on the processes governing vascular development. One of many obstacles
to be overcome is being able to measure those forces, at the required
resolution level, on vessels only a few micrometres thick. In the current
paper, we present an in silico method for the computation of the haemodynamic
forces experienced by murine retinal vasculature (a widely used vascular
development animal model) beyond what is measurable experimentally. Our results
show that it is possible to reconstruct high-resolution three-dimensional
geometrical models directly from samples of retinal vasculature and that the
lattice-Boltzmann algorithm can be used to obtain accurate estimates of the
haemodynamics in these domains. We generate flow models from samples obtained
at postnatal days (P) 5 and 6. Our simulations show important differences
between the flow patterns recovered in both cases, including observations of
regression occurring in areas where wall shear stress gradients exist. We
propose two possible mechanisms to account for the observed increase in
velocity and wall shear stress between P5 and P6: i) the measured reduction in
typical vessel diameter between both time points, ii) the reduction in network
density triggered by the pruning process. The methodology developed herein is
applicable to other biomedical domains where microvasculature can be imaged but
experimental flow measurements are unavailable or difficult to obtain.
|
1311.1642 | Quasi-Linear Compressed Sensing | math.NA cs.IT math.IT | Inspired by significant real-life applications, in particular, sparse phase
retrieval and sparse pulsation frequency detection in Asteroseismology, we
investigate a general framework for compressed sensing, where the measurements
are quasi-linear. We formulate natural generalizations of the well-known
Restricted Isometry Property (RIP) towards nonlinear measurements, which allow
us to prove both unique identifiability of sparse signals as well as the
convergence of recovery algorithms to compute them efficiently. We show that
for certain randomized quasi-linear measurements, including Lipschitz
perturbations of classical RIP matrices and phase retrieval from random
projections, the proposed restricted isometry properties hold with high
probability. We analyze a generalized Orthogonal Least Squares (OLS) under the
assumption that magnitudes of signal entries to be recovered decay fast. Greed
is good again, as we show that this algorithm performs efficiently in phase
retrieval and asteroseismology. For situations where the decay assumption on
the signal does not necessarily hold, we propose two alternative algorithms,
which are natural generalizations of the well-known iterative hard and
soft-thresholding. While these algorithms are rarely successful for the
mentioned applications, we show their strong recovery guarantees for
quasi-linear measurements which are Lipschitz perturbations of RIP matrices.
|
1311.1644 | The Maximum Entropy Relaxation Path | cs.LG math.OC stat.ML | The relaxed maximum entropy problem is concerned with finding a probability
distribution on a finite set that minimizes the relative entropy to a given
prior distribution, while satisfying relaxed max-norm constraints with respect
to a third observed multinomial distribution. We study the entire relaxation
path for this problem in detail. We show existence and a geometric description
of the relaxation path. Specifically, we show that the maximum entropy
relaxation path admits a planar geometric description as an increasing,
piecewise linear function in the inverse relaxation parameter. We derive fast
algorithms for tracking the path. In various realistic settings, our algorithms
require $O(n\log(n))$ operations for probability distributions on $n$ points,
making it possible to handle large problems. Once the path has been recovered,
we show that given a validation set, the family of admissible models is reduced
from an infinite family to a small, discrete set. We demonstrate the merits of
our approach in experiments with synthetic data and discuss its potential for
the estimation of compact n-gram language models.
|
1311.1694 | Biometric Signature Processing & Recognition Using Radial Basis Function
Network | cs.CV | Automatic signature recognition is a challenging problem which has
received much attention in recent years due to its many applications in
different fields. Signatures have long been used for verification and
authentication purposes. Earlier methods were manual, but nowadays they are
being digitized. This paper provides an efficient method for signature
recognition using a Radial Basis Function Network. The network is trained with
sample images from a database, with feature extraction performed before
training. For testing, an image undergoes rotation-translation-scaling
correction and is then given to the network. The network successfully
identifies the original image and also gives correct output for stored database
images. The method provides a recognition rate of approximately 80%
for 200 samples.
|
1311.1704 | Scalable Recommendation with Poisson Factorization | cs.IR cs.AI cs.LG stat.ML | We develop a Bayesian Poisson matrix factorization model for forming
recommendations from sparse user behavior data. These data are large user/item
matrices where each user has provided feedback on only a small subset of items,
either explicitly (e.g., through star ratings) or implicitly (e.g., through
views or purchases). In contrast to traditional matrix factorization
approaches, Poisson factorization implicitly models each user's limited
attention to consume items. Moreover, because of the mathematical form of the
Poisson likelihood, the model needs only to explicitly consider the observed
entries in the matrix, leading to both scalable computation and good predictive
performance. We develop a variational inference algorithm for approximate
posterior inference that scales up to massive data sets. This is an efficient
algorithm that iterates over the observed entries and adjusts an approximate
posterior over the user/item representations. We apply our method to large
real-world user data containing users rating movies, users listening to songs,
and users reading scientific papers. In all these settings, Bayesian Poisson
factorization outperforms state-of-the-art matrix factorization methods.
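The "only observed entries matter" property of the Poisson likelihood can be illustrated with a point-estimate stand-in: multiplicative updates that maximize the Poisson log-likelihood (the classic KL-NMF form). This is not the paper's variational Bayesian scheme, and the toy data below are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy user-item count matrix drawn from a rank-2 Poisson model (illustrative)
U, I, K = 30, 20, 2
W_true = rng.gamma(2.0, 0.5, (U, K))
H_true = rng.gamma(2.0, 0.5, (K, I))
V = rng.poisson(W_true @ H_true)

# Multiplicative updates for the Poisson log-likelihood (point estimates,
# a stand-in for the paper's variational inference)
W = rng.random((U, K)) + 0.1
H = rng.random((K, I)) + 0.1

def loglik(L):
    # Poisson log-likelihood up to the constant log(V!) term
    return np.sum(V * np.log(L + 1e-12) - L)

before = loglik(W @ H)
for _ in range(200):
    L = W @ H + 1e-12
    W *= ((V / L) @ H.T) / H.sum(axis=1)
    L = W @ H + 1e-12
    H *= (W.T @ (V / L)) / W.sum(axis=0)[:, None]
after = loglik(W @ H)

assert after > before   # each multiplicative update never decreases the likelihood
```

Note that the term `np.sum(V * np.log(L) - L)` touches the nonzero counts only through `V * np.log(L)`, which is why zero entries contribute no per-entry work beyond the low-rank product itself.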
|
1311.1712 | Approximate Bayesian Probabilistic-Data-Association-Aided Iterative
Detection for MIMO Systems Using Arbitrary M-ary Modulation | cs.IT math.IT | In this paper, the issue of designing an iterative-detection-and-decoding
(IDD)-aided receiver, relying on the low-complexity probabilistic data
association (PDA) method, is addressed for turbo-coded
multiple-input-multiple-output (MIMO) systems using general M-ary modulations.
We demonstrate that the classic candidate-search-aided bit-based extrinsic
log-likelihood ratio (LLR) calculation method is not applicable to the family
of PDA-based detectors. Additionally, we reveal that, in contrast to the
interpretation in the existing literature, the output symbol probabilities of
existing PDA algorithms are not the true a posteriori probabilities (APPs) but,
rather, the normalized symbol likelihoods. Therefore, the classic relationship,
where the extrinsic LLRs are given by subtracting the a priori LLRs from the a
posteriori LLRs, does not hold for the existing PDA-based detectors. Motivated
by these revelations, we conceive a new approximate Bayesian-theorem-based
logarithmic-domain PDA (AB-Log-PDA) method and unveil the technique of
calculating bit-based extrinsic LLRs for the AB-Log-PDA, which facilitates the
employment of the AB-Log-PDA in a simplified IDD receiver structure.
Additionally, we demonstrate that we may dispense with inner iterations within
the AB-Log-PDA in the context of IDD receivers. Our complexity analysis and
numerical results recorded for Nakagami-m fading channels demonstrate that the
proposed AB-Log-PDA-based IDD scheme is capable of achieving a performance
comparable with that of the optimal maximum a posteriori (MAP)-detector-based
IDD receiver, while imposing significantly lower computational complexity in
the scenarios considered.
|
1311.1723 | On Probability Estimation via Relative Frequencies and Discount | cs.IT math.IT | Probability estimation is an elementary building block of every statistical
data compression algorithm. In practice probability estimation is often based
on relative letter frequencies which get scaled down, when their sum is too
large. Such algorithms are attractive in terms of memory requirements, running
time and practical performance. However, there still is a lack of theoretical
understanding. In this work we formulate a typical probability estimation
algorithm based on relative frequencies and frequency discount, Algorithm RFD.
Our main contribution is its theoretical analysis. We show that the code length
it requires above an arbitrary piecewise stationary model with bounded and
unbounded letter probabilities is small. This theoretically confirms the
recency effect of periodic frequency discount, which has often been observed
empirically.
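A minimal frequency-with-discount estimator can be sketched as follows. The halving rule and budget below are one common discount choice, assumed for illustration; Algorithm RFD's exact discount may differ:

```python
class RFD:
    """Relative-frequency probability estimator with discount: when the count
    sum exceeds a budget, all counts are halved (one common discount rule,
    assumed here; not necessarily the paper's exact variant)."""

    def __init__(self, alphabet, budget=64):
        self.counts = {a: 1 for a in alphabet}   # Laplace-style initialization
        self.budget = budget

    def prob(self, a):
        total = sum(self.counts.values())
        return self.counts[a] / total

    def update(self, a):
        self.counts[a] += 1
        if sum(self.counts.values()) > self.budget:
            for k in self.counts:
                self.counts[k] = max(1, self.counts[k] // 2)

est = RFD("ab")
for s in "a" * 50 + "b" * 50:   # a piecewise stationary source: "a"s, then "b"s
    est.update(s)

assert est.prob("b") > est.prob("a")          # recency: the recent segment dominates
assert sum(est.counts.values()) <= 2 * est.budget   # counts stay bounded
```

After the source switches from "a" to "b", periodic halving decays the stale "a" counts, so the estimate tracks the new segment: this is the recency effect the abstract refers to.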
|
1311.1731 | Stochastic blockmodel approximation of a graphon: Theory and consistent
estimation | stat.ME cs.LG cs.SI physics.data-an stat.ML | Non-parametric approaches for analyzing network data based on exchangeable
graph models (ExGM) have recently gained interest. The key object that defines
an ExGM is often referred to as a graphon. This non-parametric perspective on
network modeling poses challenging questions on how to make inference on the
graphon underlying observed network data. In this paper, we propose a
computationally efficient procedure to estimate a graphon from a set of
observed networks generated from it. This procedure is based on a stochastic
blockmodel approximation (SBA) of the graphon. We show that, by approximating
the graphon with a stochastic block model, the graphon can be consistently
estimated, that is, the estimation error vanishes as the size of the graph
approaches infinity.
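The core of approximating a graphon by a stochastic block model is block averaging. The sketch below assumes a piecewise-constant graphon (i.e., an exact 2-block SBM) and known block labels, so it only illustrates the averaging step; the SBA procedure additionally estimates the blocks from the data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant graphon = 2-block SBM (illustrative parameters)
B = np.array([[0.8, 0.2],
              [0.2, 0.6]])
n = 400
z = rng.integers(0, 2, n)                     # latent block labels (known here)
P = B[z][:, z]                                # edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops

# Block-averaging estimator of the graphon values
B_hat = np.empty_like(B)
for a in range(2):
    for b in range(2):
        mask = np.outer(z == a, z == b)
        if a == b:
            mask &= ~np.eye(n, dtype=bool)    # exclude the diagonal
        B_hat[a, b] = A[mask].mean()

assert np.abs(B_hat - B).max() < 0.05   # estimation error shrinks as n grows
```

Rerunning with larger `n` shrinks the error roughly like `1/n`, which mirrors the consistency statement in the abstract: the block averages concentrate on the graphon values as the graph grows.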
|
1311.1757 | Failure dynamics of the global risk network | cs.CY cs.SI physics.soc-ph | Risks threatening modern societies form an intricately interconnected network
that often underlies crisis situations. Yet, little is known about how risk
materializations in distinct domains influence each other. Here we present an approach in which expert assessments of risk likelihoods and influences underlie a quantitative model of the global risk network dynamics. The modeled risks range from environmental to economic and technological, and include difficult-to-quantify risks such as geo-political or social ones. Using maximum likelihood estimation, we find the optimal model parameters and demonstrate that the model including network effects significantly outperforms the others, uncovering the full value of the expert-collected data. We analyze the model
dynamics and study its resilience and stability. Our findings include such risk
properties as contagion potential, persistence, roles in cascades of failures
and the identity of risks most detrimental to system stability. The model
provides quantitative means for measuring the adverse effects of risk
interdependence and the materialization of risks in the network.
|
1311.1759 | Dimensionality reduction and spectral properties of multilayer networks | physics.soc-ph cs.SI math.CO | Network representations are useful for describing the structure of a large
variety of complex systems. Although most studies of real-world networks
suppose that nodes are connected by only a single type of edge, most natural
and engineered systems include multiple subsystems and layers of connectivity.
This new paradigm has attracted a great deal of attention and one fundamental
challenge is to characterize multilayer networks both structurally and
dynamically. One way to address this question is to study the spectral
properties of such networks. Here, we apply the framework of graph quotients,
which occurs naturally in this context, and the associated eigenvalue
interlacing results, to the adjacency and Laplacian matrices of undirected
multilayer networks. Specifically, we describe relationships between the
eigenvalue spectra of multilayer networks and their two most natural quotients,
the network of layers and the aggregate network, and show the dynamical
implications of working with either of the two simplified representations. Our
work thus contributes in particular to the study of dynamical processes whose
critical properties are determined by the spectral properties of the underlying
network.
|
1311.1761 | Exploring Deep and Recurrent Architectures for Optimal Control | cs.LG cs.AI cs.NE cs.RO cs.SY | Sophisticated multilayer neural networks have achieved state-of-the-art
results on multiple supervised tasks. However, successful applications of such
multilayer networks to control have so far been limited largely to the
perception portion of the control pipeline. In this paper, we explore the
application of deep and recurrent neural networks to a continuous,
high-dimensional locomotion task, where the network is used to represent a
control policy that maps the state of the system (represented by joint angles)
directly to the torques at each joint. By using a recent reinforcement learning
algorithm called guided policy search, we can successfully train neural network
controllers with thousands of parameters, allowing us to compare a variety of
architectures. We discuss the differences between the locomotion control task
and previous supervised perception tasks, present experimental results
comparing various architectures, and discuss future directions in the
application of techniques from deep learning to the problem of optimal control.
|
1311.1764 | Automatic ontology generation for data mining using fca and clustering | cs.DB cs.AI | Motivated by the increased need for formalized representations of the domain of Data Mining, and by the success of Formal Concept Analysis (FCA) and ontologies in several Computer Science fields, we present in this paper a new approach for the automatic generation of a Fuzzy Ontology of Data Mining (FODM), through the fusion of conceptual clustering, fuzzy logic, and FCA. In our approach, we propose to generate the ontology by taking an additional degree of granularity into account in the generation process. Specifically, we define an ontology over the classes resulting from a preliminary classification of the data. We show that this approach improves the definition of the ontology, offers a better interpretation of the data, and reduces both the memory footprint and the execution time for exploiting the data.
|
1311.1780 | Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks | cs.NE cs.LG stat.ML | In this paper we propose and investigate a novel nonlinear unit, called $L_p$
unit, for deep neural networks. The proposed $L_p$ unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain
degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Second, we provide a geometrical interpretation of the
activation function based on which we argue that the $L_p$ unit is more
efficient at representing complex, nonlinear separating boundaries. Each $L_p$
unit defines a superelliptic boundary, with its exact shape defined by the
order $p$. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few $L_p$ units of different
orders. This insight justifies the need for learning different orders for each
unit in the model. We empirically evaluate the proposed $L_p$ units on a number
of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$
units achieve the state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep
recurrent neural networks (RNN).
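The pooling interpretation can be checked numerically; this hedged sketch shows only the normalized $L_p$-norm computation at the core of the unit (the learned projections of the actual unit are omitted):

```python
import numpy as np

def lp_unit(x, p):
    """Normalized L_p-norm pooling over a group of inputs (a sketch of
    the L_p unit's core computation; the learned projections and exact
    normalization of the paper are omitted)."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x) ** p) ** (1.0 / p)

v = [1.0, 2.0, 3.0, 4.0]
avg = lp_unit(v, 1)        # p = 1 recovers average (absolute) pooling
rms = lp_unit(v, 2)        # p = 2 recovers root-mean-square pooling
near_max = lp_unit(v, 50)  # large p approaches max pooling
```

Varying the single parameter $p$ thus interpolates between the conventional pooling operators named in the abstract, which is why learning a separate order per unit can pay off.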
|
1311.1838 | Efficient Regularization of Squared Curvature | cs.CV | Curvature has received increased attention as an important alternative to
length based regularization in computer vision. In contrast to length, it
preserves elongated structures and fine details. Existing approaches are either
inefficient, or have low angular resolution and yield results with strong block
artifacts. We derive a new model for computing squared curvature based on
integral geometry. The model counts responses of straight line triple cliques.
The corresponding energy decomposes into submodular and supermodular pairwise
potentials. We show that this energy can be efficiently minimized even for high
angular resolutions using the trust region framework. Our results confirm that
we obtain accurate and visually pleasing solutions without strong artifacts at
reasonable run times.
|
1311.1839 | An Efficiently Solvable Quadratic Program for Stabilizing Dynamic
Locomotion | cs.RO | We describe a whole-body dynamic walking controller implemented as a convex
quadratic program. The controller solves an optimal control problem using an
approximate value function derived from a simple walking model while respecting
the dynamic, input, and contact constraints of the full robot dynamics. By
exploiting sparsity and temporal structure in the optimization with a custom
active-set algorithm, we surpass the performance of the best available
off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We
describe applications to balancing and walking tasks using the simulated Atlas
robot in the DARPA Virtual Robotics Challenge.
|
1311.1856 | Submodularization for Quadratic Pseudo-Boolean Optimization | cs.CV | Many computer vision problems require optimization of binary non-submodular
energies. We propose a general optimization framework based on local submodular
approximations (LSA). Unlike standard LP relaxation methods that linearize the
whole energy globally, our approach iteratively approximates the energies
locally. On the other hand, unlike standard local optimization methods (e.g.
gradient descent or projection techniques) we use non-linear submodular
approximations and optimize them without leaving the domain of integer
solutions. We discuss two specific LSA algorithms based on "trust region" and
"auxiliary function" principles, LSA-TR and LSA-AUX. These methods obtain
state-of-the-art results on a wide range of applications outperforming many
standard techniques such as LBP, QPBO, and TRWS. While our paper is focused on
pairwise energies, our ideas extend to higher-order problems. The code is
available online (http://vision.csd.uwo.ca/code/).
|
1311.1869 | Optimization, Learning, and Games with Predictable Sequences | cs.LG cs.GT | We provide several applications of Optimistic Mirror Descent, an online
learning algorithm based on the idea of predictable sequences. First, we
recover the Mirror Prox algorithm for offline optimization, prove an extension
to Holder-smooth functions, and apply the results to saddle-point type
problems. Next, we prove that a version of Optimistic Mirror Descent (which has
a close relation to the Exponential Weights algorithm) can be used by two
strongly-uncoupled players in a finite zero-sum matrix game to converge to the
minimax equilibrium at the rate of O((log T)/T). This addresses a question of
Daskalakis et al 2011. Further, we consider a partial information version of
the problem. We then apply the results to convex programming and exhibit a
simple algorithm for the approximate Max Flow problem.
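The optimism idea can be sketched on the simplest instance, a bilinear zero-sum game (a hedged illustration using the Euclidean, gradient-descent case of Optimistic Mirror Descent; the step size and iteration count are arbitrary choices):

```python
def grad(x, y):
    # Gradients of f(x, y) = x * y for the descending (x) and
    # ascending (y) player, both written as descent directions.
    return y, -x

def optimistic_gda(x, y, eta=0.1, steps=2000):
    """Optimistic gradient descent/ascent: step along 2*g_t - g_{t-1},
    i.e. the last gradient serves as a prediction of the next one."""
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= eta * (2 * gx - gx_prev)
        y -= eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

def plain_gda(x, y, eta=0.1, steps=2000):
    """Ordinary simultaneous gradient descent/ascent, for contrast."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y - eta * gy
    return x, y

xo, yo = optimistic_gda(1.0, 1.0)  # spirals in toward the equilibrium (0, 0)
xp, yp = plain_gda(1.0, 1.0)       # spirals outward
```

Plain simultaneous gradient descent/ascent drifts away from the saddle point of x*y, while the optimistic variant converges: the last gradient acts as a predictable sequence, which is the mechanism behind the fast zero-sum-game rates in the abstract.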
|
1311.1885 | Verifiable Control System Development for Gas Turbine Engines | cs.SY math.OC | A control software verification framework for gas turbine engines is
developed. A stability proof is presented for a gain-scheduled closed-loop engine system based on global linearization and linear matrix inequality (LMI)
techniques. Using convex optimization tools, a single quadratic Lyapunov
function is computed for multiple linearizations near equilibrium points of the
closed-loop system. With the computed stability matrices, ellipsoid invariant
sets are constructed, which are used efficiently for DGEN turbofan engine
control code stability analysis. Then a verifiable linear gain-scheduled controller for the DGEN engine is developed based on formal methods and tested on
the engine virtual test bench. Simulation results show that the developed
verifiable gain scheduled controller is capable of regulating the engine in a
stable fashion with proper tracking performance.
|
1311.1887 | Demand Side Management in Smart Grids using a Repeated Game Framework | cs.SY cs.GT | Demand side management (DSM) is a key solution for reducing the peak-time
power consumption in smart grids. To provide incentives for consumers to shift
their consumption to off-peak times, the utility company charges consumers
differential pricing for using power at different times of the day. Consumers
take into account these differential prices when deciding when and how much
power to consume daily. Importantly, while consumers enjoy lower billing costs
when shifting their power usage to off-peak times, they also incur discomfort
costs due to the altering of their power consumption patterns. Existing works
propose stationary strategies for the myopic consumers to minimize their
short-term billing and discomfort costs. In contrast, we model the interaction
emerging among self-interested, foresighted consumers as a repeated energy
scheduling game and prove that the stationary strategies are suboptimal in
terms of long-term total billing and discomfort costs. Subsequently, we propose
a novel framework for determining optimal nonstationary DSM strategies, in
which consumers can choose different daily power consumption patterns depending
on their preferences, routines, and needs. As a direct consequence of the
nonstationary DSM policy, different subsets of consumers are allowed to use
power in peak times at a low price. The subset of consumers that are selected
daily to have their joint discomfort and billing costs minimized is determined
based on the consumers' power consumption preferences as well as on the past
history of which consumers have shifted their usage previously. Importantly, we
show that the proposed strategies are incentive-compatible. Simulations confirm
that, given the same peak-to-average ratio, the proposed strategy can reduce
the total cost (billing and discomfort costs) by up to 50% compared to existing
DSM strategies.
|
1311.1888 | D$^3$PO - Denoising, Deconvolving, and Decomposing Photon Observations | astro-ph.IM cs.IT math.IT physics.data-an stat.CO | The analysis of astronomical images is a non-trivial task. The D3PO algorithm
addresses the inference problem of denoising, deconvolving, and decomposing
photon observations. Its primary goal is the simultaneous but individual
reconstruction of the diffuse and point-like photon flux given a single photon
count image, where the fluxes are superimposed. In order to discriminate
between these morphologically different signal components, a probabilistic
algorithm is derived in the language of information field theory based on a
hierarchical Bayesian parameter model. The signal inference exploits prior
information on the spatial correlation structure of the diffuse component and
the brightness distribution of the spatially uncorrelated point-like sources. A
maximum a posteriori solution and a solution minimizing the Gibbs free energy
of the inference problem using variational Bayesian methods are discussed.
Since the derivation of the solution is not dependent on the underlying
position space, the implementation of the D3PO algorithm uses the NIFTY package
to ensure applicability to various spatial grids and at any resolution. The
fidelity of the algorithm is validated by the analysis of simulated data,
including a realistic high energy photon count image showing a 32 x 32 arcmin^2
observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO
algorithm successfully denoised, deconvolved, and decomposed the data into a
diffuse and a point-like signal estimate for the respective photon flux
components.
|
1311.1897 | Logique math\'ematique et linguistique formelle | math.LO cs.CL | As the etymology of the word shows, logic is intimately related to language,
as exemplified by the work of philosophers from Antiquity and the Middle Ages. At the beginning of the 20th century, the crisis of the foundations of mathematics gave rise to mathematical logic and imposed logic as a language-based foundation for mathematics. How did the relations between logic and language evolve in this newly defined mathematical framework? After a
survey of the history of the relation between logic and linguistics,
traditionally focused on semantics, we focus on some present issues: 1) grammar
as a deductive system 2) the transformation of the syntactic structure of a
sentence to a logical formula representing its meaning 3) taking into account
the context when interpreting words. This lecture shows that type theory
provides a convenient framework both for natural language syntax and for the
interpretation of any of its levels (words, sentences, discourse).
|
1311.1903 | Moment-based Uniform Deviation Bounds for $k$-means and Friends | cs.LG stat.ML | Suppose $k$ centers are fit to $m$ points by heuristically minimizing the
$k$-means cost; what is the corresponding fit over the source distribution?
This question is resolved here for distributions with $p\geq 4$ bounded
moments; in particular, the difference between the sample cost and distribution
cost decays with $m$ and $p$ as $m^{\min\{-1/4, -1/2+2/p\}}$. The essential
technical contribution is a mechanism to uniformly control deviations in the
face of unbounded parameter sets, cost functions, and source distributions. To
further demonstrate this mechanism, a soft clustering variant of $k$-means cost
is also considered, namely the log likelihood of a Gaussian mixture, subject to
the constraint that all covariance matrices have bounded spectrum. Lastly, a
rate with refined constants is provided for $k$-means instances possessing some
cluster structure.
|
1311.1935 | Unsupervised learning human's activities by overexpressed recognized
non-speech sounds | cs.AI cs.HC | Human activity and its environment produce sounds such as, at home, the noise of running water, coughing, or the television. These sounds can be used to determine the activity taking place in the environment. The objective is to monitor a person's activity or determine their environment using a single low-cost microphone and sound analysis, for example to adapt programs to the activity or environment or to detect abnormal situations. Patterns over-expressed repeatedly in the sequences of recognized sounds, both within and across environments, characterize activities such as a person entering the house or a TV program being watched. We first manually annotated 1500 recognized sounds from the daily life activities of elderly persons living at home. We then inferred an ontology and enriched the annotation database with a crowd-sourced manual annotation of 7500 sounds, to help annotate the most frequent sounds. Using sound-learning algorithms, we defined 50 types of the most frequent sounds and used this set as a basis for tagging sounds. By detecting over-expressed motifs in the sequences of tags, we were able to categorize, using only a single low-cost microphone, complex activities of daily life of a person at home, such as watching TV, entering the apartment, or having a phone conversation, as well as to detect unknown activities as repeated tasks performed by users.
|
1311.1939 | Fast Tracking via Spatio-Temporal Context Learning | cs.CV | In this paper, we present a simple yet fast and robust algorithm which
exploits the spatio-temporal context for visual tracking. Our approach
formulates the spatio-temporal relationships between the object of interest and
its local context based on a Bayesian framework, which models the statistical
correlation between the low-level features (i.e., image intensity and position)
from the target and its surrounding regions. The tracking problem is posed by
computing a confidence map, and obtaining the best target location by
maximizing an object location likelihood function. The Fast Fourier Transform
is adopted for fast learning and detection in this work. Implemented in MATLAB
without code optimization, the proposed tracker runs at 350 frames per second
on an i7 machine. Extensive experimental results show that the proposed
algorithm performs favorably against state-of-the-art methods in terms of
efficiency, accuracy and robustness.
|
1311.1940 | Power Decoding of Reed-Solomon Codes Revisited | cs.IT math.IT | Power decoding, or "decoding by virtual interleaving", of Reed--Solomon codes
is a method for unique decoding beyond half the minimum distance. We give a new
variant of the Power decoding scheme, building upon the key equation of Gao. We
show various interesting properties such as behavioural equivalence to the
classical scheme using syndromes, as well as a new bound on the failure
probability when the powering degree is 3.
|
1311.1958 | Constructing Time Series Shape Association Measures: Minkowski Distance
and Data Standardization | cs.LG | It is surprising that over the last two decades many works in time series data mining and clustering have been concerned with measures of similarity of time series but not with measures of association that can be used for measuring possible direct and inverse relationships between time series. Inverse relationships can exist
between dynamics of prices and sell volumes, between growth patterns of
competitive companies, between well production data in oilfields, between wind
velocity and air pollution concentration etc. The paper develops a theoretical
basis for analysis and construction of time series shape association measures.
Starting from the axioms of time series shape association measures it studies
the methods of construction of measures satisfying these axioms. Several
general methods of construction of such measures suitable for measuring time
series shape similarity and shape association are proposed. Time series shape
association measures based on Minkowski distance and data standardization
methods are considered. Cosine similarity and Pearson's correlation coefficient are obtained as particular cases of the proposed general methods
that can be used also for construction of new association measures in data
analysis.
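One such construction can be sketched as follows (a hedged illustration: this particular combination of z-standardization and a Minkowski-type distance is one possible instance, chosen so that p = 2 recovers Pearson's correlation; it is not necessarily the paper's exact formula):

```python
import numpy as np

def shape_association(x, y, p=2.0):
    """Shape association measure built from a Minkowski-type distance on
    z-standardized series (illustrative instance). For p = 2 it equals
    Pearson's correlation: +1 for identical shapes, -1 for inverse ones."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    zx = (x - x.mean()) / x.std()   # standardization removes level and scale
    zy = (y - y.mean()) / y.std()
    d = np.mean(np.abs(zx - zy) ** p) ** (2.0 / p)  # Minkowski-type distance
    return 1.0 - d / 2.0

t = np.arange(20.0)
same = shape_association(t, 2.0 * t + 3.0)   # identical shape -> +1
inverse = shape_association(t, -t)           # inverse shape  -> -1
```

Because the measure is signed, it captures exactly the direct and inverse relationships (prices vs. sell volumes, wind velocity vs. pollution concentration) that plain similarity measures miss.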
|
1311.2003 | Threshold Saturation for Nonbinary SC-LDPC Codes on the Binary Erasure
Channel | cs.IT math.IT | We analyze the asymptotic performance of nonbinary spatially-coupled
low-density parity-check (SC-LDPC) code ensembles defined over the general
linear group on the binary erasure channel. In particular, we prove threshold
saturation of belief propagation decoding to the so-called potential threshold,
using the proof technique based on potential functions introduced by Yedla
\textit{et al.}, assuming that the potential function exists. We rewrite the
density evolution of nonbinary SC-LDPC codes in an equivalent vector recursion
form which is suited for the use of the potential function. We then discuss the
existence of the potential function for the general case of vector recursions
defined by multivariate polynomials, and give a method to construct it. We
define a potential function in a slightly more general form than the one by Yedla
\textit{et al.}, in order to make the technique based on potential functions
applicable to the case of nonbinary LDPC codes. We show that the potential
function exists if a solution to a carefully designed system of linear
equations exists. Furthermore, we show numerically the existence of a solution
to the system of linear equations for a large number of nonbinary LDPC code
ensembles, which allows us to define their potential function and thus prove
threshold saturation.
|
1311.2008 | Statistical validation of high-dimensional models of growing networks | physics.soc-ph cs.SI physics.data-an | The abundance of models of complex networks and the current insufficient
validation standards make it difficult to judge which models are strongly
supported by data and which are not. We focus here on likelihood maximization
methods for models of growing networks with many parameters and compare their
performance on artificial and real datasets. While high dimensionality of the
parameter space harms the performance of direct likelihood maximization on
artificial data, this can be improved by introducing a suitable penalization
term. Likelihood maximization on real data shows that the presented approach is
able to discriminate among available network models. To make large-scale
datasets accessible to this kind of analysis, we propose a subset sampling
technique and show that it yields substantial model evidence in a fraction of
time necessary for the analysis of the complete data.
|
1311.2012 | Quasi-Static Multiple-Antenna Fading Channels at Finite Blocklength | cs.IT math.IT | This paper investigates the maximal achievable rate for a given blocklength
and error probability over quasi-static multiple-input multiple-output (MIMO)
fading channels, with and without channel state information (CSI) at the
transmitter and/or the receiver. The principal finding is that outage capacity,
despite being an asymptotic quantity, is a sharp proxy for the
finite-blocklength fundamental limits of slow-fading channels. Specifically,
the channel dispersion is shown to be zero regardless of whether the fading
realizations are available at both transmitter and receiver, at only one of
them, or at neither of them. These results follow from analytically tractable
converse and achievability bounds. Numerical evaluation of these bounds
verifies that zero dispersion may indeed imply fast convergence to the outage
capacity as the blocklength increases. In the example of a particular $1 \times
2$ single-input multiple-output (SIMO) Rician fading channel, the blocklength
required to achieve $90\%$ of capacity is about an order of magnitude smaller
compared to the blocklength required for an AWGN channel with the same
capacity. For this specific scenario, the coding/decoding schemes adopted in
the LTE-Advanced standard are benchmarked against the finite-blocklength
achievability and converse bounds.
|
1311.2013 | Google matrix analysis of C.elegans neural network | physics.soc-ph cs.SI q-bio.NC | We study the structural properties of the neural network of the C.elegans
(worm) from a directed graph point of view. The Google matrix analysis is used
to characterize the neuron connectivity structure and node classifications are
discussed and compared with physiological properties of the cells. Our results
are obtained by a proper definition of neural directed network and subsequent
eigenvector analysis which recovers some results of previous studies. Our
analysis highlights particular sets of important neurons constituting the core
of the neural system. The applications of PageRank, CheiRank and ImpactRank to
characterization of interdependency of neurons are discussed.
|
1311.2014 | A new stopping criterion for the mean shift iterative algorithm | cs.CV | The mean shift iterative algorithm using entropy as a stopping criterion was proposed in 2006. Since then, a theoretical basis has been developed and a number of applications have been carried out using this algorithm. This paper proposes a new stopping criterion for the mean shift iterative algorithm, in which an entropy-based stopping threshold is still used, but in a different way. Many segmentation experiments were carried out on standard images, verifying that a better segmentation was reached and that the algorithm was more stable. A convergence analysis under the new stopping criterion was carried out through a theorem. The goal of this paper is to compare the new stopping criterion with the old one; for this reason, the obtained results were not compared with other segmentation approaches, since such comparisons were previously carried out with the old stopping criterion.
|
1311.2023 | Information dissemination processes in directed social networks | cs.SI physics.soc-ph | Social networks can have asymmetric relationships. In the online social
network Twitter, a follower receives tweets from a followed person but the
followed person is not obliged to subscribe to the channel of the follower.
Thus, it is natural to consider the dissemination of information in directed
networks. In this work we use the mean-field approach to derive differential
equations that describe the dissemination of information in a social network
with asymmetric relationships. In particular, our model reflects the impact of
the degree distribution on the information propagation process. We further show
that for an important subclass of our model, the differential equations can be
solved analytically.
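The mean-field approach can be sketched with a toy degree-class model (a hedged illustration: the rate equation below is a simplified SI-type dynamic whose constants and in-degree distribution are hypothetical, not the paper's exact system):

```python
import numpy as np

def disseminate(p_in, lam, t_max, dt=0.01, i0=0.01):
    """Toy mean-field model of information dissemination in a directed
    network (a simplified SI-type system, not the paper's equations).
    i_k(t) is the informed fraction among users with in-degree k;
    followers with more subscriptions (larger k) receive the
    information at a proportionally higher rate. Forward-Euler step."""
    ks = np.arange(len(p_in), dtype=float)   # in-degree values 0..K
    i_k = np.full(len(p_in), i0)             # initially informed fractions
    for _ in range(int(t_max / dt)):
        i_mean = float(np.dot(p_in, i_k))    # overall informed fraction
        i_k = np.clip(i_k + dt * lam * ks * i_mean * (1.0 - i_k), 0.0, 1.0)
    return i_k

# Hypothetical in-degree distribution over degrees 0..4.
p_in = np.array([0.1, 0.3, 0.3, 0.2, 0.1])
i_final = disseminate(p_in, lam=1.0, t_max=20.0)
# Users who follow nobody (k = 0) never receive the information,
# and the informed fraction grows with in-degree.
```

Even this crude version shows how the in-degree distribution shapes the propagation: the adoption curve of each degree class depends on the population average, coupling the classes exactly as in the mean-field equations.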
|
1311.2064 | Credible Autocoding of Fault Detection Observers | cs.SY | In this paper, we present a domain specific process to assist the
verification of observer-based fault detection software. Observer-based fault
detection systems, like control systems, yield invariant properties of
quadratic types. These quadratic invariants express both safety properties of
the software such as the boundedness of the states and correctness properties
such as the absence of false alarms from the fault detector. We seek to
leverage these quadratic invariants, in an automated fashion, for the formal
verification of the fault detection software. The approach, referred to as the
credible autocoding framework [1], can be characterized as autocoding with
proofs. The process starts with the fault detector model, along with its safety
and correctness properties, all expressed formally in a synchronous modeling
environment such as Simulink. The model is then transformed by a prototype
credible autocoder into both code and analyzable annotations for the code. We
demonstrate the credible autocoding process on a running example of an output
observer fault detector for a 3-degrees-of-freedom (3DOF) helicopter control
system.
|
1311.2079 | Nonparametric Multi-group Membership Model for Dynamic Networks | cs.SI physics.soc-ph stat.ML | Relational data-like graphs, networks, and matrices-is often dynamic, where
the relational structure evolves over time. A fundamental problem in the
analysis of time-varying network data is to extract a summary of the common
structure and the dynamics of the underlying relations between the entities.
Here we build on the intuition that changes in the network structure are driven
by the dynamics at the level of groups of nodes. We propose a nonparametric
multi-group membership model for dynamic networks. Our model contains three
main components: We model the birth and death of individual groups with respect
to the dynamics of the network structure via a distance dependent Indian Buffet
Process. We capture the evolution of individual node group memberships via a
Factorial Hidden Markov model. And, we explain the dynamics of the network
structure by explicitly modeling the connectivity structure of groups. We
demonstrate our model's capability of identifying the dynamics of latent groups
in a number of different types of network data. Experimental results show that
our model provides improved predictive performance over existing dynamic
network models on future network forecasting and missing link prediction.
|
1311.2092 | Relating and contrasting plain and prefix Kolmogorov complexity | cs.CC cs.IT math.IT | In [3] a short proof is given that some strings have maximal plain Kolmogorov
complexity but not maximal prefix-free complexity. The proof uses Levin's
symmetry of information, Levin's formula relating plain and prefix complexity
and Gacs' theorem that complexity of complexity given the string can be high.
We argue that the proof technique and results mentioned above are useful to
simplify existing proofs and to solve open questions.
We present a short proof of Solovay's result [21] relating plain and prefix
complexity: $K (x) = C (x) + CC (x) + O(CCC (x))$ and $C (x) = K (x) - KK (x) +
O(KKK (x))$, (here $CC(x)$ denotes $C(C(x))$, etc.).
We show that there exist $\omega$ such that $\liminf C(\omega_1\dots
\omega_n) - C(n)$ is infinite and $\liminf K(\omega_1\dots \omega_n) - K(n)$ is
finite, i.e. the infinitely often C-trivial reals are not the same as the
infinitely often K-trivial reals (i.e. [1,Question 1]).
Solovay showed that for infinitely many $x$ we have $|x| - C (x) \le O(1)$
and $|x| + K (|x|) - K (x) \ge \log^{(2)} |x| - O(\log^{(3)} |x|)$, (here $|x|$
denotes the length of $x$ and $\log^{(2)} = \log\log$, etc.). We show that this
result holds for prefixes of some 2-random sequences.
Finally, we generalize our proof technique and show that no monotone relation
exists between expectation and probability bounded randomness deficiency (i.e.
[6, Question 1]).
|
1311.2095 | Wave-absorbing vehicular platoon controller | cs.SY | The paper tailors the so-called wave-based control popular in the field of
flexible mechanical structures to the field of distributed control of vehicular
platoons. The proposed solution augments the symmetric bidirectional control
algorithm with a wave-absorbing controller implemented on the leader, and/or on
the rear-end vehicle. The wave-absorbing controller actively absorbs an
incoming wave of positional changes in the platoon and thus prevents
oscillations of inter-vehicle distances. The proposed controller significantly
improves the performance of platoon manoeuvres such as
acceleration/deceleration or changing the distances between vehicles without
making the platoon string unstable. Numerical simulations show that the
wave-absorbing controller performs efficiently even for platoons with a large
number of vehicles, for which other platooning algorithms are inefficient or
require wireless communication between vehicles.
|
1311.2097 | Risk-sensitive Reinforcement Learning | cs.LG | We derive a family of risk-sensitive reinforcement learning methods for
agents, who face sequential decision-making tasks in uncertain environments. By
applying a utility function to the temporal difference (TD) error, nonlinear
transformations are effectively applied not only to the received rewards but
also to the true transition probabilities of the underlying Markov decision
process. When appropriate utility functions are chosen, the agents' behaviors
express key features of human behavior as predicted by prospect theory
(Kahneman and Tversky, 1979), for example different risk-preferences for gains
and losses as well as the shape of subjective probability curves. We derive a
risk-sensitive Q-learning algorithm, which is necessary for modeling human
behavior when transition probabilities are unknown, and prove its convergence.
As a proof of principle for the applicability of the new framework we apply it
to quantify human behavior in a sequential investment task. We find that the
risk-sensitive variant provides a significantly better fit to the behavioral
data and that it leads to an interpretation of the subject's responses which is
indeed consistent with prospect theory. The analysis of simultaneously measured
fMRI signals shows a significant correlation of the risk-sensitive TD error with
BOLD signal change in the ventral striatum. In addition we find a significant
correlation of the risk-sensitive Q-values with neural activity in the
striatum, cingulate cortex and insula, which is not present if standard
Q-values are used.
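A minimal sketch of the core idea, assuming a hypothetical piecewise-linear utility (not the authors' exact parametrization): standard Q-learning, except a utility function is applied to the TD error before the update.

```python
import numpy as np

def utility(td, kappa=0.5):
    # Hypothetical prospect-theory-style utility: weight negative TD
    # errors (losses) more heavily than positive ones (gains).
    return (1 + kappa) * td if td < 0 else (1 - kappa) * td

def risk_sensitive_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # Ordinary Q-learning update, but with the utility applied to the
    # TD error, which induces risk-sensitive behavior.
    td = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * utility(td)
    return Q
```

With `kappa > 0` this agent is risk-averse: losses move the value estimates more than equally sized gains.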
|
1311.2100 | Querying Knowledge Graphs by Example Entity Tuples | cs.DB | We witness an unprecedented proliferation of knowledge graphs that record
millions of entities and their relationships. While knowledge graphs are
structure-flexible and content rich, they are difficult to use. The challenge
lies in the gap between their overwhelming complexity and the limited database
knowledge of non-professional users. If writing structured queries over simple
tables is difficult, complex graphs are only harder to query. As an initial
step toward improving the usability of knowledge graphs, we propose to query
such data by example entity tuples, without requiring users to form complex
graph queries. Our system, GQBE (Graph Query By Example), automatically derives
a weighted hidden maximal query graph based on input query tuples, to capture a
user's query intent. It efficiently finds and ranks the top approximate answer
tuples. For fast query processing, GQBE only partially evaluates query graphs.
We conducted experiments and user studies on the large Freebase and DBpedia
datasets and observed appealing accuracy and efficiency. Our system provides a
complementary approach to the existing keyword-based methods, facilitating
user-friendly graph querying. To the best of our knowledge, no such proposal
has previously been made in the context of graphs.
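A toy illustration of query-by-example over hypothetical triples (GQBE itself derives a weighted hidden maximal query graph and ranks approximate answers; this sketch only matches the single predicate linking the example pair):

```python
def query_by_example(triples, example):
    # Derive the predicates linking the example pair, then return every
    # other entity pair connected by one of those predicates.
    s0, o0 = example
    preds = {p for (s, p, o) in triples if (s, o) == (s0, o0)}
    return sorted({(s, o) for (s, p, o) in triples
                   if p in preds and (s, o) != (s0, o0)})

triples = {("Tesla", "foundedBy", "Musk"),
           ("SpaceX", "foundedBy", "Musk"),
           ("Microsoft", "foundedBy", "Gates"),
           ("Tesla", "industry", "Automotive")}
print(query_by_example(triples, ("Tesla", "Musk")))
# → [('Microsoft', 'Gates'), ('SpaceX', 'Musk')]
```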
|
1311.2102 | An Experimental Comparison of Trust Region and Level Sets | cs.CV | High-order (non-linear) functionals have become very popular in segmentation,
stereo and other computer vision problems. Level sets is a well-established
general gradient-descent framework, which is directly applicable to
optimization of such functionals and widely used in practice. Recently, another
general optimization approach based on trust region methodology was proposed
for regional non-linear functionals. Our goal is a comprehensive experimental
comparison of these two frameworks in regard to practical efficiency,
robustness to parameters, and optimality. We experiment on a wide range of
problems with non-linear constraints on segment volume, appearance and shape.
|
1311.2103 | Idea of a new Personality-Type based Recommendation Engine | cs.IR | Myers-Briggs Type Indicator (MBTI) types depict the psychological preferences
by which a person perceives the world and makes decisions. There are 4 principal
functions through which people see the world: sensation, intuition, feeling,
and thinking. These functions, combined with the introverted/extroverted nature
of a person, yield the 16 personality types into which humans are divided. Here
an idea is presented whereby a user can get recommendations for books, web media
content, music and movies on the basis of the user's MBTI type. Books and other
media content were chosen because preferences for such things are mostly
subjective. Recommendations that are generally generated on the basis of
previous purchases and searches can thus be enhanced by using the MBTI. A
minimalist survey was designed for collecting the data. It has more than 100
features that show the preferences of a personality type, including preferences
in book genres, music genres, movie genres and even video game genres. After
analyzing the data collected from the survey, some inferences were drawn that
can be used to design a new recommendation engine recommending content that
coincides with the personality of the user.
|
1311.2106 | Submodular Optimization with Submodular Cover and Submodular Knapsack
Constraints | cs.DS cs.AI cs.DM | We investigate two new optimization problems -- minimizing a submodular
function subject to a submodular lower bound constraint (submodular cover) and
maximizing a submodular function subject to a submodular upper bound constraint
(submodular knapsack). We are motivated by a number of real-world applications
in machine learning including sensor placement and data subset selection, which
require maximizing a certain submodular function (like coverage or diversity)
while simultaneously minimizing another (like cooperative cost). These problems
are often posed as minimizing the difference between submodular functions [14,
35] which is in the worst case inapproximable. We show, however, that by
phrasing these problems as constrained optimization, which is more natural for
many applications, we achieve a number of bounded approximation guarantees. We
also show that both these problems are closely related and an approximation
algorithm solving one can be used to obtain an approximation guarantee for the
other. We provide hardness results for both problems thus showing that our
approximation factors are tight up to log-factors. Finally, we empirically
demonstrate the performance and good scalability properties of our algorithms.
|
1311.2110 | Curvature and Optimal Algorithms for Learning and Minimizing Submodular
Functions | cs.DS cs.DM cs.LG | We investigate three related and important problems connected to machine
learning: approximating a submodular function everywhere, learning a submodular
function (in a PAC-like setting [53]), and constrained minimization of
submodular functions. We show that the complexity of all three problems depends
on the 'curvature' of the submodular function, and provide lower and upper
bounds that refine and improve previous results [3, 16, 18, 52]. Our proof
techniques are fairly generic. We either use a black-box transformation of the
function (for approximation and learning), or a transformation of algorithms to
use an appropriate surrogate function (for minimization). Curiously, curvature
has been known to influence approximations for submodular maximization [7, 55],
but its effect on minimization, approximation and learning has hitherto been
open. We complete this picture, and also support our theoretical claims by
empirical results.
|
1311.2115 | Fast large-scale optimization by unifying stochastic gradient and
quasi-Newton methods | cs.LG | We present an algorithm for minimizing a sum of functions that combines the
computational efficiency of stochastic gradient descent (SGD) with the second
order curvature information leveraged by quasi-Newton methods. We unify these
disparate approaches by maintaining an independent Hessian approximation for
each contributing function in the sum. We maintain computational tractability
and limit memory requirements even for high dimensional optimization problems
by storing and manipulating these quadratic approximations in a shared, time
evolving, low dimensional subspace. Each update step requires only a single
contributing function or minibatch evaluation (as in SGD), and each step is
scaled using an approximate inverse Hessian and little to no adjustment of
hyperparameters is required (as is typical for quasi-Newton methods). This
algorithm contrasts with earlier stochastic second order techniques that treat
the Hessian of each contributing function as a noisy approximation to the full
Hessian, rather than as a target for direct estimation. We experimentally
demonstrate improved convergence on seven diverse optimization problems. The
algorithm is released as open source Python and MATLAB packages.
|
1311.2121 | Asynchronous Systems and Binary Diagonal Random Matrices: A Proof and
Convergence Rate | math.SP cs.IT math.IT | In a synchronized network of $n$ nodes, each node will update its parameter
based on the system state in a given iteration. It is well-known that the
updates can converge to a fixed point if the maximum absolute eigenvalue
(spectral radius) of the $n \times n$ iterative matrix ${\bf{F}}$ is less than
one (i.e. $\rho({\bf{F}})<1$). However, if only a subset of the nodes update
their parameter in an iteration (due to delays or stale feedback) then this
effectively renders the spectral radius of the iterative matrix as one. We
consider matrices of unit spectral radii generated from ${\bf{F}}$ due to
random delays in the updates. We show that if each node updates at least once
in every $T$ iterations, then the product of the random matrices (joint
spectral radius) corresponding to these iterations is less than one. We then
use this property to prove convergence of asynchronous iterative systems.
Finally, we show that the convergence rate of such a system is
$\rho({\bf{F}})\frac{(1-(1-\gamma)^T)^n}{T}$, where assuming ergodicity,
$\gamma$ is a lower bound on the probability that a node will update in any
given iteration.
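A small numerical sketch of the setting (hypothetical matrix and update probabilities): with $\rho({\bf F}) < 1$ and each node guaranteed to update at least once every $T = n$ iterations, the asynchronous iteration still converges to the fixed point.

```python
import numpy as np

def async_iterate(n=5, steps=2000, p=0.5, seed=0):
    # x <- F x, but each iteration only a random subset of nodes updates;
    # the remaining nodes keep their stale values.
    rng = np.random.default_rng(seed)
    F = rng.uniform(-1, 1, (n, n))
    F *= 0.9 / max(abs(np.linalg.eigvals(F)))  # enforce rho(F) = 0.9 < 1
    x = rng.uniform(-1, 1, n)
    for t in range(steps):
        mask = rng.random(n) < p   # nodes that happen to update now
        mask[t % n] = True         # each node updates within T = n steps
        x = np.where(mask, F @ x, x)
    return np.linalg.norm(x)

print(async_iterate())  # the state norm shrinks toward the fixed point 0
```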
|
1311.2123 | Linear-Complexity Overhead-Optimized Random Linear Network Codes | cs.IT math.IT | Sparse random linear network coding (SRLNC) is an attractive technique
proposed in the literature to reduce the decoding complexity of random linear
network coding. Recognizing the fact that the existing SRLNC schemes are not
efficient in terms of the required reception overhead, we consider the problem
of designing overhead-optimized SRLNC schemes. To this end, we introduce a new
design of SRLNC scheme that enjoys very small reception overhead while
maintaining the main benefit of SRLNC, i.e., its linear encoding/decoding
complexity. We also provide a mathematical framework for the asymptotic
analysis and design of this class of codes based on density evolution (DE)
equations. To the best of our knowledge, this work introduces the first DE
analysis in the context of network coding. Our analysis method then enables us
to design network codes with reception overheads in the order of a few percent.
We also investigate the finite-length performance of the proposed codes and
through numerical examples we show that our proposed codes have significantly
lower reception overheads compared to all existing linear-complexity random
linear network coding schemes.
|
1311.2137 | A Structured Prediction Approach for Missing Value Imputation | cs.LG | Missing value imputation is an important practical problem. There is a large
body of work on it, but there does not exist any work that formulates the
problem in a structured output setting. Also, most applications have
constraints on the imputed data, for example on the distribution associated
with each variable. None of the existing imputation methods use these
constraints. In this paper we propose a structured output approach for missing
value imputation that also incorporates domain constraints. We focus on large
margin models, but it is easy to extend the ideas to probabilistic models. We
deal with the intractable inference step in learning via a piecewise training
technique that is simple, efficient, and effective. Comparison with existing
state-of-the-art and baseline imputation methods shows that our method gives
significantly improved performance on the Hamming loss measure.
|
1311.2139 | Large Margin Semi-supervised Structured Output Learning | cs.LG | In structured output learning, obtaining labelled data for real-world
applications is usually costly, while unlabelled examples are available in
abundance. Semi-supervised structured classification has been developed to
handle large amounts of unlabelled structured data. In this work, we consider
semi-supervised structural SVMs with domain constraints. The optimization
problem, which in general is not convex, contains the loss terms associated
with the labelled and unlabelled examples along with the domain constraints. We
propose a simple optimization approach, which alternates between solving a
supervised learning problem and a constraint matching problem. Solving the
constraint matching problem is difficult for structured prediction, and we
propose an efficient and effective hill-climbing method to solve it. The
alternating optimization is carried out within a deterministic annealing
framework, which helps in effective constraint matching, and avoiding local
minima which are not very useful. The algorithm is simple to implement and
achieves comparable generalization performance on benchmark datasets.
|
1311.2146 | Coding based Data Broadcasting for Time Critical Applications with Rate
Adaptation | cs.IT math.IT | In this paper, we dynamically select the transmission rate and design
wireless network coding to improve the quality of services such as delay for
time critical applications. In a network coded system, with low transmission
rate and hence longer transmission range, more packets may be encoded, which
increases the coding opportunity. However, low transmission rate may incur
extra transmission delay, which is intolerable for time critical applications.
We design a novel joint rate selection and wireless network coding (RSNC)
scheme with delay constraint, so as to maximize the total benefit (where we can
define the benefit based on the priority or importance of a packet for example)
of the packets that are successfully received at the destinations without
missing their deadlines. We prove that the proposed problem is NP-hard, and
propose a novel graph model to mathematically formulate the problem. For the
general case, we propose a transmission metric and design an efficient
algorithm to determine the transmission rate and coding strategy for each
transmission. For a special case when all delay constraints are the same, we
study the pairwise coding and present a polynomial time pairwise coding
algorithm that achieves an approximation ratio of 1 - 1/e to the optimal
pairwise coding solution, where e is the base of the natural logarithm.
Finally, simulation results demonstrate the superiority of the proposed RSNC
scheme.
|
1311.2150 | Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse
Signals | cs.IT cs.LG math.IT stat.ML | We consider the problem of recovering block-sparse signals whose structures
are unknown \emph{a priori}. Block-sparse signals with nonzero coefficients
occurring in clusters arise naturally in many practical scenarios. However, the
knowledge of the block structure is usually unavailable in practice. In this
paper, we develop a new sparse Bayesian learning method for recovery of
block-sparse signals with unknown cluster patterns. Specifically, a
pattern-coupled hierarchical Gaussian prior model is introduced to characterize
the statistical dependencies among coefficients, in which a set of
hyperparameters are employed to control the sparsity of signal coefficients.
Unlike the conventional sparse Bayesian learning framework in which each
individual hyperparameter is associated independently with each coefficient, in
this paper, the prior for each coefficient not only involves its own
hyperparameter, but also the hyperparameters of its immediate neighbors. In
this way, the sparsity patterns of neighboring coefficients are related
to each other and the hierarchical model has the potential to encourage
structured-sparse solutions. The hyperparameters, along with the sparse signal,
are learned by maximizing their posterior probability via an
expectation-maximization (EM) algorithm. Numerical results show that the
proposed algorithm consistently outperforms other existing methods across a
series of experiments.
|
1311.2167 | SPH Entropy Errors and the Pressure Blip | cs.CE | The spurious pressure jump at a contact discontinuity, in SPH simulations of
the compressible Euler equations is investigated. From the spatiotemporal
behaviour of the error, the SPH pressure jump is likened to entropy errors
observed for artificial viscosity based finite difference/volume schemes. The
error is observed to be generated at start-up, and dissipation is the only
recourse to mitigate its effect. We show that similar errors are generated for
the Lagrangian plus remap version of the Piecewise Parabolic Method (PPM)
finite volume code (PPMLR). Through a comparison with the direct Eulerian
version of the PPM code (PPMDE), we argue that a lack of diffusion across the
material wave (contact discontinuity) is responsible for the error in PPMLR. We
verify this hypothesis by constructing a more dissipative version of the remap
code using a piecewise constant reconstruction. As an application to SPH, we
propose a hybrid GSPH scheme that adds the requisite dissipation by utilizing a
more dissipative Riemann solver for the energy equation. The proposed
modification to the GSPH scheme, and its improved treatment of the anomaly, is
verified for flows with strong shocks in one and two dimensions. The result
that dissipation must act across the density and energy equations provides a
consistent explanation for many of the hitherto proposed "cures" or "fixes" for
the problem.
|
1311.2180 | Adaptive Epidemic Dynamics in Networks: Thresholds and Control | math.DS cs.CR cs.SI math.OC q-bio.PE | Theoretical modeling of computer virus/worm epidemic dynamics is an important
problem that has attracted many studies. However, most existing models are
adapted from biological epidemic ones. Although biological epidemic models can
certainly be adapted to capture some computer virus spreading scenarios
(especially when the so-called homogeneity assumption holds), the problem of
computer virus spreading is not well understood because it has many important
perspectives that are not necessarily accommodated in the biological epidemic
models. In this paper we initiate the study of such a perspective, namely that
of adaptive defense against epidemic spreading in arbitrary networks. More
specifically, we investigate a non-homogeneous
Susceptible-Infectious-Susceptible (SIS) model where the model parameters may
vary with respect to time. In particular, we focus on two scenarios we call
semi-adaptive defense and fully-adaptive defense, which accommodate implicit
and explicit dependency relationships between the model parameters,
respectively. In the semi-adaptive defense scenario, the model's input
parameters are given; the defense is semi-adaptive because the adjustment is
implicitly dependent upon the outcome of virus spreading. For this scenario, we
present a set of sufficient conditions (some are more general or succinct than
others) under which the virus spreading will die out; such sufficient
conditions are also known as epidemic thresholds in the literature. In the
fully-adaptive defense scenario, some input parameters are not known (i.e., the
aforementioned sufficient conditions are not applicable) but the defender can
observe the outcome of virus spreading. For this scenario, we present adaptive
control strategies under which the virus spreading will die out or will be
contained to a desired level.
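A minimal discrete-time networked SIS step (a generic sketch, not the paper's model or parametrization); adaptive defense corresponds to raising the cure rate `delta` over time in response to the observed infection level.

```python
import numpy as np

def sis_step(adj, infected, beta, delta, rng):
    # Infection: a susceptible node with k infected neighbors becomes
    # infected with probability 1 - (1 - beta)^k.
    # Recovery: each infected node recovers with probability delta.
    k = adj @ infected
    p_inf = 1.0 - (1.0 - beta) ** k
    catch = (rng.random(len(infected)) < p_inf) & (infected == 0)
    cure = (rng.random(len(infected)) < delta) & (infected == 1)
    out = infected.copy()
    out[catch] = 1
    out[cure] = 0
    return out
```

Iterating this map with time-varying `beta`/`delta` gives the non-homogeneous SIS dynamics in which the epidemic-threshold question is posed.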
|
1311.2191 | Neighborhood filters and the decreasing rearrangement | cs.CV | Nonlocal filters are simple and powerful techniques for image denoising. In
this paper, we give new insights into the analysis of one kind of them, the
Neighborhood filter, by using a classical although not very common
transformation: the decreasing rearrangement of a function (the image).
Independently of the dimension of the image, we reformulate the Neighborhood
filter and its iterative variants as an integral operator defined in a
one-dimensional space. The simplicity of this formulation allows us to perform
a detailed analysis of its properties. Among other results, we prove that the
filter behaves asymptotically as a shock filter combined with a border diffusive
term, responsible for the staircasing effect and the loss of contrast.
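For reference, a one-dimensional Neighborhood (Yaroslavsky-type) filter sketch, with assumed parameter names `h` (intensity scale) and `rho` (spatial window radius):

```python
import numpy as np

def neighborhood_filter(u, h, rho):
    # Each pixel becomes a weighted mean of its spatial neighbors, with
    # weights decaying in the intensity difference |u(j) - u(i)|.
    out = np.empty(len(u))
    for i in range(len(u)):
        lo, hi = max(0, i - rho), min(len(u), i + rho + 1)
        w = np.exp(-((u[lo:hi] - u[i]) ** 2) / h ** 2)
        out[i] = np.sum(w * u[lo:hi]) / np.sum(w)
    return out
```

Iterating this operator on the rearranged (one-dimensional) image is the formulation whose asymptotic shock-filter behavior the paper analyzes.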
|
1311.2229 | Joint Sparsity Recovery for Spectral Compressed Sensing | cs.IT math.IT | Compressed Sensing (CS) is an effective approach to reduce the required
number of samples for reconstructing a sparse signal in an a priori basis, but
may suffer severely from the issue of basis mismatch. In this paper we study
the problem of simultaneously recovering multiple spectrally-sparse signals
that are supported on the same frequencies lying arbitrarily on the unit
circle. We propose an atomic norm minimization problem, which can be regarded
as a continuous counterpart of the discrete CS formulation and be solved
efficiently via semidefinite programming. Through numerical experiments, we
show that the number of samples per signal may be further reduced by harnessing
the joint sparsity pattern of multiple signals.
|
1311.2234 | FuSSO: Functional Shrinkage and Selection Operator | stat.ML cs.LG math.ST stat.TH | We present the FuSSO, a functional analogue to the LASSO, that efficiently
finds a sparse set of functional input covariates to regress a real-valued
response against. The FuSSO does so in a semi-parametric fashion, making no
parametric assumptions about the nature of input functional covariates and
assuming a linear form to the mapping of functional covariates to the response.
We provide a statistical backing for use of the FuSSO via proof of asymptotic
sparsistency under various conditions. Furthermore, we observe good results on
both synthetic and real-world data.
|
1311.2236 | Fast Distribution To Real Regression | stat.ML cs.LG math.ST stat.TH | We study the problem of distribution to real-value regression, where one aims
to regress a mapping $f$ that takes in a distribution input covariate $P\in
\mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and
outputs a real-valued response $Y=f(P) + \epsilon$. This setting was recently
studied, and a "Kernel-Kernel" estimator was introduced and shown to have a
polynomial rate of convergence. However, evaluating a new prediction with the
Kernel-Kernel estimator scales as $\Omega(N)$. This causes the difficult
situation where a large amount of data may be necessary for a low estimation
risk, but the computation cost of estimation becomes infeasible when the
data-set is too large. To this end, we propose the Double-Basis estimator,
which looks to alleviate this big data problem in two ways: first, the
Double-Basis estimator is shown to have a computation complexity that is
independent of the number of instances $N$ when evaluating new predictions
after training; secondly, the Double-Basis estimator is shown to have a fast
rate of convergence for a general class of mappings $f\in\mathcal{F}$.
|