id | title | categories | abstract |
|---|---|---|---|
1309.0691 | Information Filtering via Collaborative User Clustering Modeling | cs.IR cs.SI physics.soc-ph | The past few years have witnessed the great success of recommender systems,
which can significantly help users find personalized items in the
information era. One of the most widely applied recommendation methods is
Matrix Factorization (MF). However, most research on this topic has
focused on mining the direct relationships between users and items. In this
paper, we optimize the standard MF by integrating the user clustering
regularization term. Our model considers not only the user-item rating
information but also the users' interests. We compared the proposed model
with three other typical methods: User-Mean (UM), Item-Mean (IM),
and standard MF. Experimental results on a real-world dataset,
\emph{MovieLens}, show that our method performs much better than the other
three methods in recommendation accuracy.
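As a rough illustration of the idea of adding a user-clustering
regularization term to standard MF, here is a minimal NumPy sketch; the
cluster assignments, learning rate, and regularization weights are
hypothetical choices, not the paper's actual formulation.

```python
import numpy as np

def mf_with_user_clustering(R, clusters, k=10, lr=0.01, lam=0.1, beta=0.1, iters=50):
    """SGD matrix factorization; beta pulls each user factor toward its cluster mean."""
    n_users, n_items = R.shape
    U = 0.1 * np.random.randn(n_users, k)
    V = 0.1 * np.random.randn(n_items, k)
    obs = np.argwhere(R > 0)                 # observed (user, item) rating positions
    for _ in range(iters):
        # centroids of the current user factors, one per interest cluster
        cent = np.array([U[clusters == c].mean(axis=0) for c in range(clusters.max() + 1)])
        for u, i in obs:
            e = R[u, i] - U[u] @ V[i]        # rating residual
            U[u] += lr * (e * V[i] - lam * U[u] - beta * (U[u] - cent[clusters[u]]))
            V[i] += lr * (e * U[u] - lam * V[i])
    return U, V

# toy usage: 4 users in 2 hypothetical interest clusters, 5 items
R = np.array([[5, 3, 0, 1, 0], [4, 0, 0, 1, 1], [1, 1, 0, 5, 4], [0, 1, 5, 4, 0]], float)
U, V = mf_with_user_clustering(R, clusters=np.array([0, 0, 1, 1]))
print(np.round(U @ V.T, 1))                  # predicted rating matrix
```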
|
1309.0707 | Feedback Communication Systems with Limitations on Incremental
Redundancy | cs.IT math.IT | This paper explores feedback systems using incremental redundancy (IR) with
noiseless transmitter confirmation (NTC). For IR-NTC systems based on {\em
finite-length} codes (with blocklength $N$) and decoding attempts only at {\em
certain specified decoding times}, this paper presents the asymptotic expansion
achieved by random coding, provides rate-compatible sphere-packing (RCSP)
performance approximations, and reports simulation results for tail-biting
convolutional codes.
The information-theoretic analysis shows that values of $N$ relatively close
to the expected latency yield the same random-coding achievability expansion as
with $N = \infty$. However, the penalty introduced in the expansion by limiting
decoding times is linear in the interval between decoding times. For binary
symmetric channels, the RCSP approximation provides an efficiently-computed
approximation of performance that shows excellent agreement with a family of
rate-compatible, tail-biting convolutional codes in the short-latency regime.
For the additive white Gaussian noise channel, bounded-distance decoding
simplifies the computation of the marginal RCSP approximation and produces
results similar to those obtained with maximum-likelihood decoding for
latencies greater than 200. The efficiency of the marginal RCSP approximation
facilitates
optimization of the lengths of incremental transmissions when the number of
incremental transmissions is constrained to be small or the length of the
incremental transmissions is constrained to be uniform after the first
transmission. Finally, an RCSP-based decoding error trajectory is introduced
that provides target error rates for the design of rate-compatible code
families for use in feedback communication systems.
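For background, the sketch below evaluates the standard finite-blocklength
normal approximation, rate $\approx C - \sqrt{V/N}\,Q^{-1}(\epsilon)$, for a
binary symmetric channel; it is a generic illustration of the kind of
asymptotic expansion discussed above, not the paper's IR-NTC analysis.

```python
import math
from scipy.stats import norm

def bsc_normal_approx_rate(p, N, eps):
    """Normal approximation to the maximal rate of a BSC(p) at blocklength N, error eps."""
    h = lambda x: -x * math.log2(x) - (1 - x) * math.log2(1 - x)   # binary entropy
    C = 1 - h(p)                                                   # capacity (bits/use)
    V = p * (1 - p) * math.log2((1 - p) / p) ** 2                  # channel dispersion
    return C - math.sqrt(V / N) * norm.isf(eps)                    # Q^{-1}(eps) = norm.isf

for N in (100, 500, 2000):
    print(N, round(bsc_normal_approx_rate(0.11, N, 1e-3), 4))
```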
|
1309.0719 | Understanding Evolutionary Potential in Virtual CPU Instruction Set
Architectures | cs.NE | We investigate fundamental decisions in the design of instruction set
architectures for linear genetic programs that are used as both model systems
in evolutionary biology and underlying solution representations in evolutionary
computation. We subjected digital organisms with each tested architecture to
seven different computational environments designed to present a range of
evolutionary challenges. Our goal was to engineer a general purpose
architecture that would be effective under a broad range of evolutionary
conditions. We evaluated six different types of architectural features for the
virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more
precisely modify the function of genetic instructions, (2) memory: we provided
an increased number of registers in the virtual CPUs, (3) decoupled sensors and
actuators: we separated input and output operations to enable greater control
over data flow. We also tested a variety of methods to regulate expression: (4)
explicit labels that allow programs to dynamically refer to specific genome
positions, (5) position-relative search instructions, and (6) multiple new flow
control instructions, including conditionals and jumps. Each of these features
also adds complication to the instruction set and risks slowing evolution due
to epistatic interactions. Two features (multiple argument specification and
separated I/O) demonstrated substantial improvements in the majority of test
environments. Some of the remaining tested modifications were detrimental,
though most exhibited no systematic effects on evolutionary potential,
highlighting the robustness of digital evolution. Combined, these observations
enhance our understanding of how instruction architecture impacts evolutionary
potential, enabling the creation of architectures that support more rapid
evolution of complex solutions to a broad range of challenges.
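To make the notion of a virtual CPU instruction set concrete, here is a
minimal sketch of a register-based linear program interpreter with a few
illustrative instructions; the instruction names and semantics are
hypothetical stand-ins, not the architectures evaluated in the paper.

```python
def run_virtual_cpu(genome, inputs, n_regs=4, max_steps=100):
    """Execute a linear program: each instruction is a (op, reg_a, reg_b) triple."""
    regs = [0] * n_regs
    inputs = list(inputs)
    outputs = []
    ip = 0                                    # instruction pointer
    for _ in range(max_steps):
        if ip >= len(genome):
            break
        op, a, b = genome[ip]
        if op == "inc":   regs[a] += 1
        elif op == "add": regs[a] += regs[b]
        elif op == "swap": regs[a], regs[b] = regs[b], regs[a]
        elif op == "input" and inputs:        # decoupled sensor: read the next input
            regs[a] = inputs.pop(0)
        elif op == "output":                  # decoupled actuator: emit a value
            outputs.append(regs[a])
        elif op == "jump-if-zero" and regs[a] == 0:   # flow control
            ip = regs[b] % max(len(genome), 1)
            continue
        ip += 1
    return outputs

# toy genome: read two inputs, add them, output the sum
genome = [("input", 0, 0), ("input", 1, 0), ("add", 0, 1), ("output", 0, 0)]
print(run_virtual_cpu(genome, [3, 4]))        # -> [7]
```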
|
1309.0750 | Application of Expurgated PPM to Indoor Visible Light Communications -
Part I: Single-User Systems | cs.IT math.IT | Visible light communications (VLC) in indoor environments suffer from the
limited bandwidth of LEDs as well as from the inter-symbol interference (ISI)
imposed by multipath. In this work, transmission schemes to improve the
performance of indoor optical wireless communication (OWC) systems are
introduced. Expurgated pulse-position modulation (EPPM) is proposed for this
application since it can provide a wide range of peak to average power ratios
(PAPR) needed for dimming of the indoor illumination. A correlation decoder
used at the receiver is shown to be optimal for indoor VLC systems, which are
shot-noise and background-light limited. Interleaving applied to EPPM in order
to decrease the ISI effect in dispersive VLC channels can significantly
decrease the error probability. The proposed interleaving technique makes EPPM
a better modulation option compared to PPM for VLC systems or any other
dispersive OWC system. An overlapped EPPM pulse technique is proposed to
increase the transmission rate when bandwidth-limited white LEDs are used as
sources.
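For intuition about the correlation decoder mentioned above, here is a
minimal sketch for plain PPM over an additive-noise channel; EPPM would
replace the one-pulse constellation with expurgated multi-pulse codewords.
The alphabet size and noise level are illustrative assumptions.

```python
import numpy as np

Q = 8                                       # PPM alphabet size (slots per symbol)
C = np.eye(Q)                               # PPM constellation: one pulse per symbol
rng = np.random.default_rng(0)

tx = rng.integers(0, Q, size=1000)          # random transmitted symbols
y = C[tx] + 0.3 * rng.standard_normal((1000, Q))   # received slot values plus noise

# correlation decoder: pick the codeword with the largest inner product
rx = np.argmax(y @ C.T, axis=1)
print("symbol error rate:", np.mean(rx != tx))
```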
|
1309.0766 | Discrete and Continuous, Probabilistic Anticipation for Autonomous
Robots in Urban Environments | cs.RO | This paper develops a probabilistic anticipation algorithm for dynamic
objects observed by an autonomous robot in an urban environment. Predictive
Gaussian mixture models are used due to their ability to probabilistically
capture continuous and discrete obstacle decisions and behaviors; the
predictive system uses the probabilistic output (state estimate and covariance)
of a tracking system, and a map of the environment, to compute the probability
distribution over future obstacle states for a specified anticipation horizon.
A Gaussian splitting method is proposed based on the sigma-point transform and
the nonlinear dynamics function, which enables increased accuracy as the number
of mixands grows. An approach to caching elements of this optimal splitting
method is proposed, in order to enable real-time implementation. Simulation
results and evaluations on data from the research community demonstrate that
the proposed algorithm can accurately anticipate the probability distributions
over future states of nonlinear systems.
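The sigma-point transform underlying the proposed Gaussian splitting can be
sketched as follows for a generic nonlinear dynamics function; the weights
and scaling follow the standard unscented transform, not necessarily the
paper's exact implementation.

```python
import numpy as np

def unscented_transform(mu, P, f, kappa=1.0):
    """Propagate N(mu, P) through a nonlinear map f via 2n+1 sigma points."""
    n = len(mu)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(x) for x in pts])        # push each sigma point through f
    mu_y = w @ Y
    P_y = sum(wi * np.outer(y - mu_y, y - mu_y) for wi, y in zip(w, Y))
    return mu_y, P_y

# toy example: unicycle-like step x' = x + [cos(theta), sin(theta), 0]
f = lambda x: x + np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
mu_y, P_y = unscented_transform(np.zeros(3), 0.1 * np.eye(3), f)
print(mu_y, np.diag(P_y))
```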
|
1309.0775 | Application of Expurgated PPM to Indoor Visible Light Communications -
Part II: Access Networks | cs.IT math.IT | Providing network access for multiple users in a visible light communication
(VLC) system that utilizes white light emitting diodes (LED) as sources
requires new networking techniques adapted to the lighting features. In this
paper we introduce two multiple access techniques using expurgated PPM (EPPM)
that can be implemented using LEDs and support lighting features such as
dimming. Multilevel symbols are used to provide M-ary signaling for multiple
users using multilevel EPPM (MEPPM). Using these multiple-access schemes we are
able to control the optical peak-to-average power ratio (PAPR) in the system,
and thereby the dimming level. In the first technique, the M-ary data of
each user is first encoded using an optical orthogonal code (OOC) assigned to
the user, and the result is fed into an EPPM encoder to generate a multilevel
signal. The second multiple access method uses sub-sets of the EPPM
constellation to apply MEPPM to the data of each user. While the first approach
has a larger Hamming distance between the symbols of each user, the latter can
provide higher bit-rates for users in VLC systems using bandwidth-limited LEDs.
|
1309.0781 | An Exploratory Data Survey of Drug Name Incidence and Prevalence From
the FDA's Adverse Event Reporting System, 2004 to 2012Q2 | cs.CE stat.AP | Drug Names, Population Level Surveillance and the FDA's Adverse Event
Reporting System: An Exploratory Data Survey of Drug Name Incidence and
Prevalence, 2004-2012Q2 Purpose: To count and monitor the drug names reported
in the publicly available version of the Federal Adverse Event Reporting System
(FAERS) from 2004 to 2012Q2 in a maximized sensitivity relational model.
Methods: Data mining and data modeling were conducted, and event-based summary
statistics with plots were created from over nine years of continuous FAERS
data. Results: This FAERS model contains 344,452 individual
drug names and 432,541,994 count references which occurred across 4,148,761
human subjects in the 34 quarter study period. Plots for the top 100 scoring
drug name references are reported by year and quarter; the top 100 drug names
contain 143,384,240 references or 33% of all drug name references over 34
quarters of continuous FAERS data. Conclusions: While FAERS contains many drugs
and adverse event reports, its data pertains to very few of them. Drug name
incidence enables timely and effective surveillance of large populations of
Adverse Event Reports and requires neither the cause of the AE nor its
validity to be known in order to detect a mass poisoning.
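A minimal sketch of the kind of incidence counting described here, using
pandas; the file name and column names (quarter, drugname, caseid) are
hypothetical placeholders, not the actual FAERS schema.

```python
import pandas as pd

# hypothetical flat extract of FAERS drug records: one row per drug-name reference
df = pd.read_csv("faers_drugs.csv", usecols=["quarter", "drugname", "caseid"])

# total references and distinct cases per drug name
counts = df.groupby("drugname").agg(
    references=("drugname", "size"),
    cases=("caseid", "nunique"),
)
top100 = counts.nlargest(100, "references")
print("share of all references:", top100["references"].sum() / len(df))

# per-quarter incidence for the top names (suitable for plotting)
incidence = (
    df[df["drugname"].isin(top100.index)]
    .groupby(["quarter", "drugname"]).size().unstack(fill_value=0)
)
```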
|
1309.0787 | Online Tensor Methods for Learning Latent Variable Models | cs.LG cs.DC cs.SI stat.ML | We introduce an online tensor decomposition based approach for two latent
variable modeling problems, namely (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
optimize the multilinear operations within SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
optimized algorithms on two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to the state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a gain of several orders
of magnitude in execution time.
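A minimal sketch of third-order tensor decomposition by stochastic gradient
descent on sampled entries; the rank, step size, and sampling scheme are
illustrative assumptions, and the paper's implicit formation of moment
tensors is not reproduced here.

```python
import numpy as np

def cp_sgd(T, rank=3, lr=0.02, iters=50000, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor T via SGD on random entries."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for _ in range(iters):
        i, j, k = rng.integers(I), rng.integers(J), rng.integers(K)
        e = T[i, j, k] - np.sum(A[i] * B[j] * C[k])      # residual at one entry
        gA, gB, gC = e * B[j] * C[k], e * A[i] * C[k], e * A[i] * B[j]
        A[i] += lr * gA; B[j] += lr * gB; C[k] += lr * gC
    return A, B, C

# toy check: try to recover a random rank-3 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((8, 3)) for _ in range(3))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_sgd(T)
err = np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C)) / np.linalg.norm(T)
print("relative error:", err)
```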
|
1309.0790 | SKYNET: an efficient and robust neural network training tool for machine
learning in astronomy | astro-ph.IM cs.LG cs.NE physics.data-an stat.ML | We present the first public release of our generic neural network training
algorithm, called SkyNet. This efficient and robust machine learning tool is
able to train large and deep feed-forward neural networks, including
autoencoders, for use in a wide range of supervised and unsupervised learning
applications, such as regression, classification, density estimation,
clustering and dimensionality reduction. SkyNet uses a `pre-training' method to
obtain a set of network parameters that has empirically been shown to be close
to a good solution, followed by further optimisation using a regularised
variant of Newton's method, where the level of regularisation is determined and
adjusted automatically; the latter uses second-order derivative information to
improve convergence, but without the need to evaluate or store the full Hessian
matrix, by using a fast approximate method to calculate Hessian-vector
products. This combination of methods allows for the training of complicated
networks that are difficult to optimise using standard backpropagation
techniques. SkyNet employs convergence criteria that naturally prevent
overfitting, and also includes a fast algorithm for estimating the accuracy of
network outputs. The utility and flexibility of SkyNet are demonstrated by
application to a number of toy problems, and to astronomical problems focusing
on the recovery of structure from blurred and noisy images, the identification
of gamma-ray bursters, and the compression and denoising of galaxy images. The
SkyNet software, which is implemented in standard ANSI C and fully parallelised
using MPI, is available at http://www.mrao.cam.ac.uk/software/skynet/.
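The fast Hessian-vector products mentioned above can be approximated with
finite differences of gradients, as in this minimal sketch; the quadratic
test function and step size are illustrative assumptions, and SkyNet's actual
implementation may differ.

```python
import numpy as np

def hessian_vector_product(grad, w, v, eps=1e-6):
    """Approximate H @ v without forming H, via a directional finite difference."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

# toy check on f(w) = 0.5 * w^T Q w, whose Hessian is Q
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda w: Q @ w
v = np.array([1.0, -1.0])
print(hessian_vector_product(grad, np.zeros(2), v))   # ~= Q @ v = [2., -1.]
```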
|
1309.0799 | Linear Degrees of Freedom of the X-Channel with Delayed CSIT | cs.IT math.IT | We establish the degrees of freedom of the two-user X-channel with delayed
channel knowledge at transmitters (i.e., delayed CSIT), assuming linear coding
strategies at the transmitters. We derive a new upper bound and characterize
the linear degrees of freedom of this network to be 6/5. The converse builds
upon our development of a general lemma that shows that, if two distributed
transmitters employ linear strategies, the ratio of the dimensions of received
linear subspaces at the two receivers cannot exceed 3/2, due to delayed CSIT.
As a byproduct, we also apply this general lemma to the three-user interference
channel with delayed CSIT, thereby deriving a new upper bound of 9/7 on its
linear degrees of freedom. This is the first bound that captures the impact of
delayed CSIT on the degrees of freedom of this network, under the assumption of
linear encoding strategies.
|
1309.0834 | Performance Analysis and Optimal Power Allocation for Linear Receivers
Based on Superimposed Training | cs.IT math.IT | In this paper, we derive a performance comparison between two training-based
schemes for Multiple-Input Multiple-Output (MIMO) systems. The two schemes are
the time-division multiplexing scheme and the recently proposed data-dependent
superimposed pilot scheme. For both schemes, a closed-form expression for the
Bit Error Rate (BER) is provided. We also determine, for both schemes, the
optimal allocation of power between pilot and data that minimizes the BER.
|
1309.0858 | Joint Sparse Recovery Method for Compressed Sensing with Structured
Dictionary Mismatches | cs.IT math.IT | In traditional compressed sensing theory, the dictionary matrix is given a
priori, whereas in real applications this matrix suffers from random noise and
fluctuations. In this paper we consider a signal model where each column in the
dictionary matrix is affected by a structured noise. This formulation is common
in direction-of-arrival (DOA) estimation of off-grid targets, encountered in
both radar systems and array processing. We propose to use joint sparse signal
recovery to solve the compressed sensing problem with structured dictionary
mismatches and also give an analytical performance bound on this joint sparse
recovery. We show that, under mild conditions, the reconstruction error of the
original sparse signal is bounded by both the sparsity and the noise level in
the measurement model. Moreover, we implement fast first-order algorithms to
speed up the computing process. Numerical examples demonstrate the good
performance of the proposed algorithm, and also show that the joint-sparse
recovery method yields a better reconstruction result than existing methods. By
implementing the joint sparse recovery method, the accuracy and efficiency of
DOA estimation are improved in both passive and active sensing cases.
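One standard first-order method for joint sparse recovery is proximal
gradient descent with row-wise soft thresholding (ISTA for the $\ell_{2,1}$
penalty); the sketch below is this generic version under assumed dimensions
and regularization weight, not the paper's algorithm for structured
dictionary mismatches.

```python
import numpy as np

def ista_l21(A, Y, lam=0.1, iters=500):
    """Solve min_X 0.5||A X - Y||_F^2 + lam * sum_i ||X[i,:]||_2 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X - (A.T @ (A @ X - Y)) / L      # gradient step
        norms = np.linalg.norm(G, axis=1, keepdims=True)
        X = np.maximum(0, 1 - (lam / L) / np.maximum(norms, 1e-12)) * G  # row shrinkage
    return X

# toy problem: 3 active rows shared across 4 measurement vectors
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
X0 = np.zeros((60, 4)); X0[[5, 17, 42]] = rng.standard_normal((3, 4))
Y = A @ X0 + 0.01 * rng.standard_normal((30, 4))
Xh = ista_l21(A, Y, lam=0.2)
print("recovered rows:", np.where(np.linalg.norm(Xh, axis=1) > 0.1)[0])
```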
|
1309.0861 | System Power Minimization to Access Non-Contiguous Spectrum in Cognitive
Radio Networks | cs.NI cs.IT math.IT | Wireless transmission using non-contiguous chunks of spectrum is becoming
increasingly important due to a variety of scenarios such as: secondary users
avoiding incumbent users in TV white space; anticipated spectrum sharing
between commercial and military systems; and spectrum sharing among
uncoordinated interferers in unlicensed bands. Multi-Channel Multi-Radio (MC-MR)
platforms and Non-Contiguous Orthogonal Frequency Division Multiple Access
(NC-OFDMA) technology are the two commercially viable transmission choices to
access these non-contiguous spectrum chunks. Fixed MC-MRs do not scale with an
increasing number of non-contiguous spectrum chunks due to their fixed set of
supporting radio front ends. NC-OFDMA allows nodes to access these
non-contiguous spectrum chunks and put null sub-carriers in the remaining
chunks. However, nulling sub-carriers increases the sampling rate (spectrum
span) which, in turn, increases the power consumption of radio front ends. Our
work characterizes this trade-off from a cross-layer perspective, specifically
by showing how the slope of ADC/DAC power consumption versus sampling rate
curve influences scheduling decisions in a multi-hop network. We
provide a branch-and-bound-based mixed integer linear programming
solution that performs joint power control, spectrum span selection, scheduling
and routing in order to minimize the system power of multi-hop NC-OFDMA
networks. We also provide a low complexity (O(E^2 M^2)) greedy algorithm where
M and E denote the number of channels and links respectively. Numerical
simulations suggest that our approach reduces system power by 30% over
classical transmit power minimization based cross-layer algorithms.
|
1309.0866 | On the Robustness of Temporal Properties for Stochastic Models | cs.LO cs.AI cs.LG cs.SY | Stochastic models such as Continuous-Time Markov Chains (CTMC) and Stochastic
Hybrid Automata (SHA) are powerful formalisms to model and to reason about the
dynamics of biological systems, due to their ability to capture the
stochasticity inherent in biological processes. A classical question in formal
modelling with clear relevance to biological modelling is the model checking
problem, i.e., calculating the probability that a behaviour, expressed for
instance in terms of a certain temporal logic formula, may occur in a given
stochastic process. However, one may not only be interested in the notion of
satisfiability, but also in the capacity of a system to maintain a particular
emergent behaviour unaffected by perturbations caused, e.g., by extrinsic
noise, or by possible small changes in the model parameters. To address this
issue, researchers from the verification community have recently proposed
several notions of robustness for temporal logic, providing suitable definitions
of distance between a trajectory of a (deterministic) dynamical system and the
boundaries of the set of trajectories satisfying the property of interest. The
contributions of this paper are twofold. First, we extend the notion of
robustness to stochastic systems, showing that this naturally leads to a
distribution of robustness scores. By discussing two examples, we show how to
approximate the distribution of the robustness score and its key indicators:
the average robustness and the conditional average robustness. Secondly, we
show how to combine these indicators with the satisfaction probability to
address the system design problem, where the goal is to optimize some control
parameters of a stochastic model in order to maximize the robustness of the
desired specifications.
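For readers unfamiliar with quantitative semantics, the sketch below computes
the standard robustness score of two simple temporal properties over a
sampled trajectory; the formulas and signal are illustrative, not the paper's
case studies.

```python
import numpy as np

# robustness of the atomic predicate x(t) > c at each time: rho(t) = x(t) - c
def rho_gt(x, c):
    return x - c

# "globally" G: worst case over the horizon; "finally" F: best case
def rho_globally(rho):
    return np.min(rho)

def rho_finally(rho):
    return np.max(rho)

# sampled trajectory of a noisy oscillator
t = np.linspace(0, 10, 500)
x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

print("robustness of G(x > -1.5):", rho_globally(rho_gt(x, -1.5)))  # > 0: satisfied with margin
print("robustness of F(x > 0.8):", rho_finally(rho_gt(x, 0.8)))
```

Repeating this over many sampled trajectories of a stochastic model yields
the distribution of robustness scores whose average and conditional average
the paper studies.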
|
1309.0867 | Robustness Analysis for Value-Freezing Signal Temporal Logic | cs.LO cs.CE cs.SY | In our previous work we have introduced the logic STL*, an extension of
Signal Temporal Logic (STL) that allows value freezing. In this paper, we
define robustness measures for STL* by adapting the robustness measures
previously introduced for Metric Temporal Logic (MTL). Furthermore, we present
an algorithm for STL* robustness computation, which is implemented in the tool
Parasim. Application of STL* robustness analysis is demonstrated on case
studies.
|
1309.0868 | The impact of high density receptor clusters on VEGF signaling | cs.SY q-bio.MN | Vascular endothelial growth factor (VEGF) signaling is involved in the
process of blood vessel development and maintenance. Signaling is initiated by
binding of the bivalent VEGF ligand to the membrane-bound receptors (VEGFR),
which in turn stimulates receptor dimerization. Herein, we discuss experimental
evidence that VEGF receptors localize in caveolae and other regions of the
plasma membrane, and for other receptors, it has been shown that receptor
clustering has an impact on dimerization and thus also on signaling. Overall,
receptor clustering is part of a complex ecosystem of interactions and how
receptor clustering impacts dimerization is not well understood. To address
these questions, we have formulated the simplest possible model. We have
postulated the existence of a single high affinity region in the cell membrane,
which acts as a transient trap for receptors. We have defined an ODE model by
introducing high- and low-density receptor variables and the
corresponding reactions from a realistic model of VEGF signal initiation.
Finally, we use the model to investigate the relation between the degree of
VEGFR concentration, ligand availability, and signaling. In conclusion, our
simulation results provide a deeper understanding of the role of receptor
clustering in cell signaling.
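A minimal sketch of the flavor of ODE model described here: receptors
exchange between a low-density pool and a high-density trap region and
dimerize upon ligand binding. The species, reactions, and rate constants are
hypothetical placeholders, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# state: [R_low, R_high, D]  (receptors in low/high density regions, dimers)
k_in, k_out = 0.5, 0.1               # exchange rates into/out of the high-density trap
k_dim_low, k_dim_high = 0.01, 0.1    # dimerization rates (faster when clustered)
L = 1.0                              # ligand availability (held constant here)

def rhs(t, y):
    r_lo, r_hi, d = y
    dim_lo = k_dim_low * L * r_lo ** 2
    dim_hi = k_dim_high * L * r_hi ** 2
    return [
        -k_in * r_lo + k_out * r_hi - 2 * dim_lo,   # low-density receptors
        k_in * r_lo - k_out * r_hi - 2 * dim_hi,    # high-density receptors
        dim_lo + dim_hi,                            # dimers (signaling-competent)
    ]

sol = solve_ivp(rhs, (0, 50), [10.0, 0.0, 0.0], dense_output=True)
print("dimers formed at t=50:", sol.y[2, -1])
```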
|
1309.0869 | Falsifying Oscillation Properties of Parametric Biological Models | cs.LO cs.CE cs.SY | We propose an approach to falsification of oscillation properties of
parametric biological models, based on the recently developed techniques for
testing continuous and hybrid systems. In this approach, an oscillation
property can be specified using a hybrid automaton, which is then used to guide
the exploration in the state and input spaces to search for the behaviors that
do not satisfy the property. We illustrate the approach with the Laub-Loomis
model for spontaneous oscillations during the aggregation stage of
Dictyostelium.
|
1309.0870 | A hybrid mammalian cell cycle model | cs.CE q-bio.MN | Hybrid modeling provides an effective solution to cope with multiple time
scales dynamics in systems biology. Among the applications of this method, one
of the most important is the cell cycle regulation. The machinery of the cell
cycle, leading to cell division and proliferation, combines slow growth,
spatio-temporal re-organisation of the cell, and rapid changes of regulatory
proteins concentrations induced by post-translational modifications. The
advancement through the cell cycle comprises a well-defined sequence of stages,
separated by checkpoint transitions. The combination of continuous and discrete
changes justifies hybrid modelling approaches to cell cycle dynamics. We
present a piecewise-smooth version of a mammalian cell cycle model, obtained by
hybridization from a smooth biochemical model. The approximate hybridization
scheme, leading to simplified reaction rates and binary event location
functions, is based on learning from a training set of trajectories of the
smooth model. We discuss several learning strategies for the parameters of the
hybrid model.
|
1309.0871 | Exploring the Dynamics of Mass Action Systems | cs.CE cs.CY | We present the Populus toolkit for exploring the dynamics of mass action
systems under different assumptions.
|
1309.0872 | Producing a Set of Models for the Iron Homeostasis Network | cs.CE cs.LO q-bio.MN | This paper presents a method for modeling biological systems which combines
formal techniques on intervals, numerical simulations and satisfaction of
Signal Temporal Logic (STL) formulas. The main modeling challenge addressed by
this approach is the large uncertainty in the values of the parameters due to
the experimental difficulties of getting accurate biological data. This method
considers intervals for each parameter and a formal description of the expected
behavior of the model. In the first step, it produces reduced intervals of
possible parameter values. Then by performing a systematic search in these
intervals, it defines sets of parameter values used in the next step. This
procedure aims at finding a sub-space where the model robustly behaves as
expected. We apply this method to the modeling of the cellular iron homeostasis
network in erythroid progenitors. The produced model describes explicitly the
regulation mechanism which acts at the translational level.
|
1309.0873 | A Hybrid Model of a Genetic Regulatory Network in Mammalian Sclera | cs.SY q-bio.MN | Myopia in humans and animals is caused by the axial elongation of the eye and
is closely linked to the thinning of the sclera which supports the eye tissue.
This thinning has been correlated with the overproduction of matrix
metalloproteinase (MMP-2), an enzyme that degrades the collagen structure of
the sclera. In this short paper, we propose a descriptive model of a regulatory
network with hysteresis, which seems necessary for creating oscillatory
behavior in the hybrid model between MMP-2, MT1-MMP and TIMP-2. Numerical
results provide insight on the type of equilibria present in the system.
|
1309.0874 | Shortest Paths in Microseconds | cs.DC cs.DS cs.SI physics.soc-ph | Computing shortest paths is a fundamental primitive for several social
network applications including socially-sensitive ranking, location-aware
search, social auctions and social network privacy. Since these applications
compute paths in response to a user query, the goal is to minimize latency
while maintaining feasible memory requirements. We present ASAP, a system that
achieves this goal by exploiting the structure of social networks.
ASAP preprocesses a given network to compute and store a partial shortest
path tree (PSPT) for each node. The PSPTs have the property that for any two
nodes, each edge along the shortest path is with high probability contained in
the PSPT of at least one of the nodes. We show that the structure of social
networks enables the PSPT of each node to be an extremely small fraction of the
entire network; hence, PSPTs can be stored efficiently and each shortest path
can be computed extremely quickly.
For a real network with 5 million nodes and 69 million edges, ASAP computes a
shortest path for most node pairs in less than 49 microseconds per pair. ASAP,
unlike any previous technique, also computes hundreds of paths (along with
corresponding distances) between any node pair in less than 100 microseconds.
Finally, ASAP admits efficient implementation on distributed programming
frameworks like MapReduce.
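A minimal sketch of the PSPT idea under stated assumptions: each node stores
shortest-path distances to its k nearest nodes (a truncated Dijkstra tree),
and a query takes the best meeting node in the intersection. Exactness relies
on the overlap property the paper establishes, and k is a hypothetical
parameter.

```python
import heapq
from collections import defaultdict

def pspt(graph, src, k):
    """Distances from src to its k nearest nodes (truncated Dijkstra)."""
    dist, heap = {}, [(0, src)]
    while heap and len(dist) < k:
        d, u = heapq.heappop(heap)
        if u in dist:
            continue
        dist[u] = d
        for v, w in graph[u]:
            if v not in dist:
                heapq.heappush(heap, (d + w, v))
    return dist

def query(pspts, s, t):
    """Shortest s-t distance via the best meeting node in both PSPTs."""
    common = pspts[s].keys() & pspts[t].keys()
    return min((pspts[s][m] + pspts[t][m] for m in common), default=float("inf"))

# toy undirected graph as adjacency lists (node -> [(neighbor, weight), ...])
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 3, 5)]
graph = defaultdict(list)
for u, v, w in edges:
    graph[u].append((v, w)); graph[v].append((u, w))

pspts = {n: pspt(graph, n, k=3) for n in graph}
print(query(pspts, 0, 3))   # 4, via the path 0-1-2-3
```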
|
1309.0898 | Two-Hop Interference Channels: Impact of Linear Schemes | cs.IT math.IT | We consider the two-hop interference channel (IC), which consists of two
source-destination pairs communicating with each other via two relays. We
analyze the degrees of freedom (DoF) of this network when the relays are
restricted to perform linear schemes, and the channel gains are constant (i.e.,
slow fading). We show that, somewhat surprisingly, by using vector-linear
strategies at the relays, it is possible to achieve 4/3 sum-DoF when the
channel gains are real. The key achievability idea is to alternate relaying
coefficients across time, to create different end-to-end interference
structures (or topologies) at different times. Although each of these
topologies has only 1 sum-DoF, we manage to achieve 4/3 by coding across them.
Furthermore, we develop a novel outer bound that matches our achievability,
hence characterizing the sum-DoF of two-hop interference channels with linear
schemes. As for the case of complex channel gains, we characterize the sum-DoF
with linear schemes to be 5/3. We also generalize the results to the
multi-antenna setting, characterizing the sum-DoF with linear schemes to be
2M - 1/3 (for complex channel gains), where M is the number of antennas at each
node.
|
1309.0961 | Exactly scale-free scale-free networks | physics.soc-ph cs.SI nlin.AO | Many complex natural and physical systems exhibit patterns of interconnection
that conform, approximately, to a network structure referred to as scale-free.
Preferential attachment is one of many algorithms that have been introduced to
model the growth and structure of scale-free networks. With so many different
models of scale-free networks it is unclear what properties of scale-free
networks are typical, and what properties are peculiarities of a particular
growth or construction process. We propose a simple maximum entropy process
which provides the best representation of what are typical properties of
scale-free networks, and provides a standard against which real and
algorithmically generated networks can be compared. As an example we consider
preferential attachment and find that this particular growth model does not
yield typical realizations of scale-free networks. In particular, the widely
discussed "fragility" of scale-free networks is actually found to be due to the
peculiar "hub-centric" structure of preferential attachment networks. We
provide a method to generate or remove this latent hub-centric bias --- thereby
demonstrating exactly which features of preferential attachment networks are
atypical of the broader class of scale-free networks. We are also able to
statistically demonstrate whether real networks are typical realizations of
scale-free networks, or networks with that particular degree distribution;
using a new surrogate generation method for complex networks, exactly analogous
to the widely used surrogate tests of nonlinear time series analysis.
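As a simple baseline for "networks with a given degree distribution", the
sketch below samples a power-law degree sequence and wires it with the
configuration model using networkx; this is a common stand-in, not the
paper's maximum entropy construction.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# sample a degree sequence with P(k) ~ k^(-gamma) via the Zipf distribution
gamma, n = 2.5, 1000
degrees = rng.zipf(gamma, size=n)
degrees[0] += degrees.sum() % 2            # configuration model needs an even sum

G = nx.configuration_model(degrees, seed=0)
G = nx.Graph(G)                            # collapse multi-edges
G.remove_edges_from(list(nx.selfloop_edges(G)))

print(G.number_of_nodes(), G.number_of_edges())
print("max degree:", max(d for _, d in G.degree()))
```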
|
1309.0962 | Random Variables Recorded under Mutually Exclusive Conditions:
Contextuality-by-Default | quant-ph cs.AI math.PR q-bio.QM | We present general principles underlying analysis of the dependence of random
variables (outputs) on deterministic conditions (inputs). Random outputs
recorded under mutually exclusive input values are labeled by these values and
considered stochastically unrelated, possessing no joint distribution. An input
that does not directly influence an output creates a context for the latter.
Any constraint imposed on the dependence of random outputs on inputs can be
characterized by considering all possible couplings (joint distributions)
imposed on stochastically unrelated outputs. The target application of these
principles is a quantum mechanical system of entangled particles, with
directions of spin measurements chosen for each particle being inputs and the
spins recorded outputs. The sphere of applicability, however, spans systems
across physical, biological, and behavioral sciences.
|
1309.0985 | Efficient binary tomographic reconstruction | physics.class-ph cs.CV | Tomographic reconstruction of a binary image from few projections is
considered. A novel {\em heuristic} algorithm is proposed, the central element
of which is a nonlinear transformation $\psi(p)=\log(p/(1-p))$ of the
probability $p$ that a pixel of the sought image be 1-valued. It consists of
backprojections based on $\psi(p)$ and iterative corrections. Application of
this algorithm to a series of artificial test cases leads to exact binary
reconstructions (i.e., recovery of the binary image for each single pixel) from
the knowledge of projection data over a few directions. Images of up to $10^6$
pixels are reconstructed in a few seconds. A series of test cases is performed
for comparison with previous methods, showing better efficiency and reduced
computation times.
|
1309.0999 | Minutiae Based Thermal Face Recognition using Blood Perfusion Data | cs.CV | This paper describes an efficient approach for human face recognition based
on blood perfusion data from infra-red face images. Blood perfusion data are
characterized by the regional blood flow in human tissue and therefore do not
depend entirely on surrounding temperature. These data bear a great potential
for deriving discriminating facial thermograms for better classification and
recognition of face images in comparison to optical image data. Blood perfusion
data are related to distribution of blood vessels under the face skin. A
distribution of blood vessels is unique for each person, and so a set of
minutiae points extracted from the blood perfusion data of a human face should
be unique for that face. There may be several such minutiae point sets for a
single face, but all of these correspond to that particular face only. The
entire face image is partitioned into equal blocks, and the total number of
minutiae points from each block is computed to construct the final feature
vector. The size of the feature vector therefore equals the total number of
blocks considered. For classification, a five-layer feed-forward backpropagation
neural network has been used. A number of experiments were conducted to
evaluate the performance of the proposed face recognition system with varying
block sizes. Experiments have been performed on the database created at our own
laboratory. A maximum recognition accuracy of 91.47% has been achieved with a
block size of 8x8.
|
1309.1000 | Automated Thermal Face recognition based on Minutiae Extraction | cs.CV | In this paper an efficient approach for human face recognition based on the
use of minutiae points in thermal face image is proposed. The thermogram of
human face is captured by thermal infra-red camera. Image processing methods
are used to pre-process the captured thermogram, from which different
physiological features based on blood perfusion data are extracted. Blood
perfusion data are related to distribution of blood vessels under the face
skin. In the present work, three different methods have been used to get the
blood perfusion image, namely bit-plane slicing with the medial axis transform,
morphological erosion with the medial axis transform, and Sobel edge operators.
The distribution of blood vessels is unique for each person, and a set of
minutiae points extracted from the blood perfusion data of a human face should
be unique for that face. Two different methods are discussed for extracting
minutiae points from blood perfusion data. For feature extraction, the entire
face image is partitioned into equal-size blocks and the total number of
minutiae points from each block is computed to construct the final feature
vector. The size of the feature vector therefore equals the total number of
blocks considered. A five-layer feed-forward backpropagation neural network is used
as the classification tool. A number of experiments were conducted to evaluate
the performance of the proposed face recognition methodologies with varying
block size on the database created at our own laboratory. It has been found
that the first method outperforms the other two, producing an accuracy of
97.62% with a block size of 16x16 for bit-plane 4.
|
1309.1007 | Concentration in unbounded metric spaces and algorithmic stability | math.PR cs.LG math.FA | We prove an extension of McDiarmid's inequality for metric spaces with
unbounded diameter. To this end, we introduce the notion of the {\em
subgaussian diameter}, which is a distribution-dependent refinement of the
metric diameter. Our technique provides an alternative approach to that of
Kutin and Niyogi's method of weakly difference-bounded functions, and yields
nontrivial, dimension-free results in some interesting cases where the former
does not. As an application, we give apparently the first generalization bound
in the algorithmic stability setting that holds for unbounded loss functions.
We furthermore extend our concentration inequality to strongly mixing
processes.
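For reference, the classical bounded-differences (McDiarmid) inequality being
extended here states that for independent $X_1,\dots,X_n$ and a function $f$
whose value changes by at most $c_i$ when its $i$-th argument changes:

```latex
\Pr\bigl[f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \ge t\bigr]
  \le \exp\!\left(\frac{-2t^2}{\sum_{i=1}^n c_i^2}\right)
```

The paper's extension replaces the bounded-differences assumption with a
subgaussian-diameter condition, allowing unbounded metric spaces.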
|
1309.1009 | A Comparative Study of Human thermal face recognition based on Haar
wavelet transform (HWT) and Local Binary Pattern (LBP) | cs.CV | Thermal infra-red (IR) images focus on changes of temperature distribution on
facial muscles and blood vessels. These temperature changes can be regarded as
texture features of images. A comparative study of face recognition methods
working in the thermal spectrum is carried out in this paper. In this study, two
local-matching methods, based on the Haar wavelet transform and Local Binary Patterns
(LBP) are analyzed. Wavelet transform is a good tool to analyze multi-scale,
multi-direction changes of texture. Local binary patterns (LBP) are a type of
feature used for classification in computer vision. Firstly, the human thermal
IR face image is preprocessed and only the face region is cropped from the
entire image. Secondly, two different approaches are used to extract the features from
the cropped face region. In the first approach, the training images and the
test images are processed with the Haar wavelet transform, and the LL-band and
the average of the LH/HL/HH-band sub-images are created for each face image. Then a
total confidence matrix is formed for each face image by taking a weighted sum
of the corresponding pixel values of the LL band and average band. For LBP
feature extraction, each of the face images in the training and test datasets is
divided into 161 sub-images, each of size 8x8 pixels. From each such
sub-image, LBP features are extracted and concatenated in a row-wise
manner. PCA is performed separately on each individual feature set for
dimensionality reduction. Finally, two different classifiers are used to
classify the face images: a multi-layer feed-forward neural network and a
minimum distance classifier. Experiments
have been performed on the database created at our own laboratory and Terravic
Facial IR Database.
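A minimal sketch of the basic 8-neighbor LBP operator assumed above; real
systems typically use library implementations (e.g.,
skimage.feature.local_binary_pattern) and uniform-pattern variants.

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic LBP: threshold each pixel's 8 neighbors against the center pixel."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    # neighbor offsets in clockwise order, each weighted by a power of two
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    return code

# toy 8x8 "face" patch; a real pipeline would histogram codes per 8x8 block
patch = np.random.default_rng(0).integers(0, 256, (8, 8))
codes = lbp_8neighbor(patch)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
print(codes.shape, hist.sum())   # (6, 6) interior codes -> 36 histogram counts
```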
|
1309.1014 | Advances in the Logical Representation of Lexical Semantics | cs.CL | The integration of lexical semantics and pragmatics in the analysis of the
meaning of natural language has prompted changes to the global framework
derived from Montague. In those works, the original lexicon, in which words
were assigned an atomic type of a single-sorted logic, has been replaced by a
set of many-faceted lexical items that can compose their meaning with salient
contextual properties using a rich typing system as a guide. Having recalled our
proposal for such an expanded framework, \LambdaTYn, we present some recent
advances in the associated logical formalisms, including constraints on lexical
research on the granularity of the type system and the limits of transitivity.
|
1309.1026 | Parallel Decoders of Polar Codes | cs.IT math.IT | In this letter, we propose a parallel SC (Successive Cancellation) decoder and
a parallel SC-List decoder for polar codes. The parallel decoder is composed of
M = 2^m (m >= 1) component decoders working in parallel, and each component
decoder decodes a polar code whose block size is 1/M of that of the original
polar code. The parallel decoder therefore decodes M times faster. Our
simulation results show that the parallel decoder has almost the same
error-rate performance as the conventional non-parallel decoder.
|
1309.1029 | Sensor Setups for State and Wind Estimation for Airborne Wind Energy
Converters | cs.SY cs.RO | An unscented Kalman filter with joint state and parameter estimation is
proposed for aerodynamics, states and wind conditions for airborne wind energy
converters. The proposed estimator relies on different measurement setups. Due
to the strict economic constraints of wind energy converters, the sensor setups
are chosen with minimal cost and reliability issues in mind. Simulation data
with a high fidelity system model and experimental tests using flight data,
together with wind measurements obtained using a lidar system for altitude wind
measurements, are used for validation. The data was obtained during test
flights of the EnerK\'ite EK30, an airborne wind energy converter currently in
research operation in Brandenburg, Germany. Feasible accuracies were achieved
even with the simplest of setups, illustrating the gain achievable with
airborne sensors. Additionally, the results encourage further research into use of the
obtained wind estimates for site assessment.
|
1309.1080 | Boosting in Location Space | cs.CV | The goal of object detection is to find objects in an image. An object
detector accepts an image and produces a list of locations as $(x,y)$ pairs.
Here we introduce a new concept: {\bf location-based boosting}. Location-based
boosting differs from previous boosting algorithms because it optimizes a new
spatial loss function to combine object detectors, each of which may have
marginal performance, into a single, more accurate object detector. A
structured representation of object locations as a list of $(x,y)$ pairs is a
more natural domain for object detection than the spatially unstructured
representation produced by classifiers. Furthermore, this formulation allows us
to take advantage of the intuition that large areas of the background are
uninteresting and it is not worth expending computational effort on them. This
results in a more scalable algorithm because it does not need to take measures
to prevent the background data from swamping the foreground data such as
subsampling or applying an ad-hoc weighting to the pixels. We first present the
theory of location-based boosting, and then motivate it with empirical results
on a challenging data set.
|
1309.1125 | Learning to answer questions | cs.CL | We present an open-domain Question-Answering system that learns to answer
questions based on successful past interactions. We follow a pattern-based
approach to Answer-Extraction, where (lexico-syntactic) patterns that relate a
question to its answer are automatically learned and used to answer future
questions. Results show that our approach contributes to the system's best
performance when it is combined with typical Answer-Extraction strategies.
Moreover, it allows the system to learn with the answered questions and to
rectify wrong or unsolved past questions.
|
1309.1129 | Analysing Quality of English-Hindi Machine Translation Engine Outputs
Using Bayesian Classification | cs.CL | This paper considers the problem of estimating the quality of machine
translation outputs independently of human intervention, a problem generally
addressed using machine learning techniques. There are various
measures through which a machine learns translation quality. Automatic
evaluation metrics produce good correlation at the corpus level but cannot
produce the same results at the segment or sentence level. In this paper, 16
features are extracted from the input sentences and their translations and a
quality score is obtained based on Bayesian inference produced from training
data.
|
1309.1131 | Is the Voter Model a model for voters? | physics.soc-ph cs.SI | The voter model has been studied extensively as a paradigmatic opinion
dynamics model. However, its ability to model real opinion dynamics has
not been addressed. We introduce a noisy voter model (accounting for social
influence) with agents' recurrent mobility (as a proxy for social context),
where the spatial and population diversity are taken as inputs to the model. We
show that the dynamics can be described as a noisy diffusive process that
contains the proper anisotropic coupling topology given by population and
mobility heterogeneity. The model captures statistical features of US
presidential elections, such as the stationary vote-share fluctuations across
counties, and the long-range spatial correlations that decay logarithmically
with the distance. Furthermore, it recovers the behavior of these properties
when a real-space renormalization is performed by coarse-graining the
geographical scale from county level through congressional districts and up to
states. Finally, we analyze the role of the mobility range and the randomness
in decision making which are consistent with the empirical observations.
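A minimal sketch of a noisy voter model on a network, assuming the standard
formulation (copy a random neighbor with probability 1 - a, flip at random
with probability a); the graph, noise level, and sizes are illustrative, and
the paper's recurrent-mobility ingredient is not modeled here.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(1000, k=8, p=0.1, seed=0)
opinion = rng.integers(0, 2, G.number_of_nodes())
noise = 0.01                              # probability of a spontaneous flip

for step in range(200000):
    i = int(rng.integers(G.number_of_nodes()))
    if rng.random() < noise:
        opinion[i] = rng.integers(0, 2)           # extrinsic noise / free will
    else:
        j = rng.choice(list(G.neighbors(i)))      # social influence: copy a neighbor
        opinion[i] = opinion[j]

print("vote share:", opinion.mean())
```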
|
1309.1151 | Non-Malleable Coding Against Bit-wise and Split-State Tampering | cs.IT cs.CC cs.CR math.IT | Non-malleable coding, introduced by Dziembowski, Pietrzak and Wichs (ICS
2010), aims for protecting the integrity of information against tampering
attacks in situations where error-detection is impossible. Intuitively,
information encoded by a non-malleable code either decodes to the original
message or, in presence of any tampering, to an unrelated message. Dziembowski
et al. show existence of non-malleable codes for any class of tampering
functions of bounded size.
We consider constructions of coding schemes against two well-studied classes
of tampering functions: bit-wise tampering functions (where the adversary
tampers each bit of the encoding independently) and split-state adversaries
(where two independent adversaries arbitrarily tamper each half of the encoded
sequence).
1. For bit-tampering, we obtain explicit and efficiently encodable and
decodable codes of length $n$ achieving rate $1-o(1)$ and error (security)
$\exp(-\tilde{\Omega}(n^{1/7}))$. We improve the error to
$\exp(-\tilde{\Omega}(n))$ at the cost of making the construction Monte Carlo
with success probability $1-\exp(-\Omega(n))$. Previously, the best known
construction of bit-tampering codes was the Monte Carlo construction of
Dziembowski et al. (ICS 2010), achieving rate ~0.1887.
2. We initiate the study of seedless non-malleable extractors as a variation
of non-malleable extractors introduced by Dodis and Wichs (STOC 2009). We show
that construction of non-malleable codes for the split-state model reduces to
construction of non-malleable two-source extractors. We prove existence of such
extractors, which implies that codes obtained from our reduction can achieve
rates arbitrarily close to 1/5 and exponentially small error. Currently, the
best known explicit construction of split-state coding schemes is due to
Aggarwal, Dodis and Lovett (ECCC TR13-081) which only achieves vanishing
(polynomially small) rate.
|
1309.1155 | Minutiae Based Thermal Human Face Recognition using Label Connected
Component Algorithm | cs.CV | In this paper, a thermal infra-red face recognition system for human
identification and verification using blood perfusion data and a backpropagation
feed-forward neural network is proposed. The system consists of three steps. At
the very first step, the face region is cropped from the colour 24-bit input
images. Secondly, face features are extracted from the cropped region; these are
taken as the input to the backpropagation feed-forward neural network in the
third step, where classification and recognition are carried out. The proposed
approaches are tested on a number of human thermal infra-red face images
created at our own laboratory. Experimental results reveal a high degree of
performance.
|
1309.1156 | Thermal Human face recognition based on Haar wavelet transform and
series matching technique | cs.CV | Thermal infrared (IR) images represent the heat patterns emitted from hot
objects, and they do not capture the energies reflected from an object. Objects,
living or non-living, emit different amounts of IR energy according to their
body temperature and characteristics. Humans are homeothermic and hence
capable of maintaining a constant body temperature under different surrounding
temperatures. Face recognition from thermal (IR) images should focus on changes
of temperature on facial blood vessels. These temperature changes can be
regarded as texture features of images and wavelet transform is a very good
tool to analyze multi-scale and multi-directional texture. Wavelet transform is
also used for image dimensionality reduction, by removing redundancies and
preserving original features of the image. The sizes of the facial images are
normally large. So, the wavelet transform is used before image similarity is
measured. Therefore this paper describes an efficient approach of human face
recognition based on wavelet transform from thermal IR images. The system
consists of three steps. At the very first step, the human thermal IR face image
is preprocessed and only the face region is cropped from the entire image.
Secondly, the Haar wavelet is used to extract the low-frequency band from the cropped
face region. Lastly, the image classification between the training images and
the test images is done, which is based on low-frequency components. The
proposed approach is tested on a number of human thermal infrared face images
created at our own laboratory and Terravic Facial IR Database. Experimental
results indicate that thermal infra-red face images can be recognized by
the proposed system effectively. A maximum recognition accuracy of 95% has
been achieved.
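A minimal sketch of extracting the low-frequency (LL) band with a single-level
Haar wavelet transform using PyWavelets; the image source and matching step
are placeholders for the paper's pipeline.

```python
import numpy as np
import pywt

# placeholder for a preprocessed, cropped thermal face image
face = np.random.default_rng(0).random((128, 128))

# single-level 2-D Haar decomposition: LL (approximation) plus detail bands
LL, (LH, HL, HH) = pywt.dwt2(face, "haar")
print(LL.shape)            # (64, 64): dimensionality is reduced by a factor of 4

# similarity between two faces can then be measured on the LL coefficients,
# e.g., by Euclidean distance
def ll_distance(img_a, img_b):
    a, _ = pywt.dwt2(img_a, "haar")
    b, _ = pywt.dwt2(img_b, "haar")
    return np.linalg.norm(a - b)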
|
1309.1193 | Confidence-constrained joint sparsity recovery under the Poisson noise
model | stat.ML cs.LG | Our work is focused on the joint sparsity recovery problem where the common
sparsity pattern is corrupted by Poisson noise. We formulate the
confidence-constrained optimization problem in both least squares (LS) and
maximum likelihood (ML) frameworks and study the conditions for perfect
reconstruction of the original row sparsity and row sparsity pattern. However,
the confidence-constrained optimization problem is non-convex. Using convex
relaxation, an alternative convex reformulation of the problem is proposed. We
evaluate the performance of the proposed approach using simulation results on
synthetic data and show the effectiveness of the proposed row sparsity and row
sparsity pattern recovery framework.
|
1309.1199 | Experiences with Automated Build and Test for Geodynamics Simulation
Codes | cs.CE cs.MS | The Computational Infrastructure for Geodynamics (CIG) is an NSF funded
project that develops, supports, and disseminates community-accessible software
for the geodynamics research community. CIG software supports a variety of
computational geodynamic research from mantle and core dynamics, to crustal and
earthquake dynamics, to magma migration and seismology. To support this type of
project, a backend computational infrastructure is necessary.
Part of this backend infrastructure is an automated build and testing system
to ensure codes and changes to them are compatible with multiple platforms and
that the changes do not significantly affect the scientific results. In this
paper we describe the build and test infrastructure for CIG based on the BaTLab
system, how it is organized, and how it assists in operations. We demonstrate
the use of this type of testing for a suite of geophysics codes, show why codes
may compile on one platform but not on another, and demonstrate how minor
changes may alter the computed results in unexpected ways that can influence
the scientific interpretation. Finally, we examine result comparison between
platforms and show how the compiler or operating system may affect results.
|
1309.1204 | Achieving High Performance with Unified Residual Evaluation | cs.MS cs.CE | We examine residual evaluation, perhaps the most basic operation in numerical
simulation. By raising the level of abstraction in this operation, we can
eliminate specialized code, enable optimization, and greatly increase the
extensibility of existing code.
|
1309.1218 | Optimal Ternary Cyclic Codes with Minimum Distance Four and Five | cs.IT math.IT | Cyclic codes are an important subclass of linear codes and have wide
applications in data storage systems, communication systems and consumer
electronics. In this paper, two families of optimal ternary cyclic codes are
presented. The first family of cyclic codes has parameters $[3^m-1, 3^m-1-2m,
4]$ and contains a class of conjectured cyclic codes and several new classes of
optimal cyclic codes. The second family of cyclic codes has parameters $[3^m-1,
3^m-2-2m, 5]$ and contains a number of classes of cyclic codes that are
obtained from perfect nonlinear functions over $\mathbb{F}_{3^m}$, where $m$ is a
positive integer with $m>1$.
|
1309.1226 | Graded Causation and Defaults | cs.AI | Recent work in psychology and experimental philosophy has shown that
judgments of actual causation are often influenced by consideration of
defaults, typicality, and normality. A number of philosophers and computer
scientists have also suggested that an appeal to such factors can help deal
with problems facing existing accounts of actual causation. This paper develops
a flexible formal framework for incorporating defaults, typicality, and
normality into an account of actual causation. The resulting account takes
actual causation to be both graded and comparative. We then show how our
account would handle a number of standard cases.
|
1309.1227 | Compact Representations of Extended Causal Models | cs.AI | Judea Pearl was the first to propose a definition of actual causation using
causal models. A number of authors have suggested that an adequate account of
actual causation must appeal not only to causal structure, but also to
considerations of normality. In earlier work, we provided a definition of
actual causation using extended causal models, which include information about
both causal structure and normality. Extended causal models are potentially
very complex. In this paper, we show how it is possible to achieve a compact
representation of extended causal models.
|
1309.1228 | Weighted regret-based likelihood: a new approach to describing
uncertainty | cs.AI | Recently, Halpern and Leung suggested representing uncertainty by a weighted
set of probability measures, and proposed a way of making decisions based on
this representation of uncertainty: maximizing weighted regret. Their paper
does not answer an apparently simpler question: what it means, according to
this representation of uncertainty, for an event E to be more likely than an
event E'. In this paper, a notion of comparative likelihood when uncertainty is
represented by a weighted set of probability measures is defined. It
generalizes the ordering defined by probability (and by lower probability) in a
natural way; a generalization of upper probability can also be defined. A
complete axiomatic characterization of this notion of regret-based likelihood
is given.
|
1309.1274 | A Small Universal Petri Net | cs.FL cs.CC cs.DC cs.NE | A universal deterministic inhibitor Petri net with 14 places, 29 transitions
and 138 arcs was constructed via simulation of Neary and Woods' weakly
universal Turing machine with 2 states and 4 symbols; the total time complexity
is exponential in the running time of their weak machine. To simulate the blank
words of the weakly universal Turing machine, a couple of dedicated transitions
insert their codes when reaching edges of the working zone. To complete a chain
of a given Petri net encoding to be executed by the universal Petri net, a
translation of a bi-tag system into a Turing machine was constructed. The
constructed Petri net is universal in the standard sense; a weaker form of
universality for Petri nets was not introduced in this work.
|
1309.1286 | On a Family of Circulant Matrices for Quasi-Cyclic Low-Density Generator
Matrix Codes | cs.IT math.IT | We present a new class of sparse and easily invertible circulant matrices
that can have a sparse inverse though not being permutation matrices. Their
study is useful in the design of quasi-cyclic low-density generator matrix
codes, which are able to combine the inner structure of quasi-cyclic codes with
sparse generator matrices, thus limiting the number of elementary operations
needed for encoding. Circulant matrices of the proposed class make it possible
to hit both targets without resorting to identity or permutation matrices,
which may penalize the code minimum distance and often cause significant error
floors.
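A small numerical illustration of why circulant matrices are easy to invert:
they are diagonalized by the DFT, so the inverse is again circulant with first
column ifft(1/fft(c)). The example vector is arbitrary, and whether the
inverse is sparse depends on the specific class studied in the paper.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([2.0, 1.0, 0.0, 0.0, 1.0])     # first column of a sparse circulant
C = circulant(c)

# invert in O(n log n): the eigenvalues of C are fft(c)
inv_first_col = np.fft.ifft(1.0 / np.fft.fft(c)).real
C_inv = circulant(inv_first_col)

print(np.allclose(C @ C_inv, np.eye(len(c))))   # True
```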
|
1309.1300 | Electrical Structure-Based PMU Placement in Electric Power Systems | cs.SY | Recent work on complex networks compared the topological and electrical
structures of the power grid, taking into account the underlying physical laws
that govern the electrical connectivity between various components in the
network. A distance metric, namely, resistance distance was introduced to
provide a more comprehensive description of interconnections in power systems
compared with the topological structure, which is based only on geographic
connections between network components. Motivated by these studies, in this
paper we revisit the phasor measurement unit (PMU) placement problem by
deriving the connectivity matrix of the network using resistance distances
between buses in the grid, and use it in the integer program formulations for
several standard IEEE bus systems. The main result of this paper is rather
discouraging: more PMUs are required, compared with the number obtained
using the topological structure, to meet the desired objective of complete
network observability without zero injection measurements. However, in light of
recent advances in the electrical structure of the grid, our study provides a
more realistic perspective of PMU placement in power systems. By further
exploring the connectivity matrix derived using the electrical structure, we
devise a procedure to solve the placement problem without resorting to linear
programming.
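To make the formulation concrete, here is a hedged sketch of the standard set-cover-style placement ILP; the observability matrix A and the toy ring below are illustrative assumptions, whereas the paper builds its connectivity matrix from resistance distances.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def place_pmus(A):
    # min sum(x) s.t. A @ x >= 1, x binary; A[i, j] = 1 if a PMU at bus j
    # makes bus i observable (derive A from the connectivity matrix).
    n = A.shape[1]
    res = milp(c=np.ones(n),
               constraints=LinearConstraint(A, lb=np.ones(A.shape[0])),
               integrality=np.ones(n),
               bounds=Bounds(0, 1))
    return np.round(res.x).astype(int)

# Toy 4-bus ring: a PMU observes its own bus and both neighbors.
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]])
print(place_pmus(A))  # two PMUs suffice on this toy ring
```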
|
1309.1319 | Characterization of the Least Periods of the Generalized Self-Shrinking
Sequences | cs.IT math.IT | In 2004, Y. Hu and G. Xiao introduced the generalized self-shrinking
generator, a simple bit-stream generator considered as a specialization of the
shrinking generator as well as a generalization of the self-shrinking
generator. The authors conjectured that the family of generalized
self-shrinking sequences took their least periods in the set {1, 2, $2^{L-1}$},
where $L$ is the length of the Linear Feedback Shift Register included in the
generator. In this correspondence, it is proved that the least periods of the
generated sequences take values exclusively in this set. As a direct
consequence of this result, other characteristics of these sequences (linear
complexity or pseudorandomness) and their potential use in cryptography are
also analyzed.
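For background, a minimal sketch of the classical self-shrinking rule that the generalized construction extends; the LFSR parameters here are illustrative, not from the paper.

```python
def lfsr_bits(state, taps, n):
    # Fibonacci LFSR over GF(2): output the last bit, feed back the XOR of
    # the tapped positions, then shift (toy parameters, for illustration).
    s, out = list(state), []
    for _ in range(n):
        out.append(s[-1])
        fb = 0
        for t in taps:
            fb ^= s[t]
        s = [fb] + s[:-1]
    return out

def self_shrink(bits):
    # Classical self-shrinking rule: scan pairs (a, b); emit b iff a == 1.
    return [bits[i + 1] for i in range(0, len(bits) - 1, 2) if bits[i]]

z = self_shrink(lfsr_bits([1, 0, 0, 1], taps=[0, 3], n=60))
print(z)
```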
|
1309.1323 | From Instantly Decodable to Random Linear Network Coding | cs.IT math.IT | Our primary goal in this paper is to traverse the performance gap between two
linear network coding schemes: random linear network coding (RLNC) and
instantly decodable network coding (IDNC) in terms of throughput and decoding
delay. We first redefine the concept of packet generation and use it to
partition a block of partially-received data packets in a novel way, based on
the coding sets in an IDNC solution. By varying the generation size, we obtain
a general coding framework which consists of a series of coding schemes, with
RLNC and IDNC identified as two extreme cases. We then prove that the
throughput and decoding delay performance of all coding schemes in this coding
framework are bounded between the performance of RLNC and IDNC, and hence a
throughput-delay tradeoff becomes possible. We also propose implementations of
this coding framework to further improve its throughput and decoding delay
performance, to manage feedback frequency and coding complexity, or to achieve
in-block performance adaptation. Extensive simulations are then provided to
verify the performance of the proposed coding schemes and their
implementations.
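A minimal sketch of the RLNC end of this spectrum over GF(2) (the field choice and sizes are illustrative assumptions): a block is decodable once the received coefficient matrix has full GF(2) rank.

```python
import numpy as np
rng = np.random.default_rng(0)

def gf2_rank(M):
    # Gaussian elimination mod 2; the block is decodable once
    # rank == number of source packets.
    M, r = (M % 2).astype(np.uint8), 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

k, L = 4, 16
packets = rng.integers(0, 2, size=(k, L))
C = rng.integers(0, 2, size=(k + 2, k))   # random GF(2) coefficients
coded = (C @ packets) % 2                 # coded packets sent on the channel
print(gf2_rank(C) == k)                   # True => block decodable
```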
|
1309.1333 | The Stability Region of the Two-User Interference Channel | cs.IT math.IT | The stable throughput region of the two-user interference channel is
investigated here. First, the stability region for the general case is
characterized. Second, we study the cases where the receivers treat
interference as noise or perform successive interference cancelation. Finally,
we provide conditions for the convexity/concavity of the stability region and
for which a certain interference management strategy leads to broader stability
region.
|
1309.1334 | Proceedings of the 14th International Symposium on Database Programming
Languages (DBPL 2013), August 30, 2013, Riva del Garda, Trento, Italy | cs.DB cs.PL | This volume contains the papers presented at the 14th Symposium on Database
Programming Languages (DBPL 2013) held on August 30th, 2013, in Riva del Garda,
co-located with the 39th International Conference on Very Large Databases (VLDB
2013). They cover a wide range of topics including the application of
programming language techniques to further the expressiveness of database
languages, schema management, and the practical use of XPath. To complement
this technical program, DBPL 2013 featured three invited talks by Serge
Abiteboul (Inria), J\'er\^ome Sim\'eon (IBM), and Soren Lassen (Facebook).
|
1309.1338 | On the Stability Region of a Relay-Assisted Multiple Access Scheme | cs.IT math.IT | In this paper we study the impact of a relay node in a two-user network. We
assume a random access collision channel model with erasures. In particular we
obtain an inner and an outer bound for the stability region.
|
1309.1349 | Ergodic Randomized Algorithms and Dynamics over Networks | cs.SY | Algorithms and dynamics over networks often involve randomization, and
randomization may result in oscillating dynamics which fail to converge in a
deterministic sense. In this paper, we observe this undesired feature in three
applications, in which the dynamics is the randomized asynchronous counterpart
of a well-behaved synchronous one. These three applications are network
localization, PageRank computation, and opinion dynamics. Motivated by their
formal similarity, we show the following general fact, under the assumptions of
independence across time and linearities of the updates: if the expected
dynamics is stable and converges to the same limit of the original synchronous
dynamics, then the oscillations are ergodic and the desired limit can be
locally recovered via time-averaging.
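A toy numerical illustration of this statement (our sketch, with parameters chosen for illustration): a scalar randomized affine update whose trajectory keeps oscillating, while its running time-average converges to the fixed point of the expected dynamics.

```python
import numpy as np
rng = np.random.default_rng(0)

# Two affine maps x <- a*x + b, chosen i.i.d. with probability 1/2 each.
maps = [(-0.9, 1.0), (0.5, 0.2)]
x, avg = 0.0, 0.0
for t in range(1, 200_001):
    a, b = maps[rng.integers(2)]
    x = a * x + b                 # trajectory keeps oscillating randomly
    avg += (x - avg) / t          # running (ergodic) time-average
ea, eb = -0.2, 0.6                # E[a], E[b]
print(avg, eb / (1 - ea))         # both approximately 0.5
```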
|
1309.1369 | Semistochastic Quadratic Bound Methods | stat.ML cs.LG math.NA stat.CO | Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient type methods, which use small random selections
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
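As a rough illustration of the quadratic-bound idea in a familiar special case, here is a hedged sketch of a semistochastic bound-based update for logistic regression; the fixed bound matrix (from the Hessian bound $X^\top X/4n$), the batch scheme, and all parameters are illustrative assumptions, not the paper's algorithm for general partition functions.

```python
import numpy as np
rng = np.random.default_rng(0)

def sqb_logistic(X, y, iters=200, batch=64, lam=1e-2):
    # The logistic-loss Hessian is dominated by X^T X / (4n), so one fixed
    # quadratic bound matrix B is inverted once and reused; gradients are
    # computed on mini-batches (the semistochastic part).
    n, d = X.shape
    B = X.T @ X / (4 * n) + lam * np.eye(d)
    Binv = np.linalg.inv(B)
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
        g = X[idx].T @ (p - y[idx]) / len(idx) + lam * w
        w -= Binv @ g
    return w

X = rng.standard_normal((500, 5))
y = (X @ np.ones(5) > 0).astype(float)
print(sqb_logistic(X, y))
```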
|
1309.1380 | Belief propagation, robust reconstruction and optimal recovery of block
models | math.PR cs.SI | We consider the problem of reconstructing sparse symmetric block models with
two blocks and connection probabilities $a/n$ and $b/n$ for inter- and
intra-block edge probabilities, respectively. It was recently shown that one
can do better than a random guess if and only if $(a-b)^2>2(a+b)$. Using a
variant of belief propagation, we give a reconstruction algorithm that is
optimal in the sense that if $(a-b)^2>C(a+b)$ for some constant $C$ then our
algorithm maximizes the fraction of the nodes labeled correctly. Ours is the
only algorithm proven to achieve the optimal fraction of nodes labeled
correctly. Along the way, we prove some results of independent interest
regarding robust reconstruction for the Ising model on regular and Poisson
trees.
|
1309.1392 | Bayesian Structural Inference for Hidden Processes | stat.ML cs.LG math.ST nlin.CD physics.data-an stat.TH | We introduce a Bayesian approach to discovering patterns in structurally
complex processes. The proposed method of Bayesian Structural Inference (BSI)
relies on a set of candidate unifilar HMM (uHMM) topologies for inference of
process structure from a data series. We employ a recently developed exact
enumeration of topological epsilon-machines. (A sequel then removes the
topological restriction.) This subset of the uHMM topologies has the added
benefit that inferred models are guaranteed to be epsilon-machines,
irrespective of estimated transition probabilities. Properties of
epsilon-machines and uHMMs allow for the derivation of analytic expressions for
estimating transition probabilities, inferring start states, and comparing the
posterior probability of candidate model topologies, despite process internal
structure being only indirectly present in data. We demonstrate BSI's
effectiveness in estimating a process's randomness, as reflected by the Shannon
entropy rate, and its structure, as quantified by the statistical complexity.
We also compare using the posterior distribution over candidate models and the
single, maximum a posteriori model for point estimation and show that the
former more accurately reflects uncertainty in estimated values. We apply BSI
to in-class examples of finite- and infinite-order Markov processes, as well to
an out-of-class, infinite-state hidden process.
|
1309.1410 | A binary deletion channel with a fixed number of deletions | math.PR cs.IT math.CO math.IT | Suppose a binary string x = x_1...x_n is being broadcast repeatedly over a
faulty communication channel. Each time, the channel delivers a fixed number m
of the digits (m<n) with the lost digits chosen uniformly at random, and the
order of the surviving digits preserved. How large does m have to be to
reconstruct the message?
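A minimal simulation of the channel model just described (function and variable names are ours):

```python
import random

def deletion_channel(x: str, m: int) -> str:
    # Deliver exactly m of the n digits; the deleted positions are chosen
    # uniformly at random and the surviving order is preserved.
    keep = sorted(random.sample(range(len(x)), m))
    return "".join(x[i] for i in keep)

print(deletion_channel("1011001", 4))  # e.g. '1010'
```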
|
1309.1418 | Algorithmic Data Analytics, Small Data Matters and Correlation versus
Causation | cs.CE cs.CC cs.IT math.IT | This is a review of aspects of the theory of algorithmic information that may
contribute to a framework for formulating questions related to complex highly
unpredictable systems. We start by contrasting Shannon Entropy and
Kolmogorov-Chaitin complexity epitomizing the difference between correlation
and causation to then move onto surveying classical results from algorithmic
complexity and algorithmic probability, highlighting their deep connection to
the study of automata frequency distributions. We end showing how long-range
algorithmic predicting models for economic and biological systems may require
infinite computation but locally approximated short-range estimations are
possible, thereby showing how small data can deliver important insights into
key features of complex "Big Data".
|
1309.1453 | Parallel machine scheduling with step deteriorating jobs and setup times
by a hybrid discrete cuckoo search algorithm | math.OC cs.DS cs.NE | This article considers the parallel machine scheduling problem with
step-deteriorating jobs and sequence-dependent setup times. The objective is to
minimize the total tardiness by determining the allocation and sequence of jobs
on identical parallel machines. In this problem, the processing time of each
job is a step function of its starting time: an individual extended time is
incurred when the starting time of a job is later than its specific
deterioration date. The possibility of deterioration of a job makes the
parallel machine scheduling problem more challenging than ordinary ones. A
mixed integer programming model for the optimal solution is derived. Due to its
NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve
this problem. In order to generate a good initial swarm, a modified heuristic
named the MBHG is incorporated into the initialization of population. Several
discrete operators are proposed in the random walk of L\'{e}vy Flights and the
crossover search. Moreover, a local search procedure based on variable
neighborhood descent is integrated into the algorithm as a hybrid strategy in
order to improve the quality of elite solutions. Computational experiments are
executed on two sets of randomly generated test instances. The results show
that the proposed hybrid algorithm can yield better solutions in comparison
with the commercial solver CPLEX with one hour time limit, discrete cuckoo
search algorithm and the existing variable neighborhood search algorithm.
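A small sketch of the objective evaluation implied by this model; the tuple layout `(base, penalty, deterioration date, due date)` is an assumed encoding for illustration, not the paper's notation.

```python
def processing_time(start, base, penalty, h):
    # Step-deteriorating model: the job needs 'base' time units, plus an
    # extra 'penalty' if it starts after its deterioration date h.
    return base + (penalty if start > h else 0)

def total_tardiness(machine_seqs, jobs):
    # machine_seqs: one job sequence per identical machine;
    # jobs: id -> (base, penalty, deterioration date h, due date).
    tard = 0
    for seq in machine_seqs:
        t = 0
        for j in seq:
            base, pen, h, due = jobs[j]
            t += processing_time(t, base, pen, h)
            tard += max(0, t - due)
    return tard

jobs = {1: (3, 2, 4, 6), 2: (2, 1, 3, 5), 3: (4, 3, 1, 7)}
print(total_tardiness([[1, 2], [3]], jobs))  # evaluate a candidate schedule
```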
|
1309.1496 | Methods for Large Scale Hydraulic Fracture Monitoring | physics.geo-ph cs.CE | In this paper we propose computationally efficient and robust methods for
estimating the moment tensor and location of micro-seismic event(s) for large
search volumes. Our contribution is two-fold. First, we propose a novel
joint-complexity measure, namely the sum of nuclear norms which while imposing
sparsity on the number of fractures (locations) over a large spatial volume,
also captures the rank-1 nature of the induced wavefield pattern. This
wavefield pattern is modeled as the outer-product of the source signature with
the amplitude pattern across the receivers from a seismic source. A rank-1
factorization of the estimated wavefield pattern at each location can therefore
be used to estimate the seismic moment tensor using the knowledge of the array
geometry. In contrast to existing work this approach allows us to drop any
other assumption on the source signature. Second, we exploit the recently
proposed first-order incremental projection algorithms for a fast and efficient
implementation of the resulting optimization problem and develop a hybrid
stochastic & deterministic algorithm which results in significant computational
savings.
|
1309.1501 | Improvements to deep convolutional neural networks for LVCSR | cs.LG cs.CL cs.NE math.OC stat.ML | Deep Convolutional Neural Networks (CNNs) are more powerful than Deep Neural
Networks (DNNs), as they are able to better reduce spectral variation in the
input signal. This has also been confirmed experimentally, with CNNs showing
relative improvements in word error rate (WER) of 4-12% compared to DNNs
across a variety of LVCSR tasks. In this paper, we describe different methods
to further improve CNN performance. First, we conduct a deep analysis comparing
limited weight sharing and full weight sharing with state-of-the-art features.
Second, we apply various pooling strategies that have shown improvements in
computer vision to an LVCSR speech task. Third, we introduce a method to
effectively incorporate speaker adaptation, namely fMLLR, into log-mel
features. Fourth, we introduce an effective strategy to use dropout during
Hessian-free sequence training. We find that with these improvements,
particularly with fMLLR and dropout, we are able to achieve an additional 2-3%
relative improvement in WER on a 50-hour Broadcast News task over our previous
best CNN baseline. On a larger 400-hour BN task, we find an additional 4-5%
relative improvement over our previous best CNN baseline.
|
1309.1507 | A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon's Needle | cs.IT cs.DS math.IT math.PR | In 1733, Georges-Louis Leclerc, Comte de Buffon in France, laid the groundwork
for geometric probability theory by posing an enlightening problem: What is the
probability that a needle thrown randomly on a ground made of equispaced
parallel strips lies on two of them? In this work, we show that the solution to
this problem, and its generalization to $N$ dimensions, allows us to discover a
quantized form of the Johnson-Lindenstrauss (JL) Lemma, i.e., one that combines
a linear dimensionality reduction procedure with a uniform quantization of
precision $\delta>0$. In particular, given a finite set $\mathcal S \subset
\mathbb R^N$ of $S$ points and a distortion level $\epsilon>0$, as soon as $M >
M_0 = O(\epsilon^{-2} \log S)$, we can (randomly) construct a mapping from
$(\mathcal S, \ell_2)$ to $(\delta\mathbb Z^M, \ell_1)$ that approximately
preserves the pairwise distances between the points of $\mathcal S$.
Interestingly, compared to the common JL Lemma, the mapping is quasi-isometric
and we observe both an additive and a multiplicative distortion on the
embedded distances. These two distortions, however, decay as $O(\sqrt{(\log
S)/M})$ when $M$ increases. Moreover, for coarse quantization, i.e., for high
$\delta$ compared to the set radius, the distortion is mainly additive, while
for small $\delta$ we tend to a Lipschitz isometric embedding. Finally, we
prove the existence of a "nearly" quasi-isometric embedding of $(\mathcal S,
\ell_2)$ into $(\delta\mathbb Z^M, \ell_2)$. This one involves a non-linear
distortion of the $\ell_2$-distance in $\mathcal S$ that vanishes for distant
points in this set. Noticeably, the additive distortion in this case is slower,
and decays as $O(\sqrt[4]{(\log S)/M})$.
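A hedged sketch of one plausible instantiation of such a quantized random mapping, combining a Gaussian projection with uniform dithered quantization; the scaling of $A$ and the dither are illustrative assumptions, and the paper derives the exact constants and distortion bounds.

```python
import numpy as np
rng = np.random.default_rng(1)

N, M, delta = 128, 512, 0.5
A = rng.standard_normal((M, N)) / M    # normalization chosen for illustration
u = rng.uniform(0, delta, size=M)      # dither applied before quantization

def embed(x):
    # Linear projection followed by uniform quantization onto delta * Z^M.
    return delta * np.floor((A @ x + u) / delta)

x, y = rng.standard_normal(N), rng.standard_normal(N)
d_in = np.linalg.norm(x - y)                    # l2 distance in R^N
d_out = np.linalg.norm(embed(x) - embed(y), 1)  # l1 distance in delta*Z^M
print(d_in, d_out)
```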
|
1309.1508 | Accelerating Hessian-free optimization for deep neural networks by
implicit preconditioning and sampling | cs.LG cs.CL cs.NE math.OC stat.ML | Hessian-free training has become a popular parallel second-order
optimization technique for Deep Neural Network training. This study aims at
speeding up Hessian-free training, both by means of decreasing the amount of
data used for training, as well as through reduction of the number of Krylov
subspace solver iterations used for implicit estimation of the Hessian. In this
paper, we develop an L-BFGS based preconditioning scheme that avoids the need
to access the Hessian explicitly. Since L-BFGS cannot be regarded as a
fixed-point iteration, we further propose the employment of flexible Krylov
subspace solvers that retain the desired theoretical convergence guarantees of
their conventional counterparts. Second, we propose a new sampling algorithm,
which geometrically increases the amount of data utilized for gradient and
Krylov subspace iteration calculations. On a 50-hr English Broadcast News task,
we find that these methodologies provide roughly a 1.5x speed-up, whereas, on a
300-hr Switchboard task, these techniques provide over a 2.3x speedup, with no
loss in WER. These results suggest that even further speed-up is expected, as
problems scale and complexity grows.
|
1309.1516 | On Secrecy Capacity of Fast Fading MIMOME Wiretap Channels With
Statistical CSIT | cs.IT math.IT | In this paper, we consider secure transmissions in ergodic Rayleigh
fast-faded multiple-input multiple-output multiple-antenna-eavesdropper
(MIMOME) wiretap channels with only statistical channel state information at
the transmitter (CSIT). When the legitimate receiver has more (or equal)
antennas than the eavesdropper, we prove the first MIMOME secrecy capacity with
partial CSIT by establishing a new secrecy capacity upper bound. The key step
is to form an MIMOME degraded channel by dividing the legitimate receiver's
channel matrix into two submatrices, and setting one of the submatrices to be
the same as the eavesdropper's channel matrix. Next, under the total power
constraint over all transmit antennas, we analytically solve the channel-input
covariance matrix optimization problem to fully characterize the MIMOME secrecy
capacity. Typically, the MIMOME optimization problems are non-concave. However,
thanks to the proposed degraded channel, we can transform the stochastic MIMOME
optimization problem into a Schur-concave one and then find its solution.
Besides total power constraint, we also investigate the secrecy capacity when
the transmitter is subject to the practical per-antenna power constraint. The
corresponding optimization problem is even more difficult since it is not
Schur-concave. Under the two power constraints considered, the corresponding
MIMOME secrecy capacities can both scale with the signal-to-noise ratio (SNR)
when the difference between the numbers of antennas at the legitimate receiver
and the eavesdropper is large enough. However, when the legitimate receiver and
eavesdropper have a single antenna each, such SNR scalings do not exist for
both cases.
|
1309.1517 | On the Capacity of Networks with Correlated Sources | cs.IT math.IT | Characterizing the capacity region for a network can be extremely difficult.
Even with independent sources, determining the capacity region can be as hard
as the open problem of characterizing all information inequalities. The
majority of computable outer bounds in the literature are relaxations of the
Linear Programming bound which involves entropy functions of random variables
related to the sources and link messages. When sources are not independent, the
problem is even more complicated. Extension of linear programming bounds to
networks with correlated sources is largely open. Source dependence is usually
specified via a joint probability distribution, and one of the main challenges
in extending linear programming bounds is the difficulty (or impossibility) of
characterizing arbitrary dependencies via entropy functions. This paper tackles
the problem by answering the question of how well entropy functions can
characterize correlation among sources. We show that by using carefully chosen
auxiliary random variables, the characterization can be fairly "accurate".
|
1309.1518 | Modeling, Analysis and Optimization of Multicast Device-to-Device
Transmission | cs.IT math.IT | Multicast device-to-device (D2D) transmission is important for applications
like local file transfer in commercial networks and is also a required feature
in public safety networks. In this paper we propose a tractable baseline
multicast D2D model, and use it to analyze important multicast metrics like the
coverage probability, mean number of covered receivers and throughput. In
addition, we examine how the multicast performance would be affected by certain
factors like mobility and network assistance. Take the mean number of covered
receivers as an example. We find that simple repetitive transmissions help but
the gain quickly diminishes as the number of repetitions increases. Meanwhile,
mobility and network assistance (i.e. allowing the network to relay the
multicast signals) can help cover more receivers. We also explore how to
optimize multicasting, e.g. by choosing the optimal multicast rate and the
optimal number of retransmissions.
|
1309.1521 | Nano-scale reservoir computing | cs.ET cs.NE nlin.AO | This work describes preliminary steps towards nano-scale reservoir computing
using quantum dots. Our research has focused on the development of an
accumulator-based sensing system that reacts to changes in the environment, as
well as the development of a software simulation. The investigated systems
generate nonlinear responses to inputs that make them suitable for a physical
implementation of a neural network. This development will enable
miniaturisation of the neurons to the molecular level, leading to a range of
applications including monitoring of changes in materials or structures. The
system is based around the optical properties of quantum dots. The paper will
report on experimental work on systems using Cadmium Selenide (CdSe) quantum
dots and on the various methods to render the systems sensitive to pH, redox
potential or specific ion concentration. Once the quantum dot-based systems are
rendered sensitive to these triggers they can provide a distributed array that
can monitor and transmit information on changes within the material.
|
1309.1524 | Guided Self-Organization of Input-Driven Recurrent Neural Networks | cs.NE cs.AI nlin.AO | We review attempts that have been made towards understanding the
computational properties and mechanisms of input-driven dynamical systems like
RNNs, and reservoir computing networks in particular. We provide details on
methods that have been developed to give quantitative answers to the questions
above. Following this, we show how self-organization may be used to improve
reservoirs for better performance, in some cases guided by the measures
presented before. We also present a possible way to quantify task performance
using an information-theoretic approach, and finally discuss promising future
directions aimed at a better understanding of how these systems perform their
computations and how to best guide self-organized processes for their
optimization.
|
1309.1536 | Rank-frequency relation for Chinese characters | cs.CL physics.data-an | We show that Zipf's law for Chinese characters holds perfectly for
sufficiently short texts (a few thousand different characters). The scenario of
its validity is similar to that of Zipf's law for words in short English texts. For
long Chinese texts (or for mixtures of short Chinese texts), rank-frequency
relations for Chinese characters display a two-layer, hierarchic structure that
combines a Zipfian power-law regime for frequent characters (first layer) with
an exponential-like regime for less frequent characters (second layer). For
these two layers we provide different (though related) theoretical descriptions
that include the range of low-frequency characters (hapax legomena). The
comparative analysis of rank-frequency relations for Chinese characters versus
English words illustrates the extent to which the characters play for Chinese
writers the same role as the words for those writing within alphabetical
systems.
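A minimal sketch of the rank-frequency computation underlying such an analysis (the inline corpus is a placeholder; a real study would use long texts):

```python
from collections import Counter

def rank_frequency(text):
    # Each character is the counting unit; return (rank, frequency) pairs
    # sorted by decreasing frequency.
    freqs = sorted(Counter(text).values(), reverse=True)
    return list(enumerate(freqs, start=1))

# A log-log plot of rank vs. frequency shows the Zipfian first layer and
# the exponential-like tail of rare characters (hapax legomena).
print(rank_frequency("白日依山尽黄河入海流欲穷千里目更上一层楼白日"))
```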
|
1309.1539 | Practical Matrix Completion and Corruption Recovery using Proximal
Alternating Robust Subspace Minimization | cs.CV | Low-rank matrix completion is a problem of immense practical importance.
Recent works on the subject often use nuclear norm as a convex surrogate of the
rank function. Despite its solid theoretical foundation, the convex version of
the problem often fails to work satisfactorily in real-life applications. Real
data often suffer from very few observations, with support not meeting the
random requirements, ubiquitous presence of noise and potentially gross
corruptions, sometimes with these simultaneously occurring.
This paper proposes a Proximal Alternating Robust Subspace Minimization
(PARSuMi) method to tackle the three problems. The proximal alternating scheme
explicitly exploits the rank constraint on the completed matrix and uses the
$\ell_0$ pseudo-norm directly in the corruption recovery step. We show that the
proposed method for the non-convex and non-smooth model converges to a
stationary point. Although it is not guaranteed to find the global optimal
solution, in practice we find that our algorithm can typically arrive at a good
local minimizer when it is supplied with a reasonably good starting point based
on convex optimization. Extensive experiments with challenging synthetic and
real data demonstrate that our algorithm succeeds in a much larger range of
practical problems where convex optimization fails, and it also outperforms
various state-of-the-art algorithms.
|
1309.1541 | Projection onto the probability simplex: An efficient algorithm with a
simple proof, and an application | cs.LG math.OC stat.ML | We provide an elementary proof of a simple, efficient algorithm for computing
the Euclidean projection of a point onto the probability simplex. We also show
an application in Laplacian K-modes clustering.
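For concreteness, a minimal NumPy rendering of a sort-and-threshold projection of the kind the abstract describes (our sketch, not the authors' code):

```python
import numpy as np

def project_simplex(v):
    # Sort descending, find the support size rho, then shift and threshold.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    lam = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + lam, 0.0)

x = project_simplex(np.array([0.8, 1.2, -0.3]))
print(x, x.sum())  # [0.3, 0.7, 0.0], entries nonnegative and summing to 1
```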
|
1309.1543 | A Comparison of the Performance of Supervised and Unsupervised Machine
Learning Techniques in evolving Awale/Mancala/Ayo Game Player | cs.LG cs.GT | Awale games have become widely recognized across the world for the
innovative strategies and techniques used in evolving the agents (players),
which have produced interesting results under various conditions. This paper
compares the two major machine learning techniques by reviewing their
performance when using minimax, an endgame database, a combination of both, or
other techniques, and determines which techniques perform best.
|
1309.1555 | A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes | cs.IT math.IT | A new Chase-type soft-decision decoding algorithm for Reed-Solomon codes is
proposed, referred to as the tree-based Chase-type algorithm. The proposed
tree-based Chase-type algorithm takes the set of all vectors as the set of
testing patterns, and hence definitely delivers the most-likely codeword
provided that sufficient computational resources are available. All the testing patterns
are arranged in an ordered rooted tree according to the likelihood bounds of
the possibly generated codewords, which is an extension of Wu and Pados' method
from binary into $q$-ary linear block codes. While performing the algorithm,
the ordered rooted tree is constructed progressively by adding at most two
leafs at each trial. The ordered tree naturally induces a sufficient condition
for the most-likely codeword. That is, whenever the tree-based Chase-type
algorithm exits before a preset maximum number of trials is reached, the output
codeword must be the most-likely one. But, in fact, the algorithm can be
terminated by setting a discrepancy threshold instead of a maximum number of
trials. When the tree-based Chase-type algorithm is combined with
Guruswami-Sudan (GS) algorithm, each trial can be implemented in an extremely
simple way by removing from the gradually updated Gr\"obner basis one old point
and interpolating one new point. Simulation results show that the tree-based
Chase-type algorithm performs better than the recently proposed Chase-type
algorithm by Bellorado et al. with fewer trials (on average), given that the
maximum number of trials is the same.
|
1309.1556 | Hyper-Graph Based Database Partitioning for Transactional Workloads | cs.DB | A common approach to scaling transactional databases in practice is
horizontal partitioning, which increases system scalability, high availability
and self-manageability. Usually it is very challenging to choose or design an
optimal partitioning scheme for a given workload and database. In this
technical report, we propose a fine-grained hyper-graph based database
partitioning system for transactional workloads. The partitioning system
takes a database, a workload, a node cluster and partitioning constraints as
input and outputs a lookup-table encoding the final database partitioning
decision. The database partitioning problem is modeled as a multi-constraint
hyper-graph partitioning problem. By deriving a min-cut of the hyper-graph, our
system can minimize the total number of distributed transactions in the
workload, balance the sizes and workload accesses of the partitions and satisfy
all the partition constraints imposed. Our system is highly interactive as it
allows users to impose partition constraints, watch visualized partitioning
effects, and provide feedback based on human expertise and indirect domain
knowledge for generating better partitioning schemes.
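A tiny sketch of the objective being minimized: modeling each transaction as a hyperedge over the tuples it touches, a transaction is distributed exactly when its hyperedge crosses partitions (all names here are illustrative):

```python
def distributed_txn_count(txns, assign):
    # txns: each transaction is the set of tuple ids it touches (a hyperedge);
    # assign: tuple id -> partition/node. A transaction is distributed when
    # its hyperedge spans more than one partition (the min-cut objective).
    return sum(len({assign[t] for t in txn}) > 1 for txn in txns)

txns = [{1, 2}, {2, 3}, {3, 4}]
assign = {1: 0, 2: 0, 3: 1, 4: 1}
print(distributed_txn_count(txns, assign))  # 1: only {2, 3} crosses the cut
```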
|
1309.1567 | On Variant Strategies To Solve The Magnitude Least Squares Optimization
Problem In Parallel Transmission Pulse Design And Under Strict SAR And Power
Constraints | physics.ins-det cs.CE | Parallel transmission has been a very promising candidate technology to
mitigate the inevitable radio-frequency field inhomogeneity in magnetic
resonance imaging (MRI) at ultra-high field (UHF). For the first few years,
pulse design utilizing this technique was expressed as a least squares problem
with crude power regularizations aimed at controlling the specific absorption
rate (SAR), hence the patient safety. This approach being suboptimal for many
applications sensitive mostly to the magnitude of the spin excitation, and not
its phase, the magnitude least squares (MLS) problem then was first formulated
in 2007. Despite its importance and the availability of other powerful
numerical optimization methods, this problem has so far been tackled by pulse
designers exclusively with the so-called variable exchange method. In this paper,
we investigate other strategies and incorporate directly the strict SAR and
hardware constraints. Different schemes such as sequential quadratic
programming (SQP), interior point (I-P) methods, semi-definite programming
(SDP) and magnitude squared least squares (MSLS) relaxations are studied both
in the small and large tip angle regimes with real data sets obtained in-vivo
on a human brain at 7 Tesla. Convergence and robustness of the different
approaches are analyzed, and recommendations to tackle this specific problem
are finally given. Small tip angle and inversion pulses are returned in a few
seconds and in under a minute, respectively, while respecting the constraints,
allowing the use of the proposed approach in routine practice.
|
1309.1574 | Identification of nonlinear controllers from data: theory and
computation | cs.SY math.DS math.OC | This manuscript contains technical details and proofs of recent results
developed by the authors, pertaining to the design of nonlinear controllers
from the experimental data measured on an existing feedback control system.
|
1309.1585 | Network-Level Cooperation in Energy Harvesting Wireless Networks | cs.IT cs.NI math.IT | We consider a two-hop communication network consisting of a source node, a
relay and a destination node in which the source and the relay node have
external traffic arrivals. The relay forwards a fraction of the source node's
traffic to the destination and the cooperation is performed at the network
level. In addition, both source and relay nodes have energy harvesting
capabilities and an unlimited battery to store the harvested energy. We study
the impact of the energy constraints on the stability region. Specifically, we
provide inner and outer bounds on the stability region of the two-hop network
with energy harvesting source and relay.
|
1309.1596 | Security analysis of epsilon-almost dual universal2 hash functions:
smoothing of min entropy vs. smoothing of R\'enyi entropy of order 2 | cs.IT cs.CR math.IT | Recently, $\varepsilon$-almost dual universal$_2$ hash functions have been
proposed as a new and wider class of hash functions. Using this class of hash
functions, several efficient hash functions were proposed. This paper evaluates
the security performance when we apply this kind of hash function. We evaluate
the security in several kinds of setting based on the $L_1$ distinguishability
criterion and the modified mutual information criterion. The obtained
evaluation is based on smoothing of R\'{e}nyi entropy of order 2 and/or min
entropy. We clarify the difference between these two methods.
|
1309.1621 | Skew Generalized Quasi-Cyclic Codes over Finite Fields | cs.IT math.IT | In this work, we study a class of generalized quasi-cyclic (GQC) codes called
skew GQC codes. By the factorization theory of ideals, we give the Chinese
Remainder Theorem over the skew polynomial ring, which leads to a canonical
decomposition of skew GQC codes. We also focus on some characteristics of skew
GQC codes in details. For a 1-generator skew GQC code, we define the
parity-check polynomial, determine the dimension and give a lower bound on the
minimum Hamming distance. The skew quasi-cyclic (QC) codes are also discussed
briefly.
|
1309.1623 | Quasi-Cyclic Codes Over Finite Chain Rings | cs.IT math.IT | In this paper, we mainly consider quasi-cyclic (QC) codes over finite chain
rings. We study module structures and trace representations of QC codes, which
lead to some lower bounds on the minimum Hamming distance of QC codes.
Moreover, we investigate the structural properties of 1-generator QC codes.
Under some conditions, we discuss the enumeration of 1-generator QC codes and
describe how to obtain the one and only one generator for each 1-generator QC
code.
|
1309.1628 | Topology preserving thinning for cell complexes | cs.CV | A topology preserving skeleton is a synthetic representation of an object
that retains its topology and many of its significant morphological properties.
The process of obtaining the skeleton, referred to as skeletonization or
thinning, is a very active research area. It plays a central role in reducing
the amount of information to be processed during image analysis and
visualization, computer-aided diagnosis or by pattern recognition algorithms.
This paper introduces a novel topology preserving thinning algorithm which
removes \textit{simple cells}---a generalization of simple points---of a given
cell complex. The test for simple cells is based on \textit{acyclicity tables}
automatically produced in advance with homology computations. Using acyclicity
tables renders the implementation of thinning algorithms straightforward.
Moreover, the fact that the tables are automatically filled for all possible
configurations makes it possible to rigorously prove the generality of the
algorithm and to obtain fool-proof implementations. The novel approach enables,
for the first time to our knowledge, the thinning of a general unstructured simplicial
complex. Acyclicity tables for cubical and simplicial complexes and an open
source implementation of the thinning algorithm are provided as additional
material to allow their immediate use in the vast number of practical
applications arising in medical imaging and beyond.
|
1309.1644 | Power Efficient MISO Beamforming for Secure Layered Transmission | cs.IT math.IT | This paper studies secure layered video transmission in a multiuser
multiple-input single-output (MISO) beamforming downlink communication system.
The power allocation algorithm design is formulated as a non-convex
optimization problem for minimizing the total transmit power while guaranteeing
a minimum received signal-to-interference-plus-noise ratio (SINR) at the
desired receiver. In particular, the proposed problem formulation takes into
account the self-protecting architecture of layered transmission and artificial
noise generation to prevent potential information eavesdropping. A
semi-definite programming (SDP) relaxation based power allocation algorithm is
proposed to obtain an upper bound solution. A sufficient condition for the
global optimal solution is examined to reveal the tightness of the upper bound
solution. Subsequently, two suboptimal power allocation schemes with low
computational complexity are proposed for enabling secure layered video
transmission. Simulation results demonstrate significant transmit power savings
achieved by the proposed algorithms and layered transmission compared to the
baseline schemes.
|
1309.1649 | Preparing Korean Data for the Shared Task on Parsing Morphologically
Rich Languages | cs.CL | This document gives a brief description of Korean data prepared for the SPMRL
2013 shared task. A total of 27,363 sentences with 350,090 tokens are used for
the shared task. All constituent trees are collected from the KAIST Treebank
and transformed to the Penn Treebank style. All dependency trees are converted
from the transformed constituent trees using heuristics and labeling rules
designed specifically for the KAIST Treebank. In addition to the gold-standard
morphological analysis provided by the KAIST Treebank, two sets of automatic
morphological analysis are provided for the shared task, one is generated by
the HanNanum morphological analyzer, and the other is generated by the Sejong
morphological analyzer.
|
1309.1747 | Stochastic Agent-Based Simulations of Social Networks | cs.SI physics.soc-ph | The rapidly growing field of network analytics requires data sets for use in
evaluation. Real-world data often lack ground truth, and simulated data lack narrative
fidelity or statistical generality. This paper presents a novel,
mixed-membership, agent-based simulation model to generate activity data with
narrative power while providing statistical diversity through random draws. The
model generalizes to a variety of network activity types such as Internet and
cellular communications, human mobility, and social network interactions. The
simulated actions over all agents can then drive an application specific
observational model to render measurements as one would collect in real-world
experiments. We apply this framework to human mobility and demonstrate its
utility in generating high fidelity traffic data for network analytics.
|
1309.1761 | Convergence of Nearest Neighbor Pattern Classification with Selective
Sampling | cs.LG stat.ML | In the panoply of pattern classification techniques, few enjoy the intuitive
appeal and simplicity of the nearest neighbor rule: given a set of samples in
some metric domain space whose value under some function is known, we estimate
the function anywhere in the domain by giving the value of the nearest sample
per the metric. More generally, one may use the modal value of the m nearest
samples, where m is a fixed positive integer (although m=1 is known to be
admissible in the sense that no larger value is asymptotically superior in
terms of prediction error). The nearest neighbor rule is nonparametric and
extremely general, requiring in principle only that the domain be a metric
space. The classic paper on the technique, proving convergence under
independent, identically-distributed (iid) sampling, is due to Cover and Hart
(1967). Because taking samples is costly, there has been much research in
recent years on selective sampling, in which each sample is selected from a
pool of candidates ranked by a heuristic; the heuristic tries to guess which
candidate would be the most "informative" sample. Lindenbaum et al. (2004)
apply selective sampling to the nearest neighbor rule, but their approach
sacrifices the austere generality of Cover and Hart; furthermore, their
heuristic algorithm is complex and computationally expensive. Here we report
recent results that enable selective sampling in the original Cover-Hart
setting. Our results pose three selection heuristics and prove that their
nearest neighbor rule predictions converge to the true pattern. Two of the
algorithms are computationally cheap, with complexity growing linearly in the
number of samples. We believe that these results constitute an important
advance in the art.
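For reference, a minimal rendering of the classic nearest neighbor rule the abstract builds on (our sketch; the selection heuristics themselves are not reproduced here):

```python
import numpy as np

def nn_predict(X, y, q):
    # The classic nearest neighbor rule: return the label of the stored
    # sample closest to the query under the Euclidean metric.
    return y[np.argmin(np.linalg.norm(X - q, axis=1))]

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([0, 1, 1])
print(nn_predict(X, y, np.array([0.2, 0.1])))  # 0
```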
|
1309.1780 | Software Abstractions and Methodologies for HPC Simulation Codes on
Future Architectures | cs.CE cs.MS cs.SE | Large, complex, multi-scale, multi-physics simulation codes, running on high
performance computing (HPC) platforms, have become essential to advancing
science and engineering. These codes simulate multi-scale, multi-physics
phenomena with unprecedented fidelity on petascale platforms, and are used by
large communities. Continued ability of these codes to run on future platforms
is as crucial to their communities as continued improvements in instruments and
facilities are to experimental scientists. However, the ability of code
developers to do these things faces a serious challenge with the paradigm shift
underway in platform architecture. The complexity and uncertainty of the future
platforms makes it essential to approach this challenge cooperatively as a
community. We need to develop common abstractions, frameworks, programming
models and software development methodologies that can be applied across a
broad range of complex simulation codes, and common software infrastructure to
support them. In this position paper we express and discuss our belief that
such an infrastructure is critical to the deployment of existing and new large,
multi-scale, multi-physics codes on future HPC platforms.
|
1309.1781 | Experiences from Software Engineering of Large Scale AMR Multiphysics
Code Frameworks | cs.CE cs.MS cs.SE | Among the present generation of multiphysics HPC simulation codes there are
many that are built upon general infrastructural frameworks. This is especially
true of the codes that make use of structured adaptive mesh refinement (SAMR)
because of unique demands placed on the housekeeping aspects of the code. They
have varying degrees of abstractions between the infrastructure such as mesh
management and IO and the numerics of the physics solvers. In this experience
report we summarize the experiences and lessons learned from two such major
software efforts, FLASH and Chombo.
|
1309.1784 | Enabling Reproducible Science with VisTrails | cs.SE cs.DB | With the increasing amount of data and use of computation in science,
software has become an important component in many different domains. Computing
is now being used more often and in more aspects of scientific work including
data acquisition, simulation, analysis, and visualization. To ensure
reproducibility, it is important to capture the different computational
processes used as well as their executions. VisTrails is an open-source
scientific workflow system for data analysis and visualization that seeks to
address the problem of integrating varied tools as well as automatically
documenting the methods and parameters employed. Growing from a specific
project need to supporting a wide array of users required close collaborations
in addition to new research ideas to design a usable and efficient system. The
VisTrails project now includes standard software processes like unit testing
and developer documentation while serving as a base for further research. In
this paper, we describe how VisTrails has developed and how our efforts in
structuring and advertising the system have contributed to its adoption in many
domains.
|
1309.1785 | #Santiago is not #Chile, or is it? A Model to Normalize Social Media
Impact | cs.SI physics.soc-ph | Online social networks are known to be demographically biased. Currently
there are questions about what degree of representativity of the physical
population they have, and how population biases impact user-generated content.
In this paper we focus on centralism, a problem affecting Chile. Assuming that
local differences exist in a country, in terms of vocabulary, we built a
methodology based on the vector space model to find distinctive content from
different locations, and use it to create classifiers to predict whether the
content of a micro-post is related to a particular location, having in mind a
geographically diverse selection of micro-posts. We evaluate them in a case
study where we analyze the virtual population of Chile that participated in the
Twitter social network during an event of national relevance: the municipal
(local governments) elections held in 2012. We observe that the participating
virtual population is spatially representative of the physical population,
implying that there is centralism in Twitter. Our classifiers out-perform a non
geographically-diverse baseline at the regional level, and have the same
accuracy at a provincial level. However, our approach makes assumptions that
need to be tested in multi-thematic and more general datasets. We leave this
for future work.
|
1309.1788 | Web Standards as Standard Pieces in Robotics | cs.SY cs.RO | Modern robotics often involves the use of web technologies as a means to cope
with the complexity of design and operation. Many of these technologies have
been formalized into standards, which are often avoided by those in robotics
and controls because of a sometimes warranted fear that "the web" is too slow,
or too uncertain for meaningful control applications.
In this work we argue that while web technologies may not be applicable for
all control, they should not be dismissed outright because they can provide
critical help with system integration. Web technologies have also advanced
significantly over the past decade. We present the details of an application of
a web server to perform open- and closed-loop control (between 3 Hz and 1 kHz) over
a variety of different network topologies. In our study we also consider the
impact of a web browser to implement the control of the plant. Our results
confirm that meaningful control can be performed using web technologies, and
also highlight design choices that can limit their applicability.
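A hedged sketch of the kind of setup described, not the authors' system: a proportional controller exposed through an HTTP endpoint (Flask, with illustrative gains) that a plant could poll each sample period.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
KP, SETPOINT = 0.5, 1.0  # illustrative gain and setpoint

@app.route("/control")
def control():
    y = float(request.args.get("y", 0.0))  # plant measurement from the query
    u = KP * (SETPOINT - y)                # proportional control law
    return jsonify(u=u)

if __name__ == "__main__":
    app.run(port=8080)  # plant polls http://localhost:8080/control?y=...
```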
|
1309.1794 | An Adaptive Algorithm for Synchronization in Diffusively Coupled Systems | math.DS cs.SY math.OC | We present an adaptive algorithm that guarantees synchronization in
diffusively coupled systems. We first consider compartmental systems of ODEs,
where each compartment represents a spatial domain of components interconnected
through diffusion terms with like components in different compartments. Each
set of like components may have its own weighted undirected graph describing
the topology of the interconnection between compartments. The link weights are
updated adaptively according to the magnitude of the difference between
neighboring agents connected by the link. We next consider reaction-diffusion
PDEs with Neumann boundary conditions, and derive an analogous algorithm
guaranteeing spatial homogenization of solutions. We provide a numerical
example demonstrating the results.
|
1309.1795 | Finding role communities in directed networks using Role-Based
Similarity, Markov Stability and the Relaxed Minimum Spanning Tree | cs.SI physics.soc-ph q-bio.NC | We present a framework to cluster nodes in directed networks according to
their roles by combining Role-Based Similarity (RBS) and Markov Stability, two
techniques based on flows. First we compute the RBS matrix, which contains the
pairwise similarities between nodes according to the scaled number of in- and
out-directed paths of different lengths. The weighted RBS similarity matrix is
then transformed into an undirected similarity network using the Relaxed
Minimum-Spanning Tree (RMST) algorithm, which uses the geometric structure of
the RBS matrix to unblur the network, such that edges between nodes with high,
direct RBS are preserved. Finally, we partition the RMST similarity network
into role-communities of nodes at all scales using Markov Stability to find a
robust set of roles in the network. We showcase our framework through a
biological and a man-made network.
|
1309.1805 | nanoHUB.org: Experiences and Challenges in Software Sustainability for a
Large Scientific Community | cs.SE cs.CE cs.DL | Managing and growing a successful cyberinfrastructure such as nanoHUB.org
presents a variety of opportunities and challenges, particularly in regard to
software. This position paper details a number of those issues and how we have
approached them.
|
1309.1807 | Aggregate-Max Nearest Neighbor Searching in the Plane | cs.CG cs.DB cs.DS | We study the aggregate/group nearest neighbor searching for the MAX operator
in the plane. For a set $P$ of $n$ points and a query set $Q$ of $m$ points,
the query asks for a point of $P$ whose maximum distance to the points in $Q$
is minimized. We present data structures for answering such queries for both
$L_1$ and $L_2$ distance measures. Previously, only heuristic and approximation
algorithms were given for both versions. For the $L_1$ version, we build a data
structure of $O(n)$ size in $O(n\log n)$ time, such that each query can be
answered in $O(m+\log n)$ time. For the $L_2$ version, we build a data
structure in $O(n\log n)$ time and $O(n\log \log n)$ space, such that each
query can be answered in $O(m\sqrt{n}\log^{O(1)} n)$ time, and alternatively,
we build a data structure in $O(n^{2+\epsilon})$ time and space for any
$\epsilon>0$, such that each query can be answered in $O(m\log n)$ time.
Further, we extend our result for the $L_1$ version to the top-$k$ queries
where each query asks for the $k$ points of $P$ whose maximum distances to $Q$
are the smallest for any $k$ with $1\leq k\leq n$: We build a data structure of
$O(n)$ size in $O(n\log n)$ time, such that each top-$k$ query can be answered in
$O(m+k\log n)$ time.
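As a baseline for intuition, a brute-force $O(nm)$ evaluation of the MAX aggregate query (our sketch; the paper's contribution is the sublinear data structures):

```python
import math

def aggregate_max_nn(P, Q):
    # Brute force: the point of P minimizing its maximum (Euclidean)
    # distance over the query set Q.
    return min(P, key=lambda p: max(math.dist(p, q) for q in Q))

P = [(0, 0), (5, 0), (2, 2)]
Q = [(1, 1), (3, 1)]
print(aggregate_max_nn(P, Q))  # (2, 2)
```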
|
1309.1812 | Cactus: Issues for Sustainable Simulation Software | cs.CE cs.MS cs.SE | The Cactus Framework is an open-source, modular, portable programming
environment for the collaborative development and deployment of scientific
applications using high-performance computing. Its roots reach back to 1996 at
the National Center for Supercomputing Applications and the Albert Einstein
Institute in Germany, where its development jumpstarted. Since then, the Cactus
framework has witnessed major changes in hardware infrastructure as well as its
own community. This paper describes its endurance through these past changes
and, drawing upon lessons from its past, also discusses future
|
1309.1825 | Social Interactive Media Tools and Knowledge Sharing: A Case Study | cs.DL cs.SI | Purpose: Social Media Tools (SMT) have provided new opportunities for
libraries and librarians around the world. In academic libraries, we can use
them as a powerful tool for communication. This study aims to determine the use
of the social interactive media tools [Social Networking Tools (SNT), Social
Bookmarking Tools (SBT), Image or Video Sharing Tools (IVShT), and Mashup Tools
(MT)] in disseminating knowledge and information among librarians in the
Limerick University, Ireland. Methodology: The study was a descriptive survey.
The research population included all librarians in Glucksman library. A
questionnaire survey was done to collect data. Statistical software, SPSS16 was
used at two levels (descriptive and inferential statistics) for data analyzing.
Findings: The findings show that the mean (out of 5.00) of using each of SMT in
sharing knowledge by the librarians of Glucksman library is as the following:
SNT (2.49), SBT (2.92), IVShT (2.99) and MT (2.5). It shows that most of their
interaction relates to sharing images or videos. Originality: SMT provides an
excellent platform for the exchange information between students, faculty
members, and the librarians themselves. The Glucksman library at the University
of Limerick is using this technology. This paper gives an example of how these
tools are used in the field of Library and Information Science in Ireland. The
issues expressed could be beneficial for the development of library services in
general and knowledge sharing among librarians in particular.
|
1309.1828 | Sustainable Software Development for Next-Gen Sequencing (NGS)
Bioinformatics on Emerging Platforms | cs.CE cs.DC | DNA sequence analysis is fundamental to life science research. The rapid
development of next generation sequencing (NGS) technologies, and the richness
and diversity of applications it makes feasible, have created an enormous gulf
between the potential of this technology and the development of computational
methods to realize this potential. Bridging this gap holds possibilities for
broad impacts toward multiple grand challenges and offers unprecedented
opportunities for software innovation and research. We argue that NGS-enabled
applications need a critical mass of sustainable software to benefit from
emerging computing platforms' transformative potential. Accumulating the
necessary critical mass will require leaders in computational biology,
bioinformatics, computer science, and computer engineering to work together to
identify core opportunity areas, critical software infrastructure, and software
sustainability challenges. Furthermore, due to the quickly changing nature of
both bioinformatics software and accelerator technology, we conclude that
creating sustainable accelerated bioinformatics software means constructing a
sustainable bridge between the two fields. In particular, sustained
collaboration between domain developers and technology experts is needed to
develop the accelerated kernels, libraries, frameworks and middleware that
could provide the needed flexible link from NGS bioinformatics applications to
emerging platforms.
|