| id | title | categories | abstract |
|---|---|---|---|
1402.3722 | word2vec Explained: deriving Mikolov et al.'s negative-sampling
word-embedding method | cs.CL cs.LG stat.ML | The word2vec software of Tomas Mikolov and colleagues
(https://code.google.com/p/word2vec/) has gained a lot of traction lately, and
provides state-of-the-art word embeddings. The learning models behind the
software are described in two research papers. We found the description of the
models in these papers to be somewhat cryptic and hard to follow. While the
motivations and presentation may be obvious to the neural-networks
language-modeling crowd, we had to struggle quite a bit to figure out the
rationale behind the equations.
This note is an attempt to explain equation (4) (negative sampling) in
"Distributed Representations of Words and Phrases and their Compositionality"
by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.
|
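As a companion to the note's derivation, the shape of the negative-sampling objective, log σ(v'_c · v_w) + Σ_i log σ(−v'_ni · v_w), can be sketched numerically. The toy vectors and noise-sample count below are illustrative assumptions; only the form of the objective comes from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_objective(v_in, v_out, v_negs):
    """Skip-gram negative-sampling objective for one (input, context)
    pair: log sigma(v_out . v_in) + sum_i log sigma(-v_neg_i . v_in).
    Training maximizes this quantity (a sum of log-sigmoids, so <= 0)."""
    pos = np.log(sigmoid(v_out @ v_in))
    neg = np.sum(np.log(sigmoid(-(v_negs @ v_in))))
    return pos + neg

# Toy 3-dimensional embeddings and 5 noise samples (illustrative only).
rng = np.random.default_rng(0)
v_in, v_out = rng.normal(size=3), rng.normal(size=3)
v_negs = rng.normal(size=(5, 3))
print(neg_sampling_objective(v_in, v_out, v_negs))
```

Maximizing this over the embeddings pushes the context vector toward the input vector and the noise vectors away from it, which is exactly the behaviour the note sets out to explain.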
1402.3727 | Multi-user Linear Precoding for Multi-polarized Massive MIMO System
under Imperfect CSIT | cs.IT math.IT | The space limitation and the channel acquisition prevent Massive MIMO from
being easily deployed in a practical setup. Motivated by current deployments of
LTE-Advanced, the use of multi-polarized antennas can be an efficient solution
to address the space constraint. Furthermore, the dual-structured precoding, in
which a preprocessing based on the spatial correlation and a subsequent linear
precoding based on the short-term channel state information at the transmitter
(CSIT) are concatenated, can reduce the feedback overhead efficiently. By
grouping and preprocessing spatially correlated mobile stations (MSs), the
dimensions of both the precoding signal space and the corresponding
short-term CSIT are reduced. In this paper, to reduce the feedback
overhead further, we propose a dual-structured multi-user linear precoding, in
which the subgrouping method based on co-polarization is additionally applied
to the spatially grouped MSs in the preprocessing stage. Furthermore, under
imperfect CSIT, the proposed scheme is asymptotically analyzed based on random
matrix theory. By investigating the behavior of the asymptotic performance, we
also propose a new dual-structured precoding in which the precoding mode is
switched between two dual-structured precoding strategies with 1) the
preprocessing based only on the spatial correlation and 2) the preprocessing
based on both the spatial correlation and polarization. Finally, we extend it
to 3D dual-structured precoding.
|
1402.3735 | Decentralized Goal Assignment and Safe Trajectory Generation in
Multi-Robot Networks via Multiple Lyapunov Functions | cs.MA cs.RO cs.SY | This paper considers the problem of decentralized goal assignment and
trajectory generation for multi-robot networks when only local communication is
available, and proposes an approach based on methods related to switched
systems and set invariance. A family of Lyapunov-like functions is employed to
encode the (local) decision making among candidate goal assignments, under
which a group of connected agents chooses the assignment that results in the
shortest total distance to the goals. An additional family of Lyapunov-like
barrier functions is activated in the case when the optimal assignment may lead
to colliding trajectories, maintaining thus system safety while preserving the
convergence guarantees. The proposed switching strategies give rise to feedback
control policies that are computationally efficient and scalable with the
number of agents, and therefore suitable for applications including
first-response deployment of robotic networks under limited information
sharing. The efficacy of the proposed method is demonstrated via simulation
results and experiments with six ground robots.
|
1402.3749 | Particle Computation: Designing Worlds to Control Robot Swarms with only
Global Signals | cs.RO | Micro- and nanorobots are often controlled by global input signals, such as
an electromagnetic or gravitational field. These fields move each robot
maximally until it hits a stationary obstacle or another stationary robot. This
paper investigates 2D motion-planning complexity for large swarms of simple
mobile robots (such as bacteria, sensors, or smart building material).
In previous work we proved it is NP-hard to decide whether a given initial
configuration can be transformed into a desired target configuration; in this
paper we prove a stronger result: the problem of finding an optimal control
sequence is PSPACE-complete. On the positive side, we show we can build useful
systems by designing obstacles. We present a reconfigurable hardware platform
and demonstrate how to form arbitrary permutations and build a compact absolute
encoder. We then take the same platform and use dual-rail logic to build a
universal logic gate that concurrently evaluates AND, NAND, NOR and OR
operations. Using many of these gates and appropriate interconnects we can
evaluate any logical expression.
|
1402.3783 | Map-Aware Models for Indoor Wireless Localization Systems: An
Experimental Study | cs.IT math.IT stat.AP | The accuracy of indoor wireless localization systems can be substantially
enhanced by map-awareness, i.e., by the knowledge of the map of the environment
in which localization signals are acquired. In fact, this knowledge can be
exploited to cancel out, at least to some extent, the signal degradation due to
propagation through physical obstructions, i.e., the so-called
non-line-of-sight bias. This result can be achieved by developing novel
localization techniques that rely on proper map-aware statistical modelling of
the measurements they process. In this manuscript a unified statistical model
for the measurements acquired in map-aware localization systems based on
time-of-arrival and received signal strength techniques is developed and its
experimental validation is illustrated. Finally, the accuracy of the proposed
map-aware model is assessed and compared with that offered by its map-unaware
counterparts. Our numerical results show that, when the quality of acquired
measurements is poor, map-aware modelling can enhance localization accuracy by
up to 110% in certain scenarios.
|
1402.3797 | Scalable Positional Analysis for Studying Evolution of Nodes in Networks | cs.SI physics.soc-ph | In social network analysis, the fundamental idea behind the notion of
position is to discover actors who have similar structural signatures.
Positional analysis of social networks involves partitioning the actors into
disjoint sets using a notion of equivalence which captures the structure of
relationships among actors. Classical approaches to Positional Analysis, such
as Regular equivalence and Equitable Partitions, are too strict in grouping
actors and often lead to trivial partitioning of actors in real world networks.
An Epsilon Equitable Partition (EEP) of a graph, which is similar in spirit to
Stochastic Blockmodels, is a useful relaxation to the notion of structural
equivalence which results in meaningful partitioning of actors. In this paper
we propose and implement a new scalable distributed algorithm based on
MapReduce methodology to find EEP of a graph. Empirical studies on random
power-law graphs show that our algorithm is highly scalable for sparse graphs,
thereby giving us the ability to study positional analysis on very large scale
networks. We also present the results of our algorithm on time-evolving
snapshots of the Facebook and Flickr social graphs. Results show the importance
of positional analysis on large dynamic networks.
|
1402.3801 | On Heterogeneous Regenerating Codes and Capacity of Distributed Storage
Systems | cs.IT math.IT | Heterogeneous Distributed Storage Systems (DSS) are close to real-world
data storage applications; Internet caching systems and peer-to-peer storage
clouds are examples of such DSS. In this work, we derive the capacity formula
for systems in which each node stores a different number of packets and has a
different repair bandwidth (a node can be repaired by contacting a specific
set of nodes). The tradeoff curve between storage and repair bandwidth is
studied for such heterogeneous DSS. By analyzing the capacity formula, new
minimum bandwidth regenerating (MBR) and minimum storage regenerating (MSR)
points are obtained on the curve. It is shown that in some cases these are
better than those of the homogeneous DSS.
|
1402.3811 | Dropout Rademacher Complexity of Deep Neural Networks | cs.NE stat.ML | Great successes of deep neural networks have been witnessed in various real
applications. Many algorithmic and implementation techniques have been
developed; however, the theoretical understanding of many aspects of deep
neural networks is far from clear. A particularly interesting issue is the
usefulness of dropout, which was motivated by the intuition of preventing
complex co-adaptation of feature detectors. In this paper, we study the
Rademacher complexity of different types of dropout, and our theoretical
results disclose that for shallow neural networks (with one hidden layer or
none) dropout reduces the Rademacher complexity polynomially, whereas for
deep neural networks it can, remarkably, lead to an exponential reduction of
the Rademacher complexity.
|
1402.3847 | Towards the reproducibility in soil erosion modeling: a new Pan-European
soil erosion map | cs.SY cs.CE physics.geo-ph | Soil erosion by water is a widespread phenomenon throughout Europe and has
the potential, through its on-site and off-site effects, to affect water
quality, food security and floods. Despite the implementation of numerous
different models for estimating soil erosion by water in Europe, there is
still a lack of harmonization of assessment methodologies.
Different approaches often result in significantly different soil erosion
rates. Even when the same model is applied to the same region, the results
may differ. This can be due to the way the model is implemented (i.e. with
the selection of different algorithms when available) and/or to the use of
datasets having different resolution or accuracy. Scientific computation is
emerging as one of the central topics of the scientific method; to overcome
these problems, there is thus the need to develop reproducible computational
methods whose code and data are available.
The present study illustrates this approach. Using only publicly available
datasets, we applied the Revised Universal Soil Loss Equation (RUSLE) to
locate the areas most sensitive to soil erosion by water in Europe.
A significant effort was made to select the best simplified equations to be
used when a strict application of the RUSLE model is not possible. In
particular, the reproducible research paradigm was applied to the computation
of the Rainfall Erosivity factor (R). The calculation of the R factor was
implemented using public datasets and the GNU R language. An easily
reproducible validation procedure based on measured precipitation time series
was applied using the MATLAB language. Designing the computational modelling
architecture so as to ease, as much as possible, the future reuse of the
model in analysing climate change scenarios is also a challenging goal of the
research.
|
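The RUSLE model applied in this row is multiplicative in its factors. A minimal sketch, with made-up factor values for a single grid cell (the actual study derives these from pan-European datasets):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: mean annual soil loss A = R * K * LS * C * P.
    R: rainfall erosivity, K: soil erodibility, LS: combined slope
    length/steepness factor, C: cover management, P: support practice.
    Units follow whichever convention R and K are expressed in."""
    return R * K * LS * C * P

# Illustrative (made-up) factor values for one grid cell:
print(rusle_soil_loss(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0))
```

Because the model is a plain product, the reproducibility burden lies entirely in how each factor layer is computed from the input datasets, which is why the abstract singles out the R-factor pipeline.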
1402.3849 | Scalable Kernel Clustering: Approximate Kernel k-means | cs.CV cs.DS cs.LG | Kernel-based clustering algorithms have the ability to capture the non-linear
structure in real world data. Among various kernel-based clustering algorithms,
kernel k-means has gained popularity due to its simple iterative nature and
ease of implementation. However, its run-time complexity and memory footprint
increase quadratically with the size of the data set, and hence large
data sets cannot be clustered efficiently. In this paper, we propose an
approximation scheme based on randomization, called the Approximate Kernel
k-means. We approximate the cluster centers using the kernel similarity between
a few sampled points and all the points in the data set. We show that the
proposed method achieves better clustering performance than the traditional low
rank kernel approximation based clustering schemes. We also demonstrate that
its running time and memory requirements are significantly lower than those of
kernel k-means, with only a small reduction in the clustering quality on
several public domain large data sets. We then employ ensemble clustering
techniques to further enhance the performance of our algorithm.
|
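A simplified reading of the sampling idea, not the authors' exact algorithm, can be sketched: cluster on kernel similarities to m sampled landmark points instead of the full n × n kernel matrix. The RBF kernel, landmark count, and farthest-point initialisation below are illustrative choices.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel between the rows of X and the rows of Y.
    return np.exp(-gamma * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def approx_kernel_kmeans(X, k, m, iters=20, seed=0):
    """Cluster X by running Lloyd's algorithm on the kernel similarities
    to m sampled landmarks, so the cost grows with n*m rather than n^2.
    A simplified stand-in for the paper's method, not a reproduction."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), size=m, replace=False)]
    F = rbf(X, landmarks)                      # n x m similarity features
    # Farthest-point initialisation keeps the sketch deterministic.
    centers = [F[0]]
    for _ in range(1, k):
        d = np.min([((F - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(F[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((F[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return labels
```

With m much smaller than n this keeps the non-linear separability of kernel methods while avoiding the quadratic kernel matrix the abstract identifies as the bottleneck.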
1402.3869 | FTVd is beyond Fast Total Variation regularized Deconvolution | cs.CV | In this paper, we revisit the "FTVd" algorithm for Fast Total Variation
Regularized Deconvolution, which has been widely used in the past few years.
Both its original version, implemented in the MATLAB software FTVd 3.0, and
its variant, implemented in the later version FTVd 4.0, are considered
\cite{Wang08FTVdsoftware}. We observe that the intermediate results during
the iterations are the solutions of a series of combined Tikhonov and total
variation regularized image deconvolution models, and that some of them
often have even better image quality than the final solution, which
corresponds to the pure total variation regularized model.
|
1402.3891 | Performance Evaluation of Machine Learning Classifiers in Sentiment
Mining | cs.LG cs.CL cs.IR | In recent years, machine learning classifiers have proven to be of great
value in solving a variety of text classification problems. Sentiment mining
is a kind of text classification in which messages are classified according
to their sentiment orientation, such as positive or negative. This paper
extends the idea of evaluating the performance of various classifiers to show
their effectiveness in sentiment mining of online product reviews. The
product reviews are collected from Amazon. To evaluate the performance of the
classifiers, evaluation methods such as random sampling, linear sampling and
bootstrap sampling are used. Our results show that a support vector machine
with bootstrap sampling outperforms the other classifiers and sampling
methods in terms of misclassification rate.
|
1402.3892 | Simulating Congestion Dynamics of Train Rapid Transit using Smart Card
Data | cs.MA physics.soc-ph | Investigating congestion in train rapid transit systems (RTS) in today's
urban cities is a challenge compounded by limited data availability and
difficulties in model validation. Here, we integrate information from travel
smart card data, a mathematical model of route choice, and a full-scale
agent-based model of the Singapore RTS to provide a more comprehensive
understanding of the congestion dynamics than can be obtained through
analytical modelling alone. Our model is empirically validated, and allows for
close inspection of the dynamics including station crowdedness, average travel
duration, and frequency of missed trains---all highly pertinent factors in
service quality. Using current data, the crowdedness in all 121 stations
appears to be distributed log-normally. In our preliminary scenarios, we
investigate the effect of population growth on service quality. We find that
the current population (2 million) lies below a critical point, and increasing
it by more than $\sim10\%$ leads to an exponential deterioration in
service quality. We also predict that incentivizing commuters to avoid the most
congested hours can bring modest improvements to the service quality provided
the population remains under the critical point. Finally, our model can be used
to generate simulated data for analytical modelling when such data are not
empirically available, as is often the case.
|
1402.3895 | Bounding Multiple Unicasts through Index Coding and Locally Repairable
Codes | cs.IT math.IT | We establish a duality result between linear index coding and Locally
Repairable Codes (LRCs). Specifically, we show that a natural extension of LRCs,
which we call Generalized Locally Repairable Codes (GLRCs), is exactly dual to
linear index codes. In a GLRC, every node is decodable from a specific set of other
nodes and these sets induce a recoverability directed graph. We show that the
dual linear subspace of a GLRC is a solution to an index coding instance where
the side information graph is this GLRC recoverability graph. We show that the
GLRC rate is equivalent to the complementary index coding rate, i.e. the number
of transmissions saved by coding. Our second result uses this duality to
establish a new upper bound for the multiple unicast network coding problem. In
multiple unicast network coding, we are given a directed acyclic graph and r
sources that want to send independent messages to r corresponding destinations.
Our new upper bound is efficiently computable and relies on a strong
approximation result for complementary index coding. We believe that our bound
could lead to a logarithmic approximation factor for multiple unicast network
coding if a plausible connection we state is verified.
|
1402.3898 | Graph Theory versus Minimum Rank for Index Coding | cs.IT math.IT | We obtain novel index coding schemes and show that they provably outperform
all graph theoretic bounds proposed so far. Further, we
establish a rather strong negative result: all known graph theoretic bounds are
within a logarithmic factor from the chromatic number. This is in striking
contrast to minrank since prior work has shown that it can outperform the
chromatic number by a polynomial factor in some cases. The conclusion is that
all known graph theoretic bounds are not much stronger than the chromatic
number.
|
1402.3902 | Sparse Polynomial Learning and Graph Sketching | cs.LG | Let $f:\{-1,1\}^n \to \mathbb{R}$ be a polynomial with at most $s$ non-zero real
coefficients. We give an algorithm for exactly reconstructing $f$ given random
examples from the uniform distribution on $\{-1,1\}^n$ that runs in time
polynomial in $n$ and $2s$ and succeeds if the function satisfies the unique
sign property: there is one output value which corresponds to a unique set of
values of the participating parities. This sufficient condition is satisfied
when every coefficient of $f$ is perturbed by a small random noise, or satisfied
with high probability when $s$ parity functions are chosen randomly or when all
the coefficients are positive. Learning sparse polynomials over the Boolean
domain in time polynomial in $n$ and $2s$ is considered notoriously hard in the
worst-case. Our result shows that the problem is tractable for almost all
sparse polynomials. Then, we show an application of this result to hypergraph
sketching which is the problem of learning a sparse (both in the number of
hyperedges and the size of the hyperedges) hypergraph from uniformly drawn
random cuts. We also provide experimental results on a real world dataset.
|
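The objects being learned here can be made concrete: a sparse polynomial over the Boolean domain is a sum of s parities with real coefficients. A minimal evaluator (the example polynomial is made up, not from the paper):

```python
def eval_sparse_poly(coeffs, x):
    """Evaluate f(x) = sum_S c_S * prod_{i in S} x_i for x in {-1,1}^n.
    coeffs maps a tuple of indices (a parity) to its real coefficient."""
    total = 0.0
    for S, c in coeffs.items():
        parity = 1
        for i in S:
            parity *= x[i]
        total += c * parity
    return total

# f(x) = 2*x0*x1 - 0.5*x2, i.e. s = 2 non-zero coefficients.
f = {(0, 1): 2.0, (2,): -0.5}
print(eval_sparse_poly(f, (1, -1, 1)))   # 2*(1*-1) - 0.5*1 = -2.5
```

The unique sign property mentioned in the abstract is a condition on the outputs of exactly such a function: one output value must pin down a unique assignment to the participating parities.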
1402.3926 | Sparse Coding Approach for Multi-Frame Image Super Resolution | cs.CV | An image super-resolution method from multiple observations of low-resolution
images is proposed. The method is based on sub-pixel accuracy block matching
for estimating relative displacements of observed images, and sparse signal
representation for estimating the corresponding high-resolution image. Relative
displacements of small patches of observed low-resolution images are accurately
estimated by a computationally efficient block matching method. Since the
estimated displacements are also regarded as a warping component of image
degradation process, the matching results are directly utilized to generate
low-resolution dictionary for sparse image representation. The matching scores
of the block matching are used to select a subset of low-resolution patches for
reconstructing a high-resolution patch, that is, an adaptive selection of
informative low-resolution images is realized. When there is only one
low-resolution image, the proposed method works as a single-frame
super-resolution method. The proposed method is shown to perform comparably to
or better than conventional single- and multi-frame super-resolution methods
through experiments using various real-world datasets.
|
1402.3928 | Parametrization of completeness in symbolic abstraction of bounded input
linear systems | cs.SY | A good state-time quantized symbolic abstraction of an already input
quantized control system would satisfy three conditions: proximity, soundness
and completeness. Extant approaches for symbolic abstraction of unstable
systems are limited to satisfying proximity and soundness but not
completeness. The instability of systems is an impediment to constructing
fully complete state-time quantized symbolic models for bounded- and
quantized-input unstable systems, even using supervisory feedback. Therefore,
in this paper we propose a parametrization of the completeness of the
symbolic model through the notion of Trimmed-Input Approximate Bisimulation,
which is introduced in the paper. The degree of completeness is specified by
a parameter called the trimming of the set of input trajectories. We
subsequently discuss a procedure for constructing state-time quantized
symbolic models which are near-complete, in addition to being sound and
proximate with respect to the time-quantized models.
|
1402.3939 | IMRank: Influence Maximization via Finding Self-Consistent Ranking | cs.SI cs.DS | Influence maximization, fundamental to word-of-mouth and viral marketing,
aims to find a set of seed nodes maximizing influence spread on a social
network. Early methods mainly fall into two paradigms, each with certain
benefits and drawbacks: (1) greedy algorithms, selecting seed nodes one by
one, give guaranteed accuracy by relying on accurate approximation of the
influence spread, at high computational cost; (2) heuristic algorithms,
estimating influence spread using efficient heuristics, have low
computational cost but unstable accuracy.
We first point out that greedy algorithms are essentially finding a
self-consistent ranking, in which nodes' ranks are consistent with their
ranking-based marginal influence spread. This insight motivates us to develop
an iterative ranking framework, IMRank, to efficiently solve the influence
maximization problem under the independent cascade model. Starting from an
initial ranking, e.g., one obtained from an efficient heuristic algorithm,
IMRank finds a self-consistent ranking by iteratively reordering nodes in
terms of their ranking-based marginal influence spread computed according to
the current ranking. We also prove that IMRank converges to a self-consistent
ranking
starting from any initial ranking. Furthermore, within this framework, a
last-to-first allocating strategy and a generalization of this strategy are
proposed to improve the efficiency of estimating ranking-based marginal
influence spread for a given ranking. In this way, IMRank achieves both
remarkable efficiency and high accuracy by leveraging simultaneously the
benefits of greedy algorithms and heuristic algorithms. As demonstrated by
extensive experiments on large scale real-world social networks, IMRank always
achieves high accuracy comparable to greedy algorithms, with computational cost
reduced dramatically, even about $10-100$ times faster than other scalable
heuristics.
|
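The iterative reordering loop at the heart of IMRank can be sketched. The marginal-spread estimate below is a deliberately crude stand-in (the paper computes ranking-based marginals with a last-to-first allocating strategy under the independent cascade model), so only the find-a-fixed-point-of-the-ranking structure is faithful.

```python
def imrank(W, iters=100):
    """Iterate toward a self-consistent ranking, IMRank-style.
    W[i][j] is the influence probability of node i on node j.  The
    score used here (1 + probability mass sent to lower-ranked nodes)
    is an illustrative stand-in for the paper's marginal spread."""
    n = len(W)
    rank = list(range(n))                     # initial ranking, e.g. by id
    for _ in range(iters):
        pos = {v: p for p, v in enumerate(rank)}
        score = [1.0 + sum(W[v][u] for u in range(n) if pos[u] > pos[v])
                 for v in range(n)]
        new_rank = sorted(range(n), key=lambda v: -score[v])
        if new_rank == rank:                  # ranks now agree with the
            break                             # ranking-based marginals
        rank = new_rank
    return rank

# Star graph: node 0 influences the rest, so it should rank first.
W = [[0.0, 0.5, 0.5, 0.5]] + [[0.0] * 4 for _ in range(3)]
print(imrank(W))   # -> [0, 1, 2, 3]
```

Each pass re-scores nodes under the current ranking and re-sorts; a ranking that sorts itself is exactly the self-consistency condition the abstract describes.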
1402.3941 | The Saddlepoint Approximation: Unified Random Coding Asymptotics for
Fixed and Varying Rates | cs.IT math.IT | This paper presents a saddlepoint approximation of the random-coding union
bound of Polyanskiy et al. for i.i.d. random coding over discrete memoryless
channels. The approximation is single-letter, and can thus be computed
efficiently. Moreover, it is shown to be asymptotically tight for both fixed
and varying rates, unifying existing achievability results in the regimes of
error exponents, second-order coding rates, and moderate deviations. For fixed
rates, novel exact-asymptotics expressions are specified to within a
multiplicative 1+o(1) term. A numerical example is provided for which the
approximation is remarkably accurate even at short block lengths.
|
1402.3973 | Dimensionality reduction with subgaussian matrices: a unified theory | cs.IT cs.DS math.IT stat.ML | We present a theory for Euclidean dimensionality reduction with subgaussian
matrices which unifies several restricted isometry property and
Johnson-Lindenstrauss type results obtained earlier for specific data sets. In
particular, we recover and, in several cases, improve results for sets of
sparse and structured sparse vectors, low-rank matrices and tensors, and smooth
manifolds. In addition, we establish a new Johnson-Lindenstrauss embedding for
data sets taking the form of an infinite union of subspaces of a Hilbert space.
|
1402.3986 | New Mechanism for Multiagent Extensible Negotiations | cs.MA | Multiagent negotiation mechanisms provide original solutions to several
problems for which the usual problem-solving methods are inappropriate.
Negotiation models are mainly based on agents interacting through messages:
agents interact in order to reach an agreement for solving a specific
problem. In this work, we study a new variant of negotiation which has not
yet been addressed in existing works, denoted extensible negotiation. In
contrast with current negotiation models, this form of negotiation allows the
agents to dynamically extend the set of items under negotiation, which yields
solutions that are more acceptable to the agents. The advantage of enlarging
the negotiation space is to offer the agents more ways of reaching agreements
that would not have been obtained using the usual negotiation methods. This
paper presents the protocol and the strategies used by the agents to deal
with such negotiations.
|
1402.4004 | Design of a Hybrid Robot Control System using Memristor-Model and
Ant-Inspired Based Information Transfer Protocols | cs.RO cs.ET cs.SY | It is not always possible for a robot to process all the information from its
sensors in a timely manner, and thus quick yet valid approximations of the
robot's situation are needed. Here we design a hybrid controller for a robot
operating under this limit, using algorithms inspired by ant worker-placement
behaviour and based on memristor non-linearity.
|
1402.4007 | Does the D.C. Response of Memristors Allow Robotic Short-Term Memory and
a Possible Route to Artificial Time Perception? | cs.RO cs.ET cs.NE | Time perception is essential for task switching, and in the mammalian brain
appears alongside other processes. Memristors are electronic components used as
synapses and as models for neurons. The d.c. response of memristors can be
considered as a type of short-term memory. Interactions of the memristor d.c.
response within networks of memristors leads to the emergence of oscillatory
dynamics and intermittent spike trains, which are similar to neural dynamics.
Based on these data, the structure of a memristor-network controller for a
robot as it undergoes task switching is discussed, and it is suggested that these
emergent network dynamics could improve the performance of role switching and
learning in an artificial intelligence and perhaps create artificial time
perception.
|
1402.4029 | Connecting Spiking Neurons to a Spiking Memristor Network Changes the
Memristor Dynamics | cs.ET cs.NE physics.bio-ph | Memristors have been suggested as neuromorphic computing elements. Spike-time
dependent plasticity and the Hodgkin-Huxley model of the neuron have both been
modelled effectively by memristor theory. The d.c. response of the memristor is
a current spike. Based on these three facts we suggest that memristors are
well-placed to interface directly with neurons. In this paper we show that
connecting a spiking memristor network to spiking neuronal cells causes a
change in the memristor network dynamics by: removing the memristor spikes,
which we show is due to the effects of connection to aqueous medium; causing a
change in current decay rate consistent with a change in memristor state;
presenting more-linear $I-t$ dynamics; and increasing the memristor spiking
rate, as a consequence of interaction with the spiking neurons. This
demonstrates that neurons are capable of communicating directly with
memristors, without the need for computer translation.
|
1402.4031 | Estimation with Strategic Sensors | cs.GT cs.SY math.OC | We introduce a model of estimation in the presence of strategic,
self-interested sensors. We employ a game-theoretic setup to model the
interaction between the sensors and the receiver. The cost function of the
receiver is equal to the estimation error variance while the cost function of
the sensor contains an extra term which is determined by its private
information. We start with the single-sensor case, in which the receiver has
access to noisy but honest side information in addition to the message
transmitted by a strategic sensor. We study both static and dynamic estimation
problems. For both these problems, we characterize a family of equilibria in
which the sensor and the receiver employ simple strategies. Interestingly, for
the dynamic estimation problem, we find an equilibrium for which the strategic
sensor uses a memory-less policy. We generalize the static estimation setup to
multiple sensors with synchronous communication structure (i.e., all the
sensors transmit their messages simultaneously). We prove the perhaps
surprising fact that, for the constructed equilibrium in affine strategies,
the estimation quality degrades as the number of sensors increases. However,
if the sensors are herding (i.e., copying each other's policies), the quality
of the receiver's estimation improves as the number of sensors increases.
Finally, we consider
the asynchronous communication structure (i.e., the sensors transmit their
messages sequentially).
|
1402.4033 | Friendship Prediction in Composite Social Networks | cs.SI physics.soc-ph | Friendship prediction is an important task in social network analysis (SNA).
It can help users identify friends and improve their level of activity. Most
previous approaches predict users' friendship based on their historical
records, such as their existing friendship, social interactions, etc. However,
in reality, most users have limited friends in a single network, and the data
can be very sparse. The sparsity problem causes existing methods to overfit the
rare observations and suffer from serious performance degradation. This is
particularly true when a new social network just starts to form. We observe
that many of today's social networks are composite in nature, where people are
often engaged in multiple networks. In addition, users' friendships across
networks are often correlated; for example, two users may be friends on both
Facebook and Google+. Thus,
by considering those overlapping users as the bridge, the friendship knowledge
in other networks can help predict their friendships in the current network.
This can be achieved by exploiting the knowledge in different networks in a
collective manner. However, as each individual network has its own properties
that can be incompatible and inconsistent with other networks, the naive
merging of all networks into a single one may not work well. The proposed
solution is to extract the common behaviors between different networks via a
hierarchical Bayesian model. It captures the common knowledge across networks,
while avoiding negative impacts due to network differences. Empirical studies
demonstrate that the proposed approach significantly improves the mean average
precision of friendship prediction over state-of-the-art baselines on nine
real-world social networking datasets.
|
1402.4036 | Is Spiking Logic the Route to Memristor-Based Computers? | cs.ET cond-mat.mtrl-sci cs.AR cs.NE | Memristors have been suggested as a novel route to neuromorphic computing
based on the similarity between neurons (synapses and ion pumps) and
memristors. The D.C. action of the memristor is a current spike, which we think
will be fruitful for building memristor computers. In this paper, we introduce
four different logical assignments to implement sequential logic in the memristor
and introduce the physical rules (summation, `bounce-back', directionality and
`diminishing returns') elucidated from our investigations. We then demonstrate
how memristor sequential logic works by instantiating a NOT gate, an AND gate
and a Full Adder with a single memristor. The Full Adder makes use of the
memristor's memory to add three binary values together and outputs the value,
the carry digit and even the order they were input in.
|
1402.4050 | Minority Becomes Majority in Social Networks | cs.GT cs.DS cs.MA cs.SI | It is often observed that agents tend to imitate the behavior of their
neighbors in a social network. This imitating behavior might lead to the
strategic decision of adopting a public behavior that differs from what the
agent believes is the right one and this can subvert the behavior of the
population as a whole.
In this paper, we consider the case in which agents express preferences over
two alternatives and model social pressure with the majority dynamics: at each
step an agent is selected and its preference is replaced by the majority of the
preferences of her neighbors. In case of a tie, the agent does not change her
current preference. A profile of the agents' preferences is stable if the
preference of each agent coincides with the preference of at least half of the
neighbors (thus, the system is in equilibrium).
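The dynamics described above can be sketched in a few lines (our own toy illustration, not code from the paper; the function names and graph representation are assumptions):

```python
def majority_step(prefs, neighbors, node):
    """One step of the majority dynamics: the selected node adopts the
    majority preference of its neighbors; a tie leaves it unchanged."""
    votes = sum(1 if prefs[v] else -1 for v in neighbors[node])
    if votes > 0:
        prefs[node] = True
    elif votes < 0:
        prefs[node] = False
    return prefs

def is_stable(prefs, neighbors):
    """A profile is stable if every node agrees with at least half of
    its neighbors, so no selected node would change its preference."""
    for u, nbrs in neighbors.items():
        agree = sum(prefs[v] == prefs[u] for v in nbrs)
        if 2 * agree < len(nbrs):
            return False
    return True
```

For instance, on a triangle where two agents prefer True, updating the third flips it to True and yields a stable (unanimous) profile.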
We ask whether there are network topologies that are robust to social
pressure. That is, we ask if there are graphs in which the majority of
preferences in an initial profile always coincides with the majority of the
preferences in all stable profiles reachable from that profile. We completely
characterize the graphs with this robustness property by showing that this is
possible only if the graph has no edge or is a clique or very close to a
clique. In other words, except for this handful of graphs, every graph admits
at least one initial profile of preferences in which the majority dynamics can
subvert the initial majority. We also show that deciding whether a graph admits
a minority that becomes a majority is NP-hard when the minority size is at most
one quarter of the social network size.
|
1402.4053 | The Algebraic Approach to Phase Retrieval and Explicit Inversion at the
Identifiability Threshold | math.FA cs.CV cs.IT math.AG math.IT stat.ML | We study phase retrieval from magnitude measurements of an unknown signal as
an algebraic estimation problem. Indeed, phase retrieval from rank-one and more
general linear measurements can be treated in an algebraic way. It is verified
that a certain number of generic rank-one or generic linear measurements are
sufficient to enable signal reconstruction for generic signals, and slightly
more generic measurements yield reconstructability for all signals. Our results
solve a few open problems stated in the recent literature. Furthermore, we show
how the algebraic estimation problem can be solved by a closed-form algebraic
estimation technique, termed ideal regression, providing non-asymptotic success
guarantees.
|
1402.4067 | Statistical Noise Analysis in SENSE Parallel MRI | cs.CV | A complete first and second order statistical characterization of noise in
SENSE reconstructed data is proposed. SENSE acquisitions have usually been
modeled as Rician distributed, since the data reconstruction takes place in
the spatial domain, where Gaussian noise is assumed. However, this model only
holds for the first-order statistics and ignores other effects induced by
coil correlations and the reconstruction interpolation. Those effects are
properly taken into account in this study, in order to fully justify a final
SENSE noise model. As a result, some interesting features of the reconstructed
image arise: (1) There is a strong correlation between adjacent lines. (2) The
resulting distribution is non-stationary and therefore the variance of noise
will vary from point to point across the image. Closed equations for the
calculation of the variance of noise and the correlation coefficient between
lines are proposed. The proposed model is totally compatible with g-factor
formulations.
|
1402.4069 | Application of the Ring Theory in the Segmentation of Digital Images | cs.CV | Ring theory is one of the branches of abstract algebra that has been
broadly applied to image processing. However, ring theory has rarely been applied to
image segmentation. In this paper, we propose a new index of similarity among
images using Zn rings and the entropy function. This new index was applied as a
new stopping criterion for the Mean Shift Iterative Algorithm with the goal of
reaching a better segmentation. An analysis of the performance of the algorithm
with this new stopping criterion is carried out. The results obtained show
that the new index is a suitable tool for comparing images.
|
1402.4073 | Threshold and Symmetric Functions over Bitmaps | cs.DB cs.DS | Bitmap indexes are routinely used to speed up simple aggregate queries in
databases. Set operations such as intersections, unions and complements can be
represented as logical operations (AND, OR, NOT). However, less is known about
the application of bitmap indexes to more advanced queries. We want to extend
the applicability of bitmap indexes. As a starting point, we consider symmetric
Boolean queries (e.g., threshold functions). For example, we might consider
stores as sets of products, and ask for products that are on sale in 2 to 10
stores. Such symmetric Boolean queries generalize intersection, union, and
T-occurrence queries.
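A naive threshold query of this kind can be sketched as follows (our own illustration, not one of the paper's algorithms; Python ints serve as uncompressed bitmaps, and `threshold_query` is an assumed name):

```python
def threshold_query(bitmaps, lo, hi):
    """Return a bitmap of the positions that are set in at least `lo`
    and at most `hi` of the input bitmaps (ints used as bitmaps)."""
    counts = {}
    for bm in bitmaps:
        pos = 0
        while bm:
            if bm & 1:
                counts[pos] = counts.get(pos, 0) + 1
            bm >>= 1
            pos += 1
    out = 0
    for pos, c in counts.items():
        if lo <= c <= hi:
            out |= 1 << pos
    return out
```

With stores as bitmaps over products, `threshold_query(stores, 2, 10)` returns the products on sale in 2 to 10 stores; note the result is itself a bitmap, as the abstract emphasizes.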
It may not be immediately obvious to an engineer how to use bitmap indexes
for symmetric Boolean queries. Yet, maybe surprisingly, we find that the best
of our bitmap-based algorithms are competitive with the state-of-the-art
algorithms for important special cases (e.g., MergeOpt, MergeSkip, DivideSkip,
ScanCount). Moreover, unlike the competing algorithms, the result of our
computation is again a bitmap which can be further processed within a bitmap
index.
We review algorithmic design issues such as the aggregation of many
compressed bitmaps. We conclude with a discussion on other advanced queries
that bitmap indexes might be able to support efficiently.
|
1402.4084 | Selective Sampling with Drift | cs.LG | Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithm for
selective sampling in the drifting setting.
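The round-based query-or-discard loop can be sketched with a generic margin-based sampler (a minimal illustration of the setting only, not the paper's drift-aware algorithm; all names and the 1-D linear model are our assumptions):

```python
def selective_sampler(xs, label_of, b=0.5):
    """Generic margin-based selective sampling on a stream of scalar
    inputs: query the label only when the current model is uncertain."""
    w = 0.0          # scalar weight of a 1-D linear predictor
    queried = 0
    for x in xs:
        margin = w * x
        if abs(margin) <= b:          # low confidence: pay for a label
            queried += 1
            y = label_of(x)           # +1 or -1
            pred = 1 if margin >= 0 else -1
            if pred != y:
                w += y * x            # perceptron update on mistakes
    return w, queried
```

Inputs with a large margin are discarded without querying, which is what keeps the label complexity low.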
|
1402.4100 | Generalized Area Spectral Efficiency: An Effective Performance Metric
for Green Wireless Communications | cs.NI cs.IT math.IT | Area spectral efficiency (ASE) was introduced as a metric to quantify the
spectral utilization efficiency of cellular systems. Unlike other performance
metrics, ASE takes into account the spatial property of cellular systems. In
this paper, we generalize the concept of ASE to study arbitrary wireless
transmissions. Specifically, we introduce the notion of affected area to
characterize the spatial property of arbitrary wireless transmissions. Based on
the definition of affected area, we define the performance metric, generalized
area spectral efficiency (GASE), to quantify the spatial spectral utilization
efficiency as well as the greenness of wireless transmissions. After
illustrating its evaluation for point-to-point transmission, we analyze the
GASE performance of several different transmission scenarios, including
dual-hop relay transmission, three-node cooperative relay transmission and
underlay cognitive radio transmission. We derive closed-form expressions for
the GASE metric of each transmission scenario under Rayleigh fading environment
whenever possible. Through mathematical analysis and numerical examples, we
show that the GASE metric provides a new perspective on the design and
optimization of wireless transmissions, especially on the transmitting power
selection. We also show that introducing relay nodes can greatly improve the
spatial utilization efficiency of wireless systems. We illustrate that the GASE
metric can help optimize the deployment of underlay cognitive radio systems.
|
1402.4101 | First steps to Virtual Mammography: Simulating external compressions of
the breast with the Surface Evolver | cs.CE physics.med-ph | In this paper we introduce a computational model that reproduces the
breast compression processes used to obtain the mammogram. The main result is a
programme in which one can track the first steps of virtual mammography. On the
one hand, our model enables the addition of structures that represent different
tissues, muscles and glands in the breast. On the other hand, we shall validate
and implement it by means of laboratory tests with phantoms. To the best of our
knowledge, these two characteristics confer originality on our research,
since their interrelation does not appear to have been properly established
elsewhere. We conclude that our model reproduces the shapes and
measurements actually taken from the volunteers' breasts.
|
1402.4102 | Stochastic Gradient Hamiltonian Monte Carlo | stat.ME cs.LG stat.ML | Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for
defining distant proposals with high acceptance probabilities in a
Metropolis-Hastings framework, enabling more efficient exploration of the state
space than standard random-walk proposals. The popularity of such methods has
grown significantly in recent years. However, a limitation of HMC methods is
the required gradient computation for simulation of the Hamiltonian dynamical
system; such computation is infeasible in problems involving a large sample size
or streaming data. Instead, we must rely on a noisy gradient estimate computed
from a subset of the data. In this paper, we explore the properties of such a
stochastic gradient HMC approach. Surprisingly, the natural implementation of
the stochastic approximation can be arbitrarily bad. To address this problem we
introduce a variant that uses second-order Langevin dynamics with a friction
term that counteracts the effects of the noisy gradient, maintaining the
desired target distribution as the invariant distribution. Results on simulated
data validate our theory. We also provide an application of our methods to a
classification task using neural networks and to online Bayesian matrix
factorization.
|
1402.4157 | Conservative collision prediction and avoidance for stochastic
trajectories in continuous time and space | cs.AI cs.MA cs.RO | Existing work in multi-agent collision prediction and avoidance typically
assumes discrete-time trajectories with Gaussian uncertainty or that are
completely deterministic. We propose an approach that allows detection of
collisions even between continuous, stochastic trajectories with the only
restriction that means and variances can be computed. To this end, we employ
probabilistic bounds to derive criterion functions whose negative sign is
provably indicative of probable collisions. For criterion functions that are
Lipschitz, an algorithm is provided to rapidly find negative values or prove
their absence. We propose an iterative policy-search approach that avoids prior
discretisations and yields collision-free trajectories with adjustably high
certainty. We test our method with both fixed-priority and auction-based
protocols for coordinating the iterative planning process. Results are provided
in collision-avoidance simulations of feedback controlled plants.
|
1402.4159 | Application of Pseudo-Transient Continuation Method in Dynamic Stability
Analysis | cs.SY | In this paper, the pseudo-transient continuation method is modified and
implemented for power system long-term stability analysis. This method is a
middle ground between integration and steady-state calculation, and thus a good
compromise between accuracy and efficiency. The pseudo-transient continuation
method can be applied directly in the long-term stability model to accelerate
simulation and can also be implemented in the QSS model to overcome
numerical difficulties. Numerical examples show that the pseudo-transient
continuation method can provide correct approximations for the long-term
stability model in terms of trajectories and stability assessment.
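A scalar sketch of pseudo-transient continuation for finding a steady state of dx/dt = f(x) (our own illustration, not the paper's power-system implementation; `ptc_solve` and the step-control rule are assumptions):

```python
def ptc_solve(f, fprime, x0, dt0=0.1, tol=1e-10, max_iter=200):
    """Scalar pseudo-transient continuation: linearized implicit-Euler
    steps (1/dt - J) s = f(x), with the pseudo time step dt enlarged as
    the residual shrinks, blending time integration into Newton's method."""
    x, dt = x0, dt0
    r_prev = abs(f(x))
    for _ in range(max_iter):
        if r_prev < tol:
            break
        s = f(x) / (1.0 / dt - fprime(x))  # update from the linearized step
        x += s
        r = abs(f(x))
        if 0 < r < r_prev:
            dt *= r_prev / r               # switched evolution relaxation
        r_prev = r
    return x
```

Small dt makes the iteration behave like cautious time stepping far from the solution, while growing dt recovers Newton's fast local convergence — the accuracy/efficiency middle ground the abstract describes.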
|
1402.4179 | Network robustness assessed within a dual connectivity perspective | physics.soc-ph cs.SI | Network robustness against attacks has been widely studied in fields as
diverse as the Internet, power grids and human societies. Typically, in these
studies, robustness is assessed only in terms of the connectivity of the nodes
unaffected by the attack. Here we put forward the idea that the connectivity of
the affected nodes can play a crucial role in properly evaluating the overall
network robustness and its future recovery from the attack. Specifically, we
propose a dual perspective approach wherein at any instant in the network
evolution under attack, two distinct networks are defined: (i) the Active
Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN)
composed of the affected nodes. The proposed robustness metric considers both
the efficiency of destroying the AN and the efficiency of building up the IN.
We show, via analysis of both prototype networks and real world data, that
trade-offs between the efficiency of Active and Idle network dynamics give rise
to surprising crossovers and re-ranking of different attack strategies,
pointing to significant implications for decision making.
|
1402.4225 | Information Theory of Matrix Completion | cs.IT math.IT | Matrix completion is a fundamental problem that comes up in a variety of
applications like the Netflix problem, collaborative filtering, computer
vision, and crowdsourcing. The goal of the problem is to recover a k-by-n
unknown matrix from a subset of its noiseless (or noisy) entries. We define an
information-theoretic notion of completion capacity C that quantifies the
maximum number of entries that one observation of an entry can resolve. This
number provides the minimum number m of entries required for reliable
reconstruction: m=kn/C. Translating the problem into a distributed joint
source-channel coding problem with encoder restriction, we characterize the
completion capacity for a wide class of stochastic models of the unknown matrix
and the observation process. Our achievability proof is inspired by that of the
Slepian-Wolf theorem. For an arbitrary stochastic matrix, we derive an upper
bound on the completion capacity.
|
1402.4238 | Downlink and Uplink Energy Minimization Through User Association and
Beamforming in Cloud RAN | cs.IT math.IT | The cloud radio access network (C-RAN) concept, in which densely deployed
access points (APs) are empowered by cloud computing to cooperatively support
mobile users (MUs), to improve mobile data rates, has been recently proposed.
However, the high density of active ("on") APs results in severe interference
and also inefficient energy consumption. Moreover, the growing popularity of
highly interactive applications with stringent uplink (UL) requirements, e.g.
network gaming and real-time broadcasting by wireless users, means that the UL
transmission is becoming more crucial and requires special attention. Therefore
in this paper, we propose a joint downlink (DL) and UL MU-AP association and
beamforming design to coordinate interference in the C-RAN for energy
minimization, a problem which is shown to be NP-hard. Due to the new
consideration of UL transmission, it is shown that the two state-of-the-art
approaches for finding computationally efficient solutions of joint MU-AP
association and beamforming considering only the DL, i.e., group-sparse
optimization and relaxed-integer programming, cannot be modified in a
straightforward way to solve our problem. Leveraging the celebrated UL-DL
duality result, we show that by establishing a virtual DL transmission for the
original UL transmission, the joint DL and UL optimization problem can be
converted to an equivalent DL problem in C-RAN with two inter-related
subproblems for the original and virtual DL transmissions, respectively. Based
on this transformation, two efficient algorithms for joint DL and UL MU-AP
association and beamforming design are proposed, whose performances are
evaluated and compared with other benchmarking schemes through extensive
simulations.
|
1402.4246 | Precoding by Priority: A UEP Scheme for RaptorQ Codes | cs.IT math.IT | Raptor codes are the first class of fountain codes with linear time encoding
and decoding. These codes are recommended in standards such as Third Generation
Partnership Project (3GPP) and digital video broadcasting. RaptorQ codes are an
extension to Raptor codes, having better coding efficiency and flexibility.
Standard Raptor and RaptorQ codes are systematic with equal error protection of
the data. However, in many applications such as MPEG transmission, there is a
need for Unequal Error Protection (UEP): namely, some data symbols require
higher error correction capabilities compared to others. We propose an approach
that we call Priority Based Precode Ratio (PBPR) to achieve UEP for systematic
RaptorQ and Raptor codes. Our UEP assumes that all symbols in a source block
belong to the same importance class. The UEP is achieved by changing the number
of precode symbols depending on the priority of the information symbols in the
source block. PBPR provides UEP with the same number of decoding overhead
symbols for source blocks with different importance classes. We demonstrate
consistent improvements in the error correction capability of higher importance
class compared to the lower importance class across the entire range of channel
erasure probabilities. We also show that PBPR does not result in a significant
increase in decoding and encoding times compared to the standard
implementation.
|
1402.4259 | Extracting Networks of Characters and Places from Written Works with
CHAPLIN | cs.CY cs.CL | We are proposing a tool able to gather information on social networks from
narrative texts. Its name is CHAPLIN, CHAracters and PLaces Interaction
Network, implemented in VB.NET. Characters and places of the narrative works
are extracted into a list of raw words. Aided by the interface, the user selects
names from this list. After this choice, the tool allows the user to enter some
parameters, and, according to them, creates a network where the nodes are the
characters and places, and the edges their interactions. Edges are labelled by
performances. The output is a GV file, written in the DOT graph scripting
language, which is rendered by means of the free open source software Graphviz.
|
1402.4279 | A Bayesian Model of node interaction in networks | cs.LG stat.ME stat.ML | We are concerned with modeling the strength of links in networks by taking
into account how often those links are used. Link usage is a strong indicator
of how closely two nodes are related, but existing network models in Bayesian
Statistics and Machine Learning are able to predict only whether a link exists
at all. As priors for latent attributes of network nodes we explore the Chinese
Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.
The model is applied to a social network dataset and a word co-occurrence
dataset.
|
1402.4283 | Discretization of Temporal Data: A Survey | cs.DB cs.LG | In the real world, huge amounts of temporal data must be processed in many
application areas such as science, finance, network monitoring, and sensor
data analysis. Data mining techniques are primarily oriented to handle discrete
features. In the case of temporal data, time plays an important role in the
characteristics of the data. To account for this effect, discretization
techniques have to consider time during processing, finding intervals of data
that are more concise and precise with respect to time. Here, we review
different data discretization techniques used in temporal data applications
according to their inclusion or exclusion of the class label, the temporal
order of the data, and the handling of stream data, in order to open research
directions for temporal data discretization and improve the performance of
data mining techniques.
|
1402.4293 | The Random Forest Kernel and other kernels for big data from random
partitions | stat.ML cs.LG | We present Random Partition Kernels, a new class of kernels derived by
demonstrating a natural connection between random partitions of objects and
kernels between those objects. We show how the construction can be used to
create kernels from methods that would not normally be viewed as random
partitions, such as Random Forest. To demonstrate the potential of this method,
we propose two new kernels, the Random Forest Kernel and the Fast Cluster
Kernel, and show that these kernels consistently outperform standard kernels on
problems involving real-world datasets. Finally, we show how the form of these
kernels lends itself to a natural approximation that is appropriate for
certain big data problems, allowing $O(N)$ inference in methods such as
Gaussian Processes, Support Vector Machines and Kernel PCA.
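The construction can be sketched as follows (a minimal illustration under our own assumptions: nearest-pivot partitions of 1-D points stand in for actual Random Forest leaf assignments, and all names are ours):

```python
import random

def random_partition(points, k):
    """Assign each point to the nearest of k randomly chosen pivots,
    yielding one random partition of the data."""
    pivots = random.sample(points, k)
    return [min(range(k), key=lambda j: abs(p - pivots[j])) for p in points]

def partition_kernel(points, i, j, k=2, trials=200):
    """K(i, j) = fraction of random partitions placing points i and j in
    the same block -- positive semidefinite by construction."""
    same = 0
    for _ in range(trials):
        labels = random_partition(points, k)
        same += labels[i] == labels[j]
    return same / trials
```

A practical implementation would draw the partitions once and reuse them across all pairs; nearby points end up in the same block more often, so they receive a larger kernel value.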
|
1402.4303 | Finding Preference Profiles of Condorcet Dimension $k$ via SAT | cs.MA cs.AI cs.LO | Condorcet winning sets are a set-valued generalization of the well-known
concept of a Condorcet winner. As supersets of Condorcet winning sets are
always Condorcet winning sets themselves, an interesting property of preference
profiles is the size of the smallest Condorcet winning set they admit. This
smallest size is called the Condorcet dimension of a preference profile. Since
little is known about profiles that have a certain Condorcet dimension, we show
in this paper how the problem of finding a preference profile that has a given
Condorcet dimension can be encoded as a satisfiability problem and solved by a
SAT solver. Initial results include a minimal example of a preference profile
of Condorcet dimension 3, improving previously known examples both in terms of
the number of agents as well as alternatives. Due to the high complexity of
such problems it remains open whether a preference profile of Condorcet
dimension 4 exists.
|
1402.4304 | Automatic Construction and Natural-Language Description of Nonparametric
Regression Models | stat.ML cs.LG | This paper presents the beginnings of an automatic statistician, focusing on
regression problems. Our system explores an open-ended space of statistical
models to discover a good explanation of a data set, and then produces a
detailed report with figures and natural-language text. Our approach treats
unknown regression functions nonparametrically using Gaussian processes, which
has two important consequences. First, Gaussian processes can model functions
in terms of high-level properties (e.g. smoothness, trends, periodicity,
changepoints). Taken together with the compositional structure of our language
of models this allows us to automatically describe functions in simple terms.
Second, the use of flexible nonparametric models and a rich language for
composing them in an open-ended manner also results in state-of-the-art
extrapolation performance evaluated over 13 real time series data sets from
various domains.
|
1402.4306 | Student-t Processes as Alternatives to Gaussian Processes | stat.ML cs.AI cs.LG stat.ME | We investigate the Student-t process as an alternative to the Gaussian
process as a nonparametric prior over functions. We derive closed form
expressions for the marginal likelihood and predictive distribution of a
Student-t process, by integrating away an inverse Wishart process prior over
the covariance kernel of a Gaussian process model. We show surprising
equivalences between different hierarchical Gaussian process models leading to
Student-t processes, and derive a new sampling scheme for the inverse Wishart
process, which helps elucidate these equivalences. Overall, we show that a
Student-t process can retain the attractive properties of a Gaussian process --
a nonparametric representation, analytic marginal and predictive distributions,
and easy model selection through covariance kernels -- but has enhanced
flexibility, and predictive covariances that, unlike a Gaussian process,
explicitly depend on the values of training observations. We verify empirically
that a Student-t process is especially useful in situations where there are
changes in covariance structure, or in applications like Bayesian optimization,
where accurate predictive covariances are critical for good performance. These
advantages come at no additional computational cost over Gaussian processes.
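A scalar analogue of the scale-mixture construction can illustrate the idea (our own sketch; the paper integrates out an inverse Wishart process over the covariance kernel, while here a single inverse-gamma scale plays that role):

```python
import random, math

def student_t_sample(nu):
    """Draw a Student-t variate with nu degrees of freedom via the
    scale-mixture construction: a chi-square-distributed scale divided
    out of a standard Gaussian gives heavier-than-Gaussian tails."""
    g = random.gammavariate(nu / 2.0, 2.0)   # chi-square with nu dof
    return random.gauss(0.0, 1.0) / math.sqrt(g / nu)
```

The marginal has mean 0 and variance nu / (nu - 2) for nu > 2, so for nu = 10 the sample variance should concentrate near 1.25.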
|
1402.4308 | Lossy Source Coding with Reconstruction Privacy | cs.IT math.IT | We consider the problem of lossy source coding with side information under a
privacy constraint that the reconstruction sequence at a decoder should be kept
secret to a certain extent from another terminal such as an eavesdropper, a
sender, or a helper. We are interested in how the reconstruction privacy
constraint at a particular terminal affects the rate-distortion tradeoff. In
this work, we allow the decoder to use a random mapping, and give inner and
outer bounds to the rate-distortion-equivocation region for different cases
where the side information is available non-causally and causally at the
decoder. In the special case where each reconstruction symbol depends only on
the source description and current side information symbol, the complete
rate-distortion-equivocation region is provided. A binary example illustrating
a new tradeoff due to the new privacy constraint, and a gain from the use of a
stochastic decoder is given.
|
1402.4310 | Distributed Storage over Unidirectional Ring Networks | cs.IT math.IT | In this paper, we study distributed storage problems over unidirectional ring
networks, whose storage nodes form a directed ring and data is transmitted
along the same direction. The original data is distributed to store on these
nodes. Each user can connect to one and only one storage node to download the
total data. A lower bound on the reconstructing bandwidth to recover the
original data for each user is proposed, and it is achievable for arbitrary
parameters. If a distributed storage scheme can achieve this lower bound with
equality for every user, we call it an optimal reconstructing distributed
storage scheme (ORDSS). Furthermore, we consider the repair problem for a failed
storage node in ORDSSes and obtain a tight lower bound on the repair
bandwidth. In particular, we show that for any ORDSS,
every storage node can be repaired with repair bandwidth achieving the lower
bound with equality. In addition, we present two constructions for ORDSSes of
arbitrary parameters, called MDS construction and ED construction,
respectively. In particular, the ED construction, which uses the concept of
Euclidean division, is shown to be more efficient by our detailed analysis.
|
1402.4322 | On the properties of $\alpha$-unchaining single linkage hierarchical
clustering | cs.LG | In the selection of a hierarchical clustering method, theoretical properties may
give some insight into which method is the most suitable for treating a
clustering problem. Herein, we study some basic properties of two hierarchical
clustering methods: $\alpha$-unchaining single linkage or $SL(\alpha)$ and a
modified version of this one, $SL^*(\alpha)$. We compare the results with the
properties satisfied by the classical linkage-based hierarchical clustering
methods.
|
1402.4325 | Rich-cores in networks | physics.soc-ph cs.SI | A core is said to be a group of central and densely connected nodes which
governs the overall behavior of a network. Profiling this meso-scale structure
currently relies on a limited number of methods which are often complex, and
have scalability issues when dealing with very large networks. As a result, we
are yet to fully understand its impact on network properties and dynamics. Here
we introduce a simple method to profile this structure by combining the
concepts of core/periphery and rich-club. The key challenge in addressing such
association of the two concepts is to establish a way to define the membership
of the core. The notion of a "rich-club" describes nodes which are essentially
the hub of a network, as they play a dominating role in structural and
functional properties. Interestingly, the definition of a rich-club naturally
emphasizes high degree nodes and divides a network into two subgroups. Our
approach theoretically couples the underlying principle of a rich-club with the
escape time of a random walker, and a rich-core is defined by examining changes
in the associated persistence probability. The method is fast and scalable to
large networks. In particular, we successfully show that the evolution of the
core in the \emph{C. elegans} and World Trade networks corresponds to key
development stages and responses to historical events, respectively.
|
1402.4353 | Communication and Interference Coordination | cs.IT math.IT | We study the problem of controlling the interference created to an external
observer by a communication process. We model the interference in terms of
its type (empirical distribution), and we analyze the consequences of placing
constraints on the admissible type. Considering a single interfering link, we
characterize the communication-interference capacity region. Then, we look at a
scenario where the interference is jointly created by two users allowed to
coordinate their actions prior to transmission. In this case, the trade-off
involves communication and interference as well as coordination. We establish
an achievable communication-interference region and show that efficiency is
significantly improved by coordination.
|
1402.4354 | Hybrid SRL with Optimization Modulo Theories | cs.LG stat.ML | Generally speaking, the goal of constructive learning could be seen as follows:
given an example set of structured objects, generate novel objects with similar
properties. From a statistical-relational learning (SRL) viewpoint, the task
can be interpreted as a constraint satisfaction problem, i.e. the generated
objects must obey a set of soft constraints, whose weights are estimated from
the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL)
as a description language, and on MAX-SAT solvers to perform inference. Alas,
FOL is unsuited for constructive problems where the objects contain a mixture
of Boolean and numerical variables. It is in fact difficult to implement, e.g.,
linear arithmetic constraints within the language of FOL. In this paper we
propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo
Theories, an alternative class of formal languages that allow one to describe,
and reason over, mixed Boolean-numerical objects and constraints. The resulting
methods, which we call Learning Modulo Theories, are formulated within the
structured-output SVM framework, and employ a weighted SMT solver as an
optimization oracle to perform efficient inference and discriminative
max-margin weight learning. We also present a few examples of constructive learning
applications enabled by our method.
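Learning Modulo Theories delegates inference to a weighted SMT solver; as a stdlib-only stand-in (a brute-force toy, not the paper's method, with hypothetical constraints), weighted soft-constraint inference over mixed Boolean/numeric objects can be sketched as:

```python
import itertools

def weighted_score(obj, soft_constraints):
    """Sum the weights of the soft constraints that an object satisfies."""
    return sum(w for (c, w) in soft_constraints if c(obj))

def infer(candidates, soft_constraints):
    """Return the candidate maximizing weighted satisfaction (the role the
    weighted SMT solver plays in the real system)."""
    return max(candidates, key=lambda o: weighted_score(o, soft_constraints))

# Hypothetical soft constraints over mixed (flag, length) objects:
soft = [
    (lambda o: o[0], 2.0),                    # Boolean constraint, weight 2.0
    (lambda o: o[1] >= 3, 1.5),               # linear-arithmetic constraint
    (lambda o: not o[0] and o[1] < 2, 1.0),
]
candidates = list(itertools.product([False, True], [0, 1, 2, 3, 4]))
best = infer(candidates, soft)
print(best)  # → (True, 3)
```

A real implementation would pass the theory constraints to an SMT solver instead of enumerating candidates.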
|
1402.4360 | An Elementary Completeness Proof for Secure Two-Party Computation
Primitives | cs.CR cs.IT math.IT | In the secure two-party computation problem, two parties wish to compute a
(possibly randomized) function of their inputs via an interactive protocol,
while ensuring that neither party learns more than what can be inferred from
only their own input and output. For semi-honest parties and
information-theoretic security guarantees, it is well-known that, if only
noiseless communication is available, only a limited set of functions can be
securely computed; however, if interaction is also allowed over general
communication primitives (multi-input/output channels), there are "complete"
primitives that enable any function to be securely computed. The general set of
complete primitives was characterized recently by Maji, Prabhakaran, and
Rosulek leveraging an earlier specialized characterization by Kilian. Our
contribution in this paper is a simple, self-contained, alternative derivation
using elementary information-theoretic tools.
|
1402.4371 | A convergence proof of the split Bregman method for regularized
least-squares problems | math.OC cs.LG stat.ML | The split Bregman (SB) method [T. Goldstein and S. Osher, SIAM J. Imaging
Sci., 2 (2009), pp. 323-43] is a fast splitting-based algorithm that solves
image reconstruction problems with general l1 regularizations, e.g.,
total-variation (TV) and compressed sensing (CS), by introducing a single variable split
to decouple the data-fitting term and the regularization term, yielding simple
subproblems that are separable (or partially separable) and easy to minimize.
Several convergence proofs have been proposed, and these proofs either impose a
"full column rank" assumption to the split or assume exact updates in all
subproblems. However, these assumptions are impractical in many applications
such as the X-ray computed tomography (CT) image reconstructions, where the
inner least-squares problem usually cannot be solved efficiently due to the
highly shift-variant Hessian. In this paper, we show that when the data-fitting
term is quadratic, the SB method is a convergent alternating direction method
of multipliers (ADMM), and a straightforward convergence proof with inexact
updates is given using [J. Eckstein and D. P. Bertsekas, Mathematical
Programming, 55 (1992), pp. 293-318, Theorem 8]. Furthermore, since the SB
method is just a special case of an ADMM algorithm, it seems likely that the
ADMM algorithm will be faster than the SB method if the augmented Lagrangian
(AL) penalty parameters are selected appropriately. To have a concrete example,
we conduct a convergence rate analysis of the ADMM algorithm using two splits
for image restoration problems with quadratic data-fitting term and
regularization term. According to our analysis, we can show that the two-split
ADMM algorithm can be faster than the SB method if the AL penalty parameter of
the SB method is suboptimal. Numerical experiments were conducted to verify our
analysis.
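As a toy illustration of the SB/ADMM equivalence for a quadratic data-fitting term (a scalar sketch with our own variable names, not the paper's CT setting), consider min_x 0.5(x - b)^2 + lam*|x|, whose exact minimizer is the soft-threshold of b:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.| -- the shrinkage step in SB/ADMM."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_lasso_scalar(b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*(x - b)^2 + lam*|x| with the single split x = z.
    With a quadratic data-fitting term this coincides with the split
    Bregman iteration; the closed-form answer is soft_threshold(b, lam)."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)    # shrinkage subproblem
        u += x - z                              # dual (Bregman) update
    return z

print(round(admm_lasso_scalar(3.0, 1.0), 6))  # → 2.0 (= soft_threshold(3.0, 1.0))
```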
|
1402.4380 | A Comparative Study of Machine Learning Methods for Verbal Autopsy Text
Classification | cs.CL | A Verbal Autopsy is the record of an interview about the circumstances of an
uncertified death. In developing countries, if a death occurs away from health
facilities, a field-worker interviews a relative of the deceased about the
circumstances of the death; this Verbal Autopsy can be reviewed off-site. We
report on a comparative study of the processes involved in Text Classification
applied to classifying Cause of Death: feature value representation; machine
learning classification algorithms; and feature reduction strategies in order
to identify the suitable approaches applicable to the classification of Verbal
Autopsy text. We demonstrate that normalised term frequency and the standard
TFiDF achieve comparable performance across a number of classifiers. The
results also show that the Support Vector Machine is superior to the other
classification algorithms employed in this research. Finally, we demonstrate
the effectiveness of employing a "locally-semi-supervised" feature reduction
strategy to increase accuracy.
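The standard TF-IDF weighting mentioned above (normalised term frequency times inverse document frequency) can be computed as follows; the vocabulary and documents are hypothetical, not the Verbal Autopsy data:

```python
import math
from collections import Counter

def tfidf(docs):
    """Normalised term frequency times inverse document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        total = len(d)
        out.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Hypothetical tokenized interview records:
docs = [["fever", "cough", "fever"], ["cough", "injury"], ["fever", "injury"]]
vecs = tfidf(docs)
print(round(vecs[0]["fever"], 3))  # → 0.27
```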
|
1402.4381 | Fast X-ray CT image reconstruction using the linearized augmented
Lagrangian method with ordered subsets | math.OC cs.LG stat.ML | The augmented Lagrangian (AL) method that solves convex optimization problems
with linear constraints has drawn more attention recently in imaging
applications due to its decomposable structure for composite cost functions and
empirical fast convergence rate under weak conditions. However, for problems
such as X-ray computed tomography (CT) image reconstruction and large-scale
sparse regression with "big data", where there is no efficient way to solve the
inner least-squares problem, the AL method can be slow due to the inevitable
iterative inner updates. In this paper, we focus on solving regularized
(weighted) least-squares problems using a linearized variant of the AL method
that replaces the quadratic AL penalty term in the scaled augmented Lagrangian
with its separable quadratic surrogate (SQS) function, thus leading to a much
simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM,
for X-ray CT image reconstruction. To further accelerate the proposed
algorithm, we use a second-order recursive system analysis to design a
deterministic downward continuation approach that avoids tedious parameter
tuning and provides fast convergence. Experimental results show that the
proposed algorithm significantly accelerates the "convergence" of X-ray CT
image reconstruction with negligible overhead and greatly reduces the OS
artifacts in the reconstructed image when using many subsets for OS
acceleration.
|
1402.4385 | Estimating the resolution limit of the map equation in community
detection | physics.soc-ph cs.SI | A community detection algorithm is considered to have a resolution limit if
the scale of the smallest modules that can be resolved depends on the size of
the analyzed subnetwork. The resolution limit is known to prevent some
community detection algorithms from accurately identifying the modular
structure of a network. In fact, any global objective function for measuring
the quality of a two-level assignment of nodes into modules must have some sort
of resolution limit or an external resolution parameter. However, it is yet
unknown how the resolution limit affects the so-called map equation, which is
known to be an efficient objective function for community detection. We derive
an analytical estimate and conclude that the resolution limit of the map
equation is set by the total number of links between modules instead of the
total number of links in the full network as for modularity. This mechanism
makes the resolution limit much less restrictive for the map equation than for
modularity, and in practice orders of magnitude smaller. Furthermore, we argue
that the effect of the resolution limit often results from shoehorning
multi-level modular structures into two-level descriptions. As we show, the
hierarchical map equation effectively eliminates the resolution limit for
networks with nested multi-level modular structures.
|
1402.4388 | Automatic Detection of Font Size Straight from Run Length Compressed
Text Documents | cs.CV | Automatic detection of font size finds many applications in the area of
intelligent OCRing and document image analysis, which has been traditionally
practiced over uncompressed documents, although in real life the documents
exist in compressed form for efficient storage and transmission. It would be
novel and intelligent if the task of font size detection could be carried out
directly from the compressed data of these documents without decompressing,
which would save a considerable amount of processing time and space.
Therefore, in this paper we present a novel idea of learning and
detecting font size directly from run-length compressed text documents at line
level using simple line height features, which paves the way for intelligent
OCRing and document analysis directly from compressed documents. In the
proposed model, the given mixed-case text documents of different font size are
segmented into compressed text lines and the features extracted such as line
height and ascender height are used to capture the pattern of font size in the
form of a regression line, using which the automatic detection of font size is
done during the recognition stage. The method is experimented with a dataset of
50 compressed documents consisting of 780 text lines of single font size and
375 text lines of mixed font size resulting in an overall accuracy of 99.67%.
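The regression-line idea at line level can be sketched as follows; the heights and sizes below are hypothetical pairs, not the paper's dataset:

```python
def fit_line(xs, ys):
    """Ordinary least-squares regression line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training pairs: (compressed line height in pixels, font size in points)
heights = [20, 26, 33, 40, 46]
sizes = [10, 13, 16, 20, 23]
a, b = fit_line(heights, sizes)

def detect_font_size(line_height):
    """Recognition stage: read the font size off the regression line."""
    return round(a * line_height + b)

print(detect_font_size(33))  # → 16 for the hypothetical data above
```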
|
1402.4413 | Towards Ultra Rapid Restarts | cs.AI cs.LO | We observe a trend regarding restart strategies used in SAT solvers. A few
years ago, most state-of-the-art solvers restarted on average after a few
thousands of backtracks. Currently, restarting after a dozen backtracks results
in much better performance. The main reason for this trend is that heuristics
and data structures have become more restart-friendly. We expect further
continuation of this trend, so future SAT solvers will restart even more
rapidly. Additionally, we present experimental results to support our
observations.
|
1402.4417 | Incremental Entity Resolution from Linked Documents | cs.DB cs.IR | In many government applications we often find that information about
entities, such as persons, is available in disparate data sources such as
passports, driving licences, bank accounts, and income tax records. Similar
scenarios are commonplace in large enterprises having multiple customer,
supplier, or partner databases. Each data source maintains different aspects of
an entity, and resolving entities based on these attributes is a well-studied
problem. However, in many cases documents in one source reference those in
others; e.g., a person may provide his driving-licence number while applying
for a passport, or vice-versa. These links define relationships between
documents of the same entity (as opposed to inter-entity relationships, which
are also often used for resolution). In this paper we describe an algorithm to
cluster documents that are highly likely to belong to the same entity by
exploiting inter-document references in addition to attribute similarity. Our
technique uses a combination of iterative graph-traversal, locality-sensitive
hashing, iterative match-merge, and graph-clustering to discover unique
entities based on a document corpus. A unique feature of our technique is that
new sets of documents can be added incrementally while having to re-resolve
only a small subset of a previously resolved entity-document collection. We
present performance and quality results on two data-sets: a real-world database
of companies and a large synthetically generated `population' database. We also
demonstrate the benefit of using inter-document references for clustering in the
form of enhanced recall of documents for resolution.
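The graph-traversal core of the clustering (merging documents connected by inter-document references) can be sketched with a disjoint-set structure; the paper's full pipeline additionally uses locality-sensitive hashing and iterative match-merge, and the documents below are hypothetical:

```python
class UnionFind:
    """Disjoint sets; used here to merge documents that reference one another."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Hypothetical inter-document references (e.g., a passport application
# citing a driving-licence number):
references = [("passport:P1", "licence:L9"), ("licence:L9", "tax:T4"),
              ("passport:P2", "tax:T7")]
uf = UnionFind()
for a, b in references:
    uf.union(a, b)

same = uf.find("passport:P1") == uf.find("tax:T4")
print(same)  # → True: linked transitively through licence:L9
```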
|
1402.4419 | Incremental Majorization-Minimization Optimization with Application to
Large-Scale Machine Learning | math.OC cs.LG stat.ML | Majorization-minimization algorithms consist of successively minimizing a
sequence of upper bounds of the objective function. These upper bounds are
tight at the current estimate, and each iteration monotonically drives the
objective function downhill. Such a simple principle is widely applicable and
has been very popular in various scientific fields, especially in signal
processing and statistics. In this paper, we propose an incremental
majorization-minimization scheme for minimizing a large sum of continuous
functions, a problem of utmost importance in machine learning. We present
convergence guarantees for non-convex and convex optimization when the upper
bounds approximate the objective up to a smooth error; we call such upper
bounds "first-order surrogate functions". More precisely, we study asymptotic
stationary point guarantees for non-convex problems, and for convex ones, we
provide convergence rates for the expected objective function value. We apply
our scheme to composite optimization and obtain a new incremental proximal
gradient algorithm with linear convergence rate for strongly convex functions.
In our experiments, we show that our method is competitive with the state of
the art for solving machine learning problems such as logistic regression when
the number of training samples is large enough, and we demonstrate its
usefulness for sparse estimation with non-convex penalties.
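The majorization-minimization principle with a first-order surrogate can be sketched as follows (a batch toy, not the paper's incremental scheme; the labels are hypothetical). The surrogate is the Lipschitz quadratic upper bound, whose minimizer is a gradient step:

```python
import math

# Minimize f(x) = sum_i log(1 + exp(-y_i * x)) by repeatedly minimizing
# the surrogate g(x; xk) = f(xk) + f'(xk)(x - xk) + (L/2)(x - xk)^2,
# which is tight at xk and majorizes f; its minimizer is xk - f'(xk)/L.

ys = [1.0, 1.0, -1.0]      # hypothetical labels
L = 0.25 * len(ys)         # each logistic term has a 1/4-Lipschitz gradient

def f(x):
    return sum(math.log1p(math.exp(-y * x)) for y in ys)

def grad(x):
    return sum(-y / (1.0 + math.exp(y * x)) for y in ys)

x = 0.0
for _ in range(500):
    x = x - grad(x) / L    # minimize the surrogate, tight at the current x

print(round(x, 3), f(x) < f(0.0))  # → 0.693 True
```

Each iteration drives the objective monotonically downhill, exactly the property the abstract describes.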
|
1402.4423 | New Method for Accurate Parameter Estimation of Induction Motors Based
on Artificial Bee Colony Algorithm | cs.SY | This paper proposes an effective method for estimating the parameters of
double-cage induction motors using the Artificial Bee Colony (ABC) algorithm.
For this purpose, the unknown parameters in the electrical model of the
asynchronous machine are calculated such that the sum of squared differences
between the full-load torques, starting torques, maximum torques, starting
currents, full-load currents, and nominal power factors obtained from the
model and those provided by the manufacturer is minimized. To confirm the
efficiency of the proposed method, the results are also compared with those
achieved by GA, PSO, and
PAMP. The simulations show that in the problem under consideration ABC
converges considerably faster than other algorithms and the results are as
accurate as PAMP.
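A minimal ABC optimizer can be sketched as below; this is a generic toy (colony size, limit, and the quadratic misfit standing in for the motor-model error are all our hypothetical choices, not the paper's settings):

```python
import random

def abc_minimize(f, bounds, n_bees=20, limit=10, iters=200, seed=1):
    """Minimal Artificial Bee Colony sketch: each bee perturbs its food
    source relative to a random neighbour and keeps the move greedily;
    sources that fail to improve `limit` times are abandoned (scouted)."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_pt = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    xs = [rand_pt() for _ in range(n_bees)]
    fs = [f(x) for x in xs]
    trials = [0] * n_bees
    best_x, best_f = min(zip(xs, fs), key=lambda t: t[1])
    for _ in range(iters):
        for i in range(n_bees):
            j = rng.randrange(dim)
            k = rng.choice([m for m in range(n_bees) if m != i])
            cand = xs[i][:]
            cand[j] += rng.uniform(-1, 1) * (xs[i][j] - xs[k][j])
            cand[j] = min(max(cand[j], bounds[j][0]), bounds[j][1])
            fc = f(cand)
            if fc < fs[i]:                    # greedy source update
                xs[i], fs[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
                if trials[i] > limit:         # scout: abandon the source
                    xs[i] = rand_pt()
                    fs[i] = f(xs[i])
                    trials[i] = 0
            if fs[i] < best_f:
                best_x, best_f = xs[i], fs[i]
    return best_x, best_f

# Hypothetical least-squares misfit standing in for the torque/current errors:
target = [2.0, -1.0]
misfit = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target))
x, fx = abc_minimize(misfit, [(-5.0, 5.0), (-5.0, 5.0)])
print(x, fx)
```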
|
1402.4437 | Learning the Irreducible Representations of Commutative Lie Groups | cs.LG | We present a new probabilistic model of compact commutative Lie groups that
produces invariant-equivariant and disentangled representations of data. To
define the notion of disentangling, we borrow a fundamental principle from
physics that is used to derive the elementary particles of a system from its
symmetries. Our model employs a newfound Bayesian conjugacy relation that
enables fully tractable probabilistic inference over compact commutative Lie
groups -- a class that includes the groups that describe the rotation and
cyclic translation of images. We train the model on pairs of transformed image
patches, and show that the learned invariant representation is highly effective
for classification.
|
1402.4442 | Artificial Mutation inspired Hyper-heuristic for Runtime Usage of
Multi-objective Algorithms | cs.SE cs.NE | In the last years, multi-objective evolutionary algorithms (MOEA) have been
applied to different software engineering problems where many conflicting
objectives have to be optimized simultaneously. In theory, evolutionary
algorithms feature a nice property for runtime optimization as they can provide
a solution at any point in the execution. In practice, being based on
Darwinian-inspired natural selection, these evolutionary algorithms produce
many dead-born solutions whose computation wastes computational resources:
natural selection is naturally slow. In this paper, we reconsider this founding
analogy to accelerate the convergence of MOEAs by looking at modern biology
studies: artificial selection has been used to achieve an anticipated, specific
purpose instead of relying only on crossover and natural selection (e.g.,
Muller et al.'s [18] research on artificial mutation of fruit flies with
X-rays). Putting aside the analogy with natural selection, the present paper
proposes a hyper-heuristic for MOEAs, named Sputnik, that uses artificial
selective mutation to improve the convergence speed of MOEA. Sputnik leverages
the past history of mutation efficiency to select the most relevant mutations
to perform. We evaluate Sputnik on a cloud-reasoning engine, which drives
on-demand provisioning while considering conflicting performance and cost
objectives. We have conducted experiments to highlight the significant
performance improvement of Sputnik in terms of resolution time.
|
1402.4455 | Symbiosis of Search and Heuristics for Random 3-SAT | cs.DS cs.AI | When combined properly, search techniques can reveal the full potential of
sophisticated branching heuristics. We demonstrate this observation on the
well-known class of random 3-SAT formulae. First, a new branching heuristic is
presented, which generalizes existing work on this class. Much smaller search
trees can be constructed by using this heuristic. Second, we introduce a
variant of discrepancy search, called ALDS. Theoretical and practical evidence
support that ALDS traverses the search tree in a near-optimal order when
combined with the new heuristic. Both techniques, search and heuristic, have
been implemented in the look-ahead solver march. The SAT 2009 competition
results show that march is by far the strongest complete solver on random k-SAT
formulae.
|
1402.4465 | Concurrent Cube-and-Conquer | cs.DS cs.AI | Recent work introduced the cube-and-conquer technique to solve hard SAT
instances. It partitions the search space into cubes using a lookahead solver.
Each cube is tackled by a conflict-driven clause learning (CDCL) solver.
Crucial for strong performance is the cutoff heuristic that decides when to
switch from lookahead to CDCL. Yet, this offline heuristic is far from ideal.
In this paper, we present a novel hybrid solver that applies the cube and
conquer steps simultaneously. A lookahead and a CDCL solver work together on
each cube, while communication is restricted to synchronization. Our concurrent
cube-and-conquer solver can solve many instances faster than pure lookahead,
pure CDCL and offline cube-and-conquer, and can abort early in favor of a pure
CDCL search if an instance is not suitable for cube-and-conquer techniques.
|
1402.4466 | Compressed bitmap indexes: beyond unions and intersections | cs.DB cs.DS | Compressed bitmap indexes are used to speed up simple aggregate queries in
databases. Indeed, set operations like intersections, unions and complements
can be represented as logical operations (AND,OR,NOT) that are ideally suited
for bitmaps. However, it is less obvious how to apply bitmaps to more advanced
queries. For example, we might seek products in a store that meet some, but
maybe not all, criteria. Such threshold queries generalize intersections and
unions; they are often used in information-retrieval and data-mining
applications. We introduce new algorithms that are sometimes three orders of
magnitude faster than a naive approach. Our work shows that bitmap indexes are
more broadly applicable than is commonly believed.
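The paper's algorithms are considerably more sophisticated, but the threshold query itself ("which positions are set in at least T of N bitmaps?") can be sketched with Python integers as uncompressed bitmaps and a small bit-sliced counter:

```python
def threshold_query(bitmaps, T):
    """Bitmap of the positions set in at least T of the input bitmaps.
    A ripple-carry bit-sliced adder accumulates per-position counts:
    counters[i] holds the 2^i bit of every position's count."""
    counters = []
    for bm in bitmaps:
        carry = bm
        for i in range(len(counters)):
            counters[i], carry = counters[i] ^ carry, counters[i] & carry
        if carry:
            counters.append(carry)
    # Naive final pass: keep positions whose binary count reaches T.
    width = max(bm.bit_length() for bm in bitmaps)
    result = 0
    for pos in range(width):
        count = sum(1 << i for i, c in enumerate(counters) if (c >> pos) & 1)
        if count >= T:
            result |= 1 << pos
    return result

# Rows = criteria, bit positions = products; products meeting >= 2 criteria:
bitmaps = [0b1011, 0b0110, 0b1100]
print(bin(threshold_query(bitmaps, 2)))  # → 0b1110
```

Note that T = 1 recovers the union and T = len(bitmaps) the intersection, which is the sense in which threshold queries generalize both.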
|
1402.4512 | Classification with Sparse Overlapping Groups | cs.LG stat.ML | Classification with a sparsity constraint on the solution plays a central
role in many high dimensional machine learning applications. In some cases, the
features can be grouped together so that entire subsets of features can be
selected or not selected. In many applications, however, this can be too
restrictive. In this paper, we are interested in a less restrictive form of
structured sparse feature selection: we assume that while features can be
grouped according to some notion of similarity, not all features in a group
need be selected for the task at hand. When the groups are comprised of
disjoint sets of features, this is sometimes referred to as the "sparse group"
lasso, and it allows for working with a richer class of models than traditional
group lasso methods. Our framework generalizes conventional sparse group lasso
further by allowing for overlapping groups, an additional flexibility needed in
many applications and one that presents further challenges. The main
contribution of this paper is a new procedure called Sparse Overlapping Group
(SOG) lasso, a convex optimization program that automatically selects similar
features for classification in high dimensions. We establish model selection
error bounds for SOGlasso classification problems under a fairly general
setting. In particular, the error bounds are the first such results for
classification using the sparse group lasso. Furthermore, the general SOGlasso
bound specializes to results for the lasso and the group lasso, some known and
some new. The SOGlasso is motivated by multi-subject fMRI studies in which
functional activity is classified using brain voxels as features, source
localization problems in Magnetoencephalography (MEG), and analyzing gene
activation patterns in microarray data analysis. Experiments with real and
synthetic data demonstrate the advantages of SOGlasso compared to the lasso and
group lasso.
|
1402.4525 | Off-Policy General Value Functions to Represent Dynamic Role Assignments
in RoboCup 3D Soccer Simulation | cs.AI | Collecting and maintaining accurate world knowledge in a dynamic, complex,
adversarial, and stochastic environment such as the RoboCup 3D Soccer
Simulation is a challenging task. Knowledge should be learned in real time under
time constraints. We use recently introduced Off-Policy Gradient Descent
algorithms within Reinforcement Learning that illustrate learnable knowledge
representations for dynamic role assignments. The results show that the agents
have learned competitive policies against the top teams from the RoboCup 2012
competitions for three vs three, five vs five, and seven vs seven agents. We
have explicitly used subsets of agents to identify the dynamics and the
semantics for which the agents learn to maximize their performance measures,
and to gather knowledge about different objectives, so that all agents
participate effectively and efficiently within the group.
|
1402.4540 | A Unifying Framework for Measuring Weighted Rich Clubs | physics.soc-ph cs.SI | Network analysis can help uncover meaningful regularities in the organization
of complex systems. Among these, rich clubs are a functionally important
property of a variety of social, technological and biological networks. Rich
clubs emerge when nodes that are somehow prominent or 'rich' (e.g., highly
connected) interact preferentially with one another. The identification of rich
clubs is non-trivial, especially in weighted networks, and to this end multiple
distinct metrics have been proposed. Here we describe a unifying framework for
detecting rich clubs which intuitively generalizes various metrics into a
single integrated method. This generalization rests upon the explicit
incorporation of randomized control networks into the measurement process. We
apply this framework to real-life examples, and show that, depending on the
selection of randomized controls, different kinds of rich-club structures can
be detected, such as topological and weighted rich clubs.
|
1402.4542 | Unsupervised Ranking of Multi-Attribute Objects Based on Principal
Curves | cs.LG cs.AI stat.ML | Unsupervised ranking faces one critical challenge in evaluation applications,
that is, no ground truth is available. While PageRank and its variants offer a
good solution in related settings, they are applicable only to ranking from
link-structure data. In this work, we focus on unsupervised ranking from
multi-attribute data which is also common in evaluation tasks. To overcome the
challenge, we propose five essential meta-rules for the design and assessment
of unsupervised ranking approaches: scale and translation invariance, strict
monotonicity, linear/nonlinear capacities, smoothness, and explicitness of
parameter size. These meta-rules are regarded as high level knowledge for
unsupervised ranking tasks. Inspired by the works in [8] and [14], we propose a
ranking principal curve (RPC) model, which learns a one-dimensional manifold
function to perform unsupervised ranking tasks on multi-attribute observations.
Furthermore, the RPC is modeled to be a cubic B\'ezier curve with control
points restricted in the interior of a hypercube, thereby complying with all
the five meta-rules to infer a reasonable ranking list. With control points as
the model parameters, one is able to understand the learned manifold and to
interpret the ranking list semantically. Numerical experiments of the presented
RPC model are conducted on two open datasets of different ranking applications.
In comparison with the state-of-the-art approaches, the new model is able to
produce more reasonable ranking lists.
|
1402.4543 | Normalized Volume of Hyperball in Complex Grassmann Manifold and Its
Application in Large-Scale MU-MIMO Communication Systems | cs.IT math.IT | This paper provides a solution to a critical issue in large-scale Multi-User
Multiple-Input Multiple-Output (MU-MIMO) communication systems: how to estimate
the Signal-to-Interference-plus-Noise-Ratios (SINRs) and their expectations in
MU-MIMO mode at the Base Station (BS) side when only the Channel Quality
Information (CQI) in Single-User MIMO (SU-MIMO) mode and non-ideal Channel
State Information (CSI) are known? A solution to this problem would be very
beneficial for the BS to predict the capacity of MU-MIMO and choose the proper
modulation and channel coding for MU-MIMO. To that end, this paper derives a
normalized volume formula of a hyperball based on the probability density
function of the canonical angle between any two points in a complex Grassmann
manifold, and shows that this formula provides a solution to the aforementioned
issue. It enables the capability of a BS to predict the capacity loss due to
non-ideal CSI, group users in MU-MIMO mode, choose the proper modulation and
channel coding, and adaptively switch between SU-MIMO and MU-MIMO modes, as
well as between Conjugate Beamforming (CB) and Zero-Forcing (ZF) precoding.
Numerical results are provided to verify the validity and accuracy of the
solution.
|
1402.4566 | Transduction on Directed Graphs via Absorbing Random Walks | cs.CV cs.LG stat.ML | In this paper we consider the problem of graph-based transductive
classification, and we are particularly interested in the directed graph
scenario which is a natural form for many real world applications. Different
from existing research efforts that either only deal with undirected graphs or
circumvent directionality by means of symmetrization, we propose a novel random
walk approach on directed graphs using absorbing Markov chains, which can be
regarded as maximizing the accumulated expected number of visits from the
unlabeled transient states. Our algorithm is simple, easy to implement, and
works with large-scale graphs. In particular, it is capable of preserving the
graph structure even when the input graph is sparse and changes over time, as
well as retaining weak signals presented in the directed edges. We present its
intimate connections to a number of existing methods, including graph kernels,
graph Laplacian based methods, and interestingly, spanning forest of graphs.
Its computational complexity and the generalization error are also studied.
Empirically our algorithm is systematically evaluated on a wide range of
applications, where it is shown to perform competitively compared to a suite
of state-of-the-art methods.
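The core quantity, the expected number of visits among transient (unlabeled) states of an absorbing chain, is the fundamental matrix N = (I - Q)^{-1}; a truncated-Neumann-series sketch on a hypothetical two-state toy chain:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def expected_visits(Q, iters=200):
    """Fundamental matrix N = I + Q + Q^2 + ... = (I - Q)^{-1} of an
    absorbing Markov chain, accumulated as a truncated Neumann series.
    N[i][j] is the expected number of visits to transient state j
    starting from transient state i."""
    n = len(Q)
    N = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        P = matmul(P, Q)
        N = [[N[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return N

# Two unlabeled (transient) states; the remaining probability mass is
# absorbed by labeled states:
Q = [[0.0, 0.5],
     [0.5, 0.0]]
N = expected_visits(Q)
print(round(N[0][0], 4))  # → 1.3333  ((I - Q)^{-1} = [[4/3, 2/3], [2/3, 4/3]])
```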
|
1402.4572 | Caching and Coded Multicasting: Multiple Groupcast Index Coding | cs.IT math.IT | The capacity of caching networks has received considerable attention in the
past few years. A particularly studied setting is the case of a single server
(e.g., a base station) and multiple users, each of which caches segments of
files in a finite library. Each user requests one (whole) file in the library
and the server sends a common coded multicast message to satisfy all users at
once. The problem consists of finding the smallest possible codeword length to
satisfy such requests. In this paper we consider the generalization to the case
where each user places $L \geq 1$ requests. The obvious naive scheme consists
of applying $L$ times the order-optimal scheme for a single request, obtaining
a linear in $L$ scaling of the multicast codeword length. We propose a new
achievable scheme based on multiple groupcast index coding that achieves a
significant gain over the naive scheme. Furthermore, through an information
theoretic converse we find that the proposed scheme is approximately optimal
within a constant factor of (at most) $18$.
|
1402.4576 | On the Average Performance of Caching and Coded Multicasting with Random
Demands | cs.IT cs.NI math.IT | For a network with one sender, $n$ receivers (users) and $m$ possible
messages (files), caching side information at the users makes it possible to satisfy
arbitrary simultaneous demands by sending a common (multicast) coded message.
In the worst-case demand setting, explicit deterministic and random caching
strategies and explicit linear coding schemes have been shown to be order
optimal. In this work, we consider the same scenario where the user demands are
random i.i.d., according to a Zipf popularity distribution. In this case, we
pose the problem in terms of the minimum average number of equivalent message
transmissions. We present a novel decentralized random caching placement and a
coded delivery scheme which are shown to achieve order-optimal performance. As
a matter of fact, this is the first order-optimal result for the caching and
coded multicasting problem in the case of random demands.
|
1402.4590 | On the distinctness of binary sequences derived from $2$-adic expansion
of m-sequences over finite prime fields | cs.IT math.IT | Let $p$ be an odd prime with $2$-adic expansion $\sum_{i=0}^kp_i\cdot2^i$.
For a sequence $\underline{a}=(a(t))_{t\ge 0}$ over $\mathbb{F}_{p}$, each
$a(t)$ belongs to $\{0,1,\ldots, p-1\}$ and has a unique $2$-adic expansion
$$a(t)=a_0(t)+a_1(t)\cdot 2+\cdots+a_{k}(t)\cdot2^k,$$ with $a_i(t)\in\{0,
1\}$. Let $\underline{a_i}$ denote the binary sequence $(a_i(t))_{t\ge 0}$ for
$0\le i\le k$. Assume $i_0$ is the smallest index $i$ such that $p_{i}=0$ and
$\underline{a}$ and $\underline{b}$ are two different m-sequences generated by
the same primitive characteristic polynomial over $\mathbb{F}_p$. We prove that
for $i\neq i_0$ and $0\le i\le k$, $\underline{a_i}=\underline{b_i}$ if and
only if $\underline{a}=\underline{b}$, and for $i=i_0$,
$\underline{a_{i_0}}=\underline{b_{i_0}}$ if and only if
$\underline{a}=\underline{b}$ or $\underline{a}=-\underline{b}$. Then the
period of $\underline{a_i}$ is equal to the period of $\underline{a}$ if $i\ne
i_0$ and half of the period of $\underline{a}$ if $i=i_0$. We also discuss a
possible application of the binary sequences $\underline{a_i}$.
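The extraction of the binary "layer" sequences $\underline{a_i}$ from the 2-adic expansion is mechanical; a sketch on a toy sequence over F_5 (an arbitrary sequence for illustration, not a true m-sequence):

```python
def binary_layers(seq, p):
    """Split a sequence over F_p into its 2-adic bit-layer sequences a_i,
    where a(t) = sum_i a_i(t) * 2^i with a_i(t) in {0, 1}."""
    k = (p - 1).bit_length()   # bits needed for the digits 0..p-1
    return [[(a >> i) & 1 for a in seq] for i in range(k)]

# Toy sequence over F_5:
seq = [3, 1, 4, 0, 2]
layers = binary_layers(seq, 5)
print(layers[0])  # least-significant bits: [1, 1, 0, 0, 0]
```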
|
1402.4600 | Ancillary Service to the Grid Using Intelligent Deferrable Loads | math.OC cs.SY | Renewable energy sources such as wind and solar power have a high degree of
unpredictability and time-variation, which makes balancing demand and supply
challenging. One possible way to address this challenge is to harness the
inherent flexibility in demand of many types of loads. Introduced in this paper
is a technique for decentralized control for automated demand response that can
be used by grid operators as ancillary service for maintaining demand-supply
balance.
A Markovian Decision Process (MDP) model is introduced for an individual
load. A randomized control architecture is proposed, motivated by the need for
decentralized decision making, and the need to avoid synchronization that can
lead to large and detrimental spikes in demand. An aggregate model for a large
number of loads is then developed by examining the mean field limit. A key
innovation is an LTI-system approximation of the aggregate nonlinear model,
with a scalar signal as the input and a measure of the aggregate demand as the
output. This makes the approximation particularly convenient for control design
at the grid level.
The second half of the paper contains a detailed application of these results
to a network of residential pools. Simulations are provided to illustrate the
accuracy of the approximations and effectiveness of the proposed control
approach.
|
1402.4612 | Power Allocation in Compressed Sensing of Non-uniformly Sparse Signals | cs.IT math.IT | This paper studies the problem of power allocation in compressed sensing when
different components in the unknown sparse signal have different probabilities
of being non-zero. Given the prior information of the non-uniform sparsity and the
total power budget, we are interested in how to optimally allocate the power
across the columns of a Gaussian random measurement matrix so that the mean
squared reconstruction error is minimized. Based on the state evolution
technique originating from the work of Donoho, Maleki, and Montanari, we revise
the so-called approximate message passing (AMP) algorithm for the
reconstruction and quantify the MSE performance in the asymptotic regime. Then
the closed form of the optimal power allocation is obtained. The results show
that in the presence of measurement noise, uniform power allocation, which
results in the commonly used Gaussian random matrix with i.i.d. entries, is not
optimal for non-uniformly sparse signals. Empirical results are presented to
demonstrate the performance gain.
|
1402.4618 | Passive Dynamics in Mean Field Control | cs.SY | Mean-field models are a popular tool in a variety of fields. They provide an
understanding of the impact of interactions among a large number of particles
or people or other "self-interested agents", and are an increasingly popular
tool in distributed control.
This paper considers a particular randomized distributed control architecture
introduced in our own recent work. In numerical results it was found that the
associated mean-field model had attractive properties for purposes of control.
In particular, when viewed as an input-output system, its linearization was
found to be minimum phase.
In this paper we take a closer look at the control model. The results are
summarized as follows:
(i) The Markov Decision Process framework of Todorov is extended to
continuous time models, in which the "control cost" is based on relative
entropy. This is the basis of the construction of a family of controlled
Markovian generators.
(ii) A decentralized control architecture is proposed in which each agent
evolves as a controlled Markov process. A central authority broadcasts a common
control signal to each agent. The central authority chooses this signal based
on an aggregate scalar output of the Markovian agents.
(iii) Provided the control-free system is a reversible Markov process, the
following identity holds for the linearization, \[ \text{Real} (G(j\omega)) =
\text{PSD}_Y(\omega)\ge 0, \quad \omega\in\Re, \] where the right hand side
denotes the power spectral density for the output of any one of the individual
(control-free) Markov processes.
|
1402.4645 | A Survey on Semi-Supervised Learning Techniques | cs.LG | Semi-supervised learning is a learning paradigm concerned with the study of
how computers and natural systems such as human beings acquire knowledge in the
presence of both labeled and unlabeled data. Semi-supervised methods are often
preferred over supervised and unsupervised learning because of the improved
performance they show in the presence of large volumes of data. Labels are hard
to obtain while unlabeled data are plentiful, so semi-supervised learning is an
appealing way to reduce human labor and improve accuracy. There has been a large
spectrum of ideas on semi-supervised learning. In this paper we bring out some
of the key approaches to semi-supervised learning.
|
1402.4653 | Retrieval of Experiments by Efficient Estimation of Marginal Likelihood | stat.ML cs.IR cs.LG | We study the task of retrieving relevant experiments given a query
experiment. By experiment, we mean a collection of measurements from a set of
`covariates' and the associated `outcomes'. While similar experiments can be
retrieved by comparing available `annotations', this approach ignores the
valuable information available in the measurements themselves. To incorporate
this information in the retrieval task, we suggest employing a retrieval metric
that utilizes probabilistic models learned from the measurements. We argue that
such a metric is a sensible measure of similarity between two experiments since
it permits inclusion of experiment-specific prior knowledge. However, accurate
models are often not analytical, and one must resort to storing posterior
samples which demands considerable resources. Therefore, we study strategies to
select informative posterior samples to reduce the computational load while
maintaining the retrieval performance. We demonstrate the efficacy of our
approach on simulated data with simple linear regression as the models, and
real world datasets.
|
1402.4662 | Optimal Control of Applications for Hybrid Cloud Services | cs.DC cs.SY | The development of cloud computing makes it possible to move Big Data into
hybrid cloud services. This requires studying the processing systems and data
structures involved in order to provide QoS. Because there are many
bottlenecks, a monitoring and control system is required when a query is
performed. Models and optimization criteria for the design of systems in
hybrid cloud infrastructures are created. This article presents the suggested
approaches and the results obtained with them.
|
1402.4663 | Concept of Feedback in Future Computing Models to Cloud Systems | cs.DC cs.NI cs.SY | Ensuring QoS in distributed computing systems is currently an urgent problem,
and it has become especially important with the development and spread of
cloud services. Big data structures are becoming heavily distributed. Future
computational models for designing cloud systems, evaluating the effectiveness
of algorithms, and assessing the economic performance of data centers must
take into account communication channels, data transmission systems,
virtualization, and scalability. Providing QoS requires not only monitoring
data flows and computing resources, but also operational management of these
resources. The introduction of feedback into computational models can serve as
such a tool. The article presents a basic dynamic model with feedback as the
foundation for a new model of distributed computing processes, along with the
research results. The results formulated in this work can also be applied to
other complex tasks: estimating the structural complexity of distributed
databases, evaluating the dynamic characteristics of systems operating in the
hybrid cloud, etc.
|
1402.4678 | When Learners Surpass their Sources: Mathematical Modeling of Learning
from an Inconsistent Source | cs.CL | We present a new algorithm to model and investigate the learning process of a
learner mastering a set of grammatical rules from an inconsistent source. The
compelling interest of human language acquisition is that the learning succeeds
in virtually every case, despite the fact that the input data are formally
inadequate to explain the success of learning. Our model explains how a learner
can successfully learn from or even surpass its imperfect source without
possessing any additional biases or constraints about the types of patterns
that exist in the language. We use the data collected by Singleton and Newport
(2004) on the performance of a 7-year-old boy, Simon, who mastered American
Sign Language (ASL) by learning it from his parents, both of whom were imperfect
speakers of ASL. We show that the algorithm possesses a frequency-boosting
property, whereby the frequency of the most common form of the source is
increased by the learner. We also explain several key features of Simon's ASL.
|
1402.4699 | A Powerful Genetic Algorithm for Traveling Salesman Problem | cs.NE cs.AI | This paper presents a powerful genetic algorithm (GA) to solve the traveling
salesman problem (TSP). To construct a powerful GA, I use edge swapping (ES)
with a local search procedure to determine good combinations of building blocks
of parent solutions for generating even better offspring solutions.
Experimental results on well studied TSP benchmarks demonstrate that the
proposed GA is competitive in finding very high quality solutions on instances
with up to 16,862 cities.
|
1402.4729 | On the Degrees-of-freedom of the 3-user MISO Broadcast Channel with
Hybrid CSIT | cs.IT math.IT | The 3-user multiple-input single-output (MISO) broadcast channel (BC) with
hybrid channel state information at the transmitter (CSIT) is considered. In
this framework, there is perfect and instantaneous CSIT from a subset of users
and delayed CSIT from the remaining users. We present new results on the
degrees of freedom (DoF) of the 3-user MISO BC with hybrid CSIT. In particular,
for the case of 2 transmit antennas, we show that with perfect CSIT from one
user and delayed CSIT from the remaining two users, the optimal DoF is 5/3. For
the case of 3 transmit antennas and the same hybrid CSIT setting, it is shown
that a higher DoF of 9/5 is achievable and this result improves upon the best
known bound. Furthermore, with 3 transmit antennas, and the hybrid CSIT setting
in which there is perfect CSIT from two users and delayed CSIT from the third
one, a novel scheme is presented which achieves 9/4 DoF. Our results also
reveal new insights on how to utilize hybrid channel knowledge for multi-user
scenarios.
|
1402.4732 | Efficient Inference of Gaussian Process Modulated Renewal Processes with
Application to Medical Event Data | stat.ML cs.LG stat.AP | The episodic, irregular and asynchronous nature of medical data render them
difficult substrates for standard machine learning algorithms. We would like to
abstract away this difficulty for the class of time-stamped categorical
variables (or events) by modeling them as a renewal process and inferring a
probability density over continuous, longitudinal, nonparametric intensity
functions modulating that process. Several methods exist for inferring such a
density over intensity functions, but either their constraints and assumptions
prevent their use with our potentially bursty event streams, or their time
complexity renders their use intractable on our long-duration observations of
high-resolution events, or both. In this paper we present a new and efficient
method for inferring a distribution over intensity functions that uses direct
numeric integration and smooth interpolation over Gaussian processes. We
demonstrate that our direct method is up to twice as accurate and two orders of
magnitude more efficient than the best existing method (thinning). Importantly,
the direct method can infer intensity functions over the full range of bursty
to memoryless to regular events, which thinning and many other methods cannot.
Finally, we apply the method to clinical event data and demonstrate the
face-validity of the abstraction, which is now amenable to standard learning
algorithms.
|
1402.4738 | A measure of compression gain for new symbols in data-compression | cs.IT math.IT | Huffman encoding is often improved by using block codes, for example a
3-block would be an alphabet consisting of each possible combination of three
characters. We take the approach of starting with a base alphabet and expanding
it to include frequently occurring aggregates of symbols. We prove that the
change in compressed message length by the introduction of a new aggregate
symbol can be expressed as the difference of two entropies, dependent only on
the probabilities and length of the introduced symbol. The expression is
independent of the probability of all other symbols in the alphabet. This
measure of information gain, for a new symbol, can be applied in data
compression methods. We also demonstrate that aggregate symbol alphabets, as
opposed to mutually exclusive alphabets, have the potential to provide good
levels of compression, with a simple experiment. Finally, compression gain as
defined in this paper may also be useful for feature selection.
|
1402.4741 | A normative account of defeasible and probabilistic inference | cs.LO cs.AI | In this paper, we provide more evidence for the contention that logical
consequence should be understood in normative terms. Hartry Field and John
MacFarlane covered the classical case. We extend their work, examining what it
means for an agent to be obliged to infer a conclusion when faced with
uncertain information or reasoning within a non-monotonic, defeasible logical
framework (which allows, e.g., inferences to be drawn from premises
considered true unless evidence to the contrary is presented).
|
1402.4742 | IVOA Recommendation: TAPRegExt: a VOResource Schema Extension for
Describing TAP Services | astro-ph.IM cs.DB | This document describes an XML encoding standard for metadata about services
implementing the table access protocol TAP [TAP], referred to as TAPRegExt.
Instance documents are part of the service's registry record or can be obtained
from the service itself. They deliver information to both humans and software
on the languages, output formats, and upload methods supported by the service,
as well as data models implemented by the exposed tables, optional language
features, and certain limits enforced by the service.
|
1402.4746 | Near-optimal-sample estimators for spherical Gaussian mixtures | cs.LG cs.DS cs.IT math.IT stat.ML | Statistical and machine-learning algorithms are frequently applied to
high-dimensional data. In many of these applications data is scarce, and often
much more costly than computation time. We provide the first sample-efficient
polynomial-time estimator for high-dimensional spherical Gaussian mixtures.
For mixtures of any $k$ $d$-dimensional spherical Gaussians, we derive an
intuitive spectral-estimator that uses
$\mathcal{O}_k\bigl(\frac{d\log^2d}{\epsilon^4}\bigr)$ samples and runs in time
$\mathcal{O}_{k,\epsilon}(d^3\log^5 d)$, both significantly lower than
previously known. The constant factor $\mathcal{O}_k$ is polynomial for sample
complexity and is exponential for the time complexity, again much smaller than
what was previously known. We also show that
$\Omega_k\bigl(\frac{d}{\epsilon^2}\bigr)$ samples are needed for any
algorithm. Hence the sample complexity is near-optimal in the number of
dimensions.
We also derive a simple estimator for one-dimensional mixtures that uses
$\mathcal{O}\bigl(\frac{k \log \frac{k}{\epsilon} }{\epsilon^2} \bigr)$ samples
and runs in time
$\widetilde{\mathcal{O}}\left(\bigl(\frac{k}{\epsilon}\bigr)^{3k+1}\right)$.
Our other technical contributions include a faster algorithm for choosing a
density estimate from a set of distributions that minimizes the $\ell_1$
distance to an unknown underlying distribution.
|
1402.4799 | Multiple Access Channel with Common Message and Secrecy constraint | cs.IT math.IT | In this paper, we study the problem of secret communication over a
multiple-access channel with a common message. Here, we assume that two
transmitters have confidential messages, which must be kept secret from the
wiretapper (the second receiver), and both of them have access to a common
message which can be decoded by the two receivers. We call this setting the
Multiple-Access Wiretap Channel with Common Message (MAWC-CM). For this
setting, we derive general inner and outer bounds on the secrecy capacity
region for the discrete memoryless case and show that these bounds meet each
other for a special case called the switch channel. For a Gaussian version of
MAWC-CM, we also derive inner and outer bounds on the secrecy capacity
region. Providing numerical results for the Gaussian case, we illustrate the
comparison between the derived achievable rate region and the outer bound for
the considered model and the capacity region of compound multiple access
channel.
|
1402.4802 | Ambiguity in language networks | physics.soc-ph cs.CL q-bio.NC | Human language is among the most complex outcomes of evolution. The emergence
of such an elaborated form of communication allowed humans to create extremely
structured societies and manage symbols at different levels including, among
others, semantics. All linguistic levels have to deal with an astronomic
combinatorial potential that stems from the recursive nature of languages. This
recursiveness is indeed a key defining trait. However, not all words are
equally combined nor frequent. In breaking the symmetry between less and more
often used and between less and more meaning-bearing units, universal scaling
laws arise. Such laws, common to all human languages, appear on different
stages from word inventories to networks of interacting words. Among these
seemingly universal traits exhibited by language networks, ambiguity appears to
be an especially relevant component. Ambiguity is avoided in most computational
approaches to language processing, and yet it seems to be a crucial element of
language architecture. Here we review the evidence both from language network
architecture and from theoretical reasoning based on a least-effort argument.
Ambiguity is shown to play an essential role in providing a source of language
efficiency, and is likely to be an inevitable byproduct of network growth.
|
1402.4834 | The Application of Imperialist Competitive Algorithm for Fuzzy Random
Portfolio Selection Problem | math.OC cs.AI | This paper presents an implementation of the Imperialist Competitive
Algorithm (ICA) for solving the fuzzy random portfolio selection problem where
the asset returns are represented by fuzzy random variables. Portfolio
Optimization is an important research field in modern finance. By using the
necessity-based model, the problem with fuzzy random variables is reformulated
as a linear program, and ICA is designed to find the optimum solution. To show the
efficiency of the proposed method, a numerical example illustrates the whole
idea on implementation of ICA for fuzzy random portfolio selection problem.
|
1402.4844 | Subspace Learning with Partial Information | cs.LG stat.ML | The goal of subspace learning is to find a $k$-dimensional subspace of
$\mathbb{R}^d$, such that the expected squared distance between instance
vectors and the subspace is as small as possible. In this paper we study
subspace learning in a partial information setting, in which the learner can
only observe $r \le d$ attributes from each instance vector. We propose several
efficient algorithms for this task, and analyze their sample complexity.
|
1402.4845 | Diffusion Least Mean Square: Simulations | cs.LG cs.MA | In this technical report we analyse the performance of diffusion strategies
applied to the Least-Mean-Square adaptive filter. We configure a network of
cooperative agents running adaptive filters and discuss their behaviour when
compared with a non-cooperative agent which represents the average of the
network. The analysis provides conditions under which diversity in the filter
parameters is beneficial in terms of convergence and stability. Simulations
drive and support the analysis.
|