id | title | categories | abstract |
|---|---|---|---|
1404.2300 | Better Performance ACF Operation for PAPR Reduction of OFDM Signal | cs.NI cs.IT math.IT | Orthogonal frequency division multiplexing (OFDM) is a promising modulation
radio access scheme for next generation wireless communication systems because
of its inherent immunity to multipath interference due to a low symbol rate,
the use of a cyclic prefix, and its affinity to different transmission
bandwidth arrangements. OFDM has already been adopted as a radio access scheme
for several of the latest cellular system specifications such as the long-term
evolution (LTE) system in the 3GPP (3rd Generation Partnership Project).
Nevertheless, the high peak-to-average power ratio (PAPR) of the OFDM signal
is a significant drawback, since it restricts the efficiency of the
transmitter. A number of promising approaches have been proposed and
implemented to reduce PAPR, at the expense of increased transmit signal power,
bit error rate (BER), computational complexity, or data rate loss. In this
paper, an improved amplitude clipping and filtering (ACF) scheme is proposed
and implemented, which shows a significant improvement in PAPR reduction while
introducing only a slight BER increase compared to the existing method.
|
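As an illustration of the clipping-and-filtering idea in the abstract above, here is a minimal numpy sketch of conventional amplitude clipping followed by frequency-domain filtering for one OFDM symbol. It is not the paper's improved ACF variant; the subcarrier count, clipping ratio, and oversampling factor are illustrative assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 256          # subcarriers (assumed)
L = 4            # oversampling factor for a faithful peak estimate

# One OFDM symbol: random QPSK on N subcarriers, oversampled IFFT.
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
spectrum = np.concatenate([qpsk[:N // 2], np.zeros((L - 1) * N), qpsk[N // 2:]])
x = np.fft.ifft(spectrum) * np.sqrt(L * N)

# Amplitude clipping at a chosen clipping ratio (relative to the RMS level).
cr = 1.4
a_max = cr * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > a_max, a_max * x / np.abs(x), x)

# Frequency-domain filtering: zero the out-of-band regrowth caused by clipping.
X = np.fft.fft(clipped)
mask = np.zeros_like(X)
mask[:N // 2] = 1
mask[-(N // 2):] = 1
filtered = np.fft.ifft(X * mask)

print(f"PAPR original : {papr_db(x):5.2f} dB")
print(f"PAPR clipped  : {papr_db(clipped):5.2f} dB")
print(f"PAPR clip+filt: {papr_db(filtered):5.2f} dB")
```

Filtering removes the out-of-band distortion introduced by clipping but causes some peak regrowth, which is why the clipped-and-filtered PAPR typically lies between the original and the purely clipped values.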
1404.2302 | Performance Analysis on The Basis of a Comparative Study Between
Multipath Rayleigh Fading And AWGN Channel in The Presence of Various
Interference | cs.IT cs.NI math.IT | Interference is one of the most important issues in present-day wireless
communication. Various kinds of channels are used in wireless communication.
Here I present a performance analysis of two different channels - AWGN
and the multipath Rayleigh fading channel - through a comparative study across
different modulation techniques. I have also measured the Bit Error Rate for
the different modulation techniques and compared the rates across the two
channels. My objective is to compare the characteristics of the transmitter and
receiver for different types of channels and modulators.
|
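A comparison of the kind described above can be reproduced with a short Monte Carlo simulation. The sketch below estimates BPSK bit error rates over AWGN and over flat Rayleigh fading with coherent detection; the modulation scheme and Eb/N0 points are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1.0                      # BPSK: 0 -> -1, 1 -> +1

for ebn0_db in (0, 5, 10):
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))           # noise std dev per real dimension

    # AWGN channel: real noise, threshold detection.
    ber_awgn = np.mean((symbols + sigma * rng.standard_normal(n_bits) > 0) != bits)

    # Flat Rayleigh fading with perfect channel knowledge (coherent detection).
    h = np.sqrt(0.5) * (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits))
    r = h * symbols + sigma * (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits))
    ber_ray = np.mean((np.real(np.conj(h) * r) > 0) != bits)

    print(f"Eb/N0 = {ebn0_db:2d} dB   BER AWGN = {ber_awgn:.4f}   BER Rayleigh = {ber_ray:.4f}")
```

At 10 dB the gap is dramatic: fading flattens the waterfall curve, which is the core observation such comparative studies quantify.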
1404.2313 | Outer-Product Hidden Markov Model and Polyphonic MIDI Score Following | cs.AI cs.SD | We present a polyphonic MIDI score-following algorithm capable of following
performances with arbitrary repeats and skips, based on a probabilistic model
of musical performances. Handling repeats and skips, which may be made
arbitrarily during performances, is attractive in practical score-following
applications, but the algorithms previously described in the literature cannot
be applied to scores of practical length due to their large computational
complexity. We propose a new type of hidden Markov model (HMM) as
a performance model which can describe arbitrary repeats and skips including
performer tendencies on distributed score positions before and after them, and
derive an efficient score-following algorithm that reduces computational
complexity without pruning. A theoretical discussion on how much such
information on performer tendencies improves the score-following results is
given. The proposed score-following algorithm also admits performance mistakes
and is demonstrated to be effective in practical situations by carrying out
evaluations with human performances. The proposed HMM is potentially valuable
for other topics in information processing and we also provide a detailed
description of inference algorithms.
|
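The backbone of any HMM-based score follower is a forward recursion over score positions. The sketch below is a generic filtered-argmax follower, not the paper's outer-product HMM: the transition matrix simply leaves a small probability everywhere so that arbitrary repeats and skips remain reachable, and the observation log-likelihoods are stylized placeholders.

```python
import numpy as np

def forward_follow(obs_loglik, log_trans):
    """Filtered score position per performed event via the HMM forward recursion.

    obs_loglik : (T, S) log-likelihood of each observed event under each score state
    log_trans  : (S, S) log transition probabilities encoding advances, repeats, skips
    """
    T, S = obs_loglik.shape
    trans = np.exp(log_trans)
    log_alpha = np.full(S, -np.inf)
    log_alpha[0] = obs_loglik[0, 0]        # assume the performance starts at the top
    path = [0]
    for t in range(1, T):
        m = log_alpha.max()                # log-sum-exp over predecessor states
        log_alpha = np.log(np.exp(log_alpha - m) @ trans + 1e-300) + m + obs_loglik[t]
        path.append(int(log_alpha.argmax()))
    return path

S = 5                                      # five score events
A = np.full((S, S), 1e-3)                  # small mass everywhere: repeats/skips possible
for s in range(S - 1):
    A[s, s + 1] = 1.0                      # dominant move: advance to the next event
A /= A.sum(axis=1, keepdims=True)

true_path = [0, 1, 2, 1, 2, 3, 4]          # a performance that repeats back to event 1
obs = np.array([[0.0 if s == p else -10.0 for s in range(S)] for p in true_path])
print(forward_follow(obs, np.log(A)))      # -> [0, 1, 2, 1, 2, 3, 4]
```

The naive recursion costs O(S^2) per event; the paper's contribution is precisely a model structure that avoids this cost for scores of practical length.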
1404.2314 | A Stochastic Temporal Model of Polyphonic MIDI Performance with
Ornaments | cs.AI cs.SD | We study indeterminacies in realization of ornaments and how they can be
incorporated in a stochastic performance model applicable for music information
processing such as score-performance matching. We point out the importance of
temporal information, and propose a hidden Markov model which describes it
explicitly and represents ornaments with several state types. Following a
review of the indeterminacies, they are carefully incorporated into the model
through its topology and parameters, and the state construction for quite
general polyphonic scores is explained in detail. By analyzing piano
performance data, we find significant overlaps in inter-onset-interval
distributions of chordal notes, ornaments, and inter-chord events, and the data
is used to determine details of the model. The model is applied for score
following and offline score-performance matching, yielding highly accurate
matching for performances with many ornaments and relatively frequent errors,
repeats, and skips.
|
1404.2334 | Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct
Sampling of an Admissible Ellipsoidal Heuristic | cs.RO | Rapidly-exploring random trees (RRTs) are popular in motion planning because
they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s)
extend RRTs to the problem of finding the optimal solution, but in doing so
asymptotically find the optimal path from the initial state to every state in
the planning domain. This behaviour is not only inefficient but also
inconsistent with their single-query nature.
For problems seeking to minimize path length, the subset of states that can
improve a solution can be described by a prolate hyperspheroid. We show that
unless this subset is sampled directly, the probability of improving a solution
becomes arbitrarily small in large worlds or high state dimensions. In this
paper, we present an exact method to focus the search by directly sampling this
subset.
The advantages of the presented sampling technique are demonstrated with a
new algorithm, Informed RRT*. This method retains the same probabilistic
guarantees on completeness and optimality as RRT* while improving the
convergence rate and final solution quality. We present the algorithm as a
simple modification to RRT* that could be further extended by more advanced
path-planning algorithms. We show experimentally that it outperforms RRT* in
rate of convergence, final solution cost, and ability to find difficult
passages while demonstrating less dependence on the state dimension and range
of the planning problem.
|
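Direct sampling of the prolate hyperspheroid works as the abstract describes: draw a point uniformly from the unit ball, then stretch, rotate, and translate it onto the ellipsoid whose foci are the start and goal. A minimal numpy sketch following the standard informed-sampling construction (variable names are ours):

```python
import numpy as np

def sample_informed(x_start, x_goal, c_best, rng):
    """Uniformly sample the set of states that could improve a solution of
    cost c_best under the path-length heuristic:
    all x with ||x - x_start|| + ||x - x_goal|| <= c_best."""
    x_start, x_goal = np.asarray(x_start, float), np.asarray(x_goal, float)
    n = x_start.size
    c_min = np.linalg.norm(x_goal - x_start)
    centre = (x_start + x_goal) / 2

    # Rotation to the world frame: maps the first axis onto the transverse axis.
    a1 = (x_goal - x_start) / c_min
    U, _, Vt = np.linalg.svd(np.outer(a1, np.eye(n)[0]))
    C = U @ np.diag(np.r_[np.ones(n - 1), np.linalg.det(U) * np.linalg.det(Vt)]) @ Vt

    # Axis lengths of the prolate hyperspheroid.
    r = np.r_[c_best / 2, np.full(n - 1, np.sqrt(c_best**2 - c_min**2) / 2)]

    # Uniform sample from the unit n-ball, then stretch, rotate and translate.
    u = rng.standard_normal(n)
    u = u / np.linalg.norm(u) * rng.random() ** (1 / n)
    return C @ (r * u) + centre

rng = np.random.default_rng(3)
x = sample_informed([0, 0], [10, 0], c_best=12.0, rng=rng)
print(x, np.linalg.norm(x - [0, 0]) + np.linalg.norm(x - [10, 0]) <= 12.0)
```

Every sample satisfies the ellipsoid constraint by construction, so no rejection step is needed regardless of dimension.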
1404.2342 | Social Collaborative Retrieval | cs.IR | Socially-based recommendation systems have recently attracted significant
interest, and a number of studies have shown that social information can
dramatically improve a system's predictions of user interests. Meanwhile, there
are now many potential applications that involve aspects of both recommendation
and information retrieval, and the task of collaborative retrieval---a
combination of these two traditional problems---has recently been introduced.
Successful collaborative retrieval requires overcoming severe data sparsity,
making additional sources of information, such as social graphs, particularly
valuable. In this paper we propose a new model for collaborative retrieval, and
show that our algorithm outperforms current state-of-the-art approaches by
incorporating information from social networks. We also provide empirical
analyses of the ways in which cultural interests propagate along a social graph
using a real-world music dataset.
|
1404.2343 | Wireless Transmission of Video for Biomechanical Analysis | cs.CE cs.MM cs.NI | When it is possible to wirelessly stream video over a network, sophisticated
computer analysis of the transmitted video becomes feasible. Such a process is
used in biomechanics when it is important to analyze athletes' performance by
streaming digital uncompressed video to a computer and then analyzing it using
specific software such as the Arial Performance Analysis Systems
or Dartfish. This manuscript presents some approaches and challenges in
streaming video as well as some applications of Information Technology in
biomechanics. An example of how scientists from Indiana State University
approached the wireless transmission of video is also introduced.
|
1404.2352 | Low-complexity Decoding is Asymptotically Optimal in the SIMO MAC | cs.IT math.IT | A single input multiple output (SIMO) multiple access channel, with a large
number of transmitters sending symbols from a constellation to the receiver of
a multi-antenna base station, is considered. The fundamental limits of jointly
decoding the signals from all the users using a low-complexity convex
relaxation of the maximum likelihood (ML) decoder (a constellation search) are
investigated. It is shown that in a rich scattering environment, and in
the asymptotic limit of a large number of transmitters, reliable communication
is possible even without employing coding at the transmitters. This holds even
when the number of receiver antennas per transmitter is arbitrarily small, with
scaling behaviour arbitrarily close to what is achievable with coding. Thus,
the diversity of a large system not only makes the scaling law for coded
systems similar to that of uncoded systems, but, as we show, also allows
efficient decoders to realize close to the optimal performance of
maximum-likelihood decoding. However, while there is no performance loss
relative to the scaling laws of the optimal decoder, our proposed
low-complexity decoder exhibits a loss of the exponential or near-exponential
rates of decay of error probability relative to the optimal ML decoder.
|
1404.2353 | Power System Parameters Forecasting Using Hilbert-Huang Transform and
Machine Learning | cs.LG stat.ML | A novel hybrid data-driven approach is developed for forecasting power system
parameters with the goal of increasing the efficiency of short-term forecasting
studies for non-stationary time-series. The proposed approach is based on mode
decomposition and a feature analysis of initial retrospective data using the
Hilbert-Huang transform and machine learning algorithms. The random forests and
gradient boosting trees learning techniques were examined. The decision tree
techniques were used to rank the importance of variables employed in the
forecasting models. The Mean Decrease Gini index is employed as an impurity
function. The resulting hybrid forecasting models employ the radial basis
function neural network and support vector regression. Apart from the
introduction and references, the paper is organized as follows. Section 2
presents the background and a review of several approaches for short-term
forecasting of power system parameters. In the third section, a hybrid machine
learning-based algorithm using the Hilbert-Huang transform is developed for
short-term forecasting of power system parameters. The fourth section describes
the decision tree learning algorithms used to assess variable importance.
Finally, the concluding section presents the experimental results for the
following electric power problems: active power flow forecasting, electricity
price forecasting, and wind speed and direction forecasting.
|
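A rough sketch of the pipeline described above, under stated assumptions: the PyEMD package (distributed on PyPI as EMD-signal) provides the empirical mode decomposition, scipy's Hilbert transform yields instantaneous amplitudes, and scikit-learn's impurity-based feature importances stand in for the Mean Decrease Gini ranking (which strictly speaking applies to classification trees). The signal and the one-step-ahead target are synthetic placeholders.

```python
import numpy as np
from PyEMD import EMD                      # assumption: the EMD-signal / PyEMD package
from scipy.signal import hilbert
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
t = np.arange(2000) / 100.0
load = np.sin(2*np.pi*0.5*t) + 0.4*np.sin(2*np.pi*3*t) + 0.1*rng.standard_normal(t.size)

# 1) Mode decomposition: split the series into intrinsic mode functions (IMFs).
imfs = EMD().emd(load)

# 2) Hilbert transform of each IMF -> instantaneous amplitude features.
amp = np.abs(hilbert(imfs, axis=1))

# 3) Supervised set: predict the next sample from the current IMF amplitudes.
X, y = amp[:, :-1].T, load[1:]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# 4) Impurity-based importances rank the IMF-derived variables.
for i, w in enumerate(model.feature_importances_):
    print(f"IMF {i}: importance = {w:.3f}")
```

The ranking obtained this way is what guides which decomposed components feed the downstream forecasting models.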
1404.2357 | Multiple Access Analog Fountain Codes | cs.IT math.IT | In this paper, we propose a novel rateless multiple access scheme based on
the recently proposed capacity-approaching analog fountain code (AFC). We show
that the multiple access process will create an equivalent analog fountain
code, referred to as the multiple access analog fountain code (MA-AFC), at the
destination. Thus, the standard belief propagation (BP) decoder can be
effectively used to jointly decode all the users. We further analyse the
asymptotic performance of the BP decoder by using a density evolution approach
and show that the average log-likelihood ratio (LLR) of each user's information
symbol is proportional to its transmit signal to noise ratio (SNR), when all
the users utilize the same AFC code. Simulation results show that the proposed
scheme can approach the sum-rate capacity of the Gaussian multiple access
channel in a wide range of signal to noise ratios.
|
1404.2367 | Detecting Possible Manipulators in Elections | cs.MA cs.GT | Manipulation is a problem of fundamental importance in the context of voting
in which the voters exercise their votes strategically instead of voting
honestly to prevent selection of an alternative that is less preferred. The
Gibbard-Satterthwaite theorem shows that there is no strategy-proof voting rule
that simultaneously satisfies certain combinations of desirable properties.
Researchers have attempted to get around the impossibility results in several
ways such as domain restriction and computational hardness of manipulation.
However these approaches have been shown to have limitations. Since prevention
of manipulation seems to be elusive, an interesting research direction
therefore is detection of manipulation. Motivated by this, we initiate the
study of detection of possible manipulators in an election.
We formulate two pertinent computational problems - Coalitional Possible
Manipulators (CPM) and Coalitional Possible Manipulators given Winner (CPMW),
where a suspect group of voters is provided as input to compute whether they
can be a potential coalition of possible manipulators. In the absence of any
suspect group, we formulate two more computational problems namely Coalitional
Possible Manipulators Search (CPMS), and Coalitional Possible Manipulators
Search given Winner (CPMSW). We provide polynomial time algorithms for these
problems, for several popular voting rules. For a few other voting rules, we
show that these problems are NP-complete. We observe that detecting
manipulation may be easy even when manipulation is hard, as seen, for example,
in the case of the Borda voting rule.
|
1404.2374 | A signature of power law network dynamics | q-bio.QM cs.SI physics.soc-ph q-bio.MN | Can one hear the 'sound' of a growing network? We address the problem of
recognizing the topology of evolving biological or social networks. Starting
from percolation theory, we analytically prove a linear inverse relationship
between two simple graph parameters--the logarithm of the average cluster size
and logarithm of the ratio of the edges of the graph to the theoretically
maximum number of edges for that graph--that holds for all growing power law
graphs. The result establishes a novel property of evolving power-law networks
in the asymptotic limit of network size. Numerical simulations as well as
fitting to real-world citation co-authorship networks demonstrate that the
result holds for networks of finite sizes, and provides a convenient measure of
the extent to which an evolving family of networks belongs to the same
power-law class.
|
1404.2380 | A Direct Approach to Computing Spatially Averaged Outage Probability | cs.IT math.IT | This letter describes a direct method for computing the spatially averaged
outage probability of a network with interferers located according to a point
process and signals subject to fading. Unlike most common approaches, it does
not require transforms such as a Laplace transform. Examples show how to
directly obtain the outage probability in the presence of Rayleigh fading in
networks whose interferers are drawn from binomial and Poisson point processes
defined over arbitrary regions. We furthermore show that, by extending the
arbitrary region to the entire plane, the result for Poisson point processes
converges to the same expression found by Baccelli et al.
|
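Although the letter's point is that no transform is needed to obtain a closed-form answer, the spatially averaged outage probability it computes is easy to estimate by Monte Carlo for sanity checking. A sketch under illustrative assumptions (unit-distance desired link, binomial point process on a disk, path-loss exponent 4, Rayleigh fading on all links):

```python
import numpy as np

rng = np.random.default_rng(5)

def outage_mc(n_interferers, radius, sinr_thresh, snr, trials=20_000, alpha=4.0):
    """Spatially averaged outage probability by Monte Carlo: a reference receiver
    at the origin, a unit-distance desired transmitter, and interferers drawn
    from a binomial point process on a disk of the given radius."""
    out = 0
    for _ in range(trials):
        # Uniform positions on the disk (binomial point process, fixed count).
        r = radius * np.sqrt(rng.random(n_interferers))
        g = rng.exponential(size=n_interferers)      # Rayleigh fading -> exp. power
        interference = np.sum(g * r ** (-alpha))
        s = rng.exponential()                        # desired link at unit distance
        out += s / (interference + 1.0 / snr) < sinr_thresh
    return out / trials

print(outage_mc(n_interferers=10, radius=4.0, sinr_thresh=1.0, snr=10.0))
```

Swapping the fixed interferer count for a Poisson draw per trial turns the binomial point process into the Poisson case the letter also covers.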
1404.2393 | Spatially Coupled Turbo Codes | cs.IT math.IT | In this paper, we introduce the concept of spatially coupled turbo codes
(SC-TCs), as the turbo codes counterpart of spatially coupled low-density
parity-check codes. We describe spatial coupling for both Berrou et al. and
Benedetto et al. parallel and serially concatenated codes. For the binary
erasure channel, we derive the exact density evolution (DE) equations of SC-TCs
by using the method proposed by Kurkoski et al. to compute the decoding erasure
probability of convolutional encoders. Using DE, we then analyze the asymptotic
behavior of SC-TCs. We observe that the belief propagation (BP) threshold of
SC-TCs improves with respect to that of the uncoupled ensemble and approaches
its maximum a posteriori threshold. This phenomenon is especially significant
for serially concatenated codes, whose uncoupled ensemble suffers from a poor
BP threshold.
|
1404.2403 | Robustness surfaces of complex networks | cs.SI | Although the robustness of complex networks has been extensively studied in
the last decade, a unifying framework able to embrace all the proposed metrics
is still lacking. In the literature there are two open issues related to this
gap: (a) how to dimension several metrics to allow their summation and (b) how
to weight each of the metrics. In this work we propose a solution for the two
aforementioned problems by defining the $R^*$-value and introducing the concept
of \emph{robustness surface} ($\Omega$). The rationale of our proposal is to
make use of Principal Component Analysis (PCA). We first adjust the initial
robustness of a network to 1. Secondly, we find the most informative
robustness metric under a specific failure scenario. Then, we repeat the
process for several percentages of failures and different realizations of the
failure process. Lastly, we join these values to form the robustness surface,
which allows the visual assessment of network robustness variability. Results
show that a network presents different robustness surfaces (i.e., dissimilar
shapes) depending on the failure scenario and the set of metrics. In addition,
the robustness surface allows the robustness of different networks to be
compared.
|
1404.2458 | r-Extreme Signalling for Congestion Control | math.OC cs.AI cs.MA | In many "smart city" applications, congestion arises in part due to the
nature of signals received by individuals from a central authority. In the
model of Marecek et al. [arXiv:1406.7639, Int. J. Control 88(10), 2015], each
agent uses one out of multiple resources at each time instant. The per-use cost
of a resource depends on the number of concurrent users. A central authority
has up-to-date knowledge of the congestion across all resources and uses
randomisation to provide a scalar or an interval for each resource at each
time. In this paper, the interval to broadcast per resource is obtained by
taking the minima and maxima of costs observed within a time window of length
r, rather than by randomisation. We show that the resulting distribution of
agents across resources also converges in distribution, under plausible
assumptions about the evolution of the population over time.
|
1404.2464 | How Credible is the Prediction of a Party-Based Election? | cs.MA cs.GT | In a party-based election system, the voters are grouped into parties and all
voters of a party are assumed to vote according to the party preferences over
the candidates. Hence, once the party preferences are declared the outcome of
the election can be determined. However, in the actual election, the members of
some "unstable" parties often leave their own party to join other parties. We
introduce two parameters to measure the credibility of the prediction based on
party preferences: Min is the minimum number of voters leaving the unstable
parties such that the prediction is no longer true, while Max is the maximum
number of voters leaving the unstable parties such that the prediction remains
valid. Concerning the complexity of computing Min and Max, we consider both
positional scoring rules (Plurality, Veto, r-Approval and Borda) and
Condorcet-consistent rules (Copeland and Maximin). We show that for all
considered scoring rules, Min is polynomial-time computable, while it is
NP-hard to compute Min for Copeland and Maximin. With the sole exception of
Borda, Max can be computed in polynomial time for the other scoring rules. We
have NP-hardness results for the computation of Max under Borda, Maximin and
Copeland.
|
1404.2471 | Yet another algorithm to compute the nonlinearity of a Boolean function | cs.IT math.IT | We associate to each Boolean function a polynomial whose evaluations
represent the distances from all possible Boolean affine functions. Both
determining the coefficients of this polynomial from the truth table of the
Boolean function and computing its evaluation vector require a worst-case
complexity of $O(n2^n)$ integer operations. This way, with a different
approach, we reach the same complexity of established algorithms, such as those
based on the fast Walsh transform.
|
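The fast Walsh transform baseline mentioned at the end of the abstract can be stated in a few lines: the nonlinearity of an n-variable Boolean function is 2^(n-1) - max|W_f|/2, where W_f is the Walsh spectrum of the +/-1 truth table. A sketch with the same O(n 2^n) operation count:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform, O(n 2^n) integer operations."""
    a = np.array(a, dtype=np.int64)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def nonlinearity(truth_table):
    """Nonlinearity of an n-variable Boolean function given as a 0/1 truth table."""
    n = int(np.log2(len(truth_table)))
    signs = 1 - 2 * np.array(truth_table)        # 0/1 -> +1/-1
    w = fwht(signs)                               # Walsh spectrum
    return 2 ** (n - 1) - np.max(np.abs(w)) // 2

# Example: f(x1,x2,x3) = x1*x2 XOR x3, truth table in lexicographic order.
tt = [(x1 & x2) ^ x3 for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
print(nonlinearity(tt))                           # -> 2, the maximum for n = 3
```

The paper's contribution is a different route to the same complexity, via a polynomial whose evaluation vector encodes the distances to all affine functions.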
1404.2520 | Posterior Matching for Gaussian Broadcast Channels with Feedback | cs.IT math.IT | In this paper, the posterior matching scheme proposed by Shayevits and Feder
is extended to the Gaussian broadcast channel with feedback, and the error
probabilities and achievable rate region are derived for this coding strategy
by using the iterated random function theory. A variant of the Ozarow-Leung
code for the general two-user broadcast channel with feedback can be realized
as a special case of our coding scheme. Furthermore, for the symmetric Gaussian
broadcast channel with feedback, our coding scheme achieves the linear-feedback
sum-capacity like the LQG code and outperforms the Kramer code.
|
1404.2537 | A Signal-Space Analysis of Spatial Self-Interference Isolation for
Full-Duplex Wireless | cs.IT math.IT | The challenge to in-band full-duplex wireless communication is managing
self-interference. Many designs have employed spatial isolation mechanisms,
such as shielding or multi-antenna beamforming, to isolate the
self-interference wave from the receiver. Such spatial isolation methods are
effective, but by confining the transmit and receive signals to a subset of the
available space, the full spatial resources of the channel may be under-utilized,
expending a cost that may nullify the net benefit of operating in full-duplex
mode. In this paper we leverage an antenna-theory-based channel model to
analyze the spatial degrees of freedom available to a full-duplex capable base
station, and observe that whether or not spatial isolation out-performs
time-division (i.e. half-duplex) depends heavily on the geometric distribution
of scatterers. Unless the angular spread of the objects that scatter to the
intended users is overlapped by the spread of objects that backscatter to the
base station, spatial isolation outperforms time division; otherwise, time
division may be optimal.
|
1404.2570 | Modelling View-count Dynamics in YouTube | cs.SI physics.soc-ph | The goal of this paper is to study the behaviour of view-count in YouTube. We
first propose several bio-inspired models for the evolution of the view-count
of YouTube videos. We show, using a large set of empirical data, that the
view-count for 90% of videos in YouTube can indeed be associated with at least
one of these models, with a Mean Error which does not exceed 5%. We derive
automatic ways of classifying the view-count curve into one of these models and
of extracting the most suitable parameters of the model. We study empirically
the impact of a video's popularity and category on the evolution of its
view-count. We finally use the above classification along with the automatic
parameters extraction in order to predict the evolution of videos' view-count.
|
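One way to make the "bio-inspired model plus parameter extraction" step concrete: fit a growth curve to a cumulative view-count series and report the normalized mean error. The sketch below uses a Gompertz curve as a stand-in for the paper's models and synthetic data in place of real YouTube counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):
    """Gompertz growth curve, one candidate 'bio-inspired' view-count model:
    K = final view count, b and c shape the onset and growth rate."""
    return K * np.exp(-b * np.exp(-c * t))

# Synthetic daily cumulative view counts for one video (placeholder data).
rng = np.random.default_rng(6)
t = np.arange(120.0)
views = gompertz(t, 5e4, 6.0, 0.08) * (1 + 0.03 * rng.standard_normal(t.size))

popt, _ = curve_fit(gompertz, t, views, p0=(views[-1], 5.0, 0.1), maxfev=10_000)
pred = gompertz(t, *popt)
mean_err = np.mean(np.abs(pred - views) / views.max())
print(f"fitted K = {popt[0]:.0f}, normalized mean error = {100 * mean_err:.2f}%")
```

Classifying a curve then amounts to fitting each candidate model and keeping the one with the smallest such error, which is the spirit of the automatic classification the abstract describes.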
1404.2571 | RANCOR: Non-Linear Image Registration with Total Variation
Regularization | cs.CV | Optimization techniques have been widely used in deformable registration,
allowing for the incorporation of similarity metrics with regularization
mechanisms. These regularization mechanisms are designed to mitigate the
effects of trivial solutions to ill-posed registration problems and to
otherwise ensure the resulting deformation fields are well-behaved. This paper
introduces a novel deformable registration algorithm, RANCOR, which uses
iterative convexification to address deformable registration problems under
total-variation regularization. Initial comparative results against four
state-of-the-art registration algorithms are presented using the Internet Brain
Segmentation Repository (IBSR) database.
|
1404.2576 | Asymptotics of Fingerprinting and Group Testing: Tight Bounds from
Channel Capacities | cs.IT cs.CR math.IT | In this work we consider the large-coalition asymptotics of various
fingerprinting and group testing games, and derive explicit expressions for the
capacities for each of these models. We do this both for simple decoders (fast
but suboptimal) and for joint decoders (slow but optimal).
For fingerprinting, we show that if the pirate strategy is known, the
capacity often decreases linearly with the number of colluders, instead of
quadratically as in the uninformed fingerprinting game. For many attacks the
joint capacity is further shown to be strictly higher than the simple capacity.
For group testing, we improve upon known results about the joint capacities,
and derive new explicit asymptotics for the simple capacities. These show that
existing simple group testing algorithms are suboptimal, and that simple
decoders cannot asymptotically be as efficient as joint decoders. For the
traditional group testing model, we show that the gap between the simple and
joint capacities is a factor 1.44 for large numbers of defectives.
|
1404.2584 | MIMO MAC-BC Duality with Linear-Feedback Coding Schemes | cs.IT math.IT | We show that for the multi-antenna Gaussian multi-access channel (MAC) and
broadcast channel (BC) with perfect feedback, the rate regions achieved by
linear-feedback coding schemes (called linear-feedback capacity regions)
coincide when the same total input-power constraint is imposed on both channels
and when the MAC channel matrices are the transposes of the BC channel
matrices. Such a pair of MAC and BC is called dual. We also identify
sub-classes of linear-feedback coding schemes that achieve the linear-feedback
capacity regions of these two channels and present multi-letter expressions for
the linear-feedback capacity regions. Moreover, within the two sub-classes of
coding schemes that achieve the linear-feedback capacity regions for a given
MAC and its dual BC, we identify for each MAC scheme a BC scheme and for each
BC scheme a MAC scheme so that the two schemes have the same total input power and
achieve the same rate regions.
|
1404.2590 | Cluster analysis of weighted bipartite networks: a new copula-based
approach | physics.data-an cs.SI physics.soc-ph | In this work we are interested in identifying clusters of "positionally
equivalent" actors, i.e. actors who play a similar role in a system. In
particular, we analyze weighted bipartite networks that describe the
relationships between actors on one side and features or traits on the other,
together with the intensity level to which actors show their features. The main
contribution of our work is twofold. First, we develop a methodological
approach that takes into account the underlying multivariate dependence among
groups of actors. The idea is that positions in a network could be defined on
the basis of the similar intensity levels that the actors exhibit in expressing
some features, instead of just considering the relationships that actors hold
with each other. Second, we propose a new clustering procedure that exploits
the potential of copula functions, a mathematical instrument for
modelling the stochastic dependence structure. Our clustering algorithm
can be applied both to binary and real-valued matrices. We validate it with
simulations and applications to real-world data.
|
1404.2644 | A Distributed Frank-Wolfe Algorithm for Communication-Efficient Sparse
Learning | cs.DC cs.AI cs.LG stat.ML | Learning sparse combinations is a frequent theme in machine learning. In this
paper, we study its associated optimization problem in the distributed setting
where the elements to be combined are not centrally located but spread over a
network. We address the key challenges of balancing communication costs and
optimization errors. To this end, we propose a distributed Frank-Wolfe (dFW)
algorithm. We obtain theoretical guarantees on the optimization error
$\epsilon$ and communication cost that do not depend on the total number of
combining elements. We further show that the communication cost of dFW is
optimal by deriving a lower-bound on the communication cost required to
construct an $\epsilon$-approximate solution. We validate our theoretical
analysis with empirical studies on synthetic and real-world data, which
demonstrate that dFW outperforms both baselines and competing methods. We also
study the performance of dFW when the conditions of our analysis are relaxed,
and show that dFW is fairly robust.
|
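The communication argument above rests on a property of the Frank-Wolfe iteration itself: over an l1-ball, each linear minimization step selects a single signed coordinate, so only one index and sign need to be exchanged per iteration. A minimal centralized sketch (the distributed bookkeeping of dFW is omitted):

```python
import numpy as np

def frank_wolfe_lasso(A, b, tau, iters=200):
    """Frank-Wolfe for min ||Ax - b||^2 subject to ||x||_1 <= tau.
    The linear minimization oracle over the l1-ball returns a signed vertex,
    i.e. a single coordinate -- the fact dFW exploits to save communication."""
    x = np.zeros(A.shape[1])
    for t in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        j = int(np.argmax(np.abs(grad)))   # LMO: best single coordinate
        s = np.zeros_like(x)
        s[j] = -tau * np.sign(grad[j])
        x += 2.0 / (t + 2) * (s - x)       # standard diminishing step size
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = frank_wolfe_lasso(A, b, tau=4.5)
print("top coordinates:", np.argsort(-np.abs(x_hat))[:3])   # expect 0, 1, 2
```

After T iterations the iterate has at most T nonzero coordinates, so both the optimization error and the communication cost are controlled independently of the total number of combining elements.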
1404.2655 | Open problem: Tightness of maximum likelihood semidefinite relaxations | math.OC cs.LG stat.ML | We have observed an interesting, yet unexplained, phenomenon: Semidefinite
programming (SDP) based relaxations of maximum likelihood estimators (MLE) tend
to be tight in recovery problems with noisy data, even when MLE cannot exactly
recover the ground truth. Several results establish tightness of SDP based
relaxations in the regime where exact recovery from MLE is possible. However,
to the best of our knowledge, their tightness is not understood beyond this
regime. As an illustrative example, we focus on the generalized Procrustes
problem.
|
1404.2656 | Wireless Backhaul Node Placement for Small Cell Networks | cs.IT cs.NI math.IT | Small cells have been proposed as a vehicle for wireless networks to keep up
with surging demand. Small cells come with a significant challenge of providing
backhaul to transport data to (or from) a gateway node in the core network. Fiber
based backhaul offers the high rates needed to meet this requirement, but is
costly and time-consuming to deploy, when not readily available. Wireless
backhaul is an attractive option for small cells as it provides a less
expensive and easy-to-deploy alternative to fiber. However, there are a multitude
of bands and features (e.g. LOS/NLOS, spatial multiplexing etc.) associated
with wireless backhaul that need to be used intelligently for small cells.
Candidate bands include: sub-6 GHz band that is useful in non-line-of-sight
(NLOS) scenarios, microwave band (6-42 GHz) that is useful in point-to-point
line-of-sight (LOS) scenarios, and millimeter wave bands (e.g. 60, 70 and 80
GHz) that have recently come into commercial use in LOS scenarios. In many
deployment topologies, it is advantageous to use aggregator nodes, located at
the roof tops of tall buildings near small cells. These nodes can provide high
data rate to multiple small cells in NLOS paths, sustain the same data rate to
gateway nodes using LOS paths and take advantage of all available bands. This
work performs the joint cost optimal aggregator node placement, power
allocation, channel scheduling and routing to optimize the wireless backhaul
network. We formulate mixed integer nonlinear programs (MINLP) to capture the
different interference and multiplexing patterns at sub-6 GHz and microwave
band. We solve the MINLP through linear relaxation and branch-and-bound
algorithm and apply our algorithm in an example wireless backhaul network of
downtown Manhattan.
|
1404.2668 | How Complex Contagions Spread Quickly in the Preferential Attachment
Model and Other Time-Evolving Networks | cs.SI physics.soc-ph | In this paper, we study the spreading speed of complex contagions in a social
network. A $k$-complex contagion starts from a set of initially infected seeds
such that any node with at least $k$ infected neighbors gets infected. Simple
contagions, i.e., $k=1$, quickly spread to the entire network in small world
graphs. However, fast spreading of complex contagions appears to be less likely
and more delicate; the successful cases depend crucially on the network
structure~\cite{G08,Ghasemiesfeh:2013:CCW}.
Our main result shows that complex contagions can spread fast in a general
family of time-evolving networks that includes the preferential attachment
model~\cite{barabasi99emergence}. We prove that if the initial seeds are chosen
as the oldest nodes in a network of this family, a $k$-complex contagion covers
the entire network of $n$ nodes in $O(\log n)$ steps. We show that the choice
of the initial seeds is crucial. If the initial seeds are uniformly randomly
chosen in the PA model, even with a polynomial number of them, a complex
contagion would stop prematurely. The oldest nodes in a preferential attachment
model are likely to have high degrees. However, we remark that it is actually
not the power law degree distribution per se that facilitates fast spreading of
complex contagions, but rather the evolutionary graph structure of such models.
Some members of the said family do not even have a power-law distribution.
We also prove that complex contagions are fast in the copy
model~\cite{KumarRaRa00}, a variant of the preferential attachment family.
Finally, we prove that when a complex contagion starts from an arbitrary set
of initial seeds on a general graph, determining if the number of infected
vertices is above a given threshold is $\mathbf{P}$-complete. Thus, one cannot
hope to categorize all the settings in which complex contagions percolate in a
graph.
|
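The claimed behaviour is easy to probe experimentally. The sketch below runs a k-complex contagion from the earliest nodes of a networkx preferential attachment graph; the parameters n, m, and k are illustrative, and list(range(m + 1)) is our stand-in for the "oldest nodes" seed set.

```python
import networkx as nx

def k_complex_contagion(G, seeds, k):
    """Rounds of a k-complex contagion: a node becomes infected once it has
    at least k infected neighbours. Returns (#infected, #rounds)."""
    infected = set(seeds)
    rounds = 0
    while True:
        new = {v for v in G if v not in infected
               and sum(u in infected for u in G[v]) >= k}
        if not new:
            return len(infected), rounds
        infected |= new
        rounds += 1

n, m, k = 5000, 4, 2
G = nx.barabasi_albert_graph(n, m, seed=0)
oldest = list(range(m + 1))        # stand-in for the oldest nodes of the PA process
size, rounds = k_complex_contagion(G, oldest, k)
print(f"infected {size}/{n} nodes in {rounds} rounds from the oldest seeds")
```

Replacing the oldest seeds with uniformly random ones in the same experiment typically stalls the contagion, matching the dichotomy the abstract describes.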
1404.2725 | Concave Switching in Single and Multihop Networks | cs.SY cs.NI math.OC math.PR | Switched queueing networks model wireless networks, input queued switches and
numerous other networked communications systems. For single-hop networks, we
consider a {($\alpha,g$)-switch policy} which combines the MaxWeight policies
with bandwidth sharing networks -- a further well studied model of Internet
congestion. We prove the maximum stability property for this class of
randomized policies. Thus these policies have the same first order behavior as
the MaxWeight policies. However, for multihop networks some of these
generalized policies address a number of critical weaknesses of the
MaxWeight/BackPressure policies.
For multihop networks with fixed routing, we consider the Proportional
Scheduler (or (1,log)-policy). In this setting, the BackPressure policy is
maximum stable, but must maintain a queue for every route-destination, which
typically grows rapidly with a network's size. However, this proportionally
fair policy only needs to maintain a queue for each outgoing link, which is
typically bounded in number. As is common with Internet routing, by maintaining
per-link queueing each node only needs to know the next hop for each packet and
not its entire route. Further, in contrast to BackPressure, the Proportional
Scheduler does not compare downstream queue lengths to determine weights, only
local link information is required. This leads to greater potential for
decomposed implementations of the policy. Through a reduction argument and an
entropy argument, we demonstrate that, whilst maintaining substantially less
queueing overhead, the Proportional Scheduler achieves maximum throughput
stability.
|
1404.2728 | Real-time Decolorization using Dominant Colors | cs.GR cs.CV | Decolorization is the process to convert a color image or video to its
grayscale version, and it has received great attention in recent years. An
ideal decolorization algorithm should preserve the original color contrast as
much as possible. Meanwhile, it should provide the final decolorized result as
fast as possible. However, most current methods suffer from
either unsatisfactory color information preservation or high computational cost,
limiting their application value. In this paper, a simple but effective
technique is proposed for real-time decolorization. Based on the typical
rgb2gray() color conversion model, which produces a grayscale image by linearly
combining R, G, and B channels, we propose a dominant color hypothesis and a
corresponding distance measurement metric to evaluate the quality of grayscale
conversion. The local optimum scheme provides several "good" candidates in a
confidence interval, from which the "best" result can be extracted.
Experimental results demonstrate that the remarkable simplicity of the proposed
method facilitates the processing of high-resolution images and videos in
real time using a common CPU.
|
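The search space implied above is small enough to enumerate: with weights restricted to non-negative multiples of 0.1 summing to 1, there are only 66 candidate rgb2gray combinations. The sketch below scores each candidate with a generic contrast-preservation error over random pixel pairs; this metric is our assumption, not the paper's dominant-color distance.

```python
import numpy as np

def decolorize(img, n_pairs=10_000, rng=None):
    """Pick linear rgb2gray weights from the discrete candidate set
    {(wr, wg, wb) >= 0, wr + wg + wb = 1, step 0.1} by matching grayscale
    differences of random pixel pairs to their color differences."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).astype(np.float64)
    i = rng.integers(0, h * w, n_pairs)
    j = rng.integers(0, h * w, n_pairs)
    d_rgb = np.linalg.norm(flat[i] - flat[j], axis=1) / np.sqrt(3)  # target contrast

    candidates = [(r / 10, g / 10, 1 - (r + g) / 10)
                  for r in range(11) for g in range(11 - r)]
    best, best_err = None, np.inf
    for wts in candidates:
        gray = flat @ np.array(wts)
        err = np.mean((np.abs(gray[i] - gray[j]) - d_rgb) ** 2)
        if err < best_err:
            best, best_err = wts, err
    return best, img.astype(np.float64) @ np.array(best)

rng = np.random.default_rng(8)
img = rng.integers(0, 256, (64, 64, 3))
wts, gray = decolorize(img)
print("chosen weights:", wts)
```

Because only 66 linear combinations are evaluated and each is a single matrix-vector product, this style of search is what makes real-time operation on a CPU plausible.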
1404.2741 | Nonlinearity of Boolean functions: an algorithmic approach based on
multivariate polynomials | cs.IT math.IT | We compute the nonlinearity of Boolean functions with Groebner basis
techniques, providing two algorithms: one over the binary field and the other
over the rationals. We also estimate their complexity. Then we show how to
improve our rational algorithm, arriving at a worst-case complexity of
$O(n2^n)$ operations over the integers, that is, sums and doublings. This way,
with a different approach, we reach the same complexity of established
algorithms, such as those based on the fast Walsh transform.
|
1404.2745 | Approximate controllability and lack of controllability to zero of the
heat equation with memory | cs.SY math.OC | In this paper we consider the heat equation with memory in a bounded region
$\Omega \subset\mathbb{R}^d$, $d\geq 1$, in the case that the propagation speed
of the signal is infinite (i.e. the Coleman-Gurtin model). The memory kernel
is of class $C^1$. We examine its controllability properties both under the
action of boundary controls or when the controls are distributed in a subregion
of $\Omega$. We prove approximate controllability of the system and, in
contrast with this, we prove the existence of initial conditions which cannot
be steered to hit the target $0$ in a certain time $T$, of course when the
memory kernel is not identically zero. In both cases we derive our results
from well known properties of the heat equation.
|
1404.2750 | Efficient Advert Assignment | cs.GT cs.SY math.OC | We develop a framework for the analysis of large-scale Ad-auctions where
adverts are assigned over a continuum of search types. For this pay-per-click
market, we provide an efficient mechanism that maximizes social welfare. In
particular, we show that the social welfare optimization can be solved in
separate optimizations conducted on the time-scales relevant to the search
platform and advertisers. Here, on each search occurrence, the platform solves
an assignment problem and, on a slower time-scale, each advertiser submits a
bid which matches its demand for click-throughs with supply. Importantly,
knowledge of global parameters, such as the distribution of search terms, is
not required when separating the problem in this way. Exploiting the
information asymmetry between the platform and advertiser, we describe a simple
mechanism which incentivizes truthful bidding and has a unique Nash equilibrium
that is socially optimal, and thus implements our decomposition. Further, we
consider models where advertisers adapt their bids smoothly over time, and
prove convergence to the solution that maximizes social welfare. Finally, we
describe several extensions which illustrate the flexibility and tractability
of our framework.
|
1404.2768 | Verification of confliction and unreachability in rule-based expert
systems with model checking | cs.AI | It is important to find optimal solutions for structural errors in rule-based
expert systems. Solutions for discovering such errors by using model checking
techniques have already been proposed, but these solutions have problems such
as state space explosion. In this paper, to overcome these problems, we model
the rule-based systems as finite state transition systems and express
confliction and unreachability as Computation Tree Logic (CTL) formulas
and then use the technique of model checking to detect confliction and
unreachability in rule-based systems with the model checker UPPAAL.
|
1404.2772 | A New Clustering Approach for Anomaly Intrusion Detection | cs.DC cs.CR cs.LG | Recent advances in technology have made our work easier compared to earlier
times. Computer networks are growing day by day, but the security of computers
and networks has always been a major concern for organizations varying from
smaller to larger enterprises. It is true that organizations are aware of the
possible threats and attacks and always prepare for the safer side, but due to
some loopholes attackers are still able to mount attacks. Intrusion detection
is one of the major fields of research, and researchers are trying to find new
algorithms for detecting intrusions. Clustering techniques from data mining are
an area of active research for detecting possible intrusions and attacks. This
paper presents a new clustering approach for anomaly intrusion detection based
on the K-medoids clustering method with certain modifications. The proposed
algorithm achieves a high detection rate and overcomes the disadvantages of the
K-means algorithm.
|
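For reference, here is a plain K-medoids baseline of the kind the paper modifies, with anomalies flagged as the points farthest from their medoid. The data, the farthest-point initialization, and the 98th-percentile threshold are illustrative choices.

```python
import numpy as np

def k_medoids(X, k, iters=20, rng=None):
    """Plain k-medoids (PAM-style alternating update) on Euclidean distances."""
    rng = rng or np.random.default_rng(0)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Farthest-point initialization: each new medoid maximizes its distance
    # to the medoids chosen so far.
    medoids = [int(rng.integers(len(X)))]
    while len(medoids) < k:
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            # new medoid minimizes the total distance to its cluster members
            new.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=0))])
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1), D

# Toy "connection records": normal traffic in two clusters plus a few outliers.
rng = np.random.default_rng(9)
normal = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
attacks = rng.uniform(-10, 20, (8, 2))
X = np.vstack([normal, attacks])

medoids, labels, D = k_medoids(X, k=2, rng=rng)
dist_to_medoid = D[np.arange(len(X)), medoids[labels]]
threshold = np.percentile(dist_to_medoid, 98)    # flag the farthest 2% as anomalies
print("flagged indices:", np.flatnonzero(dist_to_medoid > threshold))
```

Unlike K-means, the medoid is always an actual data point and the update uses only pairwise distances, which is why K-medoids is more robust to the outliers it is meant to detect.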
1404.2796 | Linear Batch Codes | cs.IT math.IT | In an application, where a client wants to obtain many elements from a large
database, it is often desirable to have some load balancing. Batch codes
(introduced by Ishai et al. in STOC 2004) make it possible to do exactly that:
the large database is divided between many servers, so that the client has to
only make a small number of queries to every server to obtain sufficient
information to reconstruct all desired elements. Other important parameters of
the batch codes are total storage and the number of servers. Batch codes also
have applications in cryptography (namely, in the construction of multi-query
computationally-private information retrieval protocols).
In this work, we initiate the study of linear batch codes. These codes, in
particular, are of potential use in distributed storage systems. We show that a
generator matrix of a binary linear batch code is also a generator matrix of
a classical binary linear error-correcting code. This immediately implies that a
variety of upper bounds, which were developed for error-correcting codes, are
applicable also to binary linear batch codes. We also propose new methods to
construct large linear batch codes from the smaller ones.
|
1404.2813 | Cycle flow based module detection in directed recurrence networks | physics.data-an cs.SI physics.soc-ph | We present a new cycle flow based method for finding fuzzy partitions of
weighted directed networks coming from time series data. We show that this
method overcomes essential problems of most existing clustering approaches,
which tend to ignore important directional information by considering only
one-step, one-directional node connections. Our method introduces a novel
measure of communication between nodes using multi-step, bidirectional
transitions encoded by a cycle decomposition of the probability flow. Symmetric
properties of this measure enable us to construct an undirected graph that
captures information flow of the original graph seen by the data and apply
clustering methods designed for undirected graphs. Finally, we demonstrate our
algorithm by analyzing earthquake time series data, which naturally induce
(time-)directed networks. This article was originally published in EPL,
DOI: 10.1209/0295-5075/108/68008. This version differs from the published
version by minor formatting details.
|
1404.2819 | Decoding of Quasi-Cyclic Codes up to A New Lower Bound on the Minimum
Distance | cs.IT math.IT | A new lower bound on the minimum Hamming distance of linear quasi-cyclic
codes over finite fields is proposed. It is based on spectral analysis and
generalizes the Semenov-Trifonov bound in a similar way as the Hartmann-Tzeng
bound extends the BCH approach for cyclic codes. Furthermore, a syndrome-based
algebraic decoding algorithm is given.
|
1404.2825 | Asymptotics of Fingerprinting and Group Testing: Capacity-Achieving
Log-Likelihood Decoders | cs.IT cs.CR math.IT math.ST stat.TH | We study the large-coalition asymptotics of fingerprinting and group testing,
and derive explicit decoders that provably achieve capacity for many of the
considered models. We do this both for simple decoders (fast but suboptimal)
and for joint decoders (slow but optimal), and both for informed and uninformed
settings.
For fingerprinting, we show that if the pirate strategy is known, the
Neyman-Pearson-based log-likelihood decoders provably achieve capacity,
regardless of the strategy. The decoder built against the interleaving attack
is further shown to be a universal decoder, able to deal with arbitrary attacks
and achieving the uninformed capacity. This universal decoder is shown to be
closely related to the Lagrange-optimized decoder of Oosterwijk et al. and the
empirical mutual information decoder of Moulin. Joint decoders are also
proposed, and we conjecture that these also achieve the corresponding joint
capacities.
For group testing, the simple decoder for the classical model is shown to be
more efficient than the one of Chan et al. and it provably achieves the simple
group testing capacity. For generalizations of this model such as noisy group
testing, the resulting simple decoders also achieve the corresponding simple
capacities.
|
1404.2843 | Practical Comparison of Optimization Algorithms for Learning-Based MPC
with Linear Models | math.OC cs.RO | Learning-based control methods are an attractive approach for addressing
performance and efficiency challenges in robotics and automation systems. One
such technique that has found application in these domains is learning-based
model predictive control (LBMPC). An important novelty of LBMPC lies in the
fact that its robustness and stability properties are independent of the type
of online learning used. This allows the use of advanced statistical or machine
learning methods to provide the adaptation for the controller. This paper is
concerned with providing practical comparisons of different optimization
algorithms for implementing the LBMPC method, for the special case where the
dynamic model of the system is linear and the online learning provides linear
updates to the dynamic model. For comparison purposes, we have implemented a
primal-dual infeasible start interior point method that exploits the sparsity
structure of LBMPC. Our open source implementation (called LBmpcIPM) is
available through a BSD license and is provided freely to enable the rapid
implementation of LBMPC on other platforms. This solver is compared to the
dense active set solvers LSSOL and qpOASES using a quadrotor helicopter
platform. Two scenarios are considered: The first is a simulation comparing
hovering control for the quadrotor, and the second is on-board control
experiments of dynamic quadrotor flight. Though the LBmpcIPM method has better
asymptotic computational complexity than LSSOL and qpOASES, we find that for
certain integrated systems (like our quadrotor testbed) these methods can
outperform LBmpcIPM. This suggests that actual benchmarks should be used when
choosing which algorithm is used to implement LBMPC on practical systems.
|
1404.2862 | Tangle Machines | cs.IT cs.SY math.GT math.IT quant-ph | Tangle machines are topologically inspired diagrammatic models. Their novel
feature is their natural notion of equivalence. Equivalent tangle machines may
differ locally, but globally they are considered to share the same information
content. The goal of tangle machine equivalence is to provide a
context-independent method to select, from among many ways to perform a task,
the `best' way to perform the task. The concept of equivalent tangle machines
is illustrated through examples in which they represent recursive computations,
networks of adiabatic quantum computations, and networks of distributed
information processing.
|
1404.2863 | Tangle Machines II: Invariants | cs.IT cs.SY math.GT math.IT quant-ph | The preceding paper constructed tangle machines as diagrammatic models, and
illustrated their utility with a number of examples. The information content of
a tangle machine is contained in characteristic quantities associated to
equivalence classes of tangle machines, which are called invariants. This paper
constructs invariants of tangle machines. Chief among these are the prime
factorizations of a machine, which are essentially unique. This is proven using
low dimensional topology, through representing a colour-suppressed machine as a
diagram for a network of jointly embedded spheres and intervals in 4-space. The
complexity of a tangle machine is defined as its number of prime factors.
|
1404.2864 | LDPC coded transmissions over the Gaussian broadcast channel with
confidential messages | cs.IT cs.CR math.IT | We design and assess some practical low-density parity-check (LDPC) coded
transmission schemes for the Gaussian broadcast channel with confidential
messages (BCC). This channel model is different from the classical wiretap
channel model as the unauthorized receiver (Eve) must be able to decode some
part of the information. Hence, the reliability and security targets are
different from those of the wiretap channel. In order to design and assess
practical coding schemes, we use the error rate as a metric of the performance
achieved by the authorized receiver (Bob) and the unauthorized receiver (Eve).
We study the system feasibility, and show that two different levels of
protection against noise are required on the public and the secret messages.
This can be achieved in two ways: i) by using LDPC codes with unequal error
protection (UEP) of the transmitted information bits or ii) by using two
classical non-UEP LDPC codes with different rates. We compare these two
approaches and show that, for the considered examples, the solution exploiting
UEP LDPC codes is more efficient than that using non-UEP LDPC codes.
|
1404.2872 | TreQ-CG: Clustering Accelerates High-Throughput Sequencing Read Mapping | cs.CE | As high-throughput sequencers become standard equipment outside of sequencing
centers, there is an increasing need for efficient methods for pre-processing
and primary analysis. While a vast literature proposes methods for HTS data
analysis, we argue that significant improvements can still be gained by
exploiting expensive pre-processing steps which can be amortized with savings
from later stages. We propose a method to accelerate and improve read mapping
based on an initial clustering of possibly billions of high-throughput
sequencing reads, yielding clusters of high stringency and a high degree of
overlap. This clustering improves on the state-of-the-art in running time for
small datasets and, for the first time, makes clustering high-coverage human
libraries feasible. Given the efficiently computed clusters, only one
representative read from each cluster needs to be mapped using a traditional
readmapper such as BWA, instead of individually mapping all reads. On human
reads, all processing steps, including clustering and mapping, only require
11%-59% of the time for individually mapping all reads, achieving speed-ups for
all readmappers, while minimally affecting mapping quality. This accelerates a
highly sensitive readmapper such as Stampy to be competitive with a fast
readmapper such as BWA on unclustered reads.
|
1404.2878 | Overview of Stemming Algorithms for Indian and Non-Indian Languages | cs.CL | Stemming is a pre-processing step in Text Mining applications as well as a
very common requirement of Natural Language processing functions. Stemming is
the process for reducing inflected words to their stem. The main purpose of
stemming is to reduce different grammatical forms / word forms of a word like
its noun, adjective, verb, adverb etc. to its root form. Stemming is widely
used in Information Retrieval systems and reduces the size of index files. We
can say that the goal of stemming is to reduce inflectional forms and sometimes
derivationally related forms of a word to a common base form. In this paper we
discuss different stemming algorithms for non-Indian and Indian languages,
methods of stemming, accuracy and errors.
|
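A quick illustration of the two flavours of stemmer discussed above: a classical rule-based English stemmer (Porter, via NLTK, assumed installed) and a toy longest-match suffix stripper of the kind many Indian-language stemmers elaborate on with far larger rule tables.

```python
from nltk.stem import PorterStemmer        # assumption: nltk is installed

porter = PorterStemmer()
for w in ["connection", "connected", "connecting", "studies", "studying"]:
    print(f"{w:12s} -> {porter.stem(w)}")

# A toy longest-match suffix stripper: the basic idea behind many rule-based
# stemmers, with an illustrative suffix list.
SUFFIXES = ["ation", "ing", "ness", "es", "ed", "s"]

def strip_stem(word, min_stem=3):
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[:-len(suf)]
    return word

print(strip_stem("walking"), strip_stem("boxes"))   # -> walk box
```

Outputs such as "studies -> studi" show the over- and under-stemming errors that accuracy evaluations of stemmers typically count.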
1404.2885 | A Networks and Machine Learning Approach to Determine the Best College
Coaches of the 20th-21st Centuries | stat.AP cs.LG cs.SI | Our objective is to find the five best college sports coaches of the past century
for three different sports. We decided to look at men's basketball, football,
and baseball. We wanted to use an approach that could definitively determine
team skill from the games played, and then use a machine-learning algorithm to
calculate the correct coach skills for each team in a given year. We created a
networks-based model to calculate team skill from historical game data. A
digraph was created for each year in each sport. Nodes represented teams, and
edges represented a game played between two teams. The arrowhead pointed
towards the losing team. We calculated the team skill of each graph using a
right-hand eigenvector centrality measure. This way, teams that beat good teams
will be ranked higher than teams that beat mediocre teams. The eigenvector
centrality rankings for most years were well correlated with tournament
performance and poll-based rankings. We assumed that the relationship between
coach skill $C_s$, player skill $P_s$, and team skill $T_s$ was $C_s \cdot P_s
= T_s$. We then created a function to describe the probability that a given
score difference would occur based on player skill and coach skill. We
multiplied the probabilities of all edges in the network together to find the
probability that the correct network would occur with any given player skill
and coach skill matrix. We were able to determine player skill as a function of
team skill and coach skill, eliminating the need to optimize two unknown
matrices. The top five coaches in each year were noted, and the top coach of
all time was calculated by dividing the number of times that coach ranked in
the yearly top five by the years said coach had been active.
|
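The team-skill step described above can be reproduced in a few lines with networkx. Reversing the edges (loser to winner) makes the in-edge-based eigenvector centrality equivalent to the right-eigenvector ranking on the winner-to-loser digraph; the toy season below is invented, and it includes one upset so the graph is strongly connected (eigenvector centrality is degenerate on an acyclic tournament).

```python
import networkx as nx

# Invented mini-season; each tuple is (winner, loser).
games = [("Duke", "UNC"), ("Duke", "Kansas"), ("UNC", "Kansas"),
         ("Kansas", "Kentucky"), ("UNC", "Kentucky"), ("Duke", "Kentucky"),
         ("Kentucky", "Duke")]                     # one upset keeps the graph cyclic

G = nx.DiGraph()
G.add_edges_from((loser, winner) for winner, loser in games)  # credit flows to winners
skill = nx.eigenvector_centrality(G, max_iter=1000)
for team, s in sorted(skill.items(), key=lambda kv: -kv[1]):
    print(f"{team:9s} team skill = {s:.3f}")
```

Given a player-skill estimate, the coach skill would then follow from the abstract's relation C_s * P_s = T_s by division.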
1404.2892 | Modelling of Walking Humanoid Robot With Capability of Floor Detection
and Dynamic Balancing Using Colored Petri Net | cs.RO | Most humanoid robots have a highly complicated structure, and designing robots
that are very similar to humans is extremely difficult. In this paper, the
modelling of a general and comprehensive algorithm for the control of humanoid
robots is presented using Colored Petri Nets. To keep the robot dynamically
balanced, a combination of gyroscope and accelerometer sensors is used in the
algorithm. Image processing is used to address two fundamental issues: first,
detection of the target or object which the robot must follow; second,
detection of the ground surface so that the walking robot can maintain its
balance just like a human and show its best performance. The presented model
gives a high-level view of the humanoid robot's operations.
|
1404.2903 | Thoughts on a Recursive Classifier Graph: a Multiclass Network for Deep
Object Recognition | cs.CV cs.LG cs.NE | We propose a general multi-class visual recognition model, termed the
Classifier Graph, which aims to generalize and integrate ideas from many of
today's successful hierarchical recognition approaches. Our graph-based model
has the advantage of enabling rich interactions between classes from different
levels of interpretation and abstraction. The proposed multi-class system is
efficiently learned using step by step updates. The structure consists of
simple logistic linear layers with inputs from features that are automatically
selected from a large pool. Each newly learned classifier becomes a potential
new feature. Thus, our feature pool can consist both of initial manually
designed features as well as learned classifiers from previous steps (graph
nodes), each copied many times at different scales and locations. In this
manner we can learn and grow both a deep, complex graph of classifiers and a
rich pool of features at different levels of abstraction and interpretation.
Our proposed graph of classifiers becomes a multi-class system with a recursive
structure, suitable for deep detection and recognition of several classes
simultaneously.
|
1404.2904 | Construction A of Lattices over Number Fields and Block Fading Wiretap
Coding | cs.IT math.IT math.NT | We propose a lattice construction from totally real and CM fields, which
naturally generalizes the Construction A of lattices from $p$-ary codes
obtained from the cyclotomic field $\mathbb{Q}(\zeta_p)$, $p$ a prime, which in
turn contains the so-called Construction A of lattices from binary codes as a
particular case. We focus on the maximal totally real subfield
$\mathbb{Q}(\zeta_{p^r}+\zeta_{p^r}^{-1})$ of the cyclotomic field
$\mathbb{Q}(\zeta_{p^r})$, $r\geq 1$. Our construction has applications to
coset encoding of algebraic lattice codes, and we detail the case of coset
encoding of block fading wiretap codes.
|
1404.2923 | Self-organization towards optimally interdependent networks by means of
coevolution | physics.soc-ph cs.SI q-bio.PE | Coevolution between strategy and network structure is established as a means
to arrive at optimal conditions for resolving social dilemmas. Yet recent
research highlights that the interdependence between networks may be just as
important as the structure of an individual network. We therefore introduce
coevolution of strategy and network interdependence to study whether it can
give rise to elevated levels of cooperation in the prisoner's dilemma game. We
show that the interdependence between networks self-organizes so as to yield
optimal conditions for the evolution of cooperation. Even under extremely
adverse conditions cooperators can prevail where on isolated networks they
would perish. This is due to the spontaneous emergence of a two-class society,
with only the upper class being allowed to control and take advantage of the
interdependence. Spatial patterns reveal that cooperators, once they arrive in the
upper class, are much more competent than defectors in sustaining compact
clusters of followers. Indeed, the asymmetric exploitation of interdependence
confers to them a strong evolutionary advantage that may resolve even the
toughest of social dilemmas.
|
1404.2948 | Gradient-based Laplacian Feature Selection | cs.LG | Analysis of high dimensional noisy data is of essence across a variety of
research fields. Feature selection techniques are designed to find the relevant
feature subset that can facilitate classification or pattern detection.
Traditional (supervised) feature selection methods utilize label information to
guide the identification of relevant feature subsets. In this paper, however,
we consider the unsupervised feature selection problem. Without the label
information, it is particularly difficult to identify a small set of relevant
features due to the noisy nature of real-world data which corrupts the
intrinsic structure of the data. Our Gradient-based Laplacian Feature Selection
(GLFS) selects important features by minimizing the variance of the Laplacian
regularized least squares regression model. With $\ell_1$ relaxation, GLFS can
find a sparse subset of features that is relevant to the Laplacian manifolds.
Extensive experiments on simulated data, three real-world object recognition
datasets, and two computational biology datasets have illustrated the power and superior
performance of our approach over multiple state-of-the-art unsupervised feature
selection methods. Additionally, we show that GLFS selects a sparser set of
more relevant features in a supervised setting outperforming the popular
elastic net methodology.
|
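The abstract does not give GLFS in computable form; as a related point of reference only, the classic unsupervised Laplacian Score (He et al., 2005), which likewise ranks features by graph-Laplacian structure, can be sketched as follows:

```python
# Not the paper's GLFS: a sketch of the classic Laplacian Score baseline,
# which also ranks features using a k-NN graph Laplacian (lower = better).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_score(X, n_neighbors=5):
    W = kneighbors_graph(X, n_neighbors, mode="connectivity",
                         include_self=False)
    W = 0.5 * (W + W.T).toarray()          # symmetrized adjacency weights
    D = np.diag(W.sum(axis=1))             # degree matrix
    L = D - W                              # unnormalized graph Laplacian
    ones = np.ones(X.shape[0])
    scores = []
    for r in range(X.shape[1]):            # assumes non-constant features
        f = X[:, r] - (X[:, r] @ D @ ones) / (ones @ D @ ones)
        scores.append((f @ L @ f) / (f @ D @ f))
    return np.array(scores)
```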
1404.2959 | SocioAware Content Distribution using P2P solutions in Hybrid Networks | cs.SI cs.NI | The growing online traffic that is bringing the infrastructure to its limits
induces an urgent demand for an efficient content delivery model. Capitalizing
on social networks and using advanced delivery networks can potentially help to
solve this problem. However, due to the complex nature of the involved networks
such a model is difficult to assess. In this paper we use a simulative approach
to analyze how the SatTorrent P2P protocol supported by social networks can
improve content delivery by means of reduced download duration and traffic.
|
1404.2983 | Couple Control Model Implementation on Antagonistic Mono- and
Bi-Articular Actuators | physics.med-ph cs.RO | Recently, robot-assisted therapy devices have been increasingly used for spinal
cord injury (SCI) rehabilitation in assisting handicapped patients to regain
their impaired movements. Assistive robotic systems may not be able to cure or
fully compensate for impairments, but they should be able to assist certain impaired
functions and ease movements. In this study, a couple control model for
lower-limb orthosis of a body weight support gait training system is proposed.
The developed leg orthosis implements the use of pneumatic artificial muscle as
an actuation system. The pneumatic muscle was arranged antagonistically to form
two pairs of mono-articular muscles (i.e., hip and knee joints) and a pair of
bi-articular actuators (i.e., rectus femoris and hamstring). The results of the
proposed couple control model showed that it was able to simultaneously
control the antagonistic mono- and bi-articular actuators and sufficiently
perform the walking motion of the leg orthosis.
|
1404.2984 | Distribution-Aware Sampling and Weighted Model Counting for SAT | cs.AI cs.DS | Given a CNF formula and a weight for each assignment of values to variables,
two natural problems are weighted model counting and distribution-aware
sampling of satisfying assignments. Both problems have a wide variety of
important applications. Due to the inherent complexity of the exact versions of
the problems, interest has focused on solving them approximately. Prior work in
this area scaled only to small problems in practice, or failed to provide
strong theoretical guarantees, or employed a computationally-expensive maximum
a posteriori probability (MAP) oracle that assumes prior knowledge of a
factored representation of the weight distribution. We present a novel approach
that works with a black-box oracle for weights of assignments and requires only
an {\NP}-oracle (in practice, a SAT-solver) to solve both the counting and
sampling problems. Our approach works under mild assumptions on the
distribution of weights of satisfying assignments, provides strong theoretical
guarantees, and scales to problems involving several thousand variables. We
also show that the assumptions can be significantly relaxed while improving
computational efficiency if a factored representation of the weights is known.
|
1404.2986 | A Tutorial on Independent Component Analysis | cs.LG stat.ML | Independent component analysis (ICA) has become a standard data analysis
technique applied to an array of problems in signal processing and machine
learning. This tutorial provides an introduction to ICA based on linear algebra,
formulating an intuition for ICA from first principles. The goal of this
tutorial is to provide a solid foundation on this advanced topic so that one
might learn the motivation behind ICA, learn why and when to apply this
technique and in the process gain an introduction to this exciting field of
active research.
|
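As a hands-on companion to the tutorial above, here is a standard blind-source-separation example using scikit-learn's FastICA (one common ICA implementation; the tutorial itself is library-agnostic, and the toy sources below are assumptions):

```python
# A minimal blind-source-separation sketch with scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
S += 0.05 * rng.standard_normal(S.shape)           # small additive noise
A = np.array([[1.0, 0.5], [0.5, 1.0]])             # unknown mixing matrix
X = S @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # recovered sources (up to sign/scale/order)
A_hat = ica.mixing_            # estimated mixing matrix
```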
1404.2993 | On More Bent Functions From Dillon Exponents | cs.IT math.IT | In this paper, we obtain a new class of $p$-ary binomial bent functions which
are determined by Kloosterman sums. The bentness of another three classes of
functions is characterized by some exponential sums and some results in
\cite{Linian2013} are generalized. Furthermore, we show that, in some special
cases, some bent functions are determined by Kloosterman sums.
|
1404.2997 | Automatic Detection of Reuses and Citations in Literary Texts | cs.CL cs.DL | For more than forty years now, modern theories of literature (Compagnon,
1979) insist on the role of paraphrases, rewritings, citations, reciprocal
borrowings and mutual contributions of any kinds. The notions of
intertextuality, transtextuality, hypertextuality/hypotextuality, were
introduced in the seventies and eighties to approach these phenomena. The
careful analysis of these references is of particular interest in evaluating
the distance that the creator voluntarily introduces with his/her masters.
Phoebus is a collaborative project in which computer scientists from the
University Pierre and Marie Curie (LIP6-UPMC) collaborate with the literary
teams of Paris-Sorbonne University, with the aim of developing efficient tools for
literary studies that take advantage of modern computer science techniques. In
this context, we have developed a piece of software that automatically detects
and explores networks of textual reuses in classical literature. This paper
describes the principles on which is based this program, the significant
results that have already been obtained and the perspectives for the near
future.
|
1404.2999 | A Reverse Hierarchy Model for Predicting Eye Fixations | cs.CV | A number of psychological and physiological findings suggest that early
visual attention works in a coarse-to-fine way, which lays a basis for the
reverse hierarchy theory (RHT). This theory states that attention propagates
from the top level of the visual hierarchy that processes gist and abstract
information of input, to the bottom level that processes local details.
Inspired by the theory, we develop a computational model for saliency detection
in images. First, the original image is downsampled to different scales to
constitute a pyramid. Then, saliency on each layer is obtained by image
super-resolution reconstruction from the layer above, which is defined as
unpredictability from this coarse-to-fine reconstruction. Finally, saliency on
each layer of the pyramid is fused into stochastic fixations through a
probabilistic model, where attention initiates from the top layer and
propagates downward through the pyramid. Extensive experiments on two standard
eye-tracking datasets show that the proposed method can achieve competitive
results with state-of-the-art models.
|
1404.3001 | Joint Successive Cancellation Decoding of Polar Codes over Intersymbol
Interference Channels | cs.IT math.IT | Polar codes are a class of capacity-achieving codes for the binary-input
discrete memoryless channels (B-DMCs). However, when applied in channels with
intersymbol interference (ISI), the codes may perform poorly with BCJR
equalization and conventional decoding methods. To deal with the ISI problem,
in this paper a new joint successive cancellation (SC) decoding algorithm is
proposed for polar codes in ISI channels, which combines the equalization and
conventional decoding. The initialization information of the decoding method is
the likelihood functions of ISI codeword symbols rather than the codeword
symbols. The decoding adopts recursion formulas like conventional SC decoding
and requires no iterations. This is in contrast to the conventional iterative
algorithm which performs iterations between the equalizer and decoder. In
addition, the proposed SC trellis decoding can be easily extended to list
decoding which can further improve the performance. Simulation shows that the
proposed scheme significantly outperforms the conventional decoding schemes in
ISI channels.
|
1404.3010 | On the Energy-Spectral Efficiency Trade-off of the MRC Receiver in
Massive MIMO Systems with Transceiver Power Consumption | cs.IT math.IT | We consider the uplink of a multiuser massive MIMO system wherein a base
station (BS) having $M$ antennas communicates coherently with $K$ single
antenna user terminals (UTs). We study the energy efficiency of this system
while taking the transceiver power consumption at the UTs and the BS into
consideration. For a given spectral efficiency $R$ and fixed transceiver power
consumption parameters, we propose and analyze the problem of maximizing the
energy efficiency as a function of $(M,K)$. For the maximum ratio combining
(MRC) detector at the BS we show that with increasing $R$, $(M,K)$ can be
adaptively increased in such a way that the energy efficiency converges to a
positive constant as $R \rightarrow \infty$ ($(M,K)$ is increased in such a way
that a constant per-user spectral efficiency $R/K$ is maintained). This is in
contrast to the fixed $(M,K)$ scenario where the energy efficiency is known to
converge to zero as $R \rightarrow \infty$. We also observe that for large $R$,
the optimal $(M,K)$ maximizing the energy efficiency is such that the total
power consumed by the power amplifiers (PA) in all the $K$ UTs is a small
fraction of the total system power consumption.
|
1404.3012 | Bayesian image segmentations by Potts prior and loopy belief propagation | cs.CV cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML | This paper presents a Bayesian image segmentation model based on Potts prior
and loopy belief propagation. The proposed Bayesian model involves several
terms, including the pairwise interactions of Potts models, and the mean
vectors and covariance matrices of Gaussian distributions in color image modeling.
These terms are often referred to as hyperparameters in statistical machine
learning theory. In order to determine these hyperparameters, we propose a new
scheme for hyperparameter estimation based on conditional maximization of
entropy in the Potts prior. The algorithm is given based on loopy belief
propagation. In addition, we compare our conditional maximum entropy framework
with the conventional maximum likelihood framework, and also clarify how the
first-order phase transitions in LBP for Potts models influence our
hyperparameter estimation procedures.
|
1404.3017 | A Link-based Approach to Entity Resolution in Social Networks | cs.IR cs.DS cs.SI | Social networks initially had been places for people to contact each other,
find friends or new acquaintances. As such, they have long proved interesting
for machine-aided analysis. Recent developments, however, have pivoted social networks
to being among the main fields of information exchange, opinion expression and
debate. As a result there is growing interest in both analyzing and integrating
social network services. In this environment efficient information retrieval is
hindered by the vast amount and varying quality of the user-generated content.
Guiding users to relevant information is a valuable service and also a
difficult task, where a crucial part of the process is accurately resolving
duplicate entities to real-world ones. In this paper we propose a novel
approach that utilizes the principles of link mining to successfully extend the
methodology of entity resolution to multitype problems. The proposed method is
presented using an illustrative social network-based real-world example and
validated by comprehensive evaluation of the results.
|
1404.3022 | Multi-Trial Guruswami-Sudan Decoding for Generalised Reed--Solomon Codes | cs.IT math.IT | An iterated refinement procedure for the Guruswami-Sudan list decoding
algorithm for Generalised Reed-Solomon codes based on Alekhnovich's module
minimisation is proposed. The method is parametrisable and allows variants of
the usual list decoding approach. In particular, finding the list of closest
codewords within an intermediate radius can be performed with improved
average-case complexity while retaining the worst-case complexity. We provide a
detailed description of the module minimisation, reanalysing the
Mulders-Storjohann algorithm and drawing new connections to both Alekhnovich's
algorithm and Lee-O'Sullivan's. Furthermore, we show how to incorporate the
re-encoding technique of K\"otter and Vardy into our iterative algorithm.
|
1404.3023 | Markov Chain Analysis of Evolution Strategies on a Linear Constraint
Optimization Problem | cs.NE math.OC | This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised
comparison-based adaptive search algorithm, on a simple constraint optimisation
problem. The algorithm uses resampling to handle the constraint and optimizes a
linear function with a linear constraint. Two cases are investigated: first the
case where the step-size is constant, and second the case where the step-size
is adapted using path length control. We exhibit for each case a Markov chain
whose stability analysis would allow us to deduce the divergence of the
algorithm depending on its internal parameters. We show divergence at a
constant rate when the step-size is constant. We sketch that with step-size
adaptation geometric divergence takes place. Our results complement previous
studies where stability was assumed.
|
1404.3026 | On the Ground Validation of Online Diagnosis with Twitter and Medical
Records | cs.SI cs.CL cs.LG | Social media has been considered as a data source for tracking disease.
However, most analyses are based on models that prioritize strong correlation
with population-level disease rates over determining whether or not specific
individual users are actually sick. Taking a different approach, we develop a
novel system for social-media based disease detection at the individual level
using a sample of professionally diagnosed individuals. Specifically, we
develop a system for making an accurate influenza diagnosis based on an
individual's publicly available Twitter data. We find that about half (17/35 =
48.57%) of the users in our sample who were sick explicitly discuss their
disease on Twitter. By developing a meta classifier that combines text
analysis, anomaly detection, and social network analysis, we are able to
diagnose an individual with greater than 99% accuracy even if she does not
discuss her health.
|
1404.3033 | How to go Viral: Cheaply and Quickly | cs.SI cs.DS math.CO | Given a social network represented by a graph $G$, we consider the problem of
finding a bounded cardinality set of nodes $S$ with the property that the
influence spreading from $S$ in $G$ is as large as possible. The dynamics that
govern the spread of influence is the following: initially only elements in $S$
are influenced; subsequently at each round, the set of influenced elements is
augmented by all nodes in the network that have a sufficiently large number of
already influenced neighbors. While it is known that the general problem is
hard to solve --- even in the approximate sense --- we present exact polynomial
time algorithms for trees, paths, cycles, and complete graphs.
|
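The spreading dynamics in 1404.3033 are simple to simulate; a minimal sketch, assuming an integer threshold per node (the "sufficiently large number" of influenced neighbors) is given:

```python
# Threshold-cascade simulation of the dynamics described above.
# adj: node -> list of neighbors; thresholds: node -> int; seeds: initial set.
def influence_spread(adj, thresholds, seeds):
    influenced = set(seeds)
    changed = True
    while changed:                        # rounds, as in the abstract
        changed = False
        for v in adj:
            if v not in influenced:
                hits = sum(1 for u in adj[v] if u in influenced)
                if hits >= thresholds[v]:
                    influenced.add(v)
                    changed = True
    return influenced

# Toy path graph 0-1-2-3 with unit thresholds: seeding node 0 reaches all.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(influence_spread(adj, {v: 1 for v in adj}, {0}))  # {0, 1, 2, 3}
```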
1404.3041 | Labelled OSPA metric for fixed and known number of targets | cs.SY | The evaluation of multiple target tracking algorithms with labelled sets can
be done using the labelled optimal subpattern assignment (LOSPA) metric. In
this paper, we provide the expression of the same metric for fixed and known
number of targets when vector notation is used.
|
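For context (not the paper's labelled expression): for a fixed, known number of targets the underlying OSPA distance reduces to an optimal assignment between the two state sets, e.g.:

```python
# Plain OSPA distance for two equally sized sets of target states; the
# labelled variant in the paper additionally penalizes label mismatches.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """X, Y: (n, d) arrays of states; c: cutoff; p: metric order."""
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2), c)
    row, col = linear_sum_assignment(D ** p)  # optimal subpattern assignment
    return (D[row, col] ** p).mean() ** (1.0 / p)
```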
1404.3075 | Practical LDPC coded modulation schemes for the fading broadcast channel
with confidential messages | cs.IT cs.CR math.IT | The broadcast channel with confidential messages is a well studied scenario
from the theoretical standpoint, but there is still a lack of practical schemes
able to achieve some fixed level of reliability and security over such a
channel. In this paper, we consider a quasi-static fading channel in which both
public and private messages must be sent from the transmitter to the receivers,
and we aim at designing suitable coding and modulation schemes to achieve such
a target. For this purpose, we adopt the error rate as a metric, by considering
that reliability (security) is achieved when a sufficiently low (high) error
rate is experienced at the receiving side. We show that some conditions exist
on the system feasibility, and that some outage probability must be tolerated
to cope with the fading nature of the channel. The proposed solution exploits
low-density parity-check codes with unequal error protection, which are able to
guarantee two different levels of protection against noise for the public and
the private information, in conjunction with different modulation schemes for
the public and the private message bits.
|
1404.3078 | Distributed Compressed Sensing for Sensor Networks with Packet Erasures | cs.IT math.IT | We study two approaches to distributed compressed sensing for in-network data
compression and signal reconstruction at a sink in a wireless sensor network
where sensors are placed on a straight line. Communication to the sink is
considered to be bandwidth-constrained due to the large number of devices. By
using distributed compressed sensing for compression of the data in the
network, the communication cost (bandwidth usage) to the sink can be decreased
at the expense of delay induced by the local communication necessary for
compression. We investigate the relation between cost and delay given a certain
reconstruction performance requirement when using basis pursuit denoising for
reconstruction. Moreover, we analyze and compare the performance degradation
due to erased packets sent to the sink of the two approaches.
|
1404.3114 | Conditions for viral influence spreading through multiplex correlated
social networks | physics.soc-ph cs.SI physics.data-an | A fundamental problem in network science is to predict how certain
individuals are able to initiate new networks that make "new ideas" spring up.
Frequently, these changes in trends are triggered by a few innovators who
rapidly impose their ideas through "viral" influence spreading producing
cascades of followers fragmenting an old network to create a new one. Typical
examples include the rise of scientific ideas or abrupt changes in social
media, like the rise of Facebook.com to the detriment of Myspace.com. How this
process arises in practice has not been conclusively demonstrated. Here, we
show that a condition for sustaining a viral spreading process is the existence
of a multiplex correlated graph with hidden "influence links". Analytical
solutions predict percolation phase transitions, either abrupt or continuous,
where networks are disintegrated through viral cascades of followers as in
empirical data. Our modeling predicts the strict conditions to sustain a large
viral spreading via a scaling form of the local correlation function between
multilayers, which we also confirm empirically. Ultimately, the theory predicts
the conditions for viral cascading in a large class of multiplex networks
ranging from social to financial systems and markets.
|
1404.3131 | The Possibility Problem for Probabilistic XML (Extended Version) | cs.DB cs.CC cs.LO | We consider the possibility problem of determining if a document is a
possible world of a probabilistic document, in the setting of probabilistic
XML. This basic question is a special case of query answering or tree automata
evaluation, but it has specific practical uses, such as checking whether a
user-provided probabilistic document outcome is possible or sufficiently
likely. In this paper, we study the complexity of the possibility problem for
probabilistic XML models of varying expressiveness. We show that the decision
problem is often tractable in the absence of long-distance dependencies, but
that its computation variant is intractable on unordered documents. We also
introduce an explicit matches variant to generalize practical situations where
node labels are unambiguous; this ensures tractability of the possibility
problem, even under long-distance dependencies, provided event conjunctions are
disallowed. Our results entirely classify the tractability boundary over all
considered problem variants.
|
1404.3141 | Datalog Rewritability of Disjunctive Datalog Programs and its
Applications to Ontology Reasoning | cs.AI cs.LO | We study the problem of rewriting a disjunctive datalog program into plain
datalog. We show that a disjunctive program is rewritable if and only if it is
equivalent to a linear disjunctive program, thus providing a novel
characterisation of datalog rewritability. Motivated by this result, we propose
weakly linear disjunctive datalog---a novel rule-based KR language that extends
both datalog and linear disjunctive datalog and for which reasoning is
tractable in data complexity. We then explore applications of weakly linear
programs to ontology reasoning and propose a tractable extension of OWL 2 RL
with disjunctive axioms. Our empirical results suggest that many non-Horn
ontologies can be reduced to weakly linear programs and that query answering
over such ontologies using a datalog engine is feasible in practice.
|
1404.3145 | Distributed Local Linear Parameter Estimation using Gaussian SPAWN | cs.MA cs.SY | We consider the problem of estimating local sensor parameters, where the
local parameters and sensor observations are related through linear stochastic
models. Sensors exchange messages and cooperate with each other to estimate
their own local parameters iteratively. We study the Gaussian Sum-Product
Algorithm over a Wireless Network (gSPAWN) procedure, which is based on belief
propagation, but uses fixed-size broadcast messages at each sensor instead.
Compared with the popular diffusion strategies for performing network parameter
estimation, whose communication cost at each sensor increases with increasing
network density, the gSPAWN algorithm allows sensors to broadcast a message
whose size does not depend on the network size or density, making it more
suitable for applications in wireless sensor networks. We show that the gSPAWN
algorithm converges in mean and has mean-square stability under some technical
sufficient conditions, and we describe an application of the gSPAWN algorithm
to a network localization problem in non-line-of-sight environments. Numerical
results suggest that gSPAWN converges much faster in general than the diffusion
method, and has lower communication costs, with comparable root mean square
errors.
|
1404.3146 | Reconsidering unique information: Towards a multivariate information
decomposition | cs.IT math.IT | The information that two random variables $Y$, $Z$ contain about a third
random variable $X$ can have aspects of shared information (contained in both
$Y$ and $Z$), of complementary information (only available from $(Y,Z)$
together) and of unique information (contained exclusively in either $Y$ or
$Z$). Here, we study the measures $\widetilde{SI}$ of shared, $\widetilde{UI}$ of
unique and $\widetilde{CI}$ of complementary information introduced by
Bertschinger et al., which are motivated from a decision theoretic perspective.
We find that in most cases the intuitive rule that more variables contain more
information applies, with the exception that $\widetilde{SI}$ and
$\widetilde{CI}$ are not monotone in the target variable $X$.
Additionally, we show that it is not possible to extend the bivariate
information decomposition into $\widetilde{SI}$, $\widetilde{UI}$ and
$\widetilde{CI}$ to a non-negative decomposition on the partial information
lattice of Williams and Beer. Nevertheless, the quantities $\widetilde{UI}$,
$\widetilde{SI}$ and $\widetilde{CI}$ have a well-defined interpretation, even
in the multivariate setting.
|
1404.3152 | Change Detection with Compressive Measurements | cs.IT math.IT math.ST stat.TH | Quickest change point detection is concerned with the detection of
statistical change(s) in sequences while minimizing the detection delay subject
to false alarm constraints. In this paper, the problem of change point
detection is studied when the decision maker only has access to compressive
measurements. First, an expression for the average detection delay of
Shiryaev's procedure with compressive measurements is derived in the asymptotic
regime where the probability of false alarm goes to zero. Second, the
dependence of the delay on the compression ratio and the signal to noise ratio
is explicitly quantified. The ratio of delays with and without compression is
studied under various sensing matrix constructions, including Gaussian
ensembles and random projections. For a target ratio of the delays after and
before compression, a sufficient condition on the number of measurements
required to meet this objective with prespecified probability is derived.
|
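A sketch of the classical Shiryaev procedure that the paper analyzes, here for i.i.d. Gaussian samples with a geometric change-point prior; the compressive front end (observing y = Phi x instead of x) is omitted, and the zero prior mass at time 0 is an assumption:

```python
# Shiryaev's posterior-probability recursion with a stopping threshold.
import numpy as np
from scipy.stats import norm

def shiryaev_stopping_time(x, mu, sigma, rho=0.01, threshold=0.99):
    p = 0.0                                # assumed: P(change <= 0) = 0
    for n, xn in enumerate(x, start=1):
        # Likelihood ratio of post-change (mean mu) vs pre-change (mean 0).
        lr = norm.pdf(xn, mu, sigma) / norm.pdf(xn, 0.0, sigma)
        num = (p + (1 - p) * rho) * lr
        p = num / (num + (1 - p) * (1 - rho))
        if p >= threshold:
            return n                       # declare a change at sample n
    return None
```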
1404.3165 | Energy-Efficient Power Adaptation for Cognitive Radio Systems under
Imperfect Channel Sensing | cs.IT math.IT | In this paper, energy efficient power adaptation is considered in
sensing-based spectrum sharing cognitive radio systems in which secondary users
first perform channel sensing and then initiate data transmission with two
power levels based on the sensing decisions (e.g., idle or busy). It is assumed
that spectrum sensing is performed by the cognitive secondary users, albeit
with possible errors. In this setting, the optimization problem of maximizing
the energy efficiency (EE) subject to peak/average transmission power
constraints and average interference constraints is considered. The circuit
power is taken into account in the total power consumption. By exploiting the
quasiconcave property of the EE maximization problem, the original problem is
transformed into an equivalent parameterized concave problem and Dinkelbach's
method-based iterative power adaptation algorithm is proposed. The impact of
sensing performance, peak/average transmit power constraints and average
interference constraint on the energy efficiency of cognitive radio systems is
analyzed.
|
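The Dinkelbach iteration mentioned above is generic: to maximize a ratio f(x)/g(x) (here, rate over total power consumed), alternately solve the parameterized concave problem and update the parameter. In this sketch, `solve_subproblem` is an assumed black box returning argmax_x f(x) - lam*g(x):

```python
# Generic Dinkelbach loop for fractional programming max f(x)/g(x), g > 0.
def dinkelbach(solve_subproblem, f, g, x0, tol=1e-6, max_iter=100):
    x = x0
    lam = f(x) / g(x)
    for _ in range(max_iter):
        x = solve_subproblem(lam)        # argmax_x f(x) - lam * g(x)
        if f(x) - lam * g(x) < tol:      # gap vanishes at the optimal ratio
            break
        lam = f(x) / g(x)                # update the ratio parameter
    return x, lam                        # lam converges to max f/g
```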
1404.3181 | FAST-PPR: Scaling Personalized PageRank Estimation for Large Graphs | cs.DS cs.SI | We propose a new algorithm, FAST-PPR, for estimating personalized PageRank:
given start node $s$ and target node $t$ in a directed graph, and given a
threshold $\delta$, FAST-PPR estimates the Personalized PageRank $\pi_s(t)$
from $s$ to $t$, guaranteeing a small relative error as long as $\pi_s(t)>\delta$.
Existing algorithms for this problem have a running-time of $\Omega(1/\delta)$;
in comparison, FAST-PPR has a provable average running-time guarantee of
${O}(\sqrt{d/\delta})$ (where $d$ is the average in-degree of the graph). This
is a significant improvement, since $\delta$ is often $O(1/n)$ (where $n$ is
the number of nodes) for applications. We also complement the algorithm with an
$\Omega(1/\sqrt{\delta})$ lower bound for PageRank estimation, showing that the
dependence on $\delta$ cannot be improved.
We perform a detailed empirical study on numerous massive graphs, showing
that FAST-PPR dramatically outperforms existing algorithms. For example, on the
2010 Twitter graph with 1.5 billion edges, for target nodes sampled by
popularity, FAST-PPR has a $20$ factor speedup over the state of the art.
Furthermore, an enhanced version of FAST-PPR has a $160$ factor speedup on the
Twitter graph, and is at least $20$ times faster on all our candidate graphs.
|
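For intuition about the Omega(1/delta) baseline that FAST-PPR improves on (this is not FAST-PPR itself): pi_s(t) is the probability that an alpha-terminated random walk from s ends at t, so a plain Monte-Carlo estimate needs on the order of 1/delta walks. A sketch, with the adjacency dict as an assumed input format:

```python
# Monte-Carlo personalized PageRank estimate; adj: node -> out-neighbors.
import random

def mc_ppr(adj, s, t, alpha=0.15, n_walks=100_000):
    hits = 0
    for _ in range(n_walks):
        v = s
        # Terminate with probability alpha per step; stop at dangling nodes.
        while random.random() > alpha and adj[v]:
            v = random.choice(adj[v])
        hits += (v == t)
    return hits / n_walks   # needs ~1/delta walks to resolve pi_s(t) > delta
```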
1404.3184 | Decreasing Weighted Sorted $\ell_1$ Regularization | cs.CV cs.IT cs.LG math.IT | We consider a new family of regularizers, termed {\it weighted sorted
$\ell_1$ norms} (WSL1), which generalizes the recently introduced {\it
octagonal shrinkage and clustering algorithm for regression} (OSCAR) and also
contains the $\ell_1$ and $\ell_{\infty}$ norms as particular instances. We
focus on a special case of the WSL1, the {\sl decreasing WSL1} (DWSL1), where
the elements of the argument vector are sorted in non-increasing order and the
weights are also non-increasing. In this paper, after showing that the DWSL1 is
indeed a norm, we derive two key tools for its use as a regularizer: the dual
norm and the Moreau proximity operator.
|
1404.3190 | Pareto-Path Multi-Task Multiple Kernel Learning | cs.LG | A traditional and intuitively appealing Multi-Task Multiple Kernel Learning
(MT-MKL) method is to optimize the sum (thus, the average) of objective
functions with (partially) shared kernel function, which allows information
sharing amongst tasks. We point out that the obtained solution corresponds to a
single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO)
problem, which considers the concurrent optimization of all task objectives
involved in the Multi-Task Learning (MTL) problem. Motivated by this last
observation and arguing that the former approach is heuristic, we propose a
novel Support Vector Machine (SVM) MT-MKL framework, that considers an
implicitly-defined set of conic combinations of task objectives. We show that
solving our framework produces solutions along a path on the aforementioned PF
and that it subsumes the optimization of the average of objective functions as
a special case. Using algorithms we derived, we demonstrate through a series of
experimental results that the framework is capable of achieving better
classification performance, when compared to other similar MTL approaches.
|
1404.3203 | Compressive classification and the rare eclipse problem | cs.LG cs.IT math.IT math.ST stat.TH | This paper addresses the fundamental question of when convex sets remain
disjoint after random projection. We provide an analysis using ideas from
high-dimensional convex geometry. For ellipsoids, we provide a bound in terms
of the distance between these ellipsoids and simple functions of their
polynomial coefficients. As an application, this theorem provides bounds for
compressive classification of convex sets. Rather than assuming that the data
to be classified is sparse, our results show that the data can be acquired via
very few measurements yet will remain linearly separable. We demonstrate the
feasibility of this approach in the context of hyperspectral imaging.
|
1404.3221 | UAV Circumnavigating an Unknown Target Under a GPS-denied Environment
with Range-only Measurements | cs.SY cs.RO math.OC | One typical application of unmanned aerial vehicles is the intelligence,
surveillance, and reconnaissance mission, where the objective is to improve
situation awareness through information acquisition. For example, an efficient
way to gather information regarding a target is to deploy UAV in such a way
that it orbits around this target at a desired distance. Such a UAV motion is
called circumnavigation. The objective of the paper is to design a UAV control
algorithm such that this circumnavigation mission is achieved under a
GPS-denied environment using range-only measurement. The control algorithm is
constructed in two steps. The first step is to design a UAV control algorithm
by assuming the availability of both range and range rate measurements, where
the associated control input is always bounded. The second step is to further
eliminate the use of range rate measurement by using an estimated range rate,
obtained via a sliding-mode estimator using range measurement, to replace
actual range rate measurement. Such a controller design technique is applicable
in the control design of other UAV navigation and control missions under a
GPS-denied environment.
|
1404.3233 | Pagination: It's what you say, not how long it takes to say it | cs.CL cs.IR | Pagination - the process of determining where to break an article across
pages in a multi-article layout - is a common layout challenge for most
commercially printed newspapers and magazines. To date, no one has created an
algorithm that determines a minimal pagination break point based on the content
of the article. Existing approaches for automatic multi-article layout focus
exclusively on maximizing content (number of articles) and optimizing aesthetic
presentation (e.g., spacing between articles). However, disregarding the
semantic information within the article can lead to overly aggressive cutting,
thereby eliminating key content and potentially confusing the reader, or
setting too generous of a break point, thereby leaving in superfluous content
and making automatic layout more difficult. This is one of the remaining
challenges on the path from manual layouts to fully automated processes that
still ensure article content quality. In this work, we present a new approach
to calculating a document minimal break point for the task of pagination. Our
approach uses a statistical language model to predict minimal break points
based on the semantic content of an article. We then compare 4 novel candidate
approaches and 4 baselines (currently in use by layout algorithms). Results
from this experiment show that one of our approaches strongly outperforms the
baselines and alternatives. Results from a second study suggest that humans are
not able to agree on a single "best" break point. Therefore, this work shows
that a semantic-based lower bound break point prediction is necessary for ideal
automated document synthesis within a real-world context.
|
1404.3238 | Bounds on Distance Estimation via Diffusive Molecular Communication | cs.IT math.IT | This paper studies distance estimation for diffusive molecular communication.
The Cramer-Rao lower bound on the variance of the distance estimation error is
derived. The lower bound is derived for a physically unbounded environment with
molecule degradation and steady uniform flow. The maximum likelihood distance
estimator is derived and is shown via simulation to perform very
close to the Cramer-Rao lower bound. An existing protocol is shown to be
equivalent to the maximum likelihood distance estimator if only one observation
is made. Simulation results also show the accuracy of existing protocols with
respect to the Cramer-Rao lower bound.
|
1404.3250 | On the rank of random matrices over finite fields | cs.IT math.IT | A novel lower bound is introduced for the full rank probability of random
finite field matrices, where a number of elements with known location are
identically zero, and remaining elements are chosen independently of each
other, uniformly over the field. The main ingredient is a result showing that
constraining additional elements to be zero cannot result in a higher
probability of full rank. The bound then follows by "zeroing" elements to
produce a block-diagonal matrix, whose full rank probability can be computed
exactly. The bound is shown to be at least as tight as, and can be strictly
tighter than, existing bounds.
|
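The exact full-rank probability of the block-diagonal construction rests on a classical fact: a uniformly random $k \times n$ matrix over GF($q$), $k \le n$, has full rank with probability $\prod_{i=0}^{k-1}(1 - q^{i-n})$. A quick numeric check:

```python
# Exact full-rank probability of a uniform k x n matrix over GF(q), k <= n.
def full_rank_prob(k, n, q):
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** (i - n)
    return p

# A square 3x3 binary matrix is invertible with probability 21/64.
print(full_rank_prob(3, 3, 2))   # 0.328125
```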
1404.3263 | Compressive Origin-Destination Matrix Estimation | cs.SY math.DS | The paper presents an approach to estimate Origin-Destination (OD) flows and
their path splits, based on traffic counts on links in the network. The
approach called Compressive Origin-Destination Estimation (CODE) is inspired by
Compressive Sensing (CS) techniques. Even though the estimation problem is
underdetermined, CODE recovers the unknown variables exactly when the number of
alternative paths for each OD pair is small. Noiseless, noisy, and weighted
versions of CODE are illustrated for synthetic networks, and with real data for
a small region in East Providence. CODE's versatility is suggested by its use
to estimate the number of vehicles and the Vehicle-Miles Traveled (VMT) using
link counts.
|
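Not CODE itself (its weighting and path structure are only outlined above), but the generic basis-pursuit recovery step that CS-inspired estimators build on, min ||x||_1 subject to Ax = b, can be sketched as a linear program with the split x = u - v:

```python
# Basis pursuit as a linear program: minimize sum(u) + sum(v) with
# A(u - v) = b and u, v >= 0, so that x = u - v has minimal l1 norm.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```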
1404.3285 | An Integer Programming Model for the Dynamic Location and Relocation of
Emergency Vehicles: A Case Study | cs.AI | In this paper, we address dynamic Emergency Medical Service (EMS)
systems. A dynamic location model is presented that tries to locate and
relocate the ambulances. The proposed model controls the movements and
locations of ambulances in order to provide a better coverage of the demand
points under different fluctuation patterns that may happen during a given
period of time. Some numerical experiments have been carried out by using some
real-world data sets that have been collected through the French EMS system.
|
1404.3286 | A Continuous Optimization Approach for the Financial Portfolio Selection
under Discrete Asset Choice Constraints | cs.CE | In this paper we consider a generalization of the Markowitz's Mean-Variance
model under linear transaction costs and cardinality constraints. The
cardinality constraints are used to limit the number of assets in the optimal
portfolio. The generalized model is formulated as a mixed integer quadratic
programming (MIP) problem. The purpose of this paper is to investigate a
continuous approach based on difference of convex functions (DC) programming
for solving the MIP model. The preliminary comparative results of the proposed
approach versus CPLEX are presented.
|
1404.3290 | Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and
Analysis | cs.MM cs.CV | Block-based motion estimation (ME) and compensation (MC) techniques are
widely used in modern video processing algorithms and compression systems. The
great variety of video applications and devices results in numerous compression
specifications. Specifically, there is a diversity of frame-rates and
bit-rates. In this paper, we study the effect of frame-rate and compression
bit-rate on block-based ME and MC as commonly utilized in inter-frame coding
and frame-rate up conversion (FRUC). This joint examination yields a
comprehensive foundation for comparing MC procedures in coding and FRUC. First,
the video signal is modeled as a noisy translational motion of an image. Then,
we theoretically model the motion-compensated prediction of available and
absent frames, as in coding and FRUC applications, respectively. The theoretical
MC-prediction error is further analyzed and its autocorrelation function is
calculated for coding and FRUC applications. We show a linear relation between
the variance of the MC-prediction error and temporal-distance. While the
affecting distance in MC-coding is between the predicted and reference frames,
MC-FRUC is affected by the distance between the available frames used for the
interpolation. Moreover, the dependency in temporal-distance implies an inverse
effect of the frame-rate. FRUC performance analysis considers the prediction
error variance, since it equals the mean-squared-error of the interpolation.
However, MC-coding analysis requires the entire autocorrelation function of the
error; hence, analytic simplicity is beneficial. Therefore, we propose two
constructions of a separable autocorrelation function for prediction error in
MC-coding. We conclude by comparing our estimations with experimental results.
|
1404.3291 | Cost-Effective HITs for Relative Similarity Comparisons | cs.CV cs.LG | Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, an embedding of $n$ points is specified by $n^3$ triplets,
making collecting every triplet an expensive task. Noticing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost effective human intelligence tasks for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
|
1404.3301 | Efficient Inference and Learning in a Large Knowledge Base: Reasoning
with Extracted Information using a Locally Groundable First-Order
Probabilistic Logic | cs.AI | One important challenge for probabilistic logics is reasoning with very large
knowledge bases (KBs) of imperfect information, such as those produced by
modern web-scale information extraction systems. One scalability problem shared
by many probabilistic logics is that answering queries involves "grounding" the
query---i.e., mapping it to a propositional representation---and the size of a
"grounding" grows with database size. To address this bottleneck, we present a
first-order probabilistic language called ProPPR in which approximate
"local groundings" can be constructed in time independent of database size.
Technically, ProPPR is an extension to stochastic logic programs (SLPs) that is
biased towards short derivations; it is also closely related to an earlier
relational learning algorithm called the path ranking algorithm (PRA). We show
that the problem of constructing proofs for this logic is related to
computation of personalized PageRank (PPR) on a linearized version of the proof
space, and using this connection, we develop a provably-correct approximate
grounding scheme, based on the PageRank-Nibble algorithm. Building on this, we
develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In
experiments, we show that learning for ProPPR is orders of magnitude faster than
learning for Markov logic networks; that allowing mutual recursion (joint
learning) in KB inference leads to improvements in performance; and that ProPPR
can learn weights for a mutually recursive program with hundreds of clauses,
which define scores of interrelated predicates, over a KB containing one
million entities.
|
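A sketch of the PageRank-Nibble-style "push" computation named in the abstract, which builds an eps-approximate personalized PageRank vector by purely local updates, so the work done is independent of graph size; ProPPR's actual grounding scheme builds further on this:

```python
# Local "push" approximation of personalized PageRank from seed s.
# adj: node -> list of out-neighbors (dangling nodes simply absorb mass).
def approx_ppr(adj, s, alpha=0.15, eps=1e-4):
    p, r = {}, {s: 1.0}                    # estimates and residuals
    queue = [s]
    while queue:
        u = queue.pop()
        ru, du = r.get(u, 0.0), max(len(adj[u]), 1)
        if ru < eps * du:                  # push only while residual is large
            continue
        p[u] = p.get(u, 0.0) + alpha * ru  # settle an alpha fraction at u
        r[u] = 0.0
        share = (1 - alpha) * ru / du      # spread the rest to out-neighbors
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(adj[v]), 1):
                queue.append(v)
    return p                               # touches only a local neighborhood
```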
1404.3312 | Shrinkage Optimized Directed Information using Pictorial Structures for
Action Recognition | cs.CV | In this paper, we propose a novel action recognition framework. The method
uses pictorial structures and shrinkage optimized directed information
assessment (SODA) coupled with Markov Random Fields called SODA+MRF to model
the directional temporal dependency and bidirectional spatial dependency. As a
variant of mutual information, directed information captures the directional
information flow and temporal structure of video sequences across frames.
Meanwhile, within each frame, Markov random fields are utilized to model the
spatial relations among different parts of a human body and the body parts of
different people. The proposed SODA+MRF model is robust to viewpoint
transformations and detects complex interactions accurately. We compare the
proposed method against several baseline methods to highlight the effectiveness
of the SODA+MRF model. We demonstrate that our algorithm has superior action
recognition performance on the UCF action recognition dataset, the Olympic
sports dataset and the collective activity dataset over several
state-of-the-art methods.
|
1404.3316 | Embed System for Robotic Arm with 3 Degree of Freedom Controller using
Computational Vision on Real-Time | cs.RO cs.SY | This paper deals with an embedded robotic arm controller system: a
distributed system based on protocol communication between one server
supporting multiple points and mobile applications through sockets. The
proposed system uses three-dimensional recognition of gloved-hand gestures,
with a fuzzy implementation to set the x, y, z coordinates. The implementation
runs on two Raspberry Pi ARM-based computers running the client program, an
x64 PC running the server program, and one robot arm controlled by an
ATmega328p-based board.
|
1404.3325 | A Weighted Correlation Index for Rankings with Ties | cs.SI cs.IR | Understanding the correlation between two different scores for the same set
of items is a common problem in information retrieval, and the most commonly
used statistic that quantifies this correlation is Kendall's $\tau$. However,
the standard definition fails to capture that discordances between items with
high rank are more important than those between items with low rank. Recently,
a new measure of correlation based on average precision has been proposed to
solve this problem, but like many alternative proposals in the literature it
assumes that there are no ties in the scores. This is a major deficiency in a
number of contexts, and in particular while comparing centrality scores on
large graphs, as the obvious baseline, indegree, has a very large number of
ties in web and social graphs. We propose to extend Kendall's definition in a
natural way to take into account weights in the presence of ties. We prove a
number of interesting mathematical properties of our generalization and
describe an $O(n\log n)$ algorithm for its computation. We also validate the
usefulness of our weighted measure of correlation using experimental data.
|
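Not the paper's definition or its O(n log n) algorithm: a direct O(n^2) sketch of a weighted Kendall-type correlation where each untied pair (i, j) carries an assumed weight w(i, j) and tied pairs are simply skipped (one crude convention; the paper's careful handling of ties is the point of the work):

```python
# Weighted Kendall-style correlation in O(n^2); x, y are score arrays
# indexed by rank position, w is an assumed pair-weight function.
import numpy as np

def weighted_kendall(x, y, w):
    n = len(x)
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            cx = np.sign(x[i] - x[j])
            cy = np.sign(y[i] - y[j])
            if cx and cy:                 # skip pairs tied in either score
                num += w(i, j) * cx * cy  # +w if concordant, -w if discordant
                den += w(i, j)
    return num / den

# Example: additive hyperbolic weights emphasize the top of the rankings.
tau_w = weighted_kendall(np.array([3., 2., 1.]), np.array([3., 1., 2.]),
                         lambda i, j: 1.0 / (1 + i) + 1.0 / (1 + j))
```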
1404.3329 | Portfolio Selection Under Buy-In Threshold Constraints Using DC
Programming and DCA | cs.CE | In the matter of portfolio selection, we consider a generalization of the
Markowitz Mean-Variance model which includes buy-in threshold constraints.
These constraints limit the amount of capital to be invested in each asset and
prevent very small investments in any asset. The new model can be converted
into an NP-hard mixed integer quadratic programming problem. The purpose of this
paper is to investigate a continuous approach based on DC programming and DCA
for solving this new model. DCA is a local continuous approach to solve a wide
variety of nonconvex programs, for which it quite often provides a global
solution and has proved to be more robust and efficient than standard methods.
Preliminary comparative results of DCA and a classical Branch-and-Bound
algorithm will be presented. These results show that DCA is an efficient and
promising approach for the considered portfolio selection problem.
|
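The DCA used in 1404.3329 above (and in 1404.3330 below) follows a generic pattern: write the objective as a difference of convex functions f = g - h, then repeatedly linearize h and solve the convex part. A minimal sketch, with `solve_convex` (returning argmin_x g(x) - <y, x>) assumed given:

```python
# Generic DCA iteration for minimizing f = g - h with g, h convex.
import numpy as np

def dca(solve_convex, grad_h, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)                       # subgradient of h at current x
        x_new = solve_convex(y)             # convex step at the linearization
        if np.linalg.norm(x_new - x) < tol: # stop once iterates stabilize
            return x_new
        x = x_new
    return x
```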
1404.3330 | A DC programming approach for constrained two-dimensional non-guillotine
cutting problem | cs.CE | We investigate a new application of Difference of Convex functions
programming and DCA in solving the constrained two-dimensional non-guillotine
cutting problem. This problem consists of cutting a number of rectangular
pieces from a large rectangular object. The cuts are done under some
constraints and the objective is to maximize the total value of the pieces cut.
We reformulate this problem as a DC program and solve it by DCA. The
performance of the approach is compared with the standard solver CPLEX.
|
1404.3366 | Learning Deep Convolutional Features for MRI Based Alzheimer's Disease
Classification | cs.CV | Effective and accurate diagnosis of Alzheimer's disease (AD) or mild
cognitive impairment (MCI) can be critical for early treatment and thus has
attracted more and more attention nowadays. Since first introduced, machine
learning methods have been gaining increasing popularity for AD related
research. Among the various identified biomarkers, magnetic resonance imaging
(MRI) is widely used for the prediction of AD or MCI. However, before a
machine learning algorithm can be applied, image features need to be extracted
to represent the MRI images. While good representations can be pivotal to the
classification performance, almost all the previous studies typically rely on
human labelling to find the regions of interest (ROI) which may be correlated
to AD, such as hippocampus, amygdala, precuneus, etc. This procedure requires
domain knowledge and is costly and tedious.
Instead of relying on extraction of ROI features, it is more promising to
remove manual ROI labelling from the pipeline and directly work on the raw MRI
images. In other words, we can let the machine learning methods figure out
these informative and discriminative image structures for AD classification. In
this work, we propose to learn deep convolutional image features using
unsupervised and supervised learning. Deep learning has emerged as a powerful
tool in the machine learning community and has been successfully applied to
various tasks. We thus propose to exploit deep features of MRI images based on
a pre-trained large convolutional neural network (CNN) for AD and MCI
classification, which spares the effort of manual ROI annotation process.
|
1404.3368 | Near-optimal sample compression for nearest neighbors | cs.LG cs.CC | We present the first sample compression algorithm for nearest neighbors with
non-trivial performance guarantees. We complement these guarantees by
demonstrating almost matching hardness lower bounds, which show that our bound
is nearly optimal. Our result yields new insight into margin-based nearest
neighbor classification in metric spaces and allows us to significantly sharpen
and simplify existing bounds. Some encouraging empirical results are also
presented.
|
1404.3370 | Distance function of D numbers | cs.AI | Dempster-Shafer theory is widely applied in uncertainty modelling and
knowledge reasoning due to its ability of expressing uncertain information. A
distance between two basic probability assignments (BPAs) presents a measure of
performance for identification algorithms based on the evidential theory of
Dempster-Shafer. However, some conditions lead to limitations in practical
application for Dempster-Shafer theory, such as exclusiveness hypothesis and
completeness constraint. To overcome these shortcomings, a novel theory called
D numbers theory is proposed. A distance function of D numbers is proposed to
measure the distance between two D numbers. The distance function of D numbers
is a generalization of the distance between two BPAs, which inherits the advantage
of Dempster-Shafer theory and strengthens the capability of uncertainty
modeling. An illustrative case is provided to demonstrate the effectiveness of
the proposed function.
|
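For reference, and as an assumption about which BPA distance is meant: the standard Jousselme distance between two BPAs, which the proposed D-number distance is said to generalize, weights mass differences by the set similarity |A∩B|/|A∪B|:

```python
# Jousselme distance between two BPAs; focal elements are frozensets.
import numpy as np

def jousselme_distance(m1, m2):
    focals = sorted(set(m1) | set(m2), key=lambda A: (len(A), sorted(A)))
    v1 = np.array([m1.get(A, 0.0) for A in focals])
    v2 = np.array([m2.get(A, 0.0) for A in focals])
    # Similarity matrix D[A][B] = |A & B| / |A | B| (focal sets are nonempty).
    D = np.array([[len(A & B) / len(A | B) for B in focals] for A in focals])
    diff = v1 - v2
    return np.sqrt(0.5 * diff @ D @ diff)

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 1.0}
print(jousselme_distance(m1, m2))   # about 0.82
```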
1404.3377 | A Generalized Language Model as the Combination of Skipped n-grams and
Modified Kneser-Ney Smoothing | cs.CL | We introduce a novel approach for building language models based on a
systematic, recursive exploration of skip n-gram models which are interpolated
using modified Kneser-Ney smoothing. Our approach generalizes language models
as it contains the classical interpolation with lower order models as a special
case. In this paper we motivate, formalize and present our approach. In an
extensive empirical experiment over English text corpora we demonstrate that
our generalized language models lead to a substantial reduction of perplexity
between 3.1% and 12.7% in comparison to traditional language models using
modified Kneser-Ney smoothing. Furthermore, we investigate the behaviour over
three other languages and a domain specific corpus where we observed consistent
improvements. Finally, we also show that the strength of our approach lies in
its ability to cope in particular with sparse training data. Using a very small
training data set of only 736 KB of text, we achieve a reduction of perplexity
of as much as 25.7%.
|
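A minimal sketch of plain interpolated Kneser-Ney for bigrams with one fixed discount, the simpler cousin of the modified Kneser-Ney smoothing that the paper's skip-n-gram interpolation builds on:

```python
# Interpolated Kneser-Ney bigram model with a single absolute discount d.
from collections import Counter

def kneser_ney_bigram(tokens, d=0.75):
    bigrams = Counter(zip(tokens, tokens[1:]))
    contexts = Counter(tokens[:-1])                 # c(v), context counts
    followers = Counter(v for (v, w) in bigrams)    # N1+(v, .)
    histories = Counter(w for (v, w) in bigrams)    # N1+(., w)
    n_types = len(bigrams)                          # N1+(., .)

    def prob(v, w):
        cont = histories[w] / n_types               # continuation probability
        if contexts[v] == 0:
            return cont                             # back off for unseen v
        discounted = max(bigrams[(v, w)] - d, 0.0) / contexts[v]
        lam = d * followers[v] / contexts[v]        # interpolation weight
        return discounted + lam * cont              # sums to 1 over w

    return prob

prob = kneser_ney_bigram("the cat sat on the mat the cat ran".split())
print(prob("the", "cat"))   # about 0.488 on this toy corpus
```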