id | title | categories | abstract |
|---|---|---|---|
1404.1345 | Optimizing Relay Precoding for Wireless Coordinated Relaying | cs.IT math.IT | Processing of multiple communication flows in wireless systems has given rise
to a number of novel transmission techniques, notably the two-way relaying
based on wireless network coding. Recently, a related set of techniques has
emerged, termed coordinated direct and relay (CDR) transmissions, where the
constellation of traffic flows is more general than the two-way. Regardless of
the actual traffic flows, in a CDR scheme the relay has a central role in
managing the interference and boosting the overall system performance. In this
paper we investigate the novel transmission modes, based on
amplify-and-forward, that arise when the relay is equipped with multiple
antennas and can use beamforming.
|
1404.1355 | Studying Social Networks at Scale: Macroscopic Anatomy of the Twitter
Social Graph | cs.SI physics.soc-ph | Twitter is one of the largest social networks using exclusively directed
links among accounts. This makes the Twitter social graph much closer to the
social graph supporting real life communications than, for instance, Facebook.
Therefore, understanding the structure of the Twitter social graph is
interesting not only for computer scientists, but also for researchers in other
fields, such as sociologists. However, little is known about how the
information propagation in Twitter is constrained by its inner structure. In
this paper, we present an in-depth study of the macroscopic structure of the
Twitter social graph unveiling the highways on which tweets propagate, the
specific user activity associated with each component of this macroscopic
structure, and the evolution of this macroscopic structure with time for the
past 6 years. For this study, we crawled Twitter to retrieve all accounts and
all social relationships (follow links) among accounts; the crawl completed in
July 2012 with 505 million accounts interconnected by 23 billion links. Then,
we present a methodology to unveil the macroscopic structure of the Twitter
social graph. This macroscopic structure consists of 8 components defined by
their connectivity characteristics. Each component groups users with a specific
usage of Twitter. For instance, we identified components gathering together
spammers or celebrities. Finally, we present a method to approximate the
macroscopic structure of the Twitter social graph in the past, validate this
method using old datasets, and discuss the evolution of the macroscopic
structure of the Twitter social graph during the past 6 years.
|
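The macroscopic decomposition described above can be sketched with the classic bow-tie split of a directed graph (the paper's eight components are a finer refinement not reproduced here). A minimal pure-Python version on a toy graph, with all node names and edges invented for illustration:

```python
from collections import defaultdict

def bowtie(edges):
    """Classify nodes of a directed graph into a bow-tie-like structure:
    largest strongly connected component (LSCC), IN (nodes reaching the
    LSCC), OUT (nodes reachable from the LSCC), and OTHER."""
    g, rg = defaultdict(set), defaultdict(set)
    nodes = set()
    for u, v in edges:
        g[u].add(v); rg[v].add(u); nodes |= {u, v}

    def reach(start, adj):
        seen, stack = set(start), list(start)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        return seen

    # SCC of a node = (nodes it reaches) & (nodes that reach it);
    # O(V*E), fine for a sketch
    sccs, left = [], set(nodes)
    while left:
        u = next(iter(left))
        scc = reach({u}, g) & reach({u}, rg)
        sccs.append(scc); left -= scc
    lscc = max(sccs, key=len)
    in_  = reach(lscc, rg) - lscc   # nodes that can reach the LSCC
    out_ = reach(lscc, g) - lscc    # nodes the LSCC can reach
    other = nodes - lscc - in_ - out_
    return lscc, in_, out_, other

edges = [("a", "b"), ("b", "c"), ("c", "a"),   # 3-cycle: the LSCC
         ("x", "a"),                            # feeds the LSCC -> IN
         ("c", "y"),                            # reachable from LSCC -> OUT
         ("z", "z2")]                           # disconnected pair -> OTHER
lscc, in_, out_, other = bowtie(edges)
```

On the real crawl the same classification would be run on the 505-million-node follow graph with scalable SCC algorithms rather than this quadratic sketch.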
1404.1356 | Optimal learning with Bernstein Online Aggregation | stat.ML cs.LG math.ST stat.TH | We introduce a new recursive aggregation procedure called Bernstein Online
Aggregation (BOA). The exponential weights include an accuracy term and a
second order term that is a proxy of the quadratic variation as in Hazan and
Kale (2010). This second term stabilizes the procedure, which is optimal in
several senses. We first obtain optimal regret bounds in the deterministic
context. Then, an adaptive version is shown to be the first exponential weights
algorithm exhibiting a second order bound with excess losses, of the kind that
first appeared in Gaillard et al. (2014). The second order bounds in the
deterministic context are extended to a general stochastic context using the
cumulative predictive risk. This conversion provides the main result of the
paper: an inequality of a novel type comparing the procedure with any
deterministic aggregation procedure for an integrated criterion. We then obtain
an observable estimate of the excess risk of the BOA procedure. To establish
optimality, we finally consider the
iid case for strongly convex and Lipschitz continuous losses and we prove that
the optimal rate of aggregation of Tsybakov (2003) is achieved. The batch
version of the BOA procedure is then the first adaptive explicit algorithm that
satisfies an optimal oracle inequality with high probability.
|
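An exponential-weights update carrying a second-order (squared-regret) correction, in the spirit of the BOA update above, can be sketched as follows. This is a simplified fixed-learning-rate illustration, not the paper's exact adaptive procedure; the squared term plays the role of the stabilizing second-order term:

```python
import numpy as np

def boa_aggregate(expert_preds, targets, eta=0.5):
    """Aggregate expert forecasts with exponential weights whose update has
    a second-order correction (a simplified BOA-style sketch).
    expert_preds: (T, K) expert forecasts; targets: (T,) observations."""
    T, K = expert_preds.shape
    logw = np.zeros(K)                        # log-weights, start uniform
    preds = np.empty(T)
    for t in range(T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        preds[t] = w @ expert_preds[t]        # aggregated forecast
        losses = (expert_preds[t] - targets[t]) ** 2    # per-expert loss
        regret = (preds[t] - targets[t]) ** 2 - losses  # instantaneous regret
        # first-order term favors experts beating the aggregate; the squared
        # term is the stabilizing second-order (Bernstein-style) correction
        logw += eta * regret - (eta * regret) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return preds, w

# expert 0 is always right, expert 1 always wrong: mass moves to expert 0
preds, w = boa_aggregate(np.tile([1.0, 0.0], (50, 1)), np.ones(50))
```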
1404.1366 | New one shot quantum protocols with application to communication
complexity | quant-ph cs.CC cs.IT math.IT | In this paper we present the following quantum compression protocol:
P : Let $\rho,\sigma$ be quantum states such that $S(\rho || \sigma) =
\text{Tr} (\rho \log \rho - \rho \log \sigma)$, the relative entropy between
$\rho$ and $\sigma$, is finite. Alice gets to know the eigen-decomposition of
$\rho$. Bob gets to know the eigen-decomposition of $\sigma$. Both Alice and
Bob know $S(\rho || \sigma)$ and an error parameter $\epsilon$. Alice and Bob
use shared entanglement and after communication of $\mathcal{O}((S(\rho ||
\sigma)+1)/\epsilon^4)$ bits from Alice to Bob, Bob ends up with a quantum
state $\tilde{\rho}$ such that $F(\rho, \tilde{\rho}) \geq 1 - 5\epsilon$,
where $F(\cdot)$ represents fidelity.
This result can be considered as a non-commutative generalization of a result
due to Braverman and Rao [2011] where they considered the special case when
$\rho$ and $\sigma$ are classical probability distributions (or commute with
each other) and use shared randomness instead of shared entanglement. We use P
to obtain an alternate proof of a direct-sum result for entanglement assisted
quantum one-way communication complexity for all relations, which was first
shown by Jain, Radhakrishnan and Sen [2005,2008]. We also present a variant of
protocol P in which Bob has some side information about the state with Alice.
We show that in such a case, the amount of communication can be further
reduced, based on the side information that Bob has.
Our second result provides a quantum analogue of the widely used classical
correlated-sampling protocol. For example, Holenstein [2007] used the classical
correlated-sampling protocol in his proof of a parallel-repetition theorem for
two-player one-round games.
|
1404.1368 | Revealing the structure of the world airline network | physics.soc-ph cs.SI physics.data-an | Resilience of most critical infrastructures against failure of elements that
appear insignificant is usually taken for granted. The World Airline Network
(WAN) is an infrastructure that reduces the geographical gap between societies,
both small and large, and brings forth economic gains. With the extensive use
of a publicly maintained data set that contains information about airports and
alternative connections between these airports, we empirically reveal that the
WAN is a redundant and resilient network for long distance air travel, but
otherwise breaks down completely due to removal of short and apparently
insignificant connections. These short range connections with moderate number
of passengers and alternate flights are the connections that keep remote parts
of the world accessible. This is surprising insofar as there exists a highly
resilient and strongly connected core, consisting of a small fraction of
airports (around 2.3%), together with an extremely fragile star-like periphery.
Yet, in spite of the core's relevance, more than 90% of the world's airports are
still interconnected upon removal of this core. With standard and unconventional
removal measures we compare both empirical and topological perceptions for the
fragmentation of the world. We identify how the WAN is organized into different
classes of clusters based on the physical proximity of airports and analyze the
consequence of this fragmentation.
|
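The core finding, that removing short links fragments the network while the long-haul links alone keep regions only internally connected, can be illustrated on a toy network (all airports, links, and distances below are invented):

```python
from collections import Counter

def giant_component_fraction(nodes, edges):
    """Fraction of nodes in the largest connected component (union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in nodes)
    return max(sizes.values()) / len(nodes)

# toy WAN: two well-connected long-haul "continents" whose only bridge runs
# through short regional hops via a small hub H (distances in km, invented)
edges_km = [("A1", "A2", 6000), ("A2", "A3", 7000), ("A1", "A3", 6500),
            ("B1", "B2", 6200), ("B2", "B3", 8000), ("B1", "B3", 7500),
            ("A3", "H", 400), ("H", "B1", 450)]
nodes = {v for e in edges_km for v in e[:2]}

full = giant_component_fraction(nodes, [(u, v) for u, v, _ in edges_km])
# removing only the short (<1000 km) links disconnects the two continents
no_short = giant_component_fraction(
    nodes, [(u, v) for u, v, d in edges_km if d >= 1000])
```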
1404.1377 | Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion | cs.LG math.NA stat.ML | In this paper, we propose an efficient and scalable low rank matrix
completion algorithm. The key idea is to extend orthogonal matching pursuit
method from the vector case to the matrix case. We further propose an economic
version of our algorithm by introducing a novel weight updating rule to reduce
the time and storage complexity. Both versions are computationally inexpensive
for each matrix pursuit iteration, and find satisfactory results in a few
iterations. Another advantage of our proposed algorithm is that it has only one
tunable parameter, the rank, which makes it easy to understand and use. This
becomes especially important in large-scale learning problems.
In addition, we rigorously show that both versions achieve a linear convergence
rate, which is significantly better than the previous known results. We also
empirically compare the proposed algorithms with several state-of-the-art
matrix completion algorithms on many real-world datasets, including the
large-scale recommendation dataset Netflix as well as the MovieLens datasets.
Numerical results show that our proposed algorithm is more efficient than
competing algorithms while achieving similar or better prediction performance.
|
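The pursuit loop described above, greedily adding the top singular-vector pair of the masked residual and then refitting all weights on the observed entries, can be sketched as follows (a minimal version of the plain, non-economic variant; the sizes and data are illustrative):

```python
import numpy as np

def r1mp(Y, mask, n_iter=5):
    """Sketch of orthogonal rank-one matrix pursuit: each pass picks the top
    singular-vector pair of the masked residual as a new rank-one basis, then
    refits all basis weights on the observed entries by least squares."""
    X = np.zeros_like(Y)
    bases = []
    for _ in range(n_iter):
        R = (Y - X) * mask                       # residual on observed entries
        U, s, Vt = np.linalg.svd(R)
        bases.append(np.outer(U[:, 0], Vt[0]))   # best rank-one basis matrix
        # "orthogonal" step: least-squares refit of all weights on the mask
        A = np.stack([(B * mask).ravel() for B in bases], axis=1)
        theta, *_ = np.linalg.lstsq(A, (Y * mask).ravel(), rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))
    return X

rng = np.random.default_rng(0)
Y = np.outer(rng.standard_normal(8), rng.standard_normal(6))   # true rank 1
mask = (rng.random(Y.shape) < 0.7).astype(float)               # ~70% observed
X = r1mp(Y, mask)
```

With a fully observed rank-one matrix, a single pass recovers it exactly, since the first basis matrix is the matrix itself up to scale.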
1404.1404 | On the Existence of Optimal Policies for a Class of Static and
Sequential Dynamic Teams | cs.SY math.OC math.PR | In this paper, we identify sufficient conditions under which static teams and
a class of sequential dynamic teams admit team-optimal solutions. We first
investigate the existence of optimal solutions in static teams where the
observations of the decision makers are conditionally independent or satisfy
certain regularity conditions. Building on these findings and the static
reduction method of Witsenhausen, we then extend the analysis to sequential
dynamic teams. In particular, we show that a large class of dynamic LQG team
problems, including the vector version of the well-known Witsenhausen's
counterexample and the Gaussian relay channel problem viewed as a dynamic team,
admit team-optimal solutions. Results in this paper substantially broaden the
class of stochastic control and team problems with non-classical information
known to have optimal solutions.
|
1404.1405 | Optimal Budget Allocation in Social Networks: Quality or Seeding | cs.SI cs.GT math.OC physics.soc-ph | In this paper, we study a strategic model of marketing and product
consumption in social networks. We consider two competing firms in a market
providing two substitutable products with preset qualities. Agents choose their
consumptions following a myopic best response dynamics which results in a
local, linear update for the consumptions. At some point in time, firms receive
a limited budget which they can use to trigger a larger consumption of their
products in the network. Firms have to decide between marginally improving the
quality of their products and giving free offers to a chosen set of agents in
the network in order to better facilitate spreading their products. We derive a
simple threshold rule for the optimal allocation of the budget and describe the
resulting Nash equilibrium. It is shown that the optimal allocation of the
budget depends on the entire distribution of centralities in the network,
quality of products and the model parameters. In particular, we show that in a
graph with a higher number of agents with centralities above a certain
threshold, firms spend more budget on seeding in the optimal allocation.
Furthermore, if the seeding budget is nonzero for a balanced graph, it will also
be nonzero for any other graph; and if the seeding budget is zero for a star
graph, it will be zero for any other graph as well. We also show that firms allocate more
budget to quality improvement when their qualities are close, in order to
distance themselves from the rival firm. However, as the gap between qualities
widens, competition in qualities becomes less effective and firms spend more
budget on seeding.
|
1404.1434 | On the Subadditivity of the Entropy on the Sphere | math.FA cs.IT math-ph math.IT math.MP | We present a refinement of a known entropic inequality on the sphere, finding
suitable conditions under which the uniform probability measure on the sphere
behaves asymptotically like the Gaussian measure on $\mathbb{R}^N$ with
respect to the entropy.
|
1404.1441 | A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type
Control | math.OC cs.SY math.PR q-fin.RM | In this paper we study mean-field type control problems with risk-sensitive
performance functionals. We establish a stochastic maximum principle (SMP) for
optimal control of stochastic differential equations (SDEs) of mean-field type,
in which the drift and the diffusion coefficients as well as the performance
functional depend not only on the state and the control but also on the mean of
the distribution of the state. Our result extends the risk-sensitive SMP
(without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or
Markov) type optimal controls, to optimal control problems for non-Markovian
dynamics which may be time-inconsistent in the sense that the Bellman
optimality principle does not hold. In our approach to the risk-sensitive SMP,
the smoothness assumption on the value function imposed in Lim and Zhou (2005)
need not be satisfied. For a general action space, a Peng-type SMP is
derived, specifying the necessary conditions for optimality. Two examples are
carried out to illustrate the proposed risk-sensitive mean-field type SMP under
linear stochastic dynamics with exponential quadratic cost function. Explicit
solutions are given for both mean-field free and mean-field models.
|
1404.1443 | Upper-Bounding the Capacity of Relay Communications - Part II | cs.IT math.IT | This paper focuses on the capacity of peer-to-peer relay communications
wherein the transmitter is assisted by an arbitrary number of parallel relays,
i.e., there is no link or cooperation between the relays themselves. We detail
the mathematical model of different relaying strategies, including the cutset
and amplify-and-forward strategies. The cutset upper bound on capacity is
presented as a reference against which to compare other realistic strategies. We
present an outer capacity region which is lower (i.e., tighter) than that in the
existing literature. We show that a network with multiple parallel relays
achieves its maximum capacity either by virtue of a single relay or by virtue of
all relays together. Adding a relay may even decrease the overall capacity, or
may leave it unchanged. We exemplify the outer capacity regions of the addressed
strategies with two different case studies. The results exhibit that in low
signal-to-noise ratio (SNR) environments the cutset strategy outperforms
amplify-and-forward, while the opposite holds in high SNR environments.
|
1404.1449 | Non-Asymptotic Mean-Field Games | cs.GT cs.SY | Mean-field games have been studied under the assumption of a very large number
of players. For such large systems, the basic idea is to approximate
large games by a stylized game model with a continuum of players. The approach
has been shown to be useful in some applications. However, the stylized game
model with a continuum of decision-makers is rarely observed in practice, and
the approximation proposed in the asymptotic regime is meaningless for networks
with few entities. In this paper we propose a mean-field framework that is
suitable not only for large systems but also for a small world with a small
number of entities. The applicability of the proposed framework is illustrated through
various examples including dynamic auction with asymmetric valuation
distributions, and spiteful bidders.
|
1404.1451 | Higher Rank Interference Effect on Weak Beamforming or OSTBC Terminals | cs.IT math.IT | User performance on a wireless network depends on whether a neighboring
cochannel interferer applies a single (spatial) stream or a multi-stream
transmission. This work analyzes the impact of interference rank on a
beamforming and orthogonal space-time block coded (OSTBC) user transmission. We
generalize existing analytical results on the
signal-to-interference-plus-noise-ratio (SINR) distribution and outage
probability under an arbitrary number of unequal-power interferers. We show that
higher rank interference causes lower outage probability, and can support a
better outage threshold, especially in the case of beamforming.
|
1404.1468 | High Throughput and Less Area AMP Architecture for Audio Signal
Restoration | cs.SD cs.IT math.IT | Audio restoration can be effectively achieved by using a low-complexity
algorithm called AMP. This algorithm converges fast and has low computational
intensity, making it suitable for audio recovery problems. This paper focuses on
restoring an audio signal by using a VLSI architecture called AMP-M that
implements the AMP algorithm. The architecture employs a MAC unit with a
fixed-bit Wallace tree multiplier, an FFT-MUX, and various memory units (RAM)
for audio restoration. VLSI and FPGA implementation results show that reduced
area, high throughput, and low power are achieved, making the design suitable
for real-time recovery problems; prominent examples are Magnetic Resonance
Imaging (MRI), radar, and wireless communications.
|
1404.1484 | MUSIC for Single-Snapshot Spectral Estimation: Stability and
Super-resolution | cs.IT math.IT math.NA | This paper studies the problem of line spectral estimation in the continuum
of a bounded interval with one snapshot of array measurement. The
single-snapshot measurement data is turned into a Hankel data matrix which
admits the Vandermonde decomposition and is suitable for the MUSIC algorithm.
The MUSIC algorithm amounts to finding the null space (the noise space) of the
Hankel matrix, forming the noise-space correlation function and identifying the
s smallest local minima of the noise-space correlation as the frequency set.
In the noise-free case exact reconstruction is guaranteed for any arbitrary
set of frequencies as long as the number of measurements is at least twice the
number of distinct frequencies to be recovered. In the presence of noise the
stability analysis shows that the perturbation of the noise-space correlation
is proportional to the spectral norm of the noise matrix as long as the latter
is smaller than the smallest (nonzero) singular value of the noiseless Hankel
data matrix. Under the assumption that frequencies are separated by at least
twice the Rayleigh Length (RL), the stability of the noise-space correlation is
proved by means of novel discrete Ingham inequalities which provide bounds on
nonzero singular values of the noiseless Hankel data matrix.
The numerical performance of MUSIC is tested in comparison with other
algorithms such as BLO-OMP and SDP (TV-min). While BLO-OMP is the most stable
algorithm for frequencies separated above 4 RL, MUSIC becomes the best
performing one for frequencies separated between 2 RL and 3 RL. Also, MUSIC is
more efficient than other methods. MUSIC truly shines when the frequency
separation drops to 1 RL or below when all other methods fail. Indeed, the
resolution length of MUSIC decreases to zero as noise decreases to zero as a
power law with an exponent much smaller than an upper bound established by
Donoho.
|
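The single-snapshot recipe described above, Hankel lift, noise space from the SVD, and minima of the noise-space correlation, can be sketched in the noise-free case. The Hankel shape and the grid search below are implementation assumptions:

```python
import numpy as np

def music_1snap(y, s, grid):
    """Single-snapshot MUSIC sketch: Hankel-lift the snapshot, take the
    noise space from its SVD, and return the s deepest local minima of the
    noise-space correlation over a frequency grid (frequencies in [0, 1))."""
    M = len(y)
    L = M // 2 + 1                                  # Hankel rows (one choice)
    H = np.array([y[i:i + M - L + 1] for i in range(L)])   # L x (M-L+1)
    U, _, _ = np.linalg.svd(H)
    noise = U[:, s:]                                # noise space
    n = np.arange(L)
    corr = np.array([np.linalg.norm(noise.conj().T @ np.exp(2j*np.pi*f*n))
                     for f in grid]) / np.sqrt(L)   # noise-space correlation
    # frequency estimates = grid points that are local minima of corr
    is_min = (corr[1:-1] < corr[:-2]) & (corr[1:-1] < corr[2:])
    cand = np.where(is_min)[0] + 1
    return np.sort(grid[cand[np.argsort(corr[cand])[:s]]])

# noise-free toy snapshot with two frequencies separated well above 2 RL
m = np.arange(32)
y = np.exp(2j*np.pi*0.10*m) + 0.8*np.exp(2j*np.pi*0.37*m)
est = music_1snap(y, 2, np.linspace(0.0, 1.0, 1000, endpoint=False))
```

In the noise-free case the correlation vanishes at the true frequencies, which is the exact-reconstruction regime discussed in the abstract.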
1404.1486 | MIMO Multiway Relaying with Clustered Full Data Exchange: Signal Space
Alignment and Degrees of Freedom | cs.IT math.IT | We investigate achievable degrees of freedom (DoF) for a multiple-input
multiple-output (MIMO) multiway relay channel (mRC) with $L$ clusters and $K$
users per cluster. Each user is equipped with $M$ antennas and the relay with
$N$ antennas. We assume a new data exchange model, termed \emph{clustered full
data exchange}, i.e., each user in a cluster wants to learn the messages of all
the other users in the same cluster. Novel signal alignment techniques are
developed to systematically construct the beamforming matrices at the users and
the relay for efficient physical-layer network coding. Based on that, we derive
an achievable DoF of the MIMO mRC with an arbitrary network configuration of
$L$ and $K$, as well as with an arbitrary antenna configuration of $M$ and $N$.
We show that our proposed scheme achieves the DoF capacity when $\frac{M}{N}
\leq \frac{1}{LK-1}$ and $\frac{M}{N} \geq \frac{(K-1)L+1}{KL}$.
|
1404.1491 | An Efficient Feature Selection in Classification of Audio Files | cs.LG | In this paper we have focused on an efficient feature selection method in
classification of audio files. The main objective is feature selection and
extraction. We have selected a set of features for further analysis, which
represents the elements in feature vector. By extraction method we can compute
a numerical representation that can be used to characterize the audio using the
existing toolbox. In this study Gain Ratio (GR) is used as a feature selection
measure. GR is used to select the splitting attribute that separates the
tuples into different classes. The pulse clarity is considered as a subjective
measure and is used to calculate the gain of the features of the audio files.
The splitting criterion is employed in the application to identify the class, or
music genre, of a specific audio file from the testing database. Experimental
results indicate that by using GR the application can produce satisfactory
results for music genre classification. After dimensionality reduction, the best
three features are selected from the various features of an audio file, and with
this technique we obtain a successful classification rate of more than 90%.
|
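The Gain Ratio measure used above for choosing the splitting attribute can be sketched as follows (the C4.5-style definition; the genre labels and feature values are invented):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature_values, labels):
    """Gain Ratio = Information Gain / Split Information: normalizes the
    information gain of a split by the entropy of the split itself, so
    attributes with many values are not unfairly favored."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_values, labels):
        groups.setdefault(v, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    info_gain = entropy(labels) - cond
    split_info = -sum(len(g) / n * math.log2(len(g) / n)
                      for g in groups.values())
    return info_gain / split_info if split_info > 0 else 0.0

genres = ["rock", "jazz", "rock", "jazz"]
perfect = gain_ratio(["high", "low", "high", "low"], genres)  # separates classes
useless = gain_ratio(["x", "x", "x", "x"], genres)            # single group
```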
1404.1492 | Ensemble Committees for Stock Return Classification and Prediction | stat.ML cs.LG | This paper considers a portfolio trading strategy formulated by algorithms in
the field of machine learning. The profitability of the strategy is measured by
the algorithm's capability to consistently and accurately identify stock
indices with positive or negative returns, and to generate a preferred
portfolio allocation on the basis of a learned model. Stocks are characterized
by time series data sets consisting of technical variables that reflect market
conditions in a previous time interval, which are utilized to produce binary
classification decisions in subsequent intervals. The learned model is
constructed as a committee of random forest classifiers, a non-linear support
vector machine classifier, a relevance vector machine classifier, and a
constituent ensemble of k-nearest neighbors classifiers. The Global Industry
Classification Standard (GICS) is used to explore the ensemble model's efficacy
within the context of various fields of investment including Energy, Materials,
Financials, and Information Technology. Data from 2006 to 2012, inclusive, are
considered, which are chosen for providing a range of market circumstances for
evaluating the model. The model is observed to achieve an accuracy of
approximately 70% when predicting stock price returns three months in advance.
|
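The committee aggregation step can be sketched as a simple majority vote over binary (+1/-1) decisions; the member classifiers named above (random forests, SVM, RVM, k-NN ensemble) are replaced here by hard-coded stand-in votes:

```python
import numpy as np

def committee_predict(member_preds):
    """Majority vote of binary (+1/-1) member classifiers over a batch of
    samples: rows are committee members, columns are samples."""
    votes = np.sign(np.sum(member_preds, axis=0))
    return np.where(votes == 0, 1, votes)   # break ties toward +1

members = np.array([[ 1, -1,  1],     # stand-in for random-forest votes
                    [ 1, -1, -1],     # stand-in for SVM votes
                    [ 1,  1, -1]])    # stand-in for k-NN ensemble votes
pred = committee_predict(members)
```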
1404.1498 | Model Predictive Control (MPC) Applied To Coupled Tank Liquid Level
System | cs.SY | The coupled-tank system used for liquid level control is a plant model that
is commonly used in industry, especially in chemical process industries.
Level control is also very important for mixing-reactant processes. This survey
paper presents, in a systematic way, a predictive control strategy for a system
that is similar to such processes and is represented by two liquid tanks. The
coupled-tank system is one of the most commonly available systems representing a
coupled Multiple Input Multiple Output (MIMO) system. With 2 inputs and 2
outputs, it is the most primitive form of a coupled multivariable system. The
basic concept of how the coupled-tank system works is that a flow control valve
(FCV) serves as the main control of the liquid level in one tank or both tanks.
In this paper, an MPC control algorithm is used, which is developed below, with
a focus on the design and modelling of the coupled-tank system. The steps
followed for the design of the controller are: developing a state-space model
for the coupled-tank system, designing an MPC controller for the developed
system model, and studying the effect of disturbances on the measured level
output. Note that implementing a Model Predictive Controller on the flow control
valve in a coupled-tank liquid level system is one of the new methods of
controlling liquid level.
|
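The two design steps, a state-space model and an MPC controller on top of it, can be sketched as follows. The A, B, C matrices are illustrative assumed numbers for a linearized two-tank plant (not identified from any real rig), and the controller is the unconstrained batch least-squares form of MPC:

```python
import numpy as np

# illustrative linearized two-tank model (assumed numbers): state
# x = [level tank 1, level tank 2], input u = FCV flow into tank 1,
# controlled output y = level of tank 2
A = np.array([[0.95, 0.03],
              [0.04, 0.92]])
B = np.array([[0.10],
              [0.00]])
C = np.array([[0.0, 1.0]])

def mpc_step(x, r, N=20, lam=0.01):
    """One receding-horizon step of unconstrained MPC: stack the predictions
    Y = F x + G U over horizon N, solve the regularized least-squares
    tracking problem for U, and return only the first input."""
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]
    ref = np.full((N, 1), r)
    U = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (ref - F @ x))
    return U[0, 0]

# closed loop: drive the tank-2 level to the setpoint r = 1.0
x, r = np.zeros((2, 1)), 1.0
for _ in range(200):
    u = mpc_step(x, r)
    x = A @ x + B * u               # plant update (no disturbance here)
level = (C @ x).item()
```

A disturbance study as described in the abstract would add a perturbation term to the plant update and observe how the receding-horizon loop rejects it.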
1404.1504 | A Compression Technique for Analyzing Disagreement-Based Active Learning | cs.LG stat.ML | We introduce a new and improved characterization of the label complexity of
disagreement-based active learning, in which the leading quantity is the
version space compression set size. This quantity is defined as the size of the
smallest subset of the training data that induces the same version space. We
show various applications of the new characterization, including a tight
analysis of CAL and refined label complexity bounds for linear separators under
mixtures of Gaussians and axis-aligned rectangles under product densities. The
version space compression set size, as well as the new characterization of the
label complexity, can be naturally extended to agnostic learning problems, for
which we show new speedup results for two well known active learning
algorithms.
|
1404.1506 | Two algorithms for compressed sensing of sparse tensors | cs.IT math.IT | Compressed sensing (CS) exploits the sparsity of a signal in order to
integrate acquisition and compression. CS theory enables exact reconstruction
of a sparse signal from relatively few linear measurements via a suitable
nonlinear minimization process. Conventional CS theory relies on vectorial data
representation, which results in good compression ratios at the expense of
increased computational complexity. In applications involving color images,
video sequences, and multi-sensor networks, the data is intrinsically of
high-order, and thus more suitably represented in tensorial form. Standard
applications of CS to higher-order data typically involve representation of the
data as long vectors that are in turn measured using large sampling matrices,
thus imposing a huge computational and memory burden. In this chapter, we
introduce Generalized Tensor Compressed Sensing (GTCS)--a unified framework for
compressed sensing of higher-order tensors which preserves the intrinsic
structure of tensorial data with reduced computational complexity at
reconstruction. We demonstrate that GTCS offers an efficient means for
representation of multidimensional data by providing simultaneous acquisition
and compression from all tensor modes. In addition, we propound two
reconstruction procedures, a serial method (GTCS-S) and a parallelizable method
(GTCS-P), both capable of recovering a tensor based on noiseless and noisy
observations. We then compare the performance of the proposed methods with
Kronecker compressed sensing (KCS) and multi-way compressed sensing (MWCS). We
demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both
reconstruction accuracy (within a range of compression ratios) and processing
speed. The major disadvantage of our methods (and of MWCS as well), is that the
achieved compression ratios may be worse than those offered by KCS.
|
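The structural fact behind mode-wise sensing, that applying small matrices to each tensor mode is equivalent to one large Kronecker-structured matrix acting on the vectorized signal, can be checked directly for a 2-way tensor (all sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m1, m2 = 6, 5, 3, 2
X  = rng.standard_normal((n1, n2))     # a 2-way tensor (matrix) signal
A1 = rng.standard_normal((m1, n1))     # small mode-1 sensing matrix
A2 = rng.standard_normal((m2, n2))     # small mode-2 sensing matrix

# mode-wise measurement: a small matrix applied to each mode separately
Y_modewise = A1 @ X @ A2.T             # (m1, m2) measurements

# equivalent vectorized measurement with one large sampling matrix:
# vec(A1 X A2^T) = (A2 kron A1) vec(X), with column-major vec
A_big = np.kron(A2, A1)                # (m1*m2, n1*n2), much larger to store
Y_big = A_big @ X.reshape(-1, order="F")

storage_modewise = A1.size + A2.size   # 3*6 + 2*5 = 28 entries
storage_kron = A_big.size              # 6 * 30 = 180 entries
```

The same measurements are obtained either way, but the mode-wise matrices are far smaller, which is the computational and memory advantage the chapter exploits for higher-order tensors.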
1404.1511 | MTD(f), A Minimax Algorithm Faster Than NegaScout | cs.AI | MTD(f) is a new minimax search algorithm, simpler and more efficient than
previous algorithms. In tests with a number of tournament game playing programs
for chess, checkers and Othello it performed better, on average, than
NegaScout/PVS (the AlphaBeta variant used in practically all good chess,
checkers, and Othello programs). One of the strongest chess programs of the
moment, MIT's parallel chess program Cilkchess, uses MTD(f) as its search
algorithm, replacing NegaScout, which was used in StarSocrates, the previous
version of the program.
|
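MTD(f) as described, a sequence of null-window Alpha-Beta searches sharing a transposition table, can be sketched on tiny tuple-encoded game trees. This is a standard textbook formulation, not the Cilkchess implementation:

```python
import math

def ab_mem(node, alpha, beta, maximizing, tt):
    """Alpha-Beta with memory: the transposition table tt stores
    (lower, upper) bounds so repeated null-window passes reuse earlier work.
    Trees are nested tuples; integer leaves are scores for the max player."""
    key = (node, maximizing)
    lo, hi = tt.get(key, (-math.inf, math.inf))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    alpha, beta = max(alpha, lo), min(beta, hi)
    if isinstance(node, int):                # leaf
        g = node
    elif maximizing:
        g, a = -math.inf, alpha
        for child in node:
            g = max(g, ab_mem(child, a, beta, False, tt))
            a = max(a, g)
            if g >= beta:
                break                        # beta cutoff
    else:
        g, b = math.inf, beta
        for child in node:
            g = min(g, ab_mem(child, alpha, b, True, tt))
            b = min(b, g)
            if g <= alpha:
                break                        # alpha cutoff
    if g <= alpha:
        tt[key] = (lo, g)                    # fail low: g is an upper bound
    elif g >= beta:
        tt[key] = (g, hi)                    # fail high: g is a lower bound
    else:
        tt[key] = (g, g)                     # exact value
    return g

def mtdf(root, first_guess=0):
    """MTD(f): converge on the minimax value using only null-window
    Alpha-Beta searches, sharing one transposition table across passes."""
    g, tt = first_guess, {}
    lo, hi = -math.inf, math.inf
    while lo < hi:
        beta = g + 1 if g == lo else g
        g = ab_mem(root, beta - 1, beta, True, tt)
        if g < beta:
            hi = g                           # search failed low
        else:
            lo = g                           # search failed high
    return g

tree = ((3, 5), (2, 9))                      # max(min(3,5), min(2,9)) = 3
```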
1404.1514 | Text Based Approach For Indexing And Retrieval Of Image And Video: A
Review | cs.IR cs.CV cs.DL cs.MM | Text data present in multimedia contain useful information for automatic
annotation, indexing. Extracted information used for recognition of the overlay
or scene text from a given video or image. The Extracted text can be used for
retrieving the videos and images. In this paper, firstly, we are discussed the
different techniques for text extraction from images and videos. Secondly, we
are reviewed the techniques for indexing and retrieval of image and videos by
using extracted text.
|
1404.1515 | A New Paradigm for Minimax Search | cs.AI | This paper introduces a new paradigm for minimax game-tree search
algorithms. MT is a memory-enhanced version of Pearl's Test procedure. By
changing the way MT is called, a number of best-first game-tree search algorithms can be
simply and elegantly constructed (including SSS*). Most of the assessments of
minimax search algorithms have been based on simulations. However, these
simulations generally do not address two of the key ingredients of high
performance game-playing programs: iterative deepening and memory usage. This
paper presents experimental data from three game-playing programs (checkers,
Othello and chess), covering the range from low to high branching factor. The
improved move ordering due to iterative deepening and memory usage results in
significantly different results from those portrayed in the literature. Whereas
some simulations show Alpha-Beta expanding almost 100% more leaf nodes than
other algorithms [12], our results showed variations of less than 20%. One new
instance of our framework (MTD-f) out-performs our best alpha-beta searcher
(aspiration NegaScout) on leaf nodes, total nodes, and execution time. To our
knowledge, these are the first reported results that compare both depth-first
and best-first algorithms given the same amount of memory.
|
1404.1517 | SSS* = Alpha-Beta + TT | cs.AI | In 1979 Stockman introduced the SSS* minimax search algorithm that
dominates Alpha-Beta in the number of leaf nodes expanded. Further investigation of
the algorithm showed that it had three serious drawbacks, which prevented its
use by practitioners: it is difficult to understand, it has large memory
requirements, and it is slow. This paper presents an alternate formulation of
SSS*, in which it is implemented as a series of Alpha-Beta calls that use a
transposition table (AB-SSS*). The reformulation solves all three perceived
drawbacks of SSS*, making it a practical algorithm. Further, because the search
is now based on Alpha-Beta, the extensive research on minimax search
enhancements can be easily integrated into AB-SSS*. To test AB-SSS* in
practice, it has been implemented in three state-of-the-art programs: for
checkers, Othello and chess. AB-SSS* is comparable in performance to Alpha-Beta
on leaf node count in all three games, making it a viable alternative to
Alpha-Beta in practice. Whereas SSS* has usually been regarded as being
entirely different from Alpha-Beta, it turns out to be just an Alpha-Beta
enhancement, like null-window searching. This runs counter to published
simulation results. Our research leads to the surprising result that iterative
deepening versions of Alpha-Beta can expand fewer leaf nodes than iterative
deepening versions of SSS* due to dynamic move re-ordering.
|
1404.1518 | Nearly Optimal Minimax Tree Search? | cs.AI | Knuth and Moore presented a theoretical lower bound on the number of leaves
that any fixed-depth minimax tree-search algorithm traversing a uniform tree
must explore, the so-called minimal tree. Since real-life minimax trees are not
uniform, the exact size of this tree is not known for most applications.
Further, most games have transpositions, implying that there exists a minimal
graph which is smaller than the minimal tree. For three games (chess, Othello
and checkers) we compute the size of the minimal tree and the minimal graph.
Empirical evidence shows that in all three games, enhanced Alpha-Beta search is
capable of building a tree that is close in size to that of the minimal graph.
Hence, it appears game-playing programs build nearly optimal search trees.
However, the conventional definition of the minimal graph is wrong. There are
ways in which the size of the minimal graph can be reduced: by maximizing the
number of transpositions in the search, and generating cutoffs using branches
that lead to smaller search trees. The conventional definition of the minimal
graph is just a left-most approximation. Calculating the size of the real
minimal graph is too computationally intensive. However, upper bound
approximations show it to be significantly smaller than the left-most minimal
graph. Hence, it appears that game-playing programs are not searching as
efficiently as is widely believed. Understanding the left-most and real minimal
search graphs leads to some new ideas for enhancing Alpha-Beta search. One of
them, enhanced transposition cutoffs, is shown to significantly reduce search
tree size.
|
1404.1521 | Exploring the power of GPU's for training Polyglot language models | cs.LG cs.CL | One of the major research trends currently is the evolution of heterogeneous
parallel computing. GP-GPU computing is being widely used, and several
applications have been designed to exploit the massive parallelism that
GP-GPUs have to offer. While GPUs have long been widely used in areas of
computer vision for image processing, little has been done to investigate
whether the massive parallelism provided by GP-GPUs can be utilized
effectively for Natural Language Processing (NLP) tasks. In this work, we
investigate and explore the power of GP-GPUs in the task of learning language
models. More specifically, we investigate the performance of training Polyglot
language models using deep belief neural networks. We evaluate the performance
of training the model on the GPU and present optimizations that boost the
performance on the GPU. One of the key optimizations we propose increases the
performance of a function involved in calculating and updating the gradient by
approximately 50 times on the GPU for sufficiently large batch sizes. We show
that with the above optimizations, the GP-GPU's performance on the task
increases by a factor of approximately 3-4. The optimizations we made are
generic Theano optimizations and hence potentially boost the performance of
other models which rely on these operations. We also show that these
optimizations result in the GPU's performance on this task now being
comparable to that on the CPU. We conclude by presenting a thorough evaluation
of the applicability of GP-GPUs for this task and highlight the factors
limiting the performance of training a Polyglot model on the GPU.
|
1404.1530 | Provable Deterministic Leverage Score Sampling | cs.DS cs.IT cs.NA math.IT math.ST stat.ML stat.TH | We explain theoretically a curious empirical phenomenon: "Approximating a
matrix by deterministically selecting a subset of its columns with the
corresponding largest leverage scores results in a good low-rank matrix
surrogate". To obtain provable guarantees, previous work requires randomized
sampling of the columns with probabilities proportional to their leverage
scores.
In this work, we provide a novel theoretical analysis of deterministic
leverage score sampling. We show that such deterministic sampling can be
provably as accurate as its randomized counterparts, if the leverage scores
follow a moderately steep power-law decay. We support this power-law assumption
by providing empirical evidence that such decay laws are abundant in real-world
data sets. We then demonstrate empirically the performance of deterministic
leverage score sampling, which often matches or outperforms the
state-of-the-art techniques.
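To make the deterministic selection rule concrete, here is a small sketch (an assumption-laden illustration, not the paper's algorithm verbatim): compute rank-k leverage scores from the top right singular vectors, keep the c highest-scoring columns, and measure the resulting low-rank approximation error.

```python
import numpy as np

def leverage_scores(A, k):
    """Rank-k leverage scores: squared norms of the columns of the
    top-k right singular vectors (one score per column of A)."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k, :] ** 2, axis=0)

def deterministic_column_select(A, k, c):
    """Keep the c columns with the largest rank-k leverage scores."""
    scores = leverage_scores(A, k)
    idx = np.argsort(scores)[::-1][:c]
    return A[:, idx], idx

# Toy check: low-rank-plus-noise matrix whose leverage scores decay.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
A += 0.01 * rng.normal(size=A.shape)
C, idx = deterministic_column_select(A, k=3, c=10)
# Project A onto the span of the selected columns; measure the error.
proj = C @ np.linalg.pinv(C) @ A
err = np.linalg.norm(A - proj) / np.linalg.norm(A)
```

Because the toy matrix is essentially rank 3, the selected columns span nearly all of its column space and the relative error is tiny.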
|
1404.1547 | Asymptotic Behavior of Ultra-Dense Cellular Networks and Its Economic
Impact | cs.IT cs.NI math.IT | This paper investigates the relationship between base station (BS) density
and average spectral efficiency (SE) in the downlink of a cellular network.
This relationship has been well known for sparse deployment, i.e. when the
number of BSs is small compared to the number of users. In this case the SE is
independent of BS density. As BS density grows, on the other hand, it has
previously been shown that increasing the BS density increases the SE, but no
tractable form for the SE-BS density relationship has yet been derived. In this
paper we derive such a closed-form result that reveals the SE is asymptotically
a logarithmic function of BS density as the density grows. Further, we study
the impact of this result on the network operator's profit when user demand
varies, and derive the profit maximizing BS density and the optimal amount of
spectrum to be utilized in closed forms. In addition, we provide deployment
planning guidelines that will aid the operator in deciding whether to invest
in densifying the network or in acquiring more spectrum.
|
1404.1559 | Sparse Coding: A Deep Learning using Unlabeled Data for High - Level
Representation | cs.LG cs.NE | Sparse coding is an unsupervised learning algorithm for finding succinct,
high-level representations of inputs, and it has been a successful building
block for deep learning. Our objective is to use high-level representations of
data, in the form of unlabeled examples, to aid the unsupervised learning
task. Compared with labeled data, unlabeled data is easier to acquire because,
unlike labeled data, it does not require annotation with particular class
labels. This makes deep learning more widely applicable to practical problems.
The main problem with sparse coding is that it uses a quadratic loss function
and a Gaussian noise model, so its performance is very poor when binary,
integer-valued, or other non-Gaussian data is applied. Thus we first propose
an algorithm for solving the L1-regularized convex optimization problem so as
to allow high-level representations of unlabeled data. From this we derive an
optimal solution describing an approach to a deep learning algorithm using
sparse codes.
|
1404.1561 | Fast Supervised Hashing with Decision Trees for High-Dimensional Data | cs.CV cs.LG | Supervised hashing aims to map the original features to compact binary codes
that are able to preserve label based similarity in the Hamming space.
Non-linear hash functions have demonstrated the advantage over linear ones due
to their powerful generalization capability. In the literature, kernel
functions are typically used to achieve non-linearity in hashing; they
achieve encouraging retrieval performance, but at the price of slow
evaluation and training
time. Here we propose to use boosted decision trees for achieving non-linearity
in hashing, which are fast to train and evaluate, hence more suitable for
hashing with high dimensional data. In our approach, we first propose
sub-modular formulations for the hashing binary code inference problem and an
efficient GraphCut based block search method for solving large-scale inference.
Then we learn hash functions by training boosted decision trees to fit the
binary codes. Experiments demonstrate that our proposed method significantly
outperforms most state-of-the-art methods in retrieval precision and training
time. Especially for high-dimensional data, our method is orders of magnitude
faster than many methods in terms of training time.
|
1404.1588 | Gaussian Networks Generated by Random Walks | physics.soc-ph cond-mat.stat-mech cs.SI | We propose a random walks based model to generate complex networks. Many
authors studied and developed different methods and tools to analyze complex
networks by random walk processes. Just to cite a few, random walks have been
adopted to perform community detection, exploration tasks and to study temporal
networks. Moreover, they have been used also to generate scale-free networks.
In this work, we define a random walker that plays the role of
"edges-generator". In particular, the random walker generates new connections
and uses them to visit each node of a network. As a result, the proposed
model yields networks with a Gaussian degree distribution; moreover,
features such as the clustering coefficient and the assortativity show a
critical behavior. Finally, we perform numerical simulations to study the
behavior and the properties of the proposed model.
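One way to picture an "edges-generator" walker is the following toy sketch; the exact connection rule used in the paper may differ, so the uniform-choice rule in `random_walk_network` below is a hypothetical stand-in for the mechanism, not the paper's model.

```python
import random
from collections import defaultdict

def random_walk_network(n, steps, seed=0):
    """Hypothetical edge-generating walker: at each step the walker,
    sitting at node u, picks a node v uniformly at random, creates the
    undirected edge (u, v) if absent, and moves to v."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    u = 0
    for _ in range(steps):
        v = rng.randrange(n)
        if v != u:                 # no self-loops
            adj[u].add(v)
            adj[v].add(u)
            u = v                  # the walker moves along the new edge
    return adj

adj = random_walk_network(n=200, steps=5000)
degrees = [len(adj[i]) for i in range(200)]
mean_deg = sum(degrees) / len(degrees)
```

Plotting a histogram of `degrees` is the natural next step for inspecting how the resulting degree distribution is shaped.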
|
1404.1592 | The Power of Online Learning in Stochastic Network Optimization | math.OC cs.LG cs.SY | In this paper, we investigate the power of online learning in stochastic
network optimization with unknown system statistics {\it a priori}. We are
interested in understanding how information and learning can be efficiently
incorporated into system control techniques, and what are the fundamental
benefits of doing so. We propose two \emph{Online Learning-Aided Control}
techniques, $\mathtt{OLAC}$ and $\mathtt{OLAC2}$, that explicitly utilize the
past system information in current system control via a learning procedure
called \emph{dual learning}. We prove strong performance guarantees of the
proposed algorithms: $\mathtt{OLAC}$ and $\mathtt{OLAC2}$ achieve the
near-optimal $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ utility-delay tradeoff
and $\mathtt{OLAC2}$ possesses an $O(\epsilon^{-2/3})$ convergence time.
$\mathtt{OLAC}$ and $\mathtt{OLAC2}$ are probably the first algorithms that
simultaneously possess explicit near-optimal delay guarantee and sub-linear
convergence time. Simulation results also confirm the superior performance of
the proposed algorithms in practice. To the best of our knowledge, our attempt
is the first to explicitly incorporate online learning into stochastic network
optimization and to demonstrate its power in both theory and practice.
|
1404.1601 | Density Evolution for Min-Sum Decoding of LDPC Codes Under Unreliable
Message Storage | cs.IT math.IT | We analyze the performance of quantized min-sum decoding of low-density
parity-check codes under unreliable message storage. To this end, we introduce
a simple bit-level error model and show that decoder symmetry is preserved
under this model. Subsequently, we formulate the corresponding density
evolution equations to predict the average bit error probability in the limit
of infinite blocklength. We present numerical threshold results and we show
that using more quantization bits is not always beneficial in the context of
faulty decoders.
|
1404.1614 | A Denoising Autoencoder that Guides Stochastic Search | cs.NE cs.LG | An algorithm is described that adaptively learns a non-linear mutation
distribution. It works by training a denoising autoencoder (DA) online at each
generation of a genetic algorithm to reconstruct a slowly decaying memory of
the best genotypes so far. A compressed hidden layer forces the autoencoder to
learn hidden features in the training set that can be used to accelerate search
on novel problems with similar structure. Its output neurons define a
probability distribution that we sample from to produce offspring solutions.
The algorithm outperforms a canonical genetic algorithm on several
combinatorial optimisation problems, e.g. multidimensional 0/1 knapsack
problem, MAXSAT, HIFF, and on parameter optimisation problems, e.g. Rastrigin
and Rosenbrock functions.
|
1404.1653 | Multi-Linear Interactive Matrix Factorization | cs.IR | Recommender systems, which can significantly help users find items of
interest in the information era, have attracted increasing attention from both
the scientific community and industry. One of the most widely applied
recommendation methods is Matrix Factorization (MF). However, most MF-based
approaches focus on the user-item rating matrix while ignoring ingredients
which may have a significant influence on users' preferences for items. In
this paper, we propose a multi-linear interactive MF algorithm (MLIMF) to
model the interactions between the users and each event associated with their
final decisions. Our model considers not only the user-item rating information
but also pairwise interactions based on some empirically supported factors. In
addition, we compare the proposed model with three other typical methods:
user-based collaborative filtering (UCF), item-based collaborative filtering
(ICF) and regularized MF (RMF). Experimental results on two real-world
datasets, \emph{MovieLens} 1M and \emph{MovieLens} 100k, show that our method
performs much better than the other three methods in recommendation accuracy.
This work may shed some light on the in-depth understanding of modeling user
online behaviors and the consequent decisions.
|
1404.1654 | LOS-based Conjugate Beamforming and Power-Scaling Law in Massive-MIMO
Systems | cs.IT math.IT | This paper is concerned with massive-MIMO systems over Rician flat fading
channels. In order to reduce the overhead to obtain full channel state
information and to avoid the pilot contamination problem, by treating the
scattered component as interference, we investigate a transmit and receive
conjugate beamforming (BF) transmission scheme only based on the line-of-sight
(LOS) component. Under the Rank-1 model, we first consider a single-user
system with N transmit and M receive antennas, and focus on the power-scaling
law when the transmit power is scaled down proportionally to 1/MN. It can be
shown that as MN grows large, the scattered interference vanishes, and the
ergodic achievable rate is higher than that of the corresponding BF scheme
based on fast fading and minimum mean-square error (MMSE) channel estimation.
We then further consider uplink and downlink single-cell scenarios where the
base station (BS) has M antennas and each of K users has N antennas. When the
transmit power for each user is scaled down proportionally to 1/MN, it can be
shown for a finite number of users that as M grows without bound, each user
finally obtains the same rate performance as in the single-user case. Even
when N grows without bound, however, there still remains inter-user LOS
interference that cannot be cancelled. For infinitely many users, there exists
a power-scaling law such that when K and the b-th power of M go to infinity
with a fixed and finite ratio for a given b in (0, 1), not only the inter-user
LOS interference but also the fast-fading effect can be cancelled, while the
fast-fading effect cannot be cancelled if b=1. Extensions to multi-cell and
frequency-selective channels are also discussed briefly. Moreover, numerical
results indicate that spatial antenna correlation does not seriously affect
the rate performance, and the BS antennas may be placed compactly when M is
very large.
|
1404.1664 | Icon Based Information Retrieval and Disease Identification in
Agriculture | cs.HC cs.CV cs.CY cs.IR | Recent developments in the ICT industry over the past few decades have
enabled quick and easy access to the information available on the Internet,
but digital literacy is a prerequisite for its use. The main purpose of this
paper is to provide an interface for digitally illiterate users, especially
farmers, to efficiently and effectively retrieve information through the
Internet. In addition, it enables farmers to instantly identify a disease in
their crop, along with its cause and symptoms, using digital image processing
and pattern recognition, without waiting for an expert to visit the farm.
|
1404.1668 | On Resilient Control of Nonlinear Systems under Denial-of-Service | cs.SY math.OC | We analyze and design a control strategy for nonlinear systems under
Denial-of-Service attacks. Based on an ISS-Lyapunov function analysis, we
provide a characterization of the maximal percentage of time during which
feedback information can be lost without resulting in the instability of the
system. Motivated by the presence of a digital channel we consider event-based
controllers for which a minimal inter-sampling time is explicitly
characterized.
|
1404.1674 | Channel Assignment With Access Contention Resolution for Cognitive Radio
Networks | cs.IT cs.NI math.IT | In this paper, we consider the channel assignment problem for cognitive radio
networks with hardware-constrained secondary users (SUs). In particular, we
assume that SUs exploit spectrum holes on a set of channels where each SU can
use at most one available channel for communication. We present the optimal
brute-force search algorithm to solve the corresponding nonlinear integer
optimization problem and analyze its complexity. Because the optimal solution
has complexity exponential in the numbers of channels and SUs, we develop two
low-complexity channel assignment algorithms that can efficiently utilize the
spectrum holes. In the first algorithm, SUs are assigned distinct sets of
channels. We show that this algorithm achieves the maximum throughput limit if
the number of channels is sufficiently large. In addition, we propose an
overlapping channel assignment algorithm that can improve the throughput
performance compared with its nonoverlapping channel assignment counterpart.
Moreover, we design a distributed medium access control (MAC) protocol for
access contention resolution and integrate it into the overlapping channel
assignment algorithm. We then analyze the saturation throughput and the
complexity of the proposed channel assignment algorithms. We also present
several potential extensions, including the development of greedy channel
assignment algorithms under the max-min fairness criterion and throughput
analysis, considering sensing errors. Finally, numerical results are presented
to validate the developed theoretical results and illustrate the performance
gains due to the proposed channel assignment algorithms.
|
1404.1675 | Distributed MAC Protocol for Cognitive Radio Networks: Design,
Analysis, and Optimization | cs.IT cs.NI math.IT | In this paper, we investigate the joint optimal sensing and distributed
Medium Access Control (MAC) protocol design problem for cognitive radio (CR)
networks. We consider both scenarios with single and multiple channels. For
each scenario, we design a synchronized MAC protocol for dynamic spectrum
sharing among multiple secondary users (SUs), which incorporates spectrum
sensing for protecting active primary users (PUs). We perform saturation
throughput analysis for the corresponding proposed MAC protocols that
explicitly capture the spectrum-sensing performance. Then, we find their
optimal configuration by formulating throughput maximization problems subject
to detection probability constraints for PUs. In particular, the optimal
solution of the optimization problem returns the required sensing time for PUs'
protection and optimal contention window to maximize the total throughput of
the secondary network. Finally, numerical results are presented to illustrate
the theoretical findings developed in this paper and the significant
performance gains of the optimal sensing and protocol configuration.
|
1404.1682 | Pseudo-Zernike Based Multi-Pass Automatic Target Recognition From
Multi-Channel SAR | cs.CV | The capability to exploit multiple sources of information is of fundamental
importance in a battlefield scenario. Information obtained from different
sources, and separated in space and time, provide the opportunity to exploit
diversities in order to mitigate uncertainty. For the specific challenge of
Automatic Target Recognition (ATR) from radar platforms, both channel (e.g.
polarization) and spatial diversity can provide useful information for such a
specific and critical task. In this paper, the use of pseudo-Zernike moments
applied to multi-channel, multi-pass data is presented, exploiting diversity
and invariance properties to achieve high-confidence ATR with small
computational complexity and data-transfer requirements. The effectiveness of
the proposed approach, under different configurations and data-source
availability, is demonstrated using real data.
|
1404.1685 | Thou Shalt is not You Will | cs.AI cs.LO | In this paper we discuss some reasons why temporal logic might not be
suitable to model real life norms. To show this, we present a novel deontic
logic contrary-to-duty/derived permission paradox based on the interaction of
obligations, permissions and contrary-to-duty obligations. The paradox is
inspired by real life norms.
|
1404.1695 | Proceedings of Third Workshop on Robots and Sensors integration in
future rescue INformation system (ROSIN 2013) | cs.RO | This is the proceedings of the third workshop on Robots and Sensors
integration in future rescue INformation system (ROSIN 2013).
|
1404.1718 | Applications of Algorithmic Probability to the Philosophy of Mind | cs.AI | This paper presents formulae that can solve various seemingly hopeless
philosophical conundrums. We discuss the simulation argument, teleportation,
mind-uploading, the rationality of utilitarianism, and the ethics of exploiting
artificial general intelligence. Our approach arises from combining the
essential ideas of formalisms such as algorithmic probability, the universal
intelligence measure, space-time-embedded intelligence, and Hutter's observer
localization. We argue that such universal models can yield the ultimate
solutions, but a novel research direction would be required in order to find
computationally efficient approximations thereof.
|
1404.1736 | Faulty Successive Cancellation Decoding of Polar Codes for the Binary
Erasure Channel | cs.IT math.IT | We study faulty successive cancellation decoding of polar codes for the
binary erasure channel. To this end, we introduce a simple erasure-based fault
model and we show that, under this model, polarization does not happen, meaning
that fully reliable communication is not possible at any rate. Moreover, we
provide numerical results for the frame erasure rate and bit erasure rate and
we study an unequal error protection scheme that can significantly improve the
performance of the faulty successive cancellation decoder with negligible
overhead.
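For intuition, the fault-free side of this setting can be reproduced with the standard BEC polarization recursion for erasure (Bhattacharyya) parameters, z⁻ = 2z − z² and z⁺ = z². The `delta` parameter below is a toy per-level internal-erasure fault folded in for illustration only; it is an assumption, not necessarily the paper's fault model.

```python
def polar_erasure_probs(eps, n_levels, delta=0.0):
    """Erasure probabilities of the 2**n_levels synthetic channels of a
    polar code over BEC(eps). `delta` is a hypothetical per-level
    internal erasure combined independently at each step."""
    z = [eps]
    for _ in range(n_levels):
        nxt = []
        for p in z:
            minus = 2 * p - p * p          # 'bad' child channel W-
            plus = p * p                   # 'good' child channel W+
            # fold in an independent internal erasure with prob delta
            nxt.append(minus + delta - minus * delta)
            nxt.append(plus + delta - plus * delta)
        z = nxt
    return z

ideal = polar_erasure_probs(0.5, 10)            # fault-free recursion
faulty = polar_erasure_probs(0.5, 10, 1e-3)     # toy faulty recursion
best_faulty = min(faulty)
```

In the fault-free case some synthetic channels become nearly noiseless, while in the toy faulty case no channel's erasure probability can drop below `delta`, which mirrors the loss-of-polarization phenomenon described above.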
|
1404.1777 | Neural Codes for Image Retrieval | cs.CV | It has been shown that the activations invoked by an image within the top
layers of a large convolutional neural network provide a high-level descriptor
of the visual content of the image. In this paper, we investigate the use of
such descriptors (neural codes) within the image retrieval application. In the
experiments with several standard retrieval benchmarks, we establish that
neural codes perform competitively even when the convolutional neural network
has been trained for an unrelated classification task (e.g.\ Image-Net). We
also evaluate the improvement in the retrieval performance of neural codes,
when the network is retrained on a dataset of images that are similar to images
encountered at test time.
We further evaluate the performance of the compressed neural codes and show
that a simple PCA compression provides very good short codes that give
state-of-the-art accuracy on a number of datasets. In general, neural codes
turn out to be much more resilient to such compression in comparison to other
state-of-the-art descriptors. Finally, we show that discriminative
dimensionality reduction trained on a dataset of pairs of matched photographs
improves the performance of PCA-compressed neural codes even further. Overall,
our quantitative experiments demonstrate the promise of neural codes as visual
descriptors for image retrieval.
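A minimal sketch of the PCA-compression step (centering, projection onto the top principal directions, L2 renormalization) on stand-in descriptors; the exact pipeline, training data, and dimensionalities used in the paper may differ.

```python
import numpy as np

def pca_compress(descriptors, dim):
    """Center, project onto the top `dim` principal directions, and
    L2-renormalize: a common recipe for shortening image descriptors."""
    X = descriptors - descriptors.mean(axis=0)
    # principal directions = top right singular vectors of the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    codes = X @ Vt[:dim].T
    codes /= np.linalg.norm(codes, axis=1, keepdims=True) + 1e-12
    return codes

# Toy retrieval: rank the database by dot product on compressed codes.
rng = np.random.default_rng(1)
db = rng.normal(size=(100, 512)).astype(np.float32)  # stand-in "neural codes"
short = pca_compress(db, dim=16)
query = short[7]                 # use a database item itself as the query
ranks = np.argsort(-short @ query)
```

Since the codes are unit-normalized, the dot product is cosine similarity, and the query item ranks itself first.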
|
1404.1812 | Determining the Consistency factor of Autopilot using Rough Set Theory | cs.AI | Autopilot is a system designed to guide a vehicle without human aid. Due to
the increase in flight hours and the complexity of modern-day flight, it has
become imperative to equip aircraft with autopilots. The reliability and
consistency of an autopilot system therefore play a crucial role in a flight.
However, the increased complexity and the demand for better accuracy have made
evaluating an autopilot for consistency a difficult process, as a vast amount
of imprecise data is involved. Rough sets can be a potent tool for such
applications containing vague data. This paper proposes an approach to
consistency-factor determination using rough set theory. The seventeen basic
factors that are crucial in determining the consistency of an autopilot system
are grouped into five payloads based on their functionality. The consistency
factor is evaluated through these payloads using rough set theory; it
determines the consistency and reliability of an autopilot system and the
conditions under which manual override becomes imperative. Using rough set
theory, the most and least influential factors for the autopilot system are
also determined.
|
1404.1820 | Max-min Fair Wireless Energy Transfer for Secure Multiuser Communication
Systems | cs.IT math.IT | This paper considers max-min fairness for wireless energy transfer in a
downlink multiuser communication system. Our resource allocation design
maximizes the minimum harvested energy among multiple multiple-antenna energy
harvesting receivers (potential eavesdroppers) while providing quality of
service (QoS) for secure communication to multiple single-antenna information
receivers. In particular, the algorithm design is formulated as a non-convex
optimization problem which takes into account a minimum required
signal-to-interference-plus-noise ratio (SINR) constraint at the information
receivers and a constraint on the maximum tolerable channel capacity achieved
by the energy harvesting receivers for a given transmit power budget. The
proposed problem formulation exploits the dual use of artificial noise
generation for facilitating efficient wireless energy transfer and secure
communication. A semidefinite programming (SDP) relaxation approach is
exploited to obtain a global optimal solution of the considered problem.
Simulation results demonstrate the significant performance gain in harvested
energy that is achieved by the proposed optimal scheme compared to two simple
baseline schemes.
|
1404.1831 | Improving Bilayer Product Quantization for Billion-Scale Approximate
Nearest Neighbors in High Dimensions | cs.CV | The top-performing systems for billion-scale high-dimensional approximate
nearest neighbor (ANN) search are all based on two-layer architectures that
include an indexing structure and a compressed datapoints layer. An indexing
structure is crucial as it makes it possible to avoid exhaustive search,
while the lossy data compression is needed to fit the dataset into RAM.
Several of the most
successful systems use product quantization (PQ) for both the indexing and the
dataset compression layers. These systems are however limited in the way they
exploit the interaction of product quantization processes that happen at
different stages of these systems.
Here we introduce and evaluate two approximate nearest neighbor search
systems that both exploit the synergy of product quantization processes in a
more efficient way. The first system, called Fast Bilayer Product Quantization
(FBPQ), speeds up the runtime of the baseline system (Multi-D-ADC) by several
times, while achieving the same accuracy. The second system, Hierarchical
Bilayer Product Quantization (HBPQ), provides significantly better recall for
the same runtime at the cost of a small memory-footprint increase. For the
BIGANN dataset of a billion SIFT descriptors, a 10% increase in Recall@1 and a
17% increase in Recall@10 are observed.
|
1404.1847 | Evaluation and Ranking of Machine Translated Output in Hindi Language
using Precision and Recall Oriented Metrics | cs.CL | Evaluation plays a crucial role in the development of machine translation
systems. In order to judge the quality of an existing MT system, i.e., whether
the translated output is of human translation quality or not, various
automatic metrics exist. Here we present the implementation results of
different metrics when used on the Hindi language, along with comparisons
illustrating how effective these metrics are on languages like Hindi (a
free-word-order language).
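A basic precision/recall-oriented metric of the kind discussed can be sketched as clipped n-gram overlap between a candidate translation and a reference. This is illustrative only; real metrics such as BLEU add brevity penalties, higher-order n-gram averaging, and multi-reference handling.

```python
from collections import Counter

def ngram_precision_recall(candidate, reference, n=1):
    """Clipped n-gram precision and recall between a candidate
    translation and a single reference (both token lists)."""
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    overlap = sum((cand & ref).values())   # clipped matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return precision, recall

cand = "the cat sat on mat".split()
ref = "the cat sat on the mat".split()
p, r = ngram_precision_recall(cand, ref)
```

Here every candidate unigram appears in the reference (precision 1.0), but the candidate misses one reference token, so recall is 5/6.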
|
1404.1848 | Establishing Global Policies over Decentralized Online Social Networks | cs.SI | Conventional online social networks (OSNs) are implemented in a centralized
manner. Although centralization is a convenient way for implementing OSNs, it
has several well known drawbacks. Chief among them are the risks they pose to
the security and privacy of the information maintained by the OSN; and the loss
of control over the information contributed by individual members.
These concerns prompted several attempts to create decentralized OSNs, or
DOSNs. The basic idea underlying these attempts is that each member of a
social network keeps its data under its own control, instead of surrendering it
to a central host, providing access to it to other members of the OSN according
to its own access-control policy. Unfortunately, all existing DOSN projects
have a very serious limitation: namely, they are unable to subject the
membership of a DOSN, and the interaction between its members, to any global
policy.
We adopt the decentralization idea underlying DOSNs, complementing it with a
means for specifying and enforcing a wide range of policies over the membership
of a social community, and over the interaction between its disparate
distributed members. And we do so in a scalable fashion.
|
1404.1864 | Sublinear algorithms for local graph centrality estimation | cs.DS cs.IR cs.SI | We study the complexity of local graph centrality estimation, with the goal
of approximating the centrality score of a given target node while exploring
only a sublinear number of nodes/arcs of the graph and performing a sublinear
number of elementary operations. We develop a technique, that we apply to the
PageRank and Heat Kernel centralities, for building a low-variance score
estimator through a local exploration of the graph. We obtain an algorithm
that, given any node in any graph of $m$ arcs, with probability $(1-\delta)$
computes a multiplicative $(1\pm\epsilon)$-approximation of its score by
examining only $\tilde{O}(\min(m^{2/3} \Delta^{1/3} d^{-2/3},\, m^{4/5}
d^{-3/5}))$ nodes/arcs, where $\Delta$ and $d$ are respectively the maximum and
average outdegree of the graph (omitting for readability
$\operatorname{poly}(\epsilon^{-1})$ and $\operatorname{polylog}(\delta^{-1})$
factors). A similar bound holds for computational complexity. We also prove a
lower bound of $\Omega(\min(m^{1/2} \Delta^{1/2} d^{-1/2}, \, m^{2/3}
d^{-1/3}))$ for both query complexity and computational complexity. Moreover,
our technique yields a $\tilde{O}(n^{2/3})$ query complexity algorithm for the
graph access model of [Brautbar et al., 2010], widely used in social network
mining; we show this algorithm is optimal up to a sublogarithmic factor. These
are the first algorithms yielding worst-case sublinear bounds for general
directed graphs and any choice of the target node.
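A far simpler Monte Carlo estimator (not the paper's variance-reduced local-exploration algorithm) illustrates the quantity being estimated: a node's PageRank score equals the probability that a restart-terminated random walk started from a uniformly random node ends at it.

```python
import random

def mc_pagerank_score(out, target, n_walks=20000, alpha=0.15, seed=0):
    """Monte Carlo PageRank estimate for `target`: the stationary
    probability equals the chance that a walk with restart probability
    alpha terminates at that node."""
    rng = random.Random(seed)
    nodes = list(out)
    hits = 0
    for _ in range(n_walks):
        u = rng.choice(nodes)
        while rng.random() > alpha:        # continue with prob 1 - alpha
            nbrs = out[u]
            if not nbrs:                   # dangling node: teleport
                u = rng.choice(nodes)
            else:
                u = rng.choice(nbrs)
        hits += u == target
    return hits / n_walks

# Toy 4-node graph in which every other node points to node 0.
out = {0: [1], 1: [0], 2: [0], 3: [0]}
score = mc_pagerank_score(out, target=0)
```

For this graph the exact PageRank of node 0 (with alpha = 0.15) is about 0.48, so the estimate lands close to that with 20000 walks.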
|
1404.1869 | DenseNet: Implementing Efficient ConvNet Descriptor Pyramids | cs.CV | Convolutional Neural Networks (CNNs) can provide accurate object
classification. They can be extended to perform object detection by iterating
over dense or selected proposed object regions. However, the runtime of such
detectors scales with the total number and/or area of regions to examine per
image, and training such detectors may be prohibitively slow. Fortunately, for
some CNN classifier topologies, it is possible to share significant work among
overlapping regions to be classified. This paper presents DenseNet, an open
source system that computes dense, multiscale features from the convolutional
layers of a CNN based object classifier. Future work will involve training
efficient object detectors with DenseNet feature descriptors.
|
1404.1872 | Int\'egration des donn\'ees d'un lexique syntaxique dans un analyseur
syntaxique probabiliste | cs.CL | This article reports the evaluation of the integration of data from a
syntactic-semantic lexicon, the Lexicon-Grammar of French, into a syntactic
parser. We show that by changing the set of labels for verbs and predicational
nouns, we can improve the performance on French of a non-lexicalized
probabilistic parser.
|
1404.1884 | Plug and Play! A Simple, Universal Model for Energy Disaggregation | cs.AI | Energy disaggregation is to discover the energy consumption of individual
appliances from their aggregated energy values. To solve the problem, most
existing approaches rely on either appliances' signatures or their state
transition patterns, both hard to obtain in practice. Aiming at developing a
simple, universal model that works without depending on sophisticated machine
learning techniques or auxiliary equipment, we make use of easily accessible
knowledge of appliances and the sparsity of the switching events to design a
Sparse Switching Event Recovering (SSER) method. By minimizing the total
variation (TV) of the (sparse) event matrix, SSER can effectively recover the
individual energy consumption values from the aggregated ones. To speed up the
process, a Parallel Local Optimization Algorithm (PLOA) is proposed to solve
the problem in active epochs of appliance activities in parallel. Using
real-world trace data, we compare the performance of our method with that of
the state-of-the-art solutions, including Least Square Estimation (LSE) and
iterative Hidden Markov Model (HMM). The results show that our approach has an
overall higher detection accuracy and a smaller overhead.
|
1404.1890 | Polish and English wordnets -- statistical analysis of interconnected
networks | cs.CL physics.soc-ph | Wordnets are semantic networks containing nouns, verbs, adjectives, and
adverbs organized according to linguistic principles, by means of semantic
relations. In this work, we adopt a complex network perspective to perform a
comparative analysis of the English and Polish wordnets. We determine their
similarities and show that the networks exhibit some of the typical
characteristics observed in other real-world networks. We analyse interlingual
relations between both wordnets and deliberate over the problem of mapping the
Polish lexicon onto the English one.
|
1404.1955 | Capturing Aggregate Flexibility in Demand Response | cs.SY | Flexibility in electric power consumption can be leveraged by Demand Response
(DR) programs. The goal of this paper is to systematically capture the inherent
aggregate flexibility of a population of appliances. We do so by clustering
individual loads based on their characteristics and service constraints. We
highlight the challenges associated with learning the customer response to
economic incentives while applying demand side management to heterogeneous
appliances. We also develop a framework to quantify customer privacy in direct
load scheduling programs.
|
1404.1957 | Ergodic control of multi-class $M/M/N+M$ queues in the Halfin-Whitt
regime | math.PR cs.SY math.OC | We study a dynamic scheduling problem for a multi-class queueing network with
a large pool of statistically identical servers. The arrival processes are
Poisson, and service times and patience times are assumed to be exponentially
distributed and class dependent. The optimization criterion is the expected
long time average (ergodic) of a general (nonlinear) running cost function of
the queue lengths. We consider this control problem in the Halfin-Whitt (QED)
regime, that is, the number of servers $n$ and the total offered load
$\mathbf{r}$ scale like $n\approx\mathbf{r}+\hat{\rho}\sqrt{\mathbf{r}}$ for
some constant $\hat{\rho}$. This problem was proposed in [Ann. Appl. Probab. 14
(2004) 1084-1134, Section 5.2]. The optimal solution of this control problem
can be approximated by that of the corresponding ergodic diffusion control
problem in the limit. We introduce a broad class of ergodic control problems
for controlled diffusions, which includes a large class of queueing models in
the diffusion approximation, and establish a complete characterization of
optimality via the study of the associated HJB equation. We also prove the
asymptotic convergence of the values for the multi-class queueing control
problem to the value of the associated ergodic diffusion control problem. The
proof relies on an approximation method by spatial truncation for the ergodic
control of diffusion processes, where the Markov policies follow a fixed
priority policy outside a fixed compact set.
|
1404.1958 | Scalable and Anonymous Modeling of Large Populations of Flexible
Appliances | cs.SY | To respond to volatility and congestion in the power grid, demand response
(DR) mechanisms allow for shaping the load compared to a base load profile.
When tapping into a large population of heterogeneous appliances as a DR
resource, the challenge is in modeling the dimensions available for control.
Such models need to strike the right balance between accuracy of the model and
tractability. The goal of this paper is to provide a medium-grained stochastic
hybrid model to represent a population of appliances that belong to two
classes: deferrable or thermostatically controlled loads. We preserve quantized
information regarding individual load constraints, while discarding information
about the identity of appliance owners. The advantages of our proposed
population model are 1) it allows us to model and control load in a scalable
fashion, useful for ex-ante planning by an aggregator or for real-time load
control; 2) it allows for the preservation of the privacy of end-use customers
that own submetered or directly controlled appliances.
|
1404.1972 | Regularization for Design | math.OC cs.SY | When designing controllers for large-scale systems, the architectural aspects
of the controller such as the placement of actuators, sensors, and the
communication links between them can no longer be taken as given. The task of
designing this architecture is now as important as the design of the control
laws themselves. By interpreting controller synthesis (in a model matching
setup) as the solution of a particular linear inverse problem, we view the
challenge of obtaining a controller with a desired architecture as one of
finding a structured solution to an inverse problem. Building on this
conceptual connection, we formulate and analyze a framework called
\textit{Regularization for Design (RFD)}, in which we augment the variational
formulations of controller synthesis problems with convex penalty functions
that induce a desired controller architecture. The resulting regularized
formulations are convex optimization problems that can be solved efficiently;
these convex programs provide a unified, computationally tractable approach for
the simultaneous co-design of a structured optimal controller and the
actuation, sensing and communication architecture required to implement it.
Further, these problems are natural control-theoretic analogs of prominent
approaches such as the Lasso, the Group Lasso, the Elastic Net, and others that
are employed in statistical modeling. In analogy to that literature, we show
that our approach identifies optimally structured controllers under a suitable
condition on a "signal-to-noise" type ratio.
|
1404.1978 | An Abrupt Change Detection Heuristic with Applications to Cyber Data
Attacks on Power Systems | math.DS cs.SY | We present an analysis of a heuristic for abrupt change detection of systems
with bounded state variations. The proposed analysis is based on the Singular
Value Decomposition (SVD) of a history matrix built from system observations.
We show that monitoring the largest singular value of the history matrix can be
used as a heuristic for detecting abrupt changes in the system outputs. We
provide sufficient detectability conditions for the proposed heuristic. As an
application, we consider detecting malicious cyber data attacks on power
systems and test our proposed heuristic on the IEEE 39-bus testbed.
|
1404.1981 | Iterative Detection and LDPC Decoding Algorithms for MIMO Systems in
Block-Fading Channels | cs.IT math.IT | We propose an Iterative Detection and Decoding (IDD) scheme with Low Density
Parity Check (LDPC) codes for Multiple Input Multiple Output (MIMO) systems for
block-fading $F = 2$ and fast fading Rayleigh channels. An IDD receiver with
soft information processing that exploits the code structure and the behaviour
of the log-likelihood ratios (LLRs) is developed. Minimum Mean Square Error
(MMSE) with Successive Interference Cancellation (SIC) and with Parallel
Interference Cancellation (PIC) schemes are considered. The soft \textit{a
posteriori} output of the decoder in a block-fading channel with Root-Check
LDPC codes has allowed us to create a new strategy to improve the Bit Error
Rate (BER) of a MIMO IDD scheme. Our proposed strategy in some scenarios has
resulted in up to 3 dB of gain in terms of BER for block-fading channels and up
to 1 dB in fast-fading channels.
|
1404.1982 | Aspect-Based Opinion Extraction from Customer reviews | cs.CL cs.IR | Text is the main method of communicating information in the digital age.
Messages, blogs, news articles, reviews, and opinionated information abound on
the Internet. People commonly purchase products online and post their opinions
about purchased items. This feedback is displayed publicly to assist others
with their purchasing decisions, creating the need for a mechanism with which
to extract and summarize useful information for enhancing the decision-making
process. Our contribution is to improve the accuracy of extraction by combining
techniques from three major areas, namely Data Mining, Natural Language
Processing, and Ontologies. The proposed framework sequentially mines product
aspects and user opinions, groups representative aspects by similarity, and
generates an output summary. This paper focuses on extracting all possible
product aspects and user opinions from reviews using natural language, ontology,
and frequent (tag) sets. The proposed framework, when compared with an existing
baseline model, yielded promising results.
|
1404.1990 | Estimating the Accuracy of the Return on Investment (ROI) Performance
Evaluations | cs.CE | Return on Investment (ROI) is one of the most popular performance measurement
and evaluation metrics. ROI analysis (when applied correctly) is a powerful
tool in comparing solutions and making informed decisions on the acquisitions
of information systems. The ROI sensitivity to error is a natural concern, and
common sense suggests that ROI evaluations cannot be absolutely accurate.
However, a literature review revealed that in most publications and analyst
firms' reports this issue is simply overlooked. On the one hand, the results of
ROI calculations are implied to be produced with mathematical rigor; the
possibility of errors is not mentioned and their magnitude is not estimated. On
the other hand, another approach claims ROI evaluations to be absolutely
inaccurate because, in the view of their authors, future benefits (especially
intangible ones)
cannot be estimated within any reasonable boundaries. The purpose of this study
is to provide a systematic research of the accuracy of the ROI evaluations in
the context of the information systems implementations. The main contribution
of the study is that this is the first systematic effort to evaluate ROI
accuracy. Analytical expressions have been derived for estimating errors of the
ROI evaluations. Results of the Monte Carlo simulation will help practitioners
in making informed decisions based on explicitly stated factors influencing the
ROI uncertainties. The results of this research are intended for researchers in
information systems, technology solutions and business management, and also for
information specialists, project managers, program managers, technology
directors, and information systems evaluators. Most results are applicable to
ROI evaluations in a wider subject area.
|
1404.1991 | Multiple-Symbol Differential Detection for Distributed Space-Time Coding | cs.IT math.IT | Differential distributed space-time coding (D-DSTC) technique has been
considered for relay networks to provide both diversity gain and high
throughput in the absence of channel state information. Conventional
differential detection (CDD), or two-symbol non-coherent detection, over
slow-fading channels has been examined and shown to suffer a 3-4 dB loss
compared to coherent detection. Moreover, it has also been shown that the
performance of CDD severely degrades in fast-fading channels and an irreducible
error floor exists at high signal-to-noise ratio region. To overcome the error
floor experienced with fast-fading, a nearly optimal "multiple-symbol"
differential detection (MSDD) is developed in this paper. The MSDD algorithm
jointly processes a larger window of received signals for detection and
significantly improves the performance of D-DSTC in fast-fading channels. The
error performance of the MSDD algorithm is illustrated with simulation results
under different fading scenarios.
|
1404.1998 | A Light Discussion and Derivation of Entropy | cs.IT math.IT | The expression for entropy sometimes appears mysterious, as it is often
asserted without justification. This short manuscript contains a discussion of
the underlying assumptions behind entropy as well as a simple derivation of
this ubiquitous quantity.
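The quantity discussed is the Shannon entropy; for reference, its standard form
for a discrete distribution (a textbook statement, not an excerpt from the
manuscript itself) is:

```latex
% Shannon entropy of a discrete distribution p = (p_1, \dots, p_n):
H(p) \;=\; -\sum_{i=1}^{n} p_i \log p_i
% H(p) = 0 for a deterministic outcome and attains its maximum,
% \log n, for the uniform distribution.
```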
|
1404.1999 | Notes on Generalized Linear Models of Neurons | cs.NE cs.LG q-bio.NC | Experimental neuroscience increasingly requires tractable models for
analyzing and predicting the behavior of neurons and networks. The generalized
linear model (GLM) is an increasingly popular statistical framework for
analyzing neural data that is flexible, exhibits rich dynamic behavior and is
computationally tractable (Paninski, 2004; Pillow et al., 2008; Truccolo et
al., 2005). What follows is a brief summary of the primary equations governing
the application of GLMs to spike trains, with a few sentences linking this work
to the larger statistical literature. Later sections include extensions of a
basic GLM to model spatio-temporal receptive fields as well as network activity
in an arbitrary number of neurons.
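For context, the works cited above model a spike train as a Poisson process
whose rate is an exponentiated linear function of the covariates; in a commonly
used notation (assumed here, not quoted from the notes):

```latex
% Conditional intensity of a Poisson GLM for spike trains:
% x(t): stimulus filtered by kernel k; y(t): spike history filtered by h;
% b: baseline log-rate.
\lambda(t) \;=\; \exp\!\big( k \cdot x(t) + h \cdot y(t) + b \big)
```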
|
1404.2000 | Notes on Kullback-Leibler Divergence and Likelihood | cs.IT math.IT | The Kullback-Leibler (KL) divergence is a fundamental equation of information
theory that quantifies the proximity of two probability distributions. Although
difficult to understand by examining the equation, an intuition and
understanding of the KL divergence arises from its intimate relationship with
likelihood theory. We discuss how KL divergence arises from likelihood theory
in an attempt to provide some intuition and reserve a rigorous (but rather
simple) derivation for the appendix. Finally, we comment on recent applications
of KL divergence in the neural coding literature and highlight its natural
application.
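As a concrete companion to this abstract (a generic sketch, not code from the
paper), the KL divergence between two discrete distributions follows directly
from its definition:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.

    Terms with p_i = 0 contribute nothing (0 * log 0 := 0).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0 -- a distribution diverges from itself by zero
print(kl_divergence(p, q))  # positive, and != kl_divergence(q, p): KL is asymmetric
```

The asymmetry visible in the last line is one reason the quantity is best
understood through likelihood theory rather than as a distance.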
|
1404.2005 | Automatic Tracker Selection w.r.t Object Detection Performance | cs.CV | The tracking algorithm performance depends on video content. This paper
presents a new multi-object tracking approach which is able to cope with video
content variations. First, object detection is improved using
Kanade-Lucas-Tomasi (KLT) feature tracking. Second, for each mobile object, an
appropriate tracker is selected between a KLT-based tracker and a discriminative
appearance-based tracker. This selection is supported by an online tracking
evaluation. The approach has been experimented on three public video datasets.
The experimental results show a better performance of the proposed approach
compared to recent state-of-the-art trackers.
|
1404.2006 | K\"ahlerian information geometry for signal processing | math.DG cs.IT cs.SY math.IT math.ST stat.TH | We prove the correspondence between the information geometry of a signal
filter and a K\"ahler manifold. The information geometry of a minimum-phase
linear system with a finite complex cepstrum norm is a K\"ahler manifold. The
square of the complex cepstrum norm of the signal filter corresponds to the
K\"ahler potential. The Hermitian structure of the K\"ahler manifold is
explicitly emergent if and only if the impulse response function of the highest
degree in $z$ is constant in model parameters. The K\"ahlerian information
geometry takes advantage of more efficient calculation steps for the metric
tensor and the Ricci tensor. Moreover, $\alpha$-generalization on the geometric
tensors is linear in $\alpha$. It is also robust to find Bayesian predictive
priors, such as superharmonic priors, because Laplace-Beltrami operators on
K\"ahler manifolds are in much simpler forms than those of the non-K\"ahler
manifolds. Several time series models are studied in the K\"ahlerian
information geometry.
|
1404.2013 | Optimizing The Selection of Strangers To Answer Questions in Social
Media | cs.SI physics.soc-ph | Millions of people express themselves on public social media, such as
Twitter. Through their posts, these people may reveal themselves as potentially
valuable sources of information. For example, real-time information about an
event might be collected through asking questions of people who tweet about
being at the event location. In this paper, we explore how to model and select
users to target with questions so as to improve answering performance while
managing the load on people who must be asked. We first present a feature-based
model that leverages users' exhibited social behavior, including the content of
their tweets and social interactions, to characterize their willingness and
readiness to respond to questions on Twitter. We then use the model to predict
the likelihood for people to answer questions. To support real-world
information collection applications, we present an optimization-based approach
that selects a proper set of strangers to answer questions while achieving a
set of application-dependent objectives, such as achieving a desired number of
answers and minimizing the number of questions to be sent. Our cross-validation
experiments using multiple real-world data sets demonstrate the effectiveness
of our work.
|
1404.2014 | Entropy Computation of Document Images in Run-Length Compressed Domain | cs.CV | Compression of documents, images, audio and video has been traditionally
practiced to increase the efficiency of data storage and transfer. However, in
order to process or carry out any analytical computations, decompression has
become an unavoidable pre-requisite. In this research work, we have attempted
to compute the entropy, which is an important document analytic directly from
the compressed documents. We use Conventional Entropy Quantifier (CEQ) and
Spatial Entropy Quantifiers (SEQ) for entropy computations [1]. The entropies
obtained are useful in applications like establishing equivalence, word
spotting and document retrieval. Experiments have been performed with all the
data sets of [1], at character, word and line levels taking compressed
documents in the run-length compressed domain. The algorithms developed are
computationally and space efficient, and the results obtained match 100% with
the results reported in [1].
|
1404.2034 | Main Memory Adaptive Indexing for Multi-core Systems | cs.DB | Adaptive indexing is a concept that considers index creation in databases as
a by-product of query processing; as opposed to traditional full index creation
where the indexing effort is performed up front before answering any queries.
Adaptive indexing has received a considerable amount of attention, and several
algorithms have been proposed over the past few years; including a recent
experimental study comparing a large number of existing methods. Until now,
however, most adaptive indexing algorithms have been designed single-threaded,
yet with multi-core systems already well established, the idea of designing
parallel algorithms for adaptive indexing is very natural. In this regard only
one parallel algorithm for adaptive indexing has recently appeared in the
literature: the parallel version of standard cracking. In this paper we
describe three alternative parallel algorithms for adaptive indexing, including
a second variant of a parallel standard cracking algorithm. Additionally, we
describe a hybrid parallel sorting algorithm, and a NUMA-aware method based on
sorting. We then thoroughly compare all these algorithms experimentally, along
with a variant of a recently published parallel version of radix sort. Parallel
sorting algorithms serve as a realistic baseline for multi-threaded adaptive
indexing techniques. In total we experimentally compare seven parallel
algorithms. Additionally, we extensively profile all considered algorithms. The
initial set of experiments considered in this paper indicates that our parallel
algorithms significantly improve over previously known ones. Our results
suggest that, although adaptive indexing algorithms are a good design choice in
single-threaded environments, the rules change considerably in the parallel
case. That is, in future highly-parallel environments, sorting algorithms could
be serious alternatives to adaptive indexing.
|
1404.2071 | Extracting a bilingual semantic grammar from FrameNet-annotated corpora | cs.CL | We present the creation of an English-Swedish FrameNet-based grammar in
Grammatical Framework. The aim of this research is to make existing framenets
computationally accessible for multilingual natural language applications via a
common semantic grammar API, and to facilitate the porting of such a grammar to
other languages. In this paper, we describe the abstract syntax of the semantic
grammar while focusing on its automatic extraction possibilities. We have
extracted a shared abstract syntax from ~58,500 annotated sentences in Berkeley
FrameNet (BFN) and ~3,500 annotated sentences in Swedish FrameNet (SweFN). The
abstract syntax defines 769 frame-specific valence patterns that cover 77.8% of
the examples in BFN and 74.9% in SweFN, belonging to the shared set of 471 frames.
As a side result, we provide a unified method for comparing semantic and
syntactic valence patterns across framenets.
|
1404.2074 | Renewable Powered Cellular Networks: Energy Field Modeling and Network
Coverage | cs.IT math.IT | Powering radio access networks using renewables, such as wind and solar
power, promises dramatic reduction in the network operation cost and the
network carbon footprints. However, the spatial variation of the energy field
can lead to fluctuations in power supplied to the network and thereby affect
its coverage. This warrants research on quantifying the aforementioned negative
effect and countermeasure techniques, motivating the current work. First, a
novel energy field model is presented, in which fixed maximum energy intensity
$\gamma$ occurs at Poisson distributed locations, called energy centers. The
intensities fall off from the centers following an exponential decay function
of squared distance and the energy intensity at an arbitrary location is given
by the decayed intensity from the nearest energy center. The product between
the energy center density and the exponential rate of the decay function,
denoted as $\psi$, is shown to determine the energy field distribution. Next,
the paper considers a cellular downlink network powered by harvesting energy
from the energy field and analyzes its network coverage. For the case of
harvesters deployed at the same sites as base stations (BSs), as $\gamma$
increases, the mobile outage probability is shown to scale as $(c
\gamma^{-\pi\psi}+p)$, where $p$ is the outage probability corresponding to a
flat energy field and $c$ a constant. Subsequently, a simple scheme is proposed
for counteracting the energy randomness by spatial averaging. Specifically,
distributed harvesters are deployed in clusters and the generated energy from
the same cluster is aggregated and then redistributed to BSs. As the cluster
size increases, the power supplied to each BS is shown to converge to a
constant proportional to the number of harvesters per BS.
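The energy field described above can be written compactly as follows; the
decay-rate symbol $\nu$ is an assumption of this sketch, since the abstract does
not name it:

```latex
% Energy intensity at location x, with Poisson-distributed energy centers
% {c_i}, peak intensity \gamma, and decay rate \nu:
I(x) \;=\; \gamma \exp\!\big( -\nu \,\| x - c^*(x) \|^2 \big),
\qquad c^*(x) \;=\; \operatorname*{arg\,min}_{c_i} \| x - c_i \|
% The abstract's \psi is the product of the center density and the decay rate,
% and the mobile outage probability scales as c\,\gamma^{-\pi\psi} + p.
```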
|
1404.2078 | Optimistic Risk Perception in the Temporal Difference error Explains the
Relation between Risk-taking, Gambling, Sensation-seeking and Low Fear | cs.LG q-bio.NC | Understanding the affective, cognitive and behavioural processes involved in
risk taking is essential for treatment and for setting environmental conditions
to limit damage. Using Temporal Difference Reinforcement Learning (TDRL) we
computationally investigated the effect of optimism in risk perception in a
variety of goal-oriented tasks. Optimism in risk perception was studied by
varying the calculation of the Temporal Difference error, i.e., delta, in three
ways: realistic (stochastically correct), optimistic (assuming action control),
and overly optimistic (assuming outcome control). We show that for the gambling
task individuals with 'healthy' perception of control, i.e., action optimism,
do not develop gambling behaviour while individuals with 'unhealthy' perception
of control, i.e., outcome optimism, do. We show that high intensity of
sensations and low levels of fear co-occur due to optimistic risk perception.
We found that overly optimistic risk perception (outcome optimism) results in
risk taking and in persistent gambling behaviour in addition to high intensity
of sensations. We discuss how our results replicate risk-taking related
phenomena.
|
1404.2081 | Simultaneous Diagonalization: On the DoF Region of the K-user MIMO
Multi-way Relay Channel | cs.IT math.IT | The K-user MIMO Y-channel, consisting of K users that want to exchange
messages with each other via a common relay node, is studied in this paper. A
transmission strategy based on channel diagonalization using zero-forcing
beam-forming is proposed. This strategy is then combined with signal-space
alignment for network-coding, and the achievable degrees-of-freedom region is
derived. A new degrees-of-freedom outer bound is also derived and it is shown
that the proposed strategy achieves this outer bound if the users have more
antennas than the relay.
|
1404.2083 | Efficiency of conformalized ridge regression | cs.LG stat.ML | Conformal prediction is a method of producing prediction sets that can be
applied on top of a wide range of prediction algorithms. The method has a
guaranteed coverage probability under the standard IID assumption regardless of
whether the assumptions (often considerably more restrictive) of the underlying
algorithm are satisfied. However, for the method to be really useful it is
desirable that in the case where the assumptions of the underlying algorithm
are satisfied, the conformal predictor loses little in efficiency as compared
with the underlying algorithm (while, being a conformal predictor, it has the
stronger guarantee of validity). In this paper we explore the degree to which
this additional requirement of efficiency is satisfied in the case of Bayesian
ridge regression; we find that asymptotically conformal prediction sets differ
little from ridge regression prediction intervals when the standard Bayesian
assumptions are satisfied.
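To make the setup concrete, here is a minimal split-conformal sketch on top of
ridge regression. This is an illustrative simplification, not the full
conformal procedure analyzed in the paper, and all names, data, and constants
below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, alpha = 200, 3, 1.0, 0.1          # alpha = 1 - target coverage

# synthetic data matching the Bayesian ridge setting (Gaussian noise)
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# split into a proper training set and a calibration set
X_tr, y_tr, X_cal, y_cal = X[:100], y[:100], X[100:], y[100:]

# ridge fit: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

# nonconformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - X_cal @ w)
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]                   # conformal quantile

# prediction set for a new point: an interval centred on the ridge prediction
x_new = rng.normal(size=d)
pred = x_new @ w
interval = (pred - q, pred + q)
```

Under exchangeability the interval covers the true response with probability at
least 1 - alpha, regardless of whether the Gaussian assumptions behind ridge
regression hold; the paper's question is how much wider such sets are than
Bayesian ridge intervals when those assumptions do hold.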
|
1404.2086 | Cascades of Regression Tree Fields for Image Restoration | cs.CV | Conditional random fields (CRFs) are popular discriminative models for
computer vision and have been successfully applied in the domain of image
restoration, especially to image denoising. For image deblurring, however,
discriminative approaches have been mostly lacking. We posit two reasons for
this: First, the blur kernel is often only known at test time, requiring any
discriminative approach to cope with considerable variability. Second, given
this variability it is quite difficult to construct suitable features for
discriminative prediction. To address these challenges we first show a
connection between common half-quadratic inference for generative image priors
and Gaussian CRFs. Based on this analysis, we then propose a cascade model for
image restoration that consists of a Gaussian CRF at each stage. Each stage of
our cascade is semi-parametric, i.e. it depends on the instance-specific
parameters of the restoration problem, such as the blur kernel. We train our
model by loss minimization with synthetically generated training data. Our
experiments show that when applied to non-blind image deblurring, the proposed
approach is efficient and yields state-of-the-art restoration quality on images
corrupted with synthetic and real blur. Moreover, we demonstrate its
suitability for image denoising, where we achieve competitive results for
grayscale and color images.
|
1404.2115 | An efficient time domain representation for Single-Carrier Frequency
Division Multiple Access | cs.IT math.IT | This paper presents a physical model for Single Carrier-Frequency Division
Multiple Access (SC-FDMA). We specifically show that by using multirate signal
processing we derive a general time domain description of Localised SC-FDMA
systems relying on circular convolution. This general model has the advantage
of encompassing different implementations with flexible rates as well as
additional frequency precoding such as spectral shaping. Based on this
time-domain model, we study the Power Spectral Density (PSD) and the Signal to
Interference and Noise Ratio (SINR). Different implementations of SC-FDMA are
investigated, and analytical expressions of both PSD and SINR are compared to
simulation results.
|
1404.2116 | Rational Counterfactuals | cs.AI | This paper introduces the concept of rational counterfactuals, which is the idea
of identifying a counterfactual from the factual (whether perceived or real)
that maximizes the attainment of the desired consequent. In counterfactual
thinking, if we have a factual statement like: Saddam Hussein invaded Kuwait and
consequently George Bush declared war on Iraq, then its counterfactual is: If
Saddam Hussein did not invade Kuwait then George Bush would not have declared
war on Iraq. The theory of rational counterfactuals is applied to identify the
antecedent that gives the desired consequent necessary for rational decision
making. The rational countefactual theory is applied to identify the values of
variables Allies, Contingency, Distance, Major Power, Capability, Democracy, as
well as Economic Interdependency that gives the desired consequent Peace.
|
1404.2119 | Characterization of Coded Random Access with Compressive Sensing based
Multi-User Detection | cs.IT math.IT | The emergence of Machine-to-Machine (M2M) communication requires new Medium
Access Control (MAC) schemes and physical (PHY) layer concepts to support a
massive number of access requests. The concept of coded random access,
introduced recently, greatly outperforms other random access methods and is
inherently capable of taking advantage of the capture effect from the PHY layer.
Furthermore, at the PHY layer, compressive sensing based multi-user detection
(CS-MUD) is a novel technique that exploits sparsity in multi-user detection to
achieve a joint activity and data detection. In this paper, we combine coded
random access with CS-MUD on the PHY layer and show very promising results for
the resulting protocol.
|
1404.2131 | Performance Analysis of Hybrid ARQ with Incremental Redundancy over
Amplify-and-Forward Dual-Hop Relay Channels | cs.IT math.IT | In this paper, we consider a three node relay network comprising a source, a
relay, and a destination. The source transmits the message to the destination
using hybrid automatic repeat request (HARQ) with incremental redundancy (IR).
The relay overhears the transmitted message, amplifies it using a variable gain
amplifier, and then forwards the message to the destination. The latter
combines both the source and the relay message and tries to decode the
information. In case of decoding failure, the destination sends a negative
acknowledgement. A new replica of the message containing new parity bits is
then transmitted in the subsequent HARQ round. This process continues until
successful decoding occurs at the destination or a maximum number $M$ of rounds
is reached. We study the performance of HARQ-IR over the considered relay
channel from an information theoretic perspective. We derive exact expressions
and bounds for the information outage probability, the average number of
transmissions, and the average transmission rate. Moreover, we evaluate the
delay experienced by Poisson arriving packets over the considered relay
network. We also provide analytical expressions for the expected waiting time,
the sojourn time, and the energy efficiency. The derived exact expressions are
validated by Monte Carlo simulations.
|
1404.2149 | Bond theory for pentapods and hexapods | cs.RO math.AG | This paper deals with the old and classical problem of determining necessary
conditions for the overconstrained mobility of some mechanical device. In
particular, we show that the mobility of pentapods/hexapods implies either a
collinearity condition on the anchor points, or a geometric condition on the
normal projections of base and platform points. The method is based on a
specific compactification of the group of direct isometries of $\mathbb{R}^3$.
|
1404.2160 | SAP HANA and its performance benefits | cs.DB | In-memory computing has changed the landscape of database technology. Within
the database and technology field, advancements occur over the course of time
that have had the capacity to transform some fundamental tenets of the
technology and how it is applied. The concept of Database Management Systems
(DBMS) was realized in industry during the 1960s, allowing users and developers
to use a navigational model to access the data stored by the computers of that
day as they grew in speed and capability. This manuscript specifically
examines the SAP High-Performance Analytic Appliance (HANA) approach, which is
one of the commonly used technologies today. Additionally, this manuscript
provides an analysis of the first two of the four common main use cases that
utilize SAP HANA's in-memory computing database technology. The performance
benefits are important factors for DB calculations. Some of the benefits are
quantified and demonstrated using the defined sets of data.
|
1404.2162 | The NNN Formalization: Review and Development of Guideline Specification
in the Care Domain | cs.AI | Due to an ageing society, it can be expected that fewer nursing personnel will
be responsible for an increasing number of patients in the future. One way to
address this challenge is to provide system-based support for nursing personnel
in creating, executing, and adapting patient care processes. In care practice,
these processes follow the general care process definition and are individually
specified according to patient-specific data as well as diagnoses and
guidelines from the NANDA, NIC, and NOC (NNN) standards. In addition,
adaptations to running patient processes frequently become necessary and are to
be conducted by nursing personnel applying NNN knowledge. In order to provide
semi-automatic support for the design and adaptation of care processes, a
formalization of NNN knowledge is indispensable. This technical report presents
the NNN formalization, which is developed with goals such as
completeness, flexibility, and later exploitation for creating and adapting
patient care processes. The formalization also takes into consideration an
extensive evaluation of existing formalization standards for clinical
guidelines. The NNN formalization as well as its usage are evaluated based on
the FATIGUE case study.
|
1404.2166 | Sampling-based Roadmap Planners are Probably Near-Optimal after Finite
Computation | cs.RO | Sampling-based motion planners have proven to be efficient solutions to a
variety of high-dimensional, geometrically complex motion planning problems
with applications in several domains. The traditional view of these approaches
is that they solve challenges efficiently by giving up formal guarantees and
instead attain asymptotic properties in terms of completeness and optimality.
Recent work has argued based on Monte Carlo experiments that these approaches
also exhibit desirable probabilistic properties in terms of completeness and
optimality after finite computation. The current paper formalizes these
guarantees. It proves a formal bound on the probability that solutions returned
by asymptotically optimal roadmap-based methods (e.g., PRM*) are within a bound
of the optimal path length I* with clearance ε after a finite iteration n. This
bound has the form P(|I_n - I*| ≤ δ·I*) ≤ P_success, where δ is an error term
for the length of a path in the PRM* graph, I_n. This bound is proven for
general-dimension Euclidean spaces and
evaluated in simulation. A discussion on how this bound can be used in
practice, as well as bounds for sparse roadmaps are also provided.
|
1404.2188 | A Convolutional Neural Network for Modelling Sentences | cs.CL | The ability to accurately represent sentences is central to language
understanding. We describe a convolutional architecture dubbed the Dynamic
Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of
sentences. The network uses Dynamic k-Max Pooling, a global pooling operation
over linear sequences. The network handles input sentences of varying length
and induces a feature graph over the sentence that is capable of explicitly
capturing short and long-range relations. The network does not rely on a parse
tree and is easily applicable to any language. We test the DCNN in four
experiments: small scale binary and multi-class sentiment prediction, six-way
question classification and Twitter sentiment prediction by distant
supervision. The network achieves excellent performance in the first three
tasks and a greater than 25% error reduction in the last task with respect to
the strongest baseline.
|
1404.2201 | Resource-Constrained Adaptive Search and Tracking for Sparse Dynamic
Targets | cs.IT math.IT | This paper considers the problem of resource-constrained and noise-limited
localization and estimation of dynamic targets that are sparsely distributed
over a large area. We generalize an existing framework [Bashan et al, 2008] for
adaptive allocation of sensing resources to the dynamic case, accounting for
time-varying target behavior such as transitions to neighboring cells and
varying amplitudes over a potentially long time horizon. The proposed adaptive
sensing policy is driven by minimization of a modified version of the
previously introduced ARAP objective function, which is a surrogate function
for mean squared error within locations containing targets. We provide
theoretical upper bounds on the performance of adaptive sensing policies by
analyzing solutions with oracle knowledge of target locations, gaining insight
into the effect of target motion and amplitude variation as well as sparsity.
Exact minimization of the multi-stage objective function is infeasible, but
myopic optimization yields a closed-form solution. We propose a simple
non-myopic extension, the Dynamic Adaptive Resource Allocation Policy (D-ARAP),
that allocates a fraction of resources for exploring all locations rather than
solely exploiting the current belief state. Our numerical studies indicate that
D-ARAP has the following advantages: (a) it is more robust than the myopic
policy to noise, missing data, and model mismatch; (b) it performs comparably
to well-known approximate dynamic programming solutions but at significantly
lower computational complexity; and (c) it improves greatly upon non-adaptive
uniform resource allocation in terms of estimation error and probability of
detection.
|
1404.2203 | Sum-rate maximization of OFDMA femtocell networks that incorporates the
QoS of macro mobile stations | cs.NI cs.IT math.IT | This paper proposes a power allocation scheme with co-channel allocation for
a femto base station (BS) that maximizes the sum-rate of its own femto mobile
stations (MSs) with a constraint that limits the degradation of quality of
service (QoS) of macro MSs. We have found a closed-form solution for the upper
limit on the transmission power of each sub-channel that satisfies the
constraint in a probabilistic sense. The proposed scheme is practical since it
uses only the information easily obtained by the femto BS. Moreover, our scheme
meets the constraint with minimal degradation compared to the optimal sum-rate
of the femto MSs achieved without the constraint.
|
1404.2229 | Towards the Safety of Human-in-the-Loop Robotics: Challenges and
Opportunities for Safety Assurance of Robotic Co-Workers | cs.RO cs.LG | The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified.
|
1404.2231 | Distributed Joint Source and Channel Coding with Low-Density
Parity-Check Codes | cs.IT math.IT | Low-density parity-check (LDPC) codes with the parity-based approach for
distributed joint source-channel coding (DJSCC) with decoder side information
are described in this paper. The parity-based approach can achieve the
theoretical limit. Different edge degree distributions are used for source variable
nodes and parity variable nodes. Particularly, the codeword-averaged density
evolution (CADE) is presented for asymmetrically correlated nonuniform sources
over the asymmetric memoryless transmission channel. Extensive simulations show
that the splitting of variable nodes can improve the coding efficiency of
suboptimal codes and lower the error floor.
|
1404.2233 | Performance Improvement of PAPR Reduction for OFDM Signal In LTE System | cs.NI cs.IT math.IT | Orthogonal frequency division multiplexing (OFDM) is an emerging research
field of wireless communication. It is one of the most proficient multi-carrier
transmission techniques widely used today in broadband wired and wireless
applications. It provides greater immunity to multipath fading and impulse
noise, and eliminates inter-symbol interference (ISI), inter-carrier
interference (ICI), and the need for equalizers. OFDM signals have a general
problem of a high peak-to-average power ratio (PAPR), which is defined as the
ratio of the peak power to the average power of the OFDM signal. The drawback
of a high PAPR is that it demands a large dynamic range from the power
amplifier (PA) and the digital-to-analog converter (DAC). In this paper, an
improved scheme of the amplitude clipping and filtering method is proposed and
implemented, which shows a significant improvement in PAPR reduction at the
cost of a slight BER increase compared to an existing method. Comparative
studies of different parameters are also covered.
|
1404.2258 | Genie Chains: Exploring Outer Bounds on the Degrees of Freedom of MIMO
Interference Networks | cs.IT math.IT | In this paper, we propose a novel genie chains approach to obtain information
theoretic degrees of freedom (DoF) outer bounds for MIMO wireless interference
networks. This new approach creates a chain of mappings from genie signals
provided to a receiver to the exposed signal spaces at that receiver, which
then serve as the genie signals for the next receiver in the chain subject to
certain linear independence requirements, essentially converting an information
theoretic DoF outer bound problem into a linear algebra problem. Several
applications of the genie chains approach are presented.
|
1404.2259 | Virtual Prototyping and Distributed Control for Solar Array with
Distributed Multilevel Inverter | cs.DC cs.SY | In this paper, we present the virtual prototyping of a solar array with a
grid-tie implemented as a distributed inverter and controlled using distributed
algorithms. Due to the distributed control and inherent redundancy in the array
composed of many panels and inverter modules, the virtual prototype exhibits
fault-tolerance capabilities. The distributed identifier algorithm allows the
system to keep track of the number of operating panels to appropriately
regulate the DC voltage output of the panels using buck-boost converters, and
determine appropriate switching times for H-bridges in the grid-tie. We
evaluate the distributed inverter, its control strategy, and fault-tolerance
through simulation in Simulink/Stateflow. Our virtual prototyping framework
allows for generating arrays and grid-ties consisting of many panels, and we
evaluate arrays of five to dozens of panels. Our analysis suggests the
achievable total harmonic distortion (THD) of the system may allow for
operating the array in spite of failures of the power electronics, control
software, and other subcomponents.
|
1404.2267 | Transparallel mind: Classical computing with quantum power | cs.AI | Inspired by the extraordinary computing power promised by quantum computers,
the quantum mind hypothesis postulated that quantum mechanical phenomena are
the source of neuronal synchronization, which, in turn, might underlie
consciousness. Here, I present an alternative inspired by a classical computing
method with quantum power. This method relies on special distributed
representations called hyperstrings. Hyperstrings are superpositions of up to
an exponential number of strings, which -- by a single-processor classical
computer -- can be evaluated in a transparallel fashion, that is,
simultaneously as if only one string were concerned. Building on a neurally
plausible model of human visual perceptual organization, in which hyperstrings
are formal counterparts of transient neural assemblies, I postulate that
synchronization in such assemblies is a manifestation of transparallel
information processing. This accounts for the high combinatorial capacity and
speed of human visual perceptual organization and strengthens ideas that
self-organizing cognitive architecture bridges the gap between neurons and
consciousness.
|
1404.2268 | A Compact Linear Programming Relaxation for Binary Sub-modular MRF | cs.CV | We propose a novel compact linear programming (LP) relaxation for binary
sub-modular MRF in the context of object segmentation. Our model is obtained by
linearizing an $l_1^+$-norm derived from the quadratic programming (QP) form of
the MRF energy. The resultant LP model contains significantly fewer variables
and constraints compared to the conventional LP relaxation of the MRF energy.
In addition, unlike QP which can produce ambiguous labels, our model can be
viewed as a quasi-total-variation minimization problem, and it can therefore
preserve the discontinuities in the labels. We further establish a relaxation
bound between our LP model and the conventional LP model. In the experiments,
we demonstrate our method for the task of interactive object segmentation. Our
LP model outperforms QP when converting the continuous labels to binary labels
using different threshold values on the entire Oxford interactive segmentation
dataset. The computational complexity of our LP is of the same order as that of
the QP, and it is significantly lower than the conventional LP relaxation.
|
1404.2269 | Improving soft FEC performance for higher-order modulations via
optimized bit channel mappings | cs.IT math.IT physics.optics | Soft forward error correction with higher-order modulations is often
implemented in practice via the pragmatic bit-interleaved coded modulation
paradigm, where a single binary code is mapped to a nonbinary modulation. In
this paper, we study the optimization of the mapping of the coded bits to the
modulation bits for a polarization-multiplexed fiber-optical system without
optical inline dispersion compensation. Our focus is on protograph-based
low-density parity-check (LDPC) codes which allow for an efficient hardware
implementation, suitable for high-speed optical communications. The
optimization is applied to the AR4JA protograph family, and further extended to
protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full
field simulations via the split-step Fourier method are used to verify the
analysis. The results show performance gains of up to 0.25 dB, which translate
into a possible extension of the transmission reach by roughly up to 8%,
without significantly increasing the system complexity.
|
1404.2289 | On the Minimal Revision Problem of Specification Automata | cs.SY cs.RO | As robots are being integrated into our daily lives, it becomes necessary to
provide guarantees of safe and provably correct operation. Such guarantees
can be provided using automata theoretic task and mission planning where the
requirements are expressed as temporal logic specifications. However, in
real-life scenarios, it is to be expected that not all user task requirements
can be realized by the robot. In such cases, the robot must provide feedback to
the user on why it cannot accomplish a given task. Moreover, the robot should
indicate what tasks it can accomplish which are as "close" as possible to the
initial user intent. This paper establishes that the latter problem, which is
referred to as the minimal specification revision problem, is NP-complete. A
heuristic algorithm is presented that can compute good approximations to the
Minimal Revision Problem (MRP) in polynomial time. The experimental study of
the algorithm demonstrates that in most problem instances the heuristic
algorithm actually returns the optimal solution. Finally, some cases where the
algorithm does not return the optimal solution are presented.
|