| id | title | categories | abstract |
|---|---|---|---|
1401.6775 | Methods for Collision-Free Navigation of Multiple Mobile Robots in
Unknown Cluttered Environments | math.OC cs.RO | Navigation and guidance of autonomous vehicles is a fundamental problem in
robotics, which has attracted intensive research in recent decades. This report
is mainly concerned with provable collision avoidance of multiple autonomous
vehicles operating in unknown cluttered environments, using reactive
decentralized navigation laws, where obstacle information is supplied by some
sensor system.
Recently, robust and decentralized variants of model predictive control based
navigation systems have been applied to vehicle navigation problems. Properties
such as provable collision avoidance under disturbance and provable convergence
to a target have been shown; however, these often require significant
computational and communicative capabilities and do not consider sensor
constraints, making real-time use difficult. There also appears to be an
opportunity to develop a better trade-off between tractability, optimality, and
robustness.
The main contributions of this work are as follows: firstly, the integration
of the robust model predictive control concept with reactive navigation
strategies based on local path planning, which is applied to both holonomic and
unicycle vehicle models subjected to acceleration bounds and disturbance;
secondly, the extension of model predictive control type methods to situations
where the information about the obstacle is limited to a discrete ray-based
sensor model, for which provably safe, convergent boundary following can be
shown; and thirdly the development of novel constraints allowing decentralized
coordination of multiple vehicles using a robust model predictive control type
approach, where a single communication exchange is used per control update,
vehicles are allowed to perform planning simultaneously, and coherency
objectives are avoided.
|
1401.6787 | On the capacity of the dither-quantized Gaussian channel | cs.IT math.IT | This paper studies the capacity of the peak-and-average-power-limited
Gaussian channel when its output is quantized using a dithered, infinite-level,
uniform quantizer of step size $\Delta$. It is shown that the capacity of this
channel tends to that of the unquantized Gaussian channel when $\Delta$ tends
to zero, and it tends to zero when $\Delta$ tends to infinity. In the low
signal-to-noise ratio (SNR) regime, it is shown that, when the peak-power
constraint is absent, the low-SNR asymptotic capacity is equal to that of the
unquantized channel irrespective of $\Delta$. Furthermore, an expression for
the low-SNR asymptotic capacity for finite peak-to-average-power ratios is
given and evaluated in the low- and high-resolution limit. It is demonstrated
that, in this case, the low-SNR asymptotic capacity converges to that of the
unquantized channel when $\Delta$ tends to zero, and it tends to zero when
$\Delta$ tends to infinity. Comparing these results with achievability results
for (undithered) 1-bit quantization, it is observed that the dither reduces
capacity in the low-precision limit, and it reduces the low-SNR asymptotic
capacity unless the peak-to-average-power ratio is unbounded.
|
1401.6790 | Optimal Power Allocation in Block Fading Gaussian Channels with Causal
CSI and Secrecy Constraints | cs.IT cs.CR math.IT | The optimal power allocation that maximizes the secrecy capacity of block
fading Gaussian (BF-Gaussian) networks with causal channel state information
(CSI), M-block delay tolerance and a frame based power constraint is examined.
In particular, we formulate the secrecy capacity maximization as a dynamic
program. We propose suitable linear approximations of the secrecy capacity
density in the low SNR, the high SNR and the intermediate SNR regimes,
according to the overall available power budget. Our findings indicate that
when the available power resources are very low (low SNR case) the optimal
strategy is a threshold policy. On the other hand, when the available power
budget is infinite (high SNR case), a constant power policy maximizes the frame
secrecy capacity. Finally, when the power budget is finite (medium SNR case),
an approximate tractable power allocation policy is derived.
|
1401.6799 | Slotted Aloha for Networked Base Stations | cs.IT math.IT | We study multiple base station, multi-access systems in which the user-base
station adjacency is induced by geographical proximity. At each slot, each user
transmits (is active) with a certain probability, independently of other users,
and is heard by all base stations within the distance $r$. Both the users and
base stations are placed uniformly at random over the (unit) area. We first
consider a non-cooperative decoding where base stations work in isolation, but
a user is decoded as soon as one of its nearby base stations reads a clean
signal from it. We find the decoding probability and quantify the gains
introduced by multiple base stations. Specifically, the peak throughput
increases linearly with the number of base stations $m$ and is roughly $m/4$
times larger than the throughput of a single base station that uses standard slotted
Aloha. Next, we propose a cooperative decoding, where the mutually close base
stations inform each other whenever they decode a user inside their coverage
overlap. At each base station, the messages received from the nearby stations
help resolve collisions by the interference cancellation mechanism. Building
from our exact formulas for the non-cooperative case, we provide a heuristic
formula for the cooperative decoding probability that reflects well the actual
performance. Finally, we demonstrate by simulation significant gains of
cooperation with respect to the non-cooperative decoding.
|
1401.6810 | Slotted Aloha for Networked Base Stations with Spatial and Temporal
Diversity | cs.IT math.IT | We consider framed slotted Aloha where $m$ base stations cooperate to decode
messages from $n$ users. Users and base stations are placed uniformly at random
over an area. At each frame, each user sends multiple replicas of its packet
according to a prescribed distribution, and it is heard by all base stations
within the communication radius $r$. Base stations employ a decoding algorithm
that utilizes the successive interference cancellation mechanism, both in
space--across neighboring base stations, and in time--across different slots,
locally at each base station. We show that there exists a threshold on the
normalized load $G=n/(\tau m)$, where $\tau$ is the number of slots per frame,
below which decoding probability converges asymptotically (as
$n,m,\tau\rightarrow \infty$, $r\rightarrow 0$) to the maximal possible
value--the probability that a user is heard by at least one base station, and
we find a lower bound on the threshold. Further, we give a heuristic evaluation
of the decoding probability based on the and-or-tree analysis. Finally, we show
that the peak throughput increases linearly in the number of base stations.
|
1401.6846 | Delayed Channel State Information: Incremental Redundancy with Backtrack
Retransmission | cs.IT math.IT | In many practical wireless systems, the Signal-to-Interference-and-Noise
Ratio (SINR) that is applicable to a certain transmission, referred to as
Channel State Information (CSI), can only be learned after the transmission has
taken place and is thereby outdated (delayed). For example, this occurs under
intermittent interference. We devise the backward retransmission (BRQ) scheme,
which uses the delayed CSIT to send the optimal amount of incremental
redundancy (IR). BRQ uses fixed-length packets, fixed-rate R transmission
codebook, and operates as Markov block coding, where the correlation between
the adjacent packets depends on the amount of IR parity bits. When the delayed
CSIT is full and R grows asymptotically, the average throughput of BRQ becomes
equal to the value achieved with prior CSIT and a fixed-power transmitter,
albeit at the expense of increased delay. The second contribution is a method
for employing BRQ when a limited number of feedback bits is available to report
the delayed CSIT. The main novelty is the idea to assemble multiple feedback
opportunities and report multiple SINRs through vector quantization. This
challenges the conventional wisdom in ARQ protocols where feedback bits are
used to only quantize the CSIT of the immediate previous transmission.
|
1401.6853 | Computing the Kullback-Leibler Divergence between two Generalized Gamma
Distributions | cs.IT math.IT | We derive a closed form solution for the Kullback-Leibler divergence between
two generalized gamma distributions. These notes are meant as a reference and
provide a guided tour towards a result of practical interest that is rarely
explicated in the literature.
|
1401.6875 | Context-based Word Acquisition for Situated Dialogue in a Virtual World | cs.CL | To tackle the vocabulary problem in conversational systems, previous work has
applied unsupervised learning approaches on co-occurring speech and eye gaze
during interaction to automatically acquire new words. Although these
approaches have shown promise, several issues related to human language
behavior and human-machine conversation have not been addressed. First,
psycholinguistic studies have shown certain temporal regularities between human
eye movement and language production. While these regularities can potentially
guide the acquisition process, they have not been incorporated in the previous
unsupervised approaches. Second, conversational systems generally have an
existing knowledge base about the domain and vocabulary. While the existing
knowledge can potentially help bootstrap and constrain the acquired new words,
it has not been incorporated in the previous models. Third, eye gaze could
serve different functions in human-machine conversation. Some gaze streams may
not be closely coupled with the speech stream, and thus are potentially detrimental
to word acquisition. Automated recognition of closely-coupled speech-gaze
streams based on conversation context is important. To address these issues, we
developed new approaches that incorporate user language behavior, domain
knowledge, and conversation context in word acquisition. We evaluated these
approaches in the context of situated dialogue in a virtual world. Our
experimental results have shown that incorporating the above three types of
contextual information significantly improves word acquisition performance.
|
1401.6876 | Improving Statistical Machine Translation for a Resource-Poor Language
Using Related Resource-Rich Languages | cs.CL | We propose a novel language-independent approach for improving machine
translation for resource-poor languages by exploiting their similarity to
resource-rich ones. More precisely, we improve the translation from a
resource-poor source language X_1 into a resource-rich language Y given a
bi-text containing a limited number of parallel sentences for X_1-Y and a
larger bi-text for X_2-Y for some resource-rich language X_2 that is closely
related to X_1. This is achieved by taking advantage of the opportunities that
vocabulary overlap and similarities between the languages X_1 and X_2 in
spelling, word order, and syntax offer: (1) we improve the word alignments for
the resource-poor language, (2) we further augment it with additional
translation options, and (3) we take care of potential spelling differences
through appropriate transliteration. The evaluation for Indonesian -> English
using Malay, and for Spanish -> English using Portuguese while pretending
Spanish is resource-poor, shows an absolute gain of up to 1.35 and 3.37 BLEU
points, respectively, which is an improvement over the best rivaling
approaches, while using much less additional data. Overall, our method cuts the
amount of necessary "real" training data by a factor of 2--5.
|
1401.6887 | OLAP on Structurally Significant Data in Graphs | cs.DB | Summarized data analysis of graphs using OLAP (Online Analytical Processing)
is very popular these days. However, due to high dimensionality and large size,
it is not easy to decide which data should be aggregated for OLAP analysis.
Though iceberg cubing is useful, it is unaware of the significance of
dimensional values with respect to the structure of the graph. In this paper,
we propose a Structural Significance, SS, measure to identify the structurally
significant dimensional values in each dimension. This leads to structure-aware
pruning. We then propose an algorithm, iGraphCubing, to compute the graph cube
to analyze the structurally significant data using the proposed measure. We
evaluated the proposed ideas on real and synthetic data sets and observed very
encouraging results.
|
1401.6891 | Unsupervised Visual and Textual Information Fusion in Multimedia
Retrieval - A Graph-based Point of View | cs.IR | Multimedia collections are more than ever growing in size and diversity.
Effective multimedia retrieval systems are thus critical to access these
datasets from the end-user perspective and in a scalable way. We are interested
in repositories of image/text multimedia objects and we study multimodal
information fusion techniques in the context of content based multimedia
information retrieval. We focus on graph based methods which have proven to
provide state-of-the-art performance. We particularly examine two such
methods: cross-media similarities and random walk based scores. From a
theoretical viewpoint, we propose a unifying graph based framework which
encompasses the two aforementioned approaches. Our proposal allows us to
highlight the core features one should consider when using a graph based
technique for the combination of visual and textual information. We compare
cross-media and random walk based results using three different real-world
datasets. From a practical standpoint, our extended empirical analysis allows us
to provide insights and guidelines about the use of graph based methods for
multimodal information fusion in content based multimedia information
retrieval.
|
1401.6904 | Adaptive Visual Tracking for Robotic Systems Without Image-Space
Velocity Measurement | cs.RO cs.SY math.OC | In this paper, we investigate the visual tracking problem for robotic systems
without image-space velocity measurement, simultaneously taking into account
the uncertainties of the camera model and the manipulator kinematics and
dynamics. We propose a new image-space observer that exploits the image-space
velocity information contained in the unknown kinematics, upon which, we design
an adaptive controller without using the image-space velocity signal where the
adaptations of the depth-rate-independent kinematic parameter and depth
parameter are driven by both the image-space tracking errors and observation
errors. The major superiority of the proposed observer-based adaptive
controller lies in its simplicity and the separation of the handling of
multiple uncertainties in visually servoed robotic systems, thus avoiding the
overparametrization problem of the existing work. Using Lyapunov analysis, we
demonstrate that the image-space tracking errors converge to zero
asymptotically. The performance of the proposed adaptive control scheme is
illustrated by a numerical simulation.
|
1401.6929 | Computing support for advanced medical data analysis and imaging | physics.comp-ph cs.CV cs.DC physics.ins-det physics.med-ph | We discuss computing issues for data analysis and image reconstruction of
PET-TOF medical scanner or other medical scanning devices producing large
volumes of data. A service architecture based on the grid and cloud concepts
for distributed processing is proposed and critically discussed.
|
1401.6931 | How the Sando Search Tool Recommends Queries | cs.SE cs.IR | Developers spend a significant amount of time searching their local codebase.
To help them search efficiently, researchers have proposed novel tools that
apply state-of-the-art information retrieval algorithms to retrieve relevant
code snippets from the local codebase. However, these tools still rely on the
developer to craft an effective query, which requires that the developer is
familiar with the terms contained in the related code snippets. Our empirical
data from a state-of-the-art local code search tool, called Sando, suggests
that developers are sometimes unacquainted with their local codebase. In order
to bridge the gap between developers and their ever-increasing local codebase,
in this paper we demonstrate the recommendation techniques integrated in Sando.
|
1401.6956 | A continuous-time approach to online optimization | math.OC cs.LG stat.ML | We consider a family of learning strategies for online optimization problems
that evolve in continuous time and we show that they lead to no regret. From a
more traditional, discrete-time viewpoint, this continuous-time approach allows
us to derive the no-regret properties of a large class of discrete-time
algorithms including as special cases the exponential weight algorithm, online
mirror descent, smooth fictitious play and vanishingly smooth fictitious play.
In so doing, we obtain a unified view of many classical regret bounds, and we
show that they can be decomposed into a term stemming from continuous-time
considerations and a term which measures the disparity between discrete and
continuous time. As a result, we obtain a general class of infinite horizon
learning strategies that guarantee an $\mathcal{O}(n^{-1/2})$ regret bound
without having to resort to a doubling trick.
|
1401.6962 | Compressive Classification of a Mixture of Gaussians: Analysis, Designs
and Geometrical Interpretation | cs.IT math.IT | This paper derives fundamental limits on the performance of compressive
classification when the source is a mixture of Gaussians. It provides an
asymptotic analysis of a Bhattacharya based upper bound on the
misclassification probability for the optimal Maximum-A-Posteriori (MAP)
classifier that depends on quantities that are dual to the concepts of
diversity-order and coding gain in multi-antenna communications. The
diversity-order of the measurement system determines the rate at which the
probability of misclassification decays with signal-to-noise ratio (SNR) in the
low-noise regime. The counterpart of coding gain is the measurement gain which
determines the power offset of the probability of misclassification in the
low-noise regime. These two quantities make it possible to quantify differences
in misclassification probability between random measurement and
(diversity-order) optimized measurement. Results are presented for two-class
classification problems first with zero-mean Gaussians then with nonzero-mean
Gaussians, and finally for multiple-class Gaussian classification problems. The
behavior of misclassification probability is revealed to be intimately related
to certain fundamental geometric quantities determined by the measurement
system, the source and their interplay. Numerical results, representative of
compressive classification of a mixture of Gaussians, demonstrate alignment of
the actual misclassification probability with the Bhattacharya based upper
bound. The connection between the misclassification performance and the
alignment between source and measurement geometry may be used to guide the
design of dictionaries for compressive classification.
|
1401.6964 | Co-Evolution of Friendship and Publishing in Online Blogging Social
Networks | cs.SI physics.soc-ph | In the past decade, blogging web sites have become more sophisticated and
influential than ever. Much of this sophistication and influence follows from
their network organization. Blogging social networks (BSNs) allow individual
bloggers to form contact lists, subscribe to other blogs, comment on blog
posts, declare interests, and participate in collective blogs. Thus, a BSN is a
bimodal venue, where users can engage in publishing (post) as well as in social
(make friends) activities. In this paper, we study the co-evolution of both
activities. We observed a significant positive correlation between blogging and
socializing. In addition, we identified a number of user archetypes that
correspond to "mainly bloggers," "mainly socializers," etc. We analyzed a BSN
at the level of individual posts and changes in contact lists and at the level
of trajectories in the friendship-publishing space. Both approaches produced
consistent results: the majority of BSN users are passive readers; publishing
is the dominant active behavior in a BSN; and social activities complement
blogging, rather than compete with it.
|
1401.6968 | Fixed-rank Rayleigh Quotient Maximization by an $M$PSK Sequence | cs.IT math.CO math.IT math.OC | Certain optimization problems in communication systems, such as
limited-feedback constant-envelope beamforming or noncoherent $M$-ary
phase-shift keying ($M$PSK) sequence detection, result in the maximization of a
fixed-rank positive semidefinite quadratic form over the $M$PSK alphabet. This
form is a special case of the Rayleigh quotient of a matrix and, in general,
its maximization by an $M$PSK sequence is $\mathcal{NP}$-hard. However, if the
rank of the matrix is not a function of its size, then the optimal solution can
be computed with polynomial complexity in the matrix size. In this work, we
develop a new technique to efficiently solve this problem by utilizing
auxiliary continuous-valued angles and partitioning the resulting continuous
space of solutions into a polynomial-size set of regions, each of which
corresponds to a distinct $M$PSK sequence. The sequence that maximizes the
Rayleigh quotient is shown to belong to this polynomial-size set of sequences,
thus efficiently reducing the size of the feasible set from exponential to
polynomial. Based on this analysis, we also develop an algorithm that
constructs this set in polynomial time and show that it is fully
parallelizable, memory efficient, and rank scalable. The proposed algorithm
compares favorably with other solvers for this problem that have appeared
recently in the literature.
|
1401.6975 | A decoding algorithm for CSS codes using the X/Z correlations | cs.IT math.IT quant-ph | We propose a simple decoding algorithm for CSS codes taking into account the
correlations between the X part and the Z part of the error. Applying this idea
to surface codes, we derive an improved version of the perfect matching
decoding algorithm which uses these X/Z correlations.
|
1401.6984 | Kaldi+PDNN: Building DNN-based ASR Systems with Kaldi and PDNN | cs.LG cs.CL | The Kaldi toolkit is becoming popular for constructing automated speech
recognition (ASR) systems. Meanwhile, in recent years, deep neural networks
(DNNs) have shown state-of-the-art performance on various ASR tasks. This
document describes our open-source recipes to implement fully-fledged DNN
acoustic modeling using Kaldi and PDNN. PDNN is a lightweight deep learning
toolkit developed under the Theano environment. Using these recipes, we can
build up multiple systems including DNN hybrid systems, convolutional neural
network (CNN) systems and bottleneck feature systems. These recipes are
directly based on the Kaldi Switchboard 110-hour setup. However, they can
easily be adapted to new datasets.
|
1401.7006 | Polar Codes for Some Multi-terminal Communications Problems | cs.IT math.IT | It is shown that polar coding schemes achieve the known achievable rate
regions for several multi-terminal communications problems including lossy
distributed source coding, multiple access channels and multiple descriptions
coding. The results are valid for arbitrary alphabet sizes (binary or
nonbinary) and arbitrary distributions (symmetric or asymmetric).
|
1401.7020 | A Stochastic Quasi-Newton Method for Large-Scale Optimization | math.OC cs.LG stat.ML | The question of how to incorporate curvature information in stochastic
approximation methods is challenging. The direct application of classical
quasi-Newton updating techniques for deterministic optimization leads to noisy
curvature estimates that have harmful effects on the robustness of the
iteration. In this paper, we propose a stochastic quasi-Newton method that is
efficient, robust and scalable. It employs the classical BFGS update formula in
its limited memory form, and is based on the observation that it is beneficial
to collect curvature information pointwise, and at regular intervals, through
(sub-sampled) Hessian-vector products. This technique differs from the
classical approach that would compute differences of gradients, and where
controlling the quality of the curvature estimates can be difficult. We present
numerical results on problems arising in machine learning that suggest that the
proposed method shows much promise.
|
1401.7074 | Phase Precoded Compute-and-Forward with Partial Feedback | cs.IT math.IT | In this work, we propose phase precoding for the compute-and-forward (CoF)
protocol. We derive the phase precoded computation rate and show that it is
greater than the original computation rate of CoF protocol without precoder. To
maximize the phase precoded computation rate, we need to 'jointly' find the
optimum phase precoding matrix and the corresponding network equation
coefficients. This is a mixed integer programming problem where the optimum
precoders should be obtained at the transmitters and the network equation
coefficients have to be computed at the relays. To solve this problem, we
introduce phase precoded CoF with partial feedback. It is a quantized precoding
system where the relay jointly computes both a quasi-optimal precoder from a
finite codebook and the corresponding network equations. The index of the
obtained phase precoder within the codebook will then be fed back to the
transmitters. A "deep hole phase precoder" is presented as an example of such a
scheme. We further simulate our scheme with a lattice code carved out of the
Gosset lattice and show that significant coding gains can be obtained in terms
of equation error performance.
|
1401.7077 | Quantifying literature quality using complexity criteria | cs.CL | We measured entropy and symbolic diversity for English and Spanish texts
including literature Nobel laureates and other famous authors. Entropy, symbol
diversity and symbol frequency profiles were compared for these four groups. We
also built a scale sensitive to the quality of writing and evaluated its
relationship with the Flesch's readability index for English and the
Szigriszt's perspicuity index for Spanish. Results suggest a correlation
between entropy and word diversity with quality of writing. Text genre also
influences the resulting entropy and diversity of the text. Results suggest the
plausibility of automated quality assessment of texts.
|
1401.7085 | Reverse Edge Cut-Set Bounds for Secure Network Coding | cs.IT math.IT | We consider the problem of secure communication over a network in the
presence of wiretappers. We give a new cut-set bound on secrecy capacity which
takes into account the contribution of both forward and backward edges crossing
the cut, and the connectivity between their endpoints in the rest of the
network. We show the bound is tight on a class of networks, which demonstrates
that it is not possible to find a tighter bound by considering only cut-set
edges and their connectivity.
|
1401.7088 | Cellular Downlink Performance with Base Station Sleeping, User
Association, and Scheduling | cs.NI cs.IT math.IT stat.AP | Base station (BS) sleeping has emerged as a viable solution to enhance the
overall network energy efficiency by inactivating the underutilized BSs.
However, it affects the performance of users in sleeping cells depending on
their BS association criteria, their channel conditions towards the active BSs,
and scheduling criteria and traffic loads at the active BSs. This paper
characterizes the performance of cellular systems with BS sleeping by
developing a systematic framework to derive the spectral efficiency and outage
probability of downlink transmission to the sleeping cell users taking into
account the aforementioned factors. In this context, we develop a user
association scheme in which a typical user in a sleeping cell selects a BS with
\textbf{M}aximum best-case \textbf{M}ean channel \textbf{A}ccess
\textbf{P}robability (MMAP) which is calculated by all active BSs based on
their existing traffic loads. We consider both greedy and round-robin schemes
at active BSs for scheduling users in a channel. Once the association is
performed, the exact access probability for a typical sleeping cell user and
the statistics of its received signal and interference powers are derived to
evaluate the spectral and energy efficiencies of transmission. For the sleeping
cell users, we also consider the conventional \textbf{M}aximum
\textbf{R}eceived \textbf{S}ignal \textbf{P}ower (MRSP)-based user association
scheme along with greedy and round-robin schemes at the BSs. The impact of
cell-zooming is incorporated in the derivations to analyze its feasibility in
reducing the coverage holes created by BS sleeping. Numerical results show the
trade-offs between spectral efficiency and energy efficiency in various network
scenarios. The accuracy of the analysis is verified through Monte-Carlo
simulations.
|
1401.7114 | Fundamental Limits in Correlated Fading MIMO Broadcast Channels:
Benefits of Transmit Correlation Diversity | cs.IT math.IT | We investigate asymptotic capacity limits of the Gaussian MIMO broadcast
channel (BC) with spatially correlated fading to understand when and how much
transmit correlation helps the capacity. By imposing a structure on channel
covariances (equivalently, transmit correlations at the transmitter side) of
users, also referred to as \emph{transmit correlation diversity}, the impact of
transmit correlation on the power gain of MIMO BCs is characterized in several
regimes of system parameters, with a particular interest in the large-scale
array (or massive MIMO) regime. Taking the cost for downlink training into
account, we provide asymptotic capacity bounds of multiuser MIMO downlink
systems to see how transmit correlation diversity affects the system
multiplexing gain. We make use of the notion of joint spatial division and
multiplexing (JSDM) to derive the capacity bounds. It is advocated in this
paper that transmit correlation diversity may be of use to significantly
increase multiplexing gain as well as power gain in multiuser MIMO systems. In
particular, this new type of diversity in wireless communications is shown to
improve the system multiplexing gain by up to a factor of the number of degrees
of such diversity. Finally, performance limits of conventional large-scale MIMO
systems not exploiting transmit correlation are also characterized.
|
1401.7116 | Bayesian Properties of Normalized Maximum Likelihood and its Fast
Computation | cs.IT cs.LG math.IT stat.ML | The normalized maximum likelihood (NML) provides the minimax regret
solution in universal data compression, gambling, and prediction, and it plays
an essential role in the minimum description length (MDL) method of statistical
modeling and estimation. Here we show that the normalized maximum likelihood
has a Bayes-like representation as a mixture of the component models, even in
finite samples, though the weights of linear combination may be both positive
and negative. This representation addresses in part the relationship between
MDL and Bayes modeling. This representation has the advantage of speeding the
calculation of marginals and conditionals required for coding and prediction
applications.
|
1401.7134 | Block-Fading Channels with Delayed CSIT at Finite Blocklength | cs.IT math.IT | In many wireless systems, the channel state information at the transmitter
(CSIT) cannot be learned until after a transmission has taken place and is
thereby outdated. In this paper, we study the benefits of delayed CSIT on a
block-fading channel at finite blocklength. First, the achievable rates of a
family of codes that allows the number of codewords to expand during
transmission, based on delayed CSIT, are characterized. A fixed-length and a
variable-length characterization of the rates are provided using the dependency
testing bound and the variable-length setting introduced by Polyanskiy et al.
Next, a communication protocol based on codes with expandable message space is
put forth, and numerically, it is shown that higher rates are achievable
compared to coding strategies that do not benefit from delayed CSIT.
|
1401.7161 | Circular Sphere Decoding: A Low Complexity Detection for MIMO Systems
with General Two-dimensional Signal Constellations | cs.IT math.IT | We propose a low complexity complex valued Sphere Decoding (CV-SD) algorithm,
referred to as Circular Sphere Decoding (CSD) which is applicable to
multiple-input multiple-output (MIMO) systems with arbitrary two dimensional
(2D) constellations. CSD introduces a new constraint test, designed so that
element-wise dependency is removed from the metric computation; as a result,
the test becomes simple to perform without restriction on the constellation
structure. By
additionally employing this simple test as a prescreening test, CSD reduces the
complexity of the CV-SD search. We show that the complexity reduction is
significant while its maximum-likelihood (ML) performance is not compromised.
We also provide a powerful tool to estimate the pruning capacity of any
particular search tree. Using this tool, we propose the Predict-And-Change
strategy which leads to a further complexity reduction in CSD. Extension of the
proposed methods to soft output SD is also presented.
|
1401.7169 | On the Evaluation of the Polyanskiy-Poor-Verdu Converse Bound for Finite
Blocklength Coding in AWGN | cs.IT math.IT | A tight converse bound to channel coding rate in the finite block-length
regime and under AWGN conditions was recently proposed by Polyanskiy, Poor, and
Verdu (PPV). The bound is a generalization of a number of other classical
results, and it was also claimed to be equivalent to Shannon's 1959 cone
packing bound. Unfortunately, its numerical evaluation is troublesome even for
not too large values of the block-length n. In this paper we tackle the
numerical evaluation by compactly expressing the PPV converse bound in terms of
non-central chi-squared distributions, and by evaluating those through an
integral expression and a corresponding series expansion which exploit a method
proposed by Temme. As a result, a robust evaluation method and new insights on
the bound's asymptotics, as well as new approximate expressions, are given.
|
1401.7188 | Network Connectivity: Stochastic vs. Deterministic Wireless Channels | cs.NI cs.IT math.IT | We study the effect of stochastic wireless channel models on the connectivity
of ad hoc networks. Unlike in the deterministic geometric disk model where
nodes connect if they are within a certain distance from each other, stochastic
models attempt to capture small-scale fading effects due to shadowing and
multipath received signals. Through analysis of local and global network
observables, we present conclusive evidence suggesting that network behaviour
is highly dependent upon whether a stochastic or deterministic connection model
is employed. Specifically we show that the network mean degree is lower
(higher) for stochastic wireless channels than for deterministic ones, if the
path loss exponent is greater (smaller) than the spatial dimension. Similarly,
the probability of forming isolated pairs of nodes in an otherwise dense random
network is much less for stochastic wireless channels than for deterministic
ones. The latter realisation explains why the upper bound of $k$-connectivity
is tighter for stochastic wireless channels. We obtain closed form analytic
results and compare to extensive numerical simulations.
|
1401.7216 | Reconfigurable Structures for Direct Equalisation in Mobile Receivers | cs.IT math.IT | Practically any communication channel distorts the transmitted signal. This
is especially true in the case of mobile systems, where multipath propagation
causes the received signal to be seriously degraded. Over the years, many
techniques have been proposed to combat channel effects. Two of the most
popular are linear equalisation (LE) and decision feedback equalisation (DFE).
These methods offer a good compromise between performance and computational
complexity. LE and DFE are implemented using finite impulse response (FIR)
filters whose frequency spectrum approximates the inverse of the channel
spectrum plus noise. In mobile systems, the equaliser is made adaptable in
order to be able to respond to the channel variations. Adaptability is achieved
using adaptive FIR filters whose coefficients are iteratively updated. In
principle, an infinite number of filter coefficients would be needed to achieve
perfect channel inversion. In practice, the number of taps must be finite.
Simulations show that, in realistic scenarios, making the equaliser longer than
a certain (undetermined) number of taps will not yield any benefit. Moreover,
computation and power will be wasted. In battery powered devices, like mobile
terminals, it would be desirable to have the equaliser properly dimensioned.
The equaliser's optimum length strongly depends on the particular scenario, and
as channel conditions vary, this optimum is likely to vary. This thesis
presents novel techniques to perform equaliser length adjustment. Methods for
the LE and the DFE have been developed. Simulations in many different scenarios
show that the proposed schemes optimise the number of taps to be used.
Moreover, these techniques are able to detect changes in the channel and
re-adjust the equaliser length appropriately.
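The trade-off described above, where lengthening the equaliser beyond a
certain point yields no benefit, can be illustrated with a toy LMS-trained
linear equaliser. This is a minimal sketch under assumed parameters (a fixed
two-tap channel, no noise, BPSK training symbols), not the thesis's actual
adjustment scheme:

```python
import random

def lms_equalizer_mse(num_taps, n=4000, mu=0.02, seed=1):
    """Train an adaptive FIR equaliser with LMS over a two-tap multipath
    channel and return the final mean squared error. (Toy, noise-free BPSK
    model; the channel taps and step size are illustrative assumptions.)"""
    rng = random.Random(seed)
    h0, h1 = 1.0, 0.6                  # assumed minimum-phase channel
    w = [0.0] * num_taps               # equaliser coefficients
    x_hist = [0.0] * num_taps          # received-sample delay line
    s_prev = 0.0
    sq_errs = []
    for _ in range(n):
        s = rng.choice((-1.0, 1.0))    # transmitted BPSK symbol
        x = h0 * s + h1 * s_prev       # channel output with ISI, no noise
        s_prev = s
        x_hist = [x] + x_hist[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_hist))
        e = s - y                      # training error on known symbols
        w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]
        sq_errs.append(e * e)
    return sum(sq_errs[-200:]) / 200   # average error after convergence

# Eight taps essentially invert this channel; two taps leave residual ISI.
assert lms_equalizer_mse(8) < 0.05
assert lms_equalizer_mse(2) > lms_equalizer_mse(8)
```

Going far beyond eight taps buys almost nothing further for this channel,
which is the behaviour that motivates run-time equaliser-length adjustment.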
|
1401.7229 | MIMO Multiway Relaying with Pairwise Data Exchange: A Degrees of Freedom
Perspective | cs.IT math.IT | In this paper, we study achievable degrees of freedom (DoF) of a
multiple-input multiple-output (MIMO) multiway relay channel (mRC) where $K$
users, each equipped with $M$ antennas, exchange messages in a pairwise manner
via a common $N$-antenna relay node. A novel and systematic way of joint
beamforming design at the users and at the relay is proposed to align signals
for efficient implementation of physical-layer network coding (PNC). It is
shown that, when the user number $K=3$, the proposed beamforming design can
achieve the DoF capacity of the considered mRC for any $(M,N)$ setup. For
the scenarios with $K>3$, we show that the proposed signaling scheme can be
improved by disabling a portion of relay antennas so as to align signals more
efficiently. Our analysis reveals that the obtained achievable DoF is always
piecewise linear, and is bounded either by the number of user antennas $M$ or
by the number of relay antennas $N$. Further, we show that the DoF capacity can
be achieved for $\frac{M}{N} \in \left(0,\frac{K-1}{K(K-2)} \right]$ and
$\frac{M}{N} \in \left[\frac{1}{K(K-1)}+\frac{1}{2},\infty \right)$, which
provides a broader range of the DoF capacity than the existing results.
Asymptotic DoF as $K\rightarrow \infty$ is also derived based on the proposed
signaling scheme.
|
1401.7233 | Measuring large-scale social networks with high resolution | cs.SI physics.soc-ph | This paper describes the deployment of a large-scale study designed to
measure human interactions across a variety of communication channels, with
high temporal resolution and spanning multiple years - the Copenhagen Networks
Study. Specifically, we collect data on face-to-face interactions,
telecommunication, social networks, location, and background information
(personality, demographic, health, politics) for a densely connected population
of 1,000 individuals, using state-of-the-art smartphones as social sensors. Here we
provide an overview of the related work and describe the motivation and
research agenda driving the study. Additionally, the paper details the
data-types measured, and the technical infrastructure in terms of both backend
and phone software, as well as an outline of the deployment procedures. We
document the participant privacy procedures and their underlying principles.
The paper is concluded with early results from data analysis, illustrating the
importance of a multi-channel, high-resolution approach to data collection.
|
1401.7239 | Contexts of diffusion: Adoption of research synthesis in Social Work and
Women's Studies | cs.SI cs.DL physics.soc-ph | Texts reveal the subjects of interest in research fields, and the values,
beliefs, and practices of researchers. In this study, texts are examined
through bibliometric mapping and topic modeling to provide a bird's-eye view of
the social dynamics associated with the diffusion of research synthesis methods
in the contexts of Social Work and Women's Studies. Research synthesis texts
are especially revealing because the methods, which include meta-analysis and
systematic review, are reliant on the availability of past research and data,
sometimes idealized as objective, egalitarian approaches to research
evaluation, fundamentally tied to past research practices, and performed with
the goal of informing future research and practice. This study highlights the
co-influence of past and subsequent research within research fields;
illustrates dynamics of the diffusion process; and provides insight into the
cultural contexts of research in Social Work and Women's Studies. This study
suggests the potential to further develop bibliometric mapping and topic
modeling techniques to inform research problem selection and resource
allocation.
|
1401.7249 | Fuzzy Controller Design for Assisted Omni-Directional Treadmill Therapy | cs.AI | One of the defining characteristics of human beings is the ability to walk
upright. Loss or restriction of this ability, whether due to accident, spine
problems, stroke, or other neurological injury, can cause tremendous stress for
patients and thus detracts from their quality of life.
Modern research shows that physical exercise is very important for maintaining
physical fitness and adopting a healthier lifestyle. Nowadays treadmills are
widely used for physical exercise and training, enabling the user to set up an
exercise regime that can be adhered to irrespective of the weather conditions.
Among the users of treadmills today are medical facilities such as hospitals,
rehabilitation centres, and medical and physiotherapy clinics. The process of
assisted training or rehabilitation exercise on a treadmill is referred to as
treadmill therapy. A modern treadmill is an automated machine with built-in
functions and predefined features. Most treadmills used today are
one-dimensional: the user can only walk in one direction. This paper
presents the idea of using omnidirectional treadmills which will be more
appealing to the patients as they can walk in any direction, hence encouraging
them to do exercises more frequently. This paper proposes a fuzzy control
design and possible implementation strategy to assist patients in treadmill
therapy. By intelligently controlling the safety belt attached to the treadmill
user, one can help them steer left, right, or in any direction. The use of
intelligent treadmill therapy can help patients to improve their walking
ability without being continuously supervised by the specialists. The patients
can walk freely within a limited space and the support system will provide
continuous evaluation of their position and can adjust the control parameters
of the treadmill accordingly to provide the best possible assistance.
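A minimal sketch of the kind of fuzzy inference such a controller could use.
The membership functions, the three rules, and all names below are invented
for illustration; the paper's actual rule base is not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer_correction(deviation):
    """Map the patient's lateral deviation (metres, negative = left) to a
    belt steering correction via three hypothetical rules:
    left -> pull right, centre -> no action, right -> pull left."""
    mu_left   = tri(deviation, -1.0, -0.5, 0.0)
    mu_centre = tri(deviation, -0.5,  0.0, 0.5)
    mu_right  = tri(deviation,  0.0,  0.5, 1.0)
    # Singleton rule consequents combined by weighted-average (centroid)
    # defuzzification.
    num = mu_left * (+1.0) + mu_centre * 0.0 + mu_right * (-1.0)
    den = mu_left + mu_centre + mu_right
    return num / den if den else 0.0

assert steer_correction(0.0) == 0.0    # on the centreline: no action
assert steer_correction(-0.5) == 1.0   # far left: full pull to the right
assert steer_correction(0.25) < 0      # drifting right: pull to the left
```

The centroid defuzzification gives a smooth correction that grows with the
deviation, which is what lets the belt assist without abrupt tugs.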
|
1401.7261 | On the Cooperative Communication over Cognitive Interference Channel | cs.IT math.IT | In this paper, we investigate the problem of communication over cognitive
interference channel (CIC) with partially cooperating (PC) destinations
(CIC-PC). This channel consists of two source nodes communicating two
independent messages to their corresponding destination nodes. One of the
sources, referred to as the cognitive source, has a noncausal knowledge of the
message of the other source, referred to as the primary source. Each
destination is assumed to decode only its intended message. In addition, the
destination corresponding to the cognitive source assists the other destination
by transmitting cooperative information through a relay link. We derive a new
upper bound on the capacity region of the discrete memoryless CIC-PC. Moreover, we
characterize the capacity region for two new classes of this channel: (1)
degraded CIC-PC, and (2) a class of semideterministic CIC-PC.
|
1401.7262 | Impact of Spectrum Sharing on the Efficiency of Faster-Than-Nyquist
Signaling | cs.IT math.IT | Capacity computations are presented for Faster-Than-Nyquist (FTN) signaling
in the presence of interference from neighboring frequency bands. It is shown
that Shannon's sinc pulses maximize the spectral efficiency for a multi-access
channel, where spectral efficiency is defined as the sum rate in bits per
second per Hertz. Comparisons using root raised cosine pulses show that the
spectral efficiency decreases monotonically with the roll-off factor. At high
signal-to-noise ratio, these pulses have an additive gap to capacity that
increases monotonically with the roll-off factor.
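The monotone decrease with roll-off can be sanity-checked in a simplified
single-user model: a root raised cosine pulse with roll-off factor beta
occupies (1 + beta) times the Nyquist bandwidth, so the rate per Hertz scales
as 1/(1 + beta). This is an illustrative simplification, not the paper's
multi-access analysis:

```python
import math

def rrc_spectral_efficiency(snr_db, beta):
    """Single-user spectral efficiency (bit/s/Hz) with a root raised cosine
    pulse of roll-off beta: occupied bandwidth grows by (1 + beta), so the
    rate per Hertz shrinks accordingly. (Toy AWGN model, our assumption.)"""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr) / (1 + beta)

# Spectral efficiency decreases monotonically with the roll-off factor,
# matching the trend reported in the abstract.
se = [rrc_spectral_efficiency(10.0, b) for b in (0.0, 0.25, 0.5, 1.0)]
assert all(a > b for a, b in zip(se, se[1:]))
```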
|
1401.7267 | Community Detection in Networks with Node Attributes | cs.SI physics.soc-ph | Community detection algorithms are fundamental tools that allow us to uncover
organizational principles in networks. When detecting communities, there are
two possible sources of information one can use: the network structure, and the
features and attributes of nodes. Even though communities form around nodes
that have common edges and common attributes, typically, algorithms have only
focused on one of these two data modalities: community detection algorithms
traditionally focus only on the network structure, while clustering algorithms
mostly consider only node attributes. In this paper, we develop Communities
from Edge Structure and Node Attributes (CESNA), an accurate and scalable
algorithm for detecting overlapping communities in networks with node
attributes. CESNA statistically models the interaction between the network
structure and the node attributes, which leads to more accurate community
detection as well as improved robustness in the presence of noise in the
network structure. CESNA has a linear runtime in the network size and is able
to process networks an order of magnitude larger than comparable approaches.
Last, CESNA also helps with the interpretation of detected communities by
finding relevant node attributes for each community.
|
1401.7273 | On Stochastic Estimation of Partition Function | cs.IT math.IT | In this paper, we show analytically that the duality of normal factor graphs
(NFG) can facilitate stochastic estimation of partition functions. In
particular, our analysis suggests that for the $q$-ary two-dimensional
nearest-neighbor Potts model, sampling from the primal NFG of the model and
sampling from its dual exhibit opposite behaviours with respect to the
temperature of the model. For high-temperature models, sampling from the primal
NFG gives rise to better estimators whereas for low-temperature models,
sampling from the dual gives rise to better estimators. This analysis is
validated by experiments.
|
1401.7288 | Spatially-Coupled Precoded Rateless Codes with Bounded Degree Achieve
the Capacity of BEC under BP decoding | cs.IT math.IT | Raptor codes are known as precoded rateless codes that achieve the capacity
of BEC. However, the maximum degree of Raptor codes needs to be unbounded to
achieve the capacity. In this paper, we prove that spatially-coupled precoded
rateless codes achieve the capacity with bounded degree under BP decoding.
|
1401.7289 | Spatially-Coupled MacKay-Neal Codes with No Bit Nodes of Degree Two
Achieve the Capacity of BEC | cs.IT math.IT | Obata et al. proved that spatially-coupled (SC) MacKay-Neal (MN) codes
achieve the capacity of BEC. However, these SC-MN codes have many variable
nodes of degree two and hence higher error floors. In this paper, we prove that
SC-MN codes with no variable nodes of degree two achieve the capacity of BEC.
|
1401.7290 | Non-Binary LDPC Codes with Large Alphabet Size | cs.IT math.IT | We study LDPC codes for the channel with input ${x}\in \mathbb{F}_q^m$ and
output ${y}={x}+{z}\in \mathbb{F}_q^m$. The aim of this paper is to evaluate
decoding performance of $q^m$-ary non-binary LDPC codes for large $m$. We give
density evolution and decoding performance evaluation for regular non-binary
LDPC codes and spatially-coupled (SC) codes. We show that the regular codes do
not achieve the capacity of the channel while SC codes do.
|
1401.7293 | Polar coding for interference networks | cs.IT math.IT | A polar coding scheme for interference networks is introduced. The scheme
combines Arikan's monotone chain rules for multiple-access channels and a
method by Hassani and Urbanke to 'align' two incompatible polarization
processes. It achieves the Han--Kobayashi inner bound for two-user interference
channels and generalizes to interference networks.
|
1401.7344 | Release of the Kraken: A Novel Money Multiplier Equation's Debut in 21st
Century Banking | q-fin.GN cs.CE | Historically, the banking multiplier has been in a range of 4 to 100, with
25% to 1% reserve ratios at most layers of the banking system encompassing the
majority of its range in recent centuries. Here it is shown that multipliers
over 1,000 can arise from a new mechanism in banking. This new multiplier uses
a default insurance note to insure an outstanding loan in order to return the
value of the insured amount into capital. The economic impact of this invention
is calculably greater than the original invention of reserve banking. The
consequence of this lending invention is to render the existing money
multiplier equations of reserve banking obsolete where it occurs. The equations
describing this new multiplier do not converge. Each set of parameters for
reserve percentage, nesting depth, etc. creates a unique logarithmic curve
rather than approaching a limit. Thus it is necessary to show the behavior of
this new equation by numerical methods. Understanding this new multiplier and
associated issues is necessary for economic analyses of the Global Financial
Crisis.
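A stylized numerical reading of the two multipliers (a simplifying assumption
on our part; the paper's own non-convergent equations are not reproduced):
classically a bank re-lends (1 - r) of each deposit, while under the insurance
mechanism the insured amount returns to capital and only the premium is lost
at each nesting layer:

```python
def classical_multiplier(reserve_ratio, depth):
    """Total deposits created from one unit of base money when each bank
    re-lends (1 - r) of what it receives; converges to 1/r."""
    lendable, total = 1.0, 0.0
    for _ in range(depth):
        total += lendable
        lendable *= (1.0 - reserve_ratio)
    return total

def insured_multiplier(premium, depth):
    """Stylized reading of the insurance mechanism (our assumption, not the
    paper's exact equations): insuring each loan returns the insured amount
    to capital, so only the insurance premium is lost per layer."""
    lendable, total = 1.0, 0.0
    for _ in range(depth):
        total += lendable
        lendable *= (1.0 - premium)
    return total

# The classical multiplier converges to 1/r (here 1/0.1 = 10) ...
assert abs(classical_multiplier(0.10, 500) - 10.0) < 1e-3
# ... while a cheap premium sustains lending across thousands of nesting
# layers, yielding multipliers over 1,000 as the abstract reports.
assert insured_multiplier(0.0005, 2000) > 1000
```

Even a 0.05% premium keeps almost all capital lendable at every layer, which
is why the resulting multipliers dwarf the classical 1/r bound.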
|
1401.7360 | A Shannon Approach to Secure Multi-party Computations | cs.IT cs.CR math.IT | In secure multi-party computations (SMC), parties wish to compute a function
on their private data without revealing more information about their data than
what the function reveals. In this paper, we investigate two Shannon-type
questions on this problem. We first consider the traditional one-shot model for
SMC which does not assume a probabilistic prior on the data. In this model,
private communication and randomness are the key enablers to secure computing,
and we investigate a notion of randomness cost and capacity. We then move to a
probabilistic model for the data, and propose a Shannon model for discrete
memoryless SMC. In this model, correlations among data are the key enablers for
secure computing, and we investigate a notion of dependency which permits the
secure computation of a function. While the models and questions are general,
this paper focuses on summation functions, and relies on polar code
constructions.
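The role of private randomness in secure summation can be seen in the
textbook additive secret-sharing scheme (a standard illustration of the
one-shot model, not the paper's polar-code construction):

```python
import random

def share(value, n_parties, modulus, rng):
    """Split `value` into n additive shares modulo `modulus`; any n-1 of the
    shares are uniformly random and reveal nothing about `value`."""
    shares = [rng.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(values, modulus=2**31 - 1, seed=7):
    """Each party shares its value; party j only ever sees the j-th share of
    every input, so no individual value is revealed, yet the sum emerges."""
    rng = random.Random(seed)
    n = len(values)
    all_shares = [share(v, n, modulus, rng) for v in values]
    partial = [sum(col) % modulus for col in zip(*all_shares)]
    return sum(partial) % modulus

assert secure_sum([3, 5, 9]) == 17
```

Here randomness is the enabling resource, which is exactly the cost the
abstract's one-shot model sets out to quantify.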
|
1401.7369 | Linear Codes are Optimal for Index-Coding Instances with Five or Fewer
Receivers | cs.IT math.IT | We study zero-error unicast index-coding instances, where each receiver must
perfectly decode its requested message set, and the message sets requested by
any two receivers do not overlap. We show that for all these instances with up
to five receivers, linear index codes are optimal. Although this class contains
9847 non-isomorphic instances, by using our recent results and by properly
categorizing the instances based on their graphical representations, we need to
consider only 13 non-trivial instances to solve the entire class. This work
complements the result by Arbabjolfaei et al. (ISIT 2013), who derived the
capacity region of all unicast index-coding problems with up to five receivers
in the diminishing-error setup. They employed random-coding arguments, which
require infinitely-long messages. We consider the zero-error setup; our
approach uses graph theory and combinatorics, and does not require long
messages.
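A minimal example of why linear index codes can be so effective: with three
receivers that each request one message and hold the other two as side
information, a single XOR transmission replaces three uncoded ones (a
standard toy instance, far smaller than the 9847 instances analysed above):

```python
def broadcast_xor(messages):
    """A single linear coded symbol: the XOR of all requested messages."""
    out = 0
    for m in messages:
        out ^= m
    return out

def decode(coded, side_information):
    """A receiver XORs out the messages it already knows."""
    for m in side_information:
        coded ^= m
    return coded

# Three receivers, each requesting one message and knowing the other two.
x1, x2, x3 = 0b1010, 0b0110, 0b1111
c = broadcast_xor([x1, x2, x3])      # one transmission instead of three
assert decode(c, [x2, x3]) == x1
assert decode(c, [x1, x3]) == x2
assert decode(c, [x1, x2]) == x3
```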
|
1401.7374 | A Message-Passing Approach to Combating Hidden Terminals in Wireless
Networks | cs.NI cs.IT math.IT | Collisions with hidden terminals are a major cause of performance degradation
in 802.11 and similar wireless networks. Carrier sense multiple access with
collision avoidance (CSMA/CA) is utilized to avoid collisions at the cost of
spatial reuse. This report studies receiver design to mitigate interference
from hidden terminals. A wireless channel model with correlated fading in time
is assumed. A message-passing approach is proposed, in which a receiver can
successfully receive and decode partially overlapping transmissions from two
sources rather than treating the undesired one as thermal noise. Numerical
results for both coded and uncoded systems show the advantage of the proposed
receiver over conventional receivers.
|
1401.7375 | Detecting Cohesive and 2-mode Communities in Directed and Undirected
Networks | cs.SI physics.soc-ph | Networks are a general language for representing relational information among
objects. An effective way to model, reason about, and summarize networks, is to
discover sets of nodes with common connectivity patterns. Such sets are
commonly referred to as network communities. Research on network community
detection has predominantly focused on identifying communities of densely
connected nodes in undirected networks.
In this paper we develop a novel overlapping community detection method that
scales to networks of millions of nodes and edges and advances research along
two dimensions: the connectivity structure of communities, and the use of edge
directedness for community detection. First, we extend traditional definitions
of network communities by building on the observation that nodes can be densely
interlinked in two different ways: In cohesive communities nodes link to each
other, while in 2-mode communities nodes link in a bipartite fashion, where
links predominate between the two partitions rather than inside them. Our
method successfully detects both 2-mode and cohesive communities, which
may also overlap or be hierarchically nested. Second, while most existing
community detection methods treat directed edges as though they were
undirected, our method accounts for edge directions and is able to identify
novel and meaningful community structures in both directed and undirected
networks, using data from social, biological, and ecological domains.
|
1401.7377 | Improved Robust Node Position Estimation in Wireless Sensor Networks | cs.NI cs.IT math.IT | A new method for estimating the relative positions of location-unaware nodes
from the location-aware nodes and the received signal strength (RSS) between
the nodes, in a wireless sensor network (WSN), is proposed. In the method, a
regularization term is incorporated in the optimization problem leading to
significant improvement in the estimation accuracy even in the presence of
position errors of the location-aware nodes and distance errors between the
nodes. The regularization term is appropriately weighted on the basis of the
degree of connectivity between the nodes in the network. The method is
formulated as a convex optimization problem using the semidefinite relaxation
approach. Experimental comparisons with state-of-the-art competing methods show
that the proposed method yields node positions that are much more accurate even
in the presence of measurement errors.
|
1401.7388 | Bounding Embeddings of VC Classes into Maximum Classes | cs.LG math.CO stat.ML | One of the earliest conjectures in computational learning theory, the Sample
Compression conjecture, asserts that concept classes (equivalently, set systems)
admit compression schemes of size linear in their VC dimension. To date, this
statement is known to be true for maximum classes---those that possess maximum
cardinality for their VC dimension. The most promising approach to positively
resolving the conjecture is by embedding general VC classes into maximum
classes without super-linear increase to their VC dimensions, as such
embeddings would extend the known compression schemes to all VC classes. We
show that maximum classes can be characterised by a local-connectivity property
of the graph obtained by viewing the class as a cubical complex. This geometric
characterisation of maximum VC classes is applied to prove a negative embedding
result which demonstrates VC-d classes that cannot be embedded in any maximum
class of VC dimension lower than 2d. On the other hand, we show that every VC-d
class C embeds in a VC-(d+D) maximum class where D is the deficiency of C,
i.e., the difference between the cardinalities of a maximum VC-d class and of
C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best possible
results on embedding into maximum classes. For some special classes of Boolean
functions, relationships with maximum classes are investigated. Finally we give
a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum
classes for smallest k.
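The defining property of a maximum class, that its cardinality meets the
Sauer-Shelah bound $\Phi_d(n) = \sum_{i \le d} \binom{n}{i}$ for its VC
dimension, can be checked by brute force on a small example. The Hamming ball
of radius d is a standard maximum class; the helper names below are ours:

```python
from itertools import combinations, product
from math import comb

def vc_dimension(concept_class, n):
    """Largest d such that some set of d coordinates is shattered."""
    dim = 0
    for d in range(1, n + 1):
        for coords in combinations(range(n), d):
            patterns = {tuple(c[i] for i in coords) for c in concept_class}
            if len(patterns) == 2 ** d:
                dim = d
                break
        else:
            return dim
    return dim

def phi(d, n):
    """Sauer-Shelah bound: maximum cardinality of a VC-d class on n points."""
    return sum(comb(n, i) for i in range(d + 1))

# The Hamming ball of radius 2 in {0,1}^4 is a maximum class: its size meets
# the Sauer-Shelah bound with equality for its VC dimension.
n, d = 4, 2
ball = [v for v in product((0, 1), repeat=n) if sum(v) <= d]
assert vc_dimension(ball, n) == d
assert len(ball) == phi(d, n)   # 11 = C(4,0) + C(4,1) + C(4,2)
```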
|
1401.7404 | On Index Coding in Noisy Broadcast Channels with Receiver Message Side
Information | cs.IT math.IT | This letter investigates the role of index coding in the capacity of AWGN
broadcast channels with receiver message side information. We first show that
index coding is unnecessary when there are two receivers; multiplexing coding
and superposition coding are sufficient to achieve the capacity region. We next
show that, for more than two receivers, multiplexing coding and superposition
coding alone can be suboptimal. We give an example where these two coding
schemes alone cannot achieve the capacity region, but adding index coding can.
This demonstrates that, in contrast to the two-receiver case, multiplexing
coding cannot fulfill the function of index coding when there are three or
more receivers.
|
1401.7406 | The parametrized probabilistic finite-state transducer probe game player
fingerprint model | cs.GT cs.NE | Fingerprinting operators generate functional signatures of game players and
are useful for their automated analysis independent of representation or
encoding. The theory for a fingerprinting operator which returns the
length-weighted probability of a given move pair occurring from playing the
investigated agent against a general parametrized probabilistic finite-state
transducer (PFT) is developed, applicable to arbitrary iterated games. Results
for the distinguishing power of the 1-state opponent model, uniform
approximability of fingerprints of arbitrary players, analyticity and Lipschitz
continuity of fingerprints for logically possible players, and equicontinuity
of the fingerprints of bounded-state probabilistic transducers are derived.
Algorithms for the efficient computation of special instances are given; the
shortcomings of a previous model, strictly generalized here from a simple
projection of the new model, are explained in terms of regularity condition
violations, and the extra power and functional niceness of the new fingerprints
demonstrated. The 2-state deterministic finite-state transducers (DFTs) are
fingerprinted and pairwise distances computed; using this the structure of DFTs
in strategy space is elucidated.
|
1401.7413 | Smoothed Low Rank and Sparse Matrix Recovery by Iteratively Reweighted
Least Squares Minimization | cs.LG cs.CV stat.ML | This work presents a general framework for solving the low rank and/or sparse
matrix minimization problems, which may involve multiple non-smooth terms. The
Iteratively Reweighted Least Squares (IRLS) method is a fast solver, which
smooths the objective function and minimizes it by alternately updating the
variables and their weights. However, the traditional IRLS can only solve a
sparse-only or low-rank-only minimization problem with a squared loss or an
affine constraint. This work generalizes IRLS to solve joint/mixed low rank and
sparse minimization problems, which are essential formulations for many tasks.
As a concrete example, we solve the Schatten-$p$ norm and $\ell_{2,q}$-norm
regularized Low-Rank Representation (LRR) problem by IRLS, and theoretically
prove that the derived solution is a stationary point (globally optimal if
$p,q\geq1$). Our convergence proof of IRLS is more general than the previous one,
which depends on the special properties of the Schatten-$p$ norm and
$\ell_{2,q}$-norm. Extensive experiments on both synthetic and real data sets
demonstrate that our IRLS is much more efficient.
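The reweighting idea behind IRLS can be seen in its simplest instance, scalar
$\ell_1$ minimization (the median), where each squared-loss step uses weights
inversely proportional to the current residuals. This is the textbook special
case, not the paper's Schatten-$p$/$\ell_{2,q}$ solver:

```python
def irls_median(data, iters=100, eps=1e-9):
    """Minimise sum_i |a - x_i| by iteratively reweighted least squares:
    each step solves the weighted *squared* problem with weights
    w_i = 1 / max(|a - x_i|, eps), whose closed form is a weighted mean."""
    a = sum(data) / len(data)          # initialise at the mean
    for _ in range(iters):
        w = [1.0 / max(abs(a - x), eps) for x in data]
        a = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return a

data = [1.0, 2.0, 3.0, 4.0, 100.0]     # the outlier drags the mean to 22
assert abs(irls_median(data) - 3.0) < 1e-3   # IRLS recovers the median
```

Each smoothed weighted least-squares step has a closed form, which is the
speed advantage IRLS keeps in the matrix setting as well.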
|
1401.7416 | A Comparative Study on String Matching Algorithm of Biological Sequences | cs.DS cs.CE | String matching algorithms play a vital role in computational biology.
The functional and structural relationship of a biological sequence is
determined by similarities within that sequence, so researchers must be
aware of such similarities. The pursuit of similarity among biological
sequences is an important research area that can bring insight into the
evolutionary and genetic relationships among genes. In this paper, we study
different kinds of string matching algorithms and observe their time and
space complexities. For this study, we assess the performance of the
algorithms on biological sequences.
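As a concrete instance of such a comparison, a naive O(nm) scan versus the
Knuth-Morris-Pratt O(n+m) failure-table search over a DNA string (both are
standard algorithms; the test sequence is invented):

```python
def naive_search(text, pattern):
    """O(n*m) sliding-window comparison."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]

def kmp_search(text, pattern):
    """O(n+m) Knuth-Morris-Pratt search using a failure table."""
    m = len(pattern)
    fail = [0] * m
    k = 0
    for i in range(1, m):              # build the failure table
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    out, k = [], 0
    for i, ch in enumerate(text):      # scan the text once
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == m:
            out.append(i - m + 1)
            k = fail[k - 1]
    return out

dna = "ACGTACGTGACGATACGT"
assert naive_search(dna, "ACG") == kmp_search(dna, "ACG") == [0, 4, 9, 14]
```

Both find the same occurrences; the difference the paper studies is in the
time and space they take to do so on long sequences.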
|
1401.7425 | A novel method of generating tunable underlying network topologies for
social simulation | cs.SI physics.soc-ph | We propose a method of generating different scale-free networks with
several input parameters for adjusting their structure, so that they can
serve as a basis for computer simulation of real-world phenomena. The
topological structure of these networks is studied to determine what kinds of
networks can be produced and how appropriate parameter values can be chosen
to obtain a desired structure.
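The generator itself is not specified in the abstract; the sketch below shows
the general shape of such a tunable method, a preferential-attachment growth
rule whose exponent `alpha` adjusts how strongly hubs form (all parameter
names and values are illustrative assumptions, not the paper's method):

```python
import random

def preferential_attachment_graph(n, m, alpha=1.0, seed=3):
    """Grow a graph in which each new node attaches to m existing nodes
    chosen with probability proportional to degree**alpha. alpha is the
    tunable knob: alpha=1 gives classic scale-free (BA-like) structure,
    alpha=0 a more homogeneous random one."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        weights = [degree[v] ** alpha for v in range(new)]
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] = degree.get(t, 0) + 1
        degree[new] = len(targets)
    return edges, degree

edges, degree = preferential_attachment_graph(200, 2)
assert len(edges) == 1 + 2 * 198        # each new node adds m = 2 edges
assert max(degree.values()) > 10        # hubs emerge, unlike a random graph
```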
|
1401.7426 | Channel Estimation and Hybrid Precoding for Millimeter Wave Cellular
Systems | cs.IT math.IT | Millimeter wave (mmWave) cellular systems will enable gigabit-per-second data
rates thanks to the large bandwidth available at mmWave frequencies. To realize
sufficient link margin, mmWave systems will employ directional beamforming with
large antenna arrays at both the transmitter and receiver. Due to the high cost
and power consumption of gigasample mixed-signal devices, mmWave precoding will
likely be divided among the analog and digital domains. The large number of
antennas and the presence of analog beamforming require the development of
mmWave-specific channel estimation and precoding algorithms. This paper
develops an adaptive algorithm to estimate the mmWave channel parameters that
exploits the poor scattering nature of the channel. To enable the efficient
operation of this algorithm, a novel hierarchical multi-resolution codebook is
designed to construct training beamforming vectors with different beamwidths.
For single-path channels, an upper bound on the estimation error probability
using the proposed algorithm is derived, and some insights into the efficient
allocation of the training power among the adaptive stages of the algorithm are
obtained. The adaptive channel estimation algorithm is then extended to the
multi-path case relying on the sparse nature of the channel. Using the
estimated channel, this paper proposes a new hybrid analog/digital precoding
algorithm that overcomes the hardware constraints on the analog-only
beamforming, and approaches the performance of digital solutions. Simulation
results show that the proposed low-complexity channel estimation algorithm
achieves precoding gains comparable to those of exhaustive channel training
algorithms. The results also illustrate that the proposed algorithms can
approach the coverage probability achieved by perfect channel knowledge even in
the presence of interference.
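The bisection idea behind the hierarchical multi-resolution codebook can be illustrated with a toy sketch. This is a loose illustration with parameters of our own choosing, not the paper's codebook construction: the sector "measurement" is idealised as the average beamforming gain over a grid of angles in the sector.

```python
import numpy as np

def steering(n, theta):
    """Half-wavelength ULA array response(s) for angle(s) theta (radians)."""
    theta = np.atleast_1d(theta)
    return np.exp(1j * np.pi * np.outer(np.arange(n), np.sin(theta))) / np.sqrt(n)

def sector_power(n, a, lo, hi, grid=64):
    """Idealised sector measurement: mean beamforming gain over a grid in [lo, hi]."""
    A = steering(n, np.linspace(lo, hi, grid))
    return float(np.mean(np.abs(A.conj().T @ a) ** 2))

def bisection_estimate(n, theta_true, stages=6):
    """Adaptively halve the angular sector containing a single path's AoA."""
    a = steering(n, theta_true)[:, 0]      # single-path channel direction
    lo, hi = -np.pi / 2, np.pi / 2
    for _ in range(stages):
        mid = 0.5 * (lo + hi)
        # probe the two half-sectors and keep the stronger one
        if sector_power(n, a, lo, mid) >= sector_power(n, a, mid, hi):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With 32 antennas and 6 stages this narrows a 180-degree range to under 3 degrees using 12 sector probes, instead of an exhaustive sweep over a fine angle grid.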
|
1401.7463 | Propagators and Violation Functions for Geometric and Workload
Constraints Arising in Airspace Sectorisation | cs.AI | Airspace sectorisation provides a partition of a given airspace into sectors,
subject to geometric constraints and workload constraints, so that some cost
metric is minimised. We make a study of the constraints that arise in airspace
sectorisation. For each constraint, we give an analysis of what algorithms and
properties are required under systematic search and stochastic local search.
|
1401.7474 | The phenotypic expansion and its boundaries | stat.OT cs.MA nlin.AO | The development of sport performances in the future is a subject of myth and
disagreement among experts. While arguments favoring and opposing such
forecasting methodology were debated, other publications empirically showed
that the past development of performances followed a nonlinear trend. Other
works, while deeply exploring the conditions leading to world records,
highlighted that performance is tied to the economic and geopolitical context.
Here we investigated the
following human boundaries: development of performances with time in Olympic
and non-Olympic events, development of sport performances with aging among
humans and others species (greyhounds, thoroughbreds, mice). Development of
performances from a broader point of view (demography & lifespan) in a specific
sub-system centered on primary energy was also investigated. We show that the
physiological developments are limited with time. Three major and direct
determinants of sport performance are age, technology and climatic conditions
(temperature). However, all observed developments are related to the
international context including the efficient use of primary energies. This
last parameter is a major indirect propeller of performance development. We
show that when physiological and societal performance indicators such as
lifespan and population density depend on primary energies, the energy source,
competition and mobility are key parameters for achieving long term sustainable
trajectories. Otherwise, the vast majority (98.7%) of the studied trajectories
reaches 0 before 15 generations, due to the consumption of fossil energy and a
low mobility rate. This led us to consider that, in the present turbulent
economic context and given the upcoming energy crisis, societal and physical
performances are not expected to grow continuously.
|
1401.7485 | Superimposed Codes and Threshold Group Testing | cs.IT math.IT | We discuss superimposed codes and non-adaptive group testing designs
arising from the potential of compressed genotyping models in molecular
biology. The given paper was motivated by the 30th anniversary of
D'yachkov-Rykov recurrent upper bound on the rate of superimposed codes
published in 1982. We were also inspired by recent results obtained for
non-adaptive threshold group testing, which develops the theory of superimposed
codes.
|
1401.7486 | Use HMM and KNN for classifying corneal data | cs.CV | High-accuracy classification systems that can recognize complicated
patterns are valuable in medicine and industry. In this article, a process for
obtaining the best classifier for Lasik data is suggested. First, the best
fitting line and curve are sought in order to fit the classifier, and finally,
using a hidden Markov method, a classifier for corneal topographies is
obtained.
|
1401.7492 | Lectures on DNA Codes | cs.IT math.IT q-bio.QM | For $q$-ary $n$-sequences, we develop the concept of similarity functions
that can be used (for $q=4$) to model a thermodynamic similarity on DNA
sequences. A similarity function is identified by the length of a longest
common subsequence between two $q$-ary $n$-sequences. Codes based on similarity
functions are called DNA codes. DNA codes are important components in
biomolecular computing and other biotechnical applications that employ DNA
hybridization assays. The main aim of these lecture notes is to discuss
lower bounds on the rate of optimal DNA codes for a biologically motivated
similarity function called a block similarity and for the conventional deletion
similarity function used in the theory of error-correcting codes. We also
present constructions of suboptimal DNA codes based on the parity-check code
detecting one error in the Hamming metric.
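The deletion similarity used above is simply the length of a longest common subsequence between two sequences. A minimal, illustrative sketch of that building block (function names are ours; the alphabet is the $q=4$ DNA alphabet):

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of sequences x and y,
    via the standard dynamic program over prefixes."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def deletion_similarity(x, y):
    """Deletion similarity of two q-ary n-sequences: their LCS length."""
    return lcs_length(x, y)
```

A DNA code with distance guarantee then keeps only codeword pairs whose similarity stays below a chosen threshold.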
|
1401.7505 | Lectures on Designing Screening Experiments | cs.IT math.IT | Designing Screening Experiments (DSE) is a class of information-theoretic
models for multiple-access channels (MAC). We discuss the combinatorial model
of DSE called a disjunct channel model. This model is the most important for
applications and closely connected with the superimposed code concept. We give
a detailed survey of lower and upper bounds on the rate of superimposed codes.
The best known constructions of superimposed codes are considered in this paper. We
also discuss the development of these codes (non-adaptive pooling designs)
intended for the clone-library screening problem. We obtain lower and upper
bounds on the rate of binary codes for the combinatorial model of DSE called an
adder channel model. We also consider the concept of universal decoding for the
probabilistic DSE model called a symmetric model of DSE.
|
1401.7508 | Two Models of Nonadaptive Group Testing for Designing Screening
Experiments | cs.IT math.IT | We discuss two non-standard models of nonadaptive combinatorial search which
develop the conventional disjunct search model for a small number of defective
elements contained in a finite ground set or a population. The first model is
called a search of defective supersets. The second model is called a search of
defective subsets in the presence of inhibitors. For these models, we study the
constructive search methods based on the known constructions for the disjunct
model.
|
1401.7517 | Information quantity in a pixel of digital image | cs.CV cs.IT math.IT | The paper is devoted to the problem of integer-valued estimation of the
information quantity in a pixel of a digital image. A definition of an integer
estimate of information quantity, based on constructing a certain binary
hierarchy of pixel clusters, is proposed. Methods for constructing hierarchies
of clusters and for generating hierarchical sequences of image approximations
that differ minimally from the image in standard deviation are developed.
Experimental results on integer-valued estimation of information quantity are
compared with the results obtained using the classical formulas.
|
1401.7533 | Relaxed Recovery Conditions for OMP/OLS by Exploiting both Coherence and
Decay | cs.IT math.IT | We propose extended coherence-based conditions for exact sparse support
recovery using orthogonal matching pursuit (OMP) and orthogonal least squares
(OLS). Unlike standard uniform guarantees, we embed some information about the
decay of the sparse vector coefficients in our conditions. As a result, the
standard condition $\mu<1/(2k-1)$ (where $\mu$ denotes the mutual coherence and
$k$ the sparsity level) can be weakened as soon as the non-zero coefficients
obey some decay, both in the noiseless and the bounded-noise scenarios.
Furthermore, the resulting condition is approaching $\mu<1/k$ for strongly
decaying sparse signals. Finally, in the noiseless setting, we prove that the
proposed conditions, in particular the bound $\mu<1/k$, are the tightest
achievable guarantees based on mutual coherence.
|
1401.7535 | Online Social Media in the Syria Conflict: Encompassing the Extremes and
the In-Betweens | cs.SI cs.CY physics.soc-ph | The Syria conflict has been described as the most socially mediated in
history, with online social media playing a particularly important role. At the
same time, the ever-changing landscape of the conflict leads to difficulties in
applying analytical approaches taken by other studies of online political
activism. Therefore, in this paper, we use an approach that does not require
strong prior assumptions or the advance proposal of a hypothesis to analyze
Twitter and YouTube activity of a range of protagonists to the conflict, in an
attempt to reveal additional insights into the relationships between them. By
means of a network representation that combines multiple data views, we uncover
communities of accounts falling into four categories that broadly reflect the
situation on the ground in Syria. A detailed analysis of selected communities
within the anti-regime categories is provided, focusing on their central
actors, preferred online platforms, and activity surrounding "real world"
events. Our findings indicate that social media activity in Syria is
considerably more convoluted than reported in many other studies of online
political activism, suggesting that alternative analytical approaches can play
an important role in this type of scenario.
|
1401.7538 | Bayesian Pursuit Algorithms | cs.IT math.IT | This paper addresses the sparse representation (SR) problem within a general
Bayesian framework. We show that the Lagrangian formulation of the standard SR
problem, i.e., $\mathbf{x}^\star=\arg\min_\mathbf{x} \lbrace \|
\mathbf{y}-\mathbf{D}\mathbf{x} \|_2^2+\lambda\| \mathbf{x}\|_0 \rbrace$, can
be regarded as a limit case of a general maximum a posteriori (MAP) problem
involving Bernoulli-Gaussian variables. We then propose different tractable
implementations of this MAP problem that we refer to as "Bayesian pursuit
algorithms". The Bayesian algorithms are shown to have strong connections with
several well-known pursuit algorithms of the literature (e.g., MP, OMP, StOMP,
CoSaMP, SP) and generalize them in several respects. In particular, i) they
allow for atom deselection; ii) they can include any prior information about
the probability of occurrence of each atom within the selection process; iii)
they can encompass the estimation of unknown model parameters into their
recursions.
|
1401.7574 | Causal Network Inference by Optimal Causation Entropy | cs.IT math.IT | The broad abundance of time series data, which is in sharp contrast to
limited knowledge of the underlying network dynamic processes that produce such
observations, calls for a rigorous and efficient method of causal network
inference. Here we develop mathematical theory of causation entropy, an
information-theoretic statistic designed for model-free causality inference.
For stationary Markov processes, we prove that for a given node in the network,
its causal parents form the minimal set of nodes that maximizes causation
entropy, a result we refer to as the optimal causation entropy principle.
Furthermore, this principle guides us to develop computationally and
data-efficient algorithms for causal network inference, based on a two-step
discovery and removal procedure applied to time series data from a
network-coupled dynamical system. Validation in terms of analytical and
numerical results for Gaussian processes on large random networks highlights
that inference by our algorithm
outperforms previous leading methods including conditioned Granger causality
and transfer entropy. Interestingly, our numerical results suggest that the
number of samples required for accurate inference depends strongly on network
characteristics such as the density of links and information diffusion rate and
not necessarily on the number of nodes.
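For jointly Gaussian processes, causation-entropy-like quantities reduce to conditional mutual information, which is computable from covariance determinants. A rough sketch of that building block (function and variable names are ours, not the paper's; sample covariances are plugged in directly):

```python
import numpy as np

def gaussian_cmi(data, x, y, z=()):
    """Conditional mutual information I(X;Y|Z) under a joint Gaussian model.
    data: (samples, variables) array; x, y: column indices; z: tuple of indices.
    Uses I(X;Y|Z) = 0.5*(logdet S_xz + logdet S_yz - logdet S_z - logdet S_xyz)."""
    def logdet(idx):
        if not idx:
            return 0.0
        cov = np.atleast_2d(np.cov(data[:, idx], rowvar=False))
        return np.linalg.slogdet(cov)[1]
    z = list(z)
    return 0.5 * (logdet([x] + z) + logdet([y] + z)
                  - logdet(z) - logdet([x, y] + z))
```

In a discovery-and-removal scheme of the kind described above, one would greedily add the candidate maximizing this quantity given the already-selected conditioning set, then prune variables whose conditional contribution drops to (statistical) zero.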
|
1401.7584 | XLSearch: A Search Engine for Spreadsheets | cs.DB | Spreadsheets are end-user programs and domain models that are heavily
employed in administration, financial forecasting, education, and science
because of their intuitive, flexible, and direct approach to computation. As a
result, institutions are swamped by millions of spreadsheets that are becoming
increasingly difficult to manage, access, and control.
This note presents the XLSearch system, a novel search engine for
spreadsheets. It indexes spreadsheet formulae and efficiently answers formula
queries via unification (a complex query language that allows metavariables in
both the query and the index). But a web-based search engine is only one
application of the underlying technology: Spreadsheet formula export to web
standards like MathML combined with formula indexing can be used to find
similar spreadsheets or common formula errors.
|
1401.7612 | Mathematical Modelling of Turning Delays in Swarm Robotics | cs.RO cs.SY | We investigate the effect of turning delays on the behaviour of groups of
differential wheeled robots and show that the group-level behaviour can be
described by a transport equation with a suitably incorporated delay. The
results of our mathematical analysis are supported by numerical simulations and
experiments with e-puck robots. The experimental quantity we compare to our
revised model is the mean time for robots to find the target area in an unknown
environment. The transport equation with delay better predicts the mean time to
find the target than the standard transport equation without delay.
|
1401.7620 | Bayesian nonparametric comorbidity analysis of psychiatric disorders | stat.ML cs.LG | The analysis of comorbidity is an open and complex research field in the
branch of psychiatry, where clinical experience and several studies suggest
that the relation among the psychiatric disorders may have etiological and
treatment implications. In this paper, we are interested in applying latent
feature modeling to find the latent structure behind the psychiatric disorders
that can help to examine and explain the relationships among them. To this end,
we use the large amount of information collected in the National Epidemiologic
Survey on Alcohol and Related Conditions (NESARC) database and propose to model
these data using a nonparametric latent model based on the Indian Buffet
Process (IBP). Due to the discrete nature of the data, we first need to adapt
the observation model for discrete random variables. We propose a generative
model in which the observations are drawn from a multinomial-logit distribution
given the IBP matrix. The implementation of an efficient Gibbs sampler is
accomplished using the Laplace approximation, which allows integrating out the
weighting factors of the multinomial-logit likelihood model. We also provide a
variational inference algorithm for this model, which provides a complementary
(and less expensive in terms of computational complexity) alternative to the
Gibbs sampler, allowing us to deal with larger amounts of data. Finally, we use
the model to analyze comorbidity among the psychiatric disorders diagnosed by
experts from the NESARC database.
|
1401.7623 | Graph matching: relax or not? | cs.DS cs.CG cs.CV math.OC | We consider the problem of exact and inexact matching of weighted undirected
graphs, in which a bijective correspondence is sought to minimize a quadratic
weight disagreement. This computationally challenging problem is often relaxed
as a convex quadratic program, in which the space of permutations is replaced
by the space of doubly-stochastic matrices. However, the applicability of such
a relaxation is poorly understood. We define a broad class of friendly graphs
characterized by an easily verifiable spectral property. We prove that for
friendly graphs, the convex relaxation is guaranteed to find the exact
isomorphism or certify its nonexistence. This result is further extended to
approximately isomorphic graphs, for which we develop an explicit bound on the
amount of weight disagreement under which the relaxation is guaranteed to find
the globally optimal approximate isomorphism. We also show that in many cases,
the graph matching problem can be further harmlessly relaxed to a convex
quadratic program with only n separable linear equality constraints, which is
substantially more efficient than the standard relaxation involving 2n equality
and n^2 inequality constraints. Finally, we show that our results are still
valid for unfriendly graphs if additional information in the form of seeds or
attributes is allowed, with the latter satisfying an easy to verify spectral
characteristic.
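The "easily verifiable spectral property" amounts to checking that the adjacency spectrum is simple and that no eigenvector is orthogonal to the all-ones vector. A numerical sketch of such a check (the tolerance is our choice, and finite-precision checks of this kind are heuristic):

```python
import numpy as np

def is_friendly(A, tol=1e-8):
    """Spectral friendliness check for a symmetric (weighted) adjacency matrix:
    simple spectrum, and every eigenvector non-orthogonal to the ones vector."""
    w, U = np.linalg.eigh(A)
    simple = bool(np.all(np.diff(np.sort(w)) > tol))          # no repeated eigenvalues
    ones = np.ones(A.shape[0])
    non_orth = bool(np.all(np.abs(U.T @ ones) > tol))          # no eigenvector ⟂ 1
    return simple and non_orth
```

Graphs with nontrivial symmetry fail this test (an automorphism forces an eigenvector orthogonal to the ones vector), while generic weighted graphs pass it, which matches the intuition that friendliness rules out ambiguous matchings.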
|
1401.7625 | RES: Regularized Stochastic BFGS Algorithm | cs.LG math.OC stat.ML | RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno
(BFGS) quasi-Newton method, is proposed to solve convex optimization problems
with stochastic objectives. The use of stochastic gradient descent algorithms
is widespread, but the number of iterations required to approximate optimal
arguments can be prohibitive in high dimensional problems. Application of
second order methods, on the other hand, is impracticable because computation
of objective function Hessian inverses incurs excessive computational cost.
BFGS modifies gradient descent by introducing a Hessian approximation matrix
computed from finite gradient differences. RES utilizes stochastic gradients in
lieu of deterministic gradients both for the determination of descent
directions and for the approximation of the objective function's curvature.
Since stochastic gradients can be computed at manageable computational cost,
RES is
realizable and retains the convergence rate advantages of its deterministic
counterparts. Convergence results show that lower and upper bounds on the
Hessian eigenvalues of the sample functions are sufficient to guarantee
convergence to optimal arguments. Numerical experiments showcase reductions in
convergence time relative to stochastic gradient descent algorithms and
non-regularized stochastic versions of BFGS. An application of RES to the
implementation of support vector machines is developed.
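The flavor of the method can be sketched on a toy stochastic quadratic. This is a loose illustration with our own simplifications and constants, not the paper's exact recursion: the same sample function supplies both gradients of the curvature pair, and a small regularization keeps the inverse-Hessian estimate well behaved.

```python
import numpy as np

def res_sketch(stoch_grad, n_samples, x0, n_iter=400, eps=0.1,
               delta=1e-3, gamma=1e-2, seed=0):
    """Simplified regularized stochastic BFGS: stochastic gradients drive both
    the descent direction and the inverse-Hessian approximation H."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    H = np.eye(d)                                       # inverse-Hessian estimate
    for _ in range(n_iter):
        i = int(rng.integers(n_samples))                # draw one sample function
        g = stoch_grad(x, i)
        x_new = x - eps * (H + gamma * np.eye(d)) @ g   # regularized quasi-Newton step
        s = x_new - x
        # curvature pair from the SAME sample at both points, with regularization
        y = stoch_grad(x_new, i) - g - delta * s
        sy = float(s @ y)
        if sy > 1e-12:                                  # keep H positive definite
            rho = 1.0 / sy
            V = np.eye(d) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)      # standard BFGS inverse update
        x = x_new
    return x
```

On $f(x) = \frac{1}{N}\sum_i \frac{1}{2}\|x - a_i\|^2$, whose minimizer is the sample mean, the iterate settles in a small neighborhood of the optimum at the constant step size used here.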
|
1401.7631 | Slope Instability of the Earthen Levee in Boston, UK: Numerical
Simulation and Sensor Data Analysis | cs.CE | The paper presents a slope stability analysis for a heterogeneous earthen
levee in Boston, UK, which is prone to occasional slope failures under tidal
loads. Dynamic behavior of the levee under tidal fluctuations was simulated
using a finite element model of variably saturated linear elastic perfectly
plastic soil. Hydraulic conductivities of the soil strata have been calibrated
against piezometer readings, in order to obtain the correct range of
hydraulic loads in tidal mode. The finite element simulation was complemented
with a series of limit equilibrium analyses. Stability analyses have shown that
slope
failure occurs with the development of a circular slip surface located in the
soft clay layer. Both models (FEM and LEM) confirm that the least stable
hydraulic condition is the combination of the minimum river levels at low tide
with the maximal saturation of the soil layers. FEM results indicate that in
wintertime the levee is almost at its limit state, at the margin of safety
(strength
reduction factor values are 1.03 and 1.04 for the low-tide and high-tide
phases, respectively); these results agree with real-life observations. The
stability analyses have been implemented as real-time components integrated
into the UrbanFlood early warning system for flood protection.
|
1401.7702 | A Spectral Framework for Anomalous Subgraph Detection | cs.SI stat.ML | A wide variety of application domains are concerned with data consisting of
entities and their relationships or connections, formally represented as
graphs. Within these diverse application areas, a common problem of interest is
the detection of a subset of entities whose connectivity is anomalous with
respect to the rest of the data. While the detection of such anomalous
subgraphs has received a substantial amount of attention, no
application-agnostic framework exists for analysis of signal detectability in
graph-based data. In this paper, we describe a framework that enables such
analysis using the principal eigenspace of a graph's residuals matrix, commonly
called the modularity matrix in community detection. Leveraging this analytical
tool, we show that the framework has a natural power metric in the spectral
norm of the anomalous subgraph's adjacency matrix (signal power) and of the
background graph's residuals matrix (noise power). We propose several
algorithms based on spectral properties of the residuals matrix, with more
computationally expensive techniques providing greater detection power.
Detection and identification performance are presented for a number of signal
and noise models, including clusters and bipartite foregrounds embedded into
simple random backgrounds as well as graphs with community structure and
realistic degree distributions. The trends observed verify intuition gleaned
from other signal processing areas, such as greater detection power when the
signal is embedded within a less active portion of the background. We
demonstrate the utility of the proposed techniques in detecting small, highly
anomalous subgraphs in real graphs derived from Internet traffic and product
co-purchases.
|
1401.7709 | Joint Inference of Multiple Label Types in Large Networks | cs.LG cs.SI stat.ML | We tackle the problem of inferring node labels in a partially labeled graph
where each node in the graph has multiple label types and each label type has a
large number of possible labels. Our primary example, and the focus of this
paper, is the joint inference of label types such as hometown, current city,
and employers, for users connected by a social network. Standard label
propagation fails to consider the properties of the label types and the
interactions between them. Our proposed method, called EdgeExplain, explicitly
models these, while still enabling scalable inference under a distributed
message-passing architecture. On a billion-node subset of the Facebook social
network, EdgeExplain significantly outperforms label propagation for several
label types, with lifts of up to 120% for recall@1 and 60% for recall@3.
|
1401.7713 | A Generalized Probabilistic Framework for Compact Codebook Creation | cs.CV | Compact and discriminative visual codebooks are preferred in many visual
recognition tasks. In the literature, a number of works have taken the approach
of hierarchically merging visual words of an initial large-sized codebook, but
implemented this approach with different merging criteria. In this work, we
propose a single probabilistic framework to unify these merging criteria, by
identifying two key factors: the function used to model class-conditional
distribution and the method used to estimate the distribution parameters. More
importantly, by adopting new distribution functions and/or parameter estimation
methods, our framework can readily produce a spectrum of novel merging
criteria. Three of them are the specific focus of this work. In the first
criterion, we adopt the multinomial distribution with a Bayesian method; in the
second criterion, we integrate the Gaussian distribution with maximum
likelihood parameter estimation. In the third criterion, which shows the best
merging performance, we propose a max-margin-based parameter estimation method
and apply it with the multinomial distribution. An extensive experimental study
is conducted to systematically analyse the performance of the above three
criteria
and compare them with existing ones. As demonstrated, the best criterion
obtained in our framework achieves the overall best merging performance among
the comparable merging criteria developed in the literature.
|
1401.7715 | Video Compressive Sensing for Dynamic MRI | cs.CV math.OC | We present a video compressive sensing framework, termed kt-CSLDS, to
accelerate the image acquisition process of dynamic magnetic resonance imaging
(MRI). We are inspired by a state-of-the-art model for video compressive
sensing that utilizes a linear dynamical system (LDS) to model the motion
manifold. Given compressive measurements, the state sequence of an LDS can
first be estimated using system identification techniques. We then reconstruct
the
observation matrix using a joint structured sparsity assumption. In particular,
we minimize an objective function with a mixture of wavelet sparsity and joint
sparsity within the observation matrix. We derive an efficient convex
optimization algorithm through the alternating direction method of multipliers
(ADMM), and provide a theoretical guarantee for global convergence. We
demonstrate the performance of our approach for video compressive sensing, in
terms of reconstruction accuracy. We also investigate the impact of various
sampling strategies. We apply this framework to accelerate the acquisition
process of dynamic MRI and show it achieves the best reconstruction accuracy
with the least computational time compared with existing algorithms in the
literature.
|
1401.7727 | Security Evaluation of Support Vector Machines in Adversarial
Environments | cs.LG cs.CR | Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
|
1401.7733 | Security Implications of Distributed Database Management System Models | cs.DB | Security features must be addressed when scaling up a distributed database.
In choosing between the object-oriented and the relational data model, several
factors should be considered. The most important of these factors are single
and multilevel access controls (MAC), protection and integrity maintenance.
When determining which distributed database design will be more secure for a
particular function, the choice should not be made exclusively on the basis of
available security features; one should also question the effectiveness and
efficiency with which these features are delivered. In this paper, the
security strengths and weaknesses of both database models, and the particular
problems that arise in the distributed environment, are discussed.
|
1401.7739 | Stability robustness of a feedback interconnection of systems with
negative imaginary frequency response | math.OC cs.SY | A necessary and sufficient condition, expressed simply as the DC loop gain
(i.e., the loop gain at zero frequency) being less than unity, is given in this
paper to guarantee the internal stability of a feedback interconnection of
Linear Time-Invariant (LTI) Multiple-Input Multiple-Output (MIMO) systems with
negative imaginary frequency response. Systems with negative imaginary
frequency response arise for example when considering transfer functions from
force actuators to co-located position sensors, and are commonly important in,
for example, lightly damped structures. The key result presented here has
similar application to the small-gain theorem, which refers to the stability of
feedback interconnections of contractive gain systems, and the passivity
theorem (or more precisely the positive real theorem in the LTI case), which
refers to the stability of feedback interconnections of positive real systems.
A complete state-space characterisation of systems with negative imaginary
frequency response is also given in this paper, and an example that
demonstrates the application of the key result is provided.
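As a numerical illustration of the DC loop-gain condition, one can evaluate the loop gain at zero frequency from state-space data. The realisations below are our own toy examples, and the sketch glosses over the paper's precise negative imaginary assumptions:

```python
import numpy as np

def dc_gain(A, B, C, D):
    """Transfer matrix at zero frequency: G(0) = D - C A^{-1} B."""
    return D - C @ np.linalg.solve(A, B)

def dc_loop_gain_ok(plant, controller):
    """Check the DC loop-gain condition: spectral radius of P(0)C(0) below one."""
    L0 = dc_gain(*plant) @ dc_gain(*controller)
    return bool(np.max(np.abs(np.linalg.eigvals(L0))) < 1.0)
```

For example, for $P(s) = 1/(s^2+s+1)$ (DC gain 1) in a loop with $C(s) = k/(s+2)$ (DC gain $k/2$), the condition holds exactly when $k < 2$.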
|
1401.7743 | Effective Features of Remote Sensing Image Classification Using
Interactive Adaptive Thresholding Method | cs.CV | Remote sensing image classification can be performed in many different ways
to extract meaningful features. One common approach is to perform edge
detection. A second approach is to try and detect whole shapes, given the fact
that these shapes usually tend to have distinctive properties such as object
foreground or background. To get optimal results, these two approaches can be
combined. This paper adopts a combinatorial optimization method to adaptively
select threshold-based features to improve remote sensing image
classification. Feature selection is an important combinatorial optimization
problem in remote sensing image classification. The feature selection method
has to achieve three goals: first, to address performance issues by
facilitating data collection and reducing storage space and classification
time; second, to support semantic analysis, helping to understand the problem;
and third, to improve prediction accuracy by avoiding the curse of
dimensionality. The goal of thresholding an image is to classify pixels as
either dark or light and to evaluate the classification results. Interactive
adaptive thresholding is a form of thresholding that takes into account
spatial variations in the illumination of a remote sensing image. We present a
technique for adaptive thresholding of remote sensing images using an
interactive satellite image as input. Our solution is robust to illumination
changes in the remote sensing image. Additionally, our method is simple and
easy to implement, yet effective at classifying image pixels. This technique
is suitable for preprocessing in remote sensing image classification, making
it a valuable tool for interactive remote sensing applications such as
augmented reality views of the classification procedure.
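A common realisation of adaptive thresholding compares each pixel to its local neighbourhood mean, which copes with spatial illumination variation. A minimal sketch using an integral image for fast box sums (the window size and offset are illustrative parameters of ours, not values from the paper):

```python
import numpy as np

def adaptive_threshold(img, window=15, offset=0.0):
    """Label each pixel light (1) or dark (0) relative to its local mean
    over a window x window neighbourhood, computed via an integral image."""
    img = img.astype(float)
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    # integral image: ii[i, j] = sum of padded[:i, :j]
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    box = (ii[window:window + h, window:window + w]
           - ii[:h, window:window + w]
           - ii[window:window + h, :w]
           + ii[:h, :w])
    local_mean = box / (window * window)
    return (img > local_mean + offset).astype(np.uint8)
```

Unlike a single global threshold, a pixel here is classified against its own neighbourhood, so a gradual illumination gradient does not flip entire regions.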
|
1401.7745 | Feedback Control of Negative-Imaginary Systems: Large Flexible
structures with colocated actuators and sensors | cs.SY math.OC | This paper presents a survey of recent results on the theory of negative
imaginary systems. This theory can be applied to the robust control of large
flexible structures with colocated force actuators and position sensors.
|
1401.7772 | Spectrum Sensing Via Reconfigurable Antennas: Is Cooperation of
Secondary Users Indispensable? | cs.NI cs.IT math.IT | This work presents an analytical framework for characterizing the performance
of cooperative and noncooperative spectrum sensing schemes by characterizing
the tradeoff between the achieved diversity and coding gains in each scheme.
Based on this analysis, we try to answer the fundamental question: can we
dispense with SU cooperation and still achieve an arbitrary diversity gain? It
is shown
that this is indeed possible via a novel technique that can offer diversity
gain for a single SU using a single antenna. The technique is based on the
usage of a reconfigurable antenna that changes its propagation characteristics
over time, thus creating an artificial temporal diversity. It is shown that the
usage of reconfigurable antennas outperforms cooperative as well as
non-cooperative schemes at low and high Signal-to-Noise Ratios (SNRs).
Moreover, if the channel state information is available at the SU, an
additional SNR gain can also be achieved.
|
1401.7828 | Codes over a subset of Octonion Integers | cs.IT math.IT | In this paper we define codes over some Octonion integers. We prove that
under certain conditions these codes can correct up to two errors in a
transmitted vector, and that their code rate is greater than that of codes
defined on some subset of the Quaternion integers.
|
1401.7838 | Dynamic Stride Length Adaptation According to Utility And Personal Space | physics.soc-ph cs.MA math.OC | Pedestrians adjust both speed and stride length when they navigate difficult
situations such as tight corners or dense crowds. They try to avoid collisions
and to preserve their personal space. State-of-the-art pedestrian motion models
automatically reduce speed in dense crowds simply because there is no space
where the pedestrians could go. The stride length and its correct adaptation,
however, are rarely considered. This leads to artefacts that impact macroscopic
observation parameters such as densities in front of bottlenecks and, through
this, flow. Hence modelling stride adaptation is important to increase the
predictive power of pedestrian models. To achieve this we reformulate the
problem as an optimisation problem on a disk around the pedestrian. Each
pedestrian seeks the position that is most attractive in a sense of balanced
goals between the search for targets, the need for individual space and the
need to keep a distance from obstacles. The need for space is modelled
according to findings from psychology defining zones around a person that, when
invaded, cause unease. The result is a fully automatic adjustment that allows
calibration through meaningful social parameters and that gives visually
natural results with an excellent fit to measured experimental data.
|
1401.7842 | Analysis of Compatible Discrete Operator Schemes for the Stokes
Equations on Polyhedral Meshes | math.NA cs.CE cs.NA | Compatible Discrete Operator schemes preserve basic properties of the
continuous model at the discrete level. They combine discrete differential
operators that discretize exactly topological laws and discrete Hodge operators
that approximate constitutive relations. We devise and analyze two families of
such schemes for the Stokes equations in curl formulation, with the pressure
degrees of freedom located at either mesh vertices or cells. The schemes ensure
local mass and momentum conservation. We prove discrete stability by
establishing novel discrete Poincar\'e inequalities. Using commutators related
to the consistency error, we derive error estimates with first-order
convergence rates for smooth solutions. We analyze two strategies for
discretizing the external load, so as to deliver tight error estimates when the
external load has a large irrotational or divergence-free part. Finally,
numerical results are presented on three-dimensional polyhedral meshes.
|
1401.7846 | Optimal power control in Cognitive MIMO systems with limited feedback | cs.IT math.IT | In this paper, the problem of optimal power allocation in Cognitive Radio
(CR) Multiple Input Multiple Output (MIMO) systems is treated. The focus is on
providing limited feedback solutions aiming at maximizing the secondary system
rate subject to a constraint on the average interference caused to primary
communication. The limited feedback solutions are obtained by reducing the
information available at secondary transmitter (STx) for the link between STx
and the secondary receiver (SRx) as well as by limiting the level of available
information at STx that corresponds to the link between the STx and the primary
receiver (PRx). Monte Carlo simulation results are given that quantify
the performance achieved by the proposed algorithms.
|
1401.7860 | Motion planning and control of a planar polygonal linkage | math.MG cs.RO | For a polygonal linkage, we produce a fast navigation algorithm on its
configuration space. The basic idea is to approximate the configuration space
by the vertex-edge graph of its cell decomposition discovered by the first
author. The algorithm has three aspects: (1) the number of navigation steps
does not exceed 15 (independent of the linkage), (2) each step is a disguised
flex of a quadrilateral from one triangular configuration to another, which is
a well understood type of flex, and (3) each step can be performed explicitly
by adding some extra bars and obtaining a mechanism with one degree of freedom.
|
1401.7890 | Exploring the Relationship between Membership Turnover and Productivity
in Online Communities | cs.SI physics.soc-ph | One of the more disruptive reforms associated with the modern Internet is the
emergence of online communities working together on knowledge artefacts such as
Wikipedia and OpenStreetMap. Recently it has become clear that these
initiatives are vulnerable because of problems with membership turnover. This
study presents a longitudinal analysis of 891 WikiProjects where we model the
impact of member turnover and social capital losses on project productivity. By
examining social capital losses we attempt to provide a more nuanced analysis
of member turnover. In this context social capital is modelled from a social
network perspective where the loss of more central members has more impact. We
find that only a small proportion of WikiProjects are in a relatively healthy
state with low levels of membership turnover and social capital losses. The
results show that the relationship between social capital losses and project
performance is U-shaped, and that member withdrawal has a significant negative
effect on project outcomes. The results also support the mediation of turnover
rate and network density on the curvilinear relationship.
|
1401.7898 | Maximum Margin Multiclass Nearest Neighbors | cs.LG math.ST stat.TH | We develop a general framework for margin-based multicategory classification
in metric spaces. The basic work-horse is a margin-regularized version of the
nearest-neighbor classifier. We prove generalization bounds that match the
state of the art in sample size $n$ and significantly improve the dependence on
the number of classes $k$. Our point of departure is a nearly Bayes-optimal
finite-sample risk bound independent of $k$. Although $k$-free, this bound is
unregularized and non-adaptive, which motivates our main result: Rademacher and
scale-sensitive margin bounds with a logarithmic dependence on $k$. As the best
previous risk estimates in this setting were of order $\sqrt k$, our bound is
exponentially sharper. From the algorithmic standpoint, in doubling metric
spaces our classifier may be trained on $n$ examples in $O(n^2\log n)$ time and
evaluated on new points in $O(\log n)$ time.
|
1401.7909 | When is it Biased? Assessing the Representativeness of Twitter's
Streaming API | cs.SI physics.soc-ph | Twitter has captured the interest of the scientific community not only for
its massive user base and content, but also for its openness in sharing its
data. Twitter shares a free 1% sample of its tweets through the "Streaming
API", a service that returns a sample of tweets according to a set of
parameters set by the researcher. Recently, research has pointed to evidence of
bias in the data returned through the Streaming API, raising concern about the
integrity of this data service for use in research scenarios. While these
results are important, the methodologies proposed in previous work rely on the
restrictive and expensive Firehose to find the bias in the Streaming API data.
In this work we tackle the problem of finding sample bias without the need for
"gold standard" Firehose data. Namely, we focus on finding time periods in the
Streaming API data where the trend of a hashtag is significantly different from
its trend in the true activity on Twitter. We propose a solution that focuses
on using an open data source to find bias in the Streaming API. Finally, we
assess the utility of the data source in sparse data situations and for users
issuing the same query from different regions.
|
1401.7923 | Loopy annealing belief propagation for vertex cover and matching:
convergence, LP relaxation, correctness and Bethe approximation | cs.DM cs.DS cs.IT math-ph math.IT math.MP math.PR | For the minimum cardinality vertex cover and maximum cardinality matching
problems, the max-product form of belief propagation (BP) is known to perform
poorly on general graphs. In this paper, we present an iterative loopy
annealing BP (LABP) algorithm which is shown to converge and to solve a Linear
Programming relaxation of the vertex cover or matching problem on general
graphs. LABP finds (asymptotically) a minimum half-integral vertex cover (hence
provides a 2-approximation) and a maximum fractional matching on any graph. We
also show that LABP finds (asymptotically) a minimum size vertex cover for any
bipartite graph and, as a consequence, computes the matching number of the graph.
Our proof relies on some subtle monotonicity arguments for the local iteration.
We also show that the Bethe free entropy is concave and that LABP maximizes it.
Using loop calculus, we also give an exact (also intractable for general
graphs) expression of the partition function for matching in terms of the LABP
messages which can be used to improve mean-field approximations.
|
1401.7941 | Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian
Networks | cs.AI | Dynamic Bayesian networks (DBNs) are a general model for stochastic processes
with partially observed states. Belief filtering in DBNs is the task of
inferring the belief state (i.e. the probability distribution over process
states) based on incomplete and noisy observations. This can be a hard problem
in complex processes with large state spaces. In this article, we explore the
idea of accelerating the filtering task by automatically exploiting causality
in the process. We consider a specific type of causal relation, called
passivity, which pertains to how state variables cause changes in other
variables. We present the Passivity-based Selective Belief Filtering (PSBF)
method, which maintains a factored belief representation and exploits passivity
to perform selective updates over the belief factors. PSBF produces exact
belief states under certain assumptions and approximate belief states
otherwise, where the approximation error is bounded by the degree of
uncertainty in the process. We show empirically, in synthetic processes with
varying sizes and degrees of passivity, that PSBF is faster than several
alternative methods while achieving competitive accuracy. Furthermore, we
demonstrate how passivity occurs naturally in a complex system such as a
multi-robot warehouse, and how PSBF can exploit this to accelerate the
filtering task.
|
1401.7944 | Performance Rescaling of Complex Networks | cs.NI cond-mat.stat-mech cs.SI physics.soc-ph | Recent progress in network topology modeling [1], [2] has shown that it is
possible to create smaller-scale replicas of large complex networks, like the
Internet, while simultaneously preserving several important topological
properties. However, the constructed replicas do not include notions of
capacities and latencies, and the fundamental question of whether smaller
networks can reproduce the performance of larger networks remains unanswered.
We address this question in this letter, and show that it is possible to
predict the performance of larger networks from smaller replicas, as long as
the right link capacities and propagation delays are assigned to the replica's
links. Our procedure is inspired by techniques introduced in [2] and
incorporates a time-downscaling argument from [3]. We show that significant computational
savings can be achieved when simulating smaller-scale replicas with TCP and UDP
traffic, with simulation times being reduced by up to two orders of magnitude.
|
1401.8008 | Support vector comparison machines | stat.ML cs.LG | In ranking problems, the goal is to learn a ranking function from labeled
pairs of input points. In this paper, we consider the related comparison
problem, where the label indicates which element of the pair is better, or if
there is no significant difference. We cast the learning problem as a margin
maximization, and show that it can be solved by converting it to a standard
SVM. We use simulated nonlinear patterns, a real learning to rank sushi data
set, and a chess data set to show that our proposed SVMcompare algorithm
outperforms SVMrank when there are equality pairs.
|
1401.8022 | Synchronizing Rankings via Interactive Communication | cs.IT math.IT | We consider the problem of exact synchronization of two rankings at remote
locations connected by a two-way channel. Such synchronization problems arise
when items in the data are distinguishable, as is the case for playlists,
tasklists, crowdvotes and recommender systems rankings. Our model accounts for
different constraints on the communication throughput of the forward and
feedback links, resulting in different anchoring, syndrome and checksum
computation strategies. Information editing is assumed of the form of
deletions, insertions, block deletions/insertions, translocations and
transpositions. The protocols developed under the given model are order-optimal
with respect to genie aided lower bounds.
|
1401.8042 | Online Dating Recommendations: Matching Markets and Learning Preferences | cs.SI cs.IR physics.soc-ph | Recommendation systems for online dating have recently attracted much
attention from the research community. In this paper we propose a two-sided
matching framework for online dating recommendations and design an LDA model to
learn user preferences from observed user messaging behavior and user
profile features. Experimental results using data from a large online dating
website show that two-sided matching significantly improves the rate of
successful matches, by as much as 45%. Finally, using simulated matchings we
show that the LDA model can correctly capture user preferences.
|