id | title | categories | abstract |
|---|---|---|---|
1304.6761 | Towards a Networks-of-Networks Framework for Cyber Security | cs.CR cs.NI cs.SI | Networks-of-networks (NoN) is a graph-theoretic model of interdependent
networks that have distinct dynamics at each network (layer). By adding special
edges to represent relationships between nodes in different layers, NoN
provides a unified mechanism to study interdependent systems intertwined in a
complex relationship. While NoN based models have been proposed for
cyber-physical systems, in this position paper we build towards a three-layered
NoN model for an enterprise cyber system. Each layer captures a different facet
of a cyber system. We present an in-depth discussion of four major graph-
theoretic applications to demonstrate how the three-layered NoN model can be
leveraged for continuous system monitoring and mission assurance.
|
1304.6763 | Deep Scattering Spectrum | cs.SD cs.IT math.IT | A scattering transform defines a locally translation invariant representation
which is stable to time-warping deformations. It extends MFCC representations
by computing modulation spectrum coefficients of multiple orders, through
cascades of wavelet convolutions and modulus operators. Second-order scattering
coefficients characterize transient phenomena such as attacks and amplitude
modulation. A frequency transposition invariant representation is obtained by
applying a scattering transform along log-frequency. State-of-the-art
classification results are obtained for musical genre and phone classification
on GTZAN and TIMIT databases, respectively.
|
1304.6777 | A Bayesian approach for predicting the popularity of tweets | cs.SI physics.soc-ph stat.AP | We predict the popularity of short messages called tweets created in the
micro-blogging site known as Twitter. We measure the popularity of a tweet by
the time-series path of its retweets, which is when people forward the tweet to
others. We develop a probabilistic model for the evolution of the retweets
using a Bayesian approach, and form predictions using only observations on the
retweet times and the local network or "graph" structure of the retweeters. We
obtain good step ahead forecasts and predictions of the final total number of
retweets even when only a small fraction (i.e., less than one tenth) of the
retweet path is observed. This translates to good predictions within a few
minutes of a tweet being posted, and has potential implications for
understanding the spread of broader ideas, memes, or trends in social networks.
|
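The flavor of forecasting a final total from a partial retweet path can be illustrated with a toy curve fit. Everything below is an illustrative assumption, not the paper's Bayesian model: the saturating-growth form, the grid search over the rate, and all parameter values.

```python
import math

def predict_total(times, counts, lam_grid):
    """Toy forecast of the final retweet count: assume the cumulative
    count follows R(t) = R_inf * (1 - exp(-lam * t)), grid-search lam,
    and solve for R_inf by least squares. An illustrative stand-in,
    not the paper's probabilistic model."""
    best_err, best_r = float("inf"), 0.0
    for lam in lam_grid:
        g = [1.0 - math.exp(-lam * t) for t in times]
        denom = sum(x * x for x in g)
        if denom == 0.0:
            continue
        r_inf = sum(y * x for x, y in zip(g, counts)) / denom
        err = sum((y - r_inf * x) ** 2 for x, y in zip(g, counts))
        if err < best_err:
            best_err, best_r = err, r_inf
    return best_r

# Early observations drawn from the model itself (true total = 200)
times = list(range(1, 11))
counts = [200 * (1 - math.exp(-0.1 * t)) for t in times]
print(predict_total(times, counts, [i / 100 for i in range(1, 51)]))
```

With data generated by the model itself, the fit recovers the true total from only the first few time steps, mirroring the abstract's point that early observations can suffice.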
1304.6792 | On the mixed $f$-divergence for multiple pairs of measures | cs.IT math.IT math.MG | In this paper, the concept of the classical $f$-divergence (for a pair of
measures) is extended to the mixed $f$-divergence (for multiple pairs of
measures). The mixed $f$-divergence provides a way to measure the difference
between multiple pairs of (probability) measures. Properties for the mixed
$f$-divergence are established, such as permutation invariance and symmetry in
distributions. An Alexandrov-Fenchel type inequality and an isoperimetric type
inequality for the mixed $f$-divergence will be proved and applications in the
theory of convex bodies are given.
|
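For reference, the classical single-pair f-divergence being generalized here can be computed for discrete measures as below; the KL-divergence instance is one standard choice of f. This is a minimal sketch of the classical notion only, not of the paper's mixed construction.

```python
import math

def f_divergence(p, q, f):
    """Classical f-divergence D_f(P||Q) = sum_i q_i * f(p_i / q_i),
    for discrete measures with q_i > 0 and a convex f with f(1) = 0."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

# The choice f(t) = t * log(t) recovers the Kullback-Leibler divergence
def f_kl(t):
    return t * math.log(t) if t > 0 else 0.0

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(f_divergence(p, q, f_kl))  # nonnegative; zero when p == q
```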
1304.6810 | Inference and learning in probabilistic logic programs using weighted
Boolean formulas | cs.AI cs.LG cs.LO | Probabilistic logic programs are logic programs in which some of the facts
are annotated with probabilities. This paper investigates how classical
inference and learning tasks known from the graphical model community can be
tackled for probabilistic logic programs. Several such tasks such as computing
the marginals given evidence and learning from (partial) interpretations have
not really been addressed for probabilistic logic programs before.
The first contribution of this paper is a suite of efficient algorithms for
various inference tasks. It is based on a conversion of the program and the
queries and evidence to a weighted Boolean formula. This allows us to reduce
the inference tasks to well-studied tasks such as weighted model counting,
which can be solved using state-of-the-art methods known from the graphical
model and knowledge compilation literature. The second contribution is an
algorithm for parameter estimation in the learning from interpretations
setting. The algorithm employs Expectation Maximization, and is built on top of
the developed inference algorithms.
The proposed approach is experimentally evaluated. The results show that the
inference algorithms improve upon the state-of-the-art in probabilistic logic
programming and that it is indeed possible to learn the parameters of a
probabilistic logic program from interpretations.
|
1304.6822 | On Design of Opportunistic Spectrum Access in the Presence of Reactive
Primary Users | cs.IT math.IT | Opportunistic spectrum access (OSA) is a key technique enabling the secondary
users (SUs) in a cognitive radio (CR) network to transmit over the "spectrum
holes" unoccupied by the primary users (PUs). In this paper, we focus on the
OSA design in the presence of reactive PUs, where PU's access probability in a
given channel is related to SU's past access decisions. We model the channel
occupancy of the reactive PU as a 4-state discrete-time Markov chain. We
formulate the optimal OSA design for SU throughput maximization as a
constrained finite-horizon partially observable Markov decision process (POMDP)
problem. We solve this problem by first considering the conventional short-term
conditional collision probability (SCCP) constraint. We then adopt a long-term
PU throughput (LPUT) constraint to effectively protect the reactive PU
transmission. We derive the structure of the optimal OSA policy under the LPUT
constraint and propose a suboptimal policy with lower complexity. Numerical
results are provided to validate the proposed studies, which reveal some
interesting new tradeoffs between SU throughput maximization and PU
transmission protection in a practical interaction scenario.
|
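The reactive-PU channel occupancy is modeled as a 4-state discrete-time Markov chain; a generic simulator is sketched below. The transition matrix shown is a hypothetical placeholder (in the paper the PU's access probability also depends on the SU's past access decisions, which this sketch does not capture).

```python
import random

# Hypothetical 4-state transition matrix (states are illustrative
# placeholders for channel-occupancy states; not the paper's values).
P = [
    [0.8, 0.2, 0.0, 0.0],
    [0.3, 0.5, 0.2, 0.0],
    [0.1, 0.0, 0.6, 0.3],
    [0.0, 0.1, 0.4, 0.5],
]

def simulate(P, steps, state=0, seed=0):
    """Sample a trajectory of a discrete-time Markov chain by inverse
    transform sampling on each row of the transition matrix."""
    rng = random.Random(seed)
    path = [state]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, prob in enumerate(P[state]):
            acc += prob
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

path = simulate(P, 1000)
print(sum(s == 0 for s in path) / len(path))  # empirical fraction in state 0
```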
1304.6858 | Phase Transition and Strong Predictability | cs.IT math.IT | The statistical mechanical interpretation of algorithmic information theory
(AIT, for short) was introduced and developed in our former work [K. Tadaki,
Local Proceedings of CiE 2008, pp.425-434, 2008], where we introduced the
notion of thermodynamic quantities into AIT. These quantities are real
functions of temperature T>0. The values of all the thermodynamic quantities
diverge when T exceeds 1. This phenomenon corresponds to phase transition in
statistical mechanics. In this paper we introduce the notion of strong
predictability for an infinite binary sequence and then apply it to the
partition function Z(T), which is one of the thermodynamic quantities in AIT.
We then reveal a new computational aspect of the phase transition in AIT by
showing the critical difference of the behavior of Z(T) between T=1 and T<1 in
terms of the strong predictability for the base-two expansion of Z(T).
|
1304.6898 | Automated Synthesis of Controllers for Search and Rescue from Temporal
Logic Specifications | cs.SY | In this thesis, the synthesis of correct-by-construction controllers for
robots assisting in Search and Rescue (SAR) is considered. In recent years, the
development of robots assisting in disaster mitigation in urban environments
has been actively encouraged, since robots can be deployed in dangerous and
hazardous areas where human SAR operations would not be possible.
In order to meet the reliability requirements in SAR, the specifications of
the robots are stated in Linear Temporal Logic and synthesized into finite
state machines that can be executed as controllers. The resulting controllers
are purely discrete and maintain an ongoing interaction with their environment
by changing their internal state according to the inputs they receive from
sensors or other robots.
Since SAR robots have to cooperate in order to complete the required tasks,
the synthesis of controllers that together achieve a common goal is considered.
This distributed synthesis problem is provably undecidable, hence it cannot be
solved in full generality, but a set of design principles is introduced in
order to develop specialized synthesizable specifications. In particular,
communication and cooperation are resolved by introducing a verified
standardized communication protocol and preempting negotiations between robots.
The robots move on a graph on which we consider the search for stationary and
moving targets. Searching for moving targets is cast into a game of cops and
robbers, and specifications implementing a winning strategy are developed so
that the number of robots required is minimized.
The viability of the methods is demonstrated by synthesizing controllers for
robots performing search and rescue for stationary targets and searching for
moving targets. It is shown that the controllers are guaranteed to achieve the
common goal of finding and rescuing the targets.
|
1304.6899 | An implementation of the relational k-means algorithm | cs.LG cs.CV cs.MS | A C# implementation of a generalized k-means variant called relational
k-means is described here. Relational k-means is a generalization of the
well-known k-means clustering method which works for non-Euclidean scenarios as
well. The input is an arbitrary distance matrix, as opposed to the traditional
k-means method, where the clustered objects need to be identified with vectors.
|
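A minimal sketch of the relational k-means idea (in Python rather than the paper's C#, and assuming the input matrix holds squared distances): the squared distance from an object to a cluster's implicit centroid can be expressed purely from pairwise distances, so no coordinate vectors are needed.

```python
def relational_kmeans(D, k, iters=100):
    """Cluster n objects given only an n-by-n matrix D of squared
    pairwise distances. The squared distance from object i to the
    implicit centroid of cluster C is
        (1/|C|) * sum_{j in C} D[i][j]
        - (1/(2|C|^2)) * sum_{j,l in C} D[j][l],
    which lets the k-means assignment step run without vectors."""
    n = len(D)
    labels = [i % k for i in range(n)]  # simple deterministic init
    for _ in range(iters):
        clusters = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        # per-cluster correction term from intra-cluster distances
        within = [0.5 * sum(D[j][l] for j in C for l in C) / len(C) ** 2
                  if C else float("inf") for C in clusters]
        new = []
        for i in range(n):
            d2 = [sum(D[i][j] for j in C) / len(C) - w if C else float("inf")
                  for C, w in zip(clusters, within)]
            new.append(d2.index(min(d2)))
        if new == labels:
            break
        labels = new
    return labels

# Two well-separated groups on the line: {0, 1, 2} and {10, 11, 12}
xs = [0, 1, 2, 10, 11, 12]
D = [[(a - b) ** 2 for b in xs] for a in xs]
print(relational_kmeans(D, 2))  # → [0, 0, 0, 1, 1, 1]
```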
1304.6920 | Contextual Query Using Bell Tests | cs.IR quant-ph | Tests are essential in Information Retrieval and Data Mining in order to
evaluate the effectiveness of a query. An automatic measure tool intended to
exhibit the meaning of words in context has been developed and linked with
Quantum Theory, particularly entanglement. "Quantum like" experiments were
undertaken on semantic space based on the Hyperspace Analogue to Language (HAL)
method. A quantum HAL model was implemented using state vectors issued from the
HAL matrix and query observables, testing a wide range of windows sizes. The
Bell parameter S, associating measures on two words in a document, was derived
showing peaks for specific window sizes. The peaks show maximum quantum
violation of the Bell inequalities and are document dependent. This new
correlation measure inspired by Quantum Theory could be promising for measuring
query relevance.
|
1304.6933 | Digit Recognition in Handwritten Weather Records | cs.CV | This paper addresses the automatic recognition of handwritten temperature
values in weather records. The localization of table cells is based on line
detection using projection profiles. Further, a stroke-preserving line removal
method which is based on gradient images is proposed. The presented digit
recognition utilizes features which are extracted using a set of filters and a
Support Vector Machine classifier. It was evaluated on the MNIST and USPS
datasets and on our own database with about 17,000 RGB digit images. An accuracy of
99.36% per digit is achieved for the entire system using a set of 84 weather
records.
|
1304.6969 | A Deterministic Annealing Approach to Optimization of Zero-delay
Source-Channel Codes | cs.IT math.IT | This paper studies optimization of zero-delay source-channel codes, and
specifically the problem of obtaining globally optimal transformations that map
between the source space and the channel space, under a given transmission
power constraint and for the mean square error distortion. Particularly, we
focus on the setting where the decoder has access to side information, whose
cost surface is known to be riddled with local minima. Prior work derived the
necessary conditions for optimality of the encoder and decoder mappings, along
with a greedy optimization algorithm that imposes these conditions iteratively,
in conjunction with the heuristic "noisy channel relaxation" method to mitigate
poor local minima. While noisy channel relaxation is arguably effective in
simple settings, it fails to provide accurate global optimization results in
more complicated settings including the decoder with side information as
considered in this paper. We propose a global optimization algorithm based on
the ideas of "deterministic annealing", a non-convex optimization method,
derived from information theoretic principles with analogies to statistical
physics, and successfully employed in several problems including clustering,
vector quantization and regression. We present comparative numerical results
that show strict superiority of the proposed algorithm over greedy optimization
methods as well as over the noisy channel relaxation.
|
1304.6990 | Euclidean Upgrade from a Minimal Number of Segments | cs.CV | In this paper, we propose an algebraic approach to upgrade a projective
reconstruction to a Euclidean one, and aim at computing the rectifying
homography from a minimal number of 9 segments of known length. Constraints are
derived from these segments which yield a set of polynomial equations that we
solve by means of Gr\"obner bases. We explain how a solver for such a system of
equations can be constructed from simplified template data. Moreover, we
present experiments that demonstrate that the given problem can be solved in
this way.
|
1304.7018 | Higher-order compatible discretization on hexahedrals | math-ph cs.CE cs.CG cs.NA math.MP | We derive a compatible discretization method that relies heavily on the
underlying geometric structure, and obeys the topological sequences and
commuting properties that are constructed. As a sample problem we consider the
vorticity-velocity-pressure formulation of the Stokes problem. We motivate the
choice for a mixed variational formulation based on both geometric as well as
physical arguments. Numerical tests confirm the theoretical results that we
obtain a pointwise divergence-free solution for the Stokes problem and that the
method obtains optimal convergence rates.
|
1304.7025 | Recovery of bilevel causal signals with finite rate of innovation using
positive sampling kernels | cs.IT math.IT | A bilevel signal $x$ with maximal local rate of innovation $R$ is a
continuous-time signal that takes only the two values 0 and 1 and has at
most one transition position in any time period of length $1/R$. In this note, we
introduce a recovery method for bilevel causal signals $x$ with maximal local
rate of innovation $R$ from their uniform samples $x*h(nT), n\ge 1$, where the
sampling kernel $h$ is causal and positive on $(0, T)$, and the sampling rate
$\tau:=1/T$ is at (or above) the maximal local rate of innovation $R$. We also
discuss stability of the bilevel signal recovery procedure in the presence of
bounded noises.
|
1304.7034 | Threshold-limited spreading in social networks with multiple initiators | physics.soc-ph cond-mat.stat-mech cs.SI | A classical model for social-influence-driven opinion change is the threshold
model. Here we study cascades of opinion change driven by threshold model
dynamics in the case where multiple {\it initiators} trigger the cascade, and
where all nodes possess the same adoption threshold $\phi$. Specifically, using
empirical and stylized models of social networks, we study cascade size as a
function of the initiator fraction $p$. We find that even for arbitrarily high
values of $\phi$, there exists a critical initiator fraction $p_c(\phi)$ beyond
which the cascade becomes global. Network structure, in particular clustering,
plays a significant role in this scenario. Similarly to the case of single-node
or single-clique initiators studied previously, we observe that community
structure within the network facilitates opinion spread to a larger extent than
a homogeneous random network. Finally, we study the efficacy of different
initiator selection strategies on the size of the cascade and the cascade
window.
|
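The threshold dynamics studied here are easy to state operationally. A minimal sketch follows, as a generic fixed-point simulator with the initiators passed explicitly; the cycle graph and the values of $\phi$ are illustrative, not the paper's networks or code.

```python
def threshold_cascade(adj, phi, initiators):
    """Run threshold-model dynamics to a fixed point: an inactive node
    adopts once at least a fraction phi of its neighbors is active.
    Returns the final set of active nodes."""
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for v, nbrs in enumerate(adj):
            if v not in active and nbrs:
                if sum(u in active for u in nbrs) / len(nbrs) >= phi:
                    active.add(v)
                    changed = True
    return active

# Cycle of 20 nodes: every node has exactly two neighbors
n = 20
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
print(len(threshold_cascade(adj, 0.5, {0})))  # at phi = 1/2 the cascade goes global: 20
print(len(threshold_cascade(adj, 0.6, {0})))  # above 1/2 it stalls at the initiator: 1
```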
1304.7045 | An Algorithm for Training Polynomial Networks | cs.LG cs.AI stat.ML | We consider deep neural networks, in which the output of each node is a
quadratic function of its inputs. Similar to other deep architectures, these
networks can compactly represent any function on a finite training set. The
main goal of this paper is the derivation of an efficient layer-by-layer
algorithm for training such networks, which we denote as the \emph{Basis
Learner}. The algorithm is a universal learner in the sense that the training
error is guaranteed to decrease at every iteration, and can eventually reach
zero under mild conditions. We present practical implementations of this
algorithm, as well as preliminary experimental results. We also compare our
deep architecture to other shallow architectures for learning polynomials, in
particular kernel learning.
|
1304.7047 | Finding Hidden Cliques of Size \sqrt{N/e} in Nearly Linear Time | math.PR cs.IT math.IT math.ST stat.TH | Consider an Erd\H{o}s-R\'enyi random graph in which each edge is present
independently with probability 1/2, except for a subset $\sC_N$ of the vertices
that form a clique (a completely connected subgraph). We consider the problem
of identifying the clique, given a realization of such a random graph.
The best known algorithm provably finds the clique in linear time with high
probability, provided $|\sC_N|\ge 1.261\sqrt{N}$ \cite{dekel2011finding}.
Spectral methods can be shown to fail on cliques smaller than $\sqrt{N}$. In
this paper we describe a nearly linear time algorithm that succeeds with high
probability for $|\sC_N|\ge (1+\eps)\sqrt{N/e}$ for any $\eps>0$. This is the
first algorithm that provably improves over spectral methods.
We further generalize the hidden clique problem to other background graphs
(the standard case corresponding to the complete graph on $N$ vertices). For
large girth regular graphs of degree $(\Delta+1)$ we prove that `local'
algorithms succeed if $|\sC_N|\ge (1+\eps)N/\sqrt{e\Delta}$ and fail if
$|\sC_N|\le(1-\eps)N/\sqrt{e\Delta}$.
|
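An instance generator for the hidden clique problem, together with the classical top-degree baseline (Kučera's heuristic, which provably works only for cliques of size on the order of $\sqrt{N \log N}$), can be sketched as below. This is background illustration, not the paper's nearly-linear-time message-passing algorithm.

```python
import random
from itertools import combinations

def planted_clique(n, k, seed=0):
    """G(n, 1/2) with a clique planted on k random vertices."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if (u in clique and v in clique) or rng.random() < 0.5:
            adj[u].add(v)
            adj[v].add(u)
    return adj, clique

def top_degree_guess(adj, k):
    """Baseline: guess that the k highest-degree vertices are the clique;
    clique members have elevated expected degree (n-k)/2 + (k-1)."""
    order = sorted(range(len(adj)), key=lambda v: len(adj[v]), reverse=True)
    return set(order[:k])

adj, clique = planted_clique(500, 80, seed=1)
guess = top_degree_guess(adj, 80)
print(len(guess & clique))  # overlap with the planted clique
```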
1304.7075 | Lower bounds on the M\"{u}nchhausen problem | cs.IT math.CO math.IT | "The Baron's omni-sequence", B(n), first defined by Khovanova and Lewis
(2011), is a sequence that gives for each n the minimum number of weighings on
balance scales that can verify the correct labeling of n identically-looking
coins with distinct integer weights between 1 gram and n grams. A trivial lower
bound on B(n) is log_3(n), and it has been shown that B(n) is log_3(n) + O(log
log n). In this paper we give the first nontrivial lower bound for the
M\"{u}nchhausen problem, showing that there are infinitely many n values
for which B(n) does not equal ceil(log_3 n). Furthermore, we show that if N(k)
is the number of n values for which k = ceil(log_3 n) and B(n) does not equal
k, then N(k) is an unbounded function of k.
|
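The trivial bound reflects that a weighing has three outcomes, so w weighings can certify at most 3^w distinct scenarios. The quantity ceil(log_3 n) appearing throughout can be computed exactly with integer arithmetic (a small helper sketch, avoiding float rounding near powers of 3):

```python
def ceil_log3(n):
    """Smallest w with 3**w >= n, i.e. ceil(log_3 n) for integers n >= 1.
    Pure integer arithmetic, so no floating-point error near powers of 3."""
    w, p = 0, 1
    while p < n:
        p *= 3
        w += 1
    return w

print([ceil_log3(n) for n in (1, 3, 4, 27, 28, 81)])  # → [0, 1, 2, 3, 4, 4]
```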
1304.7094 | A new Watermarking Technique for Secure Database | cs.DB cs.CR cs.MM | Digital multimedia watermarking technology was suggested in the last decade
to embed copyright information in digital objects such as images, audio and video.
However, the increasing use of relational database systems in many real-life
applications created an ever increasing need for watermarking database systems.
As a result, watermarking relational database systems is now emerging as a
research area that deals with the legal issue of copyright protection of
database systems. Approach: In this study, we proposed an efficient database
watermarking algorithm based on inserting binary image watermarks in
non-numeric multi-word attributes of selected database tuples. Results: The
algorithm is robust as it resists attempts to remove or degrade the embedded
watermark and it is blind as it does not require the original database in order
to extract the embedded watermark. Conclusion: Experimental results
demonstrated blindness and the robustness of the algorithm against common
database attacks.
|
1304.7095 | Proximity Factors of Lattice Reduction-Aided Precoding for Multiantenna
Broadcast | cs.IT math.IT | Lattice precoding is an effective strategy for multiantenna broadcast. In
this paper, we show that approximate lattice precoding in multiantenna
broadcast is a variant of the closest vector problem (CVP) known as $\eta$-CVP.
The proximity factors of lattice reduction-aided precoding are defined, and
their bounds are derived, which measure the worst-case loss in power efficiency
compared to sphere precoding. Unlike decoding applications, this analysis does
not suffer from the boundary effect of a finite constellation, since the
underlying lattice in multiantenna broadcast is indeed infinite.
|
1304.7096 | A Novel approach for Hybrid Database | cs.DB cs.CR cs.MM | In the current world of economic crisis, cost control is one of the chief
concerns for all types of industries, especially for small vendors. Small
vendors must minimize their Information Technology budget by reducing the
initial investment in hardware and in costly database servers such as ORACLE,
SQL Server, SYBASE, etc. for data processing and storage. In other sectors,
electronic device manufacturers want to increase demand and reduce
manufacturing cost by introducing low-cost technologies. Small devices such
as iPods, iPhones, palmtops, etc. are nowadays used as data computation and
storage tools. In both cases mentioned above, instead of opting for costly
database servers, which additionally require extra hardware as well as extra
expenses in training and administration, the flat file may be considered as a
candidate due to its easy handling, fast access, and, of course, zero cost.
The main hurdle, however, is security, which is not up to the optimum level.
In this paper, we propose a methodology that combines all the merits of the
flat file and, with the help of a novel steganographic technique, maintains
the utmost security fence. The proposed methodology will undoubtedly be
highly beneficial for small vendors as well as for the aforementioned
electronic device manufacturers.
|
1304.7118 | Synthesis of neural networks for spatio-temporal spike pattern
recognition and processing | cs.NE q-bio.NC | The advent of large scale neural computational platforms has highlighted the
lack of algorithms for synthesis of neural structures to perform predefined
cognitive tasks. The Neural Engineering Framework offers one such synthesis,
but it is most effective for a spike rate representation of neural information,
and it requires a large number of neurons to implement simple functions. We
describe a neural network synthesis method that generates synaptic connectivity
for neurons which process time-encoded neural signals, and which makes very
sparse use of neurons. The method allows the user to specify, arbitrarily,
neuronal characteristics such as axonal and dendritic delays, and synaptic
transfer functions, and then solves for the optimal input-output relationship
using computed dendritic weights. The method may be used for batch or online
learning and has an extremely fast optimization process. We demonstrate its use
in generating a network to recognize speech which is sparsely encoded as spike
times.
|
1304.7132 | Filament and Flare Detection in H{\alpha} image sequences | cs.CV astro-ph.IM | Solar storms can have a major impact on the infrastructure of the earth. Some
of the causing events are observable from ground in the H{\alpha} spectral
line. In this paper we propose a new method for the simultaneous detection of
flares and filaments in H{\alpha} image sequences. To this end we perform several
preprocessing steps to enhance and normalize the images. Based on the intensity
values we segment the image by a variational approach. In a final
postprocessing step we derive essential properties to classify the events and
further demonstrate the performance by comparing our obtained results to the
data annotated by an expert. The information produced by our method can be used
for near real-time alerts and the statistical analysis of existing data by
solar physicists.
|
1304.7140 | Pulmonary Vascular Tree Segmentation from Contrast-Enhanced CT Images | cs.CV physics.med-ph | We present a pulmonary vessel segmentation algorithm, which is fast, fully
automatic and robust. It uses a coarse segmentation of the airway tree and a
left and right lung labeled volume to restrict a vessel enhancement filter,
based on an offset medialness function, to the lungs. We show the application
of our algorithm on contrast-enhanced CT images, where we derive a clinical
parameter to detect pulmonary hypertension (PH) in patients. Results on a
dataset of 24 patients show that quantitative indices derived from the
segmentation are applicable to distinguish patients with and without PH.
Further work-in-progress results are shown on the VESSEL12 challenge dataset,
which is composed of non-contrast-enhanced scans, where we rank in the
midfield of participating contestants.
|
1304.7153 | A Convex Approach for Image Hallucination | cs.CV | In this paper we propose a global convex approach for image hallucination.
Altering the idea of classical multi-image super-resolution (SR) systems to
single-image SR, we incorporate aligned images to hallucinate the output. Our
work is based on the paper of Tappen et al. where they use a non-convex model
for image hallucination. In comparison we formulate a convex primal
optimization problem and derive a fast converging primal-dual algorithm with a
global optimal solution. We use a database with face images to incorporate
high-frequency details to the high-resolution output. We show that we can
achieve state-of-the-art results by using a convex approach.
|
1304.7157 | Question Answering Against Very-Large Text Collections | cs.CL cs.IR | Question answering involves developing methods to extract useful information
from large collections of documents. This is done with specialised search
engines such as Answer Finder. The aim of Answer Finder is to provide an answer
to a question rather than a page listing related documents that may contain the
correct answer. So, a question such as "How tall is the Eiffel Tower?" would
simply return "325m" or "1,063ft". Our task was to build on the current version
of Answer Finder by improving information retrieval, and also improving the
pre-processing involved in question series analysis.
|
1304.7158 | Irreflexive and Hierarchical Relations as Translations | cs.LG | We consider the problem of embedding entities and relations of knowledge
bases in low-dimensional vector spaces. Unlike most existing approaches, which
are primarily efficient for modeling equivalence relations, our approach is
designed to explicitly model irreflexive relations, such as hierarchies, by
interpreting them as translations operating on the low-dimensional embeddings
of the entities. Preliminary experiments show that, despite its simplicity and
a smaller number of parameters than previous approaches, our approach achieves
state-of-the-art performance according to standard evaluation protocols on data
from WordNet and Freebase.
|
1304.7162 | The automorphism group of a self-dual [72,36,16] code is not an
elementary abelian group of order 8 | cs.IT math.CO math.IT | The existence of an extremal self-dual binary linear code C of length 72 is a
long-standing open problem. We continue the investigation of its automorphism
group: looking at the combination of the subcodes fixed by different
involutions and doing a computer calculation with Magma, we prove that Aut(C)
is not isomorphic to the elementary abelian group of order 8. Combining this
with the known results in the literature one obtains that Aut(C) has order at
most 5.
|
1304.7168 | Non Deterministic Logic Programs | cs.AI | Non deterministic applications arise in many domains, including, stochastic
optimization, multi-objectives optimization, stochastic planning, contingent
stochastic planning, reinforcement learning, reinforcement learning in
partially observable Markov decision processes, and conditional planning. We
present a logic programming framework called non deterministic logic programs,
along with a declarative semantics and fixpoint semantics, to allow
representing and reasoning about inherently non deterministic real-world
applications. The language of non deterministic logic programs framework is
extended with non-monotonic negation, and two alternative semantics are
defined: the stable non deterministic model semantics and the well-founded non
deterministic model semantics, and their relationship is studied. These
semantics subsume the deterministic stable model semantics and the
deterministic well-founded semantics of deterministic normal logic programs,
and they reduce to the semantics of deterministic definite logic programs
without negation. We show the application of the non deterministic logic
programs framework to a conditional planning problem.
|
1304.7184 | Reading Ancient Coin Legends: Object Recognition vs. OCR | cs.CV | Standard OCR is a well-researched topic of computer vision and can be
considered solved for machine-printed text. However, when applied to
unconstrained images, the recognition rates drop drastically. Therefore, the
employment of object recognition-based techniques has become state of the art
in scene text recognition applications. This paper presents a scene text
recognition method tailored to ancient coin legends and compares the results
achieved in character and word recognition experiments to a standard OCR
engine. The conducted experiments show that the proposed method outperforms the
standard OCR engine on a set of 180 cropped coin legend words.
|
1304.7211 | Algorithmic Optimisations for Iterative Deconvolution Methods | cs.CV | We investigate possibilities to speed up iterative algorithms for non-blind
image deconvolution. We focus on algorithms in which convolution with the
point-spread function to be deconvolved is used in each iteration, and aim at
accelerating these convolution operations as they are typically the most
expensive part of the computation. We follow two approaches: First, for some
practically important specific point-spread functions, algorithmically
efficient sliding window or list processing techniques can be used. In some
constellations this allows faster computation than via the Fourier domain.
Second, as iterations progress, computation of convolutions can be restricted
to subsets of pixels. For moderate thinning rates this can be done with almost
no impact on the reconstruction quality. Both approaches are demonstrated in
the context of Richardson-Lucy deconvolution but are not restricted to this
method.
|
1304.7217 | Correction of inertial navigation system's errors by the help of
video-based navigator based on Digital Terrain Map | cs.SY | This paper deals with the error analysis of a novel navigation algorithm that
uses as input the sequence of images acquired from a moving camera and a
Digital Terrain (or Elevation) Map (DTM/DEM). More specifically, it has been
shown that the optical flow derived from two consecutive camera frames can be
used in combination with a DTM to estimate the position, orientation and
ego-motion parameters of the moving camera. As opposed to previous works, the
proposed approach does not require an intermediate explicit reconstruction of
the 3D world. In the present work the sensitivity of the algorithm outlined
above is studied. The main sources for errors are identified to be the
optical-flow evaluation and computation, the quality of the information about
the terrain, the structure of the observed terrain and the trajectory of the
camera. By assuming appropriate characterization of these error sources, a
closed form expression for the uncertainty of the pose and motion of the camera
is first developed and then the influence of these factors is confirmed using
extensive numerical simulations. The main conclusion of this paper is to
establish that the proposed navigation algorithm generates accurate estimates
for reasonable scenarios and error sources, and thus can be effectively used as
part of a navigation system of autonomous vehicles.
|
1304.7224 | PAV ontology: Provenance, Authoring and Versioning | cs.DL cs.IR | Provenance is a critical ingredient for establishing trust of published
scientific content. This is true whether we are considering a data set, a
computational workflow, a peer-reviewed publication or a simple scientific
claim with supportive evidence. Existing vocabularies such as DC Terms and the
W3C PROV-O are domain-independent and general-purpose, and they allow and
encourage extensions to cover more specific needs. We identify the specific
need for identifying or distinguishing between the various roles assumed by
agents manipulating digital artifacts, such as author, contributor and curator.
We present the Provenance, Authoring and Versioning ontology (PAV): a
lightweight ontology for capturing just enough descriptions essential for
tracking the provenance, authoring and versioning of web resources. We argue
that such descriptions are essential for digital scientific content. PAV
distinguishes between contributors, authors and curators of content and
creators of representations in addition to the provenance of originating
resources that have been accessed, transformed and consumed. We explore five
projects (and communities) that have adopted PAV illustrating their usage
through concrete examples. Moreover, we present mappings that show how PAV
extends the PROV-O ontology to support broader interoperability.
The authors strove to keep PAV lightweight and compact by including only
those terms that have been demonstrated to be pragmatically useful in existing
applications, and by recommending terms from existing ontologies when
plausible.
We analyze and compare PAV with related approaches, namely Provenance
Vocabulary, DC Terms and BIBFRAME. We identify similarities and analyze their
differences with PAV, outlining strengths and weaknesses of our proposed model.
We specify SKOS mappings that align PAV with DC Terms.
|
1304.7226 | Lay-up Optimization of Laminated Composites: Mixed Approach with Exact
Feasibility Bounds on Lamination Parameters | cs.CE | We suggest a modified bi-level approach for finding the best stacking sequence
of laminated composite structures subject to mechanical, blending and
manufacturing constraints. We propose to use both the number of plies laid up
at predefined angles and lamination parameters as independent variables at the
outer (global) stage of the bi-level scheme, aimed at satisfying buckling, strain and
percentage constraints. Our formulation allows a precise definition of the
feasible region of lamination parameters and greatly facilitates the solution
of the inner-level problem of finding the optimal stacking sequence.
|
1304.7230 | Learning Densities Conditional on Many Interacting Features | stat.ML cs.LG | Learning a distribution conditional on a set of discrete-valued features is a
commonly encountered task. This becomes more challenging with a
high-dimensional feature set when there is the possibility of interaction
between the features. In addition, many frequently applied techniques consider
only prediction of the mean, but the complete conditional density is needed to
answer more complex questions. We demonstrate a novel nonparametric Bayes
method based upon a tensor factorization of feature-dependent weights for
Gaussian kernels. The method makes use of multistage feature selection for
dimension reduction. The resulting conditional density morphs flexibly with the
selected features.
|
1304.7236 | In the sight of my wearable camera: Classifying my visual experience | cs.CV | We introduce and analyze a new dataset which resembles the input to
biological vision systems much more closely than most previously published ones. Our
analysis led to several important conclusions. First, it is possible to
disambiguate over dozens of visual scenes (locations) encountered over the
course of several weeks of a human life with an accuracy of over 80%, and this
opens up the possibility of numerous novel vision applications, from early
detection of dementia to everyday use of wearable camera streams for automatic
reminders and visual stream exchange. Second, our experimental results
indicate that generative models such as Latent Dirichlet Allocation or
Counting Grids are more suitable for such types of data, as they are more
robust to overtraining and handle images that are low-resolution, blurred,
and characterized by relatively random clutter and a mix of objects.
|
1304.7238 | Solution of the Decision Making Problems using Fuzzy Soft Relations | cs.AI | Fuzzy Modeling has been applied in a wide variety of fields such as
Engineering and Management Sciences and Social Sciences to solve a number of
Decision Making Problems which involve impreciseness, uncertainty and vagueness
in data. In particular, applications of this Modeling technique in Decision
Making Problems have remarkable significance. These problems have been tackled
using various theories such as Probability theory, Fuzzy Set Theory, Rough Set
Theory, Vague Set Theory, Approximate Reasoning Theory etc., which lack
parameterization of their tools, due to which they could not be applied
successfully to such problems. The concept of Soft Set has a promising
potential for giving an optimal solution for these problems. With the
motivation of this new concept, in this paper we define the concepts of Soft
Relation and Fuzzy Soft Relation and then apply them to solve a number of
Decision Making Problems. The advantages of Fuzzy Soft Relation compared to
other paradigms are discussed. To the best of our knowledge this is the first
work on the application of Fuzzy Soft Relation to Decision Making Problems.
|
1304.7239 | Solution of System of Linear Equations - A Neuro-Fuzzy Approach | cs.AI | Neuro-Fuzzy Modeling has been applied in a wide variety of fields such as
Decision Making, Engineering and Management Sciences etc. In particular,
applications of this Modeling technique in Decision Making by involving complex
Systems of Linear Algebraic Equations have remarkable significance. In this
Paper, we present Polak-Ribiere Conjugate Gradient based Neural Network with
Fuzzy rules to solve System of Simultaneous Linear Algebraic Equations. This is
achieved using Fuzzy Backpropagation Learning Rule. The implementation results
show that the proposed Neuro-Fuzzy Network yields effective solutions for
exactly determined, underdetermined and over-determined Systems of Linear
Equations. This fact is demonstrated by the Computational Complexity analysis
of the Neuro-Fuzzy Algorithm. The proposed Algorithm is simulated effectively
using MATLAB software. To the best of our knowledge this is the first work on
solving Systems of Linear Algebraic Equations using Neuro-Fuzzy Modeling.
|
1304.7244 | Relation-algebraic and Tool-supported Control of Condorcet Voting | cs.GT cs.AI | We present a relation-algebraic model of Condorcet voting and, based on it,
relation-algebraic solutions of the constructive control problem via the
removal of voters.
We consider two winning conditions, viz. to be a Condorcet winner and to be
in the (Gilles resp. upward) uncovered set. For the first condition the control
problem is known to be NP-hard; for the second condition the NP-hardness of the
control problem is shown in the paper. All relation-algebraic specifications we
develop in the paper can immediately be translated into the programming
language of the BDD-based computer system RelView. Our approach is very
flexible and especially appropriate for prototyping and experimentation, and as
such very instructive for educational purposes. It can easily be applied to
other voting rules and control problems.
|
1304.7256 | Robust Belief Roadmap: Planning Under Intermittent Sensing | cs.RO | In this paper, we extend the recent body of work on planning under
uncertainty to include the fact that sensors may not provide any measurement
owing to misdetection. This is caused either by adverse environmental
conditions that prevent the sensors from making measurements or by the
fundamental limitations of the sensors. Examples include RF-based ranging
devices that intermittently do not receive the signal from beacons because of
obstacles; the misdetection of features by a camera system in detrimental
lighting conditions; a LIDAR sensor that is pointed at a glass-based material
such as a window, etc.
The main contribution of this paper is twofold. We first show that it is
possible to obtain an analytical bound on the performance of a state estimator
under sensor misdetection occurring stochastically over time in the
environment. We then show how this bound can be used in a sample-based path
planning algorithm to produce a path that trades off accuracy and robustness.
Computational results demonstrate the benefit of the approach and comparisons
are made with the state of the art in path planning under state uncertainty.
|
1304.7278 | On Adaptive Control with Closed-loop Reference Models: Transients,
Oscillations, and Peaking | cs.SY math.OC nlin.AO | One of the main features of adaptive systems is an oscillatory convergence
that worsens with the speed of adaptation. Recently, it has been shown that
Closed-loop Reference Models (CRMs) can result in improved transient
performance over their open-loop counterparts in model reference adaptive
control. In this paper, we quantify both the transient performance in the
classical adaptive systems and their improvement with CRMs. In addition to
deriving bounds on L-2 norms of the derivatives of the adaptive parameters
which are shown to be smaller, an optimal design of CRMs is proposed which
minimizes an underlying peaking phenomenon. The analytical tools proposed are
shown to be applicable for a range of adaptive control problems including
direct control and composite control with observer feedback. The presence of
CRMs in adaptive backstepping and adaptive robot control are also discussed.
Simulation results are presented throughout the paper to support the
theoretical derivations.
|
1304.7282 | An Improved Approach for Word Ambiguity Removal | cs.CL | Word ambiguity removal is the task of removing ambiguity from a word, i.e.,
the correct sense of a word is identified from ambiguous sentences. This paper
describes a model that uses a Part of Speech tagger and three categories for word
sense disambiguation (WSD). Improving interactions between users and computers
is much needed for Human-Computer Interaction. For this, the Supervised and
Unsupervised methods are combined. The WSD algorithm is used to find the
efficient and accurate sense of a word based on domain information. The
accuracy of this work is evaluated with the aim of finding the best suitable
domain of a word.
|
1304.7284 | Supervised Heterogeneous Multiview Learning for Joint Association Study
and Disease Diagnosis | cs.LG cs.CE stat.ML | Given genetic variations and various phenotypical traits, such as Magnetic
Resonance Imaging (MRI) features, we consider two important and related tasks
in biomedical research: i) to select genetic and phenotypical markers for
disease diagnosis and ii) to identify associations between genetic and
phenotypical data. These two tasks are tightly coupled because underlying
associations between genetic variations and phenotypical features contain the
biological basis for a disease. While a variety of sparse models have been
applied for disease diagnosis and canonical correlation analysis and its
extensions have been widely used in association studies (e.g., eQTL analysis),
these two tasks have been treated separately. To unify these two tasks, we
present a new sparse Bayesian approach for joint association study and disease
diagnosis. In this approach, common latent features are extracted from
different data sources based on sparse projection matrices and used to predict
multiple disease severity levels based on Gaussian process ordinal regression;
in return, the disease status is used to guide the discovery of relationships
between the data sources. The sparse projection matrices not only reveal
interactions between data sources but also select groups of biomarkers related
to the disease. To learn the model from data, we develop an efficient
variational expectation maximization algorithm. Simulation results demonstrate
that our approach achieves higher accuracy in both predicting ordinal labels
and discovering associations between data sources than alternative methods. We
apply our approach to an imaging genetics dataset for the study of Alzheimer's
Disease (AD). Our method identifies biologically meaningful relationships
between genetic variations, MRI features, and AD status, and achieves
significantly higher accuracy for predicting ordinal AD stages than the
competing methods.
|
1304.7285 | Traitement approximatif des requ\^etes flexibles avec groupement
d'attributs et jointure | cs.DB | This paper addresses the problem of approximate processing for flexible
queries in the form SELECT-FROM-WHERE-GROUP BY with join condition. It offers a
flexible framework for online aggregation while promoting response time at the
expense of result accuracy.
|
1304.7289 | TimeML-strict: clarifying temporal annotation | cs.CL | TimeML is an XML-based schema for annotating temporal information over
discourse. The standard has been used to annotate a variety of resources and is
followed by a number of tools, the creation of which constitutes hundreds of
thousands of man-hours of research work. However, the current state of
resources is such that many are not valid, or do not produce valid output, or
contain ambiguous or custom additions and removals. Difficulties arising from
these variances were highlighted in the TempEval-3 exercise, which included its
own extra stipulations over conventional TimeML as a response.
To unify the state of current resources, and to make progress toward easy
adoption of its current incarnation ISO-TimeML, this paper introduces
TimeML-strict: a valid, unambiguous, and easy-to-process subset of TimeML. We
also introduce three resources -- a schema for TimeML-strict; a validator tool
for TimeML-strict, so that one may ensure documents are in the correct form;
and a repair tool that corrects common invalidating errors and adds
disambiguating markup in order to convert documents from the laxer TimeML
standard to TimeML-strict.
|
1304.7308 | Improved Capacity Approximations for Gaussian Relay Networks | cs.IT math.IT | Consider a Gaussian relay network where a number of sources communicate to a
destination with the help of several layers of relays. Recent work has shown
that a compress-and-forward based strategy at the relays can achieve the
capacity of this network within an additive gap. In this strategy, the relays
quantize their observations at the noise level and map them to a random Gaussian
codebook. The resultant capacity gap is independent of the SNRs of the
channels in the network but linear in the total number of nodes.
In this paper, we show that if the relays quantize their signals at a
resolution decreasing with the number of nodes in the network, the additive gap
to capacity can be made logarithmic in the number of nodes for a class of
layered, time-varying wireless relay networks. This suggests that the
rule-of-thumb to quantize the received signals at the noise level used for
compress-and-forward in the current literature can be highly suboptimal.
|
1304.7344 | On feedback in Gaussian multi-hop networks | cs.IT math.IT | The study of feedback has been mostly limited to single-hop communication
settings. In this paper, we consider Gaussian networks where sources and
destinations can communicate with the help of intermediate relays over multiple
hops. We assume that links in the network can be bidirected providing
opportunities for feedback. We ask the following question: can the information
transfer in both directions of a link be critical to maximizing the end-to-end
communication rates in the network? Equivalently, could one of the directions
in each bidirected link (and more generally at least one of the links forming a
cycle) be shut down and the capacity of the network still be approximately
maintained? We show that in any arbitrary Gaussian network with bidirected
edges and cycles and unicast traffic, we can always identify a directed acyclic
subnetwork that approximately maintains the capacity of the original network.
For Gaussian networks with multiple-access and broadcast traffic, an acyclic
subnetwork is sufficient to achieve every rate point in the capacity region of
the original network; however, there may not be a single acyclic subnetwork
that maintains the whole capacity region. For networks with multicast and
multiple unicast traffic, on the other hand, bidirected information flow across
certain links can be critically needed to maximize the end-to-end capacity
region. These results can be regarded as generalizations of the conclusions
regarding the usefulness of feedback in various single-hop Gaussian settings
and can provide opportunities for simplifying operation in Gaussian multi-hop
networks.
|
1304.7355 | Web graph compression with fast access | cs.DS cs.IR cs.SI | In recent years, studying the content of the World Wide Web has become a very
important yet rather difficult task. There is a need for a compression
technique that would allow a web graph representation to be held in memory
while maintaining random access time competitive with the time needed to access
an uncompressed web graph on a hard drive.
There are already available techniques that accomplish this task, but there
is still room for improvement, and this thesis attempts to prove it. It
includes a comparison of two methods from the state of the art of this field
(BV and k2partitioned) with two already implemented algorithms (rewritten,
however, in the C++ programming language to maximize speed and resource management
efficiency), which are LM and 2D, and introduces a new variant of the latter,
called 2D stripes.
This thesis also serves as a proof of concept. The final considerations
show positive and negative aspects of all presented methods, expose the
feasibility of the new variant as well as indicate future direction for
development.
|
1304.7359 | Constant conditional entropy and related hypotheses | cond-mat.stat-mech cs.CL cs.IT math.IT physics.data-an | Constant entropy rate (conditional entropies must remain constant as the
sequence length increases) and uniform information density (conditional
probabilities must remain constant as the sequence length increases) are two
information theoretic principles that are argued to underlie a wide range of
linguistic phenomena. Here we revise the predictions of these principles in the
light of Hilberg's law on the scaling of conditional entropy in language and
related laws. We show that constant entropy rate (CER) and two interpretations
for uniform information density (UID), full UID and strong UID, are
inconsistent with these laws. Strong UID implies CER but the reverse is not
true. Full UID, a particular case of UID, leads to costly uncorrelated
sequences that are totally unrealistic. We conclude that CER and its particular
cases are incomplete hypotheses about the scaling of conditional entropies.
|
1304.7375 | Asymptotic FRESH Properizer for Block Processing of Improper-Complex
Second-Order Cyclostationary Random Processes | cs.IT math.IT | In this paper, the block processing of a discrete-time (DT) improper-complex
second-order cyclostationary (SOCS) random process is considered. In
particular, it is of interest to find a pre-processing operation that enables
computationally efficient near-optimal post-processing. An invertible
linear-conjugate linear (LCL) operator named the DT FREquency Shift (FRESH)
properizer is first proposed. It is shown that the DT FRESH properizer converts
a DT improper-complex SOCS random process input to an equivalent DT
proper-complex SOCS random process output by utilizing the information only
about the cycle period of the input. An invertible LCL block processing
operator named the asymptotic FRESH properizer is then proposed that mimics the
operation of the DT FRESH properizer but processes a finite number of
consecutive samples of a DT improper-complex SOCS random process. It is shown
that the output of the asymptotic FRESH properizer is not proper but
asymptotically proper and that its frequency-domain covariance matrix converges
to a highly-structured block matrix with diagonal blocks as the block size
tends to infinity. Two representative estimation and detection problems are
presented to demonstrate that asymptotically optimal low-complexity
post-processors can be easily designed by exploiting these asymptotic
second-order properties of the output of the asymptotic FRESH properizer.
|
1304.7392 | A Universal Grammar-Based Code For Lossless Compression of Binary Trees | cs.IT math.IT | We consider the problem of lossless compression of binary trees, with the aim
of reducing the number of code bits needed to store or transmit such trees. A
lossless grammar-based code is presented which encodes each binary tree into a
binary codeword in two steps. In the first step, the tree is transformed into a
context-free grammar from which the tree can be reconstructed. In the second
step, the context-free grammar is encoded into a binary codeword. The decoder
of the grammar-based code decodes the original tree from its codeword by
reversing the two encoding steps. It is shown that the resulting grammar-based
binary tree compression code is a universal code on a family of probabilistic
binary tree source models satisfying certain weak restrictions.
|
1304.7397 | Uniform generation of RNA pseudoknot structures with genus filtration | cs.CE math.CO q-bio.BM | In this paper we present a sampling framework for RNA structures of fixed
topological genus. We introduce a novel, linear time, uniform sampling
algorithm for RNA structures of fixed topological genus $g$, for arbitrary
$g>0$. Furthermore we develop a linear time sampling algorithm for RNA
structures of fixed topological genus $g$ that are weighted by a simplified,
loop-based energy functional. For this process the partition function of the
energy functional has to be computed once, which has $O(n^2)$ time complexity.
|
1304.7399 | Bingham Procrustean Alignment for Object Detection in Clutter | cs.CV cs.RO stat.AP | A new system for object detection in cluttered RGB-D images is presented. Our
main contribution is a new method called Bingham Procrustean Alignment (BPA) to
align models with the scene. BPA uses point correspondences between oriented
features to derive a probability distribution over possible model poses. The
orientation component of this distribution, conditioned on the position, is
shown to be a Bingham distribution. This result also applies to the classic
problem of least-squares alignment of point sets, when point features are
orientation-less, and gives a principled, probabilistic way to measure pose
uncertainty in the rigid alignment problem. Our detection system leverages BPA
to achieve more reliable object detections in clutter.
|
1304.7401 | Analytic Treatment of Tipping Points for Social Consensus in Large
Random Networks | cs.SI physics.soc-ph | We introduce a homogeneous pair approximation to the Naming Game (NG) model
by deriving a six-dimensional ODE for the two-word Naming Game. Our ODE reveals
the change in dynamical behavior of the Naming Game as a function of the
average degree $\langle k \rangle$ of an uncorrelated network. This result is in good
agreement with the numerical results. We also analyze the extended NG model
that allows for the presence of committed nodes and show that there is a shift of
the tipping point for social consensus in sparse networks.
|
1304.7402 | Stopping Sets of Algebraic Geometry Codes | cs.IT math.IT | Stopping sets and stopping set distribution of a linear code play an
important role in the performance analysis of iterative decoding for this
linear code. Let $C$ be an $[n,k]$ linear code over $\f$ with parity-check
matrix $H$, where the rows of $H$ may be dependent. Let $[n]=\{1,2,...,n\}$
denote the set of column indices of $H$. A \emph{stopping set} $S$ of $C$ with
parity-check matrix $H$ is a subset of $[n]$ such that the restriction of $H$
to $S$ does not contain a row of weight 1. The \emph{stopping set distribution}
$\{T_{i}(H)\}_{i=0}^{n}$ enumerates the number of stopping sets with size $i$
of $C$ with parity-check matrix $H$. Denote by $H^{*}$ the parity-check matrix
consisting of all the non-zero codewords in the dual code $C^{\bot}$. In this
paper, we study stopping sets and stopping set distributions of some residue
algebraic geometry (AG) codes with parity-check matrix $H^*$. First, we give
two descriptions of stopping sets of residue AG codes. For the simplest AG
codes, i.e., the generalized Reed-Solomon codes, it is easy to determine all
the stopping sets. Then we consider AG codes from elliptic curves. We use the
group structure of rational points of elliptic curves to present a complete
characterization of stopping sets. Then the stopping sets, the stopping set
distribution and the stopping distance of the AG code from an elliptic curve
are reduced to the search, counting and decision versions of the subset sum
problem in the group of rational points of the elliptic curve, respectively.
Finally, for some special cases, we determine the stopping set distributions of
AG codes from elliptic curves.
|
1304.7423 | On Integrating Fuzzy Knowledge Using a Novel Evolutionary Algorithm | cs.NE cs.AI | Fuzzy systems may be considered as knowledge-based systems that incorporate
human knowledge into their knowledge base through fuzzy rules and fuzzy
membership functions. The intent of this study is to present a fuzzy knowledge
integration framework using a Novel Evolutionary Strategy (NES), which can
simultaneously integrate multiple fuzzy rule sets and their membership function
sets. The proposed approach consists of two phases: fuzzy knowledge encoding
and fuzzy knowledge integration. Four application domains, namely hepatitis
diagnosis, sugarcane breeding prediction, Iris plants classification, and the
Tic-tac-toe endgame, were used to show the performance of the proposed knowledge
approach. Results show that the fuzzy knowledge base derived using our approach
performs better than a Genetic Algorithm based approach.
|
1304.7432 | Sybil-proof Mechanisms in Query Incentive Networks | cs.GT cs.SI | In this paper, we study incentive mechanisms for retrieving information from
networked agents. Following the model in [Kleinberg and Raghavan 2005], the
agents are represented as nodes in an infinite tree, which is generated by a
random branching process. A query is issued by the root, and each node
possesses an answer with an independent probability $p=1/n$. Further, each node
in the tree acts strategically to maximize its own payoff. In order to
encourage the agents to participate in the information acquisition process, an
incentive mechanism is needed to reward agents who provide the information as
well as agents who help to facilitate such acquisition.
We focus on designing efficient sybil-proof incentive mechanisms, i.e., which
are robust to fake identity attacks. We propose a family of
mechanisms, called the direct referral (DR) mechanisms, which allocate most
reward to the information holder as well as its direct parent (or direct
referral). We show that, when designed properly, the direct referral mechanism
is sybil-proof and efficient. In particular, we show that we may achieve an
expected cost of $O(h^2)$ for propagating the query down $h$ levels for any
branching factor $b>1$. This result exponentially improves on previous work
when an answer must be found with high probability. When the underlying
network is a deterministic chain, our mechanism is optimal under some mild
assumptions. In addition, due to its simple reward structure, the DR mechanism
may have a good chance of being adopted in practice.
|
1304.7434 | Low Complexity Joint Estimation of Synchronization Impairments in Sparse
Channel for MIMO-OFDM System | cs.IT math.IT | Low complexity joint estimation of synchronization impairments and channel in
a single-user MIMO-OFDM system is presented in this letter. Based on a system
model that takes into account the effects of synchronization impairments such
as carrier frequency offset, sampling frequency offset, and symbol timing
error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint
estimation is proposed. To reduce the complexity of the ML grid search, the number
of received signal samples used for estimation needs to be reduced. The
conventional channel estimation methods using Least-Squares (LS) fail for the
reduced sample under-determined system, which results in poor performance of
the joint estimator. The proposed ML algorithm uses Compressed Sensing (CS)
based channel estimation method in a sparse fading scenario, where the received
samples used for estimation are fewer than those required for an LS based
estimation. The performance of the estimation method is studied through
numerical simulations, and it is observed that the CS based joint estimator
performs better than the LS based joint estimator.
|
1304.7435 | Statistical characterization of kappa-mu shadowed fading | cs.IT math.IT stat.AP | This paper investigates a natural generalization of the kappa-mu fading
channel in which the line-of-sight (LOS) component is subject to shadowing.
This fading distribution has a clear physical interpretation, good analytical
properties and unifies the one-sided Gaussian, Rayleigh, Nakagami-m, Ricean,
kappa-mu and Ricean shadowed fading distributions. The three basic statistical
characterizations, i.e. probability density function (PDF), cumulative
distribution function (CDF) and moment generating function (MGF), of the
kappa-mu shadowed distribution are obtained in closed-form. Then, it is also
shown that the sum and maximum distributions of independent but arbitrarily
distributed kappa-mu shadowed variates can be expressed in closed-form. This
set of new statistical results is finally applied to the performance analysis
of several wireless communication systems.
|
1304.7457 | On the Effect of Correlated Measurements on the Performance of
Distributed Estimation | cs.IT math.IT | We address the distributed estimation of an unknown scalar parameter in
Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations
over a multiple access channel to a Fusion Center (FC) that reconstructs the
source parameter. The received signal is corrupted by noise and channel fading,
so that the FC objective is to minimize the Mean-Square Error (MSE) of the
estimate. In this paper, we assume sensor node observations to be correlated
with the source signal and correlated with each other as well. The correlation
coefficient between two observations is exponentially decaying with the
distance separation. The effect of the distance-based correlation on the
estimation quality is demonstrated and compared with the case of unity
correlated observations. Moreover, a closed-form expression for the outage
probability is derived and its dependency on the correlation coefficients is
investigated. Numerical simulations are provided to verify our analytic
results.
|
1304.7461 | A maximization problem in tropical mathematics: a complete solution and
application examples | math.OC cs.SY | A multidimensional optimization problem is formulated in the tropical
mathematics setting: to maximize a nonlinear objective function, which is
defined through a multiplicative conjugate transposition operator on vectors in
a finite-dimensional semimodule over a general idempotent semifield. The study
is motivated by problems drawn from project scheduling, where the deviation
between initiation or completion times of activities in a project is to be
maximized subject to various precedence constraints among the activities. To
solve the unconstrained problem, we first establish an upper bound for the
objective function, and then solve a system of vector equations to find all
vectors that yield the bound. As a corollary, an extension of the solution to
handle constrained problems is discussed. The results obtained are applied to
give complete direct solutions to the motivating problems from project
scheduling. Numerical examples of the development of optimal schedules are also
presented.
|
1304.7465 | Deterministic Initialization of the K-Means Algorithm Using Hierarchical
Clustering | cs.LG cs.CV | K-means is undoubtedly the most widely used partitional clustering algorithm.
Unfortunately, due to its gradient descent nature, this algorithm is highly
sensitive to the initial placement of the cluster centers. Numerous
initialization methods have been proposed to address this problem. Many of
these methods, however, have superlinear complexity in the number of data
points, making them impractical for large data sets. On the other hand, linear
methods are often random and/or order-sensitive, which renders their results
unrepeatable. Recently, Su and Dy proposed two highly successful hierarchical
initialization methods named Var-Part and PCA-Part that are not only linear,
but also deterministic (non-random) and order-invariant. In this paper, we
propose a discriminant analysis based approach that addresses a common
deficiency of these two methods. Experiments on a large and diverse collection
of data sets from the UCI Machine Learning Repository demonstrate that Var-Part
and PCA-Part are highly competitive with one of the best random initialization
methods to date, i.e., k-means++, and that the proposed approach significantly
improves the performance of both hierarchical methods.
|
1304.7468 | Selection and Influence in Cultural Dynamics | cs.GT cs.SI physics.soc-ph | One of the fundamental principles driving diversity or homogeneity in domains
such as cultural differentiation, political affiliation, and product adoption
is the tension between two forces: influence (the tendency of people to become
similar to others they interact with) and selection (the tendency to be
affected most by the behavior of others who are already similar). Influence
tends to promote homogeneity within a society, while selection frequently
causes fragmentation. When both forces act simultaneously, it becomes an
interesting question to analyze which societal outcomes should be expected.
To study this issue more formally, we analyze a natural stylized model built
upon active lines of work in political opinion formation, cultural diversity,
and language evolution. We assume that the population is partitioned into
"types" according to some traits (such as language spoken or political
affiliation). While all types of people interact with one another, only people
with sufficiently similar types can possibly influence one another. The
"similarity" is captured by a graph on types in which individuals of the same
or adjacent types can influence one another. We achieve an essentially complete
characterization of (stable) equilibrium outcomes and prove convergence from
all starting states. We also consider generalizations of this model.
|
1304.7480 | The Ergodic Capacity of the Multiple Access Channel Under Distributed
Scheduling - Order Optimality of Linear Receivers | cs.IT math.IT | Consider the problem of a Multiple-Input Multiple-Output (MIMO)
Multiple-Access Channel (MAC) in the limit of a large number of users. Clearly,
in practical scenarios, only a small subset of the users can be scheduled to
utilize the channel simultaneously. Thus, a problem of user selection arises.
However, since solutions which collect Channel State Information (CSI) from all
users and decide on the best subset to transmit in each slot do not scale when
the number of users is large, distributed algorithms for user selection are
advantageous.
In this paper, we analyse a distributed user selection algorithm, which
selects a group of users to transmit without coordinating between users and
without all users sending CSI to the base station. This threshold-based
algorithm is analysed for both Zero-Forcing (ZF) and Minimum Mean Square Error
(MMSE) receivers, and its expected sum-rate in the limit of a large number of
users is investigated. It is shown that for a large number of users it achieves
the same scaling laws as the optimal centralized scheme.
|
1304.7487 | Design of Non-Binary Quasi-Cyclic LDPC Codes by ACE Optimization | cs.IT math.IT | An algorithm for constructing Tanner graphs of non-binary irregular
quasi-cyclic LDPC codes is introduced. It employs a new method for selection of
edge labels allowing control over the code's non-binary ACE spectrum and
resulting in a low error floor. The efficiency of the algorithm is demonstrated
by generating good codes of short to moderate length over small fields,
outperforming codes generated by the known methods.
|
1304.7507 | Measuring Cultural Relativity of Emotional Valence and Arousal using
Semantic Clustering and Twitter | cs.CL cs.AI | Researchers since at least Darwin have debated whether and to what extent
emotions are universal or culture-dependent. However, previous studies have
primarily focused on facial expressions and on a limited set of emotions. Given
that emotions have a substantial impact on human lives, evidence for cultural
emotional relativity might be derived by applying distributional semantics
techniques to a text corpus of self-reported behaviour. Here, we explore this
idea by measuring the valence and arousal of the twelve most popular emotion
keywords expressed on the micro-blogging site Twitter. We do this in three
geographical regions: Europe, Asia and North America. We demonstrate that in
our sample, the valence and arousal levels of the same emotion keywords differ
significantly across these geographical regions: Europeans are, or at least
present themselves as, more positive and aroused; North Americans are
more negative and Asians appear to be more positive but less aroused when
compared to global valence and arousal levels of the same emotion keywords. Our
work is the first of its kind to programmatically map large text corpora to a
dimensional model of affect.
|
1304.7509 | Optimized Backhaul Compression for Uplink Cloud Radio Access Network | cs.IT math.IT | This paper studies the uplink of a cloud radio access network (C-RAN) where
the cell sites are connected to a cloud-computing-based central processor (CP)
with noiseless backhaul links with finite capacities. We employ a simple
compress-and-forward scheme in which the base stations (BSs) quantize the
received signals and send the quantized signals to the CP using either
distributed Wyner-Ziv coding or single-user compression. The CP decodes the
quantization codewords first, then decodes the user messages as if the remote
users and the cloud center form a virtual multiple-access channel (VMAC). This
paper formulates the problem of optimizing the quantization noise levels for
weighted sum rate maximization under a sum backhaul capacity constraint. We
propose an alternating convex optimization approach to find a locally optimal
solution to the problem efficiently and, more importantly, establish that
setting the quantization noise levels to be proportional to the background
noise levels is near optimal for sum-rate maximization when the
signal-to-quantization-noise ratio (SQNR) is high. In addition, with Wyner-Ziv
coding, the approximate quantization noise level is shown to achieve the
sum-capacity of the uplink C-RAN model to within a constant gap. With
single-user compression, a similar constant-gap result is obtained under a
diagonal dominant channel condition. These results lead to an efficient
algorithm for allocating the backhaul capacities in C-RAN. The performance of
the proposed scheme is evaluated for practical multicell and heterogeneous
networks. It is shown that multicell processing with optimized quantization
noise levels across the BSs can significantly improve the performance of
wireless cellular networks.
|
1304.7517 | A New Analysis of the DS-CDMA Cellular Uplink Under Spatial Constraints | cs.IT math.IT | A new analysis is presented for the direct-sequence code-division multiple
access (DS-CDMA) cellular uplink. For a given network topology, closed-form
expressions are found for the outage probability and rate of each uplink in the
presence of path-dependent Nakagami fading and log-normal shadowing. The
topology may be arbitrary or modeled by a random spatial distribution for a
fixed number of base stations and mobiles placed over a finite area with the
separations among them constrained to exceed a minimum distance. The analysis
is more detailed and accurate than existing ones and facilitates the resolution
of network design issues, including the influence of the minimum base-station
separation, the role of the spreading factor, and the impact of various
power-control and rate-control policies. It is shown that once power control is
established, the rate can be allocated according to a fixed-rate or
variable-rate policy with the objective of either meeting an outage constraint
or maximizing throughput. An advantage of the variable-rate policy is that it
allows an outage constraint to be enforced on every uplink, whereas the
fixed-rate policy can only meet an average outage constraint.
|
1304.7528 | Semi-supervised Eigenvectors for Large-scale Locally-biased Learning | cs.LG math.SP stat.ML | In many applications, one has side information, e.g., labels that are
provided in a semi-supervised manner, about a specific target region of a large
data set, and one wants to perform machine learning and data analysis tasks
"nearby" that prespecified target region. For example, one might be interested
in the clustering structure of a data graph near a prespecified "seed set" of
nodes, or one might be interested in finding partitions in an image that are
near a prespecified "ground truth" set of pixels. Locally-biased problems of
this sort are particularly challenging for popular eigenvector-based machine
learning and data analysis tools. At root, the reason is that eigenvectors are
inherently global quantities, thus limiting the applicability of
eigenvector-based methods in situations where one is interested in very local
properties of the data.
In this paper, we address this issue by providing a methodology to construct
semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these
locally-biased eigenvectors can be used to perform locally-biased machine
learning. These semi-supervised eigenvectors capture
successively-orthogonalized directions of maximum variance, conditioned on
being well-correlated with an input seed set of nodes that is assumed to be
provided in a semi-supervised manner. We show that these semi-supervised
eigenvectors can be computed quickly as the solution to a system of linear
equations; and we also describe several variants of our basic method that have
improved scaling properties. We provide several empirical examples
demonstrating how these semi-supervised eigenvectors can be used to perform
locally-biased learning; and we discuss the relationship between our results
and recent machine learning algorithms that use global eigenvectors of the
graph Laplacian.
|
1304.7539 | Compressive parameter estimation in AWGN | cs.IT math.IT | Compressed sensing is by now well-established as an effective tool for
extracting sparsely distributed information, where sparsity is a discrete
concept, referring to the number of dominant nonzero signal components in some
basis for the signal space. In this paper, we establish a framework for
estimation of continuous-valued parameters based on compressive measurements on
a signal corrupted by additive white Gaussian noise (AWGN). While standard
compressed sensing based on naive discretization has been shown to suffer from
performance loss due to basis mismatch, we demonstrate that this is not an
inherent property of compressive measurements. Our contributions are summarized
as follows: (a) We identify the isometries required to preserve fundamental
estimation-theoretic quantities such as the Ziv-Zakai bound (ZZB) and the
Cramer-Rao bound (CRB). Under such isometries, compressive projections can be
interpreted simply as a reduction in "effective SNR." (b) We show that the
threshold behavior of the ZZB provides a criterion for determining the minimum
number of measurements for "accurate" parameter estimation. (c) We provide
detailed computations of the number of measurements needed for the isometries
in (a) to hold for the problem of frequency estimation in a mixture of
sinusoids. We show via simulations that the design criterion in (b) is accurate
for estimating the frequency of a single sinusoid.
|
1304.7544 | Monoidify! Monoids as a Design Principle for Efficient MapReduce
Algorithms | cs.DC cs.DB cs.PL | It is well known that since the sort/shuffle stage in MapReduce is costly,
local aggregation is one important principle to designing efficient algorithms.
This short paper represents an attempt to more clearly articulate this design
principle in terms of monoids, which generalizes the use of combiners and the
in-mapper combining pattern.
|
1304.7548 | Adaptive Reduced-Rank RLS Algorithms based on Joint Iterative
Optimization of Filters for Space-Time Interference Suppression | cs.IT math.IT | This paper presents novel adaptive reduced-rank filtering algorithms based on
joint iterative optimization of adaptive filters. The novel scheme consists of
a joint iterative optimization of a bank of full-rank adaptive filters that
constitute the projection matrix and an adaptive reduced-rank filter that
operates at the output of the bank of filters. We describe least squares (LS)
expressions for the design of the projection matrix and the reduced-rank filter
and recursive least squares (RLS) adaptive algorithms for its computationally
efficient implementation. Simulations for space-time interference suppression
in a CDMA system show that the proposed scheme outperforms state-of-the-art
reduced-rank schemes in convergence and tracking at about the same complexity.
|
1304.7552 | Adaptive Decision Feedback Reduced-Rank Equalization Based on Joint
Iterative Optimization of Adaptive Estimation Algorithms for Multi-Antenna
Systems | cs.IT math.IT | This paper presents a novel adaptive reduced-rank multiple-input multiple-output
(MIMO) decision feedback equalization structure based on joint iterative
optimization of adaptive estimators. The novel reduced-rank equalization
structure consists of a joint iterative optimization of two equalization
stages, namely, a projection matrix that performs dimensionality reduction and
a reduced-rank estimator that retrieves the desired transmitted symbol. The
proposed reduced-rank structure is followed by a decision feedback scheme that
is responsible for cancelling the inter-antenna interference caused by the
associated data streams. We describe least squares (LS) expressions for the
design of the projection matrix and the reduced-rank estimator along with
computationally efficient recursive least squares (RLS) adaptive estimation
algorithms. Simulations for a MIMO equalization application show that the
proposed scheme outperforms the state-of-the-art reduced-rank and the
conventional estimation algorithms at about the same complexity.
|
1304.7576 | Fractal structures in Adversarial Prediction | cs.LG | Fractals are self-similar recursive structures that have been used in
modeling several real world processes. In this work we study how "fractal-like"
processes arise in a prediction game where an adversary is generating a
sequence of bits and an algorithm is trying to predict them. We will see that
under a certain formalization of the predictive payoff for the algorithm it is
optimal for the adversary to produce a fractal-like sequence to minimize
the algorithm's ability to predict. Indeed it has been suggested before that
financial markets exhibit a fractal-like behavior. We prove that a fractal-like
distribution arises naturally out of an optimization from the adversary's
perspective.
In addition, we give optimal trade-offs between predictability and expected
deviation (i.e. sum of bits) for our formalization of predictive payoff. This
result is motivated by the observation that several time series data exhibit
higher deviations than expected for a completely random walk.
|
1304.7577 | Optimal amortized regret in every interval | cs.LG cs.DS stat.ML | Consider the classical problem of predicting the next bit in a sequence of
bits. A standard performance measure is {\em regret} (loss in payoff) with
respect to a set of experts. For example, if we measure performance with respect
to two constant experts, one that always predicts 0's and another that always
predicts 1's, it is well known that one can get regret $O(\sqrt T)$ with respect
to the best expert by using, say, the weighted majority algorithm. But this
algorithm does not provide a performance guarantee in any interval.
other algorithms that ensure regret $O(\sqrt {x \log T})$ in any interval of
length $x$. In this paper we show a randomized algorithm that in an amortized
sense gets a regret of $O(\sqrt x)$ for any interval when the sequence is
partitioned into intervals arbitrarily. We empirically estimated the constant
in the $O()$ for $T$ up to 2000 and found it to be small, around 2.1. We also
experimentally evaluate the efficacy of this algorithm in predicting high
frequency stock data.
|
1304.7607 | A Discrete State Transition Algorithm for Generalized Traveling Salesman
Problem | math.OC cs.AI cs.NE | The generalized traveling salesman problem (GTSP) is an extension of the
classical traveling salesman problem (TSP), and is an NP-hard combinatorial
optimization problem. In this paper, an efficient discrete state transition
algorithm (DSTA) for GTSP is proposed, where a new local search operator named
\textit{K-circle}, directed by neighborhood information in space, has been
introduced to DSTA to shrink search space and strengthen search ability. A
novel robust update mechanism, restore in probability and risk in probability
(Double R-Probability), is used in our work to escape from local minima. The
proposed algorithm is tested on a set of GTSP instances. Compared with other
heuristics, experimental results have demonstrated the effectiveness and strong
adaptability of DSTA and also show that DSTA has better search ability than its
competitors.
|
1304.7622 | Optimal Design of Water Distribution Networks by Discrete State
Transition Algorithm | math.OC cs.IT math.CO math.IT math.PR | Optimal design of water distribution networks, which are governed by a series
of linear and nonlinear equations, has been extensively studied in the past
decades. Due to their NP-hardness, methods to solve the optimization problem
have changed from traditional mathematical programming to modern intelligent
optimization techniques. In this study, with respect to the model formulation,
we have demonstrated that the network system can be reduced to the
dimensionality of the number of closed simple loops or required independent
paths, and the reduced nonlinear system can be solved efficiently by the
Newton-Raphson method. Regarding the optimization technique, a discrete state
transition algorithm (STA) is introduced to solve several cases of water
distribution networks. In discrete STA, there exist four basic intelligent
operators, namely, swap, shift, symmetry and substitute as well as the "risk
and restore in probability" strategy. Firstly, we focus on a parametric study
of the restore probability $p_1$ and risk probability $p_2$. To effectively
deal with the head pressure constraints, we then investigate the effect of
penalty coefficient and search enforcement on the performance of the algorithm.
Based on the experience gained from the training of the Two-Loop network
problem, the discrete STA has successfully achieved the best known solutions
for the Hanoi and New York problems. A detailed comparison of our results with
those gained by other algorithms is also presented.
|
1304.7638 | Lobby index as a network centrality measure | cs.SI cs.DL physics.soc-ph | We study the lobby index (l-index for short) as a local node centrality
measure for complex networks. The l-index is compared with degree (a local
measure) and with betweenness and eigenvector centralities (two global measures)
on a biological network (the yeast protein-protein interaction network) and a
linguistic network (Moby Thesaurus II). In both networks, the l-index correlates
poorly with betweenness but correlates with degree and eigenvector centrality.
As a local measure, the l-index is advantageous: it carries more information
about a node's neighborhood than degree centrality, yet requires less time to
compute than eigenvector centrality. Results suggest that the l-index produces
better rankings than the degree and eigenvector measures, making it a suitable
tool for this task.
|
1304.7700 | Information-theoretic tools for parametrized coarse-graining of
non-equilibrium extended systems | physics.comp-ph cs.IT math.IT physics.data-an | In this paper we focus on the development of new methods suitable for
efficient and reliable coarse-graining of {\it non-equilibrium} molecular
systems. In this context, we propose error estimation and controlled-fidelity
model reduction methods based on Path-Space Information Theory, and combine it
with statistical parametric estimation of rates for non-equilibrium stationary
processes. The approach we propose extends the applicability of existing
information-based methods for deriving parametrized coarse-grained models to
Non-Equilibrium systems with Stationary States (NESS). In the context of
coarse-graining it allows for constructing optimal parametrized Markovian
coarse-grained dynamics, by minimizing information loss (due to
coarse-graining) on the path space. Furthermore, the associated path-space
Fisher Information Matrix can provide confidence intervals for the
corresponding parameter estimators. We demonstrate the proposed coarse-graining
method in a non-equilibrium system with diffusing interacting particles, driven
by out-of-equilibrium boundary conditions.
|
1304.7710 | Learning Geo-Temporal Non-Stationary Failure and Recovery of Power
Distribution | cs.SY cs.LG physics.soc-ph | Smart energy grid is an emerging area for new applications of machine
learning in a non-stationary environment. Such a non-stationary environment
emerges when large-scale failures occur at power distribution networks due to
external disturbances such as hurricanes and severe storms. Power distribution
networks lie at the edge of the grid, and are especially vulnerable to external
disruptions. Quantifiable approaches are lacking and needed to learn
non-stationary behaviors of large-scale failure and recovery of power
distribution. This work studies such non-stationary behaviors in three aspects.
First, a novel formulation is derived for an entire life cycle of large-scale
failure and recovery of power distribution. Second, spatial-temporal models of
failure and recovery of power distribution are developed as geo-location based
multivariate non-stationary GI(t)/G(t)/Infinity queues. Third, the
non-stationary spatial-temporal models identify a small number of parameters to
be learned. Learning is applied to two real-life examples of large-scale
disruptions. One is from Hurricane Ike, where data from an operational network
is exact on failures and recoveries. The other is from Hurricane Sandy, where
aggregated data is used for inferring failure and recovery processes at one of
the impacted areas. Model parameters are learned using real data. Two findings
emerge as results of learning: (a) Failure rates behave similarly at the two
different provider networks for two different hurricanes but differently at the
geographical regions. (b) Both rapid- and slow-recovery are present for
Hurricane Ike but only slow recovery is shown for a regional distribution
network from Hurricane Sandy.
|
1304.7713 | Markovian models for one dimensional structure estimation on heavily
noisy imagery | cs.CV stat.AP | Synthetic aperture radar (SAR) images often exhibit profound appearance
variations due to a variety of factors, including clutter noise produced by the
coherent nature of the illumination. Ultrasound and infrared images have a
similarly cluttered appearance that makes one-dimensional structures, such as
edges and object boundaries, difficult to locate. Structure information is
usually extracted in two steps: first, building an edge strength mask that
classifies pixels as edge points by hypothesis testing, and second, estimating
pixel-wide connected edges from that mask. With constant false alarm rate
(CFAR) edge strength detectors
for speckle clutter, the image needs to be scanned by a sliding window composed
of several differently oriented splitting sub-windows. The accuracy of edge
location for these ratio detectors depends strongly on the orientation of the
sub-windows. In this work we propose to transform the edge strength detection
problem into a binary segmentation problem in the undecimated wavelet domain,
solvable using parallel 1d Hidden Markov Models. For general dependency models,
exact estimation of the state map becomes computationally complex, but in our
model, exact MAP is feasible. The effectiveness of our approach is demonstrated
on simulated noisy real-life natural images with available ground truth, while
the strength of our output edge map is measured with Pratt's, Baddeley's, and Kappa
proficiency measures. Finally, analysis and experiments on three different
types of SAR images, with different polarizations, resolutions and textures,
illustrate that the proposed method can detect structure on SAR images
effectively, providing a very good starting point for active contour methods.
|
1304.7727 | Distributed stochastic optimization via correlated scheduling | math.OC cs.MA | This paper considers a problem where multiple users make repeated decisions
based on their own observed events. The events and decisions at each time step
determine the values of a utility function and a collection of penalty
functions. The goal is to make distributed decisions over time to maximize time
average utility subject to time average constraints on the penalties. An
example is a collection of power constrained sensor nodes that repeatedly
report their own observations to a fusion center. Maximum time average utility
is fundamentally reduced because users do not know the events observed by
others. Optimality is characterized for this distributed context. It is shown
that optimality is achieved by correlating user decisions through a commonly
known pseudorandom sequence. An optimal algorithm is developed that chooses
pure strategies at each time step based on a set of time-varying weights.
|
1304.7728 | Machine Translation Systems in India | cs.CL cs.CY | Machine Translation is the translation of one natural language into another
using automated and computerized means. For a multilingual country like India,
with the huge amount of information exchanged between various regions and in
different languages in digitized format, it has become necessary to find an
automated translation process from one language to another. In this paper, we
take a look at the various Machine Translation Systems in India which are
specifically built for translation between the Indian languages. We discuss the
various approaches taken for building the machine translation system and then
discuss some of the Machine Translation Systems in India along with their
features.
|
1304.7745 | On the Capacity of the Finite Field Counterparts of Wireless
Interference Networks | cs.IT math.IT | This work explores how degrees of freedom (DoF) results from wireless
networks can be translated into capacity results for their finite field
counterparts that arise in network coding applications. The main insight is
that scalar (SISO) finite field channels over $\mathbb{F}_{p^n}$ are analogous
to n x n vector (MIMO) channels in the wireless setting, but with an important
distinction -- there is additional structure due to finite field arithmetic
which enforces commutativity of matrix multiplication and limits the channel
diversity to n, making these channels similar to diagonal channels in the
wireless setting. Within the limits imposed by the channel structure, the DoF
optimal precoding solutions for wireless networks can be translated into
capacity optimal solutions for their finite field counterparts. This is shown
through the study of the 2-user X channel and the 3-user interference channel.
Besides bringing the insights from wireless networks into network coding
applications, the study of finite field networks over $\mathbb{F}_{p^n}$ also
touches upon important open problems in wireless networks (finite SNR, finite
diversity scenarios) through interesting parallels between p and SNR, and n and
diversity.
|
1304.7750 | The $abc$-problem for Gabor systems | cs.IT math.DS math.FA math.IT | A Gabor system generated by a window function $\phi$ and a rectangular
lattice $a \Z\times \Z/b$ is given by $${\mathcal G}(\phi, a \Z\times
\Z/b):=\{e^{-2\pi i n t/b} \phi(t- m a):\ (m, n)\in \Z\times \Z\}.$$ One of the
fundamental problems in Gabor analysis is to identify window functions $\phi$
and time-frequency shift lattices $a \Z\times \Z/b$ such that the corresponding
Gabor system ${\mathcal G}(\phi, a \Z\times \Z/b)$ is a Gabor frame for
$L^2(\R)$, the space of all square-integrable functions on the real line $\R$.
In this paper, we provide a full classification of triples $(a,b,c)$ for which
the Gabor system ${\mathcal G}(\chi_I, a \Z\times \Z/b)$ generated by the ideal
window function $\chi_I$ on an interval $I$ of length $c$ is a Gabor frame for
$L^2(\R)$. For the classification of such triples $(a, b, c)$ (i.e., the
$abc$-problem for Gabor systems), we introduce maximal invariant sets of some
piecewise linear transformations and establish the equivalence between Gabor
frame property and triviality of maximal invariant sets. We then study the
dynamical systems associated with the piecewise linear transformations and explore various
properties of their maximal invariant sets. By performing holes-removal surgery
for maximal invariant sets to shrink and augmentation operation for a line with
marks to expand, we finally parameterize those triples $(a, b, c)$ for which
maximal invariant sets are trivial. The novel techniques involving
non-ergodicity of dynamical systems associated with some novel non-contractive
and non-measure-preserving transformations lead to our arduous answer to the
$abc$-problem for Gabor systems.
|
1304.7751 | On the Minimax Capacity Loss under Sub-Nyquist Universal Sampling | cs.IT math.IT | This paper investigates the information rate loss in analog channels when the
sampler is designed to operate independently of the instantaneous channel
occupancy. Specifically, a multiband linear time-invariant Gaussian channel
under universal sub-Nyquist sampling is considered. The entire channel
bandwidth is divided into $n$ subbands of equal bandwidth. At each time only
$k$ constant-gain subbands are active, where the instantaneous subband
occupancy is not known at the receiver and the sampler. We study the
information loss through a capacity loss metric, that is, the capacity gap
caused by the lack of instantaneous subband occupancy information. We
characterize the minimax capacity loss for the entire sub-Nyquist rate regime,
provided that the number $n$ of subbands and the SNR are both large. The
minimax limits depend almost solely on the band sparsity factor and the
undersampling factor, modulo some residual terms that vanish as $n$ and SNR
grow. Our results highlight the power of randomized sampling methods (i.e. the
samplers that consist of random periodic modulation and low-pass filters),
which are able to approach the minimax capacity loss with exponentially high
probability.
|
1304.7755 | Majorization entropic uncertainty relations | quant-ph cs.IT math.IT | Entropic uncertainty relations in a finite dimensional Hilbert space are
investigated. Making use of the majorization technique, we derive explicit lower
bounds for the sum of R\'enyi entropies describing probability distributions
associated with a given pure state expanded in eigenbases of two observables.
The obtained bounds are expressed in terms of the largest singular values of
submatrices of the unitary rotation matrix. Numerical simulations show that for
a generic unitary matrix of size N = 5, our bound is stronger than the
well-known result of Maassen and Uffink (MU) with probability larger than 98%. We
also show that the bounds investigated are invariant under the dephasing and
permutation operations. Finally, we derive a classical analogue of the MU
uncertainty relation, which is formulated for stochastic transition matrices.
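As a concrete illustration of the kind of bound being compared, the Maassen-Uffink relation states $H(p) + H(q) \ge -2\ln c$, where $c$ is the largest overlap between the two eigenbases. A minimal numerical check can be sketched as follows; this is a toy 2-dimensional example with a hypothetical rotation angle and state, not the paper's $N = 5$ setting:

```python
import math

def shannon(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Two measurement bases related by a rotation U(theta); the MU bound says
# H(p) + H(q) >= -2 ln c, with c = max_ij |U_ij| the largest overlap.
theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# A pure state expressed in the first (computational) basis.
psi = [0.6, 0.8]

p = [a * a for a in psi]                         # outcome probabilities, basis 1
amps = [sum(U[i][j] * psi[j] for j in range(2))  # amplitudes in the rotated basis
        for i in range(2)]
q = [a * a for a in amps]

c = max(abs(U[i][j]) for i in range(2) for j in range(2))
mu_bound = -2 * math.log(c)
assert shannon(p) + shannon(q) >= mu_bound - 1e-12
```

Replacing the rotation by a random unitary of size N = 5 and comparing the MU bound against the majorization bound would reproduce the experiment described above.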
|
1304.7799 | Left Bit Right: For SPARQL Join Queries with OPTIONAL Patterns
(Left-outer-joins) | cs.DB | SPARQL basic graph pattern (BGP) (a.k.a. SQL inner-join) query optimization
is a well researched area. However, optimization of OPTIONAL pattern queries
(a.k.a. SQL left-outer-joins) poses additional challenges, due to the
restrictions on the \textit{reordering} of left-outer-joins. Such queries
account for as much as 50% of the total queries (e.g., in DBpedia query logs).
In this paper, we present \textit{Left Bit Right} (LBR), a technique for
\textit{well-designed} nested BGP and OPTIONAL pattern queries. Through LBR, we
propose a novel method to represent such queries using a graph of
\textit{supernodes}, which is used to aggressively prune the RDF triples, with
the help of compressed indexes. We also propose novel optimization strategies
-- first of a kind, to the best of our knowledge -- that combine together the
characteristics of \textit{acyclicity} of queries, \textit{minimality}, and
\textit{nullification}, \textit{best-match} operators. In this paper, we focus
on OPTIONAL patterns without UNIONs or FILTERs, but we also show how UNIONs and
FILTERs can be handled with our technique using a \textit{query rewrite}. Our
evaluation on RDF graphs of up to and over one billion triples, on a commodity
laptop with 8 GB memory, shows that LBR can process \textit{well-designed}
low-selectivity complex queries up to 11 times faster than state-of-the-art
RDF column-stores such as Virtuoso and MonetDB, while for highly selective
queries LBR is on par with them.
|
1304.7820 | Challenges on Probabilistic Modeling for Evolving Networks | cs.SI cs.AI physics.soc-ph | With the emergence of new networks such as wireless sensor networks, vehicular
networks, P2P networks, cloud computing, the mobile Internet, and social
networks, network dynamics and complexity extend from system design, hardware,
software, protocols, structures, integration, evolution, and application all
the way to business goals. Dynamics and uncertainty are thus unavoidable
characteristics; they arise from regular network evolution, unexpected hardware
defects, unavoidable software errors, incomplete management information, and
the dependency relationships between entities in these emerging complex
networks. Due to this complexity, it is not always possible to build precise
models for network modeling and optimization (local and global). This paper
surveys probabilistic modeling for evolving networks and identifies the new
challenges that arise for probabilistic models and optimization strategies in
the application areas of network performance, network management, and network
security for evolving networks.
|
1304.7843 | A Hybrid Rule Based Fuzzy-Neural Expert System For Passive Network
Monitoring | cs.AI cs.NI | An enhanced approach to network monitoring is to create a monitoring
tool with artificial-intelligence characteristics. Of the approaches available,
one is to combine rule-based reasoning, fuzzy logic, and neural networks into a
hybrid ANFIS system. Such a system has a dual knowledge-base design: one
database contains membership-function values used for comparison and deductive
reasoning, and another holds rules formulated by an expert (a network
administrator). The knowledge base is updated continuously with newly acquired
patterns. In short, the system comprises two parts: learning from data sets and
fine-tuning the knowledge base using a neural network, and making decisions
with fuzzy logic based on the rules and membership functions inside the
knowledge base. This paper discusses the idea, steps, and issues involved in
creating such a system.
|
1304.7851 | North Atlantic Right Whale Contact Call Detection | cs.LG cs.SD | The North Atlantic right whale (Eubalaena glacialis) is an endangered
species. These whales continuously suffer deadly vessel strikes along the
eastern coast of North America, and there have been many efforts to save the
remaining 350-400 individuals. One of the most prominent is by Marinexplore and
Cornell University: a system of hydrophones linked to satellite-connected buoys
has been deployed in the whales' habitat. These hydrophones record and transmit
live sounds to a base station. The recordings may contain the right whale
contact call as well as many other noises, and the noise rate increases rapidly
in vessel-busy areas such as near Boston Harbor. This paper presents and
studies the problem of detecting the North Atlantic right whale contact call in
the presence of noise and other marine-life sounds. A novel algorithm was
developed to preprocess the sound waves before a tree-based hierarchical
classifier classifies the data and provides a score. The model was trained on
30,000 data points made available through the Cornell University Whale
Detection Challenge. Results showed that the algorithm had a success rate close
to 85% in detecting the presence of the North Atlantic right whale.
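To make the detection task concrete, the toy sketch below stands in for the paper's (unspecified) preprocessing and tree-based classifier with a simple matched-filter score against a rising-chirp approximation of the contact call; the sample rate, chirp band, and noise level are all illustrative assumptions:

```python
import math
import random

random.seed(0)
FS, DUR = 2000, 1.0          # hypothetical sample rate (Hz) and clip length (s)
N = int(FS * DUR)

def chirp(f0=100.0, f1=200.0):
    """Linear chirp from f0 to f1 Hz, a rough stand-in for the upcall."""
    return [math.sin(2 * math.pi * (f0 + (f1 - f0) * i / (2 * N)) * i / FS)
            for i in range(N)]

template = chirp()

def score(x):
    """Normalized matched-filter correlation against the upcall template."""
    num = sum(a * b for a, b in zip(x, template))
    return num / math.sqrt(sum(a * a for a in x) *
                           sum(b * b for b in template))

call = [c + random.gauss(0, 0.5) for c in chirp()]    # noisy call clip
noise = [random.gauss(0, 0.5) for _ in range(N)]      # noise-only clip
assert score(call) > 0.5 > abs(score(noise))
```

A real detector would replace this single score with spectrogram features fed to a hierarchy of tree classifiers, as the abstract describes.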
|
1304.7854 | On the Complexity of Query Answering under Matching Dependencies for
Entity Resolution | cs.DB | Matching Dependencies (MDs) are a relatively recent proposal for declarative
entity resolution. They are rules that specify, given the similarities
satisfied by values in a database, what values should be considered duplicates,
and have to be matched. On the basis of a chase-like procedure for MD
enforcement, we can obtain clean (duplicate-free) instances; in fact, possibly
several of them. The resolved answers to queries are those that are invariant
under the resulting class of resolved instances. In previous work we identified
some tractable cases (i.e. for certain classes of queries and MDs) of resolved
query answering. In this paper we further investigate the complexity of this
problem, identifying some intractable cases. For a special case we obtain a
dichotomy complexity result.
|
1304.7855 | Enhancements to ACL2 in Versions 5.0, 6.0, and 6.1 | cs.MS cs.AI cs.LO | We report on highlights of the ACL2 enhancements introduced in ACL2 releases
since the 2011 ACL2 Workshop. Although many enhancements are critical for
soundness or robustness, we focus in this paper on those improvements that
could benefit users who are aware of them, but that might not be discovered in
everyday practice.
|
1304.7886 | Throughput Maximization in Wireless Powered Communication Networks | cs.IT math.IT | This paper studies the newly emerging wireless powered communication network
(WPCN) in which one hybrid access point (H-AP) with constant power supply
coordinates the wireless energy/information transmissions to/from distributed
users that do not have energy sources. A "harvest-then-transmit" protocol is
proposed where all users first harvest the wireless energy broadcast by the
H-AP in the downlink (DL) and then send their independent information to the
H-AP in the uplink (UL) by time-division-multiple-access (TDMA). First, we
study the sum-throughput maximization of all users by jointly optimizing the
time allocation for the DL wireless power transfer versus the users' UL
information transmissions given a total time constraint based on the users' DL
and UL channels as well as their average harvested energy values. By applying
convex optimization techniques, we obtain the closed-form expressions for the
optimal time allocations to maximize the sum-throughput. Our solution reveals
a "doubly near-far" phenomenon due to both the DL and UL distance-dependent
signal attenuation, where a user far from the H-AP, which receives less
wireless energy than a nearer user in the DL, has to transmit with more power
in the UL for reliable information transmission. Consequently, the maximum
sum-throughput is achieved by allocating substantially more time to the near
users than the far users, thus resulting in unfair rate allocation among
different users. To overcome this problem, we further propose a new
performance metric, termed common-throughput, with the additional constraint
that all users are allocated an equal rate regardless of their
distances to the H-AP. We present an efficient algorithm to solve the
common-throughput maximization problem. Simulation results demonstrate the
effectiveness of the common-throughput approach for solving the new doubly
near-far problem in WPCNs.
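The time-allocation trade-off described above can be illustrated numerically. The sketch below is a hypothetical two-user toy model in which a brute-force grid search stands in for the paper's closed-form convex-optimization solution; the effective gains `gamma` are illustrative assumptions, not values from the paper:

```python
import math

# Toy two-user WPCN: tau0 is the DL energy-harvesting fraction, tau1/tau2 the
# UL transmission fractions, with tau0 + tau1 + tau2 = 1.  With effective
# gains gamma_i, user i's throughput is tau_i * log2(1 + gamma_i * tau0 /
# tau_i), so a far user (small gamma) contributes little per unit of time.
gamma = [10.0, 1.0]   # hypothetical near-user and far-user gains

def sum_throughput(tau0, tau1):
    tau2 = 1.0 - tau0 - tau1
    if min(tau0, tau1, tau2) <= 0:
        return float("-inf")
    return (tau1 * math.log2(1 + gamma[0] * tau0 / tau1)
            + tau2 * math.log2(1 + gamma[1] * tau0 / tau2))

# Brute-force grid search in place of the closed-form optimal allocation.
grid = [i / 200 for i in range(1, 200)]
rate, t0, t1 = max((sum_throughput(a, b), a, b) for a in grid for b in grid)

# The near user (gamma = 10) is given far more UL time than the far user,
# illustrating the unfair rate allocation of sum-throughput maximization.
assert t1 > 1.0 - t0 - t1
```

The common-throughput formulation would instead constrain both users to the same rate, trading total throughput for fairness.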
|
1304.7920 | From Ordinary Differential Equations to Structural Causal Models: the
deterministic case | stat.OT cs.AI | We show how, and under which conditions, the equilibrium states of a
first-order Ordinary Differential Equation (ODE) system can be described with a
deterministic Structural Causal Model (SCM). Our exposition sheds more light on
the concept of causality as expressed within the framework of Structural Causal
Models, especially for cyclic models.
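A minimal sketch of the idea, under assumed linear dynamics chosen purely for illustration (the coefficients and the Euler scheme are not from the paper): the equilibrium state of a first-order ODE coincides with the assignment of a deterministic SCM.

```python
# Toy illustration: the ODE  dx/dt = -a*x + b*y  (with y held fixed) settles
# to the equilibrium x* = (b/a)*y, which reads as the structural assignment
#   x := (b/a) * y
# in a deterministic SCM.  a, b, y, and the step size are hypothetical.
a, b, y = 2.0, 3.0, 1.5

x, dt = 0.0, 1e-3
for _ in range(20000):          # forward-Euler integration to equilibrium
    x += dt * (-a * x + b * y)

scm_x = (b / a) * y             # SCM prediction of the equilibrium state
assert abs(x - scm_x) < 1e-3
```

Intervening on y and re-running the integration reproduces the SCM's interventional prediction, which is the correspondence the paper makes precise.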
|
1304.7928 | Accurate and Robust Indoor Localization Systems using Ultra-wideband
Signals | cs.ET cs.IT math.IT | Indoor localization systems that are accurate and robust with respect to
propagation channel conditions are still a technical challenge today. In
particular, for systems based on range measurements from radio signals,
non-line-of-sight (NLOS) situations can result in large position errors. In
this paper, we address these issues using measurements in a representative
indoor environment. Results show that conventional tracking schemes using
high- and low-complexity ranging algorithms are strongly impaired by NLOS
conditions unless a very large signal bandwidth is used. Furthermore, we
discuss and evaluate the performance of multipath-assisted indoor navigation
and tracking (MINT), which can overcome these impairments by making use of
multipath propagation. Across a wide range of bandwidths, MINT shows superior
performance compared to conventional schemes, and virtually no degradation in
its robustness due to NLOS conditions.
|
1304.7942 | ManTIME: Temporal expression identification and normalization in the
TempEval-3 challenge | cs.CL | This paper describes a temporal expression identification and normalization
system, ManTIME, developed for the TempEval-3 challenge. The identification
phase combines the use of conditional random fields along with a
post-processing identification pipeline, whereas the normalization phase is
carried out using NorMA, an open-source rule-based temporal normalizer. We
investigate the performance variation with respect to different feature types.
Specifically, we show that the use of WordNet-based features in the
identification task negatively affects the overall performance, and that there
is no statistically significant difference when using gazetteers, shallow
parsing, and propositional noun phrase labels on top of the morphological
features. On
the test data, the best run achieved 0.95 (P), 0.85 (R) and 0.90 (F1) in the
identification phase. Normalization accuracies are 0.84 (type attribute) and
0.77 (value attribute). Surprisingly, the use of silver data (alone or in
addition to the gold-annotated data) does not improve performance.
|
1304.7948 | Convolutional Neural Networks learn compact local image descriptors | cs.CV | A standard deep convolutional neural network paired with a suitable loss
function learns compact local image descriptors that perform comparably to
state-of-the-art approaches.
|
1304.7966 | Performance of a Multiple-Access DCSK-CC System over Nakagami-$m$ Fading
Channels | cs.IT cs.PF math.IT | In this paper, we propose a novel cooperative scheme to enhance the
performance of multiple-access (MA) differential-chaos-shift-keying (DCSK)
systems. We provide the bit-error-rate (BER) performance and throughput
analyses for the new system with a decode-and-forward (DF) protocol over
Nakagami-$m$ fading channels. Our simulation results not only show that this
system significantly improves the BER performance as compared to the existing
DCSK non-cooperative (DCSK-NC) system and the multiple-input multiple-output
DCSK (MIMO-DCSK) system, but also verify the theoretical analyses. Furthermore,
we show that the throughput of this system approximately equals that of the
DCSK-NC system, both of which have prominent improvements over the MIMO-DCSK
system. We thus believe that the proposed system can be a good framework for
chaos-modulation-based wireless communications.
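For readers unfamiliar with the underlying modulation, the sketch below illustrates plain single-user DCSK over an AWGN channel with hypothetical parameters; the paper's cooperative DF protocol, multiple access, and Nakagami-$m$ fading are not modeled:

```python
import math
import random

# Minimal DCSK sketch: each bit is sent as a chaotic reference segment
# followed by +/- the same segment; the receiver decides by correlating the
# two halves, so no chaotic synchronization is required.
random.seed(1)
M = 64                          # chips per half-symbol (illustrative)

def logistic_chips(x, n):
    """Chaotic reference from the logistic map, shifted to zero mean."""
    out = []
    for _ in range(n):
        x = 3.99 * x * (1 - x)
        out.append(x - 0.5)
    return out, x

def send_bit(bit, x, snr_db):
    ref, x = logistic_chips(x, M)
    data = [c if bit else -c for c in ref]
    power = sum(c * c for c in ref) / M
    sigma = math.sqrt(power / (2 * 10 ** (snr_db / 10)))
    noisy = [c + random.gauss(0, sigma) for c in ref + data]
    corr = sum(noisy[i] * noisy[M + i] for i in range(M))
    return (corr > 0), x

errors, x = 0, 0.37
for i in range(2000):
    bit = i % 2 == 0
    decision, x = send_bit(bit, x, snr_db=12)
    errors += decision != bit
assert errors / 2000 < 0.05     # low BER at moderate SNR
```

The cooperative DCSK-CC scheme analyzed in the paper adds a DF relay phase on top of this basic signalling to improve the BER over fading channels.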
|