id | title | categories | abstract |
|---|---|---|---|
1307.2200 | Inconsistency and Accuracy of Heuristics with A* Search | cs.AI | Many studies in heuristic search suggest that the accuracy of the heuristic
used has a positive impact on the performance of the search. In another
direction, a long line of research holds that the performance of heuristic
search algorithms, such as A* and IDA*, can be improved by requiring the
heuristics to be consistent -- a property satisfied by any perfect heuristic.
However, a few recent studies show that inconsistent heuristics can also be
used to achieve large improvements in these heuristic search algorithms. These
results leave us with a natural question: which property of heuristics,
accuracy or consistency/inconsistency, should we focus on when building
heuristics? While there are studies of heuristic accuracy under the assumption
of consistency, to our knowledge there are no studies of both the
inconsistency and the accuracy of heuristics.
In this study, we investigate the relationship between the inconsistency and
the accuracy of heuristics with A* search. Our analytical result reveals a
correlation between these two properties. We then run experiments on the domain
for the Knapsack problem with a family of practical heuristics. Our empirical
results show that in many cases, the more accurate heuristics also have a
higher level of inconsistency and result in fewer node expansions by A*.
|
1307.2202 | TDOA assisted RSSD based localization using UWB and directional antennas | cs.IT math.IT | This paper studies the use of directional antennas for received signal
strength difference (RSSD) based localization using ultra-wideband and
demonstrates the achievable accuracy with this localization method applied to
UWB. As introduced in our previous work, the RSSD localization is assisted by
a single Time Difference of Arrival (TDOA) estimate. The use of directional
receiving antennas and an omni-directional transmitting antenna is assumed.
Localization is performed in 2D. Two localization approaches are considered:
RSSD based on a statistical channel model, and a fingerprinting approach. For
the statistical channel model, simulations are performed in Matlab; for the
fingerprinting approach, localization is based on real indoor measurements.
|
1307.2203 | Self-organization versus top-down planning in the evolution of a city | physics.soc-ph cond-mat.dis-nn cs.SI nlin.AO | Interventions of central, top-down planning are serious limitations to the
possibility of modelling the dynamics of cities. An example is the city of
Paris (France), which during the 19th century experienced large modifications
supervised by a central authority, the `Haussmann period'. In this article, we
report an empirical analysis of more than 200 years (1789-2010) of the
evolution of the street network of Paris. We show that the usual network
measures display a smooth behavior and that the most important quantitative
signatures of central planning are the spatial reorganization of centrality and
the modification of the block shape distribution. Such effects can only be
obtained by structural modifications at a large-scale level, with the creation
of new roads not constrained by the existing geometry. The evolution of a city
thus seems to result from the superimposition of continuous, local growth
processes and punctual changes operating at large spatial scales.
|
1307.2228 | The MacWilliams identity for $m$-spotty weight enumerator over
$\mathbb{F}_2+u\mathbb{F}_2+\cdots+u^{m-1}\mathbb{F}_2$ | cs.IT math.IT | The past few years have seen an extensive use of RAM chips with wide I/O data
(e.g. 16, 32, 64 bits) in computer memory systems. These chips are highly
vulnerable to a special type of byte error, called an $m$-spotty byte error,
which can be effectively detected or corrected using byte error-control codes.
The MacWilliams identity provides the relationship between the weight
distribution of a code and that of its dual. The main purpose of this paper is
to present a version of the MacWilliams identity for $m$-spotty weight
enumerators over
$\mathbb{F}_{2}+u\mathbb{F}_{2}+\cdots+u^{m-1}\mathbb{F}_{2}$ (denoted
$R_{u, m, 2}$ for short).
|
1307.2295 | Duality Codes and the Integrality Gap Bound for Index Coding | cs.IT math.IT | This paper considers a base station that delivers packets to multiple
receivers through a sequence of coded transmissions. All receivers overhear the
same transmissions. Each receiver may already have some of the packets as side
information, and requests another subset of the packets. This problem is known
as the index coding problem and can be represented by a bipartite digraph. An
integer linear program is developed that provides a lower bound on the minimum
number of transmissions required for any coding algorithm. Conversely, its
linear programming relaxation is shown to provide an upper bound that is
achievable by a simple form of vector linear coding. Thus, the information
theoretic optimum is bounded by the integrality gap between the integer program
and its linear relaxation. In the special case when the digraph has a planar
structure, the integrality gap is shown to be zero, so that exact optimality is
achieved. Finally, for non-planar problems, an enhanced integer program is
constructed that provides a smaller integrality gap. The dual of this problem
corresponds to a more sophisticated partial clique coding strategy that
time-shares between Reed-Solomon erasure codes. This work illuminates the
relationship between index coding, duality, and integrality gaps between
integer programs and their linear relaxations.
|
1307.2307 | Bridging Information Criteria and Parameter Shrinkage for Model
Selection | stat.ML cs.LG | Model selection based on classical information criteria, such as BIC, is
generally computationally demanding, but its properties are well studied. On
the other hand, model selection based on parameter shrinkage by $\ell_1$-type
penalties is computationally efficient. In this paper we make an attempt to
combine their strengths, and propose a simple approach that penalizes the
likelihood with data-dependent $\ell_1$ penalties as in adaptive Lasso and
exploits a fixed penalization parameter. Even for finite samples, its model
selection results approximately coincide with those based on information
criteria; in particular, we show that in some special cases, this approach and
the corresponding information criterion produce exactly the same model. One can
also consider this approach as a way to directly determine the penalization
parameter in adaptive Lasso to achieve information criteria-like model
selection. As extensions, we apply this idea to complex models including
Gaussian mixture model and mixture of factor analyzers, whose model selection
is traditionally difficult; by adopting suitable penalties, we provide
continuous approximators to the corresponding information criteria, which are
easy to optimize and enable efficient model selection.
|
1307.2312 | Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning | stat.ML cs.LG | Bayesian network structure learning algorithms with limited data are being
used in domains such as systems biology and neuroscience to gain insight into
the underlying processes that produce observed data. Learning reliable networks
from limited data is difficult; transfer learning can therefore improve the
robustness of learned networks by leveraging data from related tasks. Existing
transfer learning algorithms for Bayesian network structure learning give a
single maximum a posteriori estimate of network models. Yet, many other models
may be equally likely, and so a more informative result is provided by Bayesian
structure discovery. Bayesian structure discovery algorithms estimate posterior
probabilities of structural features, such as edges. We present transfer
learning for Bayesian structure discovery which allows us to explore the shared
and unique structural features among related tasks. Efficient computation
requires that our transfer learning objective factors into local calculations,
which we prove is given by a broad class of transfer biases. Theoretically, we
show the efficiency of our approach. Empirically, we show that compared to
single task learning, transfer learning is better able to positively identify
true edges. We apply the method to whole-brain neuroimaging data.
|
1307.2320 | Dynamic Partial Cooperative MIMO System for Delay-Sensitive Applications
with Limited Backhaul Capacity | cs.IT cs.NI cs.PF math.IT | Considering backhaul consumption in practical systems, engaging in full
cooperative MIMO for interference mitigation at all times may not be the best
choice. In this paper, we propose a novel downlink partial cooperative MIMO
(Pco-MIMO) physical layer (PHY) scheme, which allows flexible tradeoff between
the partial data cooperation level and the backhaul consumption. Based on this
Pco-MIMO scheme, we consider dynamic transmit power and rate allocation
according to the imperfect channel state information at transmitters (CSIT) and
the queue state information (QSI) to minimize the average delay cost subject to
average backhaul consumption constraints and average power constraints. The
delay-optimal control problem is formulated as an infinite horizon average cost
constrained partially observed Markov decision process (CPOMDP). By exploiting
the special structure in our problem, we derive an equivalent Bellman equation
to solve the CPOMDP. To reduce computational complexity and facilitate
distributed implementation, we propose a distributed online learning algorithm
to estimate the per-flow potential functions and Lagrange multipliers (LMs) and
a distributed online stochastic partial gradient algorithm to obtain the power
and rate control policy. The proposed low-complexity distributed solution is
based on local observations of the system states at the BSs and is very robust
against model variations. We also prove the convergence and the asymptotic
optimality of the proposed solution.
|
1307.2342 | Model Selection with Low Complexity Priors | math.OC cs.IT math.IT math.ST stat.TH | Regularization plays a pivotal role when facing the challenge of solving
ill-posed inverse problems, where the number of observations is smaller than
the ambient dimension of the object to be estimated. A line of recent work has
studied regularization models with various types of low-dimensional structures.
In such settings, the general approach is to solve a regularized optimization
problem, which combines a data fidelity term and some regularization penalty
that promotes the assumed low-dimensional/simple structure. This paper provides
a general framework to capture this low-dimensional structure through what we
coin partly smooth functions relative to a linear manifold. These are convex,
non-negative, closed and finite-valued functions that will promote objects
living on low-dimensional subspaces. This class of regularizers encompasses
many popular examples such as the $\ell_1$ norm, $\ell_1-\ell_2$ norm (group
sparsity), as well as several others including the $\ell_\infty$ norm. We also
show that the set of
partly smooth functions relative to a linear manifold is closed under addition
and pre-composition by a linear operator, which allows us to cover mixed
regularization, and the so-called analysis-type priors (e.g. total variation,
fused Lasso, finite-valued polyhedral gauges). Our main result presents a
unified sharp analysis of exact and robust recovery of the low-dimensional
subspace model associated to the object to recover from partial measurements.
This analysis is illustrated on a number of special and previously studied
cases, and on an analysis of the performance of $\ell_\infty$ regularization in a
compressed sensing scenario.
|
1307.2350 | Stability Analysis of Continuous-Time Switched Systems with a Random
Switching Signal | cs.SY | This paper is concerned with the stability analysis of continuous-time
switched systems with a random switching signal. The switching signal is
characterized by a dwell time in each subsystem that consists of a fixed part
and a random part. The stochastic stability of such switched systems
is studied using a Lyapunov approach. A necessary and sufficient condition is
established in terms of linear matrix inequalities. The effect of the random
switching signal on system stability is illustrated by a numerical example and
the results coincide with our intuition.
|
1307.2352 | Polar Codes with Dynamic Frozen Symbols and Their Decoding by Directed
Search | cs.IT math.IT | A novel construction of polar codes with dynamic frozen symbols is proposed.
The proposed codes are subcodes of extended BCH codes, which ensures a
sufficiently high minimum distance. Furthermore, a decoding algorithm is
proposed, which employs estimates of the not-yet-processed bit channel error
probabilities to perform a directed search in the code tree, thus reducing the
total number of iterations.
|
1307.2381 | Local Mode Dependent Decentralized $H_{\infty}$ Control of Uncertain
Markovian Jump Large-scale Systems | cs.SY | This paper considers the problem of robust $H_{\infty}$ control using
decentralized state feedback controllers for a class of large-scale systems
with Markov jump parameters. A sufficient condition is developed to design
controllers using local system states and local system operation modes. The
sufficient condition is given in terms of rank constrained linear matrix
inequalities. An illustrative numerical example is given to demonstrate the
developed theory.
|
1307.2421 | Energy Efficient Coordinated Beamforming for Multi-cell MISO Systems | cs.IT math.IT | In this paper, we investigate the optimal energy efficient coordinated
beamforming in multi-cell multiple-input single-output (MISO) systems with $K$
multiple-antenna base stations (BS) and $K$ single-antenna mobile stations
(MS), where each BS sends information to its own intended MS with cooperatively
designed transmit beamforming. We assume single user detection at the MS by
treating the interference as noise. By taking into account a realistic power
model at the BS, we characterize the Pareto boundary of the achievable energy
efficiency (EE) region of the $K$ links, where the EE of each link is defined
as the achievable data rate at the MS divided by the total power consumption at
the BS. Since the EE of each link is non-concave (a concave function divided
by an affine function), characterizing this boundary is difficult.
To meet this challenge, we relate this multi-cell MISO system to cognitive
radio (CR) MISO channels by applying the concept of interference temperature
(IT), and accordingly transform the EE boundary characterization problem into a
set of fractional concave programming problems. Then, we apply the fractional
concave programming technique to solve these fractional concave problems, and
correspondingly give a parametrization for the EE boundary in terms of IT
levels. Based on this characterization, we further present a decentralized
algorithm to implement the multi-cell coordinated beamforming, which is shown
by simulations to achieve the EE Pareto boundary.
|
1307.2427 | Testing experiments on synchronized Petri nets | cs.SY cs.FL | Synchronizing sequences were proposed in the late 60's to solve testing
problems on systems modeled by finite state machines. Such sequences lead a
system, seen as a black box, from an unknown current state to a known final
one. This paper presents a first investigation of the computation of
synchronizing sequences for systems modeled by bounded synchronized Petri nets.
In the first part of the paper, existing techniques for automata are adapted to
this new setting. Then, new approaches that exploit the net structure to
efficiently compute synchronizing sequences without an exhaustive enumeration
of the state space are presented.
|
1307.2430 | On The Fast Fading Multiple-Antenna Gaussian Broadcast Channel with
Confidential Messages and Partial CSIT | cs.IT math.IT | In wiretap channels the eavesdropper's channel state information (CSI) is
commonly assumed to be known at the transmitter, fully or partially. However,
under a perfect secrecy constraint the eavesdropper may not be motivated to
feed back any correct CSI. In this paper we consider a more realistic setting
in which the transmitter can obtain the eavesdropper's CSI: the fast fading
multiple-antenna Gaussian broadcast channel with confidential messages
(FMGBC-CM), where both receivers are legitimate users, so that both are
willing to feed back accurate CSI to maintain their own secure transmission
and to avoid being eavesdropped by the other. We assume that only the statistics of the
channel state information are known by the transmitter. We first show the
necessary condition for the FMGBC-CM not to be degraded to the common wiretap
channels. Then we derive the achievable rate region for the FMGBC-CM where the
channel input covariance matrices and the inflation factor are left unknown and
to be solved. After that we provide an analytical solution to the channel input
covariance matrices. We also propose an iterative algorithm to solve the
channel input covariance matrices and the inflation factor. Due to the
complicated rate region formulae at general SNR, we resort to a low-SNR
analysis to investigate the characteristics of the channel. Finally, numerical examples
show that under perfect secrecy constraint both users can achieve positive
rates simultaneously, which verifies our derived necessary condition. Numerical
results also elucidate the effectiveness of the analytic solution and the
proposed algorithm for solving the channel input covariance matrices and the
inflation factor under different conditions.
|
1307.2432 | Average sampling restoration of harmonizable processes | math.PR cs.IT math.IT | The harmonizable Piranashvili-type stochastic processes are approximated by
finite time shifted average sampling sums. Explicit truncation error upper
bounds are established. Various corollaries and special cases are discussed.
|
1307.2434 | Major Limitations of Satellite images | cs.CV | Remote sensing has proven to be a powerful tool for monitoring the Earth's
surface, and the drive to improve our perception of our surroundings has led
to unprecedented developments in sensor and information technologies. However,
technologies for effective use of the data and for extracting useful
information from remote sensing data are still very limited, since no single
sensor combines the optimal spectral, spatial and temporal resolution. This
paper briefly reviews the limitations of satellite remote sensing, as well as
the problems of image fusion techniques. According to the literature, remote
sensing still lacks software tools for effective information extraction from
remote sensing data. The trade-off between spectral and spatial resolution
will remain, and new advanced data fusion approaches are needed to make
optimal use of remote sensors to extract the most useful information.
|
1307.2438 | Efficient Probabilistic Group Testing Based on Traitor Tracing | cs.IT cs.CR math.IT | Inspired by recent results from collusion-resistant traitor tracing, we
provide a framework for constructing efficient probabilistic group testing
schemes. In the traditional group testing model, our scheme asymptotically
requires $T \sim 2 K \ln N$ tests to find (with high probability) the correct
set of $K$ defectives out of $N$ items. The framework is also applied to several noisy
group testing and threshold group testing models, often leading to improvements
over previously known results, but we emphasize that this framework can be
applied to other variants of the classical model as well, both in adaptive and
in non-adaptive settings.
|
1307.2440 | Image Fusion Technologies In Commercial Remote Sensing Packages | cs.CV | Several remote sensing software packages are used for the explicit purpose of
analyzing and visualizing remotely sensed data, following the development of
remote sensing sensor technologies over the last ten years. According to the
literature, remote sensing still lacks software tools for effective
information extraction from remote sensing data. This paper therefore provides
a state of the art of multi-sensor image fusion technologies, as well as a
review of the quality evaluation of single and fused images in commercial
remote sensing packages. It also introduces a program (ALwassaiProcess)
developed for image fusion and classification.
|
1307.2457 | Detection of Outer Rotations on 3D-Vector Fields with Iterative
Geometric Correlation and its Efficiency | cs.CV cs.GR | Correlation is a common technique for the detection of shifts. Its
generalization to the multidimensional geometric correlation in Clifford
algebras has been proven a useful tool for color image processing, because it
additionally contains information about a rotational misalignment. But so far
the exact correction of a three-dimensional outer rotation could only be
achieved in certain special cases. In this paper we prove that applying the
geometric correlation iteratively has the potential to detect the outer
rotational misalignment for arbitrary three-dimensional vector fields. We
further present the explicit iterative algorithm, and analyze its efficiency in
detecting the rotational misalignment in the color space of a color image. The
experiments suggest a method for the acceleration of the algorithm, which is
practically tested with great success.
|
1307.2482 | Linear Convergence Rate of a Class of Distributed Augmented Lagrangian
Algorithms | cs.IT math.IT | We study distributed optimization where nodes cooperatively minimize the sum
of their individual, locally known, convex costs $f_i(x)$, where $x \in {\mathbb
R}^d$ is a global variable. Distributed augmented Lagrangian (AL) methods have good
empirical performance on several signal processing and learning applications,
but there is limited understanding of their convergence rates and of how these
depend on the underlying network. This paper establishes globally linear
(geometric) convergence rates of a class of deterministic and randomized
distributed AL methods, when the $f_i$'s are twice continuously differentiable
and have a bounded Hessian. We give explicit dependence of the convergence
rates on the underlying network parameters. Simulations illustrate our
analytical findings.
|
1307.2541 | Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense
Reasoning for High-level Analyses in Geographic Information Systems | cs.AI cs.ET cs.HC | The modelling, analysis, and visualisation of dynamic geospatial phenomena
has been identified as a key developmental challenge for next-generation
Geographic Information Systems (GIS). In this context, the envisaged
paradigmatic extensions to contemporary foundational GIS technology raise
fundamental questions concerning the ontological, formal representational, and
(analytical) computational methods that would underlie their spatial
information theoretic underpinnings.
We present the conceptual overview and architecture for the development of
high-level semantic and qualitative analytical capabilities for dynamic
geospatial domains. Building on formal methods in the areas of commonsense
reasoning, qualitative reasoning, spatial and temporal representation and
reasoning, reasoning about actions and change, and computational models of
narrative, we identify concrete theoretical and practical challenges that
accrue in the context of formal reasoning about `space, events, actions, and
change'. With this as a basis, and within the backdrop of an illustrated
scenario involving the spatio-temporal dynamics of urban narratives, we address
specific problems and solution techniques chiefly involving `qualitative
abstraction', `data integration and spatial consistency', and `practical
geospatial abduction'. From a broad topical viewpoint, we propose that
next-generation dynamic GIS technology demands a transdisciplinary scientific
perspective that brings together Geography, Artificial Intelligence, and
Cognitive Science.
Keywords: artificial intelligence; cognitive systems; human-computer
interaction; geographic information systems; spatio-temporal dynamics;
computational models of narrative; geospatial analysis; geospatial modelling;
ontology; qualitative spatial modelling and reasoning; spatial assistance
systems
|
1307.2554 | Les index pour les entrep\^ots de donn\'ees : comparaison entre index
arbre-B et Bitmap | cs.DB | With the development of decision-support systems, and especially data
warehouses, assessing the design of a data warehouse before its creation has
become essential, because the data warehouse is the single data source that
gives meaning to decisions. In a decision-support system, the proper
functioning of a data warehouse depends on the smooth running of the
middleware ETL step on one hand, and of the restitution step (data mining,
reporting solutions, dashboards, etc.) on the other. The large volume of data
that passes through these stages requires an optimal design for a highly
efficient decision system, without disregarding the technologies chosen for
the data warehouse implementation, such as the database management system, the
server operating system, and the physical server architecture (64-bit, for
example), which can benefit the performance of this system. The designer of
the data warehouse should consider the effectiveness of data queries; this
depends on the selection of relevant indexes and their combination with
materialized views. Note that index selection is an NP-complete problem,
because the number of candidate indexes is exponential in the total number of
attributes in the database, so the suitable type of index for the data
warehouse must be chosen at design time. This paper presents a comparative
study of B-tree and Bitmap indexes, their advantages and disadvantages, with a
real experiment showing that the Bitmap index is more advantageous than the
B-tree index.
|
1307.2555 | MacWilliams Type identities for $m$-spotty Rosenbloom-Tsfasman weight
enumerators over finite commutative Frobenius rings | cs.IT math.IT | The $m$-spotty byte error control codes provide a good source for detecting
and correcting errors in semiconductor memory systems using high density RAM
chips with wide I/O data (e.g. 8, 16, or 32 bits). $m$-spotty byte error
control codes are very suitable for burst correction. M. \"{O}zen and V. Siap
[7] proved a MacWilliams identity for the $m$-spotty Rosenbloom-Tsfasman
(shortly RT) weight enumerators of binary codes. The main purpose of this paper
is to present the MacWilliams type identities for $m$-spotty RT weight
enumerators of linear codes over finite commutative Frobenius rings.
|
1307.2559 | General Drift Analysis with Tail Bounds | cs.NE | Drift analysis is one of the state-of-the-art techniques for the runtime
analysis of randomized search heuristics (RSHs) such as evolutionary algorithms
(EAs), simulated annealing, etc. The vast majority of existing drift theorems
yield bounds on the expected value of the hitting time for a target state,
e.g., the set of optimal solutions, without making additional statements on the
distribution of this time. We address this lack by providing a general drift
theorem that includes bounds on the upper and lower tail of the hitting time
distribution. The new tail bounds are applied to prove very precise
sharp-concentration results on the running time of a simple EA on standard
benchmark problems, including the class of general linear functions.
Surprisingly, the probability of deviating by an $r$-factor in lower order
terms of the expected time decreases exponentially with $r$ on all these
problems. The usefulness of the theorem outside the theory of RSHs is
demonstrated by deriving tail bounds on the number of cycles in random
permutations. All these results handle a position-dependent (variable) drift
that was not covered by previous drift theorems with tail bounds. Moreover, our
theorem can be specialized into virtually all existing drift theorems with
drift towards the target from the literature. Finally, user-friendly
specializations of the general drift theorem are given.
|
1307.2560 | Exploiting Data Parallelism in the yConvex Hypergraph Algorithm for
Image Representation using GPGPUs | cs.DC cs.CV | To define and identify a region-of-interest (ROI) in a digital image, the
shape descriptor of the ROI has to be described in terms of its boundary
characteristics. To address the generic issues of contour tracking, the yConvex
Hypergraph (yCHG) model was proposed by Kanna et al. [1]. In this work, we
propose a parallel approach to implement the yCHG model by exploiting massively
parallel cores of NVIDIA's Compute Unified Device Architecture (CUDA). We
perform our experiments on the MODIS satellite image database by NASA, and
based on our analysis we observe that the performance of the serial
implementation is better on smaller images, but once the threshold is achieved
in terms of image resolution, the parallel implementation outperforms its
sequential counterpart by 2 to 10 times (2x-10x). We also conclude that an
increase in the number of hyperedges in the ROI of a given size does not impact
the performance of the overall algorithm.
|
1307.2579 | Tuned Models of Peer Assessment in MOOCs | cs.LG cs.AI cs.HC stat.AP stat.ML | In massive open online courses (MOOCs), peer grading serves as a critical
tool for scaling the grading of complex, open-ended assignments to courses with
tens or hundreds of thousands of students. But despite promising initial
trials, it does not always deliver accurate results compared to human experts.
In this paper, we develop algorithms for estimating and correcting for grader
biases and reliabilities, showing significant improvement in peer grading
accuracy on real data with 63,199 peer grades from Coursera's HCI course
offerings --- the largest peer grading networks analysed to date. We relate
grader biases and reliabilities to other student factors such as engagement
and performance, as well as commenting style. We also show that our
model can lead to more intelligent assignment of graders to gradees.
|
1307.2584 | Massive MIMO Systems with Non-Ideal Hardware: Energy Efficiency,
Estimation, and Capacity Limits | cs.IT math.IT | The use of large-scale antenna arrays can bring substantial improvements in
energy and/or spectral efficiency to wireless systems due to the greatly
improved spatial resolution and array gain. Recent works in the field of
massive multiple-input multiple-output (MIMO) show that the user channels
decorrelate when the number of antennas at the base stations (BSs) increases,
thus strong signal gains are achievable with little inter-user interference.
Since these results rely on asymptotics, it is important to investigate whether
the conventional system models are reasonable in this asymptotic regime. This
paper considers a new system model that incorporates general transceiver
hardware impairments at both the BSs (equipped with large antenna arrays) and
the single-antenna user equipments (UEs). As opposed to the conventional case
of ideal hardware, we show that hardware impairments create finite ceilings on
the channel estimation accuracy and on the downlink/uplink capacity of each UE.
Surprisingly, the capacity is mainly limited by the hardware at the UE, while
the impact of impairments in the large-scale arrays vanishes asymptotically and
inter-user interference (in particular, pilot contamination) becomes
negligible. Furthermore, we prove that the huge degrees of freedom offered by
massive MIMO can be used to reduce the transmit power and/or to tolerate larger
hardware impairments, which allows for the use of inexpensive and
energy-efficient antenna elements.
|
1307.2599 | Compactly Supported Tensor Product Complex Tight Framelets with
Directionality | cs.IT math.IT | Although tensor product real-valued wavelets have been successfully applied
to many high-dimensional problems, they can only capture edge
singularities well along the coordinate axis directions. As an alternative to and
improvement of tensor product real-valued wavelets and the dual tree complex
wavelet transform, recently tensor product complex tight framelets with
increasing directionality have been introduced in [8] and applied to image
denoising in [13]. Despite several desirable properties, the directional tensor
product complex tight framelets constructed in [8,13] are bandlimited and do
not have compact support in the space/time domain. Since compactly supported
wavelets and framelets are of great interest and importance in both theory and
application, it remains an unsolved problem whether there exist compactly
supported tensor product complex tight framelets with directionality. In this
paper, we shall satisfactorily answer this question by proving a theoretical
result on directionality of tight framelets and by introducing an algorithm to
construct compactly supported complex tight framelets with directionality. Our
examples show that compactly supported complex tight framelets with
directionality can be easily derived from any given eligible low-pass filters
and refinable functions. Several examples of compactly supported tensor product
complex tight framelets with directionality have been presented.
|
1307.2603 | Ontology Based Data Integration Over Document and Column Family Oriented
NOSQL | cs.DB | The World Wide Web infrastructure, together with its more than 2 billion users,
makes it possible to store information at a rate that has never been achieved
before. This is mainly due to the desire to store almost all end-user
interactions performed on some web applications. In order to meet scalability and
availability constraints, many web companies involved in this process recently
started to design their own data management systems. Many of them are referred
to as NOSQL databases, standing for 'Not only SQL'. With their wide adoption
emerge new needs, and data integration is one of them. In this paper, we
consider that an ontology-based representation of the information stored in a
set of NOSQL sources is highly needed. The main motivation of this approach is
the ability to reason on elements of the ontology and to retrieve information
in an efficient and distributed manner. Our contributions are the following:
(1) we analyze a set of schemaless NOSQL databases to generate local
ontologies, (2) we generate a global ontology based on the discovery of
correspondences between the local ontologies and finally (3) we propose a query
translation solution from SPARQL to query languages of the sources. We are
currently implementing our data integration solution on two popular NOSQL
databases: MongoDB as a document database and Cassandra as a column family
store.
|
1307.2611 | Controlling the Precision-Recall Tradeoff in Differential Dependency
Network Analysis | stat.ML cs.LG | Graphical models have gained a lot of attention recently as a tool for
learning and representing dependencies among variables in multivariate data.
Often, domain scientists are looking specifically for differences among the
dependency networks of different conditions or populations (e.g. differences
between regulatory networks of different species, or differences between
dependency networks of diseased versus healthy populations). The standard
method for finding these differences is to learn the dependency networks for
each condition independently and compare them. We show that this approach is
prone to high false discovery rates (low precision) that can render the
analysis useless. We then show that by imposing a bias towards learning similar
dependency networks for each condition the false discovery rates can be reduced
to acceptable levels, at the cost of finding a reduced number of differences.
Algorithms developed in the transfer learning literature can be used to vary
the strength of the imposed similarity bias and provide a natural mechanism to
smoothly adjust this differential precision-recall tradeoff to cater to the
requirements of the analysis conducted. We present real case studies
(oncological and neurological) where domain experts use the proposed technique
to extract useful differential networks that shed light on the biological
processes involved in cancer and brain function.
|
1307.2641 | From Design to Implementation: an Automated, Credible Autocoding Chain
for Control Systems | cs.SY cs.SE | This article describes a fully automated, credible autocoding chain for
control systems. The framework generates code, along with guarantees of high
level functional properties which can be independently verified. It relies on
domain-specific knowledge and formal methods of analysis to address a context of
heightened safety requirements for critical embedded systems and
ever-increasing costs of verification and validation. The platform strives to
bridge the semantic gap between domain expert and code verification expert.
First, a graphical dataflow language is extended with annotation symbols
enabling control engineers to express high-level properties of their control
laws within the framework of a familiar language. An existing autocoder is
enhanced both to generate the code implementing the initial design and to
carry high-level properties down to annotations at the level of the code.
Finally, using customized code analysis tools, certificates are generated which
guarantee the correctness of the annotations with respect to the code, and can
be verified using existing static analysis tools. Only a subset of properties
and controllers are handled at this point.
|
1307.2642 | Structure controllability of complex network based on preferential
matching | math-ph cs.SI math.MP physics.soc-ph | Minimum driver node sets (MDSs) play an important role in studying the
structural controllability of complex networks. Recent research has shown that
MDSs tend to avoid high-degree nodes. However, this observation is based on the
analysis of a small number of MDSs, because enumerating all of the MDSs of a
network is a #P problem. Therefore, past research has not been sufficient to
arrive at a convincing conclusion. In this paper, first, we propose a
preferential matching algorithm to find MDSs that have a specific degree
property. Then, we show that the MDSs obtained by preferential matching can be
composed of high- and medium-degree nodes. Moreover, the experimental results
also show that the average degree of the MDSs of some networks tends to be
greater than that of the overall network, even when the MDSs are obtained
using the methods of previous research. Further analysis shows that whether the driver nodes
tend to be high-degree nodes or not is closely related to the edge direction of
the network.
|
1307.2669 | Text Categorization via Similarity Search: An Efficient and Effective
Novel Algorithm | cs.IR | We present a supervised learning algorithm for text categorization which has
brought the team of authors the 2nd place in the text categorization division
of the 2012 Cybersecurity Data Mining Competition (CDMC'2012) and a 3rd prize
overall. The algorithm is quite different from existing approaches in that it
is based on similarity search in the metric space of measure distributions on
the dictionary. At the preprocessing stage, given a labeled learning sample of
texts, we associate to every class label (document category) a point in the
space in question. Unlike what is usual in clustering, this point is not a
centroid of the category but rather an outlier, a uniform measure distribution
on a selection of domain-specific words. At the execution stage, an unlabeled
text is assigned a text category as defined by the closest labeled neighbour to
the point representing the frequency distribution of the words in the text. The
algorithm is both effective and efficient, as further confirmed by experiments
on the Reuters 21578 dataset.
|
1307.2672 | Index Coding Problem with Side Information Repositories | cs.IT math.IT | To tackle the expected enormous increase in mobile video traffic in cellular
networks, an architecture involving a base station along with caching femto
stations (referred to as helpers), storing popular files near users, has been
proposed [1]. The primary benefit of caching is the enormous increase in
downloading rate when a popular file is available at helpers near a user
requesting that file. In this work, we explore a secondary benefit of caching
in this architecture through the lens of index coding. We assume a system with
n users and a constant number of caching helpers. Only helpers store files, i.e.,
have side information. We investigate the following scenario: Each user
requests a distinct file that is not found in the set of helpers nearby. Users
are served coded packets (through an index code) by an omniscient base station.
Every user decodes its desired packet from the coded packets and the side
information packets from helpers nearby. We assume that users can obtain any
file stored in their neighboring helpers without incurring transmission costs.
With respect to the index code employed, we investigate two achievable schemes:
1) XOR coloring based on coloring of the side information graph associated with
the problem, and 2) vector XOR coloring based on fractional coloring of the side
information graph. We show that the general problem reduces to a canonical
problem where every user is connected to exactly one helper under some
topological constraints. For the canonical problem, with constant number of
helpers (k), we show that the complexity of computing the best XOR/vector XOR
coloring schemes is polynomial in the number of users n. The result exploits a
special complete bi-partite structure that the side information graphs exhibit
for any finite k.
|
1307.2674 | Error Rate Bounds in Crowdsourcing Models | stat.ML cs.LG stat.AP | Crowdsourcing is an effective tool for human-powered computation on many
tasks challenging for computers. In this paper, we provide finite-sample
exponential bounds on the error rate (in probability and in expectation) of
hyperplane binary labeling rules under the Dawid-Skene crowdsourcing model. The
bounds can be applied to analyze many common prediction methods, including
majority voting and weighted majority voting. These bounds could be
useful for controlling the error rate and designing better algorithms. We show
that the oracle Maximum A Posteriori (MAP) rule approximately optimizes our
upper bound on the mean error rate for any hyperplane binary labeling rule, and
propose a simple data-driven weighted majority voting (WMV) rule (called
one-step WMV) that attempts to approximate the oracle MAP and has a provable
theoretical guarantee on the error rate. Moreover, we use simulated and real
data to demonstrate that the data-driven EM-MAP rule is a good approximation to
the oracle MAP rule, and to demonstrate that the mean error rate of the
data-driven EM-MAP rule is also bounded by the mean error rate bound of the
oracle MAP rule with estimated parameters plugged into the bound.
|
1307.2676 | Efficiency of Entanglement Concentration by Photon Subtraction | quant-ph cs.IT math.IT | We introduce a measure of efficiency for the photon subtraction protocol
aimed at entanglement concentration on a single copy of a bipartite
continuous-variable state. We then show that iterating the protocol does not lead to
higher efficiency than a single application. In order to overcome this limit we
present an adaptive version of the protocol able to greatly enhance its
efficiency.
|
1307.2704 | Applications of repeat degree on coverings of neighborhoods | cs.AI | In covering based rough sets, the neighborhood of an element is the
intersection of all the covering blocks containing the element. All the
neighborhoods form a new covering called a covering of neighborhoods. In the
course of studying under what condition a covering of neighborhoods is a
partition, the concept of repeat degree is proposed, with the help of which the
issue is addressed. This paper studies further the application of repeat degree
on coverings of neighborhoods. First, we investigate under what condition a
covering of neighborhoods is the reduct of the covering inducing it. As a
preparation for addressing this issue, we give a necessary and sufficient
condition for a subset of a set family to be the reduct of the set family. Then
we study under what condition two coverings induce the same relation and the
same covering of neighborhoods. Finally, we give a method for calculating the
covering according to repeat degree.
|
1307.2747 | Impossibility of Local State Transformation via Hypercontractivity | quant-ph cs.IT math-ph math.IT math.MP | Local state transformation is the problem of transforming an arbitrary number
of copies of a bipartite resource state to a bipartite target state under local
operations. That is, given two bipartite states, is it possible to transform an
arbitrary number of copies of one of them to one copy of the other state under
local operations only? This problem is a hard one in general since we assume
that the number of copies of the resource state is arbitrarily large. In this
paper we prove some bounds on this problem using the hypercontractivity
properties of some super-operators corresponding to bipartite states. We
measure hypercontractivity in terms of both the usual super-operator norms as
well as completely bounded norms.
|
1307.2748 | Self-Organized Synchronization and Voltage Stability in Networks of
Synchronous Machines | nlin.AO cs.SY | The integration of renewable energy sources in the course of the energy
transition is accompanied by grid decentralization and fluctuating power
feed-in characteristics. This raises new challenges for power system stability
and design. We intend to investigate power system stability from the viewpoint
of self-organized synchronization aspects. In this approach, the power grid is
represented by a network of synchronous machines. We supplement the classical
Kuramoto-like network model, which assumes constant voltages, with dynamical
voltage equations, and thus obtain an extended version that incorporates the
coupled phenomena of voltage stability and rotor-angle synchronization. We
compare disturbance scenarios in small systems simulated on the basis of both
the classical and the extended model, and we discuss the resulting implications and possible
applications to complex modern power grids.
|
1307.2756 | Secure and Policy-Private Resource Sharing in an Online Social Network | cs.CR cs.SI | Providing functionalities that allow online social network users to manage in
a secure and private way the publication of their information and/or resources
is a relevant and far from trivial topic that has been under scrutiny from
various research communities. In this work, we provide a framework that allows
users to define highly expressive access policies to their resources in a way
that the enforcement does not require the intervention of a (trusted or not)
third party. This is made possible by the deployment of a newly defined
cryptographic primitive that provides - among other things - efficient access
revocation and access policy privacy. Finally, we provide an implementation of
our framework as a Facebook application, proving the feasibility of our
approach.
|
1307.2785 | Rising tides or rising stars?: Dynamics of shared attention on Twitter
during media events | cs.SI physics.soc-ph | "Media events" such as political debates generate conditions of shared
attention as many users simultaneously tune in with the dual screens of
broadcast and social media to view and participate. Are collective patterns of
user behavior under conditions of shared attention distinct from other "bursts"
of activity like breaking news events? Using data from a population of
approximately 200,000 politically-active Twitter users, we compare features of
their behavior during eight major events during the 2012 U.S. presidential
election to examine (1) the impact "media events" have on patterns of social
media use compared to "typical" times and (2) whether changes during media
events are attributable to changes in behavior across the entire population or
an artifact of changes in elite users' behavior. Our findings suggest that
while this population became more active during media events, this additional
activity reflects concentrated attention to a handful of users, hashtags, and
tweets. Our work is the first study on distinguishing patterns of large-scale
social behavior under conditions of uncertainty and shared attention, suggesting
new ways of mining information from social media to support collective
sensemaking following major events.
|
1307.2789 | Computer Simulation of 3-D Finite-Volume Liquid Transport in Fibrous
Materials: a Physical Model for Ink Seepage into Paper | cs.CE cond-mat.mes-hall | A physical model for the simulation of ink/paper interaction at the mesoscopic
scale is developed. It is based on a modified Ising model, and is generalized
to consider the restriction of the finite volume of ink and also its dynamic
seepage. This allows the model to obtain the ink distribution within the paper
volume. At the mesoscopic scale, the paper is modeled using a discretized fiber
structure. The ink distribution is obtained by solving its equivalent
optimization problem, which is solved using a modified genetic algorithm, along
with a new boundary condition and the quasi-linear technique. The model is able
to simulate the finite-volume distribution of ink.
|
1307.2799 | Polar Coded Modulation with Optimal Constellation Labeling | cs.IT math.IT | A practical $2^m$-ary polar coded modulation (PCM) scheme with optimal
constellation labeling is proposed. To efficiently find the optimal labeling
rule, the search space is reduced by exploiting the symmetry properties of the
channels. Simulation results show that the proposed PCM scheme can outperform
the bit-interleaved turbo coded modulation scheme used in the WCDMA (Wideband
Code Division Multiple Access) mobile communication systems by up to 1.5 dB.
|
1307.2800 | A Hybrid ARQ Scheme Based on Polar Codes | cs.IT math.IT | A hybrid automatic repeat request (HARQ) scheme based on a novel class of
rate-compatible polar (\mbox{RCP}) codes is proposed. The RCP codes are
constructed by performing punctures and repetitions on the conventional polar
codes. Simulation results over binary-input additive white Gaussian noise
channels (BAWGNCs) show that, using a low-complexity successive cancellation
(SC) decoder, the proposed HARQ scheme performs as well as the existing schemes
based on turbo codes and low-density parity-check (LDPC) codes. The proposed
transmission scheme is only about 1.0-1.5 dB away from the channel capacity with
the information block length of 1024 bits.
|
1307.2811 | GROTESQUE: Noisy Group Testing (Quick and Efficient) | cs.IT math.IT | Group-testing refers to the problem of identifying (with high probability) a
(small) subset of $D$ defectives from a (large) set of $N$ items via a "small"
number of "pooled" tests. For ease of presentation in this work we focus on the
regime when $D = \cO{N^{1-\gap}}$ for some $\gap > 0$. The tests may be
noiseless or noisy, and the testing procedure may be adaptive (the pool
defining a test may depend on the outcome of a previous test), or non-adaptive
(each test is performed independent of the outcome of other tests). A rich body
of literature demonstrates that $\Theta(D\log(N))$ tests are
information-theoretically necessary and sufficient for the group-testing
problem, and provides algorithms that achieve this performance. However, it is
only recently that reconstruction algorithms with computational complexity that
is sub-linear in $N$ have started being investigated (recent work by
\cite{GurI:04,IndN:10, NgoP:11} gave some of the first such algorithms). In the
scenario with adaptive tests with noisy outcomes, we present the first scheme
that is simultaneously order-optimal (up to small constant factors) in both the
number of tests and the decoding complexity ($\cO{D\log(N)}$ in both the
performance metrics). The total number of stages of our adaptive algorithm is
"small" ($\cO{\log(D)}$). Similarly, in the scenario with non-adaptive tests
with noisy outcomes, we present the first scheme that is simultaneously
near-optimal in both the number of tests and the decoding complexity (via an
algorithm that requires $\cO{D\log(D)\log(N)}$ tests and has a decoding
complexity of {${\cal O}(D(\log N+\log^{2}D))$}). Finally, we present an
adaptive algorithm that only requires 2 stages, and for which both the number
of tests and the decoding complexity scale as {${\cal O}(D(\log
N+\log^{2}D))$}. For all three settings the probability of error of our
algorithms scales as $\cO{1/poly(D)}$.
|
1307.2818 | Anisotropic Diffusion for Details Enhancement in Multi-Exposure Image
Fusion | cs.MM cs.CV | We develop a multiexposure image fusion method based on texture features,
which exploits the edge preserving and intraregion smoothing property of
nonlinear diffusion filters based on partial differential equations (PDE). With
the captured multiexposure image series, we first decompose images into base
layers and detail layers to extract sharp details and fine details,
respectively. The magnitude of the gradient of the image intensity is utilized
to encourage smoothness at homogeneous regions in preference to inhomogeneous
regions. Then, we consider texture features of the base layer to
generate a mask (i.e., a decision mask) that guides the fusion of base layers in
a multiresolution fashion. Finally, a well-exposed fused image is obtained that
combines the fused base layer and the detail layers at each scale across all the
input exposures. The proposed algorithm skips the complex high dynamic range
image (HDRI) generation and tone mapping steps to produce a detail-preserving
image for display on standard dynamic range display devices. Moreover, our
technique is effective for blending flash/no-flash image pairs and multifocus
images, that is, images focused on different targets.
|
1307.2826 | Image Denoising Using Tensor Product Complex Tight Framelets with
Increasing Directionality | cs.IT math.IT | Tensor product real-valued wavelets have been employed in many applications
such as image processing with impressive performance. Though edge singularities
are ubiquitous and play a fundamental role in two-dimensional problems, tensor
product real-valued wavelets are known to be only sub-optimal since they can
only capture edges well along the coordinate axis directions. The dual tree
complex wavelet transform (DTCWT), proposed by Kingsbury [16] and further
developed by Selesnick et al. [24], is one of the most popular and successful
enhancements of the classical tensor product real-valued wavelets. The
two-dimensional DTCWT is obtained via tensor product and offers improved
directionality with 6 directions. In this paper we shall further enhance the
performance of DTCWT for the problem of image denoising. Using framelet-based
approach and the notion of discrete affine systems, we shall propose a family
of tensor product complex tight framelets TPCTF_n for all integers n>2 with
increasing directionality, where n refers to the number of filters in the
underlying one-dimensional complex tight framelet filter bank. For dimension
two, such tensor product complex tight framelet TPCTF_n offers (n-1)(n-3)/2+4
directions when n is odd, and (n-4)(n+2)/2+6 directions when n is even. In
particular, TPCTF_4, which is different from DTCWT in both nature and design,
provides an alternative to DTCWT. Indeed, TPCTF_4 behaves quite similar to
DTCWT by offering 6 directions in dimension two, employing the tensor product
structure, and enjoying slightly less redundancy than DTCWT. When TPCTF_4 is
applied to image denoising, its performance is comparable to DTCWT. Moreover,
better results on image denoising can be obtained by using TPCTF_6. Furthermore,
TPCTF_n allows us to further improve DTCWT by using TPCTF_n as the first stage
filter bank in DTCWT.
|
1307.2855 | Flow-Based Algorithms for Local Graph Clustering | cs.DS cs.LG stat.ML | Given a subset A of vertices of an undirected graph G, the cut-improvement
problem asks us to find a subset S that is similar to A but has smaller
conductance. A very elegant algorithm for this problem has been given by
Andersen and Lang [AL08] and requires solving a small number of
single-commodity maximum flow computations over the whole graph G. In this
paper, we introduce LocalImprove, the first cut-improvement algorithm that is
local, i.e. that runs in time dependent on the size of the input set A rather
than on the size of the entire graph. Moreover, LocalImprove achieves this
local behaviour while essentially matching the same theoretical guarantee as
the global algorithm of Andersen and Lang.
The main application of LocalImprove is to the design of better
local-graph-partitioning algorithms. All previously known local algorithms for
graph partitioning are random-walk based and can only guarantee an output
conductance of O(\sqrt{OPT}) when the target set has conductance OPT \in [0,1].
Very recently, Zhu, Lattanzi and Mirrokni [ZLM13] improved this to O(OPT /
\sqrt{CONN}) where the internal connectivity parameter CONN \in [0,1] is
defined as the reciprocal of the mixing time of the random walk over the graph
induced by the target set. In this work, we show how to use LocalImprove to
obtain a constant approximation O(OPT) as long as CONN/OPT = Omega(1). This
yields the first flow-based algorithm. Moreover, its performance strictly
outperforms the ones based on random walks and surprisingly matches that of the
best known global algorithm, which is SDP-based, in this parameter regime
[MMV12].
Finally, our results show that spectral methods are not the only viable
approach to the construction of local graph partitioning algorithms and open
the door to the study of algorithms with even better approximation and locality
guarantees.
|
1307.2867 | Tractable Combinations of Global Constraints | cs.AI cs.LO | We study the complexity of constraint satisfaction problems involving global
constraints, i.e., special-purpose constraints provided by a solver and
represented implicitly by a parametrised algorithm. Such constraints are widely
used; indeed, they are one of the key reasons for the success of constraint
programming in solving real-world problems.
Previous work has focused on the development of efficient propagators for
individual constraints. In this paper, we identify a new tractable class of
constraint problems involving global constraints of unbounded arity. To do so,
we combine structural restrictions with the observation that some important
types of global constraint do not distinguish between large classes of
equivalent solutions.
|
1307.2889 | Achieving the Uniform Rate Region of General Multiple Access Channels by
Polar Coding | cs.IT math.IT | We consider the problem of polar coding for transmission over $m$-user
multiple access channels. In the proposed scheme, all users encode their
messages using a polar encoder, while a multi-user successive cancellation
decoder is deployed at the receiver. The encoding is done separately across the
users and is independent of the target achievable rate. For the code
construction, the positions of information bits and frozen bits for each of the
users are decided jointly. This is done by treating the polar transformations
across all the $m$ users as a single polar transformation with a certain
\emph{polarization base}. We characterize the resolution of achievable rates on
the dominant face of the uniform rate region in terms of the number of users
$m$ and the length of the polarization base $L$. In particular, we prove that
for any target rate on the dominant face, there exists an achievable rate, also
on the dominant face, within the distance at most $\frac{(m-1)\sqrt{m}}{L}$
from the target rate. We then prove that the proposed MAC polar coding scheme
achieves the whole uniform rate region with fine enough resolution by changing
the decoding order in the multi-user successive cancellation decoder, as $L$
and the code block length $N$ grow large. The encoding and decoding
complexities are $O(N \log N)$ and the asymptotic block error probability of
$O(2^{-N^{0.5 - \epsilon}})$ is guaranteed. Examples of achievable rates for
the $3$-user multiple access channel are provided.
|
1307.2893 | Coexistence in preferential attachment networks | physics.soc-ph cs.SI math.PR | We introduce a new model of competition on growing networks. This extends the
preferential attachment model, with the key property that node choices evolve
simultaneously with the network. When a new node joins the network, it chooses
neighbours by preferential attachment, and selects its type based on the number
of initial neighbours of each type. The model is analysed in detail, and in
particular, we determine the possible proportions of the various types in the
limit of large networks. An important qualitative feature we find is that, in
contrast to many current theoretical models, often several competitors will
coexist. This matches empirical observations in many real-world networks.
|
1307.2923 | Two-Way Relaying under the Presence of Relay Transceiver Hardware
Impairments | cs.IT math.IT | Hardware impairments in physical transceivers are known to have a deleterious
effect on communication systems; however, very few contributions have
investigated their impact on relaying. This paper quantifies the impact of
transceiver impairments in a two-way amplify-and-forward configuration. More
specifically, the effective signal-to-noise-and-distortion ratios at both
transmitter nodes are obtained. These are used to deduce exact and asymptotic
closed-form expressions for the outage probabilities (OPs), as well as
tractable formulations for the symbol error rates (SERs). It is explicitly
shown that non-zero lower bounds on the OP and SER exist in the high-power
regime---this stands in contrast to the special case of ideal hardware, where
the OP and SER go asymptotically to zero.
|
1307.2958 | Exact MIMO Zero-Forcing Detection Analysis for Transmit-Correlated
Rician Fading | cs.IT math.IT | We analyze the performance of multiple input/multiple output (MIMO)
communications systems employing spatial multiplexing and zero-forcing
detection (ZF). The distribution of the ZF signal-to-noise ratio (SNR) is
characterized when either the intended stream or interfering streams experience
Rician fading, and when the fading may be correlated on the transmit side.
Previously, exact ZF analysis based on a well-known SNR expression has been
hindered by the noncentrality of the Wishart distribution involved. In
addition, approximation with a central-Wishart distribution has not proved
consistently accurate. In contrast, the following exact ZF study proceeds from
a lesser-known SNR expression that separates the intended and interfering
channel-gain vectors. By first conditioning on, and then averaging over the
interference, the ZF SNR distribution for Rician-Rayleigh fading is shown to be
an infinite linear combination of gamma distributions. On the other hand, for
Rayleigh-Rician fading, the ZF SNR is shown to be gamma-distributed. Based on
the SNR distribution, we derive new series expressions for the ZF average error
probability, outage probability, and ergodic capacity. Numerical results
confirm the accuracy of our new expressions, and reveal effects of interference
and channel statistics on performance.
|
1307.2965 | Semantic Context Forests for Learning-Based Knee Cartilage Segmentation
in 3D MR Images | cs.CV cs.LG q-bio.TO stat.ML | The automatic segmentation of human knee cartilage from 3D MR images is a
useful yet challenging task due to the thin sheet structure of the cartilage
with diffuse boundaries and inhomogeneous intensities. In this paper, we
present an iterative multi-class learning method to segment the femoral, tibial
and patellar cartilage simultaneously, which effectively exploits the spatial
contextual constraints between bone and cartilage, and also between different
cartilages. First, based on the fact that the cartilage grows in only certain
areas of the corresponding bone surface, we extract distance features not only
to the surface of the bone but, more informatively, to densely registered
anatomical landmarks on the bone surface. Second, we introduce a set of
iterative discriminative classifiers in which, at each iteration, probability
comparison features are constructed from the class confidence maps produced by
the previously learned classifiers. These features automatically embed the
semantic context information between the different cartilages of interest.
Validated on a
total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the
proposed approach demonstrates high robustness and accuracy of segmentation in
comparison with existing state-of-the-art MR cartilage segmentation methods.
|
1307.2967 | Layer-switching cost and optimality in information spreading on
multiplex networks | physics.soc-ph cond-mat.stat-mech cs.SI | We study a model of information spreading on multiplex networks, in which
agents interact through multiple interaction channels (layers), say online vs.\
offline communication layers, subject to layer-switching cost for transmissions
across different interaction layers. The model is characterized by a
layer-wise, path-dependent transmissibility over a contact that is dynamically
determined by both the incoming and outgoing transmission layers. We
formulate an analytical framework to deal with such path-dependent
transmissibility and demonstrate the nontrivial interplay between the
multiplexity and spreading dynamics, including optimality. It is shown that the
epidemic threshold and prevalence respond to the layer-switching cost
non-monotonically and that the optimal conditions can change in abrupt
non-analytic ways, depending also on the densities of network layers and the
type of seed infections. Our results elucidate the essential role of
multiplexity: its explicit consideration is crucial for realistic modeling and
prediction of spreading phenomena on multiplex social networks in
an era of ever-diversifying social interaction layers.
|
1307.2968 | Introduction to Queueing Theory and Stochastic Teletraffic Models | math.PR cs.IT math.IT | The aim of this textbook is to provide students with basic knowledge of
stochastic models that may apply to telecommunications research areas, such as
traffic modelling, resource provisioning and traffic management. These study
areas are often collectively called teletraffic. This book assumes prior
knowledge of a programming language, mathematics, probability and stochastic
processes normally taught in an electrical engineering course. For students who
have some but not sufficiently strong background in probability and stochastic
processes, we provide, in the first few chapters, background on the relevant
concepts in these areas.
|
1307.2971 | Accuracy of MAP segmentation with hidden Potts and Markov mesh prior
models via Path Constrained Viterbi Training, Iterated Conditional Modes and
Graph Cut based algorithms | cs.LG cs.CV stat.ML | In this paper, we study statistical classification accuracy of two different
Markov field environments for pixelwise image segmentation, considering the
labels of the image as hidden states and solving the estimation of such labels
as a solution of the MAP equation. The emission distribution is assumed to be
the same in all models; the difference lies in the Markovian prior hypothesis
made over the labeling random field. The a priori labeling knowledge will be
modeled with a) a second order anisotropic Markov Mesh and b) a classical
isotropic Potts model. Under such models, we will consider three different
segmentation procedures, 2D Path Constrained Viterbi training for the Hidden
Markov Mesh, a Graph Cut based segmentation for the first order isotropic Potts
model, and ICM (Iterated Conditional Modes) for the second order isotropic
Potts model.
We provide a unified view of all three methods, and investigate goodness of
fit for classification, studying the influence of parameter estimation,
computational gain, and extent of automation on the statistical measures
Overall Accuracy, Relative Improvement and Kappa coefficient, allowing robust
and accurate statistical analysis on synthetic and real-life experimental data
coming from the field of Dental Diagnostic Radiography. All algorithms, using
the learned parameters, generate good segmentations with little interaction
when the images have a clear multimodal histogram. Suboptimal learning proves
to be frail in the case of non-distinctive modes, which limits the complexity
of usable models, and hence the achievable error rate as well.
All Matlab code written is provided in a toolbox available for download from
our website, following the Reproducible Research Paradigm.
|
1307.2982 | Fast Exact Search in Hamming Space with Multi-Index Hashing | cs.CV cs.AI cs.DS cs.IR | There is growing interest in representing image data and feature descriptors
using compact binary codes for fast near neighbor search. Although binary codes
are motivated by their use as direct indices (addresses) into a hash table,
codes longer than 32 bits are not being used as such, as this was thought to be
ineffective. We introduce a rigorous way to build multiple hash tables on
binary code substrings that enables exact k-nearest neighbor search in Hamming
space. The approach is storage efficient and straightforward to implement.
Theoretical analysis shows that the algorithm exhibits sub-linear run-time
behavior for uniformly distributed codes. Empirical results show dramatic
speedups over a linear scan baseline for datasets of up to one billion codes of
64, 128, or 256 bits.
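The substring-pigeonhole idea behind multi-index hashing can be sketched in a few lines. This is an illustrative simplification (integer codes, brute-force substring-neighbour enumeration), not the authors' optimized implementation: if a code lies within Hamming distance r of the query, at least one of its m disjoint substrings lies within floor(r/m) of the corresponding query substring.

```python
from collections import defaultdict
from itertools import combinations

def popcount(x):
    return bin(x).count("1")

def split(code, bits, m):
    """Split a `bits`-bit integer code into m disjoint substrings."""
    w = bits // m
    mask = (1 << w) - 1
    return [(code >> (i * w)) & mask for i in range(m)]

class MultiIndexHash:
    def __init__(self, codes, bits=64, m=4):
        self.codes, self.bits, self.m = codes, bits, m
        self.w = bits // m
        self.tables = [defaultdict(list) for _ in range(m)]
        for idx, c in enumerate(codes):
            for i, sub in enumerate(split(c, bits, m)):
                self.tables[i][sub].append(idx)

    def _neighbours(self, sub, radius):
        """All substrings within Hamming distance `radius` of `sub`."""
        yield sub
        for r in range(1, radius + 1):
            for flips in combinations(range(self.w), r):
                x = sub
                for b in flips:
                    x ^= 1 << b
                yield x

    def search(self, query, r):
        """Exact r-neighbour search: by the pigeonhole principle, some
        substring of a match lies within floor(r/m) of the query's."""
        r_sub = r // self.m
        cand = set()
        for i, sub in enumerate(split(query, self.bits, self.m)):
            for s in self._neighbours(sub, r_sub):
                cand.update(self.tables[i].get(s, []))
        # verify candidates with the full Hamming distance
        return sorted(i for i in cand if popcount(self.codes[i] ^ query) <= r)
```

The candidate check at the end is what makes the search exact rather than approximate.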
|
1307.2991 | Integrity Verification for Outsourcing Uncertain Frequent Itemset Mining | cs.DB | In recent years, due to the wide applications of uncertain data (e.g., noisy
data), uncertain frequent itemset (UFI) mining over uncertain databases has
attracted much attention; it differs from the corresponding deterministic
problem in both its generalized definition and its resolution methods. As the
most costly task in the association rule mining process, it has been shown that
outsourcing this task to a service provider (e.g., a third-party cloud) brings
several benefits to the data owner, such as cost relief and a smaller
commitment to storage and computational resources. However, the correctness and
integrity of the mining results can be corrupted if the service provider
suffers random faults or is not honest (e.g., lazy, malicious, etc.).
Therefore, in this paper, we focus on the integrity and verification issue in
the UFI mining problem during the outsourcing process, i.e., how the data owner
verifies the mining results. Specifically, we explore and extend the existing
work on deterministic FI outsourcing verification to the uncertain scenario,
with respect to the two popular UFI definition criteria and the approximate UFI
mining methods. We construct and improve basic/enhanced verification schemes
for each of these UFI definitions. After that, we further discuss the scenario
of existing approximate UFI mining, where our technique can provide good
probabilistic guarantees about the correctness of the verification. Finally, we
present comparisons and analysis of the schemes proposed in this paper.
|
1307.2997 | Conversion of Braille to Text in English, Hindi and Tamil Languages | cs.CV | The Braille system has been used by the visually impaired for reading and
writing. Due to limited availability of the Braille text books an efficient
usage of the books becomes a necessity. This paper proposes a method to convert
a scanned Braille document to text which can be read out to many through the
computer. The Braille documents are pre-processed to enhance the dots and
reduce noise. The Braille cells are segmented, and the dots from each cell are
extracted and converted into a number sequence. These sequences are mapped to
the appropriate alphabets of the language. The converted text is spoken out
through a speech synthesizer. The paper also provides a mechanism to type
Braille characters through the number pad of the keyboard; the typed Braille
character is mapped to its alphabet and spoken out. The Braille cell has a
standard representation, but the mapping differs for each language. In this
paper, the mappings for English, Hindi and Tamil are considered.
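As a toy illustration of the cell-to-alphabet lookup step described above, here is a fragment of the English Grade-1 table; the Hindi and Tamil tables differ, and only the first ten letters are shown. Dots follow the standard 2x3 cell numbering (1-2-3 down the left column, 4-5-6 down the right).

```python
# Fragment of the English Grade-1 Braille table, keyed by raised-dot numbers.
BRAILLE_EN = {
    (1,): "a", (1, 2): "b", (1, 4): "c", (1, 4, 5): "d", (1, 5): "e",
    (1, 2, 4): "f", (1, 2, 4, 5): "g", (1, 2, 5): "h", (2, 4): "i",
    (2, 4, 5): "j",
}

def cells_to_text(cells, table=BRAILLE_EN):
    """Map a sequence of dot-number tuples (one per segmented cell) to
    text; cells not present in the table become '?'."""
    return "".join(table.get(tuple(sorted(c)), "?") for c in cells)
```

A dot sequence such as `[(1, 2), (1,), (1, 4, 5)]` then maps to the word "bad"; swapping in a different table changes the target language without touching the segmentation pipeline.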
|
1307.3003 | Application of a cognitive-inspired algorithm for detecting communities
in mobility networks | physics.soc-ph cs.SI | The emergence and the global adoption of mobile devices have influenced
human interactions at the individual, community, and social levels, leading to
the so called Cyber-Physical World (CPW) convergence scenario [1]. One of the
most important features of CPW is the possibility of exploiting information
about the structure of the social communities of users, revealed by joint
movement patterns and frequency of physical co-location. Mobile devices of
users that belong to the same social community are likely to "see" each other
(and thus be able to communicate through ad-hoc networking techniques) more
frequently and regularly than devices outside the community. In mobile
opportunistic networks, this fact can be exploited, for example, to optimize
networking operations such as forwarding and dissemination of messages. In this
paper we present the application of a cognitive-inspired algorithm [2,3,4] for
revealing the structure of these dynamic social networks (simulated by the HCMM
model [5]) using information about physical encounters logged by the users'
mobile devices. The main features of our algorithm are: (i) the capacity of
detecting social communities induced by physical co-location of users through
distributed algorithms; (ii) the capacity to detect users belonging to more
communities (thus acting as bridges across them), and (iii) the capacity to
detect the time evolution of communities.
|
1307.3004 | Routing in Wireless Mesh Networks: Two Soft Computing Based Approaches | cs.NI cs.AI | Due to dynamic network conditions, routing is the most critical part in WMNs
and needs to be optimised. The routing strategies developed for WMNs must be
efficient to make the network operationally self-configurable. Thus we need to
resort to near-shortest-path evaluation, which calls for soft computing
approaches that make a near-shortest path available in affordable computing
time. This paper proposes a Fuzzy Logic based
integrated cost measure in terms of delay, throughput and jitter. Based upon
this distance (cost) between two adjacent nodes, we evaluate the minimal
(shortest) path and use it to update the routing tables. We apply two recent
soft computing approaches
namely Big Bang Big Crunch (BB-BC) and Biogeography Based Optimization (BBO)
approaches to enumerate shortest or near-shortest paths. BB-BC theory is
related to the evolution of the universe, whereas BBO is inspired by dynamical
equilibrium in the number of species on an island. Both the algorithms have low
computational time and high convergence speed. Simulation results show that the
proposed routing algorithms find the optimal shortest path taking into account
three most important parameters of network dynamics. It has been further
observed that for the shortest path problem BB-BC outperforms BBO in terms of
speed and percent error between the evaluated minimal path and the actual
shortest path.
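The two-stage idea can be sketched as follows: a simple weighted aggregation stands in for the paper's fuzzy inference to produce one integrated link cost, and Dijkstra serves as the exact baseline that BB-BC/BBO approximate. The weights and the assumption that inputs are pre-normalised to [0, 1] are illustrative, not the paper's rules.

```python
import heapq

def link_cost(delay, throughput, jitter, w=(0.5, 0.3, 0.2)):
    """Stand-in for the fuzzy inference: aggregate delay, inverse
    throughput and jitter (all normalised to [0, 1]) into one cost."""
    return w[0] * delay + w[1] * (1.0 - throughput) + w[2] * jitter

def dijkstra(graph, src, dst):
    """Exact shortest path on the integrated-cost graph; the BB-BC/BBO
    heuristics trade this optimality for lower computing time."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph[u]:
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[dst]
```

A graph is then just an adjacency list whose edge weights come from `link_cost`, e.g. `{"A": [("B", link_cost(0.2, 0.9, 0.1))], ...}`.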
|
1307.3005 | Computational Complexity Comparison Of Multi-Sensor Single Target Data
Fusion Methods By Matlab | cs.SY | Target tracking using observations from multiple sensors can achieve better
estimation performance than a single sensor. The most famous estimation tool in
target tracking is Kalman filter. There are several mathematical approaches to
combine the observations of multiple sensors by use of Kalman filter. An
important issue in applying a proper approach is computational complexity. In
this paper, four data fusion algorithms based on Kalman filter are considered
including three centralized methods and one decentralized method. Using
MATLAB, the computational loads of these methods are compared as the number of
sensors increases. The results show that the inverse covariance method has the
best computational performance when the number of sensors is above 20. For a
smaller number of sensors, other methods, especially the group-sensor method,
are more appropriate.
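For reference, one fusion step of the inverse-covariance (information-form) method for a static, directly observed state can be written as below. This is the standard textbook form, not necessarily the exact formulation benchmarked in the paper; its cost grows with the number of sensors only through the sum of N small inverses, which is what makes it attractive for many sensors.

```python
import numpy as np

def inverse_covariance_fusion(zs, Rs):
    """Fuse N direct measurements zs (with covariances Rs) of one static
    state: sum the information contributions, then invert once."""
    info = sum(np.linalg.inv(R) for R in Rs)              # information matrix
    vec = sum(np.linalg.inv(R) @ z for z, R in zip(zs, Rs))
    P = np.linalg.inv(info)                               # fused covariance
    return P @ vec, P                                     # fused estimate, cov
```

With two equally reliable scalar sensors reading 2 and 4, the fused estimate is their mean, 3, with halved variance.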
|
1307.3011 | Soft Computing Framework for Routing in Wireless Mesh Networks: An
Integrated Cost Function Approach | cs.NI cs.AI | Dynamic behaviour of a WMN imposes stringent constraints on the routing
policy of the network. In shortest-path-based routing, the shortest paths need
to be evaluated within the time frame allowed by the WMN dynamics. Exact
reasoning based shortest-path evaluation methods usually fail to meet this
rigid requirement, calling for soft computing based approaches that can replace
"best for sure" solutions with "good enough" solutions. This
paper proposes a framework for optimal routing in WMNs, where we
investigate the suitability of Big Bang-Big Crunch (BB-BC), a soft computing
based approach to evaluate shortest/near-shortest path. In order to make
routing optimal we first propose to replace distance between the adjacent nodes
with an integrated cost measure that takes into account throughput, delay,
jitter and residual energy of a node. A fuzzy logic based inference mechanism
evaluates this cost measure at each node. Using this distance measure we apply
BB-BC optimization algorithm to evaluate shortest/near shortest path to update
the routing tables periodically as dictated by network requirements. A large
number of simulations were conducted and it has been observed that BB-BC
algorithm appears to be a high potential candidate suitable for routing in
WMNs.
|
1307.3014 | A New Approach to the Solution of Economic Dispatch Using Particle Swarm
Optimization with Simulated Annealing | cs.CE cs.NE | A new approach to the solution of Economic Dispatch using Particle Swarm
Optimization is presented. Economic dispatch is the process of allocating
generation among the committed units such that the imposed constraints are
satisfied and the fuel cost is minimized. Recently, soft computing methods have
received additional attention and have been used in a number of successful and
practical applications. Here, an attempt has been made to find the minimum
cost using a Particle Swarm Optimization algorithm with data for three
generating units. The data used include the loss coefficients together with
the max-min power limits and the cost function. PSO and Simulated Annealing
are applied to find the minimum cost for different power demands. When the
outputs are compared with the conventional method, PSO appears to give an
improved result with better convergence characteristics. All the methods are
implemented in the MATLAB environment. The effectiveness and feasibility of
the proposed method were demonstrated on a three-generating-unit case study.
The output gives promising results, suggesting that the proposed method is
capable of economically determining higher-quality solutions to economic
dispatch problems.
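A minimal PSO sketch for this problem class is given below. The cost coefficients, generation limits and demand are hypothetical stand-ins for the paper's three-unit data, network losses are omitted, the simulated-annealing hybridisation is not reproduced, and the power-balance constraint is handled by a simple penalty term.

```python
import random

# Hypothetical data for three units: cost F_i(P) = a*P^2 + b*P + c,
# generation limits [PMIN, PMAX], and total demand D (losses ignored).
A, B, C = [0.008, 0.009, 0.007], [7.0, 6.3, 6.8], [200.0, 180.0, 140.0]
PMIN, PMAX, D = [10.0, 10.0, 10.0], [85.0, 80.0, 70.0], 150.0

def cost(p):
    fuel = sum(a * x * x + b * x + c for a, b, c, x in zip(A, B, C, p))
    return fuel + 1000.0 * abs(sum(p) - D)   # penalty for power imbalance

def pso(n=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in zip(PMIN, PMAX)]
           for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clip each unit's output to its generation limits
                pos[i][d] = min(max(pos[i][d] + vel[i][d], PMIN[d]), PMAX[d])
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest
```

Running `pso()` returns a near-balanced dispatch whose penalized cost sits close to the analytic equal-incremental-cost optimum for these coefficients.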
|
1307.3040 | Between Sense and Sensibility: Declarative narrativisation of mental
models as a basis and benchmark for visuo-spatial cognition and computation
focussed collaborative cognitive systems | cs.AI cs.CL cs.CV cs.HC cs.RO | What lies between `\emph{sensing}' and `\emph{sensibility}'? In other words,
what kind of cognitive processes mediate sensing capability, and the formation
of sensible impressions ---e.g., abstractions, analogies, hypotheses and theory
formation, beliefs and their revision, argument formation--- in domain-specific
problem solving, or in regular activities of everyday living, working and
simply going around in the environment? How can knowledge and reasoning about
such capabilities, as exhibited by humans in particular problem contexts, be
used as a model and benchmark for the development of collaborative cognitive
(interaction) systems concerned with human assistance, assurance, and
empowerment?
We pose these questions in the context of a range of assistive technologies
concerned with \emph{visuo-spatial perception and cognition} tasks encompassing
aspects such as commonsense, creativity, and the application of specialist
domain knowledge and problem-solving thought processes. Assistive technologies
being considered include: (a) human activity interpretation; (b) high-level
cognitive robotics; (c) people-centred creative design in domains such as
architecture & digital media creation; and (d) qualitative analyses in
geographic information systems. Computational narratives not only provide a
rich cognitive
basis, but they also serve as a benchmark of functional performance in our
development of computational cognitive assistance systems. We posit that
computational narrativisation pertaining to space, actions, and change provides
a useful model of \emph{visual} and \emph{spatio-temporal thinking} within a
wide-range of problem-solving tasks and application areas where collaborative
cognitive systems could serve an assistive and empowering function.
|
1307.3043 | A two-layer Conditional Random Field for the classification of partially
occluded objects | cs.CV | Conditional Random Fields (CRF) are among the most popular techniques for
image labelling because of their flexibility in modelling dependencies between
the labels and the image features. This paper proposes a novel CRF framework
for image labeling problems that is capable of classifying partially occluded
objects. Our approach is evaluated on aerial near-vertical images as well as on
urban street-view images and compared with other methods.
|
1307.3046 | Spatio-Temporal Queries for moving objects Data warehousing | cs.DB | In the last decade, Moving Object Databases (MODs) have attracted a lot of
attention from researchers. Several research works were conducted to extend
traditional database techniques to accommodate the new requirements imposed by
the continuous change in location information of moving objects. Managing,
querying, storing, and mining moving objects were the key research directions.
This extensive interest in moving objects is a natural consequence of the
recent ubiquitous location-aware devices, such as PDAs, mobile phones, etc., as
well as the variety of information that can be extracted from such new
databases. In this paper we propose a Spatio-Temporal Data Warehouse (STDW)
for efficiently querying location information of moving objects. The proposed
schema introduces new measures like direction majority and other
direction-based measures that enhance the decision making based on location
information.
|
1307.3047 | Linear Codes over Z_4+uZ_4: MacWilliams identities, projections, and
formally self-dual codes | math.RA cs.IT math.IT | Linear codes are considered over the ring Z_4+uZ_4, a non-chain extension of
Z_4. Lee weights and Gray maps for these codes are defined, and MacWilliams
identities for the complete, symmetrized and Lee weight enumerators are proved.
Two projections from Z_4+uZ_4 to the rings Z_4 and F_2+uF_2 are considered and
self-dual codes over Z_4+uZ_4 are studied in connection with these projections.
Finally, three constructions are given for formally self-dual codes over
Z_4+uZ_4 and their Z_4-images together with some good examples of formally
self-dual Z_4-codes obtained through these constructions.
|
1307.3054 | Contrast Enhancement And Brightness Preservation Using Multi-
Decomposition Histogram Equalization | cs.CV | Histogram Equalization (HE) has been an essential addition to the Image
Enhancement world. Enhancement techniques like Classical Histogram Equalization
(CHE), Adaptive Histogram Equalization (ADHE), Bi-Histogram Equalization (BHE)
and Recursive Mean Separate Histogram Equalization (RMSHE) methods enhance
contrast, however, brightness is not well preserved with these methods, which
gives an unpleasant look to the final image obtained. Thus, we introduce a
novel technique, Multi-Decomposition Histogram Equalization (MDHE), to
eliminate the drawbacks of the earlier methods. In MDHE, we decompose the
input image into sixty-four parts, apply CHE to each of the sub-images, and
then interpolate them back in the correct order. MDHE results in a
contrast-enhanced and brightness-preserved image compared to all the other
techniques mentioned above. We have calculated various parameters, such as
PSNR, SNR, RMSE, MSE, etc. for every technique. Our results are well supported
by bar graphs, histograms and the parameter calculations at the end.
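The decomposition step can be sketched as follows, assuming an 8x8 grid of sub-images (64 parts) and simple stitching in place of the paper's interpolation:

```python
import numpy as np

def classical_he(block):
    """Classical histogram equalization (CHE) of one uint8 block."""
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = hist.cumsum()
    # map intensities through the normalised cumulative histogram
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[block]

def mdhe(img, splits=8):
    """MDHE sketch: split the image into splits x splits sub-images
    (64 parts by default), equalize each independently, then put the
    pieces back in their original order."""
    h, w = img.shape
    out = img.copy()
    ys = np.linspace(0, h, splits + 1, dtype=int)
    xs = np.linspace(0, w, splits + 1, dtype=int)
    for i in range(splits):
        for j in range(splits):
            y0, y1 = ys[i], ys[i + 1]
            x0, x1 = xs[j], xs[j + 1]
            out[y0:y1, x0:x1] = classical_he(img[y0:y1, x0:x1])
    return out
```

Because each block is equalized against its own histogram, local contrast is stretched while the global brightness distribution is disturbed far less than by one image-wide equalization.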
|
1307.3061 | The technology of using a data warehouse to support decision-making in
health care | cs.DB | This paper describes data warehouse technology in healthcare
decision-making, and the tools that support this technology, as applied to
cancer diseases. Healthcare executive managers and doctors need information
about, and insight into, the existing health data so as to make decisions more
efficiently without interrupting the daily work of an On-Line Transaction
Processing (OLTP) system. This is a complex problem during the healthcare
decision-making process, and building a healthcare data warehouse seems an
effective way to solve it. In this paper we first explain the concepts of the
data warehouse and On-Line Analytical Processing (OLAP).
Changing the data in the data warehouse into a multidimensional data cube is
then shown. Finally, an application example is given to illustrate the use of
the healthcare data warehouse specific to cancer diseases developed in this
study. The executive managers and doctors can view data from more than one
perspective with reduced query time, thus making decisions faster and more
comprehensive.
|
1307.3091 | Artificial Intelligence MArkup Language: A Brief Tutorial | cs.AI cs.SE | The purpose of this paper is to serve as a reference guide for the
development of chatterbots implemented with the AIML language. In order to
achieve this, the main concepts in the Pattern Recognition area are described,
because AIML uses this theoretical framework in its syntactic and semantic
structures. After that, the AIML language is described, and each AIML
command/tag is followed by an application example. Also, the usage of AIML
embedded tags for the handling of sequence dialogue limitations between humans
and machines is shown. Finally, computer systems that assist in the design of
chatterbots with the AIML language are classified and described.
|
1307.3095 | Fundamental Limits of Energy-Efficient Resource Sharing, Power Control
and Discontinuous Transmission | cs.IT math.IT | The achievable gains via power-optimal scheduling are investigated. Under the
QoS constraint of a guaranteed link rate, the overall power consumed by a
cellular BS is minimized. Available alternatives for the minimization of
transmit power consumption are presented. The transmit power is derived for the
two-user downlink situation. The analysis is extended to incorporate a BS power
model (which maps transmit power to supply power consumption) and the use of
DTX in a BS. Overall potential gains are evaluated by comparing a conventional
state-of-the-art (SOTA) BS with one that employs DTX exclusively, a power control
scheme and an optimal combined DTX and power control scheme. Fundamental limits
of the achievable savings are found to be at 5.5 dB under low load and 2 dB
under high load when comparing the SOTA consumption with optimal allocation
under the chosen power model.
|
1307.3099 | Minimal average consumption downlink base station power control strategy | cs.IT math.IT | We consider single cell multi-user OFDMA downlink resource allocation on a
flat-fading channel such that average supply power is minimized while
fulfilling a set of target rates. Available degrees of freedom are transmission
power and duration. This paper extends our previous work on power optimal
resource allocation in the mobile downlink by detailing the optimal power
control strategy investigation and extracting fundamental characteristics of
power optimal operation in the cellular downlink. We find that a system-wide
allocation of transmit powers, rather than a link-level one, is optimal. The
allocation strategy that minimizes overall power consumption requires the
transmission power on all links to be increased if only one link degrades.
Furthermore, we show that for mobile stations with equal channels but different
rate requirements, it is power optimal to assign equal transmit powers with
proportional transmit durations. To relate the effectiveness of power control
to live operation, we take the power model into consideration which maps
transmit power to supply power. We show that due to the affine mapping, the
solution is independent of the power model. However, the effectiveness of power
control measures is completely dependent on the underlying hardware and the
load dependence factor of a base station (instead of absolute consumption
values). Finally, we conclude that power control measures in base stations are
most relevant in macro base stations, which have a load dependence factor of
more than 50%.
|
1307.3102 | Statistical Active Learning Algorithms for Noise Tolerance and
Differential Privacy | cs.LG cs.DS stat.ML | We describe a framework for designing efficient active learning algorithms
that are tolerant to random classification noise and are
differentially-private. The framework is based on active learning algorithms
that are statistical in the sense that they rely on estimates of expectations
of functions of filtered random examples. It builds on the powerful statistical
query framework of Kearns (1993).
We show that any efficient active statistical learning algorithm can be
automatically converted to an efficient active learning algorithm which is
tolerant to random classification noise as well as other forms of
"uncorrelated" noise. The complexity of the resulting algorithms has
information-theoretically optimal quadratic dependence on $1/(1-2\eta)$, where
$\eta$ is the noise rate.
We show that commonly studied concept classes including thresholds,
rectangles, and linear separators can be efficiently actively learned in our
framework. These results combined with our generic conversion lead to the first
computationally-efficient algorithms for actively learning some of these
concept classes in the presence of random classification noise that provide
exponential improvement in the dependence on the error $\epsilon$ over their
passive counterparts. In addition, we show that our algorithms can be
automatically converted to efficient active differentially-private algorithms.
This leads to the first differentially-private active learning algorithms with
exponential label savings over the passive case.
|
1307.3103 | On Minimizing Base Station Power Consumption | cs.IT math.IT | We consider resource allocation over a wireless downlink where Base Station
(BS) power consumption is minimized while upholding a set of required link
rates. A Power and Resource Allocation Including Sleep (PRAIS) method is
proposed that combines resource sharing, Power Control (PC), and Discontinuous
Transmission (DTX) such that downlink power consumption is minimized; the
problem can be cast as a convex optimization problem. Unlike conventional
approaches that aim at minimizing transmit power, in this work the BS mains
supply power is chosen as the relevant metric. Based on a linear power model,
which maps a certain transmit power to the necessary mains supply power, we
quantify the fundamental limits of PRAIS in terms of achievable BS power
savings. The fundamental limits are numerically evaluated on link level for
four sets of BS power model parameters representative of envisaged future
hardware developments. We establish an expected lower limit for PRAIS of 27W to
68W depending on load per link for BSs installed in 2014, which provides a 61%
to 34% gain over conventional resource allocation schemes.
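The kind of linear power model referred to here maps RF transmit power affinely to mains supply power, with a lower sleep level during DTX. The parameter values below are illustrative assumptions, not the paper's calibrated figures for 2014 hardware:

```python
def supply_power(p_tx, p0=120.0, m=4.0, p_sleep=30.0, mu=1.0):
    """Average supply power under a linear BS power model: affine in
    transmit power p_tx while transmitting, dropping to p_sleep during
    DTX. `mu` is the fraction of time the BS transmits; p0 is the idle
    (zero-traffic, non-sleeping) supply power and m the load slope.
    All parameter values are illustrative assumptions."""
    return mu * (p0 + m * p_tx) + (1.0 - mu) * p_sleep
```

For example, transmitting 20 W continuously draws 200 W from the mains under these parameters, while sleeping half the time cuts this to 115 W; the affine shape is also why the optimal power-control solution in the companion papers is independent of the model parameters.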
|
1307.3107 | An improvement of the Feng-Rao bound for primary codes | cs.IT math.AC math.IT | We present a new bound for the minimum distance of a general primary linear
code. For affine variety codes defined from generalised C_{ab} curves the new
bound often improves dramatically on the Feng-Rao bound for primary codes. The
method works not only for the minimum distance but can be applied to any
generalised Hamming weight.
|
1307.3110 | Minimizing Base Station Power Consumption | cs.IT math.IT | We propose a new radio resource management algorithm which aims at minimizing
the base station supply power consumption for multi-user MIMO-OFDM. Given a
base station power model that establishes a relation between the RF transmit
power and the supply power consumption, the algorithm optimizes the trade-off
between three basic power-saving mechanisms: antenna adaptation, power control
and discontinuous transmission. The algorithm comprises two steps: a) the first
step estimates sleep mode duration, resource shares and antenna configuration
based on average channel conditions and b) the second step exploits
instantaneous channel knowledge at the transmitter for frequency selective
time-variant channels. The proposed algorithm finds the number of transmit
antennas, the RF transmission power per resource unit and spatial channel, the
number of discontinuous transmission time slots, and the multi-user resource
allocation, such that supply power consumption is minimized. Simulation results
indicate that the proposed algorithm is capable of reducing the supply power
consumption by between 25% and 40%, depending on the system load.
|
1307.3121 | A Modified Levenberg-Marquardt Method for the Bidirectional Relay
Channel | cs.IT math.IT | This paper presents an optimization approach for a system consisting of
multiple bidirectional links over a two-way amplify-and-forward relay. It is
desired to improve the fairness of the system. All user pairs exchange
information over one relay station with multiple antennas. Due to the joint
transmission to all users, the users are subject to mutual interference. A
mitigation of the interference can be achieved by max-min fair precoding
optimization where the relay is subject to a sum power constraint. The
resulting optimization problem is non-convex. This paper proposes a novel
iterative and low complexity approach based on a modified Levenberg-Marquardt
method to find near optimal solutions. The presented method finds solutions
close to the standard convex-solver based relaxation approach.
|
1307.3125 | Information Theoretic Adaptive Tracking of Epidemics in Complex Networks | physics.soc-ph cs.SI | Adaptively monitoring the states of nodes in a large complex network is of
interest in domains such as national security, public health, and energy grid
management. Here, we present an information theoretic adaptive tracking and
sampling framework that recursively selects measurements using the feedback
from performing inference on a dynamic Bayesian Network. We also present
conditions for the existence of a network specific, observation dependent,
phase transition in the updated posterior of hidden node states resulting from
actively monitoring the network. Since traditional epidemic thresholds are
derived using observation independent Markov chains, the threshold of the
posterior should more accurately model the true phase transition of a network.
The adaptive tracking framework and epidemic threshold should provide insight
into modeling the dynamic response of the updated posterior to active
intervention and control policies while monitoring modern complex networks.
|
1307.3142 | Perfect Codes in the Discrete Simplex | cs.IT cs.DM math.IT | We study the problem of existence of (nontrivial) perfect codes in the
discrete $ n $-simplex $ \Delta_{\ell}^n := \left\{ \begin{pmatrix} x_0,
\ldots, x_n \end{pmatrix} : x_i \in \mathbb{Z}_{+}, \sum_i x_i = \ell \right\}
$ under $ \ell_1 $ metric. The problem is motivated by the so-called multiset
codes, which have recently been introduced by the authors as appropriate
constructs for error correction in the permutation channels. It is shown that $
e $-perfect codes in the $ 1 $-simplex $ \Delta_{\ell}^1 $ exist for any $ \ell
\geq 2e + 1 $, the $ 2 $-simplex $ \Delta_{\ell}^2 $ admits an $ e $-perfect
code if and only if $ \ell = 3e + 1 $, while there are no perfect codes in
higher-dimensional simplices. In other words, perfect multiset codes exist only
over binary and ternary alphabets.
|
1307.3176 | Fast gradient descent for drifting least squares regression, with
application to bandits | cs.LG stat.ML | Online learning algorithms often need to recompute least squares
regression estimates of parameters. We study improving the computational
complexity of such algorithms by using stochastic gradient descent (SGD) type
schemes in place of classic regression solvers. We show that SGD schemes
efficiently track the true solutions of the regression problems, even in the
presence of a drift. This finding coupled with an $O(d)$ improvement in
complexity, where $d$ is the dimension of the data, makes them attractive for
implementation in big data settings. In the case when strong convexity in
the regression problem is guaranteed, we provide bounds on the error both in
expectation and high probability (the latter is often needed to provide
theoretical guarantees for higher level algorithms), despite the drifting least
squares solution. As an example of this case we prove that the regret
performance of an SGD version of the PEGE linear bandit algorithm
[Rusmevichientong and Tsitsiklis 2010] is worse than that of PEGE itself only
by a factor of $O(\log^4 n)$. When strong convexity of the regression problem
cannot be guaranteed, we investigate using an adaptive regularisation. We make
an empirical study of an adaptively regularised, SGD version of LinUCB [Li et
al. 2010] in a news article recommendation application, which uses the large
scale news recommendation dataset from Yahoo! front page. These experiments
show a large gain in computational complexity, with a consistently low tracking
error and click-through-rate (CTR) performance that is $75\%$ close.
|
1307.3181 | Compressive sensing based beamforming for noisy measurements | cs.IT math.IT | Compressive sensing is a newly emerging method in information technology
that could impact array beamforming and the associated engineering
applications. However, practical measurements are inevitably polluted by noise
from external interference and the internal acquisition process. In this work,
compressive sensing based beamforming is therefore studied for noisy
measurements at a given signal-to-noise ratio. We first introduce the
fundamentals of compressive sensing theory and then implement two algorithms
(CSB-I and CSB-II), both proposed for presumably spatially sparse and
incoherent signals. The two algorithms were
examined using a simple simulation case and a practical aeroacoustic test case.
The simulation case clearly shows that the CSB-I algorithm is quite sensitive
to the sensing noise. The CSB-II algorithm, on the other hand, is more robust
to noisy measurements. The results by CSB-II at $\mathrm{SNR}=-10\,$dB are
still reasonable, with good resolution and sidelobe rejection. Therefore,
compressive sensing beamforming can be considered a promising array signal
beamforming method for measurements with inevitable noise interference.
|
1307.3185 | Geography and similarity of regional cuisines in China | physics.soc-ph cs.SI physics.data-an | Food occupies a central position in every culture and it is therefore of
great interest to understand the evolution of food culture. The advent of the
World Wide Web and online recipe repositories has begun to provide
unprecedented opportunities for data-driven, quantitative study of food
culture. Here we harness an online database documenting recipes from various
Chinese regional cuisines and investigate the similarity of regional cuisines
in terms of geography and climate. We find that geographical proximity,
rather than climatic proximity, is the crucial factor determining the
similarity of regional cuisines. We develop a model of regional cuisine
evolution that provides helpful clues to understand the evolution of cuisines
and cultures.
|
1307.3195 | Action-based Character AI in Video-games with CogBots Architecture: A
Preliminary Report | cs.AI cs.SE | In this paper we propose an architecture for specifying the interaction of
non-player characters (NPCs) in the game-world in a way that abstracts common
tasks in four main conceptual components, namely perception, deliberation,
control, and action. We argue that this architecture, inspired by AI research on
autonomous agents and robots, can offer a number of benefits in the form of
abstraction, modularity, re-usability and higher degrees of personalization for
the behavior of each NPC. We also show how this architecture can be used to
tackle a simple scenario related to the navigation of NPCs under incomplete
information about the obstacles that may obstruct the various way-points in the
game, in a simple and effective way.
|
1307.3203 | Moral foundations in an interacting neural networks society | physics.soc-ph cs.SI nlin.AO | The moral foundations theory holds that people, across cultures, tend to
consider a small number of dimensions when classifying issues on a moral basis.
The data also show that the statistics of weights attributed to each moral
dimension is related to self-declared political affiliation, which in turn has
been connected to cognitive learning styles by recent literature in
neuroscience and psychology. Inspired by these data, we propose a simple
statistical mechanics model with interacting neural networks classifying
vectors and learning from members of their social neighborhood about their
average opinion on a large set of issues. The purpose of learning is to reduce
dissension among agents even when disagreeing. We consider a family of learning
algorithms parametrized by \delta, that represents the importance given to
corroborating (same sign) opinions. We define an order parameter that
quantifies the diversity of opinions in a group with homogeneous learning
style. Using Monte Carlo simulations and a mean field approximation we find the
relation between the order parameter and the learning parameter \delta at a
temperature we associate with the importance of social influence in a given
group. In concordance with data, groups that rely more strongly on
corroborating evidence sustain less opinion diversity. We discuss predictions
of the model and propose possible experimental tests.
|
1307.3224 | Negotiating the Probabilistic Satisfaction of Temporal Logic Motion
Specifications | cs.RO | We propose a human-supervised control synthesis method for a stochastic
Dubins vehicle such that the probability of satisfying a specification given as
a formula in a fragment of Probabilistic Computational Tree Logic (PCTL) over a
set of environmental properties is maximized. Under some mild assumptions, we
construct a finite approximation for the motion of the vehicle in the form of a
tree-structured Markov Decision Process (MDP). We introduce an efficient
algorithm, which exploits the tree structure of the MDP, for synthesizing a
control policy that maximizes the probability of satisfaction. For the proposed
PCTL fragment, we define the specification update rules that guarantee the
increase (or decrease) of the satisfaction probability. We introduce an
incremental algorithm for synthesizing an updated MDP control policy that
reuses the initial solution. The initial specification can be updated, using
the rules, until the supervisor is satisfied with both the updated
specification and the corresponding satisfaction probability. We propose an
offline and an online application of this method.
|
1307.3271 | Fuzzy Fibers: Uncertainty in dMRI Tractography | cs.CV | Fiber tracking based on diffusion weighted Magnetic Resonance Imaging (dMRI)
allows for noninvasive reconstruction of fiber bundles in the human brain. In
this chapter, we discuss sources of error and uncertainty in this technique,
and review strategies that afford a more reliable interpretation of the
results. This includes methods for computing and rendering probabilistic
tractograms, which estimate precision in the face of measurement noise and
artifacts. However, we also address aspects that have received less attention
so far, such as model selection, partial voluming, and the impact of
parameters, both in preprocessing and in fiber tracking itself. We conclude by
suggesting directions for future research.
|
1307.3284 | Sequential Selection of Correlated Ads by POMDPs | cs.IR | Online advertising has become a key source of revenue for both web search
engines and online publishers. For them, the ability of allocating right ads to
right webpages is critical because any mismatched ads would not only harm web
users' satisfaction but also lower the ad income. In this paper, we study how
online publishers could optimally select ads to maximize their ad incomes over
time. The conventional offline, content-based matching between webpages and ads
is a fine start but cannot solve the problem completely because good matching
does not necessarily lead to good payoff. Moreover, with the limited display
impressions, we need to balance the need of selecting ads to learn true ad
payoffs (exploration) with that of allocating ads to generate high immediate
payoffs based on the current belief (exploitation). In this paper, we address
the problem by employing Partially observable Markov decision processes
(POMDPs) and discuss how to utilize the correlation of ads to improve the
efficiency of the exploration and increase ad income in the long run. Our
mathematical derivation shows that the belief states of correlated ads can be
naturally updated using a formula similar to collaborative filtering. To test
our model, a real world ad dataset from a major search engine is collected and
categorized. Experimenting on the data, we provide an analysis of the effect
of the underlying parameters, and demonstrate that our algorithms significantly
outperform other strong baselines.
|
1307.3290 | Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels
with Noisy Channel Output Feedback | cs.IT math.IT | Linear coding schemes have been the main choice of coding for the additive
white Gaussian noise broadcast channel (AWGN-BC) with noiseless feedback in the
literature. The achievable rate regions of these schemes go well beyond the
capacity region of the AWGN-BC without feedback. In this paper, a concatenated
coding design for the $K$-user AWGN-BC with noisy feedback is proposed that
relies on linear feedback schemes to achieve rate tuples outside the
no-feedback capacity region. Specifically, a linear feedback code for the
AWGN-BC with noisy feedback is used as an inner code that creates an effective
single-user channel from the transmitter to each of the receivers, and then
open-loop coding is used for coding over these single-user channels. An
achievable rate region of linear feedback schemes for noiseless feedback is
shown to be achievable by the concatenated coding scheme for sufficiently small
feedback noise level. Then, a linear feedback coding scheme for the $K$-user
symmetric AWGN-BC with noisy feedback is presented and optimized for use in the
concatenated coding scheme. Lastly, we apply the concatenated coding design to
the two-user AWGN-BC with a single noisy feedback link from one of the
receivers.
|
1307.3301 | Optimal Bounds on Approximation of Submodular and XOS Functions by
Juntas | cs.DS cs.CC cs.LG | We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables ({\em juntas}). Our main
results are tight bounds on the number of variables required to approximate a
function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over
the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close
to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables.
This is an exponential improvement over previously known results. We note that
$\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear
functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close
to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all
functions with low total $\ell_1$-influence and is a real-valued analogue of
Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$
variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time $2^{poly(1/\epsilon)} poly(n)$. For submodular functions we give
an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011) which requires a multiplicative $1+\gamma$ factor approximation with
probability at least $1-\epsilon$ over the target distribution. Our uniform
distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} poly(n)$.
This is the first algorithm in the PMAC model that over the uniform
distribution can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013) both of these algorithms are close to optimal. We also give
applications for proper learning, testing and agnostic learning with value
queries of these classes.
|
1307.3310 | Improving the quality of Gujarati-Hindi Machine Translation through
part-of-speech tagging and stemmer-assisted transliteration | cs.CL | Machine Translation for Indian languages is an emerging research area.
Transliteration is one module designed as part of a translation system.
Transliteration means mapping source language text into the target
language. Simple mapping decreases the efficiency of the overall translation
system. We propose the use of stemming and part-of-speech tagging to assist
transliteration, which improves the effectiveness of translation. We have shown
that much of the content in Gujarati gets transliterated while being processed
for translation to Hindi.
|
1307.3332 | Universal truncation error upper bounds in irregular sampling
restoration | cs.IT math.IT | Universal (pointwise uniform and time shifted) truncation error upper bounds
are presented in Whittaker--Kotel'nikov--Shannon (WKS) sampling restoration sum
for Bernstein function class $B_{\pi,d}^q\,,\ q \ge 1,$ $d\in \mathbb N\,,$
when the decay rate of the sampled functions is unknown. The case of multidimensional
irregular sampling is discussed.
|
1307.3336 | Opinion Mining and Analysis: A survey | cs.CL cs.IR | Current research is focusing on the area of opinion mining, also called
sentiment analysis, due to the sheer volume of opinion-rich web resources,
such as discussion forums, review sites, and blogs, available in digital form. One
important problem in sentiment analysis of product reviews is to produce
summary of opinions based on product features. We have surveyed and analyzed in
this paper, various techniques that have been developed for the key tasks of
opinion mining. We have provided an overall picture of what is involved in
developing a software system for opinion mining on the basis of our survey and
analysis.
|
1307.3337 | Unsupervised Gene Expression Data using Enhanced Clustering Method | cs.CE cs.LG | Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions.
Identification of co-expressed genes and coherent patterns is the central goal
in microarray or gene expression data analysis and is an important task in
bioinformatics research. Feature selection is a process to select features
which are more informative. It is one of the important steps in knowledge
discovery. The problem is that not all features are important. Some of the
features may be redundant, and others may be irrelevant and noisy. In this work
the unsupervised Gene selection method and Enhanced Center Initialization
Algorithm (ECIA) with K-Means algorithms have been applied for clustering of
Gene Expression Data. This proposed clustering algorithm overcomes the
drawbacks in terms of specifying the optimal number of clusters and
initialization of good cluster centroids. Experiments on gene expression data
show that the method can identify compact clusters and performs well in terms
of the Silhouette Coefficient cluster measure.
|
1307.3346 | Universal truncation error upper bounds in sampling restoration | cs.IT math.IT | Universal (pointwise uniform and time shifted) truncation error upper bounds
are presented for the Whittaker--Kotel'nikov--Shannon (WKS) sampling
restoration sum for Bernstein function classes $B_{\pi,d}^q,\, q>1,\, d\in
\mathbb N$, when the decay rate of the sampled functions is unknown. The case
of regular sampling is discussed. Extremal properties of related series of sinc
functions are investigated.
|
1307.3360 | Low-complexity Multiclass Encryption by Compressed Sensing | cs.IT math.IT | The idea that compressed sensing may be used to encrypt information from
unauthorised receivers has already been envisioned, but never explored in depth
since its security may seem compromised by the linearity of its encoding
process. In this paper we apply this simple encoding to define a general
private-key encryption scheme in which a transmitter distributes the same
encoded measurements to receivers of different classes, which are provided
partially corrupted encoding matrices and are thus allowed to decode the
acquired signal at provably different levels of recovery quality.
The security properties of this scheme are thoroughly analysed: firstly, the
properties of our multiclass encryption are theoretically investigated by
deriving performance bounds on the recovery quality attained by lower-class
receivers with respect to high-class ones. Then we perform a statistical
analysis of the measurements to show that, although not perfectly secure,
compressed sensing grants some level of security that comes at almost-zero cost
and thus may benefit resource-limited applications.
In addition to this we report some exemplary applications of multiclass
encryption by compressed sensing of speech signals, electrocardiographic tracks
and images, in which quality degradation is quantified as the inability of
some feature extraction algorithms to obtain sensitive information from
suitably degraded signal recoveries.
|
1307.3388 | Dynamic networks reveal key players in aging | cs.CE q-bio.MN | Motivation: Since susceptibility to diseases increases with age, studying
aging gains importance. Analyses of gene expression or sequence data, which
have been indispensable for investigating aging, have been limited to studying
genes and their protein products in isolation, ignoring their connectivities.
However, proteins function by interacting with other proteins, and this is
exactly what biological networks (BNs) model. Thus, analyzing the proteins' BN
topologies could contribute to the understanding of aging. Current methods for
analyzing systems-level BNs deal with their static representations, even though
cells are dynamic. For this reason, and because different data types can give
complementary biological insights, we integrate current static BNs with
aging-related gene expression data to construct dynamic, age-specific BNs.
Then, we apply sensitive measures of topology to the dynamic BNs to study
cellular changes with age.
Results: While global BN topologies do not significantly change with age,
local topologies of a number of genes do. We predict such genes as
aging-related. We demonstrate credibility of our predictions by: 1) observing
significant overlap between our predicted aging-related genes and "ground
truth" aging-related genes; 2) showing that our aging-related predictions group
by functions and diseases that are different than functions and diseases of
genes that are not predicted as aging-related; 3) observing significant overlap
between functions and diseases that are enriched in our aging-related
predictions and those that are enriched in "ground truth" aging-related data;
4) providing evidence that diseases which are enriched in our aging-related
predictions are linked to human aging; and 5) validating all of our
high-scoring novel predictions via manual literature search.
|
1307.3399 | Social Networking Site For Self Portfolio | cs.SI cs.CY | Online social networking concept is a global phenomenon and there are
millions of sites which help in being connected with friends and family. This
project focuses on creating self-portfolios for users, which keeps them
engaged with their skills. Users follow other users to interact and
communicate with them, and can encourage other users' blogs and videos by
clicking the hit button. The functionality of this site is designed to focus on
both professional and academic use. Each user is given a dashboard for
uploading videos and writing blogs.
|