| id | title | categories | abstract |
|---|---|---|---|
1312.0659 | Prioritizing Consumers in Smart Grid: A Game Theoretic Approach | cs.SY cs.GT | This paper proposes an energy management technique for a consumer-to-grid
system in smart grid. The benefit to consumers is made the primary concern to
encourage consumers to participate voluntarily in energy trading with the
central power station (CPS) in situations of energy deficiency. A novel system
model motivating energy trading under the goal of social optimality is
proposed. A single-leader multiple-follower Stackelberg game is then studied to
model the interactions between the CPS and a number of energy consumers (ECs),
and to find optimal distributed solutions for the optimization problem based on
the system model. The CPS is considered as a leader seeking to minimize its
total cost of buying energy from the ECs, and the ECs are the followers who
decide on how much energy they will sell to the CPS for maximizing their
utilities. It is shown that the game, which can be implemented distributedly,
possesses a socially optimal solution, in which the sum of benefits to all
consumers is maximized, as the total cost to the CPS is minimized. Numerical
analysis confirms the effectiveness of the game.
|
1312.0685 | Optimization of zero-delay mappings for distributed coding by
deterministic annealing | cs.IT math.IT | This paper studies the optimization of zero-delay analog mappings in a
network setting that involves distributed coding. The cost surface is known to
be non-convex, and known greedy methods tend to get trapped in poor locally
optimal solutions that depend heavily on initialization. We derive an
optimization algorithm based on the principles of "deterministic annealing", a
powerful global optimization framework that has been successfully employed in
several disciplines, including, in our recent work, a simple zero-delay
analog communications problem. We demonstrate strict superiority over the
descent-based methods, and present example mappings whose properties
lend insight into the workings of the solution and its relations with digital
distributed coding.
|
1312.0700 | Redundancy and Aging of Efficient Multidimensional MDS-Parity Protected
Distributed Storage Systems | cs.IT math.IT | The effect of redundancy on the aging of an efficient Maximum Distance
Separable (MDS) parity-protected distributed storage system that consists of
multidimensional arrays of storage units is explored. In light of
experimental evidence and survey data, this paper develops generalized
expressions for the reliability of array storage systems based on more
realistic time to failure distributions such as Weibull. For instance, a
distributed disk array system is considered in which the array components are
disseminated across the network and are subject to independent failure rates.
Based on these, generalized closed-form hazard rate expressions are derived.
These expressions are extended to estimate the asymptotic reliability
behavior of large scale storage networks equipped with MDS parity-based
protection. Unlike previous studies, a generic hazard rate function is assumed,
a generic MDS code for parity generation is used, and an evaluation of the
implications of adjustable redundancy level for an efficient distributed
storage system is presented. Results of this study are applicable to any
erasure correction code as long as it is accompanied by a suitable structure
and an appropriate encoding/decoding algorithm such that the MDS property is
maintained.
|
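The abstract above builds reliability expressions on Weibull time-to-failure distributions. As a minimal illustrative sketch (not the paper's derivation), the Weibull hazard rate h(t) = (shape/scale) * (t/scale)^(shape-1) is increasing whenever the shape parameter exceeds 1, which is what models aging:

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# A shape parameter > 1 models aging: the hazard rate grows with time,
# unlike the constant hazard of the exponential distribution (shape = 1).
early = weibull_hazard(1.0, shape=2.0, scale=1.0)
late = weibull_hazard(2.0, shape=2.0, scale=1.0)
assert late > early
```

A shape parameter below 1 instead models infant mortality, where the hazard rate decreases over time.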
1312.0718 | Electromagnetic Lens-focusing Antenna Enabled Massive MIMO: Performance
Improvement and Cost Reduction | cs.IT math.IT | Massive multiple-input multiple-output (MIMO) techniques have been recently
advanced to tremendously improve the performance of wireless communication
networks. However, the use of very large antenna arrays at the base stations
(BSs) brings new issues, such as the significantly increased hardware and
signal processing costs. In order to reap the enormous gain of massive MIMO and
yet reduce its cost to an affordable level, this paper proposes a novel system
design by integrating an electromagnetic (EM) lens with the large antenna
array, termed the EM-lens enabled MIMO. The EM lens has the capability of
focusing the power of an incident wave to a small area of the antenna array,
while the location of the focal area varies with the angle of arrival (AoA) of
the wave. Therefore, in practical scenarios where the arriving signals from
geographically separated users have different AoAs, the EM-lens enabled system
provides two new benefits, namely energy focusing and spatial interference
rejection. By taking into account the effects of imperfect channel estimation
via pilot-assisted training, in this paper we analytically show that the
average received signal-to-noise ratio (SNR) in both the single-user and
multiuser uplink transmissions can be strictly improved by the EM-lens enabled
system. Furthermore, we demonstrate that the proposed design makes it possible
to considerably reduce the hardware and signal processing costs with only
slight degradation in performance. To achieve this, two complexity/cost reduction
schemes are proposed, which are small-MIMO processing with parallel receiver
filtering applied over subgroups of antennas to reduce the computational
complexity, and channel covariance based antenna selection to reduce the
required number of radio frequency (RF) chains. Numerical results are provided
to corroborate our analysis.
|
1312.0728 | A Backstepping Control Method for a Nonlinear Process - Two
Coupled-Tanks | cs.SY | The aim of this work is to design a backstepping level-control strategy for
a coupled-tanks system. The coupled-tanks plant is a component of the water
treatment system of power plants. The nonlinear model of the process was
designed and implemented in Matlab-Simulink. The advantage of the proposed
control method is that it takes the nonlinearity into account, which is useful
for stabilization and allows a larger operating range with specified
performance. The backstepping control method is computed using the nonlinear
model of the system, and its performance was validated on the physical plant.
|
1312.0735 | Use of the C4.5 machine learning algorithm to test a clinical
guideline-based decision support system | cs.AI | Well-designed medical decision support systems (DSS) have been shown to
improve health care quality. However, before they can be used in real clinical
situations, these systems must be extensively tested, to ensure that they
conform to the clinical guidelines (CG) on which they are based. Existing
methods cannot be used for the systematic testing of all possible test cases.
We describe here a new exhaustive dynamic verification method. In this method,
the DSS is considered to be a black box, and the Quinlan C4.5 algorithm is used
to build a decision tree from an exhaustive set of DSS input vectors and
outputs. This method was successfully used for the testing of a medical DSS
relating to chronic diseases: the ASTI critiquing module for type 2 diabetes.
|
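The exhaustive dynamic verification idea described above can be sketched as follows. This is a hypothetical toy, not the ASTI module: `dss` stands in for the black-box decision support system and `guideline` for the clinical guideline it must conform to, both with made-up rules; the paper additionally summarizes the resulting input/output pairs with a C4.5 decision tree, which is omitted here.

```python
from itertools import product

def dss(hba1c_high, metformin_tolerated):
    """Stand-in for the black-box DSS (hypothetical rules)."""
    if hba1c_high and metformin_tolerated:
        return "start_metformin"
    return "lifestyle_advice"

def guideline(hba1c_high, metformin_tolerated):
    """Stand-in for the clinical guideline the DSS should follow."""
    if hba1c_high and metformin_tolerated:
        return "start_metformin"
    return "lifestyle_advice"

# Exhaustive dynamic verification: try every possible input vector and
# record any case where the DSS output deviates from the guideline.
discrepancies = [v for v in product([False, True], repeat=2)
                 if dss(*v) != guideline(*v)]
```

With only binary inputs the input space is tiny; the value of the paper's approach is that the learned decision tree gives reviewers a compact, readable summary of the DSS behavior even when the input space is large.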
1312.0736 | A generic system for critiquing physicians' prescriptions: usability,
satisfaction and lessons learnt | cs.AI | Clinical decision support systems have been developed to help physicians to
take clinical guidelines into account during consultations. The ASTI critiquing
module is one such system; it provides the physician with automatic criticisms
when a drug prescription does not follow the guidelines. It was initially
developed for hypertension and type 2 diabetes, but is designed to be generic
enough for application to all chronic diseases. We present here the results of
usability and satisfaction evaluations for the ASTI critiquing module, obtained
with GPs for a newly implemented guideline concerning dyslipidaemia, and we
discuss the lessons learnt and the difficulties encountered when building a
generic DSS for critiquing physicians' prescriptions.
|
1312.0742 | Parallel Deferred Update Replication | cs.DC cs.DB | Deferred update replication (DUR) is an established approach to implementing
highly efficient and available storage. While the throughput of read-only
transactions scales linearly with the number of deployed replicas in DUR, the
throughput of update transactions experiences limited improvements as replicas
are added. This paper presents Parallel Deferred Update Replication (P-DUR), a
variation of classical DUR that scales both read-only and update transactions
with the number of cores available in a replica. In addition to introducing the
new approach, we describe its full implementation and compare its performance
to classical DUR and to Berkeley DB, a well-known standalone database.
|
1312.0750 | A semi-automatic semantic method for mapping SNOMED CT concepts to VCM
Icons | cs.AI cs.HC | VCM (Visualization of Concept in Medicine) is an iconic language for
representing key medical concepts by icons. However, the use of this language
with reference terminologies, such as SNOMED CT, will require the mapping of
its icons to the terms of these terminologies. Here, we present and evaluate a
semi-automatic semantic method for the mapping of SNOMED CT concepts to VCM
icons. Both SNOMED CT and VCM are compositional in nature; SNOMED CT is
expressed in description logic and VCM semantics are formalized in an OWL
ontology. The proposed method involves the manual mapping of a limited number
of underlying concepts from the VCM ontology, followed by automatic generation
of the rest of the mapping. We applied this method to the clinical findings of
the SNOMED CT CORE subset, and 100 randomly-selected mappings were evaluated by
three experts. The results obtained were promising, with 82 of the SNOMED CT
concepts correctly linked to VCM icons according to the experts. Most of the
errors were easy to fix.
|
1312.0760 | Template-Based Active Contours | cs.CV | We develop a generalized active contour formalism for image segmentation
based on shape templates. The shape template is subjected to a restricted
affine transformation (RAT) in order to segment the object of interest. RAT
allows for translation, rotation, and scaling, which give a total of five
degrees of freedom. The proposed active contour comprises an inner and outer
contour pair, which are closed and concentric. The active contour energy is a
contrast function defined based on the intensities of pixels that lie inside
the inner contour and those that lie in the annulus between the inner and outer
contours. We show that the contrast energy functional is optimal under certain
conditions. The optimal RAT parameters are computed by maximizing the contrast
function using a gradient descent optimizer. We show that the calculations are
made efficient through the use of Green's theorem. The proposed formalism is
capable of handling a variety of shapes because for a chosen template,
optimization is carried out with respect to the RAT parameters only. The proposed
formalism is validated on multiple images to show robustness to Gaussian and
Poisson noise, to initialization, and to partial loss of structure in the
object to be segmented.
|
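The restricted affine transformation (RAT) mentioned above, with translation (2 parameters), rotation (1), and anisotropic scaling (2), can be sketched directly. The composition order here (scale, then rotate, then translate) is an assumption of this illustration, not necessarily the paper's convention:

```python
import math

def rat_transform(points, tx, ty, theta, sx, sy):
    """Apply a restricted affine transform with 5 degrees of freedom:
    anisotropic scaling (sx, sy), rotation theta, translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * sx * x - s * sy * y + tx,
             s * sx * x + c * sy * y + ty) for x, y in points]

# A unit-square template, scaled by 2 in x, rotated 90 degrees, shifted.
template = [(0, 0), (1, 0), (1, 1), (0, 1)]
warped = rat_transform(template, tx=2, ty=3, theta=math.pi / 2, sx=2, sy=1)
```

In the segmentation setting, an optimizer would adjust the five parameters `(tx, ty, theta, sx, sy)` so that the warped template maximizes the contrast energy between the inner contour and the annulus.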
1312.0786 | Image Representation Learning Using Graph Regularized Auto-Encoders | cs.LG | We consider the problem of image representation for the tasks of unsupervised
learning and semi-supervised learning. In these tasks, the raw image
vectors may not adequately represent the intrinsic structures of the data,
owing to their highly dense feature space. To overcome this problem, the raw
image vectors should be mapped to a proper representation space which can
capture the latent structure of the original data and represent the data
explicitly for further learning tasks such as clustering.
Inspired by recent research on deep neural networks and representation
learning, in this paper we introduce the multiple-layer auto-encoder into
image representation. We also apply the locally invariant idea to our
auto-encoder based image representation and propose a novel method, called
Graph regularized Auto-Encoder (GAE). GAE can provide a compact
representation which uncovers the hidden semantics and simultaneously
respects the intrinsic geometric structure.
Extensive experiments on image clustering show encouraging results of the
proposed algorithm in comparison to state-of-the-art algorithms on
real-world cases.
|
1312.0788 | A compact formula for the derivative of a 3-D rotation in exponential
coordinates | cs.CV math.OC | We present a compact formula for the derivative of a 3-D rotation matrix with
respect to its exponential coordinates. A geometric interpretation of the
resulting expression is provided, as well as its agreement with other
less-compact but better-known formulas. To the best of our knowledge, this
simpler formula does not appear anywhere in the literature. We hope that this
more compact expression will alleviate the common pressure to resort
reluctantly to alternative representations in various computational
applications simply as a means of avoiding the complexity of differential
analysis in exponential coordinates.
|
1312.0790 | Test Set Selection using Active Information Acquisition for Predictive
Models | cs.AI cs.LG stat.ML | In this paper, we consider active information acquisition when the prediction
model is meant to be applied on a targeted subset of the population. The goal
is to label a pre-specified fraction of customers in the target or test set by
iteratively querying for information from the non-target or training set. The
number of queries is limited by an overall budget. The problem arises in the
context of two rather disparate applications, banking and medical diagnosis;
we pose the active information acquisition problem as a constrained
optimization problem.
We propose two greedy iterative algorithms for solving the above problem. We
conduct experiments with synthetic data and compare results of our proposed
algorithms with a few other baseline approaches. The experimental results show
that our proposed approaches perform better than the baseline schemes.
|
1312.0809 | Automatic White Blood Cell Measuring Aid for Medical Diagnosis | cs.CY cs.CV | Blood-related invasive pathological investigations play a major role in
the diagnosis of diseases. However, in India and other third-world countries
there is not enough pathology infrastructure for medical diagnosis. Moreover,
most remote areas of these countries have neither pathologists nor
physicians. Telemedicine partially addresses the lack of physicians, but the
pathological investigation infrastructure cannot be integrated with
telemedicine technology. The objective of this work is to automate the blood-related
pathological investigation process. Detection of different white blood cells
has been automated in this work. This system can be deployed in the remote area
as a supporting aid for telemedicine technology and only high school education
is sufficient to operate it. The proposed system achieved 97.33 percent
accuracy for the samples collected to test this system.
|
1312.0821 | Delay-Robustness in Distributed Control of Timed Discrete-Event Systems
Based on Supervisor Localization | cs.SY | Recently we studied communication delay in distributed control of untimed
discrete-event systems based on supervisor localization. We proposed a property
called delay-robustness: the overall system behavior controlled by distributed
controllers with communication delay is logically equivalent to its delay-free
counterpart. In this paper we extend our previous work to timed discrete-event
systems, in which communication delays are counted by a special clock event
{\it tick}. First, we propose a timed channel model and define timed
delay-robustness; for the latter, a polynomial verification procedure is
presented. Next, if the delay-robust property does not hold, we introduce
bounded delay-robustness, and present an algorithm to compute the maximal delay
bound (measured by number of ticks) for transmitting a channeled event.
Finally, we demonstrate delay-robustness on the example of an under-load
tap-changing transformer.
|
1312.0825 | FRANTIC: A Fast Reference-based Algorithm for Network Tomography via
Compressive Sensing | cs.NI cs.IT math.IT | We study the problem of link and node delay estimation in undirected networks
when at most k out of n links or nodes in the network are congested. Our
approach relies on end-to-end measurements of path delays across pre-specified
paths in the network. We present a class of algorithms that we call FRANTIC.
The FRANTIC algorithms are motivated by compressive sensing; however, unlike
traditional compressive sensing, the measurement design here is constrained by
the network topology and the matrix entries are constrained to be positive
integers. A key component of our design is a new compressive sensing algorithm
SHO-FA-INT that is related to the prior SHO-FA algorithm for compressive
sensing, but unlike SHO-FA, the matrix entries here are drawn from the set of
integers {0, 1, ..., M}. We show that O(k log n /log M) measurements suffice
both for SHO-FA-INT and FRANTIC. Further, we show that the computational
complexity of decoding is also O(k log n/log M) for each of these algorithms.
Finally, we look at efficient constructions of the measurement operations
through Steiner Trees.
|
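The measurement complexity quoted above, O(k log n / log M), shows how enlarging the integer alphabet {0, 1, ..., M} trades off against the number of end-to-end path measurements. A small sketch of that scaling, with a placeholder constant factor `c` that is not taken from the paper:

```python
import math

def measurement_budget(k, n, M, c=1.0):
    """Illustrative O(k log n / log M) measurement count for sparsity k,
    n links/nodes, and integer alphabet {0, ..., M}; c is a placeholder."""
    return math.ceil(c * k * math.log(n) / math.log(M))

# A larger alphabet M reduces the number of path measurements required
# for the same sparsity level k and network size n.
few = measurement_budget(k=5, n=10_000, M=64)
many = measurement_budget(k=5, n=10_000, M=2)
```

In practice the achievable M is constrained by how many times a Steiner-tree-based path can revisit a link, so the alphabet size is itself a design parameter rather than a free choice.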
1312.0841 | Combining Simulated Annealing and Monte Carlo Tree Search for Expression
Simplification | cs.AI | In many applications of computer algebra large expressions must be simplified
to make repeated numerical evaluations tractable. Previous works presented
heuristically guided improvements, e.g., for Horner schemes. The remaining
expression is then further reduced by common subexpression elimination. A
recent approach successfully applied a relatively new algorithm, Monte Carlo
Tree Search (MCTS) with UCT as the selection criterion, to find better variable
orderings. Yet, this approach leaves room for further improvement, since it is
sensitive to the so-called exploration-exploitation constant $C_p$ and the
number of tree updates $N$. In this paper we propose a new selection criterion
called Simulated Annealing UCT (SA-UCT) that has a dynamic
exploration-exploitation parameter, which decreases with the iteration number
$i$ and thus reduces the importance of exploration over time. First, we provide
an intuitive explanation in terms of the exploration-exploitation behavior of
the algorithm. Then, we test our algorithm on three large expressions of
different origins. We observe that SA-UCT widens the interval of good initial
values of $C_p$ for which the best results are achieved. The improvement is
large (more than tenfold) and facilitates the selection of an appropriate $C_p$.
|
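The SA-UCT idea above, a UCT selection rule whose exploration-exploitation constant decays with the iteration number, can be sketched as follows. The linear decay schedule and the exact UCT form used here are assumptions of this illustration, not taken verbatim from the paper:

```python
import math

def annealed_cp(cp0, i, n_iterations):
    """Exploration constant that decreases linearly with iteration i
    (an assumed schedule for illustration)."""
    return cp0 * (n_iterations - i) / n_iterations

def sa_uct(child_mean, child_visits, parent_visits, cp_i):
    """A standard UCT score using the iteration-dependent constant cp_i."""
    return child_mean + 2 * cp_i * math.sqrt(
        2 * math.log(parent_visits) / child_visits)

# Early iterations favour exploration; late iterations favour exploitation.
early = sa_uct(0.5, 10, 100, annealed_cp(1.0, i=0, n_iterations=1000))
late = sa_uct(0.5, 10, 100, annealed_cp(1.0, i=900, n_iterations=1000))
```

As `cp_i` shrinks, the bonus term vanishes and selection is driven almost entirely by the empirical mean reward, which is exactly the annealing behavior the abstract describes.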
1312.0852 | Feature Extraction of Human Lip Prints | cs.CV | Methods have been developed for identifying humans by recognizing lip prints.
Human lips have a number of elevation and depression features called lip
prints, and the examination of lip prints is referred to as cheiloscopy. The
lip prints of each human being are unique, like many other human features. In
this paper the lip print is first smoothed using a Gaussian filter. Next, a
Sobel edge detector and a Canny edge detector are used to detect the vertical and
horizontal groove pattern in the lip. This method of identification will be
useful both in criminal forensics and in personal identification.
|
1312.0860 | Community Specific Temporal Topic Discovery from Social Media | cs.SI physics.soc-ph | Studying temporal dynamics of topics in social media is very useful to
understand online user behaviors. Most of the existing work on this subject
usually monitors the global trends, ignoring variation among communities. Since
users from different communities tend to have varying tastes and interests,
capturing community-level temporal change can improve the understanding and
management of social content. Additionally, it can further facilitate the
applications such as community discovery, temporal prediction and online
marketing. However, this kind of extraction becomes challenging due to the
intricate interactions between community and topic, and intractable
computational complexity.
In this paper, we present a unified solution for community-level topic
dynamics extraction. A probabilistic model, CosTot (Community Specific
Topics-over-Time) is proposed to uncover the hidden topics and communities, as
well as capture community-specific temporal dynamics. Specifically, CosTot
considers text, time, and network information simultaneously, and effectively
discovers the interactions between community and topic over time. We then
discuss the approximate inference implementation to enable scalable computation
of model parameters, especially for large social data. Based on this, the
application layer support for multi-scale temporal analysis and community
exploration is also investigated.
We conduct extensive experimental studies on a large real microblog dataset,
and demonstrate the superiority of the proposed model on the tasks of time stamp
prediction, link prediction and topic perplexity.
|
1312.0882 | On the Throughput of Hybrid-ARQ under QoS Constraints | cs.IT math.IT | Hybrid Automatic Repeat Request (HARQ) is a high performance communication
protocol, leading to effective use of the wireless channel and the resources
with only limited feedback about the channel state information (CSI) to the
transmitter. In this paper, the throughput of HARQ with incremental redundancy
(IR) and fixed transmission rate is studied in the presence of quality of
service (QoS) constraints imposed as limitations on buffer overflow
probabilities. In particular, tools from the theory of renewal processes and
stochastic network calculus are employed to characterize the maximum arrival
rates that can be supported by the wireless channel when HARQ-IR is adopted.
Effective capacity is employed as the throughput metric and a closed-form
expression for the effective capacity of HARQ-IR is determined for small values
of the QoS exponent. The impact of the fixed transmission rate, QoS
constraints, and hard deadline limitations on the throughput is investigated
and comparisons with regular ARQ operation are provided.
|
1312.0912 | Evolution of Communities with Focus on Stability | cs.SI cs.CY physics.soc-ph | Community detection is an important tool for analyzing the social graph of
mobile phone users. The problem of finding communities in static graphs has
been widely studied. However, since mobile social networks evolve over time,
static graph algorithms are not sufficient. To be useful in practice (e.g. when
used by a telecom analyst), the stability of the partitions becomes critical.
We tackle this particular use case in this paper: tracking evolution of
communities in dynamic scenarios with focus on stability. We propose two
modifications to a widely used static community detection algorithm: we
introduce fixed nodes and preferential attachment to pre-existing communities.
We then describe experiments to study the stability and quality of the
resulting partitions on real-world social networks, represented by monthly call
graphs for millions of subscribers.
|
1312.0914 | Characterizing the Rate Region of the (4,3,3) Exact-Repair Regenerating
Codes | cs.IT math.IT | Exact-repair regenerating codes are considered for the case (n,k,d)=(4,3,3),
for which a complete characterization of the rate region is provided. This
characterization answers in the affirmative the open question whether there
exists a non-vanishing gap between the optimal bandwidth-storage tradeoff of
the functional-repair regenerating codes (i.e., the cut-set bound) and that of
the exact-repair regenerating codes. To obtain an explicit information
theoretic converse, a computer-aided proof (CAP) approach based on primal and
dual relations is developed. This CAP approach extends Yeung's linear
programming (LP) method, which was previously only used on information
theoretic problems with a few random variables due to the exponential growth of
the number of variables in the corresponding LP problem. The symmetry in the
exact-repair regenerating code problem allows an effective reduction of the
number of variables, and together with several other problem-specific
reductions, the LP problem is reduced to a manageable scale. For the
achievability, only one non-trivial corner point of the rate region needs to be
addressed in this case, for which an explicit binary code construction is
given.
|
1312.0925 | Understanding Alternating Minimization for Matrix Completion | cs.LG cs.DS stat.ML | Alternating Minimization is a widely used and empirically successful
heuristic for matrix completion and related low-rank optimization problems.
Theoretical guarantees for Alternating Minimization have been hard to come by
and are still poorly understood. This is in part because the heuristic is
iterative and non-convex in nature. We give a new algorithm based on
Alternating Minimization that provably recovers an unknown low-rank matrix from
a random subsample of its entries under a standard incoherence assumption. Our
results reduce the sample size requirements of the Alternating Minimization
approach by at least a quartic factor in the rank and the condition number of
the unknown matrix. These improvements apply even if the matrix is only close
to low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in
the dimension of the matrix and, in a broad range of parameters, gives the
strongest sample bounds among all subquadratic time algorithms that we are
aware of.
Underlying our work is a new robust convergence analysis of the well-known
Power Method for computing the dominant singular vectors of a matrix. This
viewpoint leads to a conceptually simple understanding of Alternating
Minimization. In addition, we contribute a new technique for controlling the
coherence of intermediate solutions arising in iterative algorithms based on a
smoothed analysis of the QR factorization. These techniques may be of interest
beyond their application here.
|
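As a toy illustration of alternating minimization for matrix completion (a rank-1 special case, not the authors' algorithm, which additionally relies on a smoothed analysis of the QR factorization), one can alternate exact least-squares updates of the two factors over the observed entries only:

```python
def alt_min_rank1(observed, n_rows, n_cols, iters=200):
    """Alternating minimization for a rank-1 matrix u v^T, given a dict
    {(i, j): value} of observed entries."""
    u = [1.0] * n_rows
    v = [1.0] * n_cols
    for _ in range(iters):
        for i in range(n_rows):  # fix v, least-squares update of u[i]
            num = sum(val * v[j] for (r, j), val in observed.items() if r == i)
            den = sum(v[j] ** 2 for (r, j), _ in observed.items() if r == i)
            u[i] = num / den if den else u[i]
        for j in range(n_cols):  # fix u, least-squares update of v[j]
            num = sum(val * u[i] for (i, c), val in observed.items() if c == j)
            den = sum(u[i] ** 2 for (i, c), _ in observed.items() if c == j)
            v[j] = num / den if den else v[j]
    return u, v

# Rank-1 ground truth u = (1, 2), v = (3, 4, 5); entry (1, 2) is missing.
obs = {(0, 0): 3, (0, 1): 4, (0, 2): 5, (1, 0): 6, (1, 1): 8}
u, v = alt_min_rank1(obs, 2, 3)
```

On this noiseless rank-1 instance the product `u[1] * v[2]` converges to the missing entry's true value of 10, even though each factor is only recovered up to a scale factor.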
1312.0932 | Joint Source-Channel Coding with Time-Varying Channel and
Side-Information | cs.IT math.IT | Transmission of a Gaussian source over a time-varying Gaussian channel is
studied in the presence of time-varying correlated side information at the
receiver. A block fading model is considered for both the channel and the side
information, whose states are assumed to be known only at the receiver. The
optimality of separate source and channel coding in terms of average end-to-end
distortion is shown when the channel is static while the side information state
follows a discrete or a continuous and quasiconcave distribution. When both the
channel and side information states are time-varying, separate source and
channel coding is suboptimal in general. A partially informed encoder lower
bound is studied by providing the channel state information to the encoder.
Several achievable transmission schemes are proposed based on uncoded
transmission, separate source and channel coding, joint decoding as well as
hybrid digital-analog transmission. Uncoded transmission is shown to be optimal
for a class of continuous and quasiconcave side information state
distributions, while the channel gain may have an arbitrary distribution. To
the best of our knowledge, this is the first example in which the uncoded
transmission achieves the optimal performance thanks to the time-varying nature
of the states, while it is suboptimal in the static version of the same
problem. Then, the optimal \emph{distortion exponent}, that quantifies the
exponential decay rate of the expected distortion in the high SNR regime, is
characterized for Nakagami distributed channel and side information states, and
it is shown to be achieved by hybrid digital-analog and joint decoding schemes
in certain cases, illustrating the suboptimality of pure digital or analog
transmission in general.
|
1312.0938 | Epidemic Thresholds with External Agents | cs.SI physics.soc-ph | We study the effect of external infection sources on phase transitions in
epidemic processes. In particular, we consider an epidemic spreading on a
network via the SIS/SIR dynamics, which in addition is aided by external agents
- sources unconstrained by the graph, but possessing a limited infection rate
or virulence. Such a model captures many existing models of externally aided
epidemics, and finds use in many settings - epidemiology, marketing and
advertising, network robustness, etc. We provide a detailed characterization of
the impact of external agents on epidemic thresholds. In particular, for the
SIS model, we show that any external infection strategy with constant virulence
either fails to significantly affect the lifetime of an epidemic, or at best,
sustains the epidemic for a lifetime which is polynomial in the number of
nodes. On the other hand, a random external-infection strategy, with rate
increasing linearly in the number of infected nodes, succeeds under some
conditions to sustain an exponential epidemic lifetime. We obtain similar sharp
thresholds for the SIR model, and discuss the relevance of our results in a
variety of settings.
|
1312.0940 | Medical Aid for Automatic Detection of Malaria | cs.CY cs.CV | The analysis and counting of blood cells in a microscope image can provide
useful information concerning the health of a person. In particular,
morphological analysis of red blood cell deformations can effectively detect
important diseases like malaria. Blood images, obtained by a microscope
coupled with a digital camera, are analyzed by the computer for diagnosis or
can be transmitted to clinical centers more easily than liquid blood samples.
Automatic analysis system for the presence of Plasmodium in microscopic image
of blood can greatly help pathologists and doctors that typically inspect blood
films manually. Unfortunately, the analysis made by human experts is neither
rapid nor yet standardized, owing to the operators' varying capabilities and
tiredness. The
paper shows how effectively and accurately it is possible to identify the
Plasmodium in the blood film. In particular, the paper presents how to enhance
the microscopic image and filter out the unnecessary segments followed by the
threshold based segmentation and recognize the presence of Plasmodium. The
proposed system can be deployed in the remote area as a supporting aid for
telemedicine technology and only basic training is sufficient to operate it.
This system achieved more than 98 percent accuracy for the samples collected
to test this system.
|
1312.0972 | Rank-Modulation Rewrite Coding for Flash Memories | cs.IT math.IT | The current flash memory technology focuses on the cost minimization of its
static storage capacity. However, the resulting approach supports a relatively
small number of program-erase cycles. This technology is effective for consumer
devices (e.g., smartphones and cameras) where the number of program-erase
cycles is small. However, it is not economical for enterprise storage systems
that require a large number of lifetime writes. The proposed approach in this
paper for alleviating this problem consists of the efficient integration of two
key ideas: (i) improving reliability and endurance by representing the
information using relative values via the rank modulation scheme and (ii)
increasing the overall (lifetime) capacity of the flash device via rewriting
codes, namely, performing multiple writes per cell before erasure. This paper
presents a new coding scheme that combines rank modulation with rewriting. The
key benefits of the new scheme include: (i) the ability to store close to 2
bits per cell on each write with minimal impact on the lifetime of the memory,
and (ii) efficient encoding and decoding algorithms that make use of
capacity-achieving write-once-memory (WOM) codes that were proposed recently.
|
1312.0976 | Multilinguals and Wikipedia Editing | cs.CY cs.CL cs.DL cs.SI physics.soc-ph | This article analyzes one month of edits to Wikipedia in order to examine the
role of users editing multiple language editions (referred to as multilingual
users). Such multilingual users may serve an important function in diffusing
information across different language editions of the encyclopedia, and prior
work has suggested this could reduce the level of self-focus bias in each
edition. This study finds multilingual users are much more active than their
single-edition (monolingual) counterparts. They are found in all language
editions, but smaller-sized editions with fewer users have a higher percentage
of multilingual users than larger-sized editions. About a quarter of
multilingual users always edit the same articles in multiple languages, while
just over 40% of multilingual users edit different articles in different
languages. When non-English users do edit a second language edition, that
edition is most frequently English. Nonetheless, several regional and
linguistic cross-editing patterns are also present.
|
1312.1003 | High Throughput Virtual Screening with Data Level Parallelism in
Multi-core Processors | cs.AI cs.PF | Improving the throughput of molecular docking, a computationally intensive
phase of the virtual screening process, is a highly sought area of research,
since it carries significant weight in the drug-design process. With such
improvements, the world might find cures for currently incurable diseases such
as HIV/AIDS and cancer sooner. The approach presented in this paper is to
utilize a multi-core environment to introduce Data Level Parallelism (DLP) into
Autodock Vina, a widely used molecular docking software package. Autodock Vina
already exploits Instruction Level Parallelism (ILP) in multi-core environments
and is therefore optimized for them. Nevertheless, the results we have obtained
clearly show that our approach enhances the throughput of the already optimized
software by more than six times. This will dramatically reduce the time
consumed by the lead-identification phase of drug design, particularly given
the current shift in processor technology from multi-core to many-core.
Therefore, we believe that the contribution of this project will make it
possible to expand the number of small molecules docked against a drug target
and improve the chances of designing drugs for incurable diseases.
|
1312.1004 | Composite Channel Estimation in Massive MIMO Systems | cs.IT math.IT | We consider a multiuser (MU) multiple-input multiple-output (MIMO)
time-division duplexing (TDD) system in which the base station (BS) is equipped
with a large number of antennas for communicating with single-antenna mobile
users. In such a system the BS has to estimate the channel state information
(CSI) that includes large-scale fading coefficients (LSFCs) and small-scale
fading coefficients (SSFCs) via uplink pilots. Although information about the
LSFCs is indispensable in a MU-MIMO or distributed MIMO system, it is
usually ignored or assumed perfectly known when treating the MIMO CSI
estimation problem. We take advantage of the large number of spatial samples at a massive
MIMO BS to derive accurate LSFC estimates in the absence of SSFC information.
With estimated LSFCs, SSFCs are then obtained using a rank-reduced (RR) channel
model which in essence transforms the channel vector into a lower dimension
representation.
We analyze the mean squared error (MSE) performance of the proposed composite
channel estimator and prove that the separable angle of arrival (AoA)
information provided by the RR model is beneficial for enhancing the
estimator's performance, especially when the angle spread of the uplink signal
is not too large.
|
1312.1020 | High-quality Image Restoration from Partial Mixed Adaptive-Random
Measurements | cs.IT math.IT | A novel framework to construct an efficient sensing (measurement) matrix,
called mixed adaptive-random (MAR) matrix, is introduced for directly acquiring
a compressed image representation. The mixed sampling (sensing) procedure
hybridizes adaptive edge measurements extracted from a low-resolution image
with uniform random measurements predefined for the high-resolution image to be
recovered. The mixed sensing matrix seamlessly captures important information
of an image, and meanwhile approximately satisfies the restricted isometry
property. To recover the high-resolution image from MAR measurements, the total
variation algorithm based on the compressive sensing theory is employed for
solving the Lagrangian regularization problem. Both peak signal-to-noise ratio
and structural similarity results demonstrate that the MAR sensing framework
achieves much better recovery performance than a completely random sensing one.
The work is particularly helpful for high-performance and low-cost data
acquisition.
|
1312.1024 | Reliability-output Decoding of Tail-biting Convolutional Codes | cs.IT math.IT | We present extensions to Raghavan and Baum's reliability-output Viterbi
algorithm (ROVA) to accommodate tail-biting convolutional codes. These
tail-biting reliability-output algorithms compute the exact word-error
probability of the decoded codeword after first calculating the posterior
probability of the decoded tail-biting codeword's starting state. One approach
employs a state-estimation algorithm that selects the maximum a posteriori
state based on the posterior distribution of the starting states. Another
approach is an approximation to the exact tail-biting ROVA that estimates the
word-error probability. The computational complexity of each approach is
compared in detail. The presented reliability-output algorithms
apply to both feedforward and feedback tail-biting convolutional encoders.
These tail-biting reliability-output algorithms are suitable for use in
reliability-based retransmission schemes with short blocklengths, in which
terminated convolutional codes would introduce rate loss.
|
1312.1031 | Analysis of Distributed Stochastic Dual Coordinate Ascent | cs.DC cs.LG | In \citep{Yangnips13}, the author presented distributed stochastic dual
coordinate ascent (DisDCA) algorithms for solving large-scale regularized loss
minimization. Extraordinary performance has been observed and reported for
the well-motivated updates, referred to as the practical updates, compared to
the naive updates. However, no serious analysis has been provided to explain
these updates and their convergence rates. In this paper, we bridge the
gap by providing a theoretical analysis of the convergence rates of the
practical DisDCA algorithm. Our analysis, aided by empirical studies, shows
that increasing the number of dual updates at each iteration can yield an
exponential speed-up in convergence. This result justifies the
superior performances of the practical DisDCA as compared to the naive variant.
As a byproduct, our analysis also reveals the convergence behavior of the
one-communication DisDCA.
|
1312.1037 | Blind Fractional Interference Alignment | cs.IT math.IT | Fractional Interference Alignment (FIA) is a transmission scheme which
achieves any value in [0,1] for the Symbols transmitted per Antenna per
Channel use (SpAC). FIA was designed in [1] specifically for Finite Alphabet
(FA) signals, under the constraint that the Minimum Distance (MD) detector is
used at all the receivers. Similar to classical interference alignment, the FIA
precoder also needs perfect channel state information at all the transmitters
(CSIT). In this work, a novel Blind Fractional Interference Alignment (B-FIA)
scheme is introduced, where the basic assumption is that CSIT is not available.
We consider two popular channel models, namely: Broadcast channel, and
Interference channel. For these two channel models, the maximum achievable
value of SpAC satisfying the constraints of the MD detector is obtained, but
with no CSIT, and also a precoder design is provided to obtain any value of
SpAC in the achievable range.
Further, the precoder structure provided has one distinct advantage:
interference channel state information at the receiver (I-CSIR) is not needed,
when all the transmitters and receivers are equipped with one antenna each.
When two or more antennas are used at both ends, I-CSIR must be available to
obtain the maximum achievable value of SpAC. The receiver designs for both the
Minimum Distance and the Maximum Likelihood (ML) decoders are discussed, where
the interference statistics are estimated from the received signal samples.
Simulation results of the B-FIA show that the ML decoder with estimated
statistics achieves a significantly better error rate performance when compared
to the MD decoder with known statistics, since the MD decoder models the
interference-plus-noise term as colored Gaussian noise.
|
1312.1038 | Efficient Multi-Robot Motion Planning for Unlabeled Discs in Simple
Polygons | cs.CG cs.RO | We consider the following motion-planning problem: we are given $m$ unit
discs in a simple polygon with $n$ vertices, each at their own start position,
and we want to move the discs to a given set of $m$ target positions. Contrary
to the standard (labeled) version of the problem, each disc is allowed to be
moved to any target position, as long as in the end every target position is
occupied. We show that this unlabeled version of the problem can be solved in
$O(n\log n+mn+m^2)$ time, assuming that the start and target positions are at
least some minimal distance from each other. This is in sharp contrast to the
standard (labeled) and more general multi-robot motion-planning problem for
discs moving in a simple polygon, which is known to be strongly NP-hard.
|
1312.1053 | Large deviations, Basic information theorem for fitness preferential
attachment random networks | cs.IT cs.SI math.IT math.PR | For fitness preferential attachment random networks, we define the empirical
degree and pair measure, which counts the number of vertices of a given degree
and the number of edges with given fitness values, and the sample-path empirical degree
distribution. For the empirical degree and pair distribution for the fitness
preferential attachment random networks, we find a large deviation upper bound.
From this result we obtain a weak law of large numbers for the empirical degree
and pair distribution, and the basic information theorem or an asymptotic
equipartition property for fitness preferential attachment random networks.
|
1312.1054 | Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures
of Gaussians | cs.DS cs.LG math.PR math.ST stat.TH | We provide an algorithm for properly learning mixtures of two
single-dimensional Gaussians without any separability assumptions. Given
$\tilde{O}(1/\varepsilon^2)$ samples from an unknown mixture, our algorithm
outputs a mixture that is $\varepsilon$-close in total variation distance, in
time $\tilde{O}(1/\varepsilon^5)$. Our sample complexity is optimal up to
logarithmic factors, and significantly improves upon both Kalai et al., whose
algorithm has a prohibitive dependence on $1/\varepsilon$, and Feldman et al.,
whose algorithm requires bounds on the mixture parameters and depends
pseudo-polynomially in these parameters.
One of our main contributions is an improved and generalized algorithm for
selecting a good candidate distribution from among competing hypotheses.
Namely, given a collection of $N$ hypotheses containing at least one candidate
that is $\varepsilon$-close to an unknown distribution, our algorithm outputs a
candidate which is $O(\varepsilon)$-close to the distribution. The algorithm
requires ${O}(\log{N}/\varepsilon^2)$ samples from the unknown distribution and
${O}(N \log N/\varepsilon^2)$ time, which improves previous such results (such
as the Scheff\'e estimator) from a quadratic dependence of the running time on
$N$ to quasilinear. Given the wide use of such results for the purpose of
hypothesis selection, our improved algorithm implies immediate improvements to
any such use.
|
1312.1060 | An algebraic study of linkages with helical joints | cs.RO math.AG | Methods from algebra and algebraic geometry have been used in various ways to
study linkages in kinematics. These methods have failed so far for the study of
linkages with helical joints (joints with screw motion), because of the
presence of some non-algebraic relations. In this article, we explore a
delicate reduction of some analytic equations in kinematics to algebraic
questions via a theorem of Ax. As an application, we give a classification of
mobile closed 5-linkages with revolute, prismatic, and helical joints.
|
1312.1075 | A Necessary and Sufficient Condition for the Existence of Potential
Functions for Heterogeneous Routing Games | cs.GT cs.SY math.OC | We study a heterogeneous routing game in which vehicles might belong to more
than one type. The type determines the cost of traveling along an edge as a
function of the flow of various types of vehicles over that edge. We relax the
assumptions needed for the existence of a Nash equilibrium in this
heterogeneous routing game. We extend the available results to present
necessary and sufficient conditions for the existence of a potential function.
We characterize a set of tolls that guarantee the existence of a potential
function when only two types of users are participating in the game. We present
an upper bound for the price of anarchy (i.e., the worst-case ratio of the
social cost calculated for a Nash equilibrium over the social cost for a
socially optimal flow) for the case in which only two types of players are
participating in a game with affine edge cost functions. A heterogeneous
routing game with vehicle platooning incentives is used as an example
throughout the article to clarify the concepts and to validate the results.
|
1312.1099 | Multiscale Dictionary Learning for Estimating Conditional Distributions | stat.ML cs.LG | Nonparametric estimation of the conditional distribution of a response given
high-dimensional features is a challenging problem. It is important to allow
not only the mean but also the variance and shape of the response density to
change flexibly with features, which are massive-dimensional. We propose a
multiscale dictionary learning model, which expresses the conditional response
density as a convex combination of dictionary densities, with the densities
used and their weights dependent on the path through a tree decomposition of
the feature space. A fast graph partitioning algorithm is applied to obtain the
tree decomposition, with Bayesian methods then used to adaptively prune and
average over different sub-trees in a soft probabilistic manner. The algorithm
scales efficiently to approximately one million features. State-of-the-art
predictive performance is demonstrated for toy examples and two neuroscience
applications including up to a million features.
|
1312.1121 | Interpreting random forest classification models using a feature
contribution method | cs.LG | Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent the standard behaviour of the model and allow for an additional
assessment of the model's reliability on new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
|
1312.1134 | Massive MIMO Multicasting in Noncooperative Cellular Networks | cs.IT math.IT | We study the massive multiple-input multiple-output (MIMO) multicast
transmission in cellular networks where each base station (BS) is equipped with
a large-scale antenna array and transmits a common message using a single
beamformer to multiple mobile users. We first show that when each BS knows the
perfect channel state information (CSI) of its own served users, the
asymptotically optimal beamformer at each BS is a linear combination of the
channel vectors of its multicast users. Moreover, the optimal combining
coefficients are obtained in closed form. Then we consider the imperfect CSI
scenario where the CSI is obtained through uplink channel estimation in
time-division duplex systems. We propose a new pilot scheme that estimates the
composite channel which is a linear combination of the individual channels of
multicast users in each cell. This scheme is able to completely eliminate pilot
contamination. The pilot power control for optimizing the multicast beamformer
at each BS is also derived. Numerical results show that the asymptotic
performance of the proposed scheme is close to the ideal case with perfect CSI.
Simulations also verify the effectiveness of the proposed scheme with a finite
number of antennas at each BS.
|
1312.1142 | ADI iteration for Lyapunov equations: a tangential approach and adaptive
shift selection | math.NA cs.SY math.DS | A new version of the alternating directions implicit (ADI) iteration for the
solution of large-scale Lyapunov equations is introduced. It generalizes the
hitherto existing iteration, by incorporating tangential directions in the way
they are already available for rational Krylov subspaces. Additionally, first
strategies to adaptively select shifts and tangential directions in each
iteration are presented. Numerical examples emphasize the potential of the new
results.
|
1312.1146 | Case-Based Merging Techniques in OAKPLAN | cs.AI | Case-based planning can take advantage of former problem-solving experiences
by storing in a plan library previously generated plans that can be reused to
solve similar planning problems in the future. Although comparative worst-case
complexity analyses of plan generation and reuse techniques reveal that it is
not possible to achieve a provable efficiency gain of reuse over generation, we
show that the case-based planning approach can be an effective alternative to
plan generation when similar reuse candidates can be chosen.
|
1312.1147 | Optimality of Operator-Like Wavelets for Representing Sparse AR(1)
Processes | cs.IT math.IT | It is known that the Karhunen-Lo\`{e}ve transform (KLT) of Gaussian
first-order auto-regressive (AR(1)) processes results in sinusoidal basis
functions. The same sinusoidal bases come out of the independent-component
analysis (ICA) and actually correspond to processes with completely independent
samples. In this paper, we relax the Gaussian hypothesis and study how
orthogonal transforms decouple symmetric-alpha-stable (S$\alpha$S) AR(1)
processes. The Gaussian case is not sparse and corresponds to $\alpha=2$, while
$0<\alpha<2$ yields processes with sparse linear-prediction error. In the
presence of sparsity, we show that operator-like wavelet bases do outperform
the sinusoidal ones. Also, we observe that, for processes with very sparse
increments ($0<\alpha\leq 1$), the operator-like wavelet basis is
indistinguishable from the ICA solution obtained through numerical
optimization. We consider two criteria for independence. The first is the
Kullback-Leibler divergence between the joint probability density function
(pdf) of the original signal and the product of the marginals in the
transformed domain. The second is a divergence between the joint pdf of the
original signal and the product of the marginals in the transformed domain,
which is based on Stein's formula for the mean-square estimation error in
additive Gaussian noise. Our framework then offers a unified view that
encompasses the discrete cosine transform (known to be asymptotically optimal
for $\alpha=2$) and Haar-like wavelets (for which we achieve optimality for
$0<\alpha\leq1$).
|
1312.1243 | Pricing Residential Electricity Based on Individual Consumption
Behaviors | math.OC cs.SI cs.SY | The conventional practice of retail electric utilities is to aggregate
customers geographically. The utility purchases electricity for its customers
via bulk transactions on the wholesale market, and it passes these costs along
to its customers, the end consumers, through their rate plan. Typically, all
residential consumers are offered the same per unit rate plan, which leads to
cost sharing. Some consumers use their electricity at peak hours, when it is
more expensive on the wholesale market, and others consume mostly at off peak
hours, when it is cheaper, but they all enjoy the same per unit rate through
their utility. This paper proposed a method for the utility to segment a
population of consumers on the basis of their individual consumption patterns.
An optimal recruitment algorithm was developed to aggregate consumers into
groups with a relatively low per unit cost of electricity on the wholesale
market. It was then proposed that the utility should group together enough
consumers to ensure an adequately low forecast error, which is related to risks
it faces in wholesale market transactions. Finally, it was shown that by
repeated application of this process, the utility could segment the entire
population into groups and offer them differentiated rate plans based on their
actual consumption behavior. These groupings are stable in the sense that no
one consumer can unilaterally improve her outcome.
|
1312.1277 | Bandits and Experts in Metric Spaces | cs.DS cs.LG | In a multi-armed bandit problem, an online algorithm chooses from a set of
strategies in a sequence of trials so as to maximize the total payoff of the
chosen strategies. While the performance of bandit algorithms with a small
finite strategy set is quite well understood, bandit problems with large
strategy sets are still a topic of very active investigation, motivated by
practical applications such as online auctions and web advertisement. The goal
of such research is to identify broad and natural classes of strategy sets and
payoff functions which enable the design of efficient solutions.
In this work we study a very general setting for the multi-armed bandit
problem in which the strategies form a metric space, and the payoff function
satisfies a Lipschitz condition with respect to the metric. We refer to this
problem as the "Lipschitz MAB problem". We present a solution for the
multi-armed bandit problem in this setting. That is, for every metric space we
define an isometry invariant which bounds from below the performance of
Lipschitz MAB algorithms for this metric space, and we present an algorithm
which comes arbitrarily close to meeting this bound. Furthermore, our technique
gives even better results for benign payoff functions. We also address the
full-feedback ("best expert") version of the problem, where after every round
the payoffs from all arms are revealed.
|
1312.1286 | An Ontology Model for Organizing Information Resources Sharing on
Personal Web | cs.DL cs.IR | Retrieving information resources by machine processing may involve
multiple sources. A personal web, as part of the information resources on the
Internet, requires features that can be understood by computer machines.
Therefore, in this paper a semantic web ontology approach is used to map the
resources into a meaningful scheme. In the conceptual design, resources on the
web are viewed as documents that have certain properties and ownership. Domain
interest, or web scope, is used to describe a classification of resources that
leads to relevant documents. Once instances are added to the concepts, the
ontology file can be loaded and shared as an annotation on the personal web.
This allows computer machines to query multiple ontologies from different
personal webs that use it.
|
1312.1309 | On the DoF Region of the K-user MISO Broadcast Channel with Hybrid CSIT | cs.IT math.IT | An outer bound for the degrees of freedom (DoF) region of the K-user
multiple-input single-output (MISO) broadcast channel (BC) is developed under
the hybrid channel state information at transmitter (CSIT) model, in which the
transmitter has instantaneous CSIT of channels to a subset of the receivers and
delayed CSIT of channels to the rest of the receivers. For the 3-user MISO BC,
when the transmitter has instantaneous CSIT of the channel to one receiver and
delayed CSIT of channels to the other two, two new communication schemes are
designed, which are able to achieve the DoF tuple of
$\left(1,\frac{1}{3},\frac{1}{3}\right)$, with a sum DoF of $\frac{5}{3}$, that
is greater than the sum DoF achievable only with delayed CSIT. Another
communication scheme showing the benefit of the alternating CSIT model is also
developed, to obtain the DoF tuple of $\left(1,\frac{4}{9},\frac{4}{9}\right)$
for the 3-user MISO BC.
|
1312.1325 | Permutation polynomials induced from permutations of subfields, and some
complete sets of mutually orthogonal latin squares | math.NT cs.IT math.CO math.IT | We present a general technique for obtaining permutation polynomials over a
finite field from permutations of a subfield. By applying this technique to the
simplest classes of permutation polynomials on the subfield, we obtain several
new families of permutation polynomials. Some of these have the additional
property that both f(x) and f(x)+x induce permutations of the field, which has
combinatorial consequences. We use some of our permutation polynomials to
exhibit complete sets of mutually orthogonal latin squares. In addition, we
solve the open problem from a recent paper by Wu and Lin, and we give simpler
proofs of much more general versions of the results in two other recent papers.
|
1312.1349 | Improving self-calibration | astro-ph.IM cs.IT math.IT physics.data-an stat.ML | Response calibration is the process of inferring how much the measured data
depend on the signal one is interested in. It is essential for any quantitative
signal estimation on the basis of the data. Here, we investigate
self-calibration methods for linear signal measurements and linear dependence
of the response on the calibration parameters. The common practice is to
augment an external calibration solution using a known reference signal with an
internal calibration on the unknown measurement signal itself. Contemporary
self-calibration schemes try to find a self-consistent solution for signal and
calibration by exploiting redundancies in the measurements. This can be
understood in terms of maximizing the joint probability of signal and
calibration. However, the full uncertainty structure of this joint probability
around its maximum is thereby not taken into account by these schemes.
Therefore better schemes -- in sense of minimal square error -- can be designed
by accounting for asymmetries in the uncertainty of signal and calibration. We
argue that at least a systematic correction of the common self-calibration
scheme should be applied in many measurement situations in order to properly
treat uncertainties of the signal on which one calibrates. Otherwise the
calibration solutions suffer from a systematic bias, which consequently
distorts the signal reconstruction. Furthermore, we argue that non-parametric,
signal-to-noise filtered calibration should provide more accurate
reconstructions than the common bin averages and provide a new, improved
self-calibration scheme. We illustrate our findings with a simplistic numerical
example.
|
1312.1375 | Impact of receiver reaction mechanisms on the performance of molecular
communication networks | q-bio.MN cs.IT math.IT | In a molecular communication network, transmitters and receivers communicate
by using signalling molecules. At the receivers, the signalling molecules
react, via a chain of chemical reactions, to produce output molecules. The
count of output molecules over time is considered to be the output signal of
the receiver. This output signal is used to detect the presence of signalling
molecules at the receiver. The output signal is noisy due to the stochastic
nature of diffusion and chemical reactions. The aim of this paper is to
characterise the properties of the output signals for two types of receivers,
which are based on two different types of reaction mechanisms. We derive
analytical expressions for the mean, variance and frequency properties of these
two types of receivers. These expressions allow us to study the properties of
these two types of receivers. In addition, our model allows us to study the
effect of the diffusibility of the receiver membrane on the performance of the
receivers.
|
1312.1397 | A Passivity Framework for Modeling and Mitigating Wormhole Attacks on
Networked Control Systems | cs.SY cs.CR cs.NI | Networked control systems consist of distributed sensors and actuators that
communicate via a wireless network. The use of an open wireless medium and
unattended deployment leaves these systems vulnerable to intelligent
adversaries whose goal is to disrupt the system performance. In this paper, we
study the wormhole attack on a networked control system, in which an adversary
establishes a link between two distant regions of the network by using either
high-gain antennas, as in the out-of-band wormhole, or colluding network nodes
as in the in-band wormhole. Wormholes allow the adversary to violate the timing
constraints of real-time control systems by delaying or dropping packets, and
cannot be detected using cryptographic mechanisms alone. We study the impact of
the wormhole attack on the network flows and delays and introduce a
passivity-based control-theoretic framework for modeling the wormhole attack.
We develop this framework for both the in-band and out-of-band wormhole attacks
as well as complex, hitherto-unreported wormhole attacks consisting of arbitrary
combinations of in-band and out-of-band wormholes. We integrate existing mitigation
strategies into our framework, and analyze the throughput, delay, and stability
properties of the overall system. Through a simulation study, we show that, by
selectively dropping control packets, the wormhole attack can cause
disturbances in the physical plant of a networked control system, and
demonstrate that appropriate selection of detection parameters mitigates the
disturbances due to the wormhole while satisfying the delay constraints of the
physical system.
|
1312.1421 | Intermittent Communication | cs.IT math.IT | We formulate a model for intermittent communication that can capture bursty
transmissions or a sporadically available channel, where in either case the
receiver does not know a priori when the transmissions will occur. Focusing on
the point-to-point case, we develop a decoding structure, decoding from pattern
detection, and its achievable rate for such communication scenarios. Decoding
from pattern detection first detects the locations of codeword symbols and then
uses them to decode. We introduce the concept of partial divergence and study
some of its properties in order to obtain stronger achievability results. As
the system becomes more intermittent, the achievable rates decrease due to the
additional uncertainty about the positions of the codeword symbols at the
decoder. Additionally, we provide upper bounds on the capacity of binary
noiseless intermittent communication with the help of a genie-aided encoder and
decoder. The upper bounds imply a tradeoff between the capacity and the
intermittency rate of the communication system, even if the receive window
scales linearly with the codeword length.
|
1312.1423 | ABC-SG: A New Artificial Bee Colony Algorithm-Based Distance of
Sequential Data Using Sigma Grams | cs.NE cs.AI | The problem of similarity search is one of the main problems in computer
science. This problem has many applications in text-retrieval, web search,
computational biology, bioinformatics and others. Similarity between two data
objects can be depicted using a similarity measure or a distance metric. There
are numerous distance metrics in the literature, some are used for a particular
data type, and others are more general. In this paper we present a new distance
metric for sequential data which is based on the sum of n-grams. The novelty of
our distance is that these n-grams are weighted using the artificial bee colony, a
recent optimization algorithm based on the collective intelligence of a swarm
of bees on their search for nectar. This algorithm has been used in optimizing
a large number of numerical problems. We validate the new distance
experimentally.
|
1312.1444 | Energy Beamforming with One-Bit Feedback | cs.IT math.IT | Wireless energy transfer (WET) has attracted significant attention recently
for providing energy supplies wirelessly to electrical devices without the need
of wires or cables. Among different types of WET techniques, the radio
frequency (RF) signal enabled far-field WET is most practically appealing to
power energy constrained wireless networks in a broadcast manner. To overcome
the significant path loss over wireless channels, multi-antenna or
multiple-input multiple-output (MIMO) techniques have been proposed to enhance
the transmission efficiency and distance for RF-based WET. However, in order to
reap the large energy beamforming gain in MIMO WET, acquiring the channel state
information (CSI) at the energy transmitter (ET) is an essential task. This
task is particularly challenging for WET systems, since existing channel
training and feedback methods used for communication receivers may not be
implementable at the energy receiver (ER) due to its hardware limitation. To
tackle this problem, in this paper we consider a multiuser MIMO system for WET,
where a multiple-antenna ET broadcasts wireless energy to a group of
multiple-antenna ERs concurrently via transmit energy beamforming. By taking
into account the practical energy harvesting circuits at the ER, we propose a
new channel learning method that requires only one feedback bit from each ER to
the ET per feedback interval. The feedback bit indicates the increase or
decrease of the harvested energy by each ER between the present and previous
intervals, which can be measured without changing the existing hardware at the
ER. Based on such feedback information, the ET adjusts transmit beamforming in
different training intervals and at the same time obtains improved estimates of
the MIMO channels to ERs by applying a new approach termed analytic center
cutting plane method (ACCPM).
|
1312.1447 | Asynchronous Convolutional-Coded Physical-Layer Network Coding | cs.IT math.IT | This paper investigates the decoding process of asynchronous
convolutional-coded physical-layer network coding (PNC) systems. Specifically,
we put forth a layered decoding framework for convolutional-coded PNC
consisting of three layers: symbol realignment layer, codeword realignment
layer, and joint channel-decoding network coding (Jt-CNC) decoding layer. Our
framework can deal with phase asynchrony and symbol arrival-time asynchrony
between the signals simultaneously transmitted by multiple sources. A salient
feature of this framework is that it can handle both fractional and integral
symbol offsets; previously proposed PNC decoding algorithms (e.g., XOR-CD and
reduced-state Viterbi algorithms) can only deal with fractional symbol offset.
Moreover, the Jt-CNC algorithm, based on belief propagation (BP), is
BER-optimal for synchronous PNC and near optimal for asynchronous PNC.
Extending beyond convolutional codes, we further generalize the Jt-CNC decoding
algorithm for all cyclic codes. Our simulation shows that Jt-CNC outperforms
the previously proposed XOR-CD algorithm and reduced-state Viterbi algorithm by
2 dB for synchronous PNC. For phase-asynchronous PNC, Jt-CNC is 4 dB better than
the other two algorithms. Importantly, for real wireless environment testing,
we have also implemented our decoding algorithm in a PNC system built on the
USRP software radio platform. Our experiment shows that the proposed Jt-CNC
decoder works well in practice.
|
1312.1448 | Food Recommendation using Ontology and Heuristics | cs.IR | Recommender systems are needed to find food items of one's interest. We review
recommender systems and recommendation methods. We propose a food
personalization framework based on adaptive hypermedia. We extend Hermes
framework with food recommendation functionality. We combine TF-IDF term
extraction method with cosine similarity measure. Healthy heuristics and
standard food database are incorporated into the knowledgebase. Based on the
performed evaluation, we conclude that semantic recommender systems in general
outperform traditional recommender systems with respect to accuracy,
precision, and recall, and that the proposed recommender has a better F-measure
than existing semantic recommenders.
|
1312.1450 | Multi-Antenna Wireless Powered Communication with Energy Beamforming | cs.IT math.IT | The newly emerging wireless powered communication networks (WPCNs) have
recently drawn significant attention, where radio signals are used to power
wireless terminals for information transmission. In this paper, we study a WPCN
where one multi-antenna access point (AP) coordinates energy transfer and
information transfer to/from a set of single-antenna users. A
harvest-then-transmit protocol is assumed where the AP first broadcasts
wireless power to all users via energy beamforming in the downlink (DL), and
then the users send their independent information to the AP simultaneously in
the uplink (UL) using their harvested energy. To optimize the users' throughput
and yet guarantee their rate fairness, we maximize the minimum throughput among
all users by a joint design of the DL-UL time allocation, the DL energy
beamforming, and the UL transmit power allocation plus receive beamforming. We
solve this non-convex problem optimally by two steps. First, we fix the DL-UL
time allocation and obtain the optimal DL energy beamforming, UL power
allocation and receive beamforming to maximize the minimum
signal-to-interference-plus-noise ratio (SINR) of all users. This problem is
shown to be in general non-convex; however, we convert it equivalently to a
spectral radius minimization problem, which can be solved efficiently by
applying the alternating optimization based on the non-negative matrix theory.
Then, the optimal time allocation is found by a one-dimensional search to
maximize the minimum rate of all users. Furthermore, two suboptimal designs of
lower complexity are proposed, and their throughput performance is compared
against that of the optimal solution.
|
1312.1461 | Multi-Sensor Image Fusion Based on Moment Calculation | cs.CV | An image fusion method based on salient features is proposed in this paper.
In this work, we concentrate on the salient features of the image for fusion,
in order to preserve all relevant information contained in the input images
while enhancing the contrast of the fused image and suppressing noise to the
maximum extent. In our system, a mask is first applied to the two input images
in order to conserve the high-frequency information, along with some
low-frequency information, and to suppress noise. Thereafter, to identify
salient features in the source images, a local moment is computed in the
neighborhood of each coefficient. Finally, a decision map is generated based on
the local moments in order to obtain the fused image. To verify the proposed
algorithm, we have tested it on 120 sensor image pairs collected from the
Manchester University, UK, database. The experimental results show that the
proposed method provides superior fused images in terms of several
quantitative fusion evaluation indices.
|
1312.1462 | Geometric Feature Based Face-Sketch Recognition | cs.CV | This paper presents a novel facial sketch image or face-sketch recognition
approach based on facial feature extraction. To recognize a face-sketch, we
concentrate on a set of geometric face features, such as the eyes, nose,
eyebrows, and lips, and their length and width ratios, since it is difficult to
match photos and sketches directly because they belong to two different
modalities. In this system, the facial features/components are first extracted
from the training images; then the ratios of length, width, area, etc. are
calculated and stored as feature vectors for the individual images. After that,
the mean feature vector is computed and subtracted from each feature vector to
center the feature vectors. In the next phase, the feature vector for the
incoming probe face-sketch is computed in the same fashion. A K-NN classifier
is used to recognize the probe face-sketch. It is experimentally verified that
the proposed method is robust for faces in a frontal pose, with normal
lighting, neutral expression, and no occlusions. The
experiment has been conducted with 80 male and female face images from
different face databases. It has useful applications for both law enforcement
and digital entertainment.
|
1312.1474 | `Hits' emerge through self-organized coordination in collective response
of free agents | physics.soc-ph cs.SI | Individuals in free societies frequently exhibit striking coordination when
making independent decisions en masse. Examples include the regular appearance
of hit products or memes with substantially higher popularity compared to their
otherwise equivalent competitors, or extreme polarization in public opinion.
Such segregation of events manifests as bimodality in the distribution of
collective choices. Here we quantify how apparently independent choices made by
individuals result in a significantly polarized but stable distribution of
success in the context of the box-office performance of movies and show that it
is an emergent feature of a system of non-interacting agents who respond to
sequentially arriving signals. The aggregate response exhibits extreme
variability amplifying much smaller differences in individual cost of adoption.
Due to self-organization of the competitive landscape, most events elicit only
a muted response but a few stimulate widespread adoption, emerging as "hits".
|
1312.1482 | Low Complexity Decoding for Punctured Trellis-Coded Modulation Over
Intersymbol Interference Channels | cs.IT math.IT | Classical trellis-coded modulation (TCM) as introduced by Ungerboeck in
1976/1983 uses a signal constellation of twice the cardinality compared to an
uncoded transmission with one bit of redundancy per PAM symbol, i.e.,
application of codes with rates $\frac{n-1}{n}$ when $2^{n}$ denotes the
cardinality of the signal constellation. The original approach therefore only
comprises integer transmission rates, i.e., $R \in \left\{ 2,\,3,\,4,\ldots
\right\}$. Additionally, when transmitting over an intersymbol interference
(ISI) channel, an optimum decoding scheme would perform equalization and
decoding of the channel code jointly. In this paper, we allow rate adjustment
for TCM by means of puncturing the convolutional code (CC) on which a TCM
scheme is based. In this case, a nontrivial mapping of the output symbols of
efficient technique to integrate an ISI-channel into this trellis and show that
the computational complexity can be significantly reduced by means of a reduced
state sequence estimation (RSSE) algorithm for time-variant trellises.
|
1312.1492 | A fast and robust algorithm to count topologically persistent holes in
noisy clouds | cs.CG cs.CV math.AT | Preprocessing a 2D image often produces a noisy cloud of interest points. We
study the problem of counting holes in unorganized clouds in the plane. The
holes in a given cloud are quantified by the topological persistence of their
boundary contours when the cloud is analyzed at all possible scales. We design
the algorithm to count holes that are most persistent in the filtration of
offsets (neighborhoods) around given points. The input is a cloud of $n$ points
in the plane without any user-defined parameters. The algorithm runs in
$O(n\log n)$ time and $O(n)$ space. The output is the array (number of holes, relative
persistence in the filtration). We prove theoretical guarantees when the
algorithm finds the correct number of holes (components in the complement) of
an unknown shape approximated by a cloud.
|
1312.1494 | Approximating persistent homology for a cloud of $n$ points in a
subquadratic time | cs.CG cs.CV math.AT | The Vietoris-Rips filtration for an $n$-point metric space is a sequence of
large simplicial complexes adding a topological structure to the otherwise
disconnected space. The persistent homology is a key tool in topological data
analysis and studies topological features of data that persist over many
scales. The fastest algorithm for computing the persistent homology of a
filtration runs in time $O(M(u)+u^2\log^2 u)$, where $u$ is the number of
updates (additions or deletions of simplices) and $M(u)=O(u^{2.376})$ is the
time for multiplying two $u\times u$ matrices. For a space of $n$ points given by their pairwise
distances, we approximate the Vietoris-Rips filtration by a zigzag filtration
consisting of $u=o(n)$ updates, which is sublinear in $n$. The constant depends
on a given error of approximation and on the doubling dimension of the metric
space. Then the persistent homology of this sublinear-size filtration can be
computed in time $o(n^2)$, which is subquadratic in $n$.
|
1312.1512 | An adaptive block based integrated LDP,GLCM,and Morphological features
for Face Recognition | cs.CV | This paper proposes a technique for automatic face recognition using
integrated multiple feature sets extracted from the significant blocks of a
gradient image. We discuss the use of novel morphological, local directional
pattern (LDP), and gray-level co-occurrence matrix (GLCM) based feature
extraction techniques to recognize human faces. Firstly, the new morphological
features, i.e., features based on the number of runs of pixels in four
directions (N, NE, E, NW), together with the GLCM-based statistical features
and the LDP features, which are less sensitive to noise and non-monotonic
illumination changes, are extracted from the significant blocks
of the gradient image. Then these features are concatenated together. We
integrate the above mentioned methods to take full advantage of the three
approaches. Extraction of the significant blocks from the absolute gradient
image and hence from the original image to extract pertinent information with
the idea of dimension reduction forms the basis of the work. The efficiency of
our method is demonstrated by the experiment on 1100 images from the FRAV2D
face database, 2200 images from the FERET database, where the images vary in
pose, expression, illumination and scale and 400 images from the ORL face
database, where the images slightly vary in pose. Our method has shown 90.3%,
93% and 98.75% recognition accuracy for the FRAV2D, FERET and the ORL database
respectively.
|
1312.1517 | A Gabor block based Kernel Discriminative Common Vector (KDCV) approach
using cosine kernels for Human Face Recognition | cs.CV | In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature
extraction approach for enhanced face recognition is proposed. Firstly, the
low-energized blocks from Gabor wavelet transformed images are extracted.
Secondly, the nonlinear discriminating features are analyzed and extracted from
the selected low-energized blocks by the generalized Kernel Discriminative
Common Vector (KDCV) method. The KDCV method is extended to include cosine
kernel function in the discriminating method. The KDCV with the cosine kernels
is then applied on the extracted low energized discriminating feature vectors
to obtain the real component of a complex quantity for face recognition. In
order to derive positive kernel discriminative vectors, we apply only those
kernel discriminative eigenvectors that are associated with non-zero
eigenvalues. The feasibility of the low energized Gabor block based generalized
KDCV method with cosine kernel function models has been successfully tested for
image classification using the L1, L2 distance measures; and the cosine
similarity measure on both frontal and pose-angled face recognition.
Experimental results on the FRAV2D and the FERET database demonstrate the
effectiveness of this new approach.
|
1312.1520 | A Face Recognition approach based on entropy estimate of the nonlinear
DCT features in the Logarithm Domain together with Kernel Entropy Component
Analysis | cs.CV | This paper exploits the feature extraction capabilities of the discrete
cosine transform (DCT) together with an illumination normalization approach in
the logarithm domain that increases its robustness to variations in facial
geometry and illumination. Secondly, in the same domain, entropy measures are
applied to the DCT coefficients so that the maximum-entropy-preserving pixels can
be extracted as the feature vector. Thus the informative features of a face can
be extracted in a low dimensional space. Finally, the kernel entropy component
analysis (KECA) with an extension of arc cosine kernels is applied on the
extracted DCT coefficients that contribute most to the entropy estimate to
obtain only those real kernel ECA eigenvectors that are associated with
eigenvalues having high positive entropy contribution. The resulting system was
successfully tested on real image sequences and is robust to significant
partial occlusion and illumination changes, validated with the experiments on
the FERET, AR, FRAV2D and ORL face databases. Extensive experimental comparison
demonstrates the superiority of the proposed approach with respect to
recognition accuracy. Using specificity and sensitivity, we find that the best
performance is achieved when the Renyi entropy is applied to the DCT
coefficients. Moreover, the proposed
approach is very simple, computationally fast and can be implemented in any
real-time face recognition system.
|
1312.1530 | Bandit Online Optimization Over the Permutahedron | cs.LG | The permutahedron is the convex polytope with vertex set consisting of the
vectors $(\pi(1),\dots, \pi(n))$ for all permutations (bijections) $\pi$ over
$\{1,\dots, n\}$. We study a bandit game in which, at each step $t$, an
adversary chooses a hidden weight vector $s_t$, a player chooses a
vertex $\pi_t$ of the permutahedron and suffers an observed loss of
$\sum_{i=1}^n \pi_t(i) s_t(i)$.
A previous algorithm CombBand of Cesa-Bianchi et al (2009) guarantees a
regret of $O(n\sqrt{T \log n})$ for a time horizon of $T$. Unfortunately,
CombBand requires at each step an $n$-by-$n$ matrix permanent approximation to
within improved accuracy as $T$ grows, resulting in a total running time that
is superlinear in $T$, making it impractical for large time horizons.
We provide an algorithm of regret $O(n^{3/2}\sqrt{T})$ with total time
complexity $O(n^3T)$. The ideas are a combination of CombBand and a recent
algorithm by Ailon (2013) for online optimization over the permutahedron in the
full information setting. The technical core is a bound on the variance of the
Plackett-Luce noisy sorting process's "pseudo loss". The bound is obtained by
establishing positive semi-definiteness of a family of 3-by-3 matrices
generated from rational functions of exponentials of 3 parameters.
|
1312.1577 | On Coordinating Ultra-Dense Wireless Access Networks: Optimization
Modeling, Algorithms and Insights | cs.IT math.IT | Network densification along with universal resources reuse is expected to
play a key role in the realization of 5G radio access as an enabler for
delivering most of the anticipated network capacity improvements. On the one
hand, neither the expected additional spectrum allocation nor the forthcoming
novel air-interface processing techniques will be sufficient for sustaining the
anticipated exponentially-increasing mobile data traffic. On the other hand,
enhanced ultra-dense infrastructure deployments are expected to provide
remarkable capacity gains, regardless of the evolutionary or revolutionary
approach followed towards 5G development. In this work, we thoroughly examine
global network coordination as the main enabler for future 5G large dense
small-cell deployments. We propose a powerful radio resources coordination
framework through which interference management is handled network-wise and
jointly over multiple dimensions. In particular, we explore strategies for
pairing serving and served access nodes, partitioning the available network
resources, as well as dynamically allocating power per pair, towards optimizing
system performance and guaranteeing individual minimum performance levels. We
develop new optimization formulations, providing network scaling performance
upper bounds, along with lower complexity algorithmic solutions tailored to
large networks. We apply the proposed solutions to dense network deployments,
in order to obtain useful insights on network performance and optimization,
such as rate scaling, infrastructure density, optimal bandwidth partitioning
and spatial reuse factor optimization.
|
1312.1583 | Sequences with high nonlinear complexity | cs.IT math.IT math.NT | We improve lower bounds on the $k$th-order nonlinear complexity of
pseudorandom sequences over finite fields and we establish a probabilistic
result on the behavior of the $k$th-order nonlinear complexity of random
sequences over finite fields.
|
1312.1593 | Performance Analysis of Network Coded Systems Under Quasi-static
Rayleigh Fading Channels | cs.IT math.IT | In the area of basic and network coded cooperative communication, the
expected end-to-end bit error rate (BER) values are frequently required to
compare the proposed coding, relaying, and decoding techniques. Instead of
obtaining these values via time consuming Monte Carlo simulations, deriving
closed form expressions using approximations is crucial. In this work, the
ultimate goal is to derive an approximate average BER expression for a network
coded system. While reaching this goal, we firstly consider the cooperative
systems' instantaneous BER values that are commonly composed of Q-functions of
more than one variable. For these Q-functions, we investigate the convergence
characteristics of the sampling property and generalize this property to
arbitrary functions of multiple variables. Second, we adapt the equivalent
channel approach to the network coded scenario for the ease of analysis and
propose a network decoder with reduced complexity. Finally, by combining these
techniques, we show that the obtained closed form expressions well agree with
simulation results in a wide SNR range.
|
1312.1611 | Intent Models for Contextualising and Diversifying Query Suggestions | cs.IR | The query suggestion or auto-completion mechanisms help users to type less
while interacting with a search engine. A basic approach that ranks suggestions
according to their frequency in the query logs is suboptimal. Firstly, many
candidate queries with the same prefix can be removed as redundant. Secondly,
the suggestions can also be personalised based on the user's context. These two
directions to improve the aforementioned mechanisms' quality can be in
opposition: while the latter aims to promote suggestions that address search
intents that a user is likely to have, the former aims to diversify the
suggestions to cover as many intents as possible. We introduce a
contextualisation framework that utilises a short-term context using the user's
behaviour within the current search session, such as the previous query, the
documents examined, and the candidate query suggestions that the user has
discarded. This short-term context is used to contextualise and diversify the
ranking of query suggestions, by modelling the user's information need as a
mixture of intent-specific user models. The evaluation is performed offline on
a set of approximately 1.0M test user sessions. Our results suggest that the
proposed approach significantly improves query suggestions compared to the
baseline approach.
|
1312.1613 | Max-Min Distance Nonnegative Matrix Factorization | stat.ML cs.LG cs.NA | Nonnegative Matrix Factorization (NMF) has been a popular representation
method for pattern classification problems. It tries to decompose a nonnegative
matrix of data samples as the product of a nonnegative basis matrix and a
nonnegative coefficient matrix, and the coefficient matrix is used as the new
representation. However, traditional NMF methods ignore the class labels of the
data samples. In this paper, we propose a novel supervised NMF algorithm to
improve the discriminative ability of the new representation. Using the class
labels, we separate all the data sample pairs into within-class pairs and
between-class pairs. To improve the discriminative ability of the new NMF
representations, we seek to minimize the maximum distance of the within-class
pairs in the new NMF space, while maximizing the minimum distance of the
between-class pairs. With this criterion, we construct an objective function
and optimize it with respect to the basis and coefficient matrices and the
slack variables alternately, resulting in an iterative algorithm.
|
1312.1666 | Semi-Stochastic Gradient Descent Methods | stat.ML cs.LG cs.NA math.NA math.OC | In this paper we study the problem of minimizing the average of a large
number ($n$) of smooth convex loss functions. We propose a new method, S2GD
(Semi-Stochastic Gradient Descent), which runs for one or several epochs in
each of which a single full gradient and a random number of stochastic
gradients is computed, following a geometric law. The total work needed for the
method to output an $\varepsilon$-accurate solution in expectation, measured in
the number of passes over data, or equivalently, in units equivalent to the
computation of a single gradient of the loss, is
$O((\kappa/n)\log(1/\varepsilon))$, where $\kappa$ is the condition number.
This is achieved by running the method for $O(\log(1/\varepsilon))$ epochs,
with a single gradient evaluation and $O(\kappa)$ stochastic gradient
evaluations in each. The SVRG method of Johnson and Zhang arises as a special
case. If our method is limited to a single epoch only, it needs to evaluate at
most $O((\kappa/\varepsilon)\log(1/\varepsilon))$ stochastic gradients. In
contrast, SVRG requires $O(\kappa/\varepsilon^2)$ stochastic gradients. To
illustrate our theoretical results, S2GD only needs the workload equivalent to
about 2.1 full gradient evaluations to find a $10^{-6}$-accurate solution for
a problem with $n=10^9$ and $\kappa=10^3$.
|
1312.1681 | An Approach: Modality Reduction and Face-Sketch Recognition | cs.CV | Recognizing a face sketch through a face photo database is a challenging task
for today's researchers, because the face photo images in the training set and
the face sketch images in the testing set have different modalities. The
difference between two face photos of different persons can be smaller than the
difference between a face photo and a face sketch of the same person. In this
paper, to reduce the modality gap between face photos and face sketches, we
first bring the face photo and face sketch images into a new dimension using
the 2D discrete Haar wavelet transform with scale 3, followed by a negative
approach. After that, features are extracted from the transformed images using
Principal Component Analysis (PCA). Thereafter, we use an SVM classifier and a
K-NN classifier for better classification. The proposed method is
experimentally verified to be robust for faces that are captured in good
lighting conditions and in a frontal pose. The experiment has
been conducted with 100 male and female face images as training set and 100
male and female face sketch images as testing set collected from CUHK training
and testing cropped photos and CUHK training and testing cropped sketches.
|
1312.1683 | Face Recognition using Hough Peaks extracted from the significant blocks
of the Gradient Image | cs.CV | This paper proposes a new technique for automatic face recognition using
integrated peaks of the Hough transformed significant blocks of the binary
gradient image. In this approach firstly the gradient of an image is calculated
and a threshold is set to obtain a binary gradient image, which is less
sensitive to noise and illumination changes. Secondly, significant blocks are
extracted from the absolute gradient image, to extract pertinent information
with the idea of dimension reduction. Finally, the best-fitted Hough peaks are
extracted from the Hough-transformed significant blocks for efficient face
recognition. These Hough peaks are then concatenated together and used as
features in the classification process. The efficiency of the proposed method is
demonstrated by the experiment on 1100 images from the FRAV2D face database,
2200 images from the FERET database, where the images vary in pose, expression,
illumination and scale and 400 images from the ORL face database, where the
images slightly vary in pose. Our method has shown 93.3%, 88.5% and 99%
recognition accuracy for the FRAV2D, FERET and the ORL database respectively.
|
1312.1684 | High Performance Human Face Recognition using Gabor based Pseudo Hidden
Markov Model | cs.CV | This paper introduces a novel methodology that combines the multi-resolution
feature of the Gabor wavelet transformation (GWT) with the local interactions
of the facial structures expressed through the Pseudo Hidden Markov model
(PHMM). Unlike the traditional zigzag scanning method for feature extraction, a
continuous scanning method, from the top-left corner to the right, then down
and from right to left, and so on until the bottom-right of the image, i.e., a
spiral scanning technique, has been proposed for better feature selection.
Unlike traditional HMMs, the proposed PHMM does not make the assumption of
state-conditional independence of the visible observation sequence. This is
achieved via the concept of local structures, introduced by the PHMM, which is
used to extract facial bands and automatically select the most informative
features of a face image. Thus, the long-range dependency problem inherent to
traditional HMMs has been drastically reduced. Moreover, using the most
informative pixels rather than the whole image makes the proposed method
considerably faster for face recognition. This
method has been successfully tested on frontal face images from the ORL, FRAV2D
and FERET face databases where the images vary in pose, illumination,
expression, and scale. The FERET data set contains 2200 frontal face images of
200 subjects, while the FRAV2D data set consists of 1100 images of 100 subjects
and the full ORL database is considered. The results reported in this
application are far better than the recent and most referred systems.
|
1312.1685 | Human Face Recognition using Gabor based Kernel Entropy Component
Analysis | cs.CV | In this paper, we present a novel Gabor wavelet based Kernel Entropy
Component Analysis (KECA) method by integrating the Gabor wavelet
transformation (GWT) of facial images with the KECA method for enhanced face
recognition performance. Firstly, from the Gabor wavelet transformed images the
most important discriminative desirable facial features characterized by
spatial frequency, spatial locality and orientation selectivity to cope with
the variations due to illumination and facial expression changes were derived.
After that, KECA, which is related to the Renyi entropy, is extended to include
a cosine kernel function. KECA with the cosine kernel is then applied to the
extracted most important discriminating feature vectors of facial images to
obtain only those real kernel ECA eigenvectors that are associated with
eigenvalues having positive entropy contribution. Finally, these real KECA
features are used for image classification using the L1 and L2 distance
measures, the Mahalanobis distance measure, and the cosine similarity measure. The
feasibility of the Gabor based KECA method with the cosine kernel has been
successfully tested on both frontal and pose-angled face recognition, using
datasets from the ORL, FRAV2D and the FERET database.
|
1312.1706 | Swapping Variables for High-Dimensional Sparse Regression with
Correlated Measurements | math.ST cs.IT math.IT stat.ML stat.TH | We consider the high-dimensional sparse linear regression problem of
accurately estimating a sparse vector using a small number of linear
measurements that are contaminated by noise. It is well known that the standard
cadre of computationally tractable sparse regression algorithms---such as the
Lasso, Orthogonal Matching Pursuit (OMP), and their extensions---perform poorly
when the measurement matrix contains highly correlated columns. To address this
shortcoming, we develop a simple greedy algorithm, called SWAP, that
iteratively swaps variables until convergence. SWAP is surprisingly effective
in handling measurement matrices with high correlations. In fact, we prove that
SWAP outputs the true support, the locations of the non-zero entries in the
sparse vector, under a relatively mild condition on the measurement matrix.
Furthermore, we show that SWAP can be used to boost the performance of any
sparse regression algorithm. We empirically demonstrate the advantages of SWAP
by comparing it with several state-of-the-art sparse regression algorithms.
|
1312.1725 | Book embeddings of Reeb graphs | cs.CG cs.CV math.GT | Let $X$ be a simplicial complex with a piecewise linear function
$f:X\to\mathbb{R}$. The Reeb graph $Reeb(f,X)$ is the quotient of $X$, where we
collapse each connected component of $f^{-1}(t)$ to a single point. Let the
nodes of $Reeb(f,X)$ be all homologically critical points where any homology of
the corresponding component of the level set $f^{-1}(t)$ changes. Then we can
label every arc of $Reeb(f,X)$ with the Betti numbers
$(\beta_1,\beta_2,\dots,\beta_d)$ of the corresponding $d$-dimensional
component of a level set. The homology labels give more information about the
original complex $X$ than the classical Reeb graph. We describe a canonical
embedding of a Reeb graph into a multi-page book (the product of a star with a line) and give
a unique linear code of this book embedding.
|
1312.1727 | On the Capacity Region of Broadcast Packet Erasure Relay Networks With
Feedback | cs.IT math.IT | We derive a new outer bound on the capacity region of broadcast traffic in
multiple input broadcast packet erasure channels with feedback, and extend this
outer bound to packet erasure relay networks with feedback. We show the
tightness of the outer bound for various classes of networks. An important
engineering implication of this work is that for network coding schemes for
parallel broadcast channels, the `xor' packets should be sent over correlated
broadcast subchannels.
|
1312.1737 | Curriculum Learning for Handwritten Text Line Recognition | cs.LG | Recurrent Neural Networks (RNN) have recently achieved the best performance
in off-line Handwriting Text Recognition. At the same time, learning RNN by
gradient descent leads to slow convergence, and training times are particularly
long when the training database consists of full lines of text. In this paper,
we propose an easy way to accelerate stochastic gradient descent in this
set-up, and in the general context of learning to recognize sequences. The
principle is called Curriculum Learning, or shaping. The idea is to first learn
to recognize short sequences before training on all available training
sequences. Experiments on three different handwritten text databases (Rimes,
IAM, OpenHaRT) show that a simple implementation of this strategy can
significantly speed up the training of RNN for Text Recognition, and even
significantly improve performance in some cases.
|
1312.1740 | Approximate message-passing with spatially coupled structured operators,
with applications to compressed sensing and sparse superposition codes | cs.IT cond-mat.dis-nn math.IT | We study the behavior of Approximate Message-Passing, a solver for linear
sparse estimation problems such as compressed sensing, when the i.i.d. matrices
(for which it has been specifically designed) are replaced by structured
operators, such as Fourier and Hadamard ones. We show empirically that after
proper randomization, the structure of the operators does not significantly
affect the performance of the solver. Furthermore, for some specially designed
spatially coupled operators, this allows a computationally fast and memory
efficient reconstruction in compressed sensing up to the
information-theoretical limit. We also show how this approach can be applied to
sparse superposition codes, allowing the Approximate Message-Passing decoder to
perform at large rates for moderate block length.
|
1312.1743 | Dual coordinate solvers for large-scale structural SVMs | cs.LG cs.CV | This manuscript describes a method for training linear SVMs (including binary
SVMs, SVM regression, and structural SVMs) from large, out-of-core training
datasets. Current strategies for large-scale learning fall into one of two
camps: batch algorithms, which solve the learning problem given a finite
dataset, and online algorithms, which can process out-of-core datasets. The
former typically requires datasets small enough to fit in memory. The latter is
often phrased as a stochastic optimization problem; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing
schedules, and may converge slowly for problems with large output spaces (e.g.,
structural SVMs). We discuss an algorithm for an "intermediate" regime in which
the data is too large to fit in memory, but the active constraints (support
vectors) are small enough to remain in memory. In this case, one can design
rather efficient learning algorithms that are as stable as batch algorithms,
but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a collection of recognition systems
for articulated pose estimation, facial analysis, 3D object recognition, and
action classification, all with publicly-available code. This writeup describes
the solver in detail.
|
1312.1752 | Particle Swarm Optimization of Information-Content Weighting of Symbolic
Aggregate Approximation | cs.NE cs.AI | Bio-inspired optimization algorithms have been gaining more popularity
recently. One of the most important of these algorithms is particle swarm
optimization (PSO). PSO is based on the collective intelligence of a swarm of
particles. Each particle explores a part of the search space looking for the
optimal position and adjusts its position according to two factors: the first
is its own experience and the second is the collective experience of the whole
swarm. PSO has been successfully used to solve many optimization problems. In
this work we use PSO to improve the performance of a well-known representation
method of time series data which is the symbolic aggregate approximation (SAX).
As with other time series representation methods, SAX results in loss of
information when applied to represent time series. In this paper we use PSO to
propose a new weighted minimum distance (WMD) for SAX to remedy this problem. Unlike the
original minimum distance, the new distance sets different weights to different
segments of the time series according to their information content. This
weighted minimum distance enhances the performance of SAX as we show through
experiments using different time series datasets.
|
1312.1756 | Joint Energy and Spectrum Cooperation for Cellular Communication Systems | cs.IT math.IT | Powered by renewable energy sources, cellular communication systems usually
have different wireless traffic loads and available resources over time. To
match their traffic loads, it is beneficial for two neighboring systems to cooperate
in resource sharing when one is excessive in one resource (e.g., spectrum),
while the other is sufficient in another (e.g., energy). In this paper, we
propose a joint energy and spectrum cooperation scheme between different
cellular systems to reduce their operational costs. When the two systems are
fully cooperative in nature (e.g., belonging to the same entity), we formulate
the cooperation problem as a convex optimization problem to minimize their
weighted sum cost and obtain the optimal solution in closed form. We also study
another partially cooperative scenario where the two systems have their own
interests. We show that the two systems seek partial cooperation as long as
they find inter-system complementarity between the energy and spectrum
resources. Under the partial cooperation conditions, we propose a distributed
algorithm for the two systems to gradually and simultaneously reduce their
costs from the non-cooperative benchmark to the Pareto optimum. This
distributed algorithm also achieves proportionally fair cost reduction by
reducing each system's cost proportionally over iterations. Finally, we provide
numerical results to validate the convergence of the distributed algorithm to
the Pareto optimality and compare the centralized and distributed cost
reduction approaches for fully and partially cooperative scenarios.
|
1312.1760 | Towards Normalizing the Edit Distance Using a Genetic Algorithms Based
Scheme | cs.NE cs.AI | The normalized edit distance is one of the distances derived from the edit
distance. It is useful in some applications because it takes into account the
lengths of the two strings compared. The normalized edit distance is not
defined in terms of edit operations but rather in terms of the edit path. In
this paper we propose a new derivative of the edit distance that also takes
into consideration the lengths of the two strings, but the new distance is
related directly to the edit distance. The particularity of the new distance is
that it uses genetic algorithms to set the values of its parameters. We
conduct experiments to test the new distance and obtain promising
results.
|
1312.1763 | Optimal Error Rates for Interactive Coding II: Efficiency and List
Decoding | cs.DS cs.IT math.IT | We study coding schemes for error correction in interactive communications.
Such interactive coding schemes simulate any $n$-round interactive protocol
using $N$ rounds over an adversarial channel that corrupts up to $\rho N$
transmissions. Important performance measures for a coding scheme are its
maximum tolerable error rate $\rho$, communication complexity $N$, and
computational complexity.
We give the first coding scheme for the standard setting which performs
optimally in all three measures: Our randomized non-adaptive coding scheme has
a near-linear computational complexity and tolerates any error rate $\rho <
1/4$ with a linear $N = \Theta(n)$ communication complexity. This improves over
prior results which each performed well in two of these measures.
We also give results for other settings of interest, namely, the first
computationally and communication efficient schemes that tolerate $\rho <
\frac{2}{7}$ adaptively, $\rho < \frac{1}{3}$ if only one party is required to
decode, and $\rho < \frac{1}{2}$ if list decoding is allowed. These are the
optimal tolerable error rates for the respective settings. These coding schemes
also have near linear computational and communication complexity.
These results are obtained via two techniques: We give a general black-box
reduction which reduces unique decoding, in various settings, to list decoding.
We also show how to boost the computational and communication efficiency of any
list decoder to become near linear.
|
1312.1764 | Optimal Error Rates for Interactive Coding I: Adaptivity and Other
Settings | cs.DS cs.IT math.IT | We consider the task of interactive communication in the presence of
adversarial errors and present tight bounds on the tolerable error-rates in a
number of different settings.
Most significantly, we explore adaptive interactive communication where the
communicating parties decide who should speak next based on the history of the
interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code
for any constant error rate below 1/4 but not more. They asked whether this
bound could be improved using adaptivity. We answer this open question in the
affirmative (with a slightly different collection of resources): Our adaptive
coding scheme tolerates any error rate below 2/7 and we show that tolerating a
higher error rate than 1/3 is impossible. We also show that in the setting of
Franklin et al. [CRYPTO'13], where parties share randomness not known to the
adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For
list-decodable interactive communications, where each party outputs a constant
size list of possible outcomes, the tight tolerable error rate is 1/2.
Our negative results hold even for unbounded communication and computations,
whereas for our positive results communication and computations are
polynomially bounded. Most prior work considered coding schemes with linear
amount of communication, while allowing unbounded computations. We argue that
studying tolerable error rates in this relaxed context helps to identify a
setting's intrinsic optimal error rate. We put forward a strong working
hypothesis which stipulates that for any setting the maximum tolerable error
rate is independent of many computational and communication complexity
measures. We believe this hypothesis to be a powerful guideline for the design
of simple, natural, and efficient coding schemes and for understanding the
(im)possibilities of coding for interactive communications.
|
1312.1766 | Matrix-Monotonic Optimization for MIMO Systems | cs.IT math.IT | For MIMO systems, due to the deployment of multiple antennas at both the
transmitter and the receiver, the design variables e.g., precoders, equalizers,
training sequences, etc., are usually matrices. It is well known that matrix
operations are usually more complicated than their vector counterparts.
In order to overcome the high complexity resulting from matrix variables, in
this paper we investigate a class of elegant multi-objective optimization
problems, namely matrix-monotonic optimization problems (MMOPs). In our work,
various representative MIMO optimization problems are unified into a framework
of matrix-monotonic optimization, which includes linear transceiver design,
nonlinear transceiver design, training sequence design, radar waveform
optimization, the corresponding robust design and so on as its special cases.
Then, exploiting the framework of matrix-monotonic optimization, the optimal
structures of the considered matrix variables are derived first. Based on
the optimal structure, the matrix-variate optimization problems can be greatly
simplified into the ones with only vector variables. In particular, the
dimension of the new vector variable is equal to the minimum number of columns
and rows of the original matrix variable. Finally, we also extend our work to
some more general cases with multiple matrix variables.
|
1312.1799 | Space-Time Polar Coded Modulation | cs.IT math.IT | Polar codes are proven to be capacity-achieving and are shown to have
equivalent or even better finite-length performance than the turbo/LDPC codes
under some improved decoding algorithms over the additive white Gaussian noise
(AWGN) channels. Polar coding is based on the so-called channel polarization
phenomenon induced by a transform over the underlying binary-input channel. The
channel polarization is found to be universal in many signal processing
problems and has been applied to the coded modulation schemes. In this paper,
the channel polarization is further extended to the multiple antenna
transmission following a multilevel coding principle. The multiple-input
multiple-output (MIMO) channels under quadrature amplitude modulation (QAM) are
transformed into a series of synthesized binary-input channels under a
three-stage channel transform. Based on this generalized channel polarization,
the proposed space-time polar coded modulation (STPCM) scheme allows a joint
optimization of the binary polar coding, modulation and MIMO transmission. In
addition, a practical solution of polar code construction over the fading
channels is also provided, where each fading channel is approximated by an
AWGN channel that has the same capacity as the original. The simulations
over the MIMO channel with uncorrelated Rayleigh fast fading show that the
proposed STPCM scheme can outperform the bit-interleaved turbo coded scheme in
all the simulated cases, where the latter is adopted in many existing
communication systems.
|
1312.1830 | Quantization and Greed are Good: One bit Phase Retrieval, Robustness and
Greedy Refinements | cs.IT math.IT math.ST stat.TH | In this paper, we study the problem of robust phase recovery. We investigate
a novel approach based on extremely quantized (one-bit) phase-less measurements
and a corresponding recovery scheme. The proposed approach has surprising
robustness and stability properties and, unlike currently available methods,
allows phase recovery to be performed efficiently from measurements affected by
severe (possibly unknown) non-linear perturbations, such as distortions (e.g.
clipping). Beyond robustness, we show how our approach can be used within
greedy approaches based on alternating minimization. In particular, we propose
novel initialization schemes for the alternating minimization achieving
favorable convergence properties with improved sample complexity.
|
1312.1847 | Understanding Deep Architectures using a Recursive Convolutional Network | cs.LG | A key challenge in designing convolutional network models is sizing them
appropriately. Many factors are involved in these decisions, including number
of layers, feature maps, kernel sizes, etc. Complicating this further is the
fact that each of these influences not only the numbers and dimensions of the
activation units, but also the total number of parameters. In this paper we
focus on assessing the independent contributions of three of these linked
variables: The numbers of layers, feature maps, and parameters. To accomplish
this, we employ a recursive convolutional network whose weights are tied
between layers; this allows us to vary each of the three factors in a
controlled setting. We find that while increasing the number of layers and the
number of parameters each has a clear benefit, the number of feature maps (and hence
dimensionality of the representation) appears ancillary, and finds most of its
benefit through the introduction of more weights. Our results (i) empirically
confirm the notion that adding layers alone increases computational power,
within the context of convolutional layers, and (ii) suggest that precise
sizing of convolutional feature map dimensions is itself of little concern;
more attention should be paid to the number of parameters in these layers
instead.
|
1312.1858 | How Santa Fe Ants Evolve | cs.NE | The Santa Fe Ant model problem has been extensively used to investigate, test
and evaluate Evolutionary Computing systems and methods over the past two
decades. There is however no literature on its program structures that are
systematically used for fitness improvement, the geometries of those structures
and their dynamics during optimization. This paper analyzes the Santa Fe Ant
Problem using a new phenotypic schema and landscape analysis based on executed
instruction sequences. For the first time we detail systematic structural
features that give high fitness and the evolutionary dynamics of such
structures. The new schema avoids variances due to introns. We develop a
phenotypic variation method that tests the new understanding of the landscape.
We also develop a modified function set that tests newly identified
synchronization constraints. We obtain favorable computational effort compared
to the literature when testing the new variation and function set on
both the Santa Fe Trail and the more computationally demanding Los Altos
Trail. Our findings suggest that for the Santa Fe Ant problem, a perspective of
program assembly from repetition of highly fit responses to trail conditions
leads to better analysis and performance.
|
1312.1860 | Flexible queries in XML native databases | cs.IR cs.DB | To date, most flexible querying systems for native XML databases (DBs) are
based on exploiting the tree structure of their semi-structured data (SSD).
However, it is important to test the efficiency of the Formal Concept Analysis
(FCA) formalism for this type of data, since it has demonstrated great
performance in the field of information retrieval (IR). FCA-based IR in XML
databases relies mainly on the lattice structure.
Each concept of this lattice can be interpreted as a pair (response, query). In
this work, we provide a new flexible modeling of XML DB based on fuzzy FCA as a
first step towards flexible querying of SSD.
|
1312.1870 | Energy-Efficient, Large-scale Distributed-Antenna System (L-DAS) for
Multiple Users | cs.IT math.IT | A large-scale distributed-antenna system (L-DAS) with a very large number of
distributed antennas, possibly up to a few hundred, is considered. A
few major issues of the L-DAS, such as high latency, energy consumption,
computational complexity, and large feedback (signaling) overhead, are
identified. The potential capability of the L-DAS is illuminated in terms of
energy efficiency (EE) throughout the paper. We first present a general model of the
power consumption of an L-DAS, and formulate an EE maximization problem. To
tackle two crucial issues, namely the huge computational complexity and large
amount of feedback (signaling) information, we propose a channel-gain-based
antenna selection (AS) method and an interference-based user clustering (UC)
method. The original problem is then split into multiple per-cluster
subproblems, and each cluster's precoding and power control are managed in parallel
for high EE. Simulation results reveal that i) using all antennas for
zero-forcing multiuser multiple-input multiple-output (MU-MIMO) is energy
inefficient if there is nonnegligible overhead power consumption on MU-MIMO
processing, and ii) increasing the number of antennas does not necessarily
result in a high EE. Furthermore, the results validate and underpin the EE
merit of the proposed L-DAS combined with the AS, UC, precoding, and power
control, by comparing it with non-clustering L-DAS and colocated antenna systems.
|
1312.1882 | Shannon Sampling and Parseval Frames on Compact Manifolds | cs.IT math.FA math.IT | Our article is a summary of some results for Riemannian manifolds that were
obtained in \cite{gpes}-\cite{Pessubm}. To the best of our knowledge these are
the pioneering papers which contain the most general results about frames,
Shannon sampling, and cubature formulas on compact and non-compact Riemannian
manifolds. In particular, the paper \cite{gpes} gives an "end point"
construction of tight localized frames on homogeneous compact manifolds. The
paper \cite{Pessubm} is the first systematic development of localized frames on
compact domains in Euclidean spaces.
|
1312.1887 | Constraints on the search space of argumentation | cs.AI | Drawing from research on computational models of argumentation (particularly
the Carneades Argumentation System), we explore the graphical representation of
arguments in a dispute; then, comparing two different traditions on the limits
of the justification of decisions, and devising an intermediate, semi-formal,
model, we also show that it can shed light on the theory of dispute resolution.
We conclude our paper with an observation on the usefulness of highly
constrained reasoning for Online Dispute Resolution systems. Restricting the
search space of arguments exclusively to reasons proposed by the parties
(vetoing the introduction of new arguments by the human or artificial
arbitrator) is the only way to introduce some kind of decidability -- together
with foreseeability -- in the argumentation system.
|
1312.1897 | Bootstrapped Grouping of Results to Ambiguous Person Name Queries | cs.IR | Some of the main ranking features of today's search engines reflect result
popularity and are based on ranking models, such as PageRank, implicit feedback
aggregation, and more. While such features yield satisfactory results for a
wide range of queries, they aggravate the problem of search for ambiguous
entities: Searching for a person yields satisfactory results only if the person
we are looking for is represented by a high-ranked Web page and all required
information is contained in this page. Otherwise, the user has to either
reformulate/refine the query or manually inspect low-ranked results to find the
person in question. A possible approach to solve this problem is to cluster the
results, so that each cluster represents one of the persons occurring in the
answer set. However, clustering search results has proven to be a difficult
endeavor in itself, and the resulting clusters are typically of moderate quality.
A wealth of useful information about persons occurs in Web 2.0 platforms,
such as LinkedIn, Wikipedia, Facebook, etc. Being human-generated, the
information on these platforms is clean, focused, and already disambiguated. We
show that when searching for ambiguous person names the information from such
platforms can be bootstrapped to group the results according to the individuals
occurring in them. We have evaluated our methods on a hand-labeled dataset of
around 5,000 Web pages retrieved from Google queries on 50 ambiguous person
names.
|