id | title | categories | abstract |
|---|---|---|---|
1305.2245 | Capacity of a Simple Intercellular Signal Transduction Channel | cs.IT math.IT q-bio.CB | We model the ligand-receptor molecular communication channel with a
discrete-time Markov model, and show how to obtain the capacity of this
channel. We show that the capacity-achieving input distribution is iid;
further, unusually for a channel with memory, we show that feedback does not
increase the capacity of this channel.
|
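The capacity computation referenced above can be illustrated with the classic Blahut-Arimoto iteration for a *memoryless* channel (a sketch only: the paper's ligand-receptor channel has memory, so this toy does not reproduce its method; the binary symmetric channel below is a made-up example):

```python
import math

def blahut_arimoto(P, iters=200):
    """Capacity in bits of a discrete memoryless channel, P[x][y] = p(y|x)."""
    nx, ny = len(P), len(P[0])
    r = [1.0 / nx] * nx                              # input distribution
    for _ in range(iters):
        q = [sum(r[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # rescale r[x] by exp(KL(p(.|x) || q)), then renormalize
        w = [r[x] * math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                                 for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        z = sum(w)
        r = [v / z for v in w]
    # mutual information at the final input distribution
    q = [sum(r[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    return sum(r[x] * P[x][y] * math.log2(P[x][y] / q[y])
               for x in range(nx) for y in range(ny) if P[x][y] > 0)

bsc = [[0.9, 0.1], [0.1, 0.9]]  # binary symmetric channel, crossover 0.1
# capacity of BSC(0.1) is 1 - H(0.1), about 0.531 bits
```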
1305.2254 | Programming with Personalized PageRank: A Locally Groundable First-Order
Probabilistic Logic | cs.AI | In many probabilistic first-order representation systems, inference is
performed by "grounding"---i.e., mapping it to a propositional representation,
and then performing propositional inference. With a large database of facts,
groundings can be very large, making inference and learning computationally
expensive. Here we present a first-order probabilistic language which is
well-suited to approximate "local" grounding: every query $Q$ can be
approximately grounded with a small graph. The language is an extension of
stochastic logic programs where inference is performed by a variant of
personalized PageRank. Experimentally, we show that the approach performs well
without weight learning on an entity resolution task; that supervised
weight-learning improves accuracy; and that grounding time is independent of DB
size. We also show that order-of-magnitude speedups are possible by
parallelizing learning.
|
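The local grounding described above is built on personalized (seed-restarted) PageRank. A minimal pure-Python sketch of that primitive, on a hypothetical toy graph (the node names and restart probability are illustrative, not taken from the paper):

```python
def personalized_pagerank(graph, seed, alpha=0.15, iters=50):
    """graph: dict node -> list of out-neighbours; seed: the restart node."""
    nodes = list(graph)
    ppr = {v: 0.0 for v in nodes}
    ppr[seed] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        nxt[seed] = alpha                    # restart mass returns to the seed
        for v in nodes:
            out = graph[v]
            if not out:                      # dangling node: mass back to seed
                nxt[seed] += (1 - alpha) * ppr[v]
            else:
                share = (1 - alpha) * ppr[v] / len(out)
                for w in out:
                    nxt[w] += share
        ppr = nxt
    return ppr

graph = {"q": ["a", "b"], "a": ["b"], "b": ["a"], "c": ["q"]}
scores = personalized_pagerank(graph, seed="q")
# Nodes that nothing links to (like "c") end up with zero mass.
```

Because mass stays concentrated near the seed, only a small neighbourhood of the graph ever needs to be materialized, which is the intuition behind "local" grounding.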
1305.2265 | Quality Measures of Parameter Tuning for Aggregated Multi-Objective
Temporal Planning | cs.AI | Parameter tuning is recognized today as a crucial ingredient when tackling an
optimization problem. Several meta-optimization methods have been proposed to
find the best parameter set for a given optimization algorithm and (set of)
problem instances. When the objective of the optimization is some scalar
quality of the solution given by the target algorithm, this quality is also
used as the basis for the quality of parameter sets. But in the case of
multi-objective optimization by aggregation, the set of solutions is given by
several single-objective runs with different weights on the objectives, and it
turns out that the hypervolume of the final population of each single-objective
run might be a better indicator of the global performance of the aggregation
method than the best fitness in its population. This paper discusses this issue
on a case study in multi-objective temporal planning using the evolutionary
planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how
ParamILS makes a difference between both approaches, and demonstrate that
indeed, in this context, using the hypervolume indicator as ParamILS target is
the best choice. Other issues pertaining to parameter tuning in the proposed
context are also discussed.
|
1305.2269 | Beyond Physical Connections: Tree Models in Human Pose Estimation | cs.CV | Simple tree models for articulated objects have prevailed over the last
decade. However, it is also widely believed that these simple tree models
cannot capture large variations in many scenarios, such as human pose
estimation.
This paper attempts to address three questions: 1) are simple tree models
sufficient? more specifically, 2) how to use tree models effectively in human
pose estimation? and 3) how shall we use combined parts together with single
parts efficiently?
Assume we have a set of single parts and combined parts, with the goal of
estimating a joint distribution of their locations. Surprisingly, we find that
no latent variables are introduced on the Leeds Sport Dataset (LSP) when
learning latent trees for the deformable model, which aims to approximate the
joint distribution of body part locations using a minimal tree structure. This
suggests one can straightforwardly use a mixed representation of single and
combined parts to approximate their joint distribution in a simple tree model.
As such, one only needs to build Visual Categories of the combined parts, and
then perform inference on the learned latent tree. Our method outperformed the
state of the art on the LSP, both when the training images come from the same
dataset and when they come from the PARSE dataset. Experiments on animal images
from the VOC challenge further support our findings.
|
1305.2299 | Fast Collision Checking: From Single Robots to Multi-Robot Teams | cs.RO cs.AI cs.MA | We examine three different algorithms that enable the collision certificate
method from [Bialkowski, et al.] to handle the case of a centralized
multi-robot team. By taking advantage of symmetries in the configuration space
of multi-robot teams, our methods can significantly reduce the number of
collision checks vs. both [Bialkowski, et al.] and standard collision checking
implementations.
|
1305.2322 | Simulation of a typical house in the region of Antananarivo, Madagascar.
Determination of passive solutions using local materials | cs.CE | This paper deals with new proposals for the design of passive solutions
adapted to the climate of the highlands of Madagascar. While the strongest
population density is located in the central highlands, the problem of thermal
comfort in buildings occurs mainly during winter time. Currently, people use
raw wood to warm the poorly designed houses. This leads to a large scale
deforestation of the areas and causes erosion and environmental problems. The
methodology used consisted of the identification of a typical building and of a
typical meteorological year. Simulations were carried out using a thermal and
airflow software (CODYRUN) to improve each building component (roof, walls,
windows, and soil) in such a way as to estimate the influence of some technical
solutions on each component in terms of thermal comfort. The proposed solutions
also took into account the use of local materials and the standard of living of
the country.
|
1305.2352 | Speech Enhancement Using Pitch Detection Approach For Noisy Environment | cs.SD cs.CL | Acoustical mismatch between the training and testing phases markedly degrades
speech recognition results. This problem has limited the development of
real-world, general-purpose applications, as testing conditions are highly
variant or even unpredictable during the training process. The background
noise therefore has to be removed from the noisy speech signal to increase
signal intelligibility and reduce listener fatigue. Enhancement techniques
applied to such systems as pre-processing stages remarkably improve
recognition results. In this paper, a novel approach is used to enhance the
perceived quality of the speech signal when the additive noise cannot be
directly controlled. Instead of controlling the background noise, we propose to
reinforce the speech signal so that it can be heard more clearly in noisy
environments. The subjective evaluation shows that the proposed method improves
perceptual quality of speech in various noisy environments. In some cases,
speaking may be more convenient than typing, even for rapid typists: many
mathematical symbols are missing from the keyboard but can be easily spoken and
recognized. Therefore, the proposed system can be used in an application
designed for mathematical symbol recognition (especially symbols not available
on the keyboard) in schools.
|
1305.2357 | Immunization strategies for epidemic processes in time-varying contact
networks | physics.soc-ph cs.SI q-bio.PE | Spreading processes represent a very efficient tool to investigate the
structural properties of networks and the relative importance of their
constituents, and have been widely used to this aim in static networks. Here we
consider simple disease spreading processes on empirical time-varying networks
of contacts between individuals, and compare the effect of several immunization
strategies on these processes. An immunization strategy is defined as the
choice of a set of nodes (individuals) who cannot catch nor transmit the
disease. This choice is performed according to a certain ranking of the nodes
of the contact network. We consider various ranking strategies, focusing in
particular on the role of the training window during which the nodes'
properties are measured in the time-varying network: longer training windows
correspond to a larger amount of information collected and could be expected to
result in better performance of the immunization strategies. We find instead
an unexpected saturation in the efficiency of strategies based on nodes'
characteristics when the length of the training window is increased, showing
that a limited amount of information on the contact patterns is sufficient to
design efficient immunization strategies. This finding is balanced by the large
variations of the contact patterns, which strongly alter the importance of
nodes from one period to the next and therefore significantly limit the
efficiency of any strategy based on an importance ranking of nodes. We also
observe that strategies that include an element of randomness and are based
on temporally local information do not perform as well, but are largely
independent of the amount of information available.
|
1305.2362 | Revisiting Bayesian Blind Deconvolution | cs.CV cs.LG stat.ML | Blind deconvolution involves the estimation of a sharp signal or image given
only a blurry observation. Because this problem is fundamentally ill-posed,
strong priors on both the sharp image and blur kernel are required to
regularize the solution space. While this naturally leads to a standard MAP
estimation framework, performance is compromised by unknown trade-off parameter
settings, optimization heuristics, and convergence issues stemming from
non-convexity and/or poor prior selections. To mitigate some of these problems,
a number of authors have recently proposed substituting a variational Bayesian
(VB) strategy that marginalizes over the high-dimensional image space leading
to better estimates of the blur kernel. However, the underlying cost function
now involves both integrals with no closed-form solution and complex,
function-valued arguments, thus losing the transparency of MAP. Beyond standard
Bayesian-inspired intuitions, it thus remains unclear by exactly what mechanism
these methods are able to operate, rendering understanding, improvements and
extensions more difficult. To elucidate these issues, we demonstrate that the
VB methodology can be recast as an unconventional MAP problem with a very
particular penalty/prior that couples the image, blur kernel, and noise level
in a principled way. This unique penalty has a number of useful characteristics
pertaining to relative concavity, local minima avoidance, and scale-invariance
that allow us to rigorously explain the success of VB including its existing
implementational heuristics and approximations. It also provides strict
criteria for choosing the optimal image prior that, perhaps
counter-intuitively, need not reflect the statistics of natural scenes. In so
doing we challenge the prevailing notion of why VB is successful for blind
deconvolution while providing a transparent platform for introducing
enhancements.
|
1305.2386 | Disappointment in Social Choice Protocols | cs.MA | Social choice theory is a theoretical framework for analysis of combining
individual preferences, interests, or welfare to reach a collective decision or
social welfare in some sense. We introduce a new criterion for social choice
protocols called social disappointment. Social disappointment happens when the
outcome of a voting system occurs for those alternatives which are at the end
of at least half of individual preference profiles. Here we introduce some
protocols that prevent social disappointment and prove an impossibility theorem
based on this key concept.
|
1305.2387 | Loss Rate Based Fountain Codes for Data Transfer | cs.NI cs.IT math.IT | Fountain codes are becoming increasingly important for data transfer over
dedicated high-speed, long-distance networks. However, the encoding and
decoding complexity of traditional fountain codes such as LT and Raptor codes
remains high. In this paper, a new fountain code named LRF (Loss Rate based
Fountain) code for data transfer is proposed. To improve encoding and decoding
efficiency and to decrease the number of redundant encoding symbols, an
innovative degree distribution is proposed in place of the robust soliton
degree distribution used in LT (Luby Transform) codes. In LRF codes, the
degree of an encoding symbol is decided by the loss rate, and the window size
is extended dynamically. Simulation results show that the proposed method
outperforms LT and Raptor codes in terms of encoding ratio, degree ratio, and
encoding and decoding efficiency.
|
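For context, the robust soliton degree distribution that LRF replaces (the standard choice in LT codes) can be sketched in a few lines of pure Python; `c` and `delta` are the usual tuning parameters and the values below are illustrative:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k (LT codes).
    Returns mu[0..k]; mu[0] is unused and kept at 0 for 1-based indexing."""
    s = c * math.log(k / delta) * math.sqrt(k)
    # ideal soliton component rho
    rho = [0.0] + [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # spike component tau, concentrated below and at d = k/s
    tau = [0.0] * (k + 1)
    pivot = int(round(k / s))
    for d in range(1, pivot):
        tau[d] = s / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)                 # normalizing constant
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]
```

Degree 2 carries most of the probability mass, which is what keeps the belief-propagation "ripple" alive during LT decoding.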
1305.2388 | Fast Feature Reduction in intrusion detection datasets | cs.CR cs.LG | In most intrusion detection systems (IDS), the system tries to learn the
characteristics of different types of attacks by analyzing packets sent or
received over the network. These packets have many features, but not all of
them need to be analyzed to detect a specific type of attack. Detection speed
and computational cost are also vital concerns, because datasets in this
domain are typically very large. In this paper we propose a very simple and
fast feature selection method that eliminates features carrying no helpful
information, resulting in faster learning through the omission of redundant
features. We compared our proposed method with three of the most successful
similarity-based feature selection algorithms: Correlation Coefficient, Least
Square Regression Error, and Maximal Information Compression Index. We then
used the features recommended by each of these algorithms in two popular
classifiers, Bayes and KNN, to measure the quality of the recommendations.
Experimental results show that although the proposed method does not
outperform the evaluated algorithms in accuracy by any large margin, it has a
huge advantage in computational cost.
|
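A minimal sketch of the correlation-coefficient baseline mentioned above (the function names and threshold are hypothetical; features are columns of numeric values scored against a numeric label):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0   # constant column -> 0

def filter_features(columns, labels, threshold=0.1):
    """Keep indices of features whose |correlation| with labels exceeds
    the threshold; everything else is treated as uninformative."""
    return [i for i, col in enumerate(columns)
            if abs(pearson(col, labels)) > threshold]
```

A single pass over each column makes this linear in the dataset size, which is the kind of cost profile the abstract is arguing for.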
1305.2395 | Shape Reconstruction and Recognition with Isolated Non-directional Cues | cs.CV | The paper investigates a hypothesis that our visual system groups visual cues
based on how they form a surface, or more specifically triangulation derived
from the visual cues. To test our hypothesis, we compare shape recognition with
three different representations of visual cues: a set of isolated dots
delineating the outline of the shape, a set of triangles obtained from Delaunay
triangulation of the set of dots, and a subset of Delaunay triangles excluding
those outside of the shape. Each participant was assigned to one particular
representation type and increased the number of dots (and consequently
triangles) until the underlying shape could be identified. We compare the
average number of dots needed for identification among three types of
representations. Our hypothesis predicts that the results from the three
representations will be similar. However, they show statistically significant
differences. The paper also presents triangulation based algorithms for
reconstruction and recognition of a shape from a set of isolated dots.
Experiments showed that the algorithms were more effective and perceptually
agreeable than similar contour based ones. From these experiments, we conclude
that triangulation does affect our shape recognition. However, the surface
based approach presents a number of computational advantages over the contour
based one and should be studied further.
|
1305.2415 | Exponentiated Gradient LINUCB for Contextual Multi-Armed Bandits | cs.AI | We present Exponentiated Gradient LINUCB, an algorithm for contextual
multi-armed bandits. This algorithm uses Exponentiated Gradient to find the
optimal exploration of the LINUCB. Within a deliberately designed offline
simulation framework we conduct evaluations with real online event log data.
The experimental results demonstrate that our algorithm outperforms surveyed
algorithms.
|
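The LINUCB scoring rule underlying the algorithm can be sketched as follows (a minimal disjoint-model version in pure Python with the context dimension fixed at 2; the exploration weight `alpha` is held constant here, whereas the paper's contribution is precisely tuning it online with Exponentiated Gradient):

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv2(M):
    """Inverse of a 2x2 matrix (enough for this toy context dimension)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

class LinUCBArm:
    def __init__(self):
        self.A = [[1.0, 0.0], [0.0, 1.0]]    # ridge-regularized Gram matrix
        self.b = [0.0, 0.0]

    def score(self, x, alpha=1.0):
        Ainv = inv2(self.A)
        theta = mat_vec(Ainv, self.b)        # ridge estimate of arm weights
        mean = sum(t * xi for t, xi in zip(theta, x))
        width = sum(xi * yi for xi, yi in zip(x, mat_vec(Ainv, x))) ** 0.5
        return mean + alpha * width          # optimism in face of uncertainty

    def update(self, x, reward):
        for i in range(2):
            for j in range(2):
                self.A[i][j] += x[i] * x[j]
            self.b[i] += reward * x[i]
```

At decision time one scores every arm with the current context and plays the argmax; the `width` term shrinks as an arm accumulates observations in that direction.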
1305.2436 | Regularized M-estimators with nonconvexity: Statistical and algorithmic
theory for local optima | math.ST cs.IT math.IT stat.ML stat.TH | We provide novel theoretical results regarding local optima of regularized
$M$-estimators, allowing for nonconvexity in both loss and penalty functions.
Under restricted strong convexity on the loss and suitable regularity
conditions on the penalty, we prove that \emph{any stationary point} of the
composite objective function will lie within statistical precision of the
underlying parameter vector. Our theory covers many nonconvex objective
functions of interest, including the corrected Lasso for errors-in-variables
linear models; regression for generalized linear models with nonconvex
penalties such as SCAD, MCP, and capped-$\ell_1$; and high-dimensional
graphical model estimation. We quantify statistical accuracy by providing
bounds on the $\ell_1$-, $\ell_2$-, and prediction error between stationary
points and the population-level optimum. We also propose a simple modification
of composite gradient descent that may be used to obtain a near-global optimum
within statistical precision $\epsilon$ in $\log(1/\epsilon)$ steps, which is
the fastest possible rate of any first-order method. We provide simulation
studies illustrating the sharpness of our theoretical results.
|
1305.2440 | Rate Region of the (4,3,3) Exact-Repair Regenerating Codes | cs.IT math.IT | Exact-repair regenerating codes are considered for the case (n,k,d)=(4,3,3),
for which a complete characterization of the rate region is provided. This
characterization answers in the affirmative the open question whether there
exists a non-vanishing gap between the optimal bandwidth-storage tradeoff of
the functional-repair regenerating codes (i.e., the cut-set bound) and that of
the exact-repair regenerating codes. The converse proof relies on the existence
of symmetric optimal solutions. For the achievability, only one non-trivial
corner point of the rate region needs to be addressed, for which an explicit
binary code construction is given.
|
1305.2452 | Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet
Allocation | cs.LG | In the internet era there has been an explosion in the amount of digital text
information available, leading to difficulties of scale for traditional
inference algorithms for topic models. Recent advances in stochastic
variational inference algorithms for latent Dirichlet allocation (LDA) have
made it feasible to learn topic models on large-scale corpora, but these
methods do not currently take full advantage of the collapsed representation of
the model. We propose a stochastic algorithm for collapsed variational Bayesian
inference for LDA, which is simpler and more efficient than the state of the
art method. We show connections between collapsed variational Bayesian
inference and MAP estimation for LDA, and leverage these connections to prove
convergence properties of the proposed algorithm. In experiments on large-scale
text corpora, the algorithm was found to converge faster and often to a better
solution than the previous method. Human-subject experiments also demonstrated
that the method can learn coherent topics in seconds on small corpora,
facilitating the use of topic models in interactive document analysis software.
|
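The collapsed update at the core of the approach (the zero-order "CVB0" update) can be sketched in batch form; the toy corpus and hyperparameters below are illustrative, and the paper's actual contribution, applying this update stochastically over minibatches, is not reproduced here:

```python
import random

def cvb0(docs, V, K, alpha=0.1, beta=0.01, iters=20, seed=0):
    """Batch CVB0 (zero-order collapsed variational Bayes) for LDA.
    docs: list of documents, each a list of word ids in [0, V).
    Counts are recomputed once per sweep, which is enough for a sketch."""
    rng = random.Random(seed)
    # gamma[d][n][k]: responsibility of topic k for token n of document d
    gamma = [[[rng.random() for _ in range(K)] for _ in doc] for doc in docs]
    for doc_g in gamma:
        for n in range(len(doc_g)):
            z = sum(doc_g[n])
            doc_g[n] = [g / z for g in doc_g[n]]
    for _ in range(iters):
        # expected topic counts from the current responsibilities
        ndk = [[sum(tok[k] for tok in doc_g) for k in range(K)]
               for doc_g in gamma]
        nkw = [[0.0] * V for _ in range(K)]
        nk = [0.0] * K
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                for k in range(K):
                    nkw[k][w] += gamma[d][n][k]
                    nk[k] += gamma[d][n][k]
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                # leave-one-out counts exclude the current token's own mass
                new = [(ndk[d][k] - gamma[d][n][k] + alpha)
                       * (nkw[k][w] - gamma[d][n][k] + beta)
                       / (nk[k] - gamma[d][n][k] + V * beta)
                       for k in range(K)]
                z = sum(new)
                gamma[d][n] = [g / z for g in new]
    return gamma
```

The MAP connection the abstract mentions comes from viewing these fractional counts as smoothed sufficient statistics, which is what makes convergence analysis tractable.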
1305.2459 | Interference Alignment in Distributed Antenna Systems | cs.IT math.IT | Interference alignment (IA) is a cooperative transmission strategy that
improves spectral efficiency in high signal-to-noise ratio (SNR) environments,
yet performs poorly in low-SNR scenarios. This limits IA's utility in cellular
systems as it is ineffective in improving cell-edge data rates. Modern cellular
architectures such as distributed antenna systems (DAS), however, promise to
boost cell-edge SNR, creating the environment needed to realize practical IA
gains. Existing IA solutions cannot be applied to DAS as they neglect the
per-remote-radio power constraints imposed on distributed precoders. This paper
considers two types of distributed antenna IA systems: ones with a limit on
maximum per-radio power, and ones with a strict equality constraint on
per-radio power. The rate-loss incurred by a simple power back-off strategy,
used in systems with maximum power constraints, is characterized analytically.
It is also shown that enforcing strict power constraints avoids such a
rate-loss but negatively affects IA feasibility. For such systems, an IA
algorithm is proposed and feasibility conditions are derived based on the
concept of system properness. Finally, numerical results validate the analysis
and demonstrate that IA and DAS can be successfully combined to mitigate
inter-cell interference and improve performance for most mobile users,
especially those at the cell-edge.
|
1305.2460 | Spatially Sparse Precoding in Millimeter Wave MIMO Systems | cs.IT math.IT | Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss
than the microwave signals currently used in most wireless applications. MmWave
systems must therefore leverage large antenna arrays, made possible by the
decrease in wavelength, to combat pathloss with beamforming gain. Beamforming
with multiple data streams, known as precoding, can be used to further improve
mmWave spectral efficiency. Both beamforming and precoding are done digitally
at baseband in traditional multi-antenna systems. The high cost and power
consumption of mixed-signal devices in mmWave systems, however, make analog
processing in the RF domain more attractive. This hardware limitation restricts
the feasible set of precoders and combiners that can be applied by practical
mmWave transceivers. In this paper, we consider transmit precoding and receiver
combining in mmWave systems with large antenna arrays. We exploit the spatial
structure of mmWave channels to formulate the precoding/combining problem as a
sparse reconstruction problem. Using the principle of basis pursuit, we develop
algorithms that accurately approximate optimal unconstrained precoders and
combiners such that they can be implemented in low-cost RF hardware. We present
numerical results on the performance of the proposed algorithms and show that
they allow mmWave systems to approach their unconstrained performance limits,
even when transceiver hardware constraints are considered.
|
1305.2480 | Weight Distribution for Non-binary Cluster LDPC Code Ensemble | cs.IT math.IT | In this paper, we derive the average weight distributions for the irregular
non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we
give the exponential growth rate of the average weight distribution in the
limit of large code length. We show that there exist $(2,d_c)$-regular
non-binary cluster LDPC code ensembles whose normalized typical minimum
distances are strictly positive.
|
1305.2490 | Combining Drift Analysis and Generalized Schema Theory to Design
Efficient Hybrid and/or Mixed Strategy EAs | cs.NE | Hybrid and mixed strategy EAs have become rather popular for tackling various
complex and NP-hard optimization problems. While empirical evidence suggests
that such algorithms are successful in practice, rather little theoretical
support for their success is available, not to mention a solid mathematical
foundation that would provide guidance towards an efficient design of this type
of EAs. In the current paper we develop a rigorous mathematical framework that
suggests such designs based on generalized schema theory, fitness levels and
drift analysis. An example-application for tackling one of the classical
NP-hard problems, the "single-machine scheduling problem" is presented.
|
1305.2496 | Perturbation centrality and Turbine: a novel centrality measure obtained
using a versatile network dynamics tool | q-bio.MN cond-mat.dis-nn cs.SI physics.bio-ph | Analysis of network dynamics became a focal point to understand and predict
changes of complex systems. Here we introduce Turbine, a generic framework
enabling fast simulation of any algorithmically definable dynamics on very
large networks. Using a perturbation transmission model inspired by
communicating vessels, we define a novel centrality measure: perturbation
centrality. Hubs and inter-modular nodes proved to be highly efficient in
perturbation propagation. High perturbation centrality nodes of the Met-tRNA
synthetase protein structure network were identified as amino acids involved in
intra-protein communication by earlier studies. Changes in perturbation
centralities of yeast interactome nodes upon various stresses well
recapitulated the functional changes of stressed yeast cells. The novelty and
usefulness of perturbation centrality were validated in several other model,
biological, and social networks. The Turbine software and the perturbation
centrality measure may provide a large variety of novel options to assess
signaling, drug action, environmental and social interventions. The Turbine
algorithm is available at: http://www.turbine.linkgroup.hu
|
1305.2498 | A Further Generalization of the Finite-Population Geiringer-like Theorem
for POMDPs to Allow Recombination Over Arbitrary Set Covers | cs.AI | A popular current research trend deals with expanding Monte-Carlo tree
search sampling methodologies to environments with uncertainty and
incomplete information. Recently a finite population version of Geiringer
theorem with nonhomologous recombination has been adapted to the setting of
Monte-Carlo tree search to cope with randomness and incomplete information by
exploiting the intrinsic similarities within the state space of the problem.
The only limitation of the new theorem is that the similarity relation was
assumed to be an equivalence relation on the set of states. In the current
paper we lift this "curtain of limitation" by allowing the similarity relation
to be modeled in terms of an arbitrary set cover of the set of state-action
pairs.
|
1305.2504 | Geiringer Theorems: From Population Genetics to Computational
Intelligence, Memory Evolutive Systems and Hebbian Learning | cs.NE | The classical Geiringer theorem addresses the limiting frequency of
occurrence of various alleles after repeated application of crossover. It has
been adapted to the setting of evolutionary algorithms and, much more
recently, to reinforcement learning and Monte-Carlo tree search methodology to
cope with a rather challenging question of action evaluation at the chance
nodes. The theorem motivates novel dynamic parallel algorithms that are
explicitly described in the current paper for the first time. The algorithms
involve independent agents traversing a dynamically constructed directed graph
that possibly has loops. A rather elegant and profound category-theoretic model
of cognition in biological neural networks developed by a well-known French
mathematician, professor Andree Ehresmann jointly with a neurosurgeon, Jan Paul
Vanbremeersch over the last thirty years provides a hint at the connection
between such algorithms and Hebbian learning.
|
1305.2505 | On the Generalization Ability of Online Learning Algorithms for Pairwise
Loss Functions | cs.LG stat.ML | In this paper, we study the generalization properties of online learning
based stochastic methods for supervised learning problems where the loss
function is dependent on more than one training sample (e.g., metric learning,
ranking). We present a generic decoupling technique that enables us to provide
Rademacher complexity-based generalization error bounds. Our bounds are in
general tighter than those obtained by Wang et al (COLT 2012) for the same
problem. Using our decoupling technique, we are further able to obtain fast
convergence rates for strongly convex pairwise loss functions. We are also able
to analyze a class of memory efficient online learning algorithms for pairwise
learning problems that use only a bounded subset of past training samples to
update the hypothesis at each step. Finally, in order to complement our
generalization bounds, we propose a novel memory efficient online learning
algorithm for higher order learning problems with bounded regret guarantees.
|
1305.2524 | Corrupted Sensing: Novel Guarantees for Separating Structured Signals | cs.IT math.IT math.OC stat.ML | We study the problem of corrupted sensing, a generalization of compressed
sensing in which one aims to recover a signal from a collection of corrupted or
unreliable measurements. While an arbitrary signal cannot be recovered in the
face of arbitrary corruption, tractable recovery is possible when both signal
and corruption are suitably structured. We quantify the relationship between
signal recovery and two geometric measures of structure, the Gaussian
complexity of a tangent cone and the Gaussian distance to a subdifferential. We
take a convex programming approach to disentangling signal and corruption,
analyzing both penalized programs that trade off between signal and corruption
complexity, and constrained programs that bound the complexity of signal or
corruption when prior information is available. In each case, we provide
conditions for exact signal recovery from structured corruption and stable
signal recovery from structured corruption with added unstructured noise. Our
simulations demonstrate close agreement between our theoretical recovery bounds
and the sharp phase transitions observed in practice. In addition, we provide
new interpretable bounds for the Gaussian complexity of sparse vectors,
block-sparse vectors, and low-rank matrices, which lead to sharper guarantees
of recovery when combined with our results and those in the literature.
|
1305.2532 | Learning Policies for Contextual Submodular Prediction | cs.LG stat.ML | Many prediction domains, such as ad placement, recommendation, trajectory
prediction, and document summarization, require predicting a set or list of
options. Such lists are often evaluated using submodular reward functions that
measure both quality and diversity. We propose a simple, efficient, and
provably near-optimal approach to optimizing such prediction problems based on
no-regret learning. Our method leverages a surprising result from online
submodular optimization: a single no-regret online learner can compete with an
optimal sequence of predictions. Compared to previous approaches, which either learn
a sequence of classifiers or rely on stronger assumptions such as
realizability, we ensure both data-efficiency as well as performance guarantees
in the fully agnostic setting. Experiments validate the efficiency and
applicability of the approach on a wide range of problems including manipulator
trajectory optimization, news recommendation and document summarization.
|
1305.2545 | Bandits with Knapsacks | cs.DS cs.LG | Multi-armed bandit problems are the predominant theoretical model of
exploration-exploitation tradeoffs in learning, and they have countless
applications ranging from medical trials, to communication networks, to Web
search and advertising. In many of these application domains the learner may be
constrained by one or more supply (or budget) limits, in addition to the
customary limitation on the time horizon. The literature lacks a general model
encompassing these sorts of problems. We introduce such a model, called
"bandits with knapsacks", that combines aspects of stochastic integer
programming with online learning. A distinctive feature of our problem, in
comparison to the existing regret-minimization literature, is that the optimal
policy for a given latent distribution may significantly outperform the policy
that plays the optimal fixed arm. Consequently, achieving sublinear regret in
the bandits-with-knapsacks problem is significantly more challenging than in
conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic
optimum: one is based on a novel "balanced exploration" paradigm, while the
other is a primal-dual algorithm that uses multiplicative updates. Further, we
prove that the regret achieved by both algorithms is optimal up to
polylogarithmic factors. We illustrate the generality of the problem by
presenting applications in a number of different domains including electronic
commerce, routing, and scheduling. As one example of a concrete application, we
consider the problem of dynamic posted pricing with limited supply and obtain
the first algorithm whose regret, with respect to the optimal dynamic policy,
is sublinear in the supply.
|
1305.2548 | On Min-Cut Algorithms for Half-Duplex Relay Networks | cs.IT math.IT | Computing the cut-set bound in half-duplex relay networks is a challenging
optimization problem, since it requires finding the cut-set optimal half-duplex
schedule. This subproblem in general involves an exponential number of
variables, since the number of ways to assign each node to either transmitter
or receiver mode is exponential in the number of nodes. We present a general
technique that takes advantage of specific structures in the topology of a
given network and allows us to reduce the complexity of computing the
half-duplex schedule that maximizes the cut-set bound (with i.i.d. input
distribution). In certain classes of network topologies, our approach yields
polynomial time algorithms. We use simulations to show running time
improvements over alternative methods and compare the performance of various
half-duplex scheduling approaches in different SNR regimes.
|
1305.2550 | HERMES: towards an integrated toolbox to characterize functional and
effective brain connectivity | q-bio.NC cs.CE cs.MS physics.bio-ph physics.data-an | The analysis of the interdependence between time series has become an
important field of research in the last years, mainly as a result of advances
in the characterization of dynamical systems from the signals they produce, the
introduction of concepts such as generalized and phase synchronization and the
application of information theory to time series analysis. In neurophysiology,
different analytical tools stemming from these concepts have added to the
'traditional' set of linear methods, which includes the cross-correlation and
the coherency function in the time and frequency domain, respectively, or more
elaborated tools such as Granger Causality. This increase in the number of
approaches to tackle the existence of functional (FC) or effective connectivity
(EC) between two (or among many) neural networks, along with the mathematical
complexity of the corresponding time series analysis tools, makes it desirable
to arrange them into a unified, easy-to-use software package. The goal is to
allow neuroscientists, neurophysiologists and researchers from related fields
to easily access and make use of these analysis methods from a single
integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a
toolbox for the Matlab environment (The MathWorks, Inc.), which is designed for
the analysis of functional and effective brain connectivity from
neurophysiological data such as multivariate EEG and/or MEG records. It
also includes visualization tools and statistical methods to address the
problem of multiple comparisons. We believe that this toolbox will be very
helpful to all the researchers working in the emerging field of brain
connectivity analysis.
|
1305.2561 | Strategic Planning for Network Data Analysis | cs.AI | As network traffic monitoring software for cybersecurity, malware detection,
and other critical tasks becomes increasingly automated, the rate of alerts and
supporting data gathered, as well as the complexity of the underlying model,
regularly exceed human processing capabilities. Many of these applications
require complex models and constituent rules in order to come up with decisions
that influence the operation of entire systems. In this paper, we motivate the
novel "strategic planning" problem -- one of gathering data from the world and
applying the underlying model of the domain in order to come up with decisions
that will monitor the system in an automated manner. We describe our
application of automated planning methods to this problem, including the technique that we
used to solve it in a manner that would scale to the demands of a real-time,
real world scenario. We then present a PDDL model of one such application
scenario related to network administration and monitoring, followed by a
description of a novel integrated system that was built to accept generated
plans and to continue the execution process. Finally, we present evaluations of
two different automated planners and their different capabilities with our
integrated system, both on a six-month window of network data, and using a
simulator.
|
1305.2581 | Accelerated Mini-Batch Stochastic Dual Coordinate Ascent | stat.ML cs.LG | Stochastic dual coordinate ascent (SDCA) is an effective technique for
solving regularized loss minimization problems in machine learning. This paper
considers an extension of SDCA under the mini-batch setting that is often used
in practice. Our main contribution is to introduce an accelerated mini-batch
version of SDCA and prove a fast convergence rate for this method. We discuss
an implementation of our method over a parallel computing system, and compare
the results to both the vanilla stochastic dual coordinate ascent and to the
accelerated deterministic gradient descent method of
\cite{nesterov2007gradient}.
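As background for the baseline that the accelerated mini-batch method builds on, here is a minimal sketch of plain SDCA applied to ridge regression; the tiny dataset, cyclic update order, and closed-form squared-loss update are our illustrative choices, not the paper's implementation:

```python
def sdca_ridge(X, y, lam, epochs=200):
    """Plain (single-example) SDCA for ridge regression:
        min_w  1/(2n) * sum_i (w.x_i - y_i)^2  +  lam/2 * ||w||^2.
    Each step maximizes the dual exactly in one coordinate alpha_i,
    maintaining the link w = (1/(lam*n)) * sum_j alpha_j x_j."""
    n, d = len(X), len(X[0])
    alpha, w = [0.0] * n, [0.0] * d
    for _ in range(epochs):
        for i in range(n):  # cyclic order keeps this sketch deterministic
            xi = X[i]
            margin = sum(w[k] * xi[k] for k in range(d))
            norm_sq = sum(v * v for v in xi)
            delta = (y[i] - margin - alpha[i]) / (1.0 + norm_sq / (lam * n))
            alpha[i] += delta
            for k in range(d):
                w[k] += delta * xi[k] / (lam * n)
    return w

# Tiny 1-d example: the exact ridge solution here is w* = 5/6.
w = sdca_ridge([[1.0], [2.0]], [1.0, 2.0], lam=0.5)
```

The mini-batch variant of the paper updates several dual coordinates per step; this sketch only shows the dual-coordinate mechanics being accelerated.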
|
1305.2592 | On the Performance Limits of Scalar Coding Over MISO Channels | cs.IT math.IT | The performance limits of scalar coding for multiple-input single-output
channels are revisited in this work. By employing randomized beamforming,
Narula et al. demonstrated that the loss of scalar coding is universally
bounded by ~ 2.51 dB (or 0.833 bits/symbol) for any number of antennas and
channel gains. In this work, by using randomized beamforming in conjunction
with space-time codes, it is shown that the bound can be tightened to ~ 1.1 dB
(or 0.39 bits/symbol).
|
1305.2623 | (k,m)-connectivity in Mobile Clustered Wireless Networks | cs.IT cs.NI math.CO math.IT math.PR | This paper has been withdrawn by the author due to a crucial error in the
calculation of Equation (28). We propose a novel concept of
$(k,m)$-connectivity in mobile clustered wireless networks, in which there are
$n$ mobile cluster members and $n^d$ static cluster heads, where $k,m,d$ are
all positive constants and $k\leq m$. $(k,m)$-connectivity signifies that in a
time period consisting of $m$ time slots, there exist at least $k$ time slots
for each cluster member and in any one of these $k$ time slots the cluster
member can directly communicate with at least one cluster head. We investigate
the critical transmission range of asymptotic $(k,m)$-connectivity when cluster
members move according to random walk or i.i.d. mobility model. Under random
walk mobility, we propose two general heterogeneous velocity models in which
cluster members may move with different velocities. Under both mobility models,
we also define weak and strong parameter conditions, which yield different
accuracies in the evaluation of the probability that the network is
asymptotically $(k,m)$-connected, denoted $P(\mathcal{C})$ below for
simplicity. For both mobility models, under the weak parameter condition, we
provide bounds on $P(\mathcal{C})$ and derive the critical transmission range
for $(k,m)$-connectivity. For random walk mobility with one kind of velocity
model and for i.i.d. mobility, under the strong parameter condition, we
present a precise asymptotic probability distribution of $P(\mathcal{C})$ in
terms of the transmission radius. Our
results offer fundamental insights and theoretical guidelines on design of
large-scale wireless networks.
|
1305.2642 | Adaptive Frequency Domain Detectors for SC-FDE in Multiuser DS-UWB
Systems with Structured Channel Estimation and Direct Adaptation | cs.IT math.IT | In this paper, we propose two adaptive detection schemes based on
single-carrier frequency domain equalization (SC-FDE) for multiuser
direct-sequence ultra-wideband (DS-UWB) systems, which are termed structured
channel estimation (SCE) and direct adaptation (DA). Both schemes use the
minimum mean square error (MMSE) linear detection strategy and employ a cyclic
prefix. In the SCE scheme, we perform the adaptive channel estimation in the
frequency domain and implement the despreading in the time domain after the
FDE. In this scheme, the MMSE detection requires the knowledge of the number of
users and the noise variance. For this purpose, we propose simple algorithms
for estimating these parameters. In the DA scheme, the interference suppression
task is fulfilled with only one adaptive filter in the frequency domain and a
new signal expression is adopted to simplify the design of such a filter.
Least-mean squares (LMS), recursive least squares (RLS) and conjugate gradient
(CG) adaptive algorithms are then developed for both schemes. A complexity
analysis compares the computational complexity of the proposed algorithms and
schemes, and simulation results for the downlink illustrate their performance.
|
1305.2648 | Boosting with the Logistic Loss is Consistent | cs.LG stat.ML | This manuscript provides optimization guarantees, generalization bounds, and
statistical consistency results for AdaBoost variants which replace the
exponential loss with the logistic and similar losses (specifically, twice
differentiable convex losses which are Lipschitz and tend to zero on one side).
The heart of the analysis is to show that, in lieu of explicit regularization
and constraints, the structure of the problem is fairly rigidly controlled by
the source distribution itself. The first control of this type is in the
separable case, where a distribution-dependent relaxed weak learning rate
induces speedy convergence with high probability over any sample. Otherwise, in
the nonseparable case, the convex surrogate risk itself exhibits
distribution-dependent levels of curvature, and consequently the algorithm's
output has small norm with high probability.
|
1305.2679 | The Multi-Sender Multicast Index Coding | cs.IT math.IT | We focus on the following instance of an index coding problem, where a set of
receivers are required to decode multiple messages, while each knows one of
the messages a priori. In particular, here we consider a generalized setting
where there are multiple senders, each sender knows only a subset of the
messages, and all senders are required to collectively transmit the index code. For a
single sender, Ong and Ho (ICC, 2012) have established the optimal index
codelength, where the lower bound was obtained using a pruning algorithm. In
this paper, the pruning algorithm is simplified, and used in conjunction with
an appending technique to give a lower bound to the multi-sender case. An upper
bound is derived based on network coding. While the two bounds do not match in
general, for the special case where no two senders know any message bit in
common, the bounds match, giving the optimal index codelength. The results are
derived based on graph theory, and are expressed in terms of strongly connected
components.
|
1305.2680 | A Study of the Effect of Emphaticness, Language, and Dialect on
Voice Onset Time (VOT) in Modern Standard Arabic (MSA) | cs.CL cs.SD | A speech signal contains many different features, including Voice Onset Time
(VOT), which is a very important property of stop sounds in many languages.
VOT applies only to the subset of consonant sounds known as stop phonemes,
which exist in the Arabic language and, in fact, in all languages. The
pronunciation of these sounds is difficult and distinctive, especially for
less-educated Arabs and non-native Arabic speakers. VOT can be utilized by the
human auditory system to distinguish between voiced and unvoiced stops such as
/p/ and /b/ in English. This research focuses on computing and analyzing the
VOT of Modern Standard Arabic (MSA), within the Arabic language, for all pairs
of non-emphatic (namely, /d/ and /t/) and emphatic (namely, /d?/ and /t?/)
stops in carrier words. The research uses a database we built ourselves, with
carrier words of syllable structure CV-CV-CV. One of the main outcomes,
found consistently, is that the VOT of the emphatic sounds (/d?/, /t?/) is
less than 50% of that of their non-emphatic counterparts (/d/, /t/). VOT can
also be used to classify or detect a dialect within a language.
|
1305.2686 | Using Exclusive Web Crawlers to Store Better Results in Search Engines'
Database | cs.IR | Crawler-based search engines are the most widely used search engines among
web and Internet users; their operation involves web crawling, storing in a
database, ranking, indexing, and displaying results to the user. However,
because web sites change at an increasing rate, search engines incur high time
and transfer costs while crawling: they must check whether each page already
exists in the database, update the database, and repeat this check in every
crawling operation. The "exclusive web crawler" proposed here crawls a site's
features, links, media, and other elements and stores the crawling results in
a dedicated table in its database on the web. Search engines then store each
site's table in their own databases and apply their ranking to it. Thus the
accuracy (and freshness) of the data in every table is ensured, and no 404
results appear in search results, since this crawler crawls data entered by
the webmaster and the database stores exactly what the webmaster wants to
display.
|
1305.2687 | Automatic Parameter Adaptation for Multi-object Tracking | cs.CV | Object tracking quality usually depends on video context (e.g. object
occlusion level, object density). In order to decrease this dependency, this
paper presents a learning approach to adapt the tracker parameters to the
context variations. In an offline phase, satisfactory tracking parameters are
learned for video context clusters. In the online control phase, once a context
change is detected, the tracking parameters are tuned using the learned values.
The experimental results show that the proposed approach outperforms recent
state-of-the-art trackers. This paper makes two contributions: (1) a
classification method of video sequences to learn offline tracking parameters,
(2) a new method to tune online tracking parameters using tracking context.
|
1305.2713 | Early Detection of Alzheimer's - A Crucial Requirement | cs.CV physics.med-ph | Alzheimer's, an old-age disease affecting people over 65 years, causes
problems with memory, thinking, and behavior. The disease progresses very
slowly, and its identification in early stages is very difficult. The symptoms
of Alzheimer's appear slowly and gradually worsen. In its early stages, not
only the patients themselves but also their loved ones are generally unable to
accept that the patient is suffering from the disease. In this paper, we have
proposed a new algorithm to detect patients of Alzheimer's at early stages by
comparing the Magnetic Resonance Images (MRI) of the patients with normal
persons of their age. The progress of the disease can also be monitored by
periodic comparison of the previous and current MRI.
|
1305.2714 | Sharp MSE Bounds for Proximal Denoising | cs.IT math.IT math.OC | Denoising is the problem of estimating a signal $x_0$ from its noisy
observations $y=x_0+z$. In this paper, we focus on the "structured denoising
problem", where the signal $x_0$ possesses a certain structure and $z$ has
independent normally distributed entries with mean zero and variance
$\sigma^2$. We employ a structure-inducing convex function $f(\cdot)$ and solve
$\min_x\{\frac{1}{2}\|y-x\|_2^2+\sigma\lambda f(x)\}$ to estimate $x_0$, for
some $\lambda>0$. Common choices for $f(\cdot)$ include the $\ell_1$ norm for
sparse vectors, the $\ell_1-\ell_2$ norm for block-sparse signals and the
nuclear norm for low-rank matrices. The metric we use to evaluate the
performance of an estimate $x^*$ is the normalized mean-squared-error
$\text{NMSE}(\sigma)=\frac{\mathbb{E}\|x^*-x_0\|_2^2}{\sigma^2}$. We show that
NMSE is maximized as $\sigma\rightarrow 0$ and we find the \emph{exact} worst
case NMSE, which has a simple geometric interpretation: the
mean-squared-distance of a standard normal vector to the $\lambda$-scaled
subdifferential $\lambda\partial f(x_0)$. When $\lambda$ is optimally tuned to
minimize the worst-case NMSE, our results can be related to the constrained
denoising problem $\min_{f(x)\leq f(x_0)}\{\|y-x\|_2\}$. The paper also
connects these results to the generalized LASSO problem, in which one solves
$\min_{f(x)\leq f(x_0)}\{\|y-Ax\|_2\}$ to estimate $x_0$ from noisy linear
observations $y=Ax_0+z$. We show that certain properties of the LASSO problem
are closely related to the denoising problem. In particular, we characterize
the normalized LASSO cost and show that it exhibits a "phase transition" as a
function of number of observations. Our results are significant in two ways.
First, we find a simple formula for the performance of a general convex
estimator. Secondly, we establish a connection between the denoising and linear
inverse problems.
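For a concrete instance (our illustration, not taken from the paper): with $f(\cdot)=\ell_1$, the estimator $\min_x\{\frac{1}{2}\|y-x\|_2^2+\sigma\lambda\|x\|_1\}$ separates over coordinates and has the well-known closed-form solution, elementwise soft-thresholding at level $\sigma\lambda$:

```python
def soft_threshold(y, tau):
    """Elementwise solution of min_x 0.5*(y - x)^2 + tau*|x|:
    shrink each entry of y toward zero by tau, clipping at zero."""
    return [(1.0 if v > 0 else -1.0) * max(abs(v) - tau, 0.0) for v in y]

# Noisy observations of a sparse signal x0 = [3, 0, 0, -2], with
# sigma * lambda = 0.5: small entries are zeroed, large ones shrunk.
x_hat = soft_threshold([3.1, 0.2, -0.1, -2.3], 0.5)
```

The worst-case NMSE analysis above then reduces, for this choice of $f$, to the distance of a standard normal vector to the $\lambda$-scaled subdifferential of the $\ell_1$ norm at $x_0$.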
|
1305.2724 | Generalized Neutrosophic Soft Set | cs.AI | In this paper we present a new concept called generalized neutrosophic soft
set. This concept incorporates the beneficial properties of both generalized
neutrosophic set introduced by A.A. Salama [7] and soft set techniques proposed
by Molodtsov [4]. We also study some properties of this concept. Some
definitions and operations have been introduced on generalized neutrosophic
soft set. Finally, we present an application of the generalized neutrosophic soft
set in decision making problem.
|
1305.2732 | An efficient algorithm for learning with semi-bandit feedback | cs.LG | We consider the problem of online combinatorial optimization under
semi-bandit feedback. The goal of the learner is to sequentially select its
actions from a combinatorial decision set so as to minimize its cumulative
loss. We propose a learning algorithm for this problem based on combining the
Follow-the-Perturbed-Leader (FPL) prediction method with a novel loss
estimation procedure called Geometric Resampling (GR). Contrary to previous
solutions, the resulting algorithm can be efficiently implemented for any
decision set where efficient offline combinatorial optimization is possible at
all. Assuming that the elements of the decision set can be described with
d-dimensional binary vectors with at most m non-zero entries, we show that the
expected regret of our algorithm after T rounds is O(m sqrt(dT log d)). As a
side result, we also improve the best known regret bounds for FPL in the full
information setting to O(m^(3/2) sqrt(T log d)), gaining a factor of sqrt(d/m)
over previous bounds for this algorithm.
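The key trick in Geometric Resampling can be sketched in a few lines (an illustrative toy with a hypothetical `draw_action` sampler, not the paper's full algorithm): the inverse probability $1/p_i$ needed for an importance-weighted loss estimate is itself estimated by counting how many independent redraws of the action it takes until component $i$ appears, since that count is geometric with mean $1/p_i$:

```python
import random

def geometric_resample(draw_action, i, max_tries=10000):
    """Count independent redraws of the action until one contains
    component i.  The count K is geometric with E[K] = 1/p_i, so
    loss_i * K is an (almost) unbiased importance-weighted loss
    estimate; max_tries truncates the loop for efficiency."""
    for k in range(1, max_tries + 1):
        if i in draw_action():
            return k
    return max_tries

random.seed(0)
# Hypothetical sampler: each of 4 components included independently
# with probability 0.25 (a stand-in for the FPL action distribution).
draw = lambda: {c for c in range(4) if random.random() < 0.25}
# Averaging K over many runs should approach 1/0.25 = 4.
est = sum(geometric_resample(draw, 0) for _ in range(20000)) / 20000
```

The point of the trick is that only sampling access to the action distribution is needed; the probabilities themselves are never computed.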
|
1305.2741 | Can Human-Like Bots Control Collective Mood: Agent-Based Simulations of
Online Chats | physics.soc-ph cs.SI | Using an agent-based modeling approach, in this paper we study self-organized
dynamics of interacting agents in the presence of chat Bots. Different Bots
with tunable ``human-like'' attributes, which exchange emotional messages with
agents, are considered, and collective emotional behavior of agents is
quantitatively analysed. In particular, using detrended fractal analysis we
determine persistent fluctuations and temporal correlations in time series of
agents' activity and statistics of avalanches carrying emotional messages of
agents when Bots favoring positive/negative affects are active. We determine
the impact of Bots and identify parameters that can modulate it. Our analysis
suggests that, by these measures, the emotional Bots induce collective emotion
among interacting agents by suitably altering the fractal characteristics of
the underlying stochastic process. Positive-emotion Bots are slightly more
effective than the negative ones. Moreover, the Bots which are periodically
alternating between positive and negative emotion, can enhance fluctuations in
the system, leading to avalanches of agents' messages that are reminiscent
of self-organized critical states.
|
1305.2752 | Hybrid Fuzzy Logic and PID Controller Based pH Neutralization
Pilot Plant | cs.SY cs.AI | The use of control theory within the process control industry has changed
rapidly due to the increasing complexity of instrumentation, real-time
requirements, minimization of operating costs, and the highly nonlinear
characteristics of chemical processes. Previously developed process control
technologies, which are mostly based on a single controller, are not efficient
in terms of signal transmission delays, processing power for computational
needs, and signal-to-noise ratio. A hybrid controller with efficient system
modelling is essential to cope with the current challenges of process control
in terms of control performance. This paper presents an optimized mathematical
model and an advanced hybrid controller (fuzzy logic and PID) design, along
with practical implementation and validation on a pH neutralization pilot
plant. This procedure is particularly important for control design and
automation of physico-chemical systems in the process control industry.
|
1305.2755 | Clustering Web Search Results For Effective Arabic Language Browsing | cs.IR | The process of browsing Search Results is one of the major problems with
traditional Web search engines for English, European, and any other languages
generally, and for the Arabic language particularly. This process is
time-consuming, and the browsing style seems unattractive. Organizing web
search results into clusters facilitates users' quick browsing through search
results. Traditional clustering techniques (data-centric clustering algorithms)
are inadequate since they don't generate clusters with highly readable names or
cluster labels. To solve this problem, Description-centric algorithms such as
Suffix Tree Clustering (STC) algorithm have been introduced and used
successfully and extensively with different adapted versions for English,
European, and Chinese languages. However, to the best of our knowledge, as of
the writing of this paper the STC algorithm has never been applied to
clustering Arabic web-snippet search results. In this paper, we first study
how STC can be applied to the Arabic language. We then illustrate by example
that it is impossible to apply STC after Arabic snippet pre-processing (stem
or root extraction), because the merging process yields many redundant
clusters. Second, to overcome this problem, we propose to integrate STC into
a new scheme that takes the properties of the Arabic language into account,
in order to make the web more and more adapted to Arabic users. The proposed
approach automatically clusters the web search results into high-quality
clusters with highly significant labels. The obtained clusters are not only
coherent but can also convey their contents to users concisely and
accurately. Therefore, Arabic users can decide at a glance whether the
contents of a cluster are of interest.
|
1305.2758 | Using Page Size for Controlling Duplicate Query Results in Semantic Web | cs.DB | The Semantic Web is the web of the future. The Resource Description Framework
(RDF) is a language for representing resources on the World Wide Web. When
these resources are queried, the problem of duplicate query results occurs.
Existing techniques use hash-index comparison to remove duplicate query
results. The
major drawback of using the hash index to remove duplicate query results is
that, if there is a slight change in formatting or word order, the hash index
changes and the query results are no longer considered duplicates even though
they have the same contents. We present an algorithm for the detection and
elimination of duplicate query results from semantic web using hash index and
page size comparisons. Experimental results showed that the proposed technique
removed duplicate query results from semantic web efficiently, solved the
problems of using hash index for duplicate handling and could be embedded in
existing SQL-Based query system for semantic web. Research could be carried out
for certain flexibilities in existing SQL-Based query system of semantic web to
accommodate other duplicate detection techniques as well.
|
1305.2770 | Personal Information Privacy Settings of Online Social Networks and
their Suitability for Mobile Internet Devices | cs.SI cs.CY | Protecting personal information privacy has become a controversial issue
among online social network providers and users. Most social network providers
have developed several techniques to decrease threats and risks to the users
privacy. These risks include the misuse of personal information which may lead
to illegal acts such as identity theft. This study aims to measure the
awareness of users about protecting their personal information privacy, as
well as the suitability of the privacy systems which they use to modify
privacy settings. Survey results show a high percentage of smartphone use for
web services, but the current privacy settings for online social networks need
to be improved to support different types of mobile phone screens. Because most
users use their mobile phones for Internet services, privacy settings that are
compatible with mobile phones need to be developed. The method of selecting
privacy settings should also be simplified to provide users with a clear
picture of the data that will be shared with others. Results of this study can
be used to develop a new privacy system which will help users control their
personal information easily from different devices, including mobile Internet
devices and computers.
|
1305.2788 | HRF estimation improves sensitivity of fMRI encoding and decoding models | cs.LG stat.AP | Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of the blood oxygen level-dependent (BOLD) signal. The general linear
model (GLM) allows one to estimate the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects. This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings.
|
1305.2789 | Phaseless Signal Recovery in Infinite Dimensional Spaces using
Structured Modulations | cs.IT math.IT | This paper considers the recovery of continuous signals in infinite
dimensional spaces from the magnitude of their frequency samples. It proposes a
sampling scheme which involves a combination of oversampling and modulations
with complex exponentials. Sufficient conditions are given such that almost
every signal with compact support can be reconstructed up to a unimodular
constant using only its magnitude samples in the frequency domain. Finally it
is shown that an average sampling rate of four times the Nyquist rate is enough
to reconstruct almost every time-limited signal.
|
1305.2801 | Quantization Noise Shaping for Information Maximizing ADCs | cs.IT math.IT | ADCs sit at the interface of the analog and digital worlds and fundamentally
determine what information is available in the digital domain for processing.
This paper shows that a configurable ADC can be designed for signals with
non-constant information as a function of frequency, such that within a fixed power
budget the ADC maximizes the information in the converted signal by frequency
shaping the quantization noise. Quantization noise shaping can be realized via
loop filter design for a single channel delta sigma ADC and extended to common
time and frequency interleaved multi channel structures. Results are presented
for example wireline and wireless style channels.
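As a rough illustration of the noise-shaping principle (a generic first-order delta-sigma loop, not the paper's configurable multi-channel design): the quantizer sits inside a feedback loop whose integrator accumulates the quantization error, so the error spectrum is pushed toward high frequencies while the bitstream's running average tracks the input:

```python
def delta_sigma(x):
    """First-order delta-sigma modulation of samples x in [-1, 1]:
    a 1-bit quantizer inside a feedback loop.  The integrator state
    accumulates the quantization error, so the error spectrum is
    high-pass ("noise-shaped") and the bitstream average tracks x."""
    state, prev, out = 0.0, 0.0, []
    for sample in x:
        state += sample - prev               # integrate the error x - y
        prev = 1.0 if state >= 0 else -1.0   # 1-bit quantizer
        out.append(prev)
    return out

# A constant input of 0.5 yields a +/-1 bitstream whose mean is 0.5;
# the quantization error ends up concentrated at high frequencies.
bits = delta_sigma([0.5] * 1000)
```

The loop filter design mentioned in the abstract generalizes this fixed first-order integrator to shape the noise away from whichever bands carry the most information.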
|
1305.2827 | Human Mood Detection For Human Computer Interaction | cs.CV | In this paper we propose a simple approach for facial expression recognition,
using an SVM for expression classification. The main problem is subdivided
into three modules. The first is face detection, in which we use a skin filter
and face segmentation; this method is effective enough for applications where
fast execution is required. The second, facial feature extraction, is an
essential part of expression recognition, on which we place particular
stress; in this module we use edge projection analysis. Finally, the extracted
feature vector is passed to an SVM classifier for expression recognition. We
consider six basic expressions (anger, fear, disgust, joy, sadness, and
surprise).
|
1305.2828 | Image Optimization and Prediction | cs.CV | Image processing, optimization, and prediction of an image play a key role in
computer science. Image processing provides a way to analyze and identify an
image. Many areas, such as medical image processing, satellite images, and
natural and artificial images, require extensive analysis and research on
optimization. In image optimization and prediction we combine the features of
query optimization, image processing, and prediction. Image optimization is
used in pattern analysis and object recognition, in medical image processing
to predict the type of disease, and in satellite images for weather
forecasting and predicting the availability of water or minerals. Image
processing, optimization, and analysis is a wide-open area for research. Much
research has been conducted in the area of image analysis, and many techniques
are available, but a single technique has not yet been identified for image
analysis and prediction. Our research is focused on identifying a global
technique for image analysis and prediction.
|
1305.2830 | Performance Enhancement of Distributed Quasi Steady-State Genetic
Algorithm | cs.NE | This paper proposes a new scheme for performance enhancement of distributed
genetic algorithm (DGA). Initial population is divided in two classes i.e.
female and male. Simple distance based clustering is used for cluster formation
around females. For reclustering self-adaptive K-means is used, which produces
well distributed and well separated clusters. The self-adaptive K-means used
for reclustering automatically locates initial position of centroids and number
of clusters. Four plans of co-evolution are applied on these clusters
independently. Clusters evolve separately. Merging of clusters takes place
depending on their performance. For experimentation, unimodal and multimodal
test functions have been used. Test results show that the new population
distribution scheme gives better performance.
|
1305.2831 | Test Model for Text Categorization and Text Summarization | cs.IR | Text Categorization is the task of automatically sorting a set of documents
into categories from a predefined set, and Text Summarization is a brief and
accurate representation of input text such that the output covers the most
important concepts of the source in a condensed manner. Document Summarization
is an emerging technique for understanding the main purpose of any kind of
document. This paper presents a model that uses text categorization and text
summarization for searching a document based on user query.
|
1305.2846 | Opportunities & Challenges In Automatic Speech Recognition | cs.CL cs.SD | Automatic speech recognition enables a wide range of current and emerging
applications such as automatic transcription, multimedia content analysis, and
natural human-computer interfaces. This paper provides a glimpse of the
opportunities and challenges that parallelism provides for automatic speech
recognition and related application research from the point of view of speech
researchers. The increasing parallelism in computing platforms opens three
major possibilities for speech recognition systems: improving recognition
accuracy in non-ideal, everyday noisy environments; increasing recognition
throughput in batch processing of speech data; and reducing recognition latency
in realtime usage scenarios. This paper describes technical challenges,
approaches taken, and possible directions for future research to guide the
design of efficient parallel software and hardware infrastructures.
|
1305.2847 | An Overview of Hindi Speech Recognition | cs.CL cs.SD | In this age of information technology, information access in a convenient
manner has gained importance. Since speech is a primary mode of communication
among human beings, it is natural for people to expect to be able to carry out
a spoken dialogue with a computer. A speech recognition system permits ordinary
people to speak to the computer to retrieve information. It is desirable to
have a human-computer dialogue in a local language. Hindi, being the most
widely spoken language in India, is the natural primary human language
candidate for human-machine interaction. There are five pairs of vowels in the
Hindi language; one member of each pair is longer than the other. This paper
presents an overview of a speech recognition system, including how speech is
produced and the properties and characteristics of Hindi phonemes.
|
1305.2876 | Multi-q Pattern Classification of Polarization Curves | cs.CE cs.CV | Several experimental measurements are expressed in the form of
one-dimensional profiles, for which there is a scarcity of methodologies able
to classify the pertinence of a given result to a specific group. The
polarization curves that evaluate the corrosion kinetics of electrodes in
corrosive media are an application where the behavior is chiefly analyzed from
profiles. Polarization curves are indeed a classic method to determine the
global kinetics of metallic electrodes, but the strong nonlinearity from
different metals and alloys can overlap and the discrimination becomes a
challenging problem. Moreover, even finding a typical curve from replicated
tests requires subjective judgement. In this paper we used the so-called
multi-q approach based on the Tsallis statistics in a classification engine to
separate multiple polarization curve profiles of two stainless steels. We
collected 48 experimental polarization curves in aqueous chloride medium of two
stainless steel types, with different resistance against localized corrosion.
Multi-q pattern analysis was then carried out on a wide potential range, from
cathodic up to anodic regions. An excellent classification rate was obtained,
at a success rate of 90%, 80%, and 83% for low (cathodic), high (anodic), and
both potential ranges, respectively, using only 2% of the original profile
data. These results show the potential of the proposed approach towards
efficient, robust, systematic and automatic classification of highly non-linear
profile curves.
|
1305.2889 | Finding a needle in an exponential haystack: Discrete RRT for
exploration of implicit roadmaps in multi-robot motion planning | cs.RO | We present a sampling-based framework for multi-robot motion planning which
combines an implicit representation of a roadmap with a novel approach for
pathfinding in geometrically embedded graphs tailored for our setting. Our
pathfinding algorithm, discrete-RRT (dRRT), is an adaptation of the celebrated
RRT algorithm for the discrete case of a graph, and it enables a rapid
exploration of the high-dimensional configuration space by carefully walking
through an implicit representation of a tensor product of roadmaps for the
individual robots. We demonstrate our approach experimentally on scenarios of
up to 60 degrees of freedom where our algorithm is faster by a factor of at
least ten when compared to existing algorithms that we are aware of.
|
1305.2938 | Temporal networks: slowing down diffusion by long lasting interactions | physics.soc-ph cs.SI | Interactions among units in complex systems occur in a specific sequential
order thus affecting the flow of information, the propagation of diseases, and
general dynamical processes. We investigate the Laplacian spectrum of temporal
networks and compare it with that of the corresponding aggregate network.
First, we show that the spectrum of the ensemble average of a temporal network
has identical eigenmodes but smaller eigenvalues than the aggregate networks.
In large networks without edge condensation, the expected temporal dynamics is
a time-rescaled version of the aggregate dynamics. Even for single sequential
realizations, diffusive dynamics is slower in temporal networks. These
discrepancies are due to the noncommutability of interactions. We illustrate
our analytical findings using a simple temporal motif, larger network models
and real temporal networks.
|
1305.2949 | Unsupervised ensemble of experts (EoE) framework for automatic
binarization of document images | cs.CV | In recent years, a large number of binarization methods have been developed,
with varying performance generalization and strength against different
benchmarks. In this work, to leverage these methods, an ensemble of experts
(EoE) framework is introduced, to efficiently combine the outputs of various
methods. The proposed framework offers a new selection process of the
binarization methods, which are actually the experts in the ensemble, by
introducing three concepts: confidentness, endorsement and schools of experts.
The framework, which is highly objective, is built based on two general
principles: (i) consolidation of saturated opinions and (ii) identification of
schools of experts. After building the endorsement graph of the ensemble for an
input document image based on the confidentness of the experts, the saturated
opinions are consolidated, and then the schools of experts are identified by
thresholding the consolidated endorsement graph. A variation of the framework,
in which no selection is made, is also introduced that combines the outputs of
all experts using endorsement-dependent weights. The EoE framework is evaluated
on the set of participating methods in the H-DIBCO'12 contest and also on an
ensemble generated from various instances of grid-based Sauvola method with
promising performance.
|
1305.2959 | Automatic Speech Recognition Using Template Model for Man-Machine
Interface | cs.SD cs.CL | Speech is a natural form of communication for human beings, and computers
with the ability to understand speech and speak with a human voice are expected
to contribute to the development of more natural man-machine interfaces.
Computers with this kind of ability are gradually becoming a reality, through
the evolution of speech recognition technologies. Speech is becoming an
important mode of interaction with computers. In this paper, feature extraction
is implemented using the well-known Mel-frequency cepstral coefficients (MFCC).
Pattern matching is done using the dynamic time warping (DTW) algorithm.
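The DTW pattern-matching step can be sketched as below. This is a minimal scalar version, assuming the MFCC front end has already reduced each utterance to a sequence of feature values; a real system would compare MFCC frame vectors with a vector distance.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences
    `a` and `b` (plain numbers here for brevity; in a recognizer
    these would be per-frame MFCC vectors)."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of diagonal match, insertion, and deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

Because the warping path may repeat frames, a template and a slower rendition of the same word (e.g. `[1, 2, 3]` vs `[1, 2, 2, 3]`) still match at zero cost, which is exactly why DTW suits template-based recognition.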
|
1305.2974 | Blind Adaptive Reduced-Rank Detectors for DS-UWB Systems Based on Joint
Iterative Optimization and the Constrained Constant Modulus Criterion | cs.IT math.IT | A novel linear blind adaptive receiver based on joint iterative optimization
(JIO) and the constrained constant modulus (CCM) design criterion is proposed
for interference suppression in direct-sequence ultra-wideband (DS-UWB)
systems. The proposed blind receiver consists of two parts, a transformation
matrix that performs dimensionality reduction and a reduced-rank filter that
produces the output. In the proposed receiver, the transformation matrix and
the reduced-rank filter are updated jointly and iteratively to minimize the
constant modulus (CM) cost function subject to a constraint. Adaptive
implementations for the JIO receiver are developed by using the normalized
stochastic gradient (NSG) and recursive least-squares (RLS) algorithms. In
order to obtain a low-complexity scheme, the columns of the transformation
matrix with the RLS algorithm are updated individually. Blind channel
estimation algorithms for both versions (NSG and RLS) are implemented. Assuming
perfect timing, the JIO receiver only requires the spreading code of the
desired user and the received data. Simulation results show that both versions
of the proposed JIO receivers have excellent performance in suppressing the
inter-symbol interference (ISI) and multiple access interference (MAI) with a
low complexity.
|
1305.2981 | Metrics for Computing Trust in a Multi-Agent Environment | cs.CY cs.SI | One of the risks involved in a multi-agent community is the identification
of trustworthy agent partners for transactions. In this paper we aim to describe
a trust model for measuring trust in the interacting agents. The trust metric
model works on the basis of the parameters that we have identified. The model
primarily analyses the trust value on the basis of the agent's reputation, as
provided by the agent itself, and the agent's aggregate rating as provided by
the witness agents. The final computation of the trust value is given by a
weighted average of these two components. While computing the aggregate rating,
a weight-based method has been adopted to discount the contribution of possibly
unfair ratings by the witness agents.
|
1305.2982 | Estimating or Propagating Gradients Through Stochastic Neurons | cs.LG | Stochastic neurons can be useful for a number of reasons in deep learning
models, but in many cases they pose a challenging problem: how to estimate the
gradient of a loss function with respect to the input of such stochastic
neurons, i.e., can we "back-propagate" through these stochastic neurons? We
examine this question, existing approaches, and present two novel families of
solutions, applicable in different settings. In particular, it is demonstrated
that a simple biologically plausible formula gives rise to an unbiased (but
noisy) estimator of the gradient with respect to a binary stochastic neuron
firing probability. Unlike other estimators which view the noise as a small
perturbation in order to estimate gradients by finite differences, this
estimator is unbiased even without assuming that the stochastic perturbation is
small. This estimator is also interesting because it can be applied in very
general settings which do not allow gradient back-propagation, including the
estimation of the gradient with respect to future rewards, as required in
reinforcement learning setups. We also propose an approach to approximating
this unbiased but high-variance estimator by learning to predict it using a
biased estimator. The second approach we propose assumes that an estimator of
the gradient can be back-propagated and it provides an unbiased estimator of
the gradient, but it can only work with non-linearities that, unlike the hard
threshold but like the rectifier, are not flat over their entire range. This is
similar to traditional sigmoidal units but has the advantage that for many
inputs, a hard decision (e.g., a 0 output) can be produced, which would be
convenient for conditional computation and achieving sparse representations and
sparse gradients.
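To illustrate the kind of estimator discussed above, the sketch below uses the standard score-function identity for a Bernoulli neuron (an illustrative stand-in, not necessarily the paper's exact formula) and checks its unbiasedness numerically: the estimator remains unbiased no matter how large the "perturbation" `h - p` is.

```python
import random

def loss(h):
    # hypothetical toy downstream loss of the binary neuron's output
    return (h - 0.8) ** 2

def grad_estimate(p):
    """One-sample estimator of d E[loss(h)] / dp for h ~ Bernoulli(p),
    via the score-function identity loss(h) * (h - p) / (p * (1 - p)).
    Unbiased without any small-perturbation assumption."""
    h = 1.0 if random.random() < p else 0.0
    return loss(h) * (h - p) / (p * (1 - p))

random.seed(0)
p = 0.3
# analytic gradient: d/dp [p*loss(1) + (1-p)*loss(0)] = loss(1) - loss(0)
exact = loss(1.0) - loss(0.0)
estimate = sum(grad_estimate(p) for _ in range(200000)) / 200000
```

The Monte Carlo average converges to the analytic gradient, but with visible variance, matching the abstract's characterization of the estimator as unbiased but noisy.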
|
1305.2985 | Opportunistic Interference Management for Multicarrier systems | cs.IT math.IT | We study opportunistic interference management when there is bursty
interference in parallel 2-user linear deterministic interference channels. A
degraded message set communication problem is formulated to exploit the
burstiness of interference in M subcarriers allocated to each user. We focus on
symmetric rate requirements based on the number of interfered subcarriers
rather than the exact set of interfered subcarriers. Inner bounds are obtained
using erasure coding, signal-scale alignment and Han-Kobayashi coding strategy.
Tight outer bounds for a variety of regimes are obtained using the El
Gamal-Costa injective interference channel bounds and a sliding window subset
entropy inequality. The result demonstrates an application of techniques from
multilevel diversity coding to interference channels. We also conjecture outer
bounds indicating the sub-optimality of erasure coding across subcarriers in
certain regimes.
|
1305.2999 | Dynamic Spectrum Refarming of GSM Spectrum for LTE Small Cells | cs.IT cs.NI math.IT | In this paper we propose a novel solution called dynamic spectrum refarming
(DSR) for deploying LTE small cells using the same spectrum as existing GSM
networks. The basic idea of DSR is that LTE small cells are deployed in the GSM
spectrum but suppress transmission of all signals including the reference
signals in some specific physical resource blocks corresponding to a portion of
the GSM carriers to ensure full GSM coverage. Our study shows that the proposed
solution can provide LTE mobile terminals with high speed data services when
they are in the coverage of the LTE small cells while minimally affecting the
service provided to GSM terminals located within the LTE small cell coverage
area. Thus, the proposal allows the normal operation of the existing GSM
networks even with LTE small cells deployed in that spectrum. Though the focus
of this paper is about GSM spectrum refarming, an analogous approach can be
applied to reuse code division multiple access (CDMA) spectrum for LTE small
cells.
|
1305.3002 | Applications of Compressed Sensing in Communications Networks | cs.NI cs.IT math.IT | This paper presents a tutorial for CS applications in communications
networks. Shannon's sampling theorem states that to recover a signal, the
sampling rate must be at least the Nyquist rate. Compressed sensing (CS) is
based on the surprising fact that to recover a signal that is sparse in certain
representations, one can sample at a rate far below the Nyquist rate. Since
its inception in 2006, CS has attracted much interest in the research community and
found wide-ranging applications from astronomy, biology, communications, image
and video processing, medicine, to radar. CS also found successful applications
in communications networks. CS was applied in the detection and estimation of
wireless signals, source coding, multi-access channels, data collection in
sensor networks, and network monitoring, etc. In many cases, CS was shown to
bring performance gains on the order of 10X. We believe this is just the
beginning of CS applications in communications networks, and the future will
see even more fruitful applications of CS in our field.
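As a concrete illustration of sub-Nyquist recovery, the sketch below uses orthogonal matching pursuit, one standard CS reconstruction algorithm (chosen here for brevity; the tutorial surveys many recovery methods). The dimensions and the Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from the underdetermined measurements y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 10, 20, 2                      # 10 samples for a length-20 signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[3], x[11] = 1.5, -2.0                  # 2-sparse signal
x_hat = omp(A, A @ x, k)                 # typically recovers x exactly
```

With far fewer measurements than the signal length, the greedy search typically identifies the true support and reconstructs the signal, which is the phenomenon CS applications in networks exploit.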
|
1305.3006 | Fast Linearized Alternating Direction Minimization Algorithm with
Adaptive Parameter Selection for Multiplicative Noise Removal | cs.CV math.NA | Owing to the edge preserving ability and low computational cost of the total
variation (TV), variational models with the TV regularization have been widely
investigated in the field of multiplicative noise removal. The key points of
the successful application of these models lie in: the optimal selection of the
regularization parameter which balances the data-fidelity term with the TV
regularizer; the efficient algorithm to compute the solution. In this paper, we
propose two fast algorithms based on the linearized technique, which are able
to estimate the regularization parameter and recover the image simultaneously.
In the iteration step of the proposed algorithms, the regularization parameter
is adjusted by a special discrepancy function defined for multiplicative noise.
The convergence properties of the proposed algorithms are proved under certain
conditions, and numerical experiments demonstrate that the proposed algorithms
overall outperform some state-of-the-art methods in the PSNR values and
computational time.
|
1305.3011 | Real Time Bid Optimization with Smooth Budget Delivery in Online
Advertising | cs.GT cs.LG | Today, billions of display ad impressions are purchased on a daily basis
through a public auction hosted by real time bidding (RTB) exchanges. A
decision has to be made for advertisers to submit a bid for each selected RTB
ad request in milliseconds. Restricted by the budget, the goal is to buy a set
of ad impressions to reach as many targeted users as possible. A desired action
(conversion), advertiser specific, includes purchasing a product, filling out a
form, signing up for emails, etc. In addition, advertisers typically prefer to
spend their budget smoothly over the time in order to reach a wider range of
audience accessible throughout a day and have a sustainable impact. However,
since the conversions occur rarely and the occurrence feedback is normally
delayed, it is very challenging to achieve both budget and performance goals at
the same time. In this paper, we present an online approach to the smooth
budget delivery while optimizing for the conversion performance. Our algorithm
tries to select high quality impressions and adjust the bid price based on the
prior performance distribution in an adaptive manner by distributing the budget
optimally across time. Our experimental results from real advertising campaigns
demonstrate the effectiveness of our proposed approach.
|
1305.3013 | Novel variational model for inpainting in the wavelet domain | cs.CV | Wavelet domain inpainting refers to the process of recovering the missing
coefficients during the image compression or transmission stage. Recently, an
efficient algorithm framework which is called Bregmanized operator splitting
(BOS) was proposed for solving the classical variational model of wavelet
inpainting. However, it is still time-consuming to some extent due to the inner
iteration. In this paper, a novel variational model is established to formulate
this reconstruction problem from the view of image decomposition. Then an
efficient iterative algorithm based on the split-Bregman method is adopted to
calculate an optimal solution, and it is also proved to be convergent. Compared
with the BOS algorithm, the proposed algorithm avoids the inner iteration and
hence is simpler. Numerical experiments demonstrate that the proposed
method is very efficient and outperforms the current state-of-the-art methods,
especially in the computational time.
|
1305.3014 | Scalable Audience Reach Estimation in Real-time Online Advertising | cs.LG cs.DB | Online advertising has been introduced as one of the most efficient methods
of advertising throughout the recent years. Yet, advertisers are concerned
about the efficiency of their online advertising campaigns and consequently,
would like to restrict their ad impressions to certain websites and/or certain
groups of audience. These restrictions, known as targeting criteria, limit the
reachability for better performance. This trade-off between reachability and
performance illustrates a need for a forecasting system that can quickly
predict/estimate (with good accuracy) this trade-off. Designing such a system
is challenging due to (a) the huge amount of data to process, and, (b) the need
for fast and accurate estimates. In this paper, we propose a distributed fault
tolerant system that can generate such estimates fast with good accuracy. The
main idea is to keep a small representative sample in memory across multiple
machines and formulate the forecasting problem as queries against the sample.
The key challenge is to find the best strata across the past data, perform
multivariate stratified sampling while ensuring fuzzy fall-back to cover the
small minorities. Our results show a significant improvement over the uniform
and simple stratified sampling strategies which are currently widely used in
the industry.
|
1305.3040 | Weighted Approach to General Entropy Function | cs.IT math.IT | The definition of weighted entropy allows for easy calculation of the entropy
of the mixture of measures. In this paper we investigate the problem of
equivalent definition of the general entropy function in weighted form. We show
that under reasonable condition, which is satisfied by the well-known Shannon,
R\'enyi and Tsallis entropies, every entropy function can be defined
equivalently in the weighted way. As a corollary, we show how to use the
weighted form to compute the Tsallis entropy of a mixture of measures.
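As a small illustration of the objects involved (not the paper's weighted construction itself), the Shannon entropy of a mixture of two discrete measures can be computed directly:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mixture(p, q, alpha):
    """Convex mixture alpha*p + (1-alpha)*q of two distributions."""
    return [alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, q)]

p = [1.0, 0.0]          # point mass on the first atom
q = [0.0, 1.0]          # point mass on the second atom
m = mixture(p, q, 0.5)  # a fair coin
```

Here `H(m) = log 2` while both components have zero entropy: the entropy of a mixture is not the mixture of the entropies, which is what makes formulations that handle mixtures conveniently, such as the weighted form, valuable.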
|
1305.3046 | Running Consensus for Decentralized Detection | cs.SY cs.MA | This thesis represents a culmination of work and learning that has taken
place over a period of almost three years (2007 - 2010) at the University of
Salerno, and at the University of Connecticut. It is mostly a unified
mathematical dissertation of the running consensus procedures. In recent
years, detection using the paradigm of running consensus has been
recognized as one of the three possible classes of distributed detection in
which the phases of sensing and communication need not be mutually exclusive,
i.e., sensing and communication occur simultaneously. Considering that the
running consensus paradigm is just an intuitive inference procedure, i.e.
sub-optimal w.r.t. an ideal centralized system scheme which is optimal, the
most important result is that it asymptotically reaches the performance of this
ideal scheme.
|
1305.3051 | Using Feedback for Secrecy over Graphs | cs.IT cs.CR math.IT | We study the problem of secure message multicasting over graphs in the
presence of a passive (node) adversary who tries to eavesdrop in the network.
We show that use of feedback, facilitated through the existence of cycles or
undirected edges, enables higher rates than possible in directed acyclic graphs
of the same mincut. We demonstrate this using code constructions for canonical
combination networks (CCNs). We also provide general outer bounds as well as
schemes for node adversaries over CCNs.
|
1305.3054 | The Degrees of Freedom of the MIMO Y-channel | cs.IT math.IT | The degrees of freedom (DoF) of the MIMO Y-channel, a multi-way communication
network consisting of 3 users and a relay, are characterized for an arbitrary
number of antennas. The converse is provided by cut-set bounds and novel
genie-aided bounds. The achievability is shown by a scheme that uses
beamforming to establish network coding on-the-fly at the relay in the uplink,
and zero-forcing pre-coding in the downlink. It is shown that the network has
min{2M_2+2M_3,M_1+M_2+M_3,2N} DoF, where M_j and N represent the number of
antennas at user j and the relay, respectively. Thus, in the extreme case where
M_1+M_2+M_3 dominates the DoF expression and is smaller than N, the network has
the same DoF as the MAC between the 3 users and the relay. In this case, a
decode and forward strategy is optimal. In the other extreme where 2N
dominates, the DoF of the network is twice that of the aforementioned MAC, and
hence network coding is necessary. As a byproduct of this work, it is shown
that channel output feedback from the relay to the users has no impact on the
DoF of this channel.
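The stated DoF expression is a closed formula, so it can be transcribed directly (with antenna counts passed in the order used above):

```python
def y_channel_dof(m1, m2, m3, n):
    """DoF of the MIMO Y-channel per the expression above:
    min{2*M_2 + 2*M_3, M_1 + M_2 + M_3, 2*N}, where M_j is the
    antenna count at user j and N the antenna count at the relay."""
    return min(2 * m2 + 2 * m3, m1 + m2 + m3, 2 * n)
```

For example, with a large relay the sum of the users' antennas dominates (the MAC-like regime), while with few relay antennas the 2N term caps the DoF.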
|
1305.3055 | Secrecy Transmission on Block Fading Channels: Theoretical Limits and
Performance of Practical Codes | cs.IT math.IT | We consider a system where an agent (Alice) aims at transmitting a message to
a second agent (Bob) over a set of parallel channels, while keeping it secret
from a third agent (Eve) by using physical layer security techniques. We assume
that Alice perfectly knows the set of channels with respect to Bob, but she has
only a statistical knowledge of the channels with respect to Eve. We derive
bounds on the achievable outage secrecy rates, by considering coding either
within each channel or across all parallel channels. Transmit power is adapted
to the channel conditions, with a constraint on the average power over the
whole transmission. We also focus on the maximum cumulative outage secrecy rate
that can be achieved. Moreover, in order to assess the performance in a real
life scenario, we consider the use of practical error correcting codes. We
extend the definitions of security gap and equivocation rate, previously
applied to the single additive white Gaussian noise channel, to Rayleigh
distributed parallel channels, on the basis of the error rate targets and the
outage probability. Bounds on these metrics are also derived, taking into
account the statistics of the parallel channels. Numerical results are
provided, confirming the feasibility of the considered physical layer
security techniques.
|
1305.3058 | Rule-Based Application Development using Webdamlog | cs.DB | We present the WebdamLog system for managing distributed data on the Web in a
peer-to-peer manner. We demonstrate the main features of the system through an
application called Wepic for sharing pictures between attendees of the SIGMOD
conference. Using Wepic, the attendees will be able to share, download, rate
and annotate pictures in a highly decentralized manner. We show how WebdamLog
handles heterogeneity of the devices and services used to share data in such a
Web setting. We exhibit the simple rules that define the Wepic application and
show how to easily modify the Wepic application.
|
1305.3082 | Mining Frequent Neighborhood Patterns in Large Labeled Graphs | cs.DB | Over the years, frequent subgraphs have been an important sort of targeted
patterns in the pattern mining literature, where most works deal with
databases holding a number of graph transactions, e.g., chemical structures of
compounds. These methods rely heavily on the downward-closure property (DCP) of
the support measure to ensure an efficient pruning of the candidate patterns.
When switching to the emerging scenario of single-graph databases such as
Google Knowledge Graph and Facebook social graph, the traditional support
measure turns out to be trivial (either 0 or 1). However, to the best of our
knowledge, all attempts to redefine a single-graph support resulted in measures
that either lose DCP, or are no longer semantically intuitive.
This paper targets mining patterns in the single-graph setting. We resolve
the "DCP-intuitiveness" dilemma by shifting the mining target from frequent
subgraphs to frequent neighborhoods. A neighborhood is a specific topological
pattern where a vertex is embedded, and the pattern is frequent if it is shared
by a large portion (above a given threshold) of vertices. We show that the new
patterns not only maintain DCP, but also have equally significant semantics as
subgraph patterns. Experiments on real-life datasets display the feasibility of
our algorithms on relatively large graphs, as well as the capability of mining
interesting knowledge that is not discovered in prior works.
|
1305.3103 | A fast method for implementation of the property lists in programming
languages | cs.PL cs.DB | One of the major challenges in programming languages is to support different
data structures and their variations in both static and dynamic aspects. One of
these data structures is the property list, which applications use as a
convenient way to store, organize, and access standard types of data. In this
paper, the standard methods for implementing property lists, including the
static array, linked list, hash, and tree, are reviewed. Then an efficient
method to implement the property list is presented. The experimental results
show that our method is fast compared with the existing methods.
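Two of the reviewed representations can be sketched as follows; the minimal `put`/`get` interface is a hypothetical one chosen for illustration, since the paper's own API is not given. The association list pays O(n) per access, while the hash-backed variant gives expected O(1) access.

```python
class AssocListPlist:
    """Property list stored as an association list: O(n) lookup."""
    def __init__(self):
        self.pairs = []

    def put(self, key, value):
        for i, (k, _) in enumerate(self.pairs):
            if k == key:            # overwrite an existing property
                self.pairs[i] = (key, value)
                return
        self.pairs.append((key, value))

    def get(self, key, default=None):
        for k, v in self.pairs:
            if k == key:
                return v
        return default

class HashPlist:
    """Property list backed by a hash table: expected O(1) lookup."""
    def __init__(self):
        self.table = {}

    def put(self, key, value):
        self.table[key] = value

    def get(self, key, default=None):
        return self.table.get(key, default)

p1, p2 = AssocListPlist(), HashPlist()
for k, v in [("color", "red"), ("size", 3), ("color", "blue")]:
    p1.put(k, v)
    p2.put(k, v)
```

Both behave identically from the caller's point of view; only the access cost differs, which is the trade-off the reviewed implementations navigate.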
|
1305.3107 | I Wish I Didn't Say That! Analyzing and Predicting Deleted Messages in
Twitter | cs.SI cs.CL | Twitter has become a major source of data for social media researchers. One
important aspect of Twitter not previously considered is {\em deletions} --
removal of tweets from the stream. Deletions can be due to a multitude of
reasons such as privacy concerns, rashness or attempts to undo public
statements. We show how deletions can be automatically predicted ahead of time
and analyse which tweets are likely to be deleted and how.
|
1305.3120 | Optimization with First-Order Surrogate Functions | stat.ML cs.LG math.OC | In this paper, we study optimization methods consisting of iteratively
minimizing surrogates of an objective function. By proposing several
algorithmic variants and simple convergence analyses, we make two main
contributions. First, we provide a unified viewpoint for several first-order
optimization techniques such as accelerated proximal gradient, block coordinate
descent, or Frank-Wolfe algorithms. Second, we introduce a new incremental
scheme that experimentally matches or outperforms state-of-the-art solvers for
large-scale optimization problems typically arising in machine learning.
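A minimal instance of this surrogate viewpoint, assuming a differentiable objective with an L-Lipschitz gradient: minimizing the standard first-order quadratic surrogate at each iterate reduces exactly to a gradient step, which is how proximal-gradient-type methods fit the framework.

```python
def minimize_with_surrogates(f_grad, lipschitz, x0, iters=50):
    """Iteratively minimize f by minimizing the first-order surrogate
    g_k(x) = f(x_k) + f'(x_k)*(x - x_k) + (L/2)*(x - x_k)**2
    at each iterate; its minimizer is the gradient step x_k - f'(x_k)/L.
    This is the textbook special case of the surrogate framework,
    shown here as a one-dimensional sketch."""
    x = x0
    for _ in range(iters):
        x = x - f_grad(x) / lipschitz
    return x

# f(x) = (x - 3)^2 has gradient 2*(x - 3) and gradient-Lipschitz constant 2
xstar = minimize_with_surrogates(lambda x: 2 * (x - 3), 2.0, x0=0.0)
```

Because the surrogate majorizes f, each surrogate minimization is guaranteed not to increase the objective; richer surrogates recover the other schemes mentioned above.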
|
1305.3146 | Discriminating Power of Centrality Measures | cs.SI physics.soc-ph | The calculation of centrality measures is common practice in the study of
networks, as they attempt to quantify the importance of individual vertices,
edges, or other components. Different centralities attempt to measure
importance in different ways. In this paper, we examine a conjecture posed by
E. Estrada regarding the ability of several measures to distinguish the
vertices of networks. Estrada conjectured that if all vertices of a graph have
the same subgraph centrality, then all vertices must also have the same degree,
eigenvector, closeness, and betweenness centralities. We provide a
counterexample for the latter two centrality measures and propose a revised
conjecture.
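The setting of the conjecture can be illustrated on a vertex-transitive graph, where symmetry forces every centrality measure to take the same value at every vertex. The sketch below hand-rolls degree and closeness centralities on a 5-cycle (the paper's counterexample concerns closeness and betweenness and is not reproduced here).

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source shortest-path distances via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def degree_centrality(adj):
    return {u: len(nbrs) for u, nbrs in adj.items()}

def closeness_centrality(adj):
    n = len(adj)
    # (n - 1) divided by the sum of distances to all other vertices
    return {u: (n - 1) / sum(bfs_distances(adj, u).values()) for u in adj}

# 5-cycle: vertex-transitive, so all vertices must score identically
n = 5
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
deg = degree_centrality(cycle)
clo = closeness_centrality(cycle)
```

Graphs where all subgraph centralities agree without such symmetry are exactly the delicate cases the conjecture and its counterexample address.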
|
1305.3149 | Qualitative detection of oil adulteration with machine learning
approaches | cs.CE cs.LG | The study focused on machine learning approaches to identify the
adulteration of 9 kinds of edible oil qualitatively and answered the following
three questions: Is the oil sample adulterated? What does it consist of? What is
the main ingredient of the adulterant oil? After extracting the
high-performance liquid chromatography (HPLC) data on triglyceride from 370 oil
samples, we applied the adaptive boosting with multi-class Hamming loss
(AdaBoost.MH) to distinguish the oil adulteration in contrast with the support
vector machine (SVM). Further, we regarded the adulterant oil and the pure oil
samples as ones with multiple labels and with only one label, respectively.
Then multi-label AdaBoost.MH and multi-label learning vector quantization
(ML-LVQ) model were built to determine the ingredients and their relative ratio
in the adulteration oil. The experimental results on six measures show that
ML-LVQ achieves better performance than multi-label AdaBoost.MH.
|
1305.3178 | Convergence of Distributed Randomized PageRank Algorithms | cs.SY | The PageRank algorithm employed by Google quantifies the importance of each
page by the link structure of the web. To reduce the computational burden, the
distributed randomized PageRank algorithms (DRPA) that recently appeared in the
literature suggest that pages update their ranking values by locally
communicating with the linked pages. The main objective of the note is to show
that the estimates generated by DRPA converge to the true PageRank value almost
surely under the assumption that the randomization is realized in an
independent and identically distributed (iid) way. This is achieved with the
help of the stochastic approximation (SA) and its convergence results.
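For reference, the centralized PageRank value that the DRPA estimates converge to can be computed by standard power iteration; a minimal sketch, where the damping factor and the toy link structure are illustrative assumptions:

```python
def pagerank(links, d=0.85, iters=100):
    """Centralized power-iteration PageRank: the "true" value that the
    distributed randomized estimates are shown to approach.
    `links[i]` lists the pages that page i links to."""
    n = len(links)
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = d * r[i] / len(outs)
                for j in outs:          # pass rank along out-links
                    new[j] += share
            else:                       # dangling page: spread uniformly
                for j in range(n):
                    new[j] += d * r[i] / n
        r = new
    return r

# toy web: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}
ranks = pagerank([[1, 2], [2], [0]])
```

Page 2, which receives links from both other pages, ends up with a higher rank than page 1; the DRPA replace this global iteration with local randomized updates whose averaged estimates converge to the same vector.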
|
1305.3189 | A Bag of Words Approach for Semantic Segmentation of Monitored Scenes | cs.CV | This paper proposes a semantic segmentation method for outdoor scenes
captured by a surveillance camera. Our algorithm classifies each perceptually
homogeneous region as one of the predefined classes learned from a collection of
manually labelled images. The proposed approach combines two different types of
information. First, color segmentation is performed to divide the scene into
perceptually similar regions. Then, the second step is based on SIFT keypoints
and uses the bag of words representation of the regions for the classification.
The prediction is done using a Na\"ive Bayesian Network as a generative
classifier. Compared to existing techniques, our method provides more compact
representations of scene contents and the segmentation result is more
consistent with human perception due to the combination of the color
information with the image keypoints. The experiments conducted on a publicly
available data set demonstrate the validity of the proposed method.
|
1305.3207 | Efficient Density Estimation via Piecewise Polynomial Approximation | cs.LG cs.DS stat.ML | We give a highly efficient "semi-agnostic" algorithm for learning univariate
probability distributions that are well approximated by piecewise polynomial
density functions. Let $p$ be an arbitrary distribution over an interval $I$
which is $\tau$-close (in total variation distance) to an unknown probability
distribution $q$ that is defined by an unknown partition of $I$ into $t$
intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of
the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$
samples from $p$, runs in time $\mathrm{poly}(t,d,1/\epsilon)$, and with high probability
outputs a piecewise polynomial hypothesis distribution $h$ that is
$(O(\tau)+\epsilon)$-close (in total variation distance) to $p$. This sample
complexity is essentially optimal; we show that even for $\tau=0$, any
algorithm that learns an unknown $t$-piecewise degree-$d$ probability
distribution over $I$ to accuracy $\epsilon$ must use
$\Omega(\frac{t(d+1)}{\mathrm{poly}(1 + \log(d+1))} \cdot \frac{1}{\epsilon^2})$ samples from the
distribution, regardless of its running time. Our algorithm combines tools from
approximation theory, uniform convergence, linear programming, and dynamic
programming.
We apply this general algorithm to obtain a wide range of results for many
natural problems in density estimation over both continuous and discrete
domains. These include state-of-the-art results for learning mixtures of
log-concave distributions; mixtures of $t$-modal distributions; mixtures of
Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions;
mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general
technique yields computationally efficient algorithms for all these problems,
in many cases with provably optimal sample complexities (up to logarithmic
factors) in all parameters.
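To illustrate the dynamic-programming ingredient, here is a toy degree-0 version of the fitting step: splitting a sample sequence into $t$ contiguous pieces, each approximated by a constant, with minimum squared error (a hypothetical simplification; the paper's algorithm handles degree-$d$ polynomials and total variation distance):

```python
def best_piecewise_constant(xs, t):
    """DP: split xs into t contiguous pieces, each fitted by its mean,
    minimizing total squared error. O(t * n^2) toy version."""
    n = len(xs)
    prefix, prefix2 = [0.0], [0.0]
    for x in xs:
        prefix.append(prefix[-1] + x)
        prefix2.append(prefix2[-1] + x * x)

    def cost(i, j):  # squared error of one constant piece over xs[i:j]
        s, s2, m = prefix[j] - prefix[i], prefix2[j] - prefix2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    # dp[k][j] = best error covering xs[:j] with k pieces
    dp = [[INF] * (n + 1) for _ in range(t + 1)]
    dp[0][0] = 0.0
    for k in range(1, t + 1):
        for j in range(k, n + 1):
            dp[k][j] = min(dp[k - 1][i] + cost(i, j) for i in range(k - 1, j))
    return dp[t][n]
```

For example, `[0, 0, 0, 5, 5, 5]` is fit exactly by two pieces, while one piece incurs the variance around the global mean.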
|
1305.3224 | Update-Efficiency and Local Repairability Limits for Capacity
Approaching Codes | cs.IT math.IT | Motivated by distributed storage applications, we investigate the degree to
which capacity achieving encodings can be efficiently updated when a single
information bit changes, and the degree to which such encodings can be
efficiently (i.e., locally) repaired when a single encoded bit is lost.
Specifically, we first develop conditions under which optimum
error-correction and update-efficiency are possible, and establish that the
number of encoded bits that must change in response to a change in a single
information bit must scale logarithmically in the block-length of the code if
we are to achieve any nontrivial rate with vanishing probability of error over
the binary erasure or binary symmetric channels. Moreover, we show there exist
capacity-achieving codes with this scaling.
With respect to local repairability, we develop tight upper and lower bounds
on the number of remaining encoded bits that are needed to recover a single
lost bit of the encoding. In particular, we show that if the code-rate is
$\epsilon$ less than the capacity, then for optimal codes, the maximum number
of codeword symbols required to recover one lost symbol must scale as
$\log(1/\epsilon)$.
Several variations on---and extensions of---these results are also developed.
|
1305.3240 | Reaction-Diffusion Systems as Complex Networks | math.OC cs.SY math.AP | Spatially distributed reaction networks are indispensable for the
understanding of many important phenomena concerning the development of
organisms, coordinated cell behavior, and pattern formation. The purpose of
this brief discussion paper is to point out some open problems in the theory of
PDE and compartmental ODE models of balanced reaction-diffusion networks.
|
1305.3250 | Bioacoustical Periodic Pulse Train Signal Detection and Classification
using Spectrogram Intensity Binarization and Energy Projection | cs.CV | The following work outlines an approach for automatic detection and
recognition of periodic pulse train signals using a multi-stage process based
on spectrogram edge detection, energy projection and classification. The method
has been implemented to automatically detect and recognize pulse train songs of
minke whales. While the long term goal of this work is to properly identify and
detect minke songs from large multi-year datasets, this effort was developed
using sounds off the coast of Massachusetts, in the Stellwagen Bank National
Marine Sanctuary. The detection methodology is presented and evaluated on 232
continuous hours of acoustic recordings and a qualitative analysis of machine
learning classifiers and their performance is described. The trained automatic
detection and classification system is applied to 120 continuous hours,
comprising various challenges such as broadband and narrowband noises, low
SNR, and other pulse train signatures. This automatic system achieves a TPR of
63% for FPR of 0.6% (or 0.87 FP/h), at a Precision (PPV) of 84% and an F1 score
of 71%.
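The intensity-binarization and energy-projection stages can be sketched on a toy spectrogram (the threshold value and the run-counting rule for pulses are illustrative assumptions):

```python
def detect_pulses(spec, thresh):
    """Binarize a spectrogram (rows = frequency bins, columns = time
    frames), project energy onto the time axis, and count contiguous
    above-zero runs as candidate pulses."""
    binary = [[1 if v >= thresh else 0 for v in row] for row in spec]
    # energy projection: per-time-frame sum over frequency bins
    proj = [sum(col) for col in zip(*binary)]
    pulses, in_pulse = 0, False
    for e in proj:
        if e > 0 and not in_pulse:
            pulses, in_pulse = pulses + 1, True
        elif e == 0:
            in_pulse = False
    return pulses, proj

# toy spectrogram: two pulse events separated by silence
spec = [
    [0, 9, 9, 0, 0, 8, 0],
    [0, 7, 8, 0, 0, 9, 0],
]
pulses, proj = detect_pulses(spec, thresh=5)
```

A real detector would operate on an STFT magnitude array and add edge detection and classification on top of this projection.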
|
1305.3252 | Fault-tolerant control under controller-driven sampling using virtual
actuator strategy | cs.SY | We present a new output feedback fault tolerant control strategy for
continuous-time linear systems. The strategy combines a digital nominal
controller under controller-driven (varying) sampling with virtual-actuator
(VA)-based controller reconfiguration to compensate for actuator faults. In the
proposed scheme, the controller controls both the plant and the sampling
period, and performs controller reconfiguration by engaging in the loop the VA
adapted to the diagnosed fault. The VA also operates under controller-driven
sampling. Two independent objectives are considered: (a) closed-loop stability
with setpoint tracking and (b) controller reconfiguration under faults. Our
main contribution is to extend an existing VA-based controller reconfiguration
strategy to systems under controller-driven sampling in such a way that if
objective (a) is possible under controller-driven sampling (without VA) and
objective (b) is possible under uniform sampling (without controller-driven
sampling), then closed-loop stability and setpoint tracking will be preserved
under both healthy and faulty operation for all possible sampling rate
evolutions that may be selected by the controller.
|
1305.3253 | Social Network for Smart Devices using Embedded Ethernet | cs.NI cs.SI | Embedded Ethernet is essentially a microcontroller that is able to
communicate with a network. A design of an AVR controller-based embedded
Ethernet interface is presented. In the design, an existing SPI serial device
can be converted into a network interface peripheral to obtain compatibility
with the network. By typing the LAN IP address into a web browser, the user
gets a web page on screen; this page contains all the information about the
status of the devices. The user can also control the devices interfaced to the
web server by pressing buttons provided in the web page. This creates a network
for easy communication among the devices.
|
1305.3265 | Interference Channel with Intermittent Feedback | cs.IT math.IT | We investigate how to exploit intermittent feedback for interference
management. Focusing on the two-user linear deterministic interference channel,
we completely characterize the capacity region. We find that the
characterization only depends on the forward channel parameters and the
marginal probability distribution of each feedback link. The scheme we propose
makes use of block Markov encoding and quantize-map-and-forward at the
transmitters, and backward decoding at the receivers. Matching outer bounds are
derived based on novel genie-aided techniques. As a consequence, the
perfect-feedback capacity can be achieved once the two feedback links are
active with large enough probabilities.
|
1305.3282 | Emergence of hierarchy in cost driven growth of spatial networks | physics.soc-ph cond-mat.dis-nn cs.SI | One of the most important features of spatial networks, such as transportation
networks, power grids, the Internet, and neural networks, is the existence of a cost
associated with the length of links. Such a cost has a profound influence on
the global structure of these networks which usually display a hierarchical
spatial organization. The link between local constraints and large-scale
structure is, however, not elucidated, and we introduce here a generic model for
the growth of spatial networks based on the general concept of cost benefit
analysis. This model depends essentially on one single scale and produces a
family of networks which range from the star-graph to the minimum spanning tree
and which are characterised by a continuously varying exponent. We show that
spatial hierarchy emerges naturally, with structures composed of various hubs
controlling geographically separated service areas, and appears as a
large-scale consequence of local cost-benefit considerations. Our model thus
provides the first building blocks for a better understanding of the evolution
of spatial networks and their properties. We also find that, surprisingly, the
average detour is minimal in the intermediate regime, as a result of a large
diversity in link lengths. Finally, we estimate the important parameters for
various world railway networks and find that --remarkably-- they all fall in
this intermediate regime, suggesting that spatial hierarchy is a crucial
feature for these systems and probably possesses an important evolutionary
advantage.
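A minimal sketch of a cost-benefit growth rule of this flavor (the attachment cost below is a hypothetical stand-in, not the paper's actual functional form): each new node attaches to the existing node that balances link length against distance to the hub, and one parameter interpolates between the two extreme topologies mentioned above.

```python
import math

def grow_network(points, a):
    """Hypothetical cost-benefit growth rule: node i attaches to the
    existing node j minimizing d(i, j) + a * d(j, root).
    Large a favors the root (star graph); a = 0 gives a
    nearest-neighbor tree (MST-like). Illustrative only."""
    edges = []
    for i in range(1, len(points)):
        def cost(j):
            return math.dist(points[i], points[j]) + a * math.dist(points[j], points[0])
        j = min(range(i), key=cost)
        edges.append((j, i))
    return edges

points = [(0, 0), (1, 0), (2, 0), (3, 0)]
chain = grow_network(points, a=0)    # nearest-neighbor chain
star = grow_network(points, a=100)   # star around the root
```

Intermediate values of `a` would produce the hub-and-spoke hierarchies described in the abstract.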
|
1305.3288 | A Convex Analysis Approach to Computational Entropy | cs.IT math.IT | This paper studies the notion of computational entropy. Using techniques from
convex optimization, we investigate the following problems: (a) Can we
derandomize the computational entropy? More precisely, for the computational
entropy, what is the real difference in security defined using the three
important classes of circuits: deterministic boolean, deterministic real
valued, or (the most powerful) randomized ones? (b) How large can the
difference in computational entropy be for an unbounded versus an efficient
adversary?
(c) Can we obtain useful, simpler characterizations for the computational
entropy?
|
1305.3289 | Redundancy Allocation of Partitioned Linear Block Codes | cs.IT math.IT | Most memories suffer from both permanent defects and intermittent random
errors. Partitioned linear block codes (PLBC) were proposed by Heegard to
efficiently mask stuck-at defects and correct random errors. PLBC have two
separate redundancy parts for defects and random errors. In this paper, we
investigate the allocation of redundancy between these two parts. The optimal
redundancy allocation is investigated using simulations, and the simulation
results show that PLBC can significantly reduce the probability of decoding
failure in memories with defects. In addition, we derive an upper bound on
the probability of decoding failure of PLBC and estimate the optimal redundancy
allocation using this upper bound. The estimated redundancy allocation matches
the optimal redundancy allocation well.
|
1305.3311 | Does the Great Firewall really isolate the Chinese? Integrating access
blockage with cultural factors to explain web user behavior | cs.CY cs.SI physics.soc-ph | The dominant understanding of Internet censorship posits that blocking access
to foreign-based websites creates isolated communities of Internet users. We
question this discourse for its assumption that, if given access, people would
use all websites. We develop a conceptual framework that integrates access
blockage with social structures to explain web users' choices, and argue that
users visit websites they find culturally proximate and access blockage matters
only when such sites are blocked. We examine the case of China, where online
blockage is notoriously comprehensive, and compare Chinese web usage patterns
with those elsewhere. Analyzing audience traffic among the 1000 most visited
websites, we find that websites cluster according to language and geography.
Chinese websites constitute one cluster, which resembles other such
geo-linguistic clusters in terms of both its composition and degree of
isolation. Our sociological investigation reveals a greater role of cultural
proximity than access blockage in explaining online behaviors.
|
1305.3317 | Linear Reduced-Rank Interference Suppression for DS-UWB Systems Using
Switched Approximations of Adaptive Basis Functions | cs.IT math.IT | In this work, we propose a novel low-complexity reduced-rank scheme and
consider its application to linear interference suppression in direct-sequence
ultra-wideband (DS-UWB) systems. Firstly, we investigate a generic reduced-rank
scheme that jointly optimizes a projection vector and a reduced-rank filter by
using the minimum mean-squared error (MMSE) criterion. Then a low-complexity
scheme, denoted switched approximation of adaptive basis functions (SAABF), is
proposed. The SAABF scheme is an extension of the generic scheme, in which the
complexity reduction is achieved by using a multi-branch framework to simplify
the structure of the projection vector. Adaptive implementations for the SAABF
scheme are developed by using least-mean squares (LMS) and recursive
least-squares (RLS) algorithms. We also develop algorithms for selecting the
branch number and the model order of the SAABF scheme. Simulations show that in
the scenarios with severe inter-symbol interference (ISI) and multiple access
interference (MAI), the proposed SAABF scheme has fast convergence and
remarkable interference suppression performance with low complexity.
|
1305.3321 | A Mining-Based Compression Approach for Constraint Satisfaction Problems | cs.AI | In this paper, we propose an extension of our Mining for SAT framework to
the Constraint Satisfaction Problem (CSP). We consider n-ary extensional
constraints (table constraints). Our approach aims to reduce the size of the
CSP by exploiting the structure of the constraint graph and of its associated
microstructure. More precisely, we apply itemset mining techniques to search
for closed frequent itemsets on these two representations. Using the Tseitin
extension, we rewrite the whole CSP into another compressed CSP that is
equivalent with respect to satisfiability. Our approach contrasts with the
previous approach proposed by Katsirelos and Walsh, as we do not change the
structure of the constraints.
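The itemset-mining step can be illustrated with a naive frequent-itemset enumerator over toy transactions (the paper mines closed frequent itemsets on the constraint graph and microstructure; this brute-force sketch ignores closedness):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Enumerate all itemsets whose support (number of containing
    transactions) is at least min_support. Naive sketch: a real miner
    would prune via the Apriori property and report closed sets only."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if sup >= min_support:
                result[cand] = sup
                found = True
        if not found:  # no frequent k-itemset => none of size k+1 either
            break
    return result

T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]
fi = frequent_itemsets(T, min_support=2)
```

In the compression setting, each frequent itemset corresponds to shared structure across constraint tuples that can be factored out via a fresh Tseitin-style variable.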
|