| id | title | categories | abstract |
|---|---|---|---|
1206.0469
|
Real-Time Bid Optimization for Group-Buying Ads
|
cs.GT cs.CE
|
Group-buying ads seeking a minimum number of customers before the deal expiry
are increasingly used by daily-deal providers. Unlike traditional web ads, the
advertiser's profit for a group-buying ad depends on the time to expiry and on
the number of additional customers needed to satisfy the minimum group size.
Since both quantities are time-dependent, the optimal bid amount changes with
every impression. Consequently, traditional static bidding
strategies are far from optimal. Instead, bid values need to be optimized in
real-time to maximize expected bidder profits. This online optimization of deal
profits is made possible by the advent of ad exchanges offering real-time
(spot) bidding. To this end, we propose a real-time bidding strategy for
group-buying deals based on the online optimization of bid values. We derive
the expected bidder profit of deals as a function of the bid amounts, and
dynamically vary bids to maximize profits. Further, to satisfy time constraints
of online bidding, we present methods for minimizing computation time.
Subsequently, we derive the real time ad selection, admissibility, and real
time bidding of the traditional ads as the special cases of the proposed
method. We evaluate the proposed bidding, selection and admission strategies on
a multi-million click stream of 935 ads. The proposed real-time bidding,
selection and admissibility show significant profit increases over the existing
strategies. Further, the experiments illustrate the robustness of the bidding
and its acceptable computation times.
|
1206.0489
|
Sumset and Inverse Sumset Inequalities for Differential Entropy and
Mutual Information
|
cs.IT math.CO math.IT math.PR
|
The sumset and inverse sumset theories of Freiman, Pl\"{u}nnecke and Ruzsa
give bounds connecting the cardinality of the sumset $A+B=\{a+b\;;\;a\in
A,\,b\in B\}$ of two discrete sets $A,B$, to the cardinalities (or the finer
structure) of the original sets $A,B$. For example, the sum-difference bound of
Ruzsa states that, $|A+B|\,|A|\,|B|\leq|A-B|^3$, where the difference set $A-B=
\{a-b\;;\;a\in A,\,b\in B\}$. Interpreting the differential entropy $h(X)$ of a
continuous random variable $X$ as (the logarithm of) the size of the effective
support of $X$, the main contribution of this paper is a series of natural
information-theoretic analogs for these results. For example, the Ruzsa
sum-difference bound becomes the new inequality, $h(X+Y)+h(X)+h(Y)\leq
3h(X-Y)$, for any pair of independent continuous random variables $X$ and $Y$.
Our results include differential-entropy versions of Ruzsa's triangle
inequality, the Pl\"{u}nnecke-Ruzsa inequality, and the
Balog-Szemer\'{e}di-Gowers lemma. We also give a differential-entropy version
of the Freiman-Green-Ruzsa inverse-sumset theorem, which can be seen as a
quantitative converse to the entropy power inequality. Versions of most of
these results for the discrete entropy $H(X)$ were recently proved by Tao,
relying heavily on a strong, functional form of the submodularity property of
$H(X)$. Since differential entropy is {\em not} functionally submodular, in the
continuous case many of the corresponding discrete proofs fail, often
requiring substantially new proof strategies. We find that the basic property
that naturally replaces discrete functional submodularity is the data
processing property of mutual information.
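As a sanity check (not from the paper), the Gaussian case of the new sum-difference bound can be verified in closed form, since h(X) = (1/2) log(2*pi*e*sigma^2) for X ~ N(0, sigma^2):

```python
import math

def h_gauss(var):
    # differential entropy (in nats) of a Gaussian with variance `var`
    return 0.5 * math.log(2 * math.pi * math.e * var)

def ruzsa_gap(vx, vy):
    # 3 h(X-Y) - [h(X+Y) + h(X) + h(Y)] for independent X ~ N(0, vx),
    # Y ~ N(0, vy); by independence h(X+Y) = h(X-Y), both of variance vx + vy
    s = vx + vy
    return 3 * h_gauss(s) - (h_gauss(s) + h_gauss(vx) + h_gauss(vy))

# the gap reduces to 0.5 * log((vx + vy)^2 / (vx * vy)) >= 0, so the bound holds
for vx, vy in [(1.0, 1.0), (0.1, 5.0), (3.0, 0.25)]:
    assert ruzsa_gap(vx, vy) >= 0.0
```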
|
1206.0531
|
Mutually unbiased bases as submodules and subspaces
|
math.CO cs.IT math.IT quant-ph
|
Mutually unbiased bases (MUBs) have been used in several cryptographic and
communications applications. There has been much speculation regarding
connections between MUBs and finite geometries, most of which has focused on
projective and affine planes. We propose a connection with
higher dimensional projective geometries and projective Hjelmslev geometries.
We show that this proposed geometric structure is present in several
constructions of MUBs.
|
1206.0549
|
Sequence-Based Control for Networked Control Systems Based on Virtual
Control Inputs
|
cs.SY math.OC
|
In this paper, we address the problem of controlling a system over an
unreliable connection that is affected by time-varying delays and randomly
occurring packet losses. A novel sequence-based approach is proposed that
extends a given controller designed without consideration of the
network-induced disturbances. Its key idea is to model the unknown future
control inputs by random variables, the so-called virtual control inputs, which
are characterized by discrete probability density functions. Subject to this
probabilistic description, the actual sequence of future control inputs is
determined and transmitted to the actuator. The high performance of the
proposed approach is demonstrated by means of Monte Carlo simulation runs with
an inverted pendulum on a cart and by a detailed comparison to standard NCS
approaches.
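The buffering idea behind sequence-based control can be sketched in a few lines. The snippet below is an illustrative toy (a deterministic scalar plant, a stabilizing gain, and names of our own choosing), not the paper's virtual-control-input estimator:

```python
import random

def simulate(steps=50, horizon=5, loss_prob=0.3, a=1.2, k=1.1, seed=0):
    """Toy sequence-based NCS: the controller predicts the plant forward and
    sends a whole sequence of future inputs; the actuator buffers the newest
    received sequence and plays on from it when packets are lost."""
    rng = random.Random(seed)
    x = 1.0
    buffer, age = [0.0] * horizon, 0
    for _ in range(steps):
        # controller: predict ahead assuming its own inputs are applied
        xp, seq = x, []
        for _ in range(horizon):
            u = -k * xp
            seq.append(u)
            xp = a * xp + u
        if rng.random() > loss_prob:      # packet arrives at the actuator
            buffer, age = seq, 0
        u_applied = buffer[min(age, horizon - 1)]
        age += 1
        x = a * x + u_applied
    return abs(x)

# the loop stays stabilized despite ~30% packet loss
assert simulate() < 1e-3
```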
|
1206.0555
|
Synergy-based Hand Pose Sensing: Reconstruction Enhancement
|
cs.RO
|
Low-cost sensing gloves for posture reconstruction provide measurements that
are limited in several regards: they are generated through an imperfectly
known model, are subject to noise, and may be fewer than the number of Degrees
of Freedom (DoFs) of the hand. Under these conditions, direct reconstruction of
the hand posture is an ill-posed problem, and performance can be very poor.
This paper examines the problem of estimating the posture of a human hand
using (low-cost) sensing gloves, and how to improve their performance by
exploiting knowledge of how humans most frequently use their hands. To
increase the accuracy of pose reconstruction without modifying the glove
hardware - hence basically at no extra cost - we propose to collect, organize,
and exploit information on the probabilistic distribution of human hand poses
in common tasks. We discuss how a database of such a priori information can
be built, represented in a hierarchy of correlation patterns or postural
synergies, and fused with glove data in a consistent way, so as to provide a
good hand pose reconstruction in spite of insufficient and inaccurate sensing
data. Simulations and experiments on a low-cost glove are reported which
demonstrate the effectiveness of the proposed techniques.
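For a purely Gaussian prior, the kind of prior-plus-glove fusion described above reduces to a standard MAP estimate. The sketch below is our own minimal illustration (not the paper's hierarchical synergy model) of how prior correlations fill in unmeasured DoFs:

```python
import numpy as np

def map_pose(mu, P, H, R, y):
    """MAP estimate of a full hand pose x ~ N(mu, P) from fewer, noisy glove
    readings y = H x + v with v ~ N(0, R): the correlations in P (the
    postural synergies) fill in the unmeasured degrees of freedom."""
    S = H @ P @ H.T + R                      # innovation covariance
    return mu + P @ H.T @ np.linalg.solve(S, y - H @ mu)

# 3 correlated joint angles, but the glove reports only the first one
mu = np.zeros(3)
P = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[1e-6]])
pose = map_pose(mu, P, H, R, np.array([1.0]))
# the two unmeasured joints are pulled toward 0.9 by the synergy prior
```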
|
1206.0556
|
Synergy-Based Hand Pose Sensing: Optimal Glove Design
|
cs.RO
|
In this paper we study the problem of improving the performance of human hand
pose sensing devices by exploiting knowledge of how humans most frequently
use their hands in grasping tasks. In a companion paper we studied the problem
of maximizing the reconstruction accuracy of the hand pose from partial and
noisy data provided by any given pose sensing device (a sensorized "glove")
taking into account statistical a priori information. In this paper we consider
the dual problem of how to design pose sensing devices, i.e. how and where to
place sensors on a glove, to get maximum information about the actual hand
posture. We study the continuous case, where individual sensing elements in
the glove measure a linear combination of joint angles; the discrete case,
where each measurement corresponds to a single joint angle; and the most
general hybrid case, where both continuous and discrete sensing elements are
available. The objective is to provide, for given a priori information and a
fixed number of measurements, the optimal design minimizing the average
reconstruction error. Solutions relying on the geometrical synergy definition
as well as gradient flow-based techniques are provided. Simulations of
reconstruction performance show the effectiveness of the proposed optimal
design.
|
1206.0561
|
A simple probabilistic construction yielding generalized entropies and
divergences, escort distributions and q-Gaussians
|
cond-mat.stat-mech cs.IT math-ph math.IT math.MP
|
We give a simple probabilistic description of a transition between two states
which leads to a generalized escort distribution. When the parameter of the
distribution varies, it defines a parametric curve that we call an escort-path.
The R\'enyi divergence appears as a natural by-product of the setting. We study
the dynamics of the Fisher information on this path, and show in particular
that the thermodynamic divergence is proportional to Jeffreys' divergence.
Next, we consider the problem of inferring a distribution on the escort-path,
subject to generalized moments constraints. We show that our setting naturally
induces a rationale for the minimization of the R\'enyi information divergence.
Then, we derive the optimum distribution as a generalized q-Gaussian
distribution.
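The escort construction itself is easy to state concretely: the escort of order q reweights probabilities as p_i^q / sum_j p_j^q, and the Renyi divergence has a closed form. A minimal sketch of the standard definitions (not this paper's escort-path machinery):

```python
import math

def escort(p, q):
    # escort distribution of order q: p_i^q / sum_j p_j^q
    w = [pi ** q for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

def renyi_div(p, r, a):
    # Renyi divergence of order a between discrete distributions p and r
    return math.log(sum(pi ** a * ri ** (1 - a)
                        for pi, ri in zip(p, r))) / (a - 1)

# q > 1 sharpens the distribution; q < 1 flattens it
sharp = escort([0.8, 0.2], 2.0)    # -> [16/17, 1/17]
```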
|
1206.0567
|
On generalized Cram\'er-Rao inequalities, generalized Fisher
informations and characterizations of generalized q-Gaussian distributions
|
math-ph cond-mat.stat-mech cs.IT math.IT math.MP
|
This paper deals with Cram\'er-Rao inequalities in the context of
nonextensive statistics and in estimation theory. It gives characterizations of
generalized q-Gaussian distributions, and introduces generalized versions of
Fisher information. The contributions of this paper are (i) the derivation of
new extended Cram\'er-Rao inequalities for the estimation of a parameter,
involving general q-moments of the estimation error, (ii) the derivation of
Cram\'er-Rao inequalities saturated by generalized q-Gaussian distributions,
(iii) the definition of generalized Fisher informations, (iv) the
identification and interpretation of some prior results, and finally, (v) the
suggestion of new estimation methods.
|
1206.0629
|
DEMON: a Local-First Discovery Method for Overlapping Communities
|
cs.DS cs.SI physics.soc-ph
|
Community discovery in complex networks is an interesting problem with a
number of applications, especially in the knowledge extraction task in social
and information networks. However, many large networks lack a particular
community organization at the global level. In these cases, traditional graph
partitioning algorithms fail to let the latent knowledge embedded in modular
structure emerge, because they impose a top-down global view of a network. We
propose here a simple local-first approach to community discovery, able to
unveil the modular organization of real complex networks. This is achieved by
democratically letting each node vote for the communities it sees surrounding
it in its limited view of the global system, i.e. its ego neighborhood, using a
label propagation algorithm; finally, the local communities are merged into a
global collection. We tested this intuition against the state-of-the-art
overlapping and non-overlapping community discovery methods, and found that our
new method clearly outperforms the others in the quality of the obtained
communities, evaluated by using the extracted communities to predict the
metadata about the nodes of several real-world networks. We also show that our
method is deterministic, fully incremental, and has limited time complexity,
so that it can be used on web-scale real networks.
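The local-first step can be sketched as ego-network extraction followed by label propagation. The toy below is our own illustration of one ego's view (DEMON's actual algorithm additionally merges the local communities found across all egos):

```python
import random

def ego_minus_ego(adj, u):
    # the induced subgraph on u's neighbourhood, with u itself removed
    nbrs = adj[u]
    return {v: adj[v] & nbrs for v in nbrs}

def label_propagation(sub, rounds=10, seed=0):
    # each node repeatedly adopts the most frequent label among its
    # neighbours (ties broken by the smallest label, for determinism)
    rng = random.Random(seed)
    label = {v: v for v in sub}
    nodes = sorted(sub)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for v in nodes:
            if sub[v]:
                counts = {}
                for w in sub[v]:
                    counts[label[w]] = counts.get(label[w], 0) + 1
                top = max(counts.values())
                label[v] = min(l for l, c in counts.items() if c == top)
    comms = {}
    for v, l in label.items():
        comms.setdefault(l, set()).add(v)
    return list(comms.values())

# two 4-cliques bridged by the edge 3-4; node 3's ego view separates its own
# clique from the lone bridge endpoint
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
       4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
local = label_propagation(ego_minus_ego(adj, 3))
```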
|
1206.0638
|
WM Program manual
|
cs.CE
|
This user manual describes the open-source code WM, distributed in
association with a research article submitted to the information technology
journal 45001-ITJ-ANSI, entitled: "Maintenance and Reengineering of software:
Creating a Visual C++ Graphical User Interface to Perform Specific Tasks
Related to Soil Structure Interaction in Poroelastic Soil".
|
1206.0652
|
Learning in Hierarchical Social Networks
|
cs.SI cs.IT cs.LG math.IT
|
We study a social network consisting of agents organized as a hierarchical
M-ary rooted tree, common in enterprise and military organizational structures.
The goal is to aggregate information to solve a binary hypothesis testing
problem. Each agent at a leaf of the tree, and only such an agent, makes a
direct measurement of the underlying true hypothesis. The leaf agent then makes
a decision and sends it to its supervising agent, at the next level of the
tree. Each supervising agent aggregates the decisions from the M members of its
group, produces a summary message, and sends it to its supervisor at the next
level, and so on. Ultimately, the agent at the root of the tree makes an
overall decision. We derive upper and lower bounds for the Type I and II error
probabilities associated with this decision with respect to the number of leaf
agents, which in turn characterize the convergence rates of the Type I, Type II,
and total error probabilities. We also provide a message-passing scheme
involving non-binary message alphabets and characterize the exponent of the
error probability with respect to the message alphabet size.
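For the special case of binary majority voting at every internal node (a simplification of the paper's message-passing schemes, used here only for illustration), the per-node error probability can be propagated up the tree exactly:

```python
from math import comb

def level_up(p, m):
    # probability that a majority vote over m i.i.d. binary decisions,
    # each wrong with probability p, is itself wrong (m assumed odd)
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(m // 2 + 1, m + 1))

def root_error(p_leaf, m, height):
    # propagate the per-node error probability up `height` levels
    p = p_leaf
    for _ in range(height):
        p = level_up(p, m)
    return p

# error decays rapidly with tree height when p_leaf < 1/2
assert root_error(0.1, 3, 4) < root_error(0.1, 3, 1) < 0.1
```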
|
1206.0663
|
Multi-Sparse Signal Recovery for Compressive Sensing
|
cs.IT cs.SY math.IT math.OC stat.ML
|
Signal recovery is one of the key techniques of compressive sensing (CS). It
reconstructs the original signal from the linear sub-Nyquist measurements.
Classical methods exploit the sparsity in one domain to formulate the L0 norm
optimization. Recent investigation shows that some signals are sparse in
multiple domains. To further improve the signal reconstruction performance, we
can exploit this multi-sparsity to generate a new convex programming model. The
latter is formulated with multiple sparsity constraints in multiple domains and
the linear measurement fitting constraint. It improves signal recovery
performance by incorporating additional a priori information. Since some EMG
signals exhibit sparsity in both the time and frequency domains, we take them
as an example in numerical experiments. Results show that the newly proposed
method achieves
better performance for multi-sparse signals.
|
1206.0720
|
A queueing model with independent arrivals, and its fluid and diffusion
limits
|
math.PR cs.SY
|
We introduce the {\Delta}(i)/GI/1 queue, a new queueing model. In this model,
customers from a given population independently sample a time to arrive from
some given distribution F. Thus, the arrival times are order statistics, and
the inter-arrival times are differences of consecutive order statistics.
They are served by a single server which provides service according to a
general distribution G, with independent service times. The exact model is
analytically intractable. Thus, we develop fluid and diffusion limits for the
various stochastic processes, and performance metrics. The fluid limit of the
queue length is observed to be a reflected process, while the diffusion limit
is observed to be a function of a Brownian motion and a Brownian bridge
process, and is given by a 'netput' process and a directional derivative of the
Skorokhod reflected fluid netput in the direction of a diffusion refinement of
the netput process. We also observe what may be interpreted as a transient
Little's law. Sample path analysis reveals various operating regimes where the
diffusion limit switches between a free diffusion, a reflected diffusion
process and the zero process, with possible discontinuities during regime
switches. The weak convergence is established in the M1 topology, and it is
also shown that this is not possible in the J1 topology.
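A Monte-Carlo sketch of the Delta(i)/GI/1 primitive (with the illustrative choices F = uniform[0, 1] and G = exponential, our own picks) makes the order-statistics arrival structure concrete:

```python
import random

def delta_gi_1_mean_wait(n=1000, seed=0):
    """Each of n customers independently samples an arrival time from F
    (uniform on [0, 1] here), so the actual arrival epochs are the order
    statistics; a single server works through them FIFO with i.i.d. service
    times from G (exponential with mean 1/(2n) here: an underloaded system)."""
    rng = random.Random(seed)
    arrivals = sorted(rng.random() for _ in range(n))
    depart, total_wait = 0.0, 0.0
    for a in arrivals:
        start = max(a, depart)            # Lindley-style recursion
        total_wait += start - a
        depart = start + rng.expovariate(2.0 * n)
    return total_wait / n
```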
|
1206.0729
|
Application of Fractional Fourier Transform in Cepstrum Analysis
|
cs.IT math.IT physics.geo-ph
|
Source wavelet estimation is the key in seismic signal processing for
resolving subsurface structural properties. Homomorphic deconvolution using
cepstrum analysis has been an effective method for wavelet estimation for
decades. In general, the inverse of the Fourier transform of the logarithm of a
signal's Fourier transform is the cepstral domain representation of that
signal. The convolution operation of two signals in the time domain becomes an
addition in the cepstral domain. The fractional Fourier transform (FRFT) is the
generalization of the standard Fourier transform (FT). In an FRFT, the
transformation kernel is a set of linear chirps whereas the kernel is composed
of complex sinusoids for the FT. Depending on the fractional order, signals can
be represented in multiple domains. This gives FRFT an extra degree of freedom
in signal analysis over the standard FT. In this paper, we have taken advantage
of the multidomain nature of the FRFT and applied it to cepstral analysis. We
term this combination the Fractional-Cepstrum (FC). We derive the real FC
formulation, and give an example using wavelets to show the multidomain
representation of the traditional cepstrum with different fractional orders of
the FRFT.
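For the ordinary-FT special case, the cepstrum and its convolution-to-addition property look as follows (a generic sketch, not the paper's FC formulation, which replaces the FT by a fractional FT of arbitrary order):

```python
import numpy as np

def real_cepstrum(x):
    # inverse FT of the log magnitude spectrum (the classical real cepstrum)
    spectrum = np.abs(np.fft.fft(x)) + 1e-12   # epsilon guards log(0)
    return np.real(np.fft.ifft(np.log(spectrum)))

# convolution in the time domain becomes addition in the cepstral domain
rng = np.random.RandomState(0)
a, b = rng.randn(16), rng.randn(16)
conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))  # circular conv.
```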
|
1206.0730
|
Theoretical foundation for CMA-ES from information geometric perspective
|
cs.NE
|
This paper explores the theoretical basis of the covariance matrix adaptation
evolution strategy (CMA-ES) from the information geometry viewpoint.
To establish a theoretical foundation for the CMA-ES, we focus on a geometric
structure of a Riemannian manifold of probability distributions equipped with
the Fisher metric. We define a function on the manifold which is the
expectation of fitness over the sampling distribution, and regard the goal of
updating the parameters of the sampling distribution in the CMA-ES as
maximizing the expected fitness. We investigate steepest-ascent learning for the
expected fitness maximization, where the steepest ascent direction is given by
the natural gradient, which is the product of the inverse of the Fisher
information matrix and the conventional gradient of the function.
Our first result is that, under some parameterizations of the multivariate
normal distribution, the natural gradient of the expected fitness can be
obtained without inverting the Fisher information matrix. We
find that the update of the distribution parameters in the CMA-ES is the same
as natural gradient learning for expected fitness maximization. Our second
result is that we derive the range of learning rates such that a step in the
direction of the exact natural gradient improves the expected fitness. We see
from the close relation between the CMA-ES and natural gradient
learning that the default setting of learning rates in the CMA-ES seems
suitable in terms of monotone improvement in expected fitness. Then, we discuss
the relation to the expectation-maximization framework and provide an
information geometric interpretation of the CMA-ES.
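The "no Fisher-matrix inversion" point can be illustrated with the simplest member of this family: for an isotropic Gaussian sampler, a natural-gradient step on the mean is just a weighted recombination of sampled steps. The sketch below is a bare-bones evolution strategy with truncation weights (our own simplification; CMA-ES also adapts the full covariance and uses rank-based weights):

```python
import random

def natural_grad_step(f, m, sigma, pop=20, eta=1.0, rng=random):
    """One step of mean adaptation. For an isotropic Gaussian sampler the
    natural gradient of expected fitness w.r.t. the mean reduces to a
    weighted average of sampled steps: no explicit Fisher-matrix inverse."""
    xs = [[mj + sigma * rng.gauss(0, 1) for mj in m] for _ in range(pop)]
    elite = sorted(xs, key=f)[: pop // 2]          # minimization
    grad = [sum(x[j] for x in elite) / len(elite) - m[j]
            for j in range(len(m))]
    return [m[j] + eta * grad[j] for j in range(len(m))]

def sphere(x):
    return sum(v * v for v in x)

rng = random.Random(1)
m = [3.0, -2.0]
for _ in range(100):
    m = natural_grad_step(sphere, m, sigma=0.3, rng=rng)
# the mean drifts to the optimum of the sphere function
```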
|
1206.0771
|
Topological graph clustering with thin position
|
math.GT cs.LG stat.ML
|
A clustering algorithm partitions a set of data points into smaller sets
(clusters) such that each subset is more tightly packed than the whole. Many
approaches to clustering translate the vector data into a graph with edges
reflecting a distance or similarity metric on the points, then look for highly
connected subgraphs. We introduce such an algorithm based on ideas borrowed
from the topological notion of thin position for knots and 3-dimensional
manifolds.
|
1206.0773
|
Changepoint Detection over Graphs with the Spectral Scan Statistic
|
math.ST cs.IT math.IT stat.ML stat.TH
|
We consider the change-point detection problem of deciding, based on noisy
measurements, whether an unknown signal over a given graph is constant or is
instead piecewise constant over two connected induced subgraphs of relatively
low cut size. We analyze the corresponding generalized likelihood ratio (GLR)
statistic and relate it to the problem of finding a sparsest cut in a graph.
We develop a tractable relaxation of the GLR statistic based on the
combinatorial Laplacian of the graph, which we call the spectral scan
statistic, and analyze its properties. We show how its performance as a testing
procedure depends directly on the spectrum of the graph, and use this result to
explicitly derive its asymptotic properties on a few significant graph
topologies. Finally, we demonstrate both theoretically and by simulations that
the spectral scan statistic can outperform naive testing procedures based on
edge thresholding and $\chi^2$ testing.
|
1206.0823
|
Orthogonal Matching Pursuit with Noisy and Missing Data: Low and High
Dimensional Results
|
math.ST cs.IT math.IT stat.ML stat.TH
|
Many models for sparse regression typically assume that the covariates are
known completely, and without noise. Particularly in high-dimensional
applications, this is often not the case. This paper develops efficient
OMP-like algorithms to deal with precisely this setting. Our algorithms are as
efficient as OMP, and improve on the best-known results for missing and noisy
data in regression, both in the high-dimensional setting where we seek to
recover a sparse vector from only a few measurements, and in the classical
low-dimensional setting where we recover an unstructured regressor. In the
high-dimensional setting, our support-recovery algorithm requires no knowledge
of even the statistics of the noise. Along the way, we also obtain improved
performance guarantees for OMP for the standard sparse regression problem with
Gaussian noise.
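For reference, plain OMP (the baseline the paper's algorithms mirror) fits in a dozen lines; the orthonormal dictionary and sparsity level below are illustrative choices of our own:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column of A most
    correlated with the current residual, then re-fit the coefficients by
    least squares on the selected support."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# exact recovery of a 2-sparse vector from an orthonormal dictionary
Q, _ = np.linalg.qr(np.random.RandomState(0).randn(20, 20))
x0 = np.zeros(20)
x0[3], x0[11] = 2.0, -1.0
x_hat = omp(Q, Q @ x0, k=2)
```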
|
1206.0855
|
A Mixed Observability Markov Decision Process Model for Musical Pitch
|
cs.AI cs.LG
|
Partially observable Markov decision processes have been widely used to
provide models for real-world decision-making problems. In this paper, we
apply a variant of them, the Mixed Observability Markov Decision Process
(MOMDP), to our problem. We aim to offer a behavioural model for the
interaction of intelligent agents with a musical pitch environment, and we
show how an MOMDP can shed light on building a decision-making model for
musical pitch.
|
1206.0883
|
Bursty egocentric network evolution in Skype
|
physics.soc-ph cs.SI
|
In this study we analyze the dynamics of the contact list evolution of
millions of users of the Skype communication network. We find that egocentric
networks evolve heterogeneously in time as events of edge additions and
deletions of individuals are grouped in long bursty clusters, which are
separated by long inactive periods. We classify users by their link creation
dynamics and show that bursty peaks of contact additions are likely to appear
shortly after user account creation. We also study possible relations between
bursty contact addition activity and other user-initiated actions like free and
paid service adoption events. We show that bursts of contact additions are
associated with increases in activity and adoption, an observation that can
inform the design of targeted marketing tactics.
|
1206.0905
|
A Fuzzy Approach for Pertinent Information Extraction from Web Resources
|
cs.IR
|
Recent work in machine learning for information extraction has focused on two
distinct sub-problems: the conventional problem of filling template slots from
natural language text, and the problem of wrapper induction, learning simple
extraction procedures ("wrappers") for highly structured text such as Web
pages. For suitable regular domains, existing wrapper induction algorithms can
efficiently learn wrappers that are simple and highly accurate, but the
regularity bias of these algorithms makes them unsuitable for most conventional
information extraction tasks. This paper describes a new approach for wrapping
semistructured Web pages. The wrapper is capable of learning how to extract
relevant information from Web resources on the basis of user supplied examples.
It is based on inductive learning techniques as well as fuzzy logic rules.
Experimental results show that our approach achieves noticeably better
precision and recall than SoftMealy, one of the most recently reported
wrappers capable of wrapping semi-structured
Web pages with missing attributes, multiple attributes, variant attribute
permutations, exceptions, and typos.
|
1206.0918
|
Fuzzy Knowledge Representation Based on Possibilistic and Necessary
Bayesian Networks
|
cs.AI
|
In this paper, we address the issue of extending certain networks to fuzzy
certain networks in order to cope with the vagueness and limitations of
existing models for decision making under imprecise and uncertain knowledge.
This paper proposes a framework that
combines two disciplines to exploit their own advantages in uncertain and
imprecise knowledge representation problems. The framework proposed is a
possibilistic logic based one in which Bayesian nodes and their properties are
represented by a local necessity-valued knowledge base. Data in properties are
interpreted as sets of valuated formulas. In our contribution, possibilistic
Bayesian networks have a qualitative part and a quantitative part, represented
by local knowledge bases. The general idea is to study how a fusion of these
two formalisms would permit a compact representation for solving knowledge
representation problems efficiently. We show how to apply possibility and
necessity measures to the problem of knowledge representation with large scale
data. On the other hand, fuzzification of crisp certainty degrees to fuzzy
variables improves the quality of the network and tends to bring smoothness and
robustness in the network performance. The general aim is to provide a new
approach for decision under uncertainty that combines three methodologies:
Bayesian networks, certainty distributions, and fuzzy logic.
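The possibility and necessity measures invoked above are simple to compute from a possibility distribution; a minimal sketch of the generic definitions (not this paper's network machinery):

```python
def possibility(pi, event):
    # Pi(A): the possibility of an event is the max of the distribution over A
    return max(pi[w] for w in event)

def necessity(pi, event):
    # N(A) = 1 - Pi(not A): an event is necessary to the extent that
    # everything outside it is impossible
    complement = set(pi) - set(event)
    return 1.0 - possibility(pi, complement) if complement else 1.0

# a normalized possibility distribution (max value 1) over three states
pi = {"low": 1.0, "medium": 0.4, "high": 0.7}
```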
|
1206.0925
|
Possibilistic Pertinence Feedback and Semantic Networks for Goal's
Extraction
|
cs.AI cs.IR
|
Pertinence Feedback is a technique that enables a user to interactively
express his information requirement by modifying his original query formulation
with further information. This information is provided by explicitly
confirming the pertinence of some indicated objects and/or goals extracted by
the system.
Obviously the user cannot mark objects and/or goals as pertinent until some are
extracted, so the first search has to be initiated by a query and the initial
query specification has to be good enough to pick out some pertinent objects
and/or goals from the Semantic Network. In this paper we present a short survey
of fuzzy and Semantic approaches to Knowledge Extraction. The goal of such
approaches is to define flexible Knowledge Extraction Systems able to deal with
the inherent vagueness and uncertainty of the Extraction process. It has long
been recognised that interactivity improves the effectiveness of Knowledge
Extraction systems. Novice user's queries are the most natural and interactive
medium of communication and recent progress in recognition is making it
possible to build systems that interact with the user. However, given the
typical novice user's queries submitted to Knowledge Extraction Systems, it is
easy to imagine that the effects of goal recognition errors in novice user's
queries must be severely destructive on the system's effectiveness. The
experimental work reported in this paper shows that the use of possibility
theory in classical Knowledge Extraction techniques for novice user's query
processing is more robust than the use of probability theory. Moreover,
both possibilistic and probabilistic pertinence feedback can be effectively
employed to improve the effectiveness of novice user's query processing.
|
1206.0937
|
Detecting Activations over Graphs using Spanning Tree Wavelet Bases
|
stat.ML cs.IT math.IT math.ST stat.TH
|
We consider the detection of activations over graphs under Gaussian noise,
where signals are piece-wise constant over the graph. Despite the wide
applicability of such a detection algorithm, there has been little success in
the development of computationally feasible methods with provable theoretical
guarantees for general graph topologies. We cast this as a hypothesis testing
problem, and first provide a universal necessary condition for asymptotic
distinguishability of the null and alternative hypotheses. We then introduce
the spanning tree wavelet basis over graphs, a localized basis that reflects
the topology of the graph, and prove that for any spanning tree, this approach
can distinguish null from alternative in a low signal-to-noise regime. Lastly,
we improve on this result and show that using the uniform spanning tree in the
basis construction yields a randomized test with stronger theoretical
guarantees that in many cases matches our necessary conditions. Specifically,
we obtain near-optimal performance in edge transitive graphs, $k$-nearest
neighbor graphs, and $\epsilon$-graphs.
|
1206.0956
|
Using Short Synchronous WOM Codes to Make WOM Codes Decodable
|
cs.IT math.IT
|
In the framework of write-once memory (WOM) codes, it is important to
distinguish between codes that can be decoded directly and those that require
that the decoder knows the current generation to successfully decode the state
of the memory. A widely used approach to construct WOM codes is to design first
nondecodable codes that approach the boundaries of the capacity region, and
then make them decodable by appending additional cells that store the current
generation, at an expense of a rate loss. In this paper, we propose an
alternative method to make nondecodable WOM codes decodable by appending cells
that also store some additional data. The key idea is to append to the original
(nondecodable) code a short synchronous WOM code and write generations of the
original code and of the synchronous code simultaneously. We consider both the
binary and the nonbinary case. Furthermore, we propose a construction of
synchronous WOM codes, which are then used to make nondecodable codes
decodable. For short-to-moderate block lengths, the proposed method
significantly reduces the rate loss as compared to the standard method.
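The write-once constraint and generation-aware decoding are easiest to see on the classic Rivest-Shamir (2 bits, 2 writes, 3 cells) code, which is textbook material, not the construction proposed here:

```python
# first-generation codewords for the four 2-bit messages
CODE1 = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
DEC1 = {v: k for k, v in CODE1.items()}

def decode(cells):
    # weight <= 1 means generation 1; otherwise read the bitwise complement
    if sum(cells) <= 1:
        return DEC1[cells]
    return DEC1[tuple(1 - b for b in cells)]

def write2(cells, msg):
    # second write: keep the state if it already decodes to msg, else write
    # the complement of msg's first-generation codeword (only 0 -> 1 flips)
    if decode(cells) == msg:
        return cells
    target = tuple(1 - b for b in CODE1[msg])
    assert all(t >= c for t, c in zip(target, cells)), "write-once violated"
    return target

# every pair (first message, second message) round-trips correctly
for m1 in range(4):
    for m2 in range(4):
        assert decode(write2(CODE1[m1], m2)) == m2
```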
|
1206.0968
|
Pertinent Information retrieval based on Possibilistic Bayesian network
: origin and possibilistic perspective
|
cs.IR
|
In this paper we present a synthesis of work performed on two information
retrieval models: the Bayesian network information retrieval model, which
encodes (in)dependence relations between terms, and the possibilistic network
information retrieval model, which makes use of necessity and possibility
measures to represent the fuzziness of the pertinence measure. It is known
that the use of a
general Bayesian network methodology as the basis for an IR system is difficult
to tackle. The problem mainly appears because of the large number of variables
involved and the computational efforts needed to both determine the
relationships between variables and perform the inference processes. To resolve
these problems, many models have been proposed, such as the BNR model.
Generally, Bayesian network models do not consider the fuzziness of natural
language in the relevance measure of a document to a given query, and
possibilistic models do not capture the dependence relations between terms
used to index documents. As a first solution we propose a hybridization of
these two models
into one that will capture both the relationships between terms and the
intrinsic fuzziness of natural language. We believe that translating the
Bayesian network model from the probabilistic framework to the possibilistic
one will allow a performance improvement of the BNRM.
|
1206.0974
|
Black-box optimization benchmarking of IPOP-saACM-ES on the BBOB-2012
noisy testbed
|
cs.NE
|
In this paper, we study the performance of IPOP-saACM-ES, a recently proposed
self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution
Strategy. The algorithm was tested using restarts until a total number of
function evaluations of $10^6D$ was reached, where $D$ is the dimension of the
function search space. The experiments show that the surrogate model control
allows IPOP-saACM-ES to be as robust as the original IPOP-aCMA-ES and to
outperform the latter by a factor of 2 to 3 on 6 benchmark problems with
moderate noise. On 15 out of 30 benchmark problems in dimension 20,
IPOP-saACM-ES exceeds the records observed during BBOB-2009 and BBOB-2010.
|
1206.0976
|
Loopy Belief Propagation in Bayesian Networks : origin and possibilistic
perspectives
|
cs.AI cs.IR
|
In this paper we present a synthesis of the work performed on two inference
algorithms: Pearl's belief propagation (BP) algorithm, applied to Bayesian
networks without loops (i.e., polytrees), and the loopy belief propagation
(LBP) algorithm (inspired by BP), which is applied to networks containing
undirected cycles. It is known that the BP algorithm, applied to Bayesian
networks with loops, gives incorrect numerical results, i.e., incorrect
posterior probabilities. Murphy et al. [7] found that the LBP algorithm
converges on several networks and that, when this occurs, LBP gives a good
approximation of the exact posterior probabilities. However, this algorithm
exhibits oscillatory behaviour when applied to the QMR (Quick Medical
Reference) network [15]. This phenomenon prevents the LBP algorithm from
converging towards a good approximation of the posterior probabilities. We
believe that translating the inference computation problem from the
probabilistic framework to the possibilistic framework will improve the
performance of the LBP algorithm. We hope that an adaptation of this algorithm
to a possibilistic causal network will improve the convergence of LBP.
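On a polytree, BP is exact, which makes the loopy case's approximation error meaningful. The following minimal sketch (toy CPTs and variable names are illustrative, not from the paper) runs Pearl-style forward messages on the simplest polytree, a chain A -> B -> C, and checks them against brute-force enumeration:

```python
# Minimal sketch of Pearl-style message passing on a chain A -> B -> C,
# the simplest polytree. The CPTs below are illustrative toy values.
P_A = [0.6, 0.4]                      # P(A)
P_B_given_A = [[0.7, 0.3],            # P(B | A=0)
               [0.2, 0.8]]            # P(B | A=1)
P_C_given_B = [[0.9, 0.1],
               [0.4, 0.6]]

def forward_message(prior, cpt):
    """pi message: marginalize the parent out of the child's CPT."""
    return [sum(prior[p] * cpt[p][c] for p in range(2)) for c in range(2)]

P_B = forward_message(P_A, P_B_given_A)
P_C = forward_message(P_B, P_C_given_B)

# Brute-force check: sum over all joint configurations.
brute = [0.0, 0.0]
for a in range(2):
    for b in range(2):
        for c in range(2):
            brute[c] += P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

print(P_C, brute)
```

On a graph with undirected cycles, the same local updates are iterated until (hopefully) convergence, which is exactly the LBP setting discussed above.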
|
1206.0981
|
An Informed Model of Personal Information Release in Social Networking
Sites
|
cs.SI cs.GT physics.soc-ph
|
The emergence of online social networks and the growing popularity of digital
communication have resulted in an increasing amount of information about
individuals available on the Internet. Social network users are given the
freedom to create complex digital identities, and enrich them with truthful or
even fake personal information. However, this freedom has led to serious
security and privacy incidents, due to the role users' identities play in
establishing social and privacy settings.
In this paper, we take a step toward a better understanding of online
information exposure. Based on the detailed analysis of a sample of real-world
data, we develop a deception model for online users. The model uses a game
theoretic approach to characterizing a user's willingness to release, withhold
or lie about information depending on the behavior of individuals within the
user's circle of friends. In the model, we take into account both the
heterogeneous nature of users and their different attitudes, as well as the
different types of information they may expose online.
|
1206.0983
|
Conditional Kolmogorov Complexity and Universal Probability
|
cs.IT math.IT
|
The Coding Theorem of L.A. Levin connects unconditional prefix Kolmogorov
complexity with the discrete universal distribution. There are conditional
versions referred to in several publications, but as yet no written proofs
exist in English. Here we provide those proofs. They use a different
definition than the standard one for the conditional version of the discrete
universal distribution. Under the classic definition of conditional
probability, there is no conditional version of the Coding Theorem.
|
1206.0985
|
Nearly optimal solutions for the Chow Parameters Problem and low-weight
approximation of halfspaces
|
cs.CC cs.DS cs.LG
|
The \emph{Chow parameters} of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$
are its $n+1$ degree-0 and degree-1 Fourier coefficients. It has been known
since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of
any linear threshold function $f$ uniquely specify $f$ within the space of all
Boolean functions, but until recently (O'Donnell and Servedio) nothing was
known about efficient algorithms for \emph{reconstructing} $f$ (exactly or
approximately) from exact or approximate values of its Chow parameters. We
refer to this reconstruction problem as the \emph{Chow Parameters Problem.}
Our main result is a new algorithm for the Chow Parameters Problem which,
given (sufficiently accurate approximations to) the Chow parameters of any
linear threshold function $f$, runs in time $\tilde{O}(n^2)\cdot
(1/\eps)^{O(\log^2(1/\eps))}$ and with high probability outputs a
representation of an LTF $f'$ that is $\eps$-close to $f$. The only previous
algorithm (O'Donnell and Servedio) had running time $\poly(n) \cdot
2^{2^{\tilde{O}(1/\eps^2)}}.$
As a byproduct of our approach, we show that for any linear threshold
function $f$ over $\{-1,1\}^n$, there is a linear threshold function $f'$ which
is $\eps$-close to $f$ and has all weights that are integers at most $\sqrt{n}
\cdot (1/\eps)^{O(\log^2(1/\eps))}$. This significantly improves the best
previous result of Diakonikolas and Servedio which gave a $\poly(n) \cdot
2^{\tilde{O}(1/\eps^{2/3})}$ weight bound, and is close to the known lower
bound of $\max\{\sqrt{n},$ $(1/\eps)^{\Omega(\log \log (1/\eps))}\}$ (Goldberg,
Servedio). Our techniques also yield improved algorithms for related problems
in learning theory.
|
1206.0994
|
An Optimization Framework for Semi-Supervised and Transfer Learning
using Multiple Classifiers and Clusterers
|
cs.LG
|
Unsupervised models can provide supplementary soft constraints to help
classify new, "target" data since similar instances in the target set are more
likely to share the same class label. Such models can also help detect possible
differences between training and target distributions, which is useful in
applications where concept drift may take place, as in transfer learning
settings. This paper describes a general optimization framework that takes as
input class membership estimates from existing classifiers learnt on previously
encountered "source" data, as well as a similarity matrix from a cluster
ensemble operating solely on the target data to be classified, and yields a
consensus labeling of the target data. This framework admits a wide range of
loss functions and classification/clustering methods. It exploits properties of
Bregman divergences in conjunction with Legendre duality to yield a principled
and scalable approach. A variety of experiments show that the proposed
framework can yield results substantially superior to those provided by popular
transductive learning techniques or by naively applying classifiers learnt on
the original task to the target data.
|
1206.1009
|
Opinion groups formation and dynamics : structures that last from non
lasting entities
|
physics.soc-ph cs.SI
|
We extend simple opinion models to obtain stable but continuously evolving
communities. Our aim is to meet a challenge raised by sociologists: generating
"structures that last from non-lasting entities". We achieve this by
introducing two kinds of noise into a standard opinion model. First, agents may
interact with other agents even if their opinion difference is large. Second,
agents randomly change their opinion at a constant rate. We show that, for a
large range of control parameters, our model yields stable yet fluctuating
polarized states, in which the composition and mean opinion of the emerging
groups fluctuate over time.
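The two noise mechanisms described above can be sketched on a standard bounded-confidence (Deffuant-style) model. All parameter values and the update rule below are illustrative assumptions, not the paper's calibration:

```python
import random

# Toy bounded-confidence opinion model with the two kinds of noise named
# in the abstract: (1) rare interactions beyond the confidence threshold,
# (2) random opinion resets at a constant rate. Parameters are illustrative.
def step(opinions, eps=0.2, mu=0.5, p_far=0.05, p_reset=0.01, rng=random):
    i, j = rng.randrange(len(opinions)), rng.randrange(len(opinions))
    if i != j:
        close = abs(opinions[i] - opinions[j]) < eps
        if close or rng.random() < p_far:     # noise 1: long-range interaction
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    for k in range(len(opinions)):            # noise 2: random opinion resets
        if rng.random() < p_reset:
            opinions[k] = rng.random()

rng = random.Random(0)
ops = [rng.random() for _ in range(100)]
for _ in range(20000):
    step(ops, rng=rng)
print(min(ops), max(ops))
```

With `mu = 0.5` an interacting pair moves to its midpoint, so opinions stay in [0, 1]; the resets keep group composition churning even when clusters form.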
|
1206.1011
|
A Machine Learning Approach For Opinion Holder Extraction In Arabic
Language
|
cs.IR cs.LG
|
Opinion mining aims at extracting useful subjective information from large
amounts of text. Opinion holder recognition is a task that has not yet been
considered for the Arabic language. This task essentially requires a deep
understanding of clause structures. Unfortunately, the lack of a robust,
publicly available Arabic parser further complicates the research. This paper
presents pioneering research on opinion holder extraction in Arabic news that
is independent of any lexical parser. We investigate constructing a
comprehensive feature set to compensate for the lack of parsing structural
outcomes. The proposed feature set is adapted from previous work on English,
coupled with our proposed semantic field and named-entity features. Our feature
analysis is based on Conditional Random Fields (CRF) and semi-supervised
pattern recognition techniques. Different research models are evaluated via
cross-validation experiments, achieving an F-measure of 54.03. We publicly
release our research outcome corpus and lexicon to the opinion mining community
to encourage further research.
|
1206.1012
|
A Hybrid Artificial Bee Colony Algorithm for Graph 3-Coloring
|
cs.NE
|
The Artificial Bee Colony (ABC) is the name of an optimization algorithm that
was inspired by the intelligent behavior of honey bee swarms. It is widely
recognized as a quick, reliable, and efficient method for solving optimization
problems. This paper proposes a hybrid ABC (HABC) algorithm for graph
3-coloring, which is a well-known discrete optimization problem. The results of
HABC are compared with the results of the well-known graph coloring algorithms
of today, i.e., Tabucol and the Hybrid Evolutionary Algorithm (HEA), and with
the results of the traditional evolutionary algorithm with the SAW method
(EA-SAW). Extensive experiments have shown that the HABC matched the
competitive results of the best graph coloring algorithms, and did better than
the traditional heuristic EA-SAW when solving equi-partite, flat, and randomly
generated medium-sized graphs.
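The HABC algorithm itself is not reproduced here; the sketch below only illustrates the objective any such colorer optimizes, counting conflicting edges of a 3-coloring, together with a naive min-conflict repair heuristic (an assumed stand-in for the hybrid local search):

```python
import random

# Fitness function for graph 3-coloring (number of monochromatic edges)
# plus a naive min-conflict repair step. Illustrative only; not HABC.
def conflicts(edges, coloring):
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

def greedy_repair(n, edges, coloring, steps=1000, rng=random):
    neighbors = {v: [] for v in range(n)}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for _ in range(steps):
        if conflicts(edges, coloring) == 0:
            break
        v = rng.randrange(n)
        # move v to the color least used among its neighbors
        counts = [0, 0, 0]
        for w in neighbors[v]:
            counts[coloring[w]] += 1
        coloring[v] = counts.index(min(counts))
    return coloring

rng = random.Random(1)
# a 6-cycle is 2-colorable, hence trivially 3-colorable
edges = [(i, (i + 1) % 6) for i in range(6)]
col = greedy_repair(6, edges, [rng.randrange(3) for _ in range(6)], rng=rng)
print(conflicts(edges, col))
```

Since each vertex here has only two neighbors, the min-conflict move always finds an unused color, so repair never increases the conflict count.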
|
1206.1032
|
Frequent Patterns mining in time-sensitive Data Stream
|
cs.DB
|
Mining frequent itemsets from static databases has been extensively studied
and used, and is considered a highly challenging task. For this reason, it is
interesting to extend it to the data stream field. In the streaming case,
frequent pattern mining has much more information to track and much greater
complexity to manage. Infrequent items can become frequent later on and hence
cannot be ignored. The output structure needs to be dynamically incremented to
reflect the evolution of itemset frequencies over time. In this paper, we study
this problem, and specifically the methodology of mining time-sensitive data
streams. We improve an existing algorithm by increasing the temporal accuracy
and discarding out-of-date data through a new concept called the "Shaking
Point". We also present experiments illustrating the time and space required.
|
1206.1042
|
Relevance Feedback for Goal's Extraction from Fuzzy Semantic Networks
|
cs.IR
|
In this paper we present a short survey of fuzzy and Semantic approaches to
Knowledge Extraction. The goal of such approaches is to define flexible
Knowledge Extraction Systems able to deal with the inherent vagueness and
uncertainty of the Extraction process. It has long been recognised that
interactivity improves the effectiveness of Knowledge Extraction systems.
Novice users' queries are the most natural and interactive medium of
communication, and recent progress in recognition is making it possible to
build systems that interact with the user. However, given the typical novice
users' queries submitted to Knowledge Extraction systems, it is easy to imagine
that the effects of goal recognition errors in such queries must be severely
destructive to the system's effectiveness. The experimental work
reported in this paper shows that the use of classical Knowledge Extraction
techniques for novice user's query processing is robust to considerably high
levels of goal recognition errors. Moreover, both standard relevance feedback
and pseudo relevance feedback can be effectively employed to improve the
effectiveness of novice user's query processing.
|
1206.1061
|
Use of Fuzzy Sets in Semantic Nets for Providing On-Line Assistance to
User of Technological Systems
|
cs.AI
|
The main objective of this paper is to develop a new semantic network
structure, based on fuzzy set theory and used in an artificial intelligence
system, in order to provide effective on-line assistance to users of new
technological systems. This semantic network is used to describe the knowledge
of an "ideal" expert, while fuzzy sets are used to describe the approximate
and uncertain knowledge of novice users, who intervene to match the fuzzy
labels of a query with categories from the "ideal" expert. The technical
system we consider is a word processor, with Objects such as "Word" and Goals
such as "Cut" or "Copy". We suggest treating the set of the system's Goals
as a set of linguistic variables, to each of which corresponds a set of
possible linguistic values based on fuzzy sets. We therefore consider a set of
interpretation levels for these possible values, to which corresponds a set of
membership functions. We also propose a method to measure the degree of
similarity between different fuzzy linguistic variables, used to partition the
semantic network into classes of similar objects and thereby ease the
diagnosis of users' fuzzy queries.
|
1206.1065
|
An IMU-Aided Carrier-Phase Differential GPS Positioning System
|
cs.RO
|
We consider the problem of carrier-phase differential GPS positioning for a
land vehicle navigation system (LVNS), tightly coupled with an inertial
measurement unit (IMU) and a speedometer. The primary focus is to apply a
Bayesian network to an IMU-aided GPS positioning system based on carrier-phase
differential GPS. We describe the implementation details of the positioning
system, which integrates GPS measurements (i.e., pseudo-range, carrier-phase
and Doppler), IMU measurements, and speedometer measurements. We derive the
linearized state process equation and the measurement equations for GPS and
the speedometer. To account for the constraints of a land vehicle, we add two
pseudo-measurements to ensure that the perpendicular velocities remain close
to zero.
|
1206.1066
|
Hedge detection as a lens on framing in the GMO debates: A position
paper
|
cs.CL
|
Understanding the ways in which participants in public discussions frame
their arguments is important in understanding how public opinion is formed. In
this paper, we adopt the position that it is time for more
computationally-oriented research on problems involving framing. In the
interests of furthering that goal, we propose the following specific,
interesting and, we believe, relatively accessible question: In the controversy
regarding the use of genetically-modified organisms (GMOs) in agriculture, do
pro- and anti-GMO articles differ in whether they choose to adopt a
"scientific" tone?
Prior work on the rhetoric and sociology of science suggests that hedging may
distinguish popular-science text from text written by professional scientists
for their colleagues. We propose a detailed approach to studying whether hedge
detection can be used to understanding scientific framing in the GMO debates,
and provide corpora to facilitate this study. Some of our preliminary analyses
suggest that hedges occur less frequently in scientific discourse than in
popular text, a finding that contradicts prior assertions in the literature. We
hope that our initial work and data will encourage others to pursue this
promising line of inquiry.
|
1206.1069
|
Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human
Thought
|
cs.AI cs.CL quant-ph
|
We analyze different aspects of our quantum modeling approach of human
concepts, and more specifically focus on the quantum effects of contextuality,
interference, entanglement and emergence, illustrating how each of them makes
its appearance in specific situations of the dynamics of human concepts and
their combinations. We point out the relation of our approach, which is based
on an ontology of a concept as an entity in a state changing under influence of
a context, with the main traditional concept theories, i.e. prototype theory,
exemplar theory and theory theory. We ponder the question of why quantum
theory performs so well in its modeling of human concepts, and shed light on
this question by analyzing the role of complex amplitudes, showing how they
allow one to describe interference in the statistics of measurement outcomes, while
in the traditional theories statistics of outcomes originates in classical
probability weights, without the possibility of interference. The relevance of
complex numbers, the appearance of entanglement, and the role of Fock space in
explaining contextual emergence, all as unique features of the quantum
modeling, are explicitly revealed in this paper by analyzing human concepts and
their dynamics.
|
1206.1074
|
Memetic Artificial Bee Colony Algorithm for Large-Scale Global
Optimization
|
cs.NE cs.AI
|
Memetic computation (MC) has emerged recently as a new paradigm of efficient
algorithms for solving the hardest optimization problems. On the other hand,
artificial bee colony (ABC) algorithms demonstrate good performance when
solving continuous and combinatorial optimization problems. This study tries
to bring these technologies under the same roof. As a result, a memetic ABC
(MABC) algorithm has been developed that is hybridized with two local search
heuristics: the Nelder-Mead algorithm (NMA) and the random walk with direction
exploitation (RWDE). The former is oriented more towards exploration, while
the latter is oriented more towards exploitation of the search space. A
stochastic adaptation rule was employed to control the balance between
exploration and exploitation. The MABC algorithm was applied to the special
suite on large-scale continuous global optimization at the 2012 IEEE Congress
on Evolutionary Computation. The results obtained by MABC are comparable with
the results of DECC-G, DECC-G*, and MLCC.
|
1206.1088
|
Bayesian Structure Learning for Markov Random Fields with a Spike and
Slab Prior
|
stat.ML cs.LG
|
In recent years a number of methods have been developed for automatically
learning the (sparse) connectivity structure of Markov Random Fields. These
methods are mostly based on L1-regularized optimization which has a number of
disadvantages such as the inability to assess model uncertainty and expensive
cross-validation to find the optimal regularization parameter. Moreover, the
model's predictive performance may degrade dramatically with a suboptimal value
of the regularization parameter (which is sometimes desirable to induce
sparseness). We propose a fully Bayesian approach based on a "spike and slab"
prior (similar to L0 regularization) that does not suffer from these
shortcomings. We develop an approximate MCMC method combining Langevin dynamics
and reversible jump MCMC to conduct inference in this model. Experiments show
that the proposed model learns a good combination of the structure and
parameter values without the need for separate hyper-parameter tuning.
Moreover, the model's predictive performance is much more robust than L1-based
methods with hyper-parameter settings that induce highly sparse model
structures.
|
1206.1099
|
Power Grid Vulnerability to Geographically Correlated Failures -
Analysis and Control Implications
|
cs.SY cs.PF math.OC
|
We consider power line outages in the transmission system of the power grid,
and specifically those caused by a natural disaster or a large scale physical
attack. In the transmission system, an outage of a line may lead to overload on
other lines, thereby eventually leading to their outage. While such cascading
failures have been studied before, our focus is on cascading failures that
follow an outage of several lines in the same geographical area. We provide an
analytical model of such failures, investigate the model's properties, and show
that it differs from other models used to analyze cascades in the power grid
(e.g., epidemic/percolation-based models). We then show how to identify the
most vulnerable locations in the grid and perform extensive numerical
experiments with real grid data to investigate the various effects of
geographically correlated outages and the resulting cascades. These results
allow us to gain insights into the relationships between various parameters and
performance metrics, such as the size of the original event, the final number
of connected components, and the fraction of demand (load) satisfied after the
cascade. In particular, we focus on the timing and nature of optimal control
actions used to reduce the impact of a cascade, in real time. We also compare
results obtained by our model to the results of a real cascade that occurred
during a major blackout in the San Diego area in September 2011. The analysis and
results presented in this paper will have implications both on the design of
new power grids and on identifying the locations for shielding, strengthening,
and monitoring efforts in grid upgrades.
|
1206.1105
|
A Linear Circuit Model For Social Influence Analysis
|
cs.SI physics.soc-ph
|
Understanding the behaviors of information propagation is essential for the
effective exploitation of social influence in social networks. However, few
existing influence models are both tractable and efficient for describing the
information propagation process and quantitatively measuring social influence.
To this end, in this paper, we develop a linear social influence model, named
Circuit due to its close relation to the circuit network. Based on the
predefined four axioms of social influence, we first demonstrate that our model
can efficiently measure the influence strength between any pair of nodes. Along
this line, an upper bound of the node(s)' influence is identified for potential
use, e.g., reducing the search space. Furthermore, we provide the physical
implication of the Circuit model and also a deep analysis of its relationships
with the existing methods, such as PageRank. Then, we propose that the Circuit
model provides a natural solution to the problems of computing each single
node's authority and finding a set of nodes for social influence maximization.
At last, the effectiveness of the proposed model is evaluated on the real-world
data. The extensive experimental results demonstrate that the Circuit model
consistently outperforms the state-of-the-art methods and can greatly
alleviate the computational burden of the influence maximization problem.
|
1206.1106
|
No More Pesky Learning Rates
|
stat.ML cs.LG
|
The performance of stochastic gradient descent (SGD) depends critically on
how learning rates are tuned and decreased over time. We propose a method to
automatically adjust multiple learning rates so as to minimize the expected
error at any one time. The method relies on local gradient variations across
samples. In our approach, learning rates can increase as well as decrease,
making it suitable for non-stationary problems. Using a number of convex and
non-convex learning tasks, we show that the resulting algorithm matches the
performance of SGD or other adaptive approaches with their best settings
obtained through systematic search, and effectively removes the need for
learning rate tuning.
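A much-simplified sketch of the idea above sets each parameter's rate from local gradient statistics, eta_i proportional to E[g_i]^2 / E[g_i^2], so the rate shrinks automatically when gradients are noisy relative to their mean. The full method also uses a curvature estimate, which is omitted here; the toy objective and all constants are assumptions for illustration:

```python
import random

# Simplified per-parameter adaptive learning rate inspired by the idea of
# using local gradient variations. NOT the paper's exact rule (no curvature).
def noisy_grad(x, rng):
    # gradient of 0.5*(x0^2 + 10*x1^2) with additive noise (toy objective)
    return [x[0] + rng.gauss(0, 0.1), 10 * x[1] + rng.gauss(0, 0.1)]

def adaptive_sgd(x, steps=2000, base=0.1, decay=0.95, rng=None):
    g_avg = [0.0, 0.0]          # running mean of the gradient
    g_sq = [1e-8, 1e-8]         # running mean of the squared gradient
    for _ in range(steps):
        g = noisy_grad(x, rng)
        for i in range(2):
            g_avg[i] = decay * g_avg[i] + (1 - decay) * g[i]
            g_sq[i] = decay * g_sq[i] + (1 - decay) * g[i] ** 2
            # signal-to-noise ratio in [0, 1] scales the base rate
            lr = base * g_avg[i] ** 2 / (g_sq[i] + 1e-12)
            x[i] -= lr * g[i]
    return x

rng = random.Random(0)
x = adaptive_sgd([5.0, 5.0], rng=rng)
print(x)
```

Because the ratio E[g]^2 / E[g^2] never exceeds 1, the effective rate is bounded by `base`, and it collapses on its own once the gradient signal sinks below the noise floor.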
|
1206.1116
|
Transceiver Design for Multi-user Multi-antenna Two-way Relay Cellular
Systems
|
cs.IT math.IT
|
In this paper, we design interference free transceivers for multi-user
two-way relay systems, where a multi-antenna base station (BS) simultaneously
exchanges information with multiple single-antenna users via a multi-antenna
amplify-and-forward relay station (RS). To offer a performance benchmark and
provide useful insight into the transceiver structure, we employ alternating
optimization to find optimal transceivers at the BS and RS that maximize the
bidirectional sum rate. We then propose a low complexity scheme, where the BS
transceiver is the zero-forcing precoder and detector, and the RS transceiver
is designed to balance the uplink and downlink sum rates. Simulation results
demonstrate that the proposed scheme is superior to the existing zero forcing
and signal alignment schemes, and the performance gap between the proposed
scheme and the alternating optimization is minor.
|
1206.1120
|
Collective Decision Dynamics in the Presence of External Drivers
|
physics.soc-ph cs.SI nlin.AO
|
We develop a sequence of models describing information transmission and
decision dynamics for a network of individual agents subject to multiple
sources of influence. Our general framework is set in the context of an
impending natural disaster, where individuals, represented by nodes on the
network, must decide whether or not to evacuate. Sources of influence include a
one-to-many externally driven global broadcast as well as pairwise
interactions, across links in the network, in which agents transmit either
continuous opinions or binary actions. We consider both uniform and variable
threshold rules on the individual opinion as baseline models for
decision-making. Our results indicate that 1) social networks lead to
clustering and cohesive action among individuals, 2) binary information
introduces high temporal variability and stagnation, and 3) information
transmission over the network can either facilitate or hinder action adoption,
depending on the influence of the global broadcast relative to the social
network. Our framework highlights the essential role of local interactions
between agents in predicting collective behavior of the population as a whole.
|
1206.1121
|
Comparison of the C4.5 and a Naive Bayes Classifier for the Prediction
of Lung Cancer Survivability
|
cs.LG
|
Numerous data mining techniques have been developed to extract information
and identify patterns and predict trends from large data sets. In this study,
two classification techniques, the J48 implementation of the C4.5 algorithm and
a Naive Bayes classifier are applied to predict lung cancer survivability from
an extensive data set with fifteen years of patient records. The purpose of the
project is to verify the predictive effectiveness of the two techniques on
real, historical data. Besides the performance outcome that renders J48
marginally better than the Naive Bayes technique, there is a detailed
description of the data and the required pre-processing activities. The
performance results confirm expectations, while some of the issues that
appeared during experimentation underscore the value of having domain-specific
understanding to leverage any domain-specific characteristics inherent in the
data.
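The study's Weka experiments on clinical records are not reproducible here; the sketch below only shows what the second technique computes, a minimal Gaussian Naive Bayes classifier on a made-up, clearly separable toy data set:

```python
import math

# Minimal Gaussian Naive Bayes: per-class priors, feature means and
# variances, prediction by maximum log-posterior. Toy data, illustrative only.
def fit_gnb(X, y):
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        var = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
               for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, var)
    return model

def predict_gnb(model, x):
    def log_post(c):
        prior, means, var = model[c]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, var):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_post)

X = [[1.0, 1.2], [0.9, 1.0], [1.1, 0.8], [5.0, 5.2], [4.8, 5.1], [5.2, 4.9]]
y = [0, 0, 0, 1, 1, 1]
model = fit_gnb(X, y)
print(predict_gnb(model, [1.0, 1.0]), predict_gnb(model, [5.0, 5.0]))
```

J48/C4.5, by contrast, recursively splits on the feature with the highest information gain ratio, which is why the two models can disagree on the same records.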
|
1206.1134
|
Shortest Paths in Less Than a Millisecond
|
cs.SI cs.DB physics.soc-ph
|
We consider the problem of answering point-to-point shortest path queries on
massive social networks. The goal is to answer queries within tens of
milliseconds while minimizing the memory requirements. We present a technique
that achieves this goal for an extremely large fraction of path queries by
exploiting the structure of the social networks.
Using evaluations on real-world datasets, we argue that our technique offers
a unique trade-off between latency, memory and accuracy. For instance, for the
LiveJournal social network (roughly 5 million nodes and 69 million edges), our
technique can answer 99.9% of the queries in less than a millisecond. In
comparison to storing all pair shortest paths, our technique requires at least
550x less memory; the average query time is roughly 365 microseconds --- 430x
faster than the state-of-the-art shortest path algorithm. Furthermore, the
relative performance of our technique improves with the size (and density) of
the network. For the Orkut social network (3 million nodes and 220 million
edges), for instance, our technique is roughly 2588x faster than the
state-of-the-art algorithm for computing shortest paths.
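The abstract does not spell out the paper's technique, so the sketch below is not their method; a bidirectional BFS merely illustrates why point-to-point distance queries on small-world graphs can be answered far faster than a full single-source search, since both frontiers stay tiny before they meet:

```python
from collections import deque

# Bidirectional BFS for unweighted graphs: expand the smaller frontier one
# full level at a time; after a level, return the best crossing found.
def bidir_bfs_dist(adj, s, t):
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    q_s, q_t = deque([s]), deque([t])
    while q_s and q_t:
        if len(q_s) <= len(q_t):
            frontier, dist, other = q_s, dist_s, dist_t
        else:
            frontier, dist, other = q_t, dist_t, dist_s
        best = None
        for _ in range(len(frontier)):
            u = frontier.popleft()
            for v in adj.get(u, ()):
                if v in other:                      # frontiers meet
                    d = dist[u] + 1 + other[v]
                    best = d if best is None else min(best, d)
                if v not in dist:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
        if best is not None:
            return best
    return None  # disconnected

# 5-cycle: 0-1-2-3-4-0
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(bidir_bfs_dist(adj, 1, 4))
```

On a graph with branching factor b and distance d, the two frontiers touch roughly b^(d/2) nodes each instead of b^d, which is the intuition behind sub-millisecond query times.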
|
1206.1147
|
Memory-Efficient Topic Modeling
|
cs.LG cs.IR
|
As one of the simplest probabilistic topic modeling techniques, latent
Dirichlet allocation (LDA) has found many important applications in text
mining, computer vision and computational biology. Recent training algorithms
for LDA can be interpreted within a unified message passing framework. However,
message passing requires storing previous messages with a large amount of
memory space, increasing linearly with the number of documents or the number of
topics. Therefore, the high memory usage is often a major problem for topic
modeling of massive corpora containing a large number of topics. To reduce the
space complexity, we propose a novel algorithm without storing previous
messages for training LDA: tiny belief propagation (TBP). The basic idea of TBP
relates the message passing algorithms with the non-negative matrix
factorization (NMF) algorithms, which absorb the message updating into the
message passing process, and thus avoid storing previous messages. Experimental
results on four large data sets confirm that TBP performs comparably well or
even better than current state-of-the-art training algorithms for LDA, but
with much lower memory consumption. TBP can perform topic modeling when massive corpora
cannot fit in the computer memory, for example, extracting thematic topics from
7 GB PUBMED corpora on a common desktop computer with 2GB memory.
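TBP itself is not reproduced here; the sketch below only shows the classic multiplicative-update NMF that, per the abstract, TBP's in-place message updates resemble, i.e., updates that overwrite the factors instead of storing per-document messages. Dimensions and data are illustrative:

```python
import random

# Classic NMF multiplicative updates (Frobenius objective), pure Python,
# tiny dimensions. Illustrative of the in-place update idea, not TBP.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, rng=random):
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        num, den = matmul(transpose(W), V), matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        num, den = matmul(V, transpose(H)), matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

rng = random.Random(0)
V = [[1, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 1]]  # rank-2 nonnegative matrix
W, H = nmf(V, 2, rng=rng)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(3))
print(err)
```

The memory point carries over: the factors are updated in place, so nothing proportional to documents-times-topics worth of messages is retained between sweeps.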
|
1206.1208
|
Cumulative Step-size Adaptation on Linear Functions: Technical Report
|
cs.LG
|
The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation,
where the step size is adapted measuring the length of a so-called cumulative
path. The cumulative path is a combination of the previous steps realized by
the algorithm, where the importance of each step decreases with time. This
article studies the CSA-ES on composites of strictly increasing functions with
affine linear functions, through the investigation of its underlying Markov chains.
Rigorous results on the change and the variation of the step size are derived
with and without cumulation. The step-size diverges geometrically fast in most
cases. Furthermore, the influence of the cumulation parameter is studied.
|
1206.1270
|
Factoring nonnegative matrices with linear programs
|
math.OC cs.LG stat.ML
|
This paper describes a new approach, based on linear programming, for
computing nonnegative matrix factorizations (NMFs). The key idea is a
data-driven model for the factorization where the most salient features in the
data are used to express the remaining features. More precisely, given a data
matrix X, the algorithm identifies a matrix C such that X approximately equals
CX, subject to linear constraints. The constraints are chosen to ensure that the
matrix C selects features; these features can then be used to find a low-rank
NMF of X. A theoretical analysis demonstrates that this approach has guarantees
similar to those of the recent NMF algorithm of Arora et al. (2012). In
contrast with this earlier work, the proposed method extends to more general
noise models and leads to efficient, scalable algorithms. Experiments with
synthetic and real datasets provide evidence that the new approach is also
superior in practice. An optimized C++ implementation can factor a
multigigabyte matrix in a matter of minutes.
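The linear program itself is not reproduced here; the tiny hand-built example below (all values assumed for illustration) only demonstrates the key identity the LP searches for: when some rows of X are nonnegative combinations of a few "salient" rows, a selection matrix C satisfies X = CX, and the selected rows form the basis of a low-rank NMF:

```python
# Hand-built illustration of the self-expressive factorization X = C X.
# Rows 0 and 1 are the "salient" features; rows 2 and 3 are expressed by them.
X = [[1, 0, 2],
     [0, 3, 1],
     [1, 3, 3],   # row 2 = row 0 + row 1
     [2, 0, 4]]   # row 3 = 2 * row 0

C = [[1, 0, 0, 0],     # salient rows express themselves
     [0, 1, 0, 0],
     [1, 1, 0, 0],     # row 2 from rows 0 and 1
     [2, 0, 0, 0]]     # row 3 from row 0

CX = [[sum(C[i][k] * X[k][j] for k in range(4)) for j in range(3)]
      for i in range(4)]
print(CX == X)
```

The LP's job is to discover such a C from data alone (with noise, only approximately), with constraints encouraging C to place its mass on few rows.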
|
1206.1282
|
Assisted Common Information with an Application to Secure Two-Party
Sampling
|
cs.IT cs.CR math.IT
|
In this paper we generalize the notion of common information of two dependent
variables introduced by G\'acs & K\"orner. They defined common information as
the largest entropy rate of a common random variable two parties observing one
of the sources each can agree upon. It is well-known that their common
information captures only a limited form of dependence between the random
variables and is zero in most cases of interest. Our generalization, which we
call the Assisted Common Information system, takes into account almost-common
information ignored by G\'acs-K\"orner common information. In the assisted
common information system, a genie assists the parties in agreeing on a more
substantial common random variable; we characterize the trade-off between the
amount of communication from the genie and the quality of the common random
variable produced using a rate region we call the region of tension.
We show that this region has an application in deriving upper bounds on the
efficiency of secure two-party sampling, which is a special case of secure
multi-party computation, a central problem in modern cryptography. Two parties
desire to produce samples of a pair of jointly distributed random variables
such that neither party learns more about the other's output than what its own
output reveals. They have access to a setup (correlated random variables
whose distribution is different from the desired distribution) and noiseless
communication. We present an upper bound on the rate at which a given setup can
be used to produce samples from a desired distribution by showing a
monotonicity property for the region of tension: a protocol between two parties
can only lower the tension between their views. Then, by calculating the bounds
on the region of tension of various pairs of correlated random variables, we
derive bounds on the rate of secure two-party sampling.
|
1206.1291
|
Feature Weighting for Improving Document Image Retrieval System
Performance
|
cs.AI
|
Feature weighting is a technique used to approximate the optimal degree of
influence of individual features. This paper presents a feature weighting
method for Document Image Retrieval System (DIRS) based on keyword spotting. In
this method, we weight the features using the coefficient of multiple
correlation, which can be used to describe the synthesized effects and
correlation of each feature. The aim of this paper is to show that feature
weighting increases the performance of DIRS. After applying the feature
weighting method to DIRS, the average precision is 93.23% and the average
recall is 98.66%.
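As a hedged sketch of how such a weighting could be computed (the abstract does not give the exact mapping from correlations to weights, so the normalisation below is an assumption), the coefficient of multiple correlation of each feature with the rest can be read off the inverse of the feature correlation matrix:

```python
import numpy as np

def multiple_correlation_weights(X):
    """Weight each feature by its coefficient of multiple correlation
    with the remaining features; the normalisation is an assumption.

    X: (n_samples, n_features) feature matrix.
    """
    C = np.corrcoef(X, rowvar=False)      # feature correlation matrix
    Cinv = np.linalg.inv(C)
    # Squared multiple correlation of feature i with all other features:
    # R_i^2 = 1 - 1 / (C^{-1})_{ii}
    r2 = 1.0 - 1.0 / np.diag(Cinv)
    w = np.sqrt(np.clip(r2, 0.0, 1.0))    # R_i, clipped for safety
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 4] = X[:, 0] + 0.1 * rng.normal(size=200)  # feature 4 tracks feature 0
w = multiple_correlation_weights(X)
print(w)  # features 0 and 4 receive the largest weights
```

Features that co-vary strongly with the rest of the feature set get weights near 1, while nearly independent features are down-weighted.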
|
1206.1299
|
Distributed Functional Scalar Quantization Simplified
|
cs.IT math.IT
|
Distributed functional scalar quantization (DFSQ) theory provides optimality
conditions and predicts performance of data acquisition systems in which a
computation on acquired data is desired. We address two limitations of previous
works: prohibitively expensive decoder design and a restriction to sources with
bounded distributions. We rigorously show that a much simpler decoder has
asymptotic performance equivalent to that of the conditional expectation
estimator previously explored, thus reducing decoder design complexity. The simpler
decoder has the feature of decoupled communication and computation blocks.
Moreover, we extend the DFSQ framework with the simpler decoder to acquire
sources with infinite-support distributions such as Gaussian or exponential
distributions. Finally, through simulation results we demonstrate that
performance at moderate coding rates is well predicted by the asymptotic
analysis, and we give new insight on the rate of convergence.
|
1206.1305
|
MACS: An Agent-Based Memetic Multiobjective Optimization Algorithm
Applied to Space Trajectory Design
|
cs.CE cs.NE math.OC
|
This paper presents an algorithm for multiobjective optimization that blends
together a number of heuristics. A population of agents combines heuristics
that aim at exploring the search space both globally and in a neighborhood of
each agent. These heuristics are complemented with a combination of a local and
global archive. The novel agent-based algorithm is tested at first on a set of
standard problems and then on three specific problems in space trajectory
design. Its performance is compared against a number of state-of-the-art
multiobjective optimisation algorithms that use the Pareto dominance as
selection criterion: NSGA-II, PAES, MOPSO, MTS. The results demonstrate that
the agent-based search can identify parts of the Pareto set that the other
algorithms were not able to capture. Furthermore, convergence is statistically
better although the variance of the results is in some cases higher.
|
1206.1307
|
Non-Additivity of the Entanglement of Purification (Beyond Reasonable
Doubt)
|
quant-ph cs.IT math.IT
|
We demonstrate the convexity of the difference between the regularized
entanglement of purification and the entropy, as a function of the state. This
is proved by means of a new asymptotic protocol to prepare a state from
pre-shared entanglement and by local operations only. We go on to employ this
convexity property in an investigation of the additivity of the (single-copy)
entanglement of purification: using numerical results for two-qubit Werner
states we find strong evidence that the entanglement of purification is
different from its regularization, hence that entanglement of purification is
not additive.
|
1206.1309
|
Evidence-Based Robust Design of Deflection Actions for Near Earth
Objects
|
cs.CE cs.NE math.OC stat.AP
|
This paper presents a novel approach to the robust design of deflection
actions for Near Earth Objects (NEO). In particular, the case of deflection by
means of Solar-pumped Laser ablation is studied here in detail. The basic idea
behind Laser ablation is that of inducing a sublimation of the NEO surface,
which produces a low thrust thereby slowly deviating the asteroid from its
initial Earth threatening trajectory. This work investigates the integrated
design of the Space-based Laser system and the deflection action generated by
laser ablation under uncertainty. The integrated design is formulated as a
multi-objective optimisation problem in which the deviation is maximised and
the total system mass is minimised. Both the model for the estimation of the
thrust produced by surface laser ablation and the spacecraft system model are
assumed to be affected by epistemic uncertainties (partial or complete lack of
knowledge). Evidence Theory is used to quantify these uncertainties and
introduce them in the optimisation process. The propagation of the trajectory
of the NEO under the laser-ablation action is performed with a novel approach
based on an approximated analytical solution of Gauss' Variational Equations.
An example of design of the deflection of asteroid Apophis with a swarm of
spacecraft is presented.
|
1206.1313
|
Obtaining Communities with a Fitness Growth Process
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The study of community structure has been a hot topic of research over the
last years. But, while successfully applied in several areas, the concept lacks
a general and precise notion. Facts like the hierarchical structure and
heterogeneity of complex networks make it difficult to unify the idea of
community and its evaluation. The global functional known as modularity is
probably the most used technique in this area. Nevertheless, its limits have
been deeply studied. Local techniques such as the ones by Lancichinetti et al.
and Palla et al. arose as an answer to the resolution limit and degeneracies
that modularity has.
Here we start from the algorithm by Lancichinetti et al. and propose a unique
growth process for a fitness function that, while being local, finds a
community partition that covers the whole network, updating the scale parameter
dynamically. We test the quality of our results by using a set of benchmarks of
heterogeneous graphs. We discuss alternative measures for evaluating the
community structure and, in the light of them, infer possible explanations for
the better performance of local methods compared to global ones in these cases.
|
1206.1319
|
Certain Bayesian Network based on Fuzzy knowledge Bases
|
cs.AI
|
In this paper, we examine trade-offs between fuzzy logic and certain Bayesian
networks, and we propose to combine their respective advantages into fuzzy
certain Bayesian networks (FCBN), certain Bayesian networks over fuzzy random
variables. This paper deals with different definitions and classifications of
uncertainty, sources of uncertainty, and theories and methodologies presented
to deal with uncertainty. Fuzzification of crisp certainty degrees into fuzzy
variables improves the quality of the network and tends to bring smoothness
and robustness to the network's performance. The aim is to provide a new
approach to decision-making under uncertainty that combines three
methodologies: Bayesian networks, certainty distributions, and fuzzy logic.
Within the framework proposed in this paper, we address the issue of extending
certain networks to fuzzy certain networks in order to cope with the vagueness
and limitations of existing models for decision-making under imprecise and
uncertain knowledge.
|
1206.1331
|
Information Diffusion and External Influence in Networks
|
cs.SI physics.soc-ph
|
Social networks play a fundamental role in the diffusion of information.
However, there are two different ways in which information reaches a person in
a network: through connections in our social networks, and through the
influence of external out-of-network sources, such as the mainstream media.
While most present models of information adoption in networks
assume information passes only from node to node via the edges of the
underlying network, the recent availability of massive online social media data
allows us to study this process in more detail. We present a model in which
information can reach a node via the links of the social network or through the
influence of external sources. We then develop an efficient model parameter
fitting technique and apply the model to the emergence of URL mentions in the
Twitter network. Using a complete one month trace of Twitter we study how
information reaches the nodes of the network. We quantify the external
influences over time and describe how these influences affect the information
adoption. We discover that the information tends to "jump" across the network,
which can only be explained as an effect of an unobservable external influence
on the network. We find that only about 71% of the information volume in
Twitter can be attributed to network diffusion, and the remaining 29% is due to
external events and factors outside the network.
|
1206.1336
|
Design of a Formation of Solar Pumped Lasers for Asteroid Deflection
|
math.OC astro-ph.EP cs.CE physics.space-ph
|
This paper presents the design of a multi-spacecraft system for the
deflection of asteroids. Each spacecraft is equipped with a fibre laser and a
solar concentrator. The laser induces the sublimation of a portion of the
surface of the asteroid. The jet of gas and debris thrusts the asteroid off its
natural course. The main idea is to have a swarm of spacecraft flying in the
proximity of the asteroid with all the spacecraft beaming to the same location
to achieve the required deflection thrust. The paper presents the design of the
formation orbits and the multi-objective optimization of the swarm in order to
minimize the total mass in space and maximize the deflection of the asteroid.
The paper demonstrates how significant deflections can be obtained with
relatively small sized, easy-to-control spacecraft.
|
1206.1339
|
Finding Quality Issues in SKOS Vocabularies
|
cs.DL cs.IR
|
The Simple Knowledge Organization System (SKOS) is a standard model for
controlled vocabularies on the Web. However, SKOS vocabularies often differ in
terms of quality, which reduces their applicability across system boundaries.
Here we investigate how we can support taxonomists in improving SKOS
vocabularies by pointing out quality issues that go beyond the integrity
constraints defined in the SKOS specification. We identified potential
quantifiable quality issues and formalized them into computable quality
checking functions that can find affected resources in a given SKOS vocabulary.
We implemented these functions in the qSKOS quality assessment tool, analyzed
15 existing vocabularies, and found possible quality issues in all of them.
|
1206.1389
|
Lossy Computing of Correlated Sources with Fractional Sampling
|
cs.IT math.IT
|
This paper considers the problem of lossy compression for the computation of
a function of two correlated sources, both of which are observed at the
encoder. Due to the presence of observation costs, the encoder is allowed to
observe only subsets of the samples from both sources, with a fraction of such
sample pairs possibly overlapping. The rate-distortion function is
characterized for memory-less sources, and then specialized to Gaussian and
binary sources for selected functions and with quadratic and Hamming distortion
metrics, respectively. The optimal measurement overlap fraction is shown to
depend on the function to be computed by the decoder, on the source statistics,
including the correlation, and on the link rate. Special cases are discussed in
which the optimal overlap fraction is the maximum or minimum possible value
given the sampling budget, illustrating non-trivial performance trade-offs in
the design of the sampling strategy. Finally, the analysis is extended to the
multi-hop set-up with jointly Gaussian sources, where each encoder can observe
only one of the sources.
|
1206.1402
|
A New Greedy Algorithm for Multiple Sparse Regression
|
stat.ML cs.LG
|
This paper proposes a new algorithm for multiple sparse regression in high
dimensions, where the task is to estimate the support and values of several
(typically related) sparse vectors from a few noisy linear measurements. Our
algorithm is a "forward-backward" greedy procedure that -- uniquely -- operates
on two distinct classes of objects. In particular, we organize our target
sparse vectors as a matrix; our algorithm involves iterative addition and
removal of both (a) individual elements, and (b) entire rows (corresponding to
shared features), of the matrix.
Analytically, we establish that our algorithm manages to recover the supports
(exactly) and values (approximately) of the sparse vectors, under assumptions
similar to existing approaches based on convex optimization. However, our
algorithm has a much smaller computational complexity. Perhaps most
interestingly, it is seen empirically to require visibly fewer samples. Ours
represents the first attempt to extend greedy algorithms to the class of models
that can only/best be represented by a combination of component structural
assumptions (sparse and group-sparse, in our case).
|
1206.1405
|
Recovery of Sparse 1-D Signals from the Magnitudes of their Fourier
Transform
|
cs.IT math.IT math.OC
|
The problem of signal recovery from the autocorrelation, or equivalently, the
magnitudes of the Fourier transform, is of paramount importance in various
fields of engineering. In this work, for one-dimensional signals, we give
conditions, which when satisfied, allow unique recovery from the
autocorrelation with very high probability. In particular, for sparse signals,
we develop two non-iterative recovery algorithms. One of them is based on
combinatorial analysis, which we prove can recover signals up to sparsity
$o(n^{1/3})$ with very high probability, and the other is developed using a
convex optimization based framework, which numerical simulations suggest can
recover signals up to sparsity $o(n^{1/2})$ with very high probability.
|
1206.1414
|
An Intelligent Approach for Negotiating between chains in Supply Chain
Management Systems
|
cs.AI
|
Holding commercial negotiations and selecting the best supplier in supply
chain management systems are among the weaknesses of producers in the
production process. Therefore, applying intelligent systems may play an
effective role in increasing the speed and improving the quality of these
selections. This paper introduces a system which attempts to trade using
multi-agent systems, holding negotiations between agents. In this system, an
intelligent agent is assigned to each segment of the chain; it sends orders
and receives responses by participating in the negotiation medium and
communicating with other agents. This paper describes how agents communicate,
the characteristics of the multi-agent system, and the standard registration
medium of each agent in the environment. JADE (Java Agent DEvelopment
Framework) was used for the implementation and simulation of agent cooperation.
|
1206.1418
|
A weighted combination similarity measure for mobility patterns in
wireless networks
|
cs.AI
|
The similarity between trajectory patterns in clustering has played an
important role in discovering movement behaviour of different groups of mobile
objects. Several approaches have been proposed to measure the similarity
between sequences in trajectory data. Most of these measures are based on
Euclidean space or on spatial network and some of them have been concerned with
temporal aspect or ordering types. However, they are not appropriate to
characteristics of spatiotemporal mobility patterns in wireless networks. In
this paper, we propose a new similarity measure for mobility patterns in
cellular space of wireless network. The framework for constructing our measure
is composed of two phases as follows. First, we present formal definitions to
capture mathematically two spatial and temporal similarity measures for
mobility patterns. And then, we define the total similarity measure by means of
a weighted combination of these similarities. The correctness of the partial
and total similarity measures is proved mathematically. Furthermore, instead of
the time interval or ordering, our work makes use of the timestamp at which two
mobility patterns share the same cell. A case study is also described to give a
comparison of the combination measure with other ones.
|
1206.1430
|
Distance Based Asynchronous Recovery Approach in Mobile Computing
Environment
|
cs.DB cs.DC
|
A mobile computing system is a distributed system in which at least one of
the processes is mobile. They are constrained by lack of stable storage, low
network bandwidth, mobility, frequent disconnection and limited battery life.
Checkpointing is one of the commonly used techniques to provide fault tolerance
in mobile computing environment. In order to suit the mobile environment a
distance based recovery scheme is proposed which is based on checkpointing and
message logging. After the system recovers from failures, only the failed
processes rollback and restart from their respective recent checkpoints,
independent of the others. The salient feature of this scheme is to reduce the
transfer and recovery cost. While the mobile host moves within a specific
range, recovery information is not moved; it is transferred nearby only if the
mobile host moves out of that range.
|
1206.1438
|
Adaptive Sensing of Congested Spectrum Bands
|
cs.IT math.IT
|
Cognitive radios process their sensed information collectively in order to
opportunistically identify and access under-utilized spectrum segments
(spectrum holes). Due to the transient and rapidly-varying nature of the
spectrum occupancy, the cognitive radios (secondary users) must be agile in
identifying the spectrum holes in order to enhance their spectral efficiency.
We propose a novel {\em adaptive} procedure to reinforce the agility of the
secondary users for identifying {\em multiple} spectrum holes simultaneously
over a wide spectrum band. This is accomplished by successively {\em exploring}
the set of potential spectrum holes and {\em progressively} allocating the
sensing resources to the most promising areas of the spectrum. Such exploration
and resource allocation results in conservative spending of the sensing
resources and translates into very agile spectrum monitoring. The proposed
successive and adaptive sensing procedure is in contrast to the more
conventional approaches that distribute the sampling resources equally over the
entire spectrum. Besides improved agility, the adaptive procedure requires
less-stringent constraints on the power of the primary users to guarantee that
they remain distinguishable from the environment noise and renders more
reliable spectrum hole detection.
|
1206.1443
|
On applying Neuro - Computing in E-com Domain
|
cs.NE
|
Prior studies have generally suggested that Artificial Neural Networks (ANNs)
are superior to conventional statistical models in predicting consumer buying
behavior. There are, however, contradicting findings which raise questions over
the usefulness of ANNs. This paper discusses the development of three neural networks
for modeling consumer e-commerce behavior and compares the findings to
equivalent logistic regression models. The results showed that ANNs predict
e-commerce adoption slightly more accurately than logistic models but this is
hardly justifiable given the added complexity. Further, ANNs seem to be highly
adaptive, particularly when a small sample is coupled with a large number of
nodes in hidden layers which, in turn, limits the neural networks'
generalisability.
|
1206.1458
|
Dispelling Classes Gradually to Improve Quality of Feature Reduction
Approaches
|
cs.AI
|
Feature reduction is an important concept used to reduce dimensionality,
decreasing the computational complexity and time of classification. To date,
many approaches have been proposed for solving this problem, but almost all of
them produce a fixed output for each input dataset, which in some cases is
unsatisfactory for classification. Here we propose an approach that processes
the input dataset to increase the accuracy of feature extraction methods.
First, a new concept called dispelling classes gradually (DCG) is proposed to
increase the separability of classes based on their labels. Next, this method
is used to process the input dataset of feature reduction approaches,
decreasing the misclassification error rate of their outputs beyond what is
achieved without any processing. In addition, our method copes well with
noise, since it adapts the dataset to the feature reduction approaches. In the
results section, the two conditions (with and without processing) are compared
on several UCI datasets to support our idea.
|
1206.1469
|
Human Arm simulation for interactive constrained environment design
|
cs.RO
|
During the conceptual and prototype design stage of an industrial product, it
is crucial to take assembly/disassembly and maintenance operations into account
in advance.
A well-designed system should enable relatively easy access of operating
manipulators in the constrained environment and reduce musculoskeletal disorder
risks for those manual handling operations. Trajectory planning comes up as an
important issue for those assembly and maintenance operations under a
constrained environment, since it determines the accessibility and the other
ergonomics issues, such as muscle effort and its related fatigue. In this
paper, a customer-oriented interactive approach is proposed to partially solve
ergonomic related issues encountered during the design stage under a
constrained system for the operator's convenience. Based on a single objective
optimization method, trajectory planning for different operators could be
generated automatically. Meanwhile, a motion capture based method assists the
operator to guide the trajectory planning interactively when either a local
minimum is encountered within the single objective optimization or the operator
prefers guiding the virtual human manually. Besides that, a physical engine is
integrated into this approach to provide physically realistic simulation in
real time manner, so that collision free path and related dynamic information
could be computed to determine further muscle fatigue and accessibility of a
product design.
|
1206.1471
|
A new approach to muscle fatigue evaluation for Push/Pull task
|
cs.RO
|
Pushing/Pulling tasks are an important part of work in many industries.
Usually, most researchers study the Push/Pull tasks by analyzing different
posture conditions, force requirements, velocity factors, etc. However few
studies have reported the effects of fatigue. Fatigue caused by physical
loading is one of the main reasons responsible for MusculoSkeletal Disorders
(MSD). In this paper, the muscle groups at each articulation are considered,
and a new joint-level approach is proposed for muscle fatigue evaluation in arm
Push/Pull operations. The objective of this work is to predict the muscle
fatigue situation in the Push/Pull tasks in order to reduce the probability of
MSD problems for workers. A case study is presented to use this new approach
for analyzing arm fatigue in Pushing/Pulling.
|
1206.1492
|
Ordinary Search Engine Users Carrying Out Complex Search Tasks
|
cs.IR
|
Web search engines have become the dominant tools for finding information on
the Internet. Due to their popularity, users apply them to a wide range of
search needs, from simple look-ups to rather complex information tasks. This
paper presents the results of a study to investigate the characteristics of
these complex information needs in the context of Web search engines. The aim
of the study is to find out more about (1) what makes complex search tasks
distinct from simple tasks and if it is possible to find simple measures for
describing their complexity, (2) if search success for a task can be predicted
by means of unique measures, and (3) if successful searchers show a different
behavior than unsuccessful ones. The study includes 60 people who carried out a
set of 12 search tasks with current commercial search engines. Their behavior
was logged with the Search-Logger tool. The results confirm that complex tasks
show significantly different characteristics than simple tasks. Yet it seems to
be difficult to distinguish successful from unsuccessful search behaviors. Good
searchers can be differentiated from bad searchers by means of measurable
parameters. The implications of these findings for search engine vendors are
discussed.
|
1206.1494
|
Impact of Gender and Age on performing Search Tasks Online
|
cs.IR cs.HC
|
More and more people use the Internet to work on duties of their daily work
routine. To find the right information online, Web search engines are the tools
of their choice. Apart from finding facts, people use Web search engines to
also execute rather complex and time consuming search tasks. So far search
engines follow the one-for-all approach to serve its users and little is known
about the impact of gender and age on people's Web search behavior. In this
article we present a study that examines (1) how female and male web users
carry out simple and complex search tasks and what are the differences between
the two user groups, and (2) how the age of the users impacts their search
performance. The laboratory study was done with 56 ordinary people each
carrying out 12 search tasks. Our findings confirm that age impacts behavior
and search performance significantly, while gender influences were smaller than
expected.
|
1206.1515
|
Optimizing Face Recognition Using PCA
|
cs.CV
|
Principal Component Analysis (PCA) is a classical feature extraction and data
representation technique widely used in pattern recognition. It is one of the
most successful techniques in face recognition, but it has the drawback of
high computational cost, especially for large databases. This paper conducts a
study to optimize the time complexity of PCA (eigenfaces) without affecting
the recognition performance. The authors minimize the number of participating eigenvectors,
which consequently decreases the computational time. A comparison is done to
compare the differences between the recognition time in the original algorithm
and in the enhanced algorithm. The performance of the original and the enhanced
proposed algorithm is tested on the face94 face database. Experimental results show
that the recognition time is reduced by 35% by applying our proposed enhanced
algorithm. DET Curves are used to illustrate the experimental results.
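A minimal sketch of the eigenfaces pipeline with a reduced eigenvector set. The abstract does not state the paper's rule for choosing how many eigenvectors to keep, so the variance-retention criterion, the synthetic data, and the helper names below are assumptions:

```python
import numpy as np

def eigenfaces(images, var_kept=0.90):
    """PCA (eigenfaces) keeping only the leading eigenvectors.

    images: (n_images, n_pixels), one flattened face per row.
    var_kept: fraction of variance to retain -- a stand-in for the
    paper's unspecified rule for dropping eigenvectors.
    """
    mean = images.mean(axis=0)
    A = images - mean
    # Thin SVD avoids forming the n_pixels x n_pixels covariance matrix.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    frac = (S ** 2) / (S ** 2).sum()
    k = int(np.searchsorted(np.cumsum(frac), var_kept)) + 1
    return mean, Vt[:k]                      # the k leading eigenfaces

def project(x, mean, basis):
    return basis @ (x - mean)                # coordinates in face space

# Synthetic stand-in for a face database: low-rank structure plus noise.
rng = np.random.default_rng(1)
faces = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 1024))
faces += 0.05 * rng.normal(size=(40, 1024))
mean, basis = eigenfaces(faces)
print(basis.shape[0])  # far fewer eigenvectors than images: faster matching
```

Recognition then compares `project`-ed coordinates, so shrinking the basis shrinks every distance computation at query time.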
|
1206.1518
|
Off-Line Arabic Handwriting Character Recognition Using Word
Segmentation
|
cs.CV
|
The ultimate aim of handwriting recognition is to make computers able to read
and/or authenticate human written texts, with a performance comparable to or
even better than that of humans. Reading means that the computer is given a
piece of handwriting and it provides the electronic transcription of that (e.g.
in ASCII format). There are two types of handwriting recognition: on-line and
off-line. The most important applications of off-line handwriting recognition
are in protection systems and authentication. Arabic handwriting scripts are
much more complicated in comparison to Latin scripts. This paper introduces a
simple and novel methodology to authenticate Arabic handwritten characters. To
reach our aim, we built our own character database. The research methodology
consists of two stages. The first is character extraction, where the word is
preprocessed and then segmented to obtain the characters. The second is
character recognition, by matching the characters comprising the word with the
letters in the database. Our results achieve a character recognition rate of
81%. We eliminate the false acceptance rate (FAR) by using a similarity
percentage between 45% and 55%. Our research is implemented in MATLAB.
|
1206.1529
|
Sparse projections onto the simplex
|
cs.LG stat.ML
|
Most learning methods with rank or sparsity constraints use convex
relaxations, which lead to optimization with the nuclear norm or the
$\ell_1$-norm. However, several important learning applications cannot benefit
from this approach as they feature these convex norms as constraints in
addition to the non-convex rank and sparsity constraints. In this setting, we
derive efficient sparse projections onto the simplex and its extension, and
illustrate how to use them to solve high-dimensional learning problems in
quantum tomography, sparse density estimation and portfolio selection with
non-convex constraints.
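For the simplex case, one natural operator keeps the k largest entries and Euclidean-projects them onto the simplex. This is a sketch; the paper's exact algorithms and their extensions are not spelled out in the abstract, and the inner projection below is the standard sort-based simplex projection:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (classical sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sparse_simplex_projection(v, k):
    """Projection onto {x >= 0, sum(x) = 1, ||x||_0 <= k}: keep the k
    largest entries of v and project them onto the simplex. The
    top-k-then-project rule is our sketch of the paper's operator."""
    x = np.zeros_like(v)
    idx = np.argsort(v)[-k:]                 # support of the projection
    x[idx] = project_simplex(v[idx])
    return x

v = np.array([0.1, 2.0, -0.5, 1.5, 0.2])
x = sparse_simplex_projection(v, k=2)
print(x)  # support on the two largest entries; entries sum to 1
```

Such a projection can then serve as the non-convex step inside projected-gradient loops for problems like sparse density estimation or portfolio selection.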
|
1206.1531
|
k-Connectivity in Random Key Graphs with Unreliable Links
|
cs.IT math.CO math.IT math.PR
|
Random key graphs form a class of random intersection graphs and are
naturally induced by the random key predistribution scheme of Eschenauer and
Gligor for securing wireless sensor network (WSN) communications. Random key
graphs have received much interest recently, owing in part to their wide
applicability in various domains including recommender systems, social
networks, secure sensor networks, clustering and classification analysis, and
cryptanalysis to name a few. In this paper, we study connectivity properties of
random key graphs in the presence of unreliable links. Unreliability of the
edges is captured by independent Bernoulli random variables, rendering edges
of the graph on or off independently of each other. The resulting model
is an intersection of a random key graph and an Erdos-Renyi graph, and is
expected to be useful in capturing various real-world networks; e.g., with
secure WSN applications in mind, link unreliability can be attributed to harsh
environmental conditions severely impairing transmissions. We present
conditions on how to scale this model's parameters so that i) the minimum node
degree in the graph is at least k, and ii) the graph is k-connected, both with
high probability as the number of nodes becomes large. The results are given in
the form of zero-one laws with critical thresholds identified and shown to
coincide for both graph properties. These findings improve the previous results
by Rybarczyk on the k-connectivity of random key graphs (with reliable links),
as well as the zero-one laws by Yagan on the 1-connectivity of random key
graphs with unreliable links.
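The model itself is easy to simulate. The sketch below samples the intersection of a random key graph and an Erdos-Renyi graph and checks the minimum node degree, the necessary condition for k-connectivity discussed above; the parameter values are illustrative, not taken from the paper:

```python
import random
from itertools import combinations

def rkg_with_unreliable_links(n, K, P, alpha, seed=0):
    """Intersection of a random key graph RKG(n; K, P) and an
    Erdos-Renyi graph G(n, alpha): nodes share an edge iff their key
    rings intersect AND the Bernoulli(alpha) link is 'on'."""
    rng = random.Random(seed)
    rings = [set(rng.sample(range(P), K)) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if rings[i] & rings[j] and rng.random() < alpha:
            adj[i].add(j)
            adj[j].add(i)
    return adj

adj = rkg_with_unreliable_links(n=200, K=8, P=100, alpha=0.7)
min_deg = min(len(nb) for nb in adj.values())
print(min_deg)  # min node degree >= k is necessary for k-connectivity
```

Sweeping K, P, and alpha in such a simulation is a quick way to see the sharp threshold behaviour that the zero-one laws formalise.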
|
1206.1534
|
Software Aging Analysis of Web Server Using Neural Networks
|
cs.AI
|
Software aging is a phenomenon that refers to progressive performance
degradation or transient failures or even crashes in long running software
systems such as web servers. It mainly occurs due to the deterioration of
operating system resources, fragmentation and numerical error accumulation. A
basic method to fight software aging is software rejuvenation.
Software rejuvenation is a proactive fault management technique aimed at
cleaning up the system internal state to prevent the occurrence of more severe
crash failures in the future. It involves occasionally stopping the running
software, cleaning its internal state and restarting it. An optimized schedule
for performing software rejuvenation has to be derived in advance, because a
long-running application cannot be stopped frequently, as this would be
costly. This paper proposes a method to derive an accurate and optimized
schedule for rejuvenation of a web server (Apache) by using Radial Basis
Function (RBF) based Feed Forward Neural Network, a variant of Artificial
Neural Networks (ANN). Aging indicators are obtained through experimental setup
involving Apache web server and clients, which acts as input to the neural
network model. This method is better than existing ones because usage of RBF
leads to better accuracy and speed in convergence.
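As a minimal sketch of the RBF idea only (not the paper's model, its aging
indicators, or its training procedure), an exact-interpolation RBF network
with Gaussian units centred at the training points can fit a resource-usage
trend; the data, width, and names below are illustrative assumptions:

```python
import math

def gaussian_solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (helper)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, width=1.0):
    """Exact-interpolation RBF network: one Gaussian unit per training
    point; weights solve the interpolation system G w = y."""
    phi = lambda a, b: math.exp(-((a - b) / width) ** 2)
    G = [[phi(xi, cj) for cj in xs] for xi in xs]
    w = gaussian_solve(G, ys)
    return lambda x: sum(wj * phi(x, cj) for wj, cj in zip(w, xs))

# fit a hypothetical 'memory usage over time' aging trend and read it back
f = rbf_fit([0.0, 1.0, 2.0, 3.0], [100.0, 130.0, 170.0, 220.0])
```

By construction the fitted network reproduces the training samples, which is
the property an aging-trend predictor would then extrapolate from.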
|
1206.1552
|
Performance Analysis of Unsymmetrical Trimmed Median as Detector on
  Image Noises and its FPGA Implementation
|
cs.CV
|
This paper analyzes the performance of the unsymmetrical trimmed median
used as a detector for impulse noise, Gaussian noise, and mixed noise. The
proposed algorithm uses a fixed 3x3 window across increasing noise
densities. The pixels in the current window are sorted using an improved
snake-like sorting algorithm with a reduced comparator count. The processed
pixel is checked for being an outlier: it is flagged if the absolute
difference between it and the window median exceeds a fixed threshold.
Under high noise densities the median itself may be noisy, so it is checked
by the same procedure. If the pixel is found noisy and the median is clean,
the corrupted pixel is replaced by the median of the current processing
window; if the median is also noisy, the corrupted pixel is replaced by the
unsymmetrical trimmed median; otherwise the pixel is deemed uncorrupted and
left unaltered. The proposed algorithm (PA) is tested on images of varying
detail under various noises. It effectively removes high-density
fixed-value impulse noise, low-density random-valued impulse noise,
low-density Gaussian noise, and lower proportions of mixed noise. The
design is targeted to a Xc3e5000-5fg900 FPGA using the Xilinx 7.1 compiler
and requires fewer slices, achieves optimum speed, and consumes lower power
compared to other median-finding architectures.
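The detection/replacement rule described in this abstract can be sketched
per-pixel as follows; the threshold value, the impulse bounds, and the
exact trimming rule are illustrative assumptions, not the paper's:

```python
def trimmed_median_denoise(window, center, threshold=20, lo=0, hi=255):
    """window: the 9 intensities of a 3x3 neighbourhood; center: the
    processed pixel. Returns the (possibly replaced) pixel value."""
    s = sorted(window)
    med = s[4]                          # median of 9 values
    if abs(center - med) <= threshold:
        return center                   # pixel judged uncorrupted
    if lo < med < hi:
        return med                      # median is clean: use it
    # median itself looks like an impulse: trim extreme values first
    trimmed = [p for p in window if lo < p < hi]
    return sorted(trimmed)[len(trimmed) // 2] if trimmed else med
```

A clean pixel passes through unchanged, an impulse is replaced by the
window median, and when the median is itself saturated the unsymmetrical
trimmed median takes over.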
|
1206.1557
|
Soil Data Analysis Using Classification Techniques and Soil Attribute
Prediction
|
cs.AI stat.AP stat.ML
|
Agricultural research has profited from technical advances such as
automation and data mining. Today, data mining is used in a vast range of
areas, and many off-the-shelf data mining products and domain-specific
data mining applications are available; however, data mining on
agricultural soil datasets is a relatively young research field. The large
amounts of data that are nowadays virtually harvested along with the crops
have to be analyzed and should be used to their full extent. This research
aims at the analysis of a soil dataset using data mining techniques. It
focuses on the classification of soil using the various algorithms
available. Another important purpose is to predict untested attributes
using regression techniques, and the implementation of automated soil
sample classification.
|
1206.1566
|
Non-Pauli Observables for CWS Codes
|
quant-ph cs.IT math.IT
|
It is known that nonadditive quantum codes can outperform stabilizer codes
for error correction. The class of codeword stabilized (CWS) codes provides
tools to obtain new nonadditive quantum codes by reducing the problem to
finding nonlinear classical codes. In this work, we establish
some results on the kind of non-Pauli operators that can be used as decoding
observables for CWS codes and describe a procedure to obtain these observables.
|
1206.1579
|
An Efficient Hybrid Ant Colony System for the Generalized Traveling
Salesman Problem
|
cs.AI math.CO math.OC
|
The Generalized Traveling Salesman Problem (GTSP) is an extension of the
well-known Traveling Salesman Problem (TSP), where the node set is partitioned
into clusters, and the objective is to find the shortest cycle visiting each
cluster exactly once. In this paper, we present a new hybrid Ant Colony System
(ACS) algorithm for the symmetric GTSP. The proposed algorithm is a
modification of a simple ACS for the TSP improved by an efficient GTSP-specific
local search procedure. Our extensive computational experiments show that the
use of the local search procedure dramatically improves the performance of the
ACS algorithm, making it one of the most successful GTSP metaheuristics to
date.
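One well-known GTSP-specific local-search step is "cluster optimization":
for a fixed visiting order of clusters, the best node to pick in each
cluster is found by a layered shortest-path DP. The sketch below
illustrates that kind of procedure under assumed data structures; it is
not the authors' hybrid ACS code:

```python
def cluster_optimization(order, clusters, dist):
    """order: cluster indices in visiting order; clusters: index -> node
    list; dist: node -> node -> distance. Returns the shortest cycle
    length achievable for this cluster order."""
    best = None
    for start in clusters[order[0]]:
        cost = {start: 0.0}             # cheapest path so far ending at v
        for c in order[1:]:
            cost = {v: min(cost[u] + dist[u][v] for u in cost)
                    for v in clusters[c]}
        total = min(cost[v] + dist[v][start] for v in cost)  # close cycle
        if best is None or total < best:
            best = total
    return best

# toy instance: picking node 1 from cluster 0 yields the cycle 1-2-3-1
clusters = {0: [0, 1], 1: [2], 2: [3]}
dist = {0: {2: 5, 3: 5}, 1: {2: 1, 3: 1},
        2: {0: 5, 1: 1, 3: 1}, 3: {0: 5, 1: 1, 2: 1}}
best_len = cluster_optimization([0, 1, 2], clusters, dist)   # 1+1+1 = 3
```

Embedded inside a metaheuristic such as ACS, this step re-optimizes node
choices after each tour modification.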
|
1206.1615
|
Objects and Goals Extraction from Semantic Networks: Applications of
  Fuzzy Set Theory
|
cs.IR
|
In this paper we present a short survey of fuzzy and semantic approaches
to knowledge extraction. The goal of such approaches is to define flexible
knowledge extraction systems able to deal with the inherent vagueness and
uncertainty of the extraction process. In this survey we examine whether
and how some of these approaches meet their goal.
|
1206.1623
|
Proximal Newton-type methods for minimizing composite functions
|
stat.ML cs.DS cs.LG cs.NA math.OC
|
We generalize Newton-type methods for minimizing smooth functions to handle a
sum of two convex functions: a smooth function and a nonsmooth function with a
simple proximal mapping. We show that the resulting proximal Newton-type
methods inherit the desirable convergence behavior of Newton-type methods for
minimizing smooth functions, even when search directions are computed
inexactly. Many popular methods tailored to problems arising in bioinformatics,
signal processing, and statistical learning are special cases of proximal
Newton-type methods, and our analysis yields new convergence results for some
of these methods.
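The simplest member of this family is the proximal gradient method, which
is the scaled-identity (H = (1/step)·I) special case of a proximal
Newton-type method. The scalar sketch below, with an assumed l1 problem
and step size, illustrates the prox-step structure rather than the paper's
algorithm:

```python
def soft_threshold(z, t):
    # proximal mapping of t * |x| (an example of a "simple proximal mapping")
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def prox_gradient(grad_f, prox_g, x0, step, iters=200):
    """Iterate x <- prox_g(x - step * grad_f(x), step): a gradient step on
    the smooth part followed by the proximal map of the nonsmooth part."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# minimize 0.5*(x - 3)^2 + 1.0*|x|; the closed-form minimizer is x = 2
lam = 1.0
x_star = prox_gradient(lambda x: x - 3.0,
                       lambda z, s: soft_threshold(z, s * lam),
                       x0=0.0, step=0.5)
```

Proximal Newton-type methods replace the scaled-identity model of the
smooth term with a Hessian (or quasi-Newton) model, which is what the
paper analyzes, including inexact direction computation.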
|
1206.1624
|
Measure of Similarity between Fuzzy Concepts for Optimization of Fuzzy
Semantic Nets
|
cs.IR
|
This paper presents a method to measure the similarity between different
fuzzy concepts in order to optimize semantic networks. The problem
addressed is minimizing the time needed to search for and identify the
user's Objects and Goals. It consists in determining, at each instant, the
set of Objects (respectively Goals) among which the one most satisfactory
to the user's Object and Goal can be identified rapidly. Only the Objects
and Goals most similar, from the viewpoint of attribute values, to the
requested Objects and Goals are processed, which avoids analyzing all
system Objects and Goals far from the user's needs.
|
1206.1625
|
Performance assessment of two active power filter control strategies in
the presence of non-stationary currents
|
math.OC cs.SY
|
This paper describes an active power filter (APF) control strategy, which
eliminates harmonics and compensates reactive power in a three-phase four-wire
power system supplying non-linear unbalanced loads in the presence of
non-linear non-stationary currents. Empirical Mode Decomposition (EMD)
technique, developed as part of the Hilbert-Huang Transform (HHT), is used
to separate the harmonics and non-linear non-stationary disturbances from
the load currents. The control strategy for the APF is formulated by hybridizing
the so called modified p-q theory with the EMD algorithm. A four-leg
split-capacitor converter controlled by hysteresis band current controller is
used as the APF. The results obtained are compared with those of the
conventional modified p-q theory, which lacks a strategy for separating
current harmonics and distortions, in order to validate the proposed
method's performance.
|
1206.1665
|
An Approach In Optimization Of AD-Hoc Routing Algorithms
|
cs.NI cs.IT math.IT
|
In this paper, different optimizations of ad-hoc routing algorithms are
surveyed, and a new method using a training-based optimization algorithm
to reduce the complexity of routing algorithms is suggested. A binary
matrix is assigned to each node in the network and is updated after each
data transfer using the protocols. The use of an optimization algorithm
within the routing algorithm can reduce the complexity of routing to the
least amount possible.
|
1206.1678
|
A Distributed Optimized Patient Scheduling using Partial Information
|
cs.AI
|
A software agent may be a member of a Multi-Agent System (MAS) that
collectively performs a range of complex and intelligent tasks. In
hospitals, scheduling decisions are difficult to make because of dynamic
changes and distribution. To cope with these dynamic changes, a new
method, Distributed Optimized Patient Scheduling with Grouping (DOPSG),
has been proposed. The goal of this method is to avoid the need for global
knowledge of patient-agent information; it works effectively with minimal
information. The scheduling problem can be solved for multiple departments
in the hospital. Patient agents are scheduled to resource agents based on
patient priority, reducing both the waiting time of patient agents and the
idle time of resources.
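A toy version of priority-based dispatch can be sketched as follows; the
field layout and the greedy earliest-free-resource rule are illustrative
assumptions, not the DOPSG protocol itself:

```python
import heapq

def schedule(patients, resources):
    """patients: (priority, name, duration) tuples, lower number = more
    urgent; resources: resource ids. Each patient, in priority order, is
    dispatched to the earliest-free resource."""
    free = [(0, r) for r in resources]          # (time when free, resource)
    heapq.heapify(free)
    plan = []
    for prio, name, duration in sorted(patients):
        t, r = heapq.heappop(free)              # earliest-available resource
        plan.append((name, r, t))               # (patient, resource, start)
        heapq.heappush(free, (t + duration, r))
    return plan

plan = schedule([(2, 'B', 3), (1, 'A', 2), (3, 'C', 1)], ['R1', 'R2'])
```

Serving urgent patients first while always reusing the earliest-free
resource is what keeps both patient waiting time and resource idle time
low in this simplified setting.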
|
1206.1724
|
Softening Fuzzy Knowledge Representation Tool with the Learning of New
Words in Natural Language
|
cs.AI
|
The approach described here allows membership functions to be used to
represent imprecise and uncertain knowledge through learning in Fuzzy
Semantic Networks. This representation is of great practical interest
because it makes it possible, on the one hand, to construct the membership
function from a simple value expressing the degree to which an Object or a
Goal is interpreted relative to another and, on the other hand, to adjust
the membership function during learning. We show how to use these
membership functions to represent the interpretation of a user Object
(respectively Goal) relative to a system Object (respectively Goal). We
also show that a decision can be made for each representation of a user
Object compared to a system Object; this decision is taken by computing a
decision coefficient from the nucleus of the membership function of the
user Object.
|
1206.1728
|
Aggregating Content and Network Information to Curate Twitter User Lists
|
cs.SI cs.AI physics.soc-ph
|
Twitter introduced user lists in late 2009, allowing users to be grouped
according to meaningful topics or themes. Lists have since been adopted by
media outlets as a means of organising content around news stories. Thus the
curation of these lists is important - they should contain the key information
gatekeepers and present a balanced perspective on a story. Here we address this
list curation process from a recommender systems perspective. We propose a
variety of criteria for generating user list recommendations, based on content
analysis, network analysis, and the "crowdsourcing" of existing user lists. We
demonstrate that these types of criteria are often only successful for datasets
with certain characteristics. To resolve this issue, we propose the aggregation
of these different "views" of a news story on Twitter to produce more accurate
user recommendations to support the curation process.
|
1206.1754
|
Internet Advertising: An Interplay among Advertisers, Online Publishers,
Ad Exchanges and Web Users
|
cs.IR
|
Internet advertising is a fast growing business which has proved to be
significantly important in digital economics. It is vitally important for both
web search engines and online content providers and publishers because web
advertising provides them with major sources of revenue. Its presence is
increasingly important for the whole media industry due to the influence of the
Web. For advertisers, it is a smarter alternative to traditional marketing
media such as TVs and newspapers. As the web evolves and data collection
continues, the design of methods for more targeted, interactive, and friendly
advertising may have a major impact on the way our digital economy evolves
and on societal development.
Towards this goal, mathematically well-grounded Computational Advertising
methods are becoming necessary and will continue to develop as a
fundamental tool for the Web. As a vibrant new discipline, Internet
advertising requires effort from different research domains, including
Information Retrieval, Machine Learning, Data Mining and Analytics,
Statistics, Economics,
and even Psychology to predict and understand user behaviours. In this paper,
we provide a comprehensive survey on Internet advertising, discussing and
classifying the research issues, identifying the recent technologies, and
suggesting its future directions. To have a comprehensive picture, we first
start with a brief history, introduction, and classification of the industry
and present a schematic view of the new advertising ecosystem. We then
introduce four major participants, namely advertisers, online publishers, ad
exchanges and web users; and through analysing and discussing the major
research problems and existing solutions from their perspectives respectively,
we discover and aggregate the fundamental problems that characterise the
newly-formed research field and capture its potential future prospects.
|
1206.1794
|
Fuzzy Knowledge Representation, Learning and Optimization with Bayesian
Analysis in Fuzzy Semantic Networks
|
cs.AI
|
This paper presents a method for optimizing Fuzzy Semantic Networks, based
on both Bayesian analysis techniques and Galois lattices. The system we
use learns to interpret an unknown word using the links created between
this new word and known words; the main link is provided by the context of
the query. When a novice's query contains an unknown verb (goal) applied
to a known noun denoting either an object in the ideal user's network or
an object in the user's network, the system infers that this new verb
corresponds to one of the known goals. By learning new words in natural
language along with their interpretation, produced in agreement with the
user, the system improves its representation scheme with each new user
and, in addition, takes advantage of previous discussions with users. The
semantic net of user objects obtained by this kind of learning is not
always optimal, because some relationships between pairs of user objects
can be generalized and others suppressed according to the values of the
forces that characterize them. To simplify the obtained net, we therefore
propose an inductive Bayesian analysis of the net obtained from the Galois
lattice. This analysis can be seen as an operation of filtering the
obtained descriptive graph.
|
1206.1800
|
Compressive neural representation of sparse, high-dimensional
probabilities
|
q-bio.NC cs.IT math.IT
|
This paper shows how sparse, high-dimensional probability distributions could
be represented by neurons with exponential compression. The representation is a
novel application of compressive sensing to sparse probability distributions
rather than to the usual sparse signals. The compressive measurements
correspond to expected values of nonlinear functions of the probabilistically
distributed variables. When these expected values are estimated by sampling,
the quality of the compressed representation is limited only by the quality of
sampling. Since the compression preserves the geometric structure of the space
of sparse probability distributions, probabilistic computation can be performed
in the compressed domain. Interestingly, functions satisfying the requirements
of compressive sensing can be implemented as simple perceptrons. If we use
perceptrons as a simple model of feedforward computation by neurons, these
results show that the mean activity of a relatively small number of neurons can
accurately represent a high-dimensional joint distribution implicitly, even
without accounting for any noise correlations. This comprises a novel
hypothesis for how neurons could encode probabilities in the brain.
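The measurement side of this idea can be sketched very simply: each
measurement is the expected value of a random ±1 function under the
distribution, i.e. something a perceptron-like unit could report. The
ensemble and sizes below are illustrative assumptions, not the paper's
construction:

```python
import random

def compressive_measure(p, m, seed=0):
    """m random-sign measurements y_j = sum_i w_ji * p_i of a probability
    vector p, with w_ji drawn uniformly from {-1, +1}."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1.0, 1.0)) * pi for pi in p) for _ in range(m)]

# a sparse distribution over n = 1000 states with only k = 3 nonzero entries
n = 1000
p = [0.0] * n
p[3], p[250], p[777] = 0.5, 0.3, 0.2
y = compressive_measure(p, m=40)   # 40 numbers stand in for 1000 states
```

Because the weights are ±1 and p sums to one, every measurement is bounded
by 1 in magnitude, and for sparse p a number of measurements far smaller
than the state space suffices for compressive recovery.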
|
1206.1851
|
Concept of drafting detection system in Ironmans
|
cs.SY
|
One of the biggest challenges for the Computer Science of today can be summed
up by the paradigm "access to information from $everywhere$ at $anytime$". This
is especially true for pervasive computing. With the growth of mobile devices
(e.g., smart-phones), on the one hand, and the quick development of the
Internet (this has become the really pervasive network of today), on the other
hand, the development of real-time pervasive applications has broadened. This
paper focuses on the problem of drafting detection in the Ironman triathlons
which causes serious problems for the majority of organizers regarding such
competitions. A concept for a drafting detection system in Ironman events
is based on the paradigm of pervasive computing. Results from a test
system show that this concept can, along with the further development of
computer technologies, become a reality in the near future.
|
1206.1852
|
Optimization of Fuzzy Semantic Networks Based on Galois Lattice and
Bayesian Formalism
|
cs.IR
|
This paper presents a method for optimizing Fuzzy Semantic Networks, based
on both Bayesian analysis techniques and Galois lattices. The system we
use learns by interpreting an unknown word using the links created between
this new word and known words; the main link is provided by the context of
the query. When a novice's query contains an unknown verb (goal) applied
to a known noun denoting either an object in the ideal user's network or
an object in the user's network, the system infers that this new verb
corresponds to one of the known goals. By learning new words in natural
language along with their interpretation, produced in agreement with the
user, the system improves its representation scheme with each new user
and, in addition, takes advantage of previous discussions with users. The
semantic net of user objects thus obtained by learning is not always
optimal, because some relationships between pairs of user objects can be
generalized and others suppressed according to the values of the forces
that characterize them. To simplify the obtained net, we therefore propose
an Inductive Bayesian Analysis of the net obtained from the Galois
lattice. This analysis can be seen as an operation of filtering the
obtained descriptive graph.
|
1206.1891
|
Multi-Scale Link Prediction
|
cs.SI physics.soc-ph
|
The automated analysis of social networks has become an important problem due
to the proliferation of social networks, such as LiveJournal, Flickr and
Facebook. The scale of these social networks is massive and continues to grow
rapidly. An important problem in social network analysis is proximity
estimation that infers the closeness of different users. Link prediction, in
turn, is an important application of proximity estimation. However, many
methods for computing proximity measures have high computational complexity and
are thus prohibitive for large-scale link prediction problems. One way to
address this problem is to estimate proximity measures via low-rank
approximation. However, a single low-rank approximation may not be sufficient
to represent the behavior of the entire network. In this paper, we propose
Multi-Scale Link Prediction (MSLP), a framework for link prediction, which can
handle massive networks. The basic idea of MSLP is to construct low-rank
approximations of the network at multiple scales in an efficient manner. Based
on this approach, MSLP combines predictions at multiple scales to make robust
and accurate predictions. Experimental results on real-life datasets with more
than a million nodes show the superior performance and scalability of our
method.
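A rank-1 spectral approximation of the adjacency matrix is the smallest
instance of the low-rank proximity estimates discussed here; MSLP builds
such approximations at multiple scales, which this single-scale sketch
does not attempt:

```python
def rank1_scores(A, iters=100):
    """Rank-1 approximation A ~ lam * v v^T via power iteration; the
    entry lam * v[i] * v[j] serves as a proximity score for pair (i, j)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))
    return [[lam * v[i] * v[j] for j in range(n)] for i in range(n)]

# triangle {0, 1, 2} with a pendant node 3 attached to node 2
A = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
S = rank1_scores(A)
```

Pairs inside the dense core receive higher scores than pairs involving the
peripheral node, which is the qualitative behavior link prediction relies
on; higher-rank and multi-scale variants refine this.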
|