| id | title | categories | abstract |
|---|---|---|---|
1303.5752 | About Updating | cs.AI | Survey of several forms of updating, with a practical illustrative example.
We study several updating (conditioning) schemes that emerge naturally from a
common scenario to provide some insights into their meaning. Updating is a
subtle operation and there is no single method, no single 'good' rule. The
choice of the appropriate rule must always be given due consideration. Planchet
(1989) presents a mathematical survey of many rules. We focus on the practical
meaning of these rules. After summarizing the several rules for conditioning,
we present an illustrative example in which the various forms of conditioning
can be explained.
|
1303.5753 | Compressed Constraints in Probabilistic Logic and Their Revision | cs.AI | In probabilistic logic entailments, even moderate size problems can yield
linear constraint systems with so many variables that exact methods are
impractical. This difficulty can be remedied in many cases of interest by
introducing a three-valued logic (true, false, and "don't care"). The
three-valued approach allows the construction of "compressed" constraint
systems which have the same solution sets as their two-valued counterparts, but
which may involve dramatically fewer variables. Techniques to calculate point
estimates for the posterior probabilities of entailed sentences are discussed.
|
1303.5754 | Detecting Causal Relations in the Presence of Unmeasured Variables | cs.AI | The presence of latent variables can greatly complicate inferences about
causal relations between measured variables from statistical data. In many
cases, the presence of latent variables makes it impossible to determine for
two measured variables A and B, whether A causes B, B causes A, or there is
some common cause. In this paper I present several theorems that state
conditions under which it is possible to reliably infer the causal relation
between two measured variables, regardless of whether latent variables are
acting or not.
|
1303.5755 | A Method for Integrating Utility Analysis into an Expert System for
Design Evaluation | cs.AI | In mechanical design, there is often unavoidable uncertainty in estimates of
design performance. Evaluation of design alternatives requires consideration of
the impact of this uncertainty. Expert heuristics embody assumptions regarding
the designer's attitude towards risk and uncertainty that might be reasonable
in most cases but inaccurate in others. We present a technique to allow
designers to incorporate their own unique attitude towards uncertainty as
opposed to those assumed by the domain expert's rules. The general approach is
to eliminate aspects of heuristic rules which directly or indirectly include
assumptions regarding the user's attitude towards risk, and replace them with
explicit, user-specified probabilistic multi-attribute utility and probability
distribution functions. We illustrate the method in a system for material
selection for automobile bumpers.
|
1303.5756 | From Relational Databases to Belief Networks | cs.AI | The relationship between belief networks and relational databases is
examined. Based on this analysis, a method to construct belief networks
automatically from statistical relational data is proposed. A comparison
between our method and other methods shows that our method has several
advantages when generalization or prediction is needed.
|
1303.5757 | A Monte-Carlo Algorithm for Dempster-Shafer Belief | cs.AI | A very computationally-efficient Monte-Carlo algorithm for the calculation of
Dempster-Shafer belief is described. If Bel is the combination using Dempster's
Rule of belief functions Bel_1, ..., Bel_m, then, for a subset b of the frame
Θ, Bel(b) can be calculated in time linear in |Θ| and m (given that the weight of
conflict is bounded). The algorithm can also be used to improve the complexity
of the Shenoy-Shafer algorithms on Markov trees, and be generalised to
calculate Dempster-Shafer Belief over other logics.
|
1303.5758 | Compatibility of Quantitative and Qualitative Representations of Belief | cs.AI | The compatibility of quantitative and qualitative representations of beliefs
has been studied extensively in probability theory. Only recently has this
important topic been considered in the context of belief functions. In this
paper, the compatibility of various quantitative belief measures and
qualitative belief structures is investigated. The four classes of belief measures
considered are the probability function, the monotonic belief function,
Shafer's belief function, and Smets' generalized belief function. The analysis
of their individual compatibility with different belief structures not only
provides a sound basis for these quantitative measures, but also alleviates
some of the difficulties in the acquisition and interpretation of numeric
belief numbers. It is shown that the structure of qualitative probability is
compatible with monotonic belief functions. Moreover, a belief structure
slightly weaker than that of qualitative belief is compatible with Smets'
generalized belief functions.
|
1303.5759 | An Efficient Implementation of Belief Function Propagation | cs.AI | The local computation technique (Shafer et al. 1987, Shafer and Shenoy 1988,
Shenoy and Shafer 1986) is used for propagating belief functions in a so-called
Markov tree. In this paper, we describe an efficient implementation of belief
function propagation on the basis of the local computation technique. The
presented method avoids all the redundant computations in the propagation
process, and so makes the computational complexity decrease with respect to
other existing implementations (Hsia and Shenoy 1989, Zarley et al. 1988). We
also give a combined algorithm for both propagation and re-propagation which
makes the re-propagation process more efficient when one or more of the prior
belief functions is changed.
|
1303.5760 | A Non-Numeric Approach to Multi-Criteria/Multi-Expert Aggregation Based
on Approximate Reasoning | cs.AI | We describe a technique that can be used for the fusion of multiple sources
of information as well as for the evaluation and selection of alternatives
under multi-criteria. Three important properties contribute to the uniqueness
of the technique introduced. The first is the ability to do all necessary
operations and aggregations with information that is of a non-numeric linguistic
nature. This facility greatly reduces the burden on the providers of
information, the experts. A second characterizing feature is the ability to
assign, again linguistically, differing importance to the criteria or, in the
case of information fusion to the individual sources of information. A third
significant feature of the approach is its ability to be used as a method to find
a consensus of the opinions of multiple experts on the issue of concern. The
techniques used in this approach are based on ideas developed from the theory of
approximate reasoning. We illustrate the approach with a problem of project
selection.
|
1303.5761 | Why Do We Need Foundations for Modelling Uncertainties? | cs.AI | Surely we want solid foundations. What kind of castle can we build on sand?
What is the point of devoting effort to balconies and minarets, if the
foundation may be so weak as to allow the structure to collapse of its own
weight? We want our foundations set on bedrock, designed to last for
generations. Who would want an architect who cannot certify the soundness of
the foundations of his buildings?
|
1303.5778 | Speech Recognition with Deep Recurrent Neural Networks | cs.NE cs.CL | Recurrent neural networks (RNNs) are a powerful model for sequential data.
End-to-end training methods such as Connectionist Temporal Classification make
it possible to train RNNs for sequence labelling problems where the
input-output alignment is unknown. The combination of these methods with the
Long Short-term Memory RNN architecture has proved particularly fruitful,
delivering state-of-the-art results in cursive handwriting recognition. However,
RNN performance in speech recognition has so far been disappointing, with
better results returned by deep feedforward networks. This paper investigates
\emph{deep recurrent neural networks}, which combine the multiple levels of
representation that have proved so effective in deep networks with the flexible
use of long range context that empowers RNNs. When trained end-to-end with
suitable regularisation, we find that deep Long Short-term Memory RNNs achieve
a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to
our knowledge is the best recorded score.
|
1303.5802 | Sparsity-leveraging Reconfiguration of Smart Distribution Systems | math.OC cs.SY | A system reconfiguration problem is considered for three-phase power
distribution networks featuring distributed generation. In lieu of binary line
selection variables, the notion of group sparsity is advocated to re-formulate
the nonconvex distribution system reconfiguration (DSR) problem into a convex
one. Using duality theory, it is shown that the line selection task boils
down to a shrinkage and thresholding operation on the line currents. Further,
numerical tests illustrate the ability of the proposed scheme to identify
meshed, weakly-meshed, or even radial configurations by adjusting a
sparsity-tuning parameter in the DSR cost. Constraints on the voltages are
investigated, and incorporated in the novel DSR problem to effect voltage
regulation.
|
1303.5805 | Optimal Placement of Distributed Energy Storage in Power Networks | math.OC cs.SY | We formulate the optimal placement, sizing and control of storage devices in
a power network to minimize generation costs with the intent of load shifting.
We assume deterministic demand, a linearized DC approximated power flow model
and a fixed available storage budget. Our main result proves that when the
generation costs are convex and nondecreasing, there always exists an optimal
storage capacity allocation that places zero storage at generation-only buses
that connect to the rest of the network via single links. This holds regardless
of the demand profiles, generation capacities, line-flow limits and
characteristics of the storage technologies. Through a counterexample, we
illustrate that this result is not generally true for generation buses with
multiple connections. For specific network topologies, we also characterize the
dependence of the optimal generation cost on the available storage budget,
generation capacities and flow constraints.
|
1303.5841 | Adaptive-Gain Second Order Sliding Mode Observer Design for Switching
Power Converters | cs.SY | In this paper, a novel adaptive-gain Second Order Sliding Mode (SOSM)
observer is proposed for multicell converters by considering them as a class of
hybrid systems. The aim is to reduce the number of voltage sensors by
estimating the capacitor voltages only from the measurement of load current.
The proposed observer is proven to be robust in the presence of perturbations
with \emph{unknown} boundary. However, the states of the system are only
partially observable in the sense of the observability rank condition. Due to
its switching behavior, a recent concept of $Z(T_N)$ observability is used to
analyze its hybrid observability, since its observability depends upon the
switching control signals. Under certain conditions on the switching sequences,
the voltage across each capacitor becomes observable. Simulation results and
comparisons with a Luenberger switched observer highlight the effectiveness and
robustness of the proposed observer with respect to output measurement noise
and system uncertainties (load variations).
|
1303.5842 | Observer-Based High Order Sliding Mode Control of Unity Power Factor in
Three-Phase AC/DC Converter for Hybrid Electric Vehicle Applications | math.OC cs.SY | In this paper, a full-bridge boost power converter topology is studied for
power factor control, using output high order sliding mode control. The AC/DC
converters are used for charging the battery and super-capacitor in hybrid
electric vehicles from the utility. The proposed control forces the input
currents to track the desired values, which controls the output voltage
while keeping the power factor close to one. A super-twisting sliding mode
observer is employed to estimate the input currents and load resistance only
from the measurement of output voltage. Lyapunov analysis shows the asymptotic
convergence of the closed loop system to zero. Simulation results show the
effectiveness and robustness of the proposed controller.
|
1303.5844 | Correlations and Scaling Laws in Human Mobility | physics.soc-ph cs.SI | Human mobility patterns deeply affect the dynamics of many social systems. In
this paper, we empirically analyze real-world human movements based on GPS
records, and observe rich scaling properties in the temporal-spatial patterns
as well as an abnormal transition in the speed-displacement patterns. We notice
that the displacements at the population level show significant positive
correlation, indicating a cascade-like nature in human movements. Furthermore,
our analysis at the individual level finds that the displacement distributions
of users with strong correlation of displacements are closer to power laws,
implying a relationship between the positive correlation of the series of
displacements and the form of an individual's displacement distribution. These
findings from our empirical analysis show a factor directly relevant to the
origin of the scaling properties in human mobility.
|
1303.5855 | Overlapping Community Detection in Complex Networks using Symmetric
Binary Matrix Factorization | cs.SI physics.soc-ph | Discovering overlapping community structures is a crucial step to
understanding the structure and dynamics of many networks. In this paper we
develop a symmetric binary matrix factorization model (SBMF) to identify
overlapping communities. Our model allows us not only to assign community
memberships explicitly to nodes, but also to distinguish outliers from
overlapping nodes. In addition, we propose a modified partition density to
evaluate the quality of community structures. We use this to determine the most
appropriate number of communities. We evaluate our methods using both synthetic
benchmarks and real world networks, demonstrating the effectiveness of our
approach.
|
1303.5857 | Model of complex networks based on citation dynamics | cs.SI physics.soc-ph | Complex networks of real-world systems are believed to be controlled by
common phenomena, producing structures far from regular or random. These
include scale-free degree distributions, small-world structure and assortative
mixing by degree, which are also the properties captured by different random
graph models proposed in the literature. However, many (non-social) real-world
networks are in fact disassortative by degree. Thus, we here propose a simple
evolving model that generates networks with most common properties of
real-world networks including degree disassortativity. Furthermore, the model
has a natural interpretation for citation networks with different practical
applications.
|
1303.5867 | Similarity based Dynamic Web Data Extraction and Integration System from
Search Engine Result Pages for Web Content Mining | cs.IR cs.DB | There is an explosive growth of information in the World Wide Web thus posing
a challenge to Web users to extract essential knowledge from the Web. Search
engines help us to narrow down the search in the form of Search Engine Result
Pages (SERP). Web Content Mining is one of the techniques that help users to
extract useful information from these SERPs. In this paper, we propose two
similarity-based mechanisms: WDES, to extract desired SERPs and store them in
the local repository for offline browsing, and WDICS, to integrate the requested
contents and enable the user to perform the intended analysis and extract the
desired information. Our experimental results show that WDES and WDICS
outperform DEPTA [1] in terms of Precision and Recall.
|
1303.5887 | A Behavioural Foundation for Natural Computing and a Programmability
Test | cs.IT cs.AI cs.CC math.IT | What does it mean to claim that a physical or natural system computes? One
answer, endorsed here, is that computing is about programming a system to
behave in different ways. This paper offers an account of what it means for a
physical system to compute based on this notion. It proposes a behavioural
characterisation of computing in terms of a measure of programmability, which
reflects a system's ability to react to external stimuli. The proposed measure
of programmability is useful for classifying computers in terms of the apparent
algorithmic complexity of their evolution in time. I make some specific
proposals in this connection and discuss this approach in the context of other
behavioural approaches, notably Turing's test of machine intelligence. I also
anticipate possible objections and consider the applicability of these
proposals to the task of relating abstract computation to nature-like
computation.
|
1303.5903 | How Do We Find Early Adopters Who Will Guide a Resource Constrained
Network Towards a Desired Distribution of Behaviors? | cs.SI physics.soc-ph | We identify influential early adopters that achieve a target behavior
distribution for a resource constrained social network with multiple costly
behaviors. This problem is important for applications ranging from collective
behavior change to corporate viral marketing campaigns. In this paper, we
propose a model of diffusion of multiple behaviors when individual participants
have resource constraints. Individuals adopt the set of behaviors that maximize
their utility subject to available resources. We show that the problem of
influence maximization for multiple behaviors is NP-complete. Thus we propose
heuristics, which are based on node degree and expected immediate adoption, to
select early adopters. We evaluate their effectiveness under three metrics:
unique number of participants, total number of active behaviors and network
resource utilization. We also propose heuristics to distribute the behaviors
amongst the early adopters to achieve a target distribution in the population.
We test our approach on synthetic and real-world topologies with excellent
results. Our heuristics produce a 15-51% increase in resource utilization over
the naïve approach.
|
1303.5909 | Genetic Algorithm with a Local Search Strategy for Discovering
Communities in Complex Networks | cs.SI physics.soc-ph | In order to further improve the performance of current genetic algorithms
aimed at discovering communities, a local-search-based genetic algorithm, GALS,
is proposed here. The core of GALS is a local-search-based mutation technique.
In order to overcome the drawbacks of traditional mutation methods, the paper
develops the concept of a marginal gene, and the local monotonicity of the
modularity function Q is deduced from each node's local view. Based on these two
elements, a new mutation method combined with a local search strategy is
presented. GALS has been evaluated on both synthetic benchmarks and several
real networks, and compared with some presently competing algorithms.
Experimental results show that GALS is highly effective and efficient for
discovering community structure.
|
1303.5910 | Ant Colony Optimization with a New Random Walk Model for Community
Detection in Complex Networks | cs.SI physics.soc-ph | Detecting communities from complex networks has recently triggered great
interest. Aiming at this problem, a new ant colony optimization strategy
building on Markov random walk theory, named MACO, is proposed
in this paper. The framework of ant colony optimization is taken as the basic
framework in this algorithm. In each iteration, a Markov random walk model is
employed as the heuristic rule; all of the ants' local solutions are aggregated
into a global one through the idea of clustering ensembles, which is then used to
update a pheromone matrix. The strategy relies on the progressive strengthening
of within-community links and the weakening of between-community links.
Gradually this converges to a solution where the underlying community structure
of the complex network will become clearly visible. The proposed MACO has been
evaluated both on synthetic benchmarks and on some real-world networks, and
compared with some existing competing algorithms. Experimental results have shown
that MACO is highly effective for discovering communities.
|
1303.5912 | Fast Complex Network Clustering Algorithm Using Agents | cs.SI physics.soc-ph | Networks today are often very large and distributed in
nature. Aiming at this kind of network clustering problem, from a local point
of view, this paper proposes a fast network clustering algorithm in
which each node is regarded as an agent, and each agent tries to maximize its
local function in order to optimize network modularity defined by function Q,
rather than optimize function Q from the global view as traditional methods.
Both the efficiency and effectiveness of this algorithm are tested against
computer-generated and real-world networks. Experimental results show that this
algorithm not only has the ability of clustering large-scale networks, but also
can attain very good clustering quality compared with the existing algorithms.
Furthermore, the parameters of this algorithm are analyzed.
|
1303.5913 | A Diffusion Process on Riemannian Manifold for Visual Tracking | cs.CV cs.LG cs.RO stat.ML | Robust visual tracking for long video sequences is a research area that has
many important applications. The main challenges include how the target image
can be modeled and how this model can be updated. In this paper, we model the
target using a covariance descriptor, as this descriptor is robust to problems
such as pixel-to-pixel misalignment and pose and illumination changes that commonly
occur in visual tracking. We model the changes in the template using a
generative process. We introduce a new dynamical model for the template update
using a random walk on the Riemannian manifold on which the covariance descriptors
lie. This is done in the log-transformed space of the manifold to free the
constraints imposed inherently by positive semidefinite matrices. Modeling
template variations and pose kinetics together in the state space enables us
to jointly quantify the uncertainties relating to the kinematic states and the
template in a principled way. Finally, the sequential inference of the
posterior distribution of the kinematic states and the template is done using a
particle filter. Our results show that this principled approach can be robust
to changes in illumination, poses and spatial affine transformation. In the
experiments, our method outperformed the current state-of-the-art algorithm -
the incremental Principal Component Analysis method, particularly when a target
underwent fast pose changes, and it also maintained comparable performance in
stable target tracking cases.
|
1303.5919 | Heart Disease Prediction System using Associative Classification and
Genetic Algorithm | cs.AI stat.AP | Associative classification is a recent and rewarding technique which
integrates association rule mining and classification to a model for prediction
and achieves maximum accuracy. Associative classifiers are especially fit to
applications where maximum accuracy is desired to a model for prediction. There
are many domains such as medical where the maximum accuracy of the model is
desired. Heart disease is a single largest cause of death in developed
countries and one of the main contributors to disease burden in developing
countries. Mortality data from the registrar general of India shows that heart
disease are a major cause of death in India, and in Andhra Pradesh coronary
heart disease cause about 30%of deaths in rural areas. Hence there is a need to
develop a decision support system for predicting heart disease of a patient. In
this paper we propose efficient associative classification algorithm using
genetic approach for heart disease prediction. The main motivation for using
genetic algorithm in the discovery of high level prediction rules is that the
discovered rules are highly comprehensible, having high predictive accuracy and
of high interestingness values. Experimental results show that most of the
classifier rules help in the best prediction of heart disease which even helps
doctors in their diagnosis decisions.
|
1303.5929 | DLOLIS-A: Description Logic based Text Ontology Learning | cs.AI | Ontology Learning has been the subject of intensive study for the past
decade. Researchers in this field have been motivated by the possibility of
automatically building a knowledge base on top of text documents so as to
support reasoning based knowledge extraction. While most works in this field
have been primarily statistical (known as light-weight Ontology Learning),
little attempt has been made at axiomatic Ontology Learning (called heavy-weight
Ontology Learning) from natural-language text documents. Heavy-weight Ontology
Learning supports more precise formal logic-based reasoning when compared to
statistical ontology learning. In this paper we have proposed a sound Ontology
Learning tool DLOL_(IS-A) that maps English language IS-A sentences into their
equivalent Description Logic (DL) expressions in order to automatically
generate a consistent pair of T-box and A-box thereby forming both regular
(definitional form) and generalized (axiomatic form) DL ontology. The current
scope of the paper is strictly limited to IS-A sentences that exclude the
possible structures of: (i) implicative IS-A sentences, and (ii) "Wh" IS-A
questions. Other linguistic nuances that arise out of the pragmatics and
epistemics of IS-A sentences are beyond the scope of the present work. We have adopted
Gold Standard based Ontology Learning evaluation on chosen IS-A rich Wikipedia
documents.
|
1303.5942 | Exact simulation of the GHZ distribution | cs.IT math.IT quant-ph | John Bell has shown that the correlations entailed by quantum mechanics
cannot be reproduced by a classical process involving non-communicating
parties. But can they be simulated with the help of bounded communication? This
problem has been studied for more than two decades and it is now well
understood in the case of bipartite entanglement. However, the issue was still
widely open for multipartite entanglement, even for the simplest case, which is
the tripartite Greenberger-Horne-Zeilinger (GHZ) state. We give an exact
simulation of arbitrary independent von Neumann measurements on general
n-partite GHZ states. Our protocol requires O(n^2) bits of expected
communication between the parties, and O(n log n) expected time is sufficient
to carry it out in parallel. Furthermore, we need only an expectation of O(n)
independent unbiased random bits, with no need for the generation of continuous
real random variables nor prior shared random variables. In the case of
equatorial measurements, we improve on the prior art with a protocol that needs
only O(n log n) bits of communication and O(log^2 n) parallel time. At the cost
of a slight increase in the number of bits communicated, these tasks can be
accomplished with a constant expected number of rounds.
|
1303.5947 | Effect of Receive Spatial Diversity on the Degrees of Freedom Region in
Multi-Cell Random Beamforming | cs.IT math.IT | The random beamforming (RBF) scheme, jointly applied with multi-user
diversity based scheduling, is able to achieve virtually interference-free
downlink transmissions with only partial channel state information (CSI)
available at the transmitter. However, the impact of receive spatial diversity
on the rate performance of RBF is not fully characterized yet even in a
single-cell setup. In this paper, we study a multi-cell multiple-input
multiple-output (MIMO) broadcast system with RBF applied at each base station
(BS) and either the minimum-mean-square-error (MMSE), matched filter (MF), or
antenna selection (AS) based spatial receiver employed at each mobile terminal.
We investigate the effect of different spatial diversity receivers on the
achievable sum-rate of multi-cell RBF systems subject to both the intra- and
inter-cell interferences. We first derive closed-form expressions for the
distributions of the receiver signal-to-interference-plus-noise ratio (SINR)
with different spatial diversity techniques, based on which we compare their
rate performances at finite signal-to-noise ratios (SNRs). We then investigate
the asymptotically high-SNR regime and for a tractable analysis assume that the
number of users in each cell scales in a certain order with the per-cell SNR as
SNR goes to infinity. Under this setup, we characterize the degrees of freedom
(DoF) region for multi-cell RBF systems with different types of spatial
receivers, which consists of all the achievable DoF tuples for the individual
sum-rate of all the cells. The DoF region analysis provides a succinct
characterization of the interplays among the receive spatial diversity,
multiuser diversity, spatial multiplexing gain, inter-/intra-cell
interferences, and BSs' collaborative transmission.
|
1303.5960 | SYNTAGMA. A Linguistic Approach to Parsing | cs.CL | SYNTAGMA is a rule-based parsing system, structured on two levels: a general
parsing engine and a language specific grammar. The parsing engine is a
language independent program, while grammar and language specific rules and
resources are given as text files, consisting of a list of constituent
structures and a lexical database with word-sense-related features and
constraints. Since its theoretical background is principally Tesnière's
Éléments de syntaxe, SYNTAGMA's grammar emphasizes the role of argument
structure (valency) in constraint satisfaction, and also allows horizontal
bounds, for instance in treating coordination. Notions such as Pro, traces, and empty
categories are derived from Generative Grammar and some solutions are close to
Government&Binding Theory, although they are the result of an autonomous
research. These properties allow SYNTAGMA to manage complex syntactic
configurations and well known weak points in parsing engineering. An important
resource is the semantic network, which is used in disambiguation tasks.
The parsing process follows a bottom-up, rule-driven strategy. Its behavior can be
controlled and fine-tuned.
|
1303.5966 | Time varying networks and the weakness of strong ties | physics.soc-ph cs.SI | In most social and information systems the activity of agents generates
rapidly evolving time-varying networks. The temporal variation in networks'
connectivity patterns and the ongoing dynamic processes are usually coupled in
ways that still challenge our mathematical or computational modelling. Here we
analyse a mobile call dataset and find a simple statistical law that
characterizes the temporal evolution of users' egocentric networks. We encode
this observation in a reinforcement process defining a time-varying network
model that exhibits the emergence of strong and weak ties. We study the effect
of time-varying and heterogeneous interactions on the classic rumour spreading
model in both synthetic, and real-world networks. We observe that strong ties
severely inhibit information diffusion by confining the spreading process among
agents with recurrent communication patterns. This provides the
counterintuitive evidence that strong ties may have a negative role in the
spreading of information across networks.
|
1303.5976 | On Learnability, Complexity and Stability | stat.ML cs.LG | We consider the fundamental question of learnability of a hypothesis class in
the supervised learning setting and in the general learning setting introduced
by Vladimir Vapnik. We survey classic results characterizing learnability in
terms of suitable notions of complexity, as well as more recent results that
establish the connection between learnability and stability of a learning
algorithm.
|
1303.5984 | Efficient Reinforcement Learning for High Dimensional Linear Quadratic
Systems | stat.ML cs.LG math.OC | We study the problem of adaptive control of a high dimensional linear
quadratic (LQ) system. Previous work established the asymptotic convergence to
an optimal controller for various adaptive control schemes. More recently, for
the average cost LQ problem, a regret bound of ${O}(\sqrt{T})$ was shown, apart
from logarithmic factors. However, this bound scales exponentially with $p$,
the dimension of the state space. In this work we consider the case where the
matrices describing the dynamics of the LQ system are sparse and their
dimensions are large. We present an adaptive control scheme that achieves a
regret bound of ${O}(p \sqrt{T})$, apart from logarithmic factors. In
particular, our algorithm has an average cost of $(1+\eps)$ times the optimum
cost after $T = \polylog(p) O(1/\eps^2)$. This is in comparison to previous
work on the dense dynamics where the algorithm requires time that scales
exponentially with dimension in order to achieve regret of $\eps$ times the
optimal cost.
We believe that our result has prominent applications in the emerging area of
computational advertising, in particular targeted online advertising and
advertising in social networks.
|
1303.5988 | Reinforcement Ranking | cs.IR cs.SI | We introduce a new framework for web page ranking -- reinforcement ranking --
that improves the stability and accuracy of Page Rank while eliminating the
need for computing the stationary distribution of random walks. Instead of
relying on teleportation to ensure a well defined Markov chain, we develop a
reverse-time reinforcement learning framework that determines web page
authority based on the solution of a reverse Bellman equation. In particular,
for a given reward function and surfing policy we recover a well defined
authority score from a reverse-time perspective: looking back from a web page,
what is the total incoming discounted reward brought by the surfer from the
page's predecessors? This results in a novel form of reverse-time
dynamic-programming/reinforcement-learning problem that achieves several
advantages over Page Rank based methods: First, stochasticity, ergodicity, and
irreducibility of the underlying Markov chain are no longer required for
well-posedness. Second, the method is less sensitive to graph topology and more
stable in the presence of dangling pages. Third, not only does the reverse
Bellman iteration yield a more efficient power iteration, it allows for faster
updating in the presence of graph changes. Finally, our experiments demonstrate
improvements in ranking quality.
|
1303.6001 | Generalizing k-means for an arbitrary distance matrix | cs.LG cs.CV stat.ML | The original k-means clustering method works only if the exact vectors
representing the data points are known. Therefore, calculating the distances
from the centroids requires vector operations, since the average of abstract
data points is undefined. Existing algorithms can be extended for those cases when
the sole input is the distance matrix, and the exact representing vectors are
unknown. This extension may be named relational k-means after a notation for a
similar algorithm invented for fuzzy clustering. A method is then proposed for
generalizing k-means for scenarios when the data points have absolutely no
connection with a Euclidean space.
|
1303.6017 | Scrambling Code Planning in TD-SCDMA Systems | cs.IT cs.NI math.IT | This paper has been withdrawn by the author due to a crucial sign error in
equation 2.
|
1303.6020 | Multi-Group Testing for Items with Real-Valued Status under Standard
Arithmetic | cs.IT math.CO math.IT | This paper proposes a novel generalization of group testing, called
multi-group testing, which relaxes the notion of "testing subset" in group
testing to "testing multi-set". The generalization aims to learn more
information about each item to be tested, rather than identifying only defectives as
was done in conventional group testing. This paper provides efficient
nonadaptive strategies for the multi-group testing problem. The major tool is a
new structure, $q$-ary additive $(w,d)$-disjunct matrix, which is a
generalization of the well-known binary disjunct matrix introduced by Kautz and
Singleton in 1964.
|
1303.6021 | Spatio-Temporal Covariance Descriptors for Action and Gesture
Recognition | cs.CV cs.HC | We propose a new action and gesture recognition method based on
spatio-temporal covariance descriptors and a weighted Riemannian locality
preserving projection approach that takes into account the curved space formed
by the descriptors. The weighted projection is then exploited during boosting
to create a final multiclass classification algorithm that employs the most
useful spatio-temporal regions. We also show how the descriptors can be
computed quickly through the use of integral video representations. Experiments
on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets
indicate superior performance of the proposed method compared to several recent
state-of-the-art techniques. The proposed method is robust and does not require
additional processing of the videos, such as foreground detection,
interest-point detection or tracking.
|
1303.6025 | Robust Stability Analysis of an Optical Parametric Amplifier Quantum
System | quant-ph cs.SY math.OC | This paper considers the problem of robust stability for a class of uncertain
nonlinear quantum systems subject to unknown perturbations in the system
Hamiltonian. The case of a nominal linear quantum system is considered with
non-quadratic perturbations to the system Hamiltonian. The paper extends recent
results on the robust stability of nonlinear quantum systems to allow for
non-quadratic perturbations to the Hamiltonian which depend on multiple
parameters. A robust stability condition is given in terms of a strict bounded
real condition. This result is then applied to the robust stability analysis of
a nonlinear quantum system which is a model of an optical parametric amplifier.
|
1303.6046 | Optimized-Cost Repair in Multi-hop Distributed Storage Systems with
Network Coding | cs.IT math.IT | In distributed storage systems reliability is achieved through redundancy
stored at different nodes in the network, so that a data collector can
reconstruct the source information even if some nodes fail. To maintain reliability, an
autonomous and efficient protocol should be used to repair the failed node. The
repair process causes traffic and consequently transmission cost in the
network. Recent results established the optimal traffic-storage tradeoff, and proposed
regenerating codes to achieve the optimality. We aim at minimizing the
transmission cost in the repair process. We consider the network topology in
the repair, and accordingly modify information flow graphs. Then we analyze the
cut requirement and based on the results, we formulate the minimum-cost as a
linear programming problem for linear costs. We show that the solution of the
linear problem establishes a fundamental lower bound of the repair-cost. We
also show that this bound is achievable for minimum-storage regeneration, using
the optimal-cost minimum-storage regenerating (OCMSR) code. We propose
surviving node cooperation which can efficiently reduce the repair cost.
Further, the field size for the construction of OCMSR codes is discussed. We
show the gain of optimal-cost repair in tandem, star, grid and fully connected
networks.
|
1303.6066 | Asymmetric Pruning for Learning Cascade Detectors | cs.CV | Cascade classifiers are one of the most important contributions to real-time
object detection. Nonetheless, there are many challenging problems arising in
training cascade detectors. One common issue is that the node classifier is
trained as a symmetric classifier. Having a low misclassification error rate
does not guarantee an optimal node learning goal in cascade classifiers, i.e.,
an extremely high detection rate with a moderate false positive rate. In this
work, we present a new approach to train an effective node classifier in a
cascade detector. The algorithm is based on two key observations: 1) Redundant
weak classifiers can be safely discarded; 2) The final detector should satisfy
the asymmetric learning objective of the cascade architecture. To achieve this,
we separate the classifier training into two steps: finding a pool of
discriminative weak classifiers/features and training the final classifier by
pruning weak classifiers which contribute little to the asymmetric learning
criterion (asymmetric classifier construction). Our model reduction approach
helps accelerate the learning time while achieving the pre-determined learning
objective. Experimental results on both face and car data sets verify the
effectiveness of the proposed algorithm. On the FDDB face data sets, our
approach achieves state-of-the-art performance, demonstrating the advantage of
our approach.
|
1303.6086 | On Sparsity Inducing Regularization Methods for Machine Learning | cs.LG stat.ML | During the past years there has been an explosion of interest in learning
methods based on sparsity regularization. In this paper, we discuss a general
class of such methods, in which the regularizer can be expressed as the
composition of a convex function $\omega$ with a linear function. This setting
includes several methods such as the group Lasso, the Fused Lasso, multi-task
learning and many more. We present a general approach for solving
regularization problems of this kind, under the assumption that the proximity
operator of the function $\omega$ is available. Furthermore, we comment on the
application of this approach to support vector machines, a technique pioneered
by the groundbreaking work of Vladimir Vapnik.
|
1303.6088 | Graphical Analysis of Social Group Dynamics | cs.SI physics.soc-ph | Identifying communities in social networks is becoming an increasingly
important research problem. Several methods for identifying such groups have
been developed; however, qualitative analysis (taking into account the scale of
the problem) still poses serious problems. This paper describes a tool for
facilitating such an analysis, making it possible to visualize the dynamics and
to localize different events (such as the creation or merging of groups). In
the final part of the paper, experimental results obtained using benchmark data
(Enron emails) provide an insight into the usefulness of the proposed tool.
|
1303.6091 | Agent-based modelling of social organisations | cs.SI cs.MA physics.soc-ph | This paper presents a model of society represented by a social network, and a
model of a multi-agent system built on this basis. The particular aim of the
system is to predict the evolution of a society and to analyse the communities
that appear, their characteristic features, and the reasons for their coming
into being. As an example of application, an analysis was made of a social
portal which makes it possible to offer and reserve places in rooms for
travelling tourists.
|
1303.6092 | A Polyhedral Approximation Framework for Convex and Robust Distributed
Optimization | cs.SY cs.DC math.OC | In this paper we consider a general problem set-up for a wide class of convex
and robust distributed optimization problems in peer-to-peer networks. In this
set-up, convex constraint sets are distributed to the network processors, which
have to compute the optimizer of a linear cost function subject to the
constraints. We propose a novel fully distributed algorithm, named
cutting-plane consensus, to solve the problem, based on an outer polyhedral
approximation of the constraint sets. Processors running the algorithm compute
and exchange linear approximations of their locally feasible sets.
Independently of the number of processors in the network, each processor stores
only a small number of linear constraints, making the algorithm scalable to
large networks. The cutting-plane consensus algorithm is presented and analyzed
for the general framework. Specifically, we prove that all processors running
the algorithm agree on an optimizer of the global problem, and that the
algorithm is tolerant to node and link failures as long as network connectivity
is preserved. Then, the cutting plane consensus algorithm is specified to three
different classes of distributed optimization problems, namely (i) inequality
constrained problems, (ii) robust optimization problems, and (iii) almost
separable optimization problems with separable objective functions and coupling
constraints. For each one of these problem classes we solve a concrete problem
that can be expressed in that framework and present computational results. That
is, we show how to solve: position estimation in wireless sensor networks, a
distributed robust linear program, and a distributed microgrid control problem.
|
1303.6094 | Modelling and analysing relations between entities using the
multi-agents and social network approaches | cs.SI cs.MA physics.soc-ph | In this work, the concept of a system for analysing social relations between
entities using the social network analysis and multi-agent system approaches is
presented. The following problems in particular fall within the domain of our
interest: identification of the most influential individuals in a given
society, identification of the roles played by given individuals in that
society, and the recognition of groups of individuals strongly connected with
one another. For the analysis of these problems, two application domains are
selected: an analysis of data regarding phone calls and an analysis of Internet
weblogs.
|
1303.6106 | Agent-based environment for knowledge integration | cs.MA | Representing knowledge with the use of ontology description languages offers
several advantages arising from knowledge reusability, possibilities of
carrying out reasoning processes and the use of existing concepts of knowledge
integration. In this work we present an environment for the integration of
knowledge expressed in this way. Guaranteeing knowledge
integration is an important element during the development of the Semantic Web.
Thanks to this, it is possible to obtain access to services which offer
knowledge contained in various distributed databases associated with
semantically described web portals. We will present the advantages of the
multi-agent approach while solving this problem. Then, we will describe an
example of its application in systems supporting company management knowledge
in the process of constructing supply-chains.
|
1303.6120 | Reliability and efficiency of generalized rumor spreading model on
complex social networks | physics.soc-ph cs.SI | We introduce the generalized rumor spreading model and investigate some
properties of this model on different complex social networks. Unlike previous
rumor models, in which both the spreader-spreader ($SS$) and spreader-stifler
($SR$) interactions have the same rate $\alpha$, we define $\alpha^{(1)}$ and
$\alpha^{(2)}$ for $SS$ and $SR$ interactions, respectively. The effect of
variation of $\alpha^{(1)}$ and $\alpha^{(2)}$ on the final density of stiflers
is investigated. Furthermore, the influence of the topological structure of the
network in rumor spreading is studied by analyzing the behavior of several
global parameters such as reliability and efficiency. Our results show that
while networks with homogeneous connectivity patterns reach a higher
reliability, scale-free topologies need less time to reach a steady state with
respect to the rumor.
|
1303.6135 | Model-Based Calibration of Filter Imperfections in the Random
Demodulator for Compressive Sensing | cs.IT math.IT | The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The performance of
the proposed method is also compared to a state-of-the-art method based on
discrete Fourier transform trigonometric interpolation.
|
1303.6138 | About the survey of propagandistic messages in contemporary social media | cs.SI cs.CY physics.soc-ph | This paper presents research results that identify a set of characteristic
parameters of propagandistic messages. These parameters can later be used in an
algorithm that creates special user-oriented propagandistic messages to improve
the distribution and assimilation of information by users.
|
1303.6145 | Particles Prefer Walking Along the Axes: Experimental Insights into the
Behavior of a Particle Swarm | cs.NE cs.AI | Particle swarm optimization (PSO) is a widely used nature-inspired
meta-heuristic for solving continuous optimization problems. However, when
running the PSO algorithm, one encounters the phenomenon of so-called
stagnation, that means in our context, the whole swarm starts to converge to a
solution that is not (even a local) optimum. The goal of this work is to point
out possible reasons why the swarm stagnates at these non-optimal points. To
achieve our results, we use the newly defined potential of a swarm. The total
potential has a portion for every dimension of the search space, and it drops
when the swarm approaches the point of convergence. As it turns out
experimentally, the swarm is very likely to sometimes enter "unbalanced"
states, i.e., states in which almost all potential belongs to one axis. The
swarm therefore becomes blind to improvements still possible in any other
direction. Finally,
we show how in the light of the potential and these observations, a slightly
adapted PSO rebalances the potential and therefore increases the quality of the
solution.
|
1303.6149 | Adaptivity of averaged stochastic gradient descent to local strong
convexity for logistic regression | math.ST cs.LG math.OC stat.TH | In this paper, we consider supervised learning problems such as logistic
regression and study the stochastic gradient method with averaging, in the
usual stochastic approximation setting where observations are used only once.
We show that after $N$ iterations, with a constant step-size proportional to
$1/R^2 \sqrt{N}$ where $N$ is the number of observations and $R$ is the maximum
norm of the observations, the convergence rate is always of order
$O(1/\sqrt{N})$, and improves to $O(R^2 / \mu N)$ where $\mu$ is the lowest
eigenvalue of the Hessian at the global optimum (when this eigenvalue is
greater than $R^2/\sqrt{N}$). Since $\mu$ does not need to be known in advance,
this shows that averaged stochastic gradient is adaptive to \emph{unknown
local} strong convexity of the objective function. Our proof relies on the
generalized self-concordance properties of the logistic loss and thus extends
to all generalized linear models with uniformly bounded features.
|
1303.6163 | Machine learning of hierarchical clustering to segment 2D and 3D images | cs.CV cs.LG | We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
|
1303.6166 | Mismatched Decoding: Error Exponents, Second-Order Rates and Saddlepoint
Approximations | cs.IT math.IT | This paper considers the problem of channel coding with a given (possibly
suboptimal) maximum-metric decoding rule. A cost-constrained random-coding
ensemble with multiple auxiliary costs is introduced, and is shown to achieve
error exponents and second-order coding rates matching those of
constant-composition random coding, while being directly applicable to channels
with infinite or continuous alphabets. The number of auxiliary costs required
to match the error exponents and second-order rates of constant-composition
coding is studied, and is shown to be at most two. For i.i.d. random coding,
asymptotic estimates of two well-known non-asymptotic bounds are given using
saddlepoint approximations. Each expression is shown to characterize the
asymptotic behavior of the corresponding random-coding bound at both fixed and
varying rates, thus unifying the regimes characterized by error exponents,
second-order rates and moderate deviations. For fixed rates, novel exact
asymptotic expressions are obtained to within a multiplicative $1+o(1)$ term.
Using numerical examples, it is shown that the saddlepoint approximations are
highly accurate even at short block lengths.
|
1303.6167 | Second-Order Rate Region of Constant-Composition Codes for the
Multiple-Access Channel | cs.IT math.IT | This paper studies the second-order asymptotics of coding rates for the
discrete memoryless multiple-access channel with a fixed target error
probability. Using constant-composition random coding, coded time-sharing, and
a variant of Hoeffding's combinatorial central limit theorem, an inner bound on
the set of locally achievable second-order coding rates is given for each point
on the boundary of the capacity region. It is shown that the inner bound for
constant-composition random coding includes that recovered by i.i.d. random
coding, and that the inclusion may be strict. The inner bound is extended to
the Gaussian multiple-access channel via an increasingly fine quantization of
the inputs.
|
1303.6170 | Maximum Likelihood Fusion of Stochastic Maps | stat.AP cs.RO | The fusion of independently obtained stochastic maps by collaborating mobile
agents is considered. The proposed approach includes two parts: matching of
stochastic maps and maximum likelihood alignment. In particular, an affine
invariant hypergraph is constructed for each stochastic map, and a bipartite
matching via a linear program is used to establish landmark correspondence
between stochastic maps. A maximum likelihood alignment procedure is proposed
to determine rotation and translation between common landmarks in order to
construct a global map within a common frame of reference. A main feature of
the proposed approach is its scalability with respect to the number of
landmarks: the matching step has polynomial complexity and the maximum
likelihood alignment is obtained in closed form. Experimental validation of the
proposed fusion approach is performed using the Victoria Park benchmark
dataset.
|
1303.6175 | Compression as a universal principle of animal behavior | q-bio.NC cs.CL cs.IT math.IT physics.data-an q-bio.QM | A key aim in biology and psychology is to identify fundamental principles
underpinning the behavior of animals, including humans. Analyses of human
language and the behavior of a range of non-human animal species have provided
evidence for a common pattern underlying diverse behavioral phenomena: words
follow Zipf's law of brevity (the tendency of more frequently used words to be
shorter), and conformity to this general pattern has been seen in the behavior
of a number of other animals. It has been argued that the presence of this law
is a sign of efficient coding in the information theoretic sense. However, no
strong direct connection has been demonstrated between the law and compression,
the information theoretic principle of minimizing the expected length of a
code. Here we show that minimizing the expected code length implies that the
length of a word cannot increase as its frequency increases. Furthermore, we
show that the mean code length or duration is significantly small in human
language, and also in the behavior of other species in all cases where
agreement with the law of brevity has been found. We argue that compression is
a general principle of animal behavior that reflects selection for efficiency
of coding.
|
1303.6224 | Limited benefit of cooperation in distributed relative localization | cs.SY math.OC | Important applications in robotic and sensor networks require distributed
algorithms to solve the so-called relative localization problem: a node-indexed
vector has to be reconstructed from measurements of differences between
neighbor nodes. In a recent note, we have studied the estimation error of a
popular gradient descent algorithm showing that the mean square error has a
minimum at a finite time, after which the performance worsens. This paper
proposes a suitable modification of this algorithm incorporating more realistic
"a priori" information on the position. The new algorithm presents a
performance monotonically decreasing to the optimal one. Furthermore, we show
that the optimal performance is approximated, up to a 1 + \eps factor, within a
time which is independent of the graph and of the number of nodes. This
convergence time is very much related to the minimum exhibited by the previous
algorithm and both lead to the following conclusion: in the presence of noisy
data, cooperation is only useful up to a certain limit.
|
1303.6241 | Structure of complex networks: Quantifying edge-to-edge relations by
failure-induced flow redistribution | physics.soc-ph cs.SI | The analysis of complex networks has so far revolved mainly around the role
of nodes and communities of nodes. However, the dynamics of interconnected
systems is commonly focused on edge processes, and a dual edge-centric
perspective can often prove more natural. Here we present graph-theoretical
measures to quantify edge-to-edge relations inspired by the notion of flow
redistribution induced by edge failures. Our measures, which are related to the
pseudo-inverse of the Laplacian of the network, are global and reveal the
dynamical interplay between the edges of a network, including potentially
non-local interactions. Our framework also allows us to define the embeddedness
of an edge, a measure of how strongly an edge features in the weighted cuts of
the network. We showcase the general applicability of our edge-centric
framework through analyses of the Iberian Power grid, traffic flow in road
networks, and the C. elegans neuronal network.
|
1303.6249 | A Derivation of the Source-Channel Error Exponent using Non-identical
Product Distributions | cs.IT math.IT | This paper studies the random-coding exponent of joint source-channel coding
for a scheme where source messages are assigned to disjoint subsets (referred
to as classes), and codewords are independently generated according to a
distribution that depends on the class index of the source message. For
discrete memoryless systems, two optimally chosen classes and product
distributions are found to be sufficient to attain the sphere-packing exponent
in those cases where it is tight.
|
1303.6271 | Preferential Attachment in Online Networks: Measurement and Explanations | physics.soc-ph cs.SI physics.data-an | We perform an empirical study of the preferential attachment phenomenon in
temporal networks and show that on the Web, networks follow a nonlinear
preferential attachment model in which the exponent depends on the type of
network considered. The classical preferential attachment model for networks by
Barab\'asi and Albert (1999) assumes a linear relationship between the number
of neighbors of a node in a network and the probability of attachment. Although
this assumption is widely made in Web Science and related fields, the
underlying linearity is rarely measured. To fill this gap, this paper performs
an empirical longitudinal (time-based) study on forty-seven diverse Web network
datasets from seven network categories, including directed, undirected and
bipartite networks. We show that contrary to the usual assumption, preferential
attachment is nonlinear in the networks under consideration. Furthermore, we
observe that the deviation from linearity is dependent on the type of network,
giving sublinear attachment in certain types of networks, and superlinear
attachment in others. Thus, we introduce the preferential attachment exponent
$\beta$ as a novel numerical network measure that can be used to discriminate
different types of networks. We propose explanations for the behavior of that
network measure, based on the mechanisms that underlie the growth of the network
in question.
|
1303.6310 | A hybrid bat algorithm | cs.NE | Swarm intelligence is a very powerful technique to be used for optimization
purposes. In this paper we present a new swarm intelligence algorithm, based on
the bat algorithm, hybridized with differential evolution
strategies. Besides showing very promising results on the standard benchmark
functions, this hybridization also significantly improves the original bat
algorithm.
|
1303.6314 | Numerical model of elastic laminated glass beams under finite strain | cs.CE | Laminated glass structures are formed by stiff layers of glass connected with
a compliant plastic interlayer. Due to their slenderness and heterogeneity,
they exhibit a complex mechanical response that is difficult to capture by
single-layer models even in the elastic range. The purpose of this paper is to
introduce an efficient and reliable finite element approach to the simulation
of the immediate response of laminated glass beams. It proceeds from a refined
plate theory due to Mau (1973), as we treat each layer independently and
enforce the compatibility by the Lagrange multipliers. At the layer level, we
adopt the finite-strain shear deformable formulation of Reissner (1972) and the
numerical framework by Ibrahimbegovi\'{c} and Frey (1993). The resulting system
is solved by the Newton method with consistent linearization. By comparing the
model predictions against available experimental data, analytical methods and
two-dimensional finite element simulations, we demonstrate that the proposed
formulation is reliable and provides accuracy comparable to the detailed
two-dimensional finite element analyses. As such, it offers a convenient basis
to incorporate more refined constitutive description of the interlayer.
|
1303.6361 | Video Face Matching using Subset Selection and Clustering of
Probabilistic Multi-Region Histograms | cs.CV cs.IR | Balancing computational efficiency with recognition accuracy is one of the
major challenges in real-world video-based face recognition. A significant
design decision for any such system is whether to process and use all possible
faces detected over the video frames, or whether to select only a few "best"
faces. This paper presents a video face recognition system based on
probabilistic Multi-Region Histograms to characterise performance trade-offs
in: (i) selecting a subset of faces compared to using all faces, and (ii)
combining information from all faces via clustering. Three face selection
metrics are evaluated for choosing a subset: face detection confidence, random
subset, and sequential selection. Experiments on the recently introduced MOBIO
dataset indicate that the usage of all faces through clustering always
outperformed selecting only a subset of faces. The experiments also show that
the face selection metric based on face detection confidence generally provides
better recognition performance than random or sequential sampling. Moreover,
the optimal number of faces varies drastically across selection metrics and
subsets of MOBIO. Given the trade-offs between computational effort,
recognition accuracy and robustness, it is recommended that face feature
clustering would be most advantageous in batch processing (particularly for
video-based watchlists), whereas face selection methods should be limited to
applications with significant computational restrictions.
|
1303.6369 | Extracting the information backbone in online system | cs.IR cs.SI physics.soc-ph | Information overload is a serious problem in modern society and many
solutions, such as recommender systems, have been proposed to filter out
irrelevant information. In the literature, researchers have mainly been
dedicated to improving the recommendation performance (accuracy and diversity)
of the algorithms while overlooking the influence of the topology of the online
user-object bipartite networks. In this paper, we find that some information provided by
the bipartite networks is not only redundant but also misleading. With such
"less can be more" feature, we design some algorithms to improve the
recommendation performance by eliminating some links from the original
networks. Moreover, we propose a hybrid method combining the time-aware and
topology-aware link removal algorithms to extract the backbone which contains
the essential information for the recommender systems. From the practical point
of view, our method can improve the performance and reduce the computational
time of the recommender system, thus improving both its effectiveness and
efficiency.
|
1303.6370 | Convex Tensor Decomposition via Structured Schatten Norm Regularization | stat.ML cs.LG cs.NA | We discuss structured Schatten norms for tensor decomposition that include
two recently proposed norms ("overlapped" and "latent") for
convex-optimization-based tensor decomposition, and connect tensor
decomposition with wider literature on structured sparsity. Based on the
properties of the structured Schatten norms, we mathematically analyze the
performance of the "latent" approach for tensor decomposition, which was
empirically found to perform better than the "overlapped" approach in some
settings. We show theoretically that this is indeed the case. In particular,
when the unknown true tensor is low-rank in a specific mode, this approach
performs as well as knowing the mode with the smallest rank. Along the way, we
show a novel duality result for structured Schatten norms, establish the
consistency, and discuss the identifiability of this approach. We confirm
through numerical simulations that our theory precisely predicts the scaling
behavior of the mean squared error.
|
1303.6372 | Detecting Friendship Within Dynamic Online Interaction Networks | cs.SI cs.CY cs.HC physics.soc-ph | In many complex social systems, the timing and frequency of interactions
between individuals are observable but friendship ties are hidden. Recovering
these hidden ties, particularly for casual users who are relatively less
active, would enable a wide variety of friendship-aware applications in domains
where labeled data are often unavailable, including online advertising and
national security. Here, we investigate the accuracy of multiple statistical
features, based either purely on temporal interaction patterns or on the
cooperative nature of the interactions, for automatically extracting latent
social ties. Using self-reported friendship and non-friendship labels derived
from an anonymous online survey, we learn highly accurate predictors for
recovering hidden friendships within a massive online data set encompassing 18
billion interactions among 17 million individuals of the popular online game
Halo: Reach. We find that the accuracy of many features improves as more data
accumulates, and cooperative features are generally reliable. However,
periodicities in interaction time series are sufficient to correctly classify
95% of ties, even for casual users. These results clarify the nature of
friendship in online social environments and suggest new opportunities and new
privacy concerns for friendship-aware applications that do not require the
disclosure of private friendship information.
|
1303.6377 | Simulation of Fractional Brownian Surfaces via Spectral Synthesis on
Manifolds | cs.CG cs.CV math.PR | Using the spectral decomposition of the Laplace-Beltrami operator we simulate
fractal surfaces as random series of eigenfunctions. This approach allows us to
generate random fields over smooth manifolds of arbitrary dimension,
generalizing previous work with fractional Brownian motion with
multi-dimensional parameter. We give examples of surfaces with and without
boundary and discuss implementation.
|
1303.6378 | Linear complexity of generalized cyclotomic sequences of order 4 over
F_l | cs.IT math.IT | Generalized cyclotomic sequences of period pq have several desirable
randomness properties if the two primes p and q are chosen properly. In
particular, Ding deduced the exact formulas for the autocorrelation and the
linear complexity of these sequences of order 2. In this paper, we consider the
generalized sequences of order 4. Under certain conditions, the linear
complexity of these sequences of order 4 is determined over a finite field F_l.
Results show that in many cases they have high linear complexity.
|
1303.6385 | Dynamics of Trust Reciprocation in Heterogenous MMOG Networks | cs.SI physics.soc-ph | Understanding the dynamics of reciprocation is of great interest in sociology
and computational social science. The recent growth of Massively Multi-player
Online Games (MMOGs) has provided unprecedented access to large-scale data
which enables us to study such complex human behavior in a more systematic
manner. In this paper, we consider three different networks in the EverQuest2
game: chat, trade, and trust. The chat network has the highest level of
reciprocation (33%) because there are essentially no barriers to it. The trade
network has a lower rate of reciprocation (27%) because it has the obvious
barrier of requiring more goods or money for exchange; moreover, there is no
clear benefit to returning a trade link except in terms of social connections.
The trust network has the lowest reciprocation (14%) because this equates to
sharing certain within-game assets such as weapons, and so there is a high
barrier for such connections because they require faith in the players that are
granted such high access. In general, we observe that reciprocation rate is
inversely related to the barrier level in these networks. We also note that
reciprocation has connections across the heterogeneous networks. Our
experiments indicate that players make use of the medium-barrier reciprocations
to strengthen a relationship. We hypothesize that lower-barrier interactions
are an important component to predicting higher-barrier ones. We verify our
hypothesis using predictive models for trust reciprocations using features from
trade interactions. Using the number of trades (both before and after the
initial trust link) boosts our ability to predict whether the trust will be
reciprocated by up to 11% with respect to the AUC.
|
1303.6387 | Message Passing Algorithm for Distributed Downlink Regularized
Zero-forcing Beamforming with Cooperative Base Stations | cs.IT math.IT | Base station (BS) cooperation can turn unwanted interference into useful signal
energy for enhancing system performance. In the cooperative downlink,
zero-forcing beamforming (ZFBF) with a simple scheduler is well known to obtain
nearly the performance of the capacity-achieving dirty-paper coding. However,
the centralized ZFBF approach is prohibitively complex as the network size
grows. In this paper, we devise message passing algorithms for realizing the
regularized ZFBF (RZFBF) in a distributed manner using belief propagation. In
the proposed methods, the overall computational cost is decomposed into many
smaller computation tasks carried out by groups of neighboring BSs and
communication is only required between neighboring BSs. More importantly, some
exchanged messages can be computed based on channel statistics rather than
instantaneous channel state information, leading to significant reduction in
computational complexity. Simulation results demonstrate that the proposed
algorithms converge quickly to the exact RZFBF and much faster compared to
conventional methods.
|
1303.6388 | Phase Transition Analysis of Sparse Support Detection from Noisy
Measurements | cs.IT math.IT | This paper investigates the problem of sparse support detection (SSD) via a
detection-oriented algorithm named Bayesian hypothesis test via belief
propagation (BHT-BP). Our main focus is to compare BHT-BP to an
estimation-based algorithm, called CS-BP, and show its superiority in the SSD
problem. For this investigation, we perform a phase transition (PT) analysis
over the plane of the noise level and signal magnitude on the signal support.
This PT analysis sharply specifies the required signal magnitude for the
detection under a certain noise level. In addition, we provide an experimental
validation to corroborate the PT analysis. Our analytical and experimental
results show that BHT-BP detects the signal support against additive noise
more robustly than CS-BP does.
|
1303.6390 | A Note on k-support Norm Regularized Risk Minimization | cs.LG | The k-support norm has been recently introduced to perform correlated
sparsity regularization. Although Argyriou et al. only reported experiments
using squared loss, here we apply it to several other commonly used settings
resulting in novel machine learning algorithms with interesting and familiar
limit cases. Source code for the algorithms described here is available.
|
1303.6397 | Conditions for detectability in distributed consensus-based observer
networks | cs.SY | The paper discusses fundamental detectability properties associated with the
problem of distributed state estimation using networked observers. The main
result of the paper establishes connections between detectability of the plant
through measurements, observability of the node filters through
interconnections, and algebraic properties of the underlying communication
graph, to ensure the interconnected filtering error dynamics are stabilizable
via output injection.
|
1303.6409 | Information Measures for Deterministic Input-Output Systems | cs.IT math.IT | In this work the information loss in deterministic, memoryless systems is
investigated by evaluating the conditional entropy of the input random variable
given the output random variable. It is shown that for a large class of systems
the information loss is finite, even if the input is continuously distributed.
Based on this finiteness, the problem of perfectly reconstructing the input is
addressed and Fano-type bounds between the information loss and the
reconstruction error probability are derived.
For systems with infinite information loss a relative measure is defined and
shown to be tightly related to R\'{e}nyi information dimension. Employing
another Fano-type argument, the reconstruction error probability is bounded by
the relative information loss from below.
In view of developing a system theory from an information-theoretic
point-of-view, the theoretical results are illustrated by a few example
systems, among them a multi-channel autocorrelation receiver.
|
1303.6454 | Partial Transfer Entropy on Rank Vectors | stat.ME cs.IT math.IT nlin.CD physics.data-an | For the evaluation of information flow in bivariate time series, information
measures have been employed, such as the transfer entropy (TE), the symbolic
transfer entropy (STE), defined similarly to TE but on the ranks of the
components of the reconstructed vectors, and the transfer entropy on rank
vectors (TERV), similar to STE but forming the ranks for the future samples of
the response system with regard to the current reconstructed vector. Here we
extend TERV to multivariate time series, accounting for the presence of
confounding variables; the resulting measure is called the partial transfer
entropy on rank vectors (PTERV). We
investigate the asymptotic properties of PTERV, and also partial STE (PSTE),
construct parametric significance tests under approximations with Gaussian and
gamma null distributions, and show that the parametric tests cannot achieve the
power of the randomization test using time-shifted surrogates. Using
simulations on known coupled dynamical systems and applying parametric and
randomization significance tests, we show that PTERV performs better than PSTE
but worse than the partial transfer entropy (PTE). However, PTERV, unlike PTE,
is robust to the presence of drifts in the time series and it is also not
affected by the level of detrending.
|
1303.6455 | Performance Evaluation of Edge-Directed Interpolation Methods for Images | cs.CV | Many interpolation methods have been developed for high visual quality, but
fail to preserve image structures. Edges carry heavy structural
information for detection, determination and classification. Edge-adaptive
interpolation approaches have therefore become a focus of research. In this
paper, the performance of four edge-directed interpolation methods is compared
with that of two traditional methods on two groups of images. These methods include new
edge-directed interpolation (NEDI), edge-guided image interpolation (EGII),
iterative curvature-based interpolation (ICBI), directional cubic convolution
interpolation (DCCI) and two traditional approaches, bi-linear and bi-cubic.
Since no established metrics exist for the edge-preserving ability of
edge-adaptive interpolation approaches, we propose two: one evaluates
accuracy and the other measures the robustness of edge preservation.
Performance evaluation is based on six parameters. Objective assessment and
visual analysis are illustrated and conclusions are drawn from theoretical
backgrounds and practical results.
|
1303.6460 | Social and place-focused communities in location-based online social
networks | physics.soc-ph cs.SI | Thanks to widely available, cheap Internet access and the ubiquity of
smartphones, millions of people around the world now use online location-based
social networking services. Understanding the structural properties of these
systems and their dependence upon users' habits and mobility has many potential
applications, including resource recommendation and link prediction. Here, we
construct and characterise social and place-focused graphs by using
longitudinal information about declared social relationships and about users'
visits to physical places collected from a popular online location-based social
service. We show that although the social and place-focused graphs are
constructed from the same data set, they have quite different structural
properties. We find that the social and place-focused graphs have different
global and meso-scale structure, and in particular that social and
place-focused communities have negligible overlap. Consequently, group
inference based on community detection performed on the social graph alone
fails to isolate place-focused groups, even though these do exist in the
network. By studying the evolution of tie structure within communities, we show
that the time period over which location data are aggregated has a substantial
impact on the stability of place-focused communities, and that information
about place-based groups may be more useful for user-centric applications than
that obtained from the analysis of social communities alone.
|
1303.6544 | Sketching Sparse Matrices | cs.IT math.IT math.OC | This paper considers the problem of recovering an unknown sparse p\times p
matrix X from an m\times m matrix Y=AXB^T, where A and B are known m \times p
matrices with m << p.
The main result shows that there exist constructions of the "sketching"
matrices A and B so that even if X has O(p) non-zeros, it can be recovered
exactly and efficiently using a convex program as long as these non-zeros are
not concentrated in any single row/column of X. Furthermore, it suffices for
the size of Y (the sketch dimension) to scale as m = O(\sqrt{# nonzeros in X}
\times log p). The results also show that the recovery is robust and stable in
the sense that if X is equal to a sparse matrix plus a perturbation, then the
convex program we propose produces an approximation with accuracy proportional
to the size of the perturbation. Unlike traditional results on sparse recovery,
where the sensing matrix produces independent measurements, our sensing
operator is highly constrained (it assumes a tensor product structure).
Therefore, proving recovery guarantees requires non-standard techniques. Indeed
our approach relies on a novel result concerning tensor products of bipartite
graphs, which may be of independent interest.
This problem is motivated by the following application, among others.
Consider a p\times n data matrix D, consisting of n observations of p
variables. Assume that the correlation matrix X:=DD^{T} is (approximately)
sparse in the sense that each of the p variables is significantly correlated
with only a few others. Our results show that these significant correlations
can be detected even if we have access to only a sketch of the data S=AD with A
\in R^{m\times p}.
|
1303.6609 | Exploiting Opportunistic Physical Design in Large-scale Data Analytics | cs.DB cs.DC cs.DS | Large-scale systems, such as MapReduce and Hadoop, perform aggressive
materialization of intermediate job results in order to support fault
tolerance. When jobs correspond to exploratory queries submitted by data
analysts, these materializations yield a large set of materialized views that
typically capture common computation among successive queries from the same
analyst, or even across queries of different analysts who test similar
hypotheses. We propose to treat these views as an opportunistic physical design
and use them for the purpose of query optimization. We develop a novel
query-rewrite algorithm that addresses the two main challenges in this context:
how to search the large space of rewrites, and how to reason about views that
contain UDFs (a common feature in large-scale data analytics). The algorithm,
which provably finds the minimum-cost rewrite, is inspired by nearest-neighbor
searches in non-metric spaces. We present an extensive experimental study on
real-world datasets with a prototype data-analytics system based on Hive. The
results demonstrate that our approach can result in dramatic performance
improvements on complex data-analysis queries, reducing total execution time by
an average of 61% and up to two orders of magnitude.
|
1303.6619 | An N-dimensional approach towards object based classification of
remotely sensed imagery | cs.CV | Remote sensing techniques are widely used for land cover classification and
urban analysis. The availability of high resolution remote sensing imagery
limits the level of classification accuracy attainable from a pixel-based
approach. In this paper, an object-based classification scheme based on a
hierarchical support vector machine is introduced. By combining spatial and
spectral information, the amount of overlap between classes can be decreased;
thereby yielding higher classification accuracy and more accurate land cover
maps. We have adopted automatic approaches based on advanced techniques such
as cellular automata and genetic algorithms for kernel and tuning
parameter selection. Performance evaluation of the proposed methodology in
comparison with the existing approaches is performed with reference to the
Bhopal city study area.
|
1303.6672 | Living on the edge: Phase transitions in convex programs with random
data | cs.IT math.IT | Recent research indicates that many convex optimization problems with random
constraints exhibit a phase transition as the number of constraints increases.
For example, this phenomenon emerges in the $\ell_1$ minimization method for
identifying a sparse vector from random linear measurements. Indeed, the
$\ell_1$ approach succeeds with high probability when the number of
measurements exceeds a threshold that depends on the sparsity level; otherwise,
it fails with high probability.
This paper provides the first rigorous analysis that explains why phase
transitions are ubiquitous in random convex optimization problems. It also
describes tools for making reliable predictions about the quantitative aspects
of the transition, including the location and the width of the transition
region. These techniques apply to regularized linear inverse problems with
random measurements, to demixing problems under a random incoherence model, and
also to cone programs with random affine constraints.
The applied results depend on foundational research in conic geometry. This
paper introduces a summary parameter, called the statistical dimension, that
canonically extends the dimension of a linear subspace to the class of convex
cones. The main technical result demonstrates that the sequence of intrinsic
volumes of a convex cone concentrates sharply around the statistical dimension.
This fact leads to accurate bounds on the probability that a randomly rotated
cone shares a ray with a fixed cone.
|
1303.6674 | Consensus Algorithms and the Decomposition-Separation Theorem | math.DS cs.SY eess.SY math.OC | Convergence properties of time inhomogeneous Markov chain based discrete and
continuous time linear consensus algorithms are analyzed. Provided that a
so-called infinite jet flow property is satisfied by the underlying chains,
necessary conditions for both consensus and multiple consensus are established.
A recent extension by Sonin of the classical Kolmogorov-Doeblin
decomposition-separation theorem for homogeneous Markov chains to the inhomogeneous
case is then employed to show that the obtained necessary conditions are also
sufficient when the chain is of Class P*, as defined by Touri and Nedic. It is
also shown that Sonin's theorem leads to a rediscovery and generalization of
most of the existing related consensus results in the literature.
|
1303.6682 | Anatomy of the chase | cs.DB | A lot of research activity has recently taken place around the chase
procedure, due to its usefulness in data integration, data exchange, query
optimization, peer data exchange and data correspondence, to mention a few. As
the chase has been investigated and further developed by a number of research
groups and authors, many variants of the chase have emerged and associated
results obtained. Due to the heterogeneous nature of the area it is frequently
difficult to verify the scope of each result. In this paper we take a closer
look at recent developments, and provide additional results. Our analysis
allows us to create a taxonomy of the chase variations and the properties they satisfy.
Two of the most central problems regarding the chase are termination, and the
discovery of restricted classes of sets of dependencies that guarantee
termination of the chase. The search for the restricted classes has been
motivated by a fairly recent result that shows that it is undecidable to
determine whether the chase with a given dependency set will terminate on a
given instance. There is a small dissonance here, since the quest has been for
classes of sets of dependencies guaranteeing termination of the chase on all
instances, even though the latter problem was not known to be undecidable. We
resolve the dissonance in this paper by showing that determining whether the
chase with a given set of dependencies terminates on all instances is
coRE-complete. Our reduction also gives us the aforementioned
instance-dependent RE-completeness result as a byproduct. For one of the
restricted classes, the stratified sets of dependencies, we provide new complexity
results for the problem of testing whether a given set of dependencies belongs
to it. These results rectify some previous claims that have occurred in the
literature.
|
1303.6711 | An intelligent approach towards automatic shape modeling and object
extraction from satellite images using cellular automata based algorithm | cs.CV | The automatic feature extraction domain has witnessed the application of many
intelligent methodologies over the past decade; however, the detection accuracy
of these approaches was limited, as object geometry and contextual knowledge
were not given enough consideration. In this paper, we propose a framework for accurate
detection of features along with automatic interpolation, and interpretation by
modeling feature shape as well as contextual knowledge using advanced
techniques such as SVRF, Cellular Neural Network, Core set, and MACA. The
developed methodology has been compared with contemporary methods using different
statistical measures. Investigations over various satellite images revealed
that considerable success was achieved with the CNN approach. CNN has been
effective in modeling different complex features, and the complexity of the
approach has been considerably reduced using coreset optimization. The system
has dynamically used spectral and spatial information to represent contextual
knowledge via a CNN-Prolog approach. The system has also proved effective in
providing intelligent interpolation and interpretation of random features.
|
1303.6719 | Blind Identification of ARX Models with Piecewise Constant Inputs | cs.SY | Blind system identification is known to be a hard ill-posed problem and
without further assumptions, no unique solution is at hand. In this
contribution, we are concerned with the task of identifying an ARX model from
only output measurements. Driven by the task of identifying systems that are
turned on and off at unknown times, we seek a piecewise constant input and a
corresponding ARX model which approximates the measured outputs. We phrase this
as a rank minimization problem and present a relaxed convex formulation to
approximate its solution. The proposed method was developed to model power
consumption of electrical appliances and is now a part of a bigger energy
disaggregation framework. Code will be made available online.
|
1303.6746 | Exploiting correlation and budget constraints in Bayesian multi-armed
bandit optimization | stat.ML cs.LG | We address the problem of finding the maximizer of a nonlinear smooth
function, that can only be evaluated point-wise, subject to constraints on the
number of permitted function evaluations. This problem is also known as
fixed-budget best arm identification in the multi-armed bandit literature. We
introduce a Bayesian approach for this problem and show that it empirically
outperforms both the existing frequentist counterpart and other Bayesian
optimization methods. The Bayesian approach places emphasis on detailed
modelling, including the modelling of correlations among the arms. As a result,
it can perform well in situations where the number of arms is much larger than
the number of allowed function evaluations, whereas the frequentist counterpart
is inapplicable. This feature enables us to develop and deploy practical
applications, such as automatic machine learning toolboxes. The paper presents
comprehensive comparisons of the proposed approach, Thompson sampling,
classical Bayesian optimization techniques, more recent Bayesian bandit
approaches, and state-of-the-art best arm identification methods. This is the
first comparison of many of these methods in the literature and allows us to
examine the relative merits of their different features.
|
1303.6750 | Sequential testing over multiple stages and performance analysis of data
fusion | stat.ML cs.LG | We describe a methodology for modeling the performance of decision-level data
fusion between different sensor configurations, implemented as part of the
JIEDDO Analytic Decision Engine (JADE). We first discuss a Bayesian network
formulation of classical probabilistic data fusion, which allows elementary
fusion structures to be stacked and analyzed efficiently. We then present an
extension of the Wald sequential test for combining the outputs of the Bayesian
network over time. We discuss an algorithm to compute its performance
statistics and illustrate the approach on some examples. This variant of the
sequential test involves multiple, distinct stages, where the evidence
accumulated from each stage is carried over into the next one, and is motivated
by a need to keep certain sensors in the network inactive unless triggered by
other sensors.
|
1303.6771 | Optimal Power Allocation over Multiple Identical Gilbert-Elliott
Channels | cs.IT math.IT | We study the fundamental problem of power allocation over multiple
Gilbert-Elliott communication channels. In a communication system with time
varying channel qualities, it is important to allocate the limited transmission
power to channels that will be in good state. However, it is very challenging
to do so because channel states are usually unknown when the power allocation
decision is made. In this paper, we derive an optimal power allocation policy
that can maximize the expected discounted number of bits transmitted over an
infinite time span by allocating the transmission power only to those channels
that are believed to be good in the coming time slot. We use the concept of belief
to represent the probability that a channel will be good and derive an optimal
power allocation policy that establishes a mapping from the channel belief to
an allocation decision.
Specifically, we first model this problem as a partially observable Markov
decision process (POMDP), and analytically investigate the structure of the
optimal policy. Then a simple threshold-based policy is derived for a
three-channel communication system. By formulating and solving a linear
programming formulation of this power allocation problem, we further verify
the derived structure of the optimal policy.
|
1303.6775 | Dynamic Provisioning in Next-Generation Data Centers with On-site Power
Production | cs.DS cs.SY | The critical need for clean and economical sources of energy is transforming
data centers, which are primarily energy consumers, into energy producers as well. We
focus on minimizing the operating costs of next-generation data centers that
can jointly optimize the energy supply from on-site generators and the power
grid, and the energy demand from servers as well as power conditioning and
cooling systems. We formulate the cost minimization problem and present an
offline optimal algorithm. For "on-grid" data centers that use only the grid,
we devise a deterministic online algorithm that achieves the best possible
competitive ratio of $2-\alpha_{s}$, where $\alpha_{s}$ is a normalized
look-ahead window size. For "hybrid" data centers that have on-site power
generation in addition to the grid, we develop an online algorithm that
achieves a competitive ratio of at most \textmd{\normalsize {\small
$\frac{P_{\max} (2-\alpha_{s})}{c_{o}+c_{m}/L}
[1+2\frac{P_{\max}-c_{o}}{P_{\max}(1+\alpha_{g})}]$}}, where $\alpha_{s}$ and
$\alpha_{g}$ are normalized look-ahead window sizes, $P_{\max}$ is the maximum
grid power price, and $L$, $c_{o}$, and $c_{m}$ are parameters of an on-site
generator.
Using extensive workload traces from Akamai with the corresponding grid power
prices, we simulate our offline and online algorithms in a realistic setting.
Our offline (resp., online) algorithm achieves a cost reduction of 25.8%
(resp., 20.7%) for a hybrid data center and 12.3% (resp., 7.3%) for an on-grid
data center. The cost reductions are quite significant and make a strong case
for a joint optimization of energy supply and energy demand in a data center. A
hybrid data center provides about 13% additional cost reduction over an on-grid
data center representing the additional cost benefits that on-site power
generation provides over using the grid alone.
|
1303.6777 | A Graphical Language for Real-Time Critical Robot Commands | cs.RO cs.PL cs.SE | Industrial robotics is characterized by sophisticated mechanical components
and highly developed real-time control algorithms. However, the efficient use
of robotic systems is very much limited by existing proprietary programming
methods. In the research project SoftRobot, a software architecture was
developed that enables the programming of complex real-time critical robot
tasks with an object-oriented general purpose language. On top of this
architecture, a graphical language was developed to ease the specification of
complex robot commands, which can then be used as part of robot application
workflows. This paper gives an overview of the design and implementation of
this graphical language and illustrates its usefulness with some examples.
|
1303.6784 | Measuring the likelihood of models for network evolution | stat.AP cs.SI | Many researchers have hypothesised models which explain the evolution of the
topology of a target network. The framework described in this paper gives the
likelihood that the target network arose from the hypothesised model. This
allows rival hypothesised models to be compared for their ability to explain
the target network. A null model (of random evolution) is proposed as a
baseline for comparison. The framework also considers models made from linear
combinations of model components. A method is given for the automatic
optimisation of component weights. The framework is tested on simulated
networks with known parameters and also on real data.
|
1303.6785 | Latency-Bounded Target Set Selection in Social Networks | cs.DS cs.SI math.CO | Motivated by applications in sociology, economy and medicine, we study
variants of the Target Set Selection problem, first proposed by Kempe,
Kleinberg and Tardos. In our scenario one is given a graph $G=(V,E)$, integer
values $t(v)$ for each vertex $v$ (\emph{thresholds}), and the objective is to
determine a small set of vertices (\emph{target set}) that activates a given
number (or a given subset) of vertices of $G$ \emph{within} a prescribed number
of rounds. The activation process in $G$ proceeds as follows: initially, at
round 0, all vertices in the target set are activated; subsequently at each
round $r\geq 1$ every vertex $v$ of $G$ becomes activated if at least $t(v)$ of its
neighbors are already active by round $r-1$. It is known that the problem of
finding a minimum cardinality Target Set that eventually activates the whole
graph $G$ is hard to approximate to a factor better than
$O(2^{\log^{1-\epsilon}|V|})$. In this paper we give \emph{exact} polynomial
time algorithms to find minimum cardinality Target Sets in graphs of bounded
clique-width, and \emph{exact} linear time algorithms for trees.
|
1303.6794 | A likelihood based framework for assessing network evolution models
tested on real network data | cs.SI physics.soc-ph | This paper presents a statistically sound method for using likelihood to
assess potential models of network evolution. The method is tested on data from
five real networks. Data from the internet autonomous system network, from two
photo sharing sites and from a co-authorship network are tested using this
framework.
|
1303.6801 | Enumerating Some Fractional Repetition Codes | cs.IT math.IT | In distributed storage systems (DSS), regenerating codes are used to
optimize bandwidth in the repair process of a failed node. To optimize other
DSS parameters such as computation and disk I/O, Distributed Replication-based
Simple Storage (Dress) Codes consisting of an inner Fractional Repetition (FR)
code and an outer MDS code are commonly used. Thus constructing FR codes is an
important research problem, and several constructions using graphs and designs
have been proposed. In this paper, we present an algorithm for constructing the
node-packet distribution matrix of FR codes and thus enumerate some FR codes up
to a given number of nodes n. We also present algorithms for constructing
regular graphs which give rise to FR codes.
|
1303.6837 | Deterministic and Stochastic Approaches to Supervisory Control Design
for Networked Systems with Time-Varying Communication Delays | cs.SY math.OC | This paper proposes a supervisory control structure for networked systems
with time-varying delays. The control structure, in which a supervisor triggers
the most appropriate controller from a multi-controller unit, aims at improving
the closed-loop performance relative to what can be obtained using a single
robust controller. Our analysis considers average dwell-time switching and is
based on a novel multiple Lyapunov-Krasovskii functional. We develop stability
conditions that can be verified by semi-definite programming, and show that the
associated state feedback synthesis problem also can be solved using convex
optimization tools. Extensions of the analysis and synthesis procedures to the
case when the evolution of the delay mode is described by a Markov chain are
also developed. Simulations on small and large-scale networked control systems
are used to illustrate the effectiveness of our approach.
|
1303.6859 | A practical system for improved efficiency in frequency division
multiplexed wireless networks | cs.NI cs.IT math.IT | Spectral efficiency is a key design issue for all wireless communication
systems. Orthogonal frequency division multiplexing (OFDM) is a very well-known
technique for efficient data transmission over many carriers overlapped in
frequency. Recently, several papers have appeared which describe spectrally
efficient variations of multi-carrier systems where the condition of
orthogonality is dropped. Proposed techniques suffer from two weaknesses:
Firstly, the complexity of generating the signal is increased. Secondly, the
signal detection is computationally demanding. Known methods suffer from either
unusably high complexity or high error rates because of inter-carrier
interference. This work addresses both problems by proposing new transmitter
and receiver architectures whose design is based on the simplification that a
rational Spectrally Efficient Frequency Division Multiplexing (SEFDM) system
can be treated as a set of overlapped and interleaved OFDM systems.
The efficacy of the proposed designs is shown through detailed simulation of
systems with different signal types and carrier dimensions. The decoder is
heuristic but in practice produces very good results which are close to the
theoretical best performance in a variety of settings. The system is able to
produce efficiency gains of up to 20% with negligible impact on the required
signal to noise ratio.
|
1303.6880 | Multi-sample Receivers Increase Information Rates for Wiener Phase Noise
Channels | cs.IT math.IT | A waveform channel is considered where the transmitted signal is corrupted by
Wiener phase noise and additive white Gaussian noise (AWGN). A discrete-time
channel model is introduced that is based on a multi-sample receiver. Tight
lower bounds on the information rates achieved by the multi-sample receiver are
computed by means of numerical simulations. The results show that oversampling
at the receiver is beneficial for both strong and weak phase noise at high
signal-to-noise ratios. The results are compared with results obtained when
using other discrete-time models.
|
1303.6906 | Large scale citation matching using Apache Hadoop | cs.IR cs.DL | During citation matching, links from bibliography entries to
referenced publications are created. Such links are indicators of topical
similarity between linked texts, are used in assessing the impact of the
referenced document and improve navigation in the user interfaces of digital
libraries. In this paper we present a citation matching method and show how to
scale it up to handle large amounts of data using appropriate indexing and the
MapReduce paradigm in the Hadoop environment.
|
1303.6907 | Parameterized Approximability of Maximizing the Spread of Influence in
Networks | cs.DS cs.SI | In this paper, we consider the problem of maximizing the spread of influence
through a social network. Given a graph with a threshold value~$thr(v)$
attached to each vertex~$v$, the spread of influence is modeled as follows: A
vertex~$v$ becomes "active" (influenced) if at least $thr(v)$ of its neighbors
are active. In the corresponding optimization problem the objective is then to
find a fixed number of vertices to activate such that the number of activated
vertices at the end of the propagation process is maximum. We show that this
problem is strongly inapproximable in fpt-time with respect to (w.r.t.)
parameter $k$ even for very restrictive thresholds. In the case that the
threshold of each vertex equals its degree, we prove that the problem is
inapproximable in polynomial time and it becomes $r(n)$-approximable in
fpt-time w.r.t. parameter $k$ for any strictly increasing function $r$.
Moreover, we show that the decision version is W[1]-hard w.r.t. parameter $k$
but becomes fixed-parameter tractable on bounded degree graphs.
|