| id | title | categories | abstract |
|---|---|---|---|
1110.1075
|
The Augmented Complex Kernel LMS
|
cs.LG
|
Recently, a unified framework for adaptive kernel based signal processing of
complex data was presented by the authors, which, besides offering techniques
to map the input data to complex Reproducing Kernel Hilbert Spaces, developed a
suitable Wirtinger-like Calculus for general Hilbert Spaces. In this short
paper, the extended Wirtinger's calculus is adopted to derive complex
kernel-based widely-linear estimation filters. Furthermore, we illuminate
several important characteristics of the widely linear filters. We show that,
although in many cases the gains from adopting widely linear estimation
filters, as alternatives to ordinary linear ones, are marginal, significant
performance improvements can be obtained in the case of kernel-based widely
linear filters.
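As a generic illustration of the widely linear idea (not the authors' kernel/RKHS algorithm, which is omitted here), the sketch below trains a scalar widely linear LMS filter on hypothetical complex data; the true weights, signal values, and step size are all illustrative assumptions.

```python
# Sketch of a scalar widely linear LMS filter (hypothetical data).
# The widely linear model estimates d = A*x + B*conj(x), a relation
# a strictly linear filter y = h*x cannot fit in general.
def widely_linear_lms(samples, mu=0.1, epochs=500):
    h, g = 0j, 0j
    for _ in range(epochs):
        for x, d in samples:
            e = d - (h * x + g * x.conjugate())  # widely linear error
            h += mu * e * x.conjugate()          # LMS update for h
            g += mu * e * x                      # LMS update for g
    return h, g

A, B = 1 + 2j, 0.5 - 0.3j                        # hypothetical true weights
xs = [1 + 1j, -1 + 0.5j, 0.3 - 1j, -0.8 - 0.6j]  # hypothetical inputs
samples = [(x, A * x + B * x.conjugate()) for x in xs]
h, g = widely_linear_lms(samples)
print(abs(h - A) < 1e-6, abs(g - B) < 1e-6)      # True True
```

With consistent noise-free data and a small step size, cycling LMS recovers both weights; a strictly linear filter (g fixed at 0) would leave a residual error here.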
|
1110.1078
|
Fixed point theory and semidefinite programming for computable
performance analysis of block-sparsity recovery
|
cs.IT math.IT math.NA
|
In this paper, we employ fixed point theory and semidefinite programming to
compute the performance bounds on convex block-sparsity recovery algorithms. As
a prerequisite for optimal sensing matrix design, a computable performance
bound would open doors for wide applications in sensor arrays, radar, DNA
microarrays, and many other areas where block-sparsity arises naturally. We
define a family of goodness measures for arbitrary sensing matrices as the
optimal values of certain optimization problems. The reconstruction errors of
convex recovery algorithms are bounded in terms of these goodness measures. We
demonstrate that as long as the number of measurements is relatively large,
these goodness measures are bounded away from zero for a large class of random
sensing matrices, a result parallel to the probabilistic analysis of the block
restricted isometry property. As the primary contribution of this work, we
associate the goodness measures with the fixed points of functions defined by a
series of semidefinite programs. This relation with fixed point theory yields
efficient algorithms with global convergence guarantees to compute the goodness
measures.
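The goodness measures above arise as fixed points of maps defined by semidefinite programs. As a much simpler generic illustration of computing a fixed point by iteration (not the paper's SDP-defined maps), the sketch below iterates x ← cos(x):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=10_000):
    """Iterate x <- g(x) until successive values agree within tol.
    Assumes g is a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        nxt = g(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence")

x_star = fixed_point(math.cos, 1.0)
print(round(x_star, 6))  # 0.739085
```

In the paper's setting each evaluation of the map is itself a semidefinite program, but the outer fixed-point iteration has the same shape.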
|
1110.1091
|
A simulation of the Neolithic transition in the Indus valley
|
q-bio.PE cs.MA
|
The Indus Valley Civilization (IVC) was one of the first great civilizations
in prehistory. This bronze age civilization flourished from the end of the
fourth millennium BC. It disintegrated during the second millennium BC; despite
much research effort, this decline is not well understood. Less research has
been devoted to the emergence of the IVC, which shows continuous cultural
precursors since at least the seventh millennium BC. To understand the decline,
we believe it is necessary to investigate the rise of the IVC, i.e., the
establishment of agriculture and livestock, dense populations and technological
developments 7000--3000 BC. Although much archaeological information is
available, our capability to investigate the system is hindered by poorly
resolved chronology, and by a lack of field work in the intermediate areas
between the Indus valley and Mesopotamia. We thus employ a complementary
numerical simulation to develop a consistent picture of technology,
agropastoralism and population developments in the IVC domain. Results from
this Global Land Use and technological Evolution Simulator show that there is
(1) fair agreement between the simulated timing of the agricultural transition
and radiocarbon dates from early agricultural sites, but the transition is
simulated first in India and then in Pakistan; (2) an independent agropastoralism
developing on the Indian subcontinent; and (3) a positive relationship between
archeological artifact richness and simulated population density which remains
to be quantified.
|
1110.1112
|
Modeling Perceived Relevance for Tail Queries without Click-Through Data
|
cs.IR
|
Click-through data has been used in various ways in Web search such as
estimating relevance between documents and queries. Since only search snippets
are perceived by users before issuing any clicks, the relevance induced by
clicks is usually called \emph{perceived relevance}, which has proven to be
quite useful for Web search. While there is plenty of click data for popular
queries, very little information is available for unpopular tail ones. These
tail queries take a large portion of the search volume but search accuracy for
these queries is usually unsatisfactory due to data sparseness such as limited
click information. In this paper, we study the problem of modeling perceived
relevance for queries without click-through data. Instead of relying on users'
click data, we carefully design a set of snippet features and use them to
approximately capture the perceived relevance. We study the effectiveness of
this set of snippet features in two settings: (1) predicting perceived
relevance and (2) enhancing search engine ranking. Experimental results show
that our proposed model is effective to predict the relative perceived
relevance of Web search results. Furthermore, our proposed snippet features are
effective to improve search accuracy for longer tail queries without
click-through data.
|
1110.1151
|
Mathematical aspects of decentralized control of formations in the plane
|
math.OC cs.SY
|
In formation control, an ensemble of autonomous agents is required to
stabilize at a given configuration in the plane, doing so while agents are
allowed to observe only a subset of the ensemble. As such, formation control
provides a rich class of problems for decentralized control methods and
techniques. Additionally, it can be used to model a wide variety of scenarios
where decentralization is a main characteristic. We introduce here some
mathematical background necessary to address questions of stability in
decentralized control in general and formation control in particular. This
background includes an extension of the notion of global stability to systems
evolving on manifolds and a notion of robustness of feedback control for
nonlinear systems. We then formally introduce the class of formation control
problems, and summarize known results.
|
1110.1152
|
Known unknowns, unknown unknowns and information flow: new concepts in
decentralized control
|
math.OC cs.SY
|
We introduce and analyze a model for decentralized control. The model is
broad enough to include problems such as formation control, decentralization of
the power grid and flocking. The objective of this paper is twofold. First, we
show how the issue of decentralization goes beyond having agents know only part
of the state of the system. In fact, we argue that a complete theory of
decentralization should take into account the fact that agents can be made
aware of only part of the global objective of the ensemble. A second
contribution of this paper is the introduction of a rigorous definition of
information flow for a decentralized system: we show how to attach to a general
nonlinear decentralized system a unique information flow graph that is an
invariant of the system. In order to address some finer issues in decentralized
systems, such as the existence of so-called "information loops", we further
refine the information flow graph to a simplicial complex, more precisely a
Whitney complex. We illustrate the main results on a variety of examples.
|
1110.1193
|
A new class of codes for Boolean masking of cryptographic computations
|
cs.IT math.IT
|
We introduce a new class of rate one-half binary codes: {\bf complementary
information set codes.} A binary linear code of length $2n$ and dimension $n$
is called a complementary information set code (CIS code for short) if it has
two disjoint information sets. This class of codes contains self-dual codes as
a subclass. It is connected to graph correlation immune Boolean functions of
use in the security of hardware implementations of cryptographic primitives.
Such codes make it possible to reduce the cost of masking cryptographic
algorithms against side-channel attacks. In this paper we investigate this new class of
codes: we give optimal or best known CIS codes of length $<132.$ We derive
general constructions based on cyclic codes and on double circulant codes. We
derive a Varshamov-Gilbert bound for long CIS codes, and show that they can all
be classified in small lengths $\le 12$ by the building up construction. Some
nonlinear permutations are constructed by using $\Z_4$-codes, based on the
notion of dual distance of an unrestricted code.
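The defining property above is easy to check for small codes. The sketch below (a toy illustration, not the paper's constructions) verifies that a hypothetical [4, 2] generator matrix has two disjoint information sets, i.e. that both column submatrices are invertible over GF(2):

```python
def gf2_invertible(mat):
    """Check an n x n binary matrix is invertible over GF(2)
    by Gaussian elimination with XOR row operations."""
    m = [row[:] for row in mat]
    n = len(m)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True

def is_cis(G, left, right):
    """G: n x 2n generator matrix; left/right: disjoint column index sets.
    True if both index sets are information sets."""
    sub = lambda cols: [[row[c] for c in cols] for row in G]
    return gf2_invertible(sub(left)) and gf2_invertible(sub(right))

# Toy [4, 2] code: columns {0,1} and {2,3} are disjoint information sets.
G = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
print(is_cis(G, [0, 1], [2, 3]))  # True
```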
|
1110.1198
|
On Joint Diagonalisation for Dynamic Network Analysis
|
cs.SI physics.soc-ph
|
Joint diagonalisation (JD) is a technique used to estimate an average
eigenspace of a set of matrices. Whilst it has been used successfully in many
areas to track the evolution of systems via their eigenvectors, its application
in network analysis is novel. The key focus in this paper is the use of JD on
matrices of spanning trees of a network. This is especially useful in the case
of real-world contact networks in which a single underlying static graph does
not exist. The average eigenspace may be used to construct a graph which
represents the `average spanning tree' of the network or a representation of
the most common propagation paths. We then examine the distribution of
deviations from the average and find that this distribution in real-world
contact networks is multi-modal; thus indicating several \emph{modes} in the
underlying network. These modes are identified and are found to correspond to
particular times. Thus JD may be used to decompose the behaviour, in time, of
contact networks and produce average static graphs for each time. This may be
viewed as a mixture between a dynamic and static graph approach to contact
network analysis.
|
1110.1208
|
Rotation, Scaling and Translation Analysis of Biometric Signature
Templates
|
cs.CV cs.CR cs.IT cs.MM eess.IV math.IT
|
Biometric authentication systems that make use of signature verification
methods often render optimum performance only under limited and restricted
conditions. Such methods utilize several training samples so as to achieve high
accuracy. Moreover, several constraints are imposed on the end-user so that the
system may work optimally, and as expected. For example, the user is made to
sign within a small box, in order to limit their signature to a predefined set
of dimensions, thus eliminating scaling. Moreover, the angular rotation with
respect to the reference signature, which is inadvertently introduced as human
error, hampers the performance of biometric signature verification systems.
To eliminate this, traditionally, a user is asked to sign exactly on top of a
reference line. In this paper, we propose a robust system that optimizes the
signature obtained from the user for a large range of variation in
Rotation-Scaling-Translation (RST) and resolves these error parameters in the
user signature according to the reference signature stored in the database.
|
1110.1209
|
Audio Watermarking with Error Correction
|
cs.CR cs.IT cs.MM eess.AS math.IT
|
In recent times, communication through the internet has tremendously
facilitated the distribution of multimedia data. Although this is indubitably a
boon, one of its repercussions is that it has also given impetus to the
notorious issue of online music piracy. Unethical attempts can also be made to
deliberately alter such copyrighted data and thus, misuse it. Copyright
violation by means of unauthorized distribution, as well as unauthorized
tampering of copyrighted audio data is an important technological and research
issue. Audio watermarking has been proposed as a solution to tackle this issue.
The main purpose of audio watermarking is to protect against possible threats
to the audio data and in case of copyright violation or unauthorized tampering,
authenticity of such data can be disputed by virtue of audio watermarking.
|
1110.1220
|
Standard Quantum Teleportation of an Arbitrary N-Qubit State,
Non-Existence of Magic Basis and Existence of Magic Partial Bases for 2N
Entangled Qubit States with N>1
|
quant-ph cs.IT math.IT
|
We present a simple and precise protocol for standard quantum teleportation
of N-qubit state, considering the most general resource q-channel and Bell
states. We find conditions on these states for perfect teleportation and give
explicitly the unitary transformation required to be done by Bob for achieving
perfect teleportation. We discuss the connection of our simple theory with the
complicated related work on this subject and with character matrix,
transformation, judgment and kernel operators defined in this context. We also
prove that the magic basis discussed by Hill and Wootters [Phys. Rev. Lett. 78
(1997) 5022] does not exist for entangled 2N-qubit states with N > 1 but magic
partial bases, similar to those discussed recently by Prakash and Maurya
[Optics Commun. 284 (2011) 5024] do exist. We give explicitly all magic partial
bases for N = 2.
|
1110.1228
|
Order-distance and other metric-like functions on jointly distributed
random variables
|
math.PR cs.AI math.ST q-bio.QM stat.TH
|
We construct a class of real-valued nonnegative binary functions on a set of
jointly distributed random variables, which satisfy the triangle inequality and
vanish at identical arguments (pseudo-quasi-metrics). These functions are
useful in dealing with the problem of selective probabilistic causality
encountered in behavioral sciences and in quantum physics. The problem reduces
to that of ascertaining the existence of a joint distribution for a set of
variables with known distributions of certain subsets of this set. Any
violation of the triangle inequality or its consequences by one of our
functions when applied to such a set rules out the existence of this joint
distribution. We focus on an especially versatile and widely applicable
pseudo-quasi-metric called an order-distance and its special case called a
classification distance.
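One concrete instance of such a function on categorical variables is the probability of mismatch, d(X, Y) = Pr[X ≠ Y], in the spirit of the classification distance mentioned above. The sketch below (with a hypothetical joint distribution) computes it and checks the triangle inequality:

```python
def mismatch_distance(joint, i, j):
    """d(X_i, X_j) = Pr[X_i != X_j] for a joint pmf given as
    {outcome_tuple: probability} (hypothetical illustration data)."""
    return sum(p for outcome, p in joint.items() if outcome[i] != outcome[j])

# Toy joint distribution over three binary variables (X0, X1, X2).
joint = {(0, 0, 0): 0.4, (1, 1, 0): 0.3, (0, 1, 1): 0.2, (1, 0, 1): 0.1}

d01 = mismatch_distance(joint, 0, 1)   # mass of outcomes with X0 != X1
d12 = mismatch_distance(joint, 1, 2)
d02 = mismatch_distance(joint, 0, 2)
# Triangle inequality holds, consistent with a joint distribution existing.
print(d02 <= d01 + d12)  # True
```

Conversely, if distances measured on separate subsets of variables violated the triangle inequality, no joint distribution could reproduce them — the diagnostic use described in the abstract.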
|
1110.1237
|
Free Deterministic Equivalents, Rectangular Random Matrix Models, and
Operator-Valued Free Probability Theory
|
cs.IT math.IT math.OA
|
Motivated by the asymptotic collective behavior of random and deterministic
matrices, we propose an approximation (called "free deterministic equivalent")
to quite general random matrix models, by replacing the matrices with operators
satisfying certain freeness relations. We comment on the relation between our
free deterministic equivalent and deterministic equivalents considered in the
engineering literature. We consider not only the case of square matrices,
but also show how rectangular matrices can be treated. Furthermore, we
emphasize how operator-valued free probability techniques can be used to solve
our free deterministic equivalents.
As an illustration of our methods we consider a random matrix model studied
first by R. Couillet, J. Hoydis, and M. Debbah. We show how its free
deterministic equivalent can be treated and we thus recover in a conceptual way
their result.
On a technical level, we generalize a result from scalar valued free
probability, by showing that randomly rotated deterministic matrices of
different sizes are asymptotically free from deterministic rectangular
matrices, with amalgamation over a certain algebra of projections.
In the Appendix, we show how estimates for differences between Cauchy
transforms can be extended from a neighborhood of infinity to a region close to
the real axis. This is of some relevance if one wants to compare the original
random matrix problem with its free deterministic equivalent.
|
1110.1259
|
Characterizing and Improving Generalized Belief Propagation Algorithms
on the 2D Edwards-Anderson Model
|
cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.IT math.IT
|
We study the performance of different message passing algorithms in the two
dimensional Edwards Anderson model. We show that the standard Belief
Propagation (BP) algorithm converges only at high temperature to a paramagnetic
solution. Then, we test a Generalized Belief Propagation (GBP) algorithm,
derived from a Cluster Variational Method (CVM) at the plaquette level. We
compare its performance with BP and with other algorithms derived under the
same approximation: Double Loop (DL) and a two-way message passing algorithm
(HAK). The plaquette-CVM approximation improves BP in at least three ways: the
quality of the paramagnetic solution at high temperatures, a better estimate
(lower) for the critical temperature, and the fact that the GBP message passing
algorithm also converges to non-paramagnetic solutions. The lack of convergence
of the standard GBP message passing algorithm at low temperatures seems to be
related to the implementation details and not to the appearance of long range
order. In fact, we prove that a gauge invariance of the constrained CVM free
energy can be exploited to derive a new message passing algorithm which
converges at even lower temperatures. In all its region of convergence this new
algorithm is faster than HAK and DL by some orders of magnitude.
|
1110.1301
|
Predicting User Actions in Software Processes
|
cs.SE cs.AI
|
This paper describes an approach for assisting users (e.g. software
architects) in software processes. The approach observes the user's actions and
tries to predict the next step. To this end, we draw on machine learning
approaches (sequence learning) and adapt them for use in software processes.
Keywords: Software engineering, Software process description languages,
Software processes, Machine learning, Sequence prediction
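A minimal sketch of the sequence-learning idea (illustrative only, not the paper's model): a first-order Markov predictor that proposes the action most often observed to follow the current one, trained on hypothetical tool-usage logs.

```python
from collections import defaultdict, Counter

class NextActionPredictor:
    """First-order Markov model: predict the most frequent action
    that followed the current action in the training logs."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, action_sequence):
        for prev, nxt in zip(action_sequence, action_sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current_action):
        followers = self.transitions.get(current_action)
        if not followers:
            return None       # action never seen mid-sequence
        return followers.most_common(1)[0][0]

# Hypothetical log of an architect's actions.
p = NextActionPredictor()
p.train(["edit", "compile", "test", "edit", "compile", "deploy"])
print(p.predict("edit"))  # compile
```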
|
1110.1303
|
Discovering patterns of correlation and similarities in software project
data with the Circos visualization tool
|
cs.SE cs.AI
|
Software cost estimation based on multivariate data from completed projects
requires the building of efficient models. These models essentially describe
relations in the data, either on the basis of correlations between variables or
of similarities between the projects. The continuous growth of the amount of
data gathered and the need to perform preliminary analysis in order to discover
patterns able to drive the building of reasonable models lead the researchers
towards intelligent and time-saving tools which can effectively describe data
and their relationships. The goal of this paper is to suggest an innovative
visualization tool, widely used in bioinformatics, which represents relations
in data in an aesthetic and intelligent way. In order to illustrate the
capabilities of the tool, we use a well known dataset from software engineering
projects.
|
1110.1328
|
Bayesian Locality Sensitive Hashing for Fast Similarity Search
|
cs.DB cs.AI cs.DS cs.IR
|
Given a collection of objects and an associated similarity measure, the
all-pairs similarity search problem asks us to find all pairs of objects with
similarity greater than a certain user-specified threshold. Locality-sensitive
hashing (LSH) based methods have become a very popular approach for this
problem. However, most such methods only use LSH for the first phase of
similarity search - i.e. efficient indexing for candidate generation. In this
paper, we present BayesLSH, a principled Bayesian algorithm for the subsequent
phase of similarity search - performing candidate pruning and similarity
estimation using LSH. A simpler variant, BayesLSH-Lite, which calculates
similarities exactly, is also presented. BayesLSH is able to quickly prune away
a large majority of the false positive candidate pairs, leading to significant
speedups over baseline approaches. For BayesLSH, we also provide probabilistic
guarantees on the quality of the output, both in terms of accuracy and recall.
Finally, the quality of BayesLSH's output can be easily tuned and does not
require any manual setting of the number of hashes to use for similarity
estimation, unlike standard approaches. For two state-of-the-art candidate
generation algorithms, AllPairs and LSH, BayesLSH enables significant speedups,
typically in the range 2x-20x for a wide variety of datasets.
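Not BayesLSH itself, but a minimal sketch of the standard first phase it builds on: MinHash signatures plus LSH banding for candidate generation. The documents, vocabulary, and parameters below are hypothetical, and a deterministic CRC hash stands in for the usual random hash families.

```python
import zlib
from collections import defaultdict

N_HASH, BANDS = 20, 10          # 2 signature rows per band
ROWS = N_HASH // BANDS

def minhash(items):
    """MinHash signature: per hash seed, the minimum hash over the set."""
    return [min(zlib.crc32(f"{seed}:{x}".encode()) for x in items)
            for seed in range(N_HASH)]

def candidate_pairs(sets):
    """LSH banding: objects whose signatures agree on any band become
    candidate pairs for a separate verification/pruning phase."""
    buckets = defaultdict(set)
    for key, items in sets.items():
        sig = minhash(items)
        for b in range(BANDS):
            buckets[(b, tuple(sig[b * ROWS:(b + 1) * ROWS]))].add(key)
    return {(a, b) for group in buckets.values()
            for a in group for b in group if a < b}

docs = {
    "a": {"lsh", "hash", "similarity", "search", "index", "prune",
          "bayes", "pair", "candidate", "estimate"},
    "b": {"lsh", "hash", "similarity", "search", "index", "prune",
          "bayes", "pair", "candidate", "threshold"},  # near-duplicate of "a"
    "c": {"wavelet", "face", "image", "entropy", "band"},
}
print(candidate_pairs(docs))  # "a" and "b" should collide; "c" should not
```

BayesLSH's contribution sits after this step: pruning and similarity estimation for the generated candidates with probabilistic accuracy/recall guarantees.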
|
1110.1347
|
A Dual-based Method for Resource Allocation in OFDMA-SDMA Systems with
Minimum Rate Constraints
|
cs.IT math.IT
|
We consider multi-antenna base stations using orthogonal frequency-division
multiple access (OFDMA) and space division multiple access (SDMA) techniques to
serve single antenna users, where some of those users have minimum rate
requirements and must be served in the current time slot (real time users),
while others do not have strict timing constraints (non real time users) and
are served on a best effort basis. The resource allocation problem is to find
the user assignment to subcarriers and the transmit beamforming vectors that
maximize a linear utility function of the user rates subject to power and
minimum rate constraints. The exact optimal solution to this problem cannot be
reasonably obtained for practical parameter values of the communication
system. We thus derive a dual problem formulation whose optimal solution
provides an upper bound to all feasible solutions and can be used to benchmark
the performance of any heuristic method used to solve this problem. We also
derive from this dual optimal solution a primal-feasible dual-based method to
solve the problem and we compare its performance and computation time against a
standard weight adjustment method. We find that our method follows the dual
optimal bound more closely than the weight adjustment method. This off-line
algorithm can serve as the basis to develop more efficient heuristic methods.
|
1110.1349
|
Supporting the Curation of Twitter User Lists
|
cs.SI cs.CY physics.soc-ph
|
Twitter introduced lists in late 2009 as a means of curating tweets into
meaningful themes. Lists were quickly adopted by media companies as a means of
organising content around news stories. The curation of these lists is thus
important: they should contain the key information gatekeepers and present a
balanced perspective on the story. Identifying members to add to a list on an
emerging topic is a delicate process. From a network analysis perspective there
are a number of views on the Twitter network that can be explored, e.g.
followers, retweets, mentions, etc. We present a process for integrating these
views in order to recommend authoritative commentators to include on a list.
This process is evaluated on manually curated lists about unrest in Bahrain and
the Iowa caucuses for the 2012 US election.
|
1110.1358
|
Runtime Guarantees for Regression Problems
|
cs.DS cs.CV
|
We study theoretical runtime guarantees for a class of optimization problems
that occur in a wide variety of inference tasks. These problems are
motivated by the lasso framework and have applications in machine learning and
computer vision.
Our work shows a close connection between these problems and core questions
in algorithmic graph theory. While this connection demonstrates the
difficulties of obtaining runtime guarantees, it also suggests an approach of
using techniques originally developed for graph algorithms.
We then show that most of these problems can be formulated as a grouped least
squares problem, and give efficient algorithms for this formulation. Our
algorithms rely on routines for solving quadratic minimization problems, which
in turn are equivalent to solving linear systems. Finally we present some
experimental results on applying our approximation algorithm to image
processing problems.
|
1110.1388
|
Effects on quantum physics of the local availability of mathematics and
space time dependent scaling factors for number systems
|
quant-ph cs.IT gr-qc math-ph math.IT math.MP
|
The work is based on two premises: local availability of mathematics to an
observer at any space time location, and the observation that number systems,
as structures satisfying axioms for the number type being considered, can be
scaled by arbitrary, positive real numbers. Local availability leads to the
assignment of mathematical universes, $V_{x},$ to each point, $x,$ of space
time. $V_{x}$ contains all the mathematics that an observer, $O_{x},$ at $x,$
can know. Each $V_{x}$ contains many types of mathematical systems. These
include the different types of numbers (natural numbers, integers, rationals,
and real and complex numbers), Hilbert spaces, algebras, and many other types
of systems. Space time dependent scaling of number systems is used to define
representations, in $V_{x}$, of real and complex number systems in $V_{y}$. The
representations are scaled by a factor $r_{y,x}$ relative to the systems in
$V_{x}.$ For $y$ a neighbor point of $x,$ $r_{y,x}$ is the exponential of the
scalar product of a gauge field, $\vec{A}(x),$ and the vector from $x$ to $y.$
For $y$ distant from $x,$ $r_{y,x}$ is a path integral from $x$ to $y.$ Some
consequences of the two premises will be examined. Number scaling has no effect
on general comparisons of numbers obtained as computations or as experimental
outputs. The effect is limited to mathematical expressions that include space
or space time integrals or derivatives. The effect of $\vec{A}$ on wave packets
and canonical momenta in quantum theory, and some properties of $\vec{A}$ in
gauge theories, is described.
|
1110.1391
|
A Comparison of Different Machine Transliteration Models
|
cs.CL cs.AI
|
Machine transliteration is a method for automatically converting words in one
language into phonetically equivalent ones in another language. Machine
transliteration plays an important role in natural language applications such
as information retrieval and machine translation, especially for handling
proper nouns and technical terms. Four machine transliteration models --
grapheme-based transliteration model, phoneme-based transliteration model,
hybrid transliteration model, and correspondence-based transliteration model --
have been proposed by several researchers. To date, however, there has been
little research on a framework in which multiple transliteration models can
operate simultaneously. Furthermore, there has been no comparison of the four
models within the same framework and using the same data. We addressed these
problems by 1) modeling the four models within the same framework, 2) comparing
them under the same conditions, and 3) developing a way to improve machine
transliteration through this comparison. Our comparison showed that the hybrid
and correspondence-based models were the most effective and that the four
models can be used in a complementary manner to improve machine transliteration
performance.
|
1110.1394
|
Learning Sentence-internal Temporal Relations
|
cs.CL cs.AI
|
In this paper we propose a data-intensive approach for inferring
sentence-internal temporal relations. Temporal inference is relevant for
practical NLP applications which either extract or synthesize temporal
information (e.g., summarisation, question answering). Our method bypasses the
need for manual coding by exploiting the presence of markers like "after", which
overtly signal a temporal relation. We first show that models trained on main
and subordinate clauses connected with a temporal marker achieve good
performance on a pseudo-disambiguation task simulating temporal inference
(during testing the temporal marker is treated as unseen and the models must
select the right marker from a set of possible candidates). Secondly, we assess
whether the proposed approach holds promise for the semi-automatic creation of
temporal annotations. Specifically, we use a model trained on noisy and
approximate data (i.e., main and subordinate clauses) to predict
intra-sentential relations present in TimeBank, a corpus annotated with rich
temporal information. Our experiments compare and contrast several
probabilistic models differing in their feature space, linguistic assumptions
and data requirements. We evaluate performance against gold standard corpora
and also against human subjects.
|
1110.1409
|
Good Fences: The Importance of Setting Boundaries for Peaceful
Coexistence
|
physics.soc-ph cs.SI
|
We consider the conditions of peace and violence among ethnic groups, testing
a theory designed to predict the locations of violence and interventions that
can promote peace. Characterizing the model's success in predicting peace
requires examples where peace prevails despite diversity. Switzerland is
recognized as a country of peace, stability and prosperity. This is surprising
because of its linguistic and religious diversity, which in other parts of the
world leads to conflict and violence. Here we analyze how peaceful stability is
maintained. Our analysis shows that peace does not depend on integrated
coexistence, but rather on well defined topographical and political boundaries
separating groups. Mountains and lakes are an important part of the boundaries
between sharply defined linguistic areas. Political canton and circle
(sub-canton) boundaries often separate religious groups. Where such boundaries
do not appear to be sufficient, we find that specific aspects of the population
distribution either guarantee sufficient separation or sufficient mixing to
inhibit intergroup violence according to the quantitative theory of conflict.
In exactly one region, a porous mountain range does not adequately separate
linguistic groups and violent conflict has led to the recent creation of the
canton of Jura. Our analysis supports the hypothesis that violence between
groups can be inhibited by physical and political boundaries. A similar
analysis of the area of the former Yugoslavia shows that during widespread
ethnic violence existing political boundaries did not coincide with the
boundaries of distinct groups, but peace prevailed in specific areas where they
did coincide. The success of peace in Switzerland may serve as a model to
resolve conflict in other ethnically diverse countries and regions of the
world.
|
1110.1416
|
The matrices of argumentation frameworks
|
cs.IT cs.AI math.IT
|
We introduce matrices and their blocks into Dung's theory of argumentation
frameworks. It is shown that each argumentation framework has a matrix
representation, and that the common extension-based semantics of argumentation
frameworks can be characterized by blocks of the matrix and their relations. In
contrast with the traditional directed-graph method, the matrix approach has
the advantage of computability. It is therefore a promising direction to bring
matrix theory into the research of argumentation frameworks and related areas.
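A minimal sketch of the matrix view (illustrative only, not the paper's block machinery): the attack relation stored as a 0/1 matrix, with conflict-freeness and admissibility of a set checked by matrix lookups. The example framework is hypothetical.

```python
def attack_matrix(args, attacks):
    """Adjacency matrix M of an argumentation framework:
    M[i][j] = 1 iff argument i attacks argument j."""
    idx = {a: i for i, a in enumerate(args)}
    m = [[0] * len(args) for _ in args]
    for a, b in attacks:
        m[idx[a]][idx[b]] = 1
    return m, idx

def conflict_free(m, idx, s):
    """No member of s attacks another member of s."""
    return all(m[idx[a]][idx[b]] == 0 for a in s for b in s)

def admissible(m, idx, s):
    """Conflict-free, and every attacker of a member is counter-attacked
    by some member (the set defends itself)."""
    if not conflict_free(m, idx, s):
        return False
    members = [idx[a] for a in s]
    for a in members:
        for attacker in range(len(m)):
            if m[attacker][a] and not any(m[d][attacker] for d in members):
                return False
    return True

# Toy framework: a attacks b, b attacks c.
m, idx = attack_matrix(["a", "b", "c"], [("a", "b"), ("b", "c")])
print(admissible(m, idx, {"a", "c"}))  # True: a defends c against b
print(admissible(m, idx, {"b"}))       # False: b's attacker a is unanswered
```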
|
1110.1428
|
Product Review Summarization based on Facet Identification and Sentence
Clustering
|
cs.CL cs.DL
|
Product reviews have nowadays become an important source of information, not
only for customers to find opinions about products easily and share their
reviews with peers, but also for product manufacturers to get feedback on their
products. As the number of product reviews grows, it becomes difficult for
users to search and utilize these resources in an efficient way. In this work,
we build a product review summarization system that can automatically process a
large collection of reviews and aggregate them to generate a concise summary.
More importantly, the drawback of existing product summarization systems is
that they cannot provide the underlying reasons to justify users' opinions. In
our method, we solve this problem by applying clustering, prior to selecting
representative candidates for summarization.
|
1110.1468
|
Mathematical aspects of degressive proportionality
|
physics.soc-ph cs.SI
|
We analyze properties of apportionment functions in the context of the problem of
allocating seats in the European Parliament. Necessary and sufficient
conditions for apportionment functions are investigated. Some exemplary
families of apportionment functions are specified and the corresponding
partitions of the seats in the European Parliament among the Member States of
the European Union are presented. Although the choice of the allocation
functions is theoretically unlimited, we show that the constraints are so
strong that the acceptable functions lead to rather similar solutions.
|
1110.1470
|
A Constraint-Satisfaction Parser for Context-Free Grammars
|
cs.CL
|
Traditional language processing tools constrain language designers to
specific kinds of grammars. In contrast, model-based language specification
decouples language design from language processing. As a consequence,
model-based language specification tools need general parsers able to parse
unrestricted context-free grammars. As languages specified following this
approach may be ambiguous, parsers must deal with ambiguities. Model-based
language specification also allows the definition of associativity, precedence,
and custom constraints. Therefore parsers generated by model-driven language
specification tools need to enforce constraints. In this paper, we propose
Fence, an efficient bottom-up chart parser with lexical and syntactic ambiguity
support that allows the specification of constraints and, therefore, enables
the use of model-based language specification in practice.
|
1110.1485
|
A Face Recognition Scheme using Wavelet Based Dominant Features
|
cs.CV
|
In this paper, a multi-resolution feature extraction algorithm for face
recognition is proposed based on two-dimensional discrete wavelet transform
(2D-DWT), which efficiently exploits the local spatial variations in a face
image. For the purpose of feature extraction, instead of considering the entire
face image, an entropy-based local band selection criterion is developed, which
selects high-informative horizontal segments from the face image. In order to
capture the local spatial variations within these high-informative horizontal
bands precisely, the horizontal band is segmented into several small spatial
modules. Dominant wavelet coefficients corresponding to each local region
residing inside those horizontal bands are selected as features. In the
selection of the dominant coefficients, a threshold criterion is proposed,
which not only drastically reduces the feature dimension but also provides high
within-class compactness and high between-class separability. A principal
component analysis is performed to further reduce the dimensionality of the
feature space. Extensive experimentation is carried out upon standard face
databases and a very high degree of recognition accuracy is achieved by the
proposed method in comparison to those obtained by some of the existing
methods.
|
1110.1490
|
A Novel Approach for Password Authentication using Brain-State-In-A-Box (BSB)
Model
|
cs.CR cs.NE
|
Authentication is the act of confirming the truth of an attribute of a datum
or entity. This might involve confirming the identity of a person, tracing the
origins of an artefact, ensuring that a product is what its packaging and
labelling claim it to be, or assuring that a computer program is a trusted one.
The authentication of information can pose special problems (especially
man-in-the-middle attacks), and is often bound up with authenticating
identity. Password authentication using the Brain-State-In-A-Box model is
presented in this paper. We discuss a Brain-State-In-A-Box scheme for textual
and graphical passwords, in which the password is converted into probabilistic
values, and we show how these probabilistic values are obtained for both text
and graphical images. In comparison to existing layered neural network
techniques, the proposed method provides better accuracy and quicker response
time for registration and password changes.
|
1110.1494
|
Counterflow in Evacuations
|
physics.soc-ph cs.MA
|
It is shown in this work that the average individual egress time and other
performance indicators for the egress of people from a building can be
improved under certain circumstances if counterflow occurs. These
circumstances include widely varying walking speeds and two exits located at
different distances and with different capacities. The result is obtained both
with a paper-and-pencil calculation and with a microsimulation of an example
scenario. Since the difficulty of exit signage with counterflow remains, one
cannot conclude from this result that an emergency evacuation procedure with
counterflow would really be the better variant.
|
1110.1495
|
A Probabilistic Approach for Authenticating Text or Graphical Passwords
Using Back Propagation
|
cs.CR cs.NE
|
Password authentication is a common approach to system security, and it is
also a very important procedure for gaining access to user resources. In
conventional password authentication methods, a server has to authenticate the
legitimate user. In our proposed method, users can freely choose their
passwords from a defined character set, or they can use a graphical image as a
password, and that input is normalized. Neural networks have recently been
used for password authentication in order to overcome the pitfalls of
traditional password authentication methods. In this paper we propose a method
for password authentication using alphanumeric and graphical passwords. We use
the Back Propagation algorithm for both alphanumeric (text) and graphical
passwords, by which the level of security can be enhanced. The test results
show that converting user passwords into probabilistic values enhances the
security of the system.
|
1110.1509
|
A Comparative Experiment of Several Shape Methods in Recognizing Plants
|
cs.CV
|
Shape is an important aspect in recognizing plants. Several approaches have
been introduced to identify objects, including plants. Combinations of
geometric features such as aspect ratio, compactness, and dispersion, or
moments such as moment invariants, have usually been used to identify plants.
In this research, a comparative experiment on 4 methods of identifying plants
using shape features was carried out. Two approaches that had never been used
in plant identification before, Zernike moments and the Polar Fourier
Transform (PFT), were incorporated. The experimental comparison was done on 52
kinds of plants with various shapes. As a result, PFT gave the best
performance, with 64% accuracy, and outperformed the other methods.
|
1110.1513
|
Foliage Plant Retrieval using Polar Fourier Transform, Color Moments and
Vein Features
|
cs.CV
|
This paper proposes a method that combines the Polar Fourier Transform, color
moments, and vein features to retrieve leaf images based on a query leaf
image. The method is very useful for helping people recognize foliage plants,
i.e., plants that have various colors and unique patterns in their leaves. The
colors and patterns are therefore information that should be taken into
account in plant identification. To compare the performance of the retrieval
system with other results, the experiments used the Flavia dataset, which is
very popular in plant recognition. The results show that the method gave
better performance than PNN, SVM, and the Fourier Transform. The method was
also tested using foliage plants with various colors, achieving an accuracy of
90.80% for 50 kinds of plants.
|
1110.1514
|
Blackwell Approachability and Minimax Theory
|
cs.GT cs.LG
|
This manuscript investigates the relationship between Blackwell
Approachability, a stochastic vector-valued repeated game, and minimax theory,
a single-play scalar-valued scenario. First, it is established in a general
setting --- one not permitting invocation of minimax theory --- that
Blackwell's Approachability Theorem and its generalization due to Hou are still
valid. Second, minimax structure grants a result in the spirit of Blackwell's
weak-approachability conjecture, later resolved by Vieille, that any set is
either approachable by one player, or avoidable by the opponent. This analysis
also reveals a strategy for the opponent.
|
1110.1519
|
Comparison of Radio Propagation Models for Long Term Evolution (LTE)
Network
|
cs.IT math.IT
|
This paper concerns the radio propagation models used for the upcoming 4th
Generation (4G) of cellular networks known as Long Term Evolution (LTE). The
radio wave propagation model, or path loss model, plays a very significant
role in the planning of any wireless communication system. In this paper, a
comparison is made between different radio propagation models proposed for
LTE: the Stanford University Interim (SUI) model, the Okumura model, the COST
231 Hata model, the COST Walfisch-Ikegami model and the Ericsson 9999 model.
The comparison is made using different terrains, e.g. urban, suburban and
rural areas. The SUI model shows the lowest path loss in all terrains, while
the COST 231 Hata model exhibits the highest path loss in urban areas and the
COST Walfisch-Ikegami model has the highest path loss in suburban and rural
environments.
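For concreteness, one of the compared models, the COST 231 Hata model, has a standard closed form (valid roughly for 1500 to 2000 MHz, base-station heights of 30 to 200 m, and distances of 1 to 20 km). The sketch below uses the small/medium-city correction factor; function and parameter names are mine:

```python
import math

def cost231_hata_db(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
    """COST 231 Hata median path loss (dB) for small/medium cities.

    f_mhz:      carrier frequency in MHz (~1500-2000)
    d_km:       transmitter-receiver distance in km (~1-20)
    h_base_m:   base-station antenna height in m (~30-200)
    h_mobile_m: mobile antenna height in m (~1-10)
    """
    # mobile-antenna correction factor for small/medium cities
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    c = 3.0 if metropolitan else 0.0  # clutter correction (dB)
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km) + c)
```

At 1800 MHz with a 30 m base station, a 1.5 m mobile, and 1 km separation, this evaluates to roughly 136 dB, and the predicted loss grows with distance, consistent with the terrain comparison in the abstract.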
|
1110.1522
|
Detecting Collusive Cliques in Futures Markets Based on Trading
Behaviors from Real Data
|
q-fin.TR cs.NE
|
In financial markets, abnormal trading behaviors pose a serious challenge to
market surveillance and risk management. Worse, there is an increasing number
of abnormal trading events in which some experienced traders form a collusive
clique and collaborate to manipulate some instruments, thus misleading other
investors, by applying similar trading behaviors to maximize their personal
benefits. In this paper, a method is proposed to detect the hidden collusive
cliques involved in an instrument of futures markets by first calculating the
correlation coefficient between any two eligible unified aggregated time
series of signed order volume, and then combining the connected components
from multiple sparsified weighted graphs constructed using the correlation
matrices, where each retained correlation coefficient is above a
user-specified threshold. Experiments conducted on real order data from the
Shanghai Futures Exchange show that the proposed method can effectively detect
suspect collusive cliques. A tool based on the proposed method has been
deployed in the exchange as a pilot application for futures market
surveillance and risk management.
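The core of the pipeline (correlate the signed-order-volume series, keep edges above the threshold, extract connected components) can be sketched as follows. This is a minimal single-graph illustration; the paper's unification and aggregation of the raw order series, and its combination of components across multiple graphs, are omitted:

```python
import numpy as np

def collusive_components(volume_series, threshold):
    """volume_series: (n_traders, T) array of signed order volumes.
    Returns the connected components (of size > 1) of the graph linking
    two traders whose correlation coefficient exceeds `threshold`."""
    corr = np.corrcoef(volume_series)
    n = corr.shape[0]
    adj = (corr >= threshold) & ~np.eye(n, dtype=bool)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, component = [start], []
        seen.add(start)
        while stack:                      # depth-first traversal
            u = stack.pop()
            component.append(u)
            for v in np.flatnonzero(adj[u]):
                if v not in seen:
                    seen.add(int(v))
                    stack.append(int(v))
        if len(component) > 1:            # singletons are not cliques
            components.append(sorted(component))
    return components
```

On synthetic data where two traders share a common driving signal while the rest trade independently, the shared pair is recovered as one component.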
|
1110.1538
|
Characteristics of Invariant Weights Related to Code Equivalence over
Rings
|
math.RA cs.IT math.IT
|
The Extension Theorem states that, for a given weight on the alphabet, every
linear isometry between linear codes extends to a monomial transformation of
the entire space. This theorem has been proved for several weights and
alphabets, including the original MacWilliams Equivalence Theorem for the
Hamming weight on codes over finite fields. The question remains: what
conditions must a weight satisfy so that the Extension Theorem will hold? In
this paper we provide an algebraic framework for determining such conditions,
generalising the approach taken in [Greferath, Honold '06].
|
1110.1591
|
Co-evolutionary network approach to cultural dynamics controlled by
intolerance
|
physics.soc-ph cs.SI
|
Starting from Axelrod's model of cultural dissemination, we introduce a
rewiring probability, enabling agents to cut the links with their unfriendly
neighbors if their cultural similarity is below a tolerance parameter. For low
values of tolerance, rewiring promotes the convergence to a frozen monocultural
state. However, intermediate tolerance values prevent rewiring once the network
is fragmented, resulting in a multicultural society even for values of initial
cultural diversity in which the original Axelrod model reaches globalization.
|
1110.1628
|
Optimisation of hybrid high-modulus/high-strength carbon fiber reinforced
plastic composite drive shafts
|
cs.CE cs.SE physics.comp-ph
|
This study deals with the optimisation of hybrid composite drive shafts
operating at subcritical or supercritical speeds, using a genetic algorithm. A
formulation for the flexural vibrations of a composite drive shaft mounted on
viscoelastic supports including shear effects is developed. In particular, an
analytic stability criterion is developed to ensure the integrity of the system
in the supercritical regime. Then it is shown that the torsional strength can
be computed with the maximum stress criterion. A shell method is developed for
computing drive shaft torsional buckling. The optimisation of a helicopter tail
rotor driveline is then performed. In particular, original hybrid shafts
consisting of high-modulus and high-strength carbon fibre reinforced epoxy
plies were studied. The solutions obtained using the method presented here made
it possible to greatly decrease the number of shafts and the weight of the
driveline under subcritical conditions, and even more under supercritical
conditions. This study yielded some general rules for designing an optimum
composite shaft without any need for optimisation algorithms.
|
1110.1700
|
Adaptive Data Stream Management System Using Learning Automata
|
cs.DB
|
In many modern applications, data are received as infinite, rapid,
unpredictable and time-variant data elements known as data streams. Systems
that are able to process data streams with such properties are called Data
Stream Management Systems (DSMS). Due to the unpredictable and time-variant
properties of data streams, as well as of the system itself, adaptivity is a
major requirement for every DSMS. Accordingly, determining the parameters that
affect the most important performance metric of a DSMS (i.e., response time)
and analysing them will influence the design of an adaptive DSMS. In this
paper, the parameters affecting the response time of a DSMS are studied and
analysed, and a solution for DSMS adaptivity is proposed. The proposed
adaptive DSMS architecture includes a learning unit that frequently evaluates
the system to adjust the optimal value of each tuneable effective parameter.
Learning Automata are used as the learning mechanism of the learning unit to
adjust the values of the tuneable effective parameters. Thus, when the system
faces changes, the learning unit increases performance by tuning each tuneable
effective parameter to its optimum value. Evaluation results illustrate that
after a while, the parameters reach their optimum values and the DSMS's
adaptivity is improved considerably.
|
1110.1708
|
Advancing Nuclear Physics Through TOPS Solvers and Tools
|
cs.CE physics.comp-ph
|
At the heart of many scientific applications is the solution of algebraic
systems, such as linear systems of equations, eigenvalue problems, and
optimization problems, to name a few. TOPS, which stands for Towards Optimal
Petascale Simulations, is a SciDAC applied math center focused on the
development of solvers for tackling these algebraic systems, as well as the
deployment of such technologies in large-scale scientific applications of
interest to the U.S. Department of Energy. In this paper, we highlight some of
the solver technologies we have developed in optimization and matrix
computations. We also describe some accomplishments achieved using these
technologies in UNEDF, a SciDAC application project on nuclear physics.
|
1110.1729
|
Array Requirements for Scientific Applications and an Implementation for
Microsoft SQL Server
|
cs.DB
|
This paper outlines certain scenarios from the fields of astrophysics and
fluid dynamics simulations which require high performance data warehouses that
support array data type. A common feature of all these use cases is that
subsetting and preprocessing the data on the server side (as far as possible
inside the database server process) is necessary to avoid the client-server
overhead and to minimize IO utilization. Analyzing and summarizing the
requirements of the various fields help software engineers to come up with a
comprehensive design of an array extension to relational database systems that
covers a wide range of scientific applications. We also present a working
implementation of an array data type for Microsoft SQL Server 2008 to support
large-scale scientific applications. We introduce the design of the array type,
results from a performance evaluation, and discuss the lessons learned from
this implementation. The library can be downloaded from our website at
http://voservices.net/sqlarray/
|
1110.1758
|
Data formats for phonological corpora
|
cs.CL
|
The goal of the present chapter is to explore the possibility of providing
the research (but also the industrial) community that commonly uses spoken
corpora with a stable portfolio of well-documented standardised formats that
allow a high re-use rate of annotated spoken resources and, as a consequence,
better interoperability across tools used to produce or exploit such resources.
|
1110.1769
|
On the trade-off between complexity and correlation decay in structural
learning algorithms
|
stat.ML cs.LG physics.data-an
|
We consider the problem of learning the structure of Ising models (pairwise
binary Markov random fields) from i.i.d. samples. While several methods have
been proposed to accomplish this task, their relative merits and limitations
remain somewhat obscure. By analyzing a number of concrete examples, we show
that low-complexity algorithms often fail when the Markov random field develops
long-range correlations. More precisely, this phenomenon appears to be related
to the Ising model phase transition (although it does not coincide with it).
|
1110.1781
|
A Study of Unsupervised Adaptive Crowdsourcing
|
cs.LG cs.SY
|
We consider unsupervised crowdsourcing performance based on the model wherein
the responses of end-users are essentially rated according to how their
responses correlate with the majority of other responses to the same
subtasks/questions. In one setting, we consider an independent sequence of
identically distributed crowdsourcing assignments (meta-tasks), while in the
other we consider a single assignment with a large number of component
subtasks. Both problems yield intuitive results in which the overall
reliability of the crowd is a factor.
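The rating idea described above can be sketched minimally: score each end-user by how often their responses agree with the per-subtask majority answer. The data shapes and function name here are mine, not the paper's:

```python
from collections import Counter

def rate_workers(responses):
    """responses: dict mapping worker id -> list of answers, one per
    subtask (all workers answer the same ordered subtasks).
    Returns each worker's agreement rate with the majority answer."""
    n_subtasks = len(next(iter(responses.values())))
    majority = []
    for i in range(n_subtasks):
        votes = Counter(answers[i] for answers in responses.values())
        majority.append(votes.most_common(1)[0][0])
    return {worker: sum(a == m for a, m in zip(answers, majority)) / n_subtasks
            for worker, answers in responses.items()}
```

With three workers answering four binary subtasks, a worker who deviates from the majority on one subtask scores 0.75 while a fully agreeing worker scores 1.0.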
|
1110.1796
|
A Behavior-based Approach for Multi-agent Q-learning for Autonomous
Exploration
|
cs.RO cs.LG
|
The use of mobile robots is becoming popular all over the world, mainly for
autonomous exploration in hazardous/toxic or unknown environments. This
exploration will be more effective and efficient if it can be aided by
learning from past experience. Currently, reinforcement learning is gaining
acceptance for implementing learning in robots from system-environment
interactions. This learning can be implemented using both single-agent and
multi-agent concepts. This paper describes such a multi-agent approach for
implementing a type of reinforcement learning using a priority-based
behaviour-based architecture. The proposed methodology has been successfully
tested in both indoor and outdoor environments.
|
1110.1804
|
The proximal point method for a hybrid model in image restoration
|
cs.CV cs.IT math.IT math.OC
|
Models including two $L^1$-norm terms have been widely used in image
restoration. In this paper we first propose the alternating direction method of
multipliers (ADMM) to solve this class of models. Based on ADMM, we then
propose the proximal point method (PPM), which is more efficient than ADMM.
Following the operator theory, we also give the convergence analysis of the
proposed methods. Furthermore, we use the proposed methods to solve a class of
hybrid models combining the ROF model with the LLT model. Some numerical
results demonstrate the viability and efficiency of the proposed methods.
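Both ADMM and the proximal point method for such $L^1$ models are built around the proximal operator of the $L^1$ norm, whose closed form is elementwise soft-thresholding. The sketch below shows only this building block (not the paper's full hybrid ROF-LLT solver); in each ADMM or PPM iteration it alternates with a quadratic subproblem solve:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1, i.e. the exact minimizer of
    0.5 * ||x - v||^2 + lam * ||x||_1, applied elementwise:
    shrink each entry toward zero by lam, clipping at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```

For example, applying it with lam = 1 to the vector (3, -1, 0.5) shrinks the first entry to 2 and zeroes out the other two.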
|
1110.1884
|
Branching Dynamics of Viral Information Spreading
|
physics.soc-ph cs.SI
|
Despite its importance for the propagation of rumors or innovations,
peer-to-peer collaboration, social networking and marketing, the dynamics of
information spreading is not well understood. Since the diffusion depends on the
heterogeneous patterns of human behavior and is driven by the participants'
decisions, its propagation dynamics shows surprising properties not explained
by traditional epidemic or contagion models. Here we present a detailed
analysis of our study of real Viral Marketing campaigns where tracking the
propagation of a controlled message allowed us to analyze the structure and
dynamics of a diffusion graph involving over 31,000 individuals. We found that
information spreading displays a non-Markovian branching dynamics that can be
modeled by a two-step Bellman-Harris Branching Process that generalizes the
static models known in the literature and incorporates the high variability of
human behavior. It explains accurately all the features of information
propagation under the "tipping-point" and can be used for prediction and
management of viral information spreading processes.
|
1110.1891
|
Channel Coding in Random Access Communication over Compound Channels
|
cs.IT math.IT
|
Because incoming messages are short and bursty, channel access activities in a
wireless random access system are often fractional. The lack of frequent data
support consequently makes it difficult for the receiver to estimate and track
the time varying channel states with high precision. This paper investigates
random multiple access communication over a compound wireless channel where
channel realization is known neither at the transmitters nor at the receiver.
An achievable rate and error probability tradeoff bound is derived under the
non-asymptotic assumption of a finite codeword length. The results are then
extended to the random multiple access system where the receiver is only
interested in decoding messages from a user subset.
|
1110.1892
|
Confidence-based Reasoning in Stochastic Constraint Programming
|
math.OC cs.AI math.CO math.PR stat.OT
|
In this work we introduce a novel approach, based on sampling, for finding
assignments that are likely to be solutions to stochastic constraint
satisfaction problems and constraint optimisation problems. Our approach
reduces the size of the original problem being analysed; by solving this
reduced problem, with a given confidence probability, we obtain assignments
that satisfy the chance constraints in the original model within prescribed
error tolerance thresholds. To achieve this, we blend concepts from stochastic
constraint programming and statistics. We discuss both exact and approximate
variants of our method. The framework we introduce can be immediately employed
in concert with existing approaches for solving stochastic constraint programs.
A thorough computational study on a number of stochastic combinatorial
optimisation problems demonstrates the effectiveness of our approach.
|
1110.1894
|
On the Efficiency of Influence-and-Exploit Strategies for Revenue
Maximization under Positive Externalities
|
cs.DS cs.CC cs.SI
|
We study the problem of revenue maximization in the marketing model for
social networks introduced by (Hartline, Mirrokni, Sundararajan, WWW '08). We
restrict our attention to the Uniform Additive Model and mostly focus on
Influence-and-Exploit (IE) marketing strategies. We obtain a comprehensive
collection of results on the efficiency and the approximability of IE
strategies, which also imply a significant improvement on the best known
approximation ratios for revenue maximization. Specifically, we show that in
the Uniform Additive Model, both computing the optimal marketing strategy and
computing the best IE strategy are NP-hard for undirected social networks.
We observe that allowing IE strategies to offer prices smaller than the myopic
price in the exploit step leads to a measurable improvement on their
performance. Thus, we show that the best IE strategy approximates the maximum
revenue within a factor of 0.911 for undirected and of roughly 0.553 for
directed networks. Moreover, we present a natural generalization of IE
strategies, with more than two pricing classes, and show that they approximate
the maximum revenue within a factor of roughly 0.7 for undirected and of
roughly 0.35 for directed networks. Utilizing a connection between good IE
strategies and large cuts in the underlying social network, we obtain
polynomial-time algorithms that approximate the revenue of the best IE strategy
within a factor of roughly 0.9. Hence, we significantly improve on the best
known approximation ratio for revenue maximization to 0.8229 for undirected and
to 0.5011 for directed networks (from 2/3 and 1/3, respectively, by Hartline et
al.).
|
1110.1914
|
An evolving network model with modular growth
|
physics.soc-ph cs.SI
|
In this paper, we propose an evolving network model growing fast in units of
module, based on the analysis of the evolution characteristics in real complex
networks. Each module is a small-world network containing several
interconnected nodes, and the nodes between the modules are linked by
preferential attachment on degree of nodes. We study the modularity measure of
the proposed model, which can be adjusted by changing the ratio of the number
of inner-module edges to the number of inter-module edges. Based on the mean
field theory, we develop an analytical function of the degree distribution,
which is verified by a numerical example and indicates that the degree
distribution shows characteristics of the small-world network and the
scale-free network distinctly at different segments. The clustering coefficient
and the average path length of the network are simulated numerically,
indicating that the network shows the small-world property and is affected
little by the randomness of the new module.
|
1110.1930
|
Statistical Mechanical Analysis of Low-Density Parity-Check Codes on
General Markov Channel
|
cs.IT math.IT
|
Low-density parity-check (LDPC) codes on symmetric memoryless channels have
been analyzed using statistical physics by several authors. In this paper,
statistical mechanical analysis of LDPC codes is performed for asymmetric
memoryless channels and general Markov channels. It is shown that the saddle
point equations of the replica symmetric solution for a Markov channel are
equivalent to the density evolution of belief propagation on the factor
graph representing LDPC codes on the Markov channel. The derivation uses the
method of types for Markov chains.
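As a point of reference for the density-evolution connection, the recursion is easiest to state in a much simpler setting than the Markov channels treated here: a (dv, dc)-regular LDPC ensemble on the memoryless binary erasure channel with erasure probability eps, where the message erasure fraction evolves as x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1). A sketch of that textbook special case:

```python
def bec_density_evolution(eps, dv, dc, iterations=200):
    """Track the erasure probability of variable-to-check messages for a
    (dv, dc)-regular LDPC ensemble on a BEC with erasure probability eps."""
    x = eps
    for _ in range(iterations):
        # check node: message erased unless all dc-1 inputs are known;
        # variable node: erased only if the channel and all dv-1 inputs are
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x
```

For the (3, 6) ensemble the belief-propagation threshold is about eps = 0.43: below it the recursion drives x to zero (decoding succeeds), above it x stalls at a positive fixed point.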
|
1110.1976
|
Exploring the structural regularities in networks
|
physics.soc-ph cs.SI
|
In this paper, we consider the problem of exploring structural regularities
of networks by dividing the nodes of a network into groups such that the
members of each group have similar patterns of connections to other groups.
Specifically, we propose a general statistical model to describe network
structure. In this model, group membership is viewed as a hidden or unobserved
quantity and is learned by fitting the observed network data using the
expectation-maximization algorithm. Compared with existing models, the most
prominent strength of our model is the high flexibility. This strength enables
it to possess the advantages of existing models and overcomes their
shortcomings in a unified way. As a result, not only can broad types of
structure be detected without prior knowledge of what type of intrinsic regularities
exist in the network, but also the type of identified structure can be directly
learned from data. Moreover, by differentiating outgoing edges from incoming
edges, our model can detect several types of structural regularities beyond
competing models. Tests on a number of real world and artificial networks
demonstrate that our model outperforms the state-of-the-art model at shedding
light on the structural features of networks, including the overlapping
community structure, multipartite structure and several other types of
structure which are beyond the capability of existing models.
|
1110.1990
|
Framework for Link-Level Energy Efficiency Optimization with Informed
Transmitter
|
cs.IT math.IT
|
The dramatic increase of network infrastructure comes at the cost of rapidly
increasing energy consumption, which makes optimization of energy efficiency
(EE) an important topic. Since EE is often modeled as the ratio of rate to
power, we present a mathematical framework called fractional programming that
provides insight into this class of optimization problems, as well as
algorithms for computing the solution. The main idea is that the objective
function is transformed to a weighted sum of rate and power. A generic problem
formulation for systems dissipating transmit-independent circuit power in
addition to transmit-dependent power is presented. We show that a broad class
of EE maximization problems can be solved efficiently, provided the rate is a
concave function of the transmit power. We elaborate examples of various system
models including time-varying parallel channels. Rate functions with an
arbitrary discrete modulation scheme are also treated. The examples considered
lead to water-filling solutions, but these are different from the dual problems
of power minimization under rate constraints and rate maximization under power
constraints, respectively, because the constraints need not be active. We also
demonstrate that if the solution to a rate maximization problem is known, it
can be utilized to reduce the EE problem into a one-dimensional convex problem.
|
1110.1992
|
Open Source Software: How Can Design Metrics Facilitate Architecture
Recovery?
|
cs.SE cs.AI
|
Modern software development methodologies include reuse of open source code.
Reuse can be facilitated by architectural knowledge of the software, not
necessarily provided in the documentation of open source software. The effort
required to comprehend the system's source code and discover its architecture
can be considered a major drawback in reuse. In a recent study we examined the
correlations between design metrics and classes' architecture layer. In this
paper, we apply our methodology in more open source projects to verify the
applicability of our method. Keywords: system understanding; program
comprehension; object-oriented; reuse; architecture layer; design metrics;
|
1110.2049
|
Acceleration of Uncertainty Updating in the Description of Transport
Processes in Heterogeneous Materials
|
cs.CE
|
The prediction of thermo-mechanical behaviour of heterogeneous materials such
as heat and moisture transport is strongly influenced by the uncertainty in
parameters. Such materials occur e.g. in historic buildings, and the durability
assessment of these therefore needs a reliable and probabilistic simulation of
transport processes, which is related to the suitable identification of
material parameters. In order to include expert knowledge as well as
experimental results, one can employ an updating procedure such as Bayesian
inference. The classical probabilistic setting of the identification process in
Bayes's form requires the solution of a stochastic forward problem via
computationally expensive sampling techniques, which makes the method almost
impractical. In this paper novel stochastic computational techniques such as
the stochastic Galerkin method are applied in order to accelerate the updating
procedure. The idea is to replace the computationally expensive forward
simulation via the conventional finite element (FE) method by the evaluation of
a polynomial chaos expansion (PCE). Such an approximation of the FE model for
the forward simulation perfectly suits the Bayesian updating. The presented
uncertainty updating techniques are applied to the numerical model of coupled
heat and moisture transport in heterogeneous materials with spatially varying
coefficients defined by random fields.
|
1110.2053
|
Steps Towards a Theory of Visual Information: Active Perception,
Signal-to-Symbol Conversion and the Interplay Between Sensing and Control
|
cs.CV
|
This manuscript describes the elements of a theory of information tailored to
control and decision tasks and specifically to visual data. The concept of
Actionable Information is described, that relates to a notion of information
championed by J. Gibson, and a notion of "complete information" that relates to
the minimal sufficient statistics of a complete representation. It is shown
that the "actionable information gap" between the two can be reduced by
exercising control on the sensing process. Thus, sensing, control and
information are inextricably tied. This has consequences in the so-called
"signal-to-symbol barrier" problem, as well as in the analysis and design of
active sensing systems. It has ramifications in vision-based control,
navigation, 3-D reconstruction and rendering, as well as detection,
localization, recognition and categorization of objects and scenes in live
video.
This manuscript has been developed from a set of lecture notes for a summer
course at the First International Computer Vision Summer School (ICVSS) in
Scicli, Italy, in July of 2008. They were later expanded and amended for
subsequent lectures in the same School in July 2009. Starting on November 1,
2009, they were further expanded for a special topics course, CS269, taught at
UCLA in the Spring term of 2010.
|
1110.2055
|
Computational homogenization of non-stationary transport processes in
masonry structures
|
cs.CE physics.comp-ph
|
A fully coupled transient heat and moisture transport in a masonry structure
is examined in this paper. Supported by several successful applications in
civil engineering the nonlinear diffusion model proposed by K\"{u}nzel is
adopted in the present study. A strong material heterogeneity together with a
significant dependence of the model parameters on initial conditions as well as
the gradients of heat and moisture fields vindicates the use of a hierarchical
modeling strategy to solve problems of this kind. Attention is limited to
the classical first order homogenization in a spatial domain developed here in
the framework of a two step (meso-macro) multi-scale computational scheme (FE^2
problem). Several illustrative examples are presented to investigate the
influence of transient flow at the level of constituents (meso-scale) on the
macroscopic response including the effect of macro-scale boundary conditions. A
two-dimensional section of Charles Bridge subjected to actual climatic
conditions is analyzed next to confirm the suitability of algorithmic format of
FE^2 scheme for the parallel computing.
|
1110.2074
|
Memristors can implement fuzzy logic
|
cs.ET cond-mat.mtrl-sci cs.NE cs.SY
|
In our work we propose implementing fuzzy logic using memristors. Min and max
operations are done by antipodally configured memristor circuits that may be
assembled into computational circuits. We discuss the computational power of
such circuits with respect to m-efficiency and experimentally observed behavior of
memristive devices. Circuits implemented with real devices are likely to
manifest learning behavior. The circuits presented in the work may be
applicable for instance in fuzzy classifiers.
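A software sketch of the fuzzy connectives the circuits realize in hardware: fuzzy AND is min and fuzzy OR is max over membership grades in [0, 1]. The composite "comfort" rule is an invented toy classifier, not one from the paper.

```python
# Fuzzy connectives over membership grades in [0, 1].
def fuzzy_and(a, b):
    return min(a, b)       # the min operation (AND)

def fuzzy_or(a, b):
    return max(a, b)       # the max operation (OR)

def fuzzy_not(a):
    return 1.0 - a         # standard complement

# A toy fuzzy classifier rule: "(warm AND humid) OR (NOT windy)".
def comfort(warm, humid, windy):
    return fuzzy_or(fuzzy_and(warm, humid), fuzzy_not(windy))
```

In the paper's setting, each min/max node would be an antipodally configured memristor pair; composing such nodes yields circuits like `comfort` above.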
|
1110.2096
|
Beating Irrationality: Does Delegating to IT Alleviate the Sunk Cost
Effect?
|
cs.HC cs.CY cs.SI
|
In this research, we investigate the impact of delegating decision making to
information technology (IT) on an important human decision bias - the sunk cost
effect. To address our research question, we use a unique and very rich dataset
containing actual market transaction data for approximately 7,000 pay-per-bid
auctions. Thus, unlike previous studies that are primarily laboratory
experiments, we investigate the effects of using IT on the proneness to a
decision bias in real market transactions. We identify and analyze irrational
decision scenarios of auction participants. We find that participants with a
higher monetary investment have an increased likelihood of violating the
assumption of rationality, due to the sunk cost effect. Interestingly, after
controlling for monetary investments, participants who delegate their decision
making to IT and, consequently, have comparably lower behavioral investments
(e.g., emotional attachment, effort, time) are less prone to the sunk cost
effect. In particular, delegation to IT reduces the impact of overall
investments on the sunk cost effect by approximately 50%.
|
1110.2098
|
Dynamic Matrix Factorization: A State Space Approach
|
cs.LG
|
Matrix factorization from a small number of observed entries has recently
garnered much attention as the key ingredient of successful recommendation
systems. One unresolved problem in this area is how to adapt current methods to
handle changing user preferences over time. Recent proposals to address this
issue are heuristic in nature and do not fully exploit the time-dependent
structure of the problem. As a principled and general temporal formulation, we
propose a dynamical state space model of matrix factorization. Our proposal
builds upon probabilistic matrix factorization, a Bayesian model with Gaussian
priors. We utilize results in state tracking, such as the Kalman filter, to
provide accurate recommendations in the presence of both process and
measurement noise. We show how system parameters can be learned via
expectation-maximization and provide comparisons to current published
techniques.
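A minimal sketch of the state-tracking ingredient: a scalar Kalman filter in which a single latent user factor drifts as a random walk (process noise) and is observed through noisy ratings (measurement noise). The noise variances and observations are illustrative, not the paper's model.

```python
# One predict/update cycle of a scalar Kalman filter.
# x, P: current mean and variance of the latent factor; z: new observation.
def kalman_step(x, P, z, q=0.01, r=0.25):
    # Predict: random-walk dynamics x_t = x_{t-1} + w,  w ~ N(0, q).
    P = P + q
    # Update with observation z = x + v,  v ~ N(0, r).
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 1.0              # diffuse prior on the latent factor
for z in [0.9, 1.1, 1.0, 1.2, 1.05]:
    x, P = kalman_step(x, P, z)
```

After a few noisy ratings the estimate concentrates near the underlying preference level, while `P` quantifies the remaining uncertainty used for recommendations.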
|
1110.2136
|
Active Learning Using Smooth Relative Regret Approximations with
Applications
|
cs.LG
|
The disagreement coefficient of Hanneke has become a central data independent
invariant in proving active learning rates. It has been shown in various ways
that a concept class with low complexity together with a bound on the
disagreement coefficient at an optimal solution allows active learning rates
that are superior to passive learning ones.
We present a different tool for pool based active learning which follows from
the existence of a certain uniform version of low disagreement coefficient, but
is not equivalent to it. In fact, we present two fundamental active learning
problems of significant interest for which our approach allows nontrivial
active learning bounds. However, any general-purpose method relying only on
disagreement coefficient bounds fails to guarantee any useful bounds for
these problems.
The tool we use is based on the learner's ability to compute an estimator of
the difference between the loss of any hypotheses and some fixed "pivotal"
hypothesis to within an absolute error of at most $\eps$ times the
|
1110.2153
|
Characterizing and modeling citation dynamics
|
physics.soc-ph cs.DL cs.SI physics.data-an
|
Citation distributions are crucial for the analysis and modeling of the
activity of scientists. We investigated bibliometric data of papers published
in journals of the American Physical Society, searching for the type of
function which best describes the observed citation distributions. We used the
goodness of fit with Kolmogorov-Smirnov statistics for three classes of
functions: log-normal, simple power law and shifted power law. The shifted
power law turns out to be the most reliable hypothesis for all citation
networks we derived, which correspond to different time spans. We find that
citation dynamics is characterized by bursts, usually occurring within a few
years after the publication of a paper, and the burst size spans several orders of
magnitude. We also investigated the microscopic mechanisms for the evolution of
citation networks, by proposing a linear preferential attachment with time
dependent initial attractiveness. The model successfully reproduces the
empirical citation distributions and accounts for the presence of citation
bursts as well.
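A toy simulation of the proposed mechanism: linear preferential attachment with a time-dependent initial attractiveness A(age). The exponential decay form and all parameter values below are illustrative choices, not the paper's empirical fit.

```python
import math
import random

random.seed(0)

# Initial attractiveness that decays with paper age (illustrative form).
def attractiveness(age, a0=5.0, tau=10.0):
    return a0 * math.exp(-age / tau)

def simulate(n_papers=300, m=3):
    cites = [0] * n_papers
    for t in range(1, n_papers):              # paper t appears at time t
        # Attachment weight of paper i: citations so far + attractiveness.
        weights = [cites[i] + attractiveness(t - i) for i in range(t)]
        total = sum(weights)
        chosen = set()
        while len(chosen) < min(m, t):        # pick m distinct references
            r = random.uniform(0.0, total)    # roulette-wheel selection
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    chosen.add(i)
                    break
        for i in chosen:
            cites[i] += 1
    return cites

cites = simulate()
```

The decaying attractiveness concentrates new citations on recent papers, producing the burst-like accumulation described above.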
|
1110.2162
|
Large-Margin Learning of Submodular Summarization Methods
|
cs.AI cs.CL cs.LG
|
In this paper, we present a supervised learning approach to training
submodular scoring functions for extractive multi-document summarization. By
taking a structured prediction approach, we provide a large-margin method that
directly optimizes a convex relaxation of the desired performance measure. The
learning method applies to all submodular summarization methods, and we
demonstrate its effectiveness for both pairwise as well as coverage-based
scoring functions on multiple datasets. Compared to state-of-the-art functions
that were tuned manually, our method significantly improves performance and
enables high-fidelity models with numbers of parameters well beyond what could
reasonably be tuned by hand.
|
1110.2186
|
Consensus in networks of mobile communicating agents
|
physics.soc-ph cond-mat.stat-mech cs.MA cs.SI q-bio.PE
|
Populations of mobile and communicating agents describe a vast array of
technological and natural systems, ranging from sensor networks to animal
groups. Here, we investigate how a group-level agreement may emerge in the
continuously evolving network defined by the local interactions of the moving
individuals. We adopt a general scheme of motion in two dimensions and we let
the individuals interact through the minimal naming game, a prototypical scheme
to investigate social consensus. We distinguish different regimes of
convergence determined by the emission range of the agents and by their
mobility, and we identify the corresponding scaling behaviors of the consensus
time. We also rationalize the behavior of the maximum memory
used during the convergence process, which determines the minimum
cognitive/storage capacity needed by the individuals. Overall, we believe that
the simple and general model presented in this paper can represent a helpful
reference for a better understanding of the behavior of populations of mobile
agents.
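A minimal implementation of the naming game itself, in the well-mixed limit where every pair of agents may interact (the paper couples the game to agent mobility instead). Population size and step cap are arbitrary.

```python
import random

random.seed(1)

# Minimal naming game: speaker utters a word; on success both agents
# collapse their vocabularies to it, on failure the hearer learns it.
def naming_game(n_agents=30, max_steps=200000):
    vocab = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        s, h = random.sample(range(n_agents), 2)   # speaker, hearer
        if not vocab[s]:
            vocab[s].add(next_word)                # invent a new word
            next_word += 1
        word = random.choice(sorted(vocab[s]))
        if word in vocab[h]:
            vocab[s] = {word}                      # success: both collapse
            vocab[h] = {word}
        else:
            vocab[h].add(word)                     # failure: hearer learns
        if all(v == vocab[0] and len(v) == 1 for v in vocab):
            return step + 1                        # global consensus reached
    return None

steps = naming_game()
```

The total vocabulary held across agents before consensus is the memory quantity whose maximum the abstract discusses.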
|
1110.2196
|
The evaluation of geometric queries: constraint databases and quantifier
elimination
|
cs.DB cs.CC cs.LO
|
We model the algorithmic task of geometric elimination (e.g., quantifier
elimination in the elementary field theories of real and complex numbers) by
means of certain constraint database queries, called geometric queries. As a
particular case of such a geometric elimination task, we consider sample point
queries. We show exponential lower complexity bounds for evaluating geometric
queries in the general and in the particular case of sample point queries.
Although this paper is of theoretical nature, its aim is to explore the
possibilities and (complexity-)limits of computer implemented query evaluation
algorithms for Constraint Databases, based on the principles of the most
advanced geometric elimination procedures and their implementations, like,
e.g., the software package "Kronecker".
|
1110.2200
|
Modelling Mixed Discrete-Continuous Domains for Planning
|
cs.AI
|
In this paper we present pddl+, a planning domain description language for
modelling mixed discrete-continuous planning domains. We describe the syntax
and modelling style of pddl+, showing that the language makes the modelling
of complex time-dependent effects convenient. We provide a formal semantics for
pddl+ by mapping planning instances into constructs of hybrid automata. Using
the syntax of HAs as our semantic model we construct a semantic mapping to
labelled transition systems to complete the formal interpretation of pddl+
planning instances. An advantage of building a mapping from pddl+ to HA theory
is that it forms a bridge between the Planning and Real Time Systems research
communities. One consequence is that we can expect to make use of some of the
theoretical properties of HAs. For example, for a restricted class of HAs the
Reachability problem (which is equivalent to Plan Existence) is decidable.
pddl+ provides an alternative to the continuous durative action model of
pddl2.1, adding a more flexible and robust model of time-dependent behaviour.
|
1110.2203
|
Set Intersection and Consistency in Constraint Networks
|
cs.AI
|
In this paper, we show that there is a close relation between consistency in
a constraint network and set intersection. A proof schema is provided as a
generic way to obtain consistency properties from properties on set
intersection. This approach not only simplifies the understanding of and
unifies many existing consistency results, but also directs the study of
consistency to that of set intersection properties in many situations, as
demonstrated by the results on the convexity and tightness of constraints in
this paper. Specifically, we identify a new class of tree convex constraints
where local consistency ensures global consistency. This generalizes row convex
constraints. Various consistency results are also obtained on constraint
networks where only some constraints, in contrast to all in the existing work,
are tight.
|
1110.2204
|
Consistency and Random Constraint Satisfaction Models
|
cs.AI
|
In this paper, we study the possibility of designing non-trivial random CSP
models by exploiting the intrinsic connection between structures and
typical-case hardness. We show that constraint consistency, a notion that has
been developed to improve the efficiency of CSP algorithms, is in fact the key
to the design of random CSP models that have interesting phase transition
behavior and guaranteed exponential resolution complexity without putting much
restriction on the parameter of constraint tightness or the domain size of the
problem. We propose a very flexible framework for constructing problem
instances with interesting behavior and develop a variety of concrete methods to
construct specific random CSP models that enforce different levels of
constraint consistency. A series of experimental studies with interesting
observations are carried out to illustrate the effectiveness of introducing
structural elements in random instances, to verify the robustness of our
proposal, and to investigate features of some specific models based on our
framework that are highly related to the behavior of backtracking search
algorithms.
|
1110.2205
|
Answer Sets for Logic Programs with Arbitrary Abstract Constraint Atoms
|
cs.AI
|
In this paper, we present two alternative approaches to defining answer sets
for logic programs with arbitrary types of abstract constraint atoms (c-atoms).
These approaches generalize the fixpoint-based and the level mapping based
answer set semantics of normal logic programs to the case of logic programs
with arbitrary types of c-atoms. The results are four different answer set
definitions which are equivalent when applied to normal logic programs. The
standard fixpoint-based semantics of logic programs is generalized in two
directions, called answer set by reduct and answer set by complement. These
definitions, which differ from each other in the treatment of
negation-as-failure (naf) atoms, make use of an immediate consequence operator
to perform answer set checking, whose definition relies on the notion of
conditional satisfaction of c-atoms w.r.t. a pair of interpretations. The other
two definitions, called strongly and weakly well-supported models, are
generalizations of the notion of well-supported models of normal logic programs
to the case of programs with c-atoms. As for the case of fixpoint-based
semantics, the difference between these two definitions is rooted in the
treatment of naf atoms. We prove that answer sets by reduct (resp. by
complement) are equivalent to weakly (resp. strongly) well-supported models of
a program, thus generalizing the theorem on the correspondence between stable
models and well-supported models of a normal logic program to the class of
programs with c-atoms. We show that the newly defined semantics coincide with
previously introduced semantics for logic programs with monotone c-atoms, and
they extend the original answer set semantics of normal logic programs. We also
study some properties of answer sets of programs with c-atoms, and relate our
definitions to several semantics for logic programs with aggregates presented
in the literature.
|
1110.2209
|
Bin Completion Algorithms for Multicontainer Packing, Knapsack, and
Covering Problems
|
cs.AI
|
Many combinatorial optimization problems such as the bin packing and multiple
knapsack problems involve assigning a set of discrete objects to multiple
containers. These problems can be used to model task and resource allocation
problems in multi-agent systems and distributed systems, and can also be found
as subproblems of scheduling problems. We propose bin completion, a
branch-and-bound strategy for one-dimensional, multicontainer packing problems.
Bin completion combines a bin-oriented search space with a powerful dominance
criterion that enables us to prune much of the space. The performance of the
basic bin completion framework can be enhanced by using a number of extensions,
including nogood-based pruning techniques that allow further exploitation of
the dominance criterion. Bin completion is applied to four problems: multiple
knapsack, bin covering, min-cost covering, and bin packing. We show that our
bin completion algorithms yield new, state-of-the-art results for the multiple
knapsack, bin covering, and min-cost covering problems, outperforming previous
algorithms by several orders of magnitude with respect to runtime on some
classes of hard, random problem instances. For the bin packing problem, we
demonstrate significant improvements compared to most previous results, but
show that bin completion is not competitive with current state-of-the-art
cutting-stock based approaches.
|
1110.2210
|
Closed-Loop Learning of Visual Control Policies
|
cs.CV
|
In this paper we present a general, flexible framework for learning mappings
from images to actions by interacting with the environment. The basic idea is
to introduce a feature-based image classifier in front of a reinforcement
learning algorithm. The classifier partitions the visual space according to the
presence or absence of a few highly informative local descriptors that are
incrementally selected in a sequence of attempts to remove perceptual aliasing.
We also address the problem of fighting overfitting in such a greedy algorithm.
Finally, we show how high-level visual features can be generated when the power
of local descriptors is insufficient for completely disambiguating the aliased
states. This is done by building a hierarchy of composite features that consist
of recursive spatial combinations of visual features. We demonstrate the
efficacy of our algorithms by solving three visual navigation tasks and a
visual version of the classical Car on the Hill control problem.
|
1110.2211
|
Learning Symbolic Models of Stochastic Domains
|
cs.LG cs.AI
|
In this article, we work towards the goal of developing agents that can learn
to act in complex worlds. We develop a probabilistic, relational planning rule
representation that compactly models noisy, nondeterministic action effects,
and show how such rules can be effectively learned. Through experiments in
simple planning domains and a 3D simulated blocks world with realistic physics,
we demonstrate that this learning algorithm allows agents to effectively model
world dynamics.
|
1110.2212
|
Uncertainty in Soft Temporal Constraint Problems:A General Framework and
Controllability Algorithms for the Fuzzy Case
|
cs.AI
|
In real-life temporal scenarios, uncertainty and preferences are often
essential and coexisting aspects. We present a formalism where quantitative
temporal constraints with both preferences and uncertainty can be defined. We
show how three classical notions of controllability (that is, strong, weak, and
dynamic), which have been developed for uncertain temporal problems, can be
generalized to handle preferences as well. After defining this general
framework, we focus on problems where preferences follow the fuzzy approach,
and with properties that assure tractability. For such problems, we propose
algorithms to check the presence of the controllability properties. In
particular, we show that in such a setting dealing simultaneously with
preferences and uncertainty does not increase the complexity of controllability
testing. We also develop a dynamic execution algorithm, of polynomial
complexity, that produces temporal plans under uncertainty that are optimal
with respect to fuzzy preferences.
|
1110.2213
|
Supporting Temporal Reasoning by Mapping Calendar Expressions to Minimal
Periodic Sets
|
cs.AI
|
In the recent years several research efforts have focused on the concept of
time granularity and its applications. A first stream of research investigated
the mathematical models behind the notion of granularity and the algorithms to
manage temporal data based on those models. A second stream of research
investigated symbolic formalisms providing a set of algebraic operators to
define granularities in a compact and compositional way. However, only very
limited manipulation algorithms have been proposed to operate directly on the
algebraic representation, making the symbolic formalisms unsuitable for use in
applications that need manipulation of granularities.
This paper aims at filling the gap between the results from these two streams
of research, by providing an efficient conversion from the algebraic
representation to the equivalent low-level representation based on the
mathematical models. In addition, the conversion returns a minimal
representation in terms of period length. Our results have a major practical
impact: users can more easily define arbitrary granularities in terms of
algebraic operators, and then access granularity reasoning and other services
operating efficiently on the equivalent, minimal low-level representation. As
an example, we illustrate the application to temporal constraint reasoning with
multiple granularities.
From a technical point of view, we propose a hybrid algorithm that
interleaves the conversion of calendar subexpressions into periodical sets with
the minimization of the period length. The algorithm returns set-based
granularity representations having minimal period length, which is the most
relevant parameter for the performance of the considered reasoning services.
Extensive experimental work supports the techniques used in the algorithm, and
shows the efficiency and effectiveness of the algorithm.
|
1110.2215
|
NP Animacy Identification for Anaphora Resolution
|
cs.CL
|
In anaphora resolution for English, animacy identification can play an
integral role in the application of agreement restrictions between pronouns and
candidates, and as a result, can improve the accuracy of anaphora resolution
systems. In this paper, two methods for animacy identification are proposed and
evaluated using intrinsic and extrinsic measures. The first method is a
rule-based one which uses information about the unique beginners in WordNet to
classify NPs on the basis of their animacy. The second method relies on a
machine learning algorithm which exploits a WordNet enriched with animacy
information for each sense. The effect of word sense disambiguation on the two
methods is also assessed. The intrinsic evaluation reveals that the machine
learning method reaches human levels of performance. The extrinsic evaluation
demonstrates that animacy identification can be beneficial in anaphora
resolution, especially in the cases where animate entities are identified with
high precision.
|
1110.2216
|
The Generalized A* Architecture
|
cs.AI
|
We consider the problem of computing a lightest derivation of a global
structure using a set of weighted rules. A large variety of inference problems
in AI can be formulated in this framework. We generalize A* search and
heuristics derived from abstractions to a broad class of lightest derivation
problems. We also describe a new algorithm that searches for lightest
derivations using a hierarchy of abstractions. Our generalization of A* gives a
new algorithm for searching AND/OR graphs in a bottom-up fashion. We discuss
how the algorithms described here provide a general architecture for addressing
the pipeline problem --- the problem of passing information back and forth
between various stages of processing in a perceptual system. We consider
examples in computer vision and natural language processing. We apply the
hierarchical search algorithm to the problem of estimating the boundaries of
convex objects in grayscale images and compare it to other search methods. A
second set of experiments demonstrate the use of a new compositional model for
finding salient curves in images.
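A sketch of the lightest-derivation setting in its uniform-cost special case (a Dijkstra/Knuth-style bottom-up search over weighted rules, i.e. the framework above with a trivial heuristic); the rule set is a toy example, not one from the paper's vision or language experiments.

```python
import heapq

# rules: list of (antecedents_tuple, consequent, weight). The cost of
# deriving a statement is the rule weight plus its antecedents' costs.
def lightest_derivation(rules, goal):
    best = {}                                            # settled costs
    heap = [(w, c) for ants, c, w in rules if not ants]  # axioms
    heapq.heapify(heap)
    while heap:
        cost, stmt = heapq.heappop(heap)
        if stmt in best:
            continue                                     # already settled
        best[stmt] = cost
        if stmt == goal:
            return cost
        # Push every rule whose antecedents are now all settled.
        for ants, c, w in rules:
            if c not in best and all(a in best for a in ants):
                heapq.heappush(heap, (w + sum(best[a] for a in ants), c))
    return None

rules = [
    ((), "a", 1.0),
    ((), "b", 2.0),
    (("a", "b"), "c", 0.5),     # c via a,b costs 1 + 2 + 0.5 = 3.5
    (("a",), "c", 4.0),         # worse derivation of c: 5.0
    (("c", "a"), "goal", 0.1),  # goal costs 3.5 + 1 + 0.1 = 4.6
]
```

Adding an admissible heuristic to the pushed priorities, as the paper does, recovers A*-style behavior without changing the settle-on-pop logic.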
|
1110.2227
|
Average Interpolating Wavelets on Point Clouds and Graphs
|
math.FA cs.IT math.IT stat.ML
|
We introduce a new wavelet transform suitable for analyzing functions on
point clouds and graphs. Our construction is based on a generalization of the
average interpolating refinement scheme of Donoho. The most important
ingredient of the original scheme that needs to be altered is the choice of the
interpolant. Here, we define the interpolant as the minimizer of a smoothness
functional, namely a generalization of the Laplacian energy, subject to the
averaging constraints. In the continuous setting, we derive a formula for the
optimal solution in terms of the poly-harmonic Green's function. The form of
this solution is used to motivate our construction in the setting of graphs and
point clouds. We highlight the empirical convergence of our refinement scheme
and the potential applications of the resulting wavelet transform through
experiments on a number of data sets.
|
1110.2288
|
Optimal Power Allocation for Renewable Energy Source
|
cs.SY cs.IT math.IT math.OC math.PR
|
Battery-powered transmitters face energy constraints; replenishing their
energy from a renewable energy source (like solar or wind power) can lead to
a longer lifetime. We consider here the problem of finding the optimal power
allocation under random channel conditions for a wireless transmitter, such
that the rate of information transfer is maximized. Here a rechargeable battery,
which is periodically charged by the renewable source, is used to power the
transmitter. All of the above is formulated as a Markov Decision Process.
Structural properties like the monotonicity of the optimal value and policy
derived in this paper will be of vital importance in understanding the kind of
algorithms and approximations needed in real-life scenarios. The effect of the
curse of dimensionality, which is prevalent in dynamic programming problems,
can thus be reduced. We show our results under the most general of assumptions.
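A toy value iteration for an MDP of this flavor: the state is the battery level, the action is the energy spent in a slot, the reward is a log throughput proxy, and a unit of energy is harvested with probability p. All parameters are illustrative stand-ins for the paper's general formulation.

```python
import math

def value_iteration(B=5, p=0.5, gamma=0.9, iters=500):
    V = [0.0] * (B + 1)          # value per battery level 0..B
    policy = [0] * (B + 1)       # optimal energy to spend per level
    for _ in range(iters):
        newV = [0.0] * (B + 1)
        for b in range(B + 1):
            best = -1.0
            for a in range(b + 1):               # spend a units of energy
                nxt_hi = min(b - a + 1, B)       # a unit was harvested
                nxt_lo = b - a                   # nothing harvested
                q = math.log1p(a) + gamma * (p * V[nxt_hi]
                                             + (1 - p) * V[nxt_lo])
                if q > best:
                    best, policy[b] = q, a
            newV[b] = best
        V = newV
    return V, policy

V, policy = value_iteration()
```

The monotonicity of `V` in the battery level is exactly the kind of structural property the abstract argues is useful for designing approximations.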
|
1110.2294
|
Query Driven Visualization of Astronomical Catalogs
|
astro-ph.IM cs.DB
|
Interactive visualization of astronomical catalogs requires novel techniques
due to the huge volumes and complex structure of the data produced by existing
and upcoming astronomical surveys. The creation as well as the disclosure of
the catalogs can be handled by data pulling mechanisms. These prevent
unnecessary processing and facilitate data sharing by having users request the
desired end products.
In this work we present query driven visualization as a logical continuation
of data pulling. Scientists can request catalogs in a declarative way and set
process parameters directly from within the visualization. This results in
profound interoperation between software with a high level of abstraction.
New messages for the Simple Application Messaging Protocol are proposed to
achieve this abstraction. Support for these messages is implemented in the
Astro-WISE information system and in a set of demonstration applications.
|
1110.2306
|
Ground Metric Learning
|
stat.ML cs.CV cs.LG
|
Transportation distances have been used for more than a decade now in machine
learning to compare histograms of features. They have one parameter: the ground
metric, which can be any metric between the features themselves. As is the case
for all parameterized distances, transportation distances can only prove useful
in practice when this parameter is carefully chosen. To date, the only option
available to practitioners to set the ground metric parameter was to rely on a
priori knowledge of the features, which limited considerably the scope of
application of transportation distances. We propose to lift this limitation and
consider instead algorithms that can learn the ground metric using only a
training set of labeled histograms. We call this approach ground metric
learning. We formulate the problem of learning the ground metric as the
minimization of the difference of two polyhedral convex functions over a convex
set of distance matrices. We follow the presentation of our algorithms with
promising experimental results on binary classification tasks using GIST
descriptors of images taken in the Caltech-256 set.
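For concreteness, the transportation (earth mover's) distance between two normalized histograms on the line, with the ground metric fixed to d(i, j) = |i - j|; in 1D it reduces to the L1 distance between cumulative distributions. The paper's contribution is precisely to learn the ground metric rather than fixing it by hand as done here.

```python
# 1D earth mover's distance with ground metric d(i, j) = |i - j|:
# the sum of absolute differences between the two running CDFs.
def emd_1d(p, q):
    assert len(p) == len(q)
    cdf_diff, running = 0.0, 0.0
    for pi, qi in zip(p, q):
        running += pi - qi         # difference of cumulative masses
        cdf_diff += abs(running)
    return cdf_diff

p = [0.5, 0.5, 0.0, 0.0]
q = [0.0, 0.0, 0.5, 0.5]
```

Here the optimal plan moves each half-unit of mass two bins to the right, so the distance is 2 * 0.5 + 2 * 0.5 = 2.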
|
1110.2341
|
Multiple ant-bee colony optimization for load balancing in
packet-switched networks
|
cs.NI cs.AI
|
One of the important issues in computer networks is "Load Balancing" which
leads to efficient use of the network resources. To achieve a balanced network
it is necessary to find different routes between the source and destination. In
the current paper we propose a new approach to find different routes using
swarm intelligence techniques and multi colony algorithms. In the proposed
algorithm that is an improved version of MACO algorithm, we use different
colonies of ants and bees and appoint these colony members as intelligent
agents to monitor the network and update the routing information. The survey
includes comparison and critiques of MACO. The simulation results show a
tangible improvement in the aforementioned approach.
|
1110.2343
|
Annotated Raptor Codes
|
cs.IT math.IT
|
In this paper, an extension of raptor codes is introduced which keeps all the
desirable properties of raptor codes, including the linear complexity of
encoding and decoding per information bit, unchanged. The new design, however,
improves the performance in terms of the reception rate. Our simulations show a
10% reduction in the needed overhead at the benchmark block length of 64,520
bits and with the same complexity per information bit.
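A toy LT (fountain) encoder and peeling decoder, the rateless machinery that raptor codes build on; the uniform degree distribution below is a crude stand-in for the optimized distribution a real raptor design uses, and the block length is tiny.

```python
import random

random.seed(7)

# Coded symbol = XOR of a random subset of source symbols.
def encode_symbol(source):
    k = len(source)
    degree = random.randint(1, k)              # crude degree distribution
    idx = set(random.sample(range(k), degree))
    val = 0
    for i in idx:
        val ^= source[i]
    return idx, val

# Peeling decoder: repeatedly resolve symbols with one unknown neighbor.
def decode(source_len, draw):
    recovered, buffer = {}, []
    while len(recovered) < source_len:
        buffer.append(draw())                  # request one more symbol
        progress = True
        while progress:
            progress = False
            for idx, val in buffer:
                live = idx - recovered.keys()
                if len(live) == 1:             # exactly one unknown left
                    (i,) = live
                    v = val
                    for j in idx & recovered.keys():
                        v ^= recovered[j]      # cancel known neighbors
                    recovered[i] = v
                    progress = True
    return [recovered[i] for i in range(source_len)]

source = [3, 14, 15, 92, 65, 35, 89, 79]
decoded = decode(len(source), lambda: encode_symbol(source))
```

The number of coded symbols drawn beyond the block length is the overhead; the annotation scheme of the paper targets exactly this quantity.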
|
1110.2392
|
A Variant of Azuma's Inequality for Martingales with Subgaussian Tails
|
cs.LG math.PR
|
We provide a variant of Azuma's concentration inequality for martingales, in
which the standard boundedness requirement is replaced by the milder
requirement of a subgaussian tail.
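For reference, the classical Azuma–Hoeffding bound whose boundedness requirement the paper relaxes:

```latex
% Classical Azuma--Hoeffding inequality for a martingale $(X_i)$ with
% bounded differences $|X_i - X_{i-1}| \le c_i$ almost surely; the paper
% replaces this boundedness condition by a subgaussian-tail condition.
\[
  \Pr\bigl(|X_n - X_0| \ge t\bigr)
  \;\le\; 2\exp\!\Bigl(-\frac{t^2}{2\sum_{i=1}^{n} c_i^2}\Bigr).
\]
```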
|
1110.2407
|
Bi-modal G\"odel logic over [0,1]-valued Kripke frames
|
math.LO cs.AI
|
We consider the G\"odel bi-modal logic determined by fuzzy Kripke models
where both the propositions and the accessibility relation are infinitely
valued over the standard G\"odel algebra [0,1] and prove strong completeness of
Fischer Servi intuitionistic modal logic IK plus the prelinearity axiom with
respect to this semantics. We axiomatize also the bi-modal analogues of $T,$
$S4,$ and $S5$ obtained by restricting to models over frames satisfying the
[0,1]-valued versions of the structural properties which characterize these
logics. As application of the completeness theorems we obtain a representation
theorem for bi-modal G\"odel algebras.
|
1110.2416
|
Supervised learning of short and high-dimensional temporal sequences for
life science measurements
|
cs.LG
|
The analysis of physiological processes over time is often based on
spectrometric or gene expression profiles with only a few time points
but a large number of measured variables. The analysis of such temporal
sequences is challenging and only few methods have been proposed. The
information can be encoded time independent, by means of classical expression
differences for a single time point or in expression profiles over time.
Available methods are limited to unsupervised and semi-supervised settings. The
predictive variables can be identified only by means of wrapper or
post-processing techniques. This is complicated due to the small number of
samples for such studies. Here, we present a supervised learning approach,
termed Supervised Topographic Mapping Through Time (SGTM-TT). It learns a
supervised mapping of the temporal sequences onto a low dimensional grid. We
utilize a hidden Markov model (HMM) to account for the time domain and
relevance learning to identify the relevant feature dimensions most predictive
over time. The learned mapping can be used to visualize the temporal sequences
and to predict the class of a new sequence. The relevance learning permits the
identification of discriminating masses or gene expressions and prunes
dimensions which are unnecessary for the classification task or encode mainly
noise. In this way we obtain a very efficient learning system for temporal
sequences. The results indicate that using simultaneous supervised learning and
metric adaptation significantly improves the prediction accuracy for
synthetic and real-life data in comparison to the standard techniques. The
discriminating features, identified by relevance learning, compare favorably
with the results of alternative methods. Our method permits the visualization
of the data on a low-dimensional grid, highlighting the observed temporal
structure.
|
1110.2417
|
New Improvements on the Echelon-Ferrers Construction
|
cs.IT math.IT
|
We show how to improve the echelon-Ferrers construction of random network
codes introduced by Etzion and Silberstein to attain codes of larger size for a
given minimum distance.
|
1110.2436
|
An MDL framework for sparse coding and dictionary learning
|
cs.IT math.IT stat.ML
|
The power of sparse signal modeling with learned over-complete dictionaries
has been demonstrated in a variety of applications and fields, from signal
processing to statistical inference and machine learning. However, the
statistical properties of these models, such as under-fitting or over-fitting
given sets of data, are still not well characterized in the literature. As a
result, the success of sparse modeling depends on hand-tuning critical
parameters for each data and application. This work aims at addressing this by
providing a practical and objective characterization of sparse models by means
of the Minimum Description Length (MDL) principle -- a well established
information-theoretic approach to model selection in statistical inference. The
resulting framework derives a family of efficient sparse coding and dictionary
learning algorithms which, by virtue of the MDL principle, are completely
parameter-free. Furthermore, this framework makes it possible to incorporate
additional prior information into existing models, such as Markovian dependencies, or to
define completely new problem formulations, including in the matrix analysis
area, in a natural way. These virtues will be demonstrated with parameter-free
algorithms for the classic image denoising and classification problems, and for
low-rank matrix recovery in video applications.
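To make the MDL principle invoked above concrete, here is a minimal sketch of two-part-code model selection on a toy sparse-denoising task: the sparsity level k is chosen to minimize total description length. The specific cost terms (a log2(n)-bit index plus fixed-precision coefficient bits per kept entry, and a Gaussian residual codelength) are illustrative assumptions, not the paper's actual codelengths.

```python
import math
import random

def mdl_select_k(y, coeff_bits=16):
    """Pick the sparsity level k by a two-part MDL code: describing each
    kept coefficient costs log2(n) bits for its index plus coeff_bits for
    its quantized value, and the residual is charged the Gaussian
    codelength (n/2) * log2(RSS/n) bits."""
    n = len(y)
    order = sorted(range(n), key=lambda i: -abs(y[i]))  # largest magnitudes first
    best_k, best_len = 0, float("inf")
    for k in range(n):
        kept = set(order[:k])
        rss = sum(y[i] ** 2 for i in range(n) if i not in kept)
        resid_bits = (n / 2) * math.log2(max(rss / n, 1e-12))
        total = k * (math.log2(n) + coeff_bits) + resid_bits
        if total < best_len:
            best_k, best_len = k, total
    return best_k
```

On a length-64 signal with four large spikes in small Gaussian noise, the minimum of the total codelength falls at k = 4: keeping a fifth coefficient saves fewer residual bits than its own description costs.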
|
1110.2478
|
Countering Gattaca: Efficient and Secure Testing of Fully-Sequenced
Human Genomes (Full Version)
|
cs.CR cs.CE
|
Recent advances in DNA sequencing technologies have put ubiquitous
availability of fully sequenced human genomes within reach. It is no longer
hard to imagine the day when everyone will have the means to obtain and store
their own DNA sequence. Widespread and affordable availability of fully
sequenced genomes immediately opens up important opportunities in a number of
health-related fields. In particular, common genomic applications and tests
performed in vitro today will soon be conducted computationally, using
digitized genomes. New applications will be developed as genome-enabled
medicine becomes increasingly preventive and personalized. However, this
progress also prompts significant privacy challenges associated with potential
loss, theft, or misuse of genomic data. In this paper, we begin to address
genomic privacy by focusing on three important applications: Paternity Tests,
Personalized Medicine, and Genetic Compatibility Tests. After carefully
analyzing these applications and their privacy requirements, we propose a set
of efficient techniques based on private set operations. This allows us to
implement in silico, in a secure fashion, some operations that are currently
performed via in vitro methods. Experimental results demonstrate that the
proposed techniques are both feasible and practical today.
|
1110.2480
|
Beyond Traditional DTN Routing: Social Networks for Opportunistic
Communication
|
cs.NI cs.SI
|
This article examines the evolution of routing protocols for intermittently
connected ad hoc networks and discusses the trend toward social-based routing
protocols. A survey of current routing solutions is presented, where routing
protocols for opportunistic networks are classified based on the network graph
employed. The need to capture performance tradeoffs from a multi-objective
perspective is highlighted.
|
1110.2515
|
Normalized Mutual Information to evaluate overlapping community finding
algorithms
|
physics.soc-ph cs.SI physics.data-an
|
Given the increasing popularity of algorithms for overlapping clustering, in
particular in social network analysis, quantitative measures are needed to
assess the accuracy of a method. Given a set of true clusters and the set of
clusters found by an algorithm, the two sets must be compared to see how
similar or different they are. A normalized measure is desirable in
many contexts, for example assigning a value of 0 where the two sets are
totally dissimilar, and 1 where they are identical. A measure based on
normalized mutual information, [1], has recently become popular. We demonstrate
unintuitive behaviour of this measure, and show how this can be corrected by
using a more conventional normalization. We compare the results to that of
other measures, such as the Omega index [2].
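As a reference point for the normalization discussed above, here is a minimal sketch of the conventional normalized mutual information for flat (non-overlapping) partitions; the overlapping case the abstract addresses requires the extension of [1]. The function name and the arithmetic-mean normalization 2*I(A;B)/(H(A)+H(B)) are our illustrative choices.

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two flat clusterings of the
    same n items, NMI = 2*I(A;B) / (H(A) + H(B)): 1 for identical
    partitions (up to relabeling), 0 for independent ones."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())
    mi = sum(c / n * math.log(n * c / (pa[a] * pb[b]))
             for (a, b), c in pab.items())
    denom = entropy(pa) + entropy(pb)
    return 2 * mi / denom if denom > 0 else 1.0
```

A relabeled copy of a partition scores 1, while two partitions that are independent of each other score 0, which is the behaviour a normalized measure is expected to have.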
|
1110.2529
|
The Generalization Ability of Online Algorithms for Dependent Data
|
stat.ML cs.LG math.OC
|
We study the generalization performance of online learning algorithms trained
on samples coming from a dependent source of data. We show that the
generalization error of any stable online algorithm concentrates around its
regret--an easily computable statistic of the online performance of the
algorithm--when the underlying ergodic process is $\beta$- or $\phi$-mixing. We
show high probability error bounds assuming the loss function is convex, and we
also establish sharp convergence rates and deviation bounds for strongly convex
losses and several linear prediction problems such as linear and logistic
regression, least-squares SVM, and boosting on dependent data. In addition, our
results have straightforward applications to stochastic optimization with
dependent data, and our analysis requires only martingale convergence
arguments; we need not rely on more powerful statistical tools such as
empirical process theory.
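The regret statistic mentioned above can be made concrete with a small sketch: online gradient descent on the logistic loss, with the best-in-hindsight fixed comparator approximated by a crude grid search. The one-dimensional setup, learning rate, and grid are illustrative assumptions, not the paper's setting.

```python
import math

def online_logistic_regret(data, lr=0.1):
    """Run online gradient descent on the logistic loss over a stream of
    (x, y) pairs with y in {-1, +1}, and return the regret: cumulative
    online loss minus the loss of the best fixed 1-D weight in hindsight
    (approximated by a grid search over [-5, 5])."""
    w = 0.0
    online_loss = 0.0
    for x, y in data:
        online_loss += math.log1p(math.exp(-y * w * x))
        grad = -y * x / (1.0 + math.exp(y * w * x))  # d/dw log(1+e^{-ywx})
        w -= lr * grad
    best_fixed = min(
        sum(math.log1p(math.exp(-y * u * x)) for x, y in data)
        for u in (i / 100 for i in range(-500, 501))
    )
    return online_loss - best_fixed
```

On a linearly separable stream the regret stays well below the trivial bound of log(2) per round, which is the easily computable statistic the generalization error is shown to concentrate around.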
|
1110.2557
|
Constructions of Rank Modulation Codes
|
cs.IT math.IT
|
Rank modulation is a way of encoding information to correct errors in flash
memory devices as well as impulse noise in transmission lines. Modeling rank
modulation involves construction of packings of the space of permutations
equipped with the Kendall tau distance.
We present several general constructions of codes in permutations that cover
a broad range of code parameters. In particular, we show a number of ways in
which conventional error-correcting codes can be modified to correct errors in
the Kendall space. Codes that we construct afford simple encoding and decoding
algorithms of essentially the same complexity as required to correct errors in
the Hamming metric. For instance, from binary BCH codes we obtain codes
correcting $t$ Kendall errors in $n$ memory cells that support the order of
$n!/(\log_2n!)^t$ messages, for any constant $t= 1,2,...$ We also construct
families of codes that correct a number of errors that grows with $n$ at
varying rates, from $\Theta(n)$ to $\Theta(n^{2})$. One of our constructions
gives rise to a family of rank modulation codes for which the trade-off between
the number of messages and the number of correctable Kendall errors approaches
the optimal scaling rate. Finally, we list a number of possibilities for
constructing codes of finite length, and give examples of rank modulation codes
with specific parameters.
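For reference, the Kendall tau distance underlying these packings counts the pairs of elements on whose relative order two permutations disagree, which equals the minimum number of adjacent transpositions turning one into the other. A brute-force O(n^2) sketch (the function name is ours):

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Kendall tau distance between two permutations of the same
    elements: the number of pairwise order disagreements."""
    pos_q = {v: i for i, v in enumerate(q)}
    r = [pos_q[v] for v in p]  # positions in q, read in p's order
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])
```

The distance ranges from 0 (identical permutations) to n(n-1)/2 (reversal), which is why the number of correctable errors can grow as fast as Theta(n^2).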
|
1110.2558
|
Epidemic centrality - is there an underestimated epidemic impact of
network peripheral nodes?
|
physics.soc-ph cs.SI q-bio.PE
|
In the study of disease spreading on empirical complex networks in the SIR
model, initially infected nodes can be ranked according to some measure of
their epidemic impact. The highest-ranked nodes, also referred to as
"superspreaders", are associated with dominant epidemic risks and therefore
deserve special attention. Simulations on the studied empirical complex
networks show that the ranking depends on the dynamical regime of the
disease spreading. A possible mechanism leading to this dependence is
illustrated in an analytically tractable example. In systems where the
allocation of resources to individual nodes to counter disease spreading is
based on their ranking, the dynamical regime of disease spreading is frequently
not known before the outbreak of the disease. Therefore, we introduce a
quantity called epidemic centrality as an average over all relevant regimes of
disease spreading as a basis of the ranking. A recently introduced concept of
phase diagram of epidemic spreading is used as a framework in which several
types of averaging are studied. The epidemic centrality is compared to
structural properties of nodes such as node degree, k-cores and betweenness.
There is a growing trend of epidemic centrality with degree and k-core values,
but the variation of epidemic centrality is much smaller than the variation of
degree or k-core value. It is found that the epidemic centrality of the
structurally peripheral nodes is of the same order of magnitude as the epidemic
centrality of the structurally central nodes. The implications of these
findings for the distributions of resources to counter disease spreading are
discussed.
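A toy illustration of ranking nodes by mean SIR outbreak size when each node seeds the epidemic alone. This sketch assumes a discrete-time, one-step infectious period (Reed-Frost style) and a single transmission probability; it is only a proxy for the paper's epidemic centrality, which additionally averages over spreading regimes.

```python
import random

def epidemic_impact(adj, beta, runs=200, seed=0):
    """Mean final outbreak size of a discrete-time SIR epidemic seeded at
    each node of the graph `adj` (dict: node -> neighbor list), with
    per-contact transmission probability beta and a one-step infectious
    period. Higher values flag candidate superspreaders."""
    rng = random.Random(seed)
    def outbreak(src):
        infected, recovered = {src}, set()
        while infected:
            nxt = set()
            for u in infected:
                for v in adj[u]:
                    if v not in infected and v not in recovered and rng.random() < beta:
                        nxt.add(v)
            recovered |= infected
            infected = nxt
        return len(recovered)
    return {u: sum(outbreak(u) for _ in range(runs)) / runs for u in adj}
```

On a star graph the center outranks any leaf, but the leaves' impact is of the same order of magnitude, echoing the abstract's point about structurally peripheral nodes.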
|
1110.2593
|
Blind Source Separation with Compressively Sensed Linear Mixtures
|
cs.IT math.IT
|
This work studies the problem of simultaneously separating and reconstructing
signals from compressively sensed linear mixtures. We assume that all source
signals share a common sparse representation basis. The approach combines
classical Compressive Sensing (CS) theory with a linear mixing model. It allows
the mixtures to be sampled independently of each other. If samples are acquired
in the time domain, this means that the sensors need not be synchronized. Since
Blind Source Separation (BSS) from a linear mixture is only possible up to
permutation and scaling, factoring out these ambiguities leads to a
minimization problem on the so-called oblique manifold. We develop a geometric
conjugate subgradient method that scales to large systems for solving the
problem. Numerical results demonstrate the promising performance of the
proposed algorithm compared to several state-of-the-art methods.
|
1110.2610
|
Issues, Challenges and Tools of Clustering Algorithms
|
cs.IR cs.LG
|
Clustering is an unsupervised technique of Data Mining. It means grouping
similar objects together and separating the dissimilar ones. Each object in the
data set is assigned a class label in the clustering process using a distance
measure. This paper captures the problems faced in practice when clustering
algorithms are implemented. It also considers the most extensively used tools,
which are readily available and provide support functions that ease
programming. Once algorithms have been implemented, they also need to be tested
for validity. Several validation indexes exist for testing performance and
accuracy, and these are also discussed here.
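As an example of the validation indexes mentioned above, here is a minimal sketch of the silhouette coefficient, one common internal validity index; the choice of silhouette (rather than any index the paper surveys) is ours.

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient: for each point, a = mean distance to
    its own cluster, b = mean distance to the nearest other cluster, and
    s = (b - a) / max(a, b). Values near +1 indicate tight,
    well-separated clusters; values near -1 suggest misassignment."""
    scores = []
    for i, (p, li) in enumerate(zip(points, labels)):
        own = [math.dist(p, q) for j, (q, lj) in enumerate(zip(points, labels))
               if lj == li and j != i]
        if not own:  # singleton clusters get no score
            continue
        a = sum(own) / len(own)
        b = min(sum(math.dist(p, q) for q, lj in zip(points, labels)
                    if lj == other) / labels.count(other)
                for other in set(labels) if other != li)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Two compact, well-separated clusters score close to 1, which is the kind of check such indexes provide once a clustering algorithm has been implemented.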
|
1110.2615
|
Alternatives with stronger convergence than coordinate-descent iterative
LMI algorithms
|
math.OC cs.SY eess.SY
|
In this note we aim at putting more emphasis on the fact that trying to solve
non-convex optimization problems with coordinate-descent iterative linear
matrix inequality algorithms leads to suboptimal solutions, and put forward
other optimization methods better equipped to deal with such problems (having
theoretical convergence guarantees and/or being more efficient in practice).
This fact, already outlined at several places in the literature, still appears
to be disregarded by a sizable part of the systems and control community. Thus,
main elements on this issue and better optimization alternatives are presented
and illustrated by means of an example.
|
1110.2626
|
Analysis of Heart Diseases Dataset using Neural Network Approach
|
cs.LG cs.DB
|
Classification is one of the important techniques of Data Mining. Many
real-world problems in various fields such as business, science, industry and
medicine can be solved using the classification approach. Neural Networks have
emerged as an important tool for classification, and their advantages enable
efficient classification of given data. In this study a heart diseases dataset
is analyzed using a Neural Network approach. To increase the efficiency of the
classification process, a parallel approach is also adopted in the training
phase.
|