id | title | categories | abstract
|---|---|---|---|
1107.1586
|
Performance of Local Information Based Link Prediction: A Sampling
Perspective
|
cs.SI physics.soc-ph
|
Link prediction is pervasively employed to uncover the missing links in snapshots of real-world networks, which are usually obtained through various sampling methods. In the previous literature, however, in order to evaluate prediction performance, the known edges in the sampled snapshot are divided randomly into a training set and a probe set, without considering the sampling approach behind the snapshot. Different sampling methods may lead to different missing links, especially biased ones, so a random-partition-based evaluation of performance is no longer convincing once the sampling method is taken into account. Hence, in this paper, aiming to fill this void, we reevaluate the performance of local information based link prediction through divisions of the training set and the probe set governed by the sampling methods. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods was overestimated in prior works.
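As a rough illustration of the local-information predictors being reevaluated, here is a minimal sketch of the common-neighbors (CN) index on a hypothetical toy graph; the graph and scores are illustrative only, not the paper's data or protocol:

```python
# Illustrative sketch (not the paper's experiment): score missing links
# with the common-neighbors (CN) index, a typical local-information method.
from itertools import combinations

# Toy undirected graph as an adjacency dict (hypothetical example network).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3, 5}, 5: {4}}

def common_neighbors(u, v):
    """CN score: number of shared neighbors of u and v."""
    return len(adj[u] & adj[v])

# Score every non-edge; a higher CN score suggests a more likely missing link.
non_edges = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
scores = {(u, v): common_neighbors(u, v) for u, v in non_edges}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In the paper's setting, the probe set would be produced by a sampling method (e.g. breadth-first or random-walk sampling) rather than a uniform random split, and the scores above would then be evaluated against that probe set.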
|
1107.1600
|
On fuzzy syndrome hashing with LDPC coding
|
cs.IT cs.CR math.IT
|
The last decades have seen a growing interest in hash functions that allow
some sort of tolerance, e.g. for the purpose of biometric authentication. Among
these, the syndrome fuzzy hashing construction makes it possible to securely store biometric data and to perform user authentication without the need to share any secret key. This paper analyzes this model, showing that it offers a
suitable protection against information leakage and several advantages with
respect to similar solutions, such as the fuzzy commitment scheme. Furthermore,
the design and characterization of LDPC codes to be used for this purpose is
addressed.
|
1107.1608
|
Formation of Common Investment Networks by Project Establishment between
Agents
|
cs.SI cs.CE
|
We present an investment model integrated with trust-reputation mechanisms
where agents interact with each other to establish investment projects. We
investigate the establishment of investment projects, the influence of the
interaction between agents in the evolution of the distribution of wealth, as
well as the formation of common investment networks and some of their
properties. Simulation results show that the wealth distribution presents a
power law in its tail. Also, it is shown that the presented trust and reputation mechanism leads to the establishment of networks among agents, which exhibit some of the typical characteristics of real-life networks, such as a high clustering coefficient and a short average path length.
|
1107.1609
|
Linear Complexity Lossy Compressor for Binary Redundant Memoryless
Sources
|
cs.IT cond-mat.dis-nn math.IT
|
A lossy compression algorithm for binary redundant memoryless sources is
presented. The proposed scheme is based on sparse graph codes. By introducing a
nonlinear function, redundant memoryless sequences can be compressed. We
propose a linear complexity compressor based on the extended belief
propagation, into which an inertia term is heuristically introduced, and show
that it has near-optimal performance for moderate block lengths.
|
1107.1627
|
On Codes for Optimal Rebuilding Access
|
cs.IT cs.DC math.IT
|
MDS (maximum distance separable) array codes are widely used in storage
systems due to their computationally efficient encoding and decoding
procedures. An MDS code with r redundancy nodes can correct any r erasures by
accessing (reading) all the remaining information in both the systematic nodes
and the parity (redundancy) nodes. However, in practice, a single erasure is
the most likely failure event; hence, a natural question is how much
information do we need to access in order to rebuild a single storage node? We
define the rebuilding ratio as the fraction of remaining information accessed
during the rebuilding of a single erasure. In our previous work we showed that
the optimal rebuilding ratio of 1/r is achievable (using our newly constructed
array codes) for the rebuilding of any systematic node; however, all the information needs to be accessed for the rebuilding of the parity nodes. Thus, constructing array codes with a rebuilding ratio of 1/r for any node was left as an
open problem. In this paper, we solve this open problem and present array codes
that achieve the lower bound of 1/r for rebuilding any single systematic or
parity node.
|
1107.1638
|
Weighted algorithms for compressed sensing and matrix completion
|
cs.IT math.IT math.ST stat.TH
|
This paper is about iteratively reweighted basis-pursuit algorithms for
compressed sensing and matrix completion problems. In a first part, we give a
theoretical explanation of the fact that reweighted basis pursuit can substantially improve upon basis pursuit for exact recovery in compressed sensing. We exhibit a
condition that links the accuracy of the weights to the RIP and incoherency
constants, which ensures exact recovery. In a second part, we introduce a new
algorithm for matrix completion, based on the idea of iterative reweighting.
Since a weighted nuclear "norm" is typically non-convex, it cannot be used
easily as an objective function. So, we define a new estimator based on a
fixed-point equation. We give empirical evidence that this new
algorithm leads to strong improvements over nuclear norm minimization on
simulated and real matrix completion problems.
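A minimal sketch of the iterative reweighting idea for basis pursuit, assuming the common weight update w_i = 1/(|x_i| + eps) (the paper's precise weighting and recovery conditions may differ); each weighted problem is solved as a linear program:

```python
# Sketch of iteratively reweighted basis pursuit (IRL1) for compressed
# sensing. The weight update w_i = 1/(|x_i| + eps) is one common choice,
# not necessarily the paper's; sizes and seed are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 15, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

def weighted_bp(A, b, w):
    """min sum_i w_i |x_i| s.t. Ax = b, via the split x = p - q, p, q >= 0."""
    m, n = A.shape
    c = np.concatenate([w, w])          # objective on the stacked [p; q]
    A_eq = np.hstack([A, -A])           # A (p - q) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

x = weighted_bp(A, b, np.ones(n))       # plain basis pursuit first
for _ in range(3):                      # then reweight and re-solve
    w = 1.0 / (np.abs(x) + 1e-3)
    x = weighted_bp(A, b, w)
print(np.linalg.norm(A @ x - b))        # feasibility residual
```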
|
1107.1640
|
Nearest Neighbour Decoding with Pilot-Assisted Channel Estimation for
Fading Multiple-Access Channels
|
cs.IT math.IT
|
We study a noncoherent multiple-input multiple-output (MIMO) fading
multiple-access channel (MAC), where the transmitters and the receiver are
aware of the statistics of the fading, but not of its realisation. We analyse
the rate region that is achievable with nearest neighbour decoding and
pilot-assisted channel estimation and determine the corresponding pre-log
region, which is defined as the limiting ratio of the rate region to the
logarithm of the SNR as the SNR tends to infinity.
|
1107.1642
|
Indirect Channel Sensing for Cognitive Amplify-and-Forward Relay
Networks
|
cs.IT math.IT
|
In a cognitive radio network, knowledge of the primary channel is beneficial, but it cannot be obtained by direct channel estimation in the cognitive system as in previous methods. One possible alternative is for the primary receiver to broadcast the primary channel information to the cognitive users, but this would require modifying the primary receiver and would consume precious spectrum resources. Cooperative communication is also a promising technique. This paper introduces an indirect channel sensing method for the primary channel in a cognitive amplify-and-forward (AF) relay network. Since the signal retransmitted by the primary AF relay node includes channel effects, the cognitive radio can receive the retransmitted signal from the AF node and then extract the channel information from it. Afterwards, least squares channel estimation and sparse channel estimation can be used to address dense and sparse multipath channels, respectively. Numerical experiments demonstrate that the proposed indirect channel sensing method achieves acceptable performance.
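The least squares step can be sketched as follows for a hypothetical dense multipath channel, with the pilot length, tap count, and noise level chosen only for illustration:

```python
# Hedged sketch of least-squares channel estimation from a known
# (retransmitted) training sequence; the convolution model and all
# sizes here are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
L, N = 4, 64                                # channel taps, pilot length
h_true = rng.standard_normal(L)             # dense multipath channel
pilot = rng.choice([-1.0, 1.0], size=N)     # known training sequence

# Convolution matrix X so that y = X @ h is the full linear convolution.
col = np.r_[pilot, np.zeros(L - 1)]
row = np.r_[pilot[0], np.zeros(L - 1)]
X = toeplitz(col, row)
y = X @ h_true + 0.01 * rng.standard_normal(N + L - 1)

h_ls = np.linalg.pinv(X) @ y                # least-squares estimate
print(np.round(h_ls - h_true, 3))           # estimation error per tap
```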
|
1107.1644
|
Prostate biopsy tracking with deformation estimation
|
cs.CV physics.med-ph
|
Transrectal biopsies under 2D ultrasound (US) control are the current
clinical standard for prostate cancer diagnosis. The isoechogenic nature of
prostate carcinoma makes it necessary to sample the gland systematically,
resulting in a low sensitivity. Also, it is difficult for the clinician to
follow the sampling protocol accurately under 2D US control and the exact
anatomical location of the biopsy cores is unknown after the intervention.
Tracking systems for prostate biopsies make it possible to generate biopsy
distribution maps for intra- and post-interventional quality control and 3D
visualisation of histological results for diagnosis and treatment planning.
They can also guide the clinician toward non-ultrasound targets. In this paper,
a volume-swept 3D US based tracking system for fast and accurate estimation of
prostate tissue motion is proposed. The entirely image-based system solves the
patient motion problem with an a priori model of rectal probe kinematics.
Prostate deformations are estimated with elastic registration to maximize
accuracy. The system is robust with only 17 registration failures out of 786
(2%) biopsy volumes acquired from 47 patients during biopsy sessions. Accuracy
was evaluated to 0.76$\pm$0.52mm using manually segmented fiducials on 687
registered volumes stemming from 40 patients. A clinical protocol for assisted
biopsy acquisition was designed and implemented as a biopsy assistance system, which makes it possible to overcome the drawbacks of the standard biopsy procedure.
|
1107.1660
|
Click Efficiency: A Unified Optimal Ranking for Online Ads and Documents
|
cs.GT cs.IR
|
Traditionally the probabilistic ranking principle is used to rank the search
results while the ranking based on expected profits is used for paid placement
of ads. These rankings try to maximize the expected utilities based on the user
click models. Recent empirical analysis of search engine logs suggests a unified click model for both ranked ads and search results. The segregated view of document and ad rankings does not account for this commonality. Further, the models used involve the parameters of (i) the probability of the user abandoning browsing of the results and (ii) the perceived relevance of result snippets, but how to incorporate them for improved ranking is currently unknown. In this paper, we propose a generalized ranking function---namely "Click Efficiency (CE)"---for documents and ads based on empirically proven user click models. The ranking considers parameters (i) and (ii) above, is optimal, and has the same time complexity as sorting. To exploit its generality, we examine the reduced forms of CE ranking under different assumptions, enumerating a hierarchy of ranking
functions. Some of the rankings in the hierarchy are currently used ad and
document ranking functions; while others suggest new rankings. While optimality
of ranking is sufficient for document ranking, applying CE ranking to ad
auctions requires an appropriate pricing mechanism. We incorporate a second
price based pricing mechanism with the proposed ranking. Our analysis proves
several desirable properties including revenue dominance over VCG for the same
bid vector and existence of a Nash Equilibrium in pure strategies. The
equilibrium is socially optimal, and revenue equivalent to the truthful VCG
equilibrium. Further, we relax the independence assumption in CE ranking and
analyze the diversity ranking problem. We show that optimal diversity ranking
is NP-Hard in general, and that a constant time approximation is unlikely.
|
1107.1686
|
Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI)
|
cs.AI
|
This volume contains the papers presented at the first edition of the
Doctoral Consortium of the 5th International Symposium on Rules (RuleML
2011@IJCAI) held on July 19th, 2011 in Barcelona, as well as the poster session
papers of the RuleML 2011@IJCAI main conference.
|
1107.1691
|
Minimum-Time Quantum Transport with Bounded Trap Velocity
|
math.OC cs.SY quant-ph
|
We formulate the problem of efficient transport of a quantum particle trapped
in a harmonic potential which can move with a bounded velocity, as a
minimum-time problem on a linear system with bounded input. We completely solve
the corresponding optimal control problem and obtain an interesting bang-bang
solution. These results are expected to find applications in quantum
information processing, where quantum transport between the storage and
processing units of a quantum computer is an essential step. They can also be
extended to the efficient transport of Bose-Einstein condensates, where the
ability to control them is crucial for their potential use as interferometric
sensors.
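The bang-bang structure can be illustrated on the textbook minimum-time double-integrator problem (a stand-in with the same bounded-input, linear-system flavor, not the paper's trap model): apply the full bound, then brake, switching at half the travel time.

```python
# Generic minimum-time bang-bang illustration (standard double-integrator
# example, not the paper's trap model): drive the state from rest at 0 to
# rest at d with |u| <= 1. The optimal control is u = +1 then u = -1,
# switching at t = sqrt(d), with total time 2*sqrt(d).
import math

d = 4.0
t_switch = math.sqrt(d)
T, dt = 2 * t_switch, 1e-4
x, v, t = 0.0, 0.0, 0.0
while t < T:
    u = 1.0 if t < t_switch else -1.0   # bang-bang: full thrust, then brake
    v += u * dt                          # semi-implicit Euler integration
    x += v * dt
    t += dt
print(round(x, 2), round(v, 2))
```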
|
1107.1695
|
On Krawtchouk Transforms
|
cs.IT math.CA math.IT
|
Krawtchouk polynomials appear in a variety of contexts, most notably as
orthogonal polynomials and in coding theory via the Krawtchouk transform. We
present an operator calculus formulation of the Krawtchouk transform that is
suitable for computer implementation. A positivity result for the Krawtchouk
transform is shown. Then our approach is compared with the use of the
Krawtchouk transform in coding theory where it appears in MacWilliams' and
Delsarte's theorems on weight enumerators. We conclude with a construction of
Krawtchouk polynomials in an arbitrary finite number of variables, orthogonal
with respect to the multinomial distribution.
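For concreteness, the binary (q=2) Krawtchouk polynomials of coding theory and their orthogonality with binomial weights can be checked numerically; this is a standard identity, sketched here for a small n:

```python
# Binary (q=2) Krawtchouk polynomials via their explicit sum, checking the
# standard orthogonality relation with binomial weights:
#   sum_x C(n,x) K_i(x) K_j(x) = 2^n * C(n,i) * [i == j].
from math import comb

def krawtchouk(k, x, n):
    """K_k(x; n) in the binary case (math.comb returns 0 when j > x)."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 6
gram = [[sum(comb(n, x) * krawtchouk(i, x, n) * krawtchouk(j, x, n)
             for x in range(n + 1))
         for j in range(n + 1)] for i in range(n + 1)]
print(gram[0][0], gram[1][1], gram[0][1])
```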
|
1107.1709
|
Massive MIMO: How many antennas do we need?
|
cs.IT math.IT
|
We consider a multicell MIMO uplink channel where each base station (BS) is
equipped with a large number of antennas N. The BSs are assumed to estimate
their channels based on pilot sequences sent by the user terminals (UTs).
Recent work has shown that, as N grows infinitely large, (i) the simplest form
of user detection, i.e., the matched filter (MF), becomes optimal, (ii) the
transmit power per UT can be made arbitrarily small, (iii) the system
performance is limited by pilot contamination. The aim of this paper is to
assess to which extent the above conclusions hold true for large, but finite N.
In particular, we derive how many antennas per UT are needed to achieve \eta %
of the ultimate performance. We then study how much can be gained through more
sophisticated minimum-mean-square-error (MMSE) detection and how many more
antennas are needed with the MF to achieve the same performance. Our analysis
relies on novel results from random matrix theory which allow us to derive
tight approximations of achievable rates with a class of linear receivers.
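The MF-versus-MMSE gap can be sketched for a single cell with perfect CSI (a simplification that ignores pilot contamination, so this is not the paper's full model); all sizes are illustrative:

```python
# Sketch: matched filter (MF) vs. MMSE detection for a single-cell uplink
# with N antennas and K users, perfect CSI (simplifying assumptions).
import numpy as np

rng = np.random.default_rng(2)
N, K, snr = 64, 8, 1.0
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

def sinr(W, H, snr):
    """Post-detection SINR of each user for a linear receiver W (N x K)."""
    G = W.conj().T @ H                        # effective K x K channel
    sig = snr * np.abs(np.diag(G)) ** 2
    interf = snr * (np.abs(G) ** 2).sum(axis=1) - sig
    noise = (np.abs(W) ** 2).sum(axis=0)      # unit-variance noise
    return sig / (interf + noise)

W_mf = H                                                              # MF
W_mmse = H @ np.linalg.inv(H.conj().T @ H + (1 / snr) * np.eye(K))    # MMSE
rate_mf = np.log2(1 + sinr(W_mf, H, snr)).sum()
rate_mmse = np.log2(1 + sinr(W_mmse, H, snr)).sum()
print(rate_mf, rate_mmse)
```

Since the MMSE receiver maximizes each user's SINR among linear receivers, its sum rate can only match or exceed the matched filter's in this setup.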
|
1107.1731
|
Distributed SIR-Aware Scheduling in Large-Scale Wireless Networks
|
cs.IT math.IT
|
Opportunistic scheduling and routing can in principle greatly increase the
throughput of decentralized wireless networks, but to be practical they must do
so with small amounts of timely side information. In this paper, we propose
three techniques for low-overhead distributed opportunistic scheduling (DOS)
and precisely determine their effect on the overall network outage probability
and transmission capacity (TC). The first is distributed channel-aware
scheduling (DCAS), the second is distributed interferer-aware scheduling
(DIAS), and the third generalizes and combines those two and is called
distributed interferer-channel-aware scheduling (DICAS). One contribution is
determining the optimum channel and interference thresholds that a given isolated transmitter should estimate and apply when scheduling its own transmissions. Using these thresholds, the precise network-wide gain of each
technique is quantified and compared. We conclude by considering interference
cancellation at the receivers, and finding how much it improves the outage
probability.
|
1107.1736
|
High-dimensional structure estimation in Ising models: Local separation
criterion
|
stat.ML cs.LG math.ST stat.TH
|
We consider the problem of high-dimensional Ising (graphical) model
selection. We propose a simple algorithm for structure estimation based on the
thresholding of the empirical conditional variation distances. We introduce a
novel criterion for tractable graph families, where this method is efficient,
based on the presence of sparse local separators between node pairs in the
underlying graph. For such graphs, the proposed algorithm has a sample
complexity of $n=\Omega(J_{\min}^{-2}\log p)$, where $p$ is the number of
variables, and $J_{\min}$ is the minimum (absolute) edge potential in the
model. We also establish nonasymptotic necessary and sufficient conditions for
structure estimation.
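The separator-based criterion can be illustrated exactly on a three-node Ising chain: the conditional variation distance between non-neighbors vanishes given the separator, while it stays large for neighbors. This toy check uses exact enumeration in place of the paper's empirical estimates:

```python
# Toy sketch of the thresholding idea on a 3-node Ising chain 0-1-2:
# conditioning on the separator {1} drives the conditional variation
# distance of the non-edge (0,2) to zero, while the edge (0,1) keeps a
# large distance. Exact enumeration replaces sampling for simplicity.
import itertools, math

J = {(0, 1): 1.0, (1, 2): 1.0}              # chain; edge (0,2) is absent
states = list(itertools.product([-1, 1], repeat=3))
weights = {s: math.exp(sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
           for s in states}
Z = sum(weights.values())
p = {s: w / Z for s, w in weights.items()}

def cond_var_dist(i, j, cond):
    """max over x_S of TV( P(x_i | x_j=+1, x_S), P(x_i | x_j=-1, x_S) )."""
    dists = []
    for s_vals in itertools.product([-1, 1], repeat=len(cond)):
        def marg(xj):
            sel = {s: q for s, q in p.items()
                   if s[j] == xj and all(s[c] == v for c, v in zip(cond, s_vals))}
            tot = sum(sel.values())
            return sum(q for s, q in sel.items() if s[i] == 1) / tot
        dists.append(abs(marg(1) - marg(-1)))   # TV for binary variables
    return max(dists)

d_adj = cond_var_dist(0, 1, cond=[2])   # neighbors, conditioned on the rest
d_sep = cond_var_dist(0, 2, cond=[1])   # non-neighbors given separator {1}
print(d_adj, d_sep)                     # large vs. (exactly) zero
```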
|
1107.1739
|
The entropy functional, the information path functional's essentials and
their connections to Kolmogorov's entropy, complexity and physics
|
cs.IT cs.SY math.IT math.OC math.ST stat.TH
|
The paper introduces recent results related to an entropy functional on trajectories of a controlled diffusion process and to the information path functional (IPF), analyzing their connections to Kolmogorov entropy, complexity, and Lyapunov characteristics. Considering the IPF's essentials and specifics, the paper studies the singularities of the IPF extremal equations and the resulting invariant relations, both of which are useful for the solution of important mathematical and applied problems.
Keywords: Additive functional; Entropy; Singularities; Natural Border Problem; Invariant
|
1107.1744
|
Stochastic convex optimization with bandit feedback
|
math.OC cs.LG cs.SY
|
This paper addresses the problem of minimizing a convex, Lipschitz function $f$ over a convex, compact set $\mathcal{X}$ under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value $f(x)$ at any query point $x \in \mathcal{X}$. The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs $\tilde{O}(\mathrm{poly}(d)\sqrt{T})$ regret. Since any algorithm has regret at least $\Omega(\sqrt{T})$ on this problem, our algorithm is optimal in terms of the scaling with $T$.
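For intuition about the bandit feedback model (this is not the paper's ellipsoid method), here is a simple two-point zeroth-order descent sketch on a 1-D quadratic; all constants are illustrative:

```python
# Two-point zeroth-order gradient descent under noisy function-value
# (bandit) feedback on a 1-D convex objective. Illustrative constants;
# a much simpler algorithm than the paper's generalized ellipsoid method.
import random

random.seed(0)
f = lambda x: (x - 0.3) ** 2                     # unknown convex objective
query = lambda x: f(x) + random.gauss(0, 1e-3)   # noisy bandit feedback

x, delta = 0.9, 0.05
for t in range(1, 2001):
    g = (query(x + delta) - query(x - delta)) / (2 * delta)  # grad estimate
    x = min(max(x - 0.5 / t ** 0.5 * g, 0.0), 1.0)           # projected step
print(x)
```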
|
1107.1750
|
Structural and Dynamical Patterns on Online Social Networks: the Spanish
May 15th Movement as a case study
|
physics.soc-ph cs.SI nlin.AO
|
The number of people using online social networks in their everyday life is continuously growing at a pace never seen before. This new kind of communication has an enormous impact on opinions, cultural trends, information spreading and even on the commercial success of new products. More importantly, online social networks have emerged as a fundamental organizing mechanism in recent country-wide social movements. In this paper, we provide a quantitative analysis of the structural and dynamical patterns emerging from the activity of an online social network around the ongoing May 15th (15M) movement in Spain. Our network is made up of users that exchanged tweets in a time period of one month, which includes the birth and stabilization of the 15M movement. We characterize in depth the growth of this dynamical network and find that it is scale-free with communities at the mesoscale. We also find that its dynamics exhibits typical features of critical systems, such as robustness and power-law distributions for several quantities. Remarkably, we report that the patterns characterizing the spreading dynamics are asymmetric, giving rise to a clear distinction between information sources and sinks. Our study represents a first step towards the use of data from online social media to comprehend modern societal dynamics.
|
1107.1752
|
Stochastic Sensor Scheduling for Energy Constrained Estimation in
Multi-Hop Wireless Sensor Networks
|
math.OC cs.SY
|
Wireless Sensor Networks (WSNs) enable a wealth of new applications where
remote estimation is essential. Individual sensors simultaneously sense a
dynamic process and transmit measured information over a shared channel to a
central fusion center. The fusion center computes an estimate of the process
state by means of a Kalman filter. In this paper we assume that the WSN admits
a tree topology with fusion center at the root. At each time step only a subset
of sensors can be selected to transmit observations to the fusion center due to
a limited energy budget. We propose a stochastic sensor selection algorithm that randomly selects a subset of sensors according to a certain probability distribution, which is appropriately designed to minimize the asymptotic expected estimation error covariance matrix. We show that the optimal stochastic sensor
selection problem can be relaxed into a convex optimization problem and thus
solved efficiently. We also provide a possible implementation of our algorithm
which does not introduce any communication overhead. The paper ends with some
numerical examples that show the effectiveness of the proposed approach.
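One plausible way to randomize the selection without communication overhead, sketched here with placeholder probabilities (the paper's optimized distribution and mechanism may differ), is for every node to draw from a PRNG seeded by the time step:

```python
# Sketch of overhead-free randomized selection: if every sensor and the
# fusion center run the same seeded PRNG, they agree on which subset
# transmits at each step without exchanging any messages. The selection
# probabilities below are placeholders, not the paper's optimized ones.
import random

def select(step, probs, budget, seed=1234):
    """Weighted sampling without replacement, reproducible from (seed, step)."""
    rng = random.Random(seed * 1_000_003 + step)
    remaining = dict(probs)
    chosen = set()
    for _ in range(budget):
        names = sorted(remaining)
        pick = rng.choices(names, weights=[remaining[n] for n in names], k=1)[0]
        chosen.add(pick)
        del remaining[pick]
    return chosen

probs = {"s1": 0.4, "s2": 0.3, "s3": 0.2, "s4": 0.1}
at_sensor = select(step=7, probs=probs, budget=2)   # computed at each sensor
at_fusion = select(step=7, probs=probs, budget=2)   # computed at the center
print(at_sensor, at_sensor == at_fusion)
```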
|
1107.1753
|
Notes on Electronic Lexicography
|
cs.CL
|
These notes are a continuation of topics covered by V. Selegej in his article
"Electronic Dictionaries and Computational lexicography". How can an electronic
dictionary have as its object the description of closely related languages?
Obviously, such a question allows multiple answers.
|
1107.1779
|
A Survey of User-Centric Data Warehouses: From Personalization to
Recommendation
|
cs.DB
|
Providing customized support for OLAP poses tremendous challenges to OLAP technology. At the crossroads of user preferences and the data warehouse, two emerging trends stand out, namely (i) personalization and (ii) recommendation. Despite the panoply of proposed approaches, the open issues of the user-centric data warehouse community have not yet been addressed. In this paper we give an overview of several user-centric data warehouse proposals. We also discuss the two promising concepts in this area, namely personalization and recommendation for data warehouses, and compare the current approaches with each other with respect to several criteria.
|
1107.1805
|
Loss-sensitive Training of Probabilistic Conditional Random Fields
|
stat.ML cs.AI
|
We consider the problem of training probabilistic conditional random fields
(CRFs) in the context of a task where performance is measured using a specific
loss function. While maximum likelihood is the most common approach to training
CRFs, it ignores the inherent structure of the task's loss function. We
describe alternatives to maximum likelihood which take that loss into account.
These include a novel adaptation of a loss upper bound from the structured SVMs
literature to the CRF context, as well as a new loss-inspired KL divergence
objective which relies on the probabilistic nature of CRFs. These
loss-sensitive objectives are compared to maximum likelihood using ranking as a
benchmark task. This comparison confirms the importance of incorporating loss
information in the probabilistic training of CRFs, with the loss-inspired KL
outperforming all other objectives.
|
1107.1824
|
Measurement Design for Detecting Sparse Signals
|
cs.IT math.IT
|
We consider the problem of testing for the presence (or detection) of an
unknown sparse signal in additive white noise. Given a fixed measurement
budget, much smaller than the dimension of the signal, we consider the general
problem of designing compressive measurements to maximize the measurement
signal-to-noise ratio (SNR), as increasing SNR improves the detection
performance in a large class of detectors. We use a lexicographic optimization
approach, where the optimal measurement design for sparsity level $k$ is sought
only among the set of measurement matrices that satisfy the optimality
conditions for sparsity level k-1. We consider optimizing two different SNR
criteria, namely a worst-case SNR measure, over all possible realizations of a
k-sparse signal, and an average SNR measure with respect to a uniform
distribution on the locations of the up to k nonzero entries in the signal. We
establish connections between these two criteria and certain classes of tight
frames. We constrain our measurement matrices to the class of tight frames to
avoid coloring the noise covariance matrix. For the worst-case problem, we show
that the optimal measurement matrix is a Grassmannian line packing for
most---and a uniform tight frame for all---sparse signals. For the average SNR
problem, we prove that the optimal measurement matrix is a uniform tight frame
with minimum sum-coherence for most---and a tight frame for all---sparse
signals.
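The tight-frame constraint can be checked numerically; for instance, m rows of the n-point DFT matrix form a unit-magnitude tight frame, so A A^H is a multiple of the identity and white noise stays white (an illustrative frame choice, not the paper's optimal design):

```python
# Quick check of the tight-frame property used above: m rows of the
# n-point DFT matrix satisfy A A^H = n I, so such compressive
# measurements do not color white noise. Illustrative frame choice.
import numpy as np

m, n = 4, 8
A = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(n)) / n)
gram = A @ A.conj().T
print(np.allclose(gram, n * np.eye(m)))   # tight frame check
```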
|
1107.1829
|
Medium Access Control for Wireless Networks with Peer-to-Peer State
Exchange
|
cs.IT math.IT
|
Distributed medium access control (MAC) protocols are proposed for wireless
networks assuming that one-hop peers can periodically exchange a small amount
of state information. Each station maintains a state and makes state
transitions and transmission decisions based on its state and recent state
information collected from its one-hop peers. A station can adapt its packet
length and the size of its state space to the amount of traffic in its
neighborhood. It is shown that these protocols converge to a steady state,
where stations take turns to transmit in each neighborhood without collision.
In other words, an efficient time-division multiple access (TDMA) like schedule
is formed in a distributed manner, as long as the topology of the network
remains static or changes slowly with respect to the execution of the protocol.
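A toy simulation of the convergence claim, using a deliberately simplified update rule (colliding stations back off to currently free slots learned from the exchanged state) rather than the paper's protocol:

```python
# Toy single-neighborhood simulation: stations repeatedly pick slots, and
# colliding stations probabilistically move to slots that the exchanged
# state shows to be free, until a collision-free TDMA-like schedule forms.
# A simplification of the protocol, meant only to illustrate the fixed point.
import random

random.seed(3)
n = 6                                            # stations == slots
slots = [random.randrange(n) for _ in range(n)]  # initial random choices

def collided(slots):
    return [i for i, s in enumerate(slots) if slots.count(s) > 1]

rounds = 0
while collided(slots) and rounds < 10_000:
    free = [s for s in range(n) if s not in slots]
    for i in collided(slots):
        if free and random.random() < 0.5:       # probabilistic back-off
            slots[i] = free.pop(random.randrange(len(free)))
    rounds += 1
print(slots, rounds)
```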
|
1107.1837
|
Information-Theoretic Measures for Objective Evaluation of
Classifications
|
cs.CV cs.IT math.IT
|
This work presents a systematic study of objective evaluations of abstaining
classifications using Information-Theoretic Measures (ITMs). First, we define objective measures as those that do not depend on any free parameter, a definition that makes it straightforward to examine the "objectivity" or "subjectivity" of classification evaluations. Second, we propose twenty-four normalized ITMs, derived from either mutual information,
divergence, or cross-entropy, for investigation. Contrary to conventional
performance measures that apply empirical formulas based on users' intuitions
or preferences, the ITMs are theoretically more sound for realizing objective
evaluations of classifications. We apply them to distinguish "error types" and
"reject types" in binary classifications without the need for input data of
cost terms. Third, to better understand and select the ITMs, we suggest three
desirable features for classification assessment measures, which appear more
crucial and appealing from the viewpoint of classification applications. Using
these features as "meta-measures", we can reveal the advantages and limitations
of ITMs from a higher level of evaluation knowledge. Numerical examples are
given to corroborate our claims and compare the differences among the proposed
measures. The best measure is selected in terms of the meta-measures, and its
specific properties regarding error types and reject types are analytically
derived.
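One ITM of the kind described can be sketched directly from a confusion matrix with a reject column; the sqrt(H·H) normalization is just one of several options and is not claimed to be the paper's selected measure:

```python
# Sketch of one mutual-information-based measure: normalized MI computed
# from a confusion matrix whose extra column is a "reject" bucket.
# Normalizing by sqrt(H(true) * H(pred)) is one of several conventions.
import math

def nmi(confusion):
    total = sum(sum(row) for row in confusion)
    pt = [sum(row) / total for row in confusion]             # true marginals
    pp = [sum(row[j] for row in confusion) / total
          for j in range(len(confusion[0]))]                 # predicted
    mi = sum((c / total) * math.log2((c / total) / (pt[i] * pp[j]))
             for i, row in enumerate(confusion)
             for j, c in enumerate(row) if c)
    h = lambda ps: -sum(p * math.log2(p) for p in ps if p)
    return mi / math.sqrt(h(pt) * h(pp))

# rows: true classes; columns: predicted class 1, class 2, reject
perfect = [[50, 0, 0], [0, 50, 0]]
abstain = [[40, 5, 5], [4, 40, 6]]
print(nmi(perfect), nmi(abstain))
```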
|
1107.1839
|
Interference Networks with General Message Sets: A Random Coding Scheme
|
cs.IT math.IT
|
In this paper, the Interference Network with General Message Sets (IN-GMS) is
introduced in which several transmitters send messages to several receivers:
Each subset of transmitters transmits an individual message to each subset of receivers. For such a general scenario, an achievability scheme is presented using random coding. This scheme is systematically built based on the
capacity achieving scheme for the Multiple Access Channel (MAC) with common
message as well as the best known achievability scheme for the Broadcast
Channel (BC) with common message. A graphical illustration of the random
codebook construction procedure is also provided, by using which the
achievability scheme is easily understood. Some benefits of the proposed
achievability scheme are described. It is also shown that the resulting rate
region is optimal for a class of orthogonal INs-GMS, which yields the capacity
region. Finally, it is demonstrated how this general achievability scheme can be used to derive capacity inner bounds for interference networks with different distributions of messages; in most cases, the proposed achievability
scheme leads to the best known capacity inner bound for the underlying channel.
Capacity inner bounds can also be derived for new communication scenarios.
|
1107.1851
|
Task swapping networks in distributed systems
|
cs.DC cs.AI cs.NI
|
In this paper we propose task swapping networks for task reassignments by
using task swappings in distributed systems. Some classes of task reassignments
are achieved by using iterative local task swappings between software agents in
distributed systems. We use group-theoretic methods to find a minimum-length
sequence of adjacent task swappings needed from a source task assignment to a
target task assignment in a task swapping network of several well-known
topologies.
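For the special case of a line topology, the minimum number of adjacent task swaps equals the inversion count (Kendall tau distance) of the relative permutation, a classical fact sketched below (the paper's group-theoretic method covers more general topologies):

```python
# Line-topology illustration: the minimum number of adjacent task swaps
# turning a source assignment into a target one equals the inversion count
# (Kendall tau distance) of the relative permutation -- a classical fact,
# not the paper's general group-theoretic construction.
def min_adjacent_swaps(source, target):
    pos = {task: i for i, task in enumerate(target)}
    perm = [pos[task] for task in source]        # relative permutation
    return sum(perm[i] > perm[j]                 # count inversions
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

print(min_adjacent_swaps(["a", "b", "c", "d"], ["d", "a", "b", "c"]))
```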
|
1107.1886
|
Utility Optimal Coding for Packet Transmission over Wireless Networks -
Part I: Networks of Binary Symmetric Channels
|
cs.IT cs.NI math.IT
|
We consider multi--hop networks comprising Binary Symmetric Channels
($\mathsf{BSC}$s). The network carries unicast flows for multiple users. The
utility of the network is the sum of the utilities of the flows, where the
utility of each flow is a concave function of its throughput. Given that the
network capacity is shared by the flows, there is a contention for network
resources like coding rate (at the physical layer), scheduling time (at the MAC
layer), etc., among the flows. We propose a proportional fair transmission
scheme that maximises the sum utility of flow throughputs subject to the rate
and the scheduling constraints. This is achieved by {\em jointly optimising the
packet coding rates of all the flows through the network}.
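The proportional fair objective can be illustrated on a toy version with a single shared capacity constraint (far simpler than the joint coding-rate and scheduling constraints in the paper), where the optimum is the equal split:

```python
# Toy proportional-fair allocation: maximize sum_i log(r_i) subject to
# sum_i r_i <= C. With log utilities and one shared constraint, the
# optimum is the equal split r_i = C/n. Illustrative stand-in only.
import numpy as np
from scipy.optimize import minimize

n, C = 4, 12.0
res = minimize(lambda r: -np.sum(np.log(r)),          # sum log utility
               x0=np.full(n, 1.0),
               bounds=[(1e-6, None)] * n,
               constraints=[{"type": "ineq", "fun": lambda r: C - np.sum(r)}],
               tol=1e-9)
print(res.x)
```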
|
1107.1890
|
Utility Optimal Coding for Packet Transmission over Wireless Networks -
Part II: Networks of Packet Erasure Channels
|
cs.IT cs.NI math.IT
|
We define a class of multi--hop erasure networks that approximates a wireless
multi--hop network. The network carries unicast flows for multiple users, and
each information packet within a flow is required to be decoded at the flow
destination within a specified delay deadline. The allocation of coding rates
amongst flows/users is constrained by network capacity. We propose a
proportional fair transmission scheme that maximises the sum utility of flow
throughputs. This is achieved by {\em jointly optimising the packet coding
rates and the allocation of bits of coded packets across transmission slots.}
|
1107.1895
|
On Investment-Consumption with Regime-Switching
|
math.OC cs.SY q-fin.PM
|
In a continuous time stochastic economy, this paper considers the problem of
consumption and investment in a financial market in which the representative
investor exhibits a change in the discount rate. The investment opportunities
are a stock and a riskless account. The market coefficients and the discount factor switch according to a finite state Markov chain. The change in the discount rate leads to time inconsistencies in the investor's decisions. The randomness in our model is driven by a Brownian motion and a Markov chain. Following Ekeland et al. (2008), we introduce and characterize the equilibrium policies for power utility functions. Moreover, they are computed in closed form for the logarithmic
utility function. We show that a higher discount rate leads to a higher
equilibrium consumption rate. Numerical experiments show the effect of both
time preference and risk aversion on the equilibrium policies.
|
1107.1900
|
Behavior patterns of online users and the effect on information
filtering
|
physics.soc-ph cs.SI physics.data-an stat.AP
|
Understanding the structure and evolution of web-based user-object bipartite
networks is an important task since they play a fundamental role in online
information filtering. In this paper, we focus on investigating the patterns of
online users' behavior and their effect on the recommendation process. Empirical analysis of e-commerce systems shows that users have significant taste diversity and that their interests in niche items highly overlap. Additionally, the recommendation process is investigated on both the real networks and reshuffled networks in which the real users' behavior patterns are gradually destroyed. Our results show that the performance of personalized recommendation methods is strongly related to the real network structure. A detailed study of each item shows that recommendation accuracy for hot items is nearly maximal and quite robust to the reshuffling process. However, niche items cannot be accurately recommended after removing users' behavior patterns. Our work is also meaningful in a practical sense, since it reveals an effective direction for improving the accuracy and robustness of existing recommender systems.
|
1107.1932
|
Current State and Challenges of Automatic Planning in Web Service
Composition
|
cs.DC cs.AI
|
This paper gives a survey of the current state of Web Service Composition
and of the difficulties of, and solutions to, automated Web Service
Composition. It first gives a definition of Web Service Composition and the
motivation for and goal of it. It then explores why we need automated Web
Service Composition and formally defines the domains. Techniques and solutions
proposed in the papers we surveyed to address the current difficulties of
automated Web Service Composition are presented. Verification and future work
are discussed at the end to further extend the topic.
|
1107.1938
|
Uncovering Evolutionary Ages of Nodes in Complex Networks
|
physics.soc-ph cs.SI physics.data-an
|
In a complex network, different groups of nodes may have existed for
different amounts of time. To detect the evolutionary history of a network is
of great importance. We present a general method based on spectral analysis to
address this fundamental question in network science. In particular, we argue
and demonstrate, using model and real-world networks, the existence of positive
correlation between the magnitudes of eigenvalues and node ages. In situations
where the network topology is unknown but short time series measured from nodes
are available, we suggest to uncover the network topology at the present (or
any given time of interest) by using compressive sensing and then perform the
spectral analysis. Knowledge of ages of various groups of nodes can provide
significant insights into the evolutionary process underpinning the network.
|
1107.1943
|
Enhanced Genetic Algorithm approach for Solving Dynamic Shortest Path
Routing Problems using Immigrants and Memory Schemes
|
cs.NE cs.NI
|
In Internet Routing, the static shortest path (SP) problem has been addressed
using well known intelligent optimization techniques like artificial neural
networks, genetic algorithms (GAs) and particle swarm optimization. Advances
in wireless communication have led to more and more mobile wireless networks,
such as mobile ad hoc networks (MANETs) and wireless sensor networks. The
dynamic nature of the network is the main characteristic of a MANET.
Therefore, the SP routing problem in a MANET turns into a dynamic optimization
problem (DOP). Here the nodes are made aware of the environmental condition,
thereby making them intelligent, and this serves as the input for the GA. The
implementation then uses GAs with immigrants and memory schemes to solve the
dynamic SP routing problem (DSPRP) in MANETs. In our paper, once the network
topology changes, the optimal solutions in the new environment can be searched
using the new immigrants or the useful information stored in the memory.
Results show that the GA with new immigrants achieves better convergence than
the GA with the memory scheme.
|
1107.1944
|
An Interpretation of the Moore-Penrose Generalized Inverse of a Singular
Fisher Information Matrix
|
cs.IT math.IT math.ST stat.TH
|
It is proved that in a non-Bayesian parametric estimation problem, if the
Fisher information matrix (FIM) is singular, unbiased estimators for the
unknown parameter will not exist. Cramer-Rao bound (CRB), a popular tool to
lower bound the variances of unbiased estimators, seems inapplicable in such
situations. In this paper, we show that the Moore-Penrose generalized inverse
of a singular FIM can be interpreted as the CRB corresponding to the minimum
variance among all choices of minimum constraint functions. This result ensures
the logical validity of applying the Moore-Penrose generalized inverse of an
FIM as the covariance lower bound when the FIM is singular. Furthermore, the
result can be applied as a performance bound on the joint design of constraint
functions and unbiased estimators.
|
1107.1950
|
Knowledge Embedding and Retrieval Strategies in an Informledge System
|
cs.AI
|
Informledge System (ILS) is a knowledge network with autonomous nodes and
intelligent links that integrate and structure the pieces of knowledge. In this
paper, we put forward the strategies for knowledge embedding and retrieval in
an ILS. ILS is a powerful knowledge network system dealing with logical storage
and connectivity of information units to form knowledge using autonomous nodes
and multi-lateral links. In ILS, the autonomous nodes known as Knowledge
Network Nodes (KNNs) play vital roles: they are used not only for storage,
parsing and forming the multi-lateral linkages between knowledge points, but
also for helping to realize intelligent retrieval of linked information units
in the form of knowledge. Knowledge built into the ILS takes the shape of a
sphere. The intelligence incorporated into the links of a KNN helps in
retrieving various knowledge threads from a specific set of KNNs. A developed
entity of information realized through KNNs takes the shape of a knowledge
cone.
|
1107.1956
|
Informledge System: A Modified Knowledge Network with Autonomous Nodes
using Multi-lateral Links
|
cs.IR cs.AI cs.NE
|
Research in the field of Artificial Intelligence is continually progressing
toward simulating the human knowledge base as an automated intelligent
knowledge base, which can encode and retrieve knowledge efficiently along with
the capability of being consistent and scalable at all times. However, there
is no system at hand that can match the diversified abilities of the human
knowledge base. In this position paper, we put forward a theoretical model of
a different system that intends to integrate pieces of knowledge, the
Informledge System (ILS). The ILS would encode knowledge by virtue of
knowledge units linked across diversified domains. The proposed ILS comprises
autonomous knowledge units termed Knowledge Network Nodes (KNNs), which help
in efficient cross-linking of knowledge units to encode fresh knowledge. These
links are reasoned and inferred by the Parser and Link Manager, which are part
of a KNN.
|
1107.1958
|
Linear Index Coding via Semidefinite Programming
|
cs.DS cs.DM cs.IT math.IT
|
In the index coding problem, introduced by Birk and Kol (INFOCOM, 1998), the
goal is to broadcast an n bit word to n receivers (one bit per receiver), where
the receivers have side information represented by a graph G. The objective is
to minimize the length of a codeword sent to all receivers which allows each
receiver to learn its bit. For linear index coding, the minimum possible length
is known to be equal to a graph parameter called minrank (Bar-Yossef et al.,
FOCS, 2006).
We show a polynomial time algorithm that, given an n vertex graph G with
minrank k, finds a linear index code for G of length $\widetilde{O}(n^{f(k)})$,
where f(k) depends only on k. For example, for k=3 we obtain f(3) ~ 0.2574. Our
algorithm employs a semidefinite program (SDP) introduced by Karger, Motwani
and Sudan (J. ACM, 1998) for graph coloring and its refined analysis due to
Arora, Chlamtac and Charikar (STOC, 2006). Since the SDP we use is not a
relaxation of the minimization problem we consider, a crucial component of our
analysis is an upper bound on the objective value of the SDP in terms of the
minrank.
At the heart of our analysis lies a combinatorial result which may be of
independent interest. Namely, we show an exact expression for the maximum
possible value of the Lovasz theta-function of a graph with minrank k. This
yields a tight gap between two classical upper bounds on the Shannon capacity
of a graph.
|
1107.1972
|
Influence of Doppler Bin Width on GPS Acquisition Probabilities
|
cs.IT math.IT
|
Acquisition is a search in two continuous dimensions, where the digital
algorithms require a partitioning of the search space into cells. Depending on
the partitioning of the Doppler frequency domain, more than one cell might
contain significant signal energy. We present an expression for the expected
values of the cells' energies to analyze the impact of the Doppler bin width on
detection and false alarm probabilities.
|
1107.1974
|
On an Efficient Marie Curie Initial Training Network
|
cs.SI physics.soc-ph
|
Collaboration in science is one of the key components of world-class
research. The European Commission supports collaboration between institutions
and funds young researchers appointed by these partner institutions. In these
networks, the mobility of the researchers is enforced in order to enhance the
collaboration. In this study, based on a real Marie Curie Initial Training
Network, an algorithm to construct a collaboration network is investigated. The
algorithm suggests that a strongly efficient expansion leads to a star-like
network. The results might help the design of efficient collaboration networks
for future Initial Training Network proposals.
|
1107.1987
|
Median Algorithm for Sector Spectra Calculation from Images Registered
by the Spectral Airglow Temperature Imager
|
physics.data-an cs.CV
|
The Spectral Airglow Temperature Imager (SATI) is an instrument specially
designed for investigating wave processes in the Mesosphere-Lower
Thermosphere. In order to determine the kinematic parameters of a wave, the
values of a physical quantity at different spatial points, and their changes
in time, must be known. Owing to the capabilities of the SATI instrument for
space scanning, different parts of the images (sectors of spectrograms)
correspond to the respective mesopause areas (where the radiation is
generated). An approach is proposed for sector spectra determination from SATI
images based on ordered statistics (the median) instead of the mean.
Comparative results are shown.
|
1107.2004
|
Quickest Paths in Simulations of Pedestrians
|
physics.soc-ph cs.MA
|
This contribution proposes a method to make agents in a microscopic
simulation of pedestrian traffic walk approximately along a path of estimated
minimal remaining travel time to their destination. Usually models of
pedestrian dynamics are (implicitly) built on the assumption that pedestrians
walk along the shortest path. Model elements formulated to make pedestrians
locally avoid collisions and intrusion into personal space do not produce
motion on quickest paths. Therefore a special model element is needed, if one
wants to model and simulate pedestrians for whom travel time matters most (e.g.
travelers in a station hall who are late for a train). Here such a model
element is proposed, discussed and used within the Social Force Model.
|
1107.2006
|
Port-Hamiltonian systems on graphs
|
math.OC cs.SY math.DS math.SG
|
In this paper we present a unifying geometric and compositional framework for
modeling complex physical network dynamics as port-Hamiltonian systems on open
graphs. The basic idea is to associate with the incidence matrix of the graph a
Dirac structure relating the flow and effort variables associated to the edges,
internal vertices, as well as boundary vertices of the graph, and to formulate
energy-storing or energy-dissipating relations between the flow and effort
variables of the edges and internal vertices. This allows for state variables
associated to the edges, and formalizes the interconnection of networks.
Examples from different origins such as consensus algorithms are shown to share
the same structure. It is shown how the identified Hamiltonian structure offers
systematic tools for the analysis of the resulting dynamics.
|
1107.2018
|
Distributed Robust Multi-Cell Coordinated Beamforming with Imperfect
CSI: An ADMM Approach
|
cs.IT math.IT
|
Multi-cell coordinated beamforming (MCBF), where multiple base stations (BSs)
collaborate with each other in the beamforming design for mitigating the
inter-cell interference, has been a subject drawing great attention recently.
Most MCBF designs assume perfect channel state information (CSI) of mobile
stations (MSs); however, CSI errors are inevitable at the BSs in practice.
Assuming elliptically bounded CSI errors, this paper studies the robust MCBF
design problem that minimizes the weighted sum power of BSs subject to
worst-case signal-to-interference-plus-noise ratio (SINR) constraints on the
MSs. Our goal is to devise a distributed optimization method that can obtain
the worst-case robust beamforming solutions in a decentralized fashion, with
only local CSI used at each BS and little backhaul signaling for message
exchange between BSs. However, the considered problem is difficult to handle
even in the centralized form. We first propose an efficient approximation
method in the centralized form, based on the semidefinite relaxation (SDR)
technique. To obtain the robust beamforming solution in a decentralized
fashion, we further propose a distributed robust MCBF algorithm, using a
distributed convex optimization technique known as alternating direction method
of multipliers (ADMM). We analytically show the convergence of the proposed
distributed robust MCBF algorithm to the optimal centralized solution and its
better bandwidth efficiency in backhaul signaling over the existing dual
decomposition based algorithms. Simulation results are presented to examine the
effectiveness of the proposed SDR method and the distributed robust MCBF
algorithm.
|
1107.2021
|
Multi-Instance Learning with Any Hypothesis Class
|
cs.LG stat.ML
|
In the supervised learning setting termed Multiple-Instance Learning (MIL),
the examples are bags of instances, and the bag label is a function of the
labels of its instances. Typically, this function is the Boolean OR. The
learner observes a sample of bags and the bag labels, but not the instance
labels that determine the bag labels. The learner is then required to emit a
classification rule for bags based on the sample. MIL has numerous
applications, and many heuristic algorithms have been used successfully on this
problem, each adapted to specific settings or applications. In this work we
provide a unified theoretical analysis for MIL, which holds for any underlying
hypothesis class, regardless of a specific application or problem domain. We
show that the sample complexity of MIL is only poly-logarithmically dependent
on the size of the bag, for any underlying hypothesis class. In addition, we
introduce a new PAC-learning algorithm for MIL, which uses a regular supervised
learning algorithm as an oracle. We prove that efficient PAC-learning for MIL
can be generated from any efficient non-MIL supervised learning algorithm that
handles one-sided error. The computational complexity of the resulting
algorithm is only polynomially dependent on the bag size.
|
1107.2031
|
Stegobot: construction of an unobservable communication network
leveraging social behavior
|
cs.CR cs.NI cs.SI physics.soc-ph
|
We propose the construction of an unobservable communications network using
social networks. The communication endpoints are vertices on a social network.
Probabilistically unobservable communication channels are built by leveraging
image steganography and the social image sharing behavior of users. All
communication takes place along the edges of a social network overlay
connecting friends. We show that such a network can provide decent bandwidth
even with a far from optimal routing mechanism such as restricted flooding. We
show that such a network is indeed usable by constructing a botnet on top of
it, called Stegobot. It is designed to spread via social malware attacks and
steal information from its victims. Unlike conventional botnets, Stegobot
traffic does not introduce new communication endpoints between bots. We
analyzed a real-world dataset of image sharing between members of an online
social network. Analysis of Stegobot's network throughput indicates that
stealthy as it is, it is also functionally powerful -- capable of channeling
fair quantities of sensitive data from its victims to the botmaster at tens of
megabytes every month.
|
1107.2059
|
One dimensional Convolutional Goppa Codes over the projective line
|
cs.IT math.AG math.IT
|
We give a general method to construct MDS one-dimensional convolutional
codes. Our method generalizes previous constructions of H. Gluesing-Luerssen
and B. Langfeld. Moreover we give a classification of one-dimensional
Convolutional Goppa Codes and propose a characterization of MDS codes of this
type.
|
1107.2085
|
Kunchenko's Polynomials for Template Matching
|
cs.CV
|
This paper reviews Kunchenko's polynomials used as a template matching method
to recognize a template in a one-dimensional input signal. The Kunchenko's
polynomials method is compared with the classical methods, cross-correlation
and sum of squared differences, on a numerical statistical example.
|
1107.2086
|
Extend Commitment Protocols with Temporal Regulations: Why and How
|
cs.AI
|
The proposal of Elisa Marengo's thesis is to extend commitment protocols to
explicitly account for temporal regulations. This extension will satisfy two
needs: (1) it will allow representing, in a flexible and modular way, temporal
regulations with a normative force, posed on the interaction, so as to
represent conventions, laws and suchlike; (2) it will allow committing to
complex conditions, which describe not only what will be achieved but to some
extent also how. These two aspects will be deeply investigated in the proposal
of a unified framework, which is part of the ongoing work and will be included
in the thesis.
|
1107.2087
|
Rule-Based Semantic Sensing
|
cs.AI
|
Rule-Based Systems have been in use for decades to solve a variety of
problems but not in the sensor informatics domain. Rules aid the aggregation of
low-level sensor readings to form a more complete picture of the real world and
help to address 10 identified challenges for sensor network middleware. This
paper presents the reader with an overview of a system architecture and a pilot
application to demonstrate the usefulness of a system integrating rules with
sensor middleware.
|
1107.2088
|
Advancing Multi-Context Systems by Inconsistency Management
|
cs.AI
|
Multi-Context Systems are an expressive formalism to model (possibly)
non-monotonic information exchange between heterogeneous knowledge bases. Such
information exchange, however, often comes with unforeseen side-effects leading
to violation of constraints, making the system inconsistent, and thus unusable.
Although there are many approaches to assess and repair a single inconsistent
knowledge base, the heterogeneous nature of Multi-Context Systems poses
problems which have not yet been addressed in a satisfying way: How to identify
and explain an inconsistency that spreads over multiple knowledge bases with
different logical formalisms (e.g., logic programs and ontologies)? What are
the causes of inconsistency if inference/information exchange is non-monotonic
(e.g., absent information as cause)? How to deal with inconsistency if access
to knowledge bases is restricted (e.g., companies exchange information, but do
not allow arbitrary modifications to their knowledge bases)? Many traditional
approaches solely aim for a consistent system, but automatic removal of
inconsistency is not always desirable. Therefore, a human operator has to be
supported in finding the erroneous parts contributing to the inconsistency. In
my thesis these issues will be addressed mainly from a foundational
perspective, while our research project also provides algorithms and prototype
implementations.
|
1107.2089
|
Rule-based query answering method for a knowledge base of economic
crimes
|
cs.AI
|
We present a description of the PhD thesis which aims to propose a rule-based
query answering method for relational data. In this approach we use additional
knowledge which is represented as a set of rules and describes the source data
at the concept (ontological) level. Queries are posed in the terms of the
abstract level. We present two methods. The first one uses hybrid reasoning and
the second one exploits only forward chaining. These two methods are
demonstrated by the prototypical implementation of the system coupled with the
Jess engine. Tests are performed on the knowledge base of the selected economic
crimes: fraudulent disbursement and money laundering.
|
1107.2090
|
Semantic-ontological combination of Business Rules and Business
Processes in IT Service Management
|
cs.AI
|
IT Service Management deals with managing a broad range of items related to
complex system environments. As there is a close connection both to business
interests and to IT infrastructure, the application of semantic expressions which
are seamlessly integrated within applications for managing ITSM environments,
can help to improve transparency and profitability. This paper focuses on the
challenges regarding the integration of semantics and ontologies within ITSM
environments. It will describe the paradigm of relationships and inheritance
within complex service trees and will present an approach of ontologically
expressing them. Furthermore, the application of SBVR-based rules as executable
SQL triggers will be discussed. Finally, the broad range of topics for further
research, derived from the findings, will be presented.
|
1107.2100
|
Interference Focusing for Simplified Optical Fiber Models with
Dispersion
|
cs.IT math.IT
|
A discrete-time two-user interference channel model is developed that
captures non-linear phenomena that arise in optical fiber communication
employing wavelength-division multiplexing (WDM). The effect of non-linearity
is that an amplitude variation on one carrier induces a phase variation on the
other carrier. Moreover, the model captures the effect of group velocity
mismatch that introduces memory in the channel. It is shown that both users can
achieve the maximum pre-log factor of 1 simultaneously by using an interference
focusing technique introduced in an earlier work.
|
1107.2101
|
Nearly Doubling the Throughput of Multiuser MIMO Systems Using Codebook
Tailored Limited Feedback Protocol
|
cs.IT math.IT
|
We present and analyze a new robust feedback and transmit strategy for
multiuser MIMO downlink communication systems, termed Rate Approximation (RA).
RA combines the flexibility and robustness needed for reliable communications
with the user terminal under a limited feedback constraint. It responds to two
important observations. First, it is less important to approximate the channel
than the rate, such that the optimal scheduling decision can be mimicked at
the base station. Second, a fixed transmit codebook at the transmitter is
often better, because the channel state information is then more accurate. In
the RA scheme the transmit and feedback codebooks are
separated and user rates are delivered to the base station subject to a
controlled uniform error. The scheme is analyzed and proved to have better
performance below a certain interference plus noise margin and better behavior
than the classical Jindal formula. LTE system simulations sustain the analytic
results showing performance gains of up to 50% or 70% compared to zeroforcing
when using multiple antennas at the base station and multiple antennas or a
single antenna at the terminals, respectively. A new feedback protocol is
developed which inherently considers the transmit codebook and which is able to
deal with the complexity issue at the terminal.
|
1107.2104
|
An estimation of distribution algorithm with adaptive Gibbs sampling for
unconstrained global optimization
|
cs.NE math.OC stat.ML
|
In this paper a new heuristic approach belonging to the field of evolutionary
Estimation of Distribution Algorithms (EDAs) is proposed. An EDA builds a
probability model that characterizes the distribution of solutions, and a set
of solutions is sampled from this model. The main framework of the
proposed method is an estimation of distribution algorithm, in which an
adaptive Gibbs sampling is used to generate new promising solutions and, in
combination with a local search strategy, it improves the individual solutions
produced in each iteration. The Estimation of Distribution Algorithm with
Adaptive Gibbs Sampling we are proposing in this paper is called AGEDA. We
experimentally evaluate and compare this algorithm against two deterministic
procedures and several stochastic methods in three well known test problems for
unconstrained global optimization. It is empirically shown that our heuristic
is robust in problems that involve three central aspects that mainly determine
the difficulty of global optimization problems, namely high-dimensionality,
multi-modality and non-smoothness.
|
1107.2126
|
Strong Solutions of the Fuzzy Linear Systems
|
cs.NA cs.AI cs.IT math.IT math.LO math.NA
|
We consider a fuzzy linear system with crisp coefficient matrix and with an
arbitrary fuzzy number in parametric form on the right-hand side. It is known
that the well-known existence and uniqueness theorem of a strong fuzzy solution
is equivalent to the following: The coefficient matrix is the product of a
permutation matrix and a diagonal matrix. This means that this theorem can be
applicable only for a special form of linear systems, namely, only when the
system consists of equations, each of which has exactly one variable. We prove
an existence and uniqueness theorem, which can be used on more general systems.
The necessary and sufficient conditions of the theorem are dependent on both
the coefficient matrix and the right-hand side. This theorem is a
generalization of the well-known existence and uniqueness theorem for the
strong solution.
|
1107.2168
|
Information Symmetries in Irreversible Processes
|
cond-mat.stat-mech cs.IT math.IT math.ST nlin.CD stat.TH
|
We study dynamical reversibility in stationary stochastic processes from an
information theoretic perspective. Extending earlier work on the reversibility
of Markov chains, we focus on finitary processes with arbitrarily long
conditional correlations. In particular, we examine stationary processes
represented or generated by edge-emitting, finite-state hidden Markov models.
Surprisingly, we find pervasive temporal asymmetries in the statistics of such
stationary processes with the consequence that the computational resources
necessary to generate a process in the forward and reverse temporal directions
are generally not the same. In fact, an exhaustive survey indicates that most
stationary processes are irreversible. We study the ensuing relations between
model topology in different representations, the process's statistical
properties, and its reversibility in detail. A process's temporal asymmetry is
efficiently captured using two canonical unifilar representations of the
generating model, the forward-time and reverse-time epsilon-machines. We
analyze example irreversible processes whose epsilon-machine presentations
change size under time reversal, including one which has a finite number of
recurrent causal states in one direction, but an infinite number in the
opposite direction. From the forward-time and reverse-time epsilon-machines,
we are able
to construct a symmetrized, but nonunifilar, generator of a process---the
bidirectional machine. Using the bidirectional machine, we show how to directly
calculate a process's fundamental information properties, many of which are
otherwise only poorly approximated via process samples. The tools we introduce
and the insights we offer provide a better understanding of the many facets of
reversibility and irreversibility in stochastic processes.
|
1107.2229
|
Scaling Behavior of Convolutional LDPC Ensembles over the BEC
|
cs.IT math.IT
|
We study the scaling behavior of coupled sparse graph codes over the binary
erasure channel. In particular, let 2L+1 be the length of the coupled chain,
let M be the number of variables in each of the 2L + 1 local copies, let l be
the number of iterations, let Pb denote the bit error probability, and let
{\epsilon} denote the channel parameter. We are interested in how these
quantities scale when we let the blocklength (2L + 1)M tend to infinity. Based
on empirical evidence we show that the threshold saturation phenomenon is
rather stable with respect to the scaling of the various parameters and we
formulate some general rules of thumb which can serve as a guide for the design
of coding systems based on coupled graphs.
|
1107.2321
|
An algorithm for list decoding number field codes
|
math.NT cs.CC cs.IT math.IT
|
We present an algorithm for list decoding codewords of algebraic number field
codes in polynomial time. This is the first explicit procedure for decoding
number field codes, whose construction was previously described by Lenstra and
Guruswami. We rely on an equivalent of the LLL reduction algorithm for
$\OK$-modules due to Fieker and Stehl\'e and on algorithms due to Cohen for
computing the Hermite normal form of matrices representing modules over
Dedekind domains.
|
1107.2336
|
A Variation of the Box-Counting Algorithm Applied to Colour Images
|
cs.CV
|
The box counting method for fractal dimension estimation has not been applied
to large or colour images thus far, due to the processing time required. In
this letter we present the box merging method, a fast, easy-to-implement
variation that is very easily expandable to any number of dimensions. It is
applied here to RGB images, which are considered as sets in 5-D space.
|
1107.2347
|
BSVM: A Banded Support Vector Machine
|
stat.ML cs.CV
|
We describe a novel binary classification technique called Banded SVM
(B-SVM). In the standard C-SVM formulation of Cortes et al. (1995), the
decision rule is encouraged to lie in the interval [1, \infty]. The new B-SVM
objective function contains a penalty term that encourages the decision rule to
lie in a user specified range [\rho_1, \rho_2]. In addition to the standard set
of support vectors (SVs) near the class boundaries, B-SVM results in a second
set of SVs in the interior of each class.
|
1107.2353
|
Blending Bayesian and frequentist methods according to the precision of
prior information with an application to hypothesis testing
|
stat.ME cs.IT math.IT math.ST stat.TH
|
The following zero-sum game between nature and a statistician blends Bayesian
methods with frequentist methods such as p-values and confidence intervals.
Nature chooses a posterior distribution consistent with a set of possible
priors. At the same time, the statistician selects a parameter distribution for
inference with the goal of maximizing the minimum Kullback-Leibler information
gained over a confidence distribution or other benchmark distribution. An
application to testing a simple null hypothesis leads the statistician to
report a posterior probability of the hypothesis that is informed by both
Bayesian and frequentist methodology, each weighted according to how well the
prior is known.
Since neither the Bayesian approach nor the frequentist approach is entirely
satisfactory in situations involving partial knowledge of the prior
distribution, the proposed procedure reduces to a Bayesian method given
complete knowledge of the prior, to a frequentist method given complete
ignorance about the prior, and to a blend between the two methods given partial
knowledge of the prior. The blended approach resembles the Bayesian method
rather than the frequentist method to the precise extent that the prior is
known.
The problem of testing a point null hypothesis illustrates the proposed
framework. The blended probability that the null hypothesis is true is equal to
the p-value or a lower bound of an unknown Bayesian posterior probability,
whichever is greater. Thus, given total ignorance represented by a lower bound
of 0, the p-value is used instead of any Bayesian posterior probability. At the
opposite extreme of a known prior, the p-value is ignored. In the intermediate
case, the possible Bayesian posterior probability that is closest to the
p-value is used for inference. Thus, both the Bayesian method and the
frequentist method influence the inferences made.
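The decision rule stated above (report the possible Bayesian posterior closest to the p-value) can be sketched as a one-line clamp; the function name and the optional upper bound are illustrative, not taken from the paper:

```python
def blended_probability(p_value, posterior_lower, posterior_upper=1.0):
    """Blended posterior probability of a point null hypothesis: the
    possible Bayesian posterior probability closest to the p-value,
    i.e. the p-value clamped to [posterior_lower, posterior_upper]."""
    return min(max(p_value, posterior_lower), posterior_upper)
```

Total ignorance (lower bound 0) returns the p-value; a fully known prior (lower bound equal to upper bound) ignores the p-value entirely.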
|
1107.2365
|
On some special cases of the Entropy Photon-Number Inequality
|
quant-ph cs.IT math.IT
|
We show that the Entropy Photon-Number Inequality (EPnI) holds when one of
the input states is the vacuum state, for several candidate choices of the
other input state; these include the cases where the state has number states as
its eigenvectors and either has only two non-zero eigenvalues or has an
arbitrary number of non-zero eigenvalues but is a high-entropy state. We also
discuss the conditions which, if satisfied, would lead to an extension of these
results.
|
1107.2379
|
Data Stability in Clustering: A Closer Look
|
cs.LG cs.DS
|
We consider the model introduced by Bilu and Linial (2010), who study
problems for which the optimal clustering does not change when distances are
perturbed. They show that even when a problem is NP-hard, it is sometimes
possible to obtain efficient algorithms for instances resilient to certain
multiplicative perturbations, e.g. on the order of $O(\sqrt{n})$ for max-cut
clustering. Awasthi et al. (2010) consider center-based objectives, and Balcan
and Liang (2011) analyze the $k$-median and min-sum objectives, giving
efficient algorithms for instances resilient to certain constant multiplicative
perturbations.
Here, we are motivated by the question of to what extent these assumptions
can be relaxed while allowing for efficient algorithms. We show there is little
room to improve these results by giving NP-hardness lower bounds for both the
$k$-median and min-sum objectives. On the other hand, we show that constant
multiplicative resilience parameters can be so strong as to make the clustering
problem trivial, leaving only a narrow range of resilience parameters for which
clustering is interesting. We also consider a model of additive perturbations
and give a correspondence between additive and multiplicative notions of
stability. Our results provide a close examination of the consequences of
assuming stability in data.
|
1107.2443
|
On the Approximability and Hardness of Minimum Topic Connected Overlay
and Its Special Instances
|
cs.DS cs.DC cs.SI
|
In the context of designing a scalable overlay network to support
decentralized topic-based pub/sub communication, the Minimum Topic-Connected
Overlay problem (Min-TCO in short) has been investigated: Given a set of t
topics and a collection of n users together with the lists of topics they are
interested in, the aim is to connect these users to a network by a minimum
number of edges such that every graph induced by users interested in a common
topic is connected. It is known that Min-TCO is NP-hard and approximable within
O(log t) in polynomial time. In this paper, we further investigate the problem
and some of its special instances. We give various hardness results for
instances where the number of topics a user is interested in is bounded by a
constant, and also for instances where the number of users interested in a
common topic is constant. For the latter case, we present the first
constant-factor approximation algorithm. We also present some polynomial-time
algorithms for very restricted instances of Min-TCO.
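For intuition, the Min-TCO objective can be stated operationally: find the fewest edges such that every topic-induced subgraph is connected. The exhaustive-search sketch below (feasible only for tiny instances, and not one of the paper's algorithms) makes the definition concrete:

```python
from itertools import combinations

def min_tco_bruteforce(n_users, interests):
    """Exhaustive search for a minimum topic-connected overlay on tiny
    instances: the fewest edges such that, for every topic, the subgraph
    induced by the users interested in it is connected."""
    topics = set().union(*interests)
    all_edges = list(combinations(range(n_users), 2))

    def connected(nodes, edges):
        nodes = set(nodes)
        if len(nodes) <= 1:
            return True
        adj = {v: set() for v in nodes}
        for u, v in edges:                    # keep only induced edges
            if u in nodes and v in nodes:
                adj[u].add(v)
                adj[v].add(u)
        seen, stack = set(), [next(iter(nodes))]
        while stack:                          # DFS from an arbitrary node
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] - seen)
        return seen == nodes

    for size in range(len(all_edges) + 1):    # smallest edge sets first
        for edges in combinations(all_edges, size):
            if all(connected([u for u in range(n_users) if t in interests[u]],
                             edges) for t in topics):
                return list(edges)
```

For example, with interest lists [{1}, {2}, {1, 2}], two edges through the user interested in both topics suffice.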
|
1107.2444
|
Private Data Release via Learning Thresholds
|
cs.CC cs.LG
|
This work considers computationally efficient privacy-preserving data
release. We study the task of analyzing a database containing sensitive
information about individual participants. Given a set of statistical queries
on the data, we want to release approximate answers to the queries while also
guaranteeing differential privacy---protecting each participant's sensitive
data.
Our focus is on computationally efficient data release algorithms; we seek
algorithms whose running time is polynomial, or at least sub-exponential, in
the data dimensionality. Our primary contribution is a computationally
efficient reduction from differentially private data release for a class of
counting queries, to learning thresholded sums of predicates from a related
class.
We instantiate this general reduction with a variety of algorithms for
learning thresholds. These instantiations yield several new results for
differentially private data release. As two examples, taking {0,1}^d to be the
data domain (of dimension d), we obtain differentially private algorithms for:
(*) Releasing all k-way conjunctions. For any given k, the resulting data
release algorithm has bounded error as long as the database is of size at least
d^{O(\sqrt{k\log(k\log d)})}. The running time is polynomial in the database
size.
(*) Releasing a (1-\gamma)-fraction of all parity queries. For any \gamma
\geq \poly(1/d), the algorithm has bounded error as long as the database is of
size at least \poly(d). The running time is polynomial in the database size.
Several other instantiations yield further results for privacy-preserving
data release. Of the two results highlighted above, the first learning
algorithm uses techniques for representing thresholded sums of predicates as
low-degree polynomial threshold functions. The second learning algorithm is
based on Jackson's Harmonic Sieve algorithm [Jackson 1997].
|
1107.2462
|
Statistical Topic Models for Multi-Label Document Classification
|
stat.ML cs.LG
|
Machine learning approaches to multi-label document classification have to
date largely relied on discriminative modeling techniques such as support
vector machines. A drawback of these approaches is that performance rapidly
drops off as the total number of labels and the number of labels per document
increase. This problem is amplified when the label frequencies exhibit the type
of highly skewed distributions that are often observed in real-world datasets.
In this paper we investigate a class of generative statistical topic models for
multi-label documents that associate individual word tokens with different
labels. We investigate the advantages of this approach relative to
discriminative models, particularly with respect to classification problems
involving large numbers of relatively rare labels. We compare the performance
of generative and discriminative approaches on document labeling tasks ranging
from datasets with several thousand labels to datasets with tens of labels. The
experimental results indicate that probabilistic generative models can achieve
competitive multi-label classification performance compared to discriminative
methods, and have advantages for datasets with many labels and skewed label
frequencies.
|
1107.2464
|
Epidemic Spread in Human Networks
|
physics.soc-ph cs.SY math.DS stat.AP
|
Epidemic spreading is one of the most studied dynamical processes on complex networks. An
epidemic model describes how infections spread throughout a network. Among the
compartmental models used to describe epidemics, the
Susceptible-Infected-Susceptible (SIS) model has been widely used. In the SIS
model, each node can be susceptible, become infected with a given infection
rate, and become again susceptible with a given curing rate. In this paper, we
add a new compartment to the classic SIS model to account for human response to
epidemic spread. Each individual can be infected, susceptible, or alert.
Susceptible individuals can become alert with an alerting rate if infected
individuals exist in their neighborhood. An individual in the alert state is
less likely to become infected than an individual in the susceptible state,
owing to its newly adopted cautious behavior. The problem is formulated as a
continuous-time Markov process on a general static graph and then reduced to a
set of ordinary differential equations by applying the mean-field approximation
to the corresponding Kolmogorov forward equations. The model is then studied
using results from algebraic graph theory and center manifold theorem. We
analytically show that our model exhibits two distinct thresholds in the
dynamics of epidemic spread. Below the first threshold, infection dies out
exponentially. Beyond the second threshold, infection persists in the steady
state. Between the two thresholds, the infection spreads at the first stage but
then dies out asymptotically as the result of increased alertness in the
network. Finally, simulations are provided to support our findings. Our results
suggest that alertness can be considered as a strategy for controlling
epidemics, with multiple potential areas of application, from infectious
disease mitigation to malware impact reduction.
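The susceptible-alert-infected dynamics described above can be illustrated with a first-order mean-field simulation; the parameter names and the forward-Euler integration are illustrative choices, not the paper's formulation:

```python
import numpy as np

def simulate_sais(A, beta, beta_a, delta, kappa, p_i0, steps=2000, dt=0.01):
    """First-order mean-field simulation of SIS with an added 'alert'
    compartment (parameter names are illustrative):
      beta   - infection rate of susceptible nodes
      beta_a - reduced infection rate of alert nodes (beta_a < beta)
      delta  - curing rate
      kappa  - alerting rate given infected neighbors
    Returns per-node infection and alertness probabilities."""
    n = A.shape[0]
    p_i = np.full(n, p_i0)   # P(node is infected)
    p_a = np.zeros(n)        # P(node is alert)
    for _ in range(steps):
        p_s = np.clip(1.0 - p_i - p_a, 0.0, 1.0)  # P(node is susceptible)
        pressure = A @ p_i                        # expected infected neighbors
        dp_i = (beta * p_s + beta_a * p_a) * pressure - delta * p_i
        dp_a = kappa * p_s * pressure - beta_a * p_a * pressure
        p_i = np.clip(p_i + dt * dp_i, 0.0, 1.0)  # forward-Euler step
        p_a = np.clip(p_a + dt * dp_a, 0.0, 1.0)
    return p_i, p_a
```

On a complete graph this sketch reproduces the qualitative picture: infection dies out for small infection rates, and for a fixed large infection rate a higher alerting rate lowers the steady-state infection level.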
|
1107.2465
|
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant
Covariance Matrices
|
math.OC cs.IT cs.SY math.IT
|
This paper deals with maximum entropy completion of partially specified
block-circulant matrices. Since positive definite symmetric circulants happen
to be covariance matrices of stationary periodic processes, in particular of
stationary reciprocal processes, this problem has applications in signal
processing, in particular to image modeling. In fact it is strictly related to
maximum likelihood estimation of bilateral AR-type representations of acausal
signals subject to certain conditional independence constraints. The maximum
entropy completion problem for block-circulant matrices has recently been
solved by the authors, although leaving open the problem of an efficient
computation of the solution. In this paper, we provide an efficient algorithm
for computing its solution which compares very favourably with existing
algorithms designed for positive definite matrix extension problems. The
proposed algorithm benefits from the analysis of the relationship between our
problem and the band-extension problem for block-Toeplitz matrices also
developed in this paper.
|
1107.2473
|
Network Extreme Eigenvalue - from Multimodal to Scale-free Network
|
physics.soc-ph cs.SI
|
The extreme eigenvalues of adjacency matrices are important indicators on the
influences of topological structures to collective dynamical behavior of
complex networks. Recent findings on the ensemble averageability of the extreme
eigenvalue further confirm its relevance to the study of network
dynamics. Here we determine the ensemble average of the extreme eigenvalue and
characterize the deviation across the ensemble through the discrete form of
random scale-free network. Remarkably, the analytical approximation derived
from the discrete form shows significant improvement over the previous results.
This has also led us to the same conclusion as [Phys. Rev. Lett. 98, 248701
(2007)] that deviation in the reduced extreme eigenvalues vanishes as the
network size grows.
|
1107.2487
|
Provably Safe and Robust Learning-Based Model Predictive Control
|
math.OC cs.LG cs.SY math.ST stat.TH
|
Controller design faces a trade-off between robustness and performance, and
the reliability of linear controllers has caused many practitioners to focus on
the former. However, there is renewed interest in improving system performance
to deal with growing energy constraints. This paper describes a learning-based
model predictive control (LBMPC) scheme that provides deterministic guarantees
on robustness, while statistical identification tools are used to identify
richer models of the system in order to improve performance; the benefits of
this framework are that it handles state and input constraints, optimizes
system performance with respect to a cost function, and can be designed to use
a wide variety of parametric or nonparametric statistical tools. The main
insight of LBMPC is that safety and performance can be decoupled under
reasonable conditions in an optimization framework by maintaining two models of
the system. The first is an approximate model with bounds on its uncertainty,
and the second model is updated by statistical methods. LBMPC improves
performance by choosing inputs that minimize a cost subject to the learned
dynamics, and it ensures safety and robustness by checking whether these same
inputs keep the approximate model stable when it is subject to uncertainty.
Furthermore, we show that if the system is sufficiently excited, then the LBMPC
control action probabilistically converges to that of an MPC computed using the
true dynamics.
|
1107.2490
|
Towards Optimal One Pass Large Scale Learning with Averaged Stochastic
Gradient Descent
|
cs.LG
|
For large scale learning problems, it is desirable if we can obtain the
optimal model parameters by going through the data in only one pass. Polyak and
Juditsky (1992) showed that asymptotically the test performance of the simple
average of the parameters obtained by stochastic gradient descent (SGD) is as
good as that of the parameters which minimize the empirical cost. However, to
our knowledge, despite its optimal asymptotic convergence rate, averaged SGD
(ASGD) has received little attention in recent research on large scale learning.
One possible reason is that it may take a prohibitively large number of
training samples for ASGD to reach its asymptotic region for most real
problems. In this paper, we present a finite sample analysis for the method of
Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a
huge number of samples for ASGD to reach its asymptotic region when the
learning rate is improperly chosen. More importantly, based on our analysis, we
propose a simple way to properly set the learning rate so that ASGD reaches its
asymptotic region with a reasonable amount of data. We compare ASGD using our
proposed learning rate with other well known algorithms for training large
scale linear classifiers. The experiments clearly show the superiority of ASGD.
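A minimal sketch of one-pass averaged SGD on least squares follows; the step-size schedule eta0 / t**power is an illustrative choice, not the learning rate proposed in the paper:

```python
import numpy as np

def asgd_linear_regression(X, y, eta0=0.05, power=0.6, seed=0):
    """One pass of stochastic gradient descent on squared loss, returning
    the Polyak-Juditsky average of the iterates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)        # current SGD iterate
    w_bar = np.zeros(d)    # running average of all iterates
    for t, i in enumerate(rng.permutation(n), start=1):
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x.w - y)^2
        w = w - eta0 / t ** power * grad  # plain SGD step
        w_bar += (w - w_bar) / t          # incremental average of iterates
    return w_bar
```

On synthetic data the averaged iterate recovers the regression weights in a single pass, which is the behavior the finite-sample analysis above is concerned with.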
|
1107.2499
|
Improving Energy Efficiency Through Multimode Transmission in the
Downlink MIMO Systems
|
cs.IT math.IT
|
Adaptively adjusting system parameters including bandwidth, transmit power
and mode to maximize the "Bits per-Joule" energy efficiency (BPJ-EE) in the
downlink MIMO systems with imperfect channel state information at the
transmitter (CSIT) is considered in this paper. By mode we refer to choice of
transmission schemes i.e. singular value decomposition (SVD) or block
diagonalization (BD), active transmit/receive antenna number and active user
number. We first derive the optimal bandwidth and transmit power for each
dedicated mode. During the derivation, accurate capacity estimation strategies
are proposed to cope with the capacity prediction problem caused by imperfect
CSIT.
Then, an ergodic capacity based mode switching strategy is proposed to further
improve the BPJ-EE, which provides insights on the preferred mode under given
scenarios. Mode switching balances the different power components, exploits the
tradeoff between the multiplexing gain and the inter-user interference caused
by imperfect CSIT, and improves the BPJ-EE significantly.
|
1107.2526
|
Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for
Non-Convex Optimization
|
math.OC cs.DC cs.SY
|
We introduce a new framework for the convergence analysis of a class of
distributed constrained non-convex optimization algorithms in multi-agent
systems. The aim is to search for local minimizers of a non-convex objective
function which is supposed to be a sum of local utility functions of the
agents. The algorithm under study consists of two steps: a local stochastic
gradient descent at each agent and a gossip step that drives the network of
agents to a consensus. Under the assumption of decreasing stepsize, it is
proved that consensus is asymptotically achieved in the network and that the
algorithm converges to the set of Karush-Kuhn-Tucker points. As an important
feature, the algorithm does not require the double-stochasticity of the gossip
matrices. It is in particular suitable for use in a natural broadcast scenario
for which no feedback messages between agents are required. It is proved that
our result also holds if the number of communications in the network per unit
of time vanishes at moderate speed as time increases, allowing for potential
savings of the network's energy. Applications to power allocation in wireless
ad-hoc networks are discussed. Finally, we provide numerical results which
sustain our claims.
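The two-step scheme described above (a local gradient step, then a gossip step) can be sketched as follows; the gossip matrix here is row-stochastic but deliberately not doubly stochastic, matching the relaxed assumption, though the concrete matrix, objectives, and stepsizes are illustrative:

```python
import numpy as np

def gossip_sgd(grads, W, x0, steps=300, eta0=0.5):
    """Each agent takes a local gradient step with decreasing stepsize
    eta0/t, then the network applies a gossip step x <- W x, where W is
    row-stochastic (not necessarily doubly stochastic)."""
    x = np.array(x0, dtype=float)            # one state per agent
    for t in range(1, steps + 1):
        eta = eta0 / t                       # decreasing stepsize
        for i, g in enumerate(grads):
            x[i] = x[i] - eta * g(x[i])      # local gradient step
        x = W @ x                            # gossip/consensus step
    return x
```

With two quadratic objectives the agents reach consensus on a point between the two local minimizers, even though the gossip matrix is only row-stochastic.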
|
1107.2527
|
On the Sensitivity of Continuous-Time Noncoherent Fading Channel
Capacity
|
cs.IT math.IT
|
The noncoherent capacity of stationary discrete-time fading channels is known
to be very sensitive to the fine details of the channel model. More
specifically, the measure of the support of the fading-process power spectral
density (PSD) determines if noncoherent capacity grows logarithmically in SNR
or slower than logarithmically. Such a result is unsatisfactory from an
engineering point of view, as the support of the PSD cannot be determined
through measurements. The aim of this paper is to assess whether, for general
continuous-time Rayleigh-fading channels, this sensitivity has a noticeable
impact on capacity at SNR values of practical interest.
To this end, we consider the general class of band-limited continuous-time
Rayleigh-fading channels that satisfy the wide-sense stationary
uncorrelated-scattering (WSSUS) assumption and are, in addition, underspread.
We show that, for all SNR values of practical interest, the noncoherent
capacity of every channel in this class is close to the capacity of an AWGN
channel with the same SNR and bandwidth, independently of the measure of the
support of the scattering function (the two-dimensional channel PSD). Our
result is based on a lower bound on noncoherent capacity, which is built on a
discretization of the channel input-output relation induced by projecting onto
Weyl-Heisenberg (WH) sets. This approach is interesting in its own right as it
yields a mathematically tractable way of dealing with the mutual information
between certain continuous-time random signals.
|
1107.2553
|
Learning Hypergraph Labeling for Feature Matching
|
cs.CV
|
This study poses the feature correspondence problem as a hypergraph node
labeling problem. Candidate feature matches and their subsets (usually of size
larger than two) are considered to be the nodes and hyperedges of a hypergraph.
A hypergraph labeling algorithm, which models the subset-wise interaction by an
undirected graphical model, is applied to label the nodes (feature
correspondences) as correct or incorrect. We describe a method to learn the
cost function of this labeling algorithm from labeled examples using a
graphical model training algorithm. The proposed feature matching algorithm
differs from most existing learning-based point matching methods in the form of
the objective function, the cost function to be learned, and the optimization
method applied to minimize it. The results on standard
datasets demonstrate how learning over a hypergraph improves the matching
performance over existing algorithms, notably one that also uses higher order
information without learning.
|
1107.2647
|
Collective emotions online and their influence on community life
|
physics.soc-ph cs.SI
|
E-communities, social groups interacting online, have recently become an
object of interdisciplinary research. As with face-to-face meetings, Internet
exchanges may not only include factual information but also emotional
information - how participants feel about the subject discussed or other group
members. Emotions are known to be important in affecting interaction partners
in offline communication in many ways. Could emotions in Internet exchanges
affect others and systematically influence quantitative and qualitative aspects
of the trajectory of e-communities? The development of automatic sentiment
analysis has made large scale emotion detection and analysis possible using
text messages collected from the web. It is not clear if emotions in
e-communities primarily derive from individual group members' personalities or
if they result from intra-group interactions, and whether they influence group
activities. We show the collective character of affective phenomena on a large
scale as observed in 4 million posts downloaded from Blogs, Digg and BBC
forums. To test whether the emotions of a community member may influence the
emotions of others, posts were grouped into clusters of messages with similar
emotional valences. The frequency of long clusters was much higher than it
would be if emotions occurred at random. Distributions for cluster lengths can
be explained by preferential processes because conditional probabilities for
consecutive messages grow as a power law with cluster length. For BBC forum
threads, average discussion lengths were higher for larger values of absolute
average emotional valence in the first ten comments and the average amount of
emotion in messages fell during discussions. Our results prove that collective
emotional states can be created and modulated via Internet communication and
that emotional expressiveness is the fuel that sustains some e-communities.
|
1107.2677
|
On Decoding Irregular Tanner Codes with Local-Optimality Guarantees
|
cs.IT math.CO math.IT
|
We consider decoding of binary Tanner codes using message-passing iterative
decoding and linear programming (LP) decoding in MBIOS channels. We present new
certificates that are based on a combinatorial characterization for
local-optimality of a codeword in irregular Tanner codes with respect to any
MBIOS channel. This characterization is based on a conical combination of
normalized weighted subtrees in the computation trees of the Tanner graph.
These subtrees may have any finite height h (even equal or greater than half of
the girth of the Tanner graph). In addition, the degrees of local-code nodes in
these subtrees are not restricted to two. We prove that local optimality in
this new characterization implies maximum-likelihood (ML) optimality and LP
optimality, and show that a certificate can be computed efficiently.
We also present a new message-passing iterative decoding algorithm, called
normalized weighted min-sum (NWMS). NWMS decoding is a BP-type algorithm that
applies to any irregular binary Tanner code with single parity-check local
codes. We prove that if a locally-optimal codeword with respect to height
parameter h exists (whereby notably h is not limited by the girth of the Tanner
graph), then NWMS decoding finds this codeword in h iterations. The decoding
guarantee of the NWMS decoding algorithm applies whenever there exists a
locally optimal codeword. Because local optimality of a codeword implies that
it is the unique ML codeword, the decoding guarantee also provides an ML
certificate for this codeword.
Finally, we apply the new local optimality characterization to regular Tanner
codes, and prove lower bounds on the noise thresholds of LP decoding in MBIOS
channels. When the noise is below these lower bounds, the probability that LP
decoding fails decays doubly exponentially in the girth of the Tanner graph.
|
1107.2681
|
Coordinate-invariant incremental Lyapunov functions
|
math.OC cs.SY math.DS
|
In this note, we propose coordinate-invariant notions of incremental Lyapunov
functions and provide characterizations of incremental stability in terms of
existence of the proposed Lyapunov functions.
|
1107.2691
|
On the Weaknesses of Correlation Measures used for Search Engines'
Results (Unsupervised Comparison of Search Engine Rankings)
|
stat.CO cs.IR
|
The correlation of the result lists provided by search engines is fundamental
and has deep and multidisciplinary ramifications. Here, we present automatic
and unsupervised methods to assess whether or not search engines provide
results that are comparable or correlated. We have two main contributions:
First, we provide evidence that for more than 80% of the input queries -
independently of their frequency - the two major search engines share only
three or fewer URLs in their search results, indicating an increasing
divergence. In this scenario of divergence, we show that even the most robust
list-based comparison measures are useless to apply; that is, too few common
items yield no statistical confidence. Second, to overcome this problem, we
propose the first content-based measures - i.e., direct comparison of the
contents of the search results; these measures are based on the Jaccard ratio
and distribution similarity measures (CDF measures). We show that the two are
orthogonal to each other (i.e., Jaccard and distribution) and extend the
discriminative power of list-based measures. Our approach stems from the real
need of comparing search-engine results; it is automatic from query selection
to final evaluation, it applies to any geographical market, and it is thus
designed to scale and to serve as a necessary first filter of query selection
for supervised methods.
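One of the two content-based measures, the Jaccard ratio, reduces to a set comparison of the retrieved contents; the word-set sketch below is a simplification of whatever tokenization the paper actually uses:

```python
def jaccard_similarity(text_a, text_b):
    """Content-based comparison of two search results via the Jaccard
    ratio over their word sets (whitespace tokenization is a
    simplification)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 1.0   # two empty contents count as identical
    return len(a & b) / len(a | b)
```

Unlike list overlap, this measure remains informative even when two engines return disjoint URL lists whose pages cover similar content.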
|
1107.2693
|
A Fuzzy View on k-Means Based Signal Quantization with Application in
Iris Segmentation
|
cs.CV
|
This paper shows that the k-means quantization of a signal can be interpreted
both as a crisp indicator function and as a fuzzy membership assignment
describing fuzzy clusters and fuzzy boundaries. Combined crisp and fuzzy
indicator functions are defined here as natural generalizations of the ordinary
crisp and fuzzy indicator functions, respectively. An application to iris
segmentation is presented together with a demo program.
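The dual reading of k-means quantization can be sketched for a 1-D signal: the crisp indicator is the nearest-centroid label, while a fuzzy membership can be derived from inverse distances to the centroids. The fuzzifier m = 2 and the inverse-distance weighting (fuzzy c-means style) are illustrative choices, not the paper's exact construction:

```python
import numpy as np

def kmeans_quantize(signal, k=2, iters=50, m=2.0):
    """k-means quantization of a 1-D signal, returned both as a crisp
    indicator (nearest-centroid labels) and as fuzzy memberships built
    from inverse distances to the centroids."""
    centers = np.linspace(signal.min(), signal.max(), k)
    for _ in range(iters):                       # Lloyd iterations
        labels = np.argmin(np.abs(signal[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = signal[labels == j].mean()
    labels = np.argmin(np.abs(signal[:, None] - centers[None, :]), axis=1)
    d = np.abs(signal[:, None] - centers[None, :]) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))                # fuzzy c-means style weights
    memberships = inv / inv.sum(axis=1, keepdims=True)
    return labels, memberships, centers
```

Samples near a centroid get membership close to 1 for that cluster (the crisp view), while samples near a boundary split their membership (the fuzzy view).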
|
1107.2696
|
Exploring New Directions in Iris Recognition
|
cs.CV
|
A new approach in iris recognition based on Circular Fuzzy Iris Segmentation
(CFIS) and Gabor Analytic Iris Texture Binary Encoder (GAITBE) is proposed and
tested here. The CFIS procedure is designed to guarantee that similar iris
segments will be obtained for similar eye images, despite the fact that the
degree of occlusion may vary from one image to another. Its result is a
circular iris ring (concentric with the pupil) which approximates the actual
iris. GAITBE achieves better encoding of statistical independence between the
iris codes extracted from different irides by using the Hilbert Transform.
Irides from University
of Bath Iris Database are binary encoded on two different lengths (768 / 192
bytes) and tested in both single-enrollment and multi-enrollment identification
scenarios. All cases illustrate the capacity of the newly proposed methodology
to narrow down the distribution of inter-class matching scores, and
consequently, to guarantee a steeper descent of the False Accept Rate.
|
1107.2699
|
Linear Latent Force Models using Gaussian Processes
|
stat.ML cs.AI
|
Purely data driven approaches for machine learning present difficulties when
data is scarce relative to the complexity of the model or when the model is
forced to extrapolate. On the other hand, purely mechanistic approaches need to
identify and specify all the interactions in the problem at hand (which may not
be feasible) and still leave the issue of how to parameterize the system. In
this paper, we present a hybrid approach using Gaussian processes and
differential equations to combine data driven modelling with a physical model
of the system. We show how different, physically-inspired, kernel functions can
be developed through sensible, simple, mechanistic assumptions about the
underlying system. The versatility of our approach is illustrated with three
case studies from motion capture, computational biology and geostatistics.
|
1107.2700
|
Learning $k$-Modal Distributions via Testing
|
cs.DS cs.LG math.ST stat.TH
|
A $k$-modal probability distribution over the discrete domain $\{1,...,n\}$
is one whose histogram has at most $k$ "peaks" and "valleys." Such
distributions are natural generalizations of monotone ($k=0$) and unimodal
($k=1$) probability distributions, which have been intensively studied in
probability theory and statistics.
In this paper we consider the problem of \emph{learning} (i.e., performing
density estimation of) an unknown $k$-modal distribution with respect to the
$L_1$ distance. The learning algorithm is given access to independent samples
drawn from an unknown $k$-modal distribution $p$, and it must output a
hypothesis distribution $\widehat{p}$ such that with high probability the total
variation distance between $p$ and $\widehat{p}$ is at most $\epsilon.$ Our
main goal is to obtain \emph{computationally efficient} algorithms for this
problem that use (close to) an information-theoretically optimal number of
samples.
We give an efficient algorithm for this problem that runs in time
$\mathrm{poly}(k,\log(n),1/\epsilon)$. For $k \leq \tilde{O}(\log n)$, the
number of samples used by our algorithm is very close (within an
$\tilde{O}(\log(1/\epsilon))$ factor) to being information-theoretically
optimal. Prior to this work computationally efficient algorithms were known
only for the cases $k=0,1$ \cite{Birge:87b,Birge:97}.
A novel feature of our approach is that our learning algorithm crucially uses
a new algorithm for \emph{property testing of probability distributions} as a
key subroutine. The learning algorithm uses the property tester to efficiently
decompose the $k$-modal distribution into $k$ (near-)monotone distributions,
which are easier to learn.
|
1107.2702
|
Learning Poisson Binomial Distributions
|
cs.DS cs.LG math.ST stat.TH
|
We consider a basic problem in unsupervised learning: learning an unknown
\emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD)
over $\{0,1,\dots,n\}$ is the distribution of a sum of $n$ independent
Bernoulli random variables which may have arbitrary, potentially non-equal,
expectations. These distributions were first studied by S. Poisson in 1837
\cite{Poisson:37} and are a natural $n$-parameter generalization of the
familiar Binomial Distribution. Surprisingly, prior to our work this basic
learning problem was poorly understood, and known results for it were far from
optimal.
We essentially settle the complexity of the learning problem for this basic
class of distributions. As our first main result we give a highly efficient
algorithm which learns to $\eps$-accuracy (with respect to the total variation
distance) using $\tilde{O}(1/\eps^3)$ samples \emph{independent of $n$}. The
running time of the algorithm is \emph{quasilinear} in the size of its input
data, i.e., $\tilde{O}(\log(n)/\eps^3)$ bit-operations. (Observe that each draw
from the distribution is a $\log(n)$-bit string.) Our second main result is a
{\em proper} learning algorithm that learns to $\eps$-accuracy using
$\tilde{O}(1/\eps^2)$ samples, and runs in time $(1/\eps)^{\poly (\log
(1/\eps))} \cdot \log n$. This is nearly optimal, since any algorithm {for this
problem} must use $\Omega(1/\eps^2)$ samples. We also give positive and
negative results for some extensions of this learning problem to weighted sums
of independent Bernoulli random variables.
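The distribution being learned is easy to materialize exactly for small n: the pmf of a sum of independent Bernoulli(p_i) variables is the iterated convolution of the individual two-point pmfs, a standard observation rather than the paper's learning algorithm:

```python
import numpy as np

def pbd_pmf(ps):
    """Exact pmf of a Poisson Binomial Distribution, i.e. of a sum of
    independent Bernoulli(p_i) variables, via iterated convolution
    (an O(n^2) dynamic program)."""
    pmf = np.array([1.0])                        # pmf of the empty sum
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])     # add one Bernoulli(p)
    return pmf
```

With equal p_i this reduces to the familiar Binomial pmf, which illustrates why PBDs are an n-parameter generalization of the Binomial Distribution.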
|
1107.2723
|
Topographic Feature Extraction for Bengali and Hindi Character Images
|
cs.CV
|
Feature selection and extraction play an important role in different
classification based problems such as face recognition, signature verification,
optical character recognition (OCR) etc. The performance of OCR highly depends
on the proper selection and extraction of feature set. In this paper, we
present novel features based on the topography of a character as visible from
different viewing directions on a 2D plane. By topography of a character we
mean the structural features of the strokes and their spatial relations. In
this work we develop topographic features of strokes visible with respect to
views from different directions (e.g. North, South, East, and West). We
consider three types of topographic features: closed region, convexity of
strokes, and straight line strokes. These features are represented as a
shape-based graph which acts as an invariant feature set for discriminating
very similar characters efficiently. We have tested the proposed method on
printed and handwritten Bengali and Hindi character images. Initial results
demonstrate the efficacy of our approach.
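The directional-view idea can be sketched in a few lines (a toy illustration, not the authors' implementation): for each viewing direction, record how far one travels along each scan line of a binary character image before hitting a stroke pixel.

```python
import numpy as np

def view_profile(img, direction):
    """Depth of the first stroke pixel along each scan line from one side.

    img: 2D binary array (1 = stroke pixel); direction: 'N', 'S', 'E' or 'W'.
    Returns, per column (N/S) or per row (E/W), the distance travelled before
    hitting the character, or -1 if the scan line contains no stroke.
    """
    # Orient the image so that every view reduces to "scan from the top".
    oriented = {
        'N': img,
        'S': img[::-1, :],
        'W': img.T,
        'E': img.T[::-1, :],
    }[direction]
    hits = oriented.argmax(axis=0)       # index of first 1 in each column
    empty = oriented.max(axis=0) == 0    # scan lines with no stroke at all
    return np.where(empty, -1, hits)

# Toy "character": a small L-shaped stroke.
char = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
])
for d in 'NSEW':
    print(d, view_profile(char, d))
```

Structural features such as closed regions and convexity would be derived from how these profiles change along the scan lines; that part is beyond this sketch.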
|
1107.2727
|
Proposed Quality Evaluation Framework to Incorporate Quality Aspects in
Web Warehouse Creation
|
cs.IR
|
A web warehouse is a read-only repository maintained on the web to effectively
handle relevant data. It is a system comprising various subsystems and
processes, and it supports organizations in decision making. The quality of the
data stored in a web warehouse can affect the quality of the decisions made.
For valuable decision making, it is necessary to consider quality aspects in
the design and modelling of a web warehouse. Data quality is therefore one of
the most important issues in a web warehousing system, and quality must be
incorporated at the different stages of web warehousing system development.
Enhancing existing data warehousing systems to increase data quality results in
the storage of high-quality data in the repository and in more efficient
decision making. In this paper, a Quality Evaluation Framework is proposed,
keeping in view the quality dimensions associated with the different phases of
a web warehouse. Furthermore, the proposed framework is validated empirically
with the help of quantitative analysis.
|
1107.2757
|
Subset sum phase transitions and data compression
|
cs.IT cond-mat.stat-mech math.IT
|
We propose a rigorous analysis approach for the subset sum problem in the
context of lossless data compression, where the phase transition of the subset
sum problem is directly related to the passage between ambiguous and
non-ambiguous decompression, for a compression scheme that is based on
specifying the sequence composition. The proposed analysis lends itself to
straightforward extensions in several directions of interest, including
non-binary alphabets, incorporation of side information at the decoder
(Slepian-Wolf coding), and coding schemes based on multiple subset sums. It is
also demonstrated that the proposed technique can be used to analyze the
critical behavior in a more involved situation where the sequence composition
is not specified by the encoder.
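A compression scheme "based on specifying the sequence composition" can be sketched with classical enumerative coding (a standard construction, not the paper's analysis): a binary sequence is encoded by its length, its weight, and its rank among all sequences of that composition, and decoding inverts the ranking.

```python
from math import comb

def encode(bits):
    """Enumerative code: a binary string -> (length, weight, rank)."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            # Count the lower-ranked sequences that put a 0 here instead.
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def decode(n, k, rank):
    """Invert encode(): reconstruct the unique string with this rank."""
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if rank >= c:
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits

seq = [0, 1, 1, 0, 1, 0, 0, 1]
print(encode(seq))
assert decode(*encode(seq)) == seq
```

The rank costs about log2 C(n, k) bits; decompression is unambiguous here precisely because each (composition, rank) pair identifies one sequence, which is the regime the phase-transition analysis delineates.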
|
1107.2781
|
Face Recognition using Curvelet Transform
|
cs.CV
|
Face recognition has been studied extensively for more than 20 years now.
Since the beginning of the 1990s the subject has become a major research
topic. This technology is used in many important real-world applications, such
as video surveillance, smart cards, database security, and internet and
intranet access. This report reviews two recent algorithms for face
recognition which take advantage of a relatively new multiscale geometric
analysis tool, the Curvelet transform, for facial processing and feature
extraction. This transform proves to be efficient especially due to its good
ability to detect the curves and lines that characterize the human face. An
algorithm based on the two algorithms mentioned above is proposed, and its
performance is evaluated on three face databases: AT&T (ORL), Essex Grimace
and Georgia-Tech. k-nearest neighbour (k-NN) and Support Vector Machine (SVM)
classifiers are used, along with Principal Component Analysis (PCA) for
dimensionality reduction. This algorithm shows good results, and it even
outperforms other algorithms in some cases.
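The PCA-plus-k-NN back end of such a pipeline can be sketched in pure numpy (a toy illustration: synthetic vectors stand in for the curvelet coefficients, whose computation is outside this snippet; the data and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: two "subjects", each a noisy version of a base pattern.
# In the reviewed algorithms these vectors would be curvelet coefficients.
d, per_class = 64, 10
bases = rng.normal(size=(2, d))
X = np.vstack([b + 0.3 * rng.normal(size=(per_class, d)) for b in bases])
y = np.repeat([0, 1], per_class)

# PCA for dimensionality reduction (top-k principal components via SVD).
k = 5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T

def knn_predict(train_Z, train_y, query, n_neighbors=3):
    """Classify by majority vote among the nearest training points."""
    dists = np.linalg.norm(train_Z - query, axis=1)
    nearest = train_y[np.argsort(dists)[:n_neighbors]]
    return np.bincount(nearest).argmax()

# Leave-one-out sanity check.
correct = sum(
    knn_predict(np.delete(Z, i, 0), np.delete(y, i), Z[i]) == y[i]
    for i in range(len(y))
)
print(f"LOO accuracy: {correct / len(y):.2f}")
```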
|
1107.2782
|
The Chan-Vese Algorithm
|
cs.CV math.AP
|
Segmentation is the process of partitioning a digital image into multiple
segments (sets of pixels). Common segmentation tasks include segmenting
written text or segmenting tumors from healthy brain tissue in an MRI image.
The Chan-Vese model for active contours is a powerful and flexible method
able to segment many types of images, including some that would be quite
difficult to segment by "classical" means, i.e., using thresholding or
gradient-based methods. The model is based on the Mumford-Shah functional for
segmentation and is widely used in the medical imaging field, especially for
segmentation of the brain, heart and trachea. It is formulated as an energy
minimization problem, which can be recast in the level set formulation,
leading to an easier way to solve the problem. In this project, the model will
be presented (there is an extension to color (vector-valued) images, but it
will not be considered here), and Matlab code that implements it will be
introduced.
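The core region-competition step of the level set iteration can be sketched as follows (a drastically simplified toy version, not the project's Matlab code: the curvature/length and area terms are omitted, and the step size is fixed):

```python
import numpy as np

def chan_vese_sketch(img, iters=100, dt=0.5):
    """Region-competition core of Chan-Vese (length/area terms omitted).

    phi > 0 marks the "inside" region; each step moves phi so that every
    pixel joins the region whose mean intensity it matches better.
    """
    phi = img - img.mean()            # crude initial level set
    for _ in range(iters):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # Data-fidelity force from the Mumford-Shah fitting terms.
        phi = phi + dt * (-(img - c1) ** 2 + (img - c2) ** 2)
    return phi > 0

# Synthetic test image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
mask = chan_vese_sketch(img)
print("recovered pixels:", mask.sum())
```

On noisy images the omitted curvature term is what regularizes the contour; this stripped-down version only illustrates how the two region means compete.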
|
1107.2788
|
Diverse Consequences of Algorithmic Probability
|
cs.IT cs.AI cs.CY math.IT
|
We reminisce and discuss applications of algorithmic probability to a wide
range of problems in artificial intelligence, philosophy and technological
society. We propose that Solomonoff has effectively axiomatized the field of
artificial intelligence, therefore establishing it as a rigorous scientific
discipline. We also relate this discussion to our own work in incremental
machine learning and the philosophy of complexity.
|
1107.2794
|
Enhancing synchronization in growing networks
|
physics.soc-ph cs.SI
|
Most real systems are growing. In order to model the evolution of real
systems, many growing network models have been proposed to reproduce some
specific topology properties. As the structure strongly influences the network
function, designing the function-aimed growing strategy is also a significant
task with many potential applications. In this letter, we focus on
synchronization in the growing networks. In order to enhance the
synchronizability during the network evolution, we propose the Spectral-Based
Growing (SBG) strategy. Based on the linear stability analysis of
synchronization, we show that our growing mechanism yields better
synchronizability than the existing topology-aimed growing strategies in both
artificial and real-world networks. We also observe that there is an optimal
degree for newly added nodes, meaning that adding nodes with neither too high
nor too low a degree best enhances synchronizability. Furthermore, several
topology measurements are considered in the resulting networks. The results
show that the degree and node betweenness centrality distributions obtained
from the SBG strategy are more homogeneous than those from other growing
strategies. Our work highlights the importance of
the function-aimed growth of the networks and deepens our understanding of it.
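A spectral growing strategy of this flavour can be sketched with a toy greedy rule (our simplification, not the authors' exact procedure): score synchronizability by the Laplacian eigenratio λ_N/λ_2 from the master-stability framework (smaller is better), and attach each new node to the candidate target set that minimizes it.

```python
import itertools
import numpy as np

def eigenratio(A):
    """lambda_N / lambda_2 of the Laplacian; smaller = more synchronizable."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)
    return lam[-1] / lam[1]

# Seed network: a triangle.
A = np.ones((3, 3)) - np.eye(3)

m = 2                                   # edges per newly added node
for _ in range(7):
    n = A.shape[0]
    best = None
    # Exhaustively try every m-subset of existing nodes as attachment targets.
    for targets in itertools.combinations(range(n), m):
        B = np.zeros((n + 1, n + 1))
        B[:n, :n] = A
        for t in targets:
            B[n, t] = B[t, n] = 1
        r = eigenratio(B)
        if best is None or r < best[0]:
            best = (r, B)
    A = best[1]

print(f"final size: {A.shape[0]}, eigenratio: {eigenratio(A):.3f}")
```

Exhaustive candidate evaluation is only feasible at toy scale; any practical variant would restrict the candidate sets.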
|
1107.2807
|
Modelling Distributed Shape Priors by Gibbs Random Fields of Second
Order
|
cs.CV cs.LG
|
We analyse the potential of Gibbs Random Fields for shape prior modelling. We
show that the expressive power of second order GRFs is already sufficient to
express simple shapes and spatial relations between them simultaneously. This
allows us to model and recognise complex shapes as spatial compositions of simpler
parts.
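What "second order" means here can be sketched with a toy energy (our illustrative Ising-style example, not the paper's model): a Gibbs field whose energy decomposes into unary terms plus pairwise terms over neighbouring pixels.

```python
import numpy as np

def grf_energy(labels, unary, beta=1.0):
    """Second-order Gibbs energy: unary terms + 4-neighbour pairwise terms.

    labels: 2D array of {0,1}; unary[h, w, l] = cost of label l at (h, w);
    the pairwise (second-order) term penalises disagreeing neighbours.
    """
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    e += beta * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal pairs
    e += beta * (labels[1:, :] != labels[:-1, :]).sum()   # vertical pairs
    return float(e)

labels = np.array([[0, 0], [0, 1]])
unary = np.zeros((2, 2, 2))
unary[1, 1, 1] = 0.5          # labelling pixel (1,1) as 1 costs 0.5
print(grf_energy(labels, unary, beta=1.0))
```

Shape priors then correspond to choices of these pairwise potentials that prefer particular spatial arrangements of the parts.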
|
1107.2822
|
A Survey on how Description Logic Ontologies Benefit from Formal Concept
Analysis
|
cs.LO cs.AI
|
Although the notion of a concept as a collection of objects sharing certain
properties, and the notion of a conceptual hierarchy are fundamental to both
Formal Concept Analysis and Description Logics, the ways concepts are described
and obtained differ significantly between these two research areas. Despite
these differences, there have been several attempts to bridge the gap between
these two formalisms, and attempts to apply methods from one field in the
other. The present work aims to give an overview of the research done on
combining Description Logics and Formal Concept Analysis.
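The FCA side of this picture is easy to make concrete (a self-contained toy context with the standard derivation operators; the Description Logic integration is beyond this snippet): a formal concept is a pair (A, B) of an object set and an attribute set with A' = B and B' = A.

```python
from itertools import combinations

# A toy formal context: objects -> attributes they possess.
context = {
    "dove":    {"flies", "feathered"},
    "ostrich": {"feathered"},
    "bat":     {"flies", "furry"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(A):
    """Attributes shared by every object in A (the derivation A')."""
    return set.intersection(*(context[g] for g in A)) if A else set(attributes)

def extent(B):
    """Objects possessing every attribute in B (the derivation B')."""
    return {g for g in objects if B <= context[g]}

# Brute-force enumeration: close every object subset to a concept.
concepts = set()
for r in range(len(objects) + 1):
    for A in combinations(sorted(objects), r):
        B = intent(set(A))
        concepts.add((frozenset(extent(B)), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "<->", sorted(B))
```

Ordered by inclusion of extents, these concepts form the conceptual hierarchy (the concept lattice) that both fields take as fundamental.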
|
1107.2859
|
Label-Specific Training Set Construction from Web Resource for Image
Annotation
|
cs.MM cs.CV
|
Recently many research efforts have been devoted to image annotation by
leveraging on the associated tags/keywords of web images as training labels. A
key issue to resolve is the relatively low accuracy of the tags. In this paper,
we propose a novel semi-automatic framework to construct a more accurate and
effective training set from these web media resources for each label that we
want to learn. Experiments conducted on a real-world dataset demonstrate that
the constructed training set can result in higher accuracy for image
annotation.
|
1107.2867
|
A Two Stage Selective Averaging LDPC Decoding
|
cs.IT math.IT
|
Low density parity-check (LDPC) codes are a class of linear block codes that
are decoded by running belief propagation (BP) algorithm or log-likelihood
ratio belief propagation (LLR-BP) over the factor graph of the code. One of the
disadvantages of LDPC codes is the onset of an error floor at high values of
signal to noise ratio caused by trapping sets. In this paper, we propose a
two-stage decoder to deal with different types of trapping sets. Oscillating
trapping sets are taken care of by the first stage of the decoder, and
elementary trapping sets are handled by the second stage. Simulation results
on the regular PEG (504,252,3,6) code show that the proposed two-stage decoder
performs significantly better than the standard decoder.
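For orientation, here is a sketch of a plain hard-decision bit-flipping decoder, the kind of baseline whose trapping-set failures motivate multi-stage designs (this is not the proposed selective-averaging decoder, and a toy Hamming code stands in for an LDPC code):

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit-flipping decoding for a binary parity-check matrix H.

    Repeatedly flips the bit participating in the most unsatisfied parity
    checks until the syndrome is zero or the iteration budget runs out.
    """
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                 # valid codeword reached
        votes = syndrome @ H               # per-bit unsatisfied-check count
        r[votes.argmax()] ^= 1
    return r, False

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
codeword = np.zeros(7, dtype=int)          # the all-zero word is a codeword
received = codeword.copy()
received[4] ^= 1                           # inject a single bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```

Trapping sets are error patterns on which iterative decoders of this kind stall or oscillate without ever reaching a zero syndrome, which is the failure mode the two-stage scheme targets.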
|
1107.2875
|
A Hilbert Scheme in Computer Vision
|
math.AG cs.CV
|
Multiview geometry is the study of two-dimensional images of
three-dimensional scenes, a foundational subject in computer vision. We
determine a universal Groebner basis for the multiview ideal of n generic
cameras. As the cameras move, the multiview varieties vary in a family of
dimension 11n-15. This family is the distinguished component of a multigraded
Hilbert scheme with a unique Borel-fixed point. We present a combinatorial
study of ideals lying on that Hilbert scheme.
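The geometry behind the multiview ideal can be checked numerically for two cameras (a standard-textbook sketch with random camera entries, not part of the paper's Groebner basis computation): the fundamental matrix F = [e']_x P' P^+ makes the images of every world point satisfy the bilinear epipolar constraint, one of the generators of the two-view ideal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two generic 3x4 camera matrices.
P1, P2 = rng.normal(size=(2, 3, 4))

# Camera centre of P1: the null vector of P1.
_, _, Vt = np.linalg.svd(P1)
C = Vt[-1]

# Epipole in image 2 and the fundamental matrix F = [e2]_x P2 P1^+.
e2 = P2 @ C
e2_cross = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])
F = e2_cross @ P2 @ np.linalg.pinv(P1)

# The two images of any world point satisfy x2^T F x1 = 0.
X = np.append(rng.normal(size=3), 1.0)     # a random world point
x1, x2 = P1 @ X, P2 @ X
print(f"epipolar residual: {abs(x2 @ F @ x1):.2e}")
```

For n cameras the analogous multilinear constraints cut out the multiview variety whose family the paper studies.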
|
1107.2879
|
Squeeze-and-Breathe Evolutionary Monte Carlo Optimisation with Local
Search Acceleration and its application to parameter fitting
|
q-bio.QM cs.SY math.OC
|
Motivation: Estimating parameters from data is a key stage of the modelling
process, particularly in biological systems where many parameters need to be
estimated from sparse and noisy data sets. Over the years, a variety of
heuristics have been proposed to solve this complex optimisation problem, with
good results in some cases yet with limitations in the biological setting.
Results: In this work, we develop an algorithm for model parameter fitting
that combines ideas from evolutionary algorithms, sequential Monte Carlo and
direct search optimisation. Our method performs well even when the order of
magnitude and/or the range of the parameters is unknown. The method refines
iteratively a sequence of parameter distributions through local optimisation
combined with partial resampling from a historical prior defined over the
support of all previous iterations. We exemplify our method with biological
models using both simulated and real experimental data and estimate the
parameters efficiently even in the absence of a priori knowledge about the
parameters.
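The squeeze-and-breathe loop can be caricatured in a few lines (our toy simplification, not the paper's algorithm: a quadratic cost, accept-if-better Gaussian jitter as the "local search", and resampling from the union of all past populations as the historical prior):

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(theta):
    """Toy objective: squared distance to the 'true' parameters (1.5, -2.0)."""
    return np.sum((theta - np.array([1.5, -2.0])) ** 2, axis=-1)

pop = rng.uniform(-10, 10, size=(40, 2))   # initial parameter population
history = [pop]

for _ in range(30):
    # Squeeze: keep the best half, refine each by a cheap local search step.
    elite = pop[np.argsort(cost(pop))[: len(pop) // 2]]
    jitter = elite + 0.1 * rng.normal(size=elite.shape)
    elite = np.where(cost(jitter)[:, None] < cost(elite)[:, None],
                     jitter, elite)
    # Breathe: partial resampling from the support of all past iterations.
    prior = np.vstack(history)
    resampled = prior[rng.integers(len(prior), size=len(pop) - len(elite))]
    pop = np.vstack([elite, resampled])
    history.append(pop)

best = pop[cost(pop).argmin()]
print("best parameters:", np.round(best, 2))
```

Because the best member always survives the squeeze and can only be replaced by an improvement, the incumbent cost is monotone non-increasing, which is the property that makes the iteration robust when parameter ranges are unknown.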
|