| id | title | categories | abstract |
|---|---|---|---|
1210.0808
|
Dynamics in online social networks
|
physics.soc-ph cs.SI
|
An increasing number of today's social interactions occur using online
social media as communication channels. Some online social networks have become
extremely popular in the last decade. They differ among themselves in the
character of the service they provide to online users. For instance, Facebook
can be seen mainly as a platform for keeping in touch with close friends and
relatives, Twitter is used to propagate and receive news, LinkedIn facilitates
the maintenance of professional contacts, Flickr gathers amateurs and
professionals of photography, etc. Albeit different, all these online platforms
share an ingredient that pervades all their applications. There exists an
underlying social network that allows their users to keep in touch with each
other and helps to engage them in common activities or interactions leading to
a better fulfillment of the service's purposes. This is the reason why these
platforms share a good number of functionalities, e.g., personal communication
channels, broadcasted status updates, easy one-step information sharing, news
feeds exposing broadcasted content, etc. As a result, online social networks
are an interesting field in which to study online social behavior that seems
to be generic across the different online services. Since at the bottom of
these services lies a network of declared relations and the basic interactions in
these platforms tend to be pairwise, a natural methodology for studying these
systems is provided by network science. In this chapter we describe some of the
results of research studies on the structure, dynamics and social activity in
online social networks. We present them in the interdisciplinary context of
network science, sociological studies and computer science.
|
1210.0818
|
Multibiometric: Feature Level Fusion Using FKP Multi-Instance biometric
|
cs.CV
|
This paper proposes the use of multi-instance feature level fusion as a means
to improve the performance of Finger Knuckle Print (FKP) verification. A
log-Gabor filter is used to extract the local orientation information of the
image and to represent the FKP features. Experiments are performed using the
FKP database, which consists of 7,920 images. Results indicate that the
multi-instance verification approach achieves higher performance than any
single instance. The influence of feature level fusion under different fusion
rules on biometric performance is also demonstrated.
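The log-Gabor filter mentioned above has a standard radial frequency response; the following is a minimal sketch of it (the center frequency `f0` and bandwidth ratio `sigma_ratio` are illustrative assumptions, not parameters taken from the paper):

```python
import numpy as np

def log_gabor_radial(freqs, f0, sigma_ratio=0.55):
    """Radial response of a log-Gabor filter: a Gaussian on a
    log-frequency axis, with zero response at DC."""
    G = np.zeros_like(freqs, dtype=float)
    nz = freqs > 0                       # log-Gabor has no DC component
    G[nz] = np.exp(-np.log(freqs[nz] / f0) ** 2
                   / (2 * np.log(sigma_ratio) ** 2))
    return G

freqs = np.linspace(0.0, 0.5, 65)        # normalized frequency axis
G = log_gabor_radial(freqs, f0=0.1)
print(G[0], G.max())                     # DC response is 0; peak is near 1 at f close to f0
```

The zero DC response is what makes the filter well suited to extracting local orientation information independently of illumination offsets.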
|
1210.0822
|
Discrete geodesic calculus in the space of viscous fluidic objects
|
math.NA cs.CV
|
Based on a local approximation of the Riemannian distance on a manifold by a
computationally cheap dissimilarity measure, a time discrete geodesic calculus
is developed, and applications to shape space are explored. The dissimilarity
measure is derived from a deformation energy whose Hessian reproduces the
underlying Riemannian metric, and it is used to define length and energy of
discrete paths in shape space. The notion of discrete geodesics defined as
energy minimizing paths gives rise to a discrete logarithmic map, a variational
definition of a discrete exponential map, and a time discrete parallel
transport. This new concept is applied to a shape space in which shapes are
considered as boundary contours of physical objects consisting of viscous
material. The flexibility and computational efficiency of the approach are
demonstrated for topology preserving shape morphing, the representation of
paths in shape space via local shape variations as path generators, shape
extrapolation via discrete geodesic flow, and the transfer of geometric
features.
|
1210.0824
|
Distributed High Dimensional Information Theoretical Image Registration
via Random Projections
|
cs.IT cs.LG math.IT stat.ML
|
Information theoretical measures, such as entropy, mutual information, and
various divergences, exhibit robust characteristics in image registration
applications. However, the estimation of these quantities is computationally
intensive in high dimensions. On the other hand, consistent estimation from
pairwise distances of the sample points is possible, which suits random
projection (RP) based low dimensional embeddings. We adapt the RP technique to
this task by means of a simple ensemble method. To the best of our knowledge,
this is the first distributed, RP based information theoretical image
registration approach. The efficiency of the method is demonstrated through
numerical examples.
|
1210.0829
|
A Survey of Multibiometric Systems
|
cs.CV
|
Most biometric systems deployed in real-world applications are unimodal.
Unimodal biometric systems have to contend with a variety of problems such as
noise in sensed data, intra-class variations, inter-class similarities,
non-universality, and spoof attacks. These problems can be addressed by
multibiometric systems, which are expected to be more reliable due to the
presence of multiple, independent pieces of evidence.
|
1210.0848
|
Enhancing Twitter Data Analysis with Simple Semantic Filtering: Example
in Tracking Influenza-Like Illnesses
|
cs.SI cs.CL physics.soc-ph
|
Systems that exploit publicly available user generated content such as
Twitter messages have been successful in tracking seasonal influenza. We
developed a novel filtering method for Influenza-Like-Illnesses (ILI)-related
messages using 587 million messages from Twitter micro-blogs. We first filtered
messages based on syndrome keywords from the BioCaster Ontology, an extant
knowledge model of laymen's terms. We then filtered the messages according to
semantic features such as negation, hashtags, emoticons, humor and geography.
The data covered 36 weeks for the US 2009 influenza season from 30th August
2009 to 8th May 2010. Results showed that our system achieved the highest
Pearson correlation coefficient of 98.46% (p-value<2.2e-16), an improvement of
3.98% over the previous state-of-the-art method. The results indicate that
simple NLP-based enhancements to existing approaches to mine Twitter data can
increase the value of this inexpensive resource.
|
1210.0852
|
Detecting multiword phrases in mathematical text corpora
|
cs.CL cs.IR
|
We present an approach for detecting multiword phrases in mathematical text
corpora. The method used is based on characteristic features of mathematical
terminology. It makes use of a software tool named Lingo, which identifies
words by means of previously defined dictionaries for specific word classes
such as adjectives, personal names, or nouns. The detection of multiword
groups is done algorithmically. Possible advantages of the method for indexing
and information retrieval and conclusions for applying dictionary-based methods
of automatic indexing instead of stemming procedures are discussed.
|
1210.0862
|
Non-consensus opinion models on complex networks
|
physics.soc-ph cs.SI
|
We focus on non-consensus opinion models in which above a certain threshold
two opinions coexist in a stable relationship. We revisit and extend the
non-consensus opinion (NCO) model introduced by Shao. We generalize the NCO
model by adding a weight factor W to an individual's own opinion when determining
its future opinion (NCOW model). We find that as W increases the minority
opinion holders tend to form stable clusters with a smaller initial minority
fraction compared to the NCO model. We also revisit another non-consensus
opinion model, the inflexible contrarian opinion (ICO) model, which introduces
inflexible contrarians to model a competition between two opinions in the
steady state. In the ICO model, the inflexible contrarians effectively decrease
the size of the largest cluster of the rival opinion. All of the above models
have previously been explored on single networks. However, because opinions
propagate not only within single networks but also between networks, here we
study opinion dynamics in coupled networks. We apply the NCO rule on each
individual network and the global majority rule on interdependent pairs. We
find that the interdependent links effectively force the system from a second
order phase transition, which is characteristic of the NCO model on a single
network, to a hybrid phase transition, i.e., a mix of second-order and abrupt
jump-like transitions that ultimately becomes, as we increase the percentage of
interdependent agents, a pure abrupt transition. We conclude that for the NCO
model on coupled networks, interactions through interdependent links could push
the non-consensus opinion type model to a consensus opinion type model, which
mimics the reality that increased mass communication causes people to hold
opinions that are increasingly similar.
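A minimal sketch of the NCOW update rule as described above (a hypothetical implementation, not the authors' code; the example graph and the tie-breaking convention are our assumptions):

```python
# NCOW update: each agent adopts the majority opinion (+1/-1) in its
# neighborhood, counting its own current opinion with weight W.
# W = 1 recovers the NCO model; ties leave the opinion unchanged.
def ncow_step(opinions, neighbors, W=1.0):
    new = {}
    for node, op in opinions.items():
        score = W * op + sum(opinions[v] for v in neighbors[node])
        if score > 0:
            new[node] = +1
        elif score < 0:
            new[node] = -1
        else:
            new[node] = op          # tie: opinion unchanged
    return new

# Tiny example: a 4-cycle with a lone dissenter at node 0.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
opinions = {0: -1, 1: +1, 2: +1, 3: +1}
opinions = ncow_step(opinions, neighbors, W=1.0)
print(opinions[0])   # with W = 1 the dissenter flips: own -1 vs two +1 neighbors
```

Raising W (e.g. W = 3 in the same example) lets the dissenter outweigh its two neighbors and keep its opinion, which is how larger W helps minority clusters remain stable.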
|
1210.0864
|
Learning mixtures of structured distributions over discrete domains
|
cs.LG cs.DS math.ST stat.TH
|
Let $\mathfrak{C}$ be a class of probability distributions over the discrete
domain $[n] = \{1,...,n\}.$ We show that if $\mathfrak{C}$ satisfies a rather
general condition -- essentially, that each distribution in $\mathfrak{C}$ can
be well-approximated by a variable-width histogram with few bins -- then there
is a highly efficient (both in terms of running time and sample complexity)
algorithm that can learn any mixture of $k$ unknown distributions from
$\mathfrak{C}.$
We analyze several natural types of distributions over $[n]$, including
log-concave, monotone hazard rate and unimodal distributions, and show that
they have the required structural property of being well-approximated by a
histogram with few bins. Applying our general algorithm, we obtain
near-optimally efficient algorithms for all these mixture learning problems.
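The structural property being exploited, approximation by a variable-width histogram with few bins, can be illustrated with a small sketch (the example distribution and the bin edges are our choices, not the paper's):

```python
import numpy as np

def histogram_approx(p, bin_edges):
    """Approximate a distribution p over [n] by a variable-width flat
    histogram: within each bin the bin's mass is spread uniformly."""
    q = np.empty_like(p)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        q[lo:hi] = p[lo:hi].sum() / (hi - lo)   # flatten the bin, preserving mass
    return q

# A geometric (hence log-concave) distribution on [16], approximated
# with just 4 bins, narrower where the distribution changes fastest.
n = 16
p = 0.5 ** np.arange(1, n + 1)
p /= p.sum()
q = histogram_approx(p, [0, 1, 2, 4, 16])

tv = 0.5 * np.abs(p - q).sum()                  # total variation error
print(tv)
```

A few well-placed bins already give a small total variation error, which is the kind of guarantee the general condition in the abstract asks for.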
|
1210.0866
|
Classification of Hepatic Lesions using the Matching Metric
|
cs.CV cs.CG math.AT
|
In this paper we present a methodology of classifying hepatic (liver) lesions
using multidimensional persistent homology, the matching metric (also called
the bottleneck distance), and a support vector machine. We present our
classification results on a dataset of 132 lesions that have been outlined and
annotated by radiologists. We find that topological features are useful in the
classification of hepatic lesions. We also find that two-dimensional persistent
homology outperforms one-dimensional persistent homology in this application.
|
1210.0880
|
Schr\"{o}dinger Diffusion for Shape Analysis with Texture
|
cs.CV cs.CG cs.GR math.AP
|
In recent years, quantities derived from the heat equation have become
popular in shape processing and analysis of triangulated surfaces. Such
measures are often robust with respect to different kinds of perturbations,
including near-isometries, topological noise and partialities. Here, we propose
to exploit the semigroup of a Schr\"{o}dinger operator in order to deal with
texture data, while maintaining the desirable properties of the heat kernel. We
define a family of Schr\"{o}dinger diffusion distances analogous to the ones
associated to the heat kernels, and show that they are continuous under
perturbations of the data. As an application, we introduce a method for
retrieval of textured shapes through comparison of Schr\"{o}dinger diffusion
distance histograms with the earth mover's distance, and present some numerical
experiments showing superior performance compared to an analogous method that
ignores the texture.
|
1210.0887
|
The Definition of AI in Terms of Multi Agent Systems
|
cs.AI
|
The questions which we will consider here are "What is AI?" and "How can we
make AI?". Here we will present the definition of AI in terms of multi-agent
systems. This means that here you will not find a new answer to the question
"What is AI?", but an old answer in a new form.
This new form of the definition of AI is of interest for the theory of
multi-agent systems because it gives us a better understanding of this theory.
More important is that this work will help us answer the second question. We
want to make a program which is capable of constructing a model of its
environment. Every multi-agent model is equivalent to a single-agent model but
multi-agent models are more natural and accordingly more easily discoverable.
|
1210.0888
|
Control Design along Trajectories with Sums of Squares Programming
|
cs.RO cs.SY math.OC
|
Motivated by the need for formal guarantees on the stability and safety of
controllers for challenging robot control tasks, we present a control design
procedure that explicitly seeks to maximize the size of an invariant "funnel"
that leads to a predefined goal set. Our certificates of invariance are given
in terms of sums of squares proofs of a set of appropriately defined Lyapunov
inequalities. These certificates, together with our proposed polynomial
controllers, can be efficiently obtained via semidefinite optimization. Our
approach can handle time-varying dynamics resulting from tracking a given
trajectory, input saturations (e.g. torque limits), and can be extended to deal
with uncertainty in the dynamics and state. The resulting controllers can be
used by space-filling feedback motion planning algorithms to fill up the space
with significantly fewer trajectories. We demonstrate our approach on a
severely torque limited underactuated double pendulum (Acrobot) and provide
extensive simulation and hardware validation.
|
1210.0891
|
A Reconfigurable Distributed Algorithm for K-user MIMO Interference
Networks
|
cs.IT math.IT
|
It is already well-known that interference alignment (IA) achieves the sum
capacity of the K-user interference channel in the high-interference regime. On
the other hand, it is intuitively clear that when the interference levels are
very low, a sum-rate scaling of K (as opposed to K/2 for IA) should be
achievable at high signal-to-noise ratio values by simple ("myopic") single-link
multiple-input multiple-output (MIMO) techniques such as waterfilling. Recent
results have indicated that in certain low-to-moderate interference cases,
treating interference as noise may in fact be preferable. In this paper, we
present a distributed iterative algorithm for K-user MIMO interference networks
which attempts to adjust itself to the interference regime at hand, in the
above sense, as well as to the channel conditions. The proposed algorithm
combines the system-wide mean squared error minimization with the waterfilling
solution to adjust to the interference levels and channel conditions and
maximize accordingly each user's transmission rate. Sum-rate computer
simulations for the proposed algorithm over Ricean fading channels show that,
in the interference-limited regime, the proposed algorithm reconfigures itself
in order to achieve the IA scaling whereas, in the low-to-moderate interference
regime, it leads itself towards interference-myopic MIMO transmissions.
|
1210.0930
|
Optimality of Received Energy in Decision Fusion over Rayleigh Fading
Diversity MAC with Non-Identical Sensors
|
cs.IT math.IT
|
Received-energy test for non-coherent decision fusion over a Rayleigh fading
multiple access channel (MAC) without diversity was recently shown to be
optimum in the case of conditionally mutually independent and identically
distributed (i.i.d.) sensor decisions under specific conditions [1], [2]. Here,
we provide a twofold generalization, allowing sensors to be non-identical on
the one hand and introducing diversity on the other. Along with the
derivation, we also provide a general tool for verifying optimality of the
received-energy test in scenarios with correlated sensor decisions. Finally, we
derive an analytical expression for the effect of diversity on the
large-system performance, under both individual and total power constraints.
|
1210.0954
|
Learning from Collective Intelligence in Groups
|
cs.SI cs.LG
|
Collective intelligence, which aggregates the shared information from large
crowds, is often negatively impacted by unreliable information sources
providing low-quality data. This becomes a barrier to the effective use of collective
intelligence in a variety of applications. In order to address this issue, we
propose a probabilistic model to jointly assess the reliability of sources and
find the true data. We observe that different sources are often not independent
of each other. Instead, sources are prone to be mutually influenced, which
makes them dependent when sharing information with each other. High dependency
between sources makes collective intelligence vulnerable to the overuse of
redundant (and possibly incorrect) information from the dependent sources.
Thus, we reveal the latent group structure among dependent sources, and
aggregate the information at the group level rather than from individual
sources directly. This can prevent the collective intelligence from being
inappropriately dominated by dependent sources. We will also explicitly reveal
the reliability of groups, and minimize the negative impacts of unreliable
groups. Experimental results on real-world data sets show the effectiveness of
the proposed approach with respect to existing algorithms.
|
1210.0999
|
Logical segmentation for article extraction in digitized old newspapers
|
cs.IR cs.CV cs.DL
|
Newspapers are documents made of news items and informative articles. They are
not meant to be read sequentially: the reader can pick items in any order he
fancies. Ignoring this structural property, most digitized newspaper archives
only offer access by issue or at best by page to their content. We have built a
digitization workflow that automatically extracts newspaper articles from
images, which allows indexing and retrieval of information at the article
level. Our back-end system extracts the logical structure of the page to
produce the informative units: the articles. Each image is labelled at the
pixel level, through a machine learning based method, then the page logical
structure is constructed up from there by the detection of structuring entities
such as horizontal and vertical separators, titles and text lines. This logical
structure is stored in a METS wrapper associated with the ALTO file produced by
the system, including the OCRed text. Our front-end system provides
high-definition web visualisation of images, textual indexing and retrieval
facilities, and searching and reading at the article level. Article transcriptions can be
collaboratively corrected, which as a consequence allows for better indexing.
We are currently testing our system on the archives of the Journal de Rouen,
one of France's oldest local newspapers. These 250 years of publication amount to
300 000 pages of very variable image quality and layout complexity. Test year
1808 can be consulted at plair.univ-rouen.fr.
|
1210.1013
|
On the SCALE Algorithm for Multiuser Multicarrier Power Spectrum
Management
|
cs.IT math.IT
|
This paper studies the successive convex approximation for low complexity
(SCALE) algorithm, which was proposed to address the weighted sum rate (WSR)
maximized dynamic power spectrum management (DSM) problem for multiuser
multicarrier systems. To this end, we first revisit the algorithm, and then
present geometric interpretation and properties of the algorithm. A geometric
programming (GP) implementation approach is proposed and compared with the
low-complexity approach proposed previously. In particular, an analytical
method is proposed to set up the default lower-bound constraints added by a GP
solver. Finally, numerical experiments are used to illustrate the analysis and
compare the two implementation approaches.
|
1210.1029
|
Blurred Image Classification based on Adaptive Dictionary
|
cs.CV
|
Two types of framework for blurred image classification based on adaptive
dictionary are proposed. Given a blurred image, instead of image deblurring,
the semantic category of the image is determined by blur insensitive sparse
coefficients calculated depending on an adaptive dictionary. The dictionary is
adaptive to the Point Spread Function (PSF) estimated from input blurred image.
The PSF is assumed to be space invariant and inferred separately in one
framework or updated combining with sparse coefficients calculation in an
alternative and iterative algorithm in the other framework. The experiments
evaluate three types of blur, namely defocus blur, simple motion blur, and
camera shake blur. The experimental results confirm the effectiveness of the
proposed frameworks.
|
1210.1033
|
Robust Degraded Face Recognition Using Enhanced Local Frequency
Descriptor and Multi-scale Competition
|
cs.CV
|
Recognizing degraded faces from low-resolution and blurred images is a common
yet challenging task. The Local Frequency Descriptor (LFD) has been proven
effective for this task, yet it is extracted from a spatial neighborhood of a
pixel of a frequency plane independently, regardless of correlations between
frequencies. In addition, it uses a fixed window size, i.e., a single scale of
the short-term Fourier transform (STFT). To explore the frequency correlations
while simultaneously preserving insensitivity to low resolution and blur, we
propose the Enhanced LFD, in which information in space and frequency is
jointly utilized so as to be more descriptive and discriminative than the LFD.
A multi-scale competition strategy extracts multiple descriptors corresponding
to multiple window sizes of the STFT and takes the one with maximum confidence
as the final recognition result. The experiments conducted on the Yale and FERET
databases demonstrate that promising results have been achieved by the proposed
Enhanced LFD and multi-scale competition strategy.
|
1210.1037
|
Laxity-Based Opportunistic Scheduling with Flow-Level Dynamics and
Deadlines
|
cs.IT math.IT
|
Many data applications in the next generation cellular networks, such as
content precaching and video progressive downloading, require flow-level
quality of service (QoS) guarantees. One such requirement is deadline, where
the transmission task needs to be completed before the application-specific
time. To minimize the number of uncompleted transmission tasks, we study
laxity-based scheduling policies in this paper. We propose a
Less-Laxity-Higher-Possible-Rate (L$^2$HPR) policy and prove its asymptotic
optimality in underloaded identical-deadline systems. The asymptotic optimality
of L$^2$HPR can be applied to estimate the schedulability of a system and
provide insights on the design of scheduling policies for general systems.
Based on it, we propose a framework and three heuristic policies for practical
systems. Simulation results demonstrate the asymptotic optimality of L$^2$HPR
and performance improvement of proposed policies over greedy policies.
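The notion of laxity underlying these policies can be sketched as follows (a simplified, hypothetical scheduling decision, not the exact L$^2$HPR policy; flow fields and units are our assumptions):

```python
# Laxity = time remaining until the deadline minus the minimum time
# still needed to finish the flow at its best achievable rate.
def laxity(flow, now):
    remaining_time = flow["deadline"] - now
    needed_time = flow["bits_left"] / flow["peak_rate"]
    return remaining_time - needed_time

def pick_flow(flows, now):
    # Serve the flow with the least laxity; break ties by higher rate,
    # in the spirit of "less laxity, higher possible rate".
    return min(flows, key=lambda f: (laxity(f, now), -f["peak_rate"]))

flows = [
    {"name": "video", "bits_left": 8e6, "peak_rate": 2e6, "deadline": 10.0},
    {"name": "precache", "bits_left": 1e6, "peak_rate": 1e6, "deadline": 30.0},
]
print(pick_flow(flows, now=0.0)["name"])   # "video": laxity 6.0 vs 29.0
```

A flow with negative laxity can no longer meet its deadline even at peak rate, which is why laxity is a natural urgency measure for deadline-constrained scheduling.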
|
1210.1040
|
A Comparative Analysis of Data Mining Tools in Agent Based Systems
|
cs.DB
|
Worldwide technological advancement has brought a widespread change in the
adoption and utilization of open source tools. Since most organizations across
the globe deal with large amounts of data to be updated online, with
transactions made every second, managing, mining, and processing this dynamic
data is very complex. Successful implementation of data mining techniques
requires a careful assessment of the various tools and algorithms available to
mining experts. This paper provides a comparative study of the open source
data mining tools available to professionals. Parameters influencing the
choice of apt tools, in addition to real-time challenges, are discussed. It is
also well established that agents aid in improving the performance of data
mining tools. This paper therefore describes an agent-based framework for data
preprocessing, with implementation details for the development of a better
tool. An integration of open source data mining tools with agent simulation
enables one to implement an effective data preprocessing architecture, thereby
providing robust application capabilities that can be upgraded with minimal
pre-planning from the application developer.
|
1210.1048
|
Predicting human preferences using the block structure of complex social
networks
|
physics.soc-ph cs.SI physics.data-an stat.ML
|
With ever-increasing available data, predicting individuals' preferences and
helping them locate the most relevant information has become a pressing need.
Understanding and predicting preferences is also important from a fundamental
point of view, as part of what has been called a "new" computational social
science. Here, we propose a novel approach based on stochastic block models,
which have been developed by sociologists as plausible models of complex
networks of social interactions. Our model is in the spirit of predicting
individuals' preferences based on the preferences of others but, rather than
fitting a particular model, we rely on a Bayesian approach that samples over
the ensemble of all possible models. We show that our approach is considerably
more accurate than leading recommender algorithms, with major relative
improvements between 38% and 99% over industry-level algorithms. Besides, our
approach sheds light on decision-making processes by identifying groups of
individuals that have consistently similar preferences, and enabling the
analysis of the characteristics of those groups.
|
1210.1091
|
A Formula for the Capacity of the General Gel'fand-Pinsker Channel
|
cs.IT math.IT
|
We consider the Gel'fand-Pinsker problem in which the channel and state are
general, i.e., possibly non-stationary, non-memoryless and non-ergodic. Using
the information spectrum method and a non-trivial modification of the piggyback
coding lemma by Wyner, we prove that the capacity can be expressed as an
optimization over the difference of a spectral inf- and a spectral sup-mutual
information rate. We consider various specializations including the case where
the channel and state are memoryless but not necessarily stationary.
|
1210.1104
|
Sensory Anticipation of Optical Flow in Mobile Robotics
|
cs.RO cs.LG
|
In order to anticipate dangerous events, like a collision, an agent needs to
make long-term predictions. However, those are challenging due to uncertainties
in internal and external variables and environment dynamics. A sensorimotor
model is acquired online by the mobile robot using a state-of-the-art method
that learns the optical flow distribution in images, both in space and time.
The learnt model is used to anticipate the optical flow up to a given time
horizon and to predict an imminent collision by using reinforcement learning.
We demonstrate that multi-modal predictions reduce to simpler distributions
once actions are taken into account.
|
1210.1121
|
Smooth Sparse Coding via Marginal Regression for Learning Sparse
Representations
|
stat.ML cs.LG
|
We propose and analyze a novel framework for learning sparse representations,
based on two statistical techniques: kernel smoothing and marginal regression.
The proposed approach provides a flexible framework for incorporating feature
similarity or temporal information present in data sets, via non-parametric
kernel smoothing. We provide generalization bounds for dictionary learning
using smooth sparse coding and show how the sample complexity depends on the L1
norm of the kernel function used. Furthermore, we propose using marginal regression
for obtaining sparse codes, which significantly improves the speed and allows
one to scale to large dictionary sizes easily. We demonstrate the advantages of
the proposed approach, both in terms of accuracy and speed by extensive
experimentation on several real data sets. In addition, we demonstrate how the
proposed approach could be used for improving semi-supervised sparse coding.
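Obtaining sparse codes by marginal regression, i.e., correlating the signal with each dictionary atom and keeping only the strongest responses, can be sketched as follows (a minimal illustration; the dictionary sizes and the top-s thresholding rule are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def marginal_regression_codes(D, x, s):
    """Sparse code of x over dictionary D (columns = unit-norm atoms) by
    marginal regression: keep the s atoms with largest |D^T x|."""
    c = D.T @ x                      # marginal correlations, one per atom
    code = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-s:]
    code[keep] = c[keep]             # hard-threshold to the top-s entries
    return code

rng = np.random.default_rng(1)
D = rng.normal(size=(256, 512))
D /= np.linalg.norm(D, axis=0)       # normalize atoms
x = 2.0 * D[:, 7] - 1.5 * D[:, 42]   # signal built from two atoms
code = marginal_regression_codes(D, x, s=2)
print(np.nonzero(code)[0])           # recovers the generating atoms 7 and 42
```

Each code is a single matrix-vector product plus a sort, which is why marginal regression scales to large dictionaries far more easily than solving a lasso problem per signal.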
|
1210.1139
|
Cross-Layer Scheduling in Multi-user System with Delay and Secrecy
Constraints
|
cs.IT math.IT
|
Recently, physical layer security based approaches have drawn considerable
attention and are envisaged to provide secure communications in wireless
networks. However, most of the existing literature focuses only on the
physical layer. Thus, how to design an effective transmission scheme that also
considers the requirements of the upper layers remains an unsolved problem. We
consider such a cross-layer resource allocation problem in the multi-user
downlink environment, for scenarios with both instantaneous and partial
eavesdropping channel information. The problem is first formulated in a new security
framework. Then, the control scheme is designed to maximize the average
admission rate of the data, incorporating delay, power, and secrecy as
constraints, for both non-colluding and colluding eavesdropping cases in each
scenario. Performance analysis is given based on the stochastic optimization
theory and the simulations are carried out to validate the effectiveness of our
scheme.
|
1210.1161
|
Feature Subset Selection for Software Cost Modelling and Estimation
|
cs.SE cs.AI cs.LG
|
Feature selection has been recently used in the area of software engineering
for improving the accuracy and robustness of software cost models. The idea
behind selecting the most informative subset of features from a pool of
available cost drivers stems from the hypothesis that reducing the
dimensionality of datasets will significantly minimise the complexity and time
required to reach an estimation using a particular modelling technique. This
work investigates the appropriateness of attributes obtained from empirical
project databases and aims to reduce the cost drivers used while preserving
performance. Finding suitable subset selections that may yield improved
predictions may be considered a pre-processing step for a particular
technique employed for cost estimation (filter or wrapper) or an internal
(embedded) step to minimise the fitting error. This paper compares nine
relatively popular feature selection methods and uses the empirical values of
selected attributes recorded in the ISBSG and Desharnais datasets to estimate
software development effort.
|
1210.1172
|
Modeling self-organized systems interacting with few individuals: from
microscopic to macroscopic dynamics
|
physics.bio-ph cs.SI physics.soc-ph q-bio.QM
|
In nature, self-organized systems such as flocks of birds, schools of fish,
or herds of sheep have to deal with the presence of external agents such as
predators or leaders, which modify their internal dynamics. Such situations
involve a large number of individuals with their own social behavior
interacting with a small number of other individuals acting as external
point-source forces. Starting from the microscopic description we derive the kinetic model
through a mean-field limit and finally the macroscopic system through a
suitable hydrodynamic limit.
|
1210.1184
|
Elegant Object-oriented Software Design via Interactive, Evolutionary
Computation
|
cs.SE cs.AI
|
Design is fundamental to software development but can be demanding to
perform. Thus to assist the software designer, evolutionary computing is being
increasingly applied using machine-based, quantitative fitness functions to
evolve software designs. However, in nature, elegance and symmetry play a
crucial role in the reproductive fitness of various organisms. In addition,
subjective evaluation has also been exploited in Interactive Evolutionary
Computation (IEC). Therefore to investigate the role of elegance and symmetry
in software design, four novel elegance measures are proposed based on the
evenness of distribution of design elements. In controlled experiments in a
dynamic interactive evolutionary computation environment, designers are
presented with visualizations of object-oriented software designs, which they
rank according to a subjective assessment of elegance. For three out of the
four elegance measures proposed, it is found that a significant correlation
exists between elegance values and reward elicited. These three elegance
measures assess the evenness of distribution of (a) attributes and methods
among classes, (b) external couples between classes, and (c) the ratio of
attributes to methods. It is concluded that symmetrical elegance is in some way
significant in software design, and that this can be exploited in dynamic,
multi-objective interactive evolutionary computation to produce elegant
software designs.
|
1210.1190
|
Fast Conical Hull Algorithms for Near-separable Non-negative Matrix
Factorization
|
stat.ML cs.LG
|
The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012)
turns non-negative matrix factorization (NMF) into a tractable problem.
Recently, a new class of provably-correct NMF algorithms has emerged under
this assumption. In this paper, we reformulate the separable NMF problem as
that of finding the extreme rays of the conical hull of a finite set of
vectors. From this geometric perspective, we derive new separable NMF
algorithms that are highly scalable and empirically noise robust, and have
several other favorable properties in relation to existing methods. A parallel
implementation of our algorithm demonstrates high scalability on shared- and
distributed-memory machines.
|
1210.1207
|
Learning Human Activities and Object Affordances from RGB-D Videos
|
cs.RO cs.AI cs.CV
|
Understanding human activities and object affordances are two very important
skills, especially for personal robots which operate in human environments. In
this work, we consider the problem of extracting a descriptive labeling of the
sequence of sub-activities being performed by a human, and more importantly, of
their interactions with the objects in the form of associated affordances.
Given an RGB-D video, we jointly model the human activities and object
affordances as a Markov random field where the nodes represent objects and
sub-activities, and the edges represent the relationships between object
affordances, their relations with sub-activities, and their evolution over
time. We formulate the learning problem using a structural support vector
machine (SSVM) approach, where labelings over various alternate temporal
segmentations are considered as latent variables. We tested our method on a
challenging dataset comprising 120 activity videos collected from 4 subjects,
and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and
75.0% for high-level activity labeling. We then demonstrate the use of such
descriptive labeling in performing assistive tasks by a PR2 robot.
|
1210.1230
|
Evaluating Discussion Boards on BlackBoard as a Collaborative Learning
Tool: A Student Survey and Reflections
|
cs.CV cs.CY
|
  In this paper, we investigate how students think of their experience in a
junior-level course that has a Blackboard course presence where the students
use the discussion boards extensively. A survey is set up through Blackboard as
a voluntary quiz, and the students who participated were given a freebie point.
The results and the participation were very interesting in terms of the
feedback we got via open comments from the students as well as the statistics
we gathered from the answers to the questions. The students have shown
understanding and willingness to participate in pedagogy-enhancing endeavors.
|
1210.1258
|
Unfolding Latent Tree Structures using 4th Order Tensors
|
cs.LG stat.ML
|
Discovering the latent structure from many observed variables is an important
yet challenging learning task. Existing approaches for discovering latent
structures often require the unknown number of hidden states as an input. In
this paper, we propose a quartet based approach which is \emph{agnostic} to
this number. The key contribution is a novel rank characterization of the
tensor associated with the marginal distribution of a quartet. This
characterization allows us to design a \emph{nuclear norm} based test for
resolving quartet relations. We then use the quartet test as a subroutine in a
divide-and-conquer algorithm for recovering the latent tree structure. Under
mild conditions, the algorithm is consistent and its error probability decays
exponentially with increasing sample size. We demonstrate that the proposed
approach compares favorably to alternatives. In a real world stock dataset, it
also discovers meaningful groupings of variables, and produces a model that
fits the data better.
|
1210.1266
|
Nonanticipative Rate Distortion Function and Relations to Filtering
Theory
|
cs.IT math.IT math.OC
|
The relation between nonanticipative Rate Distortion Function (RDF) and
filtering theory is discussed on abstract spaces. The relation is established
by imposing a realizability constraint on the reconstruction conditional
distribution of the classical RDF. Existence of the extremum solution of the
nonanticipative RDF is shown using weak$^*$-convergence on an appropriate
topology. The extremum reconstruction conditional distribution is derived in
closed form, for the case of stationary processes. The realization of the
reconstruction conditional distribution which achieves the infimum of the
nonanticipative RDF is described. Finally, an example is presented to
illustrate the concepts.
|
1210.1300
|
Properties of Stochastic Kronecker Graph
|
cs.SI cs.DM
|
  The stochastic Kronecker graph model can generate large random graphs that
closely resemble many real-world networks. For example, the output graph has a
heavy-tailed degree distribution, has a (low) diameter that effectively remains
constant over time and obeys the so-called densification power law [1]. Aside
from this list of very important graph properties, one may ask for some
additional information about the output graph: What will be the expected number
of isolated vertices? How many edges and self-loops are there in the graph?
What will be the expected number of triangles in a random realization? Here we
try to answer the above questions. In the first phase, we bound the expected
values of the aforementioned features from above. Next we establish sufficient
conditions to generate stochastic Kronecker graphs with a wide range of
interesting properties. Finally we show two phase transitions for the
appearance of edges and self-loops in stochastic Kronecker graphs.
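As an illustrative aside (not from the paper): a stochastic Kronecker graph can be sampled directly from the k-th Kronecker power of an initiator matrix, reading each edge probability off the base-n0 digits of the endpoint indices. The function name and the O(n^2) edge-by-edge sampling below are illustrative choices; practical generators use faster recursive sampling.

```python
import random

def kronecker_graph(initiator, k, seed=0):
    """Sample a stochastic Kronecker graph: the edge-probability matrix is
    the k-th Kronecker power of `initiator`, and each directed edge (u, v)
    is included independently with its corresponding probability."""
    random.seed(seed)
    n0 = len(initiator)
    n = n0 ** k
    edges = set()
    for u in range(n):
        for v in range(n):
            # The (u, v) entry of the Kronecker power is the product of
            # initiator entries indexed by the base-n0 digits of u and v.
            p, uu, vv = 1.0, u, v
            for _ in range(k):
                p *= initiator[uu % n0][vv % n0]
                uu //= n0
                vv //= n0
            if random.random() < p:
                edges.add((u, v))
    return n, edges
```

With the identity initiator the sampling is deterministic: only the n self-loops survive, which is a convenient sanity check.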
|
1210.1316
|
Learning Locality-Constrained Collaborative Representation for Face
Recognition
|
cs.CV
|
  Low-dimensional manifolds and sparse representations are two well-known
concise models suggesting that each datum can be described by a few
characteristics. Manifold learning is usually investigated for dimension
reduction by preserving some expected local geometric structures from the
original space to a low-dimensional one. The structures are generally
determined by using pairwise distance, e.g., Euclidean distance. Alternatively,
sparse representation denotes a data point as a linear combination of the
points from the same subspace. In practical applications, however, the nearby
points in terms of pairwise distance may not belong to the same subspace, and
vice versa. Consequently, it is interesting and important to explore how to get
a better representation by integrating these two models together. To this end,
this paper proposes a novel coding algorithm, called Locality-Constrained
Collaborative Representation (LCCR), which improves the robustness and
discrimination of data representation by introducing a kind of local
consistency. The locality term derives from the biological observation that
similar inputs have similar codes. The objective function of LCCR has an
analytical solution, and it does not involve local minima. The empirical
studies based on four public facial databases, ORL, AR, Extended Yale B, and
Multiple PIE, show that LCCR is promising in recognizing human faces from
frontal views with varying expression and illumination, as well as various
corruptions and occlusions.
|
1210.1317
|
Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining
|
cs.LG cs.AI
|
  The notion of meta-mining has appeared recently and extends traditional
meta-learning in two ways. First, it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition, it abandons the so-called black-box
approach to algorithm description followed in meta-learning: now, in addition
to the datasets, the algorithms and workflows also have descriptors. For the
latter two these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows, the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem whose nature is much
more similar to the problems that appear in recommendation systems. The most
important meta-mining requirements are that suggestions should rely only on
dataset and workflow descriptors, and that the cold-start problem be addressed,
i.e., providing workflow suggestions for new datasets.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray) datasets. The application of our method is not limited to the
meta-mining problem; its formulation is general enough that it can be applied
to problems with similar requirements.
|
1210.1326
|
Wireless Network Coding via Modified 802.11 MAC/PHY: Design and
Implementation on SDR
|
cs.IT cs.NI cs.PF math.IT
|
Network coding (NC), in principle, is a Layer-3 innovation that improves
network throughput in wired networks for multicast/broadcast scenarios. Due to
the fundamental differences between wired and wireless networks, extending NC
to wireless networks generates several new and significant practical
challenges. Two-way information exchange (both symmetric and asymmetric)
between a pair of 802.11 sources/sinks using an intermediate relay node is a
canonical scenario for evaluating the effectiveness of Wireless Network Coding
(WNC) in a practical setting. Our primary objective in this work is to suggest
pragmatic and novel modifications at the MAC and PHY layers of the 802.11
protocol stack on a Software Radio (SORA) platform to support WNC and obtain
achievable throughput estimates via lab-scale experiments. Our results show
that network coding (at the MAC or PHY layer) increases system throughput,
typically by 20-30%.
|
1210.1356
|
Detecting and Describing Dynamic Equilibria in Adaptive Networks
|
nlin.AO cs.SI physics.soc-ph q-bio.PE
|
We review modeling attempts for the paradigmatic contact process (or SIS
model) on adaptive networks. Elaborating on one particular proposed mechanism
of topology change (rewiring) and its mean field analysis, we obtain a
coarse-grained view of coevolving network topology in the stationary active
phase of the system. Introducing an alternative framework applicable to a wide
class of adaptive networks, active stationary states are detected, and an
extended description of the resulting steady-state statistics is given for
three different rewiring schemes. We find that slight modifications of the
standard rewiring rule can result in either minuscule or drastic change of
steady-state network topologies.
|
1210.1357
|
A quantitative method for determining the robustness of complex networks
|
cs.SI nlin.AO physics.soc-ph
|
Most current studies estimate the invulnerability of complex networks using a
qualitative method that analyzes the inaccurate decay rate of network
efficiency. This method results in confusion over the invulnerability of
various types of complex networks. By normalizing network efficiency and
defining a baseline, this paper defines the invulnerability index as the
integral of the difference between the normalized network efficiency curve and
the baseline. This quantitative method seeks to establish a benchmark for the
robustness and fragility of networks and to measure network invulnerability
under both edge and node attacks. To validate the reliability of the proposed
method, three small-world networks were selected as test beds. The simulation
results indicate that the proposed invulnerability index can effectively and
accurately quantify network resilience. The index should provide a valuable
reference for determining network invulnerability in future research.
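A minimal sketch of the quantitative idea described above, assuming the network efficiency is recorded after each removal step and the attack fraction runs over the unit interval; the function name and the trapezoidal rule are illustrative choices, not the authors' exact formulation.

```python
def invulnerability_index(efficiencies, baseline=0.0):
    """Normalize an efficiency curve by its initial value and integrate the
    difference between the normalized curve and a constant baseline over
    the unit fraction-removed axis (trapezoidal rule)."""
    norm = [e / efficiencies[0] for e in efficiencies]
    n = len(norm) - 1
    area = 0.0
    for i in range(n):
        # Trapezoid on each sub-interval of width 1/n.
        area += (norm[i] + norm[i + 1]) / 2.0 * (1.0 / n)
    # A constant baseline integrates to `baseline` over the unit interval.
    return area - baseline
```

A network whose efficiency never degrades scores 1.0; faster decay of the curve yields a smaller index.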
|
1210.1394
|
Revisiting Content Availability in Distributed Online Social Networks
|
cs.NI cs.SI
|
Online Social Networks (OSN) are among the most popular applications in
today's Internet. Decentralized online social networks (DOSNs), a special class
of OSNs, promise better privacy and autonomy than traditional centralized OSNs.
However, ensuring availability of content when the content owner is not online
remains a major challenge. In this paper, we rely on the structure of the
social graphs underlying DOSN for replication. In particular, we propose that
friends, who are anyhow interested in the content, are used to replicate the
users content. We study the availability of such natural replication schemes
via both theoretical analysis as well as simulations based on data from OSN
users. We find that the availability of the content increases drastically when
compared to the online time of the user, e.g., by a factor of more than 2 for
90% of the users. Thus, with these simple schemes we provide a baseline for any
more complicated content replication scheme.
|
1210.1441
|
Adaptive social recommendation in a multiple category landscape
|
physics.soc-ph cs.IR
|
People in the Internet era have to cope with the information overload,
striving to find what they are interested in, and usually face this situation
by following a limited number of sources or friends that best match their
interests. A recent line of research, namely adaptive social recommendation,
has therefore emerged to optimize the information propagation in social
networks and provide users with personalized recommendations. Validation of
these methods by agent-based simulations often assumes that the tastes of users
can be represented by binary vectors, with entries denoting users'
preferences. In this work we introduce a more realistic assumption that users'
tastes are modeled by multiple vectors. We show that within this framework the
social recommendation process has a poor outcome. Accordingly, we design novel
measures of users' taste similarity that can substantially improve the
precision of the recommender system. Finally, we discuss the issue of enhancing
the recommendations' diversity while preserving their accuracy.
|
1210.1460
|
Effective resistance on graphs and the Epidemic quasimetric
|
math.CO cs.SI math.CV q-bio.PE q-bio.QM
|
We introduce the epidemic quasimetric on graphs and study its behavior with
respect to clustering techniques. In particular we compare its behavior to
known objects such as the graph distance, effective resistance, and modulus of
path families.
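For reference, effective resistance itself has a compact standard computation via the Moore-Penrose pseudoinverse of the graph Laplacian, R(u,v) = L+[u,u] + L+[v,v] - 2 L+[u,v]; the sketch below assumes unit-conductance edges.

```python
import numpy as np

def effective_resistance(edges, n):
    """All-pairs effective resistance of an undirected graph with unit
    conductances, via the pseudoinverse of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    # R[u, v] = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v], vectorized:
    return d[:, None] + d[None, :] - 2.0 * Lp
```

On the path 0-1-2 the unit resistances add in series (R(0,2) = 2), while on a triangle each pair sees two parallel routes (R = 2/3).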
|
1210.1461
|
A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and
Tighter Bound
|
cs.LG cs.DM stat.ML
|
The CUR matrix decomposition is an important extension of Nystr\"{o}m
approximation to a general matrix. It approximates any data matrix in terms of
a small number of its columns and rows. In this paper we propose a novel
randomized CUR algorithm with an expected relative-error bound. The proposed
algorithm has advantages over the existing relative-error CUR algorithms in
that it possesses a tighter theoretical bound and lower time complexity, and
that it can avoid maintaining the whole data matrix in main memory. Finally,
experiments on several real-world datasets demonstrate significant improvement
over the existing relative-error algorithms.
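The generic CUR reconstruction (the standard construction, not the paper's sampling scheme) is short enough to state directly: given chosen column and row indices, set U = C+ A R+ so that A ≈ CUR, with exact recovery when the chosen columns and rows span the column and row spaces.

```python
import numpy as np

def cur_decomposition(A, col_idx, row_idx):
    """Generic CUR reconstruction: C and R are actual columns/rows of A,
    and U = pinv(C) @ A @ pinv(R) is the Frobenius-optimal middle factor
    for the chosen index sets."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R
```

For a rank-2 matrix whose first two columns and rows are independent, the reconstruction is exact.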
|
1210.1464
|
Networked Decision Making for Poisson Processes: Application to nuclear
detection
|
math.PR cs.RO
|
This paper addresses a detection problem where several spatially distributed
sensors independently observe a time-inhomogeneous stochastic process. The task
is to decide between two hypotheses regarding the statistics of the observed
process at the end of a fixed time interval. In the proposed method, each of
the sensors transmits once to a fusion center a locally processed summary of
its information in the form of a likelihood ratio. The fusion center then
combines these messages to arrive at an optimal decision in the Neyman-Pearson
framework. The approach is motivated by applications arising in the detection
of mobile radioactive sources, and offers a pathway toward the development of
novel fixed-interval detection algorithms that combine decentralized
processing with optimal centralized decision making.
|
1210.1470
|
A Framework for Joint Design of Pilot Sequence and Linear Precoder
|
cs.IT math.IT
|
Most performance measures of pilot-assisted multiple-input multiple-output
(MIMO) systems are functions that depend on both the linear precoding filter
and the pilot sequence. A framework for the optimization of these two
parameters is proposed, based on a matrix-valued generalization of the concept
of effective signal-to-noise ratio (SNR) introduced in a famous work by Hassibi
and Hochwald. The framework applies to a wide class of utility functions of
said effective SNR matrix, most notably a well-known mutual information
expression for Gaussian inputs, an upper bound on the minimum mean-square error
(MMSE), as well as approximations thereof. The approach consists in decomposing
the joint optimization problem into three subproblems: first, we describe how
to reformulate the optimization of the linear precoder subject to a fixed pilot
sequence as a convex problem. Second, we do likewise for the optimization of
the pilot sequence subject to a fixed precoder. Third, we describe how to
generate pairs of precoders and pilot sequences that are Pareto optimal in the
sense that they attain the Pareto boundary of the set of feasible effective SNR
matrices. By combining these three optimization problems into an iteration, we
obtain an algorithm which allows us to compute jointly optimal pairs of precoders
and pilot sequences with respect to some generic utility function of the
effective SNR.
|
1210.1472
|
Biospectrogram: a tool for spectral analysis of biological sequences
|
q-bio.QM cs.CE q-bio.GN
|
  Summary: Biospectrogram is an open-source software for the spectral analysis
of DNA and protein sequences. The software can fetch (from the NCBI server),
import and manage biological data. One can analyze the data using Digital
Signal Processing (DSP) techniques, since the software allows the user to
convert the symbolic data into numerical data using 23 popular encodings, apply
popular transformations such as the Fast Fourier Transform (FFT), and export
the result. The ability to export both encoding files and transform files as a
MATLAB .m file gives the user an option to apply a variety of DSP techniques.
The user can also do window analysis (both sliding, in forward and backward
directions, and stagnant) with windows of different sizes and search for
meaningful spectral patterns with the help of the exported MATLAB file in a
dynamic manner by choosing the time delay in the plot using Biospectrogram.
Random encodings and user-chosen encodings allow the software to search for
many possibilities in spectral space.
  Availability: Biospectrogram is written in Java and is available to download
freely from http://www.guptalab.org/biospectrogram. The software has been
optimized to run on Windows, Mac OS X and Linux. A user manual and a YouTube
(product demo) tutorial are also available on the website. We are in the
process of acquiring an open-source license for it.
|
1210.1507
|
Decomposition by Successive Convex Approximation: A Unifying Approach
for Linear Transceiver Design in Heterogeneous Networks
|
cs.IT math.IT
|
We study the downlink linear precoder design problem in a multi-cell dense
heterogeneous network (HetNet). The problem is formulated as a general
sum-utility maximization (SUM) problem, which includes as special cases many
practical precoder design problems such as multi-cell coordinated linear
precoding, full and partial per-cell coordinated multi-point transmission,
zero-forcing precoding and joint BS clustering and beamforming/precoding. The
SUM problem is difficult due to its non-convexity and the tight coupling of the
users' precoders. In this paper we propose a novel convex approximation
technique to approximate the original problem by a series of convex
subproblems, each of which decomposes across all the cells. The convexity of
the subproblems allows for efficient computation, while their decomposability
leads to distributed implementation. Our approach hinges upon the
identification of certain key convexity properties of the sum-utility
objective, which allows us to transform the problem into a form that can be
solved using a popular algorithmic framework called BSUM (Block Successive
Upper-Bound Minimization). Simulation experiments show that the proposed
framework is effective for solving interference management problems in large
HetNets.
|
1210.1530
|
A network of spiking neurons for computing sparse representations in an
energy efficient way
|
cs.NE q-bio.NC
|
Computing sparse redundant representations is an important problem both in
applied mathematics and neuroscience. In many applications, this problem must
be solved in an energy efficient way. Here, we propose a hybrid distributed
algorithm (HDA), which solves this problem on a network of simple nodes
communicating via low-bandwidth channels. HDA nodes perform both
gradient-descent-like steps on analog internal variables and
coordinate-descent-like steps via quantized external variables communicated to
each other. Interestingly, such operation is equivalent to a network of
integrate-and-fire neurons, suggesting that HDA may serve as a model of neural
computation. We show that the numerical performance of HDA is on par with
existing algorithms. In the asymptotic regime the representation error of HDA
decays with time, t, as 1/t. HDA is stable against time-varying noise,
specifically, the representation error decays as 1/sqrt(t) for Gaussian white
noise.
|
1210.1549
|
Source-Channel Secrecy with Causal Disclosure
|
cs.IT math.IT
|
Imperfect secrecy in communication systems is investigated. Instead of using
equivocation as a measure of secrecy, the distortion that an eavesdropper
incurs in producing an estimate of the source sequence is examined. The
communication system consists of a source and a broadcast (wiretap) channel,
and lossless reproduction of the source sequence at the legitimate receiver is
required. A key aspect of this model is that the eavesdropper's actions are
allowed to depend on the past behavior of the system. Achievability results are
obtained by studying the performance of source and channel coding operations
separately, and then linking them together digitally. Although the problem
addressed here has been solved when the secrecy resource is shared secret key,
it is found that substituting secret key for a wiretap channel brings new
insights and challenges: the notion of weak secrecy provides just as much
distortion at the eavesdropper as strong secrecy, and revealing public messages
freely is detrimental.
|
1210.1568
|
A Definition of Artificial Intelligence
|
cs.AI
|
  In this paper we offer a formal definition of Artificial Intelligence, and
this directly gives us an algorithm for the construction of this object. In
practice, this algorithm is useless due to combinatorial explosion.
  The main innovation in our definition is that it does not include knowledge
as a part of intelligence. So according to our definition a newly born baby is
also an Intellect. Here we differ from Turing's definition, which suggests that
an Intellect is a person with knowledge gained through the years.
|
1210.1624
|
Controlled Collaboration for Linear Coherent Estimation in Wireless
Sensor Networks
|
cs.IT math.IT
|
We consider a wireless sensor network consisting of multiple nodes that are
coordinated by a fusion center (FC) in order to estimate a common signal of
interest. In addition to being coordinated, the sensors are also able to
collaborate, i.e., share observations with other neighboring nodes, prior to
transmission. In an earlier work, we derived the energy-optimal collaboration
strategy for the single-snapshot framework, where the inference has to be made
based on observations collected at one particular instant. In this paper, we
make two important contributions. Firstly, for the single-snapshot framework,
we gain further insights into partially connected collaboration networks
(nearest-neighbor and random geometric graphs for example) through the analysis
of a family of topologies with regular structure. Secondly, we explore the
estimation problem by adding the dimension of time, where the goal is to
estimate a time-varying signal in a power-constrained network. To model the
time dynamics, we consider the stationary Gaussian process with exponential
covariance (sometimes referred to as Ornstein-Uhlenbeck process) as our
representative signal. For such a signal, we show that it is always beneficial
to sample as frequently as possible, despite the fact that the samples get
increasingly noisy due to the power-constrained nature of the problem.
Simulation results are presented to corroborate our analytical results.
|
1210.1626
|
Discovering and Leveraging the Most Valuable Links for Ranking
|
cs.IR
|
  On the Web, visits to a page are often introduced by one or more valuable
linking sources. Indeed, good backlinks are valuable resources for Web pages
and sites. We propose discovering and leveraging the best backlinks of pages
for ranking. Similar to PageRank, MaxRank scores are updated recursively. In
particular, with probability $\lambda$, the MaxRank of a document is updated
from the backlink source with the maximum score; with probability $1-\lambda$,
the MaxRank of a document is updated from a random backlink source. MaxRank has
an interesting relation to PageRank. When $\lambda=0$, MaxRank reduces to
PageRank; when $\lambda=1$, MaxRank only follows what it considers the best
backlink. Empirical results on Wikipedia show that the global authorities are
very influential; overall, large $\lambda$s (but smaller than 1) perform best:
the convergence is dramatically faster than for PageRank, but the performance
is still comparable. We study the influence of these sources and propose a few
measures, such as the number of times a source is the best backlink for others,
and related properties of the proposed algorithm. The introduction of best
backlink sources provides new insights for link analysis. Besides ranking, our
method can be used to discover the most valuable linking sources for a page or
Website, which is useful for both search engines and site owners.
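One plausible reading of the update rule above (an illustrative sketch, not the authors' exact algorithm; the damping constant and function name are assumptions) blends, with weight $\lambda$, the single best backlink contribution with the ordinary PageRank sum:

```python
def maxrank(links, lam=0.8, damping=0.85, iters=50):
    """MaxRank-style power iteration: with weight `lam` a page inherits
    only from its highest-contributing backlink; with weight 1 - lam it
    receives the ordinary PageRank sum over all backlinks."""
    nodes = sorted({u for u, v in links} | {v for u, v in links})
    out = {n: sum(1 for u, _ in links if u == n) for n in nodes}
    back = {n: [u for u, v in links if v == n] for n in nodes}
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            contribs = [r[s] / out[s] for s in back[n]]  # out[s] >= 1 here
            best = max(contribs, default=0.0)
            total = sum(contribs)
            new[n] = (1 - damping) / len(nodes) + damping * (
                lam * best + (1 - lam) * total)
        r = new
    return r
```

Setting lam=0 recovers a plain damped PageRank-style iteration.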
|
1210.1630
|
Symbolic Planning and Control Using Game Theory and Grammatical
Inference
|
cs.RO cs.GT
|
This paper presents an approach that brings together game theory with
grammatical inference and discrete abstractions in order to synthesize control
strategies for hybrid dynamical systems performing tasks in partially unknown
but rule-governed adversarial environments. The combined formulation guarantees
that a system specification is met if (a) the true model of the environment is
in the class of models inferable from a positive presentation, (b) a
characteristic sample is observed, and (c) the task specification is
satisfiable given the capabilities of the system (agent) and the environment.
|
1210.1646
|
Social network markets: the influence of network structure when
consumers face decisions over many similar choices
|
cs.SI physics.soc-ph
|
  In social network markets, the act of consumer choice is governed not just by
the set of incentives described by conventional consumer demand theory, but by
the choices of others: an individual's payoff is an explicit function of the
actions of others. We observe two key empirical features of outcomes in social
network markets. First, a highly right-skewed,
non-Gaussian distribution of the number of times competing alternatives are
selected at a point in time. Second, there is turnover in the rankings of
popularity over time. We show here that such outcomes can arise either when
there is no alternative which exhibits inherent superiority in its attributes,
or when agents find it very difficult to discern any differences in quality
amongst the alternatives which are available so that it is as if no superiority
exists. These features appear to obtain, as a reasonable approximation, in many
social network markets. We examine the impact of network structure on both the
rank-size distribution of choices at a point in time, and on the life spans of
the most popular choices. We show that a key influence on outcomes is the
extent to which the network follows a hierarchical structure. It is the social
network properties of the markets, the meso-level structure, which determine
outcomes rather than the objective attributes of the products.
|
1210.1649
|
Conflict-driven ASP Solving with External Sources
|
cs.AI
|
Answer Set Programming (ASP) is a well-known problem solving approach based
on nonmonotonic logic programs and efficient solvers. To enable access to
external information, HEX-programs extend programs with external atoms, which
allow for a bidirectional communication between the logic program and external
sources of computation (e.g., description logic reasoners and Web resources).
Current solvers evaluate HEX-programs by a translation to ASP itself, in which
values of external atoms are guessed and verified after the ordinary answer set
computation. This elegant approach does not scale with the number of external
accesses in general, in particular in the presence of nondeterminism (which is
instrumental for ASP). In this paper, we present a novel, native algorithm for
evaluating HEX-programs which uses learning techniques. In particular, we
extend conflict-driven ASP solving techniques, which prevent the solver from
running into the same conflict again, from ordinary to HEX-programs. We show
how to gain additional knowledge from external source evaluations and how to
use it in a conflict-driven algorithm. We first target the uninformed case,
i.e., when we have no extra information on external sources, and then extend
our approach to the case where additional meta-information is available.
Experiments show that learning from external sources can significantly decrease
both the runtime and the number of considered candidate compatible sets.
|
1210.1689
|
A New Quantum Data Processing Inequality
|
quant-ph cs.IT math.IT
|
Quantum data processing inequality bounds the set of bipartite states that
can be generated by two far apart parties under local operations; Having access
to a bipartite state as a resource, two parties cannot locally transform it to
another bipartite state with a mutual information greater than that of the
resource state. But due to the additivity of quantum mutual information under
tensor product, the data processing inequality gives no bound when the parties
are provided with arbitrary number of copies of the resource state. In this
paper we introduce a measure of correlation on bipartite quantum states, called
maximal correlation, that is not additive and gives the same number when
computed for multiple copies. Then by proving a data processing inequality for
this measure, we find a bound on the set of states that can be generated under
local operations even when an arbitrary number of copies of the resource state
is available.
|
1210.1745
|
Providing an Object Allocation Algorithm in Distributed Databases Using
Efficient Factors
|
cs.DB cs.DC
|
Data replication is a common method used to improve the performance of data
access in distributed database systems. In this paper, we present an object
replication algorithm for distributed database systems (ORAD). We optimize the
replicated data created in distributed database systems by using the activity
functions of previous algorithms, modifying them with new techniques and
applying the ORAD algorithm for decision making. We propose the ORAD algorithm
using effective factors and observe its results in several representative
situations. Our objective is to propose an optimal method that serves read and
write requests at lower cost in distributed database systems. Finally, we
implement the ORAD and ADRW algorithms in a PC-based network system and
demonstrate that the ORAD algorithm is superior to the ADRW algorithm in terms
of average request servicing cost.
|
1210.1752
|
Constellation Design for Channels Affected by Phase Noise
|
cs.IT math.IT
|
In this paper we optimize constellation sets to be used for channels affected
by phase noise. The main objective is to maximize the achievable mutual
information of the constellation under a given power constraint. The mutual
information and pragmatic mutual information of a given constellation are
calculated approximately assuming that both the channel and phase noise are
white. Then a simulated annealing algorithm is used to jointly optimize the
constellation and the binary labeling. The performance of optimized
constellations is compared with conventional constellations showing
considerable gains in all system scenarios.
|
1210.1753
|
Intelligent Search Heuristics for Cost Based Scheduling
|
cs.AI math.OC
|
Nurse scheduling is a difficult optimization problem with multiple
constraints. There is extensive research in the literature solving the problem
using meta-heuristic approaches. In this paper, we investigate an intelligent
search heuristic that handles cost-based scheduling problems. The heuristic
demonstrates superior performance compared to the original algorithms used to
solve the problems described in Li et al. (2003) and Ozkarahan (1989) in terms
of the time needed to establish a feasible solution. Both problems can be
formulated as cost problems. The search heuristic consists of several phases of
search and input based on the cost of each assignment and how the assignment
will interact with the cost of the resources.
|
1210.1762
|
Constellation Design for Transmission over Nonlinear Satellite Channels
|
cs.IT math.IT
|
In this paper we use a variation of the simulated annealing algorithm for
optimizing two-dimensional constellations with 32 signals. The main objective
is to maximize the symmetric pragmatic capacity under the peak-power
constraint. The method allows the joint optimization of constellation and
binary labeling. We also investigate the performance of the optimized
constellation over a nonlinear satellite channel under additive white Gaussian
noise. We consider the performance over systems with and without
pre-distorters. In both cases the optimized constellations perform considerably
better than the conventional Amplitude Phase Shift Keying (APSK) modulations,
used in the current digital video broadcasting standard (DVB-S2) on satellite
channels. Based on our optimized constellations, we also propose a new labeling
for the 4+12+16-APSK constellation of the DVB-S2 standard which is Gray over
all rings.
|
1210.1766
|
Bayesian Inference with Posterior Regularization and applications to
Infinite Latent SVMs
|
cs.LG cs.AI stat.ME stat.ML
|
Existing Bayesian models, especially nonparametric Bayesian methods, rely on
specially conceived priors to incorporate domain knowledge for discovering
improved latent representations. While priors can affect posterior
distributions through Bayes' rule, imposing posterior regularization is
arguably more direct and in some cases more natural and general. In this paper,
we present regularized Bayesian inference (RegBayes), a novel computational
framework that performs posterior inference with a regularization term on the
desired post-data posterior distribution under an information theoretical
formulation. RegBayes is more flexible than the procedure that elicits expert
knowledge via priors, and it covers both directed Bayesian networks and
undirected Markov networks whose Bayesian formulation results in hybrid chain
graph models. When the regularization is induced from a linear operator on the
posterior distributions, such as the expectation operator, we present a general
convex-analysis theorem to characterize the solution of RegBayes. Furthermore,
we present two concrete examples of RegBayes, infinite latent support vector
machines (iLSVM) and multi-task infinite latent support vector machines
(MT-iLSVM), which explore the large-margin idea in combination with a
nonparametric Bayesian model for discovering predictive latent features for
classification and multi-task learning, respectively. We present efficient
inference methods and report empirical studies on several benchmark datasets,
which appear to demonstrate the merits inherited from both large-margin
learning and Bayesian nonparametrics. Such results were not previously
available, and they help push forward the interface between these two important
subfields, which have largely been treated in isolation in the community.
|
1210.1785
|
Relative Expressiveness of Defeasible Logics
|
cs.AI cs.LO
|
We address the relative expressiveness of defeasible logics in the framework
DL. Relative expressiveness is formulated as the ability to simulate the
reasoning of one logic within another logic. We show that such simulations must
be modular, in the sense that they also work if applied only to part of a
theory, in order to achieve a useful notion of relative expressiveness. We
present simulations showing that logics in DL with and without the capability
of team defeat are equally expressive. We also show that logics that handle
ambiguity differently -- ambiguity blocking versus ambiguity propagating --
have distinct expressiveness, with neither able to simulate the other under a
different formulation of expressiveness.
|
1210.1790
|
Everlasting Secrecy by Exploiting Non-Idealities of the Eavesdropper's
Receiver
|
cs.CR cs.IT math.IT
|
Secure communication over a memoryless wiretap channel in the presence of a
passive eavesdropper is considered. Traditional information-theoretic security
methods require an advantage for the main channel over the eavesdropper channel
to achieve a positive secrecy rate, which in general cannot be guaranteed in
wireless systems. Here, we exploit the non-linear conversion operation in the
eavesdropper's receiver to obtain the desired advantage - even when the
eavesdropper has perfect access to the transmitted signal at the input to their
receiver. The basic idea is to employ an ephemeral cryptographic key to force
the eavesdropper to conduct two operations, at least one of which is
non-linear, in a different order than the desired recipient. Since non-linear
operations are not necessarily commutative, the desired advantage can be
obtained and information-theoretic secrecy achieved even if the eavesdropper is
given the cryptographic key immediately upon transmission completion. In
essence, the lack of knowledge of the key during the short transmission time
inhibits the recording of the signal in such a way that the secret information
can never be extracted from it. The achievable secrecy rates for different
countermeasures that the eavesdropper might employ are evaluated. It is shown
that even in the case of an eavesdropper with uniformly better conditions
(channel and receiver quality) than the intended recipient, a positive secure
rate can be achieved.
|
1210.1791
|
An efficient algorithm for estimating state sequences in imprecise
hidden Markov models
|
cs.AI math.PR
|
We present an efficient exact algorithm for estimating state sequences from
outputs (or observations) in imprecise hidden Markov models (iHMM), where both
the uncertainty linking one state to the next, and that linking a state to its
output, are represented using coherent lower previsions. The notion of
independence we associate with the credal network representing the iHMM is that
of epistemic irrelevance. We consider as best estimates for state sequences the
(Walley--Sen) maximal sequences for the posterior joint state model conditioned
on the observed output sequence, associated with a gain function that is the
indicator of the state sequence. This corresponds to (and generalises) finding
the state sequence with the highest posterior probability in HMMs with precise
transition and output probabilities (pHMMs). We argue that the computational
complexity is at worst quadratic in the length of the Markov chain, cubic in
the number of states, and essentially linear in the number of maximal state
sequences. For binary iHMMs, we investigate experimentally how the number of
maximal state sequences depends on the model parameters. We also present a
simple toy application in optical character recognition, demonstrating that our
algorithm can be used to robustify the inferences made by precise probability
models.
|
1210.1840
|
A Further (Itakura-Saito/beta=0) Bi-stochasticization and Associated
Clustering/Regionalization of the 3,107-County 1995-2000 U. S. Migration
Network
|
physics.soc-ph cs.SI stat.AP
|
We extend to the beta-divergence (Itakura-Saito) case beta=0 the comparative
bi-stochasticization analyses, previously conducted (arXiv:1208.3428) for the
(Kullback-Leibler) beta=1 and (squared-Euclidean) beta=2 cases, of the
3,107-county 1995-2000 U. S. migration network. A heuristic, "greedy"
algorithm is devised. While the largest 25,329 entries of the 735,531 non-zero
entries of the bi-stochasticized table, in the beta=1 case, are required to
complete the widely-applied two-stage (double-standardization and
strong-component hierarchical clustering) procedure, 105,363 of the 735,531 are
needed (reflective of greater uniformity of entries) in the beta=0 instance.
The North Carolina counties of Mecklenburg (Charlotte) and Wake (Raleigh) are
considerably more cosmopolitan, in relative terms, in the beta=0 study. The
Colorado
county of El Paso (Colorado Springs) replaces the Florida Atlantic county of
Brevard (the "Space Coast") as the most cosmopolitan, with Brevard becoming the
second-most. Honolulu County splinters away from the other four (still-grouped)
Hawaiian counties, becoming the fifth most cosmopolitan county nation-wide. The
five counties of Rhode Island remain intact as a regional entity, but the eight
counties of Connecticut fragment, leaving only five counties clustered.
|
1210.1841
|
The Arab Spring: A Simple Compartmental Model for the Dynamics of a
Revolution
|
math.DS cs.SI physics.soc-ph
|
The self-immolation of Mohamed Bouazizi on December 17, 2010 in the small
Tunisian city of Sidi Bouzid, set off a sequence of events culminating in the
revolutions of the Arab Spring. It is widely believed that the Internet and
social media played a critical role in the growth and success of protests that
led to the downfall of the regimes in Egypt and Tunisia. However, the precise
mechanisms by which these new media affected the course of events remain
unclear. We introduce a simple compartmental model for the dynamics of a
revolution in a dictatorial regime such as Tunisia or Egypt which takes into
account the role of the Internet and social media. An elementary mathematical
analysis of the model identifies four main parameter regions: stable police
state, meta-stable police state, unstable police state, and failed state. We
illustrate how these regions capture, at least qualitatively, a wide range of
scenarios observed in the context of revolutionary movements by considering the
revolutions in Tunisia and Egypt, as well as the situation in Iran, China, and
Somalia, as case studies. We pose four questions about the dynamics of the Arab
Spring revolutions and formulate answers informed by the model. We conclude
with some possible directions for future work.
|
1210.1892
|
On Constant Gaps for the Two-way Gaussian Interference Channel
|
cs.IT math.IT
|
We introduce the two-way Gaussian interference channel in which there are
four nodes with four independent messages: two messages to be transmitted over
a Gaussian interference channel in the $\rightarrow$ direction, simultaneously
with two messages to be transmitted over an interference channel (in-band,
full-duplex) in the $\leftarrow$ direction. In such a two-way network, all
nodes are transmitters and receivers of messages, allowing them to adapt
current channel inputs to previously received channel outputs. We propose two
new outer bounds on the symmetric sum-rate for the two-way Gaussian
interference channel with complex channel gains: one under full adaptation (all
4 nodes are permitted to adapt inputs to previous outputs), and one under
partial adaptation (only 2 nodes are permitted to adapt, the other 2 are
restricted). We show that simple non-adaptive schemes such as the Han and
Kobayashi scheme, where inputs are functions of messages only and not past
outputs, utilized in each direction are sufficient to achieve within a constant
gap of these fully or partially adaptive outer bounds for all channel regimes.
|
1210.1904
|
Self-dual Permutation Codes of Finite Groups in Semisimple Case
|
cs.IT math.IT math.RT
|
The existence and construction of self-dual codes in a permutation module of
a finite group in the semisimple case are described from two aspects: one from
the point of view of the composition factors, which are self-dual modules, and
the other from the point of view of the Galois group of the coefficient field.
|
1210.1915
|
The Limitation of Random Network Coding
|
cs.IT cs.NI math.IT
|
It is already known that in a multicast (single-source, multiple-sink)
network, random linear network coding can achieve the maximum-flow upper bound.
In this paper, we investigate how random linear network coding behaves in the
general multi-source multi-sink case, where each sink has different demands,
and characterize all achievable rates of random linear network coding by a
simple maximum-flow condition.
|
1210.1916
|
A comparative study on face recognition techniques and neural network
|
cs.CV
|
In modern times, face recognition has become one of the key aspects of
computer vision. There are at least two reasons for this trend: the first is
the commercial and law enforcement applications, and the second is the
availability of feasible technologies after years of research. Due to the very
nature of the problem, computer scientists, neuroscientists and psychologists
all share a keen interest in this field. In plain words, it is a computer
application for automatically identifying a person from a still image or video
frame. One of the ways to accomplish this is by comparing selected features
from the image with a facial database. There are hundreds if not thousands of
factors associated with this. In this paper some of the most common techniques
available, including applications of neural networks in facial recognition, are
studied and compared with respect to their performance.
|
1210.1928
|
Information fusion in multi-task Gaussian processes
|
stat.ML cs.AI cs.LG
|
This paper evaluates heterogeneous information fusion using multi-task
Gaussian processes in the context of geological resource modeling.
Specifically, it empirically demonstrates that information integration across
heterogeneous information sources leads to superior estimates of all the
quantities being modeled, compared to modeling them individually. Multi-task
Gaussian processes provide a powerful approach for simultaneous modeling of
multiple quantities of interest while taking correlations between these
quantities into consideration. Experiments are performed on large-scale real
sensor data.
|
1210.1931
|
D-FLAT: Declarative Problem Solving Using Tree Decompositions and
Answer-Set Programming
|
cs.AI cs.LO
|
In this work, we propose Answer-Set Programming (ASP) as a tool for rapid
prototyping of dynamic programming algorithms based on tree decompositions. In
fact, many such algorithms have been designed, but only a few of them found
their way into implementation. The main obstacle is the lack of easy-to-use
systems which (i) take care of building a tree decomposition and (ii) provide
an interface for declarative specifications of dynamic programming algorithms.
In this paper, we present D-FLAT, a novel tool that relieves the user of having
to handle all the technical details concerned with parsing, tree decomposition,
the handling of data structures, etc. Instead, it is only the dynamic
programming algorithm itself which has to be specified in the ASP language.
D-FLAT employs an ASP solver in order to compute the local solutions in the
dynamic programming algorithm. In the paper, we give a few examples
illustrating the use of D-FLAT and describe the main features of the system.
Moreover, we report experiments which show that ASP-based D-FLAT encodings for
some problems outperform monolithic ASP encodings on instances of small
treewidth.
|
1210.1935
|
Saddle-Node Bifurcation Associated with Parasitic Inductor Resistance in
Boost Converters
|
cs.SY math.DS nlin.CD
|
Saddle-node bifurcation occurs in a boost converter when parasitic inductor
resistance is modeled. Closed-form critical conditions of the bifurcation are
derived. If the parasitic inductor resistance is modeled, the saddle-node
bifurcation occurs in the voltage mode control or in the current mode control
with the voltage loop closed, but not in the current mode control with the
voltage loop open. If the parasitic inductor resistance is not modeled, the
saddle-node bifurcation does not occur, and one may be misled by the wrong
dynamics and the wrong steady-state solutions. The saddle-node bifurcation
still exists even in a boost converter with a popular type-III compensator.
When the saddle-node bifurcation occurs, multiple steady-state solutions may
coexist. The converter may operate with a voltage jump from one solution to
another. Care should be taken in the compensator design to ensure that only the
desired solution is stabilized. In industry practice, the solution with a
higher duty cycle (and thus the saddle-node bifurcation) may be prevented by
placing a limitation on the maximum duty cycle.
|
1210.1940
|
Variable-length Hill Cipher with MDS Key Matrix
|
cs.CR cs.IT math.IT
|
The Hill Cipher is a classical symmetric cipher which breaks plaintext into
blocks of size m and then multiplies each block by an m by m key matrix to
yield ciphertext. However, it is well known that the Hill cipher succumbs to
cryptanalysis relatively easily. As a result, there have been efforts to
strengthen the cipher through the use of various techniques, e.g., permuting rows
and columns of the key matrix to encrypt each plaintext vector with a new key
matrix. In this paper, we strengthen the security of the Hill cipher against a
known-plaintext attack by encrypting each plaintext matrix by a variable-length
key matrix obtained from a Maximum Distance Separable (MDS) master key matrix.
|
1210.1959
|
Modeling and Instability of Average Current Control
|
cs.SY math.DS nlin.CD
|
Dynamics and stability of average current control of DC-DC converters are
analyzed by sampled-data modeling. Orbital stability is studied and found to be
unrelated to the ripple size of the orbit. Compared with averaged modeling,
the sampled-data modeling is more accurate and systematic. An unstable range of
compensator pole is found by simulations, and is predicted by sampled-data
modeling and harmonic balance modeling.
|
1210.1960
|
Feature Selection via L1-Penalized Squared-Loss Mutual Information
|
stat.ML cs.LG
|
Feature selection is a technique to screen out less important features. Many
existing supervised feature selection algorithms use redundancy and relevancy
as the main criteria to select features. However, feature interaction,
potentially a key characteristic in real-world problems, has not received much
attention. As an attempt to take feature interaction into account, we propose
L1-LSMI, an L1-regularization based algorithm that maximizes a squared-loss
variant of mutual information between selected features and outputs. Numerical
results show that L1-LSMI performs well in handling redundancy, detecting
non-linear dependency, and considering feature interaction.
|
1210.1975
|
Some scale-free networks could be robust under the selective node
attacks
|
physics.soc-ph cs.NI cs.SI
|
It is a mainstream idea that a scale-free network would be fragile under
selective attacks. The Internet is a typical scale-free network in the real
world, but it never collapses under the selective attacks of computer viruses
and hackers. This phenomenon differs from the deduction of the idea above,
because that deduction assumes the same cost to delete an arbitrary node. Hence
this paper discusses the behavior of scale-free networks under selective node
attacks with different costs. Through experiments on five complex networks, we
show that a scale-free network can be robust under selective node attacks;
furthermore, the more compact the network is and the larger its average degree,
the more robust the network is; with the same average degree, the more compact
the network is, the more robust it is. This result would enrich the theory of
network invulnerability, can be used to build robust social, technological and
biological networks, and also has the potential to help find drug targets.
|
1210.1983
|
Reply to Comments on Neuroelectrodynamics: Where are the Real Conceptual
Pitfalls?
|
cs.NE nlin.AO physics.bio-ph q-bio.NC
|
The fundamental, powerful process of computation in the brain has been widely
misunderstood. The paper [1] associates the general failure to build
intelligent thinking machines with current reductionist principles of temporal
coding and advocates for a change in paradigm regarding the brain analogy.
Since fragments of information are stored in proteins which can shift between
several structures to perform their function, the biological substrate is
actively involved in physical computation. The intrinsic nonlinear dynamics of
action potentials and synaptic activities maintain physical interactions within
and between neurons in the brain. During these events the required information
is exchanged between molecular structures (proteins) which store fragments of
information and the generated electric flux which carries and integrates
information in the brain. The entire process of physical interaction explains
how the brain actively creates or experiences meaning. This process of
interaction during an action potential generation can be simply seen as the
moment when the neuron solves a many-body problem. A neuroelectrodynamic theory
shows that the neuron solves equations rather than exclusively computes
functions. With the main focus on temporal patterns, the spike timing dogma
(STD) has neglected important forms of computation which do occur inside
neurons. In addition, artificial neural models have missed the most important
part since the real super-computing power of the brain has its origins in
computations that occur within neurons.
|
1210.1996
|
Connect and win: The role of social networks in political elections
|
physics.soc-ph cs.SI
|
Many networks do not live in isolation but are strongly interacting, with
profound consequences on their dynamics. Here, we consider the case of two
interacting social networks and, in the context of a simple model, we address
the case of political elections. Each network represents a competing party and
every agent on the election day can choose to be either active in one of the
two networks (vote for the corresponding party) or to be inactive in both (not
vote). The opinion dynamics during the election campaign is described through a
simulated annealing algorithm. We find that for a large region of the parameter
space the result of the competition between the two parties allows for the
existence of pluralism in the society, where both parties have a finite share
of the votes. The central result is that a densely connected social network is
key for the final victory of a party. However, small committed minorities can
play a crucial role, and even reverse the election outcome.
|
1210.2018
|
Community Structure Detection in Complex Networks with Partial
Background Information
|
cs.SI physics.soc-ph
|
Constrained clustering has been well studied in the unsupervised learning
community. However, how to encode constraints into community structure detection,
within complex networks, remains a challenging problem. In this paper, we
propose a semi-supervised learning framework for community structure detection.
This framework implicitly encodes the must-link and cannot-link constraints by
modifying the adjacency matrix of the network, which can also be regarded as
de-noising the consensus matrix of community structures. Our proposed method
gives consideration to both the topology and the functions (background
information) of complex network, which enhances the interpretability of the
results. The comparisons performed on both the synthetic benchmarks and the
real-world networks show that the proposed framework can significantly improve
the community detection performance with few constraints, which makes it an
attractive methodology in the analysis of complex networks.
|
1210.2019
|
On the relation of nonanticipative rate distortion function and
filtering theory
|
cs.IT cs.SY math.IT
|
In this paper the relation between nonanticipative rate distortion function
(RDF) and Bayesian filtering theory is investigated using the topology of weak
convergence of probability measures on Polish spaces. The relation is
established via an optimization on the space of conditional distributions of
the so-called directed information subject to fidelity constraints. Existence
of the optimal reproduction distribution of the nonanticipative RDF is shown,
while the optimal nonanticipative reproduction conditional distribution for
stationary processes is derived in closed form. The realization procedure of
nonanticipative RDF which is equivalent to joint-source channel matching for
symbol-by-symbol transmission is described, while an example is introduced to
illustrate the concepts.
|
1210.2035
|
Synthesis of Reactive Protocols for Vehicle-to-Vehicle Communication
|
cs.SY
|
We present a synthesis method for communication protocols for active safety
applications that satisfy certain formal specifications on quality of service
requirements. The protocols are developed to provide reliable communication
services for automobile active safety applications. The synthesis method
transforms a specification into a distributed implementation of senders and
receivers that together satisfy the quality of service requirements by
transmitting messages over an unreliable medium. We develop a specification
language and an execution model for the implementations, and demonstrate the
viability of our method by developing a protocol for a traffic scenario in
which a car runs a red light at a busy intersection.
|
1210.2051
|
Anomalous Vacillatory Learning
|
math.LO cs.LG cs.LO
|
In 1986, Osherson, Stob and Weinstein asked whether two variants of anomalous
vacillatory learning, TxtFex^*_* and TxtFext^*_*, could be distinguished. In
both, a machine is permitted to vacillate between a finite number of hypotheses
and to make a finite number of errors. TxtFext^*_*-learning requires that
hypotheses output infinitely often must describe the same finite variant of the
correct set, while TxtFex^*_*-learning permits the learner to vacillate between
finitely many different finite variants of the correct set. In this paper we
show that TxtFex^*_* \neq TxtFext^*_*, thereby answering the question posed by
Osherson, \textit{et al}. We prove this in a strong way by exhibiting a family
in TxtFex^*_2 \setminus {TxtFext}^*_*.
|
1210.2067
|
An Approximation of the First Order Marcum $Q$-Function with Application
to Network Connectivity Analysis
|
cs.IT math.IT
|
An exponential-type approximation of the first order Marcum $Q$-function is
presented, which is robust to changes in its first argument and can easily be
integrated with respect to the second argument. Such characteristics are
particularly useful in network connectivity analysis. The proposed
approximation is exact in the limit of small first argument of the Marcum
$Q$-function, in which case the optimal parameters can be obtained
analytically. For larger values of the first argument, an optimization problem
is solved, and the parameters can be accurately represented using regression
analysis. Numerical results indicate that the proposed methods result in
approximations very close to the actual Marcum $Q$-function for small and
moderate values of the first argument. We demonstrate the accuracy of the
approximation by using it to analyze the connectivity properties of random ad
hoc networks operating in a Rician fading environment.
|
1210.2085
|
Privacy Aware Learning
|
stat.ML cs.IT cs.LG math.IT
|
We study statistical risk minimization problems under a privacy model in
which the data is kept confidential even from the learner. In this local
privacy framework, we establish sharp upper and lower bounds on the convergence
rates of statistical estimation procedures. As a consequence, we exhibit a
precise tradeoff between the amount of privacy the data preserves and the
utility, as measured by convergence rate, of any statistical estimator or
learning procedure.
|
1210.2107
|
On Optimal TCM Encoders
|
cs.IT math.IT
|
An asymptotically optimal trellis-coded modulation (TCM) encoder requires the
joint design of the encoder and the binary labeling of the constellation. Since
analytical approaches are unknown, the only available solution is to perform an
exhaustive search over the encoder and the labeling. For large constellation
sizes and/or many encoder states, however, an exhaustive search is infeasible.
Traditional TCM designs overcome this problem by using a labeling that follows
the set-partitioning principle and by performing an exhaustive search over the
encoders. In this paper we study binary labelings for TCM and show how they can
be grouped into classes, which considerably reduces the search space in a joint
design. For 8-ary constellations, the number of different binary labelings that
must be tested is reduced from 8!=40320 to 240. For the particular case of an
8-ary pulse amplitude modulation constellation, this number is further reduced
to 120 and for 8-ary phase shift keying to only 30. An algorithm to generate
one labeling in each class is also introduced. Asymptotically optimal TCM
encoders are tabulated which are up to 0.3 dB better than the previously best
known encoders.
|
1210.2110
|
Repairable Replication-based Storage Systems Using Resolvable Designs
|
cs.IT math.IT
|
We consider the design of regenerating codes for distributed storage systems
at the minimum bandwidth regeneration (MBR) point. The codes allow for a repair
process that is exact and uncoded, but table-based. These codes were introduced
in prior work and consist of an outer MDS code followed by an inner fractional
repetition (FR) code where copies of the coded symbols are placed on the
storage nodes. The main challenge in this domain is the design of the inner FR
code.
In our work, we consider generalizations of FR codes, by establishing their
connection with a family of combinatorial structures known as resolvable
designs. Our constructions based on affine geometries, Hadamard designs and
mutually orthogonal Latin squares allow the design of systems where a new node
can be exactly regenerated by downloading $\beta \geq 1$ packets from a subset
of the surviving nodes (prior work only considered the case of $\beta = 1$).
Our techniques allow the design of systems over a large range of parameters.
Specifically, the repetition degree of a symbol, which dictates the resilience
of the system, can be varied over a large range in a simple manner. Moreover,
the actual table needed for the repair can also be implemented in a rather
straightforward way. Furthermore, we answer an open question posed in prior
work by demonstrating the existence of codes with parameters that are not
covered by Steiner systems.
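As a toy instance of the design-theoretic connection (the smallest convenient case, not one of the paper's tabulated systems), the affine plane AG(2,3) is a resolvable design whose parallel classes of lines give a fractional repetition layout: 9 coded symbols placed on 12 storage nodes of capacity 3, each symbol repeated 4 times, and any two nodes sharing at most one symbol:

```python
from itertools import product

q = 3
symbols = list(product(range(q), range(q)))   # 9 coded symbols = points of AG(2,3)

# Lines of AG(2,3) grouped into q + 1 = 4 parallel classes (the resolution):
# one class per slope a (lines y = a*x + b), plus the class of verticals x = c.
classes = [[frozenset((x, (a * x + b) % q) for x in range(q)) for b in range(q)]
           for a in range(q)]
classes.append([frozenset((c, y) for y in range(q)) for c in range(q)])

# Storage nodes: one per line, each holding the 3 symbols on that line.
nodes = [line for cls in classes for line in cls]

# Repetition degree: every symbol lies on exactly q + 1 = 4 nodes.
degree = {s: sum(s in node for node in nodes) for s in symbols}
```

Nodes within one parallel class are disjoint, so a failed node can be rebuilt by contacting nodes from the other classes and downloading one of its symbols from each.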
|
1210.2123
|
Privacy Against Statistical Inference
|
cs.IT cs.CR math.IT
|
We propose a general statistical inference framework to capture the privacy
threat incurred by a user that releases data to a passive but curious
adversary, given utility constraints. We show that applying this general
framework to the setting where the adversary uses the self-information cost
function naturally leads to a non-asymptotic information-theoretic approach for
characterizing the best achievable privacy subject to utility constraints.
Based on these results we introduce two privacy metrics, namely average
information leakage and maximum information leakage. We prove that under both
metrics the resulting design problem of finding the optimal mapping from the
user's data to a privacy-preserving output can be cast as a modified
rate-distortion problem which, in turn, can be formulated as a convex program.
Finally, we compare our framework with differential privacy.
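For intuition about the rate-distortion view, the toy below brute-forces the smallest case: a uniform binary source, expected Hamming distortion as the utility constraint, and mutual information standing in for average information leakage. The grid search and the names (`leakage`, `mutual_info`) are ours, not the paper's algorithm:

```python
import math

def mutual_info(pxy):
    """Mutual information (bits) of a finite joint distribution given as a 2-D list."""
    px = [sum(row) for row in pxy]
    py = [sum(row[j] for row in pxy) for j in range(len(pxy[0]))]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(pxy) for j, p in enumerate(row) if p > 0)

PX = [0.5, 0.5]  # uniform binary source

def leakage(budget, steps=101):
    """Smallest I(X;Y) over binary mappings meeting an expected-distortion budget."""
    best = 1.0
    grid = [k / (steps - 1) for k in range(steps)]
    for a in grid:          # P(Y=1 | X=0)
        for b in grid:      # P(Y=0 | X=1)
            if PX[0] * a + PX[1] * b <= budget:      # utility constraint
                pxy = [[PX[0] * (1 - a), PX[0] * a],
                       [PX[1] * b, PX[1] * (1 - b)]]
                best = min(best, mutual_info(pxy))
    return best
```

For this source the brute-force minimum tracks the closed form 1 - h(D): one bit of leakage at zero distortion, falling to zero once the released output may be made independent of the data.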
|
1210.2126
|
Lists that are smaller than their parts: A coding approach to tunable
secrecy
|
cs.IT cs.CR math.IT
|
We present a new information-theoretic definition and associated results,
based on list decoding in a source coding setting. We begin by presenting
list-source codes, which naturally map a key length (entropy) to list size. We
then show that such codes can be analyzed in the context of a novel
information-theoretic metric, \epsilon-symbol secrecy, that encompasses both
the one-time pad and traditional rate-based asymptotic metrics, but, like most
cryptographic constructs, can be applied in non-asymptotic settings. We derive
fundamental bounds for \epsilon-symbol secrecy and demonstrate how these bounds
can be achieved with MDS codes when the source is uniformly distributed. We
discuss applications and implementation issues of our codes.
|
1210.2132
|
Equalitarian Societies are Economically Impossible
|
physics.soc-ph cs.SI nlin.AO q-fin.GN
|
The inequality of wealth distribution is a universal phenomenon in civilized
nations, and it is often attributed to the Matthew effect, that is, the rich
get richer and the poor get poorer. Some philosophers have deemed this
phenomenon unjust and tried to found human civilization upon an even
distribution of wealth. Noting that 1) the emergence of centralism marks the
starting point of human civilization, i.e., people in a society became
organized hierarchically, and 2) the inequality of wealth emerged
simultaneously, this paper proposes a wealth distribution model based on a
hidden tree structure, from the viewpoint of complex networks. The model
treats the organizational structure of a society as a hidden tree and
cooperation among people as transactions on that tree, and thereby explains
the distribution of wealth. It shows that the scale-free character of wealth
distribution can be produced by the cascade control of human society; that
is, the inequality of wealth is parasitic on social organization, so that any
action aimed at eliminating unequal wealth distribution would destroy social
or economic structures and lead to the collapse of the economic system, and
would therefore fail.
|
1210.2143
|
Degrees of Freedom of Two-Hop Wireless Networks: "Everyone Gets the
Entire Cake"
|
cs.IT math.IT
|
We show that fully connected two-hop wireless networks with K sources, K
relays and K destinations have K degrees of freedom both in the case of
time-varying channel coefficients and in the case of constant channel
coefficients (in which case the result holds for almost all values of constant
channel coefficients). Our main contribution is a new achievability scheme
which we call Aligned Network Diagonalization. This scheme allows the data
streams transmitted by the sources to undergo a diagonal linear transformation
from the sources to the destinations, thus being received free of interference
by their intended destination. In addition, we extend our scheme to multi-hop
networks with fully connected hops, and multi-hop networks with MIMO nodes, for
which the degrees of freedom are also fully characterized.
|
1210.2144
|
Network Compression: Memory-Assisted Universal Coding of Sources with
Correlated Parameters
|
cs.IT math.IT
|
In this paper, we propose {\em distributed network compression via memory}.
We consider two spatially separated sources with correlated unknown source
parameters. We wish to study the universal compression of a sequence of length
$n$ from one of the sources provided that the decoder has access to (i.e.,
memorized) a sequence of length $m$ from the other source. In this setup, the
correlation does not arise from symbol-by-symbol dependency of two outputs from
the two sources (as in the Slepian-Wolf setup). Instead, the two sequences are
correlated because they originate from two sources with \emph{unknown}
correlated parameters. The finite-length nature of the compression problem at
hand requires considering a notion of almost lossless source coding, where
coding incurs an error probability $p_e(n)$ that vanishes as sequence length
$n$ grows to infinity. We obtain bounds on the redundancy of almost lossless
codes when the decoder has access to a random memory of length $m$ as a
function of the sequence length $n$ and the permissible error probability
$p_e(n)$. Our results demonstrate that distributed network compression via
memory has the potential to significantly improve over conventional end-to-end
compression when sufficiently large memory from previous communications is
available to the decoder.
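The flavor of memory-assisted compression can be demonstrated with an off-the-shelf stand-in: zlib's preset dictionary plays the role of the decoder's memorized sequence from the correlated source (an illustration of the benefit, not the paper's coding scheme):

```python
import zlib

# "Memory": a previously communicated sequence from the correlated source.
memory = b"the quick brown fox jumps over the lazy dog. " * 20

# Fresh sequence to compress -- correlated with the memory, not identical to it.
sequence = b"the quick brown fox naps beside the lazy dog. " * 5

def deflate(data, zdict=None):
    comp = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
    return comp.compress(data) + comp.flush()

plain = deflate(sequence)             # conventional end-to-end compression
assisted = deflate(sequence, memory)  # encoder and decoder share the memory

# The decoder can invert the assisted stream only because it holds the memory.
inflater = zlib.decompressobj(zdict=memory)
assert inflater.decompress(assisted) == sequence
assert len(assisted) < len(plain)     # shared memory buys real redundancy savings
```

Here the shared memory lets the encoder reference phrases it has never transmitted, which is exactly the saving that end-to-end compression of the short sequence cannot obtain.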
|
1210.2146
|
Amplitude Space Sharing among the Macro-Cell and Small-Cell Users
|
cs.IT math.IT
|
The surging demand for wireless data services will soon exceed the
capability of the current homogeneous cellular architecture. An emerging
solution is to overlay small-cell networks on macro-cell networks. In this
paper, we propose an amplitude space sharing (ASS) method for the macro-cell
user and small-cell users. Through transmit-layer design and data-rate
optimization, the signals and interference are guaranteed to be separable at
each receiver and the network sum-rate is maximized. Han-Kobayashi coding is
employed and the optimal power allocation is derived for the single
small-cell scenario, and a simple ASS transmission scheme is developed for
scenarios with multiple small cells. Simulation results show substantial
gains over other interference management schemes.
|
1210.2159
|
Strong Coordination with Polar Codes
|
cs.IT math.IT
|
In this paper, we design explicit codes for strong coordination in two-node
networks. Specifically, we consider a two-node network in which the action
imposed by nature is binary and uniform, and the action to coordinate is
obtained via a symmetric discrete memoryless channel. By observing that polar
codes are useful for channel resolvability over binary symmetric channels, we
prove that nested polar codes achieve a subset of the strong coordination
capacity region, and therefore provide a constructive and low complexity
solution for strong coordination.
|
1210.2162
|
Semisupervised Classifier Evaluation and Recalibration
|
cs.LG cs.CV
|
How many labeled examples are needed to estimate a classifier's performance
on a new dataset? We study the case where data is plentiful, but labels are
expensive. We show that by making a few reasonable assumptions on the structure
of the data, it is possible to estimate performance curves, with confidence
bounds, using a small number of ground truth labels. Our approach, which we
call Semisupervised Performance Evaluation (SPE), is based on a generative
model for the classifier's confidence scores. In addition to estimating the
performance of classifiers on new datasets, SPE can be used to recalibrate a
classifier by re-estimating the class-conditional confidence distributions.
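A minimal sketch of the idea, assuming (as a stand-in for the paper's generative model) that confidence scores follow a two-component Gaussian mixture: fit the mixture on plentiful unlabeled scores with EM, spend a ground-truth label only to orient the components, then read performance estimates off the fitted model:

```python
import math
import random

random.seed(0)

# Plentiful unlabeled confidence scores; labels exist but are expensive.
neg = [random.gauss(0.3, 0.1) for _ in range(500)]   # hidden ground truth: class 0
pos = [random.gauss(0.7, 0.1) for _ in range(500)]   # hidden ground truth: class 1
scores = neg + pos

def fit_two_gaussians(xs, iters=60):
    """Vanilla EM for a 1-D two-component Gaussian mixture."""
    mu, sd, w = [min(xs), max(xs)], [0.2, 0.2], [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sd[k] ** 2)) / sd[k]
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sd[k] = max(1e-6, math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                            for r, x in zip(resp, xs)) / nk))
            w[k] = nk / len(xs)
    return w, mu, sd

w, mu, sd = fit_two_gaussians(scores)

# A single ground-truth label is enough here to orient the components.
positive_comp = 1 if abs(pos[0] - mu[1]) < abs(pos[0] - mu[0]) else 0
lo, hi = 1 - positive_comp, positive_comp

def ncdf(x, m, s):
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

# Estimated accuracy of thresholding scores at 0.5, read off the fitted model.
acc = w[lo] * ncdf(0.5, mu[lo], sd[lo]) + w[hi] * (1 - ncdf(0.5, mu[hi], sd[hi]))
```

The accuracy estimate comes entirely from the fitted score distributions; the labeled example is used only to resolve which mixture component is the positive class.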
|
1210.2164
|
ET-LDA: Joint Topic Modeling For Aligning, Analyzing and Sensemaking of
Public Events and Their Twitter Feeds
|
cs.LG cs.AI cs.SI physics.soc-ph
|
Social media channels such as Twitter have emerged as popular platforms for
crowds to respond to public events such as speeches, sports and debates. While
this promises tremendous opportunities to understand and make sense of the
reception of an event from social media, these promises come entwined with
significant technical challenges. In particular, given an event and an
associated large scale collection of tweets, we need approaches to effectively
align tweets and the parts of the event they refer to. This in turn raises
questions about how to segment the event into smaller yet meaningful parts, and
how to determine whether a tweet is a general one about the entire event or a
specific one aimed at a particular segment of the event. In this work, we
present ET-LDA, an effective method for aligning an event and its tweets
through joint statistical modeling of topical influences from the events and
their associated tweets. The model enables the automatic segmentation of the
events and the characterization of tweets into two categories: (1) episodic
tweets that respond specifically to the content in the segments of the events,
and (2) steady tweets that respond generally about the events. We present an
efficient inference method for this model, and a comprehensive evaluation of
its effectiveness over existing methods. In particular, through a user study,
we demonstrate that users find the topics, the segments, the alignment, and the
episodic tweets discovered by ET-LDA to be of higher quality and more
interesting as compared to the state-of-the-art, with improvements in the range
of 18-41%.
|
1210.2179
|
Fast Online EM for Big Topic Modeling
|
cs.LG
|
The expectation-maximization (EM) algorithm can compute the
maximum-likelihood (ML) or maximum a posteriori (MAP) point estimate of the
mixture models or latent variable models such as latent Dirichlet allocation
(LDA), which has been one of the most popular probabilistic topic modeling
methods in the past decade. However, batch EM has high time and space
complexities to learn big LDA models from big data streams. In this paper, we
present a fast online EM (FOEM) algorithm that infers the topic distribution
from the previously unseen documents incrementally with constant memory
requirements. Within the stochastic approximation framework, we show that FOEM
can converge to the local stationary point of the LDA's likelihood function. By
dynamic scheduling for speed and parameter streaming for low memory usage,
FOEM is more efficient than state-of-the-art online LDA algorithms for
lifelong topic modeling tasks involving both big data and big models (a.k.a.
big topic modeling) on a single PC.
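The constant-memory idea can be sketched on a model far simpler than LDA (a two-component Gaussian mixture rather than a topic model; the stochastic-approximation update is the shared ingredient). Only a fixed-size vector of expected sufficient statistics survives between samples:

```python
import math
import random

random.seed(1)

def stream(n):
    """Data stream: two-component Gaussian mixture, weights 0.4/0.6, unit variance."""
    for _ in range(n):
        yield random.gauss(2.0, 1.0) if random.random() < 0.6 else random.gauss(-1.0, 1.0)

# Constant-size state: mixture weights, means, and expected sufficient statistics.
w, mu = [0.5, 0.5], [-0.5, 0.5]
S0, S1 = [0.5, 0.5], [-0.25, 0.25]

for t, x in enumerate(stream(20000), start=1):
    rho = (t + 100) ** -0.6               # decaying step size (stochastic approximation)
    p = [w[k] * math.exp(-(x - mu[k]) ** 2 / 2.0) for k in (0, 1)]
    r = [p[0] / (p[0] + p[1]), p[1] / (p[0] + p[1])]   # E-step on the new sample only
    for k in (0, 1):                      # blend the new statistics into the running ones
        S0[k] += rho * (r[k] - S0[k])
        S1[k] += rho * (r[k] * x - S1[k])
    w = [S0[0] / (S0[0] + S0[1]), S0[1] / (S0[0] + S0[1])]   # M-step from statistics
    mu = [S1[0] / S0[0], S1[1] / S0[1]]
```

Each arriving sample triggers one E-step on that sample alone and one cheap M-step from the blended statistics, so memory use is independent of the stream length.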
|
1210.2182
|
Approximate Ergodic Capacity of a Class of Fading 2-user 2-hop Networks
|
cs.IT math.IT
|
We consider a fading AWGN 2-user 2-hop network where the channel coefficients
are independent and identically distributed (i.i.d.) drawn from a continuous
distribution and vary over time. For a broad class of channel distributions, we
characterize the ergodic sum capacity to within a constant number of
bits/sec/Hz, independent of signal-to-noise ratio. The achievability follows
from the analysis of an interference neutralization scheme where the relays are
partitioned into $M$ pairs, and interference is neutralized separately by each
pair of relays. When $M=1$, the proposed ergodic interference neutralization
characterizes the ergodic sum capacity to within $4$ bits/sec/Hz for i.i.d.
uniform phase fading and approximately $4.7$ bits/sec/Hz for i.i.d. Rayleigh
fading. We further show that this gap can be tightened to $4\log \pi-4$
bits/sec/Hz (approximately $2.6$) for i.i.d. uniform phase fading and $4-4\log(
\frac{3\pi}{8})$ bits/sec/Hz (approximately $3.1$) for i.i.d. Rayleigh fading
in the limit of large $M$.
|