id | title | categories | abstract |
|---|---|---|---|
1306.6737 | Digital Image Tamper Detection Techniques - A Comprehensive Study | cs.CR cs.CV | Photographs are considered to be among the most powerful and trustworthy
media of expression. For a long time they were accepted as proof of evidence
in fields as varied as journalism, forensic investigation, military
intelligence, scientific research and publication, crime detection and legal
proceedings, insurance claim investigation, and medical imaging. Today,
digital images have largely replaced conventional photographs in every sphere
of life but, unfortunately, they seldom enjoy the credibility of their
conventional counterparts, owing to rapid advances in the field of digital
image processing. The increasing availability of low-cost and sometimes free
image-editing software such as Photoshop, Corel Paint Shop, Photoscape,
PhotoPlus, GIMP and Pixelmator has made tampering with digital images easier
and more commonplace. It has become nearly impossible to tell whether a
photograph is a genuine camera output or a manipulated version of it just by
looking. As a result, photographs have largely lost their reliability and
standing as proof of evidence in all fields. Digital image tamper detection
has therefore emerged as an important research area, aiming to establish the
authenticity of digital photographs by separating tampered images from
original ones. This paper gives a brief history of image tampering and a
state-of-the-art review of the tamper detection techniques.
|
1306.6755 | Arabizi Detection and Conversion to Arabic | cs.CL cs.IR | Arabizi is Arabic text that is written using Latin characters. Arabizi is
used to represent both Modern Standard Arabic (MSA) and Arabic dialects. It is
commonly used in informal settings such as social networking sites and is often
mixed with English. In this paper we address two problems: identifying
Arabizi in text and converting it to Arabic characters. We used word and
sequence-level features to identify Arabizi that is mixed with English. We
achieved an identification accuracy of 98.5%. As for conversion, we used
transliteration mining with language modeling to generate equivalent Arabic
text. We achieved 88.7% conversion accuracy, with roughly a third of errors
being spelling and morphological variants of the forms in ground truth.
|
1306.6802 | Evaluation Measures for Hierarchical Classification: a unified view and
novel approaches | cs.AI cs.LG | Hierarchical classification addresses the problem of classifying items into a
hierarchy of classes. An important issue in hierarchical classification is the
evaluation of different classification algorithms, which is complicated by the
hierarchical relations among the classes. Several evaluation measures have been
proposed for hierarchical classification using the hierarchy in different ways.
This paper studies the problem of evaluation in hierarchical classification by
analyzing and abstracting the key components of the existing performance
measures. It also proposes two alternative generic views of hierarchical
evaluation and introduces two corresponding novel measures. The proposed
measures, along with the state-of-the-art ones, are empirically tested on three
large datasets from the domain of text classification. The empirical results
illustrate the undesirable behavior of existing approaches and how the proposed
measures overcome most of these problems across a range of cases.
|
1306.6805 | Simultaneous Discrimination Prevention and Privacy Protection in Data
Publishing and Mining | cs.DB cs.CR | Data mining is an increasingly important technology for extracting useful
knowledge hidden in large collections of data. There are, however, negative
social perceptions about data mining, among them potential privacy violation
and potential discrimination. Automated data collection and data mining
techniques such as classification have paved the way to automated decisions
such as loan granting or denial and insurance premium computation. If the
training datasets are biased with regard to discriminatory attributes such as
gender, race or religion, discriminatory decisions may ensue. In the first part
of this thesis, we tackle discrimination prevention in data mining and propose
new techniques applicable for direct or indirect discrimination prevention
individually or both at the same time. We discuss how to clean training
datasets and outsourced datasets in such a way that direct and/or indirect
discriminatory decision rules are converted to legitimate (non-discriminatory)
classification rules. In the second part of this thesis, we argue that privacy
and discrimination risks should be tackled together. We explore the
relationship between privacy preserving data mining and discrimination
prevention in data mining to design holistic approaches capable of addressing
both threats simultaneously during the knowledge discovery process. As part of
this effort, we have investigated for the first time the problem of
discrimination and privacy aware frequent pattern discovery, i.e. the
sanitization of the collection of patterns mined from a transaction database in
such a way that neither privacy-violating nor discriminatory inferences can be
drawn from the released patterns. Moreover, we investigate the problem of
discrimination and privacy aware data publishing, i.e. transforming the data,
instead of patterns, in order to simultaneously fulfill privacy preservation
and discrimination prevention.
|
1306.6812 | Epidemics in Multipartite Networks: Emergent Dynamics | cs.SI physics.soc-ph q-bio.PE | Single virus epidemics over complete networks are widely explored in the
literature as the fraction of infected nodes is, under appropriate microscopic
modeling of the virus infection, a Markov process. With non-complete networks,
this macroscopic variable is no longer Markov. In this paper, we study virus
diffusion, in particular, multi-virus epidemics, over non-complete stochastic
networks. We focus on multipartite networks. In companion work
http://arxiv.org/abs/1306.6198, we show that the peer-to-peer local random
rules of virus infection lead, in the limit of large multipartite networks, to
the emergence of structured dynamics at the macroscale. The exact fluid limit
evolution of the fraction of nodes infected by each virus strain across islands
obeys a set of nonlinear coupled differential equations, see
http://arxiv.org/abs/1306.6198. In this paper, we develop methods to analyze
the qualitative behavior of these limiting dynamics, establishing conditions on
the virus micro characteristics and network structure under which a virus
persists or a natural selection phenomenon is observed.
|
1306.6815 | Distributed Greedy Pursuit Algorithms | cs.IT math.IT | For compressed sensing over arbitrarily connected networks, we consider the
problem of estimating underlying sparse signals in a distributed manner. We
introduce a new signal model that helps to describe inter-signal correlation
among connected nodes. Based on this signal model along with a brief survey of
existing greedy algorithms, we develop distributed greedy algorithms with low
communication overhead. We first design two new distributed algorithms whose
local components are suitably modified versions of the existing orthogonal
matching pursuit and subspace pursuit algorithms. Further, by combining the
advantages of these two local algorithms, we
design a new greedy algorithm that is well suited for a distributed scenario.
By extensive simulations we demonstrate that the new algorithms in a sparsely
connected network provide good performance, close to the performance of a
centralized greedy solution.
|
1306.6834 | Social Network Intelligence Analysis to Combat Street Gang Violence | cs.SI physics.soc-ph | In this paper we introduce the Organization, Relationship, and Contact
Analyzer (ORCA), which is designed to aid intelligence analysis for law
enforcement operations against violent street gangs. ORCA is designed to
address several police analytical needs concerning street gangs using new
techniques in social network analysis. Specifically, it can determine "degree
of membership" for individuals who do not admit to membership in a street gang,
quickly identify sets of influential individuals (under the tipping model), and
identify criminal ecosystems by decomposing gangs into sub-groups. We describe
this software and the design decisions considered in building an intelligence
analysis tool created specifically for countering violent street gangs, and we
provide results from analysis of real-world police data supplied by a major
American metropolitan police department that is partnering with us and
currently deploying this system for real-world use.
|
1306.6842 | New Mathematical and Algorithmic Schemes for Pattern Classification with
Application to the Identification of Writers of Important Ancient Documents | cs.CV | In this paper, a novel approach is introduced for classifying curves into
proper families, according to their similarity. First, a mathematical quantity
we call plane curvature is introduced and a number of propositions are stated
and proved. Proper similarity measures of two curves are introduced and a
subsequent statistical analysis is applied. The efficiency of the curve-fitting
process has first been tested on two reference shape datasets. Next, the
methodology has been applied to the important problem of attributing 23
Byzantine codices and 46 ancient inscriptions to their writers, thus achieving
correct dating of their content. The inscriptions have been attributed to ten
individual hands and the Byzantine codices to four writers.
|
1306.6843 | Error AMP Chain Graphs | stat.ML cs.AI | Any regular Gaussian probability distribution that can be represented by an
AMP chain graph (CG) can be expressed as a system of linear equations with
correlated errors whose structure depends on the CG. However, the CG represents
the errors implicitly, as no nodes in the CG correspond to the errors. We
propose in this paper to add some deterministic nodes to the CG in order to
represent the errors explicitly. We call the result an EAMP CG. We will show
that, as desired, every AMP CG is Markov equivalent to its corresponding EAMP
CG under marginalization of the error nodes. We will also show that every EAMP
CG under marginalization of the error nodes is Markov equivalent to some LWF CG
under marginalization of the error nodes, and that the latter is Markov
equivalent to some directed and acyclic graph (DAG) under marginalization of
the error nodes and conditioning on some selection nodes. This is important
because it implies that the independence model represented by an AMP CG can be
accounted for by some data generating process that is partially observed and
has selection bias. Finally, we will show that EAMP CGs are closed under
marginalization. This is a desirable feature because it guarantees parsimonious
models under marginalization.
|
1306.6852 | Axiomatic properties of inconsistency indices for pairwise comparisons | cs.AI | Pairwise comparisons are a well-known method for the representation of the
subjective preferences of a decision maker. Evaluating their inconsistency has
been a widely studied and discussed topic and several indices have been
proposed in the literature to perform this task. Since an acceptable level of
consistency is closely related with the reliability of preferences, a suitable
choice of an inconsistency index is a crucial phase in decision making
processes. The use of different methods for measuring consistency must be
carefully evaluated, as it can affect the decision outcome in practical
applications. In this paper, we present five axioms aimed at characterizing
inconsistency indices. In addition, we prove that some of the indices proposed
in the literature satisfy these axioms, while others do not, and therefore, in
our view, they may fail to correctly evaluate inconsistency.
|
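One widely used instance of such an index is Saaty's consistency index,
CI = (lambda_max - n) / (n - 1), computed from the principal eigenvalue of the
pairwise comparison matrix. The following is a minimal sketch of how it
behaves; the example matrices are hypothetical, and whether this particular
index satisfies the paper's axioms is not asserted here:

```python
import numpy as np

def consistency_index(A):
    """Saaty's CI = (lambda_max - n) / (n - 1) for an n-by-n pairwise
    comparison matrix A (reciprocal, positive entries)."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue
    return (lam_max - n) / (n - 1)

# A perfectly consistent matrix (a_ij = w_i / w_j) has CI = 0.
w = np.array([1.0, 2.0, 4.0])
A_consistent = np.outer(w, 1.0 / w)
print(abs(consistency_index(A_consistent)) < 1e-9)  # True

# Perturbing one reciprocal pair makes the matrix inconsistent, so CI > 0.
A = A_consistent.copy()
A[0, 1], A[1, 0] = 3.0, 1.0 / 3.0
print(consistency_index(A) > 0)  # True
```

A consistent matrix is rank one with principal eigenvalue exactly n, which is
why CI vanishes there and grows as judgments drift away from consistency.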
1306.6909 | Exact Support Recovery for Sparse Spikes Deconvolution | math.OC cs.IT math.IT math.NA | This paper studies sparse spikes deconvolution over the space of measures. We
focus our attention on the recovery properties of the support of the measure,
i.e. the location of the Dirac masses. For non-degenerate sums of Diracs, we
show that, when the signal-to-noise ratio is large enough, total variation
regularization (which is the natural extension of the L1 norm of vectors to the
setting of measures) recovers the exact same number of Diracs. We also show
that both the locations and the heights of these Diracs converge toward those
of the input measure when the noise drops to zero. The exact speed of
convergence is governed by a specific dual certificate, which can be computed
by solving a linear system. We draw connections between the support of the
recovered measure on a continuous domain and on a discretized grid. We show
that when the signal-to-noise level is large enough, the solution of the
discretized problem is supported on pairs of Diracs which are neighbors of the
Diracs of the input measure. This gives a precise description of the
convergence of the solution of the discretized problem toward the solution of
the continuous grid-free problem, as the grid size tends to zero.
|
1306.6924 | Optimal Tx-BF for MIMO SC-FDE Systems | cs.IT math.IT | Transmit beamforming (Tx-BF) for multiple-input multiple-output (MIMO)
channels is an effective means to improve system performance. In
frequency-selective channels, Tx-BF can be implemented in combination with
single-carrier frequency-domain equalization (SC-FDE) to combat inter-symbol
interference. In this paper, we consider the optimal design of the Tx-BF matrix
for a MIMO SC-FDE system employing a linear minimum mean square error (MSE)
receiver. We formulate the Tx-BF optimization problem as the minimization of a
general function of the stream MSEs, subject to a transmit power constraint.
The optimal structure of the Tx-BF matrix is obtained in closed form and an
efficient algorithm is proposed for computing the optimal power allocation. Our
simulation results validate the excellent performance of the proposed scheme in
terms of uncoded bit-error rate and achievable bit rate.
|
1306.6929 | Power indices of influence games and new centrality measures for social
networks | cs.GT cs.SI physics.soc-ph | In social network analysis, there is a common perception that influence is
relevant to determine the global behavior of the society and thus it can be
used to enforce cooperation by targeting an adequate initial set of individuals
or to analyze global choice processes. Here we propose centrality measures that
can be used to analyze the relevance of the actors in processes related to the
spread of influence. In [39], a multiagent system was considered in which
agents are eager to perform a collective task depending on their perception of
other individuals' willingness to perform it. The setting is modeled using a
class of simple games called influence games. These games are defined on
graphs where the nodes are labeled by their influence threshold, and the
spread of influence among the nodes is used to determine whether a coalition
is winning or not. Influence games provide tools to measure the importance of
the actors of a social network by means of classic power indices and provide a
framework to consider new centrality criteria. In this paper we consider two of
the most classical power indices, i.e., Banzhaf and Shapley-Shubik indices, as
centrality measures for social networks in influence games. Although there is
some work related to specific scenarios of game-theoretic networks, here we use
such indices as centrality measures in any social network where the spread of
influence phenomenon can be applied. Further, we define new centrality measures
such as satisfaction and effort that, as far as we know, have not been
considered so far. We also compare the proposed measures with three other
classic centrality measures, degree, closeness and betweenness, on three
social networks. We show that in some cases our measures yield centrality
hierarchies similar to those of the other measures, while in other cases they
yield different hierarchies.
|
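To make the notions above concrete, here is a brute-force sketch of the
Banzhaf index in a small influence game. The graph, the thresholds and the
winning condition (a coalition wins when its spread of influence activates
every node) are illustrative assumptions, not data from the paper:

```python
from itertools import combinations

# Hypothetical influence graph: adjacency sets and influence thresholds.
neighbors = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {2}}
threshold = {0: 1, 1: 1, 2: 2, 3: 1}
players = sorted(neighbors)

def spread(seed):
    """Iteratively activate nodes whose active neighbors meet their threshold."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in players:
            if v not in active and len(neighbors[v] & active) >= threshold[v]:
                active.add(v)
                changed = True
    return active

def wins(coalition, quota=4):
    """Assumed winning condition: the spread reaches `quota` nodes (here, all)."""
    return len(spread(coalition)) >= quota

def banzhaf(i):
    """Fraction of coalitions S (without i) where i is decisive:
    S loses but S + {i} wins."""
    others = [p for p in players if p != i]
    swings = 0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            if not wins(set(S)) and wins(set(S) | {i}):
                swings += 1
    return swings / 2 ** len(others)

for i in players:
    print(i, banzhaf(i))  # node 3 is never decisive here
```

In this toy graph each of nodes 0, 1 and 2 alone triggers a full cascade, so
they share equal Banzhaf power, while the peripheral node 3 has power zero.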
1306.6944 | The DeLiVerMATH project - Text analysis in mathematics | cs.CL cs.DL cs.IR | A high-quality content analysis is essential for retrieval functionalities
but the manual extraction of key phrases and classification is expensive.
Natural language processing provides a framework to automate the process.
Here, a machine-based approach for the content analysis of mathematical texts
is described. A prototype for key phrase extraction and classification of
mathematical texts is presented.
|
1307.0024 | Investigation of "Enhancing flexibility and robustness in multi-agent
task scheduling" | cs.DS cs.AI | Wilson et al. propose a measure of flexibility in project scheduling problems
and propose several ways of distributing flexibility over tasks without
overrunning the deadline. These schedules prove quite robust: delays of some
tasks do not necessarily lead to delays of subsequent tasks. The number of
tasks that finish late depends, among other factors, on the way flexibility is
distributed.
In this paper I study the different flexibility distributions proposed by
Wilson et al. and the differences in number of violations (tasks that finish
too late). I show one factor in the instances that causes differences in the
number of violations, as well as two properties of the flexibility distribution
that cause them to behave differently. Based on these findings, I propose three
new flexibility distributions. Depending on the nature of the delays, these new
flexibility distributions perform as well as or better than the distributions
of Wilson et al.
|
1307.0029 | Fractal and Mathematical Morphology in Intricate Comparison between
Tertiary Protein Structures | cs.CG cs.CE | Intricate comparison between two given tertiary structures of proteins is as
important as the comparison of their functions. Several algorithms have been
devised to compute the similarity and dissimilarity among protein structures.
However, these algorithms compare protein structures by structural alignment of
the protein backbones, which is usually unable to capture precise differences. In
this paper, an attempt has been made to compute the similarities and
dissimilarities among 3D protein structures using the fundamental mathematical
morphology operations and fractal geometry which can resolve the problem of
real differences. In doing so, two techniques are used here to determine the
global (superficial structural) similarity and the local, atomic-level
similarity of the protein molecules. This intricate structural comparison
should provide insight that helps biologists understand protein structures and
their functions more precisely.
|
1307.0031 | On the Hyperbolicity of Large-Scale Networks | physics.soc-ph cs.SI | Through detailed analysis of scores of publicly available data sets
corresponding to a wide range of large-scale networks, from communication and
road networks to various forms of social networks, we explore a little-studied
geometric characteristic of real-life networks, namely their hyperbolicity. In
smooth geometry, hyperbolicity captures the notion of negative curvature;
within the more abstract context of metric spaces, it can be generalized as
δ-hyperbolicity. This generalized definition can be applied to graphs, which we
explore in this report. We provide strong evidence that communication and
social networks exhibit this fundamental property, and through extensive
computations we quantify the degree of hyperbolicity of each network in
comparison to its diameter. By contrast, and as evidence of the validity of the
methodology, applying the same methods to the road networks shows that they are
not hyperbolic, which is as expected. Finally, we present practical
computational means for detection of hyperbolicity and show how the test itself
may be scaled to much larger graphs than those we examined via renormalization
group methodology. Using well-understood mechanisms, we provide evidence
through synthetically generated graphs that hyperbolicity is preserved and
indeed amplified by renormalization. This allows us to detect hyperbolicity in
large networks efficiently, through much smaller renormalized versions. These
observations indicate that δ-hyperbolicity is a common feature of large-scale
networks. We propose that δ-hyperbolicity, in conjunction with other local
characteristics of networks such as the degree distribution and clustering
coefficients, provides a more complete unifying picture of networks and helps
classify in a parsimonious way what is otherwise a bewildering and complex
array of features and characteristics specific to each natural and man-made
network.
|
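The four-point definition of δ-hyperbolicity can be checked directly on small
graphs by brute force: for every quadruple of nodes, compare the two largest
of the three pairwise-distance sums. A minimal stdlib-only sketch (real
networks need the sampling and renormalization machinery the abstract
describes):

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def delta_hyperbolicity(adj):
    """Gromov delta via the four-point condition, brute force over quadruples."""
    nodes = sorted(adj)
    d = {s: bfs_dist(adj, s) for s in nodes}
    delta = 0.0
    for x, y, u, v in combinations(nodes, 4):
        sums = sorted([d[x][y] + d[u][v], d[x][u] + d[y][v], d[x][v] + d[y][u]])
        delta = max(delta, (sums[2] - sums[1]) / 2)  # gap of two largest sums
    return delta

# A tree is 0-hyperbolic; a 4-cycle has delta = 1.
tree = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(delta_hyperbolicity(tree))    # 0.0
print(delta_hyperbolicity(cycle4))  # 1.0
```

Trees achieve the minimum δ = 0, which is why small δ relative to the diameter
is read as "tree-like" negative curvature in the abstract above.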
1307.0032 | Memory Limited, Streaming PCA | stat.ML cs.IT cs.LG math.IT | We consider streaming, one-pass principal component analysis (PCA), in the
high-dimensional regime, with limited memory. Here, $p$-dimensional samples are
presented sequentially, and the goal is to produce the $k$-dimensional subspace
that best approximates these points. Standard algorithms require $O(p^2)$
memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is
what the output itself requires. Memory (or storage) complexity is most
meaningful when understood in the context of computational and sample
complexity. Sample complexity for high-dimensional PCA is typically studied in
the setting of the {\em spiked covariance model}, where $p$-dimensional points
are generated from a population covariance equal to the identity (white noise)
plus a low-dimensional perturbation (the spike) which is the signal to be
recovered. It is now well-understood that the spike can be recovered when the
number of samples, $n$, scales proportionally with the dimension, $p$. Yet, all
algorithms that provably achieve this, have memory complexity $O(p^2)$.
Meanwhile, algorithms with memory-complexity $O(kp)$ do not have provable
bounds on sample complexity comparable to $p$. We present an algorithm that
achieves both: it uses $O(kp)$ memory (meaning storage of any kind) and is able
to compute the $k$-dimensional spike with $O(p \log p)$ sample-complexity --
the first algorithm of its kind. While our theoretical analysis focuses on the
spiked covariance model, our simulations show that our algorithm is successful
on much more general models for the data.
|
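The O(kp)-memory constraint discussed above can be illustrated with a
block-wise stochastic power method, which stores only a p-by-k iterate and a
p-by-k accumulator rather than a p-by-p covariance. This is an illustrative
sketch under the spiked covariance model; block size, initialization and the
demo data are assumptions, not the paper's exact parameters:

```python
import numpy as np

def block_streaming_pca(samples, p, k, block=200):
    """Memory-limited streaming PCA in O(kp) storage: within each block, only
    the k-dimensional projections x @ Q are accumulated against x, then a
    QR step plays the role of one power iteration."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((p, k)))
    S, count = np.zeros((p, k)), 0
    for x in samples:               # single pass over the stream
        S += np.outer(x, x @ Q)     # rank-1 contribution, O(kp) work and space
        count += 1
        if count == block:
            Q, _ = np.linalg.qr(S)  # power-method step on the block estimate
            S, count = np.zeros((p, k)), 0
    return Q

# Spiked covariance demo: white noise plus a rank-1 spike along direction u.
p, k, n = 50, 1, 2000
rng = np.random.default_rng(1)
u = np.zeros(p)
u[0] = 1.0
samples = (rng.standard_normal(p) + 3.0 * rng.standard_normal() * u
           for _ in range(n))
Q = block_streaming_pca(samples, p, k)
print(abs(Q[:, 0] @ u))  # close to 1: the spike direction is recovered
```

Each sample is consumed exactly once, so the generator above really is a
one-pass stream; only the p-by-k matrices survive between samples.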
1307.0036 | Increasing Compression Ratio in PNG Images by k-Modulus Method for Image
Transformation | cs.CV cs.MM | Image compression is an important field in image processing, and even a
small increase in compression ratio is a welcome contribution. The main
contribution of this paper is to increase the compression ratio of the
well-known Portable Network Graphics (PNG) image file format. The approach
starts by transforming the original PNG image with the k-Modulus Method
(k-MM). In practice, taking k equal to ten, every pixel in the transformed
image becomes an integer divisible by ten. Since PNG uses the Lempel-Ziv
compression algorithm, the achievable file-size reduction grows with the
repetition among pixels in each k-by-k window produced by the k-MM
transformation.
Experimental results show that the proposed technique (k-PNG) produces high
compression ratio with smaller file size in comparison to the original PNG
file.
|
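The k-MM transformation described above can be sketched as mapping each pixel
to a multiple of k, which increases value repetition and so helps PNG's
Lempel-Ziv stage. The exact rounding convention used by the authors is an
assumption in this sketch:

```python
import numpy as np

def k_modulus(img, k=10):
    """Map each pixel to a multiple of k (here by rounding to the nearest
    multiple and capping at the largest multiple of k within 0..255; the
    original k-MM rounding convention is assumed, not confirmed)."""
    out = np.rint(img.astype(float) / k) * k
    return np.minimum(out, (255 // k) * k).astype(np.uint8)

img = np.array([[12, 17, 23], [101, 106, 255]], dtype=np.uint8)
print(k_modulus(img))  # every output value is divisible by 10
```

After the transform, runs of identical values are far more frequent, which is
exactly what a dictionary coder such as Lempel-Ziv exploits.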
1307.0048 | Simple one-pass algorithm for penalized linear regression with
cross-validation on MapReduce | stat.ML cs.DC cs.LG | In this paper, we propose a one-pass algorithm on MapReduce for penalized
linear regression
\[f_\lambda(\alpha, \beta) = \|Y - \alpha\mathbf{1} - X\beta\|_2^2 +
p_{\lambda}(\beta)\] where $\alpha$ is the intercept, which can be omitted
depending on the application; $\beta$ is the coefficient vector; and
$p_{\lambda}$ is the penalty function with penalization parameter $\lambda$.
$f_\lambda(\alpha, \beta)$ includes interesting classes such as Lasso, Ridge
regression and Elastic-net. Compared to the latest iterative distributed
algorithms requiring multiple MapReduce jobs, our algorithm achieves a large
performance improvement; moreover, it is exact, unlike approximate algorithms
such as parallel stochastic gradient descent. What further distinguishes our
algorithm from others is that it trains the model with cross-validation to
choose the optimal $\lambda$ instead of a user-specified one.
Key words: penalized linear regression, lasso, elastic-net, ridge, MapReduce
|
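For the ridge case, the one-pass idea can be sketched on a single machine:
$X^T X$ and $X^T y$ are sums over rows, so each "mapper" chunk contributes
partial sums and the "reducer" solves the normal equations once per candidate
λ, which is how λ selection fits into the same pass. The MapReduce wiring
itself is omitted, and the intercept is dropped as the abstract allows:

```python
import numpy as np

def ridge_one_pass(chunks, d, lambdas):
    """Accumulate X^T X and X^T y in one pass over data chunks (the 'map'
    step), then solve the ridge normal equations per lambda ('reduce')."""
    G = np.zeros((d, d))   # running X^T X
    b = np.zeros(d)        # running X^T y
    for X, y in chunks:    # each chunk is visited exactly once
        G += X.T @ X
        b += X.T @ y
    I = np.eye(d)
    return {lam: np.linalg.solve(G + lam * I, b) for lam in lambdas}

# Tiny demo: noiseless y = 2*x0 - x1, split into two "mapper" chunks.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = X @ np.array([2.0, -1.0])
chunks = [(X[:50], y[:50]), (X[50:], y[50:])]
fits = ridge_one_pass(chunks, d=2, lambdas=[0.0, 1.0])
print(np.round(fits[0.0], 6))  # approximately [ 2. -1.]
```

The Lasso and Elastic-net cases need more than these two sufficient
statistics, which is where the paper's full construction goes beyond this
ridge sketch.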
1307.0052 | Beamforming Design for Multiuser Two-Way Relaying: A Unified Approach
via Max-Min SINR | cs.IT math.IT | In this paper, we develop a unified framework for beamforming designs in
non-regenerative multiuser two-way relaying (TWR).
|
1307.0060 | Approximate Bayesian Image Interpretation using Generative Probabilistic
Graphics Programs | cs.AI cs.CV stat.ML | The idea of computer vision as the Bayesian inverse problem to computer
graphics has a long history and an appealing elegance, but it has proved
difficult to directly implement. Instead, most vision tasks are approached via
complex bottom-up processing pipelines. Here we show that it is possible to
write short, simple probabilistic graphics programs that define flexible
generative models and to automatically invert them to interpret real-world
images. Generative probabilistic graphics programs consist of a stochastic
scene generator, a renderer based on graphics software, a stochastic likelihood
model linking the renderer's output and the data, and latent variables that
adjust the fidelity of the renderer and the tolerance of the likelihood model.
Representations and algorithms from computer graphics, originally designed to
produce high-quality images, are instead used as the deterministic backbone for
highly approximate and stochastic generative models. This formulation combines
probabilistic programming, computer graphics, and approximate Bayesian
computation, and depends only on general-purpose, automatic inference
techniques. We describe two applications: reading sequences of degraded and
adversarially obscured alphanumeric characters, and inferring 3D road models
from vehicle-mounted camera images. Each of the probabilistic graphics programs
we present relies on under 20 lines of probabilistic code, and supports
accurate, approximately Bayesian inferences about ambiguous real-world images.
|
1307.0067 | Extrinsic Jensen-Shannon Divergence: Applications to Variable-Length
Coding | cs.IT math.IT math.OC math.ST stat.TH | This paper considers the problem of variable-length coding over a discrete
memoryless channel (DMC) with noiseless feedback. The paper provides a
stochastic control view of the problem whose solution is analyzed via a newly
proposed symmetrized divergence, termed extrinsic Jensen-Shannon (EJS)
divergence. It is shown that strictly positive lower bounds on EJS divergence
provide non-asymptotic upper bounds on the expected code length. The paper
presents strictly positive lower bounds on EJS divergence, and hence
non-asymptotic upper bounds on the expected code length, for the following two
coding schemes: variable-length posterior matching and MaxEJS coding scheme
which is based on a greedy maximization of the EJS divergence.
As an asymptotic corollary of the main results, this paper also provides a
rate-reliability test. Variable-length coding schemes that satisfy the
condition(s) of the test for parameters $R$ and $E$, are guaranteed to achieve
rate $R$ and error exponent $E$. The results are specialized for posterior
matching and MaxEJS to obtain deterministic one-phase coding schemes achieving
capacity and optimal error exponent. For the special case of symmetric
binary-input channels, simpler deterministic schemes of optimal performance are
proposed and analyzed.
|
1307.0085 | Coded Slotted ALOHA with Varying Packet Loss Rate across Users | cs.IT math.IT | Recent research has established an analogy between successive
interference cancellation in slotted ALOHA framework and iterative
belief-propagation erasure-decoding, which has opened the possibility to
enhance random access protocols by utilizing theory and tools of
erasure-correcting codes. In this paper we present a generalization of the
and-or tree evaluation, adapted for the asymptotic analysis of the slotted
ALOHA-based random-access protocols, for the case when the contending users
experience different channel conditions, resulting in packet loss probability
that varies across users. We apply the analysis to the example of frameless
ALOHA, where users contend on a slot basis. We present results regarding the
optimal access probabilities and contention period lengths, such that the
throughput and probability of user resolution are maximized.
|
1307.0087 | Semantics and pragmatics in actual software applications and in web
search engines: exploring innovations | cs.IR cs.CL cs.HC | While new ways to use the Semantic Web are developed every week, which allow
the user to find information on web more accurately - for example in search
engines - some sophisticated pragmatic tools are becoming more important - for
example in web interfaces known as Social Intelligence, or in the most famous
Siri by Apple. This work aims to analyze whether and where we can identify the
boundary between semantics and pragmatics in the software used by the analyzed
systems, examining how the linguistic disciplines are fundamental to their
progress. Is it possible to assume that the tools of social intelligence take a
pragmatic approach to the user's questions, or do they merely exploit a very
rich vocabulary together with semantic tools?
|
1307.0127 | Concentration and Confidence for Discrete Bayesian Sequence Predictors | cs.LG stat.ML | Bayesian sequence prediction is a simple technique for predicting future
symbols sampled from an unknown measure on infinite sequences over a countable
alphabet. While strong bounds on the expected cumulative error are known, there
are only limited results on the distribution of this error. We prove tight
high-probability bounds on the cumulative error, which is measured in terms of
the Kullback-Leibler (KL) divergence. We also consider the problem of
constructing upper confidence bounds on the KL and Hellinger errors similar to
those constructed from Hoeffding-like bounds in the i.i.d. case. The new
results are applied to show that Bayesian sequence prediction can be used in
the Knows What It Knows (KWIK) framework with bounds that match the
state-of-the-art.
|
1307.0129 | Hyperspectral Data Unmixing Using GNMF Method and Sparseness Constraint | cs.CV | Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Mixed pixels are pixels containing more than one
distinct material called endmembers. The presence percentages of endmembers in
mixed pixels are called abundance fractions. Spectral unmixing problem refers
to decomposing these pixels into a set of endmembers and abundance fractions.
Due to nonnegativity constraint on abundance fractions, nonnegative matrix
factorization methods (NMF) have been widely used for solving spectral unmixing
problem. In this paper we have used graph regularized (GNMF) method with
sparseness constraint to unmix hyperspectral data. The method is applied to
simulated data built from the AVIRIS Indian Pines dataset and the USGS spectral
library, and the results are quantified using the AAD and SAD measures.
Comparison with other methods shows that the proposed method can unmix data
more effectively.
|
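As context for the abstract above, a plain multiplicative-update NMF baseline
(Lee-Seung) illustrates the decomposition being regularized: pixels are
factored as nonnegative endmember spectra times nonnegative abundances. The
graph regularizer and sparseness constraint of GNMF are omitted here, and the
synthetic data is purely illustrative:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Plain multiplicative-update NMF: V ~ W @ H with W, H >= 0.
    The paper adds a graph regularizer and a sparseness constraint on top of
    this baseline; those terms are omitted in this sketch."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update endmember spectra
    return W, H

# Synthetic "pixels": 2 endmember spectra mixed with nonnegative abundances.
rng = np.random.default_rng(1)
E = rng.random((6, 2))    # 6 spectral bands, 2 endmembers
A = rng.random((2, 40))   # abundance fractions for 40 pixels
V = E @ A
W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative residual
```

The multiplicative form keeps both factors nonnegative throughout, which is
what makes NMF a natural fit for the abundance-fraction constraint.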
1307.0180 | One generator $(1+u)$-quasi twisted codes over $F_2+uF_2$ | cs.IT math.IT | This paper gives the minimum generating sets of three types of one generator
$(1+u)$-quasi twisted (QT) codes over $F_2+uF_2$, $u^2=0$. Moreover, it
discusses the generating sets and the lower bounds on the minimum Lee distance
of a special class of $A_2$ type one generator $(1+u)$-QT codes. Some good
(optimal or suboptimal) linear codes over $F_2$ are obtained by these types of
one generator $(1+u)$-QT codes.
|
1307.0187 | Compression and Combining Based on Channel Shortening and Rank Reduction
Techniques for Cooperative Wireless Sensor Networks | cs.IT math.IT | This paper investigates and compares the performance of wireless sensor
networks where sensors operate on the principles of cooperative communications.
We consider a scenario where the source transmits signals to the destination
with the help of $L$ sensors. As the destination has the capacity of processing
only $U$ out of these $L$ signals, the strongest $U$ signals are selected while
the remaining $(L-U)$ signals are suppressed. A preprocessing block similar to
channel-shortening is proposed in this contribution. However, this
preprocessing block employs a rank-reduction technique instead of
channel-shortening. By employing this preprocessing, we are able to decrease
the computational complexity of the system without affecting the bit error rate
(BER) performance. Our simulations show that these schemes
outperform the channel-shortening schemes in terms of computational complexity.
In addition, the proposed schemes have a superior BER performance as compared
to channel-shortening schemes when sensors employ fixed gain amplification.
However, for sensors which employ variable gain amplification, a tradeoff
exists in terms of BER performance between the channel-shortening and these
schemes. The proposed schemes outperform the channel-shortening scheme at lower
signal-to-noise ratios.
|
1307.0191 | NoSQL Database: New Era of Databases for Big data Analytics -
Classification, Characteristics and Comparison | cs.DB | The digital world is growing very fast and becoming more complex in volume
(terabytes to petabytes), variety (structured, unstructured, and hybrid), and
velocity (rapid growth). This is referred to as Big Data, a global phenomenon.
It is typically considered to be a data collection that has grown so large it
cannot be effectively managed or exploited using conventional data management
tools: e.g., classic relational database management systems (RDBMS) or
conventional search engines. To handle this problem, traditional RDBMS are
complemented by a rich set of specifically designed alternative DBMS, such as
NoSQL, NewSQL, and search-based systems. The motivation of this paper is to
provide a classification, characteristics, and evaluation of NoSQL databases in
Big Data analytics. This report is intended to help users, especially
organizations, obtain an independent understanding of the strengths and
weaknesses of various NoSQL database approaches to supporting applications that
process huge volumes of data.
|
1307.0193 | A Sampling Algebra for Aggregate Estimation | cs.DB | As of 2005, sampling has been incorporated in all major database systems.
While efficient sampling techniques are realizable, determining the accuracy of
an estimate obtained from the sample is still an unresolved problem. In this
paper, we present a theoretical framework that allows an elegant treatment of
the problem. We base our work on generalized uniform sampling (GUS), a class of
sampling methods that subsumes a wide variety of sampling techniques. We
introduce a key notion of equivalence that allows GUS sampling operators to
commute with selection and join, and derivation of confidence intervals. We
illustrate the theory through extensive examples and give indications on how to
use it to provide meaningful estimations in database systems.
|
1307.0194 | A new DNA alignment method based on inverted index | q-bio.GN cs.CE | This paper presents a novel DNA sequence alignment method based on the inverted
index. Most large-scale information retrieval systems use the inverted index as
their basic data structure, but it has not yet been applied to DNA sequence
alignment. This paper discusses such applications. Three main problems are
detailed in turn: DNA segmenting, long DNA query search, and the DNA search
ranking algorithm together with its evaluation method. This research presents a
new avenue for building more effective DNA alignment methods.
|
1307.0201 | Simulating Ability: Representing Skills in Games | cs.GT cs.AI | Throughout the history of games, representing the abilities of the various
agents acting on behalf of the players has been a central concern. With
increasingly sophisticated games emerging, these simulations have become more
realistic, but the underlying mechanisms are still, to a large extent, of an ad
hoc nature. This paper proposes using a logistic model from psychometrics as a
unified mechanism for task resolution in simulation-oriented games.
|
1307.0219 | Ornitolog\'ia Virtual: Caracterizando a #Chile en Twitter | cs.SI | This article presents an analysis of the tweets collected on October 28, 2012,
in the context of the 2012 Chilean municipal elections. The analysis follows a
methodology based on prior literature, in particular on techniques from
information retrieval and information-space analysis. As a result, we
determine: 1) basic demographic characteristics of the Chilean virtual
population, including its geographic distribution, 2) the content that
characterizes each region, and how information flows between regions, and 3)
how representative the virtual population participating in the event is of the
physical population. We find that the sample obtained is representative of the
population in terms of geographic distribution, that the centralism affecting
the country is reflected on Twitter, and that, despite the population biases,
it is possible to identify the content that characterizes each region. We close
with a discussion of the practical implications and conclusions of this work,
as well as future applications.
|
1307.0252 | Semi-supervised clustering methods | stat.ME cs.LG stat.ML | Cluster analysis methods seek to partition a data set into homogeneous
subgroups. It is useful in a wide variety of applications, including document
processing and modern genetics. Conventional clustering methods are
unsupervised, meaning that there is no outcome variable nor is anything known
about the relationship between the observations in the data set. In many
situations, however, information about the clusters is available in addition to
the values of the features. For example, the cluster labels of some
observations may be known, or certain observations may be known to belong to
the same cluster. In other cases, one may wish to identify clusters that are
associated with a particular outcome variable. This review describes several
clustering algorithms (known as "semi-supervised clustering" methods) that can
be applied in these situations. The majority of these methods are modifications
of the popular k-means clustering method, and several of them will be described
in detail. A brief description of some other semi-supervised clustering
algorithms is also provided.
|
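One of the simplest k-means modifications mentioned in this review is seeded k-means, in which labeled "seed" observations initialize the centroids and ordinary k-means then runs on all of the data. A minimal sketch, with the toy data and seed choice being illustrative assumptions:

```python
import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, k, iters=50):
    """Seeded k-means: centroids are initialized from labeled 'seed' points
    instead of at random; the iteration itself is ordinary k-means."""
    centers = np.array([X[seed_idx[seed_labels == c]].mean(axis=0)
                        for c in range(k)])
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == c].mean(axis=0)
                                if np.any(labels == c) else centers[c]
                                for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated Gaussian blobs; two seeds known per cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
seed_idx = np.array([0, 1, 30, 31])
seed_labels = np.array([0, 0, 1, 1])
labels, centers = seeded_kmeans(X, seed_idx, seed_labels, 2)
```

A stricter variant (constrained k-means) additionally keeps the seed labels fixed during the assignment step.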
1307.0253 | Exploratory Learning | cs.LG | In multiclass semi-supervised learning (SSL), it is sometimes the case that
the number of classes present in the data is not known, and hence no labeled
examples are provided for some classes. In this paper we present variants of
well-known semi-supervised multiclass learning methods that are robust when the
data contains an unknown number of classes. In particular, we present an
"exploratory" extension of expectation-maximization (EM) that explores
different numbers of classes while learning. "Exploratory" SSL greatly improves
performance on three datasets in terms of F1 on the classes with seed
examples, i.e., the classes which are expected to be in the data. Our
Exploratory EM algorithm also outperforms an SSL method based on non-parametric
Bayesian clustering.
|
1307.0258 | Verification-Based Interval-Passing Algorithm for Compressed Sensing | cs.IT math.IT | We propose a verification-based Interval-Passing (IP) algorithm for
iterative reconstruction of nonnegative sparse signals, using parity check
matrices of low-density parity check (LDPC) codes as measurement matrices. The
proposed algorithm can be viewed as an improved IP algorithm that further
incorporates the mechanism of the verification algorithm. It is proved that the
proposed algorithm always performs better than either the IP algorithm or the
verification algorithm. Simulation results are also given to demonstrate the
superior performance of the proposed algorithm.
|
1307.0261 | WebSets: Extracting Sets of Entities from the Web Using Unsupervised
Information Extraction | cs.LG cs.CL cs.IR | We describe an open-domain information extraction method for extracting
concept-instance pairs from an HTML corpus. Most earlier approaches to this
problem rely on combining clusters of distributionally similar terms and
concept-instance pairs obtained with Hearst patterns. In contrast, our method
relies on a novel approach for clustering terms found in HTML tables, and then
assigning concept names to these clusters using Hearst patterns. The method can
be efficiently applied to a large corpus, and experimental results on several
datasets show that our method can accurately extract large numbers of
concept-instance pairs.
|
1307.0264 | Utility-maximization Resource Allocation for Device-to-Device
Communication Underlaying Cellular Networks | cs.IT cs.NI math.IT | Device-to-device (D2D) underlaying communication brings great benefits to the
cellular networks from the improvement of coverage and spectral efficiency at
the expense of complicated transceiver design. With frequency spectrum sharing
mode, the D2D user generates interference to the existing cellular networks
either in downlink or uplink. Thus the resource allocation for D2D pairs should
be designed properly in order to reduce possible interference, in particular
for uplink. In this paper, we introduce a novel bandwidth allocation scheme to
maximize the utilities of both D2D users and cellular users. Since the
allocation problem is strongly NP-hard, we apply a relaxation to the
association indicators. We propose a low-complexity distributed algorithm and
prove the convergence in a static environment. The numerical result shows that
the proposed scheme can significantly improve the performance in terms of
utilities. The performance of D2D communications depends on D2D user locations,
the number of D2D users, and QoS (Quality of Service) parameters.
|
1307.0276 | Controllability Analysis and Degraded Control for a Class of Hexacopters
Subject to Rotor Failures | cs.SY cs.RO | This paper considers the controllability analysis and fault tolerant control
problem for a class of hexacopters. It is shown that the considered hexacopter
is uncontrollable when one rotor fails, even though the hexacopter is
over-actuated and its controllability matrix is row full rank. According to
this, a fault tolerant control strategy is proposed to control a degraded
system, where the yaw states of the considered hexacopter are ignored.
Theoretical analysis indicates that the degraded system is controllable if and
only if the maximum lift of each rotor is greater than a certain value. The
simulation and experiment results on a prototype hexacopter show the
feasibility of our controllability analysis and degraded control strategy.
|
1307.0277 | Multilevel Threshold Based Gray Scale Image Segmentation using Cuckoo
Search | cs.CV | Image Segmentation is a technique of partitioning the original image into
some distinct classes. Many possible solutions may be available for segmenting
an image into a certain number of classes, each one having different quality of
segmentation. In our proposed method, multilevel thresholding technique has
been used for image segmentation. A new approach of Cuckoo Search (CS) is used
for selection of optimal threshold value. In other words, the algorithm is used
to achieve the best solution from the initial random threshold values or
solutions, and a correlation function is used to evaluate the quality of a
solution. Finally, MSE and PSNR are measured to assess the segmentation
quality.
|
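The objective behind multilevel thresholding can be illustrated with an Otsu-style between-class variance. In this sketch, exhaustive search stands in for both Cuckoo Search and the paper's correlation-function fitness (a deliberate swap for clarity); a metaheuristic becomes necessary only when the number of thresholds or gray levels makes the search space large. The toy histogram is an assumption.

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu-style objective: weighted variance of class means around the
    global mean, for the gray-level classes induced by the thresholds."""
    levels = np.arange(len(hist))
    p = hist / hist.sum()
    mu_total = (levels * p).sum()
    edges = [0, *thresholds, len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def best_thresholds(hist, k):
    """Exhaustive search over k thresholds; Cuckoo Search (or another
    metaheuristic) replaces this loop when exhaustive search is infeasible."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: between_class_variance(hist, t))

# Toy 16-level histogram with three clear modes.
hist = np.zeros(16)
hist[[1, 2, 3]] = [5, 10, 5]
hist[[7, 8, 9]] = [5, 10, 5]
hist[[12, 13, 14]] = [5, 10, 5]
t1, t2 = best_thresholds(hist, 2)
```

The optimal pair of thresholds falls between the modes, assigning one mode per class.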
1307.0284 | The effectiveness of altruistic lobbying: A model study | cs.MA cs.SI cs.SY math.OC physics.soc-ph | Altruistic lobbying is lobbying in the public interest or in the interest of
the least protected part of the society. In fact, an altruist has a wide range
of strategies, from behaving in the interest of the society as a whole to the
support of the most disadvantaged ones. How can we compare the effectiveness of
such strategies? Another question is: "Given a strategy, is it possible to
estimate the optimal number of participants choosing it?" Finally, do the
answers to these questions depend on the level of well-being in the society?
Can we say that the poorer the society, the more important it is to focus on the
support of the poorest? We answer these questions within the framework of the
model of social dynamics determined by voting in a stochastic environment.
|
1307.0309 | The Social Media Genome: Modeling Individual Topic-Specific Behavior in
Social Media | cs.SI physics.soc-ph | Information propagation in social media depends not only on the static
follower structure but also on the topic-specific user behavior. Hence novel
models incorporating dynamic user behavior are needed. To this end, we propose
a model for individual social media users, termed a genotype. The genotype is a
per-topic summary of a user's interest, activity and susceptibility to adopt
new information. We demonstrate that user genotypes remain invariant within a
topic by adopting them for classification of new information spread in
large-scale real networks. Furthermore, we extract topic-specific influence
backbone structures based on information adoption and show that they differ
significantly from the static follower network. When employed for influence
prediction of new content spread, our genotype model and influence backbones
enable more than 20% improvement compared to purely structural features. We
also demonstrate that knowledge of user genotypes and influence backbones allow
for the design of effective strategies for latency minimization of
topic-specific information spread.
|
1307.0317 | Algorithms of the LDA model [REPORT] | cs.LG cs.IR stat.ML | We review three algorithms for Latent Dirichlet Allocation (LDA). Two of them
are variational inference algorithms: Variational Bayesian inference and Online
Variational Bayesian inference, and one is a Markov chain Monte Carlo (MCMC)
algorithm -- Collapsed Gibbs sampling. We compare their time complexity and
performance. We find that online variational Bayesian inference is the fastest
algorithm and still returns reasonably good results.
|
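The collapsed Gibbs sampler compared in this report admits a compact sketch: the topic and word distributions are integrated out, and only the topic assignment of each token is resampled from its full conditional. The toy corpus and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of word-id lists;
    K topics over a vocabulary of size V. Returns (phi, theta) estimates."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))          # document-topic counts
    nkw = np.zeros((K, V))                  # topic-word counts
    nk = np.zeros(K)                        # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1
            nkw[z[d][i], w] += 1
            nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove token from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z_i = k | z_-i, words)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                 # add it back under the new topic
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    phi = (nkw + beta) / (nk[:, None] + V * beta)
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
    return phi, theta

# Toy corpus: two disjoint "topics" over a 6-word vocabulary.
docs = [[0, 1, 2, 0, 1, 2, 0, 1]] * 4 + [[3, 4, 5, 3, 4, 5, 3, 4]] * 4
phi, theta = lda_gibbs(docs, K=2, V=6)
```

The per-token cost of this sampler scales with K, which is what the report's timing comparison against the variational algorithms measures.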
1307.0320 | BigDataBench: a Big Data Benchmark Suite from Web Search Engines | cs.IR cs.DB | This paper presents our joint research efforts on big data benchmarking with
several industrial partners. Considering the complexity, diversity, workload
churns, and rapid evolution of big data systems, we take an incremental
approach in big data benchmarking. For the first step, we pay attention to
search engines, which are the most important domain in Internet services in
terms of the number of page views and daily visitors. However, search engine
service providers treat data, applications, and web access logs as business
confidentiality, which prevents us from building benchmarks. To overcome those
difficulties, with several industry partners, we widely investigated the open
source solutions in search engines, and obtained the permission of using
anonymous Web access logs. Moreover, with two years of great effort, we created
a semantic search engine named ProfSearch (available from
http://prof.ict.ac.cn). These efforts pave the path for our big data benchmark
suite from search engines---BigDataBench, which is released on the web page
(http://prof.ict.ac.cn/BigDataBench). We report our detailed analysis of search
engine workloads, and present our benchmarking methodology. An innovative data
generation methodology and tool are proposed to generate scalable volumes of
big data from a small seed of real data, preserving semantics and locality of
data. Also, we preliminarily report two case studies using BigDataBench for
both system and architecture research.
|
1307.0339 | Syntactic sensitive complexity for symbol-free sequence | cs.AI | This work uses the L-system to construct a tree structure for the text
sequence and derives its complexity. It serves as a measure of structural
complexity of the text. It is applied to anomaly detection in data
transmission.
|
1307.0345 | Performance Bounds for the Scenario Approach and an Extension to a Class
of Non-convex Programs | math.OC cs.SY | We consider the Scenario Convex Program (SCP) for two classes of optimization
problems that are not tractable in general: Robust Convex Programs (RCPs) and
Chance-Constrained Programs (CCPs). We establish a probabilistic bridge from
the optimal value of SCP to the optimal values of RCP and CCP in which the
uncertainty takes values in a general, possibly infinite dimensional, metric
space. We then extend our results to a certain class of non-convex problems
that includes, for example, binary decision variables. In the process, we also
settle a measurability issue for a general class of scenario programs, which to
date has been addressed by an assumption. Finally, we demonstrate the
applicability of our results on a benchmark problem and a problem in fault
detection and isolation.
|
1307.0366 | Learning directed acyclic graphs based on sparsest permutations | math.ST cs.LG stat.TH | We consider the problem of learning a Bayesian network or directed acyclic
graph (DAG) model from observational data. A number of constraint-based,
score-based and hybrid algorithms have been developed for this purpose. For
constraint-based methods, statistical consistency guarantees typically rely on
the faithfulness assumption, which has been shown to be restrictive especially
for graphs with cycles in the skeleton. However, there is only limited work on
consistency guarantees for score-based and hybrid algorithms and it has been
unclear whether consistency guarantees can be proven under weaker conditions
than the faithfulness assumption. In this paper, we propose the sparsest
permutation (SP) algorithm. This algorithm is based on finding the causal
ordering of the variables that yields the sparsest DAG. We prove that this new
score-based method is consistent under strictly weaker conditions than the
faithfulness assumption. We also demonstrate through simulations on small DAGs
that the SP algorithm compares favorably to the constraint-based PC and SGS
algorithms as well as the score-based Greedy Equivalence Search and hybrid
Max-Min Hill-Climbing method. In the Gaussian setting, we prove that our
algorithm boils down to finding the permutation of the variables with the
sparsest Cholesky decomposition of the inverse covariance matrix. Using this
connection, we show that in the oracle setting, where the true covariance
matrix is known, the SP algorithm is in fact equivalent to $\ell_0$-penalized
maximum likelihood estimation.
|
1307.0396 | On Optimal Zero-Delay Coding of Vector Markov Sources | math.OC cs.IT cs.SY math.IT | Optimal zero-delay coding (quantization) of a vector-valued Markov source
driven by a noise process is considered. Using a stochastic control problem
formulation, the existence and structure of optimal quantization policies are
studied. For a finite-horizon problem with bounded per-stage distortion
measure, the existence of an optimal zero-delay quantization policy is shown
provided that the quantizers allowed are ones with convex codecells. The
bounded distortion assumption is relaxed to cover cases that include the linear
quadratic Gaussian problem. For the infinite horizon problem and a stationary
Markov source the optimality of deterministic Markov coding policies is shown.
The existence of optimal stationary Markov quantization policies is also shown
provided randomization that is shared by the encoder and the decoder is
allowed.
|
1307.0412 | Characterizing and Predicting the Robustness of Power-law Networks | physics.soc-ph cs.SI | Power-law networks such as the Internet, terrorist cells, species
relationships, and cellular metabolic interactions are susceptible to node
failures, yet maintaining network connectivity is essential for network
functionality. Disconnection of the network leads to fragmentation and, in some
cases, collapse of the underlying system. However, the influences of the
topology of networks on their ability to withstand node failures are poorly
understood. Based on a study of the response of 2,000 power-law networks to
node failures, we find that networks with higher nodal degree and clustering
coefficient, lower betweenness centrality, and lower variability in path length
and clustering coefficient maintain their cohesion better during such events.
We also find that network robustness, i.e., the ability to withstand node
failures, can be accurately predicted a priori for power-law networks across
many fields. These results provide a basis for designing new, more robust
networks, improving the robustness of existing networks such as the Internet
and cellular metabolic pathways, and efficiently degrading networks such as
terrorist cells.
|
1307.0414 | Challenges in Representation Learning: A report on three machine
learning contests | stat.ML cs.LG | The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
|
1307.0426 | An Empirical Study into Annotator Agreement, Ground Truth Estimation,
and Algorithm Evaluation | cs.CV cs.AI cs.LG | Although agreement between annotators has been studied in the past from a
statistical viewpoint, little work has attempted to quantify the extent to
which this phenomenon affects the evaluation of computer vision (CV) object
detection algorithms. Many researchers utilise ground truth (GT) in experiments
and more often than not this GT is derived from one annotator's opinion. How
does the difference in opinion affect an algorithm's evaluation? Four examples
of typical CV problems are chosen, and a methodology is applied to each to
quantify the inter-annotator variance and to offer insight into the mechanisms
behind agreement and the use of GT. It is found that when detecting linear
objects annotator agreement is very low. The agreement in object position,
linear or otherwise, can be partially explained through basic image properties.
Automatic object detectors are compared to annotator agreement and it is found
that a clear relationship exists. Several methods for calculating GTs from a
number of annotations are applied and the resulting differences in the
performance of the object detectors are quantified. It is found that the rank
of a detector is highly dependent upon the method used to form the GT. It is
also found that although the STAPLE and LSML GT estimation methods appear to
represent the mean of the performance measured using the individual
annotations, when there are few annotations, or there is a large variance in
them, these estimates tend to degrade. Furthermore, one of the most commonly
adopted annotation combination methods--consensus voting--accentuates more
obvious features, which results in an overestimation of the algorithm's
performance. Finally, it is concluded that in some datasets it may not be
possible to state with any confidence that one algorithm outperforms another
when evaluating upon one GT and a method for calculating confidence bounds is
discussed.
|
1307.0441 | Aggregation and Ordering in Factorised Databases | cs.DB cs.DS | A common approach to data analysis involves understanding and manipulating
succinct representations of data. In earlier work, we put forward a succinct
representation system for relational data called factorised databases and
reported on the main-memory query engine FDB for select-project-join queries on
such databases.
In this paper, we extend FDB to support a larger class of practical queries
with aggregates and ordering. This requires novel optimisation and evaluation
techniques. We show how factorisation coupled with partial aggregation can
effectively reduce the number of operations needed for query evaluation. We
also show how factorisations of query results can support enumeration of tuples
in desired orders as efficiently as listing them from the unfactorised, sorted
results.
We experimentally observe that FDB can outperform off-the-shelf relational
engines by orders of magnitude.
|
1307.0445 | Networked Estimation using Sparsifying Basis Prediction | cs.SY math.OC | We present a framework for networked state estimation, where systems encode
their (possibly high dimensional) state vectors using a mutually agreed basis
between the system and the estimator (in a remote monitoring unit). The basis
sparsifies the state vectors, i.e., it represents them using vectors with few
non-zero components, and as a result, the systems might need to transmit only a
fraction of the original information to be able to recover the non-zero
components of the transformed state vector. Hence, the estimator can recover
the state vector of the system from an under-determined linear set of
equations. We use a greedy search algorithm to calculate the sparsifying basis.
Then, we present an upper bound for the estimation error. Finally, we
demonstrate the results on a numerical example.
|
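The under-determined recovery step described above (recovering the few non-zero components of the transformed state vector) is commonly solved with a greedy method. Below is a minimal orthogonal matching pursuit sketch; the abstract mentions a greedy search only for computing the basis, so using OMP for recovery, and the measurement dimensions, are illustrative assumptions.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the column of Phi most
    correlated with the residual, then re-fit the support by least squares."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Toy setup: a 3-sparse transformed state vector of length 50, observed
# through 25 random linear measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((25, 50)) / np.sqrt(25)
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [1.5, -2.0, 0.8]
y = Phi @ x_true
x_hat = omp(Phi, y, 3)
```

With far fewer measurements than dimensions, the estimator still recovers the sparse vector exactly, which is the transmission saving the abstract describes.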
1307.0449 | Arising information regularities in an observer | nlin.AO cs.IT math.IT | The approach defines information process from probabilistic observation,
emerging microprocess,qubit, encoding bits, evolving macroprocess, and extends
to Observer information self-organization, cognition, intelligence and
understanding communicating information. Studying information originating in
quantum process focuses not on particle physics but on natural interactive
impulse modeling Bit composing information observer. Information emerges from
Kolmogorov probabilities field when sequences of 1-0 probabilities link Markov
probabilities modeling arising observer. These objective yes-no probabilities
virtually cut observing entropy hidden in cutting correlation, decreasing
Markov process entropy and increasing entropy of cutting impulse running
minimax principle. Merging impulse curves and rotates yes-no conjugated
entropies in microprocess. The entropies entangle within impulse time interval
ending with beginning space. The opposite curvature lowers potential energy
converting entropy to memorized bit. The memorized information binds reversible
microprocess with irreversible information macroprocess. Multiple interacting
Bits self-organize information process encoding causality, logic and
complexity. Trajectory of observation process carries probabilistic and certain
wave function self-building structural macrounits. Macrounits logically
self-organize information networks encoding in triplet code. Multiple IN
enclose observer information cognition and intelligence. Observer cognition
assembles attracting common units in resonances forming IN hierarchy accepting
only units recognizing IN node. Maximal number of accepted triplets measures
the observer information intelligence. Intelligent observer recognizes and
encodes digital images in message transmission enables understanding the
message meaning. Cognitive logic self-controls encoding the intelligence in
double helix code.
|
1307.0468 | Discrete Signal Processing on Graphs: Frequency Analysis | cs.SI math.SP | Signals and datasets that arise in physical and engineering applications, as
well as social, genetics, biomolecular, and many other domains, are becoming
increasingly larger and more complex. In contrast to traditional time and image
signals, data in these domains are supported by arbitrary graphs. Signal
processing on graphs extends concepts and techniques from traditional signal
processing to data indexed by generic graphs. This paper studies the concepts
of low and high frequencies on graphs, and low-, high-, and band-pass graph
filters. In traditional signal processing, these concepts are easily defined
because of a natural frequency ordering that has a physical interpretation. For
signals residing on graphs, in general, there is no obvious frequency ordering.
We propose a definition of total variation for graph signals that naturally
leads to a frequency ordering on graphs and defines low-, high-, and band-pass
graph signals and filters. We study the design of graph filters with specified
frequency response, and illustrate our approach with applications to sensor
malfunction detection and data classification.
|
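The total-variation definition that induces the frequency ordering can be computed directly from the adjacency matrix: TV(s) = ||s - A_norm s||_1, with A_norm the adjacency matrix scaled by its largest eigenvalue magnitude. A small sketch on a directed cycle (an assumed toy graph, where constant signals are lowest-frequency and the alternating signal is highest):

```python
import numpy as np

def graph_total_variation(A, s):
    """Total variation of a graph signal s in the DSP-on-graphs sense:
    TV(s) = || s - A_norm s ||_1, A_norm = A / |lambda_max(A)|."""
    lam_max = np.max(np.abs(np.linalg.eigvals(A)))
    return np.abs(s - (A / lam_max) @ s).sum()

# Directed cycle on 8 nodes: the graph-shift analogue of a periodic time axis.
n = 8
A = np.roll(np.eye(n), 1, axis=1)           # (A @ s)[i] = s[(i+1) % n]
smooth = np.ones(n)                          # constant signal
alternating = np.array([1.0, -1.0] * (n // 2))  # fastest oscillation
tv_smooth = graph_total_variation(A, smooth)
tv_alt = graph_total_variation(A, alternating)
```

The constant signal has zero total variation while the alternating signal attains the maximum, matching the low-to-high frequency ordering the paper defines.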
1307.0471 | Quantum support vector machine for big data classification | quant-ph cs.LG | Supervised machine learning is the classification of new data based on
already classified training examples. In this work, we show that the support
vector machine, an optimized binary classifier, can be implemented on a quantum
computer, with complexity logarithmic in the size of the vectors and the number
of training examples. In cases when classical sampling algorithms require
polynomial time, an exponential speed-up is obtained. At the core of this
quantum big data algorithm is a non-sparse matrix exponentiation technique for
efficiently performing a matrix inversion of the training data inner-product
(kernel) matrix.
|
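The classifier whose kernel-matrix inversion the quantum algorithm accelerates is the least-squares SVM, where training reduces to a single linear solve. A classical sketch of that system; the linear kernel, regularization value, and toy 1-D data are illustrative assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Least-squares SVM: training solves one linear system in the kernel
    (inner-product) matrix -- the step the quantum algorithm speeds up.
    Linear kernel for simplicity."""
    n = len(y)
    K = X @ X.T                              # training-data kernel matrix
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0                           # bias constraint row
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma        # regularized kernel block
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                   # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_test):
    return np.sign(X_test @ X_train.T @ alpha + b)

# Toy linearly separable data in 1-D.
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y)
preds = lssvm_predict(X, b, alpha, np.array([[3.0], [-3.0]]))
```

Classically the solve costs polynomial time in the number of training examples; the abstract's claim is a logarithmic-complexity quantum counterpart of this inversion.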
1307.0473 | Online discrete optimization in social networks in the presence of
Knightian uncertainty | math.OC cs.DC cs.LG | We study a model of collective real-time decision-making (or learning) in a
social network operating in an uncertain environment, for which no a priori
probabilistic model is available. Instead, the environment's impact on the
agents in the network is seen through a sequence of cost functions, revealed to
the agents in a causal manner only after all the relevant actions are taken.
There are two kinds of costs: individual costs incurred by each agent and
local-interaction costs incurred by each agent and its neighbors in the social
network. Moreover, agents have inertia: each agent has a default mixed strategy
that stays fixed regardless of the state of the environment, and must expend
effort to deviate from this strategy in order to respond to cost signals coming
from the environment. We construct a decentralized strategy, wherein each agent
selects its action based only on the costs directly affecting it and on the
decisions made by its neighbors in the network. In this setting, we quantify
social learning in terms of regret, which is given by the difference between
the realized network performance over a given time horizon and the best
performance that could have been achieved in hindsight by a fictitious
centralized entity with full knowledge of the environment's evolution. We show
that our strategy achieves regret that scales polylogarithmically with the
time horizon and polynomially with the number of agents and the maximum number
of neighbors of any agent in the social network.
|
1307.0475 | A Random Matrix Approach to Differential Privacy and Structure Preserved
Social Network Graph Publishing | cs.CR cs.SI physics.soc-ph | Online social networks are being increasingly used for analyzing various
societal phenomena such as epidemiology, information dissemination, marketing
and sentiment flow. Popular analysis techniques such as clustering and
influential node analysis, require the computation of eigenvectors of the real
graph's adjacency matrix. Recent de-anonymization attacks on the Netflix and AOL
datasets show that open access to such graphs poses privacy threats. Among
the various privacy-preserving models, differential privacy provides the
strongest privacy guarantees.
In this paper we propose a privacy-preserving mechanism for publishing social
network graph data, which satisfies differential privacy guarantees by
utilizing a combination of random matrix theory and differential
privacy. The key idea is to project each row of an adjacency matrix to a low
dimensional space using the random projection approach and then perturb the
projected matrix with random noise. We show that as compared to existing
approaches for differentially private approximation of eigenvectors, our approach
is computationally efficient, preserves the utility and satisfies differential
privacy. We evaluate our approach on social network graphs of Facebook, Live
Journal and Pokec. The results show that even for high values of noise variance
sigma=1 the clustering quality given by normalized mutual information gain is
as low as 0.74. For influential node discovery, the proposed approach is able to
correctly recover 80% of the most influential nodes. We also compare our results
with an approach presented in [43], which directly perturbs the eigenvector of
the original data with Laplacian noise. The results show that this approach
requires a large random perturbation in order to preserve the differential
privacy, which leads to a poor estimation of eigenvectors for large social
networks.
|
1307.0516 | Dynamical Structure of a Traditional Amazonian Social Network | cs.SI nlin.AO physics.soc-ph q-bio.PE | Reciprocity is a vital feature of social networks, but relatively little is
known about its temporal structure or the mechanisms underlying its persistence
in real world behavior. In pursuit of these two questions, we study the
stationary and dynamical signals of reciprocity in a network of manioc beer
(Spanish: chicha; Tsimane': shocdye') drinking events in a Tsimane' village in
lowland Bolivia. At the stationary level, our analysis reveals that social
exchange within the community is heterogeneously patterned according to kinship
and spatial proximity. A positive relationship between the frequencies at which
two families host each other, controlling for kinship and proximity, provides
evidence for stationary reciprocity. Our analysis of the dynamical structure of
this network presents a novel method for the study of conditional, or
non-stationary, reciprocity effects. We find evidence that short-timescale
reciprocity (within three days) is present among non- and distant-kin pairs;
conversely, we find that levels of cooperation among close kin can be accounted
for under the stationary hypothesis alone.
|
1307.0539 | The Evolution of Beliefs over Signed Social Networks | cs.SI physics.soc-ph | We study the evolution of opinions (or beliefs) over a social network modeled
as a signed graph. The sign attached to an edge in this graph characterizes
whether the corresponding individuals or end nodes are friends (positive links)
or enemies (negative links). Pairs of nodes are randomly selected to interact
over time, and when two nodes interact, each of them updates its opinion based
on the opinion of the other node and the sign of the corresponding link. This
model generalizes the DeGroot model to account for negative links: when two enemies
interact, their opinions go in opposite directions. We provide conditions for
convergence and divergence in expectation, in mean-square, and in the almost sure
sense, and exhibit phase transition phenomena for these notions of convergence
depending on the parameters of the opinion update model and on the structure of
the underlying graph. We establish a {\it no-survivor} theorem, stating that
the difference in opinions of any two nodes diverges whenever opinions in the
network diverge as a whole. We also prove a {\it live-or-die} lemma, indicating
that almost surely, the opinions either converge to an agreement or diverge.
Finally, we extend our analysis to cases where opinions have hard lower and
upper limits. In these cases, we study when and how opinions may become
asymptotically clustered to the belief boundaries, and highlight the crucial
influence of (strong or weak) structural balance of the underlying network on
this clustering phenomenon.
|
1307.0555 | An Application of Joint Spectral Radius in Power Control Problem for
Wireless Communications | math.DS cs.IT math.IT | Resource management, including power control, is one of the most essential
functionalities of any wireless telecommunication system. Various transmitter
power-control methods have been developed to deliver a desired quality of
service in wireless networks. We consider two of these methods: Distributed
Power Control and Distributed Balancing Algorithm schemes. We use the concept
of joint spectral radius to come up with conditions for convergence of the
transmitted power in these two schemes when the gains on all the communications
links are assumed to vary at each time-step.
|
1307.0571 | Efficient Sequential and Parallel Algorithms for Planted Motif Search | cs.DS cs.CE | Motif searching is an important step in the detection of rare events
occurring in a set of DNA or protein sequences. One formulation of the problem
is known as (l,d)-motif search or Planted Motif Search (PMS). In PMS we are
given two integers l and d and n biological sequences. We want to find all
sequences of length l that appear in each of the input sequences with at most d
mismatches. The PMS problem is NP-complete. PMS algorithms are typically
evaluated on certain instances considered challenging. This paper presents an
exact parallel PMS algorithm called PMS8. PMS8 is the first algorithm to solve
the challenging (l,d) instances (25,10) and (26,11). PMS8 is also efficient on
instances with larger l and d such as (50,21). This paper also introduces
necessary and sufficient conditions for 3 l-mers to have a common d-neighbor.
|
1307.0578 | A non-parametric conditional factor regression model for
high-dimensional input and response | stat.ML cs.LG | In this paper, we propose a non-parametric conditional factor regression
(NCFR) model for domains with high-dimensional input and response. NCFR enhances
linear regression in two ways: a) introducing low-dimensional latent factors
leading to dimensionality reduction and b) integrating an Indian Buffet Process
as a prior for the latent factors to derive unlimited sparse dimensions.
Experimental results comparing NCFR to several alternatives give evidence of
its remarkable prediction performance.
|
1307.0585 | Fundamentals of Throughput Maximization with Random Arrivals for M2M
Communications | cs.IT cs.NI math.IT | For wireless systems in which randomly arriving devices attempt to transmit a
fixed payload to a central receiver, we develop a framework to characterize the
system throughput as a function of arrival rate and per-user data rate. The
framework considers both coordinated transmission (where devices are scheduled)
and uncoordinated transmission (where devices communicate on a random access
channel and a provision is made for retransmissions). Our main contribution is
a novel characterization of the optimal throughput for the case of
uncoordinated transmission and a strategy for achieving this throughput that
relies on overlapping transmissions and joint decoding. Simulations for a
noise-limited cellular network show that the optimal strategy provides a factor
of four improvement in throughput compared to slotted ALOHA. We apply our
framework to evaluate more general system-level designs that account for
overhead signaling. We demonstrate that, for small payload sizes relevant for
machine-to-machine (M2M) communications (200 bits or less), a one-stage
strategy, where identity and data are transmitted optimally over the random
access channel, can support at least twice the number of devices compared to a
conventional strategy, where identity is established over an initial
random-access stage and data transmission is scheduled.
|
1307.0589 | The Orchive : Data mining a massive bioacoustic archive | cs.LG cs.DB cs.SD | The Orchive is a large collection of over 20,000 hours of audio recordings
from the OrcaLab research facility located off the northern tip of Vancouver
Island. It contains orca vocalizations recorded from 1980 to the present
and is one of the largest resources of bioacoustic data in the world. We
have developed a web-based interface that allows researchers to listen to these
recordings, view waveform and spectral representations of the audio, label
clips with annotations, and view the results of machine learning classifiers
based on automatic audio feature extraction. In this paper we describe such
classifiers that discriminate between background noise, orca calls, and the
voice notes that are present in most of the tapes. Furthermore we show
classification results for individual calls based on a previously existing orca
call catalog. We have also experimentally investigated the scalability of
classifiers over the entire Orchive.
|
1307.0596 | Improving Pointwise Mutual Information (PMI) by Incorporating
Significant Co-occurrence | cs.CL | We design a new co-occurrence based word association measure by incorporating
the concept of significant co-occurrence in the popular word association measure
Pointwise Mutual Information (PMI). By extensive experiments with a large
number of publicly available datasets we show that the newly introduced measure
performs better than other co-occurrence based measures and despite being
resource-light, compares well with the best known resource-heavy distributional
similarity and knowledge based word association measures. We investigate the
source of this performance improvement and find that of the two types of
significant co-occurrence - corpus-level and document-level, the concept of
corpus level significance combined with the use of document counts in place of
word counts is responsible for all the performance gains observed. The concept
of document level significance is not helpful for PMI adaptation.
|
1307.0608 | Reliability and Secrecy Functions of the Wiretap Channel under Cost
Constraint | cs.IT cs.CR math.IT | The wiretap channel was first devised and studied by Wyner, and
subsequently extended to the case with non-degraded general wiretap channels by
Csiszar and Korner. Focusing mainly on the Poisson wiretap channel with cost
constraint, we newly introduce the notion of reliability and security functions
as a fundamental tool to analyze and/or design the performance of an efficient
wiretap channel system. Compact formulae for those functions are explicitly
given for stationary memoryless wiretap channels. It is also demonstrated that,
based on such a pair of reliability and security functions, we can control the
tradeoff between reliability and security (usually conflicting), both with
exponentially decreasing rates as block length n becomes large. Two ways to do
so are given on the basis of concatenation and rate exchange. In this
framework, the notion of the {\delta} secrecy capacity is defined and shown to
attain the strongest security standard among others. The distinction between
maximized and averaged security measures is also discussed.
|
1307.0626 | Simulation Un-Symmetrical 2 Phase Induction Motor | cs.SY | The equations of unsymmetrical 2-phase induction motors are established and a
computer representation is developed from these equations. Computer
representations of single-phase motors are developed by extension and
modification of the unsymmetrical 2-phase induction motor representation.
These equations describe the dynamic performance of unsymmetrical 2-phase
induction motors. The system is simulated to verify quantities such as input
phase voltage, stator and rotor currents, electromagnetic torque and rotor
speed. The performance of an unsymmetrical 2-p
|
1307.0643 | Discovering the Markov network structure | cs.IT cs.LG math.IT | In this paper a new proof is given for the supermodularity of information
content. Using the decomposability of the information content an algorithm is
given for discovering the Markov network graph structure endowed by the
pairwise Markov property of a given probability distribution. A discrete
probability distribution is given for which the equivalence of the
Hammersley-Clifford theorem is fulfilled although some of the possible vector
realizations are taken on with zero probability. Our algorithm for discovering
the pairwise Markov network is illustrated on this example, too.
|
1307.0685 | Achievable Degrees of Freedom Region of the MIMO Relay Networks using
the Detour Schemes | cs.IT math.IT | In this paper, we study the degrees of freedom (DoF) of the MIMO relay
networks. We start with a general Y channel, where each user has $M_i$ antennas
and aims to exchange messages with the other two users via a relay equipped
with $N$ antennas. Then, we extend our work to a general 4-user MIMO relay
network. Unlike most previous work which focused on the total DoF of the
network, our aim here is to characterize the achievable DoF region as well. We
develop an outer bound on the DoF region based on the notion of a one-sided
genie. Then, we define a new achievable region using the Signal Space Alignment
(SSA) and the Detour Schemes. Our achievable scheme achieves the upper bound
for certain conditions relating $M_i$'s and $N$.
|
1307.0747 | Simulating the Dynamics of T Cell Subsets Throughout the Lifetime | cs.CE | It is widely accepted that the immune system undergoes age-related changes
correlating with increased disease in the elderly. T cell subsets have been
implicated. The aim of this work is firstly to implement and validate a
simulation of T regulatory cell (Treg) dynamics throughout the lifetime, based
on a model by Baltcheva. We show that our initial simulation produces an
inversion between precursor and mature Tregs at around 20 years of age, though
the output differs significantly from the original laboratory dataset.
Secondly, this report discusses development of the model to incorporate new
data from a cross-sectional study of healthy blood donors addressing the balance
between Tregs and Th17 cells with novel markers for Tregs. The potential for
simulation to add insight into immune aging is discussed.
|
1307.0749 | Comparing Decision Support Tools for Cargo Screening Processes | cs.CE | When planning to change operations at ports, there are two key stakeholders
with very different interests involved in the decision-making processes. Port
operators are attentive to their standards, a smooth service flow and economic
viability while border agencies are concerned about national security. The time
taken for security checks often interferes with the compliance to service
standards that port operators would like to achieve. Decision support tools
such as Cost-Benefit Analysis or Multi-Criteria Analysis are useful for better
understanding the impact of changes to a system. They allow investigating
future scenarios and help to find solutions that are
acceptable for all parties involved in port operations. In this paper we
evaluate two different modelling methods, namely scenario analysis and discrete
event simulation. These are useful for driving the decision support tools (i.e.
they provide the inputs the decision support tools require). Our aims are, on
the one hand, to guide the reader through the modelling processes and, on the
other hand, to demonstrate what kind of decision support information one can
obtain from the different modelling methods presented.
|
1307.0776 | Regularized Spherical Polar Fourier Diffusion MRI with Optimal
Dictionary Learning | cs.CV | Compressed Sensing (CS) takes advantage of signal sparsity or compressibility
and allows superb signal reconstruction from relatively few measurements. Based
on CS theory, a suitable dictionary for sparse representation of the signal is
required. In diffusion MRI (dMRI), CS methods were proposed to reconstruct
diffusion-weighted signal and the Ensemble Average Propagator (EAP), and there
are two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation
DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible
to numerical inaccuracy owing to interpolation and regridding errors in a
discretized q-space. In this paper, we propose a novel CR-DL approach, called
Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective
compressed-sensing reconstruction of the q-space diffusion-weighted signal and
the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from
the space of continuous Gaussian diffusion signals. The learned dictionary is
then adaptively applied to different voxels using a weighted LASSO framework
for robust signal reconstruction. The adaptive dictionary is proved to be
optimal. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by
Merlet et al. and Bilgic et al., respectively, our work offers the following
advantages. First, the learned dictionary is proved to be optimal for Gaussian
diffusion signals. Second, to our knowledge, this is the first work to learn a
voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP
reconstruction will be demonstrated theoretically and empirically. Third,
optimization in DL-SPFI is performed only in a small subspace in which the
SPF coefficients reside, as opposed to the q-space approach utilized by Merlet
et al. The experimental results demonstrate the advantages of DL-SPFI over the original
SPF basis and Bilgic et al.'s method.
|
1307.0781 | Distributed Online Big Data Classification Using Context Information | cs.LG stat.ML | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence this is only
beneficial if the benefits obtained from a better classification exceed
the costs. We model the problem of joint classification by the distributed and
heterogeneous learners from multiple data sources as a distributed contextual
bandit problem where each data instance is characterized by a specific context. We
develop a distributed online learning algorithm for which we can prove
sublinear regret. Compared to prior work in distributed online data mining, our
work is the first to provide analytic regret results characterizing the
performance of the proposed algorithm.
|
1307.0802 | A Statistical Learning Theory Framework for Supervised Pattern Discovery | stat.ML cs.AI | This paper formalizes a latent variable inference problem we call {\em
supervised pattern discovery}, the goal of which is to find sets of
observations that belong to a single ``pattern.'' We discuss two versions of
the problem and prove uniform risk bounds for both. In the first version,
collections of patterns can be generated in an arbitrary manner and the data
consist of multiple labeled collections. In the second version, the patterns
are assumed to be generated independently by identically distributed processes.
These processes are allowed to take an arbitrary form, so observations within a
pattern are not in general independent of each other. The bounds for the second
version of the problem are stated in terms of a new complexity measure, the
quasi-Rademacher complexity.
|
1307.0803 | Data Fusion by Matrix Factorization | cs.LG cs.AI cs.DB stat.ML | For most problems in science and engineering we can obtain data sets that
describe the observed system from various perspectives and record the behavior
of its individual components. Heterogeneous data sets can be collectively mined
by data fusion. Fusion can focus on a specific target relation and exploit
directly associated data together with contextual data and data about the system's
constraints. In the paper we describe a data fusion approach with penalized
matrix tri-factorization (DFMF) that simultaneously factorizes data matrices to
reveal hidden associations. The approach can directly consider any data that
can be expressed in a matrix, including those from feature-based
representations, ontologies, associations and networks. We demonstrate the
utility of DFMF for the gene function prediction task with eleven different data
sources and for prediction of pharmacologic actions by fusing six data sources.
Our data fusion algorithm compares favorably to alternative data integration
approaches and achieves higher accuracy than can be obtained from any single
data source alone.
|
1307.0805 | Novel Factorization Strategies for Higher Order Tensors: Implications
for Compression and Recovery of Multi-linear Data | cs.IT cs.CV math.IT | In this paper we propose novel methods for compression and recovery of
multilinear data under limited sampling. We exploit the recently proposed
tensor-Singular Value Decomposition (t-SVD) [1], which is a group-theoretic
framework for tensor decomposition. In contrast to popular existing tensor
decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality
properties similar to the truncated SVD for matrices. Based on t-SVD, we first
construct novel tensor-rank like measures to characterize informational and
structural complexity of multilinear data. Following that we outline a
complexity penalized algorithm for tensor completion from missing entries. As
an application, 3-D and 4-D (color) video data compression and recovery are
considered. We show that videos with linear camera motion can be represented
more efficiently using t-SVD compared to traditional approaches based on
vectorizing or flattening of the tensors. Application of the proposed tensor
completion algorithm for video recovery from missing entries is shown to yield
a superior performance over existing methods. In conclusion we point out
several research directions and implications to online prediction of
multilinear data.
|
1307.0813 | Multi-Task Policy Search | stat.ML cs.AI cs.LG cs.RO | Learning policies that generalize across multiple tasks is an important and
challenging research topic in reinforcement learning and robotics. Training
individual policies for every single potential task is often impractical,
especially for continuous task variations, requiring more principled approaches
to share and transfer knowledge among similar tasks. We present a novel
approach for learning a nonlinear feedback policy that generalizes across
multiple tasks. The key idea is to define a parametrized policy as a function
of both the state and the task, which allows learning a single policy that
generalizes across multiple known and unknown tasks. Applications of our novel
approach to reinforcement and imitation learning in real-robot experiments are
shown.
|
1307.0814 | A survey on Human Mobility and its applications | cs.SI physics.soc-ph | Human mobility has attracted attention from different fields of study such
as epidemic modeling, traffic engineering, traffic prediction and urban
planning. In this survey we review major characteristics of human mobility
studies, ranging from trajectory-based studies to studies using graph and
network theory. In trajectory-based studies statistical measures such as jump
length distribution and radius of gyration are analyzed in order to investigate
how people move in their daily lives, and whether it is possible to model these
individual movements and make predictions based on them. Using graphs in mobility
studies helps to investigate the dynamic behavior of the system, such as
diffusion and flow in the network and makes it easier to estimate how much one
part of the network influences another by using metrics like centrality
measures. We aim to study population flow in transportation networks using
mobility data to derive models and patterns, and to develop new applications in
predicting phenomena such as congestion. Human mobility studies with the new
generation of mobility data provided by cellular phone networks raise new
challenges such as data storage, data representation, data analysis and
computational complexity. A comparative review of different data types used in
current tools and applications of Human Mobility studies leads us to new
approaches for dealing with mentioned challenges.
|
1307.0841 | Comparing various regression methods on ensemble strategies in
differential evolution | cs.NE | Differential evolution possesses a multitude of various strategies for
generating new trial solutions. Unfortunately, the best strategy is not known
in advance. Moreover, this strategy usually depends on the problem to be
solved. This paper suggests using various regression methods (like random
forest, extremely randomized trees, gradient boosting, decision trees, and a
generalized linear model) on ensemble strategies in differential evolution
algorithm by predicting the best differential evolution strategy during the
run. In preliminary experiments optimizing a suite
of five well-known functions from the literature, the
random forest regression method substantially outperformed the results of the
other regression methods.
|
1307.0844 | Making massive probabilistic databases practical | cs.DB | The existence of incomplete and imprecise data has moved the database paradigm
from deterministic to probabilistic information. Probabilistic databases
contain tuples that may or may not exist with some probability. As a result,
the number of possible deterministic database instances that can be observed
from a probabilistic database grows exponentially with the number of
probabilistic tuples. In this paper, we consider the problem of answering both
aggregate and non-aggregate queries on massive probabilistic databases. We
adopt the tuple independence model, in which each tuple is assigned a
probability value. We develop a method that exploits Probability Generating
Functions (PGF) to answer such queries efficiently. Our method maintains a
polynomial for each tuple. It incrementally builds a master polynomial that
expresses the distribution of the possible result values precisely. We also
develop an approximation method that finds the distribution of the result value
with negligible errors. Our experiments suggest that our methods are orders of
magnitude faster than the most recent systems that answer such queries,
including MayBMS and SPROUT. In our experiments, we were able to scale up to
several terabytes of data on TPC-H queries, while existing methods could only
run for a few gigabytes of data on the same queries.
|
1307.0845 | The SP theory of intelligence: benefits and applications | cs.AI | This article describes existing and expected benefits of the "SP theory of
intelligence", and some potential applications. The theory aims to simplify and
integrate ideas across artificial intelligence, mainstream computing, and human
perception and cognition, with information compression as a unifying theme. It
combines conceptual simplicity with descriptive and explanatory power across
several areas of computing and cognition. In the "SP machine" -- an expression
of the SP theory which is currently realized in the form of a computer model --
there is potential for an overall simplification of computing systems,
including software. The SP theory promises deeper insights and better solutions
in several areas of application including, most notably, unsupervised learning,
natural language processing, autonomous robots, computer vision, intelligent
databases, software engineering, information compression, medical diagnosis and
big data. There is also potential in areas such as the semantic web,
bioinformatics, structuring of documents, the detection of computer viruses,
data fusion, new kinds of computer, and the development of scientific theories.
The theory promises seamless integration of structures and functions within and
between different areas of application. The potential value, worldwide, of
these benefits and applications is at least $190 billion each year. Further
development would be facilitated by the creation of a high-parallel,
open-source version of the SP machine, available to researchers everywhere.
|
1307.0846 | Semi-supervised Ranking Pursuit | stat.ML cs.IR cs.LG | We propose a novel sparse preference learning/ranking algorithm. Our
algorithm approximates the true utility function by a weighted sum of basis
functions using the squared loss on pairs of data points, and is a
generalization of the kernel matching pursuit method. It can operate both in a
supervised and a semi-supervised setting and allows efficient search for
multiple, near-optimal solutions. Furthermore, we describe the extension of the
algorithm suitable for combined ranking and regression tasks. In our
experiments we demonstrate that the proposed algorithm outperforms several
state-of-the-art learning methods when taking into account unlabeled data and
performs comparably in a supervised learning scenario, while providing sparser
solutions.
|
1307.0855 | A Local Control Approach to Voltage Regulation in Distribution Networks | math.OC cs.SY | This paper addresses the problem of voltage regulation in power distribution
networks with deep penetration of distributed energy resources (DERs) without
any explicit communication between the buses in the network. We cast the
problem as an optimization problem with the objective of minimizing the
distance between the bus voltage magnitudes and some reference voltage profile.
We present an iterative algorithm where each bus updates the reactive power
injection provided by its DER. The update at a bus depends only on the
voltage magnitude at that bus, and for this reason, we call the algorithm a
local control algorithm. We provide sufficient conditions that guarantee the
convergence of the algorithm and these conditions can be checked a priori for a
set of feasible power injections. We also provide necessary conditions
establishing that longer and more heavily loaded networks are inherently more
difficult to control. We illustrate the operation of the algorithm through case
studies involving 8-, 34- and 123-bus test distribution systems.
|
1307.0861 | Reconstruction of Signals Drawn from a Gaussian Mixture from Noisy
Compressive Measurements | cs.IT math.IT | This paper determines to within a single measurement the minimum number of
measurements required to successfully reconstruct a signal drawn from a
Gaussian mixture model in the low-noise regime. The method is to develop upper
and lower bounds that are a function of the maximum dimension of the linear
subspaces spanned by the Gaussian mixture components. The method not only
reveals the existence or absence of a minimum mean-squared error (MMSE) error
floor (phase transition) but also provides insight into the MMSE decay via
multivariate generalizations of the MMSE dimension and the MMSE power offset,
which are a function of the interaction between the geometrical properties of
the kernel and the Gaussian mixture. These results apply not only to standard
linear random Gaussian measurements but also to linear kernels that minimize
the MMSE. It is shown that optimal kernels do not change the number of
measurements associated with the MMSE phase transition, rather they affect the
sensed power required to achieve a target MMSE in the low-noise regime.
Overall, our bounds are tighter and sharper than standard bounds on the minimum
number of measurements needed to recover sparse signals associated with a union
of subspaces model, as they are not asymptotic in the signal dimension or
signal sparsity.
|
1307.0885 | The Proof of Lin's Conjecture via the Decimation-Hadamard Transform | cs.IT math.IT | In 1998, Lin presented a conjecture on a class of ternary sequences with
ideal 2-level autocorrelation in his Ph.D. thesis. Those sequences have a very
simple structure, i.e., their trace representation has two trace monomial
terms. In this paper, we present a proof for the conjecture. The mathematical
tools employed are the second-order multiplexing decimation-Hadamard transform,
Stickelberger's theorem, the Teichm\"{u}ller character, and combinatorial
techniques for enumerating the Hamming weights of ternary numbers. As a
by-product, we also prove that Lin's conjectured ternary sequences are
Hadamard equivalent to ternary $m$-sequences.
|
1307.0920 | Domain Specific Hierarchical Huffman Encoding | cs.IT cs.DS math.IT | In this paper, we revisit the classical data compression problem for domain
specific texts. It is well known that the classical Huffman algorithm is optimal
with respect to prefix encoding and the compression is done at character level.
Since many data transfers are domain specific, for example, downloading of
lecture notes, web-blogs, etc., it is natural to think of data compression in
larger dimensions (i.e. word level rather than character level). Our framework
employs a two-level compression scheme in which the first level identifies
frequent patterns in the text using classical frequent pattern algorithms. The
identified patterns are replaced with special strings, and to achieve a better
compression ratio the length of a special string is ensured to be shorter than
the length of the corresponding pattern. After this transformation, on the
resultant text, we employ classical Huffman data compression algorithm. In
short, in the first level compression is done at word level and in the second
level it is at character level. Interestingly, this two-level compression
technique for domain specific text outperforms classical Huffman technique. To
support our claim, we have presented both theoretical and simulation results
for domain specific texts.
|
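The two-level scheme described in the abstract above (frequent words replaced by short placeholder strings, then character-level Huffman coding on the result) can be sketched as follows. The control-character placeholders, the `top_k` cutoff, and the whitespace handling are illustrative assumptions, not details from the paper:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix code (char -> bitstring) from character frequencies."""
    freq = Counter(text)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, unique tiebreak, {char: partial code}).
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in c1.items()}
        merged.update({c: "1" + code for c, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

def encoded_length(text, code):
    """Total length in bits of the Huffman-encoded text."""
    return sum(len(code[c]) for c in text)

def two_level_compress_length(text, top_k=5):
    """Level 1: replace the top_k most frequent words with shorter
    placeholder strings; level 2: Huffman-encode the transformed text.
    This sketch re-joins words with single spaces, so exact whitespace
    is not preserved."""
    word_freq = Counter(text.split())
    placeholders = {}
    for i, (w, _) in enumerate(word_freq.most_common(top_k)):
        if len(w) > 2:  # only replace when the placeholder is shorter
            placeholders[w] = chr(1 + i)  # hypothetical control chars
    transformed = " ".join(placeholders.get(w, w) for w in text.split())
    return encoded_length(transformed, huffman_code(transformed))
```

For repetitive domain-specific text, the word-level pass shrinks the input before character-level Huffman coding runs, which is where the claimed gain over plain Huffman comes from.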
1307.0927 | On the bounds and achievability about the ODPC of $\mathcal{GRM}(2,m)^*$
over prime field for increasing message length | cs.IT math.IT | The optimum distance profiles of linear block codes were studied for
increasing or decreasing message length while keeping the minimum distances as
large as possible, especially for Golay codes and the second-order Reed-Muller
codes, etc. Cyclic codes have more efficient encoding and decoding algorithms.
In this paper, we investigate the optimum distance profiles with respect to the
cyclic subcode chains (ODPCs) of the punctured generalized second-order
Reed-Muller codes $\mathcal{GRM}(2,m)^*$, which have been applied to power
control in OFDM modulations over channels with synchronization, among other
uses. For this, two
standards are considered in the inverse dictionary order, i.e., for increasing
message length. Four lower and upper bounds on the ODPC are presented, where
the lower bounds almost achieve the corresponding upper bounds in some sense.
The discussion is over nonbinary prime fields.
|
1307.0937 | Extending UML for Conceptual Modeling of Annotation of Medical Images | cs.CV | Imaging has come to play a major role in the management of patients, whether
hospitalized or not. Depending on the patient's clinical problem, a variety of
imaging modalities are available for use. This gave rise to the medical image
annotation process. Annotation is intended for image analysis and solving the
problem of the semantic gap; the need for it stems from the increasing
acquisition of images. Physicians and radiologists feel better while using
annotation techniques for faster remedy in surgery and medicine due to the
following reasons: giving details to the patients, searching the present and
past records from the larger databases, and giving solutions to them in a
faster and more accurate way. However, classical conceptual modeling does not
incorporate the specificity of the medical domain, especially the annotation of
medical images. The design phase is the most important activity in the
successful building of annotation process. For this reason, we focus in this
paper on presenting the conceptual modeling of the annotation of medical images
by defining a new profile using the StarUML extensibility mechanism.
|
1307.0957 | Modeling the emergence of a new language: Naming Game with hybridization | physics.soc-ph cs.SI | In recent times, the research field of language dynamics has focused on the
investigation of language evolution, dividing the work into three evolutionary
steps, according to the level of complexity: lexicon, categories and grammar.
The Naming Game is a simple model capable of accounting for the emergence of a
lexicon, intended as the set of words through which objects are named. We
introduce a stochastic modification of the Naming Game model with the aim of
characterizing the emergence of a new language as the result of the interaction
of agents. We fix the initial phase by splitting the population in two sets
speaking either language A or B. Whenever the interaction of two
individuals results in an agent able to speak both A and B, we introduce a
finite probability that this state turns into a new idiom C, so as to mimic a sort
of hybridization process. We study the system in the space of parameters
defining the interaction, and show that the proposed model displays a rich
variety of behaviours, despite the simple mean field topology of interactions.
|
1307.0966 | Improving data utility in differential privacy and k-anonymity | cs.CR cs.DB | We focus on two mainstream privacy models: k-anonymity and differential
privacy. Once a privacy model has been selected, the goal is to enforce it
while preserving as much data utility as possible. The main objective of this
thesis is to improve the data utility in k-anonymous and differentially private
data releases. k-Anonymity has several drawbacks. On the disclosure limitation
side, there is a lack of protection against attribute disclosure and against
informed intruders. On the data utility side, dealing with a large number of
quasi-identifier attributes is problematic. We propose a relaxation of
k-anonymity that deals with these issues.
Differential privacy limits disclosure risk through noise addition. The
Laplace distribution is commonly used for the random noise. We show that the
Laplace distribution is not optimal: the same disclosure limitation guarantee
can be attained by adding less noise. Optimal univariate and multivariate
noises are characterized and constructed.
Common mechanisms to attain differential privacy do not take into account the
user's prior knowledge; they implicitly assume zero initial knowledge about the
query response. We propose a mechanism that focuses on limiting the knowledge
gain over the prior knowledge.
Microaggregation-based k-anonymity and differential privacy can be combined
to produce microdata releases with the strong privacy guarantees of
differential privacy and improved data accuracy.
The last contribution delves into the relation between t-closeness and
differential privacy. We see that for a specific distance and under some
reasonable assumptions on the intruder's knowledge, t-closeness leads to
differential privacy.
|
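As a concrete reference point for the noise-addition discussion in the abstract above, here is a minimal sketch of the standard Laplace mechanism, with noise scale b = sensitivity/epsilon drawn via inverse-CDF sampling. This is the common baseline the thesis argues is suboptimal, not the improved mechanism it proposes; the function and parameter names are illustrative:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return an epsilon-differentially-private answer by adding
    Laplace(0, b) noise with b = sensitivity / epsilon."""
    b = sensitivity / epsilon
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    # Inverse CDF of the Laplace(0, b) distribution.
    noise = -b * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

A smaller epsilon (stronger privacy) or a larger query sensitivity yields a larger noise scale; the expected absolute noise equals b itself, which is the utility cost the thesis seeks to reduce.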
1307.0974 | On Secure Source Coding with Side Information at the Encoder | cs.IT math.IT | We consider a secure source coding problem with side information (S.I.) at
the decoder and the eavesdropper. The encoder has a source that it wishes to
describe with limited distortion through a rate limited link to a legitimate
decoder. The message sent is also observed by the eavesdropper. The encoder
aims to minimize both the distortion incurred by the legitimate decoder and
the information leakage rate at the eavesdropper. When the encoder has access
to the uncoded S.I. at the decoder, we characterize the
rate-distortion-information leakage rate (R.D.I.) region under a Markov chain
assumption and when S.I. at the encoder does not improve the rate-distortion
region as compared to the case when S.I. is absent. When the decoder also has
access to the eavesdropper's S.I., we characterize the R.D.I. region without the
Markov Chain condition. We then consider a related setting where the encoder
and decoder obtain coded S.I. through a rate limited helper, and characterize
the R.D.I. region for several special cases, including special cases under
logarithmic loss distortion and for special cases of the Quadratic Gaussian
setting. Finally, we consider the amplification measures of list or entropy
constraint at the decoder, and show that the R.D.I. regions for the settings
considered in this paper under these amplification measures coincide with
R.D.I. regions under per symbol logarithmic loss distortion constraint at the
decoder.
|
1307.0991 | Mixed Noisy Network Coding and Cooperative Unicasting in Wireless
Networks | cs.IT math.IT | The problem of communicating a single message to a destination in presence of
multiple relay nodes, referred to as cooperative unicast network, is
considered. First, we introduce "Mixed Noisy Network Coding" (MNNC) scheme
which generalizes "Noisy Network Coding" (NNC) where relays are allowed to
decode-and-forward (DF) messages while all of them (without exception) transmit
noisy descriptions of their observations. These descriptions are exploited at
the destination and the DF relays aim to decode the transmitted messages while
creating full cooperation among the nodes. Moreover, the destination and the DF
relays can independently select the set of descriptions to be decoded or
treated as interference. This concept is further extended to multi-hopping
scenarios, referred to as "Layered MNNC" (LMNNC), where DF relays are organized
into disjoint groups representing one hop in the network. For cooperative
unicast additive white Gaussian noise (AWGN) networks, we show that, provided DF
relays are properly chosen, MNNC improves over all previously established
constant gaps to the cut-set bound. Secondly, we consider the composite
cooperative unicast network where the channel parameters are randomly drawn
before communication starts and remain fixed during the transmission. Each draw
is assumed to be unknown at the source and fully known at the destination but
only partly known at the relays. We introduce through MNNC scheme the concept
of "Selective Coding Strategy" (SCS) that enables relays to decide dynamically
whether, in addition to communicating noisy descriptions, it is possible to decode
and forward messages. It is demonstrated through slow-fading AWGN relay
networks that SCS clearly outperforms conventional coding schemes.
|
1307.0995 | An Efficient Model Selection for Gaussian Mixture Model in a Bayesian
Framework | cs.LG stat.ML | In order to cluster or partition data, we often use
Expectation-Maximization (EM) or variational approximation with a Gaussian
Mixture Model (GMM), which is a parametric probability density function
represented as a weighted sum of $\hat{K}$ Gaussian component densities.
However, model selection to find underlying $\hat{K}$ is one of the key
concerns in GMM clustering, since we can obtain the desired clusters only when
$\hat{K}$ is known. In this paper, we propose a new model selection algorithm
to explore $\hat{K}$ in a Bayesian framework. The proposed algorithm builds the
density of the model order, which information criteria such as AIC and BIC
fail to reconstruct. In addition, this algorithm reconstructs the
density quickly as compared to the time-consuming Monte Carlo simulation.
|
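For contrast with the abstract above, which proposes a Bayesian density over the model order, the following is a sketch of the standard information-criterion baseline such papers compare against: fit a small 1-D Gaussian mixture by EM for each candidate K and pick the K minimizing BIC. The quantile initialization, iteration count, and variance floor are illustrative assumptions, not the paper's method:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def em_gmm_1d(data, k, iters=60):
    """Fit a k-component 1-D Gaussian mixture by EM (deterministic quantile
    initialization for reproducibility); return the final log-likelihood."""
    s = sorted(data)
    mus = [s[int((j + 0.5) * len(s) / k)] for j in range(k)]
    sigmas = [max(1.0, (s[-1] - s[0]) / (4.0 * k))] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: per-point component responsibilities.
        resp = []
        for x in data:
            dens = [w * gauss_pdf(x, m, sg) for w, m, sg in zip(weights, mus, sigmas)]
            tot = sum(dens) or 1e-300
            resp.append([d / tot for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)   # floor avoids degenerate spikes
            weights[j] = nj / len(data)
    return sum(math.log(sum(w * gauss_pdf(x, m, sg)
                            for w, m, sg in zip(weights, mus, sigmas)) + 1e-300)
               for x in data)

def select_k_bic(data, k_max):
    """Choose K minimizing BIC = -2*logL + p*log(n), with p = 3K - 1
    free parameters (K means, K sigmas, K-1 independent weights)."""
    n = len(data)
    best_k, best_bic = None, float("inf")
    for k in range(1, k_max + 1):
        bic = -2.0 * em_gmm_1d(data, k) + (3 * k - 1) * math.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

Such a point-estimate sweep yields a single K rather than the density over model orders that the abstract's Bayesian approach constructs, which is precisely the gap the paper targets.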
1307.0998 | A Unified Framework of Elementary Geometric Transformation
Representation | cs.CV | As an extension of projective homology, stereohomology is proposed via an
extension of Desargues' theorem and the extended Desargues configuration.
Geometric transformations such as reflection, translation, central symmetry,
central projection, parallel projection, shearing, central dilation, scaling,
and so on are all included in stereohomology and represented as
Householder-Chen elementary matrices. Hence all these geometric transformations
are called elementary. This makes it possible to represent these elementary
geometric transformations in homogeneous square matrices independent of a
particular choice of coordinate system.
|
1307.1024 | Overview of Web Content Mining Tools | cs.IR | Nowadays, the Web has become one of the most widespread platforms for
information exchange and retrieval. As it becomes easier to publish documents, as
the number of users, and thus publishers, increases and as the number of
documents grows, searching for information is turning into a cumbersome and
time-consuming operation. Due to heterogeneity and unstructured nature of the
data available on the WWW, Web mining uses various data mining techniques to
discover useful knowledge from Web hyperlinks, page content and usage log. The
main uses of web content mining are to gather, categorize, organize and provide
the best possible information available on the Web to the user requesting the
information. Mining tools are essential for scanning the many HTML
documents, images, and text; the results are then used by search engines. In
this paper, we first introduce the concepts related to web mining; we then
present an overview of different Web Content Mining tools. We conclude by
presenting a comparative table of these tools based on some pertinent criteria.
|
1307.1058 | On the minimal teaching sets of two-dimensional threshold functions | math.CO cs.LG math.NT | It is known that a minimal teaching set of any threshold function on the
two-dimensional rectangular grid consists of 3 or 4 points. We derive exact
formulae for the numbers of functions corresponding to these values and further
refine them in the case of a minimal teaching set of size 3. We also prove that
the average cardinality of the minimal teaching sets of threshold functions is
asymptotically 7/2.
We further present corollaries of these results concerning some special
arrangements of lines in the plane.
|
1307.1061 | Recursive Bayesian Initialization of Localization Based on Ranging and
Dead Reckoning | cs.RO cs.MA | The initialization of the state estimation in a localization scenario based
on ranging and dead reckoning is studied. Specifically, we start with a
cooperative localization setup and consider the problem of recursively arriving
at a uni-modal state estimate with sufficiently low covariance such that
covariance based filters can be used to estimate an agent's state subsequently.
A number of simplifications/assumptions are made such that the estimation
problem can be seen as that of estimating the initial agent state given a
deterministic surrounding and dead reckoning. This problem is solved by means
of a particle filter and it is described how continual states and covariance
estimates are derived from the solution. Finally, simulations are used to
illustrate the characteristics of the method and experimental data are briefly
presented.
|
1307.1070 | A Comparison of Non-stationary, Type-2 and Dual Surface Fuzzy Control | cs.AI cs.NE | Type-1 fuzzy logic has frequently been used in control systems. However, this
method is sometimes shown to be too restrictive and unable to adapt in the
presence of uncertainty. In this paper we compare type-1 fuzzy control with
several other fuzzy approaches under a range of uncertain conditions. Interval
type-2 and non-stationary fuzzy controllers are compared, along with 'dual
surface' type-2 control, named due to utilising both the lower and upper values
produced from standard interval type-2 systems. We tune a type-1 controller,
then derive the membership functions and footprints of uncertainty from the
type-1 system and evaluate them using a simulated autonomous sailing problem
with varying amounts of environmental uncertainty. We show that while these
more sophisticated controllers can produce better performance than the type-1
controller, this is not guaranteed and that selection of Footprint of
Uncertainty (FOU) size has a large effect on this relative performance.
|