id | title | categories | abstract
|---|---|---|---|
1208.0288
|
Multiple Location Profiling for Users and Relationships from Social
Network and Content
|
cs.DB
|
Users' locations are important for many applications such as personalized
search and localized content delivery. In this paper, we study the problem of
profiling Twitter users' locations with their following network and tweets. We
propose a multiple location profiling model (MLP), which has three key
features: 1) it formally models how likely a user follows another user given
their locations and how likely a user tweets a venue given his location, 2) it
captures the fact that a user may have multiple locations, that his following
relationships and tweeted venues can relate to any of those locations, and
that some of them may even be noisy, and 3) it utilizes the home locations of
some users as a novel form of partial supervision. As a result, MLP not only
discovers users' locations accurately and completely, but also "explains" each
following relationship by revealing the users' true locations within it.
Experiments on a large-scale data set demonstrate those advantages.
Particularly, 1) for predicting users' home locations, MLP successfully places
62% of users and outperforms two state-of-the-art methods by 10% in accuracy, 2)
for discovering users' multiple locations, MLP improves the baseline methods by
14% in recall, and 3) for explaining following relationships, MLP achieves 57%
accuracy.
|
1208.0289
|
Flash-based Extended Cache for Higher Throughput and Faster Recovery
|
cs.DB
|
Considering the current price gap between disk and flash memory drives, for
applications dealing with large-scale data, it is economically more sensible
to use flash memory drives to supplement disk drives rather than to replace
them. This paper presents FaCE, a new low-overhead caching strategy that uses
flash memory as an extension of the DRAM buffer. FaCE aims
at improving the transaction throughput as well as shortening the recovery time
from a system failure. To achieve the goals, we propose two novel algorithms
for flash cache management, namely, Multi-Version FIFO replacement and Group
Second Chance. One striking result from FaCE is that using a small flash memory
drive as a caching device could deliver even higher throughput than using a
large flash memory drive to store the entire database tables. This was possible
due to flash write optimization as well as disk access reduction obtained by
the FaCE caching methods. In addition, FaCE takes advantage of the
non-volatility of flash memory to fully support database recovery by extending
the scope of a persistent database to include the data pages stored in the
flash cache. We have implemented FaCE in the PostgreSQL open source database
server and demonstrated its effectiveness for TPC-C benchmarks.
|
1208.0290
|
Don't Thrash: How to Cache Your Hash on Flash
|
cs.DB
|
This paper presents new alternatives to the well-known Bloom filter data
structure. The Bloom filter, a compact data structure supporting set insertion
and membership queries, has found wide application in databases, storage
systems, and networks. Because the Bloom filter performs frequent random reads
and writes, it is used almost exclusively in RAM, limiting the size of the sets
it can represent. This paper first describes the quotient filter, which
supports the basic operations of the Bloom filter, achieving roughly comparable
performance in terms of space and time, but with better data locality.
Operations on the quotient filter require only a small number of contiguous
accesses. The quotient filter has other advantages over the Bloom filter: it
supports deletions, it can be dynamically resized, and two quotient filters can
be efficiently merged. The paper then gives two data structures, the buffered
quotient filter and the cascade filter, which exploit the quotient filter
advantages and thus serve as SSD-optimized alternatives to the Bloom filter.
The cascade filter has better asymptotic I/O performance than the buffered
quotient filter, but the buffered quotient filter outperforms the cascade
filter on small to medium data sets. Both data structures significantly
outperform recently-proposed SSD-optimized Bloom filter variants, such as the
elevator Bloom filter, buffered Bloom filter, and forest-structured Bloom
filter. In experiments, the cascade filter and buffered quotient filter
performed insertions 8.6-11 times faster than the fastest Bloom filter variant
and performed lookups 0.94-2.56 times faster.
|
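The Bloom filter that the quotient filter is positioned against can be illustrated in a few lines. This is a generic, hypothetical sketch of the classic data structure, not code from the paper; the bit-array size and hash count are arbitrary:

```python
# Minimal Bloom filter sketch (illustrative only): a bit array plus k hash
# positions derived from a salted cryptographic hash.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=4):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k pseudo-independent positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[p] for p in self._positions(item))
```

Each query touches k scattered bit positions, which is exactly the random-access pattern the abstract identifies as the obstacle to using Bloom filters on flash; the quotient filter's contiguous accesses avoid it.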
1208.0291
|
Learning Expressive Linkage Rules using Genetic Programming
|
cs.DB
|
A central problem in data integration and data cleansing is to find entities
in different data sources that describe the same real-world object. Many
existing methods for identifying such entities rely on explicit linkage rules
which specify the conditions that entities must fulfill in order to be
considered to describe the same real-world object. In this paper, we present
the GenLink algorithm for learning expressive linkage rules from a set of
existing reference links using genetic programming. The algorithm is capable of
generating linkage rules which select discriminative properties for comparison,
apply chains of data transformations to normalize property values, choose
appropriate distance measures and thresholds and combine the results of
multiple comparisons using non-linear aggregation functions. Our experiments
show that the GenLink algorithm outperforms the state-of-the-art genetic
programming approach to learning linkage rules recently presented by Carvalho
et al. and is capable of learning linkage rules that achieve accuracy similar
to that of human-written rules for the same problem.
|
1208.0292
|
Mining Frequent Itemsets over Uncertain Databases
|
cs.DB
|
In recent years, due to the wide applications of uncertain data, mining
frequent itemsets over uncertain databases has attracted much attention. In
uncertain databases, the support of an itemset is a random variable instead of
a fixed occurrence counting of this itemset. Thus, unlike the corresponding
problem in deterministic databases where the frequent itemset has a unique
definition, the frequent itemset under uncertain environments has two different
definitions so far. The first definition, referred to as the expected
support-based frequent itemset, employs the expectation of the support of an
itemset to measure whether this itemset is frequent. The second definition,
referred to as the probabilistic frequent itemset, uses the probability of the
support of an itemset to measure its frequency. Thus, existing work on mining
frequent itemsets over uncertain databases is divided into two different groups
and no study is conducted to comprehensively compare the two different
definitions. In addition, since no uniform experimental platform exists,
current solutions for the same definition even generate inconsistent results.
In this paper, we first aim to clarify the relationship between the two
definitions. Through extensive experiments, we verify that the two
definitions have a tight connection and can be unified when the size of the
data is large enough. Second, we provide baseline implementations of eight
existing representative algorithms and fairly test their performance with
uniform measures. Finally, according to the fair tests over many different
benchmark data sets, we clarify several existing inconsistent conclusions and
discuss some new findings.
|
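The expected-support definition from the abstract can be made concrete with a toy example. Under the usual assumption that item existence probabilities are independent, the expected support of an itemset is the sum over transactions of the product of the itemset's item probabilities. The transaction data and helper below are hypothetical:

```python
def expected_support(db, itemset):
    # db: list of transactions, each mapping item -> existence probability.
    # P(transaction contains itemset) = product of item probabilities
    # (independence assumption); expected support sums this over transactions.
    total = 0.0
    for txn in db:
        p = 1.0
        for item in itemset:
            p *= txn.get(item, 0.0)
        total += p
    return total

db = [{"a": 0.9, "b": 0.5}, {"a": 0.4}, {"a": 1.0, "b": 1.0}]
print(expected_support(db, {"a", "b"}))  # 0.9*0.5 + 0.0 + 1.0 = 1.45
```

The probabilistic-frequent definition instead asks for the full distribution of the (random) support count, which is why the two definitions lead to different algorithm families.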
1208.0293
|
The Distributed Ontology Language (DOL): Use Cases, Syntax, and
Extensibility
|
cs.AI cs.DL cs.LO
|
The Distributed Ontology Language (DOL) is currently being standardized
within the OntoIOp (Ontology Integration and Interoperability) activity of
ISO/TC 37/SC 3. It aims at providing a unified framework for (1) ontologies
formalized in heterogeneous logics, (2) modular ontologies, (3) links between
ontologies, and (4) annotation of ontologies. This paper presents the current
state of DOL's standardization. It focuses on use cases where distributed
ontologies enable interoperability and reusability. We demonstrate relevant
features of the DOL syntax and semantics and explain how these integrate into
existing knowledge engineering environments.
|
1208.0318
|
Artificial Neural Network Based Prediction of Optimal Pseudo-Damping and
Meta-Damping in Oscillatory Fractional Order Dynamical Systems
|
cs.SY cs.NE
|
This paper investigates typical behaviors like damped oscillations in
fractional order (FO) dynamical systems. Such responses occur due to the
presence of what is conceived as pseudo-damping and meta-damping in a
special class of FO systems. Here, approximation of such damped oscillations in
FO systems with the conventional notion of integer order damping and time
constant has been carried out using Genetic Algorithm (GA). Next, a multilayer
feed-forward Artificial Neural Network (ANN) has been trained using the GA
based results to predict the optimal pseudo and meta-damping from knowledge of
the maximum order or number of terms in the FO dynamical system.
|
1208.0326
|
Logarithmic Lipschitz norms and diffusion-induced instability
|
cs.SY math.AP
|
This paper proves that contractive ordinary differential equation systems
remain contractive when diffusion is added. Thus, diffusive instabilities, in
the sense of the Turing phenomenon, cannot arise for such systems. An important
biochemical system is shown to satisfy the required conditions.
|
1208.0353
|
Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries
|
cs.IT math.IT
|
Compressive sensing (CS) has recently emerged as a powerful framework for
acquiring sparse signals. The bulk of the CS literature has focused on the case
where the acquired signal has a sparse or compressible representation in an
orthonormal basis. In practice, however, there are many signals that cannot be
sparsely represented or approximated using an orthonormal basis, but that do
have sparse representations in a redundant dictionary. Standard results in CS
can sometimes be extended to handle this case provided that the dictionary is
sufficiently incoherent or well-conditioned, but these approaches fail to
address the case of a truly redundant or overcomplete dictionary. In this paper
we describe a variant of the iterative recovery algorithm CoSaMP for this more
challenging setting. We utilize the D-RIP, a condition on the sensing matrix
analogous to the well-known restricted isometry property. In contrast to prior
work, the method and analysis are "signal-focused"; that is, they are oriented
around recovering the signal rather than its dictionary coefficients. Under the
assumption that we have a near-optimal scheme for projecting vectors in signal
space onto the model family of candidate sparse signals, we provide provable
recovery guarantees. Developing a practical algorithm that can provably compute
the required near-optimal projections remains a significant open problem, but
we include simulation results using various heuristics that empirically exhibit
superior performance to traditional recovery algorithms.
|
1208.0359
|
An Automat for the Semantic Processing of Structured Information
|
cs.IR cs.DL
|
Using the database of the PuertoTerm project, an indexing system based on the
cognitive model of Brigitte Enders was built. By analyzing the cognitive
strategies of three abstractors, we built an automaton that simulates human
indexing processes. The automaton allows the texts integrated in the system
to be assessed, evaluated, and grouped by means of the bipartite spectral
graph partitioning algorithm, which also permits visualization of the terms
and the documents. The system features an ontology and a database that
enhance its operation. As a result of the application, we achieved greater
exhaustivity in the indexing of documents, as well as greater precision in
information retrieval, with high levels of efficiency.
|
1208.0378
|
Fast Planar Correlation Clustering for Image Segmentation
|
cs.CV cs.DS cs.LG stat.ML
|
We describe a new optimization scheme for finding high-quality correlation
clusterings in planar graphs that uses weighted perfect matching as a
subroutine. Our method provides lower bounds on the energy of the optimal
correlation clustering that are typically fast to compute and tight in
practice. We demonstrate our algorithm on the problem of image segmentation
where this approach outperforms existing global optimization techniques in
minimizing the objective and is competitive with the state of the art in
producing high-quality segmentations.
|
1208.0385
|
A phase-sensitive method for filtering on the sphere
|
math.RT cs.CV
|
This paper examines filtering on a sphere, by first examining the roles of
spherical harmonic magnitude and phase. We show that phase is more important
than magnitude in determining the structure of a spherical function. We examine
the properties of linear phase shifts in the spherical harmonic domain, which
suggest a mechanism for constructing finite-impulse-response (FIR) filters. We
show that those filters have desirable properties, such as being associative,
mapping spherical functions to spherical functions, allowing directional
filtering, and being defined by relatively simple equations. We provide
examples of the filters for both spherical and manifold data.
|
1208.0393
|
Classification of a family of completely transitive codes
|
math.CO cs.IT math.IT
|
The completely regular codes in Hamming graphs have a high degree of
combinatorial symmetry and have attracted a lot of interest since their
introduction in 1973 by Delsarte. This paper studies the subfamily of
completely transitive codes, those in which an automorphism group is transitive
on each part of the distance partition. This family is a natural generalisation
of the binary completely transitive codes introduced by Sole in 1990. We take
the first step towards a classification of these codes, determining those for
which the automorphism group is faithful on entries.
|
1208.0402
|
Multidimensional Membership Mixture Models
|
cs.LG stat.ML
|
We present the multidimensional membership mixture (M3) models where every
dimension of the membership represents an independent mixture model and each
data point is generated from the selected mixture components jointly. This is
helpful when the data has a certain shared structure. For example, three unique
means and three unique variances can effectively form a Gaussian mixture model
with nine components, while requiring only six parameters to fully describe it.
In this paper, we present three instantiations of M3 models (together with the
learning and inference algorithms): infinite, finite, and hybrid, depending on
whether the number of mixtures is fixed or not. They are built upon Dirichlet
process mixture models, latent Dirichlet allocation, and a combination
respectively. We then consider two applications: topic modeling and learning 3D
object arrangements. Our experiments show that our M3 models achieve better
performance using fewer topics than many classic topic models. We also observe
that topics from the different dimensions of M3 models are meaningful and
orthogonal to each other.
|
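The parameter-sharing arithmetic in the M3 abstract (nine Gaussian components described by six parameters) can be checked directly. The specific means and variances below are made up for illustration:

```python
from itertools import product

means = [-2.0, 0.0, 2.0]       # 3 parameters
variances = [0.5, 1.0, 4.0]    # 3 parameters

# Every (mean, variance) pair defines one Gaussian component, so two
# independent 3-way membership dimensions jointly yield a 9-component
# mixture while only 6 parameters are stored.
components = list(product(means, variances))
print(len(components))               # 9 components
print(len(means) + len(variances))   # 6 stored parameters
```

This cross-product structure is the "shared structure" the abstract refers to: each membership dimension is its own mixture, and a data point picks one component per dimension.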
1208.0414
|
Grey Power Models Based on Optimization of Initial Condition and Model
Parameters
|
cs.SY math.OC
|
In this paper we propose a novel approach to improving the prediction
accuracy of grey power models, including GM(1,1) and the grey Verhulst model,
through optimization of the initial condition and model parameters, and we
propose a modified grey Verhulst model. The new initial condition is a
combination of the first item and the last item of the sequence generated by
applying the first-order accumulative generation operator to the raw data.
The weighting coefficients of the first and last items in this combination
are derived by minimizing the sum of squared errors. We show that the newly
modified grey power model is an extension of previously optimized GM(1,1) and
grey Verhulst models. The new optimized initial condition fully expresses the
new-information-priority principle emphasized in grey systems theory. A
numerical example indicates that the modified grey model presented in this
paper obtains better prediction performance than the original grey model.
|
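The first-order accumulative generation operator (1-AGO) on which the new initial condition is built is simply a running cumulative sum of the raw series. A toy illustration, with made-up data:

```python
def ago(xs):
    # 1-AGO: x1(k) = sum of x0(1..k). The accumulated series smooths the
    # raw data before the grey model is fitted to it.
    out, total = [], 0.0
    for x in xs:
        total += x
        out.append(total)
    return out

raw = [2.0, 3.0, 5.0, 4.0]
seq = ago(raw)
print(seq)               # [2.0, 5.0, 10.0, 14.0]
print(seq[0], seq[-1])   # the first and last items combined in the
                         # optimized initial condition
```

The abstract's initial condition is a weighted combination of `seq[0]` and `seq[-1]`, with weights chosen by least squares; the heavier weight on the last item reflects the new-information-priority principle.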
1208.0432
|
Efficient Point-to-Subspace Query in $\ell^1$ with Application to Robust
Object Instance Recognition
|
cs.CV cs.LG stat.ML
|
Motivated by vision tasks such as robust face and object recognition, we
consider the following general problem: given a collection of low-dimensional
linear subspaces in a high-dimensional ambient (image) space, and a query point
(image), efficiently determine the nearest subspace to the query in $\ell^1$
distance. In contrast to naive exhaustive search, which entails large-scale
linear programs, we show that the computational burden can be cut down
significantly by a simple two-stage algorithm: (1) projecting the query and
database subspaces into a lower-dimensional space with a random Cauchy
matrix, and solving small-scale distance evaluations (linear programs) in the
projection space to locate candidate nearest subspaces; (2) with the few
candidates obtained from independent repetitions of (1), returning to the
high-dimensional space and performing an exhaustive search over them. To
preserve the identity of the nearest subspace with nontrivial probability,
the projection dimension is typically a low-order polynomial of the subspace
dimension multiplied by the logarithm of the number of subspaces (Theorem
2.1). The reduced dimensionality, and hence complexity, renders the proposed
algorithm particularly relevant to vision applications such as robust face
and object instance recognition, which we investigate empirically.
|
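Stage (1) of the algorithm hinges on projecting with a matrix of i.i.d. standard Cauchy entries, the $\ell^1$ analogue of Gaussian random projection. A minimal sketch of that projection step, with made-up dimensions and no claim to match the paper's parameter choices:

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 500, 10          # ambient and projection dimensions (illustrative)
x = rng.standard_normal(D)
y = rng.standard_normal(D)

# Projection matrix with entries drawn i.i.d. from the standard Cauchy
# distribution; this family is 1-stable, which underlies l1 embedding.
P = rng.standard_cauchy((d, D))

dist_high = np.abs(x - y).sum()          # l1 distance in ambient space
dist_low = np.abs(P @ x - P @ y).sum()   # l1 distance after projection
```

The small-scale distance evaluations in the abstract would then be solved on `P @ x`-style projected data, with the surviving candidates re-checked at full dimension.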
1208.0435
|
Outage Probability of Dual-Hop Multiple Antenna AF Relaying Systems with
Interference
|
cs.IT math.IT
|
This paper presents an analytical investigation on the outage performance of
dual-hop multiple antenna amplify-and-forward relaying systems in the presence
of interference. For both the fixed-gain and variable-gain relaying schemes,
exact analytical expressions for the outage probability of the systems are
derived. Moreover, simple outage probability approximations in the high
signal-to-noise ratio regime are provided, and the diversity order achieved
by the systems is characterized. Our results suggest that variable-gain relaying
systems always outperform the corresponding fixed-gain relaying systems. In
addition, the fixed-gain relaying schemes only achieve diversity order of one,
while the achievable diversity order of the variable-gain relaying scheme
depends on the location of the multiple antennas.
|
1208.0451
|
Directed Random Markets: Connectivity determines Money
|
nlin.AO cs.MA q-fin.TR
|
The Boltzmann-Gibbs distribution arises as the statistical equilibrium
probability distribution of money among the agents of a closed economic
system in which random and undirected exchanges are allowed. When a model
with uniform savings in the exchanges is considered, the final distribution
is close to the gamma family. In this work, we implement these exchange rules
on networks and find that the stationary probability distributions are robust
and are not affected by the topology of the underlying network. We then
introduce a new family of interactions: random but directed ones. In this
case, the topology is found to be decisive: the mean money per economic agent
is related to the degree of the node representing the agent in the network,
and this relation is shown to be linear.
|
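The random undirected exchange rule behind the Boltzmann-Gibbs equilibrium can be simulated in a few lines. This is a generic kinetic-exchange sketch (fully connected, no savings), with illustrative parameters rather than the paper's setup:

```python
import random

random.seed(1)
N, steps = 100, 20000
money = [100.0] * N   # every agent starts with the same amount

for _ in range(steps):
    i, j = random.sample(range(N), 2)   # pick a random undirected pair
    pot = money[i] + money[j]
    share = random.random()             # random split of the combined pot
    money[i], money[j] = share * pot, (1 - share) * pot
```

Total money is conserved by construction, and the empirical distribution of `money` approaches the exponential (Boltzmann-Gibbs) form; the paper's directed variant changes which agent keeps the pot based on edge direction, making node degree matter.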
1208.0468
|
Probabilistic interconnection between interdependent networks promotes
cooperation in the public goods game
|
physics.soc-ph cond-mat.stat-mech cs.GT cs.SI
|
Most previous works study the evolution of cooperation in a structured
population by commonly employing an isolated single network. However, realistic
systems are composed of many interdependent networks coupled with each other,
rather than an isolated single one. In this paper, we consider a system
of two interacting networks of the same size, entangled with each
other by the introduction of probabilistic interconnections. We introduce the
public goods game into such a system and study how the probabilistic
interconnection influences the evolution of cooperation of the whole system and
the coupling effect between two layers of interdependent networks. Simulation
results show that there exists an intermediate region of interconnection
probability leading to the maximum cooperation level in the whole system.
Interestingly, we find that at the optimal interconnection probability the
fraction of internal links between cooperators in two layers is maximal. Also,
even if initially there are no cooperators in one layer of interdependent
networks, cooperation can still be promoted by probabilistic interconnection,
and the cooperation levels in both layers can more easily reach an agreement at
the intermediate interconnection probability. Our results may be helpful in
understanding the cooperative behavior in some realistic interdependent
networks and thus highlight the importance of probabilistic interconnection on
the evolution of cooperation.
|
1208.0482
|
The concurrent evolution of cooperation and the population structures
that support it
|
q-bio.PE cs.SI physics.soc-ph
|
The evolution of cooperation often depends upon population structure, yet
nearly all models of cooperation implicitly assume that this structure remains
static. This is a simplifying assumption, because most organisms possess
genetic traits that affect their population structure to some degree. These
traits, such as a group size preference, affect the relatedness of interacting
individuals and hence the opportunity for kin or group selection. We argue that
models that do not explicitly consider their evolution cannot provide a
satisfactory account of the origin of cooperation, because they cannot explain
how the prerequisite population structures arise. Here, we consider the
concurrent evolution of genetic traits that affect population structure, with
those that affect social behavior. We show that not only does population
structure drive social evolution, as in previous models, but that the
opportunity for cooperation can in turn drive the creation of population
structures that support it. This occurs through the generation of linkage
disequilibrium between socio-behavioral and population-structuring traits, such
that direct kin selection on social behavior creates indirect selection
pressure on population structure. We illustrate our argument with a model of
the concurrent evolution of group size preference and social behavior.
|
1208.0526
|
Optimization hardness as transient chaos in an analog approach to
constraint satisfaction
|
cs.CC cs.NE math.DS nlin.CD physics.comp-ph
|
Boolean satisfiability [1] (k-SAT) is one of the most studied optimization
problems, as an efficient (that is, polynomial-time) solution to k-SAT (for
$k\geq 3$) implies efficient solutions to a large number of hard optimization
problems [2,3]. Here we propose a mapping of k-SAT into a deterministic
continuous-time dynamical system with a unique correspondence between its
attractors and the k-SAT solution clusters. We show that beyond a constraint
density threshold, the analog trajectories become transiently chaotic [4-7],
and the boundaries between the basins of attraction [8] of the solution
clusters become fractal [7-9], signaling the appearance of optimization
hardness [10]. Analytical arguments and simulations indicate that the system
always finds solutions for satisfiable formulae even in the frozen regimes of
random 3-SAT [11] and of locked occupation problems [12] (considered among the
hardest algorithmic benchmarks); a property partly due to the system's
hyperbolic [4,13] character. The system finds solutions in polynomial
continuous-time, however, at the expense of exponential fluctuations in its
energy function.
|
1208.0541
|
A hybrid artificial immune system and Self Organising Map for network
intrusion detection
|
cs.NE cs.CR
|
Network intrusion detection is the problem of detecting unauthorised use of,
or access to, computer systems over a network. Two broad approaches exist to
tackle this problem: anomaly detection and misuse detection. An anomaly
detection system is trained only on examples of normal connections, and thus
has the potential to detect novel attacks. However, many anomaly detection
systems simply report the anomalous activity, rather than analysing it further
in order to report higher-level information that is of more use to a security
officer. On the other hand, misuse detection systems recognise known attack
patterns, thereby allowing them to provide more detailed information about an
intrusion. However, such systems cannot detect novel attacks.
A hybrid system is presented in this paper with the aim of combining the
advantages of both approaches. Specifically, anomalous network connections are
initially detected using an artificial immune system. Connections that are
flagged as anomalous are then categorised using a Kohonen Self Organising Map,
allowing higher-level information, in the form of cluster membership, to be
extracted. Experimental results on the KDD 1999 Cup dataset show a low false
positive rate and a detection and classification rate for Denial-of-Service and
User-to-Root attacks that is higher than those in a sample of other works.
|
1208.0562
|
Learning the Interference Graph of a Wireless Network
|
cs.IT math.IT
|
A key challenge in wireless networking is the management of interference
between transmissions. Identifying which transmitters interfere with each other
is a crucial first step. In this paper we cast the task of estimating the
wireless interference environment as a graph learning problem. Nodes represent
transmitters and edges represent the presence of interference between pairs of
transmitters. We passively observe network traffic transmission patterns and
collect information on transmission successes and failures. We establish bounds
on the number of observations (each a snapshot of a network traffic pattern)
required to identify the interference graph reliably with high probability.
Our main results are scaling laws that tell us how the number of observations
must grow in terms of the total number of nodes $n$ in the network and the
maximum number of interfering transmitters $d$ per node (maximum node degree).
The effects of hidden terminal interference (i.e., interference not detectable
via carrier sensing) on the observation requirements are also quantified. We
show that to identify the graph it is necessary and sufficient that the
observation period grows like $d^2 \log n$, and we propose a practical
algorithm that reliably identifies the graph from this length of observation.
The observation requirements scale quite mildly with network size, and networks
with sparse interference (small $d$) can be identified more rapidly.
Computational experiments based on realistic simulations of the traffic and
protocol lend additional support to these conclusions.
|
1208.0564
|
Detection of Deviations in Mobile Applications Network Behavior
|
cs.CR cs.LG
|
In this paper a novel system for detecting meaningful deviations in a mobile
application's network behavior is proposed. The main goal of the proposed
system is to protect mobile device users and cellular infrastructure companies
from malicious applications. The new system is capable of: (1) identifying
malicious attacks or masquerading applications installed on a mobile device,
and (2) identifying republishing of popular applications injected with a
malicious code. The detection is performed based on the application's network
traffic patterns only. For each application two types of models are learned.
The first model, local, represents the personal traffic pattern for each user
using an application and is learned on the device. The second model,
collaborative, represents traffic patterns of numerous users using an
application and is learned on the system server. Machine-learning methods are
used for learning and detection purposes. This paper focuses on methods
utilized for local (i.e., on mobile device) learning and detection of
deviations from the normal application's behavior. These methods were
implemented and evaluated on Android devices. The evaluation experiments
demonstrate that: (1) various applications have specific network traffic
patterns and certain application categories can be distinguishable by their
network patterns, (2) different levels of deviations from normal behavior can
be detected accurately, and (3) local learning is feasible and has a low
performance overhead on mobile devices.
|
1208.0573
|
Invariants for Homology Classes with Application to Optimal Search and
Planning Problem in Robotics
|
math.AT cs.RO
|
We consider planning problems on punctured Euclidean spaces, $\mathbb{R}^D
- \widetilde{\mathcal{O}}$, where $\widetilde{\mathcal{O}}$ is a collection of
obstacles. Such spaces are of frequent occurrence as configuration spaces of
robots, where $\widetilde{\mathcal{O}}$ represent either physical obstacles
that the robots need to avoid (e.g., walls, other robots, etc.) or illegal
states (e.g., all legs off-the-ground). As state-planning is translated to
path-planning on a configuration space, we collate equivalent plannings via
topologically-equivalent paths. This prompts finding or exploring the different
homology classes in such environments and finding representative optimal
trajectories in each such class.
In this paper we start by considering the problem of finding a complete set
of easily computable homology class invariants for $(N-1)$-cycles in
$(\mathbb{R}^D - \widetilde{\mathcal{O}})$. We achieve this by finding explicit
generators of the $(N-1)^{st}$ de Rham cohomology group of this punctured
Euclidean space, and using their integrals to define cocycles. The action of
those dual cocycles on $(N-1)$-cycles gives the desired complete set of
invariants. We illustrate the computation through examples.
We further show that, due to the integral approach, this complete set of
invariants is well-suited for efficient search-based planning of optimal robot
trajectories with topological constraints. Finally, we extend this approach to
the computation of invariants in spaces derived from $(\mathbb{R}^D -
\widetilde{\mathcal{O}})$ by collapsing subspaces, thereby permitting
application to a wider class of non-Euclidean ambient spaces.
|
1208.0588
|
Betweenness Preference: Quantifying Correlations in the Topological
Dynamics of Temporal Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We study correlations in temporal networks and introduce the notion of
betweenness preference. It quantifies to what extent paths that exist in
time-aggregated representations of temporal networks are actually realizable
based on the sequence of interactions. We show that betweenness preference is
present in empirical temporal network data and that it influences the length of
shortest time-respecting paths. Using four different data sets, we further
argue that neglecting betweenness preference leads to wrong conclusions about
dynamical processes on temporal networks.
|
1208.0593
|
The green grid saga - a green initiative to data centers: a review
|
cs.SY cs.DC
|
Information Technology (IT) significantly impacts the environment throughout
its life cycle, yet most enterprises have not paid enough attention to this
until recently. IT's environmental impact can be significantly reduced by
behavioral changes as well as technology changes. Given the relative energy
and materials inefficiency of most IT infrastructures today, many green IT
initiatives can be tackled easily at no incremental cost. The Green Grid, a
non-profit trade organization of IT professionals, is one such initiative,
formed to address the issues of power and cooling in data centers scattered
worldwide. The Green Grid seeks to define best practices for optimizing the
efficient consumption of power at the IT equipment and facility levels, as
well as the manner in which cooling is delivered at these levels, thereby
offering a promising approach to reducing environmental hazards and moving
toward a new era of green computing. In this paper we review the various
analytical aspects of The Green Grid with respect to data centers and present
the green facts found.
|
1208.0631
|
Economics of Electric Vehicle Charging: A Game Theoretic Approach
|
cs.GT cs.IT math.IT
|
In this paper, the problem of grid-to-vehicle energy exchange between a smart
grid and plug-in electric vehicle groups (PEVGs) is studied using a
noncooperative Stackelberg game. In this game, on the one hand, the smart grid
that acts as a leader, needs to decide on its price so as to optimize its
revenue while ensuring the PEVGs' participation. On the other hand, the PEVGs,
which act as followers, need to decide on their charging strategies so as to
optimize a tradeoff between the benefit from battery charging and the
associated cost. Using variational inequalities, it is shown that the proposed
game possesses a socially optimal Stackelberg equilibrium in which the grid
optimizes its price while the PEVGs choose their equilibrium strategies. A
distributed algorithm that enables the PEVGs and the smart grid to reach this
equilibrium is proposed and assessed by extensive simulations. Further, the
model is extended to a time-varying case that can incorporate and handle slowly
varying environments.
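The leader-follower structure can be sketched with a toy model (hypothetical logarithmic PEVG utilities and an assumed unit marginal cost for the grid; this is an illustrative stand-in, not the paper's game):

```python
import numpy as np

b = np.array([2.0, 3.0, 5.0])    # hypothetical PEVG benefit parameters
c = 1.0                          # assumed marginal cost of energy for the grid

def follower_demand(p):
    # Each PEVG (follower) maximizes b*log(1+x) - p*x, giving x = max(b/p - 1, 0)
    return np.maximum(b / p - 1.0, 0.0)

# The leader (smart grid) searches for the price maximizing its profit,
# anticipating the followers' best responses
prices = np.linspace(1.01, 5.0, 1000)
profit = np.array([(p - c) * follower_demand(p).sum() for p in prices])
p_star = prices[profit.argmax()]
x_star = follower_demand(p_star)
```

At the resulting Stackelberg point, each active PEVG's marginal benefit b/(1+x) equals the announced price, mirroring the equilibrium condition of the game.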
|
1208.0645
|
On the Consistency of AUC Pairwise Optimization
|
cs.LG stat.ML
|
AUC (area under ROC curve) is an important evaluation criterion, which has
been popularly used in many learning tasks such as class-imbalance learning,
cost-sensitive learning, learning to rank, etc. Many learning approaches try to
optimize AUC, but owing to the non-convexity and discontinuity of AUC,
almost all approaches work with surrogate loss functions. Thus, the consistency
of AUC optimization is crucial; however, it has received little attention. In this paper,
we provide a sufficient condition for the asymptotic consistency of learning
approaches based on surrogate loss functions. Based on this result, we prove
that exponential loss and logistic loss are consistent with AUC, but hinge loss
is inconsistent. Then, we derive the $q$-norm hinge loss and general hinge loss
that are consistent with AUC. We also derive the consistent bounds for
exponential loss and logistic loss, and obtain the consistent bounds for many
surrogate loss functions under the noise-free setting. Further, we disclose an
equivalence between the exponential surrogate loss of AUC and the exponential
surrogate loss of accuracy; one straightforward consequence of this finding
is that AdaBoost and RankBoost are equivalent.
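A minimal sketch of the pairwise surrogate losses in question (illustrative scores and labels; the consistency analysis itself is in the paper):

```python
import numpy as np

def pairwise_surrogate_loss(scores, labels, phi):
    # Average phi(s_pos - s_neg) over all positive/negative pairs
    pos, neg = scores[labels == 1], scores[labels == 0]
    return phi(pos[:, None] - neg[None, :]).mean()

def auc(scores, labels):
    # Empirical AUC: fraction of correctly ordered pairs (ties count half)
    pos, neg = scores[labels == 1], scores[labels == 0]
    d = pos[:, None] - neg[None, :]
    return ((d > 0) + 0.5 * (d == 0)).mean()

scores = np.array([2.0, 1.0, 0.5, -1.0])
labels = np.array([1, 1, 0, 0])
# Exponential loss is consistent with AUC; hinge loss is not
exp_loss = pairwise_surrogate_loss(scores, labels, lambda t: np.exp(-t))
hinge_loss = pairwise_surrogate_loss(scores, labels, lambda t: np.maximum(0.0, 1.0 - t))
```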
|
1208.0651
|
Fast and Accurate Algorithms for Re-Weighted L1-Norm Minimization
|
stat.CO cs.IT math.IT stat.ML
|
To recover a sparse signal from an underdetermined system, we often solve a
constrained L1-norm minimization problem. In many cases, the signal sparsity
and the recovery performance can be further improved by replacing the L1 norm
with a "weighted" L1 norm. Without any prior information about nonzero elements
of the signal, the procedure for selecting weights is iterative in nature.
Common approaches update the weights at every iteration using the solution of a
weighted L1 problem from the previous iteration.
In this paper, we present two homotopy-based algorithms that efficiently
solve reweighted L1 problems. First, we present an algorithm that quickly
updates the solution of a weighted L1 problem as the weights change. Since the
solution changes only slightly with small changes in the weights, we develop a
homotopy algorithm that replaces the old weights with the new ones in a small
number of computationally inexpensive steps. Second, we propose an algorithm
that solves a weighted L1 problem by adaptively selecting the weights while
estimating the signal. This algorithm integrates the reweighting into every
step along the homotopy path by changing the weights according to the changes
in the solution and its support, allowing us to achieve a high quality signal
reconstruction by solving a single homotopy problem. We compare the performance
of both algorithms, in terms of reconstruction accuracy and computational
complexity, against state-of-the-art solvers and show that our methods have
smaller computational cost. In addition, we will show that the adaptive
selection of the weights inside the homotopy often yields reconstructions of
higher quality.
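The generic reweighting loop being accelerated (not the paper's homotopy algorithms; a plain ISTA inner solver with the usual w = 1/(|x| + eps) update, on synthetic data) looks like:

```python
import numpy as np

def ista_weighted_l1(A, y, w, lam=0.05, iters=1000):
    # Solve min_x 0.5*||Ax - y||^2 + lam * sum_i w_i*|x_i| by proximal gradient
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # underdetermined system
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
y = A @ x_true

w = np.ones(100)
for _ in range(4):                           # outer reweighting iterations
    x = ista_weighted_l1(A, y, w)
    w = 1.0 / (np.abs(x) + 1e-2)             # smaller entries get larger weights
```

The homotopy algorithms in the paper replace the cold restart of the inner solve with a small number of inexpensive path-following steps as the weights change.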
|
1208.0661
|
Private Quantum Coding for Quantum Relay Networks
|
quant-ph cs.IT math.IT
|
The relay encoder is an unreliable probabilistic device which is aimed at
helping the communication between the sender and the receiver. In this work we
show that in the quantum setting the probabilistic behavior can be completely
eliminated. We also show how to combine quantum polar encoding with
superactivation-assistance in order to achieve reliable and capacity-achieving
private communication over noisy quantum relay channels.
|
1208.0684
|
Comparative Evaluation of Data Stream Indexing Models
|
cs.DB
|
In recent years, the management and processing of data streams has become a
topic of active research in several fields of computer science, such as
distributed systems, database systems, and data mining. A data stream can be
thought of as a transient, continuously increasing sequence of data. In data
stream applications, because of online monitoring, answering users' queries
should be time and space efficient. In this paper, we consider the special
requirements of indexing in order to determine the performance of different
techniques in data stream processing environments. Stream indexing differs in
important ways from indexing in traditional databases. We also compare data
stream indexing models analytically, which can suggest a suitable method for
stream indexing.
|
1208.0690
|
Semantic Web Requirements through Web Mining Techniques
|
cs.IR cs.DL
|
In recent years, the Semantic Web has become a topic of active research in
several fields of computer science and has been applied in a wide range of
domains such as bioinformatics, life sciences, and knowledge management. The
two fast-developing research areas, the semantic web and web mining, can
complement each other, and their different techniques can be used jointly or
separately to solve the issues in both areas. In addition, since shifting from
the current web to the semantic web mainly depends on the enhancement of
knowledge, web mining can play a key role in facing the numerous challenges of
this transition. In this paper, we analyze and classify the application of
diverse web mining techniques to different challenges of the semantic web in
the form of an analytical framework.
|
1208.0782
|
Wisdom of the Crowd: Incorporating Social Influence in Recommendation
Models
|
cs.IR cs.LG cs.SI physics.soc-ph
|
Recommendation systems have received considerable attention recently.
However, most research has focused on improving the performance of
collaborative filtering (CF) techniques. Social networks provide indispensable
extra information on people's preferences, and should be considered
and deployed to improve the quality of recommendations. In this paper, we
propose two recommendation models, for individuals and for groups respectively,
based on social contagion and social influence network theory. In the
recommendation model for individuals, we improve the result of collaborative
filtering prediction with social contagion outcome, which simulates the result
of information cascade in the decision-making process. In the recommendation
model for groups, we apply social influence network theory to take
interpersonal influence into account to form a settled pattern of disagreement,
and then aggregate opinions of group members. By introducing the concept of
susceptibility and interpersonal influence, the settled rating results are
flexible, and inclined to members whose ratings are "essential".
|
1208.0787
|
A Random Walk Based Model Incorporating Social Information for
Recommendations
|
cs.IR cs.LG
|
Collaborative filtering (CF) is one of the most popular approaches to build a
recommendation system. In this paper, we propose a hybrid collaborative
filtering model based on a Markovian random walk to address the data sparsity
and cold start problems in recommendation systems. More precisely, we construct
a directed graph whose nodes consist of items and users, together with item
content, user profile and social network information. We incorporate user's
ratings into edge settings in the graph model. The model provides personalized
recommendations and predictions to individuals and groups. The proposed
algorithms are evaluated on MovieLens and Epinions datasets. Experimental
results show that the proposed methods perform well compared with other
graph-based methods, especially in the cold start case.
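A random walk with restart on a rating-weighted user-item graph captures the core mechanism (a toy sketch with hypothetical data; the paper's full model also adds item content, user profile and social network nodes):

```python
import numpy as np

# Nodes 0-2 are users, nodes 3-6 are items; ratings become edge weights.
W = np.zeros((7, 7))
for u, i, r in [(0, 3, 5), (0, 4, 3), (1, 3, 4), (1, 5, 5), (2, 5, 2), (2, 6, 4)]:
    W[u, i] = W[i, u] = r                    # undirected, rating-weighted edges

P = W / W.sum(axis=1, keepdims=True)         # row-stochastic transition matrix
alpha, target = 0.15, 0                      # restart probability, target user
restart = np.zeros(7)
restart[target] = 1.0
pi = restart.copy()
for _ in range(200):                         # power iteration with restart
    pi = alpha * restart + (1 - alpha) * pi @ P

seen = {3, 4}                                # items user 0 has already rated
recs = sorted((i for i in [3, 4, 5, 6] if i not in seen), key=lambda i: -pi[i])
```

Item 5 ranks above item 6 for user 0 because it is reachable through a shorter walk via a like-minded user.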
|
1208.0803
|
A Novel Approach of Color Image Hiding using RGB Color planes and DWT
|
cs.CR cs.CV
|
This work proposes a wavelet based Steganographic technique for the color
image. The true color cover image and the true color secret image both are
decomposed into three separate color planes namely R, G and B. Each plane of
the images is decomposed into four sub bands using DWT. Each color plane of the
secret image is hidden by alpha blending technique in the corresponding sub
bands of the respective color planes of the original image. During embedding,
secret image is dispersed within the original image depending upon the alpha
value. Extraction of the secret image varies according to the alpha value. In
this approach the stego image generated is of acceptable level of
imperceptibility and distortion compared to the cover image and the overall
security is high.
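The embedding and cover-aware extraction can be sketched per color plane with a one-level Haar DWT and alpha blending (random arrays stand in for image planes, and the alpha value is hypothetical; an assumed minimal stand-in for the proposed pipeline):

```python
import numpy as np

def haar2d(x):
    # One-level 2D Haar DWT: returns LL, LH, HL, HH sub-bands
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2d(ll, lh, hl, hh):
    # Inverse one-level 2D Haar DWT (perfect reconstruction)
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

rng = np.random.default_rng(3)
cover = rng.uniform(0, 255, (8, 8))          # one color plane of the cover
secret = rng.uniform(0, 255, (8, 8))         # one color plane of the secret
alpha = 0.95                                 # hypothetical blending factor

# Embed: alpha-blend each secret sub-band into the matching cover sub-band
bands_c, bands_s = haar2d(cover), haar2d(secret)
stego = ihaar2d(*[alpha * bc + (1 - alpha) * bs
                  for bc, bs in zip(bands_c, bands_s)])

# Extract (cover-aware): invert the blend in the wavelet domain
bands_g = haar2d(stego)
recovered = ihaar2d(*[(bg - alpha * bc) / (1 - alpha)
                      for bg, bc in zip(bands_g, bands_c)])
```

Because both the Haar transform and the blend are linear, the stego plane stays within (1 - alpha) of the cover's dynamic range, which is what keeps the distortion at an acceptable level.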
|
1208.0805
|
On the control of abelian group codes with information group of prime
order
|
cs.IT math.GR math.IT
|
Finite State Machine (FSM) model is widely used in the construction of binary
convolutional codes. If Z_2={0,1} is the binary mod-2 addition group and
(Z_2)^n is the n-times direct product of Z_2, then a binary convolutional
encoder, with rate (k/n)< 1 and memory m, is a FSM with (Z_2)^k as inputs
group, (Z_2)^n as outputs group and (Z_2)^m as states group. The next state
mapping nu:[(Z_2)^k x (Z_2)^m] --> (Z_2)^m is a surjective group homomorphism.
The encoding mapping omega:[(Z_2)^k x (Z_2)^m] --> (Z_2)^n is a homomorphism
adequately restricted by the trellis graph produced by nu. The binary
convolutional code is the family of bi-infinite sequences produced by the
binary convolutional encoder. Thus, a convolutional code can be considered as a
dynamical system and it is known that well behaved dynamical systems must be
necessarily controllable. The generalization of binary convolutional encoders
over arbitrary finite groups is made by using the extension of groups, instead
of direct product. In this way, given finite groups U,S and Y, a wide-sense
homomorphic encoder (WSHE) is a FSM with U as inputs group, S as states group,
and Y as outputs group. By denoting (U x S) as the extension of U by S, the
next state homomorphism nu:(U x S) --> S needs to be surjective and the
encoding homomorphism omega:(U x S) --> Y has restrictions given by the trellis
graph produced by nu. The code produced by a WSHE is known as group code. In
this work we will study the case when the extension (U x S) is abelian with U
being Z_p, p a positive prime number. We will show that this class of WSHEs
will produce controllable codes only if the states group S is isomorphic with
(Z_p)^j, for some positive integer j.
|
1208.0806
|
Cross-conformal predictors
|
stat.ML cs.LG
|
This note introduces the method of cross-conformal prediction, which is a
hybrid of the methods of inductive conformal prediction and cross-validation,
and studies its validity and predictive efficiency empirically.
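A minimal regression sketch of the cross-conformal p-value (a fold-wise least-squares fit is an assumed stand-in for the underlying algorithm; data are synthetic):

```python
import numpy as np

def cross_conformal_pvalue(X, y, x_new, y_cand, K=5):
    # Nonconformity score: absolute residual of a model trained on the
    # other K-1 folds. Fold counts are pooled into a single p-value.
    n = len(y)
    folds = np.array_split(np.arange(n), K)
    count = 0
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        scores = np.abs(y[fold] - X[fold] @ coef)     # calibration scores
        score_new = abs(y_cand - x_new @ coef)        # test score, same model
        count += np.sum(scores >= score_new)
    return (count + 1) / (n + 1)

rng = np.random.default_rng(1)
X = np.c_[np.ones(50), rng.uniform(-1, 1, 50)]
y = 2.0 + 3.0 * X[:, 1] + 0.1 * rng.standard_normal(50)
x_new = np.array([1.0, 0.5])
p_good = cross_conformal_pvalue(X, y, x_new, 2.0 + 3.0 * 0.5)  # plausible label
p_bad = cross_conformal_pvalue(X, y, x_new, 10.0)              # implausible label
```

Candidate labels with small p-values are excluded from the prediction set, exactly as in inductive conformal prediction, but every example serves for calibration in some fold.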
|
1208.0848
|
Learning Theory Approach to Minimum Error Entropy Criterion
|
cs.LG stat.ML
|
We consider the minimum error entropy (MEE) criterion and an empirical risk
minimization learning algorithm in a regression setting. A learning theory
approach is presented for this MEE algorithm and explicit error bounds are
provided in terms of the approximation ability and capacity of the involved
hypothesis space when the MEE scaling parameter is large. Novel asymptotic
analysis is conducted for the generalization error associated with Renyi's
entropy and a Parzen window function, to overcome technical difficulties
arising from the essential differences between classical least squares problems
and the MEE setting. A semi-norm and the involved symmetrized least squares
error are introduced, which are related to some ranking algorithms.
|
1208.0864
|
Statistical Results on Filtering and Epi-convergence for Learning-Based
Model Predictive Control
|
math.OC cs.LG cs.SY
|
Learning-based model predictive control (LBMPC) is a technique that provides
deterministic guarantees on robustness, while statistical identification tools
are used to identify richer models of the system in order to improve
performance. This technical note provides proofs that elucidate the reasons for
our choice of measurement model, as well as giving proofs concerning the
stochastic convergence of LBMPC. The first part of this note discusses
simultaneous state estimation and statistical identification (or learning) of
unmodeled dynamics, for dynamical systems that can be described by ordinary
differential equations (ODE's). The second part provides proofs concerning the
epi-convergence of different statistical estimators that can be used with the
learning-based model predictive control (LBMPC) technique. In particular, we
prove results on the statistical properties of a nonparametric estimator that
we have designed to have the correct deterministic and stochastic properties
for numerical implementation when used in conjunction with LBMPC.
|
1208.0874
|
A Projection Argument for Differential Inclusions, with Applications to
Persistence of Mass-Action Kinetics
|
math.DS cs.SY q-bio.MN
|
Motivated by questions in mass-action kinetics, we introduce the notion of
vertexical family of differential inclusions. Defined on open hypercubes, these
families are characterized by particularly good behavior under projection maps.
The motivating examples are certain families of reaction networks -- including
reversible, weakly reversible, endotactic, and strongly endotactic reaction
networks -- that give rise to vertexical families of mass-action differential
inclusions. We prove that vertexical families are amenable to structural
induction. Consequently, a trajectory of a vertexical family approaches the
boundary if and only if either the trajectory approaches a vertex of the
hypercube, or a trajectory in a lower-dimensional member of the family
approaches the boundary. With this technology, we make progress on the global
attractor conjecture, a central open problem concerning mass-action kinetics
systems. Additionally, we phrase mass-action kinetics as a functor on reaction
networks with variable rates.
|
1208.0946
|
A Supermodular Optimization Framework for Leader Selection under Link
Noise in Linear Multi-Agent Systems
|
cs.SY math.OC
|
In many applications of multi-agent systems (MAS), a set of leader agents
acts as a control input to the remaining follower agents. In this paper, we
introduce an analytical approach to selecting leader agents in order to
minimize the total mean-square error of the follower agent states from their
desired value in steady-state in the presence of noisy communication links. We
show that the problem of choosing leaders in order to minimize this error can
be solved using supermodular optimization techniques, leading to efficient
algorithms that are within a provable bound of the optimum. We formulate two
leader selection problems within our framework, namely the problem of choosing
a fixed number of leaders to minimize the error, as well as the problem of
choosing the minimum number of leaders to achieve a tolerated level of error.
We study both leader selection criteria for different scenarios, including MAS
with static topologies, topologies experiencing random link or node failures,
switching topologies, and topologies that vary arbitrarily in time due to node
mobility. In addition to providing provable bounds for all these cases,
simulation results demonstrate that our approach outperforms other leader
selection methods, such as node degree-based and random selection methods, and
provides comparable performance to current state of the art algorithms.
|
1208.0959
|
Recklessly Approximate Sparse Coding
|
cs.LG cs.CV stat.ML
|
It has recently been observed that certain extremely simple feature encoding
techniques are able to achieve state-of-the-art performance on several standard
image classification benchmarks, matching methods such as deep belief networks,
convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse
autoencoders and several others. Moreover, these "triangle" or "soft threshold"
encodings are extremely efficient to compute. Several intuitive arguments have
been put forward to explain this remarkable performance, yet no mathematical
justification has been offered.
  The main result of this report is to show that these features are realized as
an approximate solution to a non-negative sparse coding problem. Using this
connection we describe several variants of the soft threshold features and
demonstrate their effectiveness on two image classification benchmark tasks.
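The encodings in question are one-liners; a sketch of the "triangle" and soft-threshold variants on random data (dictionary and threshold are hypothetical):

```python
import numpy as np

def triangle_encode(X, centroids):
    # f_k(x) = max(0, mean_j d_j(x) - d_k(x)), with d_k the distance to centroid k
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

def soft_threshold_encode(X, D, lam=0.25):
    # f_k(x) = max(0, d_k^T x - lam): one-sided shrinkage of dictionary responses
    return np.maximum(0.0, X @ D.T - lam)

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 16))             # 5 samples, 16-dim features
D = rng.standard_normal((10, 16))            # hypothetical dictionary / centroids
F = triangle_encode(X, D)
G = soft_threshold_encode(X, D)
```

Both maps are non-negative and sparse by construction, which is what connects them to non-negative sparse coding.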
|
1208.0967
|
Human Activity Learning using Object Affordances from RGB-D Videos
|
cs.CV
|
Human activities comprise several sub-activities performed in a sequence and
involve interactions with various objects. This makes reasoning about the
object affordances a central task for activity recognition. In this work, we
consider the problem of jointly labeling the object affordances and human
activities from RGB-D videos. We frame the problem as a Markov Random Field
where the nodes represent objects and sub-activities, and the edges represent
the relationships between object affordances, their relations with
sub-activities, and their evolution over time. We formulate the learning
problem using a structural SVM approach, where labeling over various alternate
temporal segmentations are considered as latent variables. We tested our method
on a dataset comprising 120 activity videos collected from four subjects, and
obtained an end-to-end precision of 81.8% and recall of 80.0% for labeling the
activities.
|
1208.0984
|
APRIL: Active Preference-learning based Reinforcement Learning
|
cs.LG
|
This paper focuses on reinforcement learning (RL) with limited prior
knowledge. In the domain of swarm robotics for instance, the expert can hardly
design a reward function or demonstrate the target behavior, forbidding the use
of both standard RL and inverse reinforcement learning. Even with limited
expertise, however, the human expert is often able to express preferences and
rank the agent demonstrations. Earlier work has presented an iterative
preference-based RL framework: expert preferences are exploited to learn an
approximate policy return, thus enabling the agent to achieve direct policy
search. Iteratively, the agent selects a new candidate policy and demonstrates
it; the expert ranks the new demonstration comparatively to the previous best
one; the expert's ranking feedback enables the agent to refine the approximate
policy return, and the process is iterated. In this paper, preference-based
reinforcement learning is combined with active ranking in order to decrease the
number of ranking queries to the expert needed to yield a satisfactory policy.
Experiments on the mountain car and cancer treatment testbeds show that
a couple of dozen rankings suffice to learn a competent policy.
|
1208.1004
|
Social Trust as a solution to address sparsity-inherent problems of
Recommender systems
|
cs.SI cs.IR
|
Trust has been explored by many researchers in the past as a successful
solution for assisting recommender systems. Even though the approach of using a
web-of-trust scheme for assisting the recommendation production is well
adopted, issues like the sparsity problem have not been explored adequately so
far with regard to this. In this work we propose and test a scheme that uses
users' existing ratings to calculate the hypothetical trust that might exist
between them. The purpose is to demonstrate how basic social networking, when
applied to an existing system, can help alleviate problems of traditional
recommender system schemes. Interestingly, such schemes also alleviate the cold
start problem from which mainly new users suffer. In order to show how good the
system is in that respect, we measure the performance at various times as the
system evolves, and we also contrast the solution with existing approaches.
Finally, we present results showing that such schemes work better than a system
that makes no use of trust at all.
|
1208.1011
|
Credibility in Web Search Engines
|
cs.IR
|
Web search engines apply a variety of ranking signals to achieve user
satisfaction, i.e., results pages that provide the best-possible results to the
user. While these ranking signals implicitly consider credibility (e.g., by
measuring popularity), explicit measures of credibility are not applied. In
this chapter, credibility in Web search engines is discussed in a broad
context: credibility as a measure for including documents in a search engine's
index, credibility as a ranking signal, credibility in the context of universal
search results, and the possibility of using credibility as an explicit measure
for ranking purposes. It is found that while search engines, at least to a
certain extent, show credible results to their users, there is no fully
integrated credibility framework for Web search engines.
|
1208.1035
|
The concavity of R\'enyi entropy power
|
cs.IT math.FA math.IT
|
We associate to the p-th R\'enyi entropy a definition of entropy power, which
is the natural extension of Shannon's entropy power and exhibits a nice
behaviour along solutions to the p-nonlinear heat equation in $R^n$. We show
that the R\'enyi entropy power of general probability densities solving such
equations is always a concave function of time, whereas it has a linear
behaviour in correspondence to the Barenblatt source-type solutions. We then
show that the p-th R\'enyi entropy power of a probability density which solves
the nonlinear diffusion of order p is a concave function of time. This result
extends Costa's concavity inequality for Shannon's entropy power to R\'enyi
entropies.
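For reference, one common normalization from the literature (the paper's own constants may differ): with $H_p(u) = \frac{1}{1-p}\log\int_{\mathbb{R}^n} u^p\,dx$, the $p$-th Rényi entropy power is taken as

```latex
N_p(u) = \exp\bigl(\sigma\, H_p(u)\bigr), \qquad \sigma = \frac{2}{n} + p - 1,
```

which recovers Shannon's entropy power $N(u)\propto\exp\!\bigl(\tfrac{2}{n}H(u)\bigr)$ as $p \to 1$. The concavity statement is then that $t \mapsto N_p\bigl(u(\cdot,t)\bigr)$ is concave along solutions of the nonlinear heat equation $\partial_t u = \Delta u^p$.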
|
1208.1045
|
Remarks on contractions of reaction-diffusion PDE's on weighted L^2
norms
|
cs.SY math.AP
|
In [1], we showed contractivity of the reaction-diffusion PDE \frac{\partial
u}{\partial t}({\omega},t) = F(u({\omega},t)) + D\Delta u({\omega},t) with
Neumann boundary conditions, provided \mu_{p,Q}(J_F(u)) < 0 (uniformly in u),
for some 1 \leq p \leq \infty and some positive diagonal matrix Q, where J_F
is the Jacobian matrix of F. This note extends the result to Q-weighted L_2
norms, where Q is a positive, symmetric (not merely diagonal) matrix satisfying
Q^2D + DQ^2 > 0.
|
1208.1056
|
Sequential Estimation Methods from Inclusion Principle
|
math.ST cs.LG math.PR stat.TH
|
In this paper, we propose new sequential estimation methods based on
inclusion principle. The main idea is to reformulate the estimation problems as
constructing sequential random intervals and use confidence sequences to
control the associated coverage probabilities. In contrast to existing
asymptotic sequential methods, our estimation procedures rigorously guarantee
the pre-specified levels of confidence.
|
1208.1070
|
Timing Channels with Multiple Identical Quanta
|
cs.IT math.IT q-bio.MN
|
We consider mutual information between release times and capture times for a
set of M identical quanta traveling independently from a source to a target.
The quanta are immediately captured upon arrival, first-passage times are
assumed independent and identically distributed and the quantum emission times
are constrained by a deadline. The primary application area is intended to be
inter/intracellular molecular signaling in biological systems whereby an
organelle, cell or group of cells must deliver some message (such as
transcription or developmental instructions) over distance with reasonable
certainty to another organelle, cell or group of cells. However, the model
can also be applied to communications systems wherein indistinguishable signals
have random transit latencies.
|
1208.1103
|
System identification and modeling for interacting and non-interacting
tank systems using intelligent techniques
|
cs.AI cs.SY
|
System identification from the experimental data plays a vital role for model
based controller design. Derivation of process model from first principles is
often difficult due to its complexity. The first stage in the development of
any control and monitoring system is the identification and modeling of the
system. Each model is developed within the context of a specific control
problem. Thus, a general system identification framework is warranted. The
proposed framework should be able to adapt and emphasize
different properties based on the control objective and the nature of the
behavior of the system. Therefore, system identification has been a valuable
tool in identifying the model of the system based on the input and output data
for the design of the controller. The present work is concerned with the
identification of transfer function models using statistical model
identification, process reaction curve method, ARX model, genetic algorithm and
modeling using neural networks and fuzzy logic for interacting and
non-interacting tank processes. The identification techniques and models used
are prone to parameter changes and disturbances. The proposed methods are used
for identifying the mathematical and intelligent models of interacting and
non-interacting processes from real-time experimental data.
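Of the listed techniques, the ARX step reduces to a linear least-squares fit; a sketch on a simulated first-order tank-like system (the coefficients 0.9 and 0.5 are hypothetical):

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    # Least-squares ARX fit: y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j]
    n = max(na, nb)
    Phi = np.array([np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]]
                    for t in range(n, len(y))])      # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

# Simulate a hypothetical first-order system and identify it from input/output data
rng = np.random.default_rng(5)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]
a, b = fit_arx(u, y, na=1, nb=1)
```

On noiseless data the fit recovers the true coefficients to machine precision; with measurement noise the same regression yields the best least-squares estimate.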
|
1208.1111
|
Strategies for Distributed Sensor Selection Using Convex Optimization
|
cs.IT math.IT
|
Consider the estimation of an unknown parameter vector in a linear
measurement model. Centralized sensor selection consists in selecting a set of
k_s sensor measurements, from a total number of m potential measurements. The
performance of the corresponding selection is measured by the volume of an
estimation error covariance matrix. In this work, we consider the problem of
selecting these sensors in a distributed or decentralized fashion. In
particular, we study the case of two leader nodes that perform naive
decentralized selections. We demonstrate that this can degrade the performance
severely. Therefore, two heuristics based on convex optimization methods are
introduced, where we first allow one leader to make a selection, and then to
share a modest amount of information about its selection with the remaining
node. We show that both heuristics clearly outperform the naive
decentralized selection, and achieve a performance close to the centralized
selection.
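A common centralized baseline against which such heuristics are measured is greedy log-det (D-optimal) selection; a sketch with hypothetical measurement rows and noise variances (not this work's convex-optimization heuristics):

```python
import numpy as np

def greedy_sensor_selection(H, sigma2, k):
    # Greedily pick k rows of H maximizing log det of the Fisher information,
    # i.e., minimizing the volume of the estimation error covariance.
    m, n = H.shape
    chosen, M = [], 1e-9 * np.eye(n)         # tiny prior keeps M invertible
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(m):
            if i in chosen:
                continue
            Mi = M + np.outer(H[i], H[i]) / sigma2[i]
            gain = np.linalg.slogdet(Mi)[1]
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M += np.outer(H[best], H[best]) / sigma2[best]
    return chosen, M

rng = np.random.default_rng(2)
H = rng.standard_normal((10, 3))             # 10 candidate sensors, 3 parameters
sigma2 = rng.uniform(0.5, 2.0, 10)           # per-sensor noise variances
sel, M = greedy_sensor_selection(H, sigma2, 4)
```

A naive decentralized scheme would run this loop independently at each leader, possibly duplicating informative sensors; sharing even a summary of one leader's choice avoids that.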
|
1208.1116
|
Optimal Non-Uniform Mapping for Probabilistic Shaping
|
cs.IT math.IT
|
The construction of optimal non-uniform mappings for discrete input
memoryless channels (DIMCs) is investigated. An efficient algorithm to find
optimal mappings is proposed, and the rate at which a target distribution is
approached is investigated. The results are applied to non-uniform mappings for
additive white Gaussian noise (AWGN) channels with finite signal
constellations. The mappings found by the proposed methods outperform those
obtained via a central limit theorem approach as suggested in the literature.
|
1208.1136
|
Credal nets under epistemic irrelevance
|
cs.AI math.PR
|
We present a new approach to credal nets, which are graphical models that
generalise Bayesian nets to imprecise probability. Instead of the commonly used
notion of strong independence, we apply the weaker notion of epistemic
irrelevance. We show how assessments of epistemic irrelevance
allow us to construct a global model out of given local uncertainty models and
mention some useful properties. The main results and proofs are presented using
the language of sets of desirable gambles, which provides a very general and
expressive way of representing imprecise probability models.
|
1208.1149
|
Uncertainty-dependent data collection in vehicular sensor networks
|
cs.NI cs.SY
|
Vehicular sensor networks (VSNs) are built on top of vehicular ad-hoc
networks (VANETs) by equipping vehicles with sensing devices. These new
technologies create a huge opportunity to extend the sensing capabilities of
the existing road traffic control systems and improve their performance.
Efficient utilisation of wireless communication channel is one of the basic
issues in the vehicular networks development. This paper presents and evaluates
data collection algorithms that use uncertainty estimates to reduce data
transmission in a VSN-based road traffic control system.
|
1208.1151
|
Classical-Quantum Arbitrarily Varying Wiretap Channel
|
cs.IT math.IT quant-ph
|
We derive a lower bound on the capacity of the classical-quantum arbitrarily
varying wiretap channel and determine the capacity of the classical-quantum
arbitrarily varying wiretap channel with channel state information at the
transmitter.
|
1208.1180
|
A Regularized Saddle-Point Algorithm for Networked Optimization with
Resource Allocation Constraints
|
cs.SY math.OC
|
We propose a regularized saddle-point algorithm for convex networked
optimization problems with resource allocation constraints. Standard
distributed gradient methods suffer from slow convergence and require excessive
communication when applied to problems of this type. Our approach offers an
alternative way to address these problems, and ensures that each iterative
update step satisfies the resource allocation constraints. We derive step-size
conditions under which the distributed algorithm converges geometrically to the
regularized optimal value, and show how these conditions are affected by the
underlying network topology. We illustrate our method on a robotic network
application example where a group of mobile agents strive to maintain a moving
target in the barycenter of their positions.
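The flavor of a regularized saddle-point (Arrow-Hurwicz-type) iteration on a toy resource-allocation problem (hypothetical quadratic costs; the paper's algorithm additionally keeps every iterate feasible):

```python
import numpy as np

# min sum_i 0.5*a_i*(x_i - c_i)^2  subject to  sum_i x_i = R
a = np.array([1.0, 2.0, 4.0])    # hypothetical cost curvatures
c = np.array([1.0, 2.0, 3.0])    # hypothetical preferred allocations
R = 4.0                          # total resource budget
gamma, eps = 0.05, 1e-3          # step size, dual regularization

x, lam = np.zeros(3), 0.0
for _ in range(5000):
    x = x - gamma * (a * (x - c) + lam)            # primal gradient descent
    lam = lam + gamma * (x.sum() - R - eps * lam)  # regularized dual ascent
```

The eps term damps the dual dynamics, trading a small constraint violation (of order eps times the multiplier) for geometric convergence, which matches the paper's regularization idea in miniature.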
|
1208.1184
|
Payment Rules through Discriminant-Based Classifiers
|
cs.GT cs.AI
|
In mechanism design it is typical to impose incentive compatibility and then
derive an optimal mechanism subject to this constraint. By replacing the
incentive compatibility requirement with the goal of minimizing expected ex
post regret, we are able to adapt statistical machine learning techniques to
the design of payment rules. This computational approach to mechanism design is
applicable to domains with multi-dimensional types and situations where
computational efficiency is a concern. Specifically, given an outcome rule and
access to a type distribution, we train a support vector machine with a special
discriminant function structure such that it implicitly establishes a payment
rule with desirable incentive properties. We discuss applications to a
multi-minded combinatorial auction with a greedy winner-determination algorithm
and to an assignment problem with egalitarian outcome rule. Experimental
results demonstrate both that the construction produces payment rules with low
ex post regret, and that penalizing classification errors is effective in
preventing failures of ex post individual rationality.
|
1208.1187
|
Toward an Integrated Framework for Automated Development and
Optimization of Online Advertising Campaigns
|
cs.IR cs.AI
|
Creating and monitoring competitive and cost-effective pay-per-click
advertisement campaigns through the web-search channel is a resource demanding
task in terms of expertise and effort. Assisting or even automating the work of
an advertising specialist will have an unrivaled commercial value. In this
paper we propose a methodology, an architecture, and a fully functional
framework for the semi- and fully-automated creation, monitoring, and
optimization of cost-efficient pay-per-click campaigns with budget constraints.
The campaign creation module automatically generates keywords based on the
content of the web page to be advertised, extended with corresponding
ad-texts. These keywords are used to automatically create campaigns, fully
equipped with appropriately set parameter values. The campaigns are uploaded
to the auctioneer platform and start running. The optimization module learns
from existing campaign statistics and from the strategies applied in previous
periods in order to invest optimally in the next period. The objective is to
maximize performance (i.e., clicks, actions) under the current budget
constraint. The fully functional prototype is experimentally evaluated on
real-world Google AdWords campaigns and shows promising behavior with regard
to campaign performance statistics, systematically outperforming the competing
manually maintained campaigns.
|
1208.1225
|
Average redundancy of the Shannon code for Markov sources
|
cs.IT math.IT
|
It is known that for memoryless sources, the average and maximal redundancy
of fixed-to-variable length codes, such as the Shannon and Huffman codes,
exhibit two modes of behavior for long blocks. It either converges to a limit
or it has an oscillatory pattern, depending on the irrationality or
rationality, respectively, of certain parameters that depend on the source. In
this paper, we extend these findings, concerning the Shannon code, to the case
of a Markov source, which is considerably more involved. While this dichotomy,
of convergent vs. oscillatory behavior, is well known in other contexts
(including renewal theory, ergodic theory, local limit theorems and large
deviations of discrete distributions), in information theory (e.g., in
redundancy analysis) it was recognized relatively recently. To the best of our
knowledge, no results of this type were reported thus far for Markov sources.
We provide a precise characterization of the convergent vs. oscillatory
behavior of the Shannon code redundancy for a class of irreducible Markov
sources, both periodic and aperiodic. These findings are obtained by analytic
methods, such as Fourier/Fejér series analysis and spectral analysis of
matrices.
|
1208.1230
|
A conservation-law-based modular fluid-flow model for network congestion
modeling
|
cs.NI cs.SY math.CA math.DS math.OC
|
A modular fluid-flow model for network congestion analysis and control is
proposed. The model is derived from an information conservation law stating
that the information is either in transit, lost or received. Mathematical
models of network elements such as queues, users, and transmission channels,
and network description variables, including sending/acknowledgement rates and
delays, are inferred from this law and obtained by applying this principle
locally. The modularity of the devised model makes it sufficiently generic to
describe any network topology, and appealing for building simulators. Previous
models in the literature are often not capable of capturing the transient
behavior of the network precisely, making the resulting analysis inaccurate in
practice. Those models can be recovered from exact reduction or approximation
of this new model. An important aspect of this particular modeling approach is
the introduction of new tight building blocks that implement mechanisms ignored
by the existing ones, notably at the queue and user levels. Comparisons with
packet-level simulations corroborate the proposed model.
|
1208.1231
|
Building and Maintaining Halls of Fame over a Database
|
cs.DB
|
Halls of Fame are fascinating constructs. They represent the elite of an
often very large amount of entities---persons, companies, products, countries
etc. Beyond their practical use as static rankings, changes to them are
particularly interesting---for decision making processes, as input to common
media or novel narrative science applications, or simply consumed by users. In
this work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and data
of a database can be used to generate Halls of Fame. In this database scenario,
by Hall of Fame we refer to distinguished tuples; entities, whose
characteristics set them apart from the majority. We define every Hall of Fame
as one specific instance of an SQL query, such that a change in its result is
considered a noteworthy event. Identified changes (i.e., events) are ranked
using lexicographic tradeoffs over event and query properties and are
presented to users or fed into higher-level applications. We have implemented
a full-fledged prototype system that uses either database triggers or a
Java-based middleware
for event identification. We report on an experimental evaluation using a
real-world dataset of basketball statistics.
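To make the core idea concrete, here is a toy sketch (with hypothetical data
and a made-up query, not the authors' system) of a Hall of Fame defined as an
SQL query whose result change is reported as an event:

```python
import sqlite3

# in-memory toy database of career scoring totals (hypothetical data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT, points INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?)",
                 [("A", 30000), ("B", 28000), ("C", 25000), ("D", 20000)])

# one Hall of Fame = one specific SQL query instance (here: top-3 scorers)
HOF_QUERY = "SELECT name FROM players ORDER BY points DESC LIMIT 3"

def hall_of_fame():
    return [row[0] for row in conn.execute(HOF_QUERY)]

before = hall_of_fame()
# an update changes the query result, which is treated as a noteworthy event
conn.execute("UPDATE players SET points = 26000 WHERE name = 'D'")
after = hall_of_fame()
if before != after:
    print("event: Hall of Fame changed from", before, "to", after)
```

In the actual system such changes would be detected via database triggers or
middleware and then ranked; this sketch only shows the query-result-change
notion of an event.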
|
1208.1237
|
Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix
Factorization
|
stat.ML cs.LG math.OC
|
In this paper, we study the nonnegative matrix factorization problem under
the separability assumption (that is, there exists a cone spanned by a small
subset of the columns of the input nonnegative data matrix containing all
columns), which is equivalent to the hyperspectral unmixing problem under the
linear mixing model and the pure-pixel assumption. We present a family of fast
recursive algorithms, and prove they are robust under any small perturbations
of the input data matrix. This family generalizes several existing
hyperspectral unmixing algorithms and hence provides for the first time a
theoretical justification of their better practical performance.
|
1208.1259
|
One Permutation Hashing for Efficient Search and Learning
|
cs.LG cs.IR cs.IT math.IT stat.CO stat.ML
|
Recently, the method of b-bit minwise hashing has been applied to large-scale
linear learning and sublinear time near-neighbor search. The major drawback of
minwise hashing is the expensive preprocessing cost, as the method requires
applying (e.g.,) k=200 to 500 permutations on the data. The testing time can
also be expensive if a new data point (e.g., a new document or image) has not
been processed, which might be a significant issue in user-facing applications.
We develop a very simple solution based on one permutation hashing.
Conceptually, given a massive binary data matrix, we permute the columns only
once and divide the permuted columns evenly into k bins; and we simply store,
for each data vector, the smallest nonzero location in each bin. The
interesting probability analysis (which is validated by experiments) reveals
that our one permutation scheme should perform very similarly to the original
(k-permutation) minwise hashing. In fact, the one permutation scheme can be
even slightly more accurate, due to the "sample-without-replacement" effect.
Our experiments with training linear SVM and logistic regression on the
webspam dataset demonstrate that this one permutation hashing scheme can
achieve the same (or even slightly better) accuracies compared to the original
k-permutation scheme. To test the robustness of our method, we also experiment
with the small news20 dataset which is very sparse and has merely on average
500 nonzeros in each data vector. Interestingly, our one permutation scheme
noticeably outperforms the k-permutation scheme when k is not too small on the
news20 dataset. In summary, our method can achieve at least the same accuracy
as the original k-permutation scheme, at merely 1/k of the original
preprocessing cost.
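A minimal sketch of the scheme as described (variable names and parameters are
illustrative choices of mine, not from the paper):

```python
import numpy as np

def one_permutation_hash(x, perm, k):
    """One permutation hashing of a binary vector.

    x    : 1-D 0/1 array (a row of the binary data matrix)
    perm : a single permutation of the column indices, shared by all vectors
    k    : number of bins

    Returns, for each bin, the offset of the smallest permuted nonzero
    location within that bin, or None for an empty bin.
    """
    d = len(perm)
    permuted = np.flatnonzero(x[perm])    # nonzero locations after permuting
    bin_size = d // k                     # assume k divides d for simplicity
    sketch = [None] * k
    for loc in permuted:                  # locations come in increasing order
        b = loc // bin_size
        if sketch[b] is None:             # keep only the smallest per bin
            sketch[b] = int(loc % bin_size)
    return sketch

rng = np.random.default_rng(1)
d, k = 16, 4
perm = rng.permutation(d)
x = (rng.random(d) < 0.4).astype(int)
print(one_permutation_hash(x, perm, k))
```

The single permutation replaces the k independent permutations of classical
minwise hashing, which is the source of the 1/k preprocessing cost.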
|
1208.1270
|
Properties of the Quantum Channel
|
quant-ph cs.IT math.IT
|
Quantum information processing exploits the quantum nature of information. It
offers fundamentally new solutions in the field of computer science and extends
the possibilities to a level that cannot be imagined in classical communication
systems. For quantum communication channels, many new capacity definitions
have been developed compared to their classical counterparts. A quantum
channel can be
used to realize classical information transmission or to deliver quantum
information, such as quantum entanglement. In this paper we overview the
properties of the quantum communication channel, the various capacity measures
and the fundamental differences between the classical and quantum channels.
|
1208.1275
|
Spectra of random graphs with arbitrary expected degrees
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
We study random graphs with arbitrary distributions of expected degree and
derive expressions for the spectra of their adjacency and modularity matrices.
We give a complete prescription for calculating the spectra that is exact in
the limit of large network size and large vertex degrees. We also study the
effect on the spectra of hubs in the network, vertices of unusually high
degree, and show that these produce isolated eigenvalues outside the main
spectral band, akin to impurity states in condensed matter systems, with
accompanying eigenvectors that are strongly localized around the hubs. We also
give numerical results that confirm our analytic expressions.
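A small NumPy experiment consistent with the hub phenomenon described here
(graph size, edge probability, and hub degree are arbitrary choices of mine,
not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 10 / 400                  # Erdos-Renyi graph, mean degree ~10

# symmetric 0/1 adjacency matrix
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

# turn node 0 into a hub of unusually high degree
hub_deg = 200
nbrs = rng.choice(np.arange(1, n), size=hub_deg, replace=False)
A[0, :] = 0.0
A[:, 0] = 0.0
A[0, nbrs] = 1.0
A[nbrs, 0] = 1.0

vals, vecs = np.linalg.eigh(A)
print("largest eigenvalue :", vals[-1])   # isolated, driven by the hub
print("second largest     :", vals[-2])   # bulk eigenvalue, well below it
v = vecs[:, -1]
print("hub weight in top eigenvector:", v[0] ** 2)  # localized on the hub
```

The largest eigenvalue sits clearly outside the rest of the spectrum and its
eigenvector places a large fraction of its weight on the hub, matching the
impurity-state analogy in the abstract.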
|
1208.1290
|
Scaling Behaviors of Wireless Device-to-Device Communications with
Distributed Caching
|
cs.NI cs.IT math.IT
|
We analyze a novel architecture for caching popular video content to enable
wireless device-to-device collaboration. We focus on the asymptotic scaling
characteristics and show how they depend on video content popularity
statistics. We identify a fundamental conflict between collaboration distance
and interference and show how to optimize the transmission power to maximize
frequency reuse. Our main result is a closed form expression of the optimal
collaboration distance as a function of the model parameters. Under the common
assumption of a Zipf distribution for content reuse, we show that if the Zipf
exponent is greater than 1, it is possible to have a number of D2D
interference-free collaboration pairs that scales linearly in the number of
nodes. If the Zipf exponent is smaller than 1, we identify the best possible
scaling in the number of D2D collaborating links. Surprisingly, a very simple
distributed caching policy achieves the optimal scaling behavior and therefore
there is no need to centrally coordinate what each node is caching.
|
1208.1315
|
Data Selection for Semi-Supervised Learning
|
cs.LG
|
The real challenge in pattern recognition and machine learning is to train a
discriminator using labeled data and then use it to classify future data as
accurately as possible. However, many real-world problems involve large
amounts of data for which labeling is cumbersome or even impossible.
Semi-supervised learning is one approach to overcoming this: it trains the
discriminator using only a small set of labeled data together with the large
remaining set of unlabeled data. In semi-supervised learning, the choice of
which data points to label is essential, and effectiveness depends strongly on
their position in the data space. In this paper, we propose an evolutionary
approach based on the Artificial Immune System (AIS) to determine which data
points are the most valuable to label. The experimental results demonstrate
the effectiveness of this algorithm in finding such data points.
|
1208.1326
|
Numerical Issues Affecting LDPC Error Floors
|
cs.IT cs.NA math.IT math.NA
|
Numerical issues related to the occurrence of error floors in floating-point
simulations of belief propagation (BP) decoders are examined. Careful
processing of messages corresponding to highly-certain bit values can sometimes
reduce error floors by several orders of magnitude. Computational solutions for
properly handling such messages are provided for the sum-product algorithm
(SPA) and several variants.
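One common computational remedy of this kind is to clamp highly certain
log-likelihood-ratio messages before the tanh/atanh check-node step; the
sketch below (the clamp threshold and messages are illustrative choices of
mine, not values from the paper) shows why clamping keeps the sum-product
update finite:

```python
import numpy as np

LLR_MAX = 30.0   # illustrative saturation limit for message magnitudes

def check_node_update(llrs_in):
    """Sum-product check-node update via the tanh rule, with clamping.

    Without clamping, tanh(x/2) rounds to exactly +/-1 in floating point
    for large |x|, and atanh of +/-1 returns +/-inf, which can corrupt
    subsequent iterations and contribute to error floors.
    """
    llrs = np.clip(llrs_in, -LLR_MAX, LLR_MAX)
    t = np.tanh(llrs / 2.0)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        prod = np.prod(np.delete(t, i))              # extrinsic product
        prod = np.clip(prod, -1 + 1e-12, 1 - 1e-12)  # keep atanh finite
        out[i] = 2.0 * np.arctanh(prod)
    return out

msgs = np.array([45.0, -3.2, 7.5])   # one message is highly certain
print(check_node_update(msgs))       # all outputs remain finite
```

This is only one of several careful-processing options for highly certain
messages; the paper examines such numerical issues for the SPA and its
variants in detail.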
|
1208.1353
|
A comparative study of two molecular mechanics models based on harmonic
potentials
|
cond-mat.mtrl-sci cs.CE
|
We show that the two molecular mechanics models, the stick-spiral and the
beam models, predict considerably different mechanical properties of materials
based on energy equivalence. The difference between the two models is
independent of the materials since all parameters of the beam model are
obtained from the harmonic potentials. We demonstrate this difference for
finite width graphene nanoribbons and a single polyethylene chain comparing
results of the molecular dynamics (MD) simulations with harmonic potentials and
the finite element method with the beam model. We also find that the difference
strongly depends on the loading modes, chirality and width of the graphene
nanoribbons, and it increases with decreasing width of the nanoribbons under
pure bending condition. The maximum difference of the predicted mechanical
properties using the two models can exceed 300% in different loading modes.
Comparing the two models with the MD results of AIREBO potential, we find that
the stick-spiral model overestimates and the beam model underestimates the
mechanical properties in narrow armchair graphene nanoribbons under pure
bending condition.
|
1208.1400
|
Second-order asymptotics for quantum hypothesis testing
|
quant-ph cs.IT math.IT math.ST stat.TH
|
In the asymptotic theory of quantum hypothesis testing, the minimal error
probability of the first kind jumps sharply from zero to one when the error
exponent of the second kind passes by the point of the relative entropy of the
two states in an increasing way. This is well known as the direct part and
strong converse of quantum Stein's lemma. Here we look into the behavior of
this sudden change and make it clear how the error of the first kind grows
smoothly according to a lower-order term in the error exponent of the second kind,
and hence we obtain the second-order asymptotics for quantum hypothesis
testing. This actually implies quantum Stein's lemma as a special case.
Meanwhile, our analysis also yields tight bounds for the case of finite sample
size. These results have potential applications in quantum information theory.
Our method is elementary, based on basic linear algebra and probability theory.
It deals with the achievability part and the optimality part in a unified
fashion.
|
1208.1401
|
Study of dynamic and static routing for improvement of the
transportation efficiency on small complex networks
|
physics.soc-ph cs.NI cs.SI
|
In this paper, we explore strategies for reducing congestion in complex
networks. Nodes without buffers are considered, so if congestion occurs,
information packets are dropped. The focus is on efficient routing. The
routing strategies are compared using two generic models, i.e., the
Barab\'asi-Albert scale-free network and a scale-free network on a lattice,
and the academic router networks of the Netherlands and France. We propose a
dynamic deflection routing algorithm which automatically extends the path of a
packet before it arrives at a congested node. The simulation results indicate
that the dynamic routing strategy can further reduce the number of dropped
packets in combination with the efficient path routing proposed by
Yan et al. [Phys. Rev. E 73, 046108 (2006)].
|
1208.1448
|
The Best Answers? Think Twice: Online Detection of Commercial Campaigns
in the CQA Forums
|
cs.IR cs.SI
|
In an emerging trend, more and more Internet users search for information
from Community Question and Answer (CQA) websites, as interactive communication
in such websites provides users with a rare feeling of trust. More often than
not, end users look for instant help when they browse the CQA websites for the
best answers. Hence, it is imperative that they should be warned of any
potential commercial campaigns hidden behind the answers. However, existing
research focuses more on the quality of answers and does not meet the above
need. In this paper, we develop a system that automatically analyzes the hidden
patterns of commercial spam and raises alarms instantaneously to end users
whenever a potential commercial campaign is detected. Our detection method
integrates semantic analysis and posters' track records and utilizes the
special features of CQA websites largely different from those in other types of
forums such as microblogs or news reports. Our system is adaptive and
accommodates new evidence uncovered by the detection algorithms over time.
Validated with real-world trace data from a popular Chinese CQA website over a
period of three months, our system shows great potential towards adaptive
online detection of CQA spam.
|
1208.1544
|
Guess Who Rated This Movie: Identifying Users Through Subspace
Clustering
|
cs.LG
|
It is often the case that, within an online recommender system, multiple
users share a common account. Can such shared accounts be identified solely on
the basis of the user-provided ratings? Once a shared account is identified,
can the different users sharing it be identified as well? Whenever such user
identification is feasible, it opens the way to possible improvements in
personalized recommendations, but also raises privacy concerns. We develop a
model for composite accounts based on unions of linear subspaces, and use
subspace clustering for carrying out the identification task. We show that a
significant fraction of such accounts is identifiable in a reliable manner, and
illustrate potential uses for personalized recommendation.
|
1208.1592
|
Improved Perfect Space-Time Block Codes
|
cs.IT math.IT
|
The perfect space-time block codes (STBCs) are based on four design criteria
- full-rateness, non-vanishing determinant, cubic shaping and uniform average
transmitted energy per antenna per time slot. Cubic shaping and transmission at
uniform average energy per antenna per time slot are important from the
perspective of energy efficiency of STBCs. The shaping criterion demands that
the {\it generator matrix} of the lattice from which each layer of the perfect
STBC is carved be unitary. In this paper, it is shown that unitariness is not a
necessary requirement for energy efficiency in the context of space-time coding
with finite input constellations, and an alternative criterion is provided that
enables one to obtain full-rate (rate of $n_t$ complex symbols per channel use
for an $n_t$ transmit antenna system) STBCs with larger {\it normalized minimum
determinants} than the perfect STBCs. Further, two such STBCs, one each for 4
and 6 transmit antennas, are presented and they are shown to have larger
normalized minimum determinants than the comparable perfect STBCs which
hitherto had the best known normalized minimum determinants.
|
1208.1593
|
Fast-Decodable MIDO Codes with Large Coding Gain
|
cs.IT math.IT
|
In this paper, a new method is proposed to obtain full-diversity, rate-2
(rate of 2 complex symbols per channel use) space-time block codes (STBCs) that
are full-rate for multiple input, double output (MIDO) systems. Using this
method, rate-2 STBCs for $4\times2$, $6 \times 2$, $8\times2$ and $12 \times 2$
systems are constructed and these STBCs are fast ML-decodable, have large
coding gains, and STBC-schemes consisting of these STBCs have a non-vanishing
determinant (NVD) so that they are DMT-optimal for their respective MIDO
systems. It is also shown that the SR-code [R. Vehkalahti, C. Hollanti, and F.
Oggier, "Fast-Decodable Asymmetric Space-Time Codes from Division Algebras,"
IEEE Trans. Inf. Theory, Apr. 2012] for the $4\times2$ system, which has the
lowest ML-decoding complexity among known rate-2 STBCs for the $4\times2$ MIDO
system with a large coding gain for 4-/16-QAM, has the same algebraic structure
as the STBC constructed in this paper for the $4\times2$ system. This also
settles in positive a previous conjecture that the STBC-scheme that is based on
the SR-code has the NVD property and hence is DMT-optimal for the $4\times2$
system.
|
1208.1613
|
A Dynamic Phase Selection Strategy for Satisfiability Solvers
|
cs.LO cs.AI
|
Phase selection is an important component of a SAT solver based on
conflict-driven DPLL. This paper presents a new phase selection strategy in
which the weight of each literal is defined as the sum of the static weights
of its implied literals. The implied literals of each literal are computed
dynamically during the search; therefore, we call it a dynamic phase selection
strategy. In general, computing a weight dynamically is time-consuming, which
is why, so far, no SAT solver has successfully applied a dynamic phase
selection. Since the implied-literal computation of our strategy conforms to
that of the search process, the usual two-watched-literals scheme can be
applied, so the cost of our dynamic phase selection is very low. To improve
Glucose 2.0, which won a Gold Medal in the application category at the SAT
2011 competition, we build five phase selection schemes using the dynamic
phase selection policy. On the application instances of SAT 2011, Glucose
improved by dynamic phase selection is significantly better than the original
Glucose. We also conduct experiments on Lingeling, building two phase
selection schemes using the dynamic phase selection policy. Experimental
results show that the improved Lingeling is better than the original
Lingeling.
|
1208.1661
|
Fully Proportional Representation as Resource Allocation:
Approximability Results
|
cs.GT cs.MA
|
We model Monroe's and Chamberlin and Courant's multiwinner voting systems as
a certain resource allocation problem. We show that for many restricted
variants of this problem, under standard complexity-theoretic assumptions,
there are no constant-factor approximation algorithms. Yet, we also show cases
where good approximation algorithms exist (briefly put, these variants
correspond to optimizing total voter satisfaction under Borda scores, within
Monroe's and Chamberlin and Courant's voting systems).
|
1208.1670
|
Performance Measurement and Method Analysis (PMMA) for Fingerprint
Reconstruction
|
cs.CV
|
Fingerprint reconstruction is one of the most well-known and publicized
biometrics. Because of their uniqueness and consistency over time, fingerprints
have been used for identification for over a century, more recently becoming
automated due to advances in computing capabilities. Fingerprint
reconstruction is popular because of the inherent ease of acquisition, the
numerous sources (e.g., ten fingers) available for collection, and its
established use by law enforcement and immigration. Fingerprints have always
been the most practical and positive means of identification. Offenders, being
well aware of this, have come up with ways to escape identification by that
means: erasing leftover fingerprints, wearing gloves, and fingerprint forgery
are some of the methods they have tried over the years. When these failed,
some went so far as to mutilate their finger skin patterns in order to remain
unidentified. This article focuses on the obliteration of finger ridge
patterns and discusses some known cases of it in chronological order,
highlighting the reasons why offenders go to such lengths. The paper gives an
overview of different methods and of performance measurement for fingerprint
reconstruction.
|
1208.1672
|
An Efficient Automatic Attendance System Using Fingerprint
Reconstruction Technique
|
cs.CV
|
A biometric time and attendance system is one of the most successful
applications of biometric technology. One of its main advantages is that it
avoids "buddy punching", a major loophole that was exploited in traditional
time attendance systems. Fingerprint recognition is an established field
today, but identifying an individual from a set of enrolled fingerprints is
still a time-consuming process. Most fingerprint-based biometric systems store
the minutiae template of a user in the database. It has traditionally been
assumed that the minutiae template of a user does not reveal any information
about the original fingerprint. This belief has now been shown to be false;
several algorithms have been proposed that can reconstruct fingerprint images
from minutiae templates. In this paper, a novel fingerprint reconstruction
algorithm is proposed that reconstructs the phase image from minutiae and then
converts it into a grayscale image. The proposed algorithm is used to automate
the whole process of taking attendance; doing this manually is laborious and
time-consuming, and managing and maintaining the records over a period of time
is also a burdensome task. The proposed
reconstruction algorithm has been evaluated with respect to the success rates
of type-I attack (match the reconstructed fingerprint against the original
fingerprint) and type-II attack (match the reconstructed fingerprint against
different impressions of the original fingerprint) using a commercial
fingerprint recognition system. Given the reconstructed image from our
algorithm, we show that both types of attacks can be effectively launched
against a fingerprint recognition system.
|
1208.1676
|
Mechanism Design for Time Critical and Cost Critical Task Execution via
Crowdsourcing
|
cs.GT cs.MA
|
An exciting application of crowdsourcing is to use social networks in complex
task execution. In this paper, we address the problem of a planner who needs to
incentivize agents within a network in order to seek their help in executing an
{\em atomic task} as well as in recruiting other agents to execute the task. We
study this mechanism design problem under two natural resource optimization
settings: (1) cost critical tasks, where the planner's goal is to minimize the
total cost, and (2) time critical tasks, where the goal is to minimize the
total time elapsed before the task is executed. We identify a set of desirable
properties that should ideally be satisfied by a crowdsourcing mechanism. In
particular, {\em sybil-proofness} and {\em collapse-proofness} are two
complementary properties in our desiderata. We prove that no mechanism can
satisfy all the desirable properties simultaneously. This leads us naturally to
explore approximate versions of the critical properties. We focus our attention
on approximate sybil-proofness and our exploration leads to a parametrized
family of payment mechanisms which satisfy collapse-proofness. We characterize
the approximate versions of the desirable properties in cost critical and time
critical domain.
|
1208.1679
|
Color Assessment and Transfer for Web Pages
|
cs.HC cs.CV cs.GR
|
Colors play a particularly important role in both designing and accessing Web
pages. A well-designed color scheme improves Web pages' visual aesthetic and
facilitates user interactions. As far as we know, existing color assessment
studies focus on images; studies on color assessment and editing for Web pages
are rare. This paper investigates color assessment for Web pages based on
existing online color theme-rating data sets and applies this assessment to
Web color editing. This study consists of three parts. First, we study the
extraction
of a Web page's color theme. Second, we construct color assessment models that
score the color compatibility of a Web page by leveraging machine learning
techniques. Third, we incorporate the learned color assessment model into a new
application, namely, color transfer for Web pages. Our study combines
techniques from computer graphics, Web mining, computer vision, and machine
learning. Experimental results suggest that our constructed color assessment
models are effective, and useful in the color transfer for Web pages, which has
received little attention in both Web mining and computer graphics communities.
|
1208.1692
|
On Finding Optimal Polytrees
|
cs.DS cs.AI cs.CC
|
Inferring probabilistic networks from data is a notoriously difficult task.
Under various goodness-of-fit measures, finding an optimal network is NP-hard,
even if restricted to polytrees of bounded in-degree. Polynomial-time
algorithms are known only for rare special cases, perhaps most notably for
branchings, that is, polytrees in which the in-degree of every node is at most
one. Here, we study the complexity of finding an optimal polytree that can be
turned into a branching by deleting some number of arcs or nodes, treated as a
parameter.
We show that the problem can be solved via a matroid intersection formulation
in polynomial time if the number of deleted arcs is bounded by a constant. The
order of the polynomial time bound depends on this constant, hence the
algorithm does not establish fixed-parameter tractability when parameterized by
the number of deleted arcs. We show that a restricted version of the problem
allows fixed-parameter tractability and hence scales well with the parameter.
We contrast this positive result by showing that if we parameterize by the
number of deleted nodes, a somewhat more powerful parameter, the problem is not
fixed-parameter tractable, subject to a complexity-theoretic assumption.
|
1208.1697
|
Information-Theoretical Security for Several Models of Multiple-Access
Channel
|
cs.IT math.IT
|
Several security models of multiple-access channel (MAC) are investigated.
First, we study the degraded MAC with confidential messages, where two users
transmit their confidential messages (no common message) to a destination, and
each user obtains a degraded version of the output of the MAC. Each user views
the other user as an eavesdropper, and wishes to keep its confidential message
as secret as possible from the other user. Measuring each user's uncertainty
about the other user's confidential message by equivocation, the inner and
outer bounds on the capacity-equivocation region for this model have been
provided. The result is further explained via the binary and Gaussian examples.
Second, the discrete memoryless multiple-access wiretap channel (MAC-WT) is
studied, where two users transmit their corresponding confidential messages (no
common message) to a legitimate receiver, while an additional wiretapper wishes
to obtain the messages via a wiretap channel. This new model is considered in
two cases: the general MAC-WT with cooperative encoders, and the degraded
MAC-WT with non-cooperative encoders. The capacity-equivocation region is
totally determined for the cooperative case, and inner and outer bounds on the
capacity-equivocation region are provided for the non-cooperative case. For
both cases, the results are further explained via the binary examples.
|
1208.1740
|
On the Relation between Centrality Measures and Consensus Algorithms
|
cs.SY cs.SI math.OC
|
This paper introduces some tools from graph theory and distributed consensus
algorithms to construct an optimal, yet robust, hierarchical information
sharing structure for large-scale decision making and control problems. The
proposed method is motivated by the robustness and optimality of leaf-venation
patterns. We introduce a new class of centrality measures which are built based
on the degree distribution of nodes within network graph. Furthermore, the
proposed measure is used to select the appropriate weight of the corresponding
consensus algorithm. To this end, an implicit hierarchical structure is derived
that controls the flow of information in different situations. In addition, a
performance analysis of the proposed measure with respect to other standard
measures is performed to investigate the convergence and asymptotic behavior of
the measure. A gas transmission network serves as our test-bed to demonstrate
the applicability and efficiency of the method.
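The abstract does not spell out the weighting rule, so as a hedged sketch the snippet below uses standard Metropolis-Hastings consensus weights (an assumption, not the paper's proposed measure), built only from node degrees, and runs the linear consensus iteration until all node values agree:

```python
import numpy as np

def metropolis_weights(adj):
    """Consensus weights built only from node degrees:
    w_ij = 1 / (1 + max(d_i, d_j)) for each edge (i, j), with the
    diagonal absorbing the remainder so every row sums to 1."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Path graph 0-1-2-3; the iteration x <- W x drives every node value
# to the average of the initial values (W is doubly stochastic here).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
x = np.array([1.0, 2.0, 3.0, 4.0])
W = metropolis_weights(adj)
for _ in range(300):
    x = W @ x
print(np.round(x, 3))  # all entries near 2.5
```

Because the weight matrix is symmetric and stochastic, the asymptotic value is the average of the initial states, and the convergence speed is governed by its second-largest eigenvalue.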
|
1208.1743
|
Hybrid systems modeling for gas transmission network
|
cs.AI
|
Gas Transmission Networks are large-scale complex systems, and corresponding
design and control problems are challenging. In this paper, we consider the
problem of control and management of these systems in crisis situations. We
represent these networks in a hybrid systems framework that provides the required
analysis models. Further, we discuss decision-making using computational
discrete and hybrid optimization methods. In particular, several reinforcement
learning methods are employed to explore decision space and achieve the best
policy in a specific crisis situation. Simulations are presented to illustrate
the efficiency of the method.
|
1208.1750
|
Guidelines for a Dynamic Ontology - Integrating Tools of Evolution and
Versioning in Ontology
|
cs.SE cs.AI
|
Ontologies are built on systems that conceptually evolve over time. In
addition, techniques and languages for building ontologies evolve too. This has
led to numerous studies in the field of ontology versioning and ontology
evolution. This paper presents a new way to manage the lifecycle of an ontology
incorporating both versioning tools and an evolution process. This solution,
called VersionGraph, is integrated into the source ontology from its creation,
enabling it to evolve and to be versioned. Change management is
strongly related to the model in which the ontology is represented. Therefore,
we focus on the OWL language in order to take into account the impact of the
changes on the logical consistency of the ontology, as specified in OWL DL.
|
1208.1784
|
Worst-Case Source for Distributed Compression with Quadratic Distortion
|
cs.IT math.IT
|
We consider the k-encoder source coding problem with a quadratic distortion
measure. We show that among all source distributions with a given covariance
matrix K, the jointly Gaussian source requires the highest rates in order to
meet a given set of distortion constraints.
|
1208.1819
|
Self-Organizing Time Map: An Abstraction of Temporal Multivariate
Patterns
|
cs.LG cs.DS
|
This paper adopts and adapts Kohonen's standard Self-Organizing Map (SOM) for
exploratory temporal structure analysis. The Self-Organizing Time Map (SOTM)
applies SOM-type learning to one-dimensional arrays for individual time
units, preserves the orientation with short-term memory and arranges the arrays
in an ascending order of time. The two-dimensional representation of the SOTM
thus attempts twofold topology preservation, where the horizontal direction
preserves time topology and the vertical direction data topology. This enables
discovering the occurrence and exploring the properties of temporal structural
changes in data. For representing qualities and properties of SOTMs, we adapt
measures and visualizations from the standard SOM paradigm, as well as
introduce a measure of temporal structural changes. The functioning of the
SOTM, and its visualizations and quality and property measures, are illustrated
on artificial toy data. The usefulness of the SOTM in a real-world setting is
shown on poverty, welfare and development indicators.
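As a rough sketch of the mechanism described above (not the authors' implementation; function names and parameters are illustrative), the following trains one one-dimensional SOM per time unit and seeds each array with the previous unit's codebook, which is how short-term memory preserves orientation across time:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_1d_som(data, codebook, epochs=20, lr=0.3, radius=1.0):
    """One-dimensional SOM-type learning: for each sample, the
    best-matching unit and its array neighbours move towards it."""
    m = len(codebook)
    for _ in range(epochs):
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
            for j in range(m):
                h = np.exp(-((j - bmu) ** 2) / (2.0 * radius ** 2))
                codebook[j] += lr * h * (x - codebook[j])
    return codebook

def sotm(batches, units=5, dim=2):
    """Train one 1-D array per time unit, seeding each array with the
    previous unit's codebook (short-term memory preserves orientation),
    and stack the arrays in ascending order of time."""
    codebook = rng.normal(size=(units, dim))
    arrays = []
    for data in batches:
        codebook = train_1d_som(np.asarray(data, dtype=float), codebook.copy())
        arrays.append(codebook.copy())
    return np.stack(arrays)  # shape: (time units, array units, dim)

# Toy data whose mean drifts upward over three time units.
batches = [rng.normal(loc=t, scale=0.2, size=(50, 2)) for t in range(3)]
grid = sotm(batches)
print(grid.shape)  # (3, 5, 2)
```

The stacked result is the two-dimensional SOTM grid: reading down a column follows one unit through time, so the drift in the toy data shows up as a structural change between consecutive rows.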
|
1208.1829
|
Metric Learning across Heterogeneous Domains by Respectively Aligning
Both Priors and Posteriors
|
cs.LG
|
In this paper, we attempt to learn a single metric across two heterogeneous
domains, where the source domain is fully labeled and has many samples while
the target domain has only a few labeled samples but abundant unlabeled
samples. To the best of our knowledge, this task has seldom been addressed. The
proposed learning model
has a simple underlying motivation: all the samples in both the source and the
target domains are mapped into a common space, where both their priors
P(sample)s and their posteriors P(label|sample)s are forced to be respectively
aligned as much as possible. We show that the two mappings, from both the
source domain and the target domain to the common space, can be reparameterized
into a single positive semi-definite(PSD) matrix. Then we develop an efficient
Bregman Projection algorithm to optimize the PSD matrix, over which a LogDet
function is used to regularize. Furthermore, we also show that this model can
be easily kernelized and verify its effectiveness in a cross-language retrieval
task and cross-domain object recognition task.
|
1208.1842
|
Logic of Non-Monotonic Interactive Proofs (Formal Theory of Temporary
Knowledge Transfer)
|
cs.LO cs.CR cs.DC cs.MA math.LO
|
We propose a monotonic logic of internalised non-monotonic or instant
interactive proofs (LiiP) and reconstruct an existing monotonic logic of
internalised monotonic or persistent interactive proofs (LiP) as a minimal
conservative extension of LiiP. Instant interactive proofs effect a fragile
epistemic impact in their intended communities of peer reviewers that consists
in the impermanent induction of the knowledge of their proof goal by means of
the knowledge of the proof with the interpreting reviewer: If my peer reviewer
knew my proof then she would at least then (in that instant) know that its
proof goal is true. Their impact is fragile and their induction of knowledge
impermanent in the sense of being the case possibly only at the instant of
learning the proof. This accounts for the important possibility of
internalising proofs of statements whose truth value can vary, which, as
opposed to invariant statements, cannot have persistent proofs. So instant
interactive proofs effect a temporary transfer of certain propositional
knowledge (knowable ephemeral facts) via the transmission of certain individual
knowledge (knowable non-monotonic proofs) in distributed systems of multiple
interacting agents.
|
1208.1846
|
Margin Distribution Controlled Boosting
|
cs.LG
|
Schapire's margin theory provides a theoretical explanation for the success of
boosting-type methods and shows that a good margin distribution (MD) of
training samples is essential for generalization. However, the statement that
an MD is good is vague; consequently, many recently developed algorithms try to
generate an MD that is good in their own sense in order to improve
generalization. Unlike
their indirect control over MD, in this paper, we propose an alternative
boosting algorithm termed Margin distribution Controlled Boosting (MCBoost)
which directly controls the MD by introducing and optimizing a key adjustable
margin parameter. MCBoost's optimization implementation adopts the column
generation technique to ensure fast convergence and a small number of weak
classifiers involved in the final MCBooster. We empirically demonstrate: 1)
AdaBoost is actually also a MD controlled algorithm and its iteration number
acts as a parameter controlling the distribution and 2) the generalization
performance of MCBoost evaluated on UCI benchmark datasets is validated to be
better than that of AdaBoost, L2Boost, LPBoost, AdaBoost-CG and MDBoost.
|
1208.1860
|
Scaling Multiple-Source Entity Resolution using Statistically Efficient
Transfer Learning
|
cs.DB cs.LG
|
We consider a serious, previously-unexplored challenge facing almost all
approaches to scaling up entity resolution (ER) to multiple data sources: the
prohibitive cost of labeling training data for supervised learning of
similarity scores for each pair of sources. While there exists a rich
literature describing almost all aspects of pairwise ER, this new challenge is
arising now due to the unprecedented ability to acquire and store data from
online sources, features driven by ER such as enriched search verticals, and
the uniqueness of noisy and missing data characteristics for each source. We
show on real-world and synthetic data that for state-of-the-art techniques, the
reality of heterogeneous sources means that the amount of labeled training data
must scale quadratically in the number of sources, just to maintain constant
precision/recall. We address this challenge with a brand new transfer learning
algorithm which requires far less training data (or equivalently, achieves
superior accuracy with the same data) and is trained using fast convex
optimization. The intuition behind our approach is to adaptively share
structure learned about one scoring problem with all other scoring problems
sharing a data source in common. We demonstrate that our theoretically
motivated approach incurs no runtime cost while it can maintain constant
precision/recall with the cost of labeling increasing only linearly with the
number of sources.
|
1208.1878
|
Sets of Zero-Difference Balanced Functions and Their Applications
|
cs.IT math.CO math.IT
|
Zero-difference balanced (ZDB) functions can be employed in many
applications, e.g., optimal constant composition codes, optimal and perfect
difference systems of sets, optimal frequency hopping sequences, etc. In this
paper, two results are summarized to characterize ZDB functions, among which a
lower bound is used to achieve optimality in applications and determine the
size of preimage sets of ZDB functions. As the main contribution, a generic
construction of ZDB functions is presented, and many new classes of ZDB
functions can be generated. This construction is then extended to construct a
set of ZDB functions, in which any two ZDB functions are related uniformly.
Furthermore, some applications of such sets of ZDB functions are also
introduced.
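The ZDB property itself is easy to verify by brute force. The checker below is illustrative (not from the paper); the example function x ↦ x² mod p is a classical planar function over Z_p for odd primes p, hence ZDB with λ = 1:

```python
def zdb_lambda(f, n):
    """Brute-force ZDB check on Z_n: for every nonzero a, count
    |{x : f(x + a) = f(x)}|; return the common count (lambda) if it is
    the same for all a, otherwise None."""
    counts = {sum(1 for x in range(n) if f((x + a) % n) == f(x))
              for a in range(1, n)}
    return counts.pop() if len(counts) == 1 else None

# x -> x^2 mod p is planar for odd primes p: f(x+a) - f(x) = 2ax + a^2
# has exactly one root x for each a != 0, so the function is ZDB, lambda = 1.
print(zdb_lambda(lambda x: x * x % 7, 7))  # 1

# x -> x mod 3 on Z_9 is not ZDB: a = 3 gives 9 collisions, a = 1 gives 0.
print(zdb_lambda(lambda x: x % 3, 9))      # None
```

Such exhaustive checks only scale to small groups, which is exactly why generic constructions like the one in this paper matter.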
|
1208.1880
|
Stereo Acoustic Perception based on Real Time Video Acquisition for
Navigational Assistance
|
cs.CV cs.MM cs.SD
|
A smart navigation system (an Electronic Travel Aid) based on an object
detection mechanism has been designed to detect the presence of obstacles that
immediately impede the path, by means of real time video processing. The
algorithm can be used for any general-purpose navigational aid. The discussion
here focuses on navigation for the visually impaired, but is not limited to it.
A video camera feeds images of the surroundings to a DaVinci Digital Media
Processor, DM642, which works on the video frame by frame.
The processor carries out image processing techniques whose result contains
information about the object in terms of image pixels. The algorithm aims to
select the object which, among all others, poses maximum threat to the
navigation. A database containing a total of three sounds is constructed.
Hence, each image translates to a beep, where every beep informs the navigator
of the obstacles directly in front of him. This paper implements an algorithm
that is more efficient as compared to its predecessors.
|
1208.1885
|
Performance and Detection of M-ary Frequency Shift Keying in Triple
Layer Wireless Sensor Network
|
cs.IT math.IT
|
This paper proposes an innovative triple layer Wireless Sensor Network (WSN)
system, which monitors M-ary events like temperature, pressure, humidity, etc.
with the help of geographically distributed sensors. The sensors convey signals
to the fusion centre using an M-ary Frequency Shift Keying (MFSK) modulation
scheme
over independent Rayleigh fading channels. At the fusion centre, detection
takes place with the help of Selection Combining (SC) diversity scheme, which
assures a simple and economical receiver circuitry. With the aid of various
simulations, the performance and efficacy of the system has been analyzed by
varying modulation levels, number of local sensors and probability of correct
detection by the sensors. The study endeavors to prove that the triple layer
WSN system is an economical and dependable system capable of correct detection
of
M-ary events by integrating frequency diversity together with antenna
diversity.
|
1208.1886
|
Semantic Web Techniques for Yellow Page Service Providers
|
cs.IR
|
Use of web pages providing unstructured information poses a variety of problems
to the user, such as use of arbitrary formats, unsuitability for machine
processing and likely incompleteness of information. Structured data alleviates
these problems but we require more. Very often yellow page systems are
implemented using a centralized database. In some cases, human intermediaries
accessible over the phone network examine a centralized database and use their
reasoning ability to deal with the user's need for information. Scaling up such
systems is difficult. This paper explores an alternative - a highly distributed
system design meeting a variety of needs - considerably reducing efforts
required at a central organization, enabling large numbers of vendors to enter
information about their own products and services, enabling end-users to
contribute information such as their own ratings, using an ontology to describe
each domain of application in a flexible manner for uses foreseen and
unforeseen, enabling distributed search and mash-ups, use of vendor independent
standards, using reasoning to find the best matches to a given query,
geo-spatial reasoning and a simple, interactive, mobile application/interface.
We give importance to geo-spatial information and mobile applications because
of the very wide-spread use of mobile phones and their inherent ability to
provide some information about the current location of the user. We have
created a prototype using the Jena Toolkit and geo-spatial extensions to
SPARQL. We have tested this prototype by asking a group of typical users to use
it and to provide structured feedback. We have summarized this feedback in the
paper. We believe that the technology can be applied in many contexts in
addition to yellow page systems.
|
1208.1921
|
Algorithmic Simplicity and Relevance
|
cs.AI cs.CC
|
The human mind is known to be sensitive to complexity. For instance, the
visual system reconstructs hidden parts of objects following a principle of
maximum simplicity. We suggest here that higher cognitive processes, such as
the selection of relevant situations, are sensitive to variations of
complexity. Situations are relevant to human beings when they appear simpler to
describe than to generate. This definition offers a predictive (i.e.
falsifiable) model for the selection of situations worth reporting
(interestingness) and for what individuals consider an appropriate move in
conversation.
|
1208.1924
|
Moderate Deviations in Channel Coding
|
cs.IT math.IT
|
We consider block codes whose rate converges to the channel capacity with
increasing block length at a certain speed and examine the best possible decay
of the probability of error. We prove that a moderate deviation principle holds
for all convergence rates between the large deviation and the central limit
theorem regimes.
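For context, a moderate deviation principle of this kind typically takes the following form (the notation is assumed here, following the standard literature, since the abstract does not fix it: $\epsilon_n$ is the gap to capacity and $V$ the channel dispersion):

```latex
% Rates R_n = C - \epsilon_n with \epsilon_n \to 0 and n\,\epsilon_n^2 \to \infty,
% i.e. between the central-limit and large-deviation scalings.
\lim_{n \to \infty} \; -\frac{1}{n\,\epsilon_n^{2}} \ln P_{e}^{(n)} \;=\; \frac{1}{2V}
```

where $P_e^{(n)}$ is the error probability of the best length-$n$ code.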
|