id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1304.3553 | On the Reliability Function of the Discrete Memoryless Relay Channel | cs.IT math.IT | Bounds on the reliability function for the discrete memoryless relay channel
are derived using the method of types. Two achievable error exponents are
derived based on partial decode-forward and compress-forward which are
well-known superposition block-Markov coding schemes. The derivations require
combinations of the techniques involved in the proofs of
Csisz\'ar-K\"orner-Marton's packing lemma for the error exponent of channel
coding and Marton's type covering lemma for the error exponent of source coding
with a fidelity criterion. The decode-forward error exponent is evaluated on
Sato's relay channel. From this example, it is noted that to obtain the fastest
possible decay in the error probability for a fixed effective coding rate, one
ought to optimize the number of blocks in the block-Markov coding scheme
assuming the blocklength within each block is large. An upper bound on the
reliability function is also derived using ideas from Haroutunian's lower bound
on the error probability for point-to-point channel coding with feedback.
|
1304.3563 | Data, text and web mining for business intelligence: a survey | cs.IR | The Information and Communication Technologies revolution brought a digital
world with huge amounts of data available. Enterprises use mining technologies
to search vast amounts of data for vital insight and knowledge. Mining tools
such as data mining, text mining, and web mining are used to find hidden
knowledge in large databases or the Internet.
|
1304.3568 | Distributed dictionary learning over a sensor network | stat.ML cs.LG stat.AP | We consider the problem of distributed dictionary learning, where a set of
nodes is required to collectively learn a common dictionary from noisy
measurements. This approach may be useful in several contexts including sensor
networks. Diffusion cooperation schemes have been proposed to solve the
distributed linear regression problem. In this work we focus on a
diffusion-based adaptive dictionary learning strategy: each node records
observations and cooperates with its neighbors by sharing its local dictionary.
The resulting algorithm corresponds to a distributed block coordinate descent
(alternate optimization). Beyond dictionary learning, this strategy could be
adapted to many matrix factorization problems and generalized to various
settings. This article presents our approach and illustrates its efficiency on
some numerical examples.
|
1304.3573 | Astronomical Image Denoising Using Dictionary Learning | astro-ph.IM cs.CV | Astronomical images suffer a constant presence of multiple defects that are
consequences of the intrinsic properties of the acquisition equipments, and
atmospheric conditions. One of the most frequent defects in astronomical
imaging is the presence of additive noise which makes a denoising step
mandatory before processing data. During the last decade, a particular modeling
scheme, based on sparse representations, has drawn the attention of an ever
growing community of researchers. Sparse representations offer a promising
framework to many image and signal processing tasks, especially denoising and
restoration applications. At first, the harmonics, wavelets, and similar bases
and overcomplete representations have been considered as candidate domains to
seek the sparsest representation. A new generation of algorithms, based on
data-driven dictionaries, evolved rapidly and compete now with the
off-the-shelf fixed dictionaries. While designing a dictionary beforehand leans
on a guess of the most appropriate representative elementary forms and
functions, the dictionary learning framework offers to construct the dictionary
upon the data themselves, which provides us with a more flexible setup to
sparse modeling and allows to build more sophisticated dictionaries. In this
paper, we introduce the Centered Dictionary Learning (CDL) method and we study
its performances for astronomical image denoising. We show how CDL outperforms
wavelet or classic dictionary learning denoising techniques on astronomical
images, and we give a comparison of the effect of these different algorithms on
the photometry of the denoised images.
|
1304.3602 | An age structured demographic theory of technological change | physics.soc-ph cs.SI math.DS q-fin.GN | At the heart of technology transitions lie complex processes of social and
industrial dynamics. The quantitative study of sustainability transitions
requires modelling work, which necessitates a theory of technology
substitution. Many, if not most, contemporary modelling approaches for future
technology pathways overlook most aspects of transitions theory, for instance
dimensions of heterogenous investor choices, dynamic rates of diffusion and the
profile of transitions. A significant body of literature however exists that
demonstrates how transitions follow S-shaped diffusion curves or Lotka-Volterra
systems of equations. This framework is used ex-post since timescales can only
be reliably obtained in cases where the transitions have already occurred,
precluding its use for studying cases of interest where nascent innovations in
protective niches await favourable conditions for their diffusion. In
principle, scaling parameters of transitions can, however, be derived from
knowledge of industrial dynamics, technology turnover rates and technology
characteristics. In this context, this paper presents a theory framework for
evaluating the parameterisation of S-shaped diffusion curves for use in
simulation models of technology transitions without the involvement of
historical data fitting, making use of standard demography theory applied to
technology at the unit level. The classic Lotka-Volterra competition system
emerges from first principles from demography theory, its timescales explained
in terms of technology lifetimes and industrial dynamics. The theory is placed
in the context of the multi-level perspective on technology transitions, where
innovation and the diffusion of new socio-technical regimes take a prominent
place, as well as discrete choice theory, the primary theoretical framework for
introducing agent diversity.
|
1304.3603 | SCAF An effective approach to Classify Subspace Clustering algorithms | cs.DB | Subspace clustering discovers the clusters embedded in multiple, overlapping
subspaces of high dimensional data. Many significant subspace clustering
algorithms exist, each having different characteristics caused by the use of
different techniques, assumptions, heuristics used etc. A comprehensive
classification scheme is essential which will consider all such characteristics
to divide subspace clustering approaches in various families. The algorithms
belonging to same family will satisfy common characteristics. Such a
categorization will help future developers to better understand the quality
criteria to be used and similar algorithms to be used to compare results with
their proposed clustering algorithms. In this paper, we first proposed the
concept of SCAF (Subspace Clustering Algorithms Family). Characteristics of
SCAF will be based on the classes such as cluster orientation, overlap of
dimensions etc. As an illustration, we further provided a comprehensive,
systematic description and comparison of few significant algorithms belonging
to 'Axis parallel, overlapping, density based' SCAF.
|
1304.3604 | On Model-Based RIP-1 Matrices | cs.DS cs.IT math.IT math.NA | The Restricted Isometry Property (RIP) is a fundamental property of a matrix
enabling sparse recovery. Informally, an m x n matrix satisfies RIP of order k
in the l_p norm if ||Ax||_p \approx ||x||_p for any vector x that is k-sparse,
i.e., that has at most k non-zeros. The minimal number of rows m necessary for
the property to hold has been extensively investigated, and tight bounds are
known. Motivated by signal processing models, a recent work of Baraniuk et al
has generalized this notion to the case where the support of x must belong to a
given model, i.e., a given family of supports. This more general notion is much
less understood, especially for norms other than l_2. In this paper we present
tight bounds for the model-based RIP property in the l_1 norm. Our bounds hold
for the two most frequently investigated models: tree-sparsity and
block-sparsity. We also show implications of our results to sparse recovery
problems.
|
1304.3610 | Modified Soft Brood Crossover in Genetic Programming | cs.NE | Premature convergence is one of the important issues while using Genetic
Programming for data modeling. It can be avoided by improving population
diversity. Intelligent genetic operators can help to improve the population
diversity. Crossover is an important operator in Genetic Programming. So, we
have analyzed number of intelligent crossover operators and proposed an
algorithm with the modification of soft brood crossover operator. It will help
to improve the population diversity and reduce the premature convergence. We
have performed experiments on three different symbolic regression problems.
Then we made the performance comparison of our proposed crossover (Modified
Soft Brood Crossover) with the existing soft brood crossover and subtree
crossover operators.
|
1304.3612 | A Novel Metaheuristics To Solve Mixed Shop Scheduling Problems | cs.NE | This paper represents the metaheuristics proposed for solving a class of Shop
Scheduling problem. The Bacterial Foraging Optimization algorithm is featured
with Ant Colony Optimization algorithm and proposed as a natural inspired
computing approach to solve the Mixed Shop Scheduling problem. The Mixed Shop
is the combination of Job Shop, Flow Shop and Open Shop scheduling problems.
The sample instances for all mentioned Shop problems are used as test data and
Mixed Shop survive its computational complexity to minimize the makespan. The
computational results show that the proposed algorithm is gentler to solve and
performs better than the existing algorithms.
|
1304.3623 | The Rise and Fall of R&D Networks | physics.soc-ph cs.SI | Drawing on a large database of publicly announced R&D alliances, we
empirically investigate the evolution of R&D networks and the process of
alliance formation in several manufacturing sectors over a 24-year period
(1986-2009). Our goal is to empirically evaluate the temporal and sectoral
robustness of a large set of network indicators, thus providing a more complete
description of R&D networks with respect to the existing literature. We find
that most network properties are not only invariant across sectors, but also
independent of the scale of aggregation at which they are observed, and we
highlight the presence of core-periphery architectures in explaining some
properties emphasized in previous empirical studies (e.g. asymmetric degree
distributions and small worlds). In addition, we show that many properties of
R&D networks are characterized by a rise-and-fall dynamics with a peak in the
mid-nineties. We find that such dynamics is driven by mechanisms of
accumulative advantage, structural homophily and multiconnectivity. In
particular, the change from the "rise" to the "fall" phase is associated to a
structural break in the importance of multiconnectivity.
|
1304.3640 | Aloha Games with Spatial Reuse | cs.GT cs.IT math.IT | Aloha games study the transmission probabilities of a group of
non-cooperative users which share a channel to transmit via the slotted Aloha
protocol. This paper extends the Aloha games to spatial reuse scenarios, and
studies the system equilibrium and performance. Specifically, fixed point
theory and order theory are used to prove the existence of a least fixed point
as the unique Nash equilibrium (NE) of the game and the optimal choice of all
players. The Krasovskii's method is used to construct a Lyapunov function and
obtain the conditions to examine the stability of the NE. Simulations show that
the theories derived are applicable to large-scale distributed systems of
complicated network topologies. An empirical relationship between the network
connectivity and the achievable total throughput is finally obtained through
simulations.
|
1304.3646 | Network connectivity through small openings | cond-mat.dis-nn cs.IT math.IT | Network connectivity is usually addressed for convex domains where a direct
line of sight exists between any two transmitting/receiving nodes. Here, we
develop a general theory for the network connectivity properties across a small
opening, rendering the domain essentially non-convex. Our analytic approach can
go only so far as we encounter what is referred to in statistical physics as
quenched disorder making the problem non-trivial. We confirm our theory through
computer simulations, obtain leading order approximations and discuss possible
extensions and applications.
|
1304.3658 | Efficient One-Way Secret-Key Agreement and Private Channel Coding via
Polarization | cs.IT cs.CR math.IT | We introduce explicit schemes based on the polarization phenomenon for the
tasks of one-way secret key agreement from common randomness and private
channel coding. For the former task, we show how to use common randomness and
insecure one-way communication to obtain a strongly secure key such that the
key construction has a complexity essentially linear in the blocklength and the
rate at which the key is produced is optimal, i.e., equal to the one-way
secret-key rate. For the latter task, we present a private channel coding
scheme that achieves the secrecy capacity using the condition of strong secrecy
and whose encoding and decoding complexity are again essentially linear in the
blocklength.
|
1304.3663 | Cooperative localization by dual foot-mounted inertial sensors and
inter-agent ranging | cs.RO cs.MA cs.SY | The implementation challenges of cooperative localization by dual
foot-mounted inertial sensors and inter-agent ranging are discussed and work on
the subject is reviewed. System architecture and sensor fusion are identified
as key challenges. A partially decentralized system architecture based on
step-wise inertial navigation and step-wise dead reckoning is presented. This
architecture is argued to reduce the computational cost and required
communication bandwidth by around two orders of magnitude while only giving
negligible information loss in comparison with a naive centralized
implementation. This makes a joint global state estimation feasible for up to a
platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion
for the considered setup, based on state space transformation and
marginalization, is presented. The transformation and marginalization are used
to give the necessary flexibility for presented sampling based updates for the
inter-agent ranging and ranging free fusion of the two feet of an individual
agent. Finally, characteristics of the suggested implementation are
demonstrated with simulations and a real-time system implementation.
|
1304.3700 | A planetary nervous system for social mining and collective awareness | cs.CY cs.SI physics.soc-ph | We present a research roadmap of a Planetary Nervous System (PNS), capable of
sensing and mining the digital breadcrumbs of human activities and unveiling
the knowledge hidden in the big data for addressing the big questions about
social complexity. We envision the PNS as a globally distributed,
self-organizing, techno-social system for answering analytical questions about
the status of world-wide society, based on three pillars: social sensing,
social mining, and the idea of trust networks and privacy-aware social mining.
We discuss the ingredients of a science and a technology necessary to build the
PNS upon the three mentioned pillars, beyond the limitations of their
respective state-of-art. Social sensing is aimed at developing better methods
for harvesting the big data from the techno-social ecosystem and make them
available for mining, learning and analysis at a properly high abstraction
level.Social mining is the problem of discovering patterns and models of human
behaviour from the sensed data across the various social dimensions by data
mining, machine learning and social network analysis. Trusted networks and
privacy-aware social mining is aimed at creating a new deal around the
questions of privacy and data ownership empowering individual persons with full
awareness and control on own personal data, so that users may allow access and
use of their data for their own good and the common good. The PNS will provide
a goal-oriented knowledge discovery framework, made of technology and people,
able to configure itself to the aim of answering questions about the pulse of
global society. Given an analytical request, the PNS activates a process
composed by a variety of interconnected tasks exploiting the social sensing and
mining methods within the transparent ecosystem provided by the trusted
network.
|
1304.3708 | Advice-Efficient Prediction with Expert Advice | cs.LG stat.ML | Advice-efficient prediction with expert advice (in analogy to label-efficient
prediction) is a variant of prediction with expert advice game, where on each
round of the game we are allowed to ask for advice of a limited number $M$ out
of $N$ experts. This setting is especially interesting when asking for advice
of every expert on every round is expensive. We present an algorithm for
advice-efficient prediction with expert advice that achieves
$O(\sqrt{\frac{N}{M}T\ln N})$ regret on $T$ rounds of the game.
|
1304.3733 | General Quantum Hilbert Space Modeling Scheme for Entanglement | quant-ph cs.AI | We work out a classification scheme for quantum modeling in Hilbert space of
any kind of composite entity violating Bell's inequalities and exhibiting
entanglement. Our theoretical framework includes situations with entangled
states and product measurements ('customary quantum situation'), and also
situations with both entangled states and entangled measurements ('nonlocal box
situation', 'nonlocal non-marginal box situation'). We show that entanglement
is structurally a joint property of states and measurements. Furthermore,
entangled measurements enable quantum modeling of situations that are usually
believed to be 'beyond quantum'. Our results are also extended from pure states
to quantum mixtures.
|
1304.3742 | From Cookies to Cooks: Insights on Dietary Patterns via Analysis of Web
Usage Logs | cs.CY cs.IR physics.soc-ph | Nutrition is a key factor in people's overall health. Hence, understanding
the nature and dynamics of population-wide dietary preferences over time and
space can be valuable in public health. To date, studies have leveraged small
samples of participants via food intake logs or treatment data. We propose a
complementary source of population data on nutrition obtained via Web logs. Our
main contribution is a spatiotemporal analysis of population-wide dietary
preferences through the lens of logs gathered by a widely distributed
Web-browser add-on, using the access volume of recipes that users seek via
search as a proxy for actual food consumption. We discover that variation in
dietary preferences as expressed via recipe access has two main periodic
components, one yearly and the other weekly, and that there exist
characteristic regional differences in terms of diet within the United States.
In a second study, we identify users who show evidence of having made an acute
decision to lose weight. We characterize the shifts in interests that they
express in their search queries and focus on changes in their recipe queries in
particular. Last, we correlate nutritional time series obtained from recipe
queries with time-aligned data on hospital admissions, aimed at understanding
how behavioral data captured in Web logs might be harnessed to identify
potential relationships between diet and acute health problems. In this
preliminary study, we focus on patterns of sodium identified in recipes over
time and patterns of admission for congestive heart failure, a chronic illness
that can be exacerbated by increases in sodium intake.
|
1304.3745 | Towards more accurate clustering method by using dynamic time warping | cs.LG stat.ML | An intrinsic problem of classifiers based on machine learning (ML) methods is
that their learning time grows as the size and complexity of the training
dataset increases. For this reason, it is important to have efficient
computational methods and algorithms that can be applied on large datasets,
such that it is still possible to complete the machine learning tasks in
reasonable time. In this context, we present in this paper a more accurate
simple process to speed up ML methods. An unsupervised clustering algorithm is
combined with Expectation, Maximization (EM) algorithm to develop an efficient
Hidden Markov Model (HMM) training. The idea of the proposed process consists
of two steps. In the first step, training instances with similar inputs are
clustered and a weight factor which represents the frequency of these instances
is assigned to each representative cluster. Dynamic Time Warping technique is
used as a dissimilarity function to cluster similar examples. In the second
step, all formulas in the classical HMM training algorithm (EM) associated with
the number of training instances are modified to include the weight factor in
appropriate terms. This process significantly accelerates HMM training while
maintaining the same initial, transition and emission probabilities matrixes as
those obtained with the classical HMM training algorithm. Accordingly, the
classification accuracy is preserved. Depending on the size of the training
set, speedups of up to 2200 times is possible when the size is about 100.000
instances. The proposed approach is not limited to training HMMs, but it can be
employed for a large variety of MLs methods.
|
1304.3747 | The Social Maintenance of Cooperation through Hypocrisy | cs.SI physics.soc-ph q-bio.OT | Cooperation is widespread in human societies, but its maintenance at the
group level remains puzzling if individuals benefit from not cooperating.
Explanations of the maintenance of cooperation generally assume that
cooperative and non-cooperative behavior in others can be assessed and copied
accurately. However, humans have a well known capacity to deceive and thus to
manipulate how others assess their behavior. Here, we show that hypocrisy -
claiming to be acting cooperatively while acting selfishly - can maintain
social cooperation because it prevents the spread of selfish behavior. We
demonstrate this effect both theoretically and experimentally. Hypocrisy allows
the cooperative strategy to spread by taking credit for the success of the
non-cooperative strategy.
|
1304.3760 | Identification of relevant subtypes via preweighted sparse clustering | stat.ME cs.LG q-bio.QM stat.AP stat.ML | Cluster analysis methods are used to identify homogeneous subgroups in a data
set. In biomedical applications, one frequently applies cluster analysis in
order to identify biologically interesting subgroups. In particular, one may
wish to identify subgroups that are associated with a particular outcome of
interest. Conventional clustering methods generally do not identify such
subgroups, particularly when there are a large number of high-variance features
in the data set. Conventional methods may identify clusters associated with
these high-variance features when one wishes to obtain secondary clusters that
are more interesting biologically or more strongly associated with a particular
outcome of interest. A modification of sparse clustering can be used to
identify such secondary clusters or clusters associated with an outcome of
interest. This method correctly identifies such clusters of interest in several
simulation scenarios. The method is also applied to a large prospective cohort
study of temporomandibular disorders and a leukemia microarray data set.
|
1304.3762 | Evolutionary Turing in the Context of Evolutionary Machines | cs.AI | One of the roots of evolutionary computation was the idea of Turing about
unorganized machines. The goal of this work is the development of foundations
for evolutionary computations, connecting Turing's ideas and the contemporary
state of art in evolutionary computations. To achieve this goal, we develop a
general approach to evolutionary processes in the computational context,
building mathematical models of computational systems, functioning of which is
based on evolutionary processes, and studying properties of such systems.
Operations with evolutionary machines are described and it is explored when
definite classes of evolutionary machines are closed with respect to basic
operations with these machines. We also study such properties as linguistic and
functional equivalence of evolutionary machines and their classes, as well as
computational power of evolutionary machines and their classes, comparing of
evolutionary machines to conventional automata, such as finite automata or
Turing machines.
|
1304.3763 | An Improved ACS Algorithm for the Solutions of Larger TSP Problems | cs.AI cs.DS cs.NE | Solving large traveling salesman problem (TSP) in an efficient way is a
challenging area for the researchers of computer science. This paper presents a
modified version of the ant colony system (ACS) algorithm called Red-Black Ant
Colony System (RB-ACS) for the solutions of TSP which is the most prominent
member of the combinatorial optimization problem. RB-ACS uses the concept of
ant colony system together with the parallel search of genetic algorithm for
obtaining the optimal solutions quickly. In this paper, it is shown that the
proposed RB-ACS algorithm yields significantly better performance than the
existing best-known algorithms.
|
1304.3778 | Optimal Control Theory in Intelligent Transportation Systems Research -
A Review | cs.SY cs.NI | Continuous motorization and urbanization around the globe leads to an
expansion of population in major cities. Therefore, ever-growing pressure
imposed on the existing mass transit systems calls for a better technology,
Intelligent Transportation Systems (ITS), to solve many new and demanding
management issues. Many studies in the extant ITS literature attempted to
address these issues within which various research methodologies were adopted.
However, there is very few paper summarized what does optimal control theory
(OCT), one of the sharpest tools to tackle management issues in engineering, do
in solving these issues. It{\textquoteright}s both important and interesting to
answer the following two questions. (1) How does OCT contribute to ITS research
objectives? (2) What are the research gaps and possible future research
directions? We searched 11 top transportation and control journals and reviewed
41 research articles in ITS area in which OCT was used as the main research
methodology. We categorized the articles by four different ways to address our
research questions. We can conclude from the review that OCT is widely used to
address various aspects of management issues in ITS within which a large
portion of the studies aimed to reduce traffic congestion. We also critically
discussed these studies and pointed out some possible future research
directions towards which OCT can be used.
|
1304.3779 | Improving Generalization Ability of Genetic Programming: Comparative
Study | cs.NE | In the field of empirical modeling using Genetic Programming (GP), it is
important to evolve solution with good generalization ability. Generalization
ability of GP solutions get affected by two important issues: bloat and
over-fitting. Bloat is uncontrolled growth of code without any gain in fitness
and important issue in GP. We surveyed and classified existing literature
related to different techniques used by GP research community to deal with the
issue of bloat. Moreover, the classifications of different bloat control
approaches and measures for bloat are discussed. Next, we tested four bloat
control methods: Tarpeian, double tournament, lexicographic parsimony pressure
with direct bucketing and ratio bucketing on six different problems and
identified where each bloat control method performs well on per problem basis.
Based on the analysis of each method, we combined two methods: double
tournament (selection method) and Tarpeian method (works before evaluation) to
avoid bloated solutions and compared with the results obtained from individual
performance of double tournament method. It was found that the results were
improved with this combination of two methods.
|
1304.3792 | Solving Linear Equations Using a Jacobi Based Time-Variant Adaptive
Hybrid Evolutionary Algorithm | cs.NE | Large set of linear equations, especially for sparse and structured
coefficient (matrix) equations, solutions using classical methods become
arduous. And evolutionary algorithms have mostly been used to solve various
optimization and learning problems. Recently, hybridization of classical
methods (Jacobi method and Gauss-Seidel method) with evolutionary computation
techniques have successfully been applied in linear equation solving. In the
both above hybrid evolutionary methods, uniform adaptation (UA) techniques are
used to adapt relaxation factor. In this paper, a new Jacobi Based Time-Variant
Adaptive (JBTVA) hybrid evolutionary algorithm is proposed. In this algorithm,
a Time-Variant Adaptive (TVA) technique of relaxation factor is introduced
aiming at both improving the fine local tuning and reducing the disadvantage of
uniform adaptation of relaxation factors. This algorithm integrates the Jacobi
based SR method with time variant adaptive evolutionary algorithm. The
convergence theorems of the proposed algorithm are proved theoretically. And
the performance of the proposed algorithm is compared with JBUA hybrid
evolutionary algorithm and classical methods in the experimental domain. The
proposed algorithm outperforms both the JBUA hybrid algorithm and classical
methods in terms of convergence speed and effectiveness.
|
1304.3795 | An Investigation of Wavelet Packet Transform for Spectrum Estimation | cs.IT math.IT math.SP | In this article, we investigate the application of wavelet packet transform
as a novel spectrum sensing approach. The main attraction for wavelet packets
is the tradeoffs they offer in terms of satisfying various performance metrics
such as frequency resolution, variance of the estimated power spectral density
(PSD) and complexity. The results of the experiments show that the wavelet
based approach offers great flexibility, reconfigure ability and adaptability
apart from its performances which are comparable and at times even better than
Fourier based estimates.
|
1304.3796 | Nodes having a major influence to break cooperation define a novel
centrality measure: game centrality | q-bio.MN cs.GT cs.SI nlin.AO physics.soc-ph | Cooperation played a significant role in the self-organization and evolution
of living organisms. Both network topology and the initial position of
cooperators heavily affect the cooperation of social dilemma games. We
developed a novel simulation program package, called 'NetworGame', which is
able to simulate any type of social dilemma games on any model, or real world
networks with any assignment of initial cooperation or defection strategies to
network nodes. The ability of initially defecting single nodes to break overall
cooperation was called as 'game centrality'. The efficiency of this measure was
verified on well-known social networks, and was extended to 'protein games',
i.e. the simulation of cooperation between proteins, or their amino acids. Hubs
and in particular, party hubs of yeast protein-protein interaction networks had
a large influence to convert the cooperation of other nodes to defection.
Simulations on methionyl-tRNA synthetase protein structure network indicated an
increased influence of nodes belonging to intra-protein signaling pathways on
breaking cooperation. The efficiency of single, initially defecting nodes to
convert the cooperation of other nodes to defection in social dilemma games may
be an important measure to predict the importance of nodes in the integration
and regulation of complex systems. Game centrality may help to design more
efficient interventions to cellular networks (in forms of drugs), to ecosystems
and social networks. The NetworGame algorithm is downloadable from here:
www.NetworGame.linkgroup.hu
|
1304.3819 | SybilFence: Improving Social-Graph-Based Sybil Defenses with User
Negative Feedback | cs.SI physics.soc-ph | Detecting and suspending fake accounts (Sybils) in online social networking
(OSN) services protects both OSN operators and OSN users from illegal
exploitation. Existing social-graph-based defense schemes effectively bound the
accepted Sybils to the total number of social connections between Sybils and
non-Sybil users. However, Sybils may still evade the defenses by soliciting
many social connections to real users. We propose SybilFence, a system that
improves over social-graph-based Sybil defenses to further thwart Sybils.
SybilFence is based on the observation that even well-maintained fake accounts
inevitably receive a significant number of user negative feedback, such as the
rejections to their friend requests. Our key idea is to discount the social
edges on users that have received negative feedback, thereby limiting the
impact of Sybils' social edges. The preliminary simulation results show that
our proposal is more resilient to attacks where fake accounts continuously
solicit social connections over time.
|
1304.3826 | Multi-Layer Transmission and Hybrid Relaying for Relay Channels with
Multiple Out-of-Band Relays | cs.IT math.IT | In this work, a relay channel is studied in which a source encoder
communicates with a destination decoder through a number of out-of-band relays
that are connected to the decoder through capacity-constrained digital backhaul
links. This model is motivated by the uplink of cloud radio access networks. In
this scenario, a novel transmission and relaying strategies are proposed in
which multi-layer transmission is used, on the one hand, to adaptively leverage
the different decoding capabilities of the relays and, on the other hand, to
enable hybrid decode-and-forward (DF) and compress-and-forward (CF) relaying.
The hybrid relaying strategy allows each relay to forward part of the decoded
messages and a compressed version of the received signal to the decoder. The
problem of optimizing the power allocation across the layers and the
compression test channels is formulated. Albeit non-convex, the derived problem
is found to belong to the class of so called complementary geometric programs
(CGPs). Using this observation, an iterative algorithm based on the homotopy
method is proposed that achieves a stationary point of the original problem by
solving a sequence of geometric programming (GP), and thus convex, problems.
Numerical results are provided that show the effectiveness of the proposed
multi-layer hybrid scheme in achieving performance close to a theoretical
(cutset) upper bound.
|
1304.3840 | A New Homogeneity Inter-Clusters Measure in SemiSupervised Clustering | cs.LG | Many studies in data mining have proposed a new learning called
semi-Supervised. Such type of learning combines unlabeled and labeled data
which are hard to obtain. However, in unsupervised methods, the only unlabeled
data are used. The problem of significance and the effectiveness of
semi-supervised clustering results is becoming of main importance. This paper
pursues the thesis that muchgreater accuracy can be achieved in such clustering
by improving the similarity computing. Hence, we introduce a new approach of
semisupervised clustering using an innovative new homogeneity measure of
generated clusters. Our experimental results demonstrate significantly improved
accuracy as a result.
|
1304.3841 | The risks of mixing dependency lengths from sequences of different
length | cs.CL physics.data-an | Mixing dependency lengths from sequences of different length is a common
practice in language research. However, the empirical distribution of
dependency lengths of sentences of the same length differs from that of
sentences of varying length and the distribution of dependency lengths depends
on sentence length for real sentences and also under the null hypothesis that
dependencies connect vertices located in random positions of the sequence. This
suggests that certain results, such as the distribution of syntactic dependency
lengths mixing dependencies from sentences of varying length, could be a mere
consequence of that mixing. Furthermore, differences in the global averages of
dependency length (mixing lengths from sentences of varying length) for two
different languages do not simply imply a priori that one language optimizes
dependency lengths better than the other because those differences could be due
to differences in the distribution of sentence lengths and other factors.
|
1304.3842 | Proceedings of the Sixteenth Conference on Uncertainty in Artificial
Intelligence (2000) | cs.AI | This is the Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence, which was held in San Francisco, CA, June 30 - July 3,
2000
|
1304.3843 | Proceedings of the Fifteenth Conference on Uncertainty in Artificial
Intelligence (1999) | cs.AI | This is the Proceedings of the Fifteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Stockholm Sweden, July 30 - August
1, 1999
|
1304.3844 | Proceedings of the Fourteenth Conference on Uncertainty in Artificial
Intelligence (1998) | cs.AI | This is the Proceedings of the Fourteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Madison, WI, July 24-26, 1998
|
1304.3845 | The Impact of Situation Clustering in Contextual-Bandit Algorithm for
Context-Aware Recommender Systems | cs.IR | Most existing approaches in Context-Aware Recommender Systems (CRS) focus on
recommending relevant items to users taking into account contextual
information, such as time, location, or social aspects. However, few of them
have considered the problem of user's content dynamicity. We introduce in this
paper an algorithm that tackles the user's content dynamicity by modeling the
CRS as a contextual bandit algorithm and by including a situation clustering
algorithm to improve the precision of the CRS. Within a deliberately designed
offline simulation framework, we conduct evaluations with real online event log
data. The experimental results and detailed analysis reveal several important
discoveries in context aware recommender system.
|
1304.3846 | Proceedings of the Thirteenth Conference on Uncertainty in Artificial
Intelligence (1997) | cs.AI | This is the Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Providence, RI, August 1-3, 1997
|
1304.3847 | Proceedings of the Twelfth Conference on Uncertainty in Artificial
Intelligence (1996) | cs.AI | This is the Proceedings of the Twelfth Conference on Uncertainty in
Artificial Intelligence, which was held in Portland, OR, August 1-4, 1996
|
1304.3848 | Proceedings of the Eleventh Conference on Uncertainty in Artificial
Intelligence (1995) | cs.AI | This is the Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence, which was held in Montreal, QU, August 18-20, 1995
|
1304.3849 | Proceedings of the Tenth Conference on Uncertainty in Artificial
Intelligence (1994) | cs.AI | This is the Proceedings of the Tenth Conference on Uncertainty in Artificial
Intelligence, which was held in Seattle, WA, July 29-31, 1994
|
1304.3850 | Polar Coding for Fading Channels | cs.IT math.IT | A polar coding scheme for fading channels is proposed in this paper. More
specifically, the focus is Gaussian fading channel with a BPSK modulation
technique, where the equivalent channel could be modeled as a binary symmetric
channel with varying cross-over probabilities. To deal with variable channel
states, a coding scheme of hierarchically utilizing polar codes is proposed. In
particular, by observing the polarization of different binary symmetric
channels over different fading blocks, each channel use corresponding to a
different polarization is modeled as a binary erasure channel such that polar
codes could be adopted to encode over blocks. It is shown that the proposed
coding scheme, without instantaneous channel state information at the
transmitter, achieves the capacity of the corresponding fading binary symmetric
channel, which is constructed from the underlying fading AWGN channel through
the modulation scheme.
|
1304.3851 | Proceedings of the Ninth Conference on Uncertainty in Artificial
Intelligence (1993) | cs.AI | This is the Proceedings of the Ninth Conference on Uncertainty in Artificial
Intelligence, which was held in Washington, DC, July 9-11, 1993
|
1304.3852 | Proceedings of the Eighth Conference on Uncertainty in Artificial
Intelligence (1992) | cs.AI | This is the Proceedings of the Eighth Conference on Uncertainty in Artificial
Intelligence, which was held in Stanford, CA, July 17-19, 1992
|
1304.3853 | Proceedings of the Seventh Conference on Uncertainty in Artificial
Intelligence (1991) | cs.AI | This is the Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence, which was held in Los Angeles, CA, July 13-15, 1991
|
1304.3854 | Proceedings of the Sixth Conference on Uncertainty in Artificial
Intelligence (1990) | cs.AI | This is the Proceedings of the Sixth Conference on Uncertainty in Artificial
Intelligence, which was held in Cambridge, MA, Jul 27 - Jul 29, 1990
|
1304.3855 | Proceedings of the Fifth Conference on Uncertainty in Artificial
Intelligence (1989) | cs.AI | This is the Proceedings of the Fifth Conference on Uncertainty in Artificial
Intelligence, which was held in Windsor, ON, August 18-20, 1989
|
1304.3856 | Proceedings of the Fourth Conference on Uncertainty in Artificial
Intelligence (1988) | cs.AI | This is the Proceedings of the Fourth Conference on Uncertainty in Artificial
Intelligence, which was held in Minneapolis, MN, July 10-12, 1988
|
1304.3857 | Proceedings of the Third Conference on Uncertainty in Artificial
Intelligence (1987) | cs.AI | This is the Proceedings of the Third Conference on Uncertainty in Artificial
Intelligence, which was held in Seattle, WA, July 10-12, 1987
|
1304.3859 | Proceedings of the Second Conference on Uncertainty in Artificial
Intelligence (1986) | cs.AI | This is the Proceedings of the Second Conference on Uncertainty in Artificial
Intelligence, which was held in Philadelphia, PA, August 8-10, 1986
|
1304.3860 | Justificatory and Explanatory Argumentation for Committing Agents | cs.AI | In the interaction between agents we can have an explicative discourse, when
communicating preferences or intentions, and a normative discourse, when
considering normative knowledge. For justifying their actions our agents are
endowed with a Justification and Explanation Logic (JEL), capable to cover both
the justification for their commitments and explanations why they had to act in
that way, due to the current situation in the environment. Social commitments
are used to formalise justificatory and explanatory patterns. The combination
of ex- planation, justification, and commitments
|
1304.3865 | Distributed Cognitive Multiple Access Networks: Power Control,
Scheduling and Multiuser Diversity | cs.IT math.IT | This paper studies optimal distributed power allocation and scheduling
policies (DPASPs) for distributed total power and interference limited (DTPIL)
cognitive multiple access networks in which secondary users (SU) independently
perform power allocation and scheduling tasks using their local knowledge of
secondary transmitter secondary base-station (STSB) and secondary transmitter
primary base-station (STPB) channel gains. In such networks, transmission
powers of SUs are limited by an average total transmission power constraint and
by a constraint on the average interference power that SUs cause to the primary
base-station. We first establish the joint optimality of water-filling power
allocation and threshold-based scheduling policies for DTPIL networks. We then
show that the secondary network throughput under the optimal DPASP scales
according to $\frac{1}{\e{}n_h}\log\logp{N}$, where $n_h$ is a parameter
obtained from the distribution of STSB channel power gains and $N$ is the total
number of SUs. From a practical point of view, our results signify the fact
that distributed cognitive multiple access networks are capable of harvesting
multiuser diversity gains without employing centralized schedulers and feedback
links as well as without disrupting primary's quality-of-service (QoS)
|
1304.3874 | Sparsity-Aware STAP Algorithms Using $L_1$-norm Regularization For Radar
Systems | cs.IT math.IT | This article proposes novel sparsity-aware space-time adaptive processing
(SA-STAP) algorithms with $l_1$-norm regularization for airborne phased-array
radar applications. The proposed SA-STAP algorithms suppose that a number of
samples of the full-rank STAP data cube are not meaningful for processing and
the optimal full-rank STAP filter weight vector is sparse, or nearly sparse.
The core idea of the proposed method is imposing a sparse regularization
($l_1$-norm type) to the minimum variance (MV) STAP cost function. Under some
reasonable assumptions, we firstly propose a $l_1$-based sample matrix
inversion (SMI) to compute the optimal filter weight vector. However, it is
impractical due to its matrix inversion, which requires a high computational
cost when in a large phased-array antenna. Then, we devise lower complexity
algorithms based on conjugate gradient (CG) techniques. A computational
complexity comparison with the existing algorithms and an analysis of the
proposed algorithms are conducted. Simulation results with both simulated and
the Mountain Top data demonstrate that fast
signal-to-interference-plus-noise-ratio (SINR) convergence and good performance
of the proposed algorithms are achieved.
|
1304.3877 | Linear models based on noisy data and the Frisch scheme | cs.SY math.OC math.ST stat.TH | We address the problem of identifying linear relations among variables based
on noisy measurements. This is, of course, a central question in problems
involving "Big Data." Often a key assumption is that measurement errors in each
variable are independent. This precise formulation has its roots in the work of
Charles Spearman in 1904 and of Ragnar Frisch in the 1930's. Various topics
such as errors-in-variables, factor analysis, and instrumental variables, all
refer to alternative formulations of the problem of how to account for the
anticipated way that noise enters in the data. In the present paper we begin by
describing the basic theory and provide alternative modern proofs to some key
results. We then go on to consider certain generalizations of the theory as
well applying certain novel numerical techniques to the problem. A central role
is played by the Frisch-Kalman dictum which aims at a noise contribution that
allows a maximal set of simultaneous linear relations among the noise-free
variables --a rank minimization problem. In the years since Frisch's original
formulation, there have been several insights including trace minimization as a
convenient heuristic to replace rank minimization. We discuss convex
relaxations and certificates guaranteeing global optimality. A complementary
point of view to the Frisch-Kalman dictum is introduced in which models lead to
a min-max quadratic estimation error for the error-free variables. Points of
contact between the two formalisms are discussed and various alternative
regularization schemes are indicated.
|
1304.3879 | Automatic case acquisition from texts for process-oriented case-based
reasoning | cs.AI cs.CL | This paper introduces a method for the automatic acquisition of a rich case
representation from free text for process-oriented case-based reasoning. Case
engineering is among the most complicated and costly tasks in implementing a
case-based reasoning system. This is especially so for process-oriented
case-based reasoning, where more expressive case representations are generally
used and, in our opinion, actually required for satisfactory case adaptation.
In this context, the ability to acquire cases automatically from procedural
texts is a major step forward in order to reason on processes. We therefore
detail a methodology that makes case acquisition from processes described as
free text possible, with special attention given to assembly instruction texts.
This methodology extends the techniques we used to extract actions from cooking
recipes. We argue that techniques taken from natural language processing are
required for this task, and that they give satisfactory results. An evaluation
based on our implemented prototype extracting workflows from recipe texts is
provided.
|
1304.3886 | Minimum Variance Estimation of a Sparse Vector within the Linear
Gaussian Model: An RKHS Approach | cs.IT math.IT | We consider minimum variance estimation within the sparse linear Gaussian
model (SLGM). A sparse vector is to be estimated from a linearly transformed
version embedded in Gaussian noise. Our analysis is based on the theory of
reproducing kernel Hilbert spaces (RKHS). After a characterization of the RKHS
associated with the SLGM, we derive novel lower bounds on the minimum variance
achievable by estimators with a prescribed bias function. This includes the
important case of unbiased estimation. The variance bounds are obtained via an
orthogonal projection of the prescribed mean function onto a subspace of the
RKHS associated with the SLGM. Furthermore, we specialize our bounds to
compressed sensing measurement matrices and express them in terms of the
restricted isometry and coherence parameters. For the special case of the SLGM
given by the sparse signal in noise model (SSNM), we derive closed-form
expressions of the minimum achievable variance (Barankin bound) and the
corresponding locally minimum variance estimator. We also analyze the effects
of exact and approximate sparsity information and show that the minimum
achievable variance for exact sparsity is not a limiting case of that for
approximate sparsity. Finally, we compare our bounds with the variance of three
well-known estimators, namely, the maximum-likelihood estimator, the
hard-thresholding estimator, and compressive reconstruction using the
orthogonal matching pursuit.
|
1304.3892 | An accelerated CLPSO algorithm | cs.NE | The particle swarm approach provides a low complexity solution to the
optimization problem among various existing heuristic algorithms. Recent
advances in the algorithm resulted in improved performance at the cost of
increased computational complexity, which is undesirable. Literature shows that
the particle swarm optimization algorithm based on comprehensive learning
provides the best complexity-performance trade-off. We show how to reduce the
complexity of this algorithm further, with a slight but acceptable performance
loss. This enhancement allows the application of the algorithm in time critical
applications, such as, real-time tracking, equalization etc.
|
1304.3898 | Analyzing user behavior of the micro-blogging website Sinaweibo during
hot social events | cs.SI physics.soc-ph | The spread and resonance of users' opinions on SinaWeibo, the most popular
micro-blogging website in China, are tremendously influential, having
significantly affected the processes of many real-world hot social events. We
select 21 hot events that were widely discussed on SinaWeibo in 2011, and do
some statistical analyses. Our main findings are that (i) male users are more
likely to be involved, (ii) messages that contain pictures and those posted by
verified users are more likely to be reposted, while those with URLs are less
likely, (iii) gender factor, for most events, presents no significant
difference in reposting likelihood.
|
1304.3911 | Least Mean Square/Fourth Algorithm with Application to Sparse Channel
Estimation | cs.IT math.IT | Broadband signal transmission over frequency-selective fading channel often
requires accurate channel state information at receiver. One of the most
attracting adaptive channel estimation methods is least mean square (LMS)
algorithm. However, LMS-based method is often degraded by random scaling of
input training signal. To improve the estimation performance, in this paper we
apply the standard least mean square/fourth (LMS/F) algorithm to adaptive
channel estimation (ACE). Since the broadband channel is often described by
sparse channel model, such sparsity could be exploited as prior information.
First, we propose an adaptive sparse channel estimation (ASCE) method using
zero-attracting LMS/F (ZA-LMS/F) algorithm. To exploit the sparsity
effectively, an improved channel estimation method is also proposed, using
reweighted zero-attracting LMS/F (RZA-LMS/F) algorithm. We explain the reason
why sparse LMS/F algorithms using l_1-norm sparse constraint function can
improve the estimation performance by virtual of geometrical interpretation. In
addition, for different channel sparsity, we propose a Monte Carlo method to
select a regularization parameter for RA-LMS/F and RZA-LMS/F to achieve
approximate optimal estimation performance. Finally, simulation results show
that the proposed ASCE methods achieve better estimation performance than the
conventional one.
|
1304.3915 | Single View Depth Estimation from Examples | cs.CV | We describe a non-parametric, "example-based" method for estimating the depth
of an object, viewed in a single photo. Our method consults a database of
example 3D geometries, searching for those which look similar to the object in
the photo. The known depths of the selected database objects act as shape
priors which constrain the process of estimating the object's depth. We show
how this process can be performed by optimizing a well defined target
likelihood function, via a hard-EM procedure. We address the problem of
representing the (possibly infinite) variability of viewing conditions with a
finite (and often very small) example set, by proposing an on-the-fly example
update scheme. We further demonstrate the importance of non-stationarity in
avoiding misleading examples when estimating structured shapes. We evaluate our
method and present both qualitative as well as quantitative results for
challenging object classes. Finally, we show how this same technique may be
readily applied to a number of related problems. These include the novel task
of estimating the occluded depth of an object's backside and the task of
tailoring custom fitting image-maps for input depths.
|
1304.3931 | Matrix-valued Monge-Kantorovich Optimal Mass Transport | cs.SY math.DS math.FA math.OC | We formulate an optimal transport problem for matrix-valued density
functions. This is pertinent in the spectral analysis of multivariable
time-series. The "mass" represents energy at various frequencies whereas, in
addition to a usual transportation cost across frequencies, a cost of rotation
is also taken into account. We show that it is natural to seek the
transportation plan in the tensor product of the spaces for the two
matrix-valued marginals. In contrast to the classical Monge-Kantorovich
setting, the transportation plan is no longer supported on a thin zero-measure
set.
|
1304.3940 | Unveiling the link between logical fallacies and web persuasion | cs.HC cs.AI | In the last decade Human-Computer Interaction (HCI) has started to focus
attention on forms of persuasive interaction where computer technologies have
the goal of changing users behavior and attitudes according to a predefined
direction. In this work, we hypothesize a strong connection between logical
fallacies (forms of reasoning which are logically invalid but cognitively
effective) and some common persuasion strategies adopted within web
technologies. With the aim of empirically evaluating our hypothesis, we carried
out a pilot study on a sample of 150 e-commerce websites.
|
1304.3944 | Smart Microgrids: Overview and Outlook | cs.ET cs.CY cs.SY | The idea of changing our energy system from a hierarchical design into a set
of nearly independent microgrids becomes feasible with the availability of
small renewable energy generators. The smart microgrid concept comes with
several challenges in research and engineering targeting load balancing,
pricing, consumer integration and home automation. In this paper we first
provide an overview on these challenges and present approaches that target the
problems identified. While there exist promising algorithms for the particular
field, we see a missing integration which specifically targets smart
microgrids. Therefore, we propose an architecture that integrates the presented
approaches and defines interfaces between the identified components such as
generators, storage, smart and \dq{dumb} devices.
|
1304.3949 | Dynamic vehicle redistribution and online price incentives in shared
mobility systems | cs.SY | This paper considers a combination of intelligent repositioning decisions and
dynamic pricing for the improved operation of shared mobility systems. The
approach is applied to London's Barclays Cycle Hire scheme, which the authors
have simulated based on historical data. Using model-based predictive control
principles, dynamically varying rewards are computed and offered to customers
carrying out journeys. The aim is to encourage them to park bicycles at nearby
under-used stations, thereby reducing the expected cost of repositioning them
using dedicated staff. In parallel, the routes that repositioning staff should
take are periodically recomputed using a model-based heuristic. It is shown
that a trade-off between reward payouts to customers and the cost of hiring
repositioning staff could be made, in order to minimize operating costs for a
given desired service level.
|
1304.3962 | Parametric Sensitivity Analysis for Biochemical Reaction Networks based
on Pathwise Information Theory | cs.IT math.IT q-bio.MN | Stochastic modeling and simulation provide powerful predictive methods for
the intrinsic understanding of fundamental mechanisms in complex biochemical
networks. Typically, such mathematical models involve networks of coupled jump
stochastic processes with a large number of parameters that need to be suitably
calibrated against experimental data. In this direction, the parameter
sensitivity analysis of reaction networks is an essential mathematical and
computational tool, yielding information regarding the robustness and the
identifiability of model parameters. However, existing sensitivity analysis
approaches such as variants of the finite difference method can have an
overwhelming computational cost in models with a high-dimensional parameter
space. We develop a sensitivity analysis methodology suitable for complex
stochastic reaction networks with a large number of parameters. The proposed
approach is based on Information Theory methods and relies on the
quantification of information loss due to parameter perturbations between
time-series distributions. For this reason, we need to work on path-space,
i.e., the set consisting of all stochastic trajectories, hence the proposed
approach is referred to as "pathwise". The pathwise sensitivity analysis method
is realized by employing the rigorously-derived Relative Entropy Rate (RER),
which is directly computable from the propensity functions. A key aspect of the
method is that an associated pathwise Fisher Information Matrix (FIM) is
defined, which in turn constitutes a gradient-free approach to quantifying
parameter sensitivities. The structure of the FIM turns out to be
block-diagonal, revealing hidden parameter dependencies and sensitivities in
reaction networks.
|
1304.3972 | Reaching a Consensus in Networks of High-Order Integral Agents under
Switching Directed Topology | cs.SY | Consensus problem of high-order integral multi-agent systems under switching
directed topology is considered in this study. Depending on whether the agent's
full state is available or not, two distributed protocols are proposed to
ensure that states of all agents can be convergent to a same stationary value.
In the proposed protocols, the gain vector associated with the agent's
(estimated) state and the gain vector associated with the relative (estimated)
states between agents are designed in a sophisticated way. By this particular
design, the high-order integral multi-agent system can be transformed into a
first-order integral multi-agent system. And the convergence of the transformed
first-order integral agent's state indicates the convergence of the original
high-order integral agent's state if and only if all roots of the polynomial,
whose coefficients are the entries of the gain vector associated with the
relative (estimated) states between agents, are in the open left-half complex
plane. Therefore, many analysis techniques in the first-order integral
multi-agent system can be directly borrowed to solve the problems in the
high-order integral multi-agent system. Due to this property, it is proved that
to reach a consensus, the switching directed topology of multi-agent system is
only required to be "uniformly jointly quasi-strongly connected", which seems
the mildest connectivity condition in the literature. In addition, the
consensus problem of discrete-time high-order integral multi-agent systems is
studied. The corresponding consensus protocol and performance analysis are
presented. Finally, three simulation examples are provided to show the
effectiveness of the proposed approach.
|
1304.3992 | GPU Acclerated Automated Feature Extraction from Satellite Images | cs.DC cs.CV | The availability of large volumes of remote sensing data insists on higher
degree of automation in feature extraction, making it a need of the hour.The
huge quantum of data that needs to be processed entails accelerated processing
to be enabled.GPUs, which were originally designed to provide efficient
visualization, are being massively employed for computation intensive parallel
processing environments. Image processing in general and hence automated
feature extraction, is highly computation intensive, where performance
improvements have a direct impact on societal needs. In this context, an
algorithm has been formulated for automated feature extraction from a
panchromatic or multispectral image based on image processing techniques. Two
Laplacian of Guassian (LoG) masks were applied on the image individually
followed by detection of zero crossing points and extracting the pixels based
on their standard deviation with the surrounding pixels. The two extracted
images with different LoG masks were combined together which resulted in an
image with the extracted features and edges. Finally the user is at liberty to
apply the image smoothing step depending on the noise content in the extracted
image. The image is passed through a hybrid median filter to remove the salt
and pepper noise from the image. This paper discusses the aforesaid algorithm
for automated feature extraction, necessity of deployment of GPUs for the same;
system-level challenges and quantifies the benefits of integrating GPUs in such
environment. The results demonstrate that substantial enhancement in
performance margin can be achieved with the best utilization of GPU resources
and an efficient parallelization strategy. Performance results in comparison
with the conventional computing scenario have provided a speedup of 20x, on
realization of this parallelizing strategy.
|
1304.3994 | Worst-case User Analysis in Poisson Voronoi Cells | cs.IT math.IT | In this letter, we focus on the performance of a worst-case mobile user (MU)
in the downlink cellular network. We derive the coverage probability and the
spectral efficiency of the worst-case MU using stochastic geometry. Through
analytical and numerical results, we draw out interesting insights that the
coverage probability and the spectral efficiency of the worst-case MU decrease
down to 23% and 19% of those of a typical MU, respectively. By applying a
coordinated scheduling (CS) scheme, we also investigate how much the
performance of the worst-case MU is improved.
|
1304.3996 | Cyber-Physical Security: A Game Theory Model of Humans Interacting over
Control Systems | cs.GT cs.CR cs.CY cs.SY | Recent years have seen increased interest in the design and deployment of
smart grid devices and control algorithms. Each of these smart communicating
devices represents a potential access point for an intruder spurring research
into intruder prevention and detection. However, no security measures are
complete, and intruding attackers will compromise smart grid devices leading to
the attacker and the system operator interacting via the grid and its control
systems. The outcome of these machine-mediated human-human interactions will
depend on the design of the physical and control systems mediating the
interactions. If these outcomes can be predicted via simulation, they can be
used as a tool for designing attack-resilient grids and control systems.
However, accurate predictions require good models of not just the physical and
control systems, but also of the human decision making. In this manuscript, we
present an approach to develop such tools, i.e. models of the decisions of the
cyber-physical intruder who is attacking the systems and the system operator
who is defending it, and demonstrate its usefulness for design.
|
1304.3997 | A Survey of Quantum Lyapunov Control Methods | math-ph cs.SY math.MP | The condition of a quantum Lyapunov-based control which can be well used in a
closed quantum system is that the method can make the system convergent but not
just stable. In the convergence study of the quantum Lyapunov control, two
situations are classified: non-degenerate cases and degenerate cases. In this
paper, for these two situations, respectively, the target state is divided into
four categories: eigenstate, the mixed state which commutes with the internal
Hamiltonian, the superposition state, and the mixed state which does not
commute with the internal Hamiltonian state. For these four categories, the
quantum Lyapunov control methods for the closed quantum systems are summarized
and analyzed. Especially, the convergence of the control system to the
different target states is reviewed, and how to make the convergence conditions
be satisfied is summarized and analyzed.
|
1304.3998 | Computationally Efficient Robust Beamforming for SINR Balancing in
Multicell Downlink | cs.IT math.IT | We address the problem of downlink beamformer design for
signal-to-interference-plus-noise ratio (SINR) balancing in a multiuser
multicell environment with imperfectly estimated channels at base stations
(BSs). We first present a semidefinite program (SDP) based approximate solution
to the problem. Then, as our main contribution, by exploiting some properties
of the robust counterpart of the optimization problem, we arrive at a
second-order cone program (SOCP) based approximation of the balancing problem.
The advantages of the proposed SOCP-based design are twofold. First, it greatly
reduces the computational complexity compared to the SDP-based method. Second,
it applies to a wide range of uncertainty models. As a case study, we
investigate the performance of proposed formulations when the base station is
equipped with a massive antenna array. Numerical experiments are carried out to
confirm that the proposed robust designs achieve favorable results in scenarios
of practical interest.
|
1304.3999 | Off-policy Learning with Eligibility Traces: A Survey | cs.AI cs.RO | In the framework of Markov Decision Processes, off-policy learning, that is
the problem of learning a linear approximation of the value function of some
fixed policy from one trajectory possibly generated by some other policy. We
briefly review on-policy learning algorithms of the literature (gradient-based
and least-squares-based), adopting a unified algorithmic view. Then, we
highlight a systematic approach for adapting them to off-policy learning with
eligibility traces. This leads to some known algorithms - off-policy
LSTD(\lambda), LSPE(\lambda), TD(\lambda), TDC/GQ(\lambda) - and suggests new
extensions - off-policy FPKF(\lambda), BRM(\lambda), gBRM(\lambda),
GTD2(\lambda). We describe a comprehensive algorithmic derivation of all
algorithms in a recursive and memory-efficent form, discuss their known
convergence properties and illustrate their relative empirical behavior on
Garnet problems. Our experiments suggest that the most standard algorithms on
and off-policy LSTD(\lambda)/LSPE(\lambda) - and TD(\lambda) if the feature
space dimension is too large for a least-squares approach - perform the best.
|
1304.4003 | Iterative Detection with Soft Decision in Spectrally Efficient FDM
Systems | cs.IT math.IT | In Spectrally Efficient Frequency Division Multiplexing systems the input
data stream is divided into several adjacent subchannels where the distance of
the subchannels is less than that of Orthogonal Frequency Division
Multiplexing(OFDM)systems. Since the subcarriers are not orthogonal in SEFDM
systems, they lead to interference at the receiver side. In this paper, an
iterative method is proposed for interference compensation for SEFDM systems.
In this method a soft mapping technique is used after each iteration block to
improve its performance. The performance of the proposed method is comparable
to that of Sphere Detection(SD)which is a nearly optimal detection method.
|
1304.4028 | A Fuzzy Logic Based Certain Trust Model for E-Commerce | cs.AI cs.CR | Trustworthiness especially for service oriented system is very important
topic now a day in IT field of the whole world. There are many successful
E-commerce organizations presently run in the whole world, but E-commerce has
not reached its full potential. The main reason behind this is lack of Trust of
people in e-commerce. Again, proper models are still absent for calculating
trust of different e-commerce organizations. Most of the present trust models
are subjective and have failed to account vagueness and ambiguity of different
domain. In this paper we have proposed a new fuzzy logic based Certain Trust
model which considers these ambiguity and vagueness of different domain. Fuzzy
Based Certain Trust Model depends on some certain values given by experts and
developers. can be applied in a system like cloud computing, internet, website,
e-commerce, etc. to ensure trustworthiness of these platforms. In this paper we
show, although fuzzy works with uncertainties, proposed model works with some
certain values. Some experimental results and validation of the model with
linguistics terms are shown at the last part of the paper.
|
1304.4041 | Multispectral Spatial Characterization: Application to Mitosis Detection
in Breast Cancer Histopathology | cs.CV | Accurate detection of mitosis plays a critical role in breast cancer
histopathology. Manual detection and counting of mitosis is tedious and subject
to considerable inter- and intra-reader variations. Multispectral imaging is a
recent medical imaging technology, proven successful in increasing the
segmentation accuracy in other fields. This study aims at improving the
accuracy of mitosis detection by developing a specific solution using
multispectral and multifocal imaging of breast cancer histopathological data.
We propose to enable clinical routine-compliant quality of mitosis
discrimination from other objects. The proposed framework includes
comprehensive analysis of spectral bands and z-stack focus planes, detection of
expected mitotic regions (candidates) in selected focus planes and spectral
bands, computation of multispectral spatial features for each candidate,
selection of multispectral spatial features and a study of different
state-of-the-art classification methods for candidates classification as
mitotic or non mitotic figures. This framework has been evaluated on MITOS
multispectral medical dataset and achieved 60% detection rate and 57%
F-Measure. Our results indicate that multispectral spatial features have more
information for mitosis classification in comparison with white spectral band
features, being therefore a very promising exploration area to improve the
quality of the diagnosis assistance in histopathology.
|
1304.4051 | Coordinating metaheuristic agents with swarm intelligence | cs.MA cs.NE | Coordination of multi agent systems remains as a problem since there is no
prominent method to completely solve this problem. Metaheuristic agents are
specific implementations of multi-agent systems, which imposes working together
to solve optimisation problems with metaheuristic algorithms. The idea borrowed
from swarm intelligence seems working much better than those implementations
suggested before. This paper reports the performance of swarms of simulated
annealing agents collaborating with particle swarm optimization algorithm. The
proposed approach is implemented for multidimensional knapsack problem and has
resulted much better than some other works published before.
|
1304.4055 | Multiobjective optimization in Gene Expression Programming for Dew Point | cs.NE | The processes occurring in climatic change evolution and their variations
play a major role in environmental engineering. Different techniques are used
to model the relationship between temperatures, dew point and relative
humidity. Gene expression programming is capable of modelling complex realities
with great accuracy, allowing, at the same time, the extraction of knowledge
from the evolved models compared to other learning algorithms. This research
aims to use Gene Expression Programming for modelling of dew point. Generally,
accuracy of the model is the only objective used by selection mechanism of GEP.
This will evolve large size models with low training error. To avoid this
situation, use of multiple objectives, like accuracy and size of the model are
preferred by Genetic Programming practitioners. Multi-objective problem finds a
set of solutions satisfying the objectives given by decision maker.
Multiobjective based GEP will be used to evolve simple models. Various
algorithms widely used for multi objective optimization like NSGA II and SPEA 2
are tested for different test cases. The results obtained thereafter gives idea
that SPEA 2 is better algorithm compared to NSGA II based on the features like
execution time, number of solutions obtained and convergence rate. Thus
compared to models obtained by GEP, multi-objective algorithms fetch better
solutions considering the dual objectives of fitness and size of the equation.
These simple models can be used to predict dew point.
|
1304.4058 | Link Prediction with Social Vector Clocks | cs.SI physics.soc-ph stat.ML | State-of-the-art link prediction utilizes combinations of complex features
derived from network panel data. We here show that computationally less
expensive features can achieve the same performance in the common scenario in
which the data is available as a sequence of interactions. Our features are
based on social vector clocks, an adaptation of the vector-clock concept
introduced in distributed computing to social interaction networks. In fact,
our experiments suggest that by taking into account the order and spacing of
interactions, social vector clocks exploit different aspects of link formation
so that their combination with previous approaches yields the most accurate
predictor to date.
|
1304.4071 | Near-optimal Binary Compressed Sensing Matrix | cs.IT math.IT | Compressed sensing is a promising technique that attempts to faithfully
recover sparse signal with as few linear and nonadaptive measurements as
possible. Its performance is largely determined by the characteristic of
sensing matrix. Recently several zero-one binary sensing matrices have been
deterministically constructed for their relative low complexity and competitive
performance. Considering the complexity of implementation, it is of great
practical interest if one could further improve the sparsity of binary matrix
without performance loss. Based on the study of restricted isometry property
(RIP), this paper proposes the near-optimal binary sensing matrix, which
guarantees nearly the best performance with as sparse distribution as possible.
The proposed near-optimal binary matrix can be deterministically constructed
with progressive edge-growth (PEG) algorithm. Its performance is confirmed with
extensive simulations.
|
1304.4077 | A new Bayesian ensemble of trees classifier for identifying multi-class
labels in satellite images | stat.ME cs.CV cs.LG | Classification of satellite images is a key component of many remote sensing
applications. One of the most important products of a raw satellite image is
the classified map which labels the image pixels into meaningful classes.
Though several parametric and non-parametric classifiers have been developed
thus far, accurate labeling of the pixels still remains a challenge. In this
paper, we propose a new reliable multiclass-classifier for identifying class
labels of a satellite image in remote sensing applications. The proposed
multiclass-classifier is a generalization of a binary classifier based on the
flexible ensemble of regression trees model called Bayesian Additive Regression
Trees (BART). We used three small areas from the LANDSAT 5 TM image, acquired
on August 15, 2009 (path/row: 08/29, L1T product, UTM map projection) over
Kings County, Nova Scotia, Canada to classify the land-use. Several prediction
accuracy and uncertainty measures have been used to compare the reliability of
the proposed classifier with the state-of-the-art classifiers in remote
sensing.
|
1304.4086 | Hubiness, length, crossings and their relationships in dependency trees | cs.CL cs.DM cs.SI physics.soc-ph | Here tree dependency structures are studied from three different
perspectives: their degree variance (hubiness), the mean dependency length and
the number of dependency crossings. Bounds that reveal pairwise dependencies
among these three metrics are derived. Hubiness (the variance of degrees) plays
a central role: the mean dependency length is bounded below by hubiness while
the number of crossings is bounded above by hubiness. Our findings suggest that
the online memory cost of a sentence might be determined not just by the
ordering of words but also by the hubiness of the underlying structure. The 2nd
moment of degree plays a crucial role that is reminiscent of its role in large
complex networks.
|
1304.4112 | Shadow Estimation Method for "The Episolar Constraint: Monocular Shape
from Shadow Correspondence" | cs.CV | Recovering shadows is an important step for many vision algorithms. Current
approaches that work with time-lapse sequences are limited to simple
thresholding heuristics. We show these approaches only work with very careful
tuning of parameters, and do not work well for long-term time-lapse sequences
taken over the span of many months. We introduce a parameter-free expectation
maximization approach which simultaneously estimates shadows, albedo, surface
normals, and skylight. This approach is more accurate than previous methods,
works over both very short and very long sequences, and is robust to the
effects of nonlinear camera response. Finally, we demonstrate that the shadow
masks derived through this algorithm substantially improve the performance of
sun-based photometric stereo compared to earlier shadow mask estimation.
|
1304.4119 | Assessing Visualization Techniques for the Search Process in Digital
Libraries | cs.DL cs.IR | In this paper we present an overview of several visualization techniques to
support the search process in Digital Libraries (DLs). The search process
typically can be separated into three major phases: query formulation and
refinement, browsing through result lists and viewing and interacting with
documents and their properties. We discuss a selection of popular visualization
techniques that have been developed for the different phases to support the
user during the search process. Along prototypes based on the different
techniques we show how the approaches have been implemented. Although various
visualizations have been developed in prototypical systems very few of these
approaches have been adapted into today's DLs. We conclude that this is most
likely due to the fact that most systems are not evaluated intensely in
real-life scenarios with real information seekers and that results of the
interesting visualization techniques are often not comparable. We can say that
many of the assessed systems did not properly address the information need of
cur-rent users.
|
1304.4137 | Group Evolution Discovery in Social Networks | cs.SI physics.soc-ph | Group extraction and their evolution are among the topics which arouse the
greatest interest in the domain of social network analysis. However, while the
grouping methods in social networks are developed very dynamically, the methods
of group evolution discovery and analysis are still uncharted territory on the
social network analysis map. Therefore the new method for the group evolution
discovery called GED is proposed in this paper. Additionally, the results of
the first experiments on the email based social network together with
comparison with two other methods of group evolution discovery are presented.
|
1304.4156 | Non-parametric resampling of random walks for spectral network
clustering | physics.soc-ph cs.SI stat.AP | Parametric resampling schemes have been recently introduced in complex
network analysis with the aim of assessing the statistical significance of
graph clustering and the robustness of community partitions. We propose here a
method to replicate structural features of complex networks based on the
non-parametric resampling of the transition matrix associated with an unbiased
random walk on the graph. We test this bootstrapping technique on synthetic and
real-world modular networks and we show that the ensemble of replicates
obtained through resampling can be used to improve the performance of standard
spectral algorithms for community detection.
|
1304.4161 | Compressed Sensing Matrices: Binary vs. Ternary | cs.IT math.IT | Binary matrix and ternary matrix are two types of popular sensing matrices in
compressed sensing for their competitive performance and low computation.
However, to the best of our knowledge, there seems no literature aiming at
evaluating their performances if they hold the same sparisty, though it is of
practical importance. Based on both RIP analysis and numerical simulations,
this paper, for the first time, discloses that {0, 1} binary matrix holds
better overall performance over {0, +1, -1} ternary matrix, if they share the
same distribution on nonzero positions.
|
1304.4162 | Greedy Approach for Low-Rank Matrix Recovery | math.NA cs.IT cs.NA math.IT | We describe the Simple Greedy Matrix Completion Algorithm providing an
efficient method for restoration of low-rank matrices from incomplete corrupted
entries.
We provide numerical evidences that, even in the simplest implementation, the
greedy approach may increase the recovery capability of existing algorithms
significantly.
|
1304.4181 | Rate-Distortion-Based Physical Layer Secrecy with Applications to
Multimode Fiber | cs.CR cs.IT math.IT | Optical networks are vulnerable to physical layer attacks; wiretappers can
improperly receive messages intended for legitimate recipients. Our work
considers an aspect of this security problem within the domain of multimode
fiber (MMF) transmission. MMF transmission can be modeled via a broadcast
channel in which both the legitimate receiver's and wiretapper's channels are
multiple-input-multiple-output complex Gaussian channels. Source-channel coding
analyses based on the use of distortion as the metric for secrecy are
developed. Alice has a source sequence to be encoded and transmitted over this
broadcast channel so that the legitimate user Bob can reliably decode while
forcing the distortion of wiretapper, or eavesdropper, Eve's estimate as high
as possible. Tradeoffs between transmission rate and distortion under two
extreme scenarios are examined: the best case where Eve has only her channel
output and the worst case where she also knows the past realization of the
source. It is shown that under the best case, an operationally separate
source-channel coding scheme guarantees maximum distortion at the same rate as
needed for reliable transmission. Theoretical bounds are given, and
particularized for MMF. Numerical results showing the rate distortion tradeoff
are presented and compared with corresponding results for the perfect secrecy
case.
|
1304.4182 | Proceedings of the First Conference on Uncertainty in Artificial
Intelligence (1985) | cs.AI | This is the Proceedings of the First Conference on Uncertainty in Artificial
Intelligence, which was held in Los Angeles, CA, July 10-12, 1985
|
1304.4184 | Bidirectional Growth based Mining and Cyclic Behaviour Analysis of Web
Sequential Patterns | cs.DB | Web sequential patterns are important for analyzing and understanding users
behaviour to improve the quality of service offered by the World Wide Web. Web
Prefetching is one such technique that utilizes prefetching rules derived
through Cyclic Model Analysis of the mined Web sequential patterns. The more
accurate the prediction and more satisfying the results of prefetching if we
use a highly efficient and scalable mining technique such as the Bidirectional
Growth based Directed Acyclic Graph. In this paper, we propose a novel
algorithm called Bidirectional Growth based mining Cyclic behavior Analysis of
web sequential Patterns (BGCAP) that effectively combines these strategies to
generate prefetching rules in the form of 2-sequence patterns with Periodicity
and threshold of Cyclic Behaviour that can be utilized to effectively prefetch
Web pages, thus reducing the users perceived latency. As BGCAP is based on
Bidirectional pattern growth, it performs only (log n+1) levels of recursion
for mining n Web sequential patterns. Our experimental results show that
prefetching rules generated using BGCAP is 5-10 percent faster for different
data sizes and 10-15% faster for a fixed data size than TD-Mine. In addition,
BGCAP generates about 5-15 percent more prefetching rules than TD-Mine.
|
1304.4187 | The Webdamlog System Managing Distributed Knowledge on the Web | cs.DB | We study the use of WebdamLog, a declarative high-level lan- guage in the
style of datalog, to support the distribution of both data and knowledge (i.e.,
programs) over a network of au- tonomous peers. The main novelty of WebdamLog
compared to datalog is its use of delegation, that is, the ability for a peer
to communicate a program to another peer. We present results of a user study,
showing that users can write WebdamLog programs quickly and correctly, and with
a minimal amount of training. We present an implementation of the WebdamLog
inference engine relying on the Bud dat- alog engine. We describe an
experimental evaluation of the WebdamLog engine, demonstrating that WebdamLog
can be im- plemented efficiently. We conclude with a discussion of ongoing and
future work.
|
1304.4191 | Correcting Errors in Linear Measurements and Compressed Sensing of
Multiple Sources | math.NA cs.IT math.IT | We present an algorithm for finding sparse solutions of the system of linear
equations $\Phi\mathbf{x}=\mathbf{y}$ with rectangular matrices $\Phi$ of size
$n\times N$, where $n<N$, when measurement vector $\mathbf{y}$ is corrupted by
a sparse vector of errors $\mathbf e$. We call our algorithm the
$\ell^1$-greedy-generous (LGGA) since it combines both greedy and generous
strategies in decoding. Main advantage of LGGA over traditional error
correcting methods consists in its ability to work efficiently directly on
linear data measurements. It uses the natural residual redundancy of the
measurements and does not require any additional redundant channel encoding. We
show how to use this algorithm for encoding-decoding multichannel sources. This
algorithm has a significant advantage over existing straightforward decoders
when the encoded sources have different density/sparsity of the information
content. That nice property can be used for very efficient blockwise encoding
of the sets of data with a non-uniform distribution of the information. The
images are the most typical example of such sources. The important feature of
LGGA is its separation from the encoder. The decoder does not need any
additional side information from the encoder except for linear measurements and
the knowledge that those measurements created as a linear combination of
different sources.
|
1304.4199 | Green Power Control in Cognitive Wireless Networks | cs.IT cs.GT math.IT | A decentralized network of cognitive and non-cognitive transmitters where
each transmitter aims at maximizing his energy-efficiency is considered. The
cognitive transmitters are assumed to be able to sense the transmit power of
their non-cognitive counterparts and the former have a cost for sensing. The
Stackelberg equilibrium analysis of this $2-$level hierarchical game is
conducted, which allows us to better understand the effects of cognition on
energy-efficiency. In particular, it is proven that the network
energy-efficiency is maximized when only a given fraction of terminals are
cognitive. Then, we study a sensing game where all the transmitters are assumed
to take the decision whether to sense (namely to be cognitive) or not. This
game is shown to be a weighted potential game and its set of equilibria is
studied. Playing the sensing game in a first phase (e.g., of a time-slot) and
then playing the power control game is shown to be more efficient individually
for all transmitters than playing a game where a transmitter would jointly
optimize whether to sense and his power level, showing the existence of a kind
of Braess paradox. The derived results are illustrated by numerical results and
provide some insights on how to deploy cognitive radios in heterogeneous
networks in terms of sensing capabilities. Keywords: Power Control, Stackelberg
Equilibrium, Energy-Efficiency.
|
1304.4280 | Navigability on Networks: A Graph Theoretic Perspective | cs.DS cs.SI | Human navigation has been of interest to psychologists and cognitive
scientists since the past few decades. It was in the recent past that a study
of human navigational strategies was initiated with a network analytic
approach, instigated mainly by Milgrams small world experiment. We brief the
work in this direction and provide answers to the algorithmic questions raised
by the previous study. It is noted that humans have a tendency to navigate
using centers of the network - such paths are called the
center-strategic-paths. We show that the problem of finding a
center-strategic-path is an easy one. We provide a polynomial time algorithm to
find a center-strategic-path between a given pair of nodes. We apply our
finding in empirically checking the navigability on synthetic networks and
analyze few special types of graphs.
|
1304.4303 | Learning and Verifying Quantified Boolean Queries by Example | cs.DB | To help a user specify and verify quantified queries --- a class of database
queries known to be very challenging for all but the most expert users --- one
can question the user on whether certain data objects are answers or
non-answers to her intended query. In this paper, we analyze the number of
questions needed to learn or verify qhorn queries, a special class of Boolean
quantified queries whose underlying form is conjunctions of quantified Horn
expressions. We provide optimal polynomial-question and polynomial-time
learning and verification algorithms for two subclasses of the class qhorn with
upper constant limits on a query's causal density.
|
1304.4321 | Polar Codes: Speed of polarization and polynomial gap to capacity | cs.IT cs.DS math.IT math.PR | We prove that, for all binary-input symmetric memoryless channels, polar
codes enable reliable communication at rates within $\epsilon > 0$ of the
Shannon capacity with a block length, construction complexity, and decoding
complexity all bounded by a {\em polynomial} in $1/\epsilon$. Polar coding
gives the {\em first known explicit construction} with rigorous proofs of all
these properties; previous constructions were not known to achieve capacity
with less than $\exp(1/\epsilon)$ decoding complexity except for erasure
channels.
We establish the capacity-achieving property of polar codes via a direct
analysis of the underlying martingale of conditional entropies, without relying
on the martingale convergence theorem. This step gives rough polarization
(noise levels $\approx \epsilon$ for the "good" channels), which can then be
adequately amplified by tracking the decay of the channel Bhattacharyya
parameters. Our effective bounds imply that polar codes can have block length
(and encoding/decoding complexity) bounded by a polynomial in $1/\epsilon$. The
generator matrix of such polar codes can be constructed in polynomial time by
algorithmically computing an adequate approximation of the polarization
process.
|
1304.4324 | Popularity Prediction in Microblogging Network: A Case Study on Sina
Weibo | cs.SI physics.soc-ph | Predicting the popularity of content is important for both the host and users
of social media sites. The challenge of this problem comes from the inequality
of the popularity of con- tent. Existing methods for popularity prediction are
mainly based on the quality of content, the interface of social media site to
highlight contents, and the collective behavior of user- s. However, little
attention is paid to the structural charac- teristics of the networks spanned
by early adopters, i.e., the users who view or forward the content in the early
stage of content dissemination. In this paper, taking the Sina Weibo as a case,
we empirically study whether structural character- istics can provide clues for
the popularity of short messages. We find that the popularity of content is
well reflected by the structural diversity of the early adopters. Experimental
results demonstrate that the prediction accuracy is signif- icantly improved by
incorporating the factor of structural diversity into existing methods.
|
1304.4329 | Privacy Preserving Data Mining by Using Implicit Function Theorem | cs.CR cs.DB | Data mining has made broad significant multidisciplinary field used in vast
application domains and extracts knowledge by identifying structural
relationship among the objects in large data bases. Privacy preserving data
mining is a new area of data mining research for providing privacy of sensitive
knowledge of information extracted from data mining system to be shared by the
intended persons not to everyone to access. In this paper, we proposed a new
approach of privacy preserving data mining by using implicit function theorem
for secure transformation of sensitive data obtained from data mining system.
we proposed two way enhanced security approach. First transforming original
values of sensitive data into different partial derivatives of functional
values for perturbation of data. secondly generating symmetric key value by
Eigen values of jacobian matrix for secure computation. we given an example of
academic sensitive data converting into vector valued functions to explain
about our proposed concept and presented implementation based results of new
proposed of approach.
|
1304.4344 | Sparse Coding and Dictionary Learning for Symmetric Positive Definite
Matrices: A Kernel Approach | cs.LG cs.CV stat.ML | Recent advances suggest that a wide range of computer vision problems can be
addressed more appropriately by considering non-Euclidean geometry. This paper
tackles the problem of sparse coding and dictionary learning in the space of
symmetric positive definite matrices, which form a Riemannian manifold. With
the aid of the recently introduced Stein kernel (related to a symmetric version
of Bregman matrix divergence), we propose to perform sparse coding by embedding
Riemannian manifolds into reproducing kernel Hilbert spaces. This leads to a
convex and kernel version of the Lasso problem, which can be solved
efficiently. We furthermore propose an algorithm for learning a Riemannian
dictionary (used for sparse coding), closely tied to the Stein kernel.
Experiments on several classification tasks (face recognition, texture
classification, person re-identification) show that the proposed sparse coding
approach achieves notable improvements in discrimination accuracy, in
comparison to state-of-the-art methods such as tensor sparse coding, Riemannian
locality preserving projection, and symmetry-driven accumulation of local
features.
|
1304.4371 | Efficient Computation of Mean Truncated Hitting Times on Very Large
Graphs | cs.DS cs.AI | Previous work has shown the effectiveness of random walk hitting times as a
measure of dissimilarity in a variety of graph-based learning problems such as
collaborative filtering, query suggestion or finding paraphrases. However,
application of hitting times has been limited to small datasets because of
computational restrictions. This paper develops a new approximation algorithm
with which hitting times can be computed on very large, disk-resident graphs,
making their application possible to problems which were previously out of
reach. This will potentially benefit a range of large-scale problems.
|
1304.4379 | RockIt: Exploiting Parallelism and Symmetry for MAP Inference in
Statistical Relational Models | cs.AI | RockIt is a maximum a-posteriori (MAP) query engine for statistical
relational models. MAP inference in graphical models is an optimization problem
which can be compiled to integer linear programs (ILPs). We describe several
advances in translating MAP queries to ILP instances and present the novel
meta-algorithm cutting plane aggregation (CPA). CPA exploits local
context-specific symmetries and bundles up sets of linear constraints. The
resulting counting constraints lead to more compact ILPs and make the symmetry
of the ground model more explicit to state-of-the-art ILP solvers. Moreover,
RockIt parallelizes most parts of the MAP inference pipeline taking advantage
of ubiquitous shared-memory multi-core architectures.
We report on extensive experiments with Markov logic network (MLN) benchmarks
showing that RockIt outperforms the state-of-the-art systems Alchemy, Markov
TheBeast, and Tuffy both in terms of efficiency and quality of results.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.