| id | title | categories | abstract |
|---|---|---|---|
1112.2251
|
Recommendation systems: a joint analysis of technical aspects with
marketing implications
|
cs.IR stat.AP
|
In 2010, Web users ordered, on Amazon alone, 73 items per second and massively
contributed reviews about their consuming experience. As the Web matures and
becomes social and participatory, collaborative filters are the basic
complement in searching online information about people, events and products.
In Web 2.0, what connected consumers create is not simply content (e.g.
reviews) but context. This new contextual framework of consumption emerges
through the aggregation and collaborative filtering of personal preferences
about goods on the Web at massive scale. More importantly, it facilitates
connected consumers in searching and navigating the complex Web more
effectively and amplifies incentives for quality. The objective of the present
article is to jointly review the basic stylized facts of relevant research on
recommendation systems in computer science and marketing studies in order to
share some common insights. After providing a comprehensive definition of
goods and users in the Web, we describe a classification of recommendation
systems based on two families of criteria: how recommendations are formed and
input data availability. The classification is presented under a common
minimal matrix notation and is used as a bridge to related issues in the
business and marketing literature. We focus our analysis on the fields of
one-to-one marketing, network-based marketing, Web merchandising and
atmospherics, and their implications for the processes of personalization and
adaptation in the Web. Market basket analysis is investigated in the context
of recommendation systems. The discussion of further research refers to the
business implications and technological challenges of recommendation systems.
|
1112.2254
|
SocialCloud: Using Social Networks for Building Distributed Computing
Services
|
cs.DC cs.SI
|
In this paper we investigate a new computing paradigm, called SocialCloud, in
which computing nodes are governed by social ties derived from a bootstrapping
trust-possessing social graph. We investigate how this paradigm differs from
existing computing paradigms, such as grid computing and the conventional cloud
computing paradigms. We show that incentives to adopt this paradigm are
intuitive and natural, and security and trust guarantees provided by it are
solid. We propose metrics for measuring the utility and advantage of this
computing paradigm, and using real-world social graphs and structures of social
traces; we investigate the potential of this paradigm for ordinary users. We
study several design options and trade-offs, such as scheduling algorithms,
centralization, and straggler handling, and show how they affect the utility of
the paradigm. Interestingly, we conclude that whereas graphs known in the
literature for high trust properties serve distributed trusted computing
algorithms, such as Sybil defenses, poorly because of their weak algorithmic
properties, such graphs are good candidates for our paradigm thanks to their
self-load-balancing features.
|
1112.2262
|
Perfectly secure encryption of individual sequences
|
cs.IT cs.CR math.IT
|
In analogy to the well-known notion of finite-state compressibility of
individual sequences, due to Lempel and Ziv, we define a similar notion of
"finite-state encryptability" of an individual plaintext sequence, as the
minimum asymptotic key rate that must be consumed by finite-state encrypters so
as to guarantee perfect secrecy in a well-defined sense. Our main basic result
is that the finite-state encryptability is equal to the finite-state
compressibility for every individual sequence. This parallels
Shannon's classical probabilistic counterpart result, asserting that the
minimum required key rate is equal to the entropy rate of the source. However,
the redundancy, defined as the gap between the upper bound (direct part) and
the lower bound (converse part) in the encryption problem, turns out to decay
at a different rate (in fact, much slower) than the analogous redundancy
associated with the compression problem. We also extend our main theorem in
several directions, allowing: (i) availability of side information (SI) at the
encrypter/decrypter/eavesdropper, (ii) lossy reconstruction at the decrypter,
and (iii) the combination of both lossy reconstruction and SI, in the spirit of
the Wyner--Ziv problem.
|
1112.2265
|
A Novel Approach for Password Authentication Using Bidirectional
Associative Memory
|
cs.CR cs.NE
|
Password authentication is a very important system security procedure for
gaining access to user resources. In traditional password authentication
methods, a server has to check the authenticity of the users. In our proposed
method, users can freely select their passwords from a predefined character
set. They can also use a graphical image as a password. Whether the password
is a character string or an image, it is converted into binary form and the
binary values are normalized. Associative memories have recently been used for
password authentication in order to overcome the drawbacks of traditional
password authentication methods. In this paper we propose a method using the
Bidirectional Associative Memory (BAM) algorithm for both alphanumeric (text)
and graphical passwords, which enhances the security provided to the user. The
test results show that converting user passwords into probabilistic values and
giving them as input to the BAM improves the security of the system.
|
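The BAM scheme sketched in the abstract above can be illustrated with a minimal Kosko-style bidirectional associative memory in NumPy. This is a hedged sketch, not the paper's implementation: the 8-bit password codes and 4-bit user identifiers below are made-up patterns, and recall is shown as a single forward pass.

```python
import numpy as np

def to_bipolar(bits):
    """Map a 0/1 bit sequence to a -1/+1 vector."""
    return np.where(np.asarray(bits) == 1, 1, -1)

def bam_train(pairs):
    """Kosko-style BAM weights: sum of outer products of bipolar pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x):
    """One forward pass X -> Y; a full BAM iterates x = sign(W y),
    y = sign(x W) until the pair stabilizes."""
    return np.sign(x @ W)

# hypothetical 8-bit password codes paired with 4-bit user identifiers
pw1, id1 = to_bipolar([1, 0, 1, 1, 0, 0, 1, 0]), to_bipolar([1, 0, 0, 1])
pw2, id2 = to_bipolar([1, 1, 0, 0, 1, 0, 1, 1]), to_bipolar([0, 1, 1, 0])
W = bam_train([(pw1, id1), (pw2, id2)])
```

Presenting either stored password pattern to `bam_recall` reproduces its associated identifier, which is the associative-recall property the abstract relies on.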
1112.2271
|
Anglers' fishing problem
|
math.PR cs.GT cs.SY math.OC math.ST stat.TH
|
The considered model will be formulated as related to "the fishing problem"
even if the other applications of it are much more obvious. The angler goes
fishing. He uses various techniques and he has at most two fishing rods. He
buys a fishing ticket for a fixed time. Fish are caught using different
methods according to renewal processes. The fish values and the inter-arrival
times are given by sequences of independent, identically distributed (i.i.d.)
random variables with known distribution functions. Together they form a
marked renewal-reward process. The angler's measure of satisfaction
is given by the difference between the utility function, depending on the value
of the fishes caught, and the cost function connected with the time of fishing.
In this way, the angler's relative opinion about the methods of fishing is
modelled. The angler's aim is to have as much satisfaction as possible and
additionally he has to leave the lake before a fixed moment. Therefore his goal
is to find two optimal stopping times in order to maximize his satisfaction. At
the first moment, he changes the fishing technique, e.g. by setting aside one
rod and concentrating on the other. Next, he decides when he should stop the
expedition. These stopping times have to be shorter than the fixed time of
fishing. The dynamic programming methods have been used to find these two
optimal stopping times and to specify the expected satisfaction of the angler
at these times.
|
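The dynamic programming idea in the abstract above can be illustrated with a much-simplified single-stopping-time version: the paper treats two stopping times and renewal processes, whereas here the declining expected catch values and the unit time cost are invented for illustration, and backward induction computes the value of continuing to fish.

```python
def continuation_values(expected_catch, cost_per_step):
    """Backward induction for a single optimal stopping time (a
    simplification of the paper's two-stopping-time problem): V[t] is the
    expected extra satisfaction from fishing on at time t, stopping as
    soon as continuing is no longer worthwhile."""
    T = len(expected_catch)
    V = [0.0] * (T + 1)
    for t in range(T - 1, -1, -1):
        # continue: pay the time cost, collect the expected catch, keep V[t+1]
        V[t] = max(0.0, expected_catch[t] - cost_per_step + V[t + 1])
    return V

# hypothetical expected catch values that decline over the day
V = continuation_values([3.0, 2.0, 1.0, 0.5], cost_per_step=1.5)
# stopping rule: keep fishing while V[t] > 0, i.e. stop after period 2
```

The same recursion, applied to a richer state (which rods are active, time left), is the shape of the two-stopping-time solution the abstract describes.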
1112.2306
|
Secrecy Degrees of Freedom of MIMO Broadcast Channels with Delayed CSIT
|
cs.IT math.IT
|
The degrees of freedom (DoF) of the two-user Gaussian multiple-input and
multiple-output (MIMO) broadcast channel with confidential message (BCC) is
studied under the assumption that delayed channel state information (CSI) is
available at the transmitter. We characterize the optimal secrecy DoF (SDoF)
region and show that it can be achieved by a simple artificial noise alignment
(ANA) scheme. The proposed scheme sends the confidential messages superposed
with the artificial noise over several time slots. Exploiting delayed CSI, the
transmitter aligns the signal in such a way that the useful message can be
extracted at the intended receiver but is completely drowned by the artificial
noise at the unintended receiver. The proposed scheme can be interpreted as a
non-trivial extension of the Maddah-Ali-Tse (MAT) scheme and enables us to
quantify the resource overhead, or equivalently the DoF loss, that must be
paid for secure communication.
|
1112.2315
|
Adaptive Forgetting Factor Fictitious Play
|
stat.ML cs.LG cs.MA
|
It is now well known that decentralised optimisation can be formulated as a
potential game, and game-theoretical learning algorithms can be used to find an
optimum. One of the most common learning techniques in game theory is
fictitious play. However, fictitious play is founded on the implicit
assumption that opponents' strategies are stationary. We present a novel
variation of
fictitious play that allows the use of a more realistic model of opponent
strategy. It uses a heuristic approach, from the online streaming data
literature, to adaptively update the weights assigned to recently observed
actions. We compare the results of the proposed algorithm with those of
stochastic and geometric fictitious play in a simple strategic form game, a
vehicle target assignment game and a disaster management problem. In all the
tests, the rate of convergence of the proposed algorithm was similar to or
better than that of the variations of fictitious play we compared it with.
The new algorithm
therefore improves the performance of game-theoretical learning in
decentralised optimisation.
|
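As a sketch of the forgetting-factor idea in the abstract above: geometric fictitious play discounts old observations with a fixed factor lambda. The paper's contribution is adapting that factor online, which is not reproduced here, and the 2x2 coordination game is a made-up example.

```python
import numpy as np

def best_response(payoff, belief):
    """Index of the action maximizing expected payoff under a belief."""
    return int(np.argmax(payoff @ belief))

def geometric_fictitious_play(A, B, lam=0.9, steps=200):
    """Two-player fictitious play with geometrically discounted beliefs:
    a fixed forgetting factor lam downweights old observations (lam = 1
    would weight all history equally, the stationarity assumption)."""
    n, m = A.shape
    belief_col = np.ones(m) / m       # row player's belief about columns
    belief_row = np.ones(n) / n       # column player's belief about rows
    for _ in range(steps):
        i = best_response(A, belief_col)
        j = best_response(B.T, belief_row)
        belief_row = lam * belief_row + (1 - lam) * np.eye(n)[i]
        belief_col = lam * belief_col + (1 - lam) * np.eye(m)[j]
    return belief_row, belief_col

# a 2x2 coordination (potential) game: both players prefer matching on action 0
A = np.array([[2.0, 0.0], [0.0, 1.0]])
br, bc = geometric_fictitious_play(A, A)   # beliefs concentrate on action 0
```

In this common-payoff game both belief vectors converge onto the payoff-dominant action, the behavior the abstract's decentralised-optimisation framing relies on.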
1112.2318
|
Low-rank optimization with trace norm penalty
|
math.OC cs.LG
|
The paper addresses the problem of low-rank trace norm minimization. We
propose an algorithm that alternates between fixed-rank optimization and
rank-one updates. The fixed-rank optimization is characterized by an efficient
factorization that makes the trace norm differentiable in the search space and
the computation of duality gap numerically tractable. The search space is
nonlinear but is equipped with a particular Riemannian structure that leads to
efficient computations. We present a second-order trust-region algorithm with a
guaranteed quadratic rate of convergence. Overall, the proposed optimization
scheme converges super-linearly to the global solution while maintaining
complexity that is linear in the number of rows and columns of the matrix. To
compute a set of solutions efficiently for a grid of regularization parameters
we propose a predictor-corrector approach that outperforms the naive
warm-restart approach on the fixed-rank quotient manifold. The performance of
the proposed algorithm is illustrated on problems of low-rank matrix completion
and multivariate linear regression.
|
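As background to the trace norm penalty discussed above (and separate from the paper's Riemannian trust-region algorithm): the proximal operator of the trace (nuclear) norm is singular value soft-thresholding, a standard building block in trace-norm penalized optimization.

```python
import numpy as np

def svt(X, lam):
    """Singular value thresholding: the proximal operator of
    lam * ||X||_* (trace/nuclear norm). Each singular value is
    shrunk toward zero by lam and clipped at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

X = np.array([[3.0, 0.0],
              [0.0, 1.0]])
Y = svt(X, 1.0)   # singular values 3 and 1 shrink to 2 and 0
```

The shrink-and-clip step is what makes the penalty promote low rank: small singular values are zeroed outright, so the rank of the prox output drops.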
1112.2336
|
The Spatial Nearest Neighbor Skyline Queries
|
cs.DB
|
User preference queries are very important in spatial databases. With the
help of these queries, one can find the best location among the points saved
in a database. In many situations, users evaluate the quality of a location by
its distance from its nearest neighbor among a special set of points. Less
attention has been paid to evaluating a location by its distances to nearest
neighbors in spatial user preference queries. This problem has applications in
many domains, such as service recommendation systems and investment planning.
Related works in this field are based on top-k queries. The problem with top-k
queries is that the user must set weights for the attributes and a function
for aggregating them, which is difficult in most cases. In this paper a new
type of user preference query, called the spatial nearest neighbor skyline
query, is introduced, in which the user has some sets of points as query
parameters. For each point in the database, the attributes are its distances
to the nearest neighbors from each set of query points. By casting this query
as a subset of dynamic skyline queries, the N2S2 algorithm is provided for
computing it. This algorithm performs well compared with the general
branch-and-bound algorithm for skyline queries.
|
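The query type in the abstract above can be sketched as follows: each candidate's attribute vector is its distance to the nearest neighbor in each query set, and the skyline keeps the non-dominated candidates. A naive block-nested-loop skyline stands in for the paper's N2S2 algorithm, and the locations and query sets are invented.

```python
from math import hypot

def nn_dist(p, points):
    """Distance from p to its nearest neighbor in a point set."""
    return min(hypot(p[0] - q[0], p[1] - q[1]) for q in points)

def dominates(a, b):
    """a dominates b: no worse in every attribute, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nn_skyline(candidates, query_sets):
    """Attribute vector of a candidate = its nearest-neighbor distance to
    each query set; keep the non-dominated candidates (block-nested-loop
    skyline, standing in for the paper's N2S2 algorithm)."""
    attrs = {c: tuple(nn_dist(c, qs) for qs in query_sets) for c in candidates}
    return [c for c in candidates
            if not any(dominates(attrs[o], attrs[c])
                       for o in candidates if o != c)]

# made-up locations with two query sets (say, schools and stations)
locations = [(0, 0), (5, 5), (9, 1), (2, 8)]
schools = [(1, 0), (6, 6)]
stations = [(0, 1), (8, 1)]
sky = nn_skyline(locations, [schools, stations])   # only (0, 0) survives
```

No weights or aggregation function are needed, which is exactly the advantage over top-k queries that the abstract points out.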
1112.2372
|
On Tractability Aspects of Optimal Resource Allocation in OFDMA Systems
|
cs.NI cs.IT math.IT
|
Joint channel and rate allocation with power minimization in orthogonal
frequency-division multiple access (OFDMA) has attracted extensive attention.
Most of the research has dealt with the development of sub-optimal but
low-complexity algorithms. In this paper, the contributions comprise new
insights from revisiting the tractability of computing the optimum. Previous
complexity analyses have been limited by assumptions of fixed power on each
subcarrier, or power-rate functions that locally grow arbitrarily fast. The
analysis under the former assumption does not generalize to problem
tractability with variable power, whereas the latter assumption prohibits the
result from being applicable to well-behaved power-rate functions. As the first
contribution, we overcome the previous limitations by rigorously proving the
problem's NP-hardness for the representative logarithmic rate function. Next,
we extend the proof to reach a much stronger result, namely that the problem
remains NP-hard even if the channels allocated to each user are restricted to
a consecutive block of given size. We also prove that, under these
restrictions, there is a special case with polynomial-time tractability. Then,
we treat the problem class where the channels can be partitioned into an
arbitrarily large but constant number of groups, each having uniform gain for
every individual user. For this problem class, we present a polynomial-time
algorithm and prove its optimality guarantee. In addition, we prove that the
recognition of this class is polynomial-time solvable.
|
1112.2386
|
Improvement of BM3D Algorithm and Employment to Satellite and CFA Images
Denoising
|
cs.CV
|
This paper proposes a new procedure to improve the performance of the block
matching and 3-D filtering (BM3D) image denoising algorithm. It is
demonstrated that it is possible to achieve better performance than that of
the BM3D algorithm at a variety of noise levels. The method changes the BM3D
algorithm's parameter values according to the noise level and removes the
prefiltering used at high noise levels; therefore the Peak Signal-to-Noise
Ratio (PSNR) and visual quality improve, and the complexity and processing
time of BM3D are reduced. The improved BM3D algorithm is then extended and
used to denoise satellite and color filter array (CFA) images. The results
show improved performance compared with current methods for denoising
satellite and CFA images. In this regard, the algorithm is compared, in terms
of PSNR and visual quality, with the Adaptive PCA algorithm, previously the
best-performing method for denoising CFA images. The processing time also
decreases significantly.
|
1112.2388
|
Information Filtering via Implicit Trust-based Network
|
physics.data-an cs.IR
|
Based on the user-item bipartite network, collaborative filtering (CF)
recommender systems predict users' interests according to their history of
collections, which is a promising way to address the information explosion
problem. However, CF algorithms encounter cold-start and sparsity problems.
The trust-based CF algorithm is implemented by collecting users' trust
statements, which is time-consuming and must use users' private friendship
information. In this paper, we present a novel measurement to calculate users'
implicit trust-based correlation by taking into account their average ratings,
rating ranges, and the number of commonly rated items. By applying the same
idea to the items, an item-based CF algorithm is constructed. Simulation
results on three benchmark data sets show that the performances of both the
user-based and item-based algorithms can be enhanced greatly. Finally, a
hybrid algorithm is constructed by integrating the user-based and item-based
algorithms; the simulation results indicate that the hybrid algorithm
outperforms the state-of-the-art methods. Specifically, it can not only
provide more accurate recommendations, but also alleviate the cold-start
problem.
|
1112.2392
|
Information filtering via biased heat conduction
|
physics.data-an cs.IR
|
The heat conduction process has recently found application in personalized
recommendation [T. Zhou et al., PNAS 107, 4511 (2010)], where it offers high
diversity but low accuracy. By decreasing the temperatures of small-degree
objects, we present an improved algorithm, called biased heat conduction
(BHC), which simultaneously enhances accuracy and diversity. Extensive
which could simultaneously enhance the accuracy and diversity. Extensive
experimental analyses demonstrate that the accuracy on MovieLens, Netflix and
Delicious datasets could be improved by 43.5%, 55.4% and 19.2% compared with
the standard heat conduction algorithm, and the diversity is also increased or
approximately unchanged. Further statistical analyses suggest that the present
algorithm could simultaneously identify users' mainstream and special tastes,
resulting in better performance than the standard heat conduction algorithm.
This work provides a credible route to highly efficient information filtering.
|
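The degree-bias idea in the abstract above can be sketched on a binary user-item matrix: plain heat conduction averages item-to-user and then user-to-item, and an exponent on the item degree in the final averaging biases the temperatures. The placement of the bias exponent here is an assumption for illustration; the paper's exact BHC formula may differ.

```python
import numpy as np

def heat_conduction_scores(A, user, theta=1.0):
    """Heat conduction on a binary user-item matrix A. theta = 1.0 is the
    standard algorithm; theta != 1 biases the final item-side averaging by
    the item degree k_i ** theta (assumed placement of the bias)."""
    ku = A.sum(axis=1).astype(float)              # user degrees
    ki = A.sum(axis=0).astype(float)              # item degrees
    f0 = A[user].astype(float)                    # unit "heat" on collected items
    h = (A @ f0) / np.maximum(ku, 1.0)            # item -> user averaging
    f = (A.T @ h) / np.maximum(ki, 1.0) ** theta  # user -> item, degree-biased
    f[A[user] == 1] = -np.inf                     # never re-recommend
    return f

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
scores = heat_conduction_scores(A, user=0, theta=0.8)  # item 2 ranks first
```

Varying `theta` trades off the low-degree (niche) items that plain heat conduction favors against popular items, which is the accuracy/diversity dial the abstract measures.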
1112.2401
|
A Real-Time Database QoS-aware Service Selection Protocol for MANET
|
cs.DB
|
The real-time database service selection depends typically on system
stability in order to handle time-constrained transactions within their
deadlines. However, applying a real-time database system in mobile ad hoc
networks requires considering the mobile nodes' limited capacities. In this
paper, we propose a cross-layer service selection scheme which combines
performance metrics measured in the real-time database system with those used
by the routing protocol in order to make the best selection decision. It
ensures both timeliness and energy efficiency by avoiding low-power and busy
service provider nodes. A multicast packet is used to reduce the transmission
cost and network load when sending the same packet to multiple service
providers. We evaluate the performance of our proposed protocol: simulation
results using the Network Simulator NS2 show that the protocol decreases the
deadline miss ratio of packets, increases service availability and reduces
the service response time.
|
1112.2408
|
Maximum Production of Transmission Messages Rate for Service Discovery
Protocols
|
cs.NI cs.AI
|
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a
network is regarded as a challenge by researchers. This issue represents a
serious problem for many protocols, particularly those that depend on sending
messages as part of their strategy, such as service discovery protocols. This
paper proposes and evaluates an algorithm to predict the minimum period of
time required between two or more consecutive messages and to suggest minimum
queue sizes for the routers, in order to manage the traffic and minimize the
number of messages dropped because of congestion, queue overflow or both. The
algorithm has been applied to the Universal Plug and Play (UPnP) protocol
using the ns2 simulator. It was tested with the routers connected in two
configurations, centralized and decentralized. The message length and the
bandwidth of the links among the routers were taken into consideration. The
results show an improvement in the number of dropped messages among the
routers.
|
1112.2409
|
Medium Access Control Protocols for Wireless Sensor Networks with Energy
Harvesting
|
cs.IT cs.NI math.IT
|
The design of Medium Access Control (MAC) protocols for wireless sensor
networks (WSNs) has been conventionally tackled by assuming battery-powered
devices and by adopting the network lifetime as the main performance criterion.
While WSNs operated by energy-harvesting (EH) devices are not limited by
network lifetime, they pose new design challenges due to the uncertain amount
of harvestable energy. Novel design criteria are thus required to capture the
trade-offs between the potentially infinite network lifetime and the uncertain
energy availability. This paper addresses the analysis and design of WSNs with
EH devices by focusing on conventional MAC protocols, namely TDMA, Framed-ALOHA
(FA) and Dynamic-FA (DFA), and by accounting for the performance trade-offs and
design issues arising due to EH. A novel metric, referred to as delivery
probability, is introduced to measure the capability of a MAC protocol to
deliver the measurement of any sensor in the network to the intended destination
(or fusion center, FC). The interplay between delivery efficiency and time
efficiency (i.e., the data collection rate at the FC), is investigated
analytically using Markov models. Numerical results validate the analysis and
emphasize the critical importance of accounting for both delivery probability
and time efficiency in the design of EH-WSNs.
|
1112.2410
|
Networks Utilization Improvements for Service Discovery Performance
|
cs.NI cs.AI
|
Service discovery request messages play a vital role in sharing and locating
resources in many service discovery protocols. Sending more messages than a
link can handle may cause congestion and loss of messages, which dramatically
influences the performance of these protocols. Re-sending the lost messages
results in latency and inefficiency in performing the tasks which user(s)
require from the connected nodes. This issue becomes a serious problem in two
cases: first, when the number of clients performing a service discovery
request increases, since this increases the number of sent discovery messages;
second, when network resources such as bandwidth capacity are consumed by
other applications. Both cases lead to network congestion and loss of
messages. This paper proposes an algorithm to improve the performance of
service discovery protocols by separating consecutive bursts of messages with
a specific period of time calculated from the available network resources. It
was tested with the routers connected in two configurations, decentralised and
centralised. In addition, this paper explains the impact of increasing the
number of clients and the consumed network resources on the proposed
algorithm.
|
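A back-of-envelope version of the burst-spacing idea in this abstract (and in 1112.2408): space bursts so the bottleneck link and router queue can absorb them. The formula below is illustrative only, not the papers' algorithm, and the message size, link rate and queue size are invented.

```python
def min_burst_period(msg_bytes, n_msgs, link_bps, queue_slots):
    """Illustrative spacing rule (not the paper's algorithm): time for the
    bottleneck link to serialize one burst, stretched when the router
    queue cannot hold the whole burst at once."""
    bits_per_msg = msg_bytes * 8
    drain_time = n_msgs * bits_per_msg / link_bps   # seconds per burst
    # fraction of the burst that exceeds the queue and needs extra headroom
    overflow_fraction = max(0.0, 1.0 - queue_slots / n_msgs)
    return drain_time * (1.0 + overflow_fraction)

# e.g. a burst of 20 UPnP messages of 512 bytes on a 1 Mb/s link, 10-slot queue
period = min_burst_period(512, 20, 1_000_000, 10)   # 0.12288 s
```

Enlarging the queue shortens the required period, matching the abstract's point that queue sizing and message spacing are two handles on the same drop problem.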
1112.2437
|
Competition and Regulation in Wireless Services Markets
|
cs.NI cs.GT cs.SY
|
We consider a wireless services market where a set of operators compete for a
large common pool of users. The latter have a reservation utility of U0 units
or, equivalently, an alternative option to satisfy their communication needs.
The operators must satisfy these minimum requirements in order to attract the
users. We model the users' decisions and interaction as an evolutionary game
and the competition among the operators as a non-cooperative price game, which
is proved to be a potential game. For each set of prices selected by the
operators, the evolutionary game attains a different stationary point. We show
that the outcome of both games depends on the reservation utility of the users
and the amount of spectrum W the operators have at their disposal. We express
the market welfare and the revenue of the operators as functions of these two
parameters. Accordingly, we consider the scenario where a regulating agency is
able to intervene and change the outcome of the market by tuning W and/or U0.
Different regulators may have different objectives and criteria according to
which they intervene. We analyze the various possible regulation methods and
discuss their requirements, implications and impact on the market.
|
1112.2459
|
Hybrid Centrality Measures for Binary and Weighted Networks
|
physics.soc-ph cs.DL cs.SI
|
Existing centrality measures for social network analysis suggest the
importance of an actor by considering the actor's structural position in a
network. Each of these measures captures a specific attribute of an actor
(i.e., popularity, accessibility, or brokerage behavior). In this study, we
propose new hybrid centrality measures (i.e., Degree-Degree, Degree-Closeness
and Degree-Betweenness), formed by combining existing measures (i.e., degree,
closeness and betweenness), with the proposition that they better capture the
importance of actors in a given network. A generalized set of measures is
also proposed for weighted networks. Our analysis of a co-authorship network
dataset suggests that the proposed centrality measures (especially for
weighted networks) correlate more significantly with the performance of
scholars than traditional centrality measures do. Thus, they are useful
measures which can be used instead of traditional measures to show the
prominence of actors in a network.
|
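One plausible reading of the hybrid measures in the abstract above is to score a node by summing a classical centrality over its neighbors; the Degree-Closeness variant is sketched below on a made-up co-authorship graph. The paper's exact combination rule may differ.

```python
from collections import deque

def closeness(adj, v):
    """Closeness = (n - 1) / sum of shortest-path distances from v
    (BFS; assumes a connected, unweighted graph)."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(adj) - 1) / sum(d for node, d in dist.items() if node != v)

def degree_closeness(adj, v):
    """Hybrid score: sum of the closeness of v's neighbors (one plausible
    reading of Degree-Closeness; the paper's combination may differ)."""
    return sum(closeness(adj, w) for w in adj[v])

# a made-up co-authorship graph in which author 0 is the hub
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
hub_score = degree_closeness(adj, 0)    # 0.75 + 0.75 + 0.6 = 2.1
```

The hub scores highest because it combines many ties (degree) with well-placed neighbors (closeness), which is the intuition behind mixing the two classical measures.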
1112.2460
|
Social Capital and Individual Performance: A Study of Academic
Collaboration
|
cs.SI cs.IR physics.soc-ph
|
Studies on social networks highlight the importance of network structure or
structural properties of a given network and its impact on performance outcome.
One of the important properties of this network structure is referred to as
"social capital": the "network of contacts" and the associated values
attached to these networks of contacts. In this study, our aim is to provide
empirical evidence of the influence of social capital on performance within
the context of academic collaboration. We suggest that the collaborative
process involves social capital embedded within relationships and network
structures among direct co-authors. Thus, we examine whether scholars' social
capital is associated with their citation-based performance, using
co-authorship and citation data. In order to test and validate our proposed
hypotheses, we extract publication records from Scopus having "information
science" in their titles, keywords or abstracts between 2001 and 2010. To
overcome the limitations of traditional social network metrics for measuring
the influence of scholars' social capital within their co-authorship network,
we extend the traditional social network metrics by proposing a new measure
(Power-Diversity Index). We then use Spearman's correlation rank test to
examine the association between scholars' social capital measures and their
citation-based performance. Results suggest that research performance of
authors is positively correlated with their social capital measures. This study
highlights that the Power-diversity Index, which is introduced as a new hybrid
centrality measure, serves as an indicator of power and influence of an
individual's ability to control communication and information.
|
1112.2468
|
Creating a Live, Public Short Message Service Corpus: The NUS SMS Corpus
|
cs.CL
|
Short Message Service (SMS) messages are largely sent directly from one
person to another from their mobile phones. They represent a means of personal
communication that is an important communicative artifact in our current
digital era. As most existing studies have used private access to SMS corpora,
comparative studies using the same raw SMS data have not been possible until
now. We describe our efforts to collect a public SMS corpus to address this
problem. We use a battery of methodologies to collect the corpus, paying
particular attention to privacy issues to address contributors' concerns. Our
live project collects new SMS message submissions, checks their quality and
adds the valid messages, releasing the resultant corpus as XML and as SQL
dumps, along with corpus statistics, every month. We opportunistically collect
as much metadata about the messages and their senders as possible, so as to
enable different types of analyses. To date, we have collected about 60,000
messages, focusing on English and Mandarin Chinese.
|
1112.2475
|
Permutation Complexity via Duality between Values and Orderings
|
nlin.CD cs.IT math.IT physics.data-an
|
We study the permutation complexity of finite-state stationary stochastic
processes based on a duality between values and orderings between values.
First, we establish a duality between the set of all words of a fixed length
and the set of all permutations of the same length. Second, on this basis, we
give an elementary alternative proof of the equality between the permutation
entropy rate and the entropy rate for finite-state stationary stochastic
processes, first proved in [Amigo, J.M., Kennel, M. B., Kocarev, L., 2005.
Physica D 210, 77-95]. Third, we show that further information on the
relationship between the structure of values and the structure of orderings for
finite-state stationary stochastic processes beyond the entropy rate can be
obtained from the established duality. In particular, we prove that the
permutation excess entropy is equal to the excess entropy, which is a measure
of global correlation present in a stationary stochastic process, for
finite-state stationary ergodic Markov processes.
|
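The ordinal viewpoint in the abstract above rests on counting order patterns of values instead of the values themselves. A minimal empirical permutation entropy estimator, with invented example sequences:

```python
from collections import Counter
from math import log

def ordinal_pattern(window):
    """Indices that sort the window: the order pattern of its values."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def permutation_entropy(seq, m):
    """Empirical Shannon entropy (nats) of length-m ordinal patterns."""
    counts = Counter(ordinal_pattern(seq[i:i + m])
                     for i in range(len(seq) - m + 1))
    total = sum(counts.values())
    return -sum(c / total * log(c / total) for c in counts.values())

# a monotone sequence shows a single pattern; an alternating one shows two
h_mono = permutation_entropy(list(range(100)), 3)   # 0.0
h_alt = permutation_entropy([0, 1] * 50, 3)         # log(2): two patterns
```

Dividing such estimates by the window length, and letting it grow, approximates the permutation entropy rate that the abstract proves equal to the ordinary entropy rate.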
1112.2483
|
Capacity Bounds and Exact Results for the Cognitive Z-interference
Channel
|
cs.IT math.IT
|
We study the discrete memoryless Z-interference channel (ZIC) where the
transmitter of the pair that suffers from interference is cognitive. We first
provide an upper bound on the capacity of this channel. We then show that, when
the channel of the transmitter-receiver pair that does not experience
interference is deterministic, our proposed upper bound matches the known lower
bound provided by Cao and Chen in 2008. The obtained results imply that, unlike
in the Gaussian cognitive ZIC, in the considered channel superposition encoding
at the non-cognitive transmitter as well as Gel'fand-Pinsker encoding at the
cognitive transmitter are needed in order to minimize the impact of
interference. As a byproduct of the obtained capacity region, we obtain the
capacity under the generalized Gel'fand-Pinsker conditions where a
transmitter-receiver pair communicates in the presence of interference
noncausally known at the encoder.
|
1112.2491
|
Permutation Excess Entropy and Mutual Information between the Past and
Future
|
nlin.CD cs.IT math.IT physics.data-an
|
We address the excess entropy, which is a measure of complexity for
stationary time series, from the ordinal point of view. We show that the
permutation excess entropy is equal to the mutual information between two
adjacent semi-infinite blocks in the space of orderings for finite-state
stationary ergodic Markov processes. This result may shed new light on the
relationship between complexity and anticipation.
|
1112.2493
|
Symbolic transfer entropy rate is equal to transfer entropy rate for
bivariate finite-alphabet stationary ergodic Markov processes
|
nlin.CD cs.IT math.IT physics.data-an
|
Transfer entropy is a measure of the magnitude and the direction of
information flow between jointly distributed stochastic processes. In recent
years, its permutation analogues are considered in the literature to estimate
the transfer entropy by counting the number of occurrences of orderings of
values, not the values themselves. It has been suggested that the permutation
method is easy to implement, computationally inexpensive and robust to noise
when applied to real-world time series data. In this paper, we initiate a
theoretical treatment of the corresponding rates. In particular, we consider
the transfer entropy rate and its permutation analogue, the symbolic transfer
entropy rate, and show that they are equal for any bivariate finite-alphabet
stationary ergodic Markov process. This result is an illustration of the
duality method introduced in [T. Haruna and K. Nakajima, Physica D 240, 1370
(2011)]. We also discuss the relationship among the transfer entropy rate, the
time-delayed mutual information rate and their permutation analogues.
|
1112.2558
|
Success-driven distribution of public goods promotes cooperation but
preserves defection
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
Established already in Biblical times, the Matthew effect stands for the
fact that in societies the rich tend to get richer and the potent even more
powerful. Here we investigate a game-theoretical model describing the evolution
of cooperation on structured populations where the distribution of public goods
is driven by the reproductive success of individuals. Phase diagrams reveal
that cooperation is promoted irrespective of the uncertainty by strategy
adoptions and the type of interaction graph, yet the complete dominance of
cooperators is elusive due to the spontaneous emergence of super-persistent
defectors that owe their survival to extremely rare microscopic patterns. This
indicates that success-driven mechanisms are crucial for effectively harvesting
benefits from collective actions, but that they may also account for the
observed persistence of maladaptive behavior.
|
1112.2605
|
Secure Querying of Recursive XML Views: A Standard XPath-based Technique
|
cs.CR cs.DB
|
Most state-of-the-art approaches for securing XML documents allow users to
access data only through authorized views defined by annotating an XML grammar
(e.g. DTD) with a collection of XPath expressions. To prevent improper
disclosure of confidential information, user queries posed on these views need
to be rewritten into equivalent queries on the underlying documents. This
rewriting enables us to avoid the overhead of view materialization and
maintenance. A major concern here is that query rewriting for recursive XML
views is still an open problem. To overcome this problem, some works have been
proposed to translate XPath queries into non-standard ones, called Regular
XPath queries. However, query rewriting under Regular XPath can be of
exponential size as it relies on an automaton model. Most importantly, Regular
XPath remains a theoretical achievement. Indeed, it is not commonly used in
practice, as translation and evaluation tools are not available. In this paper,
we show that query rewriting is always possible for recursive XML views using
only the expressive power of standard XPath. We investigate the extension
of the downward class of XPath, composed only of the child and descendant axes,
with some additional axes and operators, and we propose a general approach to rewrite
queries under recursive XML views. Unlike Regular XPath-based works, we provide
a rewriting algorithm which processes the query only over the annotated DTD
grammar and which can run in linear time in the size of the query. An
experimental evaluation demonstrates that our algorithm is efficient and scales
well.
|
1112.2608
|
Rohlin Distance and the Evolution of Influenza A virus: Weak Attractors
and Precursors
|
q-bio.PE cond-mat.other cs.CE q-bio.QM
|
The evolution of the hemagglutinin amino acid sequences of Influenza A virus
is studied by a method based on an informational metric, originally introduced
by Rohlin for partitions in abstract probability spaces. This metric does not
require any previous functional or syntactic knowledge about the sequences, and
it is sensitive to correlated variations in the disposition of characters. Its
efficiency is improved by algorithmic tools designed to enhance the detection
of novelty and to reduce the noise of useless mutations. We focus on the
USA data from 1993/94 to 2010/2011 for A/H3N2 and on USA data from 2006/07 to
2010/2011 for A/H1N1. We show that clusterization of the distance matrix
gives strong evidence for a structure of domains in the sequence space, acting
as weak attractors for the evolution, in very good agreement with the
epidemiological history of the virus. The structure proves very robust with
respect to the variations of the clusterization parameters, and extremely
coherent when restricting the observation window. The results suggest an
efficient strategy in the vaccine forecast, based on the presence of
"precursors" (or "buds") populating the most recent attractor.
|
1112.2610
|
The ViP2P Platform: XML Views in P2P
|
cs.DB
|
The growing volumes of XML data sources on the Web or produced by
enterprises, organizations etc. raise many performance challenges for data
management applications. In this work, we are concerned with the distributed,
peer-to-peer management of large corpora of XML documents, based on distributed
hash table (or DHT, in short) overlay networks. We present ViP2P (standing for
Views in Peer-to-Peer), a distributed platform for sharing XML documents based
on a structured P2P network infrastructure (DHT). At the core of ViP2P stand
distributed materialized XML views, defined by arbitrary XML queries, filled in
with data published anywhere in the network, and exploited to efficiently
answer queries issued by any network peer. ViP2P allows user queries to be
evaluated over XML documents published by peers in two modes. First, a
long-running subscription mode, when a query can be registered in the system
and receive answers incrementally when and if published data matches the query.
Second, queries can also be asked in an ad-hoc, snapshot mode, where results
are required immediately and must be computed based on the results of other
long-running, subscription queries. ViP2P innovates over other similar
DHT-based XML sharing platforms by using a very expressive structured XML query
language. This expressivity leads to a very flexible distribution of XML
content in the ViP2P network, and to efficient snapshot query execution. ViP2P
has been tested in real deployments of hundreds of computers. We present the
platform architecture, its internal algorithms, and demonstrate its efficiency
and scalability through a set of experiments. Our experimental results outgrow
by orders of magnitude similar competitor systems in terms of data volumes,
network size and data dissemination throughput.
|
1112.2627
|
Fast Hybrid PSO and Tabu Search Approach for Optimization of a Fuzzy
Controller
|
cs.SY
|
In this paper, a zero-order Takagi-Sugeno fuzzy controller is optimized by a
hybrid of Particle Swarm Optimization (PSO) and Tabu Search (TS). The algorithm
automatically adjusts the membership functions of the fuzzy controller inputs
and the conclusions of the fuzzy rules. At each iteration of PSO, we compute
the best solution and then seek its best neighbor by Tabu Search; this
operation minimizes the number of iterations and the computation time while
maintaining accuracy and a minimal response time. We apply the algorithm to
optimize a three-rule fuzzy controller for a simple inverted pendulum.
|
1112.2628
|
Simulation Performance of MMSE Iterative Equalization with Soft Boolean
Value Propagation
|
cs.IT math.IT
|
The performance of MMSE Iterative Equalization based on the MAP-SBVP and
COD-MAP algorithms (for generating extrinsic information) is compared for
fading and non-fading communication channels employing serially concatenated
convolutional codes.
MAP-SBVP is a convolutional decoder using a conventional soft-MAP decoder
followed by a soft convolutional encoder using soft boolean value propagation
(SBVP).
From the simulations it is observed that for MMSE Iterative Equalization,
MAP-SBVP performance is comparable to COD-MAP for fading and non-fading
channels.
|
1112.2640
|
Threshold Choice Methods: the Missing Link
|
cs.AI
|
Many performance metrics have been introduced for the evaluation of
classification performance, with different origins and niches of application:
accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the
absolute error, and the Brier score (with its decomposition into refinement and
calibration). One way of understanding the relation among some of these metrics
is the use of variable operating conditions (either in the form of
misclassification costs or class proportions). Thus, a metric may correspond to
some expected loss over a range of operating conditions. One dimension for the
analysis has been precisely the distribution we take for this range of
operating conditions, leading to some important connections in the area of
proper scoring rules. However, we show that there is another dimension which
has not received attention in the analysis of performance metrics. This new
dimension is given by the decision rule, which is typically implemented as a
threshold choice method when using scoring models. In this paper, we explore
many old and new threshold choice methods: fixed, score-uniform, score-driven,
rate-driven and optimal, among others. By calculating the loss of these methods
for a uniform range of operating conditions we get the 0-1 loss, the absolute
error, the Brier score (mean squared error), the AUC and the refinement loss
respectively. This provides a comprehensive view of performance metrics as well
as a systematic approach to loss minimisation, namely: take a model, apply
several threshold choice methods consistent with the information which is (and
will be) available about the operating condition, and compare their expected
losses. In order to assist in this procedure we also derive several connections
between the aforementioned performance metrics, and we highlight the role of
calibration in choosing the threshold choice method.
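Two of the correspondences above are easy to state concretely: a fixed
threshold choice yields the 0-1 loss, while the Brier score is the mean squared
error between scores and labels. A minimal sketch (function names are
illustrative, not from the paper):

```python
def brier_score(scores, labels):
    """Mean squared error between predicted scores in [0, 1] and 0/1
    labels; per the paper, this is the expected loss obtained with the
    score-driven threshold choice over uniform operating conditions."""
    return sum((s - y) ** 2 for s, y in zip(scores, labels)) / len(scores)

def zero_one_loss(scores, labels, threshold=0.5):
    """Error rate under a fixed threshold choice method."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)
```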
|
1112.2661
|
Location- and Time-Dependent VPD for Privacy-Preserving Wireless
Accesses to Cloud Services
|
cs.CR cs.DB
|
The advent of smartphones in recent years has changed the wireless landscape.
Smartphones have become a platform for online user interface to cloud
databases. Cloud databases may provide a large set of user-private and
sensitive data (i.e., objects), while smartphone users (i.e., subjects) provide
location-sensitive information. Secure and private services for wireless
access to cloud databases have been discussed actively in recent years.
However, previous techniques are unsatisfactory for the dynamism of moving
subjects' wireless accesses. In this paper, we propose a novel technique
to dynamically generate virtual private databases (VPD) for each access by
taking subjects' location and time information into account. The contribution
of this paper includes a privacy-preserving access control mechanism for the
dynamism of wireless access.
|
1112.2663
|
Customer Data Clustering using Data Mining Technique
|
cs.DB
|
Classification and pattern extraction from customer data is very important
for business support and decision making. Timely identification of newly
emerging trends is very important in business processes. Large companies have
huge volumes of data but are starving for knowledge. To overcome this issue, a
new breed of technique is required that has the intelligence and capability to
solve the knowledge scarcity, and that technique is called data mining. The
objective of this paper is to identify high-profit, high-value and low-risk
customers by one of the data mining techniques, customer clustering. In the
first phase, we cleanse the data and develop patterns via a demographic
clustering algorithm using IBM I-Miner. In the second phase, we profile the
data, develop the clusters and identify the high-value, low-risk customers.
This cluster typically represents the 10-20 percent of customers which yields
80% of the revenue.
|
1112.2679
|
Truncated Power Method for Sparse Eigenvalue Problems
|
stat.ML cs.AI
|
This paper considers the sparse eigenvalue problem, which is to extract
dominant (largest) sparse eigenvectors with at most $k$ non-zero components. We
propose a simple yet effective solution called truncated power method that can
approximately solve the underlying nonconvex optimization problem. A strong
sparse recovery result is proved for the truncated power method, and this
theory is our key motivation for developing the new algorithm. The proposed
method is tested on applications such as sparse principal component analysis
and the densest $k$-subgraph problem. Extensive experiments on several
synthetic and real-world large scale datasets demonstrate the competitive
empirical performance of our method.
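The iteration the abstract describes, a power step followed by truncation to
the k entries of largest magnitude, can be sketched in pure Python for a small
symmetric matrix; this is a generic reading of the method, not the authors'
reference implementation:

```python
def truncated_power_method(A, k, iters=200):
    """Power iteration where, after each matrix-vector multiply, all
    but the k largest-magnitude entries are zeroed and the vector is
    renormalised, yielding an approximate k-sparse leading eigenvector."""
    n = len(A)
    x = [1.0 / n ** 0.5] * n  # uniform start
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # truncate: keep only the k entries of largest magnitude
        keep = set(sorted(range(n), key=lambda i: -abs(y[i]))[:k])
        y = [y[i] if i in keep else 0.0 for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x
```

For k = n this reduces to ordinary power iteration.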
|
1112.2680
|
Random Differential Privacy
|
stat.ME cs.CR cs.LG
|
We propose a relaxed privacy definition called {\em random differential
privacy} (RDP). Differential privacy requires that adding any new observation
to a database will have small effect on the output of the data-release
procedure. Random differential privacy requires that adding a {\em randomly
drawn new observation} to a database will have small effect on the output. We
show an analog of the composition property of differentially private procedures
which applies to our new definition. We show how to release an RDP histogram
and we show that RDP histograms are much more accurate than histograms obtained
using ordinary differential privacy. We finally show an analog of the global
sensitivity framework for the release of functions under our privacy
definition.
|
1112.2681
|
Inference in Probabilistic Logic Programs with Continuous Random
Variables
|
cs.AI
|
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's
PRISM, Poole's ICL, De Raedt et al.'s ProbLog and Vennekens et al.'s LPAD, is aimed
at combining statistical and logical knowledge representation and inference. A
key characteristic of PLP frameworks is that they are conservative extensions
to non-probabilistic logic programs which have been widely used for knowledge
representation. PLP frameworks extend traditional logic programming semantics
to a distribution semantics, where the semantics of a probabilistic logic
program is given in terms of a distribution over possible models of the
program. However, the inference techniques used in these works rely on
enumerating sets of explanations for a query answer. Consequently, these
languages permit very limited use of random variables with continuous
distributions. In this paper, we present a symbolic inference procedure that
uses constraints and represents sets of explanations without enumeration. This
permits us to reason over PLPs with Gaussian or Gamma-distributed random
variables (in addition to discrete-valued random variables) and linear equality
constraints over reals. We develop the inference procedure in the context of
PRISM; however the procedure's core ideas can be easily applied to other PLP
languages as well. An interesting aspect of our inference procedure is that
PRISM's query evaluation process becomes a special case in the absence of any
continuous random variables in the program. The symbolic inference procedure
enables us to reason over complex probabilistic models such as Kalman filters
and a large subclass of Hybrid Bayesian networks that were hitherto not
possible in PLP frameworks. (To appear in Theory and Practice of Logic
Programming).
|
1112.2690
|
Multilevel Coding Schemes for Compute-and-Forward with Flexible Decoding
|
cs.IT math.IT
|
We consider the design of coding schemes for the wireless two-way relaying
channel when there is no channel state information at the transmitter. In the
spirit of the compute-and-forward paradigm, we present a multilevel coding
scheme that permits computation (or, decoding) of a class of functions at the
relay. The function to be computed (or, decoded) is then chosen depending on
the channel realization. We define such a class of functions which can be
decoded at the relay using the proposed coding scheme and derive rates that are
universally achievable over a set of channel gains when this class of functions
is used at the relay. We develop our framework with general modulation formats
in mind, but numerical results are presented for the case where each node
transmits using the QPSK constellation. Numerical results with QPSK show that
the flexibility afforded by our proposed scheme results in substantially higher
rates than those achievable by always using a fixed function or by adapting the
function at the relay but coding over GF(4).
|
1112.2723
|
Correlation-aware Resource Allocation in Multi-Cell Networks
|
cs.IT math.IT
|
We propose a cross-layer strategy for resource allocation between spatially
correlated sources in the uplink of multi-cell FDMA networks. Our objective is
to find the optimum power and channel allocation for the sources, in order to
minimize the
network is multi-cell, the inter-cell interference must also be taken into
consideration. This resource allocation problem is NP-hard and the optimal
solution can only be found by exhaustive search over the entire solution space,
which is not computationally feasible. We propose a three step method to be
performed separately by the scheduler in each cell, which finds cross-layer
resource allocation in simple steps. The three-step algorithm separates the
problem into inter-cell resource management, grouping of sources for joint
decoding, and intra-cell channel assignment. For each of the steps we propose
allocation methods that satisfy different design constraints. In the
simulations we compare methods for each step of the algorithm. We also
demonstrate the overall gain of using correlation-aware resource allocation for
a typical multi-cell network of Gaussian sources. We show that, while using
correlation in compression and joint decoding achieves a 25% reduction in
distortion over independent decoding, this reduction grows to 37% when
correlation is also utilized in the resource allocation method. This
significant distortion reduction motivates further work in correlation-aware
resource allocation. Overall, we find that our method achieves a 60% decrease
in 5th-percentile distortion compared to independent methods.
|
1112.2738
|
Robust Learning via Cause-Effect Models
|
stat.ML cs.LG
|
We consider the problem of function estimation in the case where the data
distribution may shift between training and test time, and additional
information about it may be available at test time. This relates to popular
scenarios such as covariate shift, concept drift, transfer learning and
semi-supervised learning. This working paper discusses how these tasks could be
tackled depending on the kind of changes of the distributions. It argues that
knowledge of an underlying causal direction can facilitate several of these
tasks.
|
1112.2755
|
Using Proximity to Predict Activity in Social Networks
|
cs.SI physics.soc-ph
|
The structure of a social network contains information useful for predicting
its evolution. Nodes that are "close" in some sense are more likely to become
linked in the future than more distant nodes. We show that structural
information can also help predict node activity. We use proximity to capture
the degree to which two nodes are "close" to each other in the network. In
addition to standard proximity metrics used in the link prediction task, such
as neighborhood overlap, we introduce new metrics that model different types of
interactions that can occur between network nodes. We argue that the "closer"
nodes are in a social network, the more similar will be their activity. We
study this claim using data about URL recommendation on social media sites Digg
and Twitter. We show that structural proximity of two users in the follower
graph is related to similarity of their activity, i.e., how many URLs they both
recommend. We also show that given friends' activity, knowing their proximity
to the user can help better predict which URLs the user will recommend. We
compare the performance of different proximity metrics on the activity
prediction task and find that some metrics lead to substantial performance
improvements.
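Neighborhood overlap, one of the standard proximity metrics mentioned, can be
sketched as a Jaccard coefficient over neighbor sets (an assumption: the
paper's exact normalisation may differ):

```python
def neighborhood_overlap(graph, u, v):
    """Jaccard overlap of the neighbor sets of u and v, where `graph`
    maps each node to an iterable of its neighbors."""
    nu, nv = set(graph.get(u, ())), set(graph.get(v, ()))
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0
```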
|
1112.2774
|
Measuring Tie Strength in Implicit Social Networks
|
cs.SI physics.soc-ph
|
Given a set of people and a set of events they attend, we address the problem
of measuring connectedness or tie strength between each pair of persons given
that attendance at mutual events gives an implicit social network between
people. We take an axiomatic approach to this problem. Starting from a list of
axioms that a measure of tie strength must satisfy, we characterize functions
that satisfy all the axioms and show that there is a range of measures that
satisfy this characterization. A measure of tie strength induces a ranking on
the edges (and on the set of neighbors for every person). We show that, for
applications where the ranking rather than the absolute value of the tie
strength is what matters about the measure, the axioms are equivalent to a
natural partial order. To settle on a particular measure, we must also make a
non-obvious decision about extending this partial order to a total order, and
this decision is best left to particular applications. We classify measures
found in prior literature according to the axioms that they satisfy. In our
experiments, we measure tie strength and the coverage of our axioms in several
datasets. For each dataset, we also bound the maximum Kendall's Tau divergence
(which measures the number of pairwise disagreements between two lists)
between all measures that satisfy the axioms, using the partial order. This
tells us whether a particular dataset is well behaved, so that we need not
worry about which measure to choose, or whether we must be careful about the
exact choice of measure.
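The Kendall's Tau divergence used here counts pairwise disagreements between
two rankings; a minimal O(n^2) sketch:

```python
def kendall_tau_distance(rank_a, rank_b):
    """Number of pairwise disagreements between two rankings, each
    given as a list of the same items in ranked order."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    n = len(rank_a)
    disagreements = 0
    for i in range(n):
        for j in range(i + 1, n):
            # the pair (rank_a[i], rank_a[j]) disagrees if b reverses it
            if pos_b[rank_a[i]] > pos_b[rank_a[j]]:
                disagreements += 1
    return disagreements
```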
|
1112.2791
|
Secrecy Outage Capacity of Fading Channels
|
cs.IT math.IT
|
This paper considers point to point secure communication over flat fading
channels under an outage constraint. More specifically, we extend the
definition of outage capacity to account for the secrecy constraint and obtain
sharp characterizations of the corresponding fundamental limits under two
different assumptions on the transmitter CSI (Channel state information).
First, we find the outage secrecy capacity assuming that the transmitter has
perfect knowledge of the legitimate and eavesdropper channel gains. In this
scenario, the capacity achieving scheme relies on opportunistically exchanging
private keys between the legitimate nodes. These keys are stored in a key
buffer and later used to secure delay-sensitive data using Vernam's one-time
pad technique. We then extend our results to the more practical scenario
where the transmitter is assumed to know only the legitimate channel gain.
Here, our achievability arguments rely on privacy amplification techniques to
generate secret key bits. In the two cases, we also characterize the optimal
power control policies which, interestingly, turn out to be a judicious
combination of channel inversion and the optimal ergodic strategy. Finally, we
analyze the effect of key buffer overflow on the overall outage probability.
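Vernam's one-time pad, used above to protect delay-sensitive data with
buffered key bits, is a byte-wise XOR; a minimal sketch:

```python
def vernam(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each data byte with a key byte. Applying it
    twice with the same key recovers the plaintext. The key must be at
    least as long as the data and must never be reused."""
    assert len(key) >= len(data), "one-time pad key must cover the data"
    return bytes(d ^ k for d, k in zip(data, key))
```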
|
1112.2792
|
Hybrid Heuristic-Based Artificial Immune System for Task Scheduling
|
cs.DC cs.NE
|
The task scheduling problem in heterogeneous systems is the process of
allocating tasks of an application to heterogeneous processors interconnected
by high-speed networks, so as to minimize the finishing time of the
application as much as possible. Tasks are the processing units of an
application, have precedence constraints and communication costs, and are
represented by Directed Acyclic Graphs (DAGs). Evolutionary algorithms are
well suited for solving the task scheduling problem in heterogeneous
environments. In this paper, we propose a hybrid heuristic-based Artificial
Immune System (AIS) algorithm for solving the scheduling problem. In this
regard, AIS is hybridized with some heuristics and a Single Neighbourhood
Search (SNS) technique. The cloning and immune-remove operators of AIS provide
diversity, while the heuristics and SNS provide convergence of the algorithm
to good solutions, thus balancing exploration and exploitation. We have
compared our method with some state-of-the-art algorithms. The results of the
experiments show the validity and efficiency of our method.
|
1112.2793
|
Secret Key Generation Via Localization and Mobility
|
cs.IT math.IT
|
We consider secret key generation from relative localization information of a
pair of nodes in a mobile wireless network in the presence of a mobile
eavesdropper. Our problem can be categorized under the source models of
information theoretic secrecy, where the distance between the legitimate nodes
acts as the observed common randomness. We characterize the theoretical limits
on the achievable secret key bit rate, in terms of the observation noise
variance at the legitimate nodes and the eavesdropper. This work provides a
framework that combines information theoretic secrecy and wireless
localization, and proves that the localization information provides a
significant additional resource for secret key generation in mobile wireless
networks.
|
1112.2801
|
A new order theory of set systems and better quasi-orderings
|
math.CO cs.LG
|
By reformulating the learning process of a set system L as a game between
Teacher (presenter of data) and Learner (updater of the abstract independent
set), we define the order type dim L of L to be the order type of the game
tree. The theory of this new order type and of continuous, monotone functions
between set systems corresponds to the theory of well quasi-orderings (WQOs).
As Nash-Williams developed the theory of WQOs into the theory of better
quasi-orderings (BQOs), we introduce a set system that has an order type and
corresponds to a BQO. We prove that the class of set systems corresponding to
BQOs is closed under any monotone function. In (Shinohara and Arimura,
"Inductive inference of unbounded unions of pattern languages from positive
data," Theoretical Computer Science, pp. 191-209, 2000), for any set system L,
the class of arbitrary (finite) unions of members of L was considered. From
the viewpoint of WQOs and BQOs, we characterize the set systems L such that
the class of arbitrary (finite) unions of members of L has an order type. The
characterization shows that the order structure of the set system L with
respect to set inclusion is not important for the resulting set system having
an order type. We point out that continuous, monotone functions of set systems
are similar to positive reductions to Jockusch-Owings' weakly semirecursive
sets.
|
1112.2807
|
Design and Implementation of a Simple Web Search Engine
|
cs.IR
|
We present a simple web search engine for indexing and searching HTML
documents using the Python programming language. Because Python is well known
for its simple syntax and strong support for major operating systems, we hope
it will be beneficial for learning information retrieval techniques, especially
web search engine technology.
|
1112.2810
|
Exact Modeling of the Performance of Random Linear Network Coding in
Finite-buffer Networks
|
cs.IT math.IT
|
In this paper, we present an exact model for the analysis of the performance
of Random Linear Network Coding (RLNC) in wired erasure networks with finite
buffers. In such networks, packets are delayed due to either random link
erasures or blocking by full buffers. We assert that, because of RLNC, the
contents of buffers have dependencies which cannot be captured directly using
the classical queueing-theoretic models. We model the performance of the
network using Markov chains by a careful derivation of the buffer occupancy
states and their transition rules. We verify by simulations that the proposed
framework results in an accurate measure of the network throughput offered by
RLNC. Further, we introduce a class of acyclic networks for which the number of
state variables is significantly reduced.
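Once buffer-occupancy states and transition rules are derived, steady-state
measures follow from the chain's stationary distribution. A generic sketch
that approximates it by power iteration on a small row-stochastic transition
matrix (illustrative only, not the authors' state derivation):

```python
def stationary_distribution(P, iters=500):
    """Approximate the stationary distribution of a Markov chain given
    by a row-stochastic transition matrix P (list of rows), by repeated
    left-multiplication of the distribution by P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```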
|
1112.2816
|
Phase transition to two-peaks phase in an information cascade voting
experiment
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Observational learning is an important information aggregation mechanism.
However, it occasionally leads to a state in which an entire population chooses
a sub-optimal option. When it occurs and whether it is a phase transition
remain unanswered. To address these questions, we performed a voting experiment
in which subjects answered a two-choice quiz sequentially with and without
information about the prior subjects' choices. The subjects who could copy
others are called herders. We obtained a microscopic rule regarding how herders
copy others. Varying the ratio of herders led to qualitative changes in the
macroscopic behavior in the experiment of about 50 subjects. If the ratio is
small, the sequence of choices rapidly converges to the true one. As the ratio
approaches 100%, convergence becomes extremely slow and information aggregation
almost terminates. A simulation study of a stochastic model for 10^{6} subjects
based on the herder's microscopic rule showed a phase transition to the
two-peaks phase, where the convergence completely terminates, as the ratio
exceeds some critical value.
|
1112.2892
|
A Constrained Coding Approach to Error-Free Half-Duplex Relay Networks
|
cs.IT math.IT
|
We show that the broadcast capacity of an infinite-depth tree-structured
network of error-free half-duplex-constrained relays can be achieved using
constrained coding at the source and symbol forwarding at the relays.
|
1112.2903
|
Large Scale Correlation Clustering Optimization
|
cs.CV
|
Clustering is a fundamental task in unsupervised learning. The focus of this
paper is the Correlation Clustering functional which combines positive and
negative affinities between the data points. The contribution of this paper is
twofold: (i) we provide a theoretical analysis of the functional; (ii) we
present new optimization algorithms which can cope with large-scale problems
(>100K variables) that are infeasible using existing methods. Our theoretical
analysis
provides a probabilistic generative interpretation for the functional, and
justifies its intrinsic "model-selection" capability. Furthermore, we draw an
analogy between optimizing this functional and the well known Potts energy
minimization. This analogy allows us to suggest several new optimization
algorithms, which exploit the intrinsic "model-selection" capability of the
functional to automatically recover the underlying number of clusters. We
compare our algorithms to existing methods on both synthetic and real data. In
addition we suggest two new applications that are made possible by our
algorithms: unsupervised face identification and interactive multi-object
segmentation by rough boundary delineation.
|
1112.2954
|
Synthesis of Spherical 4R Mechanism for Path Generation using
Differential Evolution
|
cs.CE
|
The problem of path generation for the spherical 4R mechanism is solved using
the Differential Evolution algorithm (DE). Formulas for the spherical geodesics
are employed in order to obtain the parametric equation for the generated
trajectory. Direct optimization of the objective function gives the solution to
the path generation task without prescribed timing. Therefore, there is no need
to separate this task into two stages to perform the optimization. Moreover, the
order defect problem can be solved without difficulty by means of manipulations
of the individuals in the DE algorithm. Two examples of optimum synthesis
showing the simplicity and effectiveness of this approach are included.
|
1112.2957
|
Inverse targeting -- an effective immunization strategy
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.comp-ph
|
We propose a new method to immunize populations or computer networks against
epidemics which is more efficient than any method considered before. The
novelty of our method resides in the way of determining the immunization
targets. First we identify those individuals or computers that contribute the
least to the disease spreading measured through their contribution to the size
of the largest connected cluster in the social or a computer network. The
immunization process follows the list of identified individuals or computers in
inverse order, immunizing first those which are most relevant for the epidemic
spreading. We have applied our immunization strategy to several model networks
and two real networks, the Internet and the collaboration network of high
energy physicists. We find that our new immunization strategy is up to 14%
more efficient for model networks, and up to 33% more efficient for real
networks, than dynamically immunizing the most connected nodes. Our strategy is
also numerically efficient and can therefore be applied to large systems.
|
1112.2962
|
Period Estimation in Astronomical Time Series Using Slotted Correntropy
|
cs.IT astro-ph.IM math.IT stat.ML
|
In this letter, we propose a method for period estimation in light curves
from periodic variable stars using correntropy. Light curves are astronomical
time series of stellar brightness, and are characterized as being
noisy and unevenly sampled. We propose to use slotted time lags in order to
estimate correntropy directly from irregularly sampled time series. A new
information theoretic metric is proposed for discriminating among the peaks of
the correntropy spectral density. The slotted correntropy method outperformed
slotted correlation, string length, VarTools (Lomb-Scargle periodogram and
Analysis of Variance), and SigSpec applications on a set of light curves drawn
from the MACHO survey.
|
1112.2972
|
Fast Distributed Gradient Methods
|
cs.IT math.IT
|
We study distributed optimization problems when $N$ nodes minimize the sum of
their individual costs subject to a common vector variable. The costs are
convex, have Lipschitz continuous gradient (with constant $L$), and bounded
gradient. We propose two fast distributed gradient algorithms based on the
centralized Nesterov gradient algorithm and establish their convergence rates
in terms of the per-node communications $\mathcal{K}$ and the per-node gradient
evaluations $k$. Our first method, Distributed Nesterov Gradient, achieves
rates $O\left({\log \mathcal{K}}/{\mathcal{K}}\right)$ and $O\left({\log
k}/{k}\right)$. Our second method, Distributed Nesterov gradient with Consensus
iterations, assumes at all nodes knowledge of $L$ and $\mu(W)$ -- the second
largest singular value of the $N \times N$ doubly stochastic weight matrix $W$.
It achieves rates $O\left({1}/{\mathcal{K}^{2-\xi}}\right)$ and
$O\left({1}/{k^2}\right)$ ($\xi>0$ arbitrarily small). Further, we give with
both methods explicit dependence of the convergence constants on $N$ and $W$.
Simulation examples illustrate our findings.
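The flavour of the first method can be sketched with a stripped-down iteration that combines consensus mixing, a gradient step with diminishing step size, and Nesterov momentum. The quadratic costs, weight matrix, and step-size schedule below are illustrative choices of our own, not the paper's exact D-NG algorithm:

```python
def distributed_nesterov(a, W, steps=3000):
    """Toy distributed Nesterov-type iteration (an illustrative sketch, not
    the paper's exact D-NG algorithm).  Node i holds the private quadratic
    cost f_i(x) = (x - a[i])**2 / 2, so the network-wide optimum of
    sum_i f_i is the average of a.  Each step mixes neighbours' iterates
    through the doubly stochastic weight matrix W, takes a gradient step
    with diminishing step size 1/(k+1), and applies momentum k/(k+3)."""
    n = len(a)
    x = [0.0] * n  # per-node estimates
    y = list(x)    # momentum ("lookahead") variables
    for k in range(steps):
        alpha = 1.0 / (k + 1)
        x_new = []
        for i in range(n):
            mix = sum(W[i][j] * y[j] for j in range(n))  # consensus mixing
            grad = y[i] - a[i]                           # grad of f_i at y_i
            x_new.append(mix - alpha * grad)
        beta = k / (k + 3.0)
        y = [x_new[i] + beta * (x_new[i] - x[i]) for i in range(n)]
        x = x_new
    return x
```

In the test below, "lazy" Metropolis weights on a 4-node ring keep all eigenvalues of W non-negative, which keeps the momentum recursion stable in this sketch; every node converges to the network-wide average of the a[i].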
|
1112.2988
|
Supervised Generative Reconstruction: An Efficient Way To Flexibly Store
and Recognize Patterns
|
cs.CV
|
Matching animal-like flexibility in recognition and the ability to quickly
incorporate new information remains difficult. Limits are yet to be adequately
addressed in neural models and recognition algorithms. This work proposes a
configuration for recognition that maintains the same function of conventional
algorithms but avoids combinatorial problems. Feedforward recognition
algorithms such as classical artificial neural networks and machine learning
algorithms are known to be subject to catastrophic interference and forgetting.
Modifying or learning new information (associations between patterns and
labels) causes loss of previously learned information. I demonstrate using
mathematical analysis how supervised generative models, with feedforward and
feedback connections, can emulate feedforward algorithms yet avoid catastrophic
interference and forgetting. Learned information in generative models is stored
in a more intuitive form that represents the fixed points or solutions of the
network, and it moreover displays difficulties similar to those of cognitive
phenomena. Brain-like capabilities and limits associated with generative models
suggest the brain may perform recognition and store information using a similar
approach. Because of the central role of recognition, progress in understanding
the underlying principles may yield significant insight into how to better
study and interface with the brain.
|
1112.3010
|
A new variational principle for the Euclidean distance function: Linear
approach to the non-linear eikonal problem
|
cs.CV math.NA
|
We present a fast convolution-based technique for computing an approximate,
signed Euclidean distance function $S$ on a set of 2D and 3D grid locations.
Instead of solving the non-linear, static Hamilton-Jacobi equation ($\|\nabla
S\|=1$), our solution stems from first solving for a scalar field $\phi$ in a
linear differential equation and then deriving the solution for $S$ by taking
the negative logarithm. In other words, when $S$ and $\phi$ are related by
$\phi = \exp \left(-\frac{S}{\tau} \right)$ and $\phi$ satisfies a specific
linear differential equation corresponding to the extremum of a variational
problem, we obtain the approximate Euclidean distance function $S = -\tau
\log(\phi)$ which converges to the true solution in the limit as $\tau
\rightarrow 0$. This is in sharp contrast to techniques like the fast marching
and fast sweeping methods which directly solve the Hamilton-Jacobi equation by
the Godunov upwind discretization scheme. Our linear formulation results in a
closed-form solution to the approximate Euclidean distance function expressible
as a discrete convolution, and hence efficiently computable using the fast
Fourier transform (FFT). Our solution also circumvents the need for spatial
discretization of the derivative operator. As $\tau \rightarrow 0$ we show the
convergence of our results to the true solution and also bound the error for a
given value of $\tau$. The differentiability of our solution allows us to
compute---using a set of convolutions---the first and second derivatives of the
approximate distance function. In order to determine the sign of the distance
function (defined to be positive inside a closed region and negative outside),
we compute the winding number in 2D and the topological degree in 3D, whose
computations can also be performed via fast convolutions. We demonstrate the
efficacy of our method through a set of experimental results.
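The relation $S = -\tau \log(\phi)$ can be illustrated in one dimension, where summing kernel contributions $e^{-r/\tau}$ from boundary points (a discrete stand-in for the convolution) makes $-\tau \log(\phi)$ a log-sum-exp softening of the minimum distance. This is a toy sketch of the limiting behaviour only; the paper obtains $\phi$ from a linear PDE and evaluates the convolution with FFTs:

```python
import math

def approx_distance(boundary, points, tau=0.05):
    """Toy 1-D sketch of the S = -tau*log(phi) construction: phi at a point
    is the sum of exp(-r/tau) kernel contributions from boundary points,
    so -tau*log(phi) is a log-sum-exp softening of the minimum distance
    that tends to the true (unsigned) distance as tau -> 0."""
    S = []
    for p in points:
        phi = sum(math.exp(-abs(p - b) / tau) for b in boundary)
        S.append(-tau * math.log(phi))
    return S
```

With a single nearest boundary point dominating the sum, the recovered value is essentially exact already at moderate $\tau$; the error is controlled by $\tau \log m$ for $m$ contributing boundary points.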
|
1112.3018
|
Open Source CRM Systems for SMEs
|
cs.DB
|
Customer Relationship Management (CRM) systems are very common in large
companies. However, CRM systems are not very common in Small and Medium
Enterprises (SMEs). Most SMEs do not implement CRM systems due to several
reasons, such as lack of knowledge about CRM or lack of financial resources to
implement CRM systems. SMEs have to start incorporating Information Systems (IS)
technology into their business operations in order to improve business value
and gain a competitive advantage over rivals. A CRM system has the potential
to help improve the business value and competitive capabilities of SMEs. Given
the high fixed costs of companies' normal activity, we intend to promote free
and viable solutions for small and medium businesses. In this paper, we explain
the reasons why SMEs do not implement CRM systems and the benefits of using open
source CRM systems in SMEs. We also describe the functionalities of top open
source CRM systems, examining how well these tools fit the needs of SMEs.
|
1112.3052
|
Strategic Arrivals into Queueing Networks: The Network Concert Queueing
Game
|
cs.GT cs.SY math.OC math.PR
|
Queueing networks are typically modelled assuming that the arrival process is
exogenous, and unaffected by admission control, scheduling policies, etc. In
many situations, however, users choose the time of their arrival strategically,
taking delay and other metrics into account. In this paper, we develop a
framework to study such strategic arrivals into queueing networks. We start by
deriving a functional strong law of large numbers (FSLLN) approximation to the
queueing network. In the fluid limit derived, we then study the population game
wherein users strategically choose when to arrive, and upon arrival which of
the K queues to join. The queues start service at given times, which can
potentially be different. We characterize the (strategic) arrival process at
each of the queues, and the price of anarchy of the ensuing strategic arrival
game. We then extend the analysis to multiple populations of users, each with a
different cost metric. The equilibrium arrival profile and price of anarchy are
derived. Finally, we present the methodology for exact equilibrium analysis.
This, however, is tractable for only some simple cases such as two users
arriving at a two-node queueing network, which we then present.
|
1112.3059
|
Data Processing For Atomic Resolution EELS
|
cond-mat.mtrl-sci cs.CV physics.data-an
|
The high beam current and sub-angstrom resolution of aberration-corrected
scanning transmission electron microscopes has enabled electron energy loss
spectroscopic (EELS) mapping with atomic resolution. These spectral maps are
often dose-limited and spatially oversampled, leading to low counts per
channel, and are thus highly sensitive to errors in background estimation.
However, by taking advantage of redundancy in the dataset one can improve
background estimation and increase chemical sensitivity. We consider two such
approaches, linear combination of power laws and local background averaging,
that reduce
background error and improve signal extraction. Principal components analysis
(PCA) can also be used to analyze spectrum images, but the poor
peak-to-background ratio in EELS can lead to serious artifacts if raw EELS data
is PCA filtered. We identify common artifacts and discuss alternative
approaches. These algorithms are implemented within the Cornell Spectrum
Imager, an open source software package for spectroscopic analysis.
|
1112.3062
|
Using Provenance to support Good Laboratory Practice in Grid
Environments
|
cs.DC cs.CE cs.DB
|
Conducting experiments and documenting results is daily business of
scientists. Good and traceable documentation enables other scientists to
confirm procedures and results for increased credibility. Documentation and
scientific conduct are regulated and termed as "good laboratory practice."
Laboratory notebooks are used to record each step in conducting an experiment
and processing data. Originally, these notebooks were paper-based. Due to
computerised research systems, acquired data became more elaborate, thus
increasing the need for electronic notebooks with data storage, computational
features and reliable electronic documentation. As a new approach to this, a
scientific data management system (DataFinder) is enhanced with features for
traceable documentation. Provenance recording is used to meet requirements of
traceability, and this information can later be queried for further analysis.
DataFinder has further important features for scientific documentation: It
employs a heterogeneous and distributed data storage concept. This enables
access to different types of data storage systems (e.g., Grid data
infrastructure, file servers). In this chapter we describe a number of building
blocks that are available or close to finished development. These components
are intended for assembling an electronic laboratory notebook for use in Grid
environments, while retaining maximal flexibility in usage scenarios as well as
maximal compatibility with each other. Through the use of such a
system, provenance can successfully be used to trace the scientific workflow of
preparation, execution, evaluation, interpretation and archiving of research
data. The reliability of research results increases and the research process
remains transparent to remote research partners.
|
1112.3096
|
Joint Source and Relay Precoding Designs for MIMO Two-Way Relaying Based
on MSE Criterion
|
cs.IT math.IT
|
Properly designed precoders can significantly improve the spectral efficiency
of multiple-input multiple-output (MIMO) relay systems. In this paper, we
investigate joint source and relay precoding design based on the
mean-square-error (MSE) criterion in MIMO two-way relay systems, where two
multi-antenna source nodes exchange information via a multi-antenna
amplify-and-forward relay node. This problem is non-convex and its optimal
solution remains unsolved. Aiming to find an efficient way to solve the
problem, we first decouple the primal problem into three tractable
sub-problems, and then propose an iterative precoding design algorithm based on
alternating optimization. The solution to each sub-problem is optimal and
unique, thus the convergence of the iterative algorithm is guaranteed.
Secondly, we propose a structured precoding design to lower the computational
complexity. The proposed precoding structure is able to parallelize the
channels in the multiple access (MAC) phase and broadcast (BC) phase. It thus
reduces the precoding design to a simple power allocation problem. Lastly, for
the special case where only a single data stream is transmitted from each
source node, we present a source-antenna-selection (SAS) based precoding design
algorithm. This algorithm selects only one antenna for transmission from each
source and thus requires lower signalling overhead. Comprehensive simulation is
conducted to evaluate the effectiveness of all the proposed precoding designs.
|
1112.3110
|
GPU-based Image Analysis on Mobile Devices
|
cs.GR cs.CV
|
With the rapid advances in mobile technology many mobile devices are capable
of capturing high quality images and video with their embedded camera. This
paper investigates techniques for real-time processing of the resulting images,
particularly on-device utilizing a graphical processing unit. Issues and
limitations of image processing on mobile devices are discussed, and the
performance of graphical processing units on a range of devices is measured
through a programmable shader implementation of Canny edge detection.
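The per-pixel gradient stage of Canny maps naturally to a fragment shader, since each output pixel depends only on a fixed neighbourhood. A pure-Python sketch of that per-pixel computation (illustrative only, not the paper's shader code):

```python
def sobel_magnitude(img):
    """Per-pixel Sobel gradient magnitude, the gradient stage of Canny
    (a pure-Python sketch of what a fragment shader would compute per
    pixel; border pixels are left at zero)."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a GPU, the two inner loops become a single texture-sampling kernel executed in parallel for every pixel, which is where the measured speed-ups come from.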
|
1112.3115
|
The Diversity Paradox: How Nature Resolves an Evolutionary Dilemma
|
nlin.AO cs.AI q-bio.PE
|
Adaptation to changing environments is a hallmark of biological systems.
Diversity in traits is necessary for adaptation and can influence the survival
of a population faced with novelty. In habitats that remain stable over many
generations, stabilizing selection reduces trait differences within
populations, thereby appearing to remove the diversity needed for heritable
adaptive responses in new environments. Paradoxically, field studies have
documented numerous populations under long periods of stabilizing selection and
evolutionary stasis that have rapidly evolved under changed environmental
conditions. In this article, we review how cryptic genetic variation (CGV)
resolves this diversity paradox by allowing populations in a stable environment
to gradually accumulate hidden genetic diversity that is revealed as trait
differences when environments change. Instead of being in conflict,
environmental stasis supports CGV accumulation and thus appears to facilitate
rapid adaptation in new environments as suggested by recent CGV studies.
Similarly, degeneracy has been found to support both genetic and non-genetic
adaptation at many levels of biological organization. Degenerate, as opposed to
diverse or redundant, ensembles appear functionally redundant in certain
environmental contexts but functionally diverse in others. CGV and degeneracy
paradigms for adaptation are integrated in this review, revealing a common set
of principles that support adaptation at multiple levels of biological
organization. Through a discussion of simulation studies, molecular-based
experimental systems, principles from population genetics, and field
experiments, we demonstrate that CGV and degeneracy reflect complementary
conceptualizations, top-down and bottom-up respectively, of the same basic
phenomenon and arguably capture a universal feature of biological adaptive
processes.
|
1112.3117
|
Pervasive Flexibility in Living Technologies through Degeneracy Based
Design
|
nlin.AO cs.AI
|
The capacity to adapt can greatly influence the success of systems that need
to compensate for damaged parts, learn how to achieve robust performance in new
environments, or exploit novel opportunities that originate from new
technological interfaces or emerging markets. Many of the conditions in which
technology is required to adapt cannot be anticipated during its design stage,
creating a significant challenge for the designer. Inspired by the study of a
range of biological systems, we propose that degeneracy - the realization of
multiple, functionally versatile components with contextually overlapping
functional redundancy - will support adaptation in technologies because it
effects pervasive flexibility, evolutionary innovation, and homeostatic
robustness. We provide examples of degeneracy in a number of rudimentary living
technologies from military socio-technical systems to swarm robotics and we
present design principles - including protocols, loose regulatory coupling, and
functional versatility - that allow degeneracy to arise in both biological and
man-made systems.
|
1112.3134
|
Proposing Cluster_Similarity Method in Order to Find as Much Better
Similarities in Databases
|
cs.DB
|
Different ways of entering data into databases result in duplicate records
that increase database size, a fact that cannot easily be ignored. Several
methods have been used for this purpose. In this paper, we try to increase the
accuracy of duplicate detection by using cluster similarity instead of direct
similarity of fields: clustering is performed on the fields of the database,
and the similarity degree of records is derived from the resulting clusters.
Using the information already present in the database, this method yields a
more logical similarity measure for deficient (incomplete) records and, in
general, improves results by 24% compared with previous methods.
|
1112.3166
|
Higher-Order Momentum Distributions and Locally Affine LDDMM
Registration
|
cs.CV cs.NA
|
To achieve sparse parametrizations that allow intuitive analysis, we aim to
represent deformation with a basis containing interpretable elements, and we
wish to use elements that have the description capacity to represent the
deformation compactly. To accomplish this, we introduce in this paper
higher-order momentum distributions in the LDDMM registration framework. While
the zeroth order moments previously used in LDDMM only describe local
displacement, the first-order momenta that are proposed here represent a basis
that allows local description of affine transformations and subsequent compact
description of non-translational movement in a globally non-rigid deformation.
The resulting representation contains directly interpretable information from
both mathematical and modeling perspectives. We develop the mathematical
construction of the registration framework with higher-order momenta, we show
the implications for sparse image registration and deformation description, and
we provide examples of how the parametrization enables registration with a very
low number of parameters. The capacity and interpretability of the
parametrization using higher-order momenta lead to natural modeling of
articulated movement, and the method promises to be useful for quantifying
ventricle expansion and progressing atrophy during Alzheimer's disease.
|
1112.3173
|
Automatic post-picking improves particle image detection from Cryo-EM
micrographs
|
cs.CV q-bio.BM
|
Cryo-electron microscopy (cryo-EM) with single particle reconstruction is
extensively used to reveal structural information on macromolecular complexes.
Aiming at the highest achievable resolution, state-of-the-art electron
microscopes acquire thousands of high-quality images. Having collected these
data, each single particle must be detected and windowed out. Several fully or
semi-automated approaches have been developed for the selection of particle
images from digitized micrographs. However, they still require laborious manual
post-processing, which will become the major bottleneck for the next generation
of electron microscopes. Instead of focusing on
improvements in automated particle selection from micrographs, we propose a
post-picking step for classifying small windowed images, which are output by
common picking software. A supervised strategy for the classification of
windowed micrograph images into particles and non-particles reduces the manual
workload by orders of magnitude. The method builds on new powerful image
features, and the proper training of an ensemble classifier. A few hundred
training samples are enough to achieve a human-like classification performance.
|
1112.3208
|
Practical Methods for Wireless Network Coding with Multiple Unicast
Transmissions
|
cs.IT math.IT
|
We propose a simple yet effective wireless network coding and decoding
technique for a multiple unicast network. It utilizes spatial diversity through
cooperation between nodes which carry out distributed encoding operations
dictated by generator matrices of linear block codes. In order to exemplify the
technique, we make use of greedy codes over the binary field and show that
arbitrary diversity orders can be flexibly assigned to nodes. Furthermore, we
present the optimal detection rule for the given model that accounts for
intermediate node errors and suggest a low-complexity network decoder using the
sum-product (SP) algorithm. The proposed SP detector exhibits near optimal
performance. We also show asymptotic superiority of network coding over a
method that utilizes the wireless channel in a repetitive manner without
network coding (NC) and give related rate-diversity trade-off curves. Finally,
we extend the given encoding method through selective encoding in order to
obtain extra coding gains.
|
1112.3212
|
A Compressed Sensing Framework of Frequency-Sparse Signals through
Chaotic Systems
|
cs.IT math.IT nlin.CD
|
This paper proposes a compressed sensing (CS) framework for the acquisition
and reconstruction of frequency-sparse signals with chaotic dynamical systems.
The sparse signal acts as an excitation term of a discrete-time chaotic
system and the compressed measurement is obtained by downsampling the system
output. The reconstruction is realized through the estimation of the excitation
coefficients using the principle of impulsive chaos synchronization. The
$l_1$-norm regularized nonlinear least squares method is used to find the
estimate. The
proposed framework is easily implementable and creates secure measurements. The
Henon map is used as an example to illustrate the principle and the
performance.
|
1112.3257
|
Exact Computation of Kullback-Leibler Distance for Hidden Markov Trees
and Models
|
cs.IT math.IT
|
We suggest new recursive formulas to compute the exact value of the
Kullback-Leibler distance (KLD) between two general Hidden Markov Trees (HMTs).
For homogeneous HMTs with regular topology, such as homogeneous Hidden Markov
Models (HMMs), we obtain a closed-form expression for the KLD when no evidence
is given. We generalize our recursive formulas to the case of HMMs conditioned
on the observable variables. Our proposed formulas are validated through
several numerical examples in which we compare the exact KLD value with Monte
Carlo estimations.
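The kind of Monte Carlo estimate the exact recursions are validated against can be sketched as follows: sample sequences from the first HMM and average $\log p(x) - \log q(x)$, each log-likelihood computed by the scaled forward algorithm. All model parameters below are illustrative, not taken from the paper:

```python
import math
import random

def hmm_sample(T, pi, A, B, rng):
    """Draw a length-T observation sequence from an HMM (initial dist pi,
    transition matrix A, emission matrix B)."""
    def draw(p):
        u, acc = rng.random(), 0.0
        for i, p_i in enumerate(p):
            acc += p_i
            if u < acc:
                return i
        return len(p) - 1
    s = draw(pi)
    obs = []
    for _ in range(T):
        obs.append(draw(B[s]))  # emit from the current state
        s = draw(A[s])          # then transition
    return obs

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of obs via the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

def mc_kld(pi1, A1, B1, pi2, A2, B2, T=10, n_samples=2000, seed=0):
    """Monte Carlo estimate of E_p[log p(x) - log q(x)] over length-T
    observation sequences."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = hmm_sample(T, pi1, A1, B1, rng)
        total += hmm_loglik(x, pi1, A1, B1) - hmm_loglik(x, pi2, A2, B2)
    return total / n_samples
```

Identical models give exactly zero term by term, while distinct models give a positive estimate, as the KLD requires.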
|
1112.3265
|
Jointly Predicting Links and Inferring Attributes using a
Social-Attribute Network (SAN)
|
cs.SI physics.soc-ph
|
The effects of social influence and homophily suggest that both network
structure and node attribute information should inform the tasks of link
prediction and node attribute inference. Recently, Yin et al. proposed
Social-Attribute Network (SAN), an attribute-augmented social network, to
integrate network structure and node attributes to perform both link prediction
and attribute inference. They focused on generalizing the random walk with
restart algorithm to the SAN framework and showed improved performance. In this
paper, we extend the SAN framework with several leading supervised and
unsupervised link prediction algorithms and demonstrate performance improvement
for each algorithm on both link prediction and attribute inference. Moreover,
we make the novel observation that attribute inference can help inform link
prediction, i.e., link prediction accuracy is further improved by first
inferring missing attributes. We comprehensively evaluate these algorithms and
compare them with other existing algorithms using a novel, large-scale Google+
dataset, which we make publicly available.
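How attribute nodes feed into structural predictors can be sketched with the simplest such predictor, common neighbours, on a toy SAN. This is a hypothetical illustration of the framework, not one of the paper's evaluated algorithms:

```python
def common_neighbor_scores(edges, pairs):
    """Common-neighbour link-prediction scores on an attribute-augmented
    graph: attribute nodes are treated as ordinary nodes, so two users
    sharing an attribute gain a common neighbour even with no shared
    social contact."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {(u, v): len(adj.get(u, set()) & adj.get(v, set()))
            for (u, v) in pairs}
```

In the toy graph below, a shared attribute raises the score of a user pair beyond what their social links alone would give.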
|
1112.3307
|
Polytope Codes Against Adversaries in Networks
|
cs.IT math.IT
|
Network coding is studied when an adversary controls a subset of nodes in the
network that is limited in size but unknown in location. This problem is shown to be
more difficult than when the adversary controls a given number of edges in the
network, in that linear codes are insufficient. To solve the node problem, the
class of Polytope Codes is introduced. Polytope Codes are constant composition
codes operating over bounded polytopes in integer vector fields. The polytope
structure creates additional complexity, but it induces properties on marginal
distributions of code vectors so that validities of codewords can be checked by
internal nodes of the network. It is shown that Polytope Codes achieve a
cut-set bound for a class of planar networks. It is also shown that this
cut-set bound is not always tight, and a tighter bound is given for an example
network.
|
1112.3308
|
Spatial correlations in attribute communities
|
physics.soc-ph cs.SI
|
Community detection is an important tool for exploring and classifying the
properties of large complex networks and should be of great help for spatial
networks. Indeed, in addition to their location, nodes in spatial networks can
have attributes such as the language for individuals, or any other
socio-economical feature that we would like to identify in communities. We
discuss in this paper a crucial aspect not considered in previous studies:
the possible existence of correlations between space and attributes.
Introducing a simple toy model in which both space and node
attributes are considered, we discuss the effect of space-attribute
correlations on the results of various community detection methods proposed for
spatial networks in this paper and in previous studies. When space is
irrelevant, our model is equivalent to the stochastic block model which has
been shown to display a detectability/non-detectability transition. In the
regime where space dominates the link formation process, most methods can fail
to recover the communities, an effect which is particularly marked when
space-attributes correlations are strong. In this latter case, community
detection methods which remove the spatial component of the network can miss a
large part of the community structure and can lead to incorrect results.
|
1112.3324
|
Generalized Master Equations for Non-Poisson Dynamics on Networks
|
physics.soc-ph cs.SI math.DS
|
The traditional way of studying temporal networks is to aggregate the
dynamics of the edges to create a static weighted network. This implicitly
assumes that the edges are governed by Poisson processes, which is not
typically the case in empirical temporal networks. Consequently, we examine the
effects of non-Poisson inter-event statistics on the dynamics of edges, and we
apply the concept of a generalized master equation to the study of
continuous-time random walks on networks. We show that the equation reduces to
the standard rate equations when the underlying process is Poisson and that the
stationary solution is determined by an effective transition matrix whose
leading eigenvector is easy to calculate. We discuss the implications of our
work for dynamical processes on temporal networks and for the construction of
network diagnostics that take into account their nontrivial stochastic nature.
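The stationary solution determined by the leading eigenvector of an effective transition matrix is indeed easy to calculate, e.g. by power iteration. A toy sketch (the matrix below is illustrative, not derived from any particular inter-event time distribution):

```python
def stationary_distribution(T, iters=200):
    """Power iteration for the leading eigenvector of a column-stochastic
    effective transition matrix T; the normalized fixed vector is the
    stationary solution."""
    n = len(T)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(T[i][j] * p[j] for j in range(n)) for i in range(n)]
        s = sum(p)
        p = [v / s for v in p]  # renormalize to a probability vector
    return p
```

Convergence is geometric in the ratio of the second to the leading eigenvalue, so a couple of hundred iterations is ample for small matrices.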
|
1112.3415
|
Performance of the Eschenauer-Gligor key distribution scheme under an
ON/OFF channel
|
cs.IT math.CO math.IT
|
We investigate the secure connectivity of wireless sensor networks under the
random key distribution scheme of Eschenauer and Gligor. Unlike recent work
which was carried out under the assumption of full visibility, here we assume a
(simplified) communication model where unreliable wireless links are
represented as on/off channels. We present conditions on how to scale the model
parameters so that the network i) has no secure node which is isolated and ii)
is securely connected, both with high probability when the number of sensor
nodes becomes large. The results are given in the form of full zero-one laws,
and constitute the first complete analysis of the EG scheme under non-full
visibility. Through simulations these zero-one laws are shown to be valid also
under a more realistic communication model, i.e., the disk model. The relation
to Gupta and Kumar's conjecture on the connectivity of geometric random
graphs with randomly deleted edges is also discussed.
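The basic quantity of the EG scheme, the probability that two randomly drawn key rings overlap, has a standard closed form that can be checked against brute-force enumeration; the parameter values below are toy choices for illustration:

```python
import math

def share_probability(pool_size, ring_size):
    """Probability that two sensors share at least one key when each is
    independently assigned ring_size distinct keys drawn uniformly from a
    pool of pool_size keys.  math.comb returns 0 when the rings cannot be
    disjoint, in which case the probability is 1."""
    disjoint = (math.comb(pool_size - ring_size, ring_size)
                / math.comb(pool_size, ring_size))
    return 1.0 - disjoint
```

For a pool of 5 keys and rings of size 2, the disjoint probability is C(3,2)/C(5,2) = 3/10, so two sensors share a key with probability 0.7.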
|
1112.3426
|
Percolation on the Signal to Interference Ratio Graph with Fading
|
cs.IT math.IT
|
A wireless communication network is considered where any two nodes are
connected if the signal-to-interference ratio (SIR) between them is greater
than a threshold. We consider the path-loss-plus-fading model of wireless
signal propagation. Assuming that the nodes of the wireless network are
distributed as a Poisson point process (PPP), percolation (formation of an
unbounded connected cluster) on the resulting SIR graph is studied as a
function of the density of the PPP. We study the supercritical regime of
percolation and show that for a small enough threshold, there exists a closed
interval of densities for which percolation happens with non-zero probability.
|
1112.3446
|
Improving Noise Robustness in Subspace-based Joint Sparse Recovery
|
cs.IT math.IT
|
In a multiple measurement vector problem (MMV), where multiple signals share
a common sparse support and are sampled by a common sensing matrix, we can
expect joint sparsity to enable a further reduction in the number of required
measurements. While a diversity gain from joint sparsity had been demonstrated
earlier in the case of a convex relaxation method using an $l_1/l_2$ mixed norm
penalty, only recently was it shown that similar diversity gain can be achieved
by greedy algorithms if we combine greedy steps with a MUSIC-like subspace
criterion. However, the main limitation of these hybrid algorithms is that they
often require a large number of snapshots or a high signal-to-noise ratio (SNR)
for an accurate subspace as well as partial support estimation. One of the main
contributions of this work is to show that the noise robustness of these
algorithms can be significantly improved by allowing sequential subspace
estimation and support filtering, even when the number of snapshots is
insufficient. Numerical simulations show that a novel sequential compressive
MUSIC (sequential CS-MUSIC) that combines the sequential subspace estimation
and support filtering steps significantly outperforms the existing greedy
algorithms and is quite comparable with computationally expensive state-of-the-art
algorithms.
|
1112.3471
|
A Nonstochastic Information Theory for Communication and State
Estimation
|
cs.SY cs.IT math.IT math.OC
|
In communications, unknown variables are usually modelled as random
variables, and concepts such as independence, entropy and information are
defined in terms of the underlying probability distributions. In contrast,
control theory often treats uncertainties and disturbances as bounded unknowns
having no statistical structure. The area of networked control combines both
fields, raising the question of whether it is possible to construct meaningful
analogues of stochastic concepts such as independence, Markovness, entropy and
information without assuming a probability space. This paper introduces a
framework for doing so, leading to the construction of a maximin information
functional for nonstochastic variables. It is shown that the largest maximin
information rate through a memoryless, error-prone channel in this framework
coincides with the block-coding zero-error capacity of the channel. Maximin
information is then used to derive tight conditions for uniformly estimating
the state of a linear time-invariant system over such a channel, paralleling
recent results of Matveev and Savkin.
|
1112.3475
|
Discovering universal statistical laws of complex networks
|
physics.soc-ph cs.SI q-bio.QM
|
Different network models have been suggested for the topology underlying
complex interactions in natural systems. These models are aimed at replicating
specific statistical features encountered in real-world networks. However, it
is rarely considered to which degree the results obtained for one particular
network class can be extrapolated to real-world networks. We address this issue
by comparing different classical and more recently developed network models
with respect to their generalisation power, which we identify with large
structural variability and absence of constraints imposed by the construction
scheme. After having identified the most variable networks, we address the
issue of which constraints are common to all network classes and are thus
suitable candidates for being generic statistical laws of complex networks. In
fact, we find that generic, not model-related dependencies between different
network characteristics do exist. This allows one, for instance, to infer global
features from local ones using regression models trained on networks with high
generalisation power. Our results confirm and extend previous findings
regarding the synchronisation properties of neural networks. Our method seems
especially relevant for large networks, which are difficult to map completely,
like the neural networks in the brain. The structure of such large networks
cannot be fully sampled with the present technology. Our approach provides a
method to estimate global properties of under-sampled networks with good
approximation. Finally, we demonstrate on three different data sets (C.
elegans' neuronal network, R. prowazekii's metabolic network, and a network of
synonyms extracted from Roget's Thesaurus) that real-world networks have
statistical relations compatible with those obtained using regression models.
|
1112.3555
|
Decentralized Supervisory Control of Discrete Event Systems for
Bisimulation Equivalence
|
cs.SY
|
In decentralized systems, branching behaviors naturally arise due to
communication, unmodeled dynamics and system abstraction, which cannot be
adequately captured by the traditional sequencing-based language equivalence.
As a finer behavior equivalence than language equivalence, bisimulation not
only allows the full set of branching behaviors but also explicitly specifies
the properties in terms of temporal logic such as CTL* and mu-calculus. This
observation motivates us to consider the decentralized control of discrete
event systems (DESs) for bisimulation equivalence in this paper, where the
plant and the specification are taken to be nondeterministic and the supervisor
is taken to be deterministic. An automata-based control framework is
formalized, upon which we develop three architectures with respect to different
decision fusion rules for the decentralized bisimilarity control, named a
conjunctive architecture, a disjunctive architecture and a general
architecture. Under these three architectures, necessary and sufficient
conditions for the existence of decentralized bisimilarity supervisors are
derived respectively, which extend the traditional results of supervisory
control from language equivalence to bisimulation equivalence. It is shown that
these conditions can be verified with exponential complexity. Furthermore, the
synthesis of bisimilarity supervisors is presented when the existence condition
holds.
|
1112.3599
|
Cooperative Network Navigation: Fundamental Limit and its Geometrical
Interpretation
|
cs.IT math.IT
|
Localization and tracking of moving nodes via network navigation gives rise
to a new paradigm, where nodes exploit both temporal and spatial cooperation to
infer their positions based on intra- and inter-node measurements. While such
cooperation can significantly improve the performance, it imposes intricate
information processing that impedes network design and operation. In this
paper, we establish a theoretical framework for cooperative network navigation
and determine the fundamental limits of navigation accuracy using equivalent
Fisher information analysis. We then introduce the notion of carry-over
information, and provide a geometrical interpretation of the navigation
information and its evolution in time. Our framework unifies the navigation
information obtained from temporal and spatial cooperation, leading to a deep
understanding of information evolution in the network and the benefit of
cooperation.
|
1112.3644
|
Community structure and scale-free collections of Erd\"os-R\'enyi graphs
|
cs.SI physics.soc-ph
|
Community structure plays a significant role in the analysis of social
networks and similar graphs, yet this structure is little understood and not
well captured by most models. We formally define a community to be a subgraph
that is internally highly connected and has no deeper substructure. We use
tools of combinatorics to show that any such community must contain a dense
Erd\"os-R\'enyi (ER) subgraph. Based on mathematical arguments, we hypothesize
that any graph with a heavy-tailed degree distribution and community structure
must contain a scale-free collection of dense ER subgraphs. These theoretical
observations are well corroborated by empirical evidence. From this, we propose
the Block Two-Level Erd\"os-R\'enyi (BTER) model, and demonstrate that it
accurately captures the observable properties of many real-world social
networks.
|
1112.3670
|
Echoes of power: Language effects and power differences in social
interaction
|
cs.SI cs.CL physics.soc-ph
|
Understanding social interaction within groups is key to analyzing online
communities. Most current work focuses on structural properties: who talks to
whom, and how such interactions form larger network structures. The
interactions themselves, however, generally take place in the form of natural
language --- either spoken or written --- and one could reasonably suppose that
signals manifested in language might also provide information about roles,
status, and other aspects of the group's dynamics. To date, however, finding
such domain-independent language-based signals has been a challenge.
Here, we show that in group discussions power differentials between
participants are subtly revealed by how much one individual immediately echoes
the linguistic style of the person they are responding to. Starting from this
observation, we propose an analysis framework based on linguistic coordination
that can be used to shed light on power relationships and that works
consistently across multiple types of power --- including a more "static" form
of power based on status differences, and a more "situational" form of power in
which one individual experiences a type of dependence on another. Using this
framework, we study how conversational behavior can reveal power relationships
in two very different settings: discussions among Wikipedians and arguments
before the U.S. Supreme Court.
|
1112.3697
|
Insights from Classifying Visual Concepts with Multiple Kernel Learning
|
cs.CV
|
Combining information from various image features has become a standard
technique in concept recognition tasks. However, the optimal way of fusing the
resulting kernel functions is usually unknown in practical applications.
Multiple kernel learning (MKL) techniques allow one to determine an optimal linear
combination of such similarity matrices. Classical approaches to MKL promote
sparse mixtures. Unfortunately, so-called 1-norm MKL variants are often
observed to be outperformed by an unweighted sum kernel. The contribution of
this paper is twofold: We apply a recently developed non-sparse MKL variant to
state-of-the-art concept recognition tasks within computer vision. We provide
insights on benefits and limits of non-sparse MKL and compare it against its
direct competitors, the sum kernel SVM and the sparse MKL. We report empirical
results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo
Annotation challenge data sets. About to be submitted to PLoS ONE.
|
1112.3712
|
Analysis and Extension of Arc-Cosine Kernels for Large Margin
Classification
|
cs.LG
|
We investigate a recently proposed family of positive-definite kernels that
mimic the computation in large neural networks. We examine the properties of
these kernels using tools from differential geometry; specifically, we analyze
the geometry of surfaces in Hilbert space that are induced by these kernels.
When this geometry is described by a Riemannian manifold, we derive results for
the metric, curvature, and volume element. Interestingly, though, we find that
the simplest kernel in this family does not admit such an interpretation. We
explore two variations of these kernels that mimic computation in neural
networks with different activation functions. We experiment with these new
kernels on several data sets and highlight their general trends in performance
for classification.
|
1112.3714
|
Nonnegative Matrix Factorization for Semi-supervised Dimensionality
Reduction
|
cs.LG
|
We show how to incorporate information from labeled examples into nonnegative
matrix factorization (NMF), a popular unsupervised learning algorithm for
dimensionality reduction. In addition to mapping the data into a space of lower
dimensionality, our approach aims to preserve the nonnegative components of the
data that are important for classification. We identify these components from
the support vectors of large-margin classifiers and derive iterative updates to
preserve them in a semi-supervised version of NMF. These updates have a simple
multiplicative form like their unsupervised counterparts; they are also
guaranteed at each iteration to decrease their loss function---a weighted sum
of I-divergences that captures the trade-off between unsupervised and
supervised learning. We evaluate these updates for dimensionality reduction
when they are used as a precursor to linear classification. In this role, we
find that they yield much better performance than their unsupervised
counterparts. We also find one unexpected benefit of the low dimensional
representations discovered by our approach: often they yield more accurate
classifiers than both ordinary and transductive SVMs trained in the original
input space.
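The multiplicative updates described above generalize the classical unsupervised ones. As a point of reference, a minimal sketch of the unsupervised Lee-Seung core is given below, using the Frobenius loss rather than the paper's I-divergence and omitting the label-derived terms; all names are illustrative:

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Unsupervised NMF V ~= W @ H via Lee-Seung multiplicative
    updates (Frobenius loss). The semi-supervised variant of the paper
    adds label-dependent terms to these updates; this sketch shows only
    the shared multiplicative core."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-4   # nonnegative random init
    H = rng.random((r, n)) + 1e-4
    eps = 1e-10                     # avoid division by zero
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity and do not
        # increase the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are purely multiplicative, W and H stay elementwise nonnegative throughout, which is the property the semi-supervised extension also relies on.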
|
1112.3730
|
Stability of Iterative Decoding of Multi-Edge Type Doubly-Generalized
LDPC Codes Over the BEC
|
cs.IT math.IT
|
Using the EXIT chart approach, a necessary and sufficient condition is
developed for the local stability of iterative decoding of multi-edge type
(MET) doubly-generalized low-density parity-check (D-GLDPC) code ensembles. In
such code ensembles, the use of arbitrary linear block codes as component codes
is combined with the further design of local Tanner graph connectivity through
the use of multiple edge types. The stability condition for these code
ensembles is shown to be succinctly described in terms of the value of the
spectral radius of an appropriately defined polynomial matrix.
|
1112.3810
|
Energy and Spectral Efficiency of Very Large Multiuser MIMO Systems
|
cs.IT math.IT
|
A multiplicity of autonomous terminals simultaneously transmits data streams
to a compact array of antennas. The array uses imperfect channel-state
information derived from transmitted pilots to extract the individual data
streams. The power radiated by the terminals can be made inversely proportional
to the square-root of the number of base station antennas with no reduction in
performance. In contrast, if perfect channel-state information were available,
the power could be made inversely proportional to the number of antennas. Lower
capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF) and
minimum mean-square error (MMSE) detection are derived. An MRC receiver normally
performs worse than ZF and MMSE. However, as power levels are reduced, the
cross-talk introduced by the inferior maximum-ratio receiver eventually falls
below the noise level and this simple receiver becomes a viable option. The
tradeoff between the energy efficiency (as measured in bits/J) and spectral
efficiency (as measured in bits/channel use/terminal) is quantified. It is
shown that the use of moderately large antenna arrays can improve the spectral
and energy efficiency by orders of magnitude compared to a single-antenna
system.
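The linear receivers compared above can be illustrated with a toy detection sketch for a noiseless uplink model y = H s (K single-antenna users, M base-station antennas); the function and parameter names are assumptions for illustration, not the paper's notation:

```python
import numpy as np

def detect(y, H, method="mrc"):
    """Toy linear detection for an uplink y = H s + n with K users and
    M base-station antennas. Illustrative sketch only."""
    if method == "mrc":
        # Maximum-ratio combining: project onto each user's channel.
        return H.conj().T @ y
    if method == "zf":
        # Zero-forcing: invert the channel (pseudo-inverse).
        return np.linalg.pinv(H) @ y
    raise ValueError(f"unknown method: {method}")

rng = np.random.default_rng(0)
M, K = 64, 4                          # many more antennas than users
H = rng.standard_normal((M, K))
s = np.array([1.0, -1.0, 1.0, -1.0])  # transmitted symbols
y = H @ s                             # noiseless received vector
s_zf = detect(y, H, "zf")             # recovers s exactly (no noise)
s_mrc = detect(y, H, "mrc")           # s scaled by ~M plus cross-talk
```

In the noiseless case ZF recovers the symbols exactly, while MRC leaves residual cross-talk between users; as the abstract notes, that cross-talk becomes negligible once transmit power (and hence noise dominance) is scaled down with a large antenna array.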
|
1112.3839
|
Optimal Structured Static State-Feedback Control Design with Limited
Model Information for Fully-Actuated Systems
|
math.OC cs.SY
|
We introduce the family of limited model information control design methods,
which construct controllers by accessing the plant's model in a constrained
way, according to a given design graph. We investigate the closed-loop
performance achievable by such control design methods for fully-actuated
discrete-time linear time-invariant systems, under a separable quadratic cost.
We restrict our study to control design methods which produce structured static
state feedback controllers, where each subcontroller can at least access the
state measurements of those subsystems that affect its corresponding subsystem.
We compute the optimal control design strategy (in terms of the competitive
ratio and domination metrics) when the control designer has access to the local
model information and the global interconnection structure of the
plant-to-be-controlled. Lastly, we study the trade-off between the amount of
model information exploited by a control design method and the best closed-loop
performance (in terms of the competitive ratio) of controllers it can produce.
|
1112.3867
|
The use of information theory in evolutionary biology
|
q-bio.PE cs.IT math.IT q-bio.NC
|
Information is a key concept in evolutionary biology. Information is stored
in biological organisms' genomes, and is used to generate the organism as well as
to maintain and control it. Information is also "that which evolves". When a
population adapts to a local environment, information about this environment is
fixed in a representative genome. However, when an environment changes,
information can be lost. At the same time, information is processed by animal
brains to survive in complex environments, and the capacity for information
processing also evolves. Here I review applications of information theory to
the evolution of proteins as well as to the evolution of information processing
in simulated agents that adapt to perform a complex task.
|
1112.3946
|
Strongly Convex Programming for Exact Matrix Completion and Robust
Principal Component Analysis
|
cs.IT cs.LG math.IT
|
The common task in matrix completion (MC) and robust principal component
analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These
problems gained great attention from various areas in applied sciences
recently, especially after the publication of the pioneering works of Cand\`es
et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex
optimizations lead to the exact low-rank matrix recovery under suitable
conditions. In this paper, we extend this result by showing that strongly
convex optimizations can guarantee the exact low-rank matrix recovery as well.
The result in this paper not only provides sufficient conditions under which
the strongly convex models lead to the exact low-rank matrix recovery, but also
guides us on how to choose suitable parameters in practical algorithms.
|
1112.3972
|
Developing Autonomic Properties for Distributed Pattern-Recognition
Systems with ASSL: A Distributed MARF Case Study
|
cs.DC cs.CV cs.SE
|
In this paper, we discuss our research towards developing special properties
that introduce autonomic behavior in pattern-recognition systems. In our
approach we use ASSL (Autonomic System Specification Language) to formally
develop such properties for DMARF (Distributed Modular Audio Recognition
Framework). These properties enhance DMARF with an autonomic middleware that
manages the four stages of the framework's pattern-recognition pipeline. DMARF
is a biologically inspired system employing pattern recognition, signal
processing, and natural language processing that helps us process audio, textual,
or imagery data needed by a variety of scientific applications, e.g., biometric
applications. In that context, the notion of an autonomic DMARF (ADMARF) can be
employed by autonomous and robotic systems that theoretically require little to
no human intervention beyond data collection for pattern analysis
and observing the results. In this article, we explain the ASSL specification
models for the autonomic properties of DMARF.
|
1112.4002
|
Conjoining Speeds up Information Diffusion in Overlaying Social-Physical
Networks
|
cs.SI physics.soc-ph
|
We study the diffusion of information in an overlaying social-physical
network. Specifically, we consider the following set-up: There is a physical
information network where information spreads amongst people through
conventional communication media (e.g., face-to-face communication, phone
calls), and conjoint to this physical network, there are online social networks
where information spreads via web sites such as Facebook, Twitter, FriendFeed,
YouTube, etc. We quantify the size and the critical threshold of information
epidemics in this conjoint social-physical network by assuming that information
diffuses according to the SIR epidemic model. One interesting finding is that
even if there is no percolation in the individual networks, percolation (i.e.,
information epidemics) can take place in the conjoint social-physical network.
We also show, both analytically and experimentally, that the fraction of
individuals who receive an item of information (started from an arbitrary node)
is significantly larger in the conjoint social-physical network case, as
compared to the case where the networks are disjoint. These findings reveal
that conjoining the physical network with online social networks can have a
dramatic impact on the speed and scale of information diffusion.
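The SIR diffusion process assumed above can be sketched minimally on a graph given as an adjacency dict; here the conjoint social-physical network would simply be the union of the physical and online edge sets. The discrete-time formulation, function name, and parameters are illustrative simplifications, not the paper's model:

```python
import random

def sir_spread(adj, seed_node, p_transmit, rng=None):
    """Discrete-time SIR cascade on a graph (adjacency dict).
    Each infected node gets one chance to pass the information item to
    each susceptible neighbour with probability p_transmit, then
    recovers. Returns the set of nodes ever infected: the cascade size,
    i.e. the 'information epidemic' of the abstract."""
    if rng is None:
        rng = random.Random(0)
    infected, recovered = {seed_node}, set()
    while infected:
        new = set()
        for u in infected:
            for v in adj.get(u, ()):
                if v not in infected and v not in recovered and v not in new:
                    if rng.random() < p_transmit:
                        new.add(v)
        recovered |= infected
        infected = new
    return recovered
```

Running this on the union graph versus each component network separately gives a direct way to observe the paper's qualitative finding: a cascade that dies out in each network alone can percolate through their conjunction.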
|
1112.4011
|
Coherence in Large-Scale Networks: Dimension-Dependent Limitations of
Local Feedback
|
math.OC cs.MA cs.SY
|
We consider distributed consensus and vehicular formation control problems.
Specifically we address the question of whether local feedback is sufficient to
maintain coherence in large-scale networks subject to stochastic disturbances.
We define macroscopic performance measures which are global quantities that
capture the notion of coherence; a notion of global order that quantifies how
closely the formation resembles a solid object. We consider how these measures
scale asymptotically with network size in the topologies of regular lattices in
1, 2 and higher dimensions, with vehicular platoons corresponding to the 1
dimensional case. A common phenomenon appears where a higher spatial dimension
implies a more favorable scaling of coherence measures, with a dimension of 3
being necessary to achieve coherence in consensus and vehicular formations
under certain conditions. In particular, we show that it is impossible to have
large coherent one dimensional vehicular platoons with only local feedback. We
analyze these effects in terms of the underlying energetic modes of motion,
showing that they take the form of large temporal and spatial scales resulting
in an accordion-like motion of formations. A conclusion can be drawn that in
low spatial dimensions, local feedback is unable to regulate large-scale
disturbances, but it can in higher spatial dimensions. This phenomenon is
distinct from, and unrelated to, string instability issues, which are commonly
encountered in control problems for automated highways.
|
1112.4020
|
Clustering and Latent Semantic Indexing Aspects of the Nonnegative
Matrix Factorization
|
cs.LG
|
This paper provides theoretical support for the clustering aspect of
nonnegative matrix factorization (NMF). By utilizing the Karush-Kuhn-Tucker
optimality conditions, we show that the NMF objective is equivalent to a graph
clustering objective, so the clustering aspect of NMF has a solid
justification. Unlike previous approaches, which usually discard the
nonnegativity constraints, our approach guarantees that the stationary point
used in deriving the equivalence lies in the feasible region of the
nonnegative orthant. Additionally, since the clustering capability of a matrix
decomposition technique can sometimes imply a latent semantic indexing (LSI)
capability, we also evaluate the LSI aspect of NMF by showing its ability to
solve the synonymy and polysemy problems on synthetic datasets. A more
extensive evaluation is conducted by comparing the LSI performance of NMF
and the singular value decomposition (SVD), the standard LSI method, using some
standard datasets.
|
1112.4031
|
Application of Data Mining Techniques to a Selected Business
Organisation with Special Reference to Buying Behaviour
|
cs.DB cs.AI
|
Data mining is the exploration and analysis of large data sets in order to
discover meaningful patterns and rules. Many organizations now use data mining
techniques to extract meaningful patterns from their databases. The present
paper studies how data mining techniques can be applied to a large database,
yielding characteristic behavioural patterns whose analysis is useful to the
organization. This paper examines the results of applying association rule
mining, rule induction, and the Apriori algorithm to the database of a
shopping mall. Market basket analysis is performed using the above-mentioned
techniques, and some important results, such as buying behaviour patterns,
are found.
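The Apriori algorithm used above for market basket analysis can be sketched as a minimal frequent-itemset miner; the transactions, item names, and support threshold below are illustrative assumptions, not the paper's data:

```python
def apriori(transactions, min_support):
    """Minimal Apriori frequent-itemset miner (illustrative sketch).
    Grows candidate itemsets level by level and keeps those whose
    support (fraction of transactions containing them) meets the
    threshold."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # Join frequent (k-1)-itemsets to form k-item candidates.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result

baskets = [{"milk", "bread"}, {"milk", "bread", "butter"},
           {"bread"}, {"milk"}]
frequent_sets = apriori(baskets, min_support=0.5)
```

Association rules for buying behaviour (e.g. "customers who buy milk also buy bread") are then read off the frequent itemsets by comparing supports of an itemset and its subsets.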
|
1112.4035
|
Distributed Source Localization in Wireless Underground Sensor Networks
|
cs.IT math.IT
|
Node localization plays an important role in many practical applications of
wireless underground sensor networks (WUSNs), such as finding the locations of
earthquake epicenters, underground explosions, and microseismic events in
mines. It is more difficult to obtain the time-difference-of-arrival (TDOA)
measurements in WUSNs than in terrestrial wireless sensor networks because of
the unfavorable channel characteristics in the underground environment. The
robust Chinese remainder theorem (RCRT) has been shown to be an effective tool
for solving the phase ambiguity problem and frequency estimation problem in
wireless sensor networks. In this paper, the RCRT is used to robustly estimate
the TDOA or range difference in WUSNs, thereby improving the ranging accuracy
in such networks. After obtaining the range difference, distributed source
localization algorithms based on a diffusion strategy are proposed to decrease
the communication cost while satisfying the localization accuracy requirement.
Simulation results confirm the validity and efficiency of the proposed methods.
|
1112.4055
|
Fuzzy cellular model for on-line traffic simulation
|
cs.ET cs.SY nlin.CG
|
This paper introduces a fuzzy cellular model of road traffic that was
intended for on-line applications in traffic control. The presented model uses
fuzzy sets theory to deal with uncertainty of both input data and simulation
results. Vehicles are modelled individually, thus various classes of them can
be taken into consideration. In the proposed approach, all parameters of
vehicles are described by means of fuzzy numbers. The model was implemented in
a simulation of the vehicle queue discharge process. Changes in the queue
length were analysed in this experiment and compared to the results of the
NaSch cellular automaton model.
|
1112.4057
|
Performance Evaluation of Road Traffic Control Using a Fuzzy Cellular
Model
|
cs.AI cs.SY
|
In this paper a method is proposed for performance evaluation of road traffic
control systems. The method is designed to be implemented in an on-line
simulation environment, which enables optimisation of adaptive traffic control
strategies. Performance measures are computed using a fuzzy cellular traffic
model, formulated as a hybrid system combining cellular automata and fuzzy
calculus. Experimental results show that the introduced method allows the
performance to be evaluated using imprecise traffic measurements. Moreover, the
fuzzy definitions of performance measures are convenient for uncertainty
determination in traffic control decisions.
|