id | title | categories | abstract |
|---|---|---|---|
1308.1389 | Degrees of Freedom for the MIMO Multi-way Relay Channel | cs.IT math.IT | This paper investigates the degrees of freedom (DoF) of the L-cluster, K-user
MIMO multi-way relay channel, where users in each cluster wish to exchange
messages within the cluster, and they can only communicate through the relay. A
novel DoF upper bound is derived by providing users with carefully designed
genie information. Achievable DoF is identified using signal space alignment
and multiple-access transmission. For the two-cluster MIMO multi-way relay
channel with two users in each cluster, DoF is established for the general case
when users and the relay have arbitrary numbers of antennas, and it is shown
that the DoF upper bound can be achieved using signal space alignment or
multiple-access transmission, or a combination of both. The result is then
generalized to the three user case. For the L-cluster K-user MIMO multi-way
relay channel in the symmetric setting, conditions under which the DoF upper
bound can be achieved are established. In addition to being shown to be tight
in a variety of scenarios of interest for the multi-way relay channel, the newly derived upper bound also establishes the optimality of several previously
established achievable DoF results for multiuser relay channels that are
special cases of the multi-way relay channel.
|
1308.1391 | Low-Dimensional Reconciliation for Continuous-Variable Quantum Key
Distribution | quant-ph cs.IT math.IT | We propose an efficient logical layer-based reconciliation method for
continuous-variable quantum key distribution (CVQKD) to extract binary
information from correlated Gaussian variables. We demonstrate that by
operating on the raw-data level, the noise of the quantum channel can be
corrected in the low-dimensional (scalar) space and the reconciliation can be
extended to arbitrary dimensions. CVQKD systems allow unconditionally secret communication over standard telecommunication networks. To exploit the real potential of CVQKD, a robust reconciliation technique is needed; such a technique is currently unavailable, which makes it impossible to reach the real performance of CVQKD protocols. Reconciliation is a post-processing step, separated from the transmission of quantum states, which aims to derive the secret
key from the raw data. The reconciliation process of correlated Gaussian
variables is a complex problem that requires either tomography in the physical
layer that is intractable in a practical scenario, or high-cost calculations in
the multidimensional spherical space with strict dimensional limitations. To
avoid these issues we define the low-dimensional reconciliation. We prove that
the error probability of one-dimensional reconciliation is zero in any
practical CVQKD scenario and that it provides unconditional security. The results allow the currently available key rates and transmission distances of CVQKD to be improved significantly.
|
1308.1418 | A Latent Social Approach to YouTube Popularity Prediction | cs.SI cs.MM cs.NI physics.soc-ph | Current works on Information Centric Networking assume the spectrum of
caching strategies under the Least Recently/Frequently Used (LRFU) scheme as
the de-facto standard, due to the ease of implementation and easier analysis of
such strategies. In this paper we predict the popularity distribution of
YouTube videos within a campus network. We explore two broad approaches in
predicting the popularity of videos in the network: consensus approaches based
on aggregate behavior in the network, and social approaches based on the
information diffusion over an implicit network. We measure the performance of
our approaches under a simple caching framework by picking the k most popular
videos according to our predicted distribution and calculating the hit rate on
the cache. We develop our approach by first incorporating video inter-arrival
time (based on the power-law distribution governing the transmission time
between two receivers of the same message in scale-free networks) to the
baseline (LRFU), then combining with an information diffusion model over the
inferred latent social graph that governs diffusion of videos in the network.
We apply techniques from latent social network inference to learn the sharing
probabilities between users in the network and apply a virus propagation model
borrowed from mathematical epidemiology to estimate the number of times a video
will be accessed in the future. Our approach gives rise to a 14% hit rate
improvement over the baseline.
|
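The cache evaluation described in the entry above (cache the k videos predicted to be most popular, then measure the hit rate) can be sketched in a few lines; the function and variable names below are illustrative, not taken from the paper.

```python
# Minimal sketch of the hit-rate evaluation (illustrative names, not the paper's code).
def cache_hit_rate(predicted_rank, access_log, k):
    """Cache the k videos predicted to be most popular and return the
    fraction of requests in access_log that are served from the cache."""
    cache = set(predicted_rank[:k])
    hits = sum(1 for video in access_log if video in cache)
    return hits / len(access_log) if access_log else 0.0

# Toy example: a predicted ranking and a request trace.
rank = ["v3", "v1", "v7", "v2", "v9"]
log = ["v1", "v3", "v5", "v3", "v2", "v8", "v1"]
print(cache_hit_rate(rank, log, k=3))   # 4 hits out of 7 requests ~= 0.57
```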
1308.1440 | Graywulf: A platform for federated scientific databases and services | cs.DB | Many fields of science rely on relational database management systems to
analyze, publish and share data. Since RDBMS are originally designed for, and their development directions are primarily driven by, business use cases, they often lack features that are very important for scientific applications. Horizontal scalability is probably the most important missing feature, which makes it challenging to adapt traditional relational database systems to ever-growing data sizes. Due to the limited support of array data types and metadata
management, successful application of RDBMS in science usually requires the
development of custom extensions. While some of these extensions are specific
to the field of science, the majority of them could easily be generalized and
reused in other disciplines. With the Graywulf project we intend to target
several goals. We are building a generic platform that offers reusable
components for efficient storage, transformation, statistical analysis and
presentation of scientific data stored in Microsoft SQL Server. Graywulf also
addresses the distributed computational issues arising from current RDBMS
technologies. The current version supports load balancing of simple queries and
parallel execution of partitioned queries over a set of mirrored databases.
Uniform user access to the data is provided through a web based query interface
and a data surface for software clients. Queries are formulated in a slightly
modified syntax of SQL that offers a transparent view of the distributed data.
The software library consists of several components that can be reused to
develop complex scientific data warehouses: a system registry, administration
tools to manage entire database server clusters, a sophisticated workflow
execution framework, and a SQL parser library.
|
1308.1464 | ManyClaw: Slicing and dicing Riemann solvers for next generation highly
parallel architectures | cs.CE cs.MS | Next generation computer architectures will include an order of magnitude more intra-node parallelism; however, many application programmers have a difficult
time keeping their codes current with the state-of-the-art machines. In this
context, we analyze Hyperbolic PDE solvers, which are used in the solution of
many important applications in science and engineering. We present ManyClaw, a
project intended to explore the exploitation of intra-node parallelism in
hyperbolic PDE solvers via the Clawpack software package for solving hyperbolic
PDEs. Our goal is to separate the low level parallelism and the physical
equations thus providing users the capability to leverage intra-node
parallelism without explicitly writing code to take advantage of newer
architectures.
|
1308.1471 | Application of Inventory Management Principles for Efficient Data
Placement in Storage Networks | cs.DB | The principles and strategies found in material management are comparable and analogous to those of data management. This paper concentrates on the conversion of product inventory management principles into data inventory management principles. Efforts were made to enumerate the various influencing parameters that would be appropriate to consider when formulating a data inventory model.
|
1308.1482 | Increasing Robustness of the Anesthesia Process from Difference
Patient's Delay Using a State-Space Model Predictive Controller | cs.SY | The process of anesthesia is nonlinear with time delay and also there are
some constraints which have to be considered in calculating administrative drug
dosage. We present an Extended Kalman Filter (EKF) observer to estimate drug
concentration in the patient's body and use this estimation in a state-space
based Model of Predictive Controller (MPC) for controlling the depth of
anesthesia. Bispectral Index (BIS) is used as a patient consciousness index and
propofol as an anesthetic agent. To evaluate the performance of the proposed controller, the results are compared with those of a standard MPC controller. The results demonstrate that state-space MPC with the EKF estimator for controlling the anesthesia process can significantly increase robustness against deviations in patients' delays in comparison with standard MPC.
|
1308.1484 | A Multi-Swarm Cellular PSO based on Clonal Selection Algorithm in
Dynamic Environments | cs.NE cs.AI | Many real-world problems are dynamic optimization problems. In this case, the
optima in the environment change dynamically. Therefore, traditional
optimization algorithms are unable to track and find the optima. In this paper, a new
multi-swarm cellular particle swarm optimization based on clonal selection
algorithm (CPSOC) is proposed for dynamic environments. In the proposed
algorithm, the search space is partitioned into cells by a cellular automaton.
Clustered particles in each cell, which make a sub-swarm, are evolved by the
particle swarm optimization and clonal selection algorithm. Experimental
results on the Moving Peaks Benchmark demonstrate the superiority of CPSOC over popular existing methods.
|
1308.1503 | ALOHA Random Access that Operates as a Rateless Code | cs.IT math.IT | Various applications of wireless Machine-to-Machine (M2M) communications have
rekindled the research interest in random access protocols, suitable to support
a large number of connected devices. Slotted ALOHA and its derivatives
represent a simple solution for distributed random access in wireless networks.
Recently, a framed version of slotted ALOHA gained renewed interest due to the
incorporation of successive interference cancellation (SIC) in the scheme,
which resulted in substantially higher throughputs. Based on similar principles
and inspired by the rateless coding paradigm, a frameless approach for
distributed random access in slotted ALOHA framework is described in this
paper. The proposed approach shares an operational analogy with rateless
coding, expressed both through the user access strategy and the adaptive length
of the contention period, with the objective to end the contention when the
instantaneous throughput is maximized. The paper presents the related analysis,
providing heuristic criteria for terminating the contention period and showing
that very high throughputs can be achieved, even for a low number of
contending users. The demonstrated results potentially have more direct
practical implications compared to the approaches for coded random access that
lead to high throughputs only asymptotically.
|
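A toy simulation of the frameless slotted ALOHA scheme summarized above, with successive interference cancellation and a throughput-based stopping rule; the access probability and the termination heuristic used here are illustrative assumptions, not the paper's exact design.

```python
import random

def frameless_aloha(num_users=100, p_access=0.03, max_slots=400, seed=1):
    """Toy frameless slotted ALOHA with successive interference cancellation (SIC).
    Users transmit in each slot independently with probability p_access; singleton
    slots are decoded and the decoded user's other transmissions are cancelled.
    The contention ends when the instantaneous throughput (decoded users per
    elapsed slot) has stopped improving for a while -- a stand-in for the
    paper's termination criterion."""
    random.seed(seed)
    slots, decoded = [], set()
    best_throughput, best_slot = 0.0, 0
    for t in range(1, max_slots + 1):
        slots.append({u for u in range(num_users)
                      if u not in decoded and random.random() < p_access})
        progress = True
        while progress:                       # SIC: peel singleton slots repeatedly
            progress = False
            for s in slots:
                if len(s) == 1:
                    user = s.pop()
                    decoded.add(user)
                    for other in slots:
                        other.discard(user)   # cancel this user's replicas
                    progress = True
        throughput = len(decoded) / t
        if throughput > best_throughput:
            best_throughput, best_slot = throughput, t
        elif t - best_slot > 50:              # no improvement: end the contention
            break
    return len(decoded), t, best_throughput

print(frameless_aloha())
```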
1308.1507 | Logical analysis of natural language semantics to solve the problem of
computer understanding | cs.CL | An object-oriented approach to creating a natural language understanding system is considered. The understanding program is a formal system built on the basis of predicate calculus, with Horn clauses used as well-formed formulas. Inference is based on the resolution principle. Sentences of natural language are represented as a set of typical predicates. These predicates
describe physical objects and processes, abstract objects, categories and
semantic relations between objects. Predicates for concrete assertions are
saved in a database. To describe the semantics of classes for physical objects,
abstract concepts and processes, a knowledge base is applied. The proposed
representation of natural language sentences is a semantic net whose nodes are typical predicates. This approach is promising because, firstly, such typification of nodes considerably simplifies the construction of processing algorithms and object descriptions; secondly, the effectiveness of the algorithms is increased (particularly for large numbers of nodes); and thirdly, encyclopedic knowledge is used to describe the semantics of words, which substantially extends the class of problems that can be solved.
|
1308.1509 | Monotone Smoothing Splines Using General Linear Systems | cs.SY cs.IT math.IT math.OC stat.AP | In this paper, a method is proposed to solve the problem of monotone
smoothing splines using general linear systems. This problem, also called
monotone control theoretic splines, has been solved only when the curve
generator is modeled by the second-order integrator, but not for other cases.
The difficulty in the problem is that the monotonicity constraint should be
satisfied over an interval which has the cardinality of the continuum. To solve
this problem, we first formulate it as a semi-infinite quadratic program, and then adopt a discretization technique to obtain a
finite-dimensional quadratic programming problem. It is shown that the solution
of the finite-dimensional problem always satisfies the infinite-dimensional
monotonicity constraint. It is also proved that the approximated solution
converges to the exact solution as the discretization grid-size tends to zero.
An example is presented to show the effectiveness of the proposed method.
|
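For concreteness, the semi-infinite program described above can be written out for the classical second-order-integrator case (an illustrative formulation; the paper treats general linear curve generators).

```latex
% Monotone smoothing spline, second-order integrator case (illustrative):
% given data (t_i, y_i), i = 1, ..., N,
\min_{y(\cdot)} \; \int_0^T \ddot{y}(t)^2 \, dt \;+\; \lambda \sum_{i=1}^{N} \big(y(t_i) - y_i\big)^2
\qquad \text{s.t.} \quad \dot{y}(t) \ge 0 \;\; \text{for all } t \in [0, T].
% The constraint ranges over a continuum of points; the paper discretizes it on
% a finite grid chosen so that the resulting finite-dimensional QP solution
% still satisfies the monotonicity constraint on the whole interval.
```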
1308.1533 | Topological Structure of Urban Street Networks from the Perspective of
Degree Correlations | physics.soc-ph cs.SI nlin.AO | Many complex networks demonstrate a phenomenon of striking degree
correlations, i.e., a node tends to link to other nodes with similar (or
dissimilar) degrees. From the perspective of degree correlations, this paper
attempts to characterize topological structures of urban street networks. We
adopted six urban street networks (three European and three North American),
and converted them into network topologies in which nodes and edges
respectively represent individual streets and street intersections, and
compared the network topologies to three reference network topologies
(biological, technological, and social). The urban street network topologies
(with the exception of Manhattan) showed a consistent pattern that distinctly
differs from the three reference networks. The topologies of urban street
networks lack striking degree correlations in general. Through reshuffling the
network topologies towards for example maximum or minimum degree correlations
while retaining the initial degree distributions, we found that all the
surrogate topologies of the urban street networks, as well as the reference
ones, tended to deviate from small world properties. This implies that the
initial degree correlations do not have any positive or negative effect on the
networks' performance or functions.
Keywords: Scale free, small world, rewiring, rich club effect, reshuffle, and
complex networks
|
1308.1590 | Sparse Representations for Packetized Predictive Networked Control | cs.SY cs.IT math.IT math.OC | We investigate a networked control architecture for LTI plant models with a
scalar input. Communication from controller to actuator is over an unreliable
network which introduces packet dropouts. To achieve robustness against
dropouts, we adopt a packetized predictive control paradigm wherein each
control packet transmitted contains tentative future plant input values. The
novelty of our approach is that we require the transmitted control packets to be sparse. For that purpose, we adapt tools from the area of compressed sensing
and propose to design the control packets via on-line minimization of a
suitable L1/L2 cost function. We then show how to choose parameters of the cost
function to ensure that the resultant closed loop system be practically stable,
provided the maximum number of consecutive packet dropouts is bounded. A
numerical example illustrates that sparsity reduces bit-rates, thereby making
our proposal suited to control over unreliable and bit-rate limited networks.
|
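The abstract above does not spell out the cost function; a generic sparsity-promoting packet design of the kind used in compressed sensing looks as follows, with illustrative symbols (u is the packet of N tentative inputs, x the current state, G and H come from the N-step prediction model).

```latex
% Generic l1/l2 packet design (illustrative, not the paper's exact cost):
u^{\star} \;=\; \arg\min_{u \in \mathbb{R}^{N}} \; \|G x + H u\|_2^2 \;+\; \lambda \,\|u\|_1 ,
% where the quadratic term penalizes the predicted closed-loop error over the
% packet horizon and the l1 term promotes sparse control packets; lambda trades
% off control performance against packet sparsity (and hence bit-rate).
```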
1308.1600 | Universal codes of the natural numbers | cs.LO cs.IT math.IT | A code of the natural numbers is a uniquely-decodable binary code of the
natural numbers with non-decreasing codeword lengths, which satisfies Kraft's
inequality tightly. We define a natural partial order on the set of codes, and
show how to construct effectively a code better than a given sequence of codes,
in a certain precise sense. As an application, we prove that the existence of a
scale of codes (a well-ordered set of codes which contains a code better than
any given code) is independent of ZFC.
|
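The definition in the entry above pins these codes down as those with non-decreasing lengths that meet Kraft's inequality with equality; the Elias gamma code is a standard example of such a code (noted here for illustration, not taken from the abstract).

```latex
% A code of the natural numbers has non-decreasing lengths
% \ell(1) \le \ell(2) \le \cdots and satisfies Kraft's inequality tightly:
\sum_{n \ge 1} 2^{-\ell(n)} \;=\; 1 .
% Standard example: the Elias gamma code has \ell(n) = 2\lfloor \log_2 n \rfloor + 1;
% the 2^k integers with \lfloor \log_2 n \rfloor = k contribute
% 2^{k} \cdot 2^{-(2k+1)} = 2^{-(k+1)}, and \sum_{k \ge 0} 2^{-(k+1)} = 1.
```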
1308.1603 | A Note on Topology Preservation in Classification, and the Construction
of a Universal Neuron Grid | cs.NE cs.AI nlin.AO stat.ML | It will be shown that, according to theorems of K. Menger, every neuron grid, if identified with a curve, is able to preserve the adopted qualitative
structure of a data space. Furthermore, if this identification is made, the
neuron grid structure can always be mapped to a subset of a universal neuron
grid which is constructable in three space dimensions. Conclusions will be
drawn for established neuron grid types as well as neural fields.
|
1308.1605 | The stability of a graph partition: A dynamics-based framework for
community detection | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | Recent years have seen a surge of interest in the analysis of complex
networks, facilitated by the availability of relational data and the
increasingly powerful computational resources that can be employed for their
analysis. Naturally, the study of real-world systems leads to highly complex
networks and a current challenge is to extract intelligible, simplified
descriptions from the network in terms of relevant subgraphs, which can provide
insight into the structure and function of the overall system.
Sparked by seminal work by Newman and Girvan, an interesting line of research
has been devoted to investigating modular community structure in networks,
revitalising the classic problem of graph partitioning.
However, modular or community structure in networks has notoriously evaded
rigorous definition. The most accepted notion of community is perhaps that of a
group of elements which exhibit a stronger level of interaction within
themselves than with the elements outside the community. This concept has
resulted in a plethora of computational methods and heuristics for community
detection. Nevertheless a firm theoretical understanding of most of these
methods, in terms of how they operate and what they are supposed to detect, is
still lacking to date.
Here, we will develop a dynamical perspective towards community detection
enabling us to define a measure named the stability of a graph partition. It
will be shown that a number of previously ad-hoc defined heuristics for
community detection can be seen as particular cases of our method providing us
with a dynamic reinterpretation of those measures. Our dynamics-based approach
thus serves as a unifying framework to gain a deeper understanding of different
aspects and problems associated with community detection and allows us to
propose new dynamically-inspired criteria for community structure.
|
1308.1609 | Geometric Relationships Between Gaussian and Modulo-Lattice Error
Exponents | cs.IT math.IT | Lattice coding and decoding have been shown to achieve the capacity of the
additive white Gaussian noise (AWGN) channel. This was accomplished using a
minimum mean-square error scaling and randomization to transform the AWGN
channel into a modulo-lattice additive noise channel of the same capacity. It
has been further shown that when operating at rates below capacity but above
the critical rate of the channel, there exists a rate-dependent scaling such
that the associated modulo-lattice channel attains the error exponent of the
AWGN channel. A geometric explanation for this result is developed. In
particular, it is shown how the geometry of typical error events for the
modulo-lattice channel coincides with that of a spherical code for the AWGN
channel.
|
1308.1688 | The Number Theoretic Hilbert Transform | cs.CR cs.IT math.IT math.NT | This paper presents a general expression for a number-theoretic Hilbert
transform (NHT). The transformations preserve the circulant nature of the
discrete Hilbert transform (DHT) matrix together with alternating values in
each row being zero and non-zero. Specific examples for 4-point, 6-point, and
8-point NHT are provided. The NHT transformation can be used as a primitive to
create cryptographically useful scrambling transformations.
|
1308.1725 | State Estimation over Sensor Networks with Correlated Wireless Fading
Channels | math.OC cs.IT cs.SY math.IT | Stochastic stability for centralized time-varying Kalman filtering over a
wireless sensor network with correlated fading channels is studied. On their
route to the gateway, sensor packets, possibly aggregated with measurements
from several nodes, may be dropped because of fading links. To study this
situation, we introduce a network state process, which describes a finite set
of configurations of the radio environment. The network state characterizes the
channel gain distributions of the links, which are allowed to be correlated
between each other. Temporal correlations of channel gains are modeled by
allowing the network state process to form a (semi-)Markov chain. We establish
sufficient conditions which ensure that the Kalman filter is exponentially
bounded. In the one-sensor case, this new stability condition is shown to
include previous results obtained in the literature as special cases. The
results also hold when using power and bit-rate control policies, where the
transmission power and bit-rate of each node are nonlinear mappings of the
network state and channel gains.
|
1308.1744 | Adaptive Controller Placement for Wireless Sensor-Actuator Networks with
Erasure Channels | cs.SY math.OC | Wireless sensor-actuator networks offer flexibility for control design. One
novel element which may arise in networks with multiple nodes is that the role
of some nodes does not need to be fixed. In particular, there is no need to
pre-allocate which nodes assume controller functions and which ones merely
relay data. We present a flexible architecture for networked control using
multiple nodes connected in series over analog erasure channels without
acknowledgments. The control architecture proposed adapts to changes in network
conditions, by allowing the role played by individual nodes to depend upon
transmission outcomes. We adopt stochastic models for transmission outcomes and
characterize the distribution of controller location and the covariance of
system states. Simulation results illustrate that the proposed architecture has
the potential to give better performance than limiting control calculations to
be carried out at a fixed node.
|
1308.1745 | Power Control and Coding Formulation for State Estimation with Wireless
Sensors | cs.IT cs.SY math.IT math.OC | Technological advances have made wireless sensors cheap and reliable enough
to be brought into industrial use. A major challenge arises from the fact that
wireless channels introduce random packet dropouts. Power control and coding
are key enabling technologies in wireless communications to ensure efficient
communications. In the present work, we examine the role of power control and
coding for Kalman filtering over wireless correlated channels. Two estimation
architectures are considered: In the first, the sensors send their measurements
directly to a single gateway. In the second scheme, wireless relay nodes
provide additional links. The gateway decides on the coding scheme and the
transmitter power levels of the wireless nodes. The decision process is carried
out on-line and adapts to varying channel conditions in order to improve the
trade-off between state estimation accuracy and energy expenditure. In
combination with predictive power control, we investigate the use of
multiple-description coding, zero-error coding and network coding and provide
sufficient conditions for the expectation of the estimation error covariance
matrix to be bounded. Numerical results suggest that the proposed method may
lead to energy savings of around 50% when compared to an alternative scheme,
wherein transmission power levels and bit-rates are governed by simple logic.
In particular, zero-error coding is preferable at time instances with high
channel gains, whereas multiple-description coding is superior for time
instances with low gains. When channels between the sensors and the gateway are
in deep fades, network coding improves estimation accuracy significantly
without sacrificing energy efficiency.
|
1308.1746 | Online Decision Making in Crowdsourcing Markets: Theoretical Challenges
(Position Paper) | cs.SI cs.CY cs.HC | Over the past decade, crowdsourcing has emerged as a cheap and efficient
method of obtaining solutions to simple tasks that are difficult for computers
to solve but possible for humans. The popularity and promise of crowdsourcing
markets has led to both empirical and theoretical research on the design of
algorithms to optimize various aspects of these markets, such as the pricing
and assignment of tasks. Much of the existing theoretical work on crowdsourcing
markets has focused on problems that fall into the broad category of online
decision making; task requesters or the crowdsourcing platform itself make
repeated decisions about prices to set, workers to filter out, problems to
assign to specific workers, or other things. Often these decisions are complex,
requiring algorithms that learn about the distribution of available tasks or
workers over time and take into account the strategic (or sometimes irrational)
behavior of workers.
As human computation grows into its own field, the time is ripe to address
these challenges in a principled way. However, it appears very difficult to
capture all pertinent aspects of crowdsourcing markets in a single coherent
model. In this paper, we reflect on the modeling issues that inhibit
theoretical research on online decision making for crowdsourcing, and identify
some steps forward. This paper grew out of the authors' own frustration with
these issues, and we hope it will encourage the community to attempt to
understand, debate, and ultimately address them.
The authors welcome feedback for future revisions of this paper.
|
1308.1747 | Sequence-based Anytime Control | math.OC cs.SY | We present two related anytime algorithms for control of nonlinear systems
when the processing resources available are time-varying. The basic idea is to
calculate tentative control input sequences for as many time steps into the
future as allowed by the available processing resources at every time step.
This serves to compensate for the time steps when the processor is not
available to perform any control calculations. Using a stochastic Lyapunov
function based approach, we analyze the stability of the resulting closed loop
system for the cases when the processor availability can be modeled as an
independent and identically distributed sequence and via an underlying Markov
chain. Numerical simulations indicate that the increase in performance due to
the proposed algorithms can be significant.
|
1308.1761 | The Deterministic Capacity of Relay Networks with Relay Private Messages | cs.IT math.IT | We study the capacity region of a deterministic 4-node network, where 3 nodes
can only communicate via the fourth one. However, the fourth node is not merely
a relay since it can exchange private messages with all other nodes. This
situation resembles the case where a base station relays messages between users
and delivers messages between the backbone system and the users. We assume an
asymmetric scenario where the channel between any two nodes is not reciprocal.
First, an upper bound on the capacity region is obtained based on the notion of
single sided genie. Subsequently, we construct an achievable scheme that
achieves this upper bound using a superposition of broadcasting node 4 messages
and an achievable "detour" scheme for a reduced 3-user relay network.
|
1308.1776 | Comparing the usage of global and local Wikipedias with focus on Swedish
Wikipedia | physics.soc-ph cs.SI | This report summarizes the results of a short-term student research project
focused on the usage of Swedish Wikipedia. It is trying to answer the following
question: To what extent (and why) do people from non-English language
communities use the English Wikipedia instead of the one in their local
language? Article access time series and article edit time series from major
Wikipedias including Swedish Wikipedia are analyzed with various tools.
|
1308.1779 | Proving soundness of combinatorial Vickrey auctions and generating
verified executable code | cs.GT cs.CE cs.LO | Using mechanised reasoning we prove that combinatorial Vickrey auctions are
soundly specified in that they associate a unique outcome (allocation and
transfers) to any valid input (bids). Having done so, we auto-generate verified
executable code from the formally defined auction. This removes a source of
error in implementing the auction design. We intend to use formal methods to
verify new auction designs. Here, our contribution is to introduce and
demonstrate the use of formal methods for auction verification in the familiar
setting of a well-known auction.
|
1308.1792 | OFF-Set: One-pass Factorization of Feature Sets for Online
Recommendation in Persistent Cold Start Settings | cs.LG cs.IR | One of the most challenging recommendation tasks is recommending to a new,
previously unseen user. This is known as the 'user cold start' problem.
Assuming certain features or attributes of users are known, one approach for
handling new users is to initially model them based on their features.
Motivated by an ad targeting application, this paper describes an extreme
online recommendation setting where the cold start problem is perpetual. Every
user is encountered by the system just once, receives a recommendation, and
either consumes or ignores it, registering a binary reward.
We introduce One-pass Factorization of Feature Sets, OFF-Set, a novel
recommendation algorithm based on Latent Factor analysis, which models users by
mapping their features to a latent space. Furthermore, OFF-Set is able to model
non-linear interactions between pairs of features. OFF-Set is designed for
purely online recommendation, performing lightweight updates of its model per
each recommendation-reward observation. We evaluate OFF-Set against several
state-of-the-art baselines, and demonstrate its superiority on real
ad-targeting data.
|
1308.1801 | Satellite image classification methods and Landsat 5TM bands | cs.CV astro-ph.IM | This paper attempts to find the most accurate classification method among
parallelepiped, minimum distance and chain methods. Moreover, this study also attempts to find suitable combinations of bands that can lead to better results when bands are combined. After comparing these three methods, the chain method outperforms the other methods, with 79% overall accuracy. Hence, it is more accurate than minimum distance with 67% and parallelepiped with 65%. On the other hand, based on band features, and also by combining several researchers' findings, a table was created that includes the main objects on the land and the suitable combinations of bands for accurately detecting land-cover objects. During this process, it was observed that band 4 (out of the 7 bands of Landsat 5TM) is the band that can be used to increase the accuracy of the combined bands in detecting objects on the land.
|
1308.1817 | Semantic Computing of Moods Based on Tags in Social Media of Music | cs.MM cs.IR cs.SI | Social tags inherent in online music services such as Last.fm provide a rich
source of information on musical moods. The abundance of social tags makes this
data highly beneficial for developing techniques to manage and retrieve mood
information, and enables study of the relationships between music content and
mood representations with data substantially larger than that available for
conventional emotion research. However, no systematic assessment has been done
on the accuracy of social tags and derived semantic models at capturing mood
information in music. We propose a novel technique called Affective Circumplex
Transformation (ACT) for representing the moods of music tracks in an
interpretable and robust fashion based on semantic computing of social tags and
research in emotion modeling. We validate the technique by predicting listener
ratings of moods in music tracks, and compare the results to prediction with
the Vector Space Model (VSM), Singular Value Decomposition (SVD), Nonnegative
Matrix Factorization (NMF), and Probabilistic Latent Semantic Analysis (PLSA).
The results show that ACT consistently outperforms the baseline techniques, and
its performance is robust against a low number of track-level mood tags. The
results give validity and analytical insights for harnessing millions of music
tracks and associated mood data available through social tags in application
development.
|
1308.1847 | The Royal Birth of 2013: Analysing and Visualising Public Sentiment in
the UK Using Twitter | cs.CL cs.IR cs.SI physics.soc-ph | Analysis of information retrieved from microblogging services such as Twitter
can provide valuable insight into public sentiment in a geographic region. This
insight can be enriched by visualising information in its geographic context.
Two underlying approaches for sentiment analysis are dictionary-based and
machine learning. The former is popular for public sentiment analysis, and the
latter has found limited use for aggregating public sentiment from Twitter
data. The research presented in this paper aims to extend the machine learning
approach for aggregating public sentiment. To this end, a framework for
analysing and visualising public sentiment from a Twitter corpus is developed.
A dictionary-based approach and a machine learning approach are implemented
within the framework and compared using one UK case study, namely the royal
birth of 2013. The case study validates the feasibility of the framework for
analysis and rapid visualisation. One observation is that there is good
correlation between the results produced by the popular dictionary-based
approach and the machine learning approach when large volumes of tweets are
analysed. However, for rapid analysis to be possible, faster methods need to be
developed using big data techniques and parallel methods.
|
1308.1857 | PANAS-t: A Psychometric Scale for Measuring Sentiments on Twitter | cs.SI physics.soc-ph | Online social networks have become a major communication platform, where people share their thoughts and opinions about any topic in real time. The short text updates people post in these networks contain emotions and moods, which
when measured collectively can unveil the public mood at population level and
have exciting implications for businesses, governments, and societies.
Therefore, there is an urgent need for developing solid methods for accurately
measuring moods from large-scale social media data. In this paper, we propose
PANAS-t, which measures sentiments from short text updates in Twitter based on
a well-established psychometric scale, PANAS (Positive and Negative Affect
Schedule). We test the efficacy of PANAS-t over 10 real notable events drawn
from 1.8 billion tweets and demonstrate that it can efficiently capture the
expected sentiments of a wide variety of issues spanning tragedies, technology
releases, political debates, and healthcare.
|
1308.1860 | An Optimization Framework to Improve 4D-Var Data Assimilation System
Performance | cs.CE | This paper develops a computational framework for optimizing the parameters
of data assimilation systems in order to improve their performance. The
approach formulates a continuous meta-optimization problem for parameters; the
meta-optimization is constrained by the original data assimilation problem. The
numerical solution process employs adjoint models and iterative solvers. The
proposed framework is applied to optimize observation values, data weighting
coefficients, and the location of sensors for a test problem. The ability to
optimize a distributed measurement network is crucial for cutting down
operating costs and detecting malfunctions.
|
1308.1876 | A Non-Alternating Algorithm for Joint BS-RS Precoding Design in Two-Way
Relay Systems | cs.IT math.IT | Cooperative relay systems have become an active area of research during
recent years since they help cellular networks to enhance data rate and
coverage. In this paper we develop a method to jointly optimize precoding
matrices for amplify-and-forward relay station and base station. Our objective
is to increase max-min SINR fairness within co-channel users in a cell. The
main achievement of this work is avoiding any tedious alternating optimization
for joint design of RS/BS precoders, in order to save complexity. Moreover, no
convex solver is required in this method. RS precoding is done by transforming
the underlying non-convex problem into a system of nonlinear equations which is
then solved using the Levenberg-Marquardt algorithm. This method for RS precoder
design is guaranteed to converge to a local optimum. For the BS precoder a
low-complexity iterative method is proposed. The efficiency of the joint
optimization method is verified by simulations.
|
1308.1887 | Comparing cost and performance of replication and erasure coding | cs.IT math.IT | Data storage systems are more reliable than their individual components. In
order to build highly reliable systems out of less reliable parts, systems
introduce redundancy. In replicated systems, objects are simply copied several
times with each copy residing on a different physical device. While such an
approach is simple and direct, more elaborate approaches such as erasure coding
can achieve equivalent levels of data protection while using less redundancy.
This report examines the trade-offs in cost and performance between replicated
and erasure encoded storage systems.
|
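A standard back-of-envelope comparison of the redundancy trade-off examined in the report above (numbers are illustrative, not taken from the report).

```latex
% r-way replication of an object of size S:
\text{stored bytes} = r\,S, \qquad \text{device losses tolerated} = r - 1 .
% (n, k) erasure coding (k data fragments plus n - k parity fragments):
\text{stored bytes} = \tfrac{n}{k}\,S, \qquad \text{device losses tolerated} = n - k .
% Example: 3-way replication stores 3S and survives 2 losses, whereas a
% (14, 10) code stores 1.4S and survives 4 losses -- stronger protection
% at less than half the redundancy, at the price of encode/decode work.
```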
1308.1889 | SOSOPT: A Toolbox for Polynomial Optimization | math.OC cs.SY | SOSOPT is a Matlab toolbox for formulating and solving Sum-of-Squares (SOS)
polynomial optimizations. This document briefly describes the use and
functionality of this toolbox. Section 1 introduces the problem formulations
for SOS tests, SOS feasibility problems, SOS optimizations, and generalized SOS
problems. Section 2 reviews the SOSOPT toolbox for solving these optimizations.
This section includes information on toolbox installation, formulating
constraints, solving SOS optimizations, and setting optimization options.
Finally, Section 3 briefly reviews the connections between SOS optimizations
and semidefinite programs (SDPs). It is the connection to SDPs that enables SOS
optimizations to be solved in an efficient manner.
|
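The SOS-to-SDP connection that Section 3 of the document reviews rests on the standard Gram-matrix characterization (a general fact about SOS programming, not specific to SOSOPT).

```latex
% A polynomial p(x) of degree 2d is a sum of squares if and only if there is a
% symmetric matrix Q \succeq 0 (a Gram matrix) such that
p(x) \;=\; z(x)^{\top} Q \, z(x),
% where z(x) stacks all monomials of degree at most d.  Matching coefficients
% of p gives affine constraints on Q, so searching for Q \succeq 0 is a
% semidefinite program -- which is what makes SOS optimizations tractable.
```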
1308.1940 | Time series modeling with pruned multi-layer perceptron and 2-stage
damped least-squares method | cs.NE | A Multi-Layer Perceptron (MLP) defines a family of artificial neural networks
often used in time series (TS) modeling and forecasting. Because of its "black box" aspect,
many researchers refuse to use it. Moreover, the optimization (often based on
the exhaustive approach where "all" configurations are tested) and learning
phases of this artificial intelligence tool (often based on the
Levenberg-Marquardt algorithm; LMA) are weaknesses of this approach
(exhaustiveness and local minima). These two tasks must be repeated depending on the knowledge of each new problem studied, making the process long, laborious
and not systematically robust. In this paper a pruning process is proposed.
This method makes it possible, during the training phase, to carry out an input selection procedure that activates (or not) inter-node connections in order to verify whether forecasting is improved. We propose to use the popular damped least-squares method iteratively to activate inputs and neurons. A first pass is applied to 10% of the learning sample to determine the weights significantly different from 0 and delete the others. Then a classical batch process based on LMA is used with the
new MLP. The validation is done using 25 measured meteorological TS and
cross-comparing the prediction results of the classical LMA and the 2-stage
LMA.
|
1308.1947 | Interdependent network reciprocity in evolutionary games | physics.soc-ph cs.GT cs.SI q-bio.PE | Besides the structure of interactions within networks, also the interactions
between networks are of the utmost importance. We therefore study the outcome
of the public goods game on two interdependent networks that are connected by
means of a utility function, which determines how payoffs on both networks
jointly influence the success of players in each individual network. We show
that an unbiased coupling allows the spontaneous emergence of interdependent
network reciprocity, which is capable of maintaining healthy levels of public
cooperation even in extremely adverse conditions. The mechanism, however,
requires simultaneous formation of correlated cooperator clusters on both
networks. If this does not emerge or if the coordination process is disturbed,
network reciprocity fails, resulting in the total collapse of cooperation.
Network interdependence can thus be exploited effectively to promote
cooperation past the limits imposed by isolated networks, but only if the
coordination between the interdependent networks is not disturbed.
|
1308.1968 | Detection and Isolation of Link Failures under the Agreement Protocol | cs.SY cs.SI math.DS math.OC | In this paper a property of the multi-agent consensus dynamics that relates
the failure of links in the network to jump discontinuities in the derivatives
of the output responses of the nodes is derived and verified analytically. At
the next step, an algorithm for sensor placement is proposed, which would
enable the designer to detect and isolate any link failures across the network
based on the observed jump discontinuities in the derivatives of the responses
of a subset of nodes. These results are explained through illustrative examples.
|
1308.1975 | Predicting protein contact map using evolutionary and physical
constraints by integer programming (extended version) | q-bio.QM cs.CE cs.LG math.OC q-bio.BM stat.ML | Motivation. Protein contact map describes the pairwise spatial and functional
relationship of residues in a protein and contains key information for protein
3D structure prediction. Although studied extensively, it remains very
challenging to predict contact map using only sequence information. Most
existing methods predict the contact map matrix element-by-element, ignoring
correlation among contacts and physical feasibility of the whole contact map. A
couple of recent methods predict contact map based upon residue co-evolution,
taking into consideration contact correlation and enforcing a sparsity
restraint, but these methods require a very large number of sequence homologs
for the protein under consideration and the resultant contact map may be still
physically unfavorable.
Results. This paper presents a novel method PhyCMAP for contact map
prediction, integrating both evolutionary and physical restraints by machine
learning and integer linear programming (ILP). The evolutionary restraints
include sequence profile, residue co-evolution and context-specific statistical
potential. The physical restraints specify more concrete relationship among
contacts than the sparsity restraint. As such, our method greatly reduces the
solution space of the contact map matrix and thus, significantly improves
prediction accuracy. Experimental results confirm that PhyCMAP outperforms
currently popular methods no matter how many sequence homologs are available
for the protein under consideration. PhyCMAP can predict contacts within
minutes after PSIBLAST search for sequence homologs is done, much faster than
the two recent methods PSICOV and EvFold.
See http://raptorx.uchicago.edu for the web server.
|
1308.1981 | A Framework for the Analysis of Computational Imaging Systems with
Practical Applications | cs.CV | Over the last decade, a number of Computational Imaging (CI) systems have
been proposed for tasks such as motion deblurring, defocus deblurring and
multispectral imaging. These techniques increase the amount of light reaching
the sensor via multiplexing and then undo the deleterious effects of
multiplexing by appropriate reconstruction algorithms. Given the widespread
appeal and the considerable enthusiasm generated by these techniques, a
detailed performance analysis of the benefits conferred by this approach is
important.
Unfortunately, a detailed analysis of CI has proven to be a challenging
problem because performance depends equally on three components: (1) the
optical multiplexing, (2) the noise characteristics of the sensor, and (3) the
reconstruction algorithm. A few recent papers have performed analysis taking
multiplexing and noise characteristics into account. However, analysis of CI
systems under state-of-the-art reconstruction algorithms, most of which exploit
signal prior models, has proven to be unwieldy. In this paper, we present a
comprehensive analysis framework incorporating all three components.
In order to perform this analysis, we model the signal priors using a
Gaussian Mixture Model (GMM). A GMM prior confers two unique characteristics.
Firstly, GMM satisfies the universal approximation property which says that any
prior density function can be approximated to any fidelity using a GMM with
appropriate number of mixtures. Secondly, a GMM prior lends itself to
analytical tractability allowing us to derive simple expressions for the
`minimum mean square error' (MMSE), which we use as a metric to characterize
the performance of CI systems. We use our framework to analyze several
previously proposed CI techniques, giving a conclusive answer to the question: `How much performance gain is due to the use of a signal prior and how much is due to multiplexing?'
|
1308.1995 | Predicting Trends in Social Networks via Dynamic Activeness Model | cs.SI physics.soc-ph | With the effect of word-of-mouth, trends in social networks are now
playing a significant role in shaping people's lives. Predicting dynamic trends
is an important problem with many useful applications. There are three dynamic
characteristics of a trend that should be captured by a trend model: intensity,
coverage and duration. However, existing approaches on the information
diffusion are not capable of capturing these three characteristics. In this
paper, we study the problem of predicting dynamic trends in social networks. We
first define related concepts to quantify the dynamic characteristics of trends
in social networks, and formalize the problem of trend prediction. We then
propose a Dynamic Activeness (DA) model based on the novel concept of
activeness, and design a trend prediction algorithm using the DA model. Due to the use of a stacking principle, we are able to make the prediction algorithm
very efficient. We examine the prediction algorithm on a number of real social
network datasets, and show that it is more accurate than state-of-the-art
approaches.
|
1308.2013 | Min-Max Design of FIR Digital Filters by Semidefinite Programming | cs.IT cs.SY math.IT math.OC | In this article we consider two problems: FIR (Finite Impulse Response)
approximation of IIR (Infinite Impulse Response) filters and inverse FIR
filtering of FIR or IIR filters. By means of Kalman-Yakubovich-Popov (KYP)
lemma and its generalization (GKYP), the problems are reduced to semidefinite
programming described in linear matrix inequalities (LMIs). MATLAB codes for
these design methods are given. A design example shows the effectiveness of
these methods.
|
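A generic statement of the min-max design problem addressed above (illustrative notation): choose FIR coefficients to minimize the worst-case frequency-domain error against a target response H_d, e.g. an IIR filter to be approximated or the inverse of a given filter.

```latex
% Min-max (Chebyshev) FIR approximation, illustrative formulation:
\min_{h_0, \dots, h_{N-1}} \; \max_{\omega \in [0, \pi]} \;
  \Big| \sum_{k=0}^{N-1} h_k e^{-jk\omega} \;-\; H_d(e^{j\omega}) \Big| .
% The (generalized) KYP lemma converts the semi-infinite frequency-domain bound
% |H(e^{j\omega}) - H_d(e^{j\omega})| \le \gamma (over a whole band) into a
% finite linear matrix inequality, so the design becomes a semidefinite program.
```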
1308.2015 | Role of social environment and social clustering in spread of opinions
in co-evolving networks | physics.soc-ph cs.SI | Taking a pragmatic approach to the processes involved in the phenomena of
collective opinion formation, we investigate two specific modifications to the
co-evolving network voter model of opinion formation, studied by Holme and
Newman [1]. First, we replace the rewiring probability parameter by a
distribution of probability of accepting or rejecting opinions between
individuals, accounting for the asymmetric influences in relationships among
individuals in a social group. Second, we modify the rewiring step by a
path-length-based preference for rewiring that reinforces local clustering. We
have investigated the influences of these modifications on the outcomes of the
simulations of this model. We found that varying the shape of the distribution
of probability of accepting or rejecting opinions can lead to the emergence of
two qualitatively distinct final states, one having several isolated connected
components each in internal consensus leading to the existence of diverse set
of opinions and the other having one single dominant connected component with
each node within it having the same opinion. Furthermore, and more importantly,
we found that the initial clustering in network can also induce similar
transitions. Our investigation also brings forward that these transitions are
governed by a weak and complex dependence on system size. We found that the
networks in the final states of the model have rich structural properties
including the small world property for some parameter regimes. [1] P. Holme and
M. Newman, Phys. Rev. E 74, 056108 (2006).
|
1308.2027 | Symmetric Toeplitz-Structured Compressed Sensing Matrices | cs.IT math.IT | How to construct a suitable measurement matrix is still an open question in
compressed sensing. A significant feature of recent work is that the measurement matrices are not completely random in their entries but exhibit considerable structure. In this paper, we prove that the symmetric Toeplitz matrix and its transforms can be used as measurement matrices to recover signals with high probability. Compared with random matrices (e.g. Gaussian and Bernoulli matrices) and some structured matrices (e.g. Toeplitz and circulant
matrices), we need to generate fewer independent entries to obtain the
measurement matrix while the effectiveness of recovery does not get worse.
Furthermore, the signal can be recovered more efficiently by the algorithm.
|
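A minimal numerical sketch of the construction above: a symmetric Toeplitz matrix needs only n independent entries (its first row), a subset of its rows serves as the measurement matrix, and a sparse signal is recovered from the measurements. The Bernoulli generator and the orthogonal-matching-pursuit recovery used here are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, measurements, sparsity

# A symmetric Toeplitz matrix is determined by its first row: only n
# independent entries are needed, versus m*n for a fully random matrix.
first_row = rng.choice([-1.0, 1.0], size=n)
T = toeplitz(first_row)                   # n x n symmetric Toeplitz
Phi = T[rng.choice(n, size=m, replace=False), :] / np.sqrt(m)

# k-sparse test signal and its compressed measurements.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x

# Recovery by plain orthogonal matching pursuit (illustrative solver choice).
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```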
1308.2058 | RBioCloud: A Light-weight Framework for Bioconductor and R-based Jobs on
the Cloud | cs.DC cs.CE cs.PF cs.SE | Large-scale ad hoc analytics of genomic data is popular using the
R-programming language supported by 671 software packages provided by
Bioconductor. More recently, analytical jobs are benefitting from on-demand
computing and storage, their scalability and their low maintenance cost, all of
which are offered by the cloud. While Biologists and Bioinformaticists can take
an analytical job and execute it on their personal workstations, it remains
challenging to seamlessly execute the job on the cloud infrastructure without
extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by the job can be managed. An
open-source light-weight framework for executing R-scripts using Bioconductor
packages, referred to as `RBioCloud', is designed and developed. RBioCloud
offers a set of simple command-line tools for managing the cloud resources, the
data and the execution of the job. Three biological test cases validate the
feasibility of RBioCloud. The framework is publicly available from
http://www.rbiocloud.com.
|
1308.2063 | Signal Reconstruction via H-infinity Sampled-Data Control Theory: Beyond
the Shannon Paradigm | cs.IT cs.SY math.IT math.OC | This paper presents a new method for signal reconstruction by leveraging
sampled-data control theory. We formulate the signal reconstruction problem in
terms of an analog performance optimization problem using a stable
discrete-time filter. The proposed H-infinity performance criterion naturally
takes intersample behavior into account, reflecting the energy distributions of
the signal. We present methods for computing optimal solutions which are
guaranteed to be stable and causal. Detailed comparisons to alternative methods
are provided. We discuss some applications in sound and image reconstruction.
|
1308.2066 | Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk | cs.DC cs.CE cs.PF | At the heart of the analytical pipeline of a modern quantitative
insurance/reinsurance company is a stochastic simulation technique for
portfolio risk analysis and pricing, referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value at Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), or Per-Occurrence XL, and Aggregate XL, as well as contracts that combine these structures.
In this paper, we explore parallel methods for aggregate risk analysis. A
parallel aggregate risk analysis algorithm and an engine based on the algorithm are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and
in C and CUDA for many-core GPUs. Performance analysis of the algorithm
indicates that GPUs offer an alternative HPC solution for aggregate risk
analysis that is cost effective. The optimised algorithm on the GPU performs a
1 million trial aggregate simulation with 1000 catastrophic events per trial on
a typical exposure set and contract structure in just over 20 seconds, which is approximately 15 times faster than the sequential counterpart. This can
sufficiently support the real-time pricing scenario in which an underwriter
analyses different contractual terms and pricing while discussing a deal with a
client over the phone.
|
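A serial, illustrative sketch of the aggregate analysis described above: Monte Carlo trials of catastrophe event losses, a per-occurrence excess-of-loss layer applied to each event, and PML/TVaR read off the resulting annual loss distribution. The loss model, layer terms and trial counts are made up for illustration (the paper's engine runs 1 million trials of 1000 events each in C/OpenMP and CUDA).

```python
import numpy as np

rng = np.random.default_rng(42)

def aggregate_analysis(trials=20_000, events_per_trial=100,
                       attachment=5.0, limit=20.0):
    """Per-occurrence eXcess-of-Loss (XL) layer: for each event loss L the
    layer pays min(max(L - attachment, 0), limit).  Returns the aggregate
    layer loss per trial (one trial ~ one simulated year)."""
    # Heavy-tailed ground-up event losses; lognormal chosen only for illustration.
    ground_up = rng.lognormal(mean=1.0, sigma=1.2, size=(trials, events_per_trial))
    layer_loss = np.clip(ground_up - attachment, 0.0, limit)
    return layer_loss.sum(axis=1)

annual = aggregate_analysis()
q = 0.99
pml = np.quantile(annual, q)              # Probable Maximum Loss at the 1-in-100 level
tvar = annual[annual >= pml].mean()       # Tail Value at Risk: mean loss beyond the PML
print(f"PML(99%) = {pml:.1f}, TVaR(99%) = {tvar:.1f}")
```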
1308.2069 | Finite p-groups, entropy vectors and the Ingleton inequality for
nilpotent groups | cs.IT math.GR math.IT | In this paper we study the capacity/entropy region of finite, directed,
acyclic, multiple-source, multiple-sink networks by means of group theory and
entropy vectors coming from groups. There is a one-to-one correspondence
between the entropy vector of a collection of n random variables and a certain
group-characterizable vector obtained from a finite group and n of its
subgroups. We are looking at nilpotent group characterizable entropy vectors
and show that they are all also Abelian group characterizable, and hence they
satisfy the Ingleton inequality. It is known that not all entropic vectors can
be obtained from Abelian groups, so our result implies that in order to get
more exotic entropic vectors, one has to go at least to soluble groups or
larger nilpotency classes. The result also implies that Ingleton inequality is
satisfied by nilpotent groups of bounded class, depending on the order of the
group.
|
1308.2116 | MaLeS: A Framework for Automatic Tuning of Automated Theorem Provers | cs.AI | MaLeS is an automatic tuning framework for automated theorem provers. It
provides solutions for both the strategy finding and the strategy scheduling problems. This paper describes the tool and the methods used in it,
and evaluates its performance on three automated theorem provers: E, LEO-II and
Satallax. An evaluation on a subset of the TPTP library problems shows that on
average a MaLeS-tuned prover solves 8.67% more problems than the prover with
its default settings.
|
1308.2119 | Deconstructing analogy | cs.AI | Analogy has been shown to be important in many key cognitive abilities,
including learning, problem solving, creativity and language change. For
cognitive models of analogy, the fundamental computational question is how its
inherent complexity (its NP-hardness) is solved by the human cognitive system.
Indeed, different models of analogical processing can be categorized by the
simplification strategies they adopt to make this computational problem more
tractable. In this paper, I deconstruct several of these models in terms of the
simplification-strategies they use; a deconstruction that provides some
interesting perspectives on the relative differences between them. Later, I
consider whether any of these computational simplifications reflect the actual
strategies used by people and sketch a new cognitive model that tries to
present a closer fit to the psychological evidence.
|
1308.2124 | Space as an invention of biological organisms | cs.AI | The question of the nature of space around us has occupied thinkers since the
dawn of humanity, with scientists and philosophers today implicitly assuming
that space is something that exists objectively. Here we show that this does
not have to be the case: the notion of space could emerge when biological
organisms seek an economic representation of their sensorimotor flow. The
emergence of spatial notions does not necessitate the existence of real
physical space, but only requires the presence of sensorimotor invariants
called `compensable' sensory changes. We show mathematically and then in
simulations that na\"ive agents making no assumptions about the existence of
space are able to learn these invariants and to build the abstract notion that
physicists call rigid displacement, which is independent of what is being
displaced. Rigid displacements may underlie perception of space as an unchanging
medium within which objects are described by their relative positions. Our
findings suggest that the question of the nature of space, currently exclusive
to philosophy and physics, should also be addressed from the standpoint of
neuroscience and artificial intelligence.
|
1308.2140 | Axioms for Centrality | cs.SI physics.soc-ph | Given a social network, which of its nodes are more central? This question
has been asked many times in sociology, psychology and computer science, and a
whole plethora of centrality measures (a.k.a. centrality indices, or rankings)
were proposed to account for the importance of the nodes of a network. In this
paper, we try to provide a mathematically sound survey of the most important
classic centrality measures known from the literature and propose an axiomatic
approach to establish whether they are actually doing what they have been
designed for. Our axioms suggest some simple, basic properties that a
centrality measure should exhibit.
Surprisingly, only a new simple measure based on distances, harmonic
centrality, turns out to satisfy all axioms; essentially, harmonic centrality
is a correction to Bavelas's classic closeness centrality designed to take
unreachable nodes into account in a natural way.
As a sanity check, we examine in turn each measure under the lens of
information retrieval, leveraging state-of-the-art knowledge in the discipline
to measure the effectiveness of the various indices in locating web pages that
are relevant to a query. While there are some examples of such comparisons in
the literature, here for the first time we take into consideration centrality
measures based on distances, such as closeness, in an information-retrieval
setting. The results match closely the data we gathered using our axiomatic
approach.
Our results suggest that centrality measures based on distances, which have
been neglected in information retrieval in favour of spectral centrality
measures in recent years, are actually of very high quality; moreover,
harmonic centrality pops up as an excellent general-purpose centrality index
for arbitrary directed graphs.
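As a worked illustration of the measure singled out above, the following is a
minimal sketch (on a hypothetical toy adjacency list, not the paper's datasets)
of harmonic centrality: the score of x is the sum of 1/d(y,x) over all other
nodes y, with unreachable nodes contributing nothing.

    from collections import deque

    def harmonic_centrality(adj):
        """Harmonic centrality h(x) = sum over y != x of 1/d(y, x), with 1/inf = 0."""
        h = {x: 0.0 for x in adj}
        for y in adj:                      # BFS from every source y
            dist = {y: 0}
            queue = deque([y])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            for x, d in dist.items():      # y contributes 1/d(y, x) to h(x)
                if x != y:
                    h[x] += 1.0 / d
        return h

    # Toy directed graph: unreachable nodes simply contribute nothing.
    adj = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
    print(harmonic_centrality(adj))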
|
1308.2144 | In-Core Computation of Geometric Centralities with HyperBall: A Hundred
Billion Nodes and Beyond | cs.DS cs.SI physics.soc-ph | Given a social network, which of its nodes are more central? This question
has been asked many times in sociology, psychology and computer science, and a
whole plethora of centrality measures (a.k.a. centrality indices, or rankings)
were proposed to account for the importance of the nodes of a network. In this
paper, we approach the problem of computing geometric centralities, such as
closeness and harmonic centrality, on very large graphs; traditionally this
task requires an all-pairs shortest-path computation in the exact case, or a
number of breadth-first traversals for approximated computations, but these
techniques yield very weak statistical guarantees on highly disconnected
graphs. We rather assume that the graph is accessed in a semi-streaming
fashion, that is, that adjacency lists are scanned almost sequentially, and
that a very small amount of memory (in the order of a dozen bytes) per node is
available in core memory. We leverage the newly discovered algorithms based on
HyperLogLog counters, making it possible to approximate a number of geometric
centralities at a very high speed and with high accuracy. While the application
of similar algorithms for the approximation of closeness was attempted in the
MapReduce framework, our exploitation of HyperLogLog counters reduces
exponentially the memory footprint, paving the way for in-core processing of
networks with a hundred billion nodes using "just" 2TiB of RAM. Moreover, the
computations we describe are inherently parallelizable, and scale linearly with
the number of available cores.
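To illustrate the kind of counter HyperBall builds on, here is a toy
HyperLogLog sketch (our own simplification, without the bias and range
corrections of the full algorithm): each register keeps the maximum number of
leading zeros observed among the hashed items mapped to it, and the harmonic
mean of the registers yields a cardinality estimate from a few bits per
register.

    import hashlib

    class ToyHyperLogLog:
        """Simplified HyperLogLog counter (no bias or range corrections)."""

        def __init__(self, p=10):
            self.p = p                      # 2^p registers
            self.m = 1 << p
            self.reg = [0] * self.m
            self.alpha = 0.7213 / (1.0 + 1.079 / self.m)   # standard constant for large m

        def add(self, item):
            h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
            j = h & (self.m - 1)                        # register index: low p bits
            w = h >> self.p                             # remaining 64 - p bits
            rank = (64 - self.p) - w.bit_length() + 1   # leading zeros + 1
            self.reg[j] = max(self.reg[j], rank)

        def estimate(self):
            z = sum(2.0 ** -r for r in self.reg)
            return self.alpha * self.m * self.m / z

    hll = ToyHyperLogLog()
    for i in range(100000):
        hll.add(i)
    print(hll.estimate())   # roughly 1e5, with a few percent relative error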
|
1308.2147 | Exploiting Locality in Lease-Based Replicated Transactional Memory via
Task Migration | cs.DB cs.DC | We present Lilac-TM, the first locality-aware Distributed Software
Transactional Memory (DSTM) implementation. Lilac-TM is a fully decentralized
lease-based replicated DSTM. It employs a novel self-optimizing lease
circulation scheme based on the idea of dynamically determining whether to
migrate transactions to the nodes that own the leases required for their
validation, or to demand the acquisition of these leases by the node that
originated the transaction. Our experimental evaluation establishes that
Lilac-TM provides significant performance gains for distributed workloads
exhibiting data locality, while typically incurring no overhead for non-data
local workloads.
|
1308.2166 | Parallel Triangle Counting in Massive Streaming Graphs | cs.DB cs.DC cs.DS cs.SI | The number of triangles in a graph is a fundamental metric, used in social
network analysis, link classification and recommendation, and more. Driven by
these applications and the trend that modern graph datasets are both large and
dynamic, we present the design and implementation of a fast and cache-efficient
parallel algorithm for estimating the number of triangles in a massive
undirected graph whose edges arrive as a stream. It brings together the
benefits of streaming algorithms and parallel algorithms. By building on the
streaming algorithms framework, the algorithm has a small memory footprint. By
leveraging the parallel cache-oblivious framework, it makes efficient use of
the memory hierarchy of modern multicore machines without needing to know its
specific parameters. We prove theoretical bounds on accuracy, memory access
cost, and parallel runtime complexity, as well as showing empirically that the
algorithm yields accurate results and substantial speedups compared to an
optimized sequential implementation.
(This is an expanded version of a CIKM'13 paper of the same title.)
|
1308.2188 | Finite-State Markov Modeling of Leaky Waveguide Channels in
Communication-based Train Control (CBTC) Systems | cs.DM cs.IT math.IT | Leaky waveguide has been adopted in communication based train control (CBTC)
systems, as it can significantly enhance railway network efficiency, safety and
capacity. Since CBTC systems have high requirements for the train-ground
communications, modeling the leaky waveguide channels is very important to
design the wireless networks and evaluate the performance of CBTC systems. In
this letter, we develop a finite-state Markov channel (FSMC) model for leaky
waveguide channels in CBTC systems based on real field channel measurements
obtained from a subway line in commercial operation. The proposed FSMC channel model
takes train locations into account to have a more accurate channel model. The
overall leaky waveguide is divided into intervals, and an FSMC model is applied
in each interval. The accuracy of the proposed FSMC model is illustrated by the
simulation results generated from the model and the real field measurement
results.
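A rough sketch of how a finite-state Markov channel can be fitted from an SNR
trace (here a synthetic stand-in, and a generic construction under our own
assumptions rather than the location-dependent model of the letter): quantize
the SNR into states and estimate the transition matrix from empirical counts.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for a measured SNR trace along the waveguide (dB).
    snr = 20 + 5 * np.sin(np.linspace(0, 20, 5000)) + rng.normal(scale=0.5, size=5000)

    n_states = 4
    edges = np.quantile(snr, np.linspace(0, 1, n_states + 1)[1:-1])   # equiprobable SNR bins
    states = np.digitize(snr, edges)                                  # 0 .. n_states-1

    # Empirical transition matrix of the finite-state Markov channel.
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)
    print(np.round(P, 3))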
|
1308.2218 | Coding for Random Projections | cs.LG cs.DS cs.IT math.IT stat.CO | The method of random projections has become very popular for large-scale
applications in statistical learning, information retrieval, bio-informatics
and other applications. Using a well-designed coding scheme for the projected
data, which determines the number of bits needed for each projected value and
how to allocate these bits, can significantly improve the effectiveness of the
algorithm, in storage cost as well as computational speed. In this paper, we
study a number of simple coding schemes, focusing on the task of similarity
estimation and on an application to training linear classifiers. We demonstrate
that uniform quantization outperforms the standard existing influential method
(Datar et al. 2004). Indeed, we argue that in many cases coding with just a
small number of bits suffices. Furthermore, we also develop a non-uniform 2-bit
coding scheme that generally performs well in practice, as confirmed by our
experiments on training linear support vector machines (SVM).
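A minimal sketch of the pipeline (our own illustration, not the paper's exact
coding scheme): project the data with a Gaussian random matrix, quantize each
projected value uniformly to b bits, and estimate the similarity from the
quantized codes. The clipping range and bit budget below are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    def project_and_quantize(X, k=256, bits=2, clip=3.0, seed=1):
        """Gaussian random projection followed by uniform (scalar) quantization."""
        d = X.shape[1]
        R = np.random.default_rng(seed).normal(size=(d, k)) / np.sqrt(k)
        Z = X @ R                                    # projected data
        levels = 2 ** bits
        width = 2 * clip / levels                    # uniform bin width on [-clip, clip]
        codes = np.clip(np.floor((Z + clip) / width), 0, levels - 1)
        return codes * width - clip + width / 2      # dequantized values

    # Two unit vectors with known cosine similarity (~0.8).
    x = rng.normal(size=(1, 1000)); x /= np.linalg.norm(x)
    y = 0.8 * x + 0.6 * rng.normal(size=(1, 1000)) / np.sqrt(1000)
    y /= np.linalg.norm(y)

    qx, qy = project_and_quantize(x), project_and_quantize(y)
    print("true cos:", float(x @ y.T))
    print("rough estimate from 2-bit codes:", float((qx * qy).sum()))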
|
1308.2234 | Innovation networks | cs.AI cs.SI physics.soc-ph | This paper advances a framework for modeling the component interactions
between cognitive and social aspects of scientific creativity and technological
innovation. Specifically, it aims to characterize Innovation Networks; those
networks that involve the interplay of people, ideas and organizations to
create new, technologically feasible, commercially-realizable products,
processes and organizational structures. The tri-partite framework captures
networks of ideas (Concept Level), people (Individual Level) and social
structures (Social-Organizational Level) and the interactions between these
levels. At the concept level, new ideas are the nodes that are created and
linked, kept open for further investigation or closed if solved by actors at
the individual or organizational levels. At the individual level, the nodes are
actors linked by shared worldviews (based on shared professional, educational,
experiential backgrounds) who are the builders of the concept level. At the
social-organizational level, the nodes are organizations linked by common
efforts on a given project (e.g., a company-university collaboration) that by
virtue of their intellectual property or rules of governance constrain the
actions of individuals (at the Individual Level) or ideas (at the Concept
Level). After describing this framework and its implications we paint a number
of scenarios to flesh out how it can be applied.
|
1308.2236 | Surprise: You've got some explaining to do | cs.AI cs.HC | Why are some events more surprising than others? We propose that events that
are more difficult to explain are those that are more surprising. The two
experiments reported here test the impact of different event outcomes
(Outcome-Type) and task demands (Task) on ratings of surprise for simple story
scenarios. For the Outcome-Type variable, participants saw outcomes that were
either known or less-known surprising outcomes for each scenario. For the Task
variable, participants either answered comprehension questions or provided an
explanation of the outcome. Outcome-Type reliably affected surprise judgments;
known outcomes were rated as less surprising than less-known outcomes. Task
also reliably affected surprise judgments; when people provided an explanation
it lowered surprise judgments relative to simply answering comprehension
questions. Both experiments thus provide evidence on this less-explored
explanation aspect of surprise, specifically showing that ease of explanation
is a key factor in determining the level of surprise experienced.
|
1308.2240 | Cognitive residues of similarity | cs.AI cs.HC | What are the cognitive after-effects of making a similarity judgement? What,
cognitively, is left behind and what effect might these residues have on
subsequent processing? In this paper, we probe for such after-effects using a
visual search task, performed after a task in which pictures of real-world
objects were compared. So, target objects were first presented in a comparison
task (e.g., rate the similarity of this object to another), thus, presumably,
modifying some of their features, before asking people to visually search for
the same object in complex scenes (with distractors and camouflaged
backgrounds). As visual search is known to be influenced by the features of
target objects, then any after-effects of the comparison task should be
revealed in subsequent visual searches. Results showed that when people
previously rated an object as being high on a scale (e.g., colour similarity or
general similarity) then visual search is inhibited (slower RTs and more
saccades in eye-tracking) relative to an object being rated as low in the same
scale. There was also some evidence that different comparison tasks (e.g.,
compare on colour or compare on general similarity) have differential effects
on visual search.
|
1308.2248 | Topology Identification of Directed Dynamical Networks via Power
Spectral Analysis | cs.SY math.DS math.OC | We address the problem of identifying the topology of an unknown weighted,
directed network of LTI systems stimulated by wide-sense stationary noises of
unknown power spectral densities. We propose several reconstruction algorithms
based on the cross-power spectral densities of the network's response to the
input noises. Our first algorithm reconstructs the Boolean structure (i.e.,
existence and directions of links) of a directed network from a series of
dynamical responses. Moreover, we propose a second algorithm to recover the
exact structure of the network (including edge weights), as well as the power
spectral density of the input noises, when an eigenvalue-eigenvector pair of
the connectivity matrix is known (for example, Laplacian connectivity
matrices). Finally, for the particular cases of nonreciprocal networks (i.e.,
networks with no directed edges pointing in opposite directions) and undirected
networks, we propose specialized algorithms that result in a lower
computational cost.
|
1308.2260 | Communication Practices in a Distributed Scrum Project | cs.SE cs.SI | While global software development (GSD) projects face cultural and time
differences, the biggest challenge is communication. We studied a distributed
student project with an industrial customer. The project lasted 3 months,
involved 25 participants, and was distributed between the University of
Victoria, Canada and Aalto University, Finland. We analyzed email
communication, version control system (VCS) data, and surveys on satisfaction.
Our aim was to find out whether reflecting on communication affected it, if
standups influenced when developers committed to the VCS repository, and if
leaders emerged in the three distributed Scrum teams. Initially students sent
on average 21 emails per day. When this dropped to 16 emails, satisfaction
with communication increased. By comparing Scrum standup times and VCS activity
we found that the live communication of standups activated people to work on
the project. Out of the three teams, one had an emergent communication
facilitator.
|
1308.2264 | Error Performance Analysis of DF and AF Multi-way Relay Networks with
BPSK Modulation | cs.IT math.IT | In this paper, we analyze the error performance of decode and forward (DF)
and amplify and forward (AF) multi-way relay networks (MWRN). We consider a
MWRN with pair-wise data exchange protocol using binary phase shift keying
(BPSK) modulation in both additive white Gaussian noise (AWGN) and Rayleigh
fading channels. We quantify the possible error events in an $L$-user DF or AF
MWRN and derive accurate asymptotic bounds on the probability for the general
case that a user incorrectly decodes the messages of exactly $k$
($k\in[1,L-1]$) users. We show that at high signal-to-noise ratio (SNR), the
higher order error events ($k\geq 3$) are less probable in AF MWRN, but all
error events are equally probable in a DF MWRN. We derive the average BER of a
user in a DF or AF MWRN in both AWGN and Rayleigh fading channels under high
SNR conditions. Simulation results validate the correctness of the derived
expressions. Our results show that at medium to high SNR, DF MWRN provides
better error performance than AF MWRN in AWGN channels even with a large number
of users (for example, $L=100$), whereas AF MWRN outperforms DF MWRN in Rayleigh
fading channels even for a much smaller number of users (for example, $L > 10$).
|
1308.2272 | Search Optimization for Minimum Load under Detection Performance
Constraints in Multifunction Radars | cs.SY math.OC | This paper presents a solution procedure of search parameter optimization for
minimum load ensuring desired one-off and cumulative probabilities of detection
in a multifunction phased array radar. The key approach is to convert this
nonlinear optimization on four search parameters into a scalar optimization on
signal-to-noise ratio by a semi-analytic process based on subproblem
decomposition. The efficacy of the proposed solution approach is verified with
theoretical analysis and numerical case studies.
|
1308.2291 | Compressive Sampling for Networked Feedback Control | cs.SY cs.IT math.IT math.OC | We investigate the use of compressive sampling for networked feedback control
systems. The method proposed serves to compress the control vectors which are
transmitted through rate-limited channels without much deterioration of control
performance. The control vectors are obtained by an L1-L2 optimization, which
can be solved very efficiently by FISTA (Fast Iterative Shrinkage-Thresholding
Algorithm). Simulation results show that the proposed sparsity-promoting
control scheme gives a better control performance than a conventional
energy-limiting L2-optimal control.
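A compact sketch of FISTA for the L1-L2 problem min_u 0.5*||y - A u||^2 +
lam*||u||_1 (a generic implementation under our own assumptions, not the
authors' control-specific formulation):

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(A, y, lam, n_iter=200):
        """FISTA for min_u 0.5*||y - A u||^2 + lam*||u||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        u = np.zeros(A.shape[1]); z = u.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - y)
            u_next = soft_threshold(z - grad / L, lam / L)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = u_next + ((t - 1.0) / t_next) * (u_next - u)
            u, t = u_next, t_next
        return u

    # Toy example: recover a sparse vector from a fat random measurement matrix.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200))
    u_true = np.zeros(200); u_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
    y = A @ u_true + 0.01 * rng.normal(size=60)
    u_hat = fista(A, y, lam=0.1)
    print("nonzeros recovered:", np.sum(np.abs(u_hat) > 1e-3))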
|
1308.2292 | Fast image segmentation and restoration using parametric curve evolution
with junctions and topology changes | cs.CV math.AP math.NA | Curve evolution schemes for image segmentation based on a region based
contour model allowing for junctions, vector-valued images and topology changes
are introduced. Together with an a posteriori denoising in the segmented
homogeneous regions this leads to a fast and efficient method for image
segmentation and restoration. An uneven spread of mesh points is avoided by
using the tangential degrees of freedom. Several numerical simulations on
artificial test problems and on real images illustrate the performance of the
method.
|
1308.2293 | Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed
Rank Function | cs.IT math.IT | In this paper, the problem of matrix rank minimization under affine
constraints is addressed. The state-of-the-art algorithms can recover matrices
with a rank much less than what is sufficient for the uniqueness of the
solution of this optimization problem. We propose an algorithm based on a
smooth approximation of the rank function, which practically improves recovery
limits on the rank of the solution. This approximation leads to a non-convex
program; thus, to avoid getting trapped in local solutions, we use the
following scheme. Initially, a rough approximation of the rank function subject
to the affine constraints is optimized. As the algorithm proceeds, finer
approximations of the rank are optimized and the solver is initialized with the
solution of the previous approximation until reaching the desired accuracy.
On the theoretical side, benefiting from the spherical section property, we
will show that the sequence of the solutions of the approximating function
converges to the minimum rank solution. On the experimental side, it will be
shown that the proposed algorithm, termed SRF standing for Smoothed Rank
Function, can recover matrices which are unique solutions of the rank
minimization problem and yet not recoverable by nuclear norm minimization.
Furthermore, it will be demonstrated that, in completing partially observed
matrices, the accuracy of SRF is considerably and consistently better than some
famous algorithms when the number of revealed entries is close to the minimum
number of parameters that uniquely represent a low-rank matrix.
|
1308.2299 | Lossless Data Compression with Error Detection using Cantor Set | cs.IT math.IT nlin.CD | In 2009, a lossless compression algorithm based on 1D chaotic maps known as
Generalized Lur\"{o}th Series (or GLS) has been proposed. This algorithm
(GLS-coding) encodes the input message as a symbolic sequence on an appropriate
1D chaotic map (GLS) and the compressed file is obtained as the initial value
by iterating backwards on the map. For ergodic sources, it was shown that
GLS-coding achieves the best possible lossless compression (in the noiseless
setting) bounded by Shannon entropy. However, in the presence of noise, even
small errors in the compressed file lead to catastrophic decoding errors owing
to sensitive dependence on initial values. In this paper, we first show that
Repetition codes $\mathcal{R}_n$ (every symbol is repeated $n$ times, where $n$
is a positive odd integer), the oldest and the most basic error correction and
detection codes in the literature, actually lie on a Cantor set with a fractal
dimension of $\frac{1}{n}$, which is also the rate of the code. Inspired by
this, we incorporate error detection capability to GLS-coding by ensuring that
the compressed file (initial value on the map) lies on a Cantor set of measure
zero. Even a 1-bit error in the initial value will throw it outside the Cantor
set which can be detected while decoding. The error detection performance (and
also the rate of the code) can be controlled by the fractal dimension of the
Cantor set and could be suitably adjusted depending on the noise level of the
communication channel.
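A small sketch of the repetition code R_n mentioned above (a standard textbook
construction, shown here for n = 3): every bit is repeated n times, the rate
is 1/n, matching the fractal dimension noted in the abstract, and a single bit
flip within a block is detected and corrected by majority vote.

    def encode_repetition(bits, n=3):
        return [b for b in bits for _ in range(n)]

    def decode_repetition(coded, n=3):
        """Majority-vote decoding; any block that is not all-equal flags an error."""
        decoded, errors = [], []
        for i in range(0, len(coded), n):
            block = coded[i:i + n]
            decoded.append(1 if sum(block) > n // 2 else 0)
            errors.append(len(set(block)) > 1)        # detected corruption
        return decoded, errors

    msg = [1, 0, 1, 1]
    code = encode_repetition(msg, n=3)       # rate 1/3, like a Cantor set of dimension 1/3
    code[4] ^= 1                             # flip one bit in the second block
    decoded, errors = decode_repetition(code, n=3)
    print(decoded, errors)                   # message recovered; error flagged in block 1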
|
1308.2302 | High-Dimensional Regression with Gaussian Mixtures and Partially-Latent
Response Variables | cs.LG stat.ML | In this work we address the problem of approximating high-dimensional data
with a low-dimensional representation. We make the following contributions. We
propose an inverse regression method which exchanges the roles of input and
response, such that the low-dimensional variable becomes the regressor, and
which is tractable. We introduce a mixture of locally-linear probabilistic
mapping model that starts with estimating the parameters of inverse regression,
and follows with inferring closed-form solutions for the forward parameters of
the high-dimensional regression problem of interest. Moreover, we introduce a
partially-latent paradigm, such that the vector-valued response variable is
composed of both observed and latent entries, thus being able to deal with data
contaminated by experimental artifacts that cannot be explained with noise
models. The proposed probabilistic formulation could be viewed as a
latent-variable augmentation of regression. We devise expectation-maximization
(EM) procedures based on a data augmentation strategy which facilitates the
maximum-likelihood search over the model parameters. We propose two
augmentation schemes and we describe in detail the associated EM inference
procedures that may well be viewed as generalizations of a number of EM
regression, dimension reduction, and factor analysis algorithms. The proposed
framework is validated with both synthetic and real data. We provide
experimental evidence that our method outperforms several existing regression
techniques.
|
1308.2307 | Finite Element Model Updating Using Fish School Search Optimization
Method | cs.CE cs.NE | A recent nature inspired optimization algorithm, Fish School Search (FSS) is
applied to the finite element model (FEM) updating problem. This method is
tested on a GARTEUR SM-AG19 aeroplane structure. The results of this algorithm
are compared with two other metaheuristic algorithms; Genetic Algorithm (GA)
and Particle Swarm Optimization (PSO). It is observed that on average, the FSS
and PSO algorithms give more accurate results than the GA. A minor modification
to the FSS is proposed. This modification improves the performance of FSS on
the FEM updating problem which has a constrained search space.
|
1308.2309 | Applying the Negative Selection Algorithm for Merger and Acquisition
Target Identification | cs.AI | In this paper, we propose a new methodology based on the Negative Selection
Algorithm that belongs to the field of Computational Intelligence,
specifically, Artificial Immune Systems to identify takeover targets. Although
considerable research based on customary statistical techniques and some
contemporary Computational Intelligence techniques have been devoted to
identify takeover targets, most of the existing studies are based upon multiple
previous mergers and acquisitions. Contrary to previous research, the novelty
of this proposal lies in its ability to suggest takeover targets for novice
firms that are at the beginning of their merger and acquisition spree. We first
discuss the theoretical perspective and then provide a case study with details
for practical implementation, both capitalizing from unique generalization
capabilities of artificial immune systems algorithms.
|
1308.2310 | Mining Positive and Negative Association Rules Using CoherentApproach | cs.DB | In the data mining field, association rules are discovered having domain
knowledge specified as a minimum support threshold. The accuracy in setting up
this threshold directly influences the number and the quality of association
rules discovered. Typically, before association rules are mined, a user needs
to determine a support threshold in order to obtain only the frequent item
sets. Having users to determine a support threshold attracts a number of
issues. We propose an association rule mining framework that does not require a
per-set support threshold. Often, the set of discovered association rules, even
though large, misses some interesting rules, and the quality of the rules
necessitates further analysis. As a result, decision making using these rules
could lead to risky actions.
|
1308.2338 | Lossy Compression of Exponential and Laplacian Sources using Expansion
Coding | cs.IT math.IT | A general method of source coding over expansion is proposed in this paper,
which enables one to reduce the problem of compressing an analog
(continuous-valued source) to a set of much simpler problems, compressing
discrete sources. Specifically, the focus is on lossy compression of
exponential and Laplacian sources, which is subsequently expanded using a
finite alphabet prior to being quantized. Due to the decomposability property of
such sources, the resulting random variables post expansion are independent and
discrete. Thus, each of the expanded levels corresponds to an independent
discrete source coding problem, and the original problem is reduced to coding
over these parallel sources with a total distortion constraint. Any feasible
solution to the optimization problem is an achievable rate distortion pair of
the original continuous-valued source compression problem. Although finding the
solution to this optimization problem at every distortion is hard, we show that
our expansion coding scheme presents a good solution in the low distortion
regime. Further, by adopting low-complexity codes designed for discrete source
coding, the total coding complexity can be tractable in practice.
|
1308.2350 | Learning Features and their Transformations by Spatial and Temporal
Spherical Clustering | cs.NE cs.AI cs.CV cs.LG q-bio.NC | Learning features invariant to arbitrary transformations in the data is a
requirement for any recognition system, biological or artificial. It is now
widely accepted that simple cells in the primary visual cortex respond to
features while the complex cells respond to features invariant to different
transformations. We present a novel two-layered feedforward neural model that
learns features in the first layer by spatial spherical clustering and
invariance to transformations in the second layer by temporal spherical
clustering. Learning occurs in an online and unsupervised manner following the
Hebbian rule. When exposed to natural videos acquired by a camera mounted on a
cat's head, the first and second layer neurons in our model develop simple and
complex cell-like receptive field properties. The model can predict by learning
lateral connections among the first layer neurons. A topographic map to their
spatial features emerges by exponentially decaying the flow of activation with
distance from one neuron to another in the first layer that fire in close
temporal proximity, thereby minimizing the pooling length in an online manner
simultaneously with feature learning.
|
1308.2354 | RAProp: Ranking Tweets by Exploiting the Tweet/User/Web Ecosystem and
Inter-Tweet Agreement | cs.IR | The increasing popularity of Twitter renders improved trustworthiness and
relevance assessment of tweets much more important for search. However, given
the limitations on the size of tweets, it is hard to extract measures for
ranking from the tweets' content alone. We present a novel ranking method,
called RAProp, which combines two orthogonal measures of relevance and
trustworthiness of a tweet. The first, called Feature Score, measures the
trustworthiness of the source of the tweet. This is done by extracting features
from a 3-layer Twitter ecosystem, consisting of users, tweets and the pages
referred to in the tweets. The second measure, called agreement analysis,
estimates the trustworthiness of the content of the tweet, by analyzing how and
whether the content is independently corroborated by other tweets. We view the
candidate result set of tweets as the vertices of a graph, with the edges
measuring the estimated agreement between each pair of tweets. The feature
score is propagated over this agreement graph to compute the top-k tweets that
have both trustworthy sources and independent corroboration. The evaluation of
our method on 16 million tweets from the TREC 2011 Microblog Dataset shows that
for top-30 precision we achieve 53% higher precision than the current best
performing method on the Dataset and over 300% higher than current Twitter
Search. We also present a
detailed internal empirical evaluation of RAProp in comparison to several
alternative approaches proposed by us.
|
1308.2357 | On the Detection of Passive Eavesdroppers in the MIMO Wiretap Channel | cs.IT math.IT | The classic MIMO wiretap channel comprises a passive eavesdropper that
attempts to intercept communications between an authorized transmitter-receiver
pair, each node being equipped with multiple antennas. In a dynamic network, it
is imperative that the presence of an eavesdropper be determined before the
transmitter can deploy robust secrecy encoding schemes as a countermeasure. This
is a difficult task in general, since by definition the eavesdropper is passive
and never transmits. In this work we adopt a method that allows the legitimate
nodes to detect the passive eavesdropper from the local oscillator power that
is inadvertently leaked from its RF front end. We examine the performance of
non-coherent energy detection and optimal coherent detection, followed by
composite GLRT detection methods that account for unknown parameters. Numerical
experiments demonstrate that the proposed detectors allow the legitimate nodes
to increase the secrecy rate of the MIMO wiretap channel.
|
1308.2359 | Exploratory Analysis of Highly Heterogeneous Document Collections | cs.CL cs.HC cs.IR | We present an effective multifaceted system for exploratory analysis of
highly heterogeneous document collections. Our system is based on intelligently
tagging individual documents in a purely automated fashion and exploiting these
tags in a powerful faceted browsing framework. Tagging strategies employed
include both unsupervised and supervised approaches based on machine learning
and natural language processing. As one of our key tagging strategies, we
introduce the KERA algorithm (Keyword Extraction for Reports and Articles).
KERA extracts topic-representative terms from individual documents in a purely
unsupervised fashion and is revealed to be significantly more effective than
state-of-the-art methods. Finally, we evaluate our system in its ability to
help users locate documents pertaining to military critical technologies buried
deep in a large heterogeneous sea of information.
|
1308.2372 | Throughput of One-Hop Wireless Networks with Noisy Feedback Channel | cs.IT math.IT | In this paper, we consider the effect of feedback channel error on the
throughput of one-hop wireless networks under the random connection model. The
transmission strategy is based on activating source-destination pairs with
strongest direct links. While these activated pairs are identified based on
Channel State Information (CSI) at the receive side, the transmit side will be
provided with a noisy version of this information via the feedback channel.
Such error will degrade network throughput, as we investigate in this paper.
Our results show that if the feedback error probability is below a given
threshold, the network can tolerate such errors without any significant throughput
loss. The threshold value depends on the number of nodes in the network and the
channel fading distribution. Such analysis is crucial in the design of error
correction codes for the feedback channel in such networks.
|
1308.2375 | A radial basis function neural network based approach for the electrical
characteristics estimation of a photovoltaic module | cs.NE | The design process of photovoltaic (PV) modules can be greatly enhanced by
using advanced and accurate models in order to predict accurately their
electrical output behavior. The main aim of this paper is to investigate the
application of an advanced neural network based model of a module to improve
the accuracy of the predicted output I--V and P--V curves and to keep in
account the change of all the parameters at different operating conditions.
Radial basis function neural networks (RBFNN) are here utilized to predict the
output characteristic of a commercial PV module, by reading only the data of
solar irradiation and temperature. A large set of experimental data was
used for the training of the RBFNN, and a backpropagation algorithm was
employed. Simulation and experimental validation are reported.
|
1308.2390 | Adaptive Technique for Computationally Efficient Time Delay and
Magnitude Estimation of Sinusoidal Signals | cs.SY | An online, adaptive method of time delay and magnitude estimation for
sinusoidal signals is presented. The method is based on an adaptive gradient
descent algorithm that directly determines the time delay and magnitudes of two
noisy sinusoidal signals. The new estimator uses a novel quadrature carrier
generator to produce the carriers for an adaptive quadrature phase detector,
which in turn uses an arc tan function to compute the time delay. The proposed
method is quite robust and can adapt to significant variation in input signal
characteristics like magnitude and frequency imposing no requirement on the
magnitudes of the two signals. It even works effectively when the signals have
time-varying magnitudes. The convergence analysis of the proposed technique
shows that the estimates converge exponentially fast to their nominal values. In
addition, if the technique is implemented in the continuous time domain, the
delay estimation accuracy will not be constrained by the sampling frequency as
observed in some of the classical techniques. Extensive simulations show that
the proposed method provides very accurate estimates of the time delay
comparable to that of the popular methods like Sinc-based estimator, Lagrange
estimator, and the Quadrature estimator, as well as the magnitude estimate of the
input signals at lower signal to noise ratio at appreciably reduced
computational cost.
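A rough sketch of the quadrature idea (our own simplified batch version of
what is described as an online adaptive scheme, with an assumed known signal
frequency): demodulate each sinusoid against sine and cosine carriers, take
the arctan of the quadrature components to get a phase, and convert the phase
difference into a time delay.

    import numpy as np

    fs, f0 = 10_000.0, 50.0                 # sampling rate and (assumed known) signal frequency
    t = np.arange(0, 0.2, 1.0 / fs)
    true_delay, a1, a2 = 1.3e-3, 2.0, 0.7
    s1 = a1 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.randn(t.size)
    s2 = a2 * np.sin(2 * np.pi * f0 * (t - true_delay)) + 0.05 * np.random.randn(t.size)

    def phase_and_magnitude(s):
        """Quadrature demodulation: project onto sin/cos carriers, then atan2."""
        i = 2.0 * np.mean(s * np.sin(2 * np.pi * f0 * t))   # in-phase component
        q = 2.0 * np.mean(s * np.cos(2 * np.pi * f0 * t))   # quadrature component
        return np.arctan2(q, i), np.hypot(i, q)

    ph1, m1 = phase_and_magnitude(s1)
    ph2, m2 = phase_and_magnitude(s2)
    delay = (ph1 - ph2) / (2 * np.pi * f0)   # valid while the phase offset stays within one period
    print("estimated delay:", delay, "estimated magnitudes:", m1, m2)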
|
1308.2401 | Numerical Fitting-based Likelihood Calculation to Speed up the Particle
Filter | cs.IT cs.NA math.IT | The likelihood calculation of a vast number of particles is the computational
bottleneck for the particle filter in applications where the observation
information is rich. To compute the likelihood of particles quickly, a
numerical fitting approach is proposed to construct the Likelihood Probability
Density Function (Li-PDF) by using a comparably small number of so-called
fulcrums. The likelihood of particles is thereby analytically inferred,
explicitly or implicitly, based on the Li-PDF instead of directly computed by
utilizing the observation, which can significantly reduce the computation and
enables real time filtering. The proposed approach guarantees the estimation
quality when an appropriate fitting function and properly distributed fulcrums
are used. The details for construction of the fitting function and fulcrums are
addressed respectively in detail. In particular, to deal with multivariate
fitting, the nonparametric kernel density estimator is presented which is
flexible and convenient for implicit Li-PDF implementation. Simulation
comparison with a variety of existing approaches on a benchmark 1-dimensional
model and multi-dimensional robot localization and visual tracking demonstrate
the validity of our approach.
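A toy sketch of the fulcrum idea under our own simplified assumptions (a
scalar state and a Gaussian observation model standing in for an expensive
likelihood): evaluate the true likelihood only at a handful of fulcrum points,
fit an interpolant, and read the likelihood of all particles off the curve.

    import numpy as np

    def expensive_likelihood(x, z, sigma=0.5):
        """Stand-in for a costly observation model p(z | x)."""
        return np.exp(-0.5 * ((z - x) / sigma) ** 2)

    z_obs = 1.2
    particles = np.random.default_rng(0).normal(loc=1.0, scale=1.0, size=100_000)

    # Evaluate the true likelihood only at a few fulcrums spanning the particle cloud.
    fulcrums = np.linspace(particles.min(), particles.max(), 15)
    li_at_fulcrums = expensive_likelihood(fulcrums, z_obs)

    # Li-PDF-style approximation: piecewise-linear fit, then cheap lookups for all particles.
    approx_weights = np.interp(particles, fulcrums, li_at_fulcrums)
    exact_weights = expensive_likelihood(particles, z_obs)

    approx_weights /= approx_weights.sum()
    exact_weights /= exact_weights.sum()
    print("max normalized-weight error:", np.abs(approx_weights - exact_weights).max())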
|
1308.2426 | Bias of the SIR filter in estimation of the state transition noise | cs.SY cs.NA | This Note investigates the bias of the sampling importance resampling (SIR)
filter in estimation of the state transition noise in the state space model.
The SIR filter may suffer from sample impoverishment that is caused by the
resampling and therefore will benefit from a sampling proposal that has a
heavier tail, e.g. the state transition noise simulated for particle
preparation is bigger than the true noise involved with the state dynamics.
This is because a comparatively large transition noise used for particle propagation
can spread overlapped particles to counteract impoverishment, giving better
approximation of the posterior. As such, the SIR filter tends to yield a biased
(bigger-than-the-truth) estimate of the transition noise if it is unknown and
needs to be estimated, at least, in the forward-only filtering estimation. The
bias is elaborated via the direct roughening approach by means of both
qualitative logical deduction and quantitative numerical simulation.
|
1308.2428 | Hidden Structure and Function in the Lexicon | cs.CL | How many words are needed to define all the words in a dictionary?
Graph-theoretic analysis reveals that about 10% of a dictionary is a unique
Kernel of words that define one another and all the rest, but this is not the
smallest such subset. The Kernel consists of one huge strongly connected
component (SCC), about half its size, the Core, surrounded by many small SCCs,
the Satellites. Core words can define one another but not the rest of the
dictionary. The Kernel also contains many overlapping Minimal Grounding Sets
(MGSs), each about the same size as the Core, each part-Core, part-Satellite.
MGS words can define all the rest of the dictionary. They are learned earlier,
more concrete and more frequent than the rest of the dictionary. Satellite
words, not correlated with age or frequency, are less concrete (more abstract)
words that are also needed for full lexical power.
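A minimal sketch (with a made-up toy dictionary, not the real lexicon) of the
graph-theoretic step described above: build a directed graph from each word to
the words used in its definition and extract the strongly connected
components, the largest of which plays the role of the Core.

    import networkx as nx

    # Hypothetical toy dictionary: word -> words used in its definition.
    definitions = {
        "good": ["not", "bad"],
        "bad": ["not", "good"],
        "not": ["negation"],
        "negation": ["not"],
        "dog": ["animal", "good"],
        "animal": ["thing"],
        "thing": ["thing"],
    }

    G = nx.DiGraph()
    for word, defn in definitions.items():
        for w in defn:
            G.add_edge(word, w)          # edge: word is defined using w

    sccs = sorted(nx.strongly_connected_components(G), key=len, reverse=True)
    print("largest SCC (Core-like):", sccs[0])
    print("all SCCs:", sccs)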
|
1308.2433 | Archiving the Relaxed Consistency Web | cs.DL cs.DB cs.SI | The historical, cultural, and intellectual importance of archiving the web
has been widely recognized. Today, all countries with high Internet penetration
rate have established high-profile archiving initiatives to crawl and archive
the fast-disappearing web content for long-term use. As web technologies
evolve, established web archiving techniques face challenges. This paper
focuses on the potential impact of the relaxed consistency web design on
crawler driven web archiving. Relaxed consistent websites may disseminate,
albeit ephemerally, inaccurate and even contradictory information. If captured
and preserved in the web archives as historical records, such information will
degrade the overall archival quality. To assess the extent of such quality
degradation, we build a simplified feed-following application and simulate its
operation with synthetic workloads. The results indicate that a non-trivial
portion of a relaxed consistency web archive may contain observable
inconsistency, and the inconsistency window may extend significantly longer
than that observed at the data store. We discuss the nature of such quality
degradation and propose a few possible remedies.
|
1308.2443 | Fighting Sample Degeneracy and Impoverishment in Particle Filters: A
Review of Intelligent Approaches | cs.AI stat.CO | During the last two decades there has been a growing interest in Particle
Filtering (PF). However, PF suffers from two long-standing problems that are
referred to as sample degeneracy and impoverishment. We are investigating
methods that are particularly efficient at Particle Distribution Optimization
(PDO) to fight sample degeneracy and impoverishment, with an emphasis on
intelligent choices. These approaches draw on methods such as Markov Chain
Monte Carlo, mean-shift algorithms, artificial intelligence algorithms
(e.g., Particle Swarm Optimization, Genetic Algorithm and Ant Colony
Optimization), machine learning approaches (e.g., clustering, splitting and
merging) and their hybrids, forming a coherent standpoint to enhance the
particle filter. The working mechanism, interrelationship, pros and cons of
these approaches are provided. In addition, approaches that are effective for
dealing with high-dimensionality are reviewed. While improving the filter
performance in terms of accuracy, robustness and convergence, it is noted that
advanced techniques employed in PF often cause additional computational
requirements that will in turn sacrifice the improvement obtained in real-life
filtering. This fact, hidden in pure simulations, deserves the attention of the
users and designers of new filters.
|
1308.2451 | What can Social Media teach us about protests? Analyzing the Chilean
2011-12 Student Movement's Network evolution through Twitter data | cs.SI cs.CY physics.soc-ph | Using social media data, especially Twitter, from the Chilean 2011-12 student
movement, we study their social network evolution over time to analyze how
leaders and participants self-organize and spread information. Based on a few
key events of the student movement's timeline, we visualize the student network
trajectory and analyze their structural and semantic properties. Therefore, in
this paper we: i) describe the basic network topology of the 2011-12 Chilean
massive student movement; ii) explore how the 180 key central nodes of the
movement are connected, self-organize and spread information. We contend that
this social media enabled massive movement is yet another manifestation of the
network era, which leverages agents' socio-technical networks, and thus
accelerates how agents coordinate, mobilize resources and enact collective
intelligence.
|
1308.2454 | Understanding the Benefits of Open Access in Femtocell Networks:
Stochastic Geometric Analysis in the Uplink | cs.NI cs.IT math.IT | We introduce a comprehensive analytical framework to compare between open
access and closed access in two-tier femtocell networks, with regard to uplink
interference and outage. Interference at both the macrocell and femtocell
levels is considered. A stochastic geometric approach is employed as the basis
for our analysis. We further derive sufficient conditions for open access and
closed access to outperform each other in terms of the outage probability,
leading to closed-form expressions to upper and lower bound the difference in
the targeted received power between the two access modes. Simulations are
conducted to validate the accuracy of the analytical model and the correctness
of the bounds.
|
1308.2462 | Wireless Information and Power Transfer in Multiuser OFDM Systems | cs.IT math.IT | In this paper, we study the optimal design for simultaneous wireless
information and power transfer (SWIPT) in downlink multiuser orthogonal
frequency division multiplexing (OFDM) systems. For information transmission,
we consider two types of multiple access schemes, namely, time division
multiple access (TDMA) and orthogonal frequency division multiple access
(OFDMA). At the receiver side, due to the practical limitation that circuits
for harvesting energy from radio signals are not yet able to decode the carried
information directly, each user applies either time switching (TS) or power
splitting (PS) to coordinate the energy harvesting (EH) and information
decoding (ID) processes. For the TDMA-based information transmission, we employ
TS at the receivers; for the OFDMA-based information transmission, we employ PS
at the receivers. Under the above two scenarios, we address the problem of
maximizing the weighted sum-rate over all users by varying the time/frequency
power allocation and either TS or PS ratio, subject to a minimum harvested
energy constraint on each user as well as a peak and/or total transmission
power constraint. For the TS scheme, by an appropriate variable transformation
the problem is reformulated as a convex problem, for which the optimal power
allocation and TS ratio are obtained by the Lagrange duality method. For the PS
scheme, we propose an iterative algorithm to optimize the power allocation,
subcarrier (SC) allocation and the PS ratio for each user. The performances of
the two schemes are compared numerically as well as analytically for the
special case of single-user setup. It is revealed that the peak power
constraint imposed on each OFDM SC as well as the number of users in the system
play a key role in the rate-energy performance comparison by the two proposed
schemes.
|
1308.2464 | Faster gradient descent and the efficient recovery of images | cs.CV cs.NA math.NA | Much recent attention has been devoted to gradient descent algorithms where
the steepest descent step size is replaced by a similar one from a previous
iteration or gets updated only once every second step, thus forming a {\em
faster gradient descent method}. For unconstrained convex quadratic
optimization these methods can converge much faster than steepest descent. But
the context of interest here is application to certain ill-posed inverse
problems, where the steepest descent method is known to have a smoothing,
regularizing effect, and where a strict optimization solution is not necessary.
Specifically, in this paper we examine the effect of replacing steepest
descent by a faster gradient descent algorithm in the practical context of
image deblurring and denoising tasks. We also propose several highly efficient
schemes for carrying out these tasks independently of the step size selection,
as well as a scheme for the case where both blur and significant noise are
present.
In the above context there are situations where many steepest descent steps
are required, thus building slowness into the solution procedure. Our general
conclusion regarding gradient descent methods is that in such cases the faster
gradient descent methods offer substantial advantages. In other situations
where no such slowness buildup arises the steepest descent method can still be
very effective.
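A small sketch (our own toy quadratic, not the paper's imaging setup)
contrasting steepest descent with the "faster" variant in which the exact
line-search step size is reused from the previous iteration; the lagged
variant often converges noticeably faster on such problems.

    import numpy as np

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(50, 50)); A = Q.T @ Q + 0.1 * np.eye(50)   # SPD matrix
    b = rng.normal(size=50)

    def gradient_descent(lag=False, n_iter=200):
        """Minimize 0.5 x^T A x - b^T x; `lag=True` reuses the previous step size."""
        x = np.zeros(50)
        alpha_prev = None
        for _ in range(n_iter):
            r = b - A @ x                       # negative gradient (residual)
            alpha_sd = (r @ r) / (r @ (A @ r))  # exact steepest-descent step
            alpha = alpha_prev if (lag and alpha_prev is not None) else alpha_sd
            x = x + alpha * r
            alpha_prev = alpha_sd
        return np.linalg.norm(b - A @ x)

    print("steepest descent residual :", gradient_descent(lag=False))
    print("lagged-step residual      :", gradient_descent(lag=True))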
|
1308.2505 | Stability Results for Simple Traffic Models Under PI-Regulator Control | math.OC cs.SY | This paper provides necessary conditions and sufficient conditions for the
(global) Input-to-State Stability property of simple uncertain
vehicular-traffic network models under the effect of a PI-regulator. Local
stability properties for vehicular-traffic networks under the effect of
PI-regulator control are studied as well: the region of attraction of a locally
exponentially stable equilibrium point is estimated by means of Lyapunov
functions. All obtained results are illustrated by means of simple examples.
|
1308.2509 | Coding and Compression of Three Dimensional Meshes by Planes | cs.CG cs.IT math.IT | The present paper suggests a new approach for geometric representation of 3D
spatial models and provides a new compression algorithm for 3D meshes, which is
based on the mathematical theory of convex geometry. In our approach we
represent a 3D convex polyhedron by means of the planes containing its faces.
This makes it unnecessary to consider topological aspects of the problem
(connectivity information among vertices and edges), since the planes
determine the polyhedron uniquely. Because the topological data is ignored,
this representation provides a high degree of compression. The plane-based
representation also compresses the geometrical data, because most of the faces
of the polyhedron are not triangles but polygons with more than three
vertices.
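A small sketch of the round trip suggested above (assuming scipy and a convex
point cloud; this is our own illustration, not the paper's algorithm): extract
the face planes of a convex polyhedron via a convex hull, keep only the plane
coefficients, and reconstruct the vertices as the intersection of the
corresponding half-spaces.

    import numpy as np
    from scipy.spatial import ConvexHull, HalfspaceIntersection

    # A convex polyhedron given by a point cloud; its hull is what we encode.
    rng = np.random.default_rng(0)
    points = rng.normal(size=(60, 3))
    hull = ConvexHull(points)

    # Planes-only representation: each facet is a row [a, b, c, d] with a*x+b*y+c*z+d <= 0.
    planes = hull.equations
    print("faces stored:", planes.shape[0], "hull vertices:", hull.vertices.size)

    # Reconstruction: intersect the half-spaces (an interior point is needed; the centroid works).
    interior = points[hull.vertices].mean(axis=0)
    reconstructed = HalfspaceIntersection(planes, interior).intersections
    print("reconstructed vertex count:", np.unique(np.round(reconstructed, 6), axis=0).shape[0])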
|
1308.2516 | Fluctuation in e-mail sizes weakens power-law correlations in e-mail
flow | physics.soc-ph cs.SI physics.data-an | Power-law correlations have been observed in packet flow over the Internet.
The possible origin of these correlations includes demand for Internet
services. We observe the demand for e-mail services in an organization, and
analyze correlations in the flow and the sequence of send requests using a
Detrended Fluctuation Analysis (DFA). The correlation in the flow is found to
be weaker than that in the send requests. Four types of artificial flow are
constructed to investigate the effects of fluctuations in e-mail sizes. As a
result, we find that the correlation in the flow originates from that in the
sequence of send requests. The strength of the power-law correlation decreases
as a function of the ratio of the standard deviation of e-mail sizes to their
average.
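A compact DFA sketch (our own generic implementation on synthetic data, not
the e-mail dataset): integrate the mean-subtracted series, split the profile
into windows, detrend each window with a linear fit, and read the scaling
exponent off the log-log slope of the fluctuation function.

    import numpy as np

    def dfa(x, scales):
        """Return fluctuation F(s) for each window size s (DFA-1, linear detrending)."""
        y = np.cumsum(x - np.mean(x))              # profile
        F = []
        for s in scales:
            n_win = len(y) // s
            rms = []
            for i in range(n_win):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                rms.append(np.mean((seg - trend) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(0)
    x = rng.normal(size=20_000)                    # uncorrelated noise: exponent ~ 0.5
    scales = np.unique(np.logspace(1, 3, 15).astype(int))
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("DFA exponent:", alpha)                  # ~0.5 for white noise, >0.5 for long-range correlations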
|
1308.2565 | A place-focused model for social networks in cities | cs.SI physics.soc-ph | The focused organization theory of social ties proposes that the structure of
human social networks can be arranged around extra-network foci, which can
include shared physical spaces such as homes, workplaces, restaurants, and so
on. Until now, this has been difficult to investigate on a large scale, but the
huge volume of data available from online location-based social services now
makes it possible to examine the friendships and mobility of many thousands of
people, and to investigate the relationship between meetings at places and the
structure of the social network. In this paper, we analyze a large dataset from
Foursquare, the most popular online location-based social network. We examine
the properties of city-based social networks, finding that they have common
structural properties, and that the category of place where two people meet has
a very strong influence on the likelihood of their being friends. Inspired by
these observations in combination with the focused organization theory, we then
present a model to generate city-level social networks, and show that it
produces networks with the structural properties seen in empirical data.
|
1308.2572 | Achieving Speedup in Aggregate Risk Analysis using Multiple GPUs | cs.DC cs.CE cs.DS q-fin.RM | Stochastic simulation techniques employed for the analysis of portfolios of
insurance/reinsurance risk, often referred to as `Aggregate Risk Analysis', can
benefit from exploiting state-of-the-art high-performance computing platforms.
In this paper, parallel methods to speed-up aggregate risk analysis for
supporting real-time pricing are explored. An algorithm for analysing aggregate
risk is proposed and implemented for multi-core CPUs and for many-core GPUs.
Experimental studies indicate that GPUs offer a feasible alternative solution
over traditional high-performance computing systems. A simulation of 1,000,000
trials with 1,000 catastrophic events per trial on a typical exposure set and
contract structure is performed in less than 5 seconds on a multiple GPU
platform. The key result is that the multiple GPU implementation can be used in
real-time pricing scenarios as it is approximately 77 times faster than the
sequential counterpart implemented on a CPU.
|
1308.2591 | Alpha current flow betweenness centrality | cs.SI physics.soc-ph | A class of centrality measures called betweenness centralities reflects
degree of participation of edges or nodes in communication between different
parts of the network. The original shortest-path betweenness centrality is
based on counting shortest paths which go through a node or an edge. One of
shortcomings of the shortest-path betweenness centrality is that it ignores the
paths that might be one or two steps longer than the shortest paths, while the
edges on such paths can be important for communication processes in the
network. To rectify this shortcoming a current flow betweenness centrality has
been proposed. Similarly to the shortest-path betweenness, it has prohibitive
complexity for large-size networks. In the present work we propose two
regularizations of
the current flow betweenness centrality, \alpha-current flow betweenness and
truncated \alpha-current flow betweenness, which can be computed fast and
correlate well with the original current flow betweenness.
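The original current flow betweenness that the proposed regularizations
approximate is available in standard graph libraries; a short sketch on a toy
graph follows (the alpha variants themselves are specific to the paper and are
not reproduced here).

    import networkx as nx

    G = nx.karate_club_graph()   # small undirected example graph

    # Shortest-path betweenness vs. current-flow (random-walk) betweenness.
    sp_btw = nx.betweenness_centrality(G)
    cf_btw = nx.current_flow_betweenness_centrality(G)

    top = sorted(cf_btw, key=cf_btw.get, reverse=True)[:5]
    for v in top:
        print(v, round(cf_btw[v], 3), round(sp_btw[v], 3))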
|
1308.2592 | Sparse Command Generator for Remote Control | cs.SY cs.IT math.IT math.OC | In this article, we consider remote-controlled systems, where the command
generator and the controlled object are connected with a bandwidth-limited
communication link. In the remote-controlled systems, efficient representation
of control commands is one of the crucial issues because of the bandwidth
limitations of the link. We propose a new representation method for control
commands based on compressed sensing. In the proposed method, compressed
sensing reduces the number of bits in each control signal by representing it as
a sparse vector. The compressed sensing problem is solved by an L1-L2
optimization, which can be effectively implemented with an iterative shrinkage
algorithm. A design example also shows the effectiveness of the proposed
method.
|
1308.2600 | An Enhanced Time Space Priority Scheme to Manage QoS for Multimedia
Flows transmitted to an end user in HSDPA Network | cs.NI cs.MM cs.SY | When different type of packets with different needs of Quality of Service
(QoS) requirements share the same network resources, it became important to use
queue management and scheduling schemes in order to maintain perceived quality
at the end users at an acceptable level. Many schemes have been studied in the
literature, these schemes use time priority (to maintain QoS for Real Time (RT)
packets) and/or space priority (to maintain QoS for Non Real Time (NRT)
packets). In this paper, we study and show the drawback of a combined time and
space priority (TSP) scheme used to manage QoS for RT and NRT packets intended
for an end user in High Speed Downlink Packet Access (HSDPA) cell, and we
propose an enhanced scheme (Enhanced Basic-TSP scheme) to improve QoS
relatively to the RT packets, and to exploit efficiently the network resources.
A mathematical model for the EB-TSP scheme is done, and numerical results show
the positive impact of this scheme.
|
1308.2654 | Local image registration a comparison for bilateral registration
mammography | cs.CV | Early tumor detection is key in reducing the number of breast cancer death
and screening mammography is one of the most widely available and reliable
method for early detection. However, it is difficult for the radiologist to
process with the same attention each case, due the large amount of images to be
read. Computer aided detection (CADe) systems improve tumor detection rate; but
the current efficiency of these systems is not yet adequate and the correct
interpretation of CADe outputs requires expert human intervention. Computer
aided diagnosis systems (CADx) are being designed to improve cancer diagnosis
accuracy, but they have not been efficiently applied in breast cancer. CADx
efficiency can be enhanced by considering the natural mirror symmetry between
the right and left breast. The objective of this work is to evaluate
co-registration algorithms for the accurate alignment of the left to right
breast for CADx enhancement. A set of mammograms was artificially altered to
create a ground truth set to evaluate the registration efficiency of the
DEMONS and SPLINE deformable registration algorithms. The registration
accuracy was evaluated using mean square errors, mutual information and
correlation. The results on the 132 images proved that the SPLINE deformable
registration outperforms DEMONS on mammography images.
|
1308.2655 | KL-based Control of the Learning Schedule for Surrogate Black-Box
Optimization | cs.LG cs.AI stat.ML | This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. The known CMA-ES weakness is its sample complexity, the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence of the current search distribution and the training distribution of
the former surrogate model. The experimental validation of the proposed
approach shows significant performance gains on a comprehensive set of
ill-conditioned benchmark problems, compared to the best state of the art
including the quasi-Newton high-precision BFGS method.
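The trigger described above relies on the Kullback-Leibler divergence between
two multivariate Gaussians (the current search distribution and the
distribution the surrogate was trained on); a short sketch of that quantity
and a re-learning test follows, with an assumed threshold value that is not
taken from the paper.

    import numpy as np

    def kl_gaussians(mu0, cov0, mu1, cov1):
        """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
        k = mu0.size
        cov1_inv = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(cov1_inv @ cov0)
                      + diff @ cov1_inv @ diff
                      - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    # Current CMA-ES search distribution vs. the surrogate's training distribution.
    mu_search, cov_search = np.array([0.4, -0.1]), np.diag([0.05, 0.02])
    mu_train,  cov_train  = np.array([0.0,  0.0]), np.diag([0.10, 0.10])

    kl = kl_gaussians(mu_search, cov_search, mu_train, cov_train)
    RELEARN_THRESHOLD = 0.5          # hypothetical value, not from the paper
    print("KL =", kl, "-> relearn surrogate:", kl > RELEARN_THRESHOLD)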
|
1308.2696 | B(eo)W(u)LF: Facilitating recurrence analysis on multi-level language | cs.CL | Discourse analysis may seek to characterize not only the overall composition
of a given text but also the dynamic patterns within the data. This technical
report introduces a data format intended to facilitate multi-level
investigations, which we call the by-word long-form or B(eo)W(u)LF. Inspired by
the long-form data format required for mixed-effects modeling, B(eo)W(u)LF
structures linguistic data into an expanded matrix encoding any number of
researcher-specified markers, making it ideal for recurrence-based analyses.
While we do not necessarily claim to be the first to use methods along these
lines, we have created a series of tools utilizing Python and MATLAB to enable
such discourse analyses and demonstrate them using 319 lines of the Old English
epic poem, Beowulf, translated into modern English.
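A tiny sketch of what a by-word long-form matrix might look like (our own
illustration with pandas, made-up marker columns, and placeholder lines rather
than the authors' exact schema or corpus):

    import pandas as pd

    lines = [
        "Hwaet we Gardena in geardagum",        # illustrative lines, not the real corpus
        "theodcyninga thrym gefrunon",
    ]

    rows = []
    for line_id, line in enumerate(lines, start=1):
        for word_id, word in enumerate(line.split(), start=1):
            rows.append({
                "line": line_id,            # position markers
                "word_in_line": word_id,
                "word": word.lower(),
                "n_chars": len(word),       # example researcher-specified markers
                "is_long": len(word) > 6,
            })

    bwlf = pd.DataFrame(rows)
    print(bwlf.head(8))                     # one row per word token, ready for recurrence analysis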
|