| id | title | categories | abstract |
|---|---|---|---|
1305.5913 | Performance of Opportunistic Fixed Gain Bidirectional Relaying With
Outdated CSI | cs.IT math.IT | This paper studies the impact of using outdated channel state information for
relay selection on the performance of a network where two sources communicate
with each other via fixed-gain amplify-and-forward relays. For a Rayleigh-faded
channel, closed-form expressions for the outage probability, moment generating
function and symbol error rate are derived. Simulation results are also
presented to corroborate the derived analytical results. It is shown that
adding relays does not improve the performance if the channel is substantially
outdated. Furthermore, relay location is also taken into consideration and it
is shown that the performance can be improved by placing the relay closer to
the source whose channel is more outdated.
|
1305.5918 | Reduce Meaningless Words for Joint Chinese Word Segmentation and
Part-of-speech Tagging | cs.CL | Conventional statistics-based methods for joint Chinese word segmentation and
part-of-speech tagging (S&T) have the generalization ability to recognize new words
that do not appear in the training data. An undesirable side effect is that a
number of meaningless words will be incorrectly created. We propose an
effective and efficient framework for S&T that introduces features to
significantly reduce the generation of meaningless words. A general lexicon, Wikipedia,
and a large-scale raw corpus of 200 billion characters are used to generate
word-based features that capture wordhood. The word-lattice-based framework consists
of a character-based model and a word-based model in order to employ our
word-based features. Experiments on Penn Chinese treebank 5 show that this
method has a 62.9% reduction of meaningless word generation in comparison with
the baseline. As a result, the F1 measure for segmentation is increased to
0.984.
|
1305.5950 | Agent Based Intelligent Alert System for Smart-Phones | cs.HC cs.CY cs.MA | The paper deals with the design of an agent which modifies and enhances the
various alert systems in smartphones. The agent's actions include
sorting notifications in line with human thinking, helping the user to hold a
safe conversation, assisting in tracking the reachability status of a
caller when needed, informing the user of notifications in situations
such as a drained battery, and alerting the user smartly in situations
such as sleeping. The agent uses information gathered from a survey to modify
the existing methods of alerts and produce alerts which abide by the human
cognitive responses.
|
1305.5959 | ArcLink: Optimization Techniques to Build and Retrieve the Temporal Web
Graph | cs.IR | Archiving the web is socially and culturally critical, but presents problems
of scale. The Internet Archive's Wayback Machine can replay captured web pages
as they existed at a certain point in time, but it has limited ability to
provide extensive content and structural metadata about the web graph. While
the live web has developed a rich ecosystem of APIs to facilitate web
applications (e.g., APIs from Google and Twitter), the web archiving community
has not yet broadly implemented this level of access.
We present ArcLink, a proof-of-concept system that complements open source
Wayback Machine installations by optimizing the construction, storage, and
access to the temporal web graph. We divide the web graph construction into
four stages (filtering, extraction, storage, and access) and explore
optimization for each stage. ArcLink extends the current Web archive interfaces
to return content and structural metadata for each URI. We show how this API
can be applied to such applications as retrieving inlinks, outlinks,
anchortext, and PageRank.
|
1305.5960 | Coding for Computing Irreducible Markovian Functions of Sources with
Memory | cs.IT math.IT | One open problem in source coding is to characterize the limits of
representing losslessly a non-identity discrete function of the data encoded
independently by the encoders of several correlated sources with memory. This
paper investigates this problem under Markovian conditions, namely either the
sources or the functions considered are Markovian. We propose using linear
mappings over finite rings as encoders. If the function considered admits
certain polynomial structure, the linear encoders can make use of this
structure to establish "implicit collaboration" and boost the performance. In
fact, this approach applies universally to any scenario (arbitrary function),
because every discrete function admits a polynomial representation of the required
format.
The paper contains several useful discoveries. First, a linear
encoder over a non-field ring can be equally optimal for compressing data
generated by an irreducible Markov source. Second, regarding the
function-encoding problem above, there are infinitely many circumstances in
which a linear encoder over a non-field ring strictly outperforms its field
counterpart. More precisely, the set of coding rates achieved by linear
encoders over certain non-field rings is strictly larger than the one achieved
by the field version, regardless of which finite field is considered; in
this sense, linear coding over a finite field is not optimal. In addition, for
certain scenarios where the sources are not ergodic, our
ring approach still offers a solution.
|
1305.5970 | The Private Classical Capacity of a Partially Degradable Quantum Channel | quant-ph cs.IT math.IT | For a partially degradable (PD) channel, the channel output state can be used
to simulate the degraded environment state. The quantum capacity of a PD
channel has been proven to be additive. Here, we show that the private
classical capacity of arbitrary dimensional PD channels is equal to the quantum
capacity of the channel and also single-letterizes. We prove that higher rates
of private classical communication can be achieved over a PD channel in
comparison to standard degradable channels.
|
1305.5981 | Query Representation with Global Consistency on User Click Graph | cs.IR cs.HC cs.NI | Extensive research has been conducted on query log analysis. A query log is
generally represented as a bipartite graph on a query set and a URL set. Most
of the traditional methods used the raw click frequency to weigh the link
between a query and a URL on the click graph. In order to address the
disadvantages of raw click frequency, researchers proposed the entropy-biased
model, which incorporates raw click frequency with inverse query frequency of
the URL as the weighting scheme for query representation. In this paper, we
observe that the inverse query frequency can be considered a global property of
the URL on the click graph, which is more informative than raw click frequency,
which can be considered a local property of the URL. Based on this insight, we
develop the global consistency model for query representation, which utilizes
the click frequency and the inverse query frequency of a URL in a consistent
manner. Furthermore, we propose a new scheme called inverse URL frequency as an
effective way to capture the global property of a URL. Experiments have been
conducted on the AOL search engine log data. The result shows that our global
consistency model achieved better performance than the current models.
|
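The weighting idea in the abstract above can be sketched on a toy click graph. The abstract does not give exact formulas, so this sketch assumes a TF-IDF-style form: raw click frequency (a local property of the URL) discounted by the inverse query frequency of the URL (a global property on the click graph). All queries, URLs, and counts are made up for illustration.

```python
import math
from collections import defaultdict

# Toy click log: (query, url, raw click frequency) -- illustrative data only.
clicks = [
    ("jaguar speed",  "wikipedia.org/Jaguar", 40),
    ("jaguar car",    "wikipedia.org/Jaguar", 10),
    ("big cats",      "wikipedia.org/Jaguar",  5),
    ("jaguar car",    "jaguar.com",           50),
    ("luxury sedans", "jaguar.com",           20),
]

by_url = defaultdict(set)                 # queries that click each URL
for q, u, _ in clicks:
    by_url[u].add(q)
n_queries = len({q for q, _, _ in clicks})

def weight(q, u, c):
    # Raw click frequency discounted by the URL's inverse query frequency:
    # a URL clicked from many distinct queries is down-weighted.
    return c * math.log(n_queries / len(by_url[u]))

vec = {(q, u): weight(q, u, c) for q, u, c in clicks}
```

Here the Wikipedia page, clicked from three of the four queries, is discounted relative to the more query-specific URL, which is the intended effect of using a global property of the URL.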
1305.5992 | Strong Coordination over a Line Network | cs.IT math.IT | We study the problem of strong coordination in a three-terminal line network,
in which agents use common randomness and communicate over a line network to
ensure that their actions follow a prescribed behavior, modeled by a target
joint distribution of actions. We provide inner and outer bounds to the
coordination capacity region, and show that these bounds are partially optimal.
We leverage this characterization to develop insight into the interplay between
communication and coordination. Specifically, we show that common randomness
helps to achieve optimal communication rates between agents, and that matching
the network topology to the behavior structure may reduce inter-agent
communication rates.
|
1305.6000 | Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation | math.ST cs.CR cs.IT math.IT stat.TH | We provide a detailed study of the estimation of probability
distributions---discrete and continuous---in a stringent setting in which data
is kept private even from the statistician. We give sharp minimax rates of
convergence for estimation in these locally private settings, exhibiting
fundamental tradeoffs between privacy and convergence rate, as well as
providing tools to allow movement along the privacy-statistical efficiency
continuum. One of the consequences of our results is that Warner's classical
work on randomized response is an optimal way to perform survey sampling while
maintaining privacy of the respondents.
|
1305.6003 | Exploiting Self-Interference Suppression for Improved Spectrum
Awareness/Efficiency in Cognitive Radio Systems | cs.NI cs.IT math.IT math.OC | Inspired by recent developments in full-duplex communications, we propose and
study new modes of operation for cognitive radios with the goal of achieving
improved primary user (PU) detection and/or secondary user (SU) throughput.
Specifically, we consider an opportunistic PU/SU setting in which the SU is
equipped with partial/complete self-interference suppression (SIS), enabling it
to transmit and receive/sense at the same time. Following a brief sensing
period, the SU can operate in either simultaneous transmit-and-sense (TS) mode
or simultaneous transmit-and-receive (TR) mode. We analytically study the
performance metrics for the two modes, namely the detection and false-alarm
probabilities, the PU outage probability, and the SU throughput. From this
analysis, we evaluate the sensing-throughput tradeoff for both modes. Our
objective is to find the optimal sensing and transmission durations for the SU
that maximize its throughput subject to a given outage probability. We also
explore the spectrum awareness/efficiency tradeoff that arises from the two
modes by determining an efficient adaptive strategy for the SU link. This
strategy has a threshold structure, which depends on the PU traffic load. Our
study considers both perfect and imperfect sensing as well as perfect/imperfect
SIS.
|
1305.6012 | Cognitive Beamforming for Multiple Secondary Data Streams With
Individual SNR Constraints | cs.IT math.IT | In this paper, we consider cognitive beamforming for multiple secondary data
streams subject to individual signal-to-noise ratio (SNR) requirements for each
secondary data stream. In such a cognitive radio system, the secondary user is
permitted to use the spectrum allocated to the primary user as long as the
caused interference at the primary receiver is tolerable. With both secondary
SNR constraint and primary interference power constraint, we aim to minimize
the secondary transmit power consumption. By exploiting the individual SNR
requirements, we formulate this cognitive beamforming problem as an
optimization problem on the Stiefel manifold. Both zero forcing beamforming
(ZFB) and nonzero forcing beamforming (NFB) are considered. For the ZFB case,
we derive a closed-form beamforming solution. For the NFB case, we prove that
the strong duality holds for the nonconvex primal problem and thus the optimal
solution can be easily obtained by solving the dual problem. Finally, numerical
results are presented to illustrate the performance of the proposed cognitive
beamforming solutions.
|
1305.6021 | On the $\ell_1$-Norm Invariant Convex k-Sparse Decomposition of Signals | cs.IT math.IT | Inspired by an interesting idea of Cai and Zhang, we formulate and prove the
convex $k$-sparse decomposition of vectors which is invariant with respect to
$\ell_1$ norm. This result fits well in discussing compressed sensing problems
under RIP, but we believe it also has independent interest. As an application,
a simple derivation of the RIP recovery condition $\delta_k+\theta_{k,k} < 1$
is presented.
|
1305.6037 | Semi-bounded Rationality: A model for decision making | cs.AI q-fin.GN | In this paper the theory of semi-bounded rationality is proposed as an
extension of the theory of bounded rationality. In particular, it is proposed
that a decision making process involves two components and these are the
correlation machine, which estimates missing values, and the causal machine,
which relates cause to effect. Rational decision making relies on
information that is almost always imperfect and incomplete, processed by an
intelligent machine that, if it is a human being, is inconsistent. In the
theory of bounded rationality, the decision is made
despite the fact that the information is incomplete and
imperfect and the human brain is inconsistent, and is thus
taken within the bounds of these limitations. In the theory of
semi-bounded rationality, signal processing is used to filter noise and
outliers in the information and the correlation machine is applied to complete
the missing information and artificial intelligence is used to make more
consistent decisions.
|
1305.6046 | Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm | cs.LG cs.CE | Feature Selection (FS) has become the focus of much research on decision
support systems areas for which data sets with tremendous number of variables
are analyzed. In this paper we present a new method for the diagnosis of
Coronary Artery Disease (CAD) founded on Genetic Algorithm (GA) wrapped Naive
Bayes (NB) based FS. The CAD dataset contains two classes defined by
13 features. In the GA-NB algorithm, the GA generates in each iteration a subset of
attributes that is then evaluated using the NB classifier in the second step of the
selection procedure. The final attribute set contains the most relevant
features and increases the accuracy. The algorithm produces
85.50% classification accuracy in the diagnosis of CAD. Its performance
is then compared with Support Vector Machine (SVM),
MultiLayer Perceptron (MLP) and C4.5 decision tree algorithms, whose
classification accuracies are 83.5%, 83.16% and
80.85%, respectively. The GA-wrapped NB algorithm is also compared
with other FS algorithms. The obtained results show very promising
outcomes for the diagnosis of CAD.
|
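The GA-wrapper selection loop described above can be sketched as follows. This is a generic illustration, not the authors' implementation: a synthetic fitness function stands in for the Naive Bayes classification accuracy, and all parameters (population size, number of generations) are illustrative.

```python
import random

def ga_wrapper_fs(evaluate, n_features=13, pop_size=20, generations=30, seed=0):
    """Minimal GA wrapper for feature selection: each individual is a bitmask
    over the features; fitness is the accuracy reported by `evaluate`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        elite = scored[: pop_size // 2]           # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)         # point mutation
            child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=evaluate)

# Synthetic stand-in for classifier accuracy: the first 5 features are
# informative, the rest only add noise.
def toy_accuracy(mask):
    return 0.6 + 0.05 * sum(mask[:5]) - 0.01 * sum(mask[5:])

best = ga_wrapper_fs(toy_accuracy)
```

In the paper's setting, `toy_accuracy` would be replaced by cross-validated Naive Bayes accuracy on the CAD dataset.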
1305.6091 | Robust power allocation for energy-efficient location aware networks | cs.IT math.IT | In wireless location-aware networks, mobile nodes (agents) typically obtain
their positions through ranging with respect to nodes with known positions
(anchors). Transmit power allocation not only affects network lifetime,
throughput, and interference, but also determines localization accuracy. In
this paper, we present an optimization framework for robust power allocation in
network localization to tackle imperfect knowledge of network parameters. In
particular, we formulate power allocation problems to minimize the squared
position error bound (SPEB) and the maximum directional position error bound
(mDPEB), respectively, for a given power budget. We show that such formulations
can be efficiently solved via conic programming. Moreover, we design an
efficient power allocation scheme that allows distributed computations among
agents. The simulation results show that the proposed schemes significantly
outperform uniform power allocation, and the robust schemes outperform their
non-robust counterparts when the network parameters are subject to uncertainty.
|
1305.6126 | Problems on q-Analogs in Coding Theory | cs.IT math.CO math.IT | The interest in $q$-analogs of codes and designs has been increased in the
last few years as a consequence of their new application in error-correction
for random network coding. There are many interesting theoretical, algebraic,
and combinatorial coding problems concerning these q-analogs which remained
unsolved. The first goal of this paper is to make a short summary of the large
amount of research which was done in the area mainly in the last few years and
to provide most of the relevant references. The second goal of this paper is to
present one hundred open questions and problems for future research, whose
solution will advance the knowledge in this area. The third goal of this paper
is to present and initiate some directions for solving some of these problems.
|
1305.6129 | Information-Theoretic Approach to Efficient Adaptive Path Planning for
Mobile Robotic Environmental Sensing | cs.LG cs.AI cs.MA cs.RO | Recent research in robot exploration and mapping has focused on sampling
environmental hotspot fields. This exploration task is formalized by Low,
Dolan, and Khosla (2008) in a sequential decision-theoretic planning under
uncertainty framework called MASP. The time complexity of solving MASP
approximately depends on the map resolution, which limits its use in
large-scale, high-resolution exploration and mapping. To alleviate this
computational difficulty, this paper presents an information-theoretic approach
to MASP (iMASP) for efficient adaptive path planning; by reformulating the
cost-minimizing iMASP as a reward-maximizing problem, its time complexity
becomes independent of map resolution and is less sensitive to increasing robot
team size as demonstrated both theoretically and empirically. Using the
reward-maximizing dual, we derive a novel adaptive variant of maximum entropy
sampling, thus improving the induced exploration policy performance. It also
allows us to establish theoretical bounds quantifying the performance advantage
of optimal adaptive over non-adaptive policies and the performance quality of
approximately optimal vs. optimal adaptive policies. We show analytically and
empirically the superior performance of iMASP-based policies for sampling the
log-Gaussian process to that of policies for the widely-used Gaussian process
in mapping the hotspot field. Lastly, we provide sufficient conditions that,
when met, guarantee adaptivity has no benefit under an assumed environment
model.
|
1305.6143 | Fast and accurate sentiment classification using an enhanced Naive Bayes
model | cs.CL cs.IR cs.LG | We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
|
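The combination of methods named above (negation handling, word n-grams, Laplace-smoothed Naive Bayes) can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's exact pipeline; the training sentences and the `NOT_` prefixing convention are assumptions for the example.

```python
from collections import Counter
import math

NEGATORS = {"not", "no", "never"}

def featurize(text):
    """Unigrams + word bigrams, with simple negation handling: tokens that
    follow a negator are prefixed with 'NOT_'."""
    toks, feats, negate = text.lower().split(), [], False
    for i, t in enumerate(toks):
        feats.append("NOT_" + t if negate else t)
        if t in NEGATORS:
            negate = True
        if i + 1 < len(toks):
            feats.append(t + "_" + toks[i + 1])   # word bigram
    return feats

class NaiveBayes:
    def __init__(self):
        self.counts = {"pos": Counter(), "neg": Counter()}
        self.docs = Counter()
    def train(self, text, label):
        self.docs[label] += 1
        self.counts[label].update(featurize(text))
    def predict(self, text):
        vocab = len(set(self.counts["pos"]) | set(self.counts["neg"])) or 1
        def logp(label):
            total = sum(self.counts[label].values())
            lp = math.log(self.docs[label] / sum(self.docs.values()))
            for f in featurize(text):   # Laplace-smoothed likelihoods
                lp += math.log((self.counts[label][f] + 1) / (total + vocab))
            return lp
        return max(("pos", "neg"), key=logp)

nb = NaiveBayes()
nb.train("a great and wonderful movie", "pos")
nb.train("a terrible boring movie", "neg")
nb.train("not a good movie at all", "neg")
print(nb.predict("great wonderful film"))   # prints: pos
```

Training and prediction are both linear in the number of tokens, which is the complexity property the abstract highlights.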
1305.6146 | Streamforce: outsourcing access control enforcement for stream data to
the clouds | cs.DB cs.CR | As tremendous amount of data being generated everyday from human activity and
from devices equipped with sensing capabilities, cloud computing emerges as a
scalable and cost-effective platform to store and manage the data. While
benefits of cloud computing are numerous, security concerns arising when data
and computation are outsourced to a third party still hinder the complete
movement to the cloud. In this paper, we focus on the problem of data privacy
on the cloud, particularly on access controls over stream data. The nature of
stream data and the complexity of sharing data make access control a more
challenging issue than in traditional archival databases. We present
Streamforce - a system allowing data owners to securely outsource their data to
the cloud. The owner specifies fine-grained policies which are enforced by the
cloud. The latter performs most of the heavy computations, while learning
nothing about the data. To this end, we employ a number of encryption schemes,
including deterministic encryption, proxy-based attribute-based encryption and
sliding-window encryption. In Streamforce, access control policies are modeled
as secure continuous queries, which entails minimal changes to existing stream
processing engines, and allows for easy expression of a wide-range of policies.
In particular, Streamforce comes with a number of secure query operators
including Map, Filter, Join and Aggregate. Finally, we implement Streamforce
over an open source stream processing engine (Esper) and evaluate its
performance on a cloud platform. The results demonstrate practical performance
for many real-world applications, and although the security overhead is
visible, Streamforce is highly scalable.
|
1305.6151 | Effects of Channel Aging in Massive MIMO Systems | cs.IT math.IT | MIMO communication may provide high spectral efficiency through the
deployment of a very large number of antenna elements at the base stations. The
gains from massive MIMO communication come from the use of multi-user MIMO on
the uplink and downlink, but with a large excess of antennas at the base
station compared to the number of served users. Initial work on massive MIMO
did not fully address several practical issues associated with its deployment.
This paper considers the impact of channel aging on the performance of massive
MIMO systems. The effects of channel variation are characterized as a function
of different system parameters assuming a simple model for the channel time
variations at the transmitter. Channel prediction is proposed to overcome
channel aging effects. The analytical results on aging show how capacity is
lost due to time variation in the channel. Numerical results in a multicell
network show that massive MIMO works even with some channel variation and that
channel prediction could partially overcome channel aging effects.
|
1305.6161 | Power Control for D2D Underlaid Cellular Networks: Modeling, Algorithms
and Analysis | cs.IT math.IT | This paper considers a device-to-device (D2D) underlaid cellular network
where an uplink cellular user communicates with the base station while multiple
direct D2D links share the uplink spectrum. This paper proposes a random
network model based on stochastic geometry and develops centralized and
distributed power control algorithms. The goal of the proposed power control
algorithms is two-fold: ensure the cellular users have sufficient coverage
probability by limiting the interference created by underlaid D2D users, while
also attempting to support as many D2D links as possible. For the distributed
power control method, expressions for the coverage probabilities of cellular
and D2D links are derived and a lower bound on the sum rate of the D2D links is
provided. The analysis reveals the impact of key system parameters on the
network performance. For example, the bottleneck of D2D underlaid cellular
networks is the cross-tier interference between D2D links and the cellular
user, not the D2D intra-tier interference. Numerical results show the gains of
the proposed power control algorithms and accuracy of the analysis.
|
1305.6187 | Improved Branch-and-Bound for Low Autocorrelation Binary Sequences | cs.AI | The Low Autocorrelation Binary Sequence problem has applications in
telecommunications, is of theoretical interest to physicists, and has inspired
many optimisation researchers. Metaheuristics for the problem have progressed
greatly in recent years but complete search has not progressed since a
branch-and-bound method of 1996. In this paper we find four ways of improving
branch-and-bound, leading to a tighter relaxation, faster convergence to
optimality, and better empirical scalability.
|
1305.6204 | Direct coupling information measure from non-uniform embedding | physics.data-an cs.IT math.IT nlin.CD stat.ME | A measure to estimate the direct and directional coupling in multivariate
time series is proposed. The measure is an extension of a recently published
measure of conditional Mutual Information from Mixed Embedding (MIME) for
bivariate time series. In the proposed measure of Partial MIME (PMIME), the
embedding is on all observed variables, and it is optimized in explaining the
response variable. It is shown that PMIME correctly detects direct coupling
and outperforms (linear) conditional Granger causality and partial
transfer entropy. We demonstrate that PMIME does not rely on significance tests
or embedding parameters, and that the number of observed variables has no effect on
its statistical accuracy; it may only slow the computations. The importance of
these points is shown in simulations and in an application to epileptic
multi-channel scalp EEG.
|
1305.6211 | Development of a Hindi Lemmatizer | cs.CL | We live in a translingual society, in order to communicate with people from
different parts of the world we need to have an expertise in their respective
languages. Learning all these languages is not at all possible; therefore we
need a mechanism which can do this task for us. Machine translators have
emerged as a tool which can perform this task. In order to develop a machine
translator we need to develop several different rules. The very first module
that comes in the machine translation pipeline is morphological analysis. Stemming
and lemmatization come under morphological analysis. In this paper we have
created a lemmatizer that generates rules for removing affixes, along with
rules for restoring a proper root word.
|
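The affix-stripping-plus-add-back idea can be sketched as follows. The rules below are toy English examples, purely illustrative; the paper's actual rule set targets Hindi morphology and is not reproduced here.

```python
# Minimal rule-based lemmatizer sketch: strip a known suffix, then apply an
# add-back string to restore a valid root word. Rules are (suffix, add_back)
# pairs, tried in order; the length guard avoids over-stripping short words.
SUFFIX_RULES = [
    ("ies", "y"),   # studies -> study
    ("ing", ""),    # walking -> walk
    ("ed", ""),     # talked -> talk
    ("s", ""),      # cats -> cat
]

def lemmatize(word):
    for suffix, add_back in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + add_back
    return word

print(lemmatize("studies"))  # prints: study
```

A real Hindi lemmatizer would pair each Devanagari suffix with its own add-back rule in the same (suffix, add_back) form.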
1305.6213 | Some results on a $\chi$-divergence, an~extended~Fisher information
and~generalized~Cram\'er-Rao inequalities | cs.IT math.IT stat.ML | We propose a modified $\chi^{\beta}$-divergence, give some of its properties,
and show that this leads to the definition of a generalized Fisher information.
We give generalized Cram\'er-Rao inequalities, involving this Fisher
information, an extension of the Fisher information matrix, and arbitrary norms
and power of the estimation error. In the case of a location parameter, we
obtain new characterizations of the generalized $q$-Gaussians, for instance as
the distribution with a given moment that minimizes the generalized Fisher
information. Finally we indicate how the generalized Fisher information can
lead to new uncertainty relations.
|
1305.6215 | On some interrelations of generalized $q$-entropies and a generalized
Fisher information, including a Cram\'er-Rao inequality | cs.IT cond-mat.other math.IT stat.ML | In this communication, we describe some interrelations between generalized
$q$-entropies and a generalized version of Fisher information. In information
theory, the de Bruijn identity links the Fisher information and the derivative
of the entropy. We show that this identity can be extended to generalized
versions of entropy and Fisher information. More precisely, a generalized
Fisher information naturally pops up in the expression of the derivative of the
Tsallis entropy. This generalized Fisher information also appears as a special
case of a generalized Fisher information for estimation problems. Indeed, we
derive here a new Cram\'er-Rao inequality for the estimation of a parameter,
which involves a generalized form of Fisher information. This generalized
Fisher information reduces to the standard Fisher information as a particular
case. In the case of a translation parameter, the general Cram\'er-Rao
inequality leads to an inequality for distributions which is saturated by
generalized $q$-Gaussian distributions. These generalized $q$-Gaussians are
important in several areas of physics and mathematics. They are known to
maximize the $q$-entropies subject to a moment constraint. The Cram\'er-Rao
inequality shows that the generalized $q$-Gaussians also minimize the
generalized Fisher information among distributions with a fixed moment.
Similarly, the generalized $q$-Gaussians also minimize the generalized Fisher
information among distributions with a given $q$-entropy.
|
1305.6216 | Resource Efficient LDPC Decoders for Multimedia Communication | cs.IT cs.MM math.IT | Achieving high image quality is an important aspect in an increasing number
of wireless multimedia applications. These applications require resource
efficient error correction hardware to detect and correct errors introduced by
the communication channel. This paper presents an innovative flexible
architecture for error correction using Low-Density Parity-Check (LDPC) codes.
The proposed partially-parallel decoder architecture utilizes a novel code
construction technique based on multi-level Hierarchical Quasi-Cyclic (HQC)
matrix with innovative layering of random sub-matrices. Simulation of a
high-level MATLAB model shows that the proposed HQC matrices have bit error
rate (BER) performance close to that of unstructured random matrices. The
proposed decoder has been implemented on FPGA. It is very resource efficient
and provides very high throughput compared to other decoders reported to date.
Performance evaluation of the decoder has been carried out by transmitting JPEG
images over an AWGN channel and comparing the quality of the reconstructed
images with those from other decoders.
|
1305.6228 | Detecting hierarchical and overlapping network communities using locally
optimal modularity changes | physics.soc-ph cond-mat.dis-nn cs.SI | Agglomerative clustering is a well established strategy for identifying
communities in networks. Communities are successively merged into larger
communities, coarsening a network of actors into a more manageable network of
communities. The order in which merges should occur is not in general clear,
necessitating heuristics for selecting pairs of communities to merge. We
describe a hierarchical clustering algorithm based on a local optimality
property. For each edge in the network, we associate the modularity change for
merging the communities it links. For each community vertex, we call the
preferred edge that edge for which the modularity change is maximal. When an
edge is preferred by both vertices that it links, it appears to be the optimal
choice from the local viewpoint. We use the locally optimal edges to define the
algorithm: simultaneously merge all pairs of communities that are connected by
locally optimal edges that would increase the modularity, redetermining the
locally optimal edges after each step and continuing so long as the modularity
can be further increased. We apply the algorithm to model and empirical
networks, demonstrating that it can efficiently produce high-quality community
solutions. We relate the performance and implementation details to the
structure of the resulting community hierarchies. We additionally consider a
complementary local clustering algorithm, describing how to identify
overlapping communities based on the local optimality condition.
|
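One merge step of the scheme described above (preferred edges, mutual preference, merge only if modularity increases) can be sketched on singleton communities. This is a toy sketch under the standard modularity-change formula for merging two communities, not the authors' full hierarchical implementation.

```python
from collections import defaultdict

def locally_optimal_merge_step(edges):
    """Compute the modularity change dQ for merging the two communities joined
    by each inter-community edge, find each community's preferred (max-dQ)
    edge, and merge exactly those pairs that prefer each other with dQ > 0.
    Communities start as singletons here."""
    m = len(edges)
    comm = {}                       # node -> community id (singletons)
    for u, v in edges:
        comm.setdefault(u, u); comm.setdefault(v, v)
    deg = defaultdict(int)          # community -> total degree
    between = defaultdict(int)      # (ci, cj) -> inter-community edge count
    for u, v in edges:
        deg[comm[u]] += 1; deg[comm[v]] += 1
        if comm[u] != comm[v]:
            between[tuple(sorted((comm[u], comm[v])))] += 1
    def dq(ci, cj):                 # standard modularity change for a merge
        return between[(ci, cj)] / m - deg[ci] * deg[cj] / (2 * m * m)
    preferred = {}
    for ci, cj in between:
        for a, b in ((ci, cj), (cj, ci)):
            best = preferred.get(a)
            if best is None or dq(*sorted((a, b))) > dq(*best):
                preferred[a] = tuple(sorted((a, b)))
    # keep pairs that are mutually preferred and would increase modularity
    return {p for p in preferred.values()
            if preferred.get(p[0]) == p == preferred.get(p[1]) and dq(*p) > 0}

# Two triangles joined by one bridge edge: merges occur inside each triangle,
# while the bridge (2, 3) is not locally optimal for either endpoint.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
merges = locally_optimal_merge_step(edges)
```

In the full algorithm this step is repeated, with the locally optimal edges redetermined after every round of simultaneous merges.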
1305.6238 | Extended Lambek calculi and first-order linear logic | cs.CL cs.LO | First-order multiplicative intuitionistic linear logic (MILL1) can be seen as
an extension of the Lambek calculus. In addition to the fragment of MILL1 that
corresponds to the Lambek calculus (Moot & Piazza 2001), I will show
fragments of MILL1 which generate the multiple context-free languages and which
correspond to the Displacement calculus of Morrill et al.
|
1305.6239 | Optimal rates of convergence for persistence diagrams in Topological
Data Analysis | math.ST cs.CG cs.LG math.GT stat.TH | Computational topology has recently known an important development toward
data analysis, giving birth to the field of topological data analysis.
Topological persistence, or persistent homology, appears as a fundamental tool
in this field. In this paper, we study topological persistence in general
metric spaces, with a statistical approach. We show that the use of persistent
homology can be naturally considered in general statistical frameworks and
persistence diagrams can be used as statistics with interesting convergence
properties. Some numerical experiments are performed in various contexts to
illustrate our results.
|
1305.6254 | A Stochastic Geometry Framework for Analyzing Pairwise-Cooperative
Cellular Networks | cs.IT math.IT | Cooperation in cellular networks has been recently suggested as a promising
scheme to improve system performance, especially for cell-edge users. In this
work, we use stochastic geometry to analyze cooperation models where the
positions of Base Stations (BSs) follow a Poisson point process distribution
and where Voronoi cells define the planar areas associated with them. For the
service of each user, either one or two BSs are involved. If two, these
cooperate by exchange of user data and channel related information with
conferencing over some backhaul link. Our framework generally allows variable
levels of channel information at the transmitters. In this paper we investigate
the case of limited channel state information for cooperation (channel phase,
second neighbour interference), but not the fully adaptive case which would
require considerable feedback. The total per-user transmission power is further
split between the two transmitters and a common message is encoded. The
decision for a user to choose service with or without cooperation is directed
by a family of geometric policies depending on its relative position to its two
closest base stations. An exact expression of the network coverage probability
is derived. Numerical evaluation reveals significant coverage
benefits compared with the non-cooperative case. In conclusion, cooperation
schemes can improve system performance without exploitation of extra network
resources.
|
1305.6292 | Near-Optimal Sensor Placement for Linear Inverse Problems | cs.IT math.IT | A classic problem is the estimation of a set of parameters from measurements
collected by only a few sensors. The number of sensors is often limited by
physical or economical constraints and their placement is of fundamental
importance to obtain accurate estimates. Unfortunately, the selection of the
optimal sensor locations is intrinsically combinatorial and the available
approximation algorithms are not guaranteed to generate good solutions in all
cases of interest. We propose FrameSense, a greedy algorithm for the selection
of optimal sensor locations. The core cost function of the algorithm is the
frame potential, a scalar property of a matrix that measures the orthogonality
of its rows. Notably, FrameSense is the first algorithm that is near-optimal in
terms of mean square error, meaning that its solution is always guaranteed to
be close to the optimal one. Moreover, we show with an extensive set of
numerical experiments that FrameSense achieves state-of-the-art performance
while having the lowest computational cost, when compared to other greedy
methods.
|
1305.6336 | Adaptive Reduced-Rank Processing Using a Projection Operator Based on
Joint Iterative Optimization of Adaptive Filters For CDMA Interference
Suppression | cs.IT math.IT | This paper proposes a novel adaptive reduced-rank filtering scheme based on
the joint iterative optimization of adaptive filters. The proposed scheme
consists of a joint iterative optimization of a bank of full-rank adaptive
filters that constitutes the projection matrix and an adaptive reduced-rank
filter that operates at the output of the bank of filters. We describe minimum
mean-squared error (MMSE) expressions for the design of the projection matrix
and the reduced-rank filter and simple least-mean squares (LMS) adaptive
algorithms for its computationally efficient implementation. Simulation results
for a CDMA interference suppression application reveal that the proposed
scheme significantly outperforms the state-of-the-art reduced-rank schemes,
while requiring a significantly lower computational complexity.
|
1305.6339 | Universality of scholarly impact metrics | cs.DL cs.SI physics.soc-ph | Given the growing use of impact metrics in the evaluation of scholars,
journals, academic institutions, and even countries, there is a critical need
for means to compare scientific impact across disciplinary boundaries.
Unfortunately, citation-based metrics are strongly biased by diverse field
sizes and publication and citation practices. As a result, we have witnessed an
explosion in the number of newly proposed metrics that claim to be "universal."
However, there is currently no way to objectively assess whether a normalized
metric can actually compensate for disciplinary bias. We introduce a new method
to assess the universality of any scholarly impact metric, and apply it to
evaluate a number of established metrics. We also define a very simple new
metric hs, which proves to be universal, thus allowing the impact of
scholars to be compared across scientific disciplines. These results move us closer to a
formal methodology in the measure of scholarly impact.
|
1305.6364 | Unraveling the origin of exponential law in intra-urban human mobility | physics.soc-ph cs.SI | The vast majority of travel takes place within cities. Recently, new data has
become available which allows for the discovery of urban mobility patterns
which differ from established results about long distance travel. Specifically,
the latest evidence increasingly points to exponential trip length
distributions, contrary to the scaling laws observed on larger scales. In this
paper, in order to explore the origin of the exponential law, we propose a new
model that better predicts individual flows in urban areas. Based on the
model, we explain the exponential law of intra-urban mobility as a result of
the exponential decrease in average population density in urban areas. Indeed,
both empirical and analytical results indicate that the trip length and the
population density share the same exponential decaying rate.
|
1305.6379 | Robust Precision Positioning Control on Linear Ultrasonic Motor | cs.SY | Ultrasonic motors used in high-precision mechatronics are characterized by
strong frictional effects, which are among the main problems in precision
motion control. Traditional methods apply model-based nonlinear feedforward
to compensate for the friction, which in turn requires closed-loop stability
and safety-constraint considerations. Implementing these methods requires
complex, carefully designed experiments. This paper introduces a systematic approach using
piecewise affine models to emulate the friction effect of the motor motion. The
well-known model predictive control method is employed to deal with piecewise
affine models. The increased complexity of the model offers higher tracking
precision with a simpler gain-scheduling scheme.
|
1305.6387 | Higher-order Segmentation via Multicuts | cs.CV | Multicuts enable to conveniently represent discrete graphical models for
unsupervised and supervised image segmentation, in the case of local energy
functions that exhibit symmetries. The basic Potts model and natural extensions
thereof to higher-order models provide a prominent class of such objectives,
that cover a broad range of segmentation problems relevant to image analysis
and computer vision. We exhibit a way to systematically take into account such
higher-order terms for computational inference. Furthermore, we present results
of a comprehensive and competitive numerical evaluation of a variety of
dedicated cutting-plane algorithms. Our approach enables the globally optimal
evaluation of a significant subset of these models, without compromising
runtime. Polynomially solvable relaxations are studied as well, along with
advanced rounding schemes for post-processing.
|
1305.6394 | Enhanced Predictive Ratio Control of Interacting Systems | cs.SY | Ratio control for two interacting processes is proposed with a PID
feedforward design based on model predictive control (MPC) scheme. At each
sampling instant, the MPC control action minimizes a state-dependent
performance index associated with a PID-type state vector, thus yielding a
PID-type control structure. Compared to the standard MPC formulations with
separated single-variable control, such a control action allows one to take
into account the non-uniformity of the two process outputs. After reformulating
the MPC control law as a PID control law, we provide conditions for prediction
horizon and weighting matrices so that the closed-loop control is
asymptotically stable, and show the effectiveness of the approach with
simulation and experiment results.
|
1305.6402 | From Parametric Model-based Optimization to robust PID Gain Scheduling | cs.SY | In chemical process applications, model predictive control effectively deals
with input and state constraints during transient operations. However,
industrial PID controllers directly manipulate the actuators, so they play the
key role in small-perturbation robustness. This paper considers the problem of
augmenting the commonplace PID with the constraint handling and optimization
functionalities of MPC. First, we review the MPC framework, which employs a
linear feedback gain in its unconstrained region. This linear gain can be any
preexisting multiloop PID design, or based on the two stabilizing PI or PID
designs for multivariable systems proposed in the paper. The resulting
controller is a feedforward PID mapping, a straightforward form without the
need to tune the PID to fit an optimal input. The parametrized solution of MPC
under constraints further leverages a familiar PID gain scheduling structure.
Steady state robustness is achieved along with the PID design so that
additional robustness analysis is avoided.
|
1305.6429 | Super-star networks: Growing optimal scale-free networks via likelihood | nlin.AO cs.SI physics.soc-ph | Preferential attachment --- by which new nodes attach to existing nodes with
probability proportional to the existing nodes' degree --- has become the
standard growth model for scale-free networks, where the asymptotic probability
of a node having degree $k$ is proportional to $k^{-\gamma}$. However, the
motivation for this model is entirely ad hoc. We use exact likelihood arguments
and show that the optimal way to build a scale-free network is to attach most
new links to nodes of low degree. Curiously, this leads to scale-free
networks with a single dominant hub: a star-like structure we call a super-star
network. Asymptotically, the optimal strategy is to attach each new node to one
of the nodes of degree $k$ with probability proportional to
$\frac{1}{N+\zeta(\gamma)(k+1)^\gamma}$ (in a $N$ node network) --- a stronger
bias toward high degree nodes than exhibited by standard preferential
attachment. Our algorithm generates optimally scale-free networks (the
super-star networks) as well as randomly sampling the space of all scale-free
networks with a given degree exponent $\gamma$. We generate viable realisations
with finite $N$ for $1\ll \gamma<2$ as well as $\gamma>2$. We observe an
apparently discontinuous transition at $\gamma\approx 2$ between so-called
super-star networks and more tree-like realisations. Gradually increasing
$\gamma$ further leads to re-emergence of a super-star hub. To quantify these
structural features we derive a new analytic expression for the expected degree
exponent of a pure preferential attachment process, and introduce alternative
measures of network entropy. Our approach is generic and may also be applied to
an arbitrary degree distribution.
|
1305.6441 | Matrices of forests, analysis of networks, and ranking problems | math.CO cs.CV cs.DM cs.NI | The matrices of spanning rooted forests are studied as a tool for analysing
the structure of networks and measuring their properties. The problems of
revealing the basic bicomponents, measuring vertex proximity, and ranking from
preference relations / sports competitions are considered. It is shown that the
vertex accessibility measure based on spanning forests has a number of
desirable properties. An interpretation for the stochastic matrix of
out-forests in terms of information dissemination is given.
|
1305.6451 | Data Leak Aware Crowdsourcing in Social Network | cs.SI physics.soc-ph | Harnessing human computation to solve complex problems spawns the
issue of finding the unknown competitive group of solvers. In this paper, we
propose an approach called Friendlysourcing to build up teams from a social
network answering a business call, all the while avoiding partial-solution
disclosure to competitive groups. The contributions of this paper include (i) a
clustering-based approach for discovering collaborative and competitive teams
in a social network and (ii) a Markov-chain-based algorithm for discovering
implicit interactions in the social network.
|
1305.6489 | Social Sensor Placement in Large Scale Networks: A Graph Sampling
Perspective | cs.SI physics.soc-ph | Sensor placement for the purpose of detecting/tracking news outbreak and
preventing rumor spreading is a challenging problem in a large scale online
social network (OSN). This problem is a kind of subset selection problem:
choosing a small set of items from a large population so as to maximize some
prespecified set function. However, this problem is known to be NP-complete. Existing
heuristics are very costly especially for modern OSNs which usually contain
hundreds of millions of users. This paper aims to design methods to find
\emph{good solutions} that can well trade off efficiency and accuracy. We first
show that it is possible to obtain a high quality solution with a probabilistic
guarantee from a "{\em candidate set}" of the underlying social network. By
exploring this candidate set, one can increase the efficiency of placing social
sensors. We also present how this candidate set can be obtained using "{\em
graph sampling}", which has an advantage over previous methods of not requiring
the prior knowledge of the complete network topology. Experiments carried out
on two real datasets demonstrate not only the accuracy and efficiency of our
approach, but also their effectiveness in detecting and predicting news outbreaks.
|
1305.6506 | Notes on Physical & Logical Data Layouts | cs.DB | In this short note I review and discuss fundamental options for physical and
logical data layouts as well as the impact of the choices on data processing. I
should say in advance that these notes offer no new insights, that is,
everything stated here has already been published elsewhere. In fact, it has
been published in so many different places, such as blog posts, in the
literature, etc. that the main contribution is to bring it all together in one
place.
|
1305.6537 | A Cooperative Coevolutionary Genetic Algorithm for Learning Bayesian
Network Structures | cs.NE cs.AI | We propose a cooperative coevolutionary genetic algorithm for learning
Bayesian network structures from fully observable data sets. Since this problem
can be decomposed into two dependent subproblems, that is to find an ordering
of the nodes and an optimal connectivity matrix, our algorithm uses two
subpopulations, each one representing a subtask. We describe the empirical
results obtained with simulations of the Alarm and Insurance networks. We show
that our algorithm outperforms the deterministic algorithm K2.
|
1305.6545 | Construction of all general symmetric informationally complete
measurements | quant-ph cs.IT math-ph math.IT math.MP | We construct the set of all general (i.e. not necessarily rank 1) symmetric
informationally complete (SIC) positive operator valued measures (POVMs). In
particular, we show that any orthonormal basis of a real vector space of
dimension d^2-1 corresponds to some general SIC POVM and vice versa. Our
constructed set of all general SIC-POVMs contains weak SIC-POVMs for which each
POVM element can be made arbitrarily close to a multiple of the identity. On
the other hand, it remains open if for all finite dimensions our constructed
family contains a rank 1 SIC-POVM.
|
1305.6568 | Reinforcement Learning for the Soccer Dribbling Task | cs.LG cs.RO stat.ML | We propose a reinforcement learning solution to the \emph{soccer dribbling
task}, a scenario in which a soccer agent has to go from the beginning to the
end of a region keeping possession of the ball, as an adversary attempts to
gain possession. While the adversary uses a stationary policy, the dribbler
learns the best action to take at each decision point. After defining
meaningful variables to represent the state space, and high-level macro-actions
to incorporate domain knowledge, we describe our application of the
reinforcement learning algorithm \emph{Sarsa} with CMAC for function
approximation. Our experiments show that, after the training period, the
dribbler is able to accomplish its task against a strong adversary around 58%
of the time.
|
1305.6569 | Mathematical Analysis of Temperature Accelerated Dynamics | math-ph cs.CE math.MP | We give a mathematical framework for temperature accelerated dynamics (TAD),
an algorithm proposed by M.R. S{\o}rensen and A.F. Voter to efficiently
generate metastable stochastic dynamics. Using the notion of quasistationary
distributions, we propose some modifications to TAD. Then considering the
modified algorithm in an idealized setting, we show how TAD can be made
mathematically rigorous.
|
1305.6646 | Normalized Online Learning | cs.LG stat.ML | We introduce online learning algorithms which are independent of feature
scales, proving regret bounds dependent on the ratio of scales existent in the
data rather than the absolute scale. This has several useful effects: there is
no need to pre-normalize data, the test-time and test-space complexity are
reduced, and the algorithms are more robust.
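A minimal illustration of scale independence (our own simplified sketch, not the algorithm or regret analysis from the paper): divide each feature by the largest magnitude seen for it so far before an ordinary gradient step, which makes the sequence of predictions invariant to any fixed rescaling of the input coordinates.

```python
def normalized_sgd(stream, dim, eta=0.1):
    """Online least squares with per-feature running-max normalization.
    Illustrative only: each feature is divided by the largest magnitude
    seen for it so far, so no pre-normalization pass over the data is
    needed and predictions do not depend on the units of the inputs."""
    w = [0.0] * dim
    s = [0.0] * dim                      # running max of |x_i| per feature
    preds = []
    for x, y in stream:
        for i in range(dim):
            s[i] = max(s[i], abs(x[i]))
        z = [x[i] / s[i] if s[i] > 0 else 0.0 for i in range(dim)]
        yhat = sum(wi * zi for wi, zi in zip(w, z))
        preds.append(yhat)
        g = yhat - y                     # gradient of 0.5 * (yhat - y)^2
        for i in range(dim):
            w[i] -= eta * g * z[i]       # weights live in normalized coordinates
    return preds
```

Because the running maximum scales exactly with the feature, multiplying any coordinate of every example by a constant leaves the normalized inputs, and hence all predictions, unchanged.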
|
1305.6650 | Active Sensing as Bayes-Optimal Sequential Decision Making | cs.AI cs.CV | Sensory inference under conditions of uncertainty is a major problem in both
machine learning and computational neuroscience. An important but poorly
understood aspect of sensory processing is the role of active sensing. Here, we
present a Bayes-optimal inference and control framework for active sensing,
C-DAC (Context-Dependent Active Controller). Unlike previously proposed
algorithms that optimize abstract statistical objectives such as information
maximization (Infomax) [Butko & Movellan, 2010] or one-step look-ahead accuracy
[Najemnik & Geisler, 2005], our active sensing model directly minimizes a
combination of behavioral costs, such as temporal delay, response error, and
effort. We simulate these algorithms on a simple visual search task to
illustrate scenarios in which context-sensitivity is particularly beneficial
and optimization with respect to generic statistical objectives particularly
inadequate. Motivated by the geometric properties of the C-DAC policy, we
present both parametric and non-parametric approximations, which retain
context-sensitivity while significantly reducing computational complexity.
These approximations enable us to investigate the more complex problem
involving peripheral vision, and we notice that the difference between C-DAC
and statistical policies becomes even more evident in this scenario.
|
1305.6656 | Controlling self-organized criticality in complex networks | physics.soc-ph cs.SI nlin.AO | A control scheme to reduce the size of avalanches of the Bak-Tang-Wiesenfeld
model on complex networks is proposed. Three network types are considered:
those proposed by Erd\H{o}s-R\'enyi, Goh-Kahng-Kim, and a real network
representing the main connections of the electrical power grid of the western
United States. The control scheme is based on the idea of triggering avalanches
in the highest-degree nodes that are close to becoming critical. We show that this
strategy works in the sense that the dissipation of mass occurs mostly locally,
avoiding larger avalanches. We also compare this strategy with a random
strategy where the nodes are chosen randomly. Although the random control has
some ability to reduce the probability of large avalanches, its performance is
much worse than the one based on the choice of the highest degree nodes.
Finally, we argue that the ability of the proposed control scheme is related to
its ability to reduce the concentration of mass on the network.
|
1305.6659 | Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process
Mixture | cs.LG stat.ML | This paper presents a novel algorithm, based upon the dependent Dirichlet
process mixture model (DDPMM), for clustering batch-sequential data containing
an unknown number of evolving clusters. The algorithm is derived via a
low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM,
and provides a hard clustering with convergence guarantees similar to those of
the k-means algorithm. Empirical results from a synthetic test with moving
Gaussian clusters and a test with real ADS-B aircraft trajectory data
demonstrate that the algorithm requires orders of magnitude less computational
time than contemporary probabilistic and hard clustering algorithms, while
providing higher accuracy on the examined datasets.
|
1305.6663 | Generalized Denoising Auto-Encoders as Generative Models | cs.LG | Recent work has shown how denoising and contractive autoencoders implicitly
capture the structure of the data-generating density, in the case where the
corruption noise is Gaussian, the reconstruction error is the squared error,
and the data is continuous-valued. This has led to various proposals for
sampling from this implicitly learned density function, using Langevin and
Metropolis-Hastings MCMC. However, it remained unclear how to connect the
training procedure of regularized auto-encoders to the implicit estimation of
the underlying data-generating distribution when the data are discrete, or
using other forms of corruption process and reconstruction errors. Another
issue is the mathematical justification which is only valid in the limit of
small corruption noise. We propose here a different attack on the problem,
which deals with all these issues: arbitrary (but noisy enough) corruption,
arbitrary reconstruction loss (seen as a log-likelihood), handling both
discrete and continuous-valued variables, and removing the bias due to
non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
|
1305.6783 | Low-Rate Machine-Type Communication via Wireless Device-to-Device (D2D)
Links | cs.IT cs.NI math.IT | Wireless cellular networks feature two emerging technological trends. The
first is the direct Device-to-Device (D2D) communications, which enables direct
links between the wireless devices that reutilize the cellular spectrum and
radio interface. The second is that of Machine-Type Communications (MTC), where
the objective is to attach a large number of low-rate low-power devices, termed
Machine-Type Devices (MTDs) to the cellular network. MTDs pose new challenges
to the cellular network, one if which is that the low transmission power can
lead to outage problems for the cell-edge devices. Another issue imminent to
MTC is the \emph{massive access} that can lead to overload of the radio
interface. In this paper we explore the opportunity opened by D2D links for
supporting MTDs, since it can be desirable to carry the MTC traffic not through
direct links to a Base Station, but through a nearby relay. MTC is modeled as a
fixed-rate traffic with an outage requirement. We propose two network-assisted
D2D schemes that enable the cooperation between MTDs and standard cellular
devices, thereby meeting the MTC outage requirements while maximizing the rate
of the broadband services for the other devices. The proposed schemes apply the
principles of Opportunistic Interference Cancellation and Cognitive Radio
underlay. We show through analysis and numerical results the gains of the
proposed schemes.
|
1305.6789 | Second-Order Coding Rates for Channels with State | cs.IT math.IT | We study the performance limits of state-dependent discrete memoryless
channels with a discrete state available at both the encoder and the decoder.
We establish the epsilon-capacity as well as necessary and sufficient
conditions for the strong converse property for such channels when the sequence
of channel states is not necessarily stationary, memoryless or ergodic. We then
seek a finer characterization of these capacities in terms of second-order
coding rates. The general results are supplemented by several examples
including i.i.d. and Markov states and mixed channels.
|
1305.6836 | About the Discriminant Power of the Subgraph Centrality and Other
Centrality Measures (Working paper) | cs.SI math.CO physics.soc-ph | The discriminant power of centrality indices for the degree, eigenvector,
closeness, betweenness and subgraph centrality is analyzed. It is defined by
the number of graphs for which the standard deviation of the centralities of the
nodes is zero. On the basis of empirical analysis, it is concluded that the
subgraph centrality displays better discriminant power than the other
centralities. We also propose some new conjectures about the types of graphs
for which the subgraph centrality does not discriminate among nonequivalent
nodes.
|
1305.6861 | Design and Realization of a Scalable Simulator of Magnetic Resonance
Tomography | cs.DC cs.CE q-bio.QM | In research activities regarding Magnetic Resonance Imaging in medicine,
simulation tools with a universal approach are rare. Usually, simulators are
developed and used which tend to be restricted to a particular, small range of
applications. This led to the design and implementation of a new simulator
PARSPIN, the subject of this thesis. In medical applications, the Bloch
equation is a well-suited mathematical model of the underlying physics with a
wide scope. In this thesis, it is shown how analytical solutions of the Bloch
equation can be found, which promise substantial execution time advantages over
numerical solution methods. From these analytical solutions of the Bloch
equation, a new formalism for the description and the analysis of complex
imaging experiments is derived, the K-t formalism. It is shown that modern
imaging methods can be better explained by the K-t formalism than by observing
and analysing the magnetization of each spin of a spin ensemble. Various
approaches for a numerical simulation of Magnetic Resonance imaging are
discussed. It is shown that a simulation tool based on the K-t formalism
promises a substantial gain in execution time. Proper spatial discretization
according to the sampling theorem, a topic rarely discussed in literature, is
universally derived from the K-t formalism in this thesis. A spin-based
simulator is an application with high demands to computing facilities even on
modern hardware. In this thesis, two approaches for a parallelized software
architecture are designed, analysed and evaluated with regard to a reduction of
execution time. A number of possible applications in research and education are
demonstrated. For a choice of imaging experiments, results produced both
experimentally and by simulation are compared.
|
1305.6864 | Resolution-aware network coded storage | cs.IT math.IT | In this paper, we show that coding can be used in storage area networks
(SANs) to improve various quality of service metrics under normal SAN operating
conditions, without requiring additional storage space. For our analysis, we
develop a model which captures modern characteristics such as constrained I/O
access bandwidth limitations. Using this model, we consider two important
cases: single-resolution (SR) and multi-resolution (MR) systems. For SR
systems, we use blocking probability as the quality of service metric and
propose the network coded storage (NCS) scheme as a way to reduce blocking
probability. The NCS scheme codes across file chunks in time, exploiting file
striping and file duplication. Under our assumptions, we illustrate cases where
SR NCS provides an order of magnitude savings in blocking probability. For MR
systems, we introduce saturation probability as a quality of service metric to
manage multiple user types, and we propose the uncoded resolution-aware
storage (URS) and coded resolution-aware storage (CRS) schemes as ways to
reduce saturation probability. In MR URS, we align our MR layout strategy with
traffic requirements. In MR CRS, we code videos across MR layers. Under our
assumptions, we illustrate that URS can in some cases provide an order of
magnitude gain in saturation probability over classic non-resolution-aware
systems. Further, we illustrate that CRS provides additional saturation
probability savings over URS.
|
1305.6883 | Rotation invariants of two dimensional curves based on iterated
integrals | cs.CV stat.ML | We introduce a novel class of rotation invariants of two dimensional curves
based on iterated integrals. The invariants we present are in some sense
complete and we describe an algorithm to calculate them, giving explicit
computations up to order six. We present an application to online
(stroke-trajectory based) character recognition. This seems to be the first
time in the literature that the use of iterated integrals of a curve is
proposed for (invariant) feature extraction in machine learning applications.
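The lowest-order nontrivial examples of such invariants can be written down directly. The sketch below uses our own conventions, not the paper's algorithm or normalization: for a planar curve, the squared total displacement and the signed (Lévy) area, both second-order iterated-integral expressions, are exactly rotation invariant.

```python
import math

def rotation_invariants(points):
    """Two order-two rotation invariants of a planar polyline, built from
    iterated integrals of the curve increments:
      I1 = (int dx)^2 + (int dy)^2   (squared total displacement)
      I2 = 0.5 * int (x dy - y dx)   (signed / Levy area)"""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += 0.5 * (x0 * y1 - y0 * x1)   # exact on each linear segment
    return dx * dx + dy * dy, area

def rotate(points, theta):
    # Rotate the whole curve about the origin by angle theta.
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Both quantities agree for a curve and any rotation of it, while distinguishing curves that differ in shape, which is the property exploited for feature extraction.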
|
1305.6918 | Video Human Segmentation using Fuzzy Object Models and its Application
to Body Pose Estimation of Toddlers for Behavior Studies | cs.CV | Video object segmentation is a challenging problem due to the presence of
deformable, connected, and articulated objects, intra- and inter-object
occlusions, object motion, and poor lighting. Some of these challenges call for
object models that can locate a desired object and separate it from its
surrounding background, even when both share similar colors and textures. In
this work, we extend a fuzzy object model, named cloud system model (CSM), to
handle video segmentation, and evaluate it for body pose estimation of toddlers
at risk of autism. CSM has been successfully used to model the parts of the
brain (cerebrum, left and right brain hemispheres, and cerebellum) in order to
automatically locate and separate them from each other, the connected brain
stem, and the background in 3D MR-images. In our case, the objects are
articulated parts (2D projections) of the human body, which can deform, cause
self-occlusions, and move along the video. The proposed CSM extension handles
articulation by connecting the individual clouds, body parts, of the system
using a 2D stickman model. The stickman representation naturally allows us to
extract 2D body pose measures of arm asymmetry patterns during unsupported gait
of toddlers, a possible behavioral marker of autism. The results show that our
method can provide insightful knowledge to assist the specialist's observations
during real in-clinic assessments.
|
1305.6954 | Greedy type algorithms for RIP matrices. A study of two selection rules | cs.IT math.IT | Some consequences of the Restricted Isometry Property (RIP) of matrices have
been applied to develop a greedy algorithm called "ROMP" (Regularized
Orthogonal Matching Pursuit) to recover sparse signals and to approximate
non-sparse ones. These consequences were subsequently applied to other greedy
and thresholding algorithms like "SThresh", "CoSaMP", "StOMP" and "SWCGP". In
this paper, we find another consequence of the RIP property and use it to
analyze the approximation to k-sparse signals with Stagewise Weak versions of
Gradient Pursuit (SWGP), Matching Pursuit (SWMP) and Orthogonal Matching
Pursuit (SWOMP). We combine the above-mentioned algorithms with another
selection rule, similar to ones that have appeared in the literature, showing
that results are obtained with fewer restrictions on the RIP constant, but a
smaller threshold parameter for the coefficients is needed. The results of some
experiments are shown.
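The stagewise weak selection rule can be illustrated with a toy matching pursuit. This is our own minimal sketch, not the SWMP/SWOMP algorithms analysed in the paper; correlations are not refreshed within a stage, so the update is only exact for orthonormal atoms.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def stagewise_weak_mp(atoms, y, weakness=0.5, max_stages=10, tol=1e-12):
    """Toy stagewise weak matching pursuit: at each stage, select every
    unit-norm atom whose correlation with the residual is at least
    `weakness` times the largest one, and subtract its contribution."""
    residual = list(y)
    coeffs = {}
    for _ in range(max_stages):
        corr = [dot(a, residual) for a in atoms]
        best = max(abs(c) for c in corr)
        if best < tol:                      # residual fully explained
            break
        for j, c in enumerate(corr):
            if abs(c) >= weakness * best:   # stagewise weak threshold
                coeffs[j] = coeffs.get(j, 0.0) + c
                residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual
```

With weakness = 1 only the best atom is taken per stage (plain matching pursuit); smaller values admit several atoms at once, which is the trade-off the selection rules in the paper control.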
|
1305.6974 | Applications of Temporal Graph Metrics to Real-World Networks | physics.soc-ph cs.SI | Real world networks exhibit rich temporal information: friends are added and
removed over time in online social networks; the seasons dictate the
predator-prey relationship in food webs; and the propagation of a virus depends
on the network of human contacts throughout the day. Recent studies have
demonstrated that static network analysis is perhaps unsuitable for the study
of real-world networks, since static paths ignore time order, which, in turn,
results in static shortest paths overestimating available links and
underestimating their true corresponding lengths. Temporal extensions to
centrality and efficiency metrics based on temporal shortest paths have also
been proposed. Firstly, we analyse the roles of key individuals of a corporate
network ranked according to temporal centrality within the context of a
bankruptcy scandal; secondly, we present how such temporal metrics can be used
to study the robustness of temporal networks in the presence of random errors and
intelligent attacks; thirdly, we study containment schemes for mobile phone
malware which can spread via short range radio, similar to biological viruses;
finally, we study how the temporal network structure of human interactions can
be exploited to effectively immunise human populations. Through these
applications we demonstrate that temporal metrics provide a more accurate and
effective analysis of real-world networks compared to their static
counterparts.
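The temporal shortest paths underlying these metrics can be illustrated with a minimal sketch, assuming a contact-sequence representation in which each edge is a timestamped triple `(u, v, t)` and a valid path must traverse contacts in non-decreasing time (the function name and data layout are illustrative, not from the paper):

```python
from collections import defaultdict

# Earliest-arrival ("foremost") temporal paths: a single pass over the
# contacts sorted by time, relaxing each contact (u, v, t) if u was already
# reachable by time t. Contacts at equal timestamps are processed in sort order.
def earliest_arrival(contacts, source):
    contacts = sorted(contacts, key=lambda e: e[2])
    arrival = defaultdict(lambda: float("inf"))
    arrival[source] = 0
    for u, v, t in contacts:
        if arrival[u] <= t and t < arrival[v]:
            arrival[v] = t
    return dict(arrival)
```

Earliest-arrival times like these, rather than static shortest-path lengths, are what the temporal centrality and efficiency metrics mentioned above are built on.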
|
1305.6979 | Graph cluster randomization: network exposure to multiple universes | cs.SI physics.soc-ph stat.ME | A/B testing is a standard approach for evaluating the effect of online
experiments; the goal is to estimate the `average treatment effect' of a new
feature or condition by exposing a sample of the overall population to it. A
drawback with A/B testing is that it is poorly suited for experiments involving
social interference, when the treatment of individuals spills over to
neighboring individuals along an underlying social network. In this work, we
propose a novel methodology using graph clustering to analyze average treatment
effects under social interference. To begin, we characterize graph-theoretic
conditions under which individuals can be considered to be `network exposed' to
an experiment. We then show how graph cluster randomization admits an efficient
exact algorithm to compute the probabilities for each vertex being network
exposed under several of these exposure conditions. Using these probabilities
as inverse weights, a Horvitz-Thompson estimator can then provide an effect
estimate that is unbiased, provided that the exposure model has been properly
specified.
Given an estimator that is unbiased, we focus on minimizing the variance.
First, we develop simple sufficient conditions for the variance of the
estimator to be asymptotically small in n, the size of the graph. However, for
general randomization schemes, this variance can be lower bounded by an
exponential function of the degrees of a graph. In contrast, we show that if a
graph satisfies a restricted-growth condition on the growth rate of
neighborhoods, then there exists a natural clustering algorithm, based on
vertex neighborhoods, for which the variance of the estimator can be upper
bounded by a linear function of the degrees. Thus we show that proper cluster
randomization can lead to exponentially lower estimator variance when
experimentally measuring average treatment effects under interference.
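The inverse-probability weighting step can be sketched in a few lines. This is a minimal illustration, assuming per-vertex outcomes, a boolean network-exposure indicator, and exposure probabilities already computed from the cluster randomization; the function name and arguments are hypothetical:

```python
# Hypothetical sketch of the Horvitz-Thompson step: each network-exposed
# vertex contributes its outcome divided by its probability of being exposed,
# normalized by the population size n.
def horvitz_thompson(outcomes, exposed, probs, n):
    return sum(y / p for y, e, p in zip(outcomes, exposed, probs) if e) / n
```

The treatment-effect estimate is then the difference between this quantity computed under treatment exposure and under control exposure; as the abstract notes, unbiasedness holds only if the exposure model (and hence the probabilities) is correctly specified.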
|
1305.6993 | On The Optimality of Myopic Sensing in Multi-State Channels | cs.SY cs.IT math.IT | We consider the channel sensing problem arising in opportunistic scheduling
over fading channels, cognitive radio networks, and resource constrained
jamming. The communication system consists of N channels. Each channel is
modeled as a multi-state Markov chain (M.C.). At each time instant a user
selects one channel to sense and uses it to transmit information. A reward
depending on the state of the selected channel is obtained for each
transmission. The objective is to design a channel sensing policy that
maximizes the expected total reward collected over a finite or infinite
horizon. This problem can be viewed as an instance of a restless bandit
problem, for which the form of optimal policies is unknown in general. We
discover sets of conditions sufficient to guarantee the optimality of a myopic
sensing policy; we show that under one particular set of conditions the myopic
policy coincides with the Gittins index rule.
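As a concrete illustration of the myopic policy itself (a sketch of the policy being analyzed, not of the optimality conditions), each channel's state belief can be propagated one step through its transition matrix, and the channel with the highest expected immediate reward is sensed. The two-state example and all names below are assumptions for illustration:

```python
# One-step belief prediction under a row-stochastic transition matrix P.
def predict(belief, P):
    n = len(P)
    return [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]

# Myopic sensing: pick the channel whose predicted belief maximizes the
# expected immediate reward, with r[s] the reward for sensing a channel
# in state s.
def myopic_choice(beliefs, P, r):
    predicted = [predict(b, P) for b in beliefs]
    scores = [sum(p * ri for p, ri in zip(pb, r)) for pb in predicted]
    return max(range(len(scores)), key=scores.__getitem__)
```

The paper's contribution is identifying conditions on the chain and rewards under which repeating this greedy choice is optimal over the full horizon.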
|
1305.7006 | Subgraph Pattern Matching over Uncertain Graphs with Identity Linkage
Uncertainty | cs.DB | There is a growing need for methods which can capture uncertainties and
answer queries over graph-structured data. Two common types of uncertainty are
uncertainty over the attribute values of nodes and uncertainty over the
existence of edges. In this paper, we combine those with identity uncertainty.
Identity uncertainty represents uncertainty over the mapping from objects
mentioned in the data, or references, to the underlying real-world entities. We
propose the notion of a probabilistic entity graph (PEG), a probabilistic graph
model that defines a distribution over possible graphs at the entity level. The
model takes into account node attribute uncertainty, edge existence
uncertainty, and identity uncertainty, and thus enables us to systematically
reason about all three types of uncertainties in a uniform manner. We introduce
a general framework for constructing a PEG given uncertain data at the
reference level and develop highly efficient algorithms to answer subgraph
pattern matching queries in this setting. Our algorithms are based on two novel
ideas: context-aware path indexing and reduction by join-candidates, which
drastically reduce the query search space. A comprehensive experimental
evaluation shows that our approach outperforms baseline implementations by
orders of magnitude.
|
1305.7014 | Tweets Miner for Stock Market Analysis | cs.IR cs.CL cs.SI | In this paper, we present a software package for the data mining of Twitter
microblogs for use in stock market analysis. The package is written in the R
language using appropriate R packages. A model of tweets is considered, and we
compare stock market charts with frequent sets of keywords in Twitter microblog
messages.
|
1305.7038 | Enhanced blind decoding of Tardos codes with new map-based functions | cs.CR cs.IT math.IT | This paper presents a new decoder for probabilistic binary traitor tracing
codes under the marking assumption. It is based on a binary hypothesis testing
rule which integrates a collusion channel relaxation so as to obtain numerical
and simple accusation functions. This decoder is blind as no estimation of the
collusion channel prior to the accusation is required. Experiments show
that using the proposed decoder gives better performance than the well-known
symmetric version of the Tardos decoder for common attack channels.
|
1305.7053 | A Local Active Contour Model for Image Segmentation with Intensity
Inhomogeneity | cs.CV | A novel locally statistical active contour model (ACM) for image segmentation
in the presence of intensity inhomogeneity is presented in this paper. The
inhomogeneous objects are modeled as Gaussian distributions of different means
and variances, and a moving window is used to map the original image into
another domain, where the intensity distributions of inhomogeneous objects are
still Gaussian but are better separated. The means of the Gaussian
distributions in the transformed domain can be adaptively estimated by
multiplying a bias field with the original signal within the window. A
statistical energy functional is then defined for each local region, which
combines the bias field, the level set function, and the constant approximating
the true signal of the corresponding object. Experiments on both synthetic and
real images demonstrate the superiority of our proposed algorithm to
state-of-the-art and representative methods.
|
1305.7056 | Dienstplanerstellung in Krankenhaeusern mittels genetischer Algorithmen | cs.NE | This thesis investigates the use of problem-specific knowledge to enhance a
genetic algorithm approach to multiple-choice optimisation problems. It shows
that such information can significantly enhance performance, but that the
choice of information and the way it is included are important factors for
success.
|
1305.7057 | Predicting the Severity of Breast Masses with Data Mining Methods | cs.LG stat.ML | Mammography is the most effective and available tool for breast cancer
screening. However, the low positive predictive value of breast biopsy
resulting from mammogram interpretation leads to approximately 70% unnecessary
biopsies with benign outcomes. Data mining algorithms could be used to help
physicians in their decisions to perform a breast biopsy on a suspicious lesion
seen in a mammogram image or to perform a short term follow-up examination
instead. In this research paper, the data mining classification algorithms
Decision Tree (DT), Artificial Neural Network (ANN), and Support Vector Machine
(SVM) are analyzed on the mammographic masses data set. The purpose of this
study is to increase the ability of physicians to determine the severity
(benign or malignant) of a mammographic mass lesion from BI-RADS attributes and
the patient's age. The data set is split into training and test sets in a 70:30
ratio, and the performances of the classification algorithms are compared
through three statistical measures: sensitivity, specificity, and
classification accuracy. The accuracies of DT, ANN and SVM on the test samples
are 78.12%, 80.56% and 81.25%, respectively. Our analysis shows that, of these
three classification models, SVM predicts the severity of breast cancer with
the lowest error rate and highest accuracy.
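The three reported measures follow directly from a confusion matrix. A minimal sketch with hypothetical counts (not the paper's data):

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts:
# tp/fn/fp/tn = true positives, false negatives, false positives, true negatives.
def screening_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy
```

For example, 40 true positives, 10 false negatives, 20 false positives and 30 true negatives give sensitivity 0.8, specificity 0.6 and accuracy 0.7.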
|
1305.7058 | Towards an Ontology based integrated Framework for Semantic Web | cs.AI | Ontologies are widely used as a means of solving the information
heterogeneity problems on the web because of their capability to provide
explicit meaning to the information. They become an efficient tool for
knowledge representation in a structured manner. There is always more than one
ontology for the same domain. Furthermore, there is no standard method for
building ontologies, and there are many ontology building tools using different
ontology languages. Because of these reasons, interoperability between the
ontologies is very low. Current ontology tools mostly use functions to build,
edit and perform inference on the ontology. Methods for merging heterogeneous domain
ontologies are not included in most tools. This paper presents ontology merging
methodology for building a single global ontology from heterogeneous eXtensible
Markup Language (XML) data sources to capture and maintain all the knowledge
which XML data sources can contain.
|
1305.7071 | Simulation of magnetic active polymers for versatile microfluidic
devices | physics.bio-ph cs.CE physics.comp-ph physics.flu-dyn | We propose to use a compound of magnetic nanoparticles (20-100 nm) embedded
in a flexible polymer (Polydimethylsiloxane PDMS) to filter circulating tumor
cells (CTCs). The analysis of CTCs is an emerging tool for cancer biology
research and clinical cancer management including the detection, diagnosis and
monitoring of cancer. The combination of experiments and simulations leads to a
versatile microfluidic lab-on-chip device. Simulations are essential to
understand the influence of the embedded nanoparticles in the elastic PDMS when
applying a magnetic gradient field. It combines finite element calculations of
the polymer, magnetic simulations of the embedded nanoparticles and the fluid
dynamic calculations of blood plasma and blood cells. With the use of magnetic
active polymers a wide range of tunable microfluidic structures can be created.
The method can help to increase the yield of needed isolated CTCs.
|
1305.7072 | Guided self-assembly of magnetic beads for biomedical applications | physics.bio-ph cs.CE physics.comp-ph physics.flu-dyn | Micromagnetic beads are widely used in biomedical applications for cell
separation, drug delivery, and hyperthermia cancer treatment. Here we propose to
use self-organized magnetic bead structures which accumulate on fixed magnetic
seeding points to isolate circulating tumor cells. The analysis of circulating
tumor cells is an emerging tool for cancer biology research and clinical cancer
management including the detection, diagnosis and monitoring of cancer.
Microfluidic chips for isolating circulating tumor cells use either affinity,
size or density capturing methods. We combine multiphysics simulation
techniques to understand the microscopic behavior of magnetic beads interacting
with Nickel accumulation points used in lab-on-chip technologies. Our proposed
chip technology offers the possibility to combine affinity and size capturing
with special antibody-coated bead arrangements using a magnetic gradient field
created by Neodymium Iron Boron permanent magnets. The multiscale simulation
environment combines magnetic field computation, fluid dynamics and discrete
particle dynamics.
|
1305.7111 | Test cost and misclassification cost trade-off using reframing | cs.LG | Many solutions to cost-sensitive classification (and regression) rely on some
or all of the following assumptions: we have complete knowledge about the cost
context at training time, we can easily re-train whenever the cost context
changes, and we have technique-specific methods (such as cost-sensitive
decision trees) that can take advantage of that information. In this paper we
address the problem of selecting models and minimising joint cost (integrating
both misclassification cost and test costs) without any of the above
assumptions. We introduce methods and plots (such as the so-called JROC plots)
that can work with any off-the-shelf predictive technique, including ensembles,
such that we reframe the model to use the appropriate subset of attributes (the
feature configuration) during deployment time. In other words, models are
trained with the available attributes (once and for all) and then deployed by
setting missing values on the attributes that are deemed ineffective for
reducing the joint cost. As the number of feature configuration combinations
grows exponentially with the number of features we introduce quadratic methods
that are able to approximate the optimal configuration and model choices, as
shown by the experimental results.
|
1305.7117 | Adaptation and optimization of synchronization gains in networked
distributed parameter systems | math.OC cs.SY | This work is concerned with the design and effects of the synchronization
gains on the synchronization problem for a class of networked distributed
parameter systems. The networked systems, assumed to be described by the same
evolution equation in a Hilbert space, differ in their initial conditions. The
proposed synchronization controllers aim at achieving both the control
objective and the synchronization objective. To enhance the synchronization, as
measured by the norm of the pairwise state difference of the networked systems,
an adaptation of the gains is proposed. An alternative design arrives at
constant gains that are optimized with respect to an appropriate measure of
synchronization. A subsequent formulation casts the control and synchronization
design problem into an optimal control problem for the aggregate systems. An
extensive numerical study examines the various aspects of the optimization and
adaptation of the gains on the control and synchronization of networked 1D
parabolic differential equations.
|
1305.7121 | Regression techniques for subspace-based black-box state-space system
identification: an overview | cs.SY | As far as the identification of linear time-invariant state-space
representation is concerned, among all of the solutions available in the
literature, the subspace-based state-space model identification techniques have
proved their efficiency in many practical cases since the beginning of the
90's. This paper introduces an overview of these techniques by focusing on
their formulation as a least-squares problem. Apart from an article written by
J. Qin, to the author's knowledge, such a regression formulation is not fully
investigated in the books that can be considered the references on
subspace-based identification. Thus, in this paper, specific attention is paid
to the regression-based techniques used to identify systems operating under
open-loop as well as closed-loop conditions.
|
1305.7130 | Memory Implementations - Current Alternatives | cs.AI cs.NE | Memory can be defined as the ability to retain and recall information in a
diverse range of forms. It is a vital component of the way in which we as human
beings operate on a day to day basis. Given a particular situation, decisions
are made and actions undertaken in response to that situation based on our
memory of related prior events and experiences. By utilising our memory we can
anticipate the outcome of our chosen actions to avoid unexpected or unwanted
events. In addition, as we subtly alter our actions and recognise altered
outcomes we learn and create new memories, enabling us to improve the
efficiency of our actions over time. However, as this process occurs so
naturally in the subconscious its importance is often overlooked.
|
1305.7144 | Immune System Approaches to Intrusion Detection - A Review (ICARIS) | cs.CR cs.NE | The use of artificial immune systems in intrusion detection is an appealing
concept for two reasons. Firstly, the human immune system provides the human
body with a high level of protection from invading pathogens, in a robust,
self-organised and distributed manner. Secondly, current techniques used in
computer security are not able to cope with the dynamic and increasingly
complex nature of computer systems and their security. It is hoped that
biologically inspired approaches in this area, including the use of
immune-based systems will be able to meet this challenge. Here we collate the
algorithms used, the development of the systems and the outcome of their
implementation. This paper provides an introduction and review of the key developments
within this field, in addition to making suggestions for future research.
|
1305.7145 | Modelling and Analysing Cargo Screening Processes: A Project Outline | cs.AI cs.CY | The efficiency of current cargo screening processes at sea and air ports is
unknown, as no benchmarks exist against which they could be measured. Some
manufacturer benchmarks exist for individual sensors but we have not found any
benchmarks that take a holistic view of the screening procedures assessing a
combination of sensors and also taking operator variability into account. Just
adding up resources and manpower used is not an effective way for assessing
systems where human decision-making and operator compliance to rules play a
vital role. For such systems more advanced assessment methods need to be used,
taking into account that the cargo screening process is of a dynamic and
stochastic nature. Our project aim is to develop a decision support tool
(cargo-screening system simulator) that will map the right technology and
manpower to the right commodity-threat combination in order to maximize
detection rates. In this paper we present a project outline and highlight the
research challenges we have identified so far. In addition we introduce our
first case study, where we investigate the cargo screening process at the ferry
port in Calais.
|
1305.7146 | "You Know Because I Know": a Multidimensional Network Approach to Human
Resources Problem | cs.SI cs.CY cs.DS physics.soc-ph | Finding talents, often among the people already hired, is an endemic
challenge for organizations. The social networking revolution, with online
tools like LinkedIn, made it possible to make explicit and accessible what we
perceived, but not used, for thousands of years: the exact position and ranking
of a person in a network of professional and personal connections. Searching
and mining where and how an employee is positioned on a global skill network will
enable organizations to find unpredictable sources of knowledge, innovation and
know-how. This data richness and hidden knowledge demand a
multidimensional and multiskill approach to the network ranking problem.
Multidimensional networks are networks with multiple kinds of relations. To the
best of our knowledge, no network-based ranking algorithm is able to handle
multidimensional networks and multiple rankings over multiple attributes at the
same time. In this paper we propose such an algorithm, whose aim is to address
the node multi-ranking problem in multidimensional networks. We test our
algorithm over several real world networks, extracted from DBLP and the Enron
email corpus, and we show its usefulness in providing less trivial and more
flexible rankings than the current state of the art algorithms.
|
1305.7169 | Structural and Functional Discovery in Dynamic Networks with
Non-negative Matrix Factorization | cs.SI physics.soc-ph stat.ML | Time series of graphs are increasingly prevalent in modern data and pose
unique challenges to visual exploration and pattern extraction. This paper
describes the development and application of matrix factorizations for
exploration and time-varying community detection in time-evolving graph
sequences. The matrix factorization model allows the user to home in on and
display interesting, underlying structure and its evolution over time. The
methods are scalable to weighted networks with a large number of time points or
nodes, and can accommodate sudden changes to graph topology. Our techniques are
demonstrated with several dynamic graph series from both synthetic and real
world data, including citation and trade networks. These examples illustrate
how users can steer the techniques and combine them with existing methods to
discover and display meaningful patterns in sizable graphs over many time
points.
|
1305.7181 | Lensless Imaging by Compressive Sensing | cs.CV | In this paper, we propose a lensless compressive imaging architecture. The
architecture consists of two components, an aperture assembly and a sensor. No
lens is used. The aperture assembly consists of a two dimensional array of
aperture elements. The transmittance of each aperture element is independently
controllable. The sensor is a single detection element. A compressive sensing
matrix is implemented by adjusting the transmittance of the individual aperture
elements according to the values of the sensing matrix. The proposed
architecture is simple and reliable because no lens is used. The architecture
can be used for capturing images of visible and other spectra such as infrared,
or millimeter waves, in surveillance applications for detecting anomalies or
extracting features such as speed of moving objects. Multiple sensors may be
used with a single aperture assembly to capture multi-view images
simultaneously. A prototype was built using an LCD panel and a photoelectric
sensor for capturing images of visible spectrum.
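The measurement model described above can be sketched as follows, assuming a flattened scene vector and one aperture transmittance pattern per exposure (the names are illustrative, and the compressive sensing reconstruction step is omitted):

```python
# Toy sketch of the lensless measurement model: the aperture assembly realizes
# one row of the sensing matrix per exposure (transmittances in [0, 1]), and
# the single sensor records the inner product of that pattern with the scene.
def capture(scene, sensing_matrix):
    return [sum(a * x for a, x in zip(row, scene)) for row in sensing_matrix]
```

With far fewer rows than scene pixels, these scalar readings form the compressive measurements from which a sparse reconstruction algorithm recovers the image.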
|
1305.7182 | Average Consensus on Arbitrary Strongly Connected Digraphs with
Time-Varying Topologies | cs.SY | We have recently proposed a "surplus-based" algorithm which solves the
multi-agent average consensus problem on general strongly connected and static
digraphs. The essence of that algorithm is to employ an additional variable to
keep track of the state changes of each agent, thereby achieving averaging even
though the state sum is not preserved. In this note, we extend this approach to
the more interesting and challenging case of time-varying topologies: An
extended surplus-based averaging algorithm is designed, under which a necessary
and sufficient graphical condition is derived that guarantees state averaging.
The derived condition requires only that the digraphs be strongly
connected in a \emph{joint} sense, and does not impose "balanced" or
"symmetric" properties on the network topology, which is therefore more general
than those previously reported in the literature.
|
1305.7185 | Collaborative ontology sharing and editing | cs.AI | This article first lists reasons why - in the long term or when creating a
new knowledge base (KB) for general knowledge sharing purposes -
collaboratively building a well-organized KB can provide more possibilities,
at no greater overall cost, than the mainstream approach
where knowledge creation and re-use involves searching, merging and creating
(semi-)independent (relatively small) ontologies or semi-formal documents. The
article lists elements required to achieve this and describes the main one: a
KB editing protocol that keeps the KB free of automatically/manually detected
inconsistencies while not forcing contributors to discuss or agree on
terminology and beliefs, nor requiring a selection committee.
|
1305.7196 | For a Semantic Web based Peer-reviewing and Publication of Research
Results | cs.DL cs.AI | This article shows why the diffusion and peer-reviewing of research results
would be more efficient, precise and relevant if all or at least some parts of
the descriptions and peer-reviews of research results took the form of a
fine-grained semantic network, within articles or knowledge bases, as part of
the Semantic Web. This article also shows some ways this can be done and hence
how research journal/proceeding publishers could allow this. So far, the World
Wide Web Consortium (W3C) has not proposed simple notations and cooperation
protocols - similar to those illustrated or referred to in this article - but
it now seems likely that Wikipedia/Wikidata, Google or the W3C will propose
them sooner or later. Then, research journal/proceeding publishers and
researchers may or may not quickly use this approach.
|
1305.7200 | Organizing Linked Data Quality Related Methods | cs.DL cs.AI cs.IR | This article presents the top-level of an ontology categorizing and
generalizing best practices and quality criteria or measures for Linked Data.
It permits to compare these techniques and have a synthetic organized view of
what can or should be done for knowledge sharing purposes. This ontology is
part of a general knowledge base that can be accessed and complemented by any
Web user. Thus, it can be seen as a cooperatively built library for the above
cited elements. Since they permit to evaluate information objects and create
better ones, these elements also permit knowledge-based tools and techniques -
as well as knowledge providers - to be evaluated and categorized based on their
input/output information objects. One top-level distinction permitting to
organize this ontology is the one between content, medium and containers of
descriptions. Various structural, ontological, syntactical and lexical
distinctions are then used.
|
1305.7214 | Secure Degrees of Freedom of K-User Gaussian Interference Channels: A
Unified View | cs.IT cs.CR math.IT | We determine the exact sum secure degrees of freedom (d.o.f.) of the K-user
Gaussian interference channel. We consider three different secrecy constraints:
1) K-user interference channel with one external eavesdropper (IC-EE), 2)
K-user interference channel with confidential messages (IC-CM), and 3) K-user
interference channel with confidential messages and one external eavesdropper
(IC-CM-EE). We show that for all of these three cases, the exact sum secure
d.o.f. is K(K-1)/(2K-1). We show converses for IC-EE and IC-CM, which imply a
converse for IC-CM-EE. We show achievability for IC-CM-EE, which implies
achievability for IC-EE and IC-CM. We develop the converses by relating the
channel inputs of interfering users to the reliable rates of the interfered
users, and by quantifying the secrecy penalty in terms of the eavesdroppers'
observations. Our achievability uses structured signaling, structured
cooperative jamming, channel prefixing, and asymptotic real interference
alignment. While the traditional interference alignment provides some amount of
secrecy by mixing unintended signals in a smaller sub-space at every receiver,
in order to attain the optimum sum secure d.o.f., we incorporate structured
cooperative jamming into the achievable scheme, and intricately design the
structure of all of the transmitted signals jointly.
|
1305.7250 | Harnessing Simultaneously the Benefits of UWB and MBWA: A Practical
Scenario | cs.IT cs.NI math.IT | UWB has a very large bandwidth in a WPAN network, which is best used for
HD-video applications. Meanwhile, MBWA is a WMAN option optimized for
wireless-IP in a fast moving vehicle. In this paper, we propose a practical
engineering scenario that simultaneously harnesses the distinctive features of
both UWB and MBWA. However, this in-proximity operation of the technologies
will inevitably cause mutual interference to both systems. In light of this, as
a preliminary phase to coexistence, we have derived, under various
circumstances, the maximum interference power limit that needs to be respected
in order to ensure an acceptable system performance as requested by the new
IEEE 802.20 standard.
|
1305.7252 | Joint Spatial Division and Multiplexing: Opportunistic Beamforming and
User Grouping | cs.IT math.IT | Joint Spatial Division and Multiplexing (JSDM) is a recently proposed scheme
to enable massive MIMO like gains and simplified system operations for
Frequency Division Duplexing (FDD) systems. The key idea lies in partitioning
the users into groups with approximately similar covariances, and using
two-stage downlink beamforming: a pre-beamformer that depends on the channel
covariances and minimizes interference across groups and a multiuser MIMO
precoder for the effective channel after pre-beamforming, to counteract
interference within a group. We first focus on the regime of a fixed number of
antennas and large number of users, and show that opportunistic beamforming
with user selection yields significant gain, and thus, channel correlation may
yield a capacity improvement over the uncorrelated "isotropic" channel result
of Sharif and Hassibi. We prove that in the presence of different correlations
among groups, a block diagonalization approach for the design of
pre-beamformers achieves the optimal sum-rate scaling. Next, we consider the
regime of large number of antennas and users, where user selection does not
provide significant gain. Here, we propose a simplified user grouping algorithm
to cluster users into groups when the number of antennas becomes very large, in
a realistic setting where users are randomly distributed and have different
angles of arrival and angular spreads depending on the propagation environment.
Our subsequent analysis leads to a probabilistic scheduling algorithm, where
users within each group are preselected at random based on probabilities
derived from the large system analysis, depending on the fairness criterion.
This is advantageous since only the selected users are required to feedback
their channel state information (CSIT).
|
1305.7254 | Harmony search to solve the container storage problem with different
container types | cs.AI | This paper presents an adaptation of the harmony search algorithm to solve
the storage allocation problem for inbound and outbound containers. This
problem is studied considering multiple container types (regular, open side,
open top, tank, empty and refrigerated), which makes the situation more
complicated, as various storage constraints appear. The objective is to find an
optimal container arrangement that respects the containers' departure dates and
minimizes container re-handling operations. The performance of the proposed
approach is verified by comparison with the results generated by a genetic
algorithm and a LIFO algorithm.
|
1305.7265 | A Focused Crawler Combinatory Link and Content Model Based on T-Graph
Principles | cs.IR cs.DL | The two significant tasks of a focused Web crawler are finding relevant
topic-specific documents on the Web and analytically prioritizing them for
later effective and reliable download. For the first task, we propose a
sophisticated custom algorithm to fetch and analyze the most effective HTML
structural elements of the page as well as the topical boundary and anchor text
of each unvisited link, based on which the topical focus of an unvisited page
can be predicted and elicited with high accuracy. Thus, our novel method
uniquely combines both link-based and content-based approaches. For the second
task, we propose a scoring function of the relevant URLs through the use of
T-Graph (Treasure Graph) to assist in prioritizing the unvisited links that
will later be put into the fetching queue. Our Web search system is called the
Treasure-Crawler. This paper presents the architectural design of the
Treasure-Crawler system, which satisfies the principal requirements of a
focused Web crawler, and asserts the correctness of the system structure,
including all its modules, through illustrations and test results.
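The prioritized fetching queue from the second task can be sketched as a plain max-priority frontier. The URLs and scores below are placeholders, and the T-Graph scoring function itself is out of scope here.

```python
import heapq

class Frontier:
    """Unvisited links ordered by relevance score, highest first."""

    def __init__(self):
        self._heap = []
        self._seen = set()

    def push(self, url, score):
        if url not in self._seen:
            self._seen.add(url)
            # heapq is a min-heap, so negate to pop the highest score first.
            heapq.heappush(self._heap, (-score, url))

    def pop(self):
        score, url = heapq.heappop(self._heap)
        return url, -score

frontier = Frontier()
frontier.push("http://example.com/topic", 0.9)
frontier.push("http://example.com/other", 0.2)
frontier.push("http://example.com/topic", 0.9)   # duplicate, ignored
first, s = frontier.pop()
```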
|
1305.7272 | Accuracy of Range-Based Cooperative Localization in Wireless Sensor
Networks: A Lower Bound Analysis | cs.NI cs.MA | Accurate location information is essential for many wireless sensor network
(WSN) applications. A location-aware WSN generally includes two types of nodes:
sensors, whose locations are to be determined, and anchors, whose locations are known
a priori. For range-based localization, sensors' locations are deduced from
anchor-to-sensor and sensor-to-sensor range measurements. Localization accuracy
depends on the network parameters such as network connectivity and size. This
paper provides a generalized theory that quantitatively characterizes the
relation between network parameters and localization accuracy. We use the
average degree as a connectivity metric and use geometric dilution of precision
(DOP), equivalent to the Cramer-Rao bound, to quantify localization accuracy.
We prove a novel lower bound on the expectation of the average geometric DOP
(LB-E-AGDOP) and derive a closed-form formula that relates LB-E-AGDOP to only
three parameters: average anchor degree, average sensor degree, and number of
sensor nodes. The formula shows that localization accuracy is approximately
inversely proportional to the average degree, and a higher ratio of average
anchor degree to average sensor degree yields better localization accuracy.
Furthermore, the paper demonstrates a strong connection between LB-E-AGDOP and
the best achievable accuracy. Finally, we validate the theory via numerical
simulations with three different random graph models.
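As a minimal 2-D illustration of the metric itself (not the paper's LB-E-AGDOP derivation), geometric DOP for ranging can be computed from the unit direction vectors u_k from a sensor to its anchors: G = sum_k u_k u_k^T and DOP = sqrt(trace(G^{-1})). More, better-spread anchors shrink G^{-1}, matching the intuition that accuracy improves with degree.

```python
import math

def range_dop(sensor, anchors):
    """Geometric DOP of a 2-D sensor position from range-only anchors."""
    gxx = gxy = gyy = 0.0
    for ax, ay in anchors:
        dx, dy = ax - sensor[0], ay - sensor[1]
        r = math.hypot(dx, dy)
        ux, uy = dx / r, dy / r      # unit direction to the anchor
        gxx += ux * ux
        gxy += ux * uy
        gyy += uy * uy
    det = gxx * gyy - gxy * gxy      # invert the 2x2 matrix G
    return math.sqrt((gxx + gyy) / det)

# Three well-spread anchors around the sensor give a small DOP...
dop3 = range_dop((0.0, 0.0), [(1.0, 0.0), (-0.5, 0.87), (-0.5, -0.87)])
# ...while two nearly collinear anchors give a much larger one.
dop2 = range_dop((0.0, 0.0), [(1.0, 0.0), (1.0, 0.1)])
```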
|
1305.7294 | A Note on Cyclic Codes from APN Functions | cs.IT math.IT | Cyclic codes, a class of linear block error-correcting codes, play a
vital role in coding theory and have wide applications. Ding in \cite{D} constructed a number of
classes of cyclic codes from almost perfect nonlinear (APN) functions and
planar functions over finite fields and presented ten open problems on cyclic
codes from highly nonlinear functions. In this paper, we consider two open
problems involving the inverse APN function $f(x)=x^{q^m-2}$ and the Dobbertin
APN function $f(x)=x^{2^{4i}+2^{3i}+2^{2i}+2^{i}-1}$. From the calculation of
linear spans and the minimal polynomials of two sequences generated by these
two classes of APN functions, the dimensions of the corresponding cyclic codes
are determined and lower bounds on the minimum weight of these cyclic codes are
presented. Actually, we present a framework for the minimal polynomial and
linear span of the sequence $s^{\infty}$ defined by $s_t=Tr((1+\alpha^t)^e)$,
where $\alpha$ is a primitive element in $GF(q)$. These techniques can also be
applied to other open problems in \cite{D}.
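The linear span used above can be computed for a concrete binary sequence with the classic Berlekamp-Massey algorithm. This sketch is limited to GF(2) and is not specific to the APN-derived sequences in the paper.

```python
def linear_span_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The period-7 m-sequence 1110100 has linear span 3.
span = linear_span_gf2([1, 1, 1, 0, 1, 0, 0])
```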
|
1305.7296 | What exactly are the properties of scale-free and other networks? | nlin.AO cs.SI physics.comp-ph physics.soc-ph | The concept of scale-free networks has been widely applied across natural and
physical sciences. Many claims are made about the properties of these networks,
even though the concept of scale-free is often vaguely defined. We present
tools and procedures to analyse the statistical properties of networks defined
by arbitrary degree distributions and other constraints. Doing so reveals the
highly likely properties, and some unrecognised richness, of scale-free
networks, and casts doubt on whether some previously claimed properties are
actually due to the networks being scale-free.
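One of the basic ingredients of such an analysis, a degree sequence drawn from a power law, can be sampled as follows. The exponent gamma and the degree cutoffs are illustrative choices, not values from the paper.

```python
import random

def powerlaw_degrees(n, gamma=2.5, k_min=1, k_max=100, rng=random.Random(7)):
    """Sample n degrees from a truncated discrete power law P(k) ~ k^-gamma."""
    ks = list(range(k_min, k_max + 1))
    weights = [k ** -gamma for k in ks]
    degrees = rng.choices(ks, weights=weights, k=n)
    if sum(degrees) % 2:             # a graphical sequence needs an even sum
        degrees[0] += 1
    return degrees

degrees = powerlaw_degrees(10000)
mean_k = sum(degrees) / len(degrees)
max_k = max(degrees)
```

Such a sequence can then be wired up with the configuration model (or any other sampler respecting the constraints) to study which network properties are typical rather than assumed.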
|
1305.7311 | Robust Hyperspectral Unmixing with Correntropy based Metric | cs.CV | Hyperspectral unmixing is one of the crucial steps for many hyperspectral
applications. Hyperspectral unmixing has proven to be a difficult task in
unsupervised settings where the endmembers and abundances are both unknown.
Moreover, the task becomes even more challenging when the spectral bands are
degraded by noise. This paper
presents a robust model for unsupervised hyperspectral unmixing. Specifically,
our model is developed with the correntropy based metric where the non-negative
constraints on both endmembers and abundances are imposed to keep physical
significance. In addition, a sparsity prior is explicitly formulated to
constrain the distribution of the abundances of each endmember. To solve our
model, a half-quadratic optimization technique is developed to convert the
original complex optimization problem into an iteratively re-weighted NMF with
sparsity constraints. As a result, the optimization of our model can adaptively
assign small weights to noisy bands and place more emphasis on noise-free bands.
In addition, with sparsity constraints, our model can naturally generate sparse
abundances. Experiments on synthetic and real data demonstrate the
effectiveness of our model in comparison to the related state-of-the-art
unmixing models.
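The band reweighting induced by the correntropy metric can be sketched directly: under a Gaussian kernel of width sigma (an assumed parameter, as is the residual vector), a band with reconstruction residual e_b gets weight exp(-e_b^2 / (2 sigma^2)) in the next iteratively re-weighted NMF step, so noisy bands are down-weighted.

```python
import math

def correntropy_weights(residuals, sigma=1.0):
    """Half-quadratic weights from a Gaussian (correntropy) kernel."""
    return [math.exp(-e * e / (2.0 * sigma * sigma)) for e in residuals]

# Bands with small reconstruction error keep weight near 1; an outlier band
# with a large residual is almost ignored in the next NMF update.
weights = correntropy_weights([0.1, 0.2, 5.0])
```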
|
1305.7316 | A hybrid approach for semantic enrichment of MathML mathematical
expressions | cs.DL cs.IR | In this paper, we present a new approach to the problem of semantic
enrichment of mathematical expressions. Our approach combines statistical
machine translation with a disambiguation step that makes use of the text
surrounding the mathematical expressions. We first use a Support Vector Machine classifier to
disambiguate mathematical terms using both their presentation form and
surrounding text. We then use the disambiguation result to enhance the semantic
enrichment of a statistical-machine-translation-based system. Experimental
results show that our system achieves improvements over prior systems.
|
1305.7323 | Sub-Stream Fairness and Numerical Correctness in MIMO Interference
Channels | cs.IT math.IT | Signal-to-interference plus noise ratio (SINR) and rate fairness in a system
are important quality-of-service (QoS) metrics. The well-known SINR
maximization (max-SINR) algorithm does not achieve fairness between a user's
streams, i.e., sub-stream fairness is not achieved. To this end, we propose a
distributed power control algorithm to render sub-stream fairness in the
system. Sub-stream fairness is a less restrictive design metric than stream
fairness (i.e., fairness between all streams), so the sum-rate degradation is
milder. Algorithmic parameters can significantly differentiate the results of
numerical algorithms. A complete picture for comparison of algorithms can only
be depicted by varying these parameters. For example, a predetermined iteration
number or a negligible increment in the sum-rate can be the stopping criteria
of an algorithm. While distributed interference alignment (DIA) can
reasonably achieve sub-stream fairness for the latter, the imbalance between
sub-streams increases as the preset iteration number decreases. Thus, a
comparison of max-SINR and DIA with a low preset iteration number can only depict a part
of the picture. We analyze such important parameters and their effects on SINR
and rate metrics to exhibit numerical correctness in executing the benchmarks.
Finally, we propose group filtering schemes that jointly design the streams of
a user, in contrast to the max-SINR scheme, which designs each stream of a user
separately.
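For intuition about SINR-equalizing power control, here is the classic Foschini-Miljanic iteration (a standard scheme, not the paper's algorithm): each stream scales its power by target/SINR, and when the target is feasible all SINRs converge to it. The gain matrix, noise level and target are illustrative.

```python
def sinr(i, p, G, noise):
    """SINR of stream i: G[i][j] is the gain from transmitter j to receiver i."""
    interference = noise + sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / interference

def power_control(G, noise=0.1, target=1.0, iters=200):
    """Distributed update p_i <- p_i * target / SINR_i; converges when the
    target SINR is feasible for the given gains."""
    p = [1.0] * len(G)
    for _ in range(iters):
        p = [p[i] * target / sinr(i, p, G, noise) for i in range(len(p))]
    return p

G = [[1.0, 0.2], [0.3, 1.0]]        # weak cross-gains: target is feasible
p = power_control(G)
sinrs = [sinr(i, p, G, 0.1) for i in range(2)]
```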
|
1305.7331 | Alternating Decision trees for early diagnosis of dengue fever | cs.LG q-bio.QM stat.AP | Dengue fever, a flu-like illness spread by the bite of an infected
mosquito, is fast emerging as a major health problem. Timely and cost-effective
diagnosis using clinical and laboratory features would reduce the mortality
rates besides providing better grounds for clinical management and disease
surveillance. We wish to develop a robust and effective decision tree based
approach for predicting dengue disease. Our analysis is based on the clinical
characteristics and laboratory measurements of the diseased individuals. We
have developed and trained an alternating decision tree with boosting and
compared its performance with the C4.5 algorithm for dengue disease diagnosis.
Of the 65 patient records, 53 individuals were confirmed to have dengue fever.
The alternating decision tree based algorithm was able to identify dengue
fever from the clinical and laboratory data, correctly classifying 89% of
instances with an F-measure of 0.86 and a receiver operating characteristic
(ROC) area of 0.826, compared to C4.5 with 78% correctly classified instances,
an F-measure of 0.738 and an ROC area of 0.617. The alternating decision tree
based approach with boosting has been
able to predict dengue fever with a higher degree of accuracy than C4.5 based
decision tree using simple clinical and laboratory features. Further analysis
on larger data sets is required to improve the sensitivity and specificity of
the alternating decision trees.
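The reported metrics can be recomputed in form from a binary confusion matrix. The counts below are hypothetical, since the abstract gives only aggregate figures (65 records, 53 dengue-positive).

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall for a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 48 of 53 dengue cases detected, 4 false positives.
f1 = f_measure(tp=48, fp=4, fn=5)
# TN = (65 - 53) - FP = 8, so 56 of 65 records are classified correctly.
accuracy = (48 + 8) / 65
```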
|
1305.7332 | Compositional Verification and Optimization of Interactive Markov Chains | cs.LO cs.SY | Interactive Markov chains (IMC) are compositional behavioural models
extending labelled transition systems and continuous-time Markov chains. We
provide a framework and algorithms for compositional verification and
optimization of IMC with respect to time-bounded properties. Firstly, we give a
specification formalism for IMC. Secondly, given a time-bounded property, an
IMC component and the assumption that its unknown environment satisfies a given
specification, we synthesize a scheduler for the component optimizing the
probability that the property is satisfied in any such environment.
|
1305.7345 | Algebraic Properties of Qualitative Spatio-Temporal Calculi | cs.AI | Qualitative spatial and temporal reasoning is based on so-called qualitative
calculi. Algebraic properties of these calculi have several implications on
reasoning algorithms. But what exactly is a qualitative calculus? And to what
extent do the proposed qualitative calculi meet these demands? The literature
provides various answers to the first question but only few facts about the
second. In this paper we identify the minimal requirements for binary
spatio-temporal calculi and discuss the relevance of the corresponding axioms
for representation and reasoning. We also analyze existing qualitative calculi
and provide a classification involving different notions of a relation algebra.
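A tiny concrete example of such a calculus is the point algebra with base relations {<, =, >}. The sketch below (an illustration, not the paper's formalization) shows its composition table and one refinement step of the path-consistency rule R(i,k) := R(i,k) ∩ (R(i,j) ∘ R(j,k)).

```python
from itertools import product

# Weak composition table of the point algebra over base relations <, =, >.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    """Compose two (possibly disjunctive) relations via the base table."""
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

# x < y and y < z force x < z: one path-consistency step refines the
# initially unknown relation between x and z.
unknown = {"<", "=", ">"}
refined = unknown & compose({"<"}, {"<"})
```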
|