| id | title | categories | abstract |
|---|---|---|---|
1304.5629 | Art History on Wikipedia, a Macroscopic Observation | cs.SI cs.DL | How are articles about art historical actors interlinked within Wikipedia?
Led by this question, we seek an overview of the link structure of a
domain-specific subset of Wikipedia articles. We use an established domain-specific
person name authority, the Getty Union List of Artist Names (ULAN), in order to
externally identify relevant actors. Besides containing consistent biographical
person data, this database also provides associative relationships between its
person records, serving as a reference link structure for comparison. As a
first step, we use mappings between the ULAN and English DBpedia provided by
the Virtual International Authority File (VIAF). This way, we are able to identify
18,002 relevant person articles. Examining the link structure between these
resources reveals interesting insights into the high-level structure of art
historical knowledge as it is represented on Wikipedia.
|
1304.5633 | Tighter Upper Bounds for the Minimum Number of Calls and Rigorous
Minimal Time in Fault-Tolerant Gossip Schemes | cs.IT cs.DS math.IT | The gossip problem (telephone problem) is an information dissemination
problem in which each of $n$ nodes of a communication network has a unique
piece of information that must be transmitted to all the other nodes using
two-way communications (telephone calls) between the pairs of nodes. During a
call between the given two nodes, they exchange the whole information known to
them at that moment. In this paper we investigate the $k$-fault-tolerant gossip
problem, which is a generalization of the gossip problem, where at most $k$
arbitrary faults of calls are allowed. The problem is to find the minimal
number of calls $\tau(n,k)$ needed to guarantee the $k$-fault-tolerance. We
construct two classes of $k$-fault-tolerant gossip schemes (sequences of calls)
and find two upper bounds on $\tau(n,k)$, which improve the previously known
results. The first upper bound for general even $n$ is $\tau(n,k) \leq 1/2 n
\lceil\log_2 n\rceil + 1/2 n k$. This result is used to obtain the upper bound
for general odd $n$. From the expressions for the second upper bound it follows
that $\tau(n,k) \leq 2/3 n k + O(n)$ for large $n$. Assuming that the calls can
take place simultaneously, it is also of interest to find $k$-fault-tolerant
gossip schemes, which can spread the full information in minimal time. For even
$n$ we show that the minimal time is $T(n,k)=\lceil\log_2 n\rceil + k$.
|
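The closed-form bounds quoted in the abstract above are easy to evaluate for even $n$; a minimal sketch (function names are my own, not from the paper):

```python
import math

def gossip_calls_upper_bound(n, k):
    """Upper bound tau(n,k) <= (n/2)*ceil(log2 n) + (n/2)*k, for even n."""
    assert n % 2 == 0 and n >= 2
    return (n // 2) * math.ceil(math.log2(n)) + (n // 2) * k

def gossip_minimal_time(n, k):
    """Minimal time T(n,k) = ceil(log2 n) + k, for even n."""
    assert n % 2 == 0 and n >= 2
    return math.ceil(math.log2(n)) + k

# e.g. for n = 8 nodes tolerating k = 2 faulty calls:
# at most 20 calls suffice, and the minimal time is 5 rounds
```

For odd $n$ the abstract derives the bound from the even case, so no closed form is reproduced here.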
1304.5634 | A Survey on Multi-view Learning | cs.LG | In recent years, a great many methods of learning from multi-view data by
considering the diversity of different views have been proposed. These views
may be obtained from multiple sources or different feature subsets. In trying
to organize and highlight similarities and differences between the variety of
multi-view learning approaches, we review a number of representative multi-view
learning algorithms in different areas and classify them into three groups: 1)
co-training, 2) multiple kernel learning, and 3) subspace learning. Notably,
co-training style algorithms train alternately to maximize the mutual agreement
on two distinct views of the data; multiple kernel learning algorithms exploit
kernels that naturally correspond to different views and combine kernels either
linearly or non-linearly to improve learning performance; and subspace learning
algorithms aim to obtain a latent subspace shared by multiple views by assuming
that the input views are generated from this latent subspace. Though there is
significant variance in the approaches to integrating multiple views to improve
learning performance, they mainly exploit either the consensus principle or the
complementary principle to ensure the success of multi-view learning. Since
access to multiple views is the foundation of multi-view learning, beyond the
study of learning a model from multiple views, it is also valuable
to study how to construct multiple views and how to evaluate these views.
Overall, by exploring the consistency and complementary properties of different
views, multi-view learning becomes more effective and more promising, with
better generalization ability than single-view learning.
|
1304.5643 | Satisfiability and Canonisation of Timely Constraints | math.CO cs.MA | We abstractly formulate an analytic problem that arises naturally in the
study of coordination in multi-agent systems. Let I be a set of arbitrary
cardinality (the set of actions) and assume that for each pair of distinct
actions (i,j), we are given a number \delta(i,j). We say that a function t,
specifying a time for each action, satisfies the timely constraint {\delta} if
for every pair of distinct actions (i,j), we have t(j)-t(i) <= \delta(i,j) (and
thus also t(j)-t(i) >= -\delta(j,i)). While the approach that first comes to
mind for analysing these definitions is an analytic/geometric one, it turns out
that graph-theoretic tools yield powerful results when applied to these
definitions. Using such tools, we characterise the set of satisfiable timely
constraints, and reduce the problem of satisfiability of a timely constraint to
the all-pairs shortest-path problem, and for finite I, furthermore to the
negative-cycle detection problem. Moreover, we constructively show that every
satisfiable timely constraint has a minimal satisfying function - a key
milestone on the way to optimally solving a large class of coordination
problems - and reduce the problem of finding this minimal satisfying function,
as well as the problems of classifying and comparing timely constraints, to the
all-pairs shortest-path problem. At the heart of our analysis lies the
constructive definition of a "nicely-behaved" representative for each class of
timely constraints sharing the same set of satisfying functions. We show that
this canonical representative, as well as the map from such canonical
representatives to the sets of functions satisfying the classes of timely
constraints they represent, has many desired properties, which provide deep
insights into the structure underlying the above definitions.
|
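The reduction the abstract describes is the classical treatment of systems of difference constraints: add an edge i -> j with weight \delta(i,j), run a shortest-path computation, and a negative cycle certifies unsatisfiability. A minimal Bellman-Ford sketch for finite I (an illustration of that reduction, not the paper's construction; names are my own):

```python
def satisfy_timely(actions, delta):
    """Try to find t with t[j] - t[i] <= delta[(i, j)] for every constrained
    pair; return such a t, or None if a negative cycle makes the constraint
    unsatisfiable.  Bellman-Ford from a virtual source connected to every
    action with weight 0, simulated by initialising every t to 0."""
    t = {a: 0.0 for a in actions}
    edges = [(i, j, w) for (i, j), w in delta.items()]
    for _ in range(len(actions) - 1):
        for i, j, w in edges:
            if t[i] + w < t[j]:
                t[j] = t[i] + w
    # one verification pass: any further improvement means a negative cycle
    for i, j, w in edges:
        if t[i] + w < t[j]:
            return None
    return t
```

Here `delta` may be partial (only the constrained pairs); the abstract's setting supplies a value for every ordered pair of distinct actions.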
1304.5666 | The Structure and Quantum Capacity of a Partially Degradable Quantum
Channel | quant-ph cs.IT math.IT | The quantum capacity of degradable quantum channels has been proven to be
additive. On the other hand, there is no general rule for the behavior of
quantum capacity for non-degradable quantum channels. We introduce the set of
partially degradable (PD) quantum channels to answer the question of additivity
of quantum capacity for a well-separable subset of non-degradable channels. A
quantum channel is partially degradable if the channel output can be used to
simulate the degraded environment state. PD channels can exist in the
degradable, non-degradable, and conjugate degradable families. We define the term
partial simulation, which is a clear benefit that arises from the structure of
the complementary channel of a PD channel. We prove that the quantum capacity
of an arbitrary dimensional PD channel is additive. We also demonstrate that
better quantum data rates can be achieved over a PD channel in comparison to
standard (non-PD) channels. Our results indicate that the partial degradability
property can be exploited and offers many benefits for quantum
communications.
|
1304.5670 | Particularities of Analog FCS Optimization | cs.IT math.IT | We analyze the performance of optimal feedback communication systems
with analog transmitters in the forward channel (AFCS). It is shown that the
measures and limiting bounds of AFCS performance are similar to, but distinct
from, those used in digital communications and information theory. The causes of
these differences are discussed.
|
1304.5678 | Analytic Feature Selection for Support Vector Machines | cs.LG stat.ML | Support vector machines (SVMs) rely on the inherent geometry of a data set to
classify training data. Because of this, we believe SVMs are an excellent
candidate to guide the development of an analytic feature selection algorithm,
as opposed to the more commonly used heuristic methods. We propose a
filter-based feature selection algorithm based on the inherent geometry of a
feature set. Through observation, we identified six geometric properties that
differ between optimal and suboptimal feature sets, and have statistically
significant correlations to classifier performance. Our algorithm is based on
logistic and linear regression models using these six geometric properties as
predictor variables. The proposed algorithm achieves excellent results on high
dimensional text data sets, with features that can be organized into a handful
of feature types; for example, unigrams, bigrams or semantic structural
features. We believe this algorithm is a novel and effective approach to
solving the feature selection problem for linear SVMs.
|
1304.5700 | Guiding Blind Transmitters: Degrees of Freedom Optimal Interference
Alignment Using Relays | cs.IT math.IT | Channel state information (CSI) at the transmitters (CSIT) is of importance
for interference alignment schemes to achieve the optimal degrees of freedom
(DoF) for wireless networks. This paper investigates the impact of half-duplex
relays on the degrees of freedom (DoF) of the X channel and the interference
channel when the transmitters are blind in the sense that no CSIT is available.
In particular, it is shown that adding relay nodes with global CSI to the
communication model is sufficient to recover the optimal DoF for these models
with global CSI at the transmitters. The relay nodes in essence
help steer the directions of the transmitted signals to facilitate interference
alignment to achieve the optimal DoF with CSIT. The general MxN X channel with
relays and the K-user interference channel are both investigated, and
sufficient conditions on the number of antennas at the relays and the number of
relays needed to achieve the optimal DoF with CSIT are established. Using
relays, the optimal DoF can be achieved in finite channel uses. The DoF for the
case when relays only have delayed CSI is also investigated, and it is shown
that with delayed CSI at the relay the optimal DoF with full CSIT cannot be
achieved. Special cases of the X channel and interference channel are
investigated to obtain further design insights.
|
1304.5705 | A novice looks at emotional cognition | cs.AI | Modeling emotional-cognition is in a nascent stage and therefore wide-open
for new ideas and discussions. In this paper the author looks at the modeling
problem by bringing in ideas from axiomatic mathematics, information theory,
computer science, molecular biology, non-linear dynamical systems and quantum
computing and explains how ideas from these disciplines may have applications
in modeling emotional-cognition.
|
1304.5706 | Calculation and analysis of solitary waves and kinks in elastic tubes | cs.CE math-ph math.MP nlin.PS | The paper is devoted to the analysis of different models that describe waves
in fluid-filled and gas-filled elastic tubes, and to the development of methods
for the calculation and numerical analysis of solutions with solitary waves and
kinks for these models. A membrane model and a plate model are used for the
tube. Two types of solitary waves are found. One-parametric families are stable
and may serve as shock structures. Null-parametric solitary waves are unstable;
the process by which such solitary waves split is investigated, and it may lead
to the appearance of solutions with kinks. Kink solutions are null-parametric
and stable. The general theory of reversible shocks is used for the analysis of
the numerical solutions.
|
1304.5723 | Classical information storage in an $n$-level quantum system | cs.IT math-ph math.IT math.MP quant-ph | A game is played by a team of two --- say Alice and Bob --- in which the
value of a random variable $x$ is revealed to Alice only, who cannot freely
communicate with Bob. Instead, she is given a quantum $n$-level system,
respectively a classical $n$-state system, which she can put in possession of
Bob in any state she wishes. We evaluate how successfully they managed to store
and recover the value of $x$ in the used system by requiring Bob to specify a
value $z$ and giving a reward of value $ f(x,z)$ to the team.
We show that whatever the probability distribution of $x$ and the reward
function $f$ are, when using a quantum $n$-level system, the maximum expected
reward obtainable with the best possible team strategy is equal to that
obtainable with the use of a classical $n$-state system.
The proof relies on mixed discriminants of positive matrices and --- perhaps
surprisingly --- an application of the Supply--Demand Theorem for bipartite
graphs. As a corollary, we get an infinite set of new, dimension dependent
inequalities regarding positive operator valued measures and density operators
on complex $n$-space.
As a further corollary, we see that the greatest value, with respect to a
given distribution of $x$, of the mutual information $I(x;z)$ that is
obtainable using an $n$-level quantum system equals the analogous maximum for a
classical $n$-state system.
|
1304.5745 | Proactive Data Download and User Demand Shaping for Data Networks | cs.IT cs.NI math.IT | In this work, we propose and study optimal proactive resource allocation and
demand shaping for data networks. Motivated by the recent findings on the
predictability of human behavior patterns in data networks, and the emergence
of highly capable handheld devices, our design aims to smooth out the network
traffic over time and minimize the data delivery costs.
Our framework utilizes proactive data services as well as smart content
recommendation schemes for shaping the demand. Proactive data services take
place during the off-peak hours based on a statistical prediction of a demand
profile for each user, whereas smart content recommendation assigns modified
valuations to data items so as to render the users' demand less uncertain.
Hence, our recommendation scheme aims to boost the performance of proactive
services within the allowed flexibility of user requirements. We conduct
theoretical performance analysis that quantifies the leveraged cost reduction
through the proposed framework. We show that the cost reduction scales at the
same rate as the cost function scales with the number of users. Further, we
prove that \emph{demand shaping} through smart recommendation strictly reduces
the incurred cost even below that of proactive downloads without
recommendation.
|
1304.5758 | Prior-free and prior-dependent regret bounds for Thompson Sampling | stat.ML cs.LG | We consider the stochastic multi-armed bandit problem with a prior
distribution on the reward distributions. We are interested in studying
prior-free and prior-dependent regret bounds, very much in the same spirit as
the usual distribution-free and distribution-dependent bounds for the
non-Bayesian stochastic bandit. Building on the techniques of Audibert and
Bubeck [2009] and Russo and Van Roy [2013] we first show that Thompson Sampling
attains an optimal prior-free bound in the sense that for any prior
distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$. This
result is unimprovable in the sense that there exists a prior distribution such
that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20}
\sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al.
[2013] (where the optimal mean is known as well as a lower bound on the
smallest gap) and we show that in this case the regret of Thompson Sampling is
in fact uniformly bounded over time, thus showing that Thompson Sampling can
greatly take advantage of the nice properties of these priors.
|
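As a concrete reference point, Thompson Sampling for Bernoulli bandits is just per-arm posterior (Beta) sampling; a minimal sketch of the algorithm being analysed (an assumed illustration, not the paper's code):

```python
import random

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Run Thompson Sampling with independent Beta(1, 1) priors on
    Bernoulli arms; return total reward and per-arm pull counts."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # posterior successes + 1
    beta = [1] * k   # posterior failures + 1
    pulls = [0] * k
    total = 0
    for _ in range(horizon):
        # sample a mean from each arm's posterior and play the argmax
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        arm = max(range(k), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_means[arm] else 0
        pulls[arm] += 1
        total += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total, pulls
```

With arms of means 0.2 and 0.8, the better arm quickly dominates the pull counts; note that the $14\sqrt{nK}$ bound in the abstract is on Bayesian regret, i.e. averaged over the prior, not per instance.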
1304.5790 | Gaussian Half-Duplex Relay Networks: improved constant gap and
connections with the assignment problem | cs.IT math.IT | This paper considers a general Gaussian relay network where a source
transmits a message to a destination with the help of N half-duplex relays. It
proves that the information theoretic cut-set upper bound to the capacity can
be achieved to within 2.021(N+2) bits with noisy network coding, thereby
reducing the previously known gap. Further improved gap results are presented
for more structured networks like diamond networks. It is then shown that the
generalized Degrees-of-Freedom of a general Gaussian half-duplex relay network
is the solution of a linear program, where the coefficients of the linear
inequality constraints are proved to be the solution of several linear
programs, known in graph theory as the assignment problem, for which efficient
numerical algorithms exist. The optimal schedule, that is, the optimal value of
the 2^N possible transmit-receive configurations/states for the relays, is
investigated and known results for diamond networks are extended to general
relay networks. It is shown, for the case of 2 relays, that only 3 out of the 4
possible states have strictly positive probability. Extensive experimental
results show that, for a general N-relay network with N<9, the optimal schedule
has at most N+1 states with strictly positive probability. As an extension of
a conjecture presented for diamond networks, it is conjectured that this result
holds for any HD relay network and any number of relays. Finally, a 2-relay
network is studied to determine the channel conditions under which selecting
the best relay is not optimal, and to highlight the nature of the rate gain due
to multiple relays.
|
1304.5793 | Continuum armed bandit problem of few variables in high dimensions | cs.LG | We consider the stochastic and adversarial settings of continuum armed
bandits where the arms are indexed by [0,1]^d. The reward functions r:[0,1]^d
-> R are assumed to intrinsically depend on at most k coordinate variables
implying r(x_1,..,x_d) = g(x_{i_1},..,x_{i_k}) for distinct and unknown
i_1,..,i_k from {1,..,d} and some locally Holder continuous g:[0,1]^k -> R with
exponent 0 < alpha <= 1. Firstly, assuming (i_1,..,i_k) to be fixed across
time, we propose a simple modification of the CAB1 algorithm where we construct
the discrete set of sampling points to obtain a bound of
O(n^((alpha+k)/(2*alpha+k)) (log n)^((alpha)/(2*alpha+k)) C(k,d)) on the
regret, with C(k,d) depending at most polynomially in k and sub-logarithmically
in d. The construction is based on creating partitions of {1,..,d} into k
disjoint subsets and is probabilistic, hence our result holds with high
probability. Secondly we extend our results to also handle the more general
case where (i_1,...,i_k) can change over time and derive regret bounds for the
same.
|
1304.5802 | Nonlinear Basis Pursuit | cs.IT math.IT math.ST stat.TH | In compressive sensing, the basis pursuit algorithm aims to find the sparsest
solution to an underdetermined linear equation system. In this paper, we
generalize basis pursuit to finding the sparsest solution to higher order
nonlinear systems of equations, called nonlinear basis pursuit. In contrast to
the existing nonlinear compressive sensing methods, the new algorithm that
solves the nonlinear basis pursuit problem is convex and not greedy. The novel
algorithm enables the compressive sensing approach to be used for a broader
range of applications where there are nonlinear relationships between the
measurements and the unknowns.
|
1304.5810 | Exchanging OWL 2 QL Knowledge Bases | cs.AI | Knowledge base exchange is an important problem in the area of data exchange
and knowledge representation, where one is interested in exchanging information
between a source and a target knowledge base connected through a mapping. In
this paper, we study this fundamental problem for knowledge bases and mappings
expressed in OWL 2 QL, the profile of OWL 2 based on the description logic
DL-Lite_R. More specifically, we consider the problem of computing universal
solutions, identified as one of the most desirable translations to be
materialized, and the problem of computing UCQ-representations, which optimally
capture in a target TBox the information that can be extracted from a source
TBox and a mapping by means of unions of conjunctive queries. For the former we
provide a novel automata-theoretic technique, and complexity results that range
from NP to EXPTIME, while for the latter we show NLOGSPACE-completeness.
|
1304.5817 | Frequency-Domain Group-based Shrinkage Estimators for UWB Systems | cs.IT math.IT | In this work, we propose low-complexity adaptive biased estimation
algorithms, called group-based shrinkage estimators (GSEs), for parameter
estimation and interference suppression scenarios with mechanisms to
automatically adjust the shrinkage factors. The proposed estimation algorithms
divide the target parameter vector into a number of groups and adaptively
calculate one shrinkage factor for each group. GSE schemes improve the
performance of the conventional least squares (LS) estimator in terms of the
mean-squared error (MSE), while requiring a very modest increase in complexity.
An MSE analysis is presented which indicates the lower bounds of the GSE
schemes with different group sizes. We prove that our proposed schemes
outperform the biased estimation with only one shrinkage factor and the best
performance of GSE can be obtained with the maximum number of groups. Then, we
consider an application of the proposed algorithms to single-carrier
frequency-domain equalization (SC-FDE) of direct-sequence ultra-wideband
(DS-UWB) systems, in which the structured channel estimation (SCE) algorithm
and the frequency domain receiver employ the GSE. The simulation results show
that the proposed algorithms significantly outperform the conventional unbiased
estimator in the analyzed scenarios.
|
1304.5821 | A Unified Approach to Joint and Iterative Adaptive Interference
Cancellation and Parameter Estimation for CDMA Systems in Multipath Channels | cs.IT math.IT | This paper proposes a unified approach to joint adaptive parameter estimation
and interference cancellation (IC) for direct sequence
code-division-multiple-access (DS-CDMA) systems in multipath channels. A
unified framework is presented in which the IC problem is formulated as an
optimization problem with extra degrees of freedom of an IC parameter vector
for each stage and user. We propose a joint optimization method for estimating
the IC parameter vector, the linear receiver filter front-end, and the channel
along with minimum mean squared error (MMSE) expressions for the estimators.
Based on the proposed joint optimization approach, we derive low-complexity
stochastic gradient (SG) algorithms for estimating the desired parameters.
Simulation results for the uplink of a synchronous DS-CDMA system show that the
proposed methods significantly outperform the best known IC receivers.
|
1304.5822 | Bargaining for Revenue Shares on Tree Trading Networks | cs.GT cs.AI | We study trade networks with a tree structure, where a seller with a single
indivisible good is connected to buyers, each with some value for the good, via
a unique path of intermediaries. Agents in the tree make multiplicative revenue
share offers to their parent nodes, who choose the best offer and offer part of
it to their parent, and so on; the winning path is determined by who finally
makes the highest offer to the seller. In this paper, we investigate how these
revenue shares might be set via a natural bargaining process between agents on
the tree, specifically, egalitarian bargaining between endpoints of each edge
in the tree. We investigate the fixed point of this system of bargaining
equations and prove various desirable properties of this solution concept, including (i)
existence, (ii) uniqueness, (iii) efficiency, (iv) membership in the core, (v)
strict monotonicity, (vi) polynomial-time computability to any given accuracy.
Finally, we present numerical evidence that asynchronous dynamics with randomly
ordered updates always converges to the fixed point, indicating that the fixed
point shares might arise from decentralized bargaining amongst agents on the
trade network.
|
1304.5823 | Towards a Formal Distributional Semantics: Simulating Logical Calculi
with Tensors | math.LO cs.CL cs.LO | The development of compositional distributional models of semantics
reconciling the empirical aspects of distributional semantics with the
compositional aspects of formal semantics is a popular topic in the
contemporary literature. This paper seeks to bring this reconciliation one step
further by showing how the mathematical constructs commonly used in
compositional distributional models, such as tensors and matrices, can be used
to simulate different aspects of predicate logic.
This paper discusses how the canonical isomorphism between tensors and
multilinear maps can be exploited to simulate a full-blown quantifier-free
predicate calculus using tensors. It provides tensor interpretations of the set
of logical connectives required to model propositional calculi. It suggests a
variant of these tensor calculi capable of modelling quantifiers, using few
non-linear operations. It finally discusses the relation between these
variants, and how this relation should constitute the subject of future work.
|
1304.5827 | An Efficient MAC Protocol with Selective Grouping and Cooperative
Sensing in Cognitive Radio Networks | cs.ET cs.IT cs.NI math.IT | In cognitive radio networks, spectrum sensing is a crucial technique to
discover spectrum opportunities for the Secondary Users (SUs). The quality of
spectrum sensing is evaluated by both sensing accuracy and sensing efficiency.
Here, sensing accuracy is represented by the false alarm probability and the
detection probability while sensing efficiency is represented by the sensing
overhead and network throughput. In this paper, we propose a group-based
cooperative Medium Access Control (MAC) protocol called GC-MAC, which addresses
the tradeoff between sensing accuracy and efficiency. In GC-MAC, the
cooperative SUs are grouped into several teams. During a sensing period, each
team senses a different channel while SUs in the same team perform the joint
detection on the targeted channel. The sensing process will not stop unless an
available channel is discovered. To reduce the sensing overhead, an
SU-selecting algorithm is presented to selectively choose the cooperative SUs
based on the channel dynamics and usage patterns. Then, an analytical model is
built to study the sensing accuracy-efficiency tradeoff under two types of
channel conditions: time-invariant channel and time-varying channel. An
optimization problem that maximizes achievable throughput is formulated to
optimize the important design parameters. Both saturation and non-saturation
situations are investigated with respect to throughput and sensing overhead.
Simulation results indicate that the proposed protocol is able to significantly
decrease sensing overhead and increase network throughput with guaranteed
sensing accuracy.
|
1304.5846 | A hybrid scheme for encoding audio signal using hidden Markov models of
waveforms | math.ST cs.IT math.IT stat.TH | This paper reports on recent results related to the encoding of audiophonic
signals using time-scale and time-frequency transforms. More precisely, non-linear,
structured approximations for tonal and transient components using local cosine
and wavelet bases will be described, yielding expansions of audio signals in
the form tonal + transient + residual. We describe a general formulation
involving hidden Markov models, together with corresponding rate estimates.
Estimators for the transient/tonal balance are also discussed.
|
1304.5850 | Large System Analysis of Linear Precoding in MISO Broadcast Channels
with Confidential Messages | cs.IT math.IT | In this paper, we study the performance of regularized channel inversion
(RCI) precoding in large MISO broadcast channels with confidential messages
(BCC). We obtain a deterministic approximation for the achievable secrecy
sum-rate which is almost surely exact as the number of transmit antennas $M$
and the number of users $K$ grow to infinity in a fixed ratio $\beta=K/M$. We
derive the optimal regularization parameter $\xi$ and the optimal network load
$\beta$ that maximize the per-antenna secrecy sum-rate. We then propose a
linear precoder based on RCI and power reduction (RCI-PR) that significantly
increases the high-SNR secrecy sum-rate for $1<\beta<2$. Our proposed precoder
achieves a per-user secrecy rate which has the same high-SNR scaling factor as
both the following upper bounds: (i) the rate of the optimum RCI precoder
without secrecy requirements, and (ii) the secrecy capacity of a single-user
system without interference. Furthermore, we obtain a deterministic
approximation for the secrecy sum-rate achievable by RCI precoding in the
presence of channel state information (CSI) error. We also analyze the
performance of our proposed RCI-PR precoder with CSI error, and we determine
how the error must scale with the SNR in order to maintain a given rate gap to
the case with perfect CSI.
|
1304.5856 | Fundamental Limits of Distributed Caching in D2D Wireless Networks | cs.IT cs.NI math.IT | We consider a wireless Device-to-Device (D2D) network where communication is
restricted to be single-hop, users make arbitrary requests from a finite
library of possible files and user devices cache information in the form of
linear combinations of packets from the files in the library (coded caching).
We consider the combined effect of coding in the caching and delivery phases,
achieving "coded multicast gain", and of spatial reuse due to local short-range
D2D communication. Somewhat counterintuitively, we show that the coded
multicast gain and the spatial reuse gain do not cumulate, in terms of the
throughput scaling laws. In particular, the spatial reuse gain shown in our
previous work on uncoded random caching and the coded multicast gain shown in
this paper yield the same scaling laws behavior, but no further scaling law
gain can be achieved by using both coded caching and D2D spatial reuse.
|
1304.5862 | Multi-Label Classifier Chains for Bird Sound | cs.LG cs.SD stat.ML | Bird sound data collected with unattended microphones for automatic surveys,
or mobile devices for citizen science, typically contain multiple
simultaneously vocalizing birds of different species. However, few works have
considered the multi-label structure in birdsong. We propose to use an ensemble
of classifier chains combined with a histogram-of-segments representation for
multi-label classification of birdsong. The proposed method is compared with
binary relevance and three multi-instance multi-label learning (MIML)
algorithms from prior work (which focus more on structure in the sound, and
less on structure in the label sets). Experiments are conducted on two
real-world birdsong datasets, and show that the proposed method usually
outperforms binary relevance (using the same features and base-classifier), and
is better in some cases and worse in others compared to the MIML algorithms.
|
1304.5863 | Commonsense Reasoning and Large Network Analysis: A Computational Study
of ConceptNet 4 | cs.AI cs.SI | In this report a computational study of ConceptNet 4 is performed using tools
from the field of network analysis. Part I describes the process of extracting
the data from the SQL database that is available online, as well as how the
closure of the input among the assertions in the English language is computed.
This part also performs a validation of the input as well as checks for the
consistency of the entire database. Part II investigates the structural
properties of ConceptNet 4. Different graphs are induced from the knowledge
base by fixing different parameters. The degrees and degree distributions are
examined, as well as the number and sizes of connected components, the
transitivity and clustering coefficient, the cores, cliques, and information
related to shortest paths in the graphs. Part III investigates non-overlapping, as well as
overlapping communities that are found in ConceptNet 4. Finally, Part IV
describes an investigation of rules.
|
1304.5878 | Visual Room-Awareness for Humanoid Robot Self-Localization | cs.RO | Humanoid robots without internal sensors such as a compass tend to lose their
orientation after a fall. Furthermore, re-initialisation is often ambiguous due
to symmetric man-made environments. The room-awareness module proposed here is
inspired by the results of psychological experiments and improves existing
self-localization strategies by mapping and matching the visual background with
colour histograms. The matching algorithm uses a particle-filter to generate
hypotheses of the viewing directions independent of the self-localization
algorithm and generates confidence values for various possible poses. The
robot's behaviour controller uses those confidence values to guide the
self-localization algorithm to converge to the most likely pose and prevents
the algorithm from getting stuck in local minima. Experiments with a symmetric
Standard Platform League RoboCup playing field with a simulated and a real
humanoid NAO robot show the significant improvement of the system.
|
1304.5880 | Dealing with natural language interfaces in a geolocation context | cs.CL | In the geolocation field where high-level programs and low-level devices
coexist, it is often difficult to find a friendly user interface to configure
all the parameters. The challenge addressed in this paper is to propose
intuitive and simple, thus natural language interfaces to interact with
low-level devices. Such interfaces contain natural language processing and
fuzzy representations of words that facilitate the elicitation of
business-level objectives in our context.
|
1304.5892 | A Social Welfare Optimal Sequential Allocation Procedure | cs.AI cs.GT cs.MA | We consider a simple sequential allocation procedure for sharing indivisible
items between agents in which agents take turns to pick items. Supposing
additive utilities and independence between the agents, we show that the
expected utility of each agent is computable in polynomial time. Using this
result, we prove that the expected utilitarian social welfare is maximized when
agents take alternate turns. We also argue that this mechanism remains optimal
when agents behave strategically.
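As an illustrative aside (not code from the paper), the optimality of alternation can be probed with a small Monte-Carlo sketch; the two-agent setup, i.i.d. uniform utilities, greedy truthful picking, and the trial count are all assumptions made here for illustration:

```python
import random

def sequential_allocation(utils, turn_order):
    """Run the sequential allocation: each agent, in turn, picks its
    highest-utility remaining item; return the utilitarian welfare."""
    remaining = set(range(len(utils[0])))
    welfare = 0.0
    for agent in turn_order:
        item = max(remaining, key=lambda i: utils[agent][i])
        remaining.remove(item)
        welfare += utils[agent][item]
    return welfare

def expected_welfare(turn_order, n_agents, n_items, trials=2000, seed=0):
    """Monte-Carlo estimate of the expected utilitarian welfare under
    i.i.d. uniform(0, 1) additive utilities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        utils = [[rng.random() for _ in range(n_items)]
                 for _ in range(n_agents)]
        total += sequential_allocation(utils, turn_order)
    return total / trials

alternate = [0, 1, 0, 1, 0, 1]  # agents take alternate turns
blocked = [0, 0, 0, 1, 1, 1]    # one agent exhausts its turns first
w_alt = expected_welfare(alternate, 2, 6)
w_blk = expected_welfare(blocked, 2, 6)
```

Under these assumptions the alternating order comes out ahead on average, matching the stated result.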
|
1304.5894 | Bayesian crack detection in ultra high resolution multimodal images of
paintings | cs.CV cs.LG | The preservation of our cultural heritage is of paramount importance. Thanks
to recent developments in digital acquisition techniques, powerful image
analysis algorithms are developed which can serve as useful non-invasive tools to
assist in the restoration and preservation of art. In this paper we propose a
semi-supervised crack detection method that can be used for high-dimensional
acquisitions of paintings coming from different modalities. Our dataset
consists of a recently acquired collection of images of the Ghent Altarpiece
(1432), one of Northern Europe's most important art masterpieces. Our goal is
to build a classifier that is able to discern crack pixels from the background
consisting of non-crack pixels, making optimal use of the information that is
provided by each modality. To accomplish this we employ a recently developed
non-parametric Bayesian classifier, that uses tensor factorizations to
characterize any conditional probability. A prior is placed on the parameters
of the factorization such that every possible interaction between predictors is
allowed while still identifying a sparse subset among these predictors. The
proposed Bayesian classifier, which we will refer to as conditional Bayesian
tensor factorization or CBTF, is assessed by visually comparing classification
results with the Random Forest (RF) algorithm.
|
1304.5897 | Towards an Extension of the 2-tuple Linguistic Model to Deal With
Unbalanced Linguistic Term sets | cs.AI | In the domain of Computing with words (CW), fuzzy linguistic approaches are
known to be relevant in many decision-making problems. Indeed, they allow us to
model the human reasoning in replacing words, assessments, preferences,
choices, wishes... by ad hoc variables, such as fuzzy sets or more
sophisticated variables.
This paper focuses on a particular model: Herrera & Martinez' 2-tuple
linguistic model and their approach to deal with unbalanced linguistic term
sets. It is interesting since the computations are accomplished without loss of
information while the results of the decision-making processes always refer to
the initial linguistic term set. They propose a fuzzy partition which
distributes data on the axis by using linguistic hierarchies to manage the
non-uniformity. However, the required input (especially the density around the
terms) taken by their fuzzy partition algorithm may be considered too
demanding in a real-world application, since density is not always easy to
determine. Moreover, in some limit cases (especially when two terms are
semantically very close to each other), the partition does not comply with the
data themselves and strays from reality. Therefore we propose to modify the
required input, in order to offer a simpler and more faithful partition. We
have added an extension to the package jFuzzyLogic and to the corresponding
script language FCL. This extension supports both 2-tuple models: Herrera &
Martinez' and ours. In addition to the partition algorithm, we present two
aggregation algorithms: the arithmetic mean and the addition. We also discuss
these kinds of 2-tuple models.
|
1304.5940 | Low-Complexity Channel Estimation in Large-Scale MIMO using Polynomial
Expansion | cs.IT math.IT | This paper considers pilot-based channel estimation in large-scale
multiple-input multiple-output (MIMO) communication systems, also known as
"massive MIMO". Unlike previous works on this topic, which mainly considered
the impact of inter-cell disturbance due to pilot reuse (so-called pilot
contamination), we are concerned with the computational complexity. The
conventional minimum mean square error (MMSE) and minimum variance unbiased
(MVU) channel estimators rely on inverting covariance matrices, which has cubic
complexity in the product of the numbers of antennas at each side. Since this
is extremely expensive when there are hundreds of antennas, we propose to
approximate the inversion by an L-order matrix polynomial. A set of
low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel
(PEACH) estimators, are introduced. The coefficients of the polynomials are
optimized to yield small mean square error (MSE). We show numerically that
near-optimal performance is achieved with low polynomial orders. In practice,
the order L can be selected to balance between complexity and MSE.
Interestingly, pilot contamination is beneficial to the PEACH estimators in the
sense that smaller L can be used to achieve near-optimal MSEs.
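To make the complexity argument concrete, here is a toy numpy sketch (ours, not the PEACH estimator itself, whose polynomial coefficients are MSE-optimized) of replacing a matrix inverse by a truncated matrix polynomial, a plain Neumann series in this simple variant:

```python
import numpy as np

def polynomial_inverse_apply(A, b, L, alpha):
    """Approximate x = A^{-1} b by the truncated Neumann series
    x_L = alpha * sum_{l=0}^{L} (I - alpha * A)^l b, which converges
    when the spectral radius of (I - alpha * A) is below 1. Each term
    costs one matrix-vector product, so the total cost is O(L * n^2)
    instead of the O(n^3) of a direct inversion."""
    M = np.eye(len(b)) - alpha * A
    term = b.astype(float).copy()
    x = alpha * term
    for _ in range(L):
        term = M @ term
        x += alpha * term
    return x

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8))
A = G @ G.T + 8 * np.eye(8)           # a well-conditioned SPD matrix
b = rng.standard_normal(8)
alpha = 1.0 / np.linalg.norm(A, 2)    # safe step size: 1 / lambda_max
exact = np.linalg.solve(A, b)
errors = [np.linalg.norm(polynomial_inverse_apply(A, b, L, alpha) - exact)
          for L in (1, 5, 20)]        # error shrinks as the order L grows
```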
|
1304.5961 | Backdoors to Abduction | cs.AI cs.CC cs.LO | Abductive reasoning (or Abduction, for short) is among the most fundamental
AI reasoning methods, with a broad range of applications, including fault
diagnosis, belief revision, and automated planning. Unfortunately, Abduction is
of high computational complexity; even propositional Abduction is
\Sigma_2^P-complete and thus harder than NP and coNP. This complexity barrier
rules out the existence of a polynomial transformation to propositional
satisfiability (SAT). In this work we use structural properties of the
Abduction instance to break this complexity barrier. We utilize the problem
structure in terms of small backdoor sets. We present fixed-parameter tractable
transformations from Abduction to SAT, which make the power of today's SAT
solvers available to Abduction.
|
1304.5966 | SW# - GPU enabled exact alignments on genome scale | cs.DC cs.CE q-bio.GN | Sequence alignment is one of the oldest and the most famous problems in
bioinformatics. Even after 45 years, for one reason or another, this problem is
still relevant; current solutions are trade-offs between execution time, memory
consumption and accuracy. We propose SW#, a new CUDA GPU enabled and memory
efficient implementation of dynamic programming algorithms for local alignment.
In this implementation indels are treated using the affine gap model. Although
there are other GPU implementations of the Smith-Waterman algorithm, SW# is the
only publicly available implementation that can produce sequence alignments on
genome-wide scale. For long sequences, our implementation is at least a few
hundred times faster than a CPU version of the same algorithm.
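For reference, the recurrence that such implementations parallelize is Gotoh's local alignment with affine gaps; the following plain-Python sketch (with arbitrarily chosen example scores, not SW#'s defaults) computes only the optimal score:

```python
def smith_waterman_affine(a, b, match=2, mismatch=-1,
                          gap_open=-2, gap_extend=-1):
    """Local alignment score with affine gaps (Gotoh's recurrence):
    H tracks alignments ending in a match/mismatch, while E and F track
    alignments ending in a gap in a or in b, respectively."""
    NEG = float("-inf")
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    E = [[NEG] * (m + 1) for _ in range(n + 1)]
    F = [[NEG] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] + gap_extend,
                          H[i][j - 1] + gap_open + gap_extend)
            F[i][j] = max(F[i - 1][j] + gap_extend,
                          H[i - 1][j] + gap_open + gap_extend)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best
```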
|
1304.5970 | Three Generalizations of the FOCUS Constraint | cs.AI | The FOCUS constraint expresses the notion that solutions are concentrated. In
practice, this constraint suffers from the rigidity of its semantics. To tackle
this issue, we propose three generalizations of the FOCUS constraint. We
provide a complete filtering algorithm for each of them, and we also discuss
decompositions.
|
1304.5974 | Dynamic stochastic blockmodels: Statistical models for time-evolving
networks | cs.SI cs.LG physics.soc-ph stat.ME | Significant efforts have gone into the development of statistical models for
analyzing data in the form of networks, such as social networks. Most existing
work has focused on modeling static networks, which represent either a single
time snapshot or an aggregate view over time. There has been recent interest in
statistical modeling of dynamic networks, which are observed at multiple points
in time and offer a richer representation of many complex phenomena. In this
paper, we propose a state-space model for dynamic networks that extends the
well-known stochastic blockmodel for static networks to the dynamic setting. We
then propose a procedure to fit the model using a modification of the extended
Kalman filter augmented with a local search. We apply the procedure to analyze
a dynamic social network of email communication.
|
1304.6000 | Mixture Gaussian Signal Estimation with L_infty Error Metric | cs.IT math.IT | We consider the problem of estimating an input signal from noisy measurements
in both parallel scalar Gaussian channels and linear mixing systems. The
performance of the estimation process is quantified by the $\ell_\infty$ norm
error metric. We first study the minimum mean $\ell_\infty$ error estimator in
parallel scalar Gaussian channels, and verify that, when the input is
independent and identically distributed (i.i.d.) mixture Gaussian, the Wiener
filter is asymptotically optimal with probability 1. For linear mixing systems
with i.i.d. sparse Gaussian or mixture Gaussian inputs, under the assumption
that the relaxed belief propagation (BP) algorithm matches Tanaka's fixed point
equation, applying the Wiener filter to the output of relaxed BP is also
asymptotically optimal with probability 1. However, in order to solve the
practical problem where the signal dimension is finite, we apply an estimation
algorithm that has been proposed in our previous work, and illustrate that an
$\ell_\infty$ error minimizer can be approximated by an $\ell_p$ error
minimizer provided the value of $p$ is properly chosen.
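As a side note, the scalar Wiener filter invoked above is just a linear shrinkage of the observation; a quick sanity check for a plain Gaussian input (the variances and sample count here are arbitrary choices of ours) shows it beating the raw observation in mean square error:

```python
import random

def wiener(y, var_x, var_n):
    """Scalar Wiener (LMMSE) filter for y = x + n with zero-mean,
    independent x and n: x_hat = var_x / (var_x + var_n) * y."""
    return var_x / (var_x + var_n) * y

rng = random.Random(1)
var_x, var_n = 1.0, 0.5
xs = [rng.gauss(0.0, var_x ** 0.5) for _ in range(20000)]
ys = [x + rng.gauss(0.0, var_n ** 0.5) for x in xs]
# raw observation: MSE ~ var_n; Wiener: MSE ~ var_x*var_n/(var_x+var_n)
mse_raw = sum((y - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_wiener = sum((wiener(y, var_x, var_n) - x) ** 2
                 for x, y in zip(xs, ys)) / len(xs)
```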
|
1304.6023 | Spaces, Trees and Colors: The Algorithmic Landscape of Document
Retrieval on Sequences | cs.IR cs.DS | Document retrieval is one of the best established information retrieval
activities since the sixties, pervading all search engines. Its aim is to
obtain, from a collection of text documents, those most relevant to a pattern
query. Current technology is mostly oriented to "natural language" text
collections, where inverted indices are the preferred solution. As successful
as this paradigm has been, it fails to properly handle some East Asian
languages and other scenarios where the "natural language" assumptions do not
hold. In this survey we cover the recent research in extending the document
retrieval techniques to a broader class of sequence collections, which has
applications in bioinformatics, data and Web mining, chemoinformatics, software
engineering, multimedia information retrieval, and many others. We focus on the
algorithmic aspects of the techniques, uncovering a rich world of relations
between document retrieval challenges and fundamental problems on trees,
strings, range queries, discrete geometry, and others.
|
1304.6026 | Displacement Convexity, A Useful Framework for the Study of Spatially
Coupled Codes | cs.IT math.IT | Spatial coupling has recently emerged as a powerful paradigm to construct
graphical models that work well under low-complexity message-passing
algorithms. Although much progress has been made on the analysis of spatially
coupled models under message passing, there is still room for improvement, both
in terms of simplifying existing proofs as well as in terms of proving
additional properties.
We introduce one further tool for the analysis, namely the concept of
displacement convexity. This concept plays a crucial role in the theory of
optimal transport and, quite remarkably, it is also well suited for the
analysis of spatially coupled systems. In cases where the concept applies,
displacement convexity allows functionals of distributions which are not convex
in the usual sense to be represented in an alternative form, so that they are
convex with respect to the new parametrization. As a proof of concept we
consider spatially coupled $(l,r)$-regular Gallager ensembles when transmission
takes place over the binary erasure channel. We show that the potential
function of the coupled system is displacement convex. Due to possible
translational degrees of freedom, convexity by itself falls short of
establishing the uniqueness of the minimizing profile. For the spatially
coupled $(l,r)$-regular system strict displacement convexity holds when a
global translation degree of freedom is removed. Implications for the
uniqueness of the minimizer and for solutions of the density evolution equation
are discussed.
|
1304.6027 | Near-Optimal Stochastic Threshold Group Testing | cs.IT math.IT | We formulate and analyze a stochastic threshold group testing problem
motivated by biological applications. Here a set of $n$ items contains a subset
of $d \ll n$ defective items. Subsets (pools) of the $n$ items are tested --
the test outcomes are negative, positive, or stochastic (negative or positive
with certain probabilities that might depend on the number of defectives being
tested in the pool), depending on whether the number of defective items in the
pool being tested are fewer than the {\it lower threshold} $l$, greater than
the {\it upper threshold} $u$, or in between. The goal of a {\it stochastic
threshold group testing} scheme is to identify the set of $d$ defective items
via a "small" number of such tests. In the regime that $l = o(d)$ we present
schemes that are computationally feasible to design and implement, and require
a near-optimal number of tests (significantly improving on existing schemes). Our
schemes are robust to a variety of models for probabilistic threshold group
testing.
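The test model is easy to state in code; the sketch below is a hypothetical instantiation (the in-between response law is our own toy choice, since the formulation only requires some probabilistic outcome between the thresholds):

```python
import random

def threshold_test(pool, defectives, l, u, rng):
    """Outcome of one stochastic threshold group test on a pool of items:
    0 (negative) if it contains fewer than l defectives, 1 (positive) if
    more than u, and a random outcome in between. The in-between law used
    here, biased by the defective count k, is an illustrative assumption."""
    k = len(pool & defectives)
    if k < l:
        return 0
    if k > u:
        return 1
    return 1 if rng.random() < (k - l + 1) / (u - l + 2) else 0
```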
|
1304.6033 | Robust Polyhedral Regularization | cs.IT math.IT | In this paper, we establish robustness to noise perturbations of polyhedral
regularization of linear inverse problems. We provide a sufficient condition
that ensures that the polyhedral face associated to the true vector is equal to
that of the recovered one. This criterion also implies that the $\ell^2$
recovery error is proportional to the noise level for a range of parameters. Our
criterion is expressed in terms of the hyperplanes supporting the faces of the
unit polyhedral ball of the regularization. This generalizes to an arbitrary
polyhedral regularization results that are known to hold for sparse synthesis
and analysis $\ell^1$ regularization which are encompassed in this framework.
As a byproduct, we obtain recovery guarantees for $\ell^\infty$ and
$\ell^1-\ell^\infty$ regularization.
|
1304.6078 | Automating the Dispute Resolution in Task Dependency Network | cs.AI | When perturbations or unexpected events occur, agents need protocols for
repairing or reforming the supply chain. An unfortunate contingency could
inflate the cost of performance, while breaching the current contract may be
more efficient. In our framework, the principles of contract law are applied to
set penalties: expectation damages, opportunity cost, reliance damages, and
party-designed remedies, and they are introduced in the task dependency model.
|
1304.6099 | Soft computing-based calibration of microplane M4 model parameters:
Methodology and validation | cs.CE | Constitutive models for concrete based on the microplane concept have
repeatedly proven their ability to accurately reproduce its non-linear response on
material as well as structural scales. The major obstacle to a routine
application of this class of models is, however, the calibration of
microplane-related constants from macroscopic data. The goal of this paper is
two-fold: (i) to introduce the basic ingredients of a robust inverse procedure
for the determination of dominant parameters of the M4 model proposed by Bazant
and co-workers based on cascade Artificial Neural Networks trained by
Evolutionary Algorithm and (ii) to validate the proposed methodology against a
representative set of experimental data. The obtained results demonstrate that
the soft computing-based method is capable of delivering the searched response
with an accuracy comparable to the values obtained by expert users.
|
1304.6108 | The varifold representation of non-oriented shapes for diffeomorphic
registration | cs.CG cs.CV math.DG | In this paper, we address the problem of orientation that naturally arises
when representing shapes like curves or surfaces as currents. In the field of
computational anatomy, the framework of currents has indeed proved very
efficient to model a wide variety of shapes. However, in such approaches,
orientation of shapes is a fundamental issue that can lead to several drawbacks
in treating certain kinds of datasets. More specifically, problems occur with
structures like acute spikes because of cancelling effects of currents, or with
data that consist of many disconnected pieces, like fiber bundles, for which
currents require a consistent orientation of all pieces. As a promising
alternative to currents, varifolds, introduced in the context of geometric
measure theory by F. Almgren, allow the representation of any non-oriented
manifold (more generally any non-oriented rectifiable set). In particular, we
explain how varifolds can encode numerically non-oriented objects both from the
discrete and continuous point of view. We show various ways to build a Hilbert
space structure on the set of varifolds based on the theory of reproducing
kernels. We show that, unlike the currents' setting, these metrics are
consistent with shape volume (theorem 4.1) and we derive a formula for the
variation of metric with respect to the shape (theorem 4.2). Finally, we
propose a generalization to non-oriented shapes of registration algorithms in
the context of Large Deformations Metric Mapping (LDDMM), which we detail with
a few examples in the last part of the paper.
|
1304.6123 | Two-Unicast Two-Hop Interference Network: Finite-Field Model | cs.IT math.IT | In this paper we present a novel framework to convert the $K$-user multiple
access channel (MAC) over $\FF_{p^m}$ into the $K$-user MAC over ground field
$\FF_{p}$ with $m$ multiple inputs/outputs (MIMO). This framework makes it
possible to develop coding schemes for MIMO channel as done in symbol extension
for time-varying channel. Using aligned network diagonalization based on this
framework, we show that the sum-rate of $(2m-1)\log{p}$ is achievable for a
$2\times 2\times 2$ interference channel over $\FF_{p^m}$. We also provide some
relation between field extension and symbol extension.
|
1304.6133 | On Maximal Correlation, Hypercontractivity, and the Data Processing
Inequality studied by Erkip and Cover | cs.IT math.IT | In this paper we provide a new geometric characterization of the
Hirschfeld-Gebelein-R\'{e}nyi maximal correlation of a pair of random variables $(X,Y)$,
as well as of the chordal slope of the nontrivial boundary of the
hypercontractivity ribbon of $(X,Y)$ at infinity. The new characterizations
lead to simple proofs for some of the known facts about these quantities. We
also provide a counterexample to a data processing inequality claimed by Erkip
and Cover, and find the correct tight constant for this kind of inequality.
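For finite alphabets the maximal correlation has a standard spectral characterization, namely the second-largest singular value of the normalized joint distribution matrix, which makes such quantities easy to compute; the binary-symmetric example below is our own illustration, not taken from the paper:

```python
import numpy as np

def maximal_correlation(P):
    """Hirschfeld-Gebelein-Renyi maximal correlation of a discrete pair
    with joint pmf matrix P, computed as the second-largest singular
    value of Q[x, y] = P(x, y) / sqrt(P(x) P(y)); the largest singular
    value of Q is always 1."""
    px = P.sum(axis=1)
    py = P.sum(axis=0)
    Q = P / np.sqrt(np.outer(px, py))
    return np.linalg.svd(Q, compute_uv=False)[1]

# doubly symmetric binary source: Y is X flipped with probability 0.1,
# so the maximal correlation equals |1 - 2 * 0.1| = 0.8
P = np.array([[0.45, 0.05], [0.05, 0.45]])
rho = maximal_correlation(P)
```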
|
1304.6146 | Manipulation in Clutter with Whole-Arm Tactile Sensing | cs.RO | We begin this paper by presenting our approach to robot manipulation, which
emphasizes the benefits of making contact with the world across the entire
manipulator. We assume that low contact forces are benign, and focus on the
development of robots that can control their contact forces during
goal-directed motion. Inspired by biology, we assume that the robot has
low-stiffness actuation at its joints, and tactile sensing across the entire
surface of its manipulator. We then describe a novel controller that exploits
these assumptions. The controller only requires haptic sensing and does not
need an explicit model of the environment prior to contact. It also handles
multiple contacts across the surface of the manipulator. The controller uses
model predictive control (MPC) with a time horizon of length one, and a linear
quasi-static mechanical model that it constructs at each time step. We show
that this controller enables both real and simulated robots to reach goal
locations in high clutter with low contact forces. Our experiments include
tests using a real robot with a novel tactile sensor array on its forearm
reaching into simulated foliage and a cinder block. In our experiments, robots
made contact across their entire arms while pushing aside movable objects,
deforming compliant objects, and perceiving the world.
|
1304.6152 | Iterative Detection and Decoding for MIMO Systems with Knowledge-Aided
Message Passing Algorithms | cs.IT math.IT | In this paper, we consider the problem of iterative detection and decoding
(IDD) for multi-antenna systems using low-density parity-check (LDPC) codes.
The proposed IDD system consists of a soft-input soft-output parallel
interference (PIC) cancellation scheme with linear minimum mean-square error
(MMSE) receive filters and two novel belief propagation (BP) decoding
algorithms. The proposed BP algorithms exploit the knowledge of short cycles in
the graph structure and the reweighting factors derived from the hypergraph's
expansion. Simulation results show that when used to perform IDD for
multi-antenna systems both proposed BP decoding algorithms can consistently
outperform existing BP techniques with a small number of decoding iterations.
|
1304.6154 | Adaptive Iterative Decision Feedback Detection Algorithms for Multi-User
MIMO Systems | cs.IT math.IT | An adaptive iterative decision multi-feedback detection algorithm with
constellation constraints is proposed for multiuser multi-antenna systems. An
enhanced detection and interference cancellation is performed by introducing
multiple constellation points as decision candidates. A complexity reduction
strategy is developed to avoid redundant processing with reliable decisions
along with an adaptive recursive least squares algorithm for time-varying
channels. An iterative detection and decoding scheme is also considered with
the proposed detection algorithm. Simulations show that the proposed technique
has a complexity as low as the conventional decision feedback detector while it
obtains a performance close to the maximum likelihood detector.
|
1304.6157 | Linear Precoding for Broadcast Channels with Confidential Messages under
Transmit-Side Channel Correlation | cs.IT math.IT | In this paper, we analyze the performance of regularized channel inversion
(RCI) precoding in multiple-input single-output (MISO) broadcast channels with
confidential messages under transmit-side channel correlation. We derive a
deterministic equivalent for the achievable per-user secrecy rate which is
almost surely exact as the number of transmit antennas and the number of users
grow to infinity in a fixed ratio, and we determine the optimal regularization
parameter that maximizes the secrecy rate. Furthermore, we obtain deterministic
equivalents for the secrecy rates achievable by: (i) zero forcing precoding and
(ii) single user beamforming. The accuracy of our analysis is validated by
simulations of finite-size systems.
|
1304.6159 | Secrecy Sum-Rates with Regularized Channel Inversion Precoding under
Imperfect CSI at the Transmitter | cs.IT math.IT | In this paper, we study the performance of regularized channel inversion
precoding in MISO broadcast channels with confidential messages under imperfect
channel state information at the transmitter (CSIT). We obtain an approximation
for the achievable secrecy sum-rate which is almost surely exact as the number
of transmit antennas and the number of users grow to infinity in a fixed ratio.
Simulations prove this analysis accurate even for finite-size systems. For FDD
systems, we determine how the CSIT error must scale with the SNR, and we derive
the number of feedback bits required to ensure a constant high-SNR rate gap to
the case with perfect CSIT. For TDD systems, we study the optimum amount of
channel training that maximizes the high-SNR secrecy sum-rate.
|
1304.6161 | Separation Properties and Related Bounds of Collusion-secure
Fingerprinting Codes | cs.CR cs.IT math.IT | In this paper we investigate the separation properties and related bounds of
some codes. We obtain a new existence result for $(w_1,
w_2)$-separating codes and discuss the "optimality" of the upper bounds. We
then study some interesting relationships between separation and the
existence of non-trivial subspace subcodes of Reed-Solomon codes.
|
1304.6172 | Outage Probability in Arbitrarily-Shaped Finite Wireless Networks | cs.IT math.IT | This paper analyzes the outage performance in finite wireless networks.
Unlike most prior works, which either assumed a specific network shape or
considered a special location of the reference receiver, we propose two general
frameworks for analytically computing the outage probability at any arbitrary
location of an arbitrarily-shaped finite wireless network: (i) a moment
generating function-based framework which is based on the numerical inversion
of the Laplace transform of a cumulative distribution and (ii) a reference link
power gain-based framework which exploits the distribution of the fading power
gain between the reference transmitter and receiver. The outage probability is
spatially averaged over both the fading distribution and the possible locations
of the interferers. The boundary effects are accurately accounted for using the
probability distribution function of the distance of a random node from the
reference receiver. For the case of the node locations modeled by a Binomial
point process and Nakagami-$m$ fading channel, we demonstrate the use of the
proposed frameworks to evaluate the outage probability at any location inside
either a disk or polygon region. The analysis illustrates the location
dependent performance in finite wireless networks and highlights the importance
of accurately modeling the boundary effects.
|
1304.6174 | How Hard Is It to Control an Election by Breaking Ties? | cs.AI cs.DS cs.GT | We study the computational complexity of controlling the result of an
election by breaking ties strategically. This problem is equivalent to the
problem of deciding the winner of an election under parallel universes
tie-breaking. When the chair of the election is only asked to break ties to
choose between one of the co-winners, the problem is trivially easy. However,
in multi-round elections, we prove that it can be NP-hard for the chair to
compute how to break ties to ensure a given result. Additionally, we show that
the form of the tie-breaking function can increase the opportunities for
control. Indeed, we prove that it can be NP-hard to control an election by
breaking ties even with a two-stage voting rule.
|
1304.6181 | Evaluating Web Content Quality via Multi-scale Features | cs.IR | Web content quality measurement is crucial to various web content processing
applications. This paper explores multi-scale features which may affect the
quality of a host, and develops automatic statistical methods to evaluate the
Web content quality. The extracted properties include statistical content
features, page and host level link features and TFIDF features. The experiments
on ECML/PKDD 2010 Discovery Challenge data set show that the algorithm is
effective and feasible for the quality tasks of multiple languages, and the
multi-scale features have different identification abilities and provide a good
complement to each other for most tasks.
|
1304.6192 | A Bag of Visual Words Approach for Symbols-Based Coarse-Grained Ancient
Coin Classification | cs.CV | The field of Numismatics provides the names and descriptions of the symbols
minted on the ancient coins. Classification of the ancient coins aims at
assigning a given coin to its issuer. Various issuers used various symbols for
their coins. We propose to use these symbols for a framework that will coarsely
classify the ancient coins. Bag of visual words (BoVWs) is a well established
visual recognition technique applied to various problems in computer vision
like object and scene recognition. Improvements have been made by incorporating
the spatial information to this technique. We apply the BoVWs technique to our
problem and use three symbols for coarse-grained classification. We use
rectangular tiling, log-polar tiling and circular tiling to incorporate spatial
information to BoVWs. Experimental results show that the circular tiling proves
superior to the rest of the methods for our problem.
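To illustrate the circular-tiling idea (the helper names, ring boundaries and word ids below are our own, not from the paper): keypoints are binned by their distance from the coin center, and one visual-word histogram is kept per ring and concatenated:

```python
import math

def circular_tile(x, y, cx, cy, radii):
    """Assign a keypoint to a ring of the circular tiling around (cx, cy);
    radii are the increasing ring boundaries, the last ring is open-ended."""
    r = math.hypot(x - cx, y - cy)
    for ring, bound in enumerate(radii):
        if r <= bound:
            return ring
    return len(radii)

def tiled_histogram(points, center, radii, n_words, words):
    """Concatenated per-ring visual-word histograms: one BoVW histogram
    per spatial ring, preserving coarse spatial layout."""
    hist = [0] * ((len(radii) + 1) * n_words)
    for (x, y), w in zip(points, words):
        ring = circular_tile(x, y, center[0], center[1], radii)
        hist[ring * n_words + w] += 1
    return hist
```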
|
1304.6213 | Counting people from above: Airborne video based crowd analysis | cs.CV | Crowd monitoring and analysis in mass events are highly important
technologies to support the security of attending persons. Proposed methods
based on terrestrial or airborne image/video data often fail to achieve
sufficiently accurate results to guarantee a robust service. We present a novel
framework for estimating human count, density and motion from video data based
on custom tailored object detection techniques, a regression based density
estimate and a total variation based optical flow extraction. From the gathered
features we present a detailed accuracy analysis versus ground truth
measurements. In addition, all information is projected into world coordinates
to enable a direct integration with existing geo-information systems. The
resulting human counts demonstrate a mean error of 4% to 9% and thus represent
an efficient measure that can be robustly applied in security-critical
services.
|
1304.6237 | Self-Localization of Asynchronous Wireless Nodes With Parameter
Uncertainties | math.ST cs.IT cs.NI math.IT stat.TH | We investigate a wireless network localization scenario in which the need for
synchronized nodes is avoided. It consists of a set of fixed anchor nodes
transmitting according to a given sequence and a self-localizing receiver node.
The setup can accommodate additional nodes with unknown positions participating
in the sequence. We propose a localization method which is robust with respect
to uncertainty of the anchor positions and other system parameters. Further, we
investigate the Cram\'er-Rao bound for the considered problem and show through
numerical simulations that the proposed method attains the bound.
|
1304.6241 | A Security Protocol for the Identification and Data Encrypt Key
Management of Secure Mobile Devices | cs.CR cs.IT math.IT | In this paper, we propose an identification and data encryption key management
protocol that can be used in some security systems based on such secure devices
as secure USB memories or RFIDs, which are widely used for identifying persons
or other objects recently. In general, the default functions of the security
system using a mobile device are the authentication for the owner of the device
and secure storage of data stored on the device. We proposed a security model
that consists of the server and mobile devices in order to realize these
security features. In this model we defined the secure communication protocol
for the authentication and management of data encryption keys using a private
key encryption algorithm with the public key between the server and mobile
devices. In addition, we was performed the analysis for the attack to the
communication protocol between the mobile device and server. Using the
communication protocol, the system will attempt to authenticate the mobile
device. The data decrypt key is transmitted only if the authentication process
is successful. The data in the mobile device can be decrypted using the key.
Our analysis proved that this Protocol ensures anonymity, prevents replay
attacks and realizes the interactive identification between the security
devices and the authentication server.
|
1304.6245 | A Two-Phase Maximum-Likelihood Sequence Estimation for Receivers with
Partial CSI | cs.IT math.IT | The optimality of the conventional maximum likelihood sequence estimation
(MLSE), also known as the Viterbi Algorithm (VA), relies on the assumption that
the receiver has perfect knowledge of the channel coefficients or channel state
information (CSI). However, in practical situations that fail the assumption,
the MLSE method becomes suboptimal and then exhaustive checking is the only way
to obtain the ML sequence. Against this background, directly considering the
ML criterion under partial CSI, we propose a two-phase low-complexity MLSE
algorithm, in which the first phase performs the conventional MLSE algorithm in
order to retain necessary information for the backward VA performed in the
second phase. Simulations show that when the training sequence is moderately
long in comparison with the entire data block such as 1/3 of the block, the
proposed two-phase MLSE can approach the performance of the optimal exhaustive
checking. In a normal case, where the training sequence consumes only 14% of
the bandwidth, our proposed method still clearly outperforms the conventional
MLSE.
|
1304.6257 | An Evolutionary Algorithm Approach to Link Prediction in Dynamic Social
Networks | physics.soc-ph cs.SI | Many real world, complex phenomena have underlying structures of evolving
networks where nodes and links are added and removed over time. A central
scientific challenge is the description and explanation of network dynamics,
with a key test being the prediction of short and long term changes. For the
problem of short-term link prediction, existing methods attempt to determine
neighborhood metrics that correlate with the appearance of a link in the next
observation period. Recent work has suggested that the incorporation of
topological features and node attributes can improve link prediction. We
provide an approach to predicting future links by applying the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) to optimize weights which are
used in a linear combination of sixteen neighborhood and node similarity
indices. We examine a large dynamic social network with over $10^6$ nodes
(Twitter reciprocal reply networks), both as a test of our general method and
as a problem of scientific interest in itself. Our method exhibits fast
convergence and high levels of precision for the top twenty predicted links.
Based on our findings, we suggest possible factors which may be driving the
evolution of Twitter reciprocal reply networks.
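To make the scoring scheme concrete, here is a minimal sketch (not the authors' code) of a linear combination of neighborhood similarity indices for candidate links. The paper combines sixteen indices with CMA-ES-optimized weights; this illustration uses three common indices and hand-picked weights on a toy graph:

```python
import math
from itertools import combinations

def neighborhood_indices(adj, u, v):
    """Return (common neighbors, Jaccard, Adamic-Adar) for a candidate pair."""
    nu, nv = adj[u], adj[v]
    common = nu & nv
    union = nu | nv
    jaccard = len(common) / len(union) if union else 0.0
    adamic_adar = sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)
    return (len(common), jaccard, adamic_adar)

def score_links(adj, weights):
    """Score all non-adjacent pairs; a higher score suggests a likelier future link."""
    scores = {}
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            feats = neighborhood_indices(adj, u, v)
            scores[(u, v)] = sum(w * f for w, f in zip(weights, feats))
    return scores

# Toy reciprocal-reply graph as adjacency sets (illustrative, not Twitter data).
adj = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"b", "c", "e"}, "e": {"d"},
}
scores = score_links(adj, weights=(1.0, 0.5, 0.5))
best = max(scores, key=scores.get)
```

In the paper's setting, an evolution strategy such as CMA-ES would search over the weight vector to maximize precision on the top predicted links.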
|
1304.6281 | Subspace Recovery from Structured Union of Subspaces | cs.IT math.IT | Lower dimensional signal representation schemes frequently assume that the
signal of interest lies in a single vector space. In the context of the
recently developed theory of compressive sensing (CS), it is often assumed that
the signal of interest is sparse in an orthonormal basis. However, in many
practical applications, this requirement may be too restrictive. A
generalization of the standard sparsity assumption is that the signal lies in a
union of subspaces. Recovery of such signals from a small number of samples has
been studied recently in several works. Here, we consider the problem of
subspace recovery in which our goal is to identify the subspace (from the
union) in which the signal lies using a small number of samples, in the
presence of noise. More specifically, we derive performance bounds and
conditions under which reliable subspace recovery is guaranteed using maximum
likelihood (ML) estimation. We begin by treating general unions and then obtain
the results for the special case in which the subspaces have structure leading
to block sparsity. In our analysis, we treat both general sampling operators
and random sampling matrices. With general unions, we show that under certain
conditions, the number of measurements required for reliable subspace recovery
in the presence of noise via ML is less than that implied using the restricted
isometry property which guarantees signal recovery. In the special case of
block sparse signals, we quantify the gain achievable over standard sparsity in
subspace recovery. Our results also strengthen existing results on sparse
support recovery in the presence of noise under the standard sparsity model.
|
1304.6291 | Learning Visual Symbols for Parsing Human Poses in Images | cs.CV | Parsing human poses in images is fundamental in extracting critical visual
information for artificially intelligent agents. Our goal is to learn
self-contained body part representations from images, which we call visual
symbols, and their symbol-wise geometric contexts in this parsing process. Each
symbol is individually learned by categorizing visual features leveraged by
geometric information. In the categorization, we use Latent Support Vector
Machine followed by an efficient cross validation procedure to learn visual
symbols. Then, these symbols naturally define geometric contexts of body parts
in a fine granularity. When the structure of the compositional parts is a tree,
we derive an efficient approach to estimating human poses in images.
Experiments on two large datasets suggest that our approach outperforms
state-of-the-art methods.
|
1304.6360 | Assessment of Path Reservation in Distributed Real-Time Vehicle Guidance | cs.MA | In this paper we assess the impact of path reservation as an additional
feature in our distributed real-time vehicle guidance protocol BeeJamA. Through
our microscopic simulations we show that na\"{\i}ve reservation of links
without any further measures improves performance only in the case of
complete market penetration; otherwise, it even degrades the performance of
our approach based on real-time link loads. Moreover, we modify the
reservation process to incorporate current travel times and show that this
improves the results in our simulations once market penetration reaches at
least 40%.
|
1304.6379 | Semi-Optimal Edge Detector based on Simple Standard Deviation with
Adjusted Thresholding | cs.CV | This paper proposes a novel method which combines both median filter and
simple standard deviation to accomplish an excellent edge detector for image
processing. First of all, a denoising process must be applied on the grey scale
image using median filter to identify pixels which are likely to be
contaminated by noise. The benefit of this step is to smooth the image and get
rid of the noisy pixels. After that, the simple statistical standard
deviation is computed for each 2X2 window. If the standard deviation inside
the 2X2 window exceeds a predefined threshold, then the upper-left pixel of
the 2X2 window represents an edge. The visual
differences between the proposed edge detector and the standard known edge
detectors have been shown to support the contribution in this paper.
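A minimal NumPy sketch of the described pipeline. The 2X2 standard-deviation test follows the abstract; the 3x3 median window, the threshold value, and the toy image are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def median_denoise_3x3(img):
    """3x3 median filter via edge-padded shifted stacks (pure NumPy)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def std_edge_map(img, threshold=10.0):
    """Mark the upper-left pixel of every 2x2 window whose std exceeds threshold."""
    w = np.stack([img[:-1, :-1], img[:-1, 1:], img[1:, :-1], img[1:, 1:]])
    edges = np.zeros(img.shape, dtype=bool)
    edges[:-1, :-1] = w.std(axis=0) > threshold
    return edges

# Toy image: dark left half, bright right half -> vertical edge at the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
edges = std_edge_map(median_denoise_3x3(img), threshold=10.0)
```

On this toy image, the detector marks exactly the column of upper-left pixels along the intensity step.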
|
1304.6383 | The Stochastic Gradient Descent for the Primal L1-SVM Optimization
Revisited | cs.LG cs.AI | We reconsider the stochastic (sub)gradient approach to the unconstrained
primal L1-SVM optimization. We observe that if the learning rate is inversely
proportional to the number of steps, i.e., the number of times any training
pattern is presented to the algorithm, the update rule may be transformed into
the one of the classical perceptron with margin in which the margin threshold
increases linearly with the number of steps. Moreover, if we cycle repeatedly
through the (possibly randomly permuted) training set, then the dual
variables, defined naturally via the expansion of the weight vector as a
linear combination of the patterns on which margin errors were made, are
shown to automatically obey, at the end of each complete cycle, the box
constraints arising in the dual optimization.
This renders the dual Lagrangian a running lower bound on the primal objective
tending to it at the optimum and makes available an upper bound on the relative
accuracy achieved which provides a meaningful stopping criterion. In addition,
we propose a mechanism of presenting the same pattern repeatedly to the
algorithm which maintains the above properties. Finally, we give experimental
evidence that algorithms constructed along these lines exhibit a considerably
improved performance.
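The update rule described above can be sketched as follows (a Pegasos-style illustration with hinge loss and a learning rate inversely proportional to the step count; the regularization constant, epoch count, and toy data are assumptions, not the paper's settings):

```python
import random

def sgd_l1svm(data, lam=0.01, epochs=50, seed=0):
    """Stochastic subgradient descent on the primal hinge-loss SVM objective."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        rng.shuffle(data)                    # cycle through a permuted training set
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)            # learning rate proportional to 1/t
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [wi * (1 - eta * lam) for wi in w]        # regularization shrink
            if margin < 1:                   # margin error: perceptron-like update
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data (label follows the sign of the first coordinate).
data = [([1.0, 0.2], 1), ([0.8, -0.1], 1), ([-1.0, 0.1], -1), ([-0.9, -0.3], -1)]
w = sgd_l1svm(list(data))
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1 for x, _ in data]
```

With this 1/t schedule, updates occur only on margin errors, which is the perceptron-with-margin correspondence the abstract describes.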
|
1304.6420 | Preventing Unraveling in Social Networks Gets Harder | cs.SI cs.DM cs.DS | The behavior of users in social networks is often observed to be affected by
the actions of their friends. Bhawalkar et al. \cite{bhawalkar-icalp}
introduced a formal mathematical model for user engagement in social networks
where each individual derives a benefit proportional to the number of its
friends which are engaged. Given a threshold degree $k$ the equilibrium for
this model is a maximal subgraph whose minimum degree is $\geq k$. However the
dropping out of individuals with degrees less than $k$ might lead to a
cascading effect of iterated withdrawals such that the size of equilibrium
subgraph becomes very small. To overcome this, some special vertices called
"anchors" are introduced: these vertices need not have large degree. Bhawalkar
et al. \cite{bhawalkar-icalp} considered the \textsc{Anchored $k$-Core}
problem: Given a graph $G$ and integers $b, k$ and $p$, do there exist sets
of vertices $B\subseteq H\subseteq V(G)$ such that $|B|\leq b$, $|H|\geq p$,
and every vertex $v\in H\setminus B$ has degree at least $k$ in the induced
subgraph $G[H]$? They showed that the problem is NP-hard for $k\geq 2$ and gave
some inapproximability and fixed-parameter intractability results. In this
paper we give improved hardness results for this problem. In particular we show
that the \textsc{Anchored $k$-Core} problem is W[1]-hard parameterized by $p$,
even for $k=3$. This improves the result of Bhawalkar et al.
\cite{bhawalkar-icalp} (who show W[2]-hardness parameterized by $b$) as our
parameter is always bigger since $p\geq b$. Then we answer a question of
Bhawalkar et al. \cite{bhawalkar-icalp} by showing that the \textsc{Anchored
$k$-Core} problem remains NP-hard on planar graphs for all $k\geq 3$, even if
the maximum degree of the graph is $k+2$. Finally we show that the problem is
FPT on planar graphs parameterized by $b$ for all $k\geq 7$.
|
1304.6442 | Verification of Inconsistency-Aware Knowledge and Action Bases (Extended
Version) | cs.AI | Description Logic Knowledge and Action Bases (KABs) have been recently
introduced as a mechanism that provides a semantically rich representation of
the information on the domain of interest in terms of a DL KB and a set of
actions to change such information over time, possibly introducing new objects.
In this setting, decidability of verification of sophisticated temporal
properties over KABs, expressed in a variant of first-order mu-calculus, has
been shown. However, the established framework treats inconsistency in a
simplistic way, by rejecting inconsistent states produced through action
execution. We address this problem by showing how inconsistency handling based
on the notion of repairs can be integrated into KABs, resorting to
inconsistency-tolerant semantics. In this setting, we establish decidability
and complexity of verification.
|
1304.6459 | Measuring Transport Difficulty of Data Dissemination in Large-Scale
Online Social Networks: An Interest-Driven Case | cs.SI cs.NI | In this paper, we aim to model the formation of data dissemination in online
social networks (OSNs), and measure the transport difficulty of generated data
traffic. We focus on a usual type of interest-driven social sessions in OSNs,
called \emph{Social-InterestCast}, under which a user will autonomously
determine whether to view the content from his followees depending on his
interest. It is challenging to figure out the formation mechanism of such a
Social-InterestCast, since it involves multiple interrelated factors such as
users' social relationships, users' interests, and content semantics. We
propose a four-layered system model, consisting of physical layer, social
layer, content layer, and session layer. By this model we successfully obtain
the geographical distribution of Social-InterestCast sessions, serving as the
precondition for quantifying data transport difficulty. We define the
fundamental limit of \emph{transport load} as a new metric, called
\emph{transport complexity}, i.e., the \emph{minimum required} transport load
for an OSN over a given carrier network. Specifically, we derive the transport
complexity for Social-InterestCast sessions in a large-scale OSN over the
carrier network with optimal communication architecture. The results can act as
the common lower bounds on transport load for Social-InterestCast over any
carrier networks. To the best of our knowledge, this is the first work to
measure the transport difficulty for data dissemination in OSNs by modeling
session patterns with the interest-driven characteristics.
|
1304.6468 | Adaptive Switched Lattice Reduction-Aided Linear Detection Techniques
for MIMO Systems | cs.IT math.IT | Lattice reduction (LR) aided multiple-input-multiple-out (MIMO) linear
detection can achieve the maximum receive diversity of the maximum likelihood
detection (MLD). By employing the most commonly used Lenstra-Lenstra-Lovasz
(LLL) algorithm, an equivalent channel matrix which is shorter and nearly
orthogonal is obtained, and thus the noise enhancement is greatly reduced by
employing LR-aided detection. One problem is that the LLL algorithm cannot
guarantee finding the optimal basis. The optimal lattice basis can be found
by the Korkin-Zolotarev (KZ) reduction. However, the KZ
reduction is infeasible in practice due to its high complexity. In this paper,
a simple algorithm is proposed based on the complex LLL (CLLL) algorithm to
approach the optimal performance while maintaining a reasonable complexity.
|
1304.6470 | Low-Complexity Lattice Reduction-Aided Channel Inversion Methods for
Large-Dimensional Multi-User MIMO Systems | cs.IT math.IT | Low-complexity precoding algorithms are proposed in this work to reduce the
computational complexity and improve the performance of regularized block
diagonalization (RBD) based precoding schemes for large multi-user MIMO
(MU-MIMO) systems. The proposed algorithms are based on a channel inversion
technique, QR decompositions, and lattice reductions to decouple the MU-MIMO
channel into equivalent SU-MIMO channels. Simulation results show that the
proposed precoding algorithms can achieve almost the same sum-rate performance
as RBD precoding, substantial bit error rate (BER) performance gains, and a
simplified receiver structure, while requiring a lower complexity.
|
1304.6473 | Technical report: Linking the scientific and clinical data with
KI2NA-LHC | cs.CY cs.DB cs.DL | We introduce a use case and propose a system for data and knowledge
integration in life sciences. In particular, we focus on linking clinical
resources (electronic patient records) with scientific documents and data
(research articles, biomedical ontologies and databases). Our motivation is
two-fold. Firstly, we aim to instantly provide scientific context of particular
patient cases for clinicians in order for them to propose treatments in a more
informed way. Secondly, we want to build a technical infrastructure for
researchers that will allow them to semi-automatically formulate and evaluate
their hypotheses against longitudinal patient data. This paper describes the
proposed system and its typical usage in a broader context of KI2NA, an ongoing
collaboration between the DERI research institute and Fujitsu Laboratories. We
introduce an architecture of the proposed framework called KI2NA-LHC (for
Linked Health Care) and outline the details of its implementation. We also
describe typical usage scenarios and propose a methodology for evaluation of
the whole framework. The main goal of this paper is to introduce our ongoing
work to a broader expert audience. By doing so, we aim to establish an
early-adopter community for our work and elicit feedback we could reflect in
the development of the prototype so that it is better tailored to the
requirements of target users.
|
1304.6476 | Remote Homology Detection in Proteins Using Graphical Models | cs.CE q-bio.QM | Given the amino acid sequence of a protein, researchers often infer its
structure and function by finding homologous, or evolutionarily-related,
proteins of known structure and function. Since structure is typically more
conserved than sequence over long evolutionary distances, recognizing remote
protein homologs from their sequence poses a challenge.
We first consider all proteins of known three-dimensional structure, and
explore how they cluster according to different levels of homology. An
automatic computational method reasonably approximates a human-curated
hierarchical organization of proteins according to their degree of homology.
Next, we return to homology prediction, based only on the one-dimensional
amino acid sequence of a protein. Menke, Berger, and Cowen proposed a Markov
random field model to predict remote homology for beta-structural proteins, but
their formulation was computationally intractable on many beta-strand
topologies.
We show two different approaches to approximate this random field, both of
which make it computationally tractable, for the first time, on all protein
folds. One method simplifies the random field itself, while the other retains
the full random field, but approximates the solution through stochastic search.
Both methods achieve improvements over the state of the art in remote homology
detection for beta-structural protein folds.
|
1304.6478 | The K-modes algorithm for clustering | cs.LG stat.ME stat.ML | Many clustering algorithms exist that estimate a cluster centroid, such as
K-means, K-medoids or mean-shift, but no algorithm seems to exist that clusters
data by returning exactly K meaningful modes. We propose a natural definition
of a K-modes objective function by combining the notions of density and cluster
assignment. The algorithm becomes K-means and K-medoids in the limit of very
large and very small scales. Computationally, it is slightly slower than
K-means but much faster than mean-shift or K-medoids. Unlike K-means, it is
able to find centroids that are valid patterns, truly representative of a
cluster, even with nonconvex clusters, and appears robust to outliers and
misspecification of the scale and number of clusters.
|
1304.6480 | A Theoretical Analysis of NDCG Type Ranking Measures | cs.LG cs.IR stat.ML | A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result is
very surprising: it seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have such a
property: For every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory.
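A small sketch of the measures discussed above. The logarithmic discount 1/log2(r+1) is the standard convention the abstract analyzes; the exponential gain 2^rel - 1 and the relevance values are illustrative choices:

```python
import math

def dcg(relevances, k=None):
    """Discounted cumulative gain with the standard log2 discount."""
    rels = relevances[:k] if k else relevances
    return sum((2 ** rel - 1) / math.log2(r + 2) for r, rel in enumerate(rels))

def ndcg(ranked_relevances, k=None):
    """DCG normalized by the DCG of the ideal (sorted) ranking; NDCG@k if k is set."""
    ideal = sorted(ranked_relevances, reverse=True)
    denom = dcg(ideal, k)
    return dcg(ranked_relevances, k) / denom if denom else 0.0

ranking = [3, 2, 3, 0, 1, 2]     # graded relevance, in ranked order
full = ndcg(ranking)             # standard NDCG over the whole list
at3 = ndcg(ranking, k=3)         # cut-off version NDCG@k
```

The convergence result above says that, for this log discount, `full` tends to 1 as the list length grows, regardless of the ranking function; the cut-off NDCG@k behaves differently.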
|
1304.6485 | Secure On-Off Transmission Design with Channel Estimation Errors | cs.IT math.IT | Physical layer security has recently been regarded as an emerging technique
to complement and improve the communication security in future wireless
networks. The current research and development in physical layer security is
often based on the ideal assumption of perfect channel knowledge or the
capability of variable-rate transmissions. In this work, we study the secure
transmission design in more practical scenarios by considering channel
estimation errors at the receiver and investigating both fixed-rate and
variable-rate transmissions. Assuming quasi-static fading channels, we design
secure on-off transmission schemes to maximize the throughput subject to a
constraint on secrecy outage probability. For systems with given and fixed
encoding rates, we show how the optimal on-off transmission thresholds and the
achievable throughput vary with the amount of knowledge on the eavesdropper's
channel. In particular, our design covers the interesting case where the
eavesdropper also uses the pilots sent from the transmitter to obtain imperfect
channel estimation. An interesting observation is that using too much pilot
power can harm the throughput of secure transmission if both the legitimate
receiver and the eavesdropper have channel estimation errors, while the secure
transmission always benefits from increasing pilot power when only the
legitimate receiver has channel estimation errors but not the eavesdropper.
When the encoding rates are controllable parameters to design, we further
derive both a non-adaptive and an adaptive rate transmission schemes by jointly
optimizing the encoding rates and the on-off transmission thresholds to
maximize the throughput of secure transmissions.
|
1304.6487 | Locally linear representation for image clustering | cs.LG stat.ML | Constructing a similarity graph is a key step in graph-oriented subspace
learning and clustering. In a similarity graph, each vertex denotes a data
point and each edge weight represents the similarity between two points.
There are two popular schemes for constructing a similarity graph: the
pairwise-distance-based scheme and the linear-representation-based scheme.
Most existing works involve only one of these schemes and suffer from certain
limitations. Specifically, pairwise-distance-based methods are sensitive to
noise and outliers compared with linear-representation-based methods. On the
other hand, linear-representation-based algorithms may wrongly select points
from other subspaces to represent a point, which degrades performance. In
this paper, we propose an algorithm, called
Locally Linear Representation (LLR), which integrates pairwise distance with
linear representation together to address the problems. The proposed algorithm
can automatically encode each data point over a set of points that not only
could denote the objective point with less residual error, but also are close
to the point in Euclidean space. The experimental results show that our
approach is promising in subspace learning and subspace clustering.
|
1304.6494 | Route-Based Detection of Conflicting ATC Clearances on Airports | cs.SY | Runway incursions are among the most serious safety concerns in air traffic
control. Traditional A-SMGCS level 2 safety systems detect runway incursions
with the help of surveillance information only. In the context of SESAR,
complementary safety systems are emerging that also use other information in
addition to surveillance, and that aim at warning about potential runway
incursions at earlier points in time. One such system is "conflicting ATC
clearances", which processes the clearances entered by the air traffic
controller into an electronic flight strips system and cross-checks them for
potentially dangerous inconsistencies. The cross-checking logic may be
implemented directly based on the clearances and on surveillance data, but this
is cumbersome. We present an approach that instead uses ground routes as an
intermediate layer, thereby simplifying the core safety logic.
|
1304.6498 | Apricot - An Object-Oriented Modeling Language for Hybrid Systems | cs.SE cs.LO cs.SY | We propose Apricot as an object-oriented language for modeling hybrid
systems. The language combines features of domain-specific languages and
object-oriented languages, filling the gap between design and implementation;
as a result, we put forward a modeling language with simple and distinct
syntax, structure, and semantics. In addition, we introduce the concept of
design by convention into Apricot. Owing to Apricot's object-oriented
character and component architecture, we conclude that it is competent for
modeling hybrid systems without losing scalability.
|
1304.6528 | Nonanticipative Rate Distortion Function for General Source-Channel
Matching | cs.IT math.IT | In this paper we invoke a nonanticipative information Rate Distortion
Function (RDF) for sources with memory, and we analyze its importance in
probabilistic matching of the source to the channel so that transmission of a
symbol-by-symbol code with memory without anticipation is optimal, with respect
to an average distortion and excess distortion probability. We show
achievability of the symbol-by-symbol code with memory without anticipation,
and we evaluate the probabilistic performance of the code for a Markov source.
|
1304.6551 | Decision-Theoretic Troubleshooting: Hardness of Approximation | cs.AI cs.CC | Decision-theoretic troubleshooting is one of the areas to which Bayesian
networks can be applied. Given a probabilistic model of a malfunctioning
man-made device, the task is to construct a repair strategy with minimal
expected cost. The problem has received considerable attention over the past
two decades. Efficient solution algorithms have been found for simple cases,
whereas other variants have been proven NP-complete. We study several variants
of the problem found in the literature, and prove that computing approximate
troubleshooting strategies is NP-hard. In the proofs, we exploit a close
connection to set-covering problems.
|
1304.6554 | Identifying Communities and Key Vertices by Reconstructing Networks from
Samples | cs.SI physics.soc-ph | Sampling techniques such as Respondent-Driven Sampling (RDS) are widely used
in epidemiology to sample "hidden" populations, such that properties of the
network can be deduced from the sample. We consider how similar techniques can
be designed that allow the discovery of the structure, especially the community
structure, of networks. Our method involves collecting samples of a network by
random walks and reconstructing the network by probabilistically coalescing
vertices, using vertex attributes to determine the probabilities. Even though
our method can only approximately reconstruct a part of the original network,
it can recover its community structure relatively well. Moreover, it can find
the key vertices which, when immunized, can effectively reduce the spread of an
infection through the original network.
|
1304.6575 | Third Party Privacy Preserving Protocol for Perturbation Based
Classification of Vertically Fragmented Data Bases | cs.CR cs.DB | Privacy has become a major issue in distributed data mining. In the
literature, many privacy-preserving proposals can be found, which fall into
two major categories: trusted-third-party and multiparty-based privacy
protocols. In trusted-third-party models, conventional asymmetric
cryptographic techniques are used, while in multiparty-based protocols the
data are perturbed to ensure that no other party can understand the original
data. In this paper, in order to enhance security by combining the strengths
of both models, we propose to use data perturbation techniques in a
third-party privacy-preserving protocol to conduct classification on
vertically fragmented databases. Specifically, we present a method to build a
Naive Bayes classifier from the disguised and decentralized databases. In
order to perform the classification, we propose a third-party protocol for
secure computations. We conduct experiments to compare the accuracy of our
Naive Bayes classifier with one built from the original undisguised data. Our
results show that although the data are disguised and decentralized, our
method can still achieve fairly high accuracy.
|
1304.6589 | Partitions of Frobenius Rings Induced by the Homogeneous Weight | cs.IT math.IT math.RA | The values of the homogeneous weight are determined for finite Frobenius
rings that are a direct product of local Frobenius rings. This is used to
investigate the partition induced by this weight and its dual partition under
character-theoretic dualization. A characterization is given of those rings for
which the induced partition is reflexive or even self-dual.
|
1304.6591 | Lp-Regularized Least Squares (0<p<1) and Critical Path | cs.IT math.IT | The least squares problem is formulated in terms of Lp quasi-norm
regularization (0<p<1). Two formulations are considered: (i) an Lp-constrained
optimization and (ii) an Lp-penalized (unconstrained) optimization. Due to the
nonconvexity of the Lp quasi-norm, the solution paths of the regularized least
squares problem are not ensured to be continuous. A critical path, which is a
maximal continuous curve consisting of critical points, is therefore considered
separately. The critical paths are piecewise smooth, as can be seen from the
viewpoint of the variational method, and generally contain non-optimal points
such as saddle points and local maxima as well as global/local minima. Along
each critical path, the correspondence between the regularization parameters
(which govern the 'strength' of regularization in the two formulations) is
non-monotonic and, more specifically, it has multiplicity. Two paths of
critical points connecting the origin and an ordinary least squares (OLS)
solution are highlighted. One is a main path starting at an OLS solution, and
the other is a greedy path starting at the origin. Part of the greedy path can
be constructed with a generalized Minkowskian gradient. The breakpoints of the
greedy path coincide with the step-by-step solutions generated by using
orthogonal matching pursuit (OMP), thereby establishing a direct link between
OMP and Lp-regularized least squares.
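The OMP step-by-step solutions mentioned above can be reproduced with a standard OMP sketch (the problem sizes, random dictionary, and sparse coefficients are illustrative assumptions):

```python
import numpy as np

def omp(A, y, n_steps):
    """Orthogonal matching pursuit: greedy atom selection + least-squares refit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))          # random dictionary
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]               # 2-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, n_steps=2)
```

Each step's refit solution is one breakpoint of the greedy path described in the abstract.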
|
1304.6599 | Robust error correction for real-valued signals via message-passing
decoding and spatial coupling | cs.IT math.IT | We revisit the error correction scheme of real-valued signals when the
codeword is corrupted by gross errors on a fraction of entries and a small
noise on all the entries. Combining the recent developments of approximate
message passing and the spatially-coupled measurement matrix in compressed
sensing we show that the error correction and its robustness towards noise can
be enhanced considerably. We discuss the performance in the large signal limit
using previous results on state evolution, as well as for finite size signals
through numerical simulations. Even for relatively small sizes, the approach
proposed here outperforms convex-relaxation-based decoders.
|
1304.6601 | Time evolution of Wikipedia network ranking | physics.soc-ph cs.IR cs.SI | We study the time evolution of ranking and spectral properties of the Google
matrix of the English Wikipedia hyperlink network during the years 2003 - 2011.
The statistical properties of the ranking of Wikipedia articles via PageRank
and CheiRank probabilities, as well as the matrix spectrum, are shown to be
stabilized for 2007 - 2011. Special emphasis is placed on the ranking of
Wikipedia personalities and universities. We show that PageRank selection is
dominated by politicians, while 2DRank, which combines PageRank and CheiRank,
gives more prominence to personalities from the arts. The Wikipedia PageRank
of universities recovers 80 percent of the top universities of the Shanghai
ranking during the considered time period.
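As background for the ranking measures mentioned above, here is a minimal power-iteration sketch of PageRank on a toy directed graph; the `pagerank` helper, the graph, and the damping factor alpha = 0.85 are illustrative assumptions, not the paper's Wikipedia-scale computation.

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """PageRank via power iteration on the Google matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic link matrix; a dangling node jumps uniformly.
    S = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    G = alpha * S.T + (1 - alpha) / n      # column-stochastic Google matrix
    p = np.full(n, 1.0 / n)
    while True:
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Tiny directed graph: nodes 0 and 1 both link to node 2; node 2 links to 0.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
p = pagerank(adj)
assert np.isclose(p.sum(), 1.0)
assert np.argmax(p) == 2               # the most-linked-to node ranks first
```

CheiRank, which the abstract combines with PageRank into 2DRank, is simply PageRank computed on the link-reversed network, i.e. `pagerank(adj.T)` in this sketch.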
|
1304.6603 | Optimal Kullback-Leibler Aggregation via Information Bottleneck | cs.SY cs.IT math.IT | In this paper, we present a method for reducing a regular, discrete-time
Markov chain (DTMC) to another DTMC with a given, typically much smaller number
of states. The cost of reduction is defined as the Kullback-Leibler divergence
rate between a projection of the original process through a partition function
and a DTMC on the correspondingly partitioned state space. Finding the reduced
model with minimal cost is computationally expensive, as it requires an
exhaustive search among all state space partitions, and an exact evaluation of
the reduction cost for each candidate partition. Our approach deals with the
latter problem by minimizing an upper bound on the reduction cost instead of
minimizing the exact cost; the proposed upper bound is easy to compute and it
is tight if the original chain is lumpable with respect to the partition. Then,
we express the problem in the form of information bottleneck optimization, and
propose using the agglomerative information bottleneck algorithm to search
greedily, rather than exhaustively, for a sub-optimal partition. The theory is
illustrated with examples and one application scenario in the context of
modeling bio-molecular interactions.
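For concreteness, the projection of a DTMC through a partition function can be sketched as standard stationary-weighted lumping; the example chain below is chosen to be exactly lumpable, the case in which the abstract says the upper bound is tight. The `aggregate` helper and the 4-state chain are illustrative assumptions, not the paper's information bottleneck algorithm.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible DTMC (left eigenvector of P)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def aggregate(P, partition):
    """pi-weighted aggregation of a DTMC onto a partition of its state space."""
    pi = stationary(P)
    k = len(partition)
    Q = np.zeros((k, k))
    for a, A_states in enumerate(partition):
        wa = pi[A_states] / pi[A_states].sum()   # conditional dist. within block
        for b, B_states in enumerate(partition):
            Q[a, b] = wa @ P[np.ix_(A_states, B_states)].sum(axis=1)
    return Q

# A 4-state chain that is exactly lumpable w.r.t. the blocks {0,1} and {2,3}.
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.1, 0.2, 0.4, 0.3],
              [0.2, 0.1, 0.3, 0.4]])
Q = aggregate(P, [np.array([0, 1]), np.array([2, 3])])
assert np.allclose(Q.sum(axis=1), 1.0)
assert np.allclose(Q, [[0.7, 0.3], [0.3, 0.7]])
```

Because this chain is lumpable, the reduced chain `Q` reproduces the block-level dynamics exactly; for non-lumpable chains the aggregation incurs the KL divergence rate cost the abstract describes.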
|
1304.6613 | Ovarian volume throughout life: a validated normative model | q-bio.TO cs.CE | The measurement of ovarian volume has been shown to be a useful indirect
indicator of the ovarian reserve in women of reproductive age, in the diagnosis
and management of a number of disorders of puberty and adult reproductive
function, and is under investigation as a screening tool for ovarian cancer. To
date there is no normative model of ovarian volume throughout life. By
searching the published literature for ovarian volume in healthy females, and
using our own data from multiple sources (combined n = 59,994) we have
generated and robustly validated the first model of ovarian volume from
conception to 82 years of age. This model shows that 69% of the variation in
ovarian volume is due to age alone. We have shown that in the average case
ovarian volume rises from 0.7 mL (95% CI 0.4 -- 1.1 mL) at 2 years of age to a
peak of 7.7 mL (95% CI 6.5 -- 9.2 mL) at 20 years of age with a subsequent
decline to about 2.8 mL (95% CI 2.7 -- 2.9 mL) at the menopause and smaller
volumes thereafter. Our model allows us to generate normal values and ranges
for ovarian volume throughout life. This is the first validated normative model
of ovarian volume from conception to old age; it will be of use in the
diagnosis and management of a number of diverse gynaecological and reproductive
conditions in females from birth to menopause and beyond.
|
1304.6614 | Performance Analysis of Protograph LDPC Codes for Nakagami-$m$ Fading
Relay Channels | cs.IT math.IT | In this paper, we investigate the error performance of protograph
low-density parity-check (LDPC) codes over Nakagami-$m$ fading relay channels.
We first calculate the decoding
thresholds of the protograph codes over such channels with different fading
depths (i.e., different values of $m$) by exploiting the modified protograph
extrinsic information transfer (PEXIT) algorithm. Furthermore, based on the
PEXIT analysis and using Gaussian approximation, we derive the bit-error-rate
(BER) expressions for the error-free (EF) relaying protocol and
decode-and-forward (DF) relaying protocol. We finally compare the threshold
with the theoretical BER and the simulated BER results of the protograph codes.
The results reveal that the performance of the DF protocol is approximately the
same as that of the EF protocol. Moreover, the theoretical BER expressions, which are shown
to be reasonably consistent with the decoding thresholds and the simulated
BERs, are able to evaluate the system performance and predict the decoding
threshold with lower complexity as compared to the modified PEXIT algorithm. As
a result, this work can facilitate the design of protograph codes for wireless
communication systems.
|
1304.6617 | EM-based Semi-blind Channel Estimation in AF Two-Way Relay Networks | cs.IT math.IT stat.OT | We propose an expectation maximization (EM)-based algorithm for semi-blind
channel estimation of reciprocal channels in amplify-and-forward (AF) two-way
relay networks (TWRNs). By incorporating both data samples and pilots into the
estimation, the proposed algorithm provides substantially higher accuracy than
the conventional training-based approach. Furthermore, the proposed algorithm
has a linear computational complexity per iteration and converges after a small
number of iterations.
|
1304.6627 | Robust 1-bit Compressive Sensing via Gradient Support Pursuit | cs.IT math.IT math.OC math.ST stat.TH | This paper studies a formulation of 1-bit Compressed Sensing (CS) problem
based on the maximum likelihood estimation framework. In order to solve the
problem we apply the recently proposed Gradient Support Pursuit algorithm, with
a minor modification. Assuming the proposed objective function has a Stable
Restricted Hessian, the algorithm is shown to accurately solve the 1-bit CS
problem. Furthermore, the algorithm is compared to the state-of-the-art 1-bit
CS algorithms through numerical simulations. The results suggest that the
proposed method is robust to noise and that in the mid-to-low input SNR regime it
achieves the best reconstruction SNR vs. execution time trade-off.
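To illustrate the 1-bit measurement model underlying the comparison above, the sketch below observes only the signs y = sign(Ax) and recovers the signal direction with a simple back-projection baseline. The baseline and all problem sizes are illustrative assumptions; this is not the Gradient Support Pursuit algorithm studied in the paper.

```python
import numpy as np

# Only the signs of the linear measurements are retained (1-bit quantization).
rng = np.random.default_rng(0)
m, n = 500, 20
x_true = np.zeros(n)
x_true[[2, 7]] = [1.0, -1.0]
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)                  # 1-bit measurements

# Back-projection baseline: correlate the measurements with the rows of A.
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
cosine = float(x_hat @ x_true)
assert cosine > 0.9                      # direction is recovered up to scale
```

Note that 1-bit measurements discard all amplitude information, so only the direction of the signal is identifiable; this is why 1-bit CS algorithms, including the one in the abstract, report reconstruction quality on normalized signals.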
|
1304.6663 | Low-rank optimization for distance matrix completion | math.OC cs.LG stat.ML | This paper addresses the problem of low-rank distance matrix completion. This
problem amounts to recovering the missing entries of a distance matrix when the
dimension of the data embedding space is possibly unknown but small compared to
the number of considered data points. The focus is on high-dimensional
problems. We recast the considered problem into an optimization problem over
the set of low-rank positive semidefinite matrices and propose two efficient
algorithms for low-rank distance matrix completion. In addition, we propose a
strategy to determine the dimension of the embedding space. The resulting
algorithms scale to high-dimensional problems and monotonically converge to a
global solution of the problem. Finally, numerical experiments illustrate the
good performance of the proposed algorithms on benchmarks.
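The dimension-selection idea can be illustrated with classical multidimensional scaling on a fully observed squared-distance matrix: the rank of the doubly centered matrix reveals the embedding dimension. This is a standard sanity check, not the paper's low-rank completion algorithm; the `embedding_dim` helper, point cloud, and tolerance are illustrative assumptions.

```python
import numpy as np

def embedding_dim(D, tol=1e-8):
    """Classical MDS: the rank of the doubly centered squared-distance
    matrix equals the dimension of the embedding space."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D @ J                 # Gram matrix of the centered points
    ev = np.linalg.eigvalsh(B)
    return int((ev > tol * ev.max()).sum())

# Ten points in a 3-dimensional embedding space, squared-distance matrix D.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
sq = (X**2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2 * X @ X.T
assert embedding_dim(D) == 3
```

With missing entries this eigenvalue test is no longer directly available, which is why the paper instead optimizes over low-rank positive semidefinite matrices and proposes its own strategy for choosing the rank.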
|
1304.6690 | Massive MIMO for Next Generation Wireless Systems | cs.IT math.IT | Multi-user Multiple-Input Multiple-Output (MIMO) offers big advantages over
conventional point-to-point MIMO: it works with cheap single-antenna terminals,
a rich scattering environment is not required, and resource allocation is
simplified because every active terminal utilizes all of the time-frequency
bins. However, multi-user MIMO, as originally envisioned with roughly equal
numbers of service-antennas and terminals and frequency division duplex
operation, is not a scalable technology. Massive MIMO (also known as
"Large-Scale Antenna Systems", "Very Large MIMO", "Hyper MIMO", "Full-Dimension
MIMO" & "ARGOS") makes a clean break with current practice through the use of a
large excess of service-antennas over active terminals and time division duplex
operation. Extra antennas help by focusing energy into ever-smaller regions of
space to bring huge improvements in throughput and radiated energy efficiency.
Other benefits of massive MIMO include the extensive use of inexpensive
low-power components, reduced latency, simplification of the media access
control (MAC) layer, and robustness to intentional jamming. The anticipated
throughput depends on the propagation environment providing asymptotically
orthogonal channels to the terminals, but so far experiments have not disclosed
any limitations in this regard. While massive MIMO renders many traditional
research problems irrelevant, it uncovers entirely new problems that urgently
need attention: the challenge of making many low-cost low-precision components
that work effectively together, acquisition and synchronization for
newly-joined terminals, the exploitation of extra degrees of freedom provided
by the excess of service-antennas, reducing internal power consumption to
achieve total energy efficiency reductions, and finding new deployment
scenarios. This paper presents an overview of the massive MIMO concept and
contemporary research.
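The "asymptotically orthogonal channels" property mentioned above can be visualized numerically: under i.i.d. Rayleigh fading (an illustrative assumption, not a claim about any measured propagation environment), the normalized cross-correlation between terminals' channel vectors shrinks roughly as 1/sqrt(M) with the number of service antennas M.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_cross_correlation(M, K=8, trials=200):
    """Average (over trials) worst-case normalized correlation between the
    i.i.d. Rayleigh channel vectors of K terminals at an M-antenna array."""
    worst = []
    for _ in range(trials):
        H = (rng.standard_normal((M, K))
             + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        G = np.abs(H.conj().T @ H) / M          # normalized inner products
        np.fill_diagonal(G, 0.0)                # keep only cross-terms
        worst.append(G.max())
    return float(np.mean(worst))

small, large = max_cross_correlation(M=10), max_cross_correlation(M=1000)
assert large < small        # channels decorrelate as M grows
assert large < 0.15         # near-orthogonal with a large excess of antennas
```

This vanishing of cross-terms is what lets simple linear processing focus energy on individual terminals as the antenna excess grows.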
|
1304.6693 | Reliable Deniable Communication: Hiding Messages in Noise | cs.IT math.IT | A transmitter Alice may wish to reliably transmit a message to a receiver Bob
over a binary symmetric channel (BSC), while simultaneously ensuring that her
transmission is deniable from an eavesdropper Willie. That is, if Willie
listens to Alice's transmissions over a "significantly noisier" BSC than the
one to Bob, he should be unable even to estimate whether Alice is transmitting.
We consider two scenarios. In our first scenario, we assume that the channel
transition probability from Alice to Bob and Willie is perfectly known to all
parties. Here, even when Alice's (potential) communication scheme is publicly
known to Willie (with no common randomness between Alice and Bob), we prove
that over 'n' channel uses Alice can transmit a message of length O(sqrt{n})
bits to Bob, deniably from Willie. We also prove information-theoretic
order-optimality of this result. In our second scenario, we allow uncertainty
in the knowledge of the channel transition probability parameters. In
particular, we assume that the channel transition probabilities for both Bob
and Willie are uniformly drawn from a known interval. Here, we show that, in
contrast to the previous setting, Alice can communicate O(n) bits of message
reliably and deniably (again, with no common randomness). We give both an
achievability result and a matching converse for this setting. Our work builds
upon the work of Bash et al on AWGN channels (but with common randomness) and
differs from other recent works (by Wang et al and Bloch) in two important ways
- firstly our deniability metric is variational distance (as opposed to
Kullback-Leibler divergence), and secondly, our techniques are significantly
different from these works.
|
1304.6736 | Networks in Cognitive Science | physics.soc-ph cs.SI q-bio.NC | Networks of interconnected nodes have long played a key role in Cognitive
Science, from artificial neural networks to spreading activation models of
semantic memory. Recently, however, a new Network Science has been developed,
providing insights into the emergence of global, system-scale properties in
contexts as diverse as the Internet, metabolic reactions, and collaborations
among scientists. Today, the inclusion of network theory into Cognitive
Sciences, and the expansion of complex-systems science, promises to
significantly change the way in which the organization and dynamics of
cognitive and behavioral processes are understood. In this paper, we review
recent contributions of network theory at different levels and domains within
the Cognitive Sciences.
|
1304.6743 | A Combinatorial Approach to Quantum Error Correcting Codes | math.CO cs.IT math.IT | Motivated from the theory of quantum error correcting codes, we investigate a
combinatorial problem that involves a symmetric $n$-vertices colourable graph
and a group of operations (colouring rules) on the graph: find the minimum
sequence of operations that maps between two given graph colourings. We provide
an explicit algorithm for computing the solution of our problem, which in turn
is directly related to computing the distance (performance) of an underlying
quantum error correcting code. Computing the distance of a quantum code is a
highly non-trivial problem and our method may be of use in the construction of
better codes.
|
1304.6753 | Clustering Consumption in Queues: A Scalable Model for Electric Vehicle
Scheduling | cs.SY | In this paper, we introduce a scalable model for the aggregate electricity
demand of a fleet of electric vehicles, which can provide the right balance
between model simplicity and accuracy. The model is based on classification of
tasks with similar energy consumption characteristics into a finite number of
clusters. The aggregator responsible for scheduling the charge of the vehicles
has two goals: 1) to provide a hard QoS guarantee to the vehicles at the lowest
possible cost; 2) to offer load or generation following services to the
wholesale market. In order to achieve these goals, we combine the scalable
demand model we propose with two scheduling mechanisms, a near-optimal and a
heuristic technique. The performance of the two mechanisms is compared under a
realistic setting in our numerical experiments.
|
1304.6759 | k-Modulus Method for Image Transformation | cs.CV | In this paper, we propose a new algorithm that performs a novel spatial image
transformation. The proposed approach aims to reduce the bit depth used for
image storage. The basic technique underlying the proposed transformation is
the modulus operator. The goal is to transform the whole image into multiples
of a predefined integer. Dividing the whole image by that integer guarantees
that the new image is smaller than the original image. The k-Modulus Method
cannot be used as a stand-alone transform for image compression because of its
high compression ratio. It could, however, be embedded as a scheme in other
image processing applications, especially compression. Owing to its high PSNR
value, it could be combined with other methods to exploit the redundancy
criterion.
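A minimal sketch of the transformation as described in the abstract, assuming "transform the whole image into multiples of a predefined integer" means flooring each pixel to the nearest multiple of k (the paper may round differently; the `k_modulus` helper is an illustrative assumption). Dividing the result by k = 8 reduces an 8-bit image to 5 bits per pixel.

```python
import numpy as np

def k_modulus(img, k):
    """Snap every pixel down to the nearest multiple of k."""
    return img - (img % k)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # 8-bit image
t = k_modulus(img, 8)
assert np.all(t % 8 == 0)                            # image is multiples of 8
assert np.all(img.astype(int) - t.astype(int) < 8)   # distortion below k
assert (t // 8).max() <= 31                          # fits in 5 bits after division
```

The bounded per-pixel distortion (always below k) is consistent with the high PSNR the abstract reports, while the bit-depth reduction alone yields only a modest compression ratio, hence the suggestion to embed the method in a larger compression pipeline.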
|