id | title | categories | abstract |
|---|---|---|---|
1309.3151 | Distributed privacy-preserving network size computation: A
system-identification based method | math.OC cs.DC cs.SY math.DS | In this study, we propose an algorithm for computing the network size of
communicating agents. The algorithm is distributed: a) it does not require a
leader selection; b) it only requires local exchange of information, and; c)
its design can be implemented using local information only, without any global
information about the network. It is privacy-preserving: it does not require
propagating identifying labels. This algorithm is based on system
identification, and more precisely on the identification of the order of a
suitably-constructed discrete-time linear time-invariant system over some
finite field. We provide a probabilistic guarantee for any randomly picked node
to correctly compute the number of nodes in the network. Moreover, numerical
implementation has been taken into account so that the algorithm scales to
networks of hundreds of nodes, making it applicable to real-world sensor or
robotic networks. We finally illustrate our results in
simulation and conclude the paper with discussions on how our technique differs
from a previously-known strategy based on statistical inference.
|
1309.3173 | Low Complexity List Successive Cancellation Decoding of Polar Codes | cs.IT math.IT | We propose a low complexity list successive cancellation (LCLSC) decoding
algorithm to reduce the complexity of traditional list successive cancellation
(LSC) decoding of polar codes while maintaining the LSC decoding performance.
By defining two thresholds, namely "likelihood
ratio (LR) threshold" and "Bhattacharyya parameter threshold", we classify the
reliability of each received information bit and the quality of each bit
channel. Based on this classification, we implement successive cancellation
(SC) decoding instead of LSC decoding when the information bits from "bad"
subchannels are received reliably and further attempt to skip LSC decoding for
the remaining information bits in order to achieve a lower complexity compared to
full LSC decoding. Simulation results show that the complexity of LCLSC
decoding is much lower than LSC decoding and can be close to that of SC
decoding, especially in low code rate regions.
|
1309.3187 | Cache Performance Study of Portfolio-Based Parallel CDCL SAT Solvers | cs.DC cs.AI | Parallel SAT solvers are becoming mainstream. Their performance has made them
win the past two SAT competitions consecutively, and they are in the limelight
of research and industry. The problem is that it is not known exactly what is
needed to make them perform even better; that is, how to make them solve more
problems in less time. It is also not known how well they scale in massive
multi-core environments, which, predictably, is the scenario of coming new
hardware. In this paper we show that cache contention is a main culprit of the
slowdown in scalability, and provide empirical results showing that for some
types of search, physically sharing the clause database between threads is
beneficial.
|
1309.3195 | Improved LT Codes in Low Overhead Regions for Binary Erasure Channels | cs.IT math.IT | We study an improved degree distribution for Luby Transform (LT) codes which
exhibits improved bit error rate performance particularly in low overhead
regions. We construct the degree distribution by modifying the Robust Soliton
distribution. The performance of our proposed LT codes is evaluated and
compared to the conventional LT codes via And-Or tree analysis. Then we propose
a transmission scheme based on the proposed degree distribution to improve its
frame error rate in full recovery regions. Furthermore, the improved degree
distribution is applied to distributed multi-source relay networks and unequal
error protection. It is shown that our schemes achieve better performance and
reduced complexity especially in low overhead regions, compared with
conventional schemes.
|
1309.3197 | Inducing Honest Reporting Without Observing Outcomes: An Application to
the Peer-Review Process | cs.MA cs.AI cs.DL math.ST stat.TH | When eliciting opinions from a group of experts, traditional devices used to
promote honest reporting assume that there is an observable future outcome. In
practice, however, this assumption is not always reasonable. In this paper, we
propose a scoring method built on strictly proper scoring rules to induce
honest reporting without assuming observable outcomes. Our method provides
scores based on pairwise comparisons between the reports made by each pair of
experts in the group. For ease of exposition, we introduce our scoring method
by illustrating its application to the peer-review process. In order to do so,
we start by modeling the peer-review process using a Bayesian model where the
uncertainty regarding the quality of the manuscript is taken into account.
Thereafter, we introduce our scoring method to evaluate the reported reviews.
Under the assumptions that reviewers are Bayesian decision-makers and that they
cannot influence the reviews of other reviewers, we show that risk-neutral
reviewers strictly maximize their expected scores by honestly disclosing their
reviews. We also show how the group's scores can be used to find a consensual
review. Experimental results show that encouraging honest reporting through the
proposed scoring method creates more accurate reviews than the traditional
peer-review process.
|
1309.3214 | Modeling Based on Elman Wavelet Neural Network for Class-D Power
Amplifiers | cs.NE | In Class-D Power Amplifiers (CDPAs), the power supply noise can intermodulate
with the input signal, manifesting as power-supply induced intermodulation
distortion (PS-IMD); due to the memory effects of the system, the PS-IMDs are
asymmetric. In this paper, a new behavioral model based on
the Elman Wavelet Neural Network (EWNN) is proposed to study the nonlinear
distortion of CDPAs. In the EWNN model, Morlet wavelet functions are
employed as the activation functions and a normalization operation is applied
in the hidden layer; modification of the scale and translation factors in
the wavelet functions is omitted to avoid fluctuations of the error
curves. When there are 30 neurons in the hidden layer, to achieve the same
square sum error (SSE) $\epsilon_{min}=10^{-3}$, EWNN needs 31 iteration steps,
while the basic Elman neural network (BENN) model needs 86 steps. The
Volterra-Laguerre model has 605 parameters to be estimated but still cannot
achieve the same order of accuracy as EWNN. Simulation results show that the
proposed approach of EWNN model has fewer parameters and higher accuracy than
the Volterra-Laguerre model and its convergence rate is much faster than the
BENN model.
|
1309.3228 | Quantum hypothesis testing and the operational interpretation of the
quantum Renyi relative entropies | quant-ph cs.IT math-ph math.IT math.MP | We show that the new quantum extension of Renyi's \alpha-relative entropies,
introduced recently by Muller-Lennert, Dupuis, Szehr, Fehr and Tomamichel, J.
Math. Phys. 54, 122203, (2013), and Wilde, Winter, Yang, Commun. Math. Phys.
331, (2014), have an operational interpretation in the strong converse problem
of quantum hypothesis testing. Together with related results for the direct
part of quantum hypothesis testing, known as the quantum Hoeffding bound, our
result suggests that the operationally relevant definition of the quantum Renyi
relative entropies depends on the parameter \alpha: for \alpha<1, the right
choice seems to be the traditional definition, whereas for \alpha>1 the right
choice is the newly introduced version.
As a side result, we show that the new Renyi \alpha-relative entropies are
asymptotically attainable by measurements for \alpha>1, and give a new simple
proof for their monotonicity under completely positive trace-preserving maps.
|
1309.3233 | Efficient Orthogonal Tensor Decomposition, with an Application to Latent
Variable Model Learning | stat.ML cs.LG math.ST stat.TH | Decomposing tensors into orthogonal factors is a well-known task in
statistics, machine learning, and signal processing. We study orthogonal outer
product decompositions where the factors in the summands in the decomposition
are required to be orthogonal across summands, by relating this orthogonal
decomposition to the singular value decompositions of the flattenings. We show
that it is a non-trivial assumption for a tensor to have such an orthogonal
decomposition, and we show that it is unique (up to natural symmetries) in case
it exists, in which case we also demonstrate how it can be efficiently and
reliably obtained by a sequence of singular value decompositions. We
demonstrate how the factoring algorithm can be applied for parameter
identification in latent variable and mixture models.
|
1309.3242 | Using memristor crossbar structure to implement a novel adaptive real
time fuzzy modeling algorithm | cs.AI | Although fuzzy techniques promise fast yet accurate modeling and
control abilities for complicated systems, various difficulties have been
revealed in real-world implementations. Usually there is no escape from
iterative optimization based on crisp-domain algorithms. Recently, memristor
structures have appeared promising for implementing neural network structures
and fuzzy algorithms. In this paper a novel adaptive real-time fuzzy modeling
algorithm is proposed which uses the active learning method concept to mimic
recent understandings of right-brain processing techniques. The developed
method is based on processing fuzzy numbers to provide sensitivity to each
training data point, expanding the knowledge tree and leading to plasticity,
while the defuzzification technique used guarantees sufficient stability. An
outstanding characteristic of the proposed algorithm is its compatibility with
memristor crossbar hardware processing concepts. An analog implementation of
the proposed algorithm on memristor crossbar structures is also introduced in
this paper. The effectiveness of the proposed algorithm in modeling and pattern
recognition tasks is verified by means of computer simulations.
|
1309.3256 | Recovery guarantees for exemplar-based clustering | stat.ML cs.CV cs.LG | For a certain class of distributions, we prove that the linear programming
relaxation of $k$-medoids clustering---a variant of $k$-means clustering where
means are replaced by exemplars from within the dataset---distinguishes points
drawn from nonoverlapping balls with high probability once the number of points
drawn and the separation distance between any two balls are sufficiently large.
Our results hold in the nontrivial regime where the separation distance is
small enough that points drawn from different balls may be closer to each other
than points drawn from the same ball; in this case, clustering by thresholding
pairwise distances between points can fail. We also exhibit numerical evidence
of high-probability recovery in a substantially more permissive regime.
|
1309.3285 | A tabu search algorithm with efficient diversification strategy for high
school timetabling problem | cs.AI | The school timetabling problem can be described as scheduling a set of
lessons (combination of classes, teachers, subjects and rooms) in a weekly
timetable. This paper presents a novel way to generate timetables for high
schools. The algorithm has three phases: pre-scheduling, an initial phase, and
optimization through tabu search. In the first phase, a graph-based algorithm
is used to create groups of lessons to be scheduled simultaneously; then an
initial solution is built by a sequential greedy heuristic. Finally, the
solution is optimized using a tabu search algorithm with frequency-based
diversification. The algorithm has been tested on a set of real problems
gathered from Iranian high schools. Experiments show that the proposed
algorithm can effectively build acceptable timetables.
|
1309.3292 | MacWilliams' Extension Theorem for Bi-Invariant Weights over Finite
Principal Ideal Rings | math.RA cs.IT math.IT | A finite ring R and a weight w on R satisfy the Extension Property if every
R-linear w-isometry between two R-linear codes in R^n extends to a monomial
transformation of R^n that preserves w. MacWilliams proved that finite fields
with the Hamming weight satisfy the Extension Property. It is known that finite
Frobenius rings with either the Hamming weight or the homogeneous weight
satisfy the Extension Property. Conversely, if a finite ring with the Hamming
or homogeneous weight satisfies the Extension Property, then the ring is
Frobenius.
This paper addresses the question of a characterization of all bi-invariant
weights on a finite ring that satisfy the Extension Property. Having solved
this question in previous papers for all direct products of finite chain rings
and for matrix rings, we have now arrived at a characterization of these
weights for finite principal ideal rings, which form a large subclass of the
finite Frobenius rings. We do not assume commutativity of the rings in
question.
|
1309.3307 | Delay-Sensitive Communication over Fading Channel: Queueing Behavior and
Code Parameter Selection | cs.IT math.IT | This article examines the queueing performance of communication systems that
transmit encoded data over unreliable channels. A fading formulation suitable
for wireless environments is considered where errors are caused by a discrete
channel with correlated behavior over time. Random codes and BCH codes are
employed as means to study the relationship between code-rate selection and the
queueing performance of point-to-point data links. For carefully selected
channel models and arrival processes, a tractable Markov structure composed of
queue length and channel state is identified. This facilitates the analysis of
the stationary behavior of the system, leading to evaluation criteria such as
bounds on the probability of the queue exceeding a threshold. Specifically,
this article focuses on system models with scalable arrival profiles, which are
based on Poisson processes, and finite-state channels with memory. These
assumptions permit the rigorous comparison of system performance for codes with
arbitrary block lengths and code rates. Based on the resulting
characterizations, it is possible to select the best code parameters for
delay-sensitive applications over various channels. The methodology introduced
herein offers a new perspective on the joint queueing-coding analysis of
finite-state channels with memory, and it is supported by numerical simulations.
|
1309.3317 | Pole-placement in higher-order sliding-mode control | math.OC cs.SY | We show that the well-known formula by Ackermann and Utkin can be generalized
to the case of higher-order sliding modes. By interpreting the eigenvalue
assignment of the sliding dynamics as a zero-placement problem, the
generalization becomes straightforward and the proof is greatly simplified. The
generalized formula retains the simplicity of the original one while allowing
one to construct the sliding variable of a single-input linear time-invariant
system in such a way that it has desired relative degree and desired
sliding-mode dynamics. The formula can be used as part of a higher-order
sliding-mode control design methodology, achieving high accuracy and robustness
at the same time.
|
1309.3321 | Wedge Sampling for Computing Clustering Coefficients and Triangle Counts
on Large Graphs | cs.SI cs.DS | Graphs are used to model interactions in a variety of contexts, and there is
a growing need to quickly assess the structure of such graphs. Some of the most
useful graph metrics are based on triangles, such as those measuring social
cohesion. Algorithms to compute them can be extremely expensive, even for
moderately-sized graphs with only millions of edges. Previous work has
considered node and edge sampling; in contrast, we consider wedge sampling,
which provides faster and more accurate approximations than competing
techniques. Additionally, wedge sampling enables estimation of local clustering
coefficients, degree-wise clustering coefficients, uniform triangle sampling,
and directed triangle counts. Our methods come with provable and practical
probabilistic error estimates for all computations. We provide extensive
results that show our methods are both more accurate and faster than
state-of-the-art alternatives.
|
1309.3323 | Mapping Mutable Genres in Structurally Complex Volumes | cs.CL cs.DL | To mine large digital libraries in humanistically meaningful ways, scholars
need to divide them by genre. This is a task that classification algorithms are
well suited to assist, but they need adjustment to address the specific
challenges of this domain. Digital libraries pose two problems of scale not
usually found in the article datasets used to test these algorithms. 1) Because
libraries span several centuries, the genres being identified may change
gradually across the time axis. 2) Because volumes are much longer than
articles, they tend to be internally heterogeneous, and the classification task
needs to begin with segmentation. We describe a multi-layered solution that
trains hidden Markov models to segment volumes, and uses ensembles of
overlapping classifiers to address historical change. We test this approach on
a collection of 469,200 volumes drawn from HathiTrust Digital Library. To
demonstrate the humanistic value of these methods, we extract 32,209 volumes of
fiction from the digital library, and trace the changing proportions of first-
and third-person narration in the corpus. We note that narrative points of view
seem to have strong associations with particular themes and genres.
|
1309.3330 | Reliable Crowdsourcing for Multi-Class Labeling using Coding Theory | cs.IT cs.SI math.IT | Crowdsourcing systems often have crowd workers that perform unreliable work
on the task they are assigned. In this paper, we propose the use of
error-control codes and decoding algorithms to design crowdsourcing systems for
reliable classification despite unreliable crowd workers. Coding-theory based
techniques also allow us to pose easy-to-answer binary questions to the crowd
workers. We consider three different crowdsourcing models: systems with
independent crowd workers, systems with peer-dependent reward schemes, and
systems where workers have common sources of information. For each of these
models, we analyze classification performance with the proposed coding-based
scheme. We develop an ordering principle for the quality of crowds and describe
how system performance changes with the quality of the crowd. We also show that
pairing among workers and diversification of the questions help in improving
system performance. We demonstrate the effectiveness of the proposed
coding-based scheme using both simulated data and real datasets from Amazon
Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use
of good codes may improve the performance of the crowdsourcing task over
typical majority-voting approaches.
|
1309.3418 | A Novel Approach in detecting pose orientation of a 3D face required for
face | cs.CV | In this paper we present a novel approach that takes as input a 3D image and
gives as output its pose, i.e., it tells whether the face is oriented with
respect to the X, Y or Z axes, with angles of rotation up to 40 degrees. All
the experiments have been performed on the FRAV3D database. Applying the
proposed algorithm to the 3D facial surfaces, i.e., to 848 3D face images, our
method detected the pose correctly for 566 face images, thus giving
approximately 67% correct pose detection.
|
1309.3421 | Indexing by Latent Dirichlet Allocation and Ensemble Model | cs.IR | The contribution of this paper is two-fold. First, we present Indexing by
Latent Dirichlet Allocation (LDI), an automatic document indexing method. The
probability distributions in LDI utilize those in Latent Dirichlet Allocation
(LDA), a generative topic model that has been previously used in applications
for document retrieval tasks. However, the ad hoc applications, or their
variants with smoothing techniques as prompted by previous studies in LDA-based
language modeling, result in unsatisfactory performance as the document
representations do not accurately reflect concept space. To improve
performance, we introduce a new definition of document probability vectors in
the context of LDA and present a novel scheme for automatic document indexing
based on LDA. Second, we propose an Ensemble Model (EnM) for document
retrieval. The EnM combines basis indexing models by assigning different
weights and attempts to uncover the optimal weights to maximize the Mean
Average Precision (MAP). To solve the optimization problem, we propose an
algorithm, EnM.B, which is derived based on the boosting method. The results of
our computational experiments on benchmark data sets indicate that both the
proposed approaches are viable options for document retrieval.
|
1309.3425 | A method for nose-tip based 3D face registration using maximum intensity
algorithm | cs.CV | In this paper we present a novel technique of registering 3D images across
pose. In this context, we have taken into account the images which are aligned
across X, Y and Z axes. We have first determined the angle across which the
image is rotated with respect to X, Y and Z axes and then translation is
performed on the images. After testing the proposed method on 472 images from
the FRAV3D database, the method correctly registers 358 images, thus giving a
performance rate of 75.84%.
|
1309.3439 | Measuring the similarity of PML documents with RFID-based sensors | cs.DB cs.NI | The Electronic Product Code (EPC) Network is an important part of the
Internet of Things. The Physical Mark-Up Language (PML) is used to represent
and describe data related to objects in the EPC Network. The PML documents
that each component exchanges in an EPC Network system are XML documents based
on the PML Core schema. To manage the huge number of PML documents for tags
captured by Radio Frequency Identification (RFID) readers, it is essential to
develop high-performance technology for filtering and integrating these tag
data. In this paper, we propose an approach for measuring the similarity of
PML documents based on a Bayesian Network over several sensors. With respect
to the features of PML, before measuring the similarity we first remove the
redundant data, retaining the EPC information. On this basis, the Bayesian
Network model derived from the structure of the PML documents being compared
is constructed.
|
1309.3446 | A Systematic Approach for Interference Alignment in CSIT-less
Relay-Aided X-Networks | cs.IT math.IT | The degrees of freedom (DoF) of an X-network with M transmit and N receive
nodes utilizing interference alignment with the support of $J$ relays each
equipped with $L_j$ antennas operating in a half-duplex non-regenerative mode
is investigated. Conditions on the feasibility of interference alignment are
derived using a proper transmit strategy and a structured approach based on a
Kronecker-product representation. The advantages of this approach are twofold:
First, it extends existing results on the achievable DoF to generalized antenna
configurations. Second, it unifies the analysis for time-varying and constant
channels and provides valuable insights and interconnections between the two
channel models. It turns out that a DoF of $\nicefrac{NM}{M+N-1}$ is feasible
whenever $\sum_j L_j^2 \geq [N-1][M-1]$.
|
1309.3467 | Wireless Bidirectional Relaying, Latin Squares and Graph Vertex Coloring | cs.IT math.IT | The problem of obtaining network coding maps for the physical layer network
coded two-way relay channel is considered, using the denoise-and-forward
protocol. It is known that network coding maps used at the relay node
which ensure unique decodability at the end nodes form a Latin Square. Also, it
is known that minimum distance of the effective constellation at the relay node
becomes zero, when the ratio of the fade coefficients from the end node to the
relay node, belongs to a finite set of complex numbers determined by the signal
set used, called the singular fade states. Furthermore, it has been shown
recently that the problem of obtaining network coding maps which remove the
harmful effects of singular fade states, reduces to the one of obtaining Latin
Squares, which satisfy certain constraints called \textit{singularity removal
constraints}. In this paper, it is shown that the singularity removal
constraints along with the row and column exclusion conditions of a Latin
Square, can be compactly represented by a graph called the \textit{singularity
removal graph} determined by the singular fade state and the signal set used.
It is shown that a Latin Square which removes a singular fade state can be
obtained from a proper vertex coloring of the corresponding singularity removal
graph. The minimum number of symbols used to fill in a Latin Square which
removes a singular fade state is equal to the chromatic number of the
singularity removal graph. It is shown that for any square $M$-QAM signal set,
there exist singularity removal graphs whose chromatic numbers exceed $M$ and
hence require more than $M$ colors for vertex coloring. Also, it is shown that
for any $2^{\lambda}$-PSK signal set, $\lambda \geq 3,$ all the singularity
removal graphs can be colored using $2^{\lambda}$ colors.
|
1309.3511 | Event-Triggered State Observers for Sparse Sensor Noise/Attacks | math.OC cs.CR cs.IT cs.SY math.IT | This paper describes two algorithms for state reconstruction from sensor
measurements that are corrupted with sparse, but otherwise arbitrary, "noise".
These results are motivated by the need to secure cyber-physical systems
against a malicious adversary that can arbitrarily corrupt sensor measurements.
The first algorithm reconstructs the state from a batch of sensor measurements
while the second algorithm is able to incorporate new measurements as they
become available, in the spirit of a Luenberger observer. A distinguishing
feature of these algorithms is the use of event-triggered techniques to
improve their computational performance.
|
1309.3522 | Tail bounds via generic chaining | math.PR cs.IT math.IT | We modify Talagrand's generic chaining method to obtain upper bounds for all
p-th moments of the supremum of a stochastic process. These bounds lead to an
estimate for the upper tail of the supremum with optimal deviation parameters.
We apply our procedure to improve and extend some known deviation inequalities
for suprema of unbounded empirical processes and chaos processes. As an
application we give a significantly simplified proof of the restricted isometry
property of the subsampled discrete Fourier transform.
|
1309.3533 | Mixed Membership Models for Time Series | stat.ME cs.LG stat.ML | In this article we discuss some of the consequences of the mixed membership
perspective on time series analysis. In its most abstract form, a mixed
membership model aims to associate an individual entity with some set of
attributes based on a collection of observed data. Although much of the
literature on mixed membership models considers the setting in which
exchangeable collections of data are associated with each member of a set of
entities, it is equally natural to consider problems in which an entire time
series is viewed as an entity and the goal is to characterize the time series
in terms of a set of underlying dynamic attributes or "dynamic regimes".
Indeed, this perspective is already present in the classical hidden Markov
model, where the dynamic regimes are referred to as "states", and the
collection of states realized in a sample path of the underlying process can be
viewed as a mixed membership characterization of the observed time series. Our
goal here is to review some of the richer modeling possibilities for time
series that are provided by recent developments in the mixed membership
framework.
|
1309.3546 | On Determining Deep Holes of Generalized Reed-Solomon Codes | cs.IT math.IT | For a linear code, deep holes are defined to be vectors that are further away
from codewords than all other vectors. The problem of deciding whether a
received word is a deep hole for generalized Reed-Solomon codes is proved to be
co-NP-complete. For the extended Reed-Solomon codes $RS_q(\F_q,k)$, a
conjecture was made to classify deep holes by Cheng and Murray in 2007. Since
then a lot of effort has been made to prove the conjecture, or its various
forms. In this paper, we classify deep holes completely for generalized
Reed-Solomon codes $RS_p (D,k)$, where $p$ is a prime, $|D| > k \geqslant
\frac{p-1}{2}$. Our techniques are built on the idea of deep hole trees, and
several results concerning the Erd{\"o}s-Heilbronn conjecture.
|
1309.3582 | Multihop Routing in Ad Hoc Networks | cs.IT cs.NI math.IT | This paper presents a dual method of closed-form analysis and lightweight
simulation that enables an evaluation of the performance of mobile ad hoc
networks that is more realistic, efficient, and accurate than those found in
existing publications. Some features accommodated by the new analysis are
shadowing, exclusion and guard zones, and distance-dependent fading. Three
routing protocols are examined: least-delay, nearest-neighbor, and
maximum-progress routing. The tradeoffs among the path reliabilities, average
conditional delays, average conditional number of hops, and area spectral
efficiencies are examined.
|
1309.3591 | Optimal Power Allocation for Parameter Tracking in a Distributed
Amplify-and-Forward Sensor Network | cs.IT math.IT | We consider the problem of optimal power allocation in a sensor network where
the sensors observe a dynamic parameter in noise and coherently amplify and
forward their observations to a fusion center (FC). The FC uses the
observations in a Kalman filter to track the parameter, and we show how to find
the optimal gain and phase of the sensor transmissions under both global and
individual power constraints in order to minimize the mean squared error (MSE)
of the parameter estimate. For the case of a global power constraint, a
closed-form solution can be obtained. A numerical optimization is required for
individual power constraints, but the problem can be relaxed to a semidefinite
programming problem (SDP), and we show that the optimal result can be
constructed from the SDP solution. We also study the dual problem of minimizing
global and individual power consumption under a constraint on the MSE. As
before, a closed-form solution can be found when minimizing total power, while
the optimal solution is constructed from the output of an SDP when minimizing
the maximum individual sensor power. For purposes of comparison, we derive an
exact expression for the outage probability on the MSE for equal-power
transmission, which can serve as an upper bound for the case of optimal power
control. Finally, we present the results of several simulations to show that
the use of optimal power control provides a significant reduction in either MSE
or transmit power compared with a non-optimized approach (i.e., equal power
transmission).
|
1309.3611 | Ultrametric Component Analysis with Application to Analysis of Text and
of Emotion | cs.AI | We review the theory and practice of determining what parts of a data set are
ultrametric. It is assumed that the data set, to begin with, is endowed with a
metric, and we include discussion of how this can be brought about if a
dissimilarity, only, holds. The basis for part of the metric-endowed data set
being ultrametric is to consider triplets of the observables (vectors). We
develop a novel consensus of hierarchical clusterings. We do this in order to
have a framework (including visualization and supporting interpretation) for
the parts of the data that are determined to be ultrametric. Furthermore, a
major objective is to determine locally ultrametric relationships as opposed to
non-local ultrametric relationships. As part of this work, we also study a
particular property of our ultrametricity coefficient, namely that it is a
function of the difference between the base angles of the isosceles triangle.
This work is completed by a review of related work on consensus
hierarchies, and of a major new application, namely quantifying and
interpreting the emotional content of narrative.
|
1309.3623 | Unified Sum-BER Performance Analysis of AF MIMO Beamforming in Two-Way
Relay Networks | cs.IT math.IT | Unified performance analysis is carried out for amplify-and-forward (AF)
multiple-input-multiple-output (MIMO) beamforming (BF) two-way relay networks
in Rayleigh fading with five different relaying protocols including two novel
protocols for better performance. As a result, a novel closed-form sum-bit
error rate (BER) expression is presented that holds in unified form for all
protocols. A new closed-form high signal-to-noise-ratio (SNR) performance
characterization is also obtained in a single expression, and an analytical
high-SNR gap expression between the five protocols is provided. We compare the
performance of the five
relaying protocols with respect to sum-BER with appropriately normalized rate
and power, and show that the proposed protocol with four time slots outperforms
other protocols when transmit powers from two sources are sufficiently
different, and the one with three time slots dominates the other protocols at
high SNR when multiple relay antennas are used.
|
1309.3647 | Protecting Public OSN Posts from Unintended Access | cs.SI cs.CR | The design of secure and usable access schemes to personal data represents a
major challenge of online social networks (OSNs). The state of the art requires
prior interaction to grant access. Sharing with users who are not subscribed,
or who have not previously been accepted as contacts, is only possible via
public posts, which can easily be abused by automatic harvesting for user
profiling, targeted spear-phishing, or spamming. Moreover, users are restricted
to the access rules defined by the provider, which may be overly restrictive,
cumbersome to define, or insufficiently fine-grained.
We suggest a complementary approach that can be easily deployed in addition
to existing access control schemes, does not require any interaction, and
includes even public, unsubscribed users. It exploits the fact that different
social circles of a user share different experiences: arbitrary posts are
encrypted such that only users with sufficient knowledge about the owner can
decrypt them.
Assembling only well-established cryptographic primitives, we prove that the
security of our scheme is determined by the entropy of the required knowledge.
We consequently analyze the efficiency of an informed dictionary attack and
assess the entropy to be on par with common passwords. A fully functional
implementation is used for performance evaluations and is available for
download on the Web.
|
1309.3660 | (Failure of the) Wisdom of the crowds in an endogenous opinion dynamics
model with multiply biased agents | cs.SI cs.MA nlin.AO physics.soc-ph | We study an endogenous opinion (or, belief) dynamics model where we
endogenize the social network that models the link (`trust') weights between
agents. Our network adjustment mechanism is simple: an agent increases her
weight for another agent if that agent has been close to truth (whence, our
adjustment criterion is `past performance'). Moreover, we consider multiply
biased agents that do not learn in a fully rational manner but are subject to
persuasion bias - they learn in a DeGroot manner, via a simple `rule of thumb'
- and that have biased initial beliefs. In addition, we study this setup
under conformity, opposition, and homophily - which are recently suggested
variants of DeGroot learning in social networks - thereby taking into account
further biases agents are susceptible to. Our main focus is on crowd wisdom,
that is, on the question of whether such biased agents can adequately aggregate
dispersed information and, consequently, learn the true states of the topics
they communicate about. In particular, we present several conditions under
which wisdom fails.
|
1309.3674 | Power Allocation for Distributed BLUE Estimation with Full and Limited
Feedback of CSI | cs.IT math.IT | This paper investigates the problem of adaptive power allocation for
distributed best linear unbiased estimation (BLUE) of a random parameter at the
fusion center (FC) of a wireless sensor network (WSN). An optimal
power-allocation scheme is proposed that minimizes the $L^2$-norm of the vector
of local transmit powers, given a maximum variance for the BLUE estimator. This
scheme results in an increased lifetime of the WSN compared to similar
approaches that are based on the minimization of the sum of the local transmit
powers. The limitation of the proposed optimal power-allocation scheme is that
it requires the feedback of the instantaneous channel state information (CSI)
from the FC to local sensors, which is not practical in most applications of
large-scale WSNs. In this paper, a limited-feedback strategy is proposed that
eliminates this requirement by designing an optimal codebook for the FC using
the generalized Lloyd algorithm with modified distortion metrics. Each sensor
amplifies its analog noisy observation using a quantized version of its optimal
amplification gain, which is received by the FC and used to estimate the
unknown parameter.
|
1309.3676 | Optimized projections for compressed sensing via rank-constrained
nearest correlation matrix | cs.IT cs.LG math.IT stat.ML | Optimizing the acquisition matrix is useful for compressed sensing of signals
that are sparse in overcomplete dictionaries, because the acquisition matrix
can be adapted to the particular correlations of the dictionary atoms. In this
paper a novel formulation of the optimization problem is proposed, in the form
of a rank-constrained nearest correlation matrix problem. Furthermore,
improvements for three existing optimization algorithms are introduced, which
are shown to be particular instances of the proposed formulation. Simulation
results show notable improvements and superior robustness in sparse signal
recovery.
|
1309.3692 | Sufficient Conditions on the Optimality of Myopic Sensing in
Opportunistic Channel Access: A Unifying Framework | cs.IT math.IT | This paper considers a widely studied stochastic control problem arising from
opportunistic spectrum access (OSA) in a multi-channel system, with the goal of
providing a unifying analytical framework whereby a number of prior results may
be viewed as special cases. Specifically, we consider a single wireless
transceiver/user with access to $N$ channels, each modeled as an iid
discrete-time two-state Markov chain. In each time step the user is allowed to
sense $k\leq N$ channels, and subsequently use up to $m\leq k$ channels out of
those sensed to be available. Channel sensing is assumed to be perfect, and for
each channel use in each time step the user gets a unit reward. The user's
objective is to maximize its total discounted or average reward over a finite
or infinite horizon. This problem has previously been studied in various
special cases including $k=1$ and $m=k\leq N$, often cast as a restless bandit
problem, with optimality results derived for a myopic policy that seeks to
maximize the immediate one-step reward when the two-state Markov chain model is
positively correlated. In this paper we study the general problem with $1\leq
m\leq k\leq N$, and derive sufficient conditions under which the myopic policy
is optimal for the finite and infinite horizon reward criteria, respectively.
It is shown that these results reduce to those derived in prior studies under
the corresponding special cases, and thus may be viewed as a set of unifying
optimality conditions. Numerical examples are also presented to highlight how
and why an optimal policy may deviate from the otherwise-optimal myopic sensing
given additional exploration opportunities, i.e., when $m<k$.
|
1309.3697 | Group Learning and Opinion Diffusion in a Broadcast Network | cs.LG | We analyze the following group learning problem in the context of opinion
diffusion: Consider a network with $M$ users, each facing $N$ options. In a
discrete time setting, at each time step, each user chooses $K$ out of the $N$
options and receives randomly generated rewards, whose statistics depend on the
options chosen as well as the user itself, and are unknown to the users. Each
user aims to maximize their expected total rewards over a certain time horizon
through an online learning process, i.e., a sequence of exploration (sampling
the return of each option) and exploitation (selecting empirically good
options) steps.
Within this context we consider two group learning scenarios, (1) users with
uniform preferences and (2) users with diverse preferences, and examine how a
user should construct its learning process to best extract information from
others' decisions and experiences so as to maximize its own reward. Performance
is measured in {\em weak regret}, the difference between the user's total
reward and the reward from a user-specific best single-action policy (i.e.,
always selecting the set of options generating the highest mean rewards for
this user). Within each scenario we also consider two cases: (i) when users
exchange full information, meaning they share the actual rewards they obtained
from their choices, and (ii) when users exchange limited information, e.g.,
only their choices but not rewards obtained from these choices.
|
1309.3699 | Local Support Vector Machines: Formulation and Analysis | stat.ML cs.AI cs.LG | We provide a formulation for Local Support Vector Machines (LSVMs) that
generalizes previous formulations, and brings out the explicit connections to
local polynomial learning used in nonparametric estimation literature. We
investigate the simplest type of LSVMs called Local Linear Support Vector
Machines (LLSVMs). For the first time we establish conditions under which
LLSVMs make Bayes consistent predictions at each test point $x_0$. We also
establish rates at which the local risk of LLSVMs converges to the minimum
value of expected local risk at each point $x_0$. Using stability arguments we
establish generalization error bounds for LLSVMs.
|
1309.3704 | To Stay Or To Switch: Multiuser Dynamic Channel Access | cs.SY cs.IT math.IT | In this paper we study opportunistic spectrum access (OSA) policies in a
multiuser multichannel random access cognitive radio network, where users
perform channel probing and switching in order to obtain better channel
conditions or higher instantaneous transmission quality. However, unlike many
prior works in this area, including those on channel probing and switching
policies for a single user to exploit spectral diversity, and on probing and
access policies for multiple users over a single channel to exploit temporal
and multiuser diversity, in this study we consider the collective switching of
multiple users over multiple channels. In addition, we consider finite
arrivals, i.e., users are not assumed to always have data to send, and demands
for channels follow a certain arrival process. Under such a scenario, the users'
ability to opportunistically exploit temporal diversity (the temporal variation
in channel quality over a single channel) and spectral diversity (quality
variation across multiple channels at a given time) is greatly affected by the
level of congestion in the system. We investigate the optimal decision process
in this case, and evaluate the extent to which congestion affects potential
gains from opportunistic dynamic channel switching.
|
1309.3716 | Revisiting Optimal Power Control: its Dual Effect on SNR and Contention | cs.SY | In this paper we study a transmission power tuning problem in densely
deployed 802.11 Wireless Local Area Networks (WLANs). While previous papers
emphasize tuning transmission power at either the PHY or the MAC layer
separately, optimally setting each Access Point's (AP's) transmission power in
a densely deployed 802.11 network while accounting for its dual effects on both
layers remains an open problem. In this work, we design a measure that
evaluates the impact of transmission power on network performance at both the
PHY and MAC layers. We show
that such an optimization problem is intractable and then we investigate and
develop an analytical framework to allow simple yet efficient solutions.
Through simulations and numerical results, we observe clear benefits of the
dual-effect model compared to solutions optimizing solely on a single layer;
therefore, we conclude that tuning transmission power from a dual layer
(PHY-MAC) point of view is essential for dense WLANs. We further develop a
game-theoretic framework and investigate the above power-tuning problem in a
strategic network. We show that with decentralized and strategic users, the
Nash Equilibrium (N.E.) of the corresponding game is inefficient, and we
therefore propose a punishment-based mechanism to induce users to adopt the
socially optimal strategy profile under both perfect and imperfect sensing
environments.
|
1309.3720 | The Incidence and Cross Methods for Efficient Radar Detection | cs.IT math.IT | The purpose of a radar system is to detect the position and velocity of
targets around us. The radar transmits a waveform, which is reflected back from
the targets, and an echo waveform is received. In a commonly used model, the
echo is a superposition of several delay-Doppler shifts of the transmitted
waveform, and a noise component. The delay and Doppler parameters encode,
respectively, the distances, and relative velocities, between the targets and
the radar. Using standard digital-to-analog and sampling techniques, the
estimation task of the delay-Doppler parameters, which involves waveforms, is
reduced to a problem for complex sequences of finite length N. In these notes
we introduce the Incidence and Cross methods for radar detection. One of their
advantages is robustness to an inhomogeneous radar scene, i.e., for sensing
small targets in the vicinity of large objects. The arithmetic complexity of
the Incidence and Cross methods is O(N log N + r^3) and O(N log N + r^2),
respectively, for r targets. In a noisy environment, these are the fastest
radar detection techniques. Both methods employ chirp sequences, which are
commonly used by radar systems, and hence are attractive for real world
applications.
|
1309.3733 | Discovery of Approximate Differential Dependencies | cs.DB | Differential dependencies (DDs) capture the relationships between data
columns of relations. They are more general than functional dependencies
(FDs); the difference is that DDs are defined on the distances between values
of two tuples, not directly on the values. Because of this difference,
algorithms for discovering FDs from data find only special DDs, not all DDs,
and therefore are not applicable to DD discovery. In this paper, we propose an
algorithm that discovers DDs from data by fixing the left-hand side of a
candidate DD to determine the right-hand side. We also show some
properties of DDs and conduct a comprehensive analysis on how sampling affects
the DDs discovered from data.
|
1309.3745 | An Optimizer's Approach to Stochastic Control Problems with Nonclassical
Information Structures | math.OC cs.IT math.IT | We present an optimization-based approach to stochastic control problems with
nonclassical information structures. We cast these problems equivalently as
optimization problems on joint distributions. The resulting problems are
necessarily nonconvex. Our approach to solving them is through convex
relaxation. We solve the instance studied by Bansal and Basar with a particular
application of this approach that uses the data processing inequality for
constructing the convex relaxation. Using certain f-divergences, we obtain a
new, larger set of inverse optimal cost functions for such problems. Insights
are obtained on the relation between the structure of cost functions and of
convex relaxations for inverse optimal control.
|
1309.3752 | Novel Repair-by-Transfer Codes and Systematic Exact-MBR Codes with Lower
Complexities and Smaller Field Sizes | cs.IT math.IT | The $(n,k,d)$ regenerating code is a class of $(n,k)$ erasure codes with the
capability to recover a lost code fragment from other $d$ existing code
fragments. This paper concentrates on the design of exact regenerating codes at
Minimum Bandwidth Regenerating (MBR) points. For $d=n-1$, a class of
$(n,k,d=n-1)$ Exact-MBR codes, termed repair-by-transfer codes, has been
developed in prior work to avoid arithmetic operations in the node repair
process. The first result of this paper presents a new class of
repair-by-transfer codes via congruent transformations. As compared with the
prior works, the advantages of the proposed codes include: i) the minimum
required finite field size is significantly reduced from $n \choose 2$ to $n$;
ii) the encoding complexity is decreased from $n^4$ to $n^3$. As shown in
simulations, the proposed repair-by-transfer codes have lower computational
overhead when $n$ is greater than a specific constant. The second result of
this paper presents a new form of coding matrix for product-matrix Exact-MBR
codes. The proposed coding matrix offers a number of advantages: i) the
minimum required finite field size is reduced from $n-k+d$ to $n$; ii) fast
Reed-Solomon erasure coding algorithms can be applied to the Exact-MBR codes to
reduce the time complexities.
|
1309.3775 | Beyond the quantum formalism: consequences of a neural-oscillator model
to quantum cognition | physics.bio-ph cs.AI q-bio.NC quant-ph | In this paper we present a neural oscillator model of stimulus response
theory that exhibits quantum-like behavior. We then show that without adding
any additional assumptions, a quantum model constructed to fit observable
pairwise correlations has no predictive power over the unknown triple moment,
obtainable through the activation of multiple oscillators. We compare this with
the results obtained in de Barros (2013), where a criterion of rationality gives
optimal ranges for the triple moment.
|
1309.3792 | Exact Complexity: The Spectral Decomposition of Intrinsic Computation | cond-mat.stat-mech cs.IT math.IT nlin.CD nlin.CG | We give exact formulae for a wide family of complexity measures that capture
the organization of hidden nonlinear processes. The spectral decomposition of
operator-valued functions leads to closed-form expressions involving the full
eigenvalue spectrum of the mixed-state presentation of a process's
epsilon-machine causal-state dynamic. Measures include correlation functions,
power spectra, past-future mutual information, transient and synchronization
informations, and many others. As a result, a direct and complete analysis of
intrinsic computation is now available for the temporal organization of
finitary hidden Markov models and nonlinear dynamical systems with generating
partitions and for the spatial organization in one-dimensional systems,
including spin systems, cellular automata, and complex materials via chaotic
crystallography.
|
1309.3797 | Robustness of skeletons and salient features in networks | physics.soc-ph cs.SI | Real world network datasets often contain a wealth of complex topological
information. In the face of these data, researchers often employ methods to
extract reduced networks containing the most important structures or pathways,
sometimes known as `skeletons' or `backbones'. Numerous such methods have been
developed. Yet data are often noisy or incomplete, with unknown numbers of
missing or spurious links. Relatively little effort has gone into understanding
how salient network extraction methods perform in the face of noisy or
incomplete networks. We study this problem by comparing how the salient
features extracted by two popular methods change when networks are perturbed,
either by deleting nodes or links, or by randomly rewiring links. Our results
indicate that simple, global statistics for skeletons can be accurately
inferred even for noisy and incomplete network data, but it is crucial to have
complete, reliable data to use the exact topologies of skeletons or backbones.
These results also help us understand how skeletons respond to damage to the
network itself, as in an attack scenario.
|
1309.3808 | Low-Complexity Design of Generalized Block Diagonalization Precoding
Algorithms for Multiuser MIMO Systems | cs.IT math.IT | Block diagonalization (BD) based precoding techniques are well-known linear
transmit strategies for multiuser MIMO (MU-MIMO) systems. By employing BD-type
precoding algorithms at the transmit side, the MU-MIMO broadcast channel is
decomposed into multiple independent parallel single-user MIMO (SU-MIMO)
channels, and the maximum diversity order is achieved at high data rates. The main
computational complexity of BD-type precoding algorithms comes from two
singular value decomposition (SVD) operations, which depend on the number of
users and the dimensions of each user's channel matrix. In this work,
low-complexity precoding algorithms are proposed to reduce the computational
complexity and improve the performance of BD-type precoding algorithms. We
devise a strategy based on a common channel inversion technique, QR
decompositions, and lattice reductions to decouple the MU-MIMO channel into
equivalent SU-MIMO channels. Analytical and simulation results show that the
proposed precoding algorithms can achieve a comparable sum-rate performance as
BD-type precoding algorithms, substantial bit error rate (BER) performance
gains, and a simplified receiver structure, while requiring a much lower
complexity.
|
1309.3809 | Visual-Semantic Scene Understanding by Sharing Labels in a Context
Network | cs.CV cs.LG stat.ML | We consider the problem of naming objects in complex, natural scenes
containing widely varying object appearance and subtly different names.
Informed by cognitive research, we propose an approach based on sharing
context-based object hypotheses between visual and lexical spaces. To this end, we
present the Visual Semantic Integration Model (VSIM) that represents object
labels as entities shared between semantic and visual contexts and performs
inference on a new image by updating labels through context switching. At the
core of VSIM is a
semantic Pachinko Allocation Model and a visual nearest neighbor Latent
Dirichlet Allocation Model. For inference, we derive an iterative Data
Augmentation algorithm that pools the label probabilities and maximizes the
joint label posterior of an image. Our model surpasses the performance of
state-of-the-art methods in several visual tasks on the challenging SUN09 dataset.
|
1309.3816 | Multiplicative Approximations, Optimal Hypervolume Distributions, and
the Choice of the Reference Point | cs.NE | Many optimization problems arising in applications have to consider several
objective functions at the same time. Evolutionary algorithms seem to be a very
natural choice for dealing with multi-objective problems as the population of
such an algorithm can be used to represent the trade-offs with respect to the
given objective functions. In this paper, we contribute to the theoretical
understanding of evolutionary algorithms for multi-objective problems. We
consider indicator-based algorithms whose goal is to maximize the hypervolume
for a given problem by distributing {\mu} points on the Pareto front. To gain
new theoretical insights into the behavior of hypervolume-based algorithms we
compare their optimization goal to the goal of achieving an optimal
multiplicative approximation ratio. Our studies are carried out for different
Pareto front shapes of bi-objective problems. For the class of linear fronts
and a class of convex fronts, we prove that maximizing the hypervolume gives
the best possible approximation ratio when assuming that the extreme points
have to be included in both distributions of the points on the Pareto front.
Furthermore, we investigate the influence of the choice of the reference point
on the approximation behavior of hypervolume-based approaches and examine Pareto
fronts of different shapes by numerical calculations.
|
1309.3842 | Estimation of intrinsic volumes from digital grey-scale images | math.ST cs.CV stat.TH | Local algorithms are common tools for estimating intrinsic volumes from
black-and-white digital images. However, these algorithms are typically biased
in the design based setting, even when the resolution tends to infinity.
Moreover, images recorded in practice are most often blurred grey-scale images
rather than black-and-white. In this paper, an extended definition of local
algorithms, applying directly to grey-scale images without thresholding, is
suggested. We investigate the asymptotics of these new algorithms when the
resolution tends to infinity and apply this to construct estimators for surface
area and integrated mean curvature that are asymptotically unbiased in certain
natural settings.
|
1309.3848 | SEEDS: Superpixels Extracted via Energy-Driven Sampling | cs.CV | Superpixel algorithms aim to over-segment the image by grouping pixels that
belong to the same object. Many state-of-the-art superpixel algorithms rely on
minimizing objective functions to enforce color homogeneity. The optimization
is accomplished by sophisticated methods that progressively build the
superpixels, typically by adding cuts or growing superpixels. As a result,
they are computationally too expensive for real-time applications. We introduce
a new approach based on a simple hill-climbing optimization. Starting from an
initial superpixel partitioning, it continuously refines the superpixels by
modifying the boundaries. We define a robust and fast-to-evaluate energy
function, based on enforcing color similarity between the boundaries and the
superpixel color histogram. In a series of experiments, we show that we achieve
an excellent compromise between accuracy and efficiency. We are able to
achieve a performance comparable to the state-of-the-art, but in real time on
a single Intel i7 CPU at 2.8 GHz.
|
1309.3864 | Unequal Error Protection by Partial Superposition Transmission Using
LDPC Codes | cs.IT math.IT | In this paper, we consider designing low-density parity-check (LDPC) coded
modulation systems to achieve unequal error protection (UEP). We propose a new
UEP approach by partial superposition transmission called UEP-by-PST. In the
UEP-by-PST system, the information sequence is divided into two parts, the
more important data (MID) and the less important data (LID), both of which are
coded with LDPC codes. The codeword that corresponds to the MID is superimposed
on the codeword that corresponds to the LID. The system performance can be
analyzed by using discretized density evolution. Also proposed in this paper is
a criterion from a practical point of view to compare the efficiencies of
different UEP approaches. Numerical results show that, over both additive white
Gaussian noise (AWGN) channels and uncorrelated Rayleigh fading channels, 1)
UEP-by-PST provides higher coding gain for the MID compared with the
traditional equal error protection (EEP) approach, but with negligible
performance loss for the LID; 2) UEP-by-PST is more efficient with the proposed
practical criterion than the UEP approach in the digital video broadcasting
(DVB) system.
|
1309.3874 | Finding an infection source under the SIS model | cs.SI q-bio.PE | We consider the problem of identifying an infection source based only on an
observed set of infected nodes in a network, assuming that the infection
process follows a Susceptible-Infected-Susceptible (SIS) model. We derive an
estimator based on estimating the most likely infection source associated with
the most likely infection path. Simulation results on regular trees suggest
that our estimator performs consistently better than the minimum distance
centrality based heuristic.
|
1309.3877 | A Metric-learning based framework for Support Vector Machines and
Multiple Kernel Learning | cs.LG | Most metric learning algorithms, as well as Fisher's Discriminant Analysis
(FDA), optimize some cost function of different measures of within- and
between-class distances. On the other hand, Support Vector Machines (SVMs) and
several Multiple Kernel Learning (MKL) algorithms are based on the SVM large
margin theory. Recently, efforts have been made to analyze SVMs from a metric
learning perspective and to develop new algorithms that build on the strengths
of each. Inspired by
the metric learning interpretation of SVM, we develop here a new
metric-learning based SVM framework in which we incorporate metric learning
concepts within SVM. We extend the optimization problem of SVM to include some
measure of the within-class distance and along the way we develop a new
within-class distance measure which is appropriate for SVM. In addition, we
adopt the same approach for MKL and show that it can be also formulated as a
Mahalanobis metric learning problem. Our end result is a number of SVM/MKL
algorithms that incorporate metric learning concepts. We experiment with them
on a set of benchmark datasets and observe important predictive performance
improvements.
|
1309.3888 | User-Relatedness and Community Structure in Social Interaction Networks | cs.SI physics.soc-ph | With social media and the corresponding social and ubiquitous applications
finding their way into everyday life, there is a rapidly growing amount of user
generated content yielding explicit and implicit network structures. We
consider social activities and phenomena as proxies for user relatedness. Such
activities are represented in so-called social interaction networks or evidence
networks, with different degrees of explicitness. We focus on evidence networks
containing relations on users, which are represented by connections between
individual nodes. Explicit interaction networks are then created by specific
user actions, for example, when building a friend network. On the other hand,
more implicit networks capture user traces or evidences of user actions as
observed in Web portals, blogs, resource sharing systems, and many other social
services. These implicit networks can be applied for a broad range of analysis
methods instead of using expensive gold-standard information.
In this paper, we analyze different properties of a set of networks in social
media. We show that there are dependencies and correlations between the
networks. These allow for drawing reciprocal conclusions concerning pairs of
networks, based on the assessment of structural correlations and ranking
interchangeability. Additionally, we show how these inter-network correlations
can be used for assessing the results of structural analysis techniques, e.g.,
community mining methods.
|
1309.3901 | A new design criterion for spherically-shaped division algebra-based
space-time codes | cs.IT math.IT | This work considers normalized inverse determinant sums as a tool for
analyzing the performance of division algebra based space-time codes for
multiple antenna wireless systems. A general union bound based code design
criterion is obtained as a main result. In our previous work, the behavior of
inverse determinant sums was analyzed using point counting techniques for Lie
groups; it was shown that the asymptotic growth exponents of these sums
correctly describe the diversity-multiplexing gain trade-off of the space-time
code for some multiplexing gain ranges. This paper focuses on the constant
terms of the inverse determinant sums, which capture the coding gain behavior.
Pursuing the Lie group approach, a tighter asymptotic bound is derived,
making it possible to compute the constant terms for several classes of space-time codes
appearing in the literature. The resulting design criterion suggests that the
performance of division algebra based codes depends on several fundamental
algebraic invariants of the underlying algebra.
|
1309.3908 | Exploring Image Virality in Google Plus | cs.SI cs.CY cs.MM physics.soc-ph | Reactions to posts in an online social network show different dynamics
depending on several textual features of the corresponding content. Do similar
dynamics exist when images are posted? Exploiting a novel dataset of posts,
gathered from the most popular Google+ users, we try to answer this question.
We describe several virality phenomena that emerge when taking into
account visual characteristics of images (such as orientation, mean saturation,
etc.). We also provide hypotheses and potential explanations for the dynamics
behind them, and include cases for which common-sense expectations do not hold
true in our experiments.
|
1309.3910 | Robustness analysis of finite precision implementations | cs.SE cs.SY | A desirable property of control systems is robustness to inputs: small
perturbations of a system's inputs should cause only small perturbations of
its outputs. But it is not clear whether this property is
maintained at the implementation level, when two close inputs can lead to very
different execution paths. The problem becomes particularly crucial when
considering finite precision implementations, where any elementary computation
can be affected by a small error. In this context, almost every test is
potentially unstable, that is, for a given input, the computed (finite
precision) path may differ from the ideal (same computation in real numbers)
path. Still, state-of-the-art error analyses do not consider this possibility
and rely on the stable test hypothesis, that control flows are identical. If
there is a discontinuity between the treatments in the two branches, that is
the conditional block is not robust to uncertainties, the error bounds can be
unsound.
We propose here a new abstract-interpretation based error analysis of finite
precision implementations, which is sound in the presence of unstable tests. It
automatically bounds the discontinuity error coming from the difference between
the float and real values when there is a path divergence, and introduces a new
error term labeled by the test that introduced this potential discontinuity.
This gives a tractable error analysis, implemented in our static analyzer
FLUCTUAT: we present results on representative extracts of control programs.
|
1309.3917 | Strategic Planning in Air Traffic Control as a Multi-objective
Stochastic Optimization Problem | cs.AI | With the objective of handling the airspace sector congestion subject to
continuously growing air traffic, we suggest to create a collaborative working
plan during the strategic phase of air traffic control. The plan obtained via a
new decision support tool presented in this article consists of a schedule for
controllers, which specifies time of overflight on the different waypoints of
the flight plans. To this end, we believe that the decision-support tool
should directly model the uncertainty at the trajectory level in order to
propagate it to the sector level. Then, the probability of
congestion for any sector in the airspace can be computed. Since air traffic
regulations and sector congestion are antagonist, we designed and implemented a
multi-objective optimization algorithm for determining the best trade-off
between these two criteria. The solution comes up as a set of alternatives for
the multi-sector planner where the severity of the congestion cost is
adjustable. In this paper, the Non-dominated Sorting Genetic Algorithm
(NSGA-II) was used to solve an artificial benchmark problem involving 24
aircraft and 11 sectors, and provides a good approximation of the
Pareto front.
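The congestion/delay trade-off above rests on Pareto dominance between the two objectives. As a hedged illustration (not the authors' implementation), a minimal non-dominated front extraction for two minimized objectives might look like this; the objective values are made-up examples:

```python
def dominates(a, b):
    """True if objective vector a is no worse than b everywhere (minimization)
    and strictly better in at least one objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the mutually non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (congestion, delay) pairs for candidate plans.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
front = pareto_front(candidates)
```

The multi-sector planner would then pick one alternative from `front` according to how much congestion severity is acceptable.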
|
1309.3921 | Computational Methods for Probabilistic Inference of Sector Congestion
in Air Traffic Management | cs.AI | This article addresses the issue of computing the expected cost functions
from a probabilistic model of the air traffic flow and capacity management. The
Clenshaw-Curtis quadrature is compared to Monte-Carlo algorithms defined
specifically for this problem. By tailoring the algorithms to this model, we
reduce the computational burden in order to simulate real instances. The study
shows that the Monte-Carlo algorithm is more sensitive to the amount of
uncertainty in the system, but has the advantage of returning a result with the
associated accuracy on demand. The performances for both approaches are
comparable for the computation of the expected cost of delay and the expected
cost of congestion. Finally, this study shows some evidence that the
simulation of the proposed probabilistic model is tractable for realistic
instances.
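As a rough sketch of the Monte-Carlo side of the comparison (the cost function and input distribution here are illustrative assumptions, not the article's traffic model), a plain estimator returns both an expected cost and, on demand, its sampling accuracy:

```python
import random
from math import sqrt

def monte_carlo_expectation(cost, sample, n=100_000, seed=0):
    """Estimate E[cost(X)] and its standard error by plain Monte-Carlo."""
    rng = random.Random(seed)
    values = [cost(sample(rng)) for _ in range(n)]
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, sqrt(var / n)

# Example: E[x^2] for x uniform on [0, 1] is exactly 1/3.
est, stderr = monte_carlo_expectation(lambda x: x * x, lambda rng: rng.random())
```

The returned standard error is what lets the method report "a result with the associated accuracy on demand".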
|
1309.3945 | A Neural Network based Approach for Predicting Customer Churn in
Cellular Network Services | cs.NE cs.CE | Marketing literature states that it is more costly to engage a new customer
than to retain an existing loyal customer. Churn prediction models are
developed by academics and practitioners to effectively manage and control
customer churn in order to retain existing customers. As churn management is an
important activity for companies to retain loyal customers, the ability to
correctly predict customer churn is necessary. As the cellular network services
market becomes more competitive, customer churn management has become a
crucial task for mobile communication operators. This paper proposes a neural
network based approach to predict customer churn in subscription of cellular
wireless services. The results of experiments indicate that the neural network
based approach can predict customer churn.
|
1309.3946 | Using Self-Organizing Maps for Sentiment Analysis | cs.IR cs.CL cs.NE | Web 2.0 services have enabled people to express their opinions, experience
and feelings in the form of user-generated content. Sentiment analysis or
opinion mining involves identifying, classifying and aggregating opinions as
per their positive or negative polarity. This paper investigates the efficacy
of different implementations of Self-Organizing Maps (SOM) for sentiment based
visualization and classification of online reviews. Specifically, this paper
implements the SOM algorithm for both supervised and unsupervised learning from
text documents. The unsupervised SOM algorithm is implemented for sentiment
based visualization and classification tasks. For supervised sentiment
analysis, a competitive learning algorithm known as Learning Vector
Quantization is used. Both algorithms are also compared with their respective
multi-pass implementations where a quick rough ordering pass is followed by a
fine tuning pass. The experimental results on the online movie review data set
show that SOMs are well suited for sentiment based classification and sentiment
polarity visualization.
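As a hedged sketch of the unsupervised side (a toy 1-D map in pure Python, not the paper's implementation; grid size, learning rate, and data are assumptions), a SOM update pulls the best-matching unit and its grid neighbors toward each input:

```python
def bmu(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    dists = [sum((w - v) ** 2 for w, v in zip(unit, x)) for unit in weights]
    return dists.index(min(dists))

def som_train(weights, data, epochs=20, lr=0.3, radius=1):
    """Train a 1-D SOM: winner and its grid neighbors move toward each input."""
    for _ in range(epochs):
        for x in data:
            win = bmu(weights, x)
            for i, unit in enumerate(weights):
                if abs(i - win) <= radius:  # neighborhood on the 1-D grid
                    weights[i] = [w + lr * (v - w) for w, v in zip(unit, x)]
    return weights

# Two well-separated clusters should claim opposite ends of the map.
data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
weights = [[0.2, 0.1], [0.5, 0.5], [0.8, 0.9]]
som_train(weights, data)
```

In the sentiment setting, each input would be a document vector, and the trained map positions reviews for polarity visualization.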
|
1309.3949 | Performance Investigation of Feature Selection Methods | cs.IR cs.CL cs.LG | Sentiment analysis or opinion mining has become an open research domain after
proliferation of Internet and Web 2.0 social media. People express their
attitudes and opinions on social media including blogs, discussion forums,
tweets, etc., and sentiment analysis concerns detecting and extracting
sentiment or opinion from online text. Sentiment based text classification is
different from topical text classification since it involves discrimination
based on expressed opinion on a topic. Feature selection is significant for
sentiment analysis as the opinionated text may have high dimensions, which can
adversely affect the performance of sentiment analysis classifier. This paper
explores applicability of feature selection methods for sentiment analysis and
investigates their performance for classification in terms of recall, precision,
and accuracy. Five feature selection methods (Document Frequency, Information
Gain, Gain Ratio, Chi Squared, and Relief-F) and three popular sentiment
feature lexicons (HM, GI and Opinion Lexicon) are investigated on movie reviews
corpus with a size of 2000 documents. The experimental results show that
Information Gain gave consistent results and Gain Ratio performed best overall
for sentiment feature selection, while the sentiment lexicons gave poor
performance. Furthermore, we found that the performance of the classifier
depends on selecting an appropriate number of representative features from the
text.
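As an illustration of one of the five selectors, Information Gain for a binary term and binary sentiment label can be sketched as follows (the toy labels are made-up examples, not the 2000-document movie-review corpus):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    if not labels:
        return 0.0
    out = 0.0
    for c in set(labels):
        p = labels.count(c) / len(labels)
        out -= p * log2(p)
    return out

def info_gain(term_present, labels):
    """Class entropy minus entropy conditioned on term presence/absence."""
    n = len(labels)
    with_t = [l for t, l in zip(term_present, labels) if t]
    without = [l for t, l in zip(term_present, labels) if not t]
    cond = (len(with_t) / n) * entropy(with_t) + (len(without) / n) * entropy(without)
    return entropy(labels) - cond

labels = ["pos", "pos", "neg", "neg"]
perfect = info_gain([1, 1, 0, 0], labels)   # term perfectly predicts sentiment
useless = info_gain([1, 0, 1, 0], labels)   # term independent of sentiment
```

Ranking all candidate terms by this score and keeping the top-k is the dimensionality reduction referred to above.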
|
1309.3957 | Autocatalysis in Reaction Networks | math.DS cs.CE q-bio.MN | The persistence conjecture is a long-standing open problem in chemical
reaction network theory. It concerns the behavior of solutions to coupled ODE
systems that arise from applying mass-action kinetics to a network of chemical
reactions. The idea is that if all reactions are reversible in a weak sense,
then no species can go extinct. A notion that has been found useful in thinking
about persistence is that of "critical siphon." We explore the combinatorics of
critical siphons, with a view towards the persistence conjecture. We introduce
the notions of "drainable" and "self-replicable" (or autocatalytic) siphons. We
show that: every minimal critical siphon is either drainable or
self-replicable; reaction networks without drainable siphons are persistent;
and non-autocatalytic weakly-reversible networks are persistent. Our results
clarify that the difficulties in proving the persistence conjecture are
essentially due to competition between drainable and self-replicable siphons.
|
1309.3959 | Bounded Confidence Opinion Dynamics in a Social Network of Bayesian
Decision Makers | cs.SI physics.soc-ph | Bounded confidence opinion dynamics model the propagation of information in
social networks. However in the existing literature, opinions are only viewed
as abstract quantities without semantics rather than as part of a
decision-making system. In this work, opinion dynamics are examined when agents
are Bayesian decision makers that perform hypothesis testing or signal
detection, and the dynamics are applied to prior probabilities of hypotheses.
Bounded confidence is defined on prior probabilities through Bayes risk error
divergence, the appropriate measure between priors in hypothesis testing. This
definition contrasts with the measure used between opinions in standard models:
absolute error. It is shown that the rapid convergence of prior probabilities
to a small number of limiting values is similar to that seen in the standard
Krause-Hegselmann model. The most interesting finding in this work is that the
number of these limiting values and the time to convergence changes with the
signal-to-noise ratio in the detection task. The number of final values or
clusters is maximal at intermediate signal-to-noise ratios, suggesting that the
most contentious issues lead to the largest number of factions. It is at these
same intermediate signal-to-noise ratios at which the degradation in detection
performance of the aggregate vote of the decision makers is greatest in
comparison to the Bayes optimal detection performance.
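For reference, the standard Krause-Hegselmann baseline that the abstract compares against (bounded confidence under absolute error) can be sketched in a few lines; the confidence bound and initial opinions are illustrative assumptions:

```python
def hk_step(opinions, eps):
    """One synchronous Hegselmann-Krause update: each agent averages the
    opinions of all agents within confidence bound eps (absolute error)."""
    new = []
    for x in opinions:
        neighbors = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(neighbors) / len(neighbors))
    return new

def hk_run(opinions, eps, steps=50):
    for _ in range(steps):
        nxt = hk_step(opinions, eps)
        if max(abs(a - b) for a, b in zip(nxt, opinions)) < 1e-9:
            break
        opinions = nxt
    return opinions

# Opinions rapidly collapse into a small number of clusters.
final = hk_run([0.0, 0.1, 0.2, 0.8, 0.9, 1.0], eps=0.25)
```

In the paper's variant, the absolute-error test `abs(y - x) <= eps` would be replaced by a Bayes risk error divergence between prior probabilities.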
|
1309.3964 | An Investigation of Data Privacy and Utility Preservation using KNN
Classification as a Gauge | cs.CR cs.DB | Organizations are obligated by law to safeguard the privacy of
individuals when handling data sets containing personally identifiable
information (PII). Nevertheless, during the process of data privatization, the
utility or usefulness of the privatized data diminishes. Yet achieving the
optimal balance between data privacy and utility has been documented as
an NP-hard challenge. In this study, we investigate data privacy and utility
preservation using KNN machine learning classification as a gauge.
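A minimal sketch of the "gauge" idea (toy data, noise model, and k are assumptions, not the study's setup): classify with a k-NN model before and after privatization and compare accuracies:

```python
import random

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def accuracy(train, labels, test, test_labels, k=3):
    hits = sum(knn_predict(train, labels, x, k) == y
               for x, y in zip(test, test_labels))
    return hits / len(test)

random.seed(0)
train = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
labels = ["a", "a", "a", "b", "b", "b"]
test, test_labels = [[0.5, 0.5], [5.5, 5.5]], ["a", "b"]

utility_before = accuracy(train, labels, test, test_labels)
# Privatize by adding noise; strong enough noise is expected to erode utility.
noisy = [[v + random.gauss(0, 0.3) for v in row] for row in train]
utility_after = accuracy(noisy, labels, test, test_labels)
```

The gap `utility_before - utility_after` quantifies the utility lost to privatization.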
|
1309.3975 | Problem Complexity Research from Energy Perspective | cs.CC cs.IT math.IT physics.pop-ph | Computational complexity is a particularly important subject. In this paper,
the idea of Landauer's principle is extended by mapping three classic problems
(sorting, ordered searching, and finding the maximum of N unordered numbers)
onto the Maxwell's demon thought experiment. The problems' complexity is
defined on an entropy basis, and the minimum energy required to solve them is
rigorously deduced from the perspective of energy (entropy) and the second law
of thermodynamics. The theoretical energy consumed by real programs and by the
basic operators of a classical computer is then analyzed, and lower bounds on
the time complexity of all possible algorithms for the three problems are
derived in this way. A lower bound is also deduced for the problem of
multiplying two n*n matrices. Finally, we discuss why reversible computation
is impossible and the possibility of a super-linear energy consumption
capacity, which may be the power behind quantum computation, and propose a
conjecture that may help prove NP != P. The study brings a fresh and profound
understanding of computational complexity.
|
1309.3985 | The ADI iteration for Lyapunov equations implicitly performs H2
pseudo-optimal model order reduction | math.NA cs.SY math.DS | Two approaches for approximating the solution of large-scale Lyapunov
equations are considered: the alternating direction implicit (ADI) iteration
and projective methods by Krylov subspaces. A link between them is presented by
showing that the ADI iteration can always be identified by a Petrov-Galerkin
projection with rational block Krylov subspaces. Then a unique Krylov-projected
dynamical system can be associated with the ADI iteration, which is proven to
be an H2 pseudo-optimal approximation. This includes the generalization of
previous results on H2 pseudo-optimality to the multivariable case.
Additionally, a low-rank formulation of the residual in the Lyapunov equation
is presented, which is well-suited for implementation, and which yields a
measure of the "obliqueness" that the ADI iteration is associated with.
|
1309.4009 | Access Patterns for Robots and Humans in Web Archives | cs.DL cs.IR | Although user access patterns on the live web are well-understood, there has
been no corresponding study of how users, both humans and robots, access web
archives. Based on samples from the Internet Archive's public Wayback Machine,
we propose a set of basic usage patterns: Dip (a single access), Slide (the
same page at different archive times), Dive (different pages at approximately
the same archive time), and Skim (lists of what pages are archived, i.e.,
TimeMaps). Robots are limited almost exclusively to Dips and Skims, but human
accesses are more varied between all four types. Robots outnumber humans 10:1
in terms of sessions, 5:4 in terms of raw HTTP accesses, and 4:1 in terms of
megabytes transferred. Robots almost always access TimeMaps (95% of accesses),
but humans predominantly access the archived web pages themselves (82% of
accesses). In terms of unique archived web pages, there is no overall
preference for a particular time, but the recent past (within the last year)
shows significant repeat accesses.
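The four patterns have a direct operational reading; a literal encoding over one session's accesses might look like the following sketch (the TimeMap-detection heuristic and the "Mixed" fallback are assumptions, not the paper's classifier):

```python
def classify_session(accesses):
    """accesses: list of (url, archive_time) pairs within one session."""
    if any("timemap" in url for url, _ in accesses):
        return "Skim"        # lists of what pages are archived (TimeMaps)
    if len(accesses) == 1:
        return "Dip"         # a single access
    urls = {url for url, _ in accesses}
    times = {t for _, t in accesses}
    if len(urls) == 1:
        return "Slide"       # same page at different archive times
    if len(times) == 1:
        return "Dive"        # different pages at the same archive time
    return "Mixed"
```

Per the abstract, robot sessions would land almost exclusively in "Dip" and "Skim", while human sessions spread across all four types.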
|
1309.4024 | The Cyborg Astrobiologist: Matching of Prior Textures by Image
Compression for Geological Mapping and Novelty Detection | cs.CV astro-ph.EP astro-ph.IM cs.LG | (abridged) We describe an image-comparison technique of Heidemann and Ritter
that uses image compression, and is capable of: (i) detecting novel textures in
a series of images, as well as of: (ii) alerting the user to the similarity of
a new image to a previously-observed texture. This image-comparison technique
has been implemented and tested using our Astrobiology Phone-cam system, which
employs Bluetooth communication to send images to a local laptop server in the
field for the image-compression analysis. We tested the system in a field site
displaying a heterogeneous suite of sandstones, limestones, mudstones and
coalbeds. Some of the rocks are partly covered with lichen. The image-matching
procedure of this system performed very well with data obtained through our
field test, grouping all images of yellow lichens together and grouping all
images of a coal bed together, and giving a 91% accuracy for similarity
detection. Such similarity detection could be employed to make maps of
different geological units. The novelty-detection performance of our system was
also rather good (a 64% accuracy). Such novelty detection may become valuable
in searching for new geological units, which could be of astrobiological
interest. The image-comparison technique is an unsupervised technique that is
not capable of directly classifying an image as containing a particular
geological feature; labeling of such geological features is done post facto by
human geologists associated with this study, for the purpose of analyzing the
system's performance. By providing more advanced capabilities for similarity
detection and novelty detection, this image-compression technique could be
useful in giving more scientific autonomy to robotic planetary rovers, and in
assisting human astronauts in their geological exploration and assessment.
|
1309.4026 | Secure Degrees of Freedom of MIMO X-Channels with Output Feedback and
Delayed CSIT | cs.IT math.IT | We investigate the problem of secure transmission over a two-user multi-input
multi-output (MIMO) X-channel in which channel state information is provided
with one-unit delay to both transmitters (CSIT), and each receiver feeds back
its channel output to a different transmitter. We refer to this model as MIMO
X-channel with asymmetric output feedback and delayed CSIT. The transmitters
are equipped with M-antennas each, and the receivers are equipped with
N-antennas each. For this model, accounting for both messages at each receiver,
we characterize the optimal sum secure degrees of freedom (SDoF) region. We
show that, in the presence of asymmetric output feedback and delayed CSIT, the
sum SDoF region of the MIMO X-channel is the same as the SDoF region of a
two-user MIMO
BC with 2M-antennas at the transmitter, N-antennas at each receiver and delayed
CSIT. This result shows that, upon availability of asymmetric output feedback
and delayed CSIT, there is no performance loss in terms of sum SDoF due to the
distributed nature of the transmitters. Next, we show that this result also
holds if only output feedback is conveyed to the transmitters, but in a
symmetric manner, i.e., each receiver feeds back its output to both
transmitters and no CSIT. We also study the case in which only asymmetric
output feedback is provided to the transmitters, i.e., without CSIT, and derive
a lower bound on the sum SDoF for this model. Furthermore, we specialize our
results to the case in which there are no security constraints. In particular,
similar to the setting with security constraints, we show that the optimal sum
DoF region of the (M,M,N,N)--MIMO X-channel with asymmetric output feedback and
delayed CSIT is the same as the DoF region of a two-user MIMO BC with 2M-antennas
at the transmitter, N-antennas at each receiver, and delayed CSIT. We
illustrate our results with some numerical examples.
|
1309.4034 | The Weighted Sum Rate Maximization in MIMO Interference Networks: The
Minimax Lagrangian Duality and Algorithm | cs.IT math.IT | We take a new perspective on the weighted sum-rate maximization in
multiple-input multiple-output (MIMO) interference networks, by formulating an
equivalent max-min problem. This seemingly trivial reformulation has
significant implications: the Lagrangian duality of the equivalent max-min
problem provides an elegant way to establish the sum-rate duality between an
interference network and its reciprocal when such a duality exists, and more
importantly, suggests a novel iterative minimax algorithm for the weighted
sum-rate maximization. Moreover, the design and convergence proof of the
algorithm use only general convex analysis. They apply and extend to any
max-min problems with similar structure, and thus provide a general class of
algorithms for such optimization problems. This paper presents a promising step
and lends hope for establishing a general framework based on the minimax
Lagrangian duality for characterizing the weighted sum-rate and developing
efficient algorithms for general MIMO interference networks.
|
1309.4035 | Domain and Function: A Dual-Space Model of Semantic Relations and
Compositions | cs.CL cs.AI cs.LG | Given appropriate representations of the semantic relations between carpenter
and wood and between mason and stone (for example, vectors in a vector space
model), a suitable algorithm should be able to recognize that these relations
are highly similar (carpenter is to wood as mason is to stone; the relations
are analogous). Likewise, with representations of dog, house, and kennel, an
algorithm should be able to recognize that the semantic composition of dog and
house, dog house, is highly similar to kennel (dog house and kennel are
synonymous). It seems that these two tasks, recognizing relations and
compositions, are closely connected. However, up to now, the best models for
relations are significantly different from the best models for compositions. In
this paper, we introduce a dual-space model that unifies these two tasks. This
model matches the performance of the best previous models for relations and
compositions. The dual-space model consists of a space for measuring domain
similarity and a space for measuring function similarity. Carpenter and wood
share the same domain, the domain of carpentry. Mason and stone share the same
domain, the domain of masonry. Carpenter and mason share the same function, the
function of artisans. Wood and stone share the same function, the function of
materials. In the composition dog house, kennel has some domain overlap with
both dog and house (the domains of pets and buildings). The function of kennel
is similar to the function of house (the function of shelters). By combining
domain and function similarities in various ways, we can model relations,
compositions, and other aspects of semantics.
|
1309.4050 | Analytical solution for a class of network dynamics with mechanical and
financial applications | cond-mat.stat-mech cond-mat.dis-nn cs.SI physics.soc-ph q-fin.ST | We show that for a certain class of dynamics at the nodes the response of a
network of any topology to arbitrary inputs is defined in a simple way by its
response to a monotone input. The nodes may have either a discrete or
continuous set of states and there is no limit on the complexity of the
network. The results provide both an efficient numerical method and the
potential for accurate analytic approximation of the dynamics on such networks.
As illustrative applications, we introduce a quasistatic mechanical model with
objects interacting via frictional forces, and a financial market model with
avalanches and critical behavior that are generated by momentum trading
strategies.
|
1309.4058 | Why SOV might be initially preferred and then lost or recovered? A
theoretical framework | cs.CL nlin.AO physics.soc-ph q-bio.NC | Little is known about why SOV order is initially preferred and then discarded
or recovered. Here we present a framework for understanding these and many
related word order phenomena: the diversity of dominant orders, the existence
of free word orders, the need for alternative word orders, and word order
reversions and cycles in evolution. Under that framework, word order is
regarded as a multiconstraint satisfaction problem in which at least two
constraints are in conflict: online memory minimization and maximum
predictability.
|
1309.4061 | Learning a Loopy Model For Semantic Segmentation Exactly | cs.LG cs.CV | Learning structured models using maximum margin techniques has become an
indispensable tool for computer vision researchers, as many computer vision
applications can be cast naturally as an image labeling problem. Pixel-based or
superpixel-based conditional random fields are particularly popular examples.
Typically, neighborhood graphs, which contain a large number of cycles, are
used. As exact inference in loopy graphs is NP-hard in general, learning these
models without approximations is usually deemed infeasible. In this work we
show that, despite the theoretical hardness, it is possible to learn loopy
models exactly in practical applications. To this end, we analyze the use of
multiple approximate inference techniques together with cutting plane training
of structural SVMs. We show that our proposed method yields exact solutions
with optimality guarantees in a computer vision application, for little
additional computational cost. We also propose a dynamic caching scheme to
accelerate training further, yielding runtimes that are comparable with
approximate methods. We hope that this insight can lead to a reconsideration of
the tractability of loopy models in computer vision.
|
1309.4062 | Resource Optimization in Device-to-Device Cellular Systems Using
Time-Frequency Hopping | cs.IT math.IT | We develop a flexible and accurate framework for device-to-device (D2D)
communication in the context of a conventional cellular network, which allows
for time-frequency resources to be either shared or orthogonally partitioned
between the two networks. Using stochastic geometry, we provide accurate
expressions for SINR distributions and average rates, under an assumption of
interference randomization via time and/or frequency hopping, for both
dedicated and shared spectrum approaches. We obtain analytical results in
closed or semi-closed form in the high-SNR regime, which allow us to easily explore
the impact of key parameters (e.g., the load and hopping probabilities) on the
network performance. In particular, unlike other models, the expressions we
obtain are tractable, i.e., they can be efficiently optimized without extensive
simulation. Using these, we optimize the hopping probabilities for the D2D
links, i.e., how often they should request a time or frequency slot. This can
be viewed as an optimized lower bound to other more sophisticated scheduling
schemes. We also investigate the optimal resource partitions between D2D and
cellular networks when they use orthogonal resources.
|
1309.4067 | Facebook Applications' Installation and Removal: A Temporal Analysis | cs.SI physics.soc-ph | Facebook applications are one of the reasons for Facebook attractiveness.
Unfortunately, numerous users are not aware of the fact that many malicious
Facebook applications exist. To educate users, to raise users' awareness and to
improve Facebook users' security and privacy, we developed a Firefox add-on
that alerts users to the number of installed applications on their Facebook
profiles. In this study, we present the temporal analysis of the Facebook
applications' installation and removal dataset collected by our add-on. This
dataset consists of information from 2,945 users, collected during a period of
over a year. We used linear regression to analyze our dataset and discovered
the linear connection between the average percentage change of newly installed
Facebook applications and the number of days passed since the user initially
installed our add-on. Additionally, we found that users of our Firefox add-on
became more aware of their security and privacy, installing on average fewer
new applications. Finally, we discovered that on average 86.4% of
Facebook users install an additional application every 4.2 days.
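The linear connection reported above comes from an ordinary least-squares fit; a minimal sketch (with made-up data points, not the 2,945-user dataset) is:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (days since add-on install, avg. % change of newly
# installed applications) points.
slope, intercept = fit_line([0, 1, 2], [1, 3, 5])
```

The fitted slope is the quantity the study relates to the number of days since the user installed the add-on.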
|
1309.4085 | Multiobjective Tactical Planning under Uncertainty for Air Traffic Flow
and Capacity Management | cs.AI | We investigate a method to deal with congestion of sectors and delays in the
tactical phase of air traffic flow and capacity management. It relies on
temporal objectives given for every point of the flight plans and shared among
the controllers in order to create a collaborative environment. This would
enhance the transition from the network view of the flow management to the
local view of air traffic control. Uncertainty is modeled at the trajectory
level with temporal information on the boundary points of the crossed sectors
and then, we infer the probabilistic occupancy count. Therefore, we can model
the accuracy of the trajectory prediction in the optimization process in order
to set some safety margins. On the one hand, the more accurate our prediction
is, the more efficient the proposed solutions will be, because of the tighter
safety margins. On the other hand, when uncertainty is not negligible, the
proposed
solutions will be more robust to disruptions. Furthermore, a multiobjective
algorithm is used to find the tradeoff between the delays and congestion, which
are antagonist in airspace with high traffic density. The flow management
position can choose manually, or automatically with a preference-based
algorithm, the adequate solution. This method is tested against two instances,
one with 10 flights and 5 sectors and one with 300 flights and 16 sectors.
|
1309.4111 | Regularized Spectral Clustering under the Degree-Corrected Stochastic
Blockmodel | stat.ML cs.LG math.ST stat.TH | Spectral clustering is a fast and popular algorithm for finding clusters in
networks. Recently, Chaudhuri et al. (2012) and Amini et al. (2012) proposed
inspired variations on the algorithm that artificially inflate the node degrees
for improved statistical performance. The current paper extends the previous
statistical estimation results to the more canonical spectral clustering
algorithm in a way that removes any assumption on the minimum degree and
provides guidance on the choice of the tuning parameter. Moreover, our results
show how the "star shape" in the eigenvectors--a common feature of empirical
networks--can be explained by the Degree-Corrected Stochastic Blockmodel and
the Extended Planted Partition model, two statistical models that allow for
highly heterogeneous degrees. Throughout, the paper characterizes and justifies
several of the variations of the spectral clustering algorithm in terms of
these models.
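A minimal sketch of degree-regularized spectral clustering in the spirit of the variants discussed above: degrees are inflated by a regularizer tau before normalizing the adjacency matrix. The toy two-clique graph and tau value are illustrative assumptions, and a sign split of the second eigenvector stands in for the usual k-means step:

```python
import numpy as np

def regularized_spectral_labels(A, tau=1.0):
    """Two-way split from the regularized normalized adjacency matrix."""
    d = A.sum(axis=1) + tau                 # regularized degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ A @ D_inv_sqrt         # regularized normalized adjacency
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    v = vecs[:, np.argsort(vals)[-2]]       # second-leading eigenvector
    return (v > 0).astype(int)              # sign split = two communities

# Two triangles joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = regularized_spectral_labels(A)
```

With tau = 0 this reduces to the standard normalized spectral method; the regularizer keeps low-degree nodes from dominating the eigenvectors.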
|
1309.4132 | Attribute-Efficient Evolvability of Linear Functions | cs.LG q-bio.PE | In a seminal paper, Valiant (2006) introduced a computational model for
evolution to address the question of complexity that can arise through
Darwinian mechanisms. Valiant views evolution as a restricted form of
computational learning, where the goal is to evolve a hypothesis that is close
to the ideal function. Feldman (2008) showed that (correlational) statistical
query learning algorithms could be framed as evolutionary mechanisms in
Valiant's model. P. Valiant (2012) considered evolvability of real-valued
functions and also showed that weak-optimization algorithms that use
weak-evaluation oracles could be converted to evolutionary mechanisms.
In this work, we focus on the complexity of representations of evolutionary
mechanisms. In general, the reductions of Feldman and P. Valiant may result in
intermediate representations that are arbitrarily complex (polynomial-sized
circuits). We argue that biological constraints often dictate that the
representations have low complexity, such as constant depth and fan-in
circuits. We give mechanisms for evolving sparse linear functions under a large
class of smooth distributions. These evolutionary algorithms are
attribute-efficient in the sense that the size of the representations and the
number of generations required depend only on the sparsity of the target
function and the accuracy parameter, but have no dependence on the total number
of attributes.
|
1309.4136 | Compression via Compressive Sensing : A Low-Power Framework for the
Telemonitoring of Multi-Channel Physiological Signals | cs.IT math.IT | Telehealth and wearable equipment can deliver personal healthcare and
necessary treatment remotely. One major challenge is transmitting large amounts
of biosignals through wireless networks. The limited battery life calls for
low-power data compressors. Compressive Sensing (CS) has proved to be a
low-power compressor. In this study, we apply CS to the compression of
multichannel biosignals. We first develop an efficient CS algorithm from the
Block Sparse Bayesian Learning (BSBL) framework. It is based on a combination
of the block sparse model and multiple measurement vector model. Experiments on
real-life Fetal ECGs showed that the proposed algorithm has high fidelity and
efficiency. Implemented in hardware, the proposed algorithm was compared to a
Discrete Wavelet Transform (DWT) based algorithm, verifying that the proposed
algorithm has low power consumption and occupies fewer computational resources.
|
1309.4138 | Base Station Activation and Linear Transceiver Design for Optimal
Resource Management in Heterogeneous Networks | cs.IT math.IT | In a densely deployed heterogeneous network (HetNet), the number of
pico/micro base stations (BS) can be comparable with the number of users.
To reduce the operational overhead of the HetNet, proper identification of the
set of serving BSs becomes an important design issue. In this work, we show
that by jointly optimizing the transceivers and determining the active set of
BSs, high system resource utilization can be achieved with only a small number
of BSs. In particular, we provide formulations and efficient algorithms for
such a joint optimization problem, under the following two common design
criteria: i) minimization of the total power consumption at the BSs, and ii)
maximization of the system spectrum efficiency. In both cases, we introduce a
nonsmooth regularizer to facilitate the activation of the most appropriate BSs.
We illustrate the efficiency and the efficacy of the proposed algorithms via
extensive numerical simulations.
|
1309.4141 | Analysis of Blockage Effects on Urban Cellular Networks | cs.IT math.IT | Large-scale blockages like buildings affect the performance of urban cellular
networks, especially at higher frequencies. Unfortunately, such blockage
effects are either neglected or characterized by oversimplified models in the
analysis of cellular networks. Leveraging concepts from random shape theory,
this paper proposes a mathematical framework to model random blockages and
analyze their impact on cellular network performance. Random buildings are
modeled as a process of rectangles with random sizes and orientations whose
centers form a Poisson point process on the plane. The distribution of the
number of blockages in a link is proven to be a Poisson random variable whose
parameter depends on the length of the link. A path loss model that
incorporates the blockage effects is proposed, which matches experimental
trends observed in prior work. The model is applied to analyze the performance
of cellular networks in urban areas with the presence of buildings, in terms of
connectivity, coverage probability, and average rate. Analytic results show
that while buildings may block the desired signal, they may still have a
positive impact on network performance since they can block significantly more
interference.
|
1309.4151 | A Non-Local Means Filter for Removing the Poisson Noise | stat.AP cs.CV | A new image denoising algorithm to deal with the Poisson noise model is
given, based on the idea of Non-Local Means. By using the "Oracle"
concept, we establish a theorem to show that the Non-Local Means Filter can
effectively deal with Poisson noise with some modification. Building on this
theoretical result, we construct our new algorithm, called the Non-Local Means
Poisson Filter and demonstrate in theory that the filter converges at the usual
optimal rate. The filter is as simple as the classic Non-Local Means and the
simulation results show that our filter is very competitive.
|
1309.4156 | Trade integration and trade imbalances in the European Union: a network
perspective | physics.soc-ph cs.CE physics.data-an q-fin.GN | We study the ever more integrated and ever more unbalanced trade
relationships between European countries. To better capture the complexity of
economic networks, we propose two global measures that assess the trade
integration and the trade imbalances of the European countries. These measures
are the network (or indirect) counterparts to traditional (or direct) measures
such as the trade-to-GDP (Gross Domestic Product) and trade deficit-to-GDP
ratios. Our indirect tools account for the European inter-country trade
structure and follow (i) a decomposition of the global trade flow into
elementary flows that highlight the long-range dependencies between exporting
and importing economies and (ii) the commute-time distance for trade
integration, which measures the impact of a perturbation in the economy of a
country on another country, possibly through intermediate partners by domino
effect. Our application addresses the impact of the launch of the Euro. We find
that the indirect imbalance measures better identify the countries ultimately
bearing deficits and surpluses, by neutralizing the impact of trade transit
countries, such as the Netherlands. Among other findings, the ultimate
surpluses of Germany are quite concentrated in only three partners. We also
show that for some countries, the direct and indirect measures of trade
integration diverge, thereby revealing that these countries (e.g. Greece and
Portugal) trade to a smaller extent with countries considered as central in the
European Union network.
|
1309.4157 | EgoNet-UIUC: A Dataset For Ego Network Research | cs.SI physics.soc-ph | In this report, we introduce version one of EgoNet-UIUC, a
dataset for ego network research. The dataset contains about 230 ego networks
in Linkedin, which have about 33K users (with their attributes) and 283K
relationships (with their relationship types) in total. We name this dataset
EgoNet-UIUC, which stands for Ego Network Dataset from the University of
Illinois at Urbana-Champaign.
|
1309.4161 | How to Identify an Infection Source with Limited Observations | cs.SI physics.soc-ph | A rumor spreading in a social network or a disease propagating in a community
can be modeled as an infection spreading in a network. Finding the infection
source is a challenging problem, which is made more difficult in many
applications where we have access only to a limited set of observations. We
consider the problem of estimating an infection source for a
Susceptible-Infected model, in which not all infected nodes can be observed.
When the network is a tree, we show that an estimator for the source node
associated with the most likely infection path that yields the limited
observations is given by a Jordan center, i.e., a node with minimum distance to
the set of observed infected nodes. We also propose approximate source
estimators for general networks. Simulation results on various synthetic
networks and real world networks suggest that our estimators perform better
than distance, closeness, and betweenness centrality based heuristics.
|
1309.4164 | The Development of ADS Virtual Accelerator Based on XAL | physics.acc-ph cs.SY | XAL is a high level accelerator application framework originally developed by
the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. It has
an advanced design concept and has been adopted by many international
accelerator laboratories. Adopting XAL for ADS is a key long-term subject. This
paper presents the modifications made to the original XAL applications for ADS.
The work includes modifying the relational database schema to better suit ADS
configuration data requirements, redesigning and re-implementing the db2xal
application, and modifying the virtual accelerator application. In addition,
the new device types and new device attributes for ADS online modeling purposes
are also described.
|
1309.4166 | A New Class of Index Coding Instances Where Linear Coding is Optimal | cs.IT math.IT | We study index-coding problems (one sender broadcasting messages to multiple
receivers) where each message is requested by one receiver, and each receiver
may know some messages a priori. This type of index-coding problems can be
fully described by directed graphs. The aim is to find the minimum codelength
that the sender needs to transmit in order to simultaneously satisfy all
receivers' requests. For any directed graph, we show that if a maximum acyclic
induced subgraph (MAIS) is obtained by removing two or fewer vertices from the
graph, then the minimum codelength (i.e., the solution to the index-coding
problem) equals the number of vertices in the MAIS, and linear codes are
optimal for this index-coding problem. Our result increases the set of
index-coding problems for which linear index codes are proven to be optimal.
|
1309.4168 | Exploiting Similarities among Languages for Machine Translation | cs.CL | Dictionaries and phrase tables are the basis of modern statistical machine
translation systems. This paper develops a method that can automate the process
of generating and extending dictionaries and phrase tables. Our method can
translate missing word and phrase entries by learning language structures based
on large monolingual data and mapping between languages from small bilingual
data. It uses distributed representation of words and learns a linear mapping
between vector spaces of languages. Despite its simplicity, our method is
surprisingly effective: we can achieve almost 90% precision@5 for translation
of words between English and Spanish. This method makes few assumptions about
the languages, so it can be used to extend and refine dictionaries and
translation tables for any language pair.
|
1309.4203 | An efficient algorithm for weighted sum-rate maximization in multicell
downlink beamforming | cs.IT math.IT | This paper considers coordinated linear precoding for rate optimization in
downlink multicell, multiuser orthogonal frequency-division multiple access
networks. We focus on two different design criteria. In the first, the weighted
sum-rate is maximized under transmit power constraints per base station. In the
second, we minimize the total transmit power satisfying the
signal-to-interference-plus-noise-ratio constraints of the subcarriers per
cell. Both problems are solved using standard conic optimization packages. A
less complex, fast, and provably convergent algorithm that maximizes the
weighted sum-rate with per-cell transmit power constraints is formulated. We
approximate the nonconvex weighted sum-rate maximization (WSRM) problem with a
solvable convex form by means of a sequential parametric convex approximation
approach. The second-order cone formulations of an objective function and the
constraints of the optimization problem are derived through a proper change of
variables, first-order linear approximation, and hyperbolic constraint
transformation. This algorithm converges to a suboptimal solution while
taking fewer iterations in comparison to other known iterative WSRM
algorithms. Numerical results are presented to demonstrate the effectiveness
and superiority of the proposed algorithm.
|
1309.4251 | Optimal Distributed Controller Design with Communication Delays:
Application to Vehicle Formations | cs.SY | This paper develops a controller synthesis algorithm for distributed LQG
control problems under output feedback. We consider a system consisting of
three interconnected linear subsystems with a delayed information sharing
structure. While the state-feedback case of this problem has previously been
solved, the extension to output-feedback is nontrivial, as the classical
separation principle fails. To find the optimal solution, the controller is
decomposed into two independent components. One is a delayed centralized LQR, and
the other is the sum of correction terms based on additional local information.
Explicit discrete-time equations are derived whose solutions are the gains of
the optimal controller.
|
1309.4259 | Optimal scales in weighted networks | physics.data-an cond-mat.dis-nn cond-mat.stat-mech cs.SI | The analysis of networks characterized by links with heterogeneous intensity
or weight suffers from two long-standing problems of arbitrariness. On one
hand, the definitions of topological properties introduced for binary graphs
can be generalized in non-unique ways to weighted networks. On the other hand,
even when a definition is given, there is no natural choice of the (optimal)
scale of link intensities (e.g. the money unit in economic networks). Here we
show that these two seemingly independent problems can be regarded as
intimately related, and propose a common solution to both. Using a formalism
that we recently proposed in order to map a weighted network to an ensemble of
binary graphs, we introduce an information-theoretic approach leading to the
least biased generalization of binary properties to weighted networks, and at
the same time fixing the optimal scale of link intensities. We illustrate our
method on various social and economic networks.
|
1309.4291 | Models and algorithms for skip-free Markov decision processes on trees | math.OC cs.AI math.PR | We introduce a class of models for multidimensional control problems which we
call skip-free Markov decision processes on trees. We describe and analyse an
algorithm applicable to Markov decision processes of this type that are
skip-free in the negative direction. Starting with the finite average cost
case, we show that the algorithm combines the advantages of both value
iteration and policy iteration -- it is guaranteed to converge to an optimal
policy and optimal value function after a finite number of iterations but the
computational effort required for each iteration step is comparable with that
for value iteration. We show that the algorithm can also be used to solve
discounted cost models and continuous time models, and that a suitably modified
algorithm can be used to solve communicating models.
|
1309.4306 | Sparsity Based Poisson Denoising with Dictionary Learning | cs.CV stat.ML | The problem of Poisson denoising appears in various imaging applications,
such as low-light photography, medical imaging and microscopy. In cases of high
SNR, several transformations exist so as to convert the Poisson noise into an
additive i.i.d. Gaussian noise, for which many effective algorithms are
available. However, in a low SNR regime, these transformations are
significantly less accurate, and a strategy that relies directly on the true
noise statistics is required. A recent work by Salmon et al. took this route,
proposing a patch-based exponential image representation model based on GMM
(Gaussian mixture model), leading to state-of-the-art results. In this paper,
we propose to harness sparse-representation modeling to the image patches,
adopting the same exponential idea. Our scheme uses a greedy pursuit with a
bootstrapping-based stopping condition and dictionary learning within the
denoising process. The reconstruction performance of the proposed scheme is
competitive with leading methods in high SNR, and achieves state-of-the-art
results in cases of low SNR.
|
1309.4345 | Music Files Search System | cs.IR | This paper introduces a project of advanced system of music retrieval from
the Internet. The system combines text search (by author, title, and other
information about the music file included in the id3 tag description or its
equivalent for other file types) with a more intuitive and novel method of
melody search using query by humming. Patterns for storing text and melody
information, as well as an improved clustering algorithm for the pattern space,
are proposed. The search engine is planned to optimise the query based on the
data input by the user, thanks to the structure of the text and melody index
database. The system is planned to be a plug-in for popular digital music
players or an independent player. An advanced recommendation system based on
information gathered from the user's profile and search history is an integral
part of the system. The recommendation mechanism uses scrobbling methods and is
responsible for suggesting songs unknown to the user but similar to his
preferred music styles, and for positioning search results.
|
1309.4355 | Experimental Evaluation of Interference Alignment for Broadband WLAN
Systems | cs.IT math.IT | In this paper we present an experimental study on the performance of spatial
Interference Alignment (IA) in indoor wireless local area network scenarios
that use Orthogonal Frequency Division Multiplexing (OFDM) according to the
physical-layer specifications of the IEEE 802.11a standard. Experiments have
been carried out using a wireless network testbed capable of implementing a
3-user MIMO interference channel. We have implemented IA decoding schemes that
can be designed according to distinct criteria (e.g. zero-forcing or MaxSINR).
The measurement methodology has been validated considering practical issues
like the number of OFDM training symbols used for channel estimation or
feedback time. In the case of asynchronous users, a time-domain IA decoding filter
is also compared to its frequency-domain counterpart. We also evaluated the
performance of IA from bit error rate measurement-based results in comparison
to different time-division multiple access transmission schemes. The comparison
includes single- and multiple-antenna systems transmitting over the dominant
mode of the MIMO channel. Our results indicate that spatial IA is suitable for
practical indoor scenarios in which wireless channels often exhibit relatively
large coherence times.
|
1309.4385 | Photon counting compressive depth mapping | physics.optics cs.CV | We demonstrate a compressed sensing, photon counting lidar system based on
the single-pixel camera. Our technique recovers both depth and intensity maps
from a single under-sampled set of incoherent, linear projections of a scene of
interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional
reconstructions are required to image a three-dimensional scene. We demonstrate
intensity imaging and depth mapping at 256 x 256 pixel transverse resolution
with acquisition times as short as 3 seconds. We also show novelty filtering,
reconstructing only the difference between two instances of a scene. Finally,
we acquire 32 x 32 pixel real-time video for three-dimensional object tracking
at 14 frames-per-second.
|