id | title | categories | abstract |
|---|---|---|---|
1302.1886 | Collective Motion of Moshers at Heavy Metal Concerts | physics.soc-ph cs.SI physics.bio-ph | Human collective behavior can vary from calm to panicked depending on social
context. Using videos publicly available online, we study the highly energized
collective motion of attendees at heavy metal concerts. We find these extreme
social gatherings generate similarly extreme behaviors: a disordered gas-like
state called a mosh pit and an ordered vortex-like state called a circle pit.
Both phenomena are reproduced in flocking simulations, demonstrating that human
collective behavior is consistent with the predictions of simplified models.
|
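The simplified flocking models the abstract refers to are typically variants of the Vicsek self-propelled-particle model: agents align with neighbours within a radius, perturbed by angular noise. A minimal sketch (all parameters are illustrative, not the paper's fitted values):

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, eta=0.3, v=0.05, rng=None):
    """One update of a minimal Vicsek flocking model on an L x L torus:
    each agent adopts the mean heading of neighbours within radius r,
    perturbed by uniform angular noise of width eta, then moves at speed v."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)                 # minimal-image (periodic) offsets
        mask = (d ** 2).sum(axis=1) < r ** 2     # neighbours, including self
        new_theta[i] = np.arctan2(np.sin(theta[mask]).sum(),
                                  np.cos(theta[mask]).sum())
    new_theta += eta * (rng.random(n) - 0.5)
    step = v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + step) % L, new_theta

rng = np.random.default_rng(0)
pos = rng.random((100, 2)) * 10.0
theta = rng.random(100) * 2 * np.pi
for _ in range(50):
    pos, theta = vicsek_step(pos, theta, rng=rng)
# polar order parameter: near 0 for a disordered "gas", near 1 for alignment
order = float(np.hypot(np.sin(theta).mean(), np.cos(theta).mean()))
```

A disordered mosh-pit-like state corresponds to a low order parameter; collective ordered motion pushes it toward 1.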
1302.1901 | Relational Access Control with Bivalent Permissions in a Social
Web/Collaboration Architecture | cs.SI cs.CR | We describe an access control model that has been implemented in the web
content management framework "Deme" (which rhymes with "team"). Access control
in Deme is an example of what we call "bivalent relation object access
control" (BROAC). This model builds on recent work by Giunchiglia et al. on
relation-based access control (RelBAC), as well as other work on relational,
flexible, fine-grained, and XML access control models. We describe Deme's
architecture and review access control models, motivating our approach. BROAC
allows for both positive and negative permissions, which may conflict with each
other. We argue for the usefulness of defining access control rules as objects
in the target database, and for the necessity of resolving permission conflicts
in a social Web/collaboration architecture. After describing how Deme access
control works, including the precedence relations between different permission
types in Deme, we provide several examples of realistic scenarios in which
permission conflicts arise, and show how Deme resolves them. Initial
performance tests indicate that permission checking scales linearly in time on
a practical Deme website.
|
1302.1902 | Practical Analysis of Codebook Design and Frequency Offset Estimation
for Virtual-MIMO Systems | cs.IT math.IT | A virtual multiple-input multiple-output (MIMO) wireless system using the
receiver-side cooperation with the compress-and-forward (CF) protocol, is an
alternative to a point-to-point MIMO system, when a single receiver is not
equipped with multiple antennas. It is evident that the practicality of CF
cooperation will be greatly enhanced if an efficient source coding technique
can be used at the relay. It is even more desirable that CF cooperation should
not be unduly sensitive to carrier frequency offsets (CFOs). This paper
presents a practical study of these two issues. Firstly, codebook designs of
the Voronoi vector quantization (VQ) and the tree-structure vector quantization
(TSVQ) to enable CF cooperation at the relay are described. The two approaches
are compared in terms of codebook design and encoding complexity. It is shown
that the TSVQ is much simpler to design and operate, and can achieve a
favorable performance-complexity tradeoff. Furthermore, this paper demonstrates
that CFO can lead to significant performance degradation for the virtual MIMO
system. To overcome this, it is proposed to maintain clock synchronization and
jointly estimate the CFO between the relay and the destination. This approach
is shown to provide a significant performance improvement.
|
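The design/encoding complexity contrast between the Voronoi VQ and the TSVQ comes down to how a vector is encoded: exhaustive nearest-codeword search versus a logarithmic-depth tree descent. A minimal sketch with a hypothetical 2-bit, 2-dimensional codebook (not the paper's codebooks):

```python
import numpy as np

def full_search_encode(x, codebook):
    """Unstructured (Voronoi) VQ: nearest codeword by exhaustive search,
    O(N) distance computations for N codewords."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def tsvq_encode(x, node):
    """Tree-structured VQ: descend a binary tree comparing x against two test
    vectors per node, O(log N) distance computations. Internal nodes are
    (left, right, (test0, test1)); leaves are codeword indices."""
    while not isinstance(node, int):
        left, right, (t0, t1) = node
        node = left if ((t0 - x) ** 2).sum() <= ((t1 - x) ** 2).sum() else right
    return node

cb = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
tree = ((0, 1, (cb[0], cb[1])),          # left subtree:  x-coordinate near 0
        (2, 3, (cb[2], cb[3])),          # right subtree: x-coordinate near 1
        (np.array([0.0, 0.5]), np.array([1.0, 0.5])))
```

On this separable codebook the two encoders agree; in general a TSVQ may pick a slightly suboptimal codeword, which is the performance-complexity tradeoff the abstract refers to.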
1302.1920 | SWATI: Synthesizing Wordlengths Automatically Using Testing and
Induction | cs.SY | In this paper, we present an automated technique SWATI: Synthesizing
Wordlengths Automatically Using Testing and Induction, which uses a combination
of Nelder-Mead optimization based testing, and induction from examples to
automatically synthesize optimal fixed-point implementations of numerical
routines. The design of numerical software is commonly done using
floating-point arithmetic in design-environments such as Matlab. However, these
designs are often implemented using fixed-point arithmetic for speed and
efficiency reasons especially in embedded systems. The fixed-point
implementation reduces implementation cost, provides better performance, and
reduces power consumption. The conversion from floating-point designs to
fixed-point code is subject to two opposing constraints: (i) the word-width of
fixed-point types must be minimized, and (ii) the outputs of the fixed-point
program must be accurate. In this paper, we propose a new solution to this
problem. Our technique takes the floating-point program, a specified accuracy,
and an implementation cost model, and produces a fixed-point program with the
specified accuracy and optimal implementation cost. We demonstrate the
effectiveness of our approach on a set of examples from the domain of automated
control, robotics and digital signal processing.
|
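The wordlength/accuracy tension can be made concrete with a toy fixed-point quantizer and a plain linear search for the smallest fractional wordlength that meets an accuracy target on test inputs. The routine f, the format, and the search below are illustrative stand-ins for the paper's Nelder-Mead testing and induction procedure:

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Round x to a signed fixed-point grid with the given integer and
    fractional bit widths (round-to-nearest, saturating on overflow)."""
    scale = 2.0 ** frac_bits
    lo = -2.0 ** (int_bits - 1)
    hi = 2.0 ** (int_bits - 1) - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def smallest_frac_bits(f, xs, max_err, int_bits=4, max_bits=24):
    """Smallest fractional wordlength for which evaluating f on quantized
    inputs stays within max_err of the floating-point reference on xs."""
    ref = f(xs)
    for b in range(1, max_bits + 1):
        if np.max(np.abs(f(to_fixed_point(xs, int_bits, b)) - ref)) <= max_err:
            return b
    return None

xs = np.linspace(-1.0, 1.0, 101)
f = lambda x: 0.5 * x * x + 0.25 * x          # a toy numerical routine
bits = smallest_frac_bits(f, xs, max_err=1e-3)
```

This captures constraint (i) versus (ii) from the abstract: wider formats always satisfy the accuracy bound, so the search returns the cheapest one that does.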
1302.1923 | Update XML Views | cs.DB | View update is the problem of translating an update on a view into updates
on the source data of the view. In this paper, we identify the factors determining
XML view update translation, propose a translation procedure, and propose
translated updates to the source document for different types of views. We
further show that the translated updates are precise. The proposed solution
makes it possible for users who do not have access privileges to the source
data to update the source data via a view.
|
1302.1931 | Coding for Combined Block-Symbol Error Correction | cs.IT math.CO math.IT | We design low-complexity error correction coding schemes for channels that
introduce different types of errors and erasures: on the one hand, the proposed
schemes can successfully deal with symbol errors and erasures, and, on the
other hand, they can also successfully handle phased burst errors and erasures.
|
1302.1937 | Embedding agents in business applications using enterprise integration
patterns | cs.MA | This paper addresses the issue of integrating agents with a variety of
external resources and services, as found in enterprise computing environments.
We propose an approach for interfacing agents and existing message routing and
mediation engines based on the endpoint concept from the enterprise integration
patterns of Hohpe and Woolf. A design for agent endpoints is presented, and an
architecture for connecting the Jason agent platform to the Apache Camel
enterprise integration framework using this type of endpoint is described. The
approach is illustrated by means of a business process use case, and a number
of Camel routes are presented. These demonstrate the benefits of interfacing
agents to external services via a specialised message routing tool that
supports enterprise integration patterns.
|
1302.1942 | Surveillance Video Processing Using Compressive Sensing | cs.CV cs.IT math.IT | A compressive sensing method combined with decomposition of a matrix formed
with image frames of a surveillance video into low rank and sparse matrices is
proposed to segment the background and extract moving objects in a surveillance
video. The video is acquired by compressive measurements, and the measurements
are used to reconstruct the video by a low rank and sparse decomposition of
a matrix. The low rank component represents the background, and the sparse
component is used to identify moving objects in the surveillance video. The
decomposition is performed by an augmented Lagrangian alternating direction
method. Experiments are carried out to demonstrate that moving objects can be
reliably extracted with a small amount of measurements.
|
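The decomposition step can be sketched with a simple alternating scheme: a fixed-rank SVD truncation for the background and a fixed-cardinality hard threshold for the moving objects. This is a GoDec-style simplification, not the augmented Lagrangian alternating direction method of the abstract, and it assumes the rank and sparsity level are known:

```python
import numpy as np

def low_rank_sparse_split(M, rank, card, iters=25):
    """Alternately fit a rank-`rank` background L (truncated SVD of M - S)
    and a `card`-entry sparse foreground S (largest residual entries)."""
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        S = np.zeros_like(M)
        flat = np.argsort(np.abs(R), axis=None)[-card:]   # largest residuals
        S.ravel()[flat] = R.ravel()[flat]
    return L, S

# synthetic "video" matrix: rank-1 background plus a few bright moving pixels
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(40))
S0 = np.zeros(30 * 40)
S0[rng.choice(30 * 40, size=20, replace=False)] = 10.0
S0 = S0.reshape(30, 40)
M = L0 + S0
L, S = low_rank_sparse_split(M, rank=1, card=20)
```

Here L plays the role of the background and the support of S marks candidate moving objects.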
1302.1947 | A new compressive video sensing framework for mobile broadcast | cs.MM cs.CV cs.IT math.IT | A new video coding method based on compressive sampling is proposed. In this
method, a video is coded using compressive measurements on video cubes. Video
reconstruction is performed by minimization of total variation (TV) of the
pixelwise DCT coefficients along the temporal direction. A new reconstruction
algorithm is developed from TVAL3, an efficient TV minimization algorithm based
on the alternating minimization and augmented Lagrangian methods. Video coding
with this method is inherently scalable, and has applications in mobile
broadcast.
|
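The reconstruction objective penalizes variation along the temporal direction. As a standalone illustration, here is that anisotropic temporal-TV measure applied directly to pixel values (the paper applies it to the pixelwise DCT coefficients and minimizes it with a TVAL3-derived solver):

```python
import numpy as np

def temporal_tv(video):
    """Anisotropic total variation along the temporal axis of a video cube
    shaped (frames, height, width): the summed absolute frame-to-frame
    differences per pixel."""
    return float(np.abs(np.diff(video, axis=0)).sum())

static = np.zeros((8, 4, 4))                  # unchanging scene
moving = np.zeros((8, 4, 4))
for t in range(8):
    moving[t, 1, t % 4] = 1.0                 # one bright pixel hops each frame
```

A static scene has zero temporal TV and motion raises it, which is why minimizing temporal TV favours reconstructions that are consistent across frames.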
1302.2017 | Cooperative Environmental Monitoring for PTZ Visual Sensor Networks: A
Payoff-based Learning Approach | cs.SY | This paper investigates cooperative environmental monitoring for
Pan-Tilt-Zoom (PTZ) visual sensor networks. We first present a novel
formulation of the optimal environmental monitoring problem, whose objective
function is intertwined with the uncertain state of the environment. In
addition, due to the large volume of vision data, it is desirable for each
sensor to execute processing through local computation and communication. To
address
the issues, we present a distributed solution to the problem based on game
theoretic cooperative control and payoff-based learning. At the first stage, a
utility function is designed so that the resulting game constitutes a potential
game with potential function equal to the group objective function, where the
designed utility is shown to be computable through local image processing and
communication. Then, we present a payoff-based learning algorithm so that the
sensors are led to the global objective function maximizers without using any
prior information on the environmental state. Finally, we run experiments to
demonstrate the effectiveness of the present approach.
|
1302.2048 | Improving success probability and embedding efficiency in code based
steganography | cs.IT math.IT | For stegoschemes arising from error correcting codes, embedding depends on a
decoding map for the corresponding code. As decoding maps are usually not
complete, embedding can fail. We propose a method to ensure or increase the
probability of embedding success for these stegoschemes. This method is based
on puncturing codes. We show how the use of punctured codes may also increase
the embedding efficiency of the obtained stegoschemes.
|
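A concrete instance of a code-based stegoscheme is matrix embedding with the [7,4] Hamming code, where the decoding map is complete (the code is perfect), so embedding always succeeds while changing at most one cover bit. The abstract's puncturing method targets codes whose decoding maps are *not* complete; this sketch only shows the baseline mechanism:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# expansion of j + 1, so every nonzero syndrome equals exactly one column.
H = np.array([[((j + 1) >> k) & 1 for j in range(7)] for k in range(3)])

def embed(cover, message):
    """Hide 3 message bits in 7 cover bits, flipping at most one bit so that
    the syndrome of the stego vector equals the message."""
    diff = (H @ cover % 2) ^ message
    pos = int(diff[0] + 2 * diff[1] + 4 * diff[2])  # 1-based index of the
    stego = cover.copy()                            # column equal to diff
    if pos:
        stego[pos - 1] ^= 1
    return stego

def extract(stego):
    """Recover the hidden bits as the syndrome of the stego vector."""
    return H @ stego % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
stego = embed(cover, np.array([1, 0, 1]))
```

The embedding efficiency here is 3 bits per at most one changed bit; the puncturing construction in the abstract aims to preserve or improve such efficiency for codes without complete decoders.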
1302.2056 | Complexity distribution of agent policies | cs.AI | We analyse the complexity of environments according to the policies that need
to be used to achieve high performance. The performance results for a
population of policies lead to a distribution that is examined in terms of
policy complexity and analysed through several diagrams and indicators. The
notion of environment response curve is also introduced, by inverting the
performance results into an ability scale. We apply all these concepts,
diagrams and indicators to a minimalistic environment class, agent-populated
elementary cellular automata, showing how the difficulty, discriminating power
and ranges (prior to normalisation) may vary across several environments.
|
1302.2073 | pROST : A Smoothed Lp-norm Robust Online Subspace Tracking Method for
Realtime Background Subtraction in Video | cs.CV | An increasing number of methods for background subtraction use Robust PCA to
identify sparse foreground objects. While many algorithms use the L1-norm as a
convex relaxation of the ideal sparsifying function, we approach the problem
with a smoothed Lp-norm and present pROST, a method for robust online subspace
tracking. The algorithm is based on alternating minimization on manifolds.
Implemented on a graphics processing unit, it achieves realtime performance.
Experimental results on a state-of-the-art benchmark for background subtraction
on real-world video data indicate that the method succeeds at a broad variety
of background subtraction scenarios, and it outperforms competing approaches
when video quality is deteriorated by camera jitter.
|
1302.2082 | Sequences with Minimal Time-Frequency Uncertainty | cs.IT math.IT | A central problem in signal processing and communications is to design
signals that are compact both in time and frequency. Heisenberg's uncertainty
principle states that a given function cannot be arbitrarily compact both in
time and frequency, defining an "uncertainty" lower bound. Taking the variance
as a measure of localization in time and frequency, Gaussian functions reach
this bound for continuous-time signals. For sequences, however, this is not
true; it is known that Heisenberg's bound is generally unachievable. For a
chosen frequency variance, we formulate the search for "maximally compact
sequences" as an exactly and efficiently solved convex optimization problem,
thus providing a sharp uncertainty principle for sequences. Interestingly, the
optimization formulation also reveals that maximally compact sequences are
derived from Mathieu's harmonic cosine function of order zero. We further
provide rational asymptotic expansions of this sharp uncertainty bound. We use
the derived bounds as a benchmark to compare the compactness of well-known
window functions with that of the optimal Mathieu's functions.
|
1302.2093 | A distributed accelerated gradient algorithm for distributed model
predictive control of a hydro power valley | math.OC cs.MA cs.SY math.NA | A distributed model predictive control (DMPC) approach based on distributed
optimization is applied to the power reference tracking problem of a hydro
power valley (HPV) system. The applied optimization algorithm is based on
accelerated gradient methods and achieves a convergence rate of O(1/k^2), where
k is the iteration number. Major challenges in the control of the HPV include a
nonlinear and large-scale model, nonsmoothness in the power-production
functions, and a globally coupled cost function that prevents distributed
schemes from being applied directly. We propose a linearization and
approximation approach that accommodates the proposed DMPC framework and
provides very
similar performance compared to a centralized solution in simulations. The
provided numerical studies also suggest that for the sparsely interconnected
system at hand, the distributed algorithm we propose is faster than a
centralized state-of-the-art solver such as CPLEX.
|
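The O(1/k^2) accelerated gradient scheme underlying the DMPC algorithm is, in its basic centralized form, Nesterov's method with the standard momentum sequence. A sketch on a small smooth quadratic (the distributed splitting, constraints, and HPV model are omitted, and the step size assumes a known Lipschitz constant):

```python
import numpy as np

def accelerated_gradient(grad, x0, step, iters):
    """Nesterov's accelerated gradient method with the standard momentum
    sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2, achieving the O(1/k^2)
    convergence rate cited in the abstract."""
    x_prev = x0.copy()
    y = x0.copy()
    t_prev = 1.0
    for _ in range(iters):
        x = y - step * grad(y)                       # gradient step at y
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum extrapolation
        x_prev, t_prev = x, t
    return x_prev

# smooth quadratic test problem 0.5 x'Ax - b'x with minimizer A^{-1} b;
# step = 1/L assumes the largest eigenvalue of A is below 4.5 (true here)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = accelerated_gradient(lambda z: A @ z - b, np.zeros(2), step=1.0 / 4.5,
                         iters=500)
x_star = np.linalg.solve(A, b)
```

The distributed variant in the paper runs this iteration on a dual or partially separable reformulation so each subsystem only exchanges information with its neighbours.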
1302.2112 | Cryptanalysis and Improvement of Akleylek et al.'s cryptosystem | cs.CR cs.IT math.IT | Akleylek et al. [S. Akleylek, L. Emmungil and U. Nuriyev, A modified
algorithm for peer-to-peer security, Appl. Comput. Math., vol. 6(2),
pp.258-264, 2007.], introduced a modified public-key encryption scheme with
steganographic approach for security in peer-to-peer (P2P) networks. In this
cryptosystem, Akleylek et al. attempt to increase security of the P2P networks
by mixing ElGamal cryptosystem with knapsack problem. In this paper, we present
a ciphertext-only attack against their system to recover the message. In
addition, we show that the completeness property does not hold for their
scheme, and therefore the receiver cannot uniquely decrypt messages.
Furthermore, we also show that this system is not chosen-ciphertext secure,
and thus the proposed scheme is vulnerable to a man-in-the-middle attack, one
of the most pernicious attacks against P2P networks. Therefore, this scheme is
not suitable for implementation in P2P networks.
We modify this cryptosystem in order to increase its security and efficiency.
Our construction is an efficient CCA2-secure variant of Akleylek et al.'s
encryption scheme in the standard model; CCA2 security is the de facto
security notion for public-key encryption schemes.
|
1302.2128 | Modulus Computational Entropy | cs.IT cs.CR math.IT | The so-called {\em leakage-chain rule} is a very important tool used in many
security proofs. It gives an upper bound on the entropy loss of a random
variable $X$ when an adversary who has already learned some random variables
$Z_{1},\ldots,Z_{\ell}$ correlated with $X$ obtains some further information
$Z_{\ell+1}$ about $X$. Analogously to the information-theoretic
case, one might expect that also for the \emph{computational} variants of
entropy the loss depends only on the actual leakage, i.e. on $Z_{\ell+1}$.
Surprisingly, Krenn et al.\ have shown recently that for the most commonly used
definitions of computational entropy this holds only if the computational
quality of the entropy deteriorates exponentially in
$|(Z_{1},\ldots,Z_{\ell})|$. This means that the current standard definitions
of computational entropy do not allow one to fully capture leakage that occurred
"in the past", which severely limits the applicability of this notion.
As a remedy for this problem we propose a slightly stronger definition of the
computational entropy, which we call the \emph{modulus computational entropy},
and use it as a technical tool that allows us to prove a desired chain rule
that depends only on the actual leakage and not on its history. Moreover, we
show that the modulus computational entropy unifies other, sometimes seemingly
unrelated, notions already studied in the literature in the context of
information leakage and chain rules. Our results indicate that the modulus
entropy is, up to now, the weakest restriction that guarantees that the chain
rule for the computational entropy works. As an example of application we
demonstrate a few interesting cases where our restricted definition is
fulfilled and the chain rule holds.
|
1302.2131 | Data Mining of the Concept "End of the World" in Twitter Microblogs | cs.SI cs.CL cs.IR physics.soc-ph | This paper describes the analysis of quantitative characteristics of frequent
sets and association rules in the posts of Twitter microblogs, related to the
discussion of "end of the world", which was allegedly predicted on December 21,
2012 due to the Mayan calendar. Discovered frequent sets and association rules
characterize semantic relations between the concepts of the analyzed subjects.
The support for some frequent sets reaches its global maximum some time before
the expected event. Such frequent sets may be considered as predictive
markers that characterize the significance of expected events for blogosphere
users. It was shown that the time dynamics of the confidence of some revealed
association rules can also have predictive characteristics. When the confidence
exceeds a certain threshold, it may signal the corresponding reaction in
society during the time interval between the maximum and the probable arrival
of the event.
|
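The two quantities tracked in the analysis, the support of a frequent set and the confidence of an association rule, are simple to compute. A sketch over a few hypothetical tokenized posts (illustrative term sets, not the paper's Twitter data):

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(lhs, rhs, transactions):
    """Confidence of the association rule lhs -> rhs:
    support(lhs union rhs) / support(lhs)."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

# hypothetical tokenized posts standing in for the paper's tweet corpus
posts = [
    {"end", "world", "mayan"},
    {"end", "world", "december"},
    {"end", "world"},
    {"mayan", "calendar"},
]
```

In the paper's setting these quantities are computed per time window, and their time series (support peaking before the event, confidence crossing a threshold) serve as the predictive markers.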
1302.2157 | Passive Learning with Target Risk | cs.LG | In this paper we consider learning in the passive setting, but with a slight
modification. We assume that the target expected loss, also referred to as the
target risk, is provided to the learner in advance as prior knowledge. Unlike most
studies in the learning theory that only incorporate the prior knowledge into
the generalization bounds, we are able to explicitly utilize the target risk in
the learning process. Our analysis reveals a surprising result on the sample
complexity of learning: by exploiting the target risk in the learning
algorithm, we show that when the loss function is both strongly convex and
smooth, the sample complexity reduces to $O(\log(1/\epsilon))$, an
exponential improvement compared to the sample complexity
$O(1/\epsilon)$ for learning with strongly convex loss functions.
Furthermore, our proof is constructive and is based on a computationally
efficient stochastic optimization algorithm for such settings, which
demonstrates that the proposed algorithm is practically useful.
|
1302.2167 | Information, Estimation, and Lookahead in the Gaussian channel | cs.IT math.IT | We consider mean squared estimation with lookahead of a continuous-time
signal corrupted by additive white Gaussian noise. We show that the mutual
information rate function, i.e., the mutual information rate as a function of the
signal-to-noise ratio (SNR), does not, in general, determine the minimum mean
squared error (MMSE) with fixed finite lookahead, in contrast to the special
cases with 0 and infinite lookahead (filtering and smoothing errors),
respectively, which were previously established in the literature. We also
establish a new expectation identity under a generalized observation model
where the Gaussian channel has an SNR jump at $t=0$, capturing the tradeoff
between lookahead and SNR.
Further, we study the class of continuous-time stationary Gauss-Markov
processes (Ornstein-Uhlenbeck processes) as channel inputs, and explicitly
characterize the behavior of the MMSE with finite lookahead and SNR. The MMSE
with lookahead is shown to
converge exponentially rapidly to the non-causal error, with the exponent being
the reciprocal of the non-causal error. We extend our results to mixtures of
Ornstein-Uhlenbeck processes, and use the insight gained to present lower and
upper bounds on the MMSE with lookahead for a class of stationary Gaussian
input processes, whose spectrum can be expressed as a mixture of
Ornstein-Uhlenbeck spectra.
|
1302.2168 | Optimal Throughput-Outage Trade-off in Wireless One-Hop Caching Networks | cs.IT cs.NI math.IT | We consider a wireless device-to-device (D2D) network where the nodes have
cached information from a library of possible files. Inspired by the current
trend in the standardization of the D2D mode for 4th generation wireless
networks, we restrict to one-hop communication: each node places a request for
a file in the library and downloads it from some other node that has the
requested file in its cache through a direct communication link, without going
through a
base station. We describe the physical layer communication through a simple
"protocol-model", based on interference avoidance (independent set scheduling).
For this network we define the outage-throughput tradeoff problem and
characterize the optimal scaling laws for various regimes where both the number
of nodes and the files in the library grow to infinity.
|
1302.2176 | Minimax Optimal Algorithms for Unconstrained Linear Optimization | cs.LG | We design and analyze minimax-optimal algorithms for online linear
optimization games where the player's choice is unconstrained. The player
strives to minimize regret, the difference between his loss and the loss of a
post-hoc benchmark strategy. The standard benchmark is the loss of the best
strategy chosen from a bounded comparator set. When the comparator set and
the adversary's gradients satisfy L_infinity bounds, we give the value of the
game in closed form and prove it approaches sqrt(2T/pi) as T -> infinity.
Interesting algorithms result when we consider soft constraints on the
comparator, rather than restricting it to a bounded set. As a warmup, we
analyze the game with a quadratic penalty. The value of this game is exactly
T/2, and this value is achieved by perhaps the simplest online algorithm of
all: unprojected gradient descent with a constant learning rate.
We then derive a minimax-optimal algorithm for a much softer penalty
function. This algorithm achieves good bounds under the standard notion of
regret for any comparator point, without needing to specify the comparator set
in advance. The value of this game converges to sqrt(e) as T -> infinity; we
give a closed form for the exact value as a function of T. The resulting
algorithm is natural in unconstrained investment or betting scenarios, since it
guarantees at worst constant loss, while allowing for exponential reward
against an "easy" adversary.
|
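The quadratic-penalty claim can be checked numerically. Under the convention that the round-t loss of a point x is g_t * x, unprojected gradient descent with learning rate 1 against gradients in {-1, +1} incurs regret, measured against the best comparator u charged an extra penalty u^2/2, of exactly T/2. A one-dimensional sketch (the gradient sequence is an arbitrary illustrative choice):

```python
import numpy as np

def unprojected_gd(gradients, eta=1.0):
    """Online unprojected gradient descent: play the current point, observe
    the gradient, take an unconstrained step with constant learning rate."""
    x, plays = 0.0, []
    for g in gradients:
        plays.append(x)
        x -= eta * g
    return np.array(plays)

rng = np.random.default_rng(1)
T = 100
g = rng.choice([-1.0, 1.0], size=T)
plays = unprojected_gd(g)
G = g.sum()
# the best penalized comparator is u* = -G, paying G*u* + u*^2/2 = -G^2/2,
# so the penalized regret is sum_t g_t x_t + G^2/2
penalized_regret = float(np.dot(g, plays) + 0.5 * G ** 2)
```

For +/-1 gradients the identity sum_t g_t x_t = -(G^2 - T)/2 makes the penalized regret exactly T/2, matching the game value stated in the abstract.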
1302.2178 | Gaussian State Amplification with Noisy State Observations | cs.IT math.IT | The problem of simultaneous message transmission and state amplification in a
Gaussian channel with additive Gaussian state is studied when the sender has
imperfect noncausal knowledge of the state sequence. Inner and outer bounds to
the rate--state-distortion region are provided. The coding scheme underlying
the inner bound combines analog signaling and Gelfand-Pinsker coding, where the
latter deviates from the operating point of Costa's dirty paper coding.
|
1302.2183 | The Importance of Tie-Breaking in Finite-Blocklength Bounds | cs.IT math.IT | We consider upper bounds on the error probability in channel coding. We
derive an improved maximum-likelihood union bound, which takes into account
events where the likelihood of the correct codeword is tied with that of some
competitors. We compare this bound to various previous results, both
qualitatively and quantitatively. With respect to maximal error probability of
linear codes, we observe that when the channel is additive, the derivation of
bounds, as well as the assumptions on the admissible encoder and decoder,
simplify considerably.
|
1302.2185 | Passive Self-Interference Suppression for Full-Duplex Infrastructure
Nodes | cs.IT cs.NI math.IT | Recent research results have demonstrated the feasibility of full-duplex
wireless communication for short-range links. Although the focus of the
previous works has been active cancellation of the self-interference signal, a
majority of the overall self-interference suppression is often due to passive
suppression, i.e., isolation of the transmit and receive antennas. We present a
measurement-based study of the capabilities and limitations of three key
mechanisms for passive self-interference suppression: directional isolation,
absorptive shielding, and cross-polarization. The study demonstrates that more
than 70 dB of passive suppression can be achieved in certain environments, but
also establishes two results on the limitations of passive suppression: (1)
environmental reflections limit the amount of passive suppression that can be
achieved, and (2) passive suppression, in general, increases the frequency
selectivity of the residual self-interference signal. These results suggest two
design implications: (1) deployments of full-duplex infrastructure nodes should
minimize near-antenna reflectors, and (2) active cancellation in concatenation
with passive suppression should employ higher-order filters or per-subcarrier
cancellation.
|
1302.2187 | Linear Precoding and Equalization for Network MIMO with Partial
Cooperation | cs.IT math.IT math.OC | A cellular multiple-input multiple-output (MIMO) downlink system is studied
in which each base station (BS) transmits to some of the users, so that each
user receives its intended signal from a subset of the BSs. This scenario is
referred to as network MIMO with partial cooperation, since only a subset of
the BSs are able to coordinate their transmission towards any user. The focus
of this paper is on the optimization of linear beamforming strategies at the
BSs and at the users for network MIMO with partial cooperation. Individual
power constraints at the BSs are enforced, along with constraints on the number
of streams per user. It is first shown that the system is equivalent to a MIMO
interference channel with generalized linear constraints (MIMO-IFC-GC). The
problems of maximizing the sum-rate (SR) and minimizing the weighted sum mean
square error (WSMSE) of the data estimates are non-convex, and suboptimal
solutions with reasonable complexity need to be devised. Based on this,
suboptimal techniques that aim at maximizing the sum-rate for the MIMO-IFC-GC
are reviewed from recent literature and extended to the MIMO-IFC-GC where
necessary. Novel designs that aim at minimizing the WSMSE are then proposed.
Extensive numerical simulations are provided to compare the performance of the
considered schemes for realistic cellular systems.
|
1302.2222 | Ontology-Based Administration of Web Directories | cs.IR cs.DL | Administration of a Web directory and maintenance of its content and the
associated structure is a delicate and labor intensive task performed
exclusively by human domain experts. Consequently, there is an imminent risk of
directory structures becoming unbalanced, uneven, and difficult to use for all
except a few users proficient with the particular Web directory and its
domain. These problems emphasize the need to address two important issues: i)
generic and objective measures of Web directory structure quality, and ii) a
mechanism for fully automated development of a Web directory's structure. In
this paper we demonstrate how to formally and fully integrate Web directories
with the Semantic Web vision. We propose a set of criteria for evaluation of a
Web directory's structure quality. Some criterion functions are based on
heuristics while others require the application of ontologies. We also suggest
an ontology-based algorithm for construction of Web directories. By using
ontologies to describe the semantics of Web resources and Web directories'
categories it is possible to define algorithms that can build or rearrange the
structure of a Web directory. Assessment procedures can provide feedback and
help steer the ontology-based construction process. The issues raised in the
article can be equally applied to new and existing Web directories.
|
1302.2223 | WNtags: A Web-Based Tool For Image Labeling And Retrieval With Lexical
Ontologies | cs.IR cs.AI cs.MM | The ever-growing number of image documents available on the Internet
continuously motivates research in better annotation models and more efficient
retrieval methods. Formal knowledge representation of objects and events in
pictures, their interaction, and context complexity is no longer an option for
a quality image repository, but a necessity. We present an ontology-based
online image annotation tool WNtags and demonstrate its usefulness in several
typical multimedia retrieval tasks using International Affective Picture System
emotionally annotated image database. WNtags is built around WordNet lexical
ontology but considers Suggested Upper Merged Ontology as the preferred
labeling formalism. WNtags uses sets of weighted WordNet synsets as high-level
image semantic descriptors and query matching is performed with word stemming
and node distance metrics. We also elaborate on our near-future plans to
expand the image content description with induced affect, as in stimuli for
research on human emotion and attention.
|
1302.2244 | Efficient Data Gathering in Wireless Sensor Networks Based on Matrix
Completion and Compressive Sensing | cs.NI cs.IT math.IT | Gathering data in an energy efficient manner in wireless sensor networks is
an important design challenge. In wireless sensor networks, the readings of
sensors always exhibit intra-temporal and inter-spatial correlations.
Therefore, in this letter, we use low rank matrix completion theory to explore
the inter-spatial correlation and use compressive sensing theory to take
advantage of intra-temporal correlation. Our method, dubbed MCCS, can
significantly reduce the amount of data that each sensor must send through the
network to the sink, thus prolonging the lifetime of the whole network.
Experiments using real datasets demonstrate the feasibility and efficacy of our
MCCS method.
|
1302.2246 | Lower bounds on the minimum distance of long codes in the Lee metric | cs.IT math.IT | The Gilbert-type bound for codes in the title is reviewed, both for small and
large alphabets. Constructive lower bounds better than these existential bounds
are derived from geometric codes, either over F_p or F_{p^2}, or over
even-degree extensions of F_p. In the latter case the approach is concatenation
with a good code for the Hamming metric as outer code and a short code for the
Lee metric as inner code. In the former case lower bounds on the minimum Lee
distance are derived by algebraic-geometric arguments inspired by results of
Wu, Kuijper, and Udaya (2007).
|
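For reference, the Lee metric measures each coordinate by the shorter way around the cyclic alphabet Z_q, so for q = 2 and q = 3 it coincides with the Hamming metric:

```python
def lee_distance(a, b, q):
    """Lee distance over Z_q: sum over coordinates of min(d, q - d),
    where d is the coordinate-wise difference mod q."""
    return sum(min((x - y) % q, (y - x) % q) for x, y in zip(a, b))

# over Z_7: coordinate contributions are min(1,6), min(2,5), min(1,6)
d = lee_distance([0, 1, 6], [1, 6, 0], 7)
```

Each coordinate can contribute up to floor(q/2), so the Lee distance is at least the Hamming distance, which is why bounds in the two metrics behave differently for large alphabets.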
1302.2261 | On the list decodability of random linear codes with large error rates | cs.IT math.IT | It is well known that a random q-ary code of rate \Omega(\epsilon^2) is list
decodable up to radius (1 - 1/q - \epsilon) with list sizes on the order of
1/\epsilon^2, with probability 1 - o(1). However, until recently, a similar
statement about random linear codes had remained elusive. In a recent
paper, Cheraghchi, Guruswami, and Velingker show a connection between list
decodability of random linear codes and the Restricted Isometry Property from
compressed sensing, and use this connection to prove that a random linear code
of rate \Omega(\epsilon^2 / log^3(1/\epsilon)) achieves the list decoding
properties above, with constant probability. We improve on their result to show
that in fact we may take the rate to be \Omega(\epsilon^2), which is optimal,
and further that the success probability is 1 - o(1), rather than constant. As
an added benefit, our proof is relatively simple. Finally, we extend our
methods to more general ensembles of linear codes. As an example, we show that
randomly punctured Reed-Muller codes have the same list decoding properties as
the original codes, even when the rate is improved to a constant.
|
1302.2273 | Learning Universally Quantified Invariants of Linear Data Structures | cs.PL cs.FL cs.LG | We propose a new automaton model, called quantified data automata (QDAs) over words,
that can model quantified invariants over linear data structures, and build
poly-time active learning algorithms for them, where the learner is allowed to
query the teacher with membership and equivalence queries. In order to express
invariants in decidable logics, we invent a decidable subclass of QDAs, called
elastic QDAs, and prove that every QDA has a unique
minimally-over-approximating elastic QDA. We then give an application of these
theoretically sound and efficient active learning algorithms in a passive
learning framework and show that we can efficiently learn quantified linear
data structure invariants from samples obtained from dynamic runs for a large
class of programs.
|
1302.2277 | A Time Series Forest for Classification and Feature Extraction | cs.LG | We propose a tree ensemble method, referred to as time series forest (TSF),
for time series classification. TSF employs a combination of the entropy gain
and a distance measure, referred to as the Entrance (entropy and distance)
gain, for evaluating the splits. Experimental studies show that the Entrance
gain criterion improves the accuracy of TSF. TSF randomly samples features at
each tree node, has a computational complexity linear in the length of the
time series, and can be built using parallel computing techniques such as the
multi-core computing used here. The temporal importance curve is also proposed
to capture the important temporal characteristics useful for classification.
Experimental studies show that TSF using simple features such as the mean,
standard deviation and slope outperforms strong competitors such as one-nearest-neighbor
classifiers with dynamic time warping, is computationally efficient, and can
provide insights into the temporal characteristics.
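For intuition, the simple interval features the abstract mentions (mean, standard deviation, slope) can be computed as below; this is an illustrative sketch, not the authors' code, and the example series and window boundaries are arbitrary:

```python
import numpy as np

def interval_features(x, t1, t2):
    """Mean, standard deviation, and least-squares slope of x[t1:t2]."""
    seg = np.asarray(x[t1:t2], dtype=float)
    t = np.arange(len(seg))
    slope = np.polyfit(t, seg, 1)[0] if len(seg) > 1 else 0.0
    return seg.mean(), seg.std(), slope

# Features of one candidate interval of a toy series.
x = np.sin(np.linspace(0, np.pi, 100))
print(interval_features(x, 10, 40))
```

A tree node in such a forest would then threshold one of these per-interval features, with the interval endpoints sampled randomly.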
|
1302.2315 | A sampling theorem on shift-invariant spaces associated with the
fractional Fourier transform domain | math.FA cs.IT math.IT | As a generalization of the Fourier transform, the fractional Fourier
transform was introduced and has been further investigated both in theory and
in applications of signal processing. We obtain a sampling theorem on
shift-invariant spaces associated with the fractional Fourier transform domain.
The resulting sampling theorem extends not only the classical
Whittaker-Shannon-Kotelnikov sampling theorem associated with the fractional
Fourier transform domain, but also the prior sampling theorems on
shift-invariant spaces.
|
1302.2318 | On Search Engine Evaluation Metrics | cs.IR | Search engine evaluation research has quite a lot of metrics available to
it. Only recently has the question of the significance of individual metrics
been raised, as these metrics' correlations to real-world user
experience or performance have generally not been well studied. The first part
of this thesis provides an overview of previous literature on the evaluation of
search engine evaluation metrics themselves, as well as critiques of and
comments on individual studies and approaches. The second part introduces a
meta-evaluation metric, the Preference Identification Ratio (PIR), that
quantifies the capacity of an evaluation metric to capture users' preferences.
Also, a framework for simultaneously evaluating many metrics while varying
their parameters and evaluation standards is introduced. Both PIR and the
meta-evaluation framework are tested in a study which shows some interesting
preliminary results; in particular, the unquestioning adherence to metrics or
their ad hoc parameters seems to be disadvantageous. Instead, evaluation
methods should themselves be rigorously evaluated with regard to goals set for
a particular study.
|
1302.2330 | Power Allocation and Time-Domain Artificial Noise Design for Wiretap
OFDM with Discrete Inputs | cs.IT math.IT | Optimal power allocation for orthogonal frequency division multiplexing
(OFDM) wiretap channels with Gaussian channel inputs has already been studied
in some previous works from an information theoretical viewpoint. However,
these results are not sufficient for practical system design. One reason is
that discrete channel inputs, such as quadrature amplitude modulation (QAM)
signals, instead of Gaussian channel inputs, are deployed in current practical
wireless systems to maintain moderate peak transmission power and receiver
complexity. In this paper, we investigate the power allocation and artificial
noise design for OFDM wiretap channels with discrete channel inputs. We first
prove that the secrecy rate function for discrete channel inputs is nonconcave
with respect to the transmission power. To resolve the corresponding nonconvex
secrecy rate maximization problem, we develop a low-complexity power allocation
algorithm, which yields a duality gap diminishing in the order of
O(1/\sqrt{N}), where N is the number of subcarriers of OFDM. We then show that
independent frequency-domain artificial noise cannot improve the secrecy rate
of single-antenna wiretap channels. We therefore propose a novel
time-domain artificial noise design which exploits temporal degrees of freedom
provided by the cyclic prefix of OFDM systems to jam the eavesdropper and
boost the secrecy rate even with a single antenna at the transmitter.
Numerical results are provided to illustrate the performance of the proposed
design schemes.
|
1302.2331 | The Phase Transition of Matrix Recovery from Gaussian Measurements
Matches the Minimax MSE of Matrix Denoising | cs.IT math.IT math.ST stat.TH | Let $X_0$ be an unknown $M$ by $N$ matrix. In matrix recovery, one takes $n <
MN$ linear measurements $y_1,..., y_n$ of $X_0$, where $y_i = \mathrm{Tr}(a_i^T X_0)$
and each $a_i$ is an $M$ by $N$ matrix. For measurement matrices with Gaussian
i.i.d. entries, it is known that if $X_0$ is of low rank, it is recoverable from
just a few measurements. A popular approach for matrix recovery is Nuclear Norm
Minimization (NNM). Empirical work reveals a \emph{phase transition} curve,
stated in terms of the undersampling fraction $\delta(n,M,N) = n/(MN)$, rank
fraction $\rho=r/N$ and aspect ratio $\beta=M/N$. Specifically, a curve
$\delta^* = \delta^*(\rho;\beta)$ exists such that, if $\delta >
\delta^*(\rho;\beta)$, NNM typically succeeds, while if $\delta <
\delta^*(\rho;\beta)$, it typically fails. An apparently quite different
problem is matrix denoising in Gaussian noise, where an unknown $M$ by $N$
matrix $X_0$ is to be estimated based on direct noisy measurements $Y = X_0 +
Z$, where the matrix $Z$ has iid Gaussian entries. It has been empirically
observed that, if $X_0$ has low rank, it may be recovered quite accurately from
the noisy measurement $Y$. A popular matrix denoising scheme solves the
unconstrained optimization problem $\text{min} \| Y - X \|_F^2/2 + \lambda
\|X\|_* $. When optimally tuned, this scheme achieves the asymptotic minimax
MSE $\mathcal{M}(\rho) = \lim_{N \to \infty} \inf_\lambda \sup_{\mathrm{rank}(X) \leq \rho
\cdot N} \mathrm{MSE}(X,\hat{X}_\lambda)$. We report extensive experiments showing that
the phase transition $\delta^*(\rho)$ in the first problem coincides with the
minimax risk curve $\mathcal{M}(\rho)$ in the second problem, for {\em any} rank
fraction $0 < \rho < 1$.
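The unconstrained denoising problem above has a well-known closed-form solution, singular-value soft-thresholding. A minimal numpy sketch (the matrix sizes, rank, noise level, and lambda are arbitrary choices for illustration, not values from the paper):

```python
import numpy as np

def svt(Y, lam):
    """Solve min_X ||Y - X||_F^2 / 2 + lam * ||X||_*  via singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(1)
M, N, r = 30, 40, 3
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # low-rank signal
Y = X0 + 0.1 * rng.standard_normal((M, N))                      # noisy observation

X_hat = svt(Y, lam=1.0)
print("rank of estimate:", np.linalg.matrix_rank(X_hat))
```

Soft-thresholding shrinks every singular value by lambda and zeroes the small ones, which is what makes the estimate low rank; tuning lambda trades bias against noise suppression.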
|
1302.2339 | Robust Low-Rank LCMV Beamforming Algorithms Based on Joint Iterative
Optimization Strategies | cs.IT math.IT | This chapter presents reduced-rank linearly constrained minimum variance
(LCMV) algorithms based on the concept of joint iterative optimization of
parameters. The proposed reduced-rank scheme is based on a constrained robust
joint iterative optimization (RJIO) of parameters according to the minimum
variance criterion. The robust optimization procedure adjusts the parameters of
a rank-reduction matrix, a reduced-rank beamformer and the diagonal loading in
an alternating manner. LCMV expressions are developed for the design of the
rank-reduction matrix and the reduced-rank beamformer. Stochastic gradient and
recursive least-squares adaptive algorithms are then devised for an efficient
implementation of the RJIO robust beamforming technique. Simulations for an
application in the presence of uncertainties show that the RJIO scheme and
algorithms outperform existing algorithms in convergence and tracking
performance while requiring a comparable complexity.
|
1302.2343 | Adaptive Space-Time Beamforming in Radar Systems | cs.IT math.IT | The goal of this chapter is to review the recent work and advances in the
area of space-time beamforming algorithms and their application to radar
systems. These systems include phased-array \cite{melvin} and multi-input
multi-output (MIMO) radar systems \cite{haimo_08}, mono-static and bi-static
radar systems and other configurations \cite{melvin}. Furthermore, this chapter
also describes in detail some of the most successful space-time beamforming
algorithms that exploit low-rank and sparsity properties as well as the use of
prior knowledge to improve the performance of space-time adaptive processing
(STAP) algorithms in radar systems.
|
1302.2376 | Modeling Morphology of Social Network Cascades | cs.SI physics.soc-ph | Cascades represent an important phenomenon across various disciplines such as
sociology, economics, psychology, political science, marketing, and epidemiology.
An important property of cascades is their morphology, which encompasses the
structure, shape, and size. However, cascade morphology has not been rigorously
characterized and modeled in prior literature. In this paper, we propose a
Multi-order Markov Model for the Morphology of Cascades ($M^4C$) that can
represent and quantitatively characterize the morphology of cascades with
arbitrary structures, shapes, and sizes. $M^4C$ can be used in a variety of
applications to classify different types of cascades. To demonstrate this, we
apply it to an unexplored but important problem in online social networks --
cascade size prediction. Our evaluations using real-world Twitter data show
that the $M^4C$-based cascade size prediction scheme outperforms a baseline
scheme based on cascade graph features such as edge growth rate, degree
distribution, clustering, and diameter. The $M^4C$-based cascade size
prediction scheme consistently achieves more than 90% classification accuracy
under different experimental scenarios.
|
1302.2384 | Efficient Desynchronization of Thermostatically Controlled Loads | math.OC cs.SY | This paper considers demand side management in smart power grid systems
containing significant numbers of thermostatically controlled loads such as air
conditioning systems, heat pumps, etc. Recent studies have shown that the
overall power consumption of such systems can be regulated up and down
centrally by broadcasting small setpoint change commands without significantly
impacting consumer comfort. However, sudden simultaneous setpoint changes
induce undesirable power consumption oscillations due to sudden synchronization
of the on/off cycles of the individual units. In this paper, we present a novel
algorithm for counteracting these unwanted oscillations, which requires
neither central management of the individual units nor communication between
units. We present a formal proof of convergence of homogeneous populations to
desynchronized status, as well as simulations that indicate that the algorithm
is able to effectively dampen power consumption oscillations for both
homogeneous and heterogeneous populations of thermostatically controlled loads.
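To see why simultaneous setpoint changes are a problem, the toy simulation below compares the aggregate power of a population of hysteretic on/off coolers started in identical states versus randomized states. This is not the paper's desynchronization algorithm, only an illustration of the synchronization effect itself, and every parameter value is invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_steps, dt = 200, 2000, 1.0
T_amb, T_lo, T_hi = 32.0, 20.0, 21.0   # ambient temp and thermostat deadband
a, b = 0.001, 0.02                     # heat-leak rate and cooling rate

def simulate(T0, on0):
    T, on = T0.copy(), on0.copy()
    power = np.empty(n_steps)
    for k in range(n_steps):
        T += dt * (a * (T_amb - T) - b * on)            # simple thermal dynamics
        on = np.where(T >= T_hi, 1,                     # turn on at upper bound,
                      np.where(T <= T_lo, 0, on))       # off at lower bound
        power[k] = on.sum()
    return power

# Synchronized: every unit identical -> aggregate power is a square wave.
p_sync = simulate(np.full(n_units, T_hi), np.ones(n_units, dtype=int))
# Desynchronized: random phases within the deadband -> much smoother aggregate.
p_rand = simulate(rng.uniform(T_lo, T_hi, n_units),
                  rng.integers(0, 2, n_units))

print("std of aggregate power, sync vs random:",
      p_sync[500:].std(), p_rand[500:].std())
```

The synchronized population cycles between all-on and all-off, producing the large oscillations the abstract describes, while spreading the on/off phases flattens the aggregate.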
|
1302.2420 | Compressed Sensing with Incremental Sparse Measurements | cs.IT math.IT | This paper proposes a verification-based decoding approach for reconstruction
of a sparse signal with incremental sparse measurements. In its first step, the
verification-based decoding algorithm is employed to reconstruct the signal
with a fixed number of sparse measurements. Often, this step may fail because
the number of measurements is not sufficient, possibly due to an underestimate
of the signal sparsity. However, we observe that even if this first recovery
fails, many component samples of the sparse signal have already been
identified. Hence, it is natural to further employ incremental measurements
tuned to the unidentified samples at known locations. Extensive simulations
show that this approach is very efficient.
|
1302.2427 | Turbo DPSK in Bi-directional Relaying | cs.IT math.IT | In this paper, an iterative differential phase-shift keying (DPSK) demodulation
and channel decoding scheme is investigated for the Joint Channel decoding and
physical-layer Network Coding (JCNC) approach in two-way relaying systems. The
Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm for both coherent and
noncoherent detection is derived for soft-in soft-out decoding of DPSK
signalling over the two-user multiple-access channel with Rayleigh fading.
Then, we propose a pragmatic approach with the JCNC scheme for iteratively
exploiting the extrinsic information of the outer code. With coherent
detection, we show that DPSK can be well concatenated with simple convolutional
codes to achieve excellent coding gain just like in traditional point-to-point
communication scenarios. The proposed noncoherent detection, which essentially
requires only that the channel remain constant over two consecutive symbols, can work
without explicit channel estimation. Simulation results show that the iterative
processing converges very fast and most of the coding gain is obtained within
two iterations.
|
1302.2436 | Extracting useful rules through improved decision tree induction using
information entropy | cs.LG | Classification is a widely used technique in the data mining domain, where
scalability and efficiency are immediate problems for classification
algorithms on large databases. We suggest improvements to the existing C4.5
decision tree algorithm. In this paper, attribute-oriented induction (AOI) and
relevance analysis are incorporated with concept-hierarchy knowledge and the
HeightBalancePriority algorithm for construction of the decision tree, along
with multi-level mining. Priorities are assigned to attributes by evaluating
information entropy at different levels of abstraction while building the
decision tree with the HeightBalancePriority algorithm. Modified DMQL queries
are used to understand and explore the shortcomings of the decision trees
generated by the C4.5 classifier on an education dataset, and the results are
compared with the proposed approach.
|
1302.2465 | RIO: Minimizing User Interaction in Debugging of Knowledge Bases | cs.AI | The best currently known interactive debugging systems rely upon some
meta-information in terms of fault probabilities in order to improve their
efficiency. However, misleading meta-information might result in a dramatic
decrease in performance, and its assessment is only possible a posteriori.
Consequently, as long as the actual fault is unknown, there is always some risk
of suboptimal interactions. In this work we present a reinforcement learning
strategy that continuously adapts its behavior depending on the performance
achieved and minimizes the risk of using low-quality meta-information.
Therefore, this method is suitable for application scenarios where reliable
prior fault estimates are difficult to obtain. Using diverse real-world
knowledge bases, we show that the proposed interactive query strategy is
scalable, features decent reaction time, and outperforms both entropy-based and
no-risk strategies on average w.r.t. required amount of user interaction.
|
1302.2472 | Quantifying the effects of social influence | physics.soc-ph cs.SI | How do humans respond to indirect social influence when making decisions? We
analysed an experiment where subjects had to repeatedly guess the correct
answer to factual questions, while having only aggregated information about the
answers of others. While the response of humans to aggregated information is a
widely observed phenomenon, it has not been investigated quantitatively, in a
controlled setting. We found that the adjustment of individual guesses depends
linearly on the distance to the mean of all guesses. This is a remarkable, and
yet surprisingly simple, statistical regularity. It holds across all questions
analysed, even though the correct answers differ by several orders of
magnitude. Our finding supports the assumption that individual diversity does
not affect the response to indirect social influence. It also complements
previous results on the nonlinear response in information-rich scenarios. We
argue that the nature of the response to social influence crucially changes
with the level of information aggregation. This insight contributes to the
empirical foundation of models for collective decisions under social influence.
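The linear response reported here can be written as g_new = g + c(mean(g) - g) for some sensitivity 0 < c < 1. The toy simulation below (the sensitivity value and initial guess distribution are invented for illustration, not taken from the experiment) shows how repeated application of this rule shrinks the spread of guesses around the group mean while leaving the mean itself unchanged:

```python
import numpy as np

rng = np.random.default_rng(3)
c = 0.3                                                  # sensitivity to the aggregate
guesses = rng.lognormal(mean=5.0, sigma=1.0, size=100)   # initial individual guesses

spreads = []
for _ in range(10):
    spreads.append(guesses.std())
    guesses = guesses + c * (guesses.mean() - guesses)   # linear adjustment toward the mean

print("spread of guesses per round:", [f"{s:.1f}" for s in spreads])
```

Under this rule each round multiplies every deviation from the mean by (1 - c), so the spread decays geometrically, which is the convergence toward the aggregate that social-influence experiments of this kind observe.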
|
1302.2481 | A Lower Bound on the Noncoherent Capacity Pre-log for the MIMO Channel
with Temporally Correlated Fading | cs.IT math.IT | We derive a lower bound on the capacity pre-log of a temporally correlated
Rayleigh block-fading multiple-input multiple-output (MIMO) channel with T
transmit antennas and R receive antennas in the noncoherent setting (no a
priori channel knowledge at the transmitter and the receiver). In this model,
the fading process changes independently across blocks of length L and is
temporally correlated within each block for each transmit-receive antenna pair,
with a given rank Q of the corresponding correlation matrix. Our result implies
that for almost all choices of the coloring matrix that models the temporal
correlation, the pre-log can be lower-bounded by T(1-1/L) for T <= (L-1)/Q
provided that R is sufficiently large. The widely used constant block-fading
model is equivalent to the temporally correlated block-fading model with Q = 1
for the special case when the temporal correlation for each transmit-receive
antenna pair is the same, which is unlikely to be observed in practice. For the
constant block-fading model, the capacity pre-log is given by T(1-T/L), which
is smaller than our lower bound for the case Q = 1. Thus, our result suggests
that the assumptions underlying the constant block-fading model lead to a
pessimistic result for the capacity pre-log.
|
1302.2501 | Optimal Forgery and Suppression of Ratings for Privacy Enhancement in
Recommendation Systems | cs.IT math.IT math.OC | Recommendation systems are information-filtering systems that tailor
information to users on the basis of knowledge about their preferences. The
ability of these systems to profile users is what enables such intelligent
functionality, but at the same time, it is the source of serious privacy
concerns. In this paper we investigate a privacy-enhancing technology that aims
at hindering an attacker in its efforts to accurately profile users based on
the items they rate. Our approach capitalizes on the combination of two
perturbative mechanisms---the forgery and the suppression of ratings. While
this technique enhances user privacy to a certain extent, it inevitably comes
at the cost of a loss in data utility, namely a degradation of the
recommendation's accuracy. In short, it poses a trade-off between privacy and
utility.
The theoretical analysis of said trade-off is the object of this work. We
measure privacy as the Kullback-Leibler divergence between the user's and the
population's item distributions, and quantify utility as the proportion of
ratings users consent to forge and eliminate. Equipped with these quantitative
measures, we find a closed-form solution to the problem of optimal forgery and
suppression of ratings, and characterize the trade-off among privacy, forgery
rate and suppression rate. Experimental results on a popular recommendation
system show how our approach may contribute to privacy enhancement.
|
1302.2512 | Which Boolean Functions are Most Informative? | cs.IT math.IT | We introduce a simply stated conjecture regarding the maximum mutual
information a Boolean function can reveal about noisy inputs. Specifically, let
$X^n$ be i.i.d. Bernoulli(1/2), and let $Y^n$ be the result of passing $X^n$
through a memoryless binary symmetric channel with crossover probability
$\alpha$. For any Boolean function $b:\{0,1\}^n\rightarrow \{0,1\}$, we
conjecture that $I(b(X^n);Y^n)\leq 1-H(\alpha)$. While the conjecture remains
open, we provide substantial evidence supporting its validity.
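The conjectured bound can be checked numerically for small n by exhaustive enumeration. The sketch below (illustrative only, not from the paper) verifies that the dictator function b(x) = x_1 meets the bound with equality, while the majority function falls strictly below it:

```python
import itertools
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info(b, n, alpha):
    """I(b(X^n); Y^n) for X^n ~ Bernoulli(1/2)^n and Y^n = BSC_alpha(X^n)."""
    joint = {}  # (b(x), y) -> probability
    for x in itertools.product((0, 1), repeat=n):
        for y in itertools.product((0, 1), repeat=n):
            flips = sum(xi != yi for xi, yi in zip(x, y))
            p = (0.5 ** n) * alpha ** flips * (1 - alpha) ** (n - flips)
            joint[b(x), y] = joint.get((b(x), y), 0.0) + p
    pb, py = {}, {}
    for (bv, y), p in joint.items():
        pb[bv] = pb.get(bv, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (pb[bv] * py[y]))
               for (bv, y), p in joint.items() if p > 0)

n, alpha = 3, 0.1
dictator = lambda x: x[0]
majority = lambda x: int(sum(x) >= 2)
print(mutual_info(dictator, n, alpha), 1 - H(alpha))  # dictator attains 1 - H(alpha)
print(mutual_info(majority, n, alpha))                # strictly below the bound
```

For the dictator function the remaining coordinates of Y^n are independent of X_1, so I(X_1; Y^n) = I(X_1; Y_1) = 1 - H(alpha) exactly.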
|
1302.2518 | Minimum Dominating Sets in Scale-Free Network Ensembles | physics.soc-ph cond-mat.stat-mech cs.SI | We study the scaling behavior of the size of minimum dominating set (MDS) in
scale-free networks, with respect to network size $N$ and power-law exponent
$\gamma$, while keeping the average degree fixed. We study ensembles generated
by three different network construction methods, and we use a greedy algorithm
to approximate the MDS. With a structural cutoff imposed on the maximal degree
($k_{\max}=\sqrt{N}$) we find linear scaling of the MDS size with respect to
$N$ in all three network classes. Without any cutoff ($k_{\max}=N-1$) two of
the network classes display a transition at $\gamma \approx 1.9$, with linear
scaling above, and vanishingly weak dependence below, but in the third network
class we find linear scaling irrespective of $\gamma$. We find that the partial
MDS, which dominates a given $z<1$ fraction of nodes, displays essentially the
same scaling behavior as the MDS.
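A greedy MDS approximation of the kind used here repeatedly picks the node that newly dominates the most not-yet-dominated nodes. A minimal sketch on an adjacency-list graph (the example graph and the tie-breaking rule are ours, not the paper's):

```python
def greedy_mds(adj):
    """Greedy approximation of a minimum dominating set.

    adj: dict mapping each node to the set of its neighbors.
    Returns a set S such that every node is in S or adjacent to a node in S.
    """
    undominated = set(adj)
    dominating = set()
    while undominated:
        # Pick the node whose closed neighborhood covers the most undominated nodes.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dominating.add(best)
        undominated -= {best} | adj[best]
    return dominating

# A small star-plus-path example: hub 0 dominates 1-4; nodes 5-6 hang off 4.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0},
       4: {0, 5}, 5: {4, 6}, 6: {5}}
mds = greedy_mds(adj)
print(mds)
```

On scale-free networks without a degree cutoff, the first greedy picks are the hubs, which is what drives the weak size dependence the abstract reports for low gamma.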
|
1302.2550 | Online Regret Bounds for Undiscounted Continuous Reinforcement Learning | cs.LG | We derive sublinear regret bounds for undiscounted reinforcement learning in
continuous state space. The proposed algorithm combines state aggregation with
the use of upper confidence bounds for implementing optimism in the face of
uncertainty. Beside the existence of an optimal policy which satisfies the
Poisson equation, the only assumptions made are Hölder continuity of rewards
and transition probabilities.
|
1302.2552 | Selecting the State-Representation in Reinforcement Learning | cs.LG | The problem of selecting the right state-representation in a reinforcement
learning problem is considered. Several models (functions mapping past
observations to a finite set) of the observations are given, and it is known
that for at least one of these models the resulting state dynamics are indeed
Markovian. Without knowing which of the models is the correct one, or what
the probabilistic characteristics of the resulting MDP are, it is required
to obtain as much reward as the optimal policy for the correct model (or for
the best of the correct models, if there are several). We propose an algorithm
that achieves that, with a regret of order T^{2/3} where T is the horizon time.
|
1302.2553 | Optimal Regret Bounds for Selecting the State Representation in
Reinforcement Learning | cs.LG | We consider an agent interacting with an environment in a single stream of
actions, observations, and rewards, with no reset. This process is not assumed
to be a Markov Decision Process (MDP). Rather, the agent has several
representations (mapping histories of past interactions to a discrete state
space) of the environment with unknown dynamics, only some of which result in
an MDP. The goal is to minimize the average regret criterion against an agent
who knows an MDP representation giving the highest optimal reward, and acts
optimally in it. Recent regret bounds for this setting are of order
$O(T^{2/3})$ with an additive term constant yet exponential in some
characteristics of the optimal MDP. We propose an algorithm whose regret after
$T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is
optimal in $T$ since $O(\sqrt{T})$ is the optimal regret in the setting of
learning in a (single discrete) MDP.
|
1302.2563 | Temporal motifs reveal homophily, gender-specific patterns and group
talk in mobile communication networks | physics.soc-ph cs.SI physics.data-an | Electronic communication records provide detailed information about temporal
aspects of human interaction. Previous studies have shown that individuals'
communication patterns have complex temporal structure, and that this structure
has system-wide effects. In this paper we use mobile phone records to show that
interaction patterns involving multiple individuals have non-trivial temporal
structure that cannot be deduced from a network presentation where only
interaction frequencies are taken into account. We apply a recently introduced
method, temporal motifs, to identify interaction patterns in a temporal network
where nodes have additional attributes such as gender and age. We then develop
a null model that allows identifying differences between various types of nodes
so that these differences are independent of the network based on interaction
frequencies. We find gender-related differences in communication patterns, and
show the existence of temporal homophily, the tendency of similar individuals
to participate in interaction patterns beyond what would be expected on the
basis of the network structure alone. We also show that temporal patterns
differ between dense and sparse parts of the network. Because this result is
independent of edge weights, it can be considered as an extension of
Granovetter's hypothesis to temporal networks.
|
1302.2569 | Toric grammars: a new statistical approach to natural language modeling | stat.ML cs.CL math.PR | We propose a new statistical model for computational linguistics. Rather than
trying to estimate directly the probability distribution of a random sentence
of the language, we define a Markov chain on finite sets of sentences with many
finite recurrent communicating classes and define our language model as the
invariant probability measures of the chain on each recurrent communicating
class. This Markov chain, which we call a communication model, randomly
recombines at each step the set of sentences forming its current state, using
some grammar rules. When the grammar rules are fixed and known in advance instead of
being estimated on the fly, we can prove supplementary mathematical properties.
In particular, we can prove in this case that all states are recurrent states,
so that the chain defines a partition of its state space into finite recurrent
communicating classes. We show that our approach is a decisive departure from
Markov models at the sentence level and discuss its relationships with Context
Free Grammars. Although the toric grammars we use are closely related to
Context Free Grammars, the way we generate the language from the grammar is
qualitatively different. Our communication model has two purposes. On the one
hand, it is used to define indirectly the probability distribution of a random
sentence of the language. On the other hand, it can serve as a (crude) model of
language transmission from one speaker to another speaker through the
communication of a (large) set of sentences.
|
1302.2575 | Coded aperture compressive temporal imaging | cs.CV cs.IT math.IT | We use mechanical translation of a coded aperture for code division multiple
access compression of video. We present experimental results for reconstruction
at 148 frames per coded snapshot.
|
1302.2576 | The trace norm constrained matrix-variate Gaussian process for multitask
bipartite ranking | cs.LG stat.ML | We propose a novel hierarchical model for multitask bipartite ranking. The
proposed approach combines a matrix-variate Gaussian process with a generative
model for task-wise bipartite ranking. In addition, we employ a novel trace
constrained variational inference approach to impose low rank structure on the
posterior matrix-variate Gaussian process. The resulting posterior covariance
function is derived in closed form, and the posterior mean function is the
solution to a matrix-variate regression with a novel spectral elastic net
regularizer. Further, we show that variational inference for the trace
constrained matrix-variate Gaussian process combined with maximum likelihood
parameter estimation for the bipartite ranking model is jointly convex. Our
motivating application is the prioritization of candidate disease genes. The
goal of this task is to aid the identification of unobserved associations
between human genes and diseases using a small set of observed associations as
well as kernels induced by gene-gene interaction networks and disease
ontologies. Our experimental results illustrate the performance of the proposed
model on real world datasets. Moreover, we find that the resulting low rank
solution improves the computational scalability of training and testing as
compared to baseline models.
|
1302.2606 | A new bio-inspired method for remote sensing imagery classification | cs.NE cs.CV | The problem of supervised classification of a satellite image is considered
as the task of grouping pixels into a number of regions homogeneous in
intensity space. This paper proposes a novel approach that combines a radial
basis function clustering network with a growing-neural-gas-with-utility
classifier to yield solutions improved over those obtained with previous
networks. The two-objective technique is first used to develop a method for
satellite image classification, and then to address the issue of the number of
nodes in the hidden layer of the classic radial basis function network.
Results demonstrating the effectiveness of the proposed technique are provided
for numeric remote sensing imagery. Moreover, a remotely sensed image of the
city of Oran, Algeria, has been classified using the proposed technique to
establish its utility.
|
1302.2615 | Assessing Semantic Quality of Web Directory Structure | cs.IR cs.DL | The administration of a Web directory's content and associated structure is a
labor-intensive task performed by human domain experts. Because of this, there
is always a realistic risk of the structure becoming unbalanced, uneven, and
difficult to use for all but a few users proficient in a particular Web
directory. These problems emphasize the importance of generic and objective
measures of Web directory structure quality. In this paper we demonstrate how
to formally merge Web directories into the Semantic Web vision. We introduce a
set of objective criteria for evaluation of a Web directory's structure
quality. Some criterion functions are based on heuristics while others require
the application of ontologies.
|
1302.2645 | Geometrical complexity of data approximators | stat.ML cs.LG | There are many methods developed to approximate a cloud of vectors embedded
in high-dimensional space by simpler objects: starting from principal points
and linear manifolds to self-organizing maps, neural gas, elastic maps, various
types of principal curves and principal trees, and so on. For each type of
approximator, a corresponding measure of complexity has also been developed.
These measures are necessary to find the balance between accuracy and
complexity and to define the optimal approximations of a given type. We propose
a measure of complexity (geometrical complexity) which is applicable to
approximators of several types and which allows comparing data approximations
of different types.
|
1302.2654 | Enabling Secure Database as a Service using Fully Homomorphic
Encryption: Challenges and Opportunities | cs.DB cs.CR | The database community, at least for the last decade, has been grappling with
querying encrypted data, which would enable secure database as a service
solutions. A recent breakthrough in the cryptographic community (in 2009)
related to fully homomorphic encryption (FHE) showed that arbitrary computation
on encrypted data is possible. Successful adoption of FHE for query processing
is, however, still a distant dream, and numerous challenges have to be
addressed. One challenge is how to perform algebraic query processing of
encrypted data, where we produce encrypted intermediate results and operations
on encrypted data can be composed. In this paper, we describe our solution for
algebraic query processing of encrypted data, and also outline several other
challenges that need to be addressed, while also describing the lessons that
can be learnt from a decade of work by the database community in querying
encrypted data.
|
1302.2671 | Latent Self-Exciting Point Process Model for Spatial-Temporal Networks | cs.SI cs.LG stat.ML | We propose a latent self-exciting point process model that describes
geographically distributed interactions between pairs of entities. In contrast
to most existing approaches that assume fully observable interactions, here we
consider a scenario where certain interaction events lack information about
participants. Instead, this information needs to be inferred from the available
observations. We develop an efficient approximate algorithm based on
variational expectation-maximization to infer unknown participants in an event
given the location and the time of the event. We validate the model on
synthetic as well as real-world data, and obtain very promising results on the
identity-inference task. We also use our model to predict the timing and
participants of future events, and demonstrate that it compares favorably with
baseline approaches.
|
1302.2672 | Competing With Strategies | stat.ML cs.GT cs.LG | We study the problem of online learning with a notion of regret defined with
respect to a set of strategies. We develop tools for analyzing the minimax
rates and for deriving regret-minimization algorithms in this scenario. While
the standard methods for minimizing the usual notion of regret fail, through
our analysis we demonstrate the existence of regret-minimization methods that
compete with such sets of strategies as: autoregressive algorithms, strategies
based on statistical models, regularized least squares, and follow the
regularized leader strategies. In several cases we also derive efficient
learning algorithms.
|
1302.2684 | A Tensor Approach to Learning Mixed Membership Community Models | cs.LG cs.SI stat.ML | Community detection is the task of detecting hidden communities from observed
interactions. Guaranteed community detection has so far been mostly limited to
models with non-overlapping communities such as the stochastic block model. In
this paper, we remove this restriction, and provide guaranteed community
detection for a family of probabilistic network models with overlapping
communities, termed the mixed membership Dirichlet model, first introduced
by Airoldi et al. This model allows for nodes to have fractional memberships in
multiple communities and assumes that the community memberships are drawn from
a Dirichlet distribution. Moreover, it contains the stochastic block model as a
special case. We propose a unified approach to learning these models via a
tensor spectral decomposition method. Our estimator is based on a low-order
moment tensor of the observed network, consisting of 3-star counts. Our
learning method is fast and is based on simple linear algebraic operations,
e.g. singular value decomposition and tensor power iterations. We provide
guaranteed recovery of community memberships and model parameters and present a
careful finite sample analysis of our learning method. As an important special
case, our results match the best known scaling requirements for the
(homogeneous) stochastic block model.
|
1302.2702 | On the Capacity of Channels with Timing Synchronization Errors | cs.IT math.IT | We consider a new formulation of a class of synchronization error channels
and derive analytical bounds and numerical estimates for the capacity of these
channels. For the binary channel with only deletions, we obtain an expression
for the symmetric information rate in terms of subsequence weights which
reduces to a tight lower bound for small deletion probabilities. We are also
able to exactly characterize the Markov-1 rate for the binary channel with only
replications. For a channel that introduces deletions as well as replications
of input symbols, we design approximating channels that parameterize the state
space and show that the information rates of these approximate channels
approach that of the deletion-replication channel as the state space grows. For
the case of the channel where deletions and replications occur with the same
probabilities, a stronger result in the convergence of mutual information rates
is shown. The numerous advantages this new formulation presents are explored.
|
1302.2712 | Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI | cs.CV physics.med-ph stat.AP | We develop a Bayesian nonparametric model for reconstructing magnetic
resonance images (MRI) from highly undersampled k-space data. We perform
dictionary learning as part of the image reconstruction process. To this end,
we use the beta process as a nonparametric dictionary learning prior for
representing an image patch as a sparse combination of dictionary elements. The
size of the dictionary and the patch-specific sparsity pattern are inferred
from the data, in addition to other dictionary learning variables. Dictionary
learning is performed directly on the compressed image, and so is tailored to
the MRI being considered. In addition, we investigate a total variation penalty
term in combination with the dictionary learning model, and show how the
denoising property of dictionary learning removes dependence on regularization
parameters in the noisy setting. We derive a stochastic optimization algorithm
based on Markov Chain Monte Carlo (MCMC) for the Bayesian model, and use the
alternating direction method of multipliers (ADMM) for efficiently performing
total variation minimization. We present empirical results on several MRIs,
which show that the proposed regularization framework can improve
reconstruction accuracy over other methods.
|
1302.2752 | Adaptive Metric Dimensionality Reduction | cs.LG cs.DS stat.ML | We study adaptive data-dependent dimensionality reduction in the context of
supervised learning in general metric spaces. Our main statistical contribution
is a generalization bound for Lipschitz functions in metric spaces that are
doubling, or nearly doubling. On the algorithmic front, we describe an analogue
of PCA for metric spaces: namely an efficient procedure that approximates the
data's intrinsic dimension, which is often much lower than the ambient
dimension. Our approach thus leverages the dual benefits of low dimensionality:
(1) more efficient algorithms, e.g., for proximity search, and (2) more
optimistic generalization bounds.
|
1302.2767 | Coherence and sufficient sampling densities for reconstruction in
compressed sensing | cs.LG cs.IT math.AG math.IT stat.ML | We give a new, very general, formulation of the compressed sensing problem in
terms of coordinate projections of an analytic variety, and derive sufficient
sampling rates for signal reconstruction. Our bounds are linear in the
coherence of the signal space, a geometric parameter independent of the
specific signal and measurement, and logarithmic in the ambient dimension where
the signal is presented. We exemplify our approach by deriving sufficient
sampling densities for low-rank matrix completion and distance matrix
completion which are independent of the true matrix.
|
1302.2787 | Acquaintance Time of a Graph | cs.CC cs.DS cs.SI math.CO | We define the following parameter of connected graphs. For a given graph $G$
we place one agent in each vertex of $G$. Every pair of agents sharing a common
edge is declared to be acquainted. In each round we choose some matching of $G$
(not necessarily a maximal matching), and for each edge in the matching the
agents on this edge swap places. After the swap, again, every pair of agents
sharing a common edge become acquainted, and the process continues. We define
the \emph{acquaintance time} of a graph $G$, denoted by $AC(G)$, to be the
minimal number of rounds required until every two agents are acquainted.
We first study the acquaintance time for some natural families of graphs
including the path, expanders, the binary tree, and the complete bipartite
graph. We also show that for all positive integers $n$ and $k \leq n^{1.5}$
there exists an $n$-vertex graph $G$ such that $AC(G) =\Theta(k)$. We also
prove that for all $n$-vertex connected graphs $G$ we have $AC(G) =
O\left(\frac{n^2}{\log(n)/\log\log(n)}\right)$, improving the $O(n^2)$ trivial
upper bound achieved by sequentially letting each agent perform depth-first
search along a spanning tree of $G$.
Studying the computational complexity of this problem, we prove that for any
constant $t \geq 1$ the problem of deciding whether a given graph $G$ has $AC(G)
\leq t$ or $AC(G) \geq 2t$ is $\mathcal{NP}$-complete. That is, $AC(G)$ is
$\mathcal{NP}$-hard to approximate within multiplicative factor of 2, as well
as within any additive constant factor.
On the algorithmic side, we give a deterministic algorithm that given a graph
$G$ with $AC(G)=1$ finds a ${\lceil n/c\rceil}$-rounds strategy for
acquaintance in time $n^{c+O(1)}$. We also design a randomized polynomial time
algorithm that given a graph $G$ with $AC(G)=1$ finds with high probability an
$O(\log(n))$-rounds strategy for acquaintance.
|
1302.2820 | Linear and Geometric Mixtures - Analysis | cs.IT math.IT | Linear and geometric mixtures are two methods to combine arbitrary models in
data compression. Geometric mixtures generalize the empirically well-performing
PAQ7 mixture. Both mixture schemes rely on weight vectors, which heavily
determine their performance. Typically weight vectors are identified via Online
Gradient Descent. In this work we show that one can obtain strong code length
bounds for such a weight estimation scheme. These bounds hold for arbitrary
input sequences. For this purpose we introduce the class of nice mixtures and
analyze how Online Gradient Descent with a fixed step size combined with a nice
mixture performs. These results translate to linear and geometric mixtures,
which are nice, as we show. The results hold for PAQ7 mixtures as well, thus we
provide the first theoretical analysis of PAQ7.
|
1302.2828 | Multi-agent RRT*: Sampling-based Cooperative Pathfinding (Extended
Abstract) | cs.RO cs.AI cs.MA | Cooperative pathfinding is a problem of finding a set of non-conflicting
trajectories for a number of mobile agents. Its applications include planning
for teams of mobile robots, such as autonomous aircraft, cars, or underwater
vehicles. The state-of-the-art algorithms for cooperative pathfinding typically
rely on some heuristic forward-search pathfinding technique, where A* is often
the algorithm of choice. Here, we propose MA-RRT*, a novel algorithm for
multi-agent path planning that builds upon a recently proposed
asymptotically-optimal sampling-based algorithm for finding single-agent
shortest paths, called RRT*. We experimentally evaluate the performance of the
algorithm and show that the sampling-based approach offers better scalability
than the classical forward-search approach in relatively large, but sparse
environments, which are typical in real-world applications such as
multi-aircraft collision avoidance.
|
1302.2839 | Mixing Strategies in Data Compression | cs.IT math.IT | We propose geometric weighting as a novel method to combine multiple models
in data compression. Our results reveal the rationale behind PAQ-weighting and
generalize it to a non-binary alphabet. Based on a similar technique we present
a new, generic linear mixture technique. All novel mixture techniques rely on
given weight vectors. We consider the problem of finding optimal weights and
show that the weight optimization leads to a strictly convex (and thus,
good-natured) optimization problem. Finally, an experimental evaluation
compares the two presented mixture techniques for a binary alphabet. The
results indicate that geometric weighting is superior to linear weighting.
|
1302.2855 | Polar-Coded Modulation | cs.IT math.IT | A framework is proposed that allows for a joint description and optimization
of both binary polar coding and $2^m$-ary digital pulse-amplitude modulation
(PAM) schemes such as multilevel coding (MLC) and bit-interleaved coded
modulation (BICM). The conceptual equivalence of polar coding and multilevel
coding is pointed out in detail. Based on a novel characterization of the
channel polarization phenomenon, rules for the optimal choice of the labeling
in coded modulation schemes employing polar codes are developed. Simulation
results regarding the error performance of the proposed schemes on the AWGN
channel are included.
|
1302.2856 | Combining non-stationary prediction, optimization and mixing for data
compression | cs.IT math.IT | In this paper an approach to modelling nonstationary binary sequences, i.e.,
predicting the probability of upcoming symbols, is presented. After studying
the prediction model we evaluate its performance in two non-artificial test
cases. First the model is compared to the Laplace and Krichevsky-Trofimov
estimators. Secondly a statistical ensemble model for compressing
Burrows-Wheeler-Transform output is worked out and evaluated. A systematic
approach to optimizing the parameters of an individual model and of the
ensemble model is also presented.
|
1302.2875 | Information Transmission using the Nonlinear Fourier Transform, Part
III: Spectrum Modulation | cs.IT math.IT | Motivated by the looming "capacity crunch" in fiber-optic networks,
information transmission over such systems is revisited. Among numerous
distortions, inter-channel interference in multiuser wavelength-division
multiplexing (WDM) is identified as the seemingly intractable factor limiting
the achievable rate at high launch power. However, this distortion and similar
ones arising from nonlinearity are primarily due to the use of methods suited
for linear systems, namely WDM and linear pulse-train transmission, for the
nonlinear optical channel. Exploiting the integrability of the nonlinear
Schr\"odinger (NLS) equation, a nonlinear frequency-division multiplexing
(NFDM) scheme is presented, which directly modulates non-interacting signal
degrees-of-freedom under NLS propagation. The main distinction between this and
previous methods is that NFDM is able to cope with the nonlinearity, and thus,
as the signal power or transmission distance is increased, the new method
does not suffer from the deterministic cross-talk between signal components
which has degraded the performance of previous approaches. In this paper,
emphasis is placed on modulation of the discrete component of the nonlinear
Fourier transform of the signal and some simple examples of achievable spectral
efficiencies are provided.
|
1302.2937 | The Biological Origin of Linguistic Diversity | physics.soc-ph cs.MA q-bio.PE | In contrast with animal communication systems, diversity is characteristic of
almost every aspect of human language. Languages variously employ tones,
clicks, or manual signs to signal differences in meaning; some languages lack
the noun-verb distinction (e.g., Straits Salish), whereas others have a
proliferation of fine-grained syntactic categories (e.g., Tzeltal); and some
languages do without morphology (e.g., Mandarin), while others pack a whole
sentence into a single word (e.g., Cayuga). A challenge for evolutionary
biology is to reconcile the diversity of languages with the high degree of
biological uniformity of their speakers. Here, we model processes of language
change and geographical dispersion and find a consistent pressure for flexible
learning, irrespective of the language being spoken. This pressure arises
because flexible learners can best cope with the observed high rates of
linguistic change associated with divergent cultural evolution following human
migration. Thus, rather than genetic adaptations for specific aspects of
language, such as recursion, the coevolution of genes and fast-changing
linguistic structure provides the biological basis for linguistic diversity.
Only biological adaptations for flexible learning combined with cultural
evolution can explain how each child has the potential to learn any human
language.
|
1302.2966 | The Family of MapReduce and Large Scale Data Processing Systems | cs.DB | In the last two decades, the continuous increase of computational power has
produced an overwhelming flow of data which has called for a paradigm shift in
the computing architecture and large scale data processing mechanisms.
MapReduce is a simple and powerful programming model that enables easy
development of scalable parallel applications to process vast amounts of data
on large clusters of commodity machines. It isolates the application from the
details of running a distributed program such as issues on data distribution,
scheduling and fault tolerance. However, the original implementation of the
MapReduce framework had some limitations that have been tackled by many
research efforts in several followup works after its introduction. This article
provides a comprehensive survey of a family of approaches and mechanisms for
large-scale data processing that have been implemented based on the
original idea of the MapReduce framework and are currently gaining a lot of
momentum in both research and industrial communities. We also cover a set of
introduced systems that have been implemented to provide declarative
programming interfaces on top of the MapReduce framework. In addition, we
review several large scale data processing systems that resemble some of the
ideas of the MapReduce framework for different purposes and application
scenarios. Finally, we discuss some of the future research directions for
implementing the next generation of MapReduce-like solutions.
|
1302.2994 | Equivalence of Two Proof Techniques for Non-Shannon-type Inequalities | cs.IT math.IT math.PR | We compare two different techniques for proving non-Shannon-type information
inequalities. The first one is the original Zhang-Yeung's method, commonly
referred to as the copy/pasting lemma/trick. The copy lemma was used to derive
the first conditional and unconditional non-Shannon-type inequalities. The
second technique first appeared in the paper of Makarychev et al. [7] and is
based on a coding lemma from the works of Ahlswede and K\"orner. We first emphasize the
importance of balanced inequalities and provide a simpler proof of a theorem of
Chan's for the case of Shannon-type inequalities. We compare the power of
various proof systems based on a single technique.
|
1302.3020 | Output Filter Aware Optimization of the Noise Shaping Properties of
{\Delta}{\Sigma} Modulators via Semi-Definite Programming | cs.IT math.IT | The Noise Transfer Function (NTF) of {\Delta}{\Sigma} modulators is typically
designed after the features of the input signal. We suggest that in many
applications, and notably those involving D/D and D/A conversion or actuation,
the NTF should instead be shaped after the properties of the
output/reconstruction filter. To this aim, we propose a framework for optimal
design based on the Kalman-Yakubovich-Popov (KYP) lemma and semi-definite
programming. Some examples illustrate how in practical cases the proposed
strategy can outperform more standard approaches.
|
1302.3033 | Structural Diversity for Resisting Community Identification in Published
Social Networks | cs.SI cs.DS | As increasing amounts of social network data are published and shared for
commercial and research purposes, privacy issues concerning the individuals in
social networks have become a serious concern. Vertex identification, which
identifies a particular user from a network based on background knowledge such
as vertex degree, is one of the most important problems that has been
addressed. In reality, however, each individual in a social network is inclined
to be associated with not only a vertex identity but also a community identity,
which can represent the personal privacy information sensitive to the public,
such as political party affiliation. This paper first addresses the new privacy
issue, referred to as community identification, by showing that the community
identity of a victim can still be inferred even though the social network is
protected by existing anonymity schemes. For this problem, we then propose the
concept of \textit{structural diversity} to provide the anonymity of the
community identities. The $k$-Structural Diversity Anonymization ($k$-SDA) scheme
ensures that vertices of the same vertex degree appear in at least $k$
communities in a social network. We propose an Integer Programming formulation
to find optimal solutions to $k$-SDA and also devise scalable heuristics to
solve large-scale instances of $k$-SDA from different perspectives. The
performance studies on real data sets from various perspectives demonstrate the
practical utility of the proposed privacy scheme and our anonymization
approaches.
|
1302.3051 | Some Properties of Generalized Self-reciprocal Polynomials over Finite
Fields | math.NT cs.IT math.IT math.RA | Numerous results on self-reciprocal polynomials over finite fields have been
studied. In this paper we generalize some of these to the a-self-reciprocal
polynomials defined in [4]. We consider some divisibility properties of
a-self-reciprocal polynomials and characterize the parity of the number of
irreducible factors of a-self-reciprocal polynomials over finite fields of odd
characteristic.
|
1302.3057 | Building a reordering system using tree-to-string hierarchical model | cs.CL | This paper describes our submission to the First Workshop on Reordering for
Statistical Machine Translation. We have decided to build a reordering system
based on a tree-to-string model, using only publicly available tools to
accomplish this task. With the provided training data we have built a
translation model using Moses toolkit, and then we applied a chart decoder,
implemented in Moses, to reorder the sentences. Even though our submission only
covered English-Farsi language pair, we believe that the approach itself should
work regardless of the choice of the languages, so we have also carried out the
experiments for English-Italian and English-Urdu. For these language pairs we
have noticed a significant improvement over the baseline in BLEU, Kendall-Tau
and Hamming metrics. A detailed description is given, so that everyone can
reproduce our results. Also, some possible directions for further improvements
are discussed.
|
1302.3086 | Viral spread with or without emotions in online community | cs.SI nlin.AO physics.soc-ph | Diffusion of information and viral content, social contagion, and influence
are still topics of broad study. We have studied information epidemics
on a social networking platform in order to compare different campaign setups.
The goal of this work is to present the new knowledge obtained from studying
two artificial (experimental) viral spreads and one natural one (in which
people acted emotionally), all of which took place in a closed virtual world.
We propose an approach that models the behavior of an online community exposed
to external impulses as an epidemic process. The presented results are based on
observation of an online multilayer system and show characteristic differences
between the setups; moreover, some important aspects of branching processes are
presented. We ran experiments in which we introduced a viral into the system
and agents were able to propagate it, in two modes: with or without a reward.
The spreading dynamics of both virals were described by an epidemiological
model and by diffusion. The experimental results were compared with a real
propagation process: the spontaneous mobilization against ACTA. During the
nationwide protest against ACTA, a new multinational anti-piracy agreement
criticized for its adverse effects on, e.g., freedom of expression and privacy
of communication, members of the studied community could send a viral such as a
Stop-ACTA transparent. In this scenario we were able to capture the behavior of
society when real emotions play a role, and to compare the results with the
artificially conditioned experiments. Moreover, we could measure the effect of
emotions on viral propagation. Consistent with theories holding that emotions
make messages better targeted and that individuals spread emotionally oriented
content more carefully and more influentially, the experiments show that the
probabilities of secondary infections are four times higher when emotions play
a role.
|
1302.3101 | Trend prediction in temporal bipartite networks: the case of Movielens,
Netflix, and Digg | cs.SI physics.soc-ph | Online systems where users purchase or collect items of some kind can be
effectively represented by temporal bipartite networks where both nodes and
links are added with time. We use this representation to predict which items
might become popular in the near future. Various prediction methods are
evaluated on three distinct datasets originating from popular online services
(Movielens, Netflix, and Digg). We show that the prediction performance can be
further enhanced if the user social network is known and centrality of
individual users in this network is used to weight their actions.
|
1302.3110 | Concatenated Capacity-Achieving Polar Codes for Optical Quantum Channels | quant-ph cs.IT math.IT | We construct concatenated capacity-achieving quantum codes for noisy optical
quantum channels. We demonstrate that the error-probability of
capacity-achieving quantum polar encoding can be reduced by the proposed
low-complexity concatenation scheme.
|
1302.3114 | Polaractivation of Hidden Private Classical Capacity Region of Quantum
Channels | quant-ph cs.IT math.IT | We define a new phenomenon for communication over noisy quantum channels. The
investigated solution, called polaractivation, is based on quantum polar
encoding. Polaractivation is a natural consequence of the channel polarization
effect in quantum systems and makes it possible to open the hidden capacity
regions of a noisy quantum channel by using the idea of rate increment. While
in the case of a classical channel only the rate of classical communication can
be increased, in the case of a quantum channel the channel polarization and the
rate improvement can be exploited to open unreachable capacity regions. We
demonstrate the results by opening the private classical capacity-domain.
We prove that the method works for arbitrary quantum channels if a given
criterion on the symmetric classical capacity is satisfied. We also derive a
necessary lower bound on the rate of classical communication for the
polaractivation of the private classical capacity-domain.
|
1302.3118 | The Correlation Conversion Property of Quantum Channels | quant-ph cs.IT math.IT | Transmission of quantum entanglement will play a crucial role in future
networks and long-distance quantum communications. Quantum Key Distribution,
the working mechanism of quantum repeaters and the various quantum
communication protocols are all based on quantum entanglement. On the other
hand, quantum entanglement is extremely fragile and sensitive to the noise of
the communication channel over which it has been transmitted. To share
entanglement between distant points, high fidelity quantum channels are needed.
In practice, these communication links are noisy, which makes it impossible or
extremely difficult and expensive to distribute entanglement. In this work we
first show that quantum entanglement can be generated by a new idea, exploiting
the most natural effect of the communication channels: the noise itself of the
link. We prove that the noise transformation of quantum channels that are not
able to transmit quantum entanglement can be used to generate distillable
(useable) entanglement from classically correlated input. We call this new
phenomenon the Correlation Conversion property (CC-property) of quantum
channels. The proposed solution does not require any non-local operation or
local measurement by the parties, only the use of standard quantum channels.
Our results have implications and consequences for the future of quantum
communications, and for global-scale quantum communication networks. The
discovery also revealed that entanglement generation by local operations is
possible.
|
1302.3119 | Comparison and analysis of photo image forgery detection techniques | cs.CV cs.CR cs.MM | Digital photo images are everywhere: on the covers of magazines, in
newspapers, in courtrooms, and all over the Internet. We are exposed to them
throughout the day. Given the ease with which images can be manipulated, we
need to be aware that seeing does not always imply believing. We propose
methodologies to identify such untrustworthy photo images, and we succeed in
identifying the forged region given only the forged image. File formats add
tags in every file system, and contents are expressed through extensions; the
most popular digital cameras use JPEG, alongside other image formats such as
PNG, BMP, etc. We have designed an algorithm, built around the concept of
abnormal anomalies, that identifies the forged regions.
|
1302.3120 | Fast Compressed Sensing SAR Imaging based on Approximated Observation | cs.IT math.IT | In recent years, compressed sensing (CS) has been applied in the field of
synthetic aperture radar (SAR) imaging and shows great potential. The existing
models are, however, based on application of the sensing matrix acquired by the
exact observation functions. As a result, the corresponding reconstruction
algorithms are much more time consuming than traditional matched filter (MF)
based focusing methods, especially in high resolution and wide swath systems.
In this paper, we formulate a new CS-SAR imaging model based on the use of an
approximated SAR observation deduced from the inverse of the focusing
procedures. We incorporate CS and MF within a sparse regularization framework
that is then solved by a fast iterative thresholding algorithm. The proposed
model forms a new CS-SAR imaging method that can be applied to high-quality and
high-resolution imaging under sub-Nyquist rate sampling, while substantially
reducing the computational cost in both time and memory. Simulations and real
SAR data applications support that the proposed method can perform SAR imaging
effectively and efficiently under sub-Nyquist rates, especially for large-scale
applications.
|
1302.3123 | An Analysis of Gene Expression Data using Penalized Fuzzy C-Means
Approach | cs.CV cs.CE | With the rapid advances of microarray technologies, large amounts of
high-dimensional gene expression data are being generated, which poses
significant computational challenges. A first step towards addressing this
challenge is the use of clustering techniques, which is essential in the data
mining process to reveal natural structures and identify interesting patterns
in the underlying data. A robust gene expression clustering approach to
minimize undesirable clustering is proposed. In this paper, Penalized Fuzzy
C-Means (PFCM) Clustering algorithm is described and compared with the most
representative off-line clustering techniques: K-Means Clustering, Rough
K-Means Clustering and Fuzzy C-Means clustering. These techniques are
implemented and tested for a Brain Tumor gene expression Dataset. Analysis of
the performance of the proposed approach is presented through qualitative
validation experiments. From the experimental results, it can be observed that
the Penalized Fuzzy C-Means algorithm shows much higher usability than the
other clustering algorithms used in our comparison study. Significant and
promising clustering results are presented using Brain Tumor Gene expression
dataset. Thus patterns seen in genome-wide expression experiments can be
interpreted as indications of the status of cellular processes. In these
clustering results, we find that Penalized Fuzzy C-Means algorithm provides
useful information as an aid to diagnosis in oncology.
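For context, plain fuzzy c-means can be sketched as below; PFCM differs by adding a penalty term to this objective, which is not reproduced here:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; PFCM augments this objective with a penalty term."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m                                # fuzzified membership weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))          # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Hard cluster labels, when needed, are obtained from `U.argmax(axis=1)`.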
|
1302.3126 | Is Europe Evolving Toward an Integrated Research Area? | physics.soc-ph cs.DL cs.SI physics.data-an | An integrated European Research Area (ERA) is a critical component for a more
competitive and open European R&D system. However, the impact of EU-specific
integration policies aimed at overcoming innovation barriers associated with
national borders is not well understood. Here we analyze 2.4 x 10^6 patent
applications filed with the European Patent Office (EPO) over the 25-year
period 1986-2010 along with a sample of 2.6 x 10^5 records from the ISI Web of
Science to quantitatively measure the role of borders in international R&D
collaboration and mobility. From these data we construct five different
networks for each year analyzed: (i) the patent co-inventor network, (ii) the
publication co-author network, (iii) the co-applicant patent network, (iv) the
patent citation network, and (v) the patent mobility network. We use methods
from network science and econometrics to perform a comparative analysis across
time and between EU and non-EU countries to determine the "treatment effect"
resulting from EU integration policies. Using non-EU countries as a control
set, we provide quantitative evidence that, despite decades of efforts to build
a European Research Area, there has been little integration above global trends
in patenting and publication. This analysis provides concrete evidence that
Europe remains a collection of national innovation systems.
|
1302.3155 | Morphological Analysis Of The Left Ventricular Endocardial Surface
Using A Bag-Of-Features Descriptor | cs.CV | The limitations of conventional imaging techniques have hitherto precluded a
thorough and formal investigation of the complex morphology of the left
ventricular (LV) endocardial surface and its relation to the severity of
Coronary Artery Disease (CAD). Recent developments in high-resolution
Multirow-Detector Computed Tomography (MDCT) scanner technology have enabled
the imaging of LV endocardial surface morphology in a single heart beat.
Analysis of high-resolution Computed Tomography (CT) images from a 320-MDCT
scanner allows the study of the relationship between percent Diameter Stenosis
(DS) of the major coronary arteries and localization of the cardiac segments
affected by coronary arterial stenosis. In this paper, a novel analysis
approach using a combination of rigid-transformation-invariant shape
descriptors and a more general isometry-invariant Bag-of-Features (BoF)
descriptor is proposed and implemented. The proposed approach is shown to be
successful in identifying, localizing and quantifying the incidence and extent
of CAD and thus, is seen to have a potentially significant clinical impact.
Specifically, the association between the incidence and extent of CAD,
determined via the percent DS measurements of the major coronary arteries, and
the alterations in the endocardial surface morphology is formally quantified. A
multivariate regression test, performed on a strict leave-one-out basis, is
shown to exhibit a distinct pattern in terms of the correlation coefficient
within the cardiac segments where the incidence of coronary arterial stenosis
is localized.
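A bag-of-features descriptor, in its simplest form, quantizes local descriptors against a learned codebook and histograms the assignments. The sketch below is illustrative only; the paper's isometry-invariant surface descriptors are not reproduced:

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and
    return the normalized histogram of assignments."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                      # nearest-codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two surfaces can then be compared by any histogram distance between their BoF vectors.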
|
1302.3160 | A New Construction of Multi-receiver Authentication Codes from
Pseudo-Symplectic Geometry over Finite Fields | cs.IT math.IT | Multi-receiver authentication codes allow one sender to construct an
authenticated message for a group of receivers such that each receiver can
verify the authenticity of the received message. In this paper, we construct a
multi-receiver authentication code from pseudo-symplectic geometry over finite
fields. The parameters and the deception probabilities of this code are also
computed.
|
1302.3166 | CSI Sharing Strategies for Transmitter Cooperation in Wireless Networks | cs.IT math.IT | Multiple-antenna-based transmitter (TX) cooperation has been established as
a promising tool towards avoiding, aligning, or shaping the interference
resulting from aggressive spectral reuse. The price paid in the form of
feedback and exchanging channel state information (CSI) between cooperating
devices in most existing methods is, however, often underestimated. In reality,
feedback and information overhead threatens the practicality and scalability of
TX cooperation approaches in dense networks. Here we address the "Who needs
to know what?" question when it comes to CSI at cooperating transmitters. A
comprehensive answer to this question remains beyond our reach and the scope of
this paper. Nevertheless, recent results in this area suggest that CSI overhead
can be contained for even large networks provided the allocation of feedback to
TXs is made non-uniform and properly dependent on the network's topology. This
paper provides a few hints toward solving the problem.
|
1302.3167 | Equiaffine Structure and Conjugate Ricci-symmetry of a Statistical
Manifold | math.DS cs.IT math-ph math.DG math.IT math.MP | A condition for a statistical manifold to have an equiaffine structure is
studied. The facts that dual flatness and conjugate symmetry of a statistical
manifold are sufficient conditions for a statistical manifold to have an
equiaffine structure were obtained in [2] and [3]. In this paper, it is shown
that a statistical manifold which is conjugate Ricci-symmetric has an
equiaffine structure, where conjugate Ricci-symmetry is a weaker condition
than conjugate symmetry. A condition for conjugate symmetry and conjugate
Ricci-symmetry to coincide is also given.
|
1302.3203 | Local Privacy, Data Processing Inequalities, and Statistical Minimax
Rates | math.ST cs.CR cs.IT math.IT stat.TH | Working under a model of privacy in which data remains private even from the
statistician, we study the tradeoff between privacy guarantees and the utility
of the resulting statistical estimators. We prove bounds on
information-theoretic quantities, including mutual information and
Kullback-Leibler divergence, that depend on the privacy guarantees. When
combined with standard minimax techniques, including the Le Cam, Fano, and
Assouad methods, these inequalities allow for a precise characterization of
statistical rates under local privacy constraints. We provide a treatment of
several canonical families of problems: mean estimation, parameter estimation
in fixed-design regression, multinomial probability estimation, and
nonparametric density estimation. For all of these families, we provide lower
and upper bounds that match up to constant factors, and exhibit new (optimal)
privacy-preserving mechanisms and computationally efficient estimators that
achieve the bounds.
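A canonical local-privacy mechanism of the kind studied here is randomized response for binary data, together with its debiased mean estimator. This is a textbook example, not necessarily one of the paper's optimal mechanisms:

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (1 + e^eps); flip otherwise.
    This satisfies eps-local differential privacy for a single bit."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean from the privatized bits:
    E[report] = (1 - p) + mu * (2p - 1), inverted for mu."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```

The variance blow-up of the debiased estimator as epsilon shrinks is exactly the kind of privacy-utility tradeoff the minimax bounds quantify.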
|
1302.3209 | "Groupware for Groups": Problem-Driven Design in Deme | cs.HC cs.SI | Design choices can be clarified when group interaction software is directed
at solving the interaction needs of particular groups that pre-date the
groupware. We describe an example: the Deme platform for online deliberation.
Traditional threaded conversation systems are insufficient for solving the
problem at which Deme is aimed, namely, that the democratic process in
grassroots community groups is undermined both by the limited availability of
group members for face-to-face meetings and by constraints on the use of
information in real-time interactions. We describe and motivate design
elements, either implemented or planned for Deme, that address this problem.
We believe that "problem focused" design of software for preexisting groups
provides a useful framework for evaluating the appropriateness of design
elements in groupware generally.
|
1302.3219 | An Efficient Dual Approach to Distance Metric Learning | cs.LG | Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of input data), and can thus only practically solve problems
exhibiting less than a few thousand variables. Since the number of variables is
$D (D+1) / 2 $, this implies a limit upon the size of problem that can
practically be solved of around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O (D ^ 3) $, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
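The O(D^3) per-iteration cost is characteristic of methods whose bottleneck is a single eigendecomposition, for example projecting a symmetric matrix onto the PSD cone. The sketch below shows that standard building block; treating it as the dominant step of the paper's dual method is an assumption:

```python
import numpy as np

def psd_projection(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues; one eigendecomposition costs O(D^3)."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)        # symmetrize for stability
    return (V * np.clip(w, 0.0, None)) @ V.T
```

Compare this with the O(D^6.5) of a general interior-point SDP solver on the same D x D variable.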
|
1302.3261 | Pavlov's dog associative learning demonstrated on synaptic-like organic
transistors | q-bio.NC cond-mat.dis-nn cs.ET cs.NE | In this letter, we present an original demonstration of an associative
learning neural network inspired by the famous Pavlov's dogs experiment. A
single nanoparticle organic memory field effect transistor (NOMFET) is used to
implement each synapse. We show how the physical properties of this dynamic
memristive device can be used to perform low power write operations for the
learning and implement short-term association using temporal coding and spike
timing dependent plasticity based learning. An electronic circuit was built to
validate the proposed learning scheme with packaged devices, with good
reproducibility despite the complex synaptic-like dynamic of the NOMFET in
pulse regime.
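The associative-learning principle can be reduced to a toy Hebbian model: pairing the conditioned stimulus (bell) with the unconditioned one (food) strengthens the bell's synaptic weight until the bell alone triggers the response. This is a pedagogical sketch, not a model of the NOMFET circuit or its spike-timing dynamics:

```python
import numpy as np

def pavlov_association(n_pairings=20, lr=0.1):
    """Hebbian toy model: co-activation of bell and output strengthens
    the bell -> salivation weight until the bell alone crosses threshold."""
    w = np.array([1.0, 0.0])         # synaptic weights: [food, bell]
    theta = 0.5                      # firing threshold
    for _ in range(n_pairings):
        x = np.array([1.0, 1.0])     # food and bell presented together
        y = float(w @ x > theta)     # output "spike"
        w += lr * y * x              # Hebbian: co-active -> strengthen
        w = np.clip(w, 0.0, 1.0)     # bounded weights, like a saturating device
    return w
```

After training, the bell alone (input `[0, 1]`) exceeds the threshold, mirroring the conditioned reflex.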
|
1302.3268 | Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem | cs.LG | Very recently crowdsourcing has become the de facto platform for distributing
and collecting human computation for a wide range of tasks and applications
such as information retrieval, natural language processing and machine
learning. Current crowdsourcing platforms have some limitations in the area of
quality control. Most of the effort to ensure good quality has to be done by
the experimenter who has to manage the number of workers needed to reach good
results.
We propose a simple model for adaptive quality control in crowdsourced
multiple-choice tasks which we call the \emph{bandit survey problem}. This
model is related to, but technically different from the well-known multi-armed
bandit problem. We present several algorithms for this problem, and support
them with analysis and simulations. Our approach is based on our experience
conducting relevance evaluation for a large commercial search engine.
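For orientation, a standard multi-armed bandit baseline such as epsilon-greedy looks as follows; this is illustrative only, and the paper's bandit-survey algorithms are technically different:

```python
import numpy as np

def epsilon_greedy(means, n_rounds=5000, eps=0.1, seed=0):
    """Pull a random arm with probability eps, else the best empirical arm.
    Arms yield Bernoulli rewards with the given true means."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)
    values = np.zeros(k)                           # empirical mean per arm
    for _ in range(n_rounds):
        arm = rng.integers(k) if rng.random() < eps else int(values.argmax())
        reward = float(rng.random() < means[arm])  # simulated worker answer quality
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return counts, values
```

In the bandit-survey setting, "arms" would correspond to workers or answer options rather than slot machines.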
|